Algorithmic Bias | Towards greater Transparency, Fairness and Representation in automated systems design
AI

Could you be a victim of algorithmic discrimination without even realising it?

Algorithms are being used in more settings than ever before, and they are often relied upon to make snap decisions about future events based on patterns found in historical data.

Understandably, though, whilst the potential of automation and predictive analytics in decision-making is vast, the rise in the adoption of algorithms has also brought with it increased risks of over-reliance and social harm, as algorithms continue to take an ever more prominent role in our lives.

Amazon’s recruiting algorithm and the use of AI algorithms by the police are just two examples that have helped cast a spotlight on the fallibility of automated system design. Most importantly, depending on how they are created, algorithmic errors can not only reflect present and historic social issues, but also aggravate them. These risks and effects will need to be closely monitored to prevent repeating human errors in an irreversible way.

Join WiBrief in this latest article, where we adopt a case study approach to take a deep dive into recent high-profile cases of algorithmic use and misuse.

We cover:
1. What are algorithms?
2. The algorithmic problem – systematising bias in code
3. What is ‘Algorithmic Bias’, and the different types of bias
4. Amazon’s recruitment case study
5. The legal context
6. The social context
7. Outlook: the need for a standardised approach to detecting and measuring algorithmic bias
8. Key takeaways

Read the full article here:

https://wibrief.com/?p=7539

If you enjoyed this article, then look out for our upcoming coverage of the recent changes in AI regulation, covered by AI WAVES.

AI

DOES YOUR AI PASS THE EUROPEAN COMMISSION RISK-ASSESSMENT FRAMEWORK?

The European Commission has delivered a new risk-assessment framework for trustworthy Artificial Intelligence (AI).

The new Risk-Assessment Framework provides practical guidance for developers and programmers who are building AI. A key consideration of the risk assessment is an innovation’s impact on core human rights.

Check out this latest ‘Brief’, which looks at the main aspects of the European Commission’s Risk-Assessment Framework, outlining its key features whilst also drawing on analysis of recent developments in tech.

AI

Why UK Road Traffic Law needs to be modernised before self-driving cars hit the roads – or us.

This latest article takes a look at current UK road traffic law and examines its suitability for regulating accidents involving self-driving vehicles. Notably, whilst the law has traditionally focused on placing liability on human ‘drivers’, who until recently have been at the heart of most road traffic accidents, would it still be right, or even fair, to argue the same in light of the growing capabilities of highly autonomous self-driving vehicles?