As artificial intelligence (AI) continues to advance and become more widely adopted, ethical discussions surrounding its development have largely centred on job displacement and on the potential impact of errors or inaccuracies introduced when these systems are built.

One issue often overlooked is the inherent bias built into the very AI systems that are beginning to drive our society. For example, one study found that machine-learning-powered online ads for high-paying jobs are shown more often to men than to women, raising concerns about the potentially discriminatory patterns of complex algorithms.

Studies have shown that algorithms trained on historically racist data produce significantly higher error rates for communities of colour, especially by over-predicting the likelihood that a convicted offender will reoffend.
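The disparity such studies describe can be measured directly. As a minimal sketch, in Python, with entirely hypothetical data and column names rather than figures from any study cited here, an auditor might compare false positive rates, meaning the rate at which people who did not reoffend were wrongly flagged as high risk, across groups:

```python
# A sketch of a fairness audit: comparing false positive rates
# (people wrongly predicted to reoffend) across demographic groups.
# The data and column names here are hypothetical, for illustration only.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted": [1, 1, 0, 0, 1, 1, 0, 0],  # model's reoffending forecast
    "actual":    [0, 1, 0, 0, 0, 0, 0, 1],  # observed outcome
})

for group, rows in df.groupby("group"):
    negatives = rows[rows["actual"] == 0]        # people who did not reoffend
    fpr = (negatives["predicted"] == 1).mean()   # wrongly flagged as high risk
    print(f"Group {group}: false positive rate = {fpr:.2f}")
```

If the rates differ substantially between groups, as they do in this toy example, the model is making its mistakes unevenly, which is exactly the pattern the studies above found.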

Whatever these algorithms predict, there is no guarantee that judges or bail decision-makers will use their forecasts in a way that consistently reduces incarceration or protects public safety.

WHY IS THIS HAPPENING?

The truth is that the AI technology we use often comes with its creators’ biases built in, and the reality is that our community of creators is small and unrepresentative of society. Machines are not completely objective: these systems reflect what their creators consider important and what they consider unimportant, which data is used for training and which data is ignored.

Take voice-controlled digital assistants such as Siri or Alexa, for example. These assistants are supposed to carry out the voice commands we give them, yet many of us struggle to make the machines understand our instructions, often because the speech models behind them were trained on a narrow range of voices and accents.

SERIOUS CONSEQUENCES

Biases in machine learning can have incredibly serious consequences.

What happens if people are not trusted or empowered to question a particularly expensive piece of equipment or software? A doctor or nurse may be trained on how to use the technology, but not necessarily on how it works, how it arrives at its conclusions, or what its biases and limitations may be. There is a dangerous tendency to idolise data and technology, to see them as all-seeing, all-knowing and inherently better at making decisions.

If yesterday’s data trains today’s AI, historical bias gets locked into the system, perpetuating and reinforcing past mistakes. It is also easier to shift responsibility for biased decision-making onto technology and hide it behind the complexity of algorithms. Many of the AI systems used to make these decisions are black boxes too complicated to easily understand, or proprietary algorithms that companies refuse to explain.

The challenge of understanding bias in AI is complicated by the emergence of tools and platforms – essentially AI as a service – that make it ever easier to build systems. Today, it is relatively simple to build a facial-recognition system using cloud platforms that come with pre-trained image-recognition tools, as the sketch below illustrates. We need more people who can evaluate how to apply AI training data responsibly, test it, and oversee the way it is being used.
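To see how low the barrier is, here is a minimal sketch assuming AWS Rekognition accessed through the boto3 SDK, one of several cloud services offering pre-trained face analysis; the image path and region are placeholders:

```python
# A sketch of how little code a cloud facial-recognition call requires.
# Assumes AWS Rekognition via the boto3 SDK; other cloud platforms offer
# similar pre-trained endpoints. The image path and region are placeholders.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("face.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # estimated age range, gender, emotions, etc.
    )

for face in response["FaceDetails"]:
    print(face["AgeRange"], face["Gender"]["Value"], face["Confidence"])
```

Whoever writes these few lines never sees the pre-trained model’s training data, or whatever biases are baked into it.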

EVERYONE’S RESPONSIBILITY

Although there are many reasons to be concerned about bias in AI, there is also good work being done that deserves to be recognised. Mozilla, for instance, is improving its voice-recognition technology by crowd-sourcing 10,000 hours of speech from people all over the world. Its Common Voice project is building an open, publicly available data set of voices that anyone can use to train speech-enabled applications.
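As a sketch of how open that data is, one common way to load it is through the Hugging Face datasets library; the exact dataset name and version below are an assumption, and access requires accepting Mozilla’s terms on the Hugging Face hub:

```python
# A sketch of loading Mozilla Common Voice speech data for training.
# The dataset name/version here is an assumption; access requires
# accepting Mozilla's terms on the Hugging Face hub.
from datasets import load_dataset

cv = load_dataset(
    "mozilla-foundation/common_voice_11_0",
    "en",           # one of the many languages contributed worldwide
    split="train",
)

sample = cv[0]
print(sample["sentence"])       # the transcribed text
print(sample["audio"]["path"])  # path to the contributed voice clip
```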

To make real change, we need everybody to understand where and how biases emerge and what actions they can take to prevent them. We must accept that we are all biased, and therefore checks and balances must be put in place to minimise or eliminate bias. The first step is to recognise the various biases in data and be more honest about the limitations of any AI built on a given data set, and that includes recognising and acknowledging our own personal biases. It’s time for everyone involved in tech to take responsibility for the biases in our systems and collectively help create solutions.
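Recognising bias in data can start with checks as simple as asking who is represented in it and whether historical outcomes already differ by group. A minimal sketch, with hypothetical column names and data:

```python
# A first-pass data audit: checking group representation and historical
# outcome rates in a training set, before any model is trained.
# Column names and data are hypothetical.
import pandas as pd

train = pd.DataFrame({
    "gender":   ["male"] * 7 + ["female"] * 3,
    "approved": [1, 1, 1, 0, 1, 1, 0, 0, 0, 1],
})

# How balanced is the data set itself?
print(train["gender"].value_counts(normalize=True))

# Do past outcomes already differ by group? If so, a model trained on
# this data will learn and reproduce that pattern.
print(train.groupby("gender")["approved"].mean())
```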


Source: The Straits Times, 9 April 2019