Fri, 24 September 2021

Why it’s not too late to resolve bias in AI

Artificial intelligence is one of the most transformative technologies of our times. It’s improving decision making, automating mundane tasks, finding answers to complex issues, lowering human error, cutting costs, facilitating the development of life-changing applications and disrupting nearly every industry.

But while AI technologies offer many benefits and are advancing at a rapid rate, they’re not without flaws. A major problem undermining the enormous potential of AI is that automated systems can sometimes show signs of prejudice, bias and stereotyping.

Even though our society has come a long way in becoming more diverse and inclusive over the past few decades, humans are far from perfect and can be implicitly biased and prejudiced towards others. Crucially, these unconscious biases in human developers can very easily creep into the creation of AI technologies.

Although bias in AI is a significant issue facing the technology industry today, awareness and a desire to take action are growing across many organisations in the sector. As a result, it’s not too late to resolve this problem. But what do businesses need to know and do in order to prevent AI bias?

Human bias in AI

In recent times, there’s been a lot of discussion around biased AI systems and how they need to be overhauled. But what’s important to realise is that this isn’t a new issue. When you go back a few decades, you soon learn that unconscious bias has long existed in computing.

A notable historic example of how human bias can affect technology occurred at a British medical school in 1988. The UK Commission for Racial Equality found that one of the school’s computer programs, used to screen applications from prospective students, put women and people with non-European names at an unfair disadvantage.

The Commission subsequently found the school guilty of “practising racial and sexual discrimination in its admissions policy”. The screening program had been trained to replicate the school’s historic admissions decisions with 90–95% accuracy, and in doing so it reproduced long-running biases in how human members of staff had screened applicants. The impact of this flawed system was huge: an estimated 2,000 people lost out on interviews for a place at the school as a result of their sex or ethnicity.

An increasing issue

With new AI innovations constantly emerging and more organisations adopting them, the problem has become far more widespread in the past few years, and examples of bias in AI today are easy to find.

In early September, Facebook issued an apology and vowed to fix its underlying AI technology after an automatically generated prompt asked viewers of a 2018 Daily Mail video featuring black men whether they’d like to “keep seeing videos of primates”.

Facebook has also come under pressure over how its algorithms target people with job adverts. Global Witness, an international human rights organisation, found that adverts for mechanic jobs were shown mostly to male Facebook users, while female users were more likely to see adverts for nursery nurse positions on their feeds.

Other technology giants have also been affected by AI bias. In May, Twitter admitted that its image-cropping algorithm was flawed, favouring white faces over black ones and women over men. And in 2018, Amazon abandoned an automated recruitment tool after it proved to discriminate against female job applicants.

The flaws: data sampling

AI bias has clearly become prevalent in modern society, but what factors are causing this problem? A major reason why AI technologies can be discriminatory is that the underlying data used to train them is often inaccurate or unrepresentative, failing to reflect different groups of people.

For instance, it’s been claimed that facial recognition technologies are accurate 90% of the time. But the Gender Shades project, conducted in 2018, showed that this isn’t always the case. It found that commercial facial recognition systems frequently produce inaccurate results for black women, with error rates up to 34 percentage points higher than those for white men.
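Disparities like this only become visible when evaluations are disaggregated by group rather than reported as a single headline accuracy figure. Below is a minimal Python sketch of that idea; the column names, group labels and values are purely illustrative assumptions, not figures from the study:

```python
# Minimal sketch: comparing a classifier's error rate across demographic
# groups, the kind of disaggregated evaluation Gender Shades performed.
# The data below is illustrative, not taken from the study.
import pandas as pd

# Hypothetical evaluation results: one row per test image.
results = pd.DataFrame({
    "group":     ["white_male"] * 4 + ["black_female"] * 4,
    "actual":    [1, 0, 1, 0, 1, 0, 1, 0],
    "predicted": [1, 0, 1, 0, 0, 1, 1, 0],
})

# Error rate per group: the share of rows where the prediction is wrong.
results["error"] = results["actual"] != results["predicted"]
per_group = results.groupby("group")["error"].mean()
print(per_group)

# A large gap between groups signals that the headline accuracy figure
# hides a disparity, often because one group is under-represented in
# the training data.
print("gap:", per_group.max() - per_group.min())
```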

Of course, it’s not just inaccurate datasets that result in AI algorithms and technologies producing biased results. Our own unconscious bias as technology professionals can also lead to the development of AI technologies that are fundamentally flawed and alienate the people they are supposed to help.

This is simply not good enough and must change. If AI continues to produce racist, sexist or other biased results, people will become increasingly wary and mistrustful of this technology. Such effects could derail AI’s potential to build a more connected and innovative society in the coming years and decades.

Confronting the problems

To stop AI bias from spiralling out of control and damaging progress in such an exciting field of technology, it’s crucial that businesses take note and solve this issue. But how can they do this in practice?

When it comes to developing AI products, technologists should ensure that the datasets, processes and models that power them are as inclusive and diverse as possible. AI technology needs to treat everyone fairly, regardless of gender, ethnicity, sexual orientation, age, disability or social status, and developing AI technology with this in mind is paramount.
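One practical starting point is auditing a training set’s demographic make-up before any model is built. The sketch below illustrates the idea under two assumptions: that each record carries a demographic attribute, and that reference population shares are available for comparison. All group names and numbers are hypothetical:

```python
# Minimal sketch: flagging under-represented groups in a training set by
# comparing observed group shares against reference population shares.
from collections import Counter

# Hypothetical demographic labels, one per training record.
training_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50

# Assumed census-style shares for the population the model will serve.
reference_shares = {"A": 0.50, "B": 0.30, "C": 0.20}

counts = Counter(training_groups)
total = sum(counts.values())
for group, expected in reference_shares.items():
    observed = counts.get(group, 0) / total
    # Flag any group whose share falls well below its expected share.
    flag = "UNDER-REPRESENTED" if observed < 0.8 * expected else "ok"
    print(f"{group}: observed {observed:.0%} vs expected {expected:.0%} -> {flag}")
```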

Effectively communicating the ways in which unconscious bias can affect AI technologies, and how to mitigate those effects, will help technologists ensure their creations treat all users equally. NTT DATA is leading the way in this area with our AI ethics guidelines.

These set out how AI should be used to empower people and how it should be created with input from a wide range of AI stakeholders. The guidelines also stipulate that AI must be fair, reliable and explainable, as well as safe and secure. NTT DATA is taking steps to boost people’s understanding of AI and its contributions to society, too.

Another way to reduce AI bias is by increasing diversity in the teams designing, developing, testing, deploying and refining automated technologies. Having technology experts from all backgrounds on your IT team will help ensure AI technology suits the needs of every community and significantly reduce the risk of bias.

NTT DATA is also taking a number of steps globally to improve diversity inside our organisation. Initiatives currently under discussion range from omitting personal names from CVs, so that hiring decisions focus solely on an individual’s ability, to setting up internal diversity and inclusion schemes and providing employees from marginalised backgrounds with dedicated resources.

It is worth remembering that as humans we are far from perfect, and as such, AI developments might not always go to plan the first time around. However, by regularly testing new AI systems and collecting customer feedback, you should be able to identify any instances of AI bias and resolve them immediately.
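Such recurring checks can also be automated. As one illustrative example, the sketch below applies the conventional “four-fifths” disparate impact rule to a snapshot of model decisions; the function name, group labels, data and threshold are assumptions for the sketch, not a prescribed NTT DATA method:

```python
# Minimal sketch of a recurring bias check that could run alongside other
# automated tests: the "80% rule" (disparate impact ratio) applied to a
# model's positive-outcome rates. All names and data are illustrative.

def disparate_impact(outcomes_by_group: dict[str, list[int]]) -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups."""
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical weekly snapshot of model decisions (1 = favourable outcome).
snapshot = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

ratio = disparate_impact(snapshot)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths threshold
    print("Potential bias detected -- investigate before the next release.")
```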

Crucially, everyone has a personal responsibility to identify their own bias and look to help others address their own. By being more aware of people of different backgrounds, we can hopefully stamp out unconscious bias in all walks of life – which includes AI – and ultimately make the world a fairer place for all.

