Wed, 14 October 2020

The four greatest challenges facing AIs at scale

There’s virtually no major industry that modern AI hasn’t already influenced. That’s especially true in the past few years, as data gathering and analysis have increased considerably thanks to IoT connectivity, the explosion of connected devices, and rising computer processing speeds.

At the bleeding edge of AI, progress continues to be made. AlphaGo, for example, the computer program famous for beating a professional Go player, continues to evolve. AlphaZero, its successor introduced in December 2017, demonstrates world-class ability not only in Go but also in chess and shogi. What was striking was that AlphaZero taught itself this expertise after learning only the basic rules of each game. This indicates that AI can autonomously develop abilities from little more than a set of rules, and that the same approach can carry over to other applications. Since AI learns differently from humans, it has the potential to create ground-breaking techniques and approaches from which humans can learn.

However, as AIs continue to evolve, businesses and society as a whole need to act conscientiously, taking an ethical approach. Over the last few years – whether it’s Facebook evading accountability for the Cambridge Analytica scandal or Alexa’s alleged eavesdropping – the public has seen how AI can be misused and how the best of intentions can go wrong.

With the right standards in place, the positives of AI can be plentiful, and it can directly benefit society. Nevertheless, to get there, individuals and companies operating in the AI space will need to overcome four primary barriers:

1. Explainability

Automated decision-making hasn’t exactly had the most positive press in recent years, but that’s in large part due to the kind of AI that’s often in use. Black-box AI produces outcomes from the murky depths of data using algorithms understood by few and lacking any real accountability. Much like a maths student not showing their working in an exam, black-box AI produces decisions without clarifying the logic behind them. And because we don’t know how the AI gets from A to B, our ability to review its processes, test them, and refine the model to be more accurate or predictive is limited. To build trust and to make AI auditable, it’s important to finally open up the black box and make its workings interpretable and explainable.

Explainability is a crucial part of how we envision the future of AI at NTT DATA, and at last year’s R&D Forum we demonstrated our explainable AI in several settings, combining it with our multimodal AI. One interesting application of our white-box model is at customs borders: by analysing x-ray images of packages, our software is able to decide whether illegal objects are present in luggage or containers and whether the contents match what is stated on the customs declaration form – drawing attention to items requiring further inspection. Importantly, the AI is able to justify why a package was flagged for inspection.
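To make the contrast concrete, here is a minimal sketch of a white-box approach – a shallow decision tree trained on synthetic data whose learned rules can be printed and audited. The features and data are illustrative placeholders, not NTT DATA’s actual customs model.

```python
# Minimal sketch: an interpretable model whose decision logic can be printed and audited.
# The features and data here are illustrative placeholders, not NTT DATA's customs model.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for package-screening features (names are made up for illustration).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["density", "weight_mismatch", "shape_irregularity", "declared_value"]

# A shallow decision tree is a classic "white-box" model: every prediction
# corresponds to a readable chain of threshold checks.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the learned rules so a human inspector can review why an item was flagged.
print(export_text(model, feature_names=feature_names))
```

A deep neural network making the same predictions would offer no such readable chain of checks – and that is precisely the auditability gap explainable AI aims to close.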

2. Robustness

In some settings, robust AI produces better business decisions. In others, robust AI is a matter of life and death. One notable example is the autonomous vehicle sector. Autonomous vehicles have the potential to transform our lives, but to be safe their neural networks will need to go through extensive training for every conceivable scenario. An autonomous car has to do more than just track the road, maintain speed, and stop as needed; it also needs to react safely to unexpected events, and in a whole variety of weather conditions.

To overcome this specific challenge, NTT DATA has built virtual testing environments like the one demonstrated at last year’s R&D Forum. The software exposes automakers’ systems to a huge variety of conditions and scenarios before the vehicle ever reaches the road. Training AI in this way enables automakers to fail safely, test, and iterate on their product. Compared with ‘real-world’ testing, the software also provides a safer, more scalable, and more cost-effective way to conduct training.
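As a rough illustration of the idea – not NTT DATA’s simulator – the sketch below sweeps a toy braking controller across combinations of speed and road condition, surfacing the failure cases long before a vehicle reaches the road. The physics model and parameters are simplified assumptions.

```python
# Minimal sketch of scenario-sweep testing: a toy braking model is exercised
# across combinations of speed and road condition. The physics here is a
# simplified placeholder, not NTT DATA's virtual testing environment.
import itertools

def stopping_distance(speed_mps: float, friction: float, reaction_s: float) -> float:
    """Reaction distance plus braking distance under a constant-friction model."""
    g = 9.81
    return speed_mps * reaction_s + speed_mps**2 / (2 * friction * g)

speeds = [10, 20, 30]                               # m/s
frictions = {"dry": 0.7, "wet": 0.4, "ice": 0.1}    # illustrative friction coefficients
obstacle_distance = 80.0                            # metres to a suddenly appearing obstacle

for speed, (weather, mu) in itertools.product(speeds, frictions.items()):
    d = stopping_distance(speed, mu, reaction_s=0.5)
    verdict = "OK" if d < obstacle_distance else "FAIL"
    print(f"{weather:>4} road, {speed} m/s -> stops in {d:5.1f} m [{verdict}]")
```

Real virtual testing environments model far richer physics, sensors, and traffic behaviour, but the principle is the same: enumerate conditions systematically and let the failures happen in software.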

3. Fairness

Biases in the human decision-making process – shaped, often unconsciously, by personality and society – are well documented. In the courts, for instance, judges’ decisions are known to be unconsciously influenced by their own personal characteristics and even, remarkably, by whether they’ve recently had a lunch break. Employers, too, have been shown to demonstrate bias during recruitment, granting interviews at different rates to candidates with identical resumes whose names mark them out as being from different ethnic backgrounds.

Acting purely on the hard data, AIs have the potential to reduce and even stamp out human biases. However, if managed incorrectly, AIs can also embed those same biases. In one famous study, an AI used in the Florida criminal justice system was found to incorrectly label African-American defendants as “high-risk” at nearly twice the rate of white defendants.
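The kind of audit that exposed this disparity can be sketched in a few lines: compare the false-positive rate – defendants labelled “high-risk” who did not go on to re-offend – across groups. The records below are made up for illustration; they are not the study’s data.

```python
# Minimal sketch of a fairness audit: compare false-positive rates
# ("labelled high-risk but did not re-offend") across groups.
# The records below are a tiny made-up example, not the original study's data.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

false_pos = defaultdict(int)
negatives = defaultdict(int)
for group, predicted_high, reoffended in records:
    if not reoffended:              # only people who did not re-offend count
        negatives[group] += 1
        if predicted_high:          # ...but who were still labelled high-risk
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"group {group}: false-positive rate = {rate:.0%}")
```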

Another obstacle to fairness is that the accuracy of trained deep-learning models can degrade over time as outliers skew the model. NTT DATA has been working to address this with a solution that automatically evaluates AI models against known answers and updates the underlying algorithm accordingly. Our latest program was showcased at the NTT DATA R&D Forum 2019.
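In spirit, such a monitoring loop looks something like the sketch below: periodically score the deployed model against freshly labelled “known answers” and trigger retraining when accuracy falls below a floor. The threshold and the retrain step are illustrative assumptions, not our production pipeline.

```python
# Minimal sketch of automatic model evaluation against known answers.
# ACCURACY_FLOOR and the retrain() callable are illustrative assumptions.
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.9

def evaluate_and_maybe_retrain(model, X_labelled, y_labelled, retrain):
    """Check the deployed model against labelled data; retrain if it has degraded."""
    accuracy = accuracy_score(y_labelled, model.predict(X_labelled))
    if accuracy < ACCURACY_FLOOR:
        print(f"accuracy {accuracy:.2f} below {ACCURACY_FLOOR}: retraining")
        model = retrain(X_labelled, y_labelled)
    else:
        print(f"accuracy {accuracy:.2f}: model still healthy")
    return model
```

In practice the evaluation would run on a schedule, with the labelled sample drawn from ongoing human review, so that drift is caught before it skews real decisions.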

4. Privacy

As the data of individuals proliferates, increases in value, and is potentially leveraged in AI models, the ethical and social consequences of mishandling that data become more serious and complex. As such, service providers managing personal data need to gain the trust of individuals by maintaining a safe environment for that information.

The basic prerequisite for gaining trust is preventing malicious attacks. In practice, this won’t be easy. As the use of mobile and cloud services has spread, the traditional approach of protecting against every intrusion has come close to reaching its limit. Automatic log analysis for the immediate detection of abnormalities, and encryption on low-spec edge devices, can certainly mitigate risk – but today’s networks will never reach a state of impenetrability.
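Automatic log analysis can be as simple as watching for activity that deviates sharply from the recent norm. The sketch below – a generic three-sigma check on a toy series of event counts, not NTT DATA’s monitoring stack – flags the kind of abnormality a human analyst would then investigate.

```python
# Minimal sketch of log anomaly detection: flag minutes whose event counts
# deviate sharply from a recent baseline. The series, window, and 3-sigma
# threshold are illustrative assumptions.
from statistics import mean, stdev

events_per_minute = [52, 48, 50, 49, 51, 47, 53, 50, 180, 49]  # toy log-volume series
window = events_per_minute[:8]                                  # recent "normal" baseline
mu, sigma = mean(window), stdev(window)

for minute, count in enumerate(events_per_minute):
    z = (count - mu) / sigma
    if abs(z) > 3:                  # 3-sigma rule: unusually high or low activity
        print(f"minute {minute}: {count} events (z={z:.1f}) -> possible abnormality")
```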

Data protection is one of NTT DATA’s AI guidelines – principles which guide the company in developing AIs that benefit humans and society. The development of fair, reliable, and explainable AI is also part of these same guidelines.

AI is reaching a point where a new norm is needed. To lead future society and business, maintain the trust of individuals, foster safety, and create a fair society that isn’t suspicious of AI applications, it’s essential that we maintain high AI standards. From our work at the University of Leicester, where we’re implementing machine learning to support predictive care, to our work in Las Vegas, where we’re using IoT devices to detect incidents and alert the emergency services, NTT DATA is showing that it is committed to an ethical approach and to realising the positive opportunities of AI.

