In a 2024 survey, 80% of executives admitted that leadership, governance, and workforce readiness have failed to keep pace with AI advancements.
AI’s applications are clear and persuasive, but it can be difficult to know where to start with risk management and governance. A lack of accountability, dedicated risk frameworks, and institutional AI literacy can leave businesses vulnerable – and not just to compliance risks. Without the right safeguards in place, companies risk damaging their reputation, losing customer trust, and missing out on valuable opportunities to open up new streams of revenue or add value to current services.
The financial services community has been an early adopter of AI governance frameworks, establishing AI risk committees and dedicated data governance teams, mainly due to their high regulatory burden. Other sectors, however, are still playing catch-up.
Organisations are beginning to recognise the urgency of the situation, triggered mainly by legal requirements. According to Fortune, 56% of Fortune 500 companies now identify AI as a risk factor in their annual reports. AI governance is climbing to the top of business agendas, but one question remains: how can businesses ensure their AI governance aligns with both regulatory requirements and long-term strategic goals?
Many businesses struggle to define who is responsible for ensuring that AI is developed safely and ethically. At the same time, employees lack the necessary training to recognise and mitigate risks before they escalate, leaving businesses vulnerable overall.
AI risk is accelerating - but so is regulation
AI’s rapid growth has triggered an equally rapid regulatory response. Businesses are now navigating a complex web of global AI laws, with varying levels of enforcement and oversight. The EU AI Act is the most stringent, introducing strict mitigation requirements based on the level of risk. In contrast, the UK has taken a more pro-innovation stance, choosing to focus its attention on testing and best practices through initiatives like the AI Playbook.
But amidst the dense legal texts and regulatory ambiguity, companies are struggling with the same problem: how to translate legal requirements into sustainable, actionable governance. AI is still a relatively new technology, surrounded by both enthusiasm and uncertainty. Customers want to trust that it's being used responsibly - with fairness, security, and transparency built in.
In an era where trust is a currency in short supply, companies that fail to demonstrate responsible AI practices will undoubtedly find themselves at a disadvantage against competitors that make ethical AI a priority.
The challenge is making sure that AI governance covers every stage of the AI lifecycle, from development and deployment to ongoing monitoring. Without a structured approach, organisations will struggle to meet compliance obligations, mitigate risks, and build the trust needed for AI adoption at scale.
Embedding AI governance into business strategy
As a motivator, compliance alone isn't enough for a successful AI risk framework. Companies want AI that delivers real strategic value while remaining safe and ethical. Last year, we worked with L'Oréal to elevate its customer experience using AI by delivering personalised product recommendations. We saw firsthand why companies continue to invest in AI despite its risks: when implemented correctly, it works wonders.
In particular, we saw a dramatic decrease in the time it takes for consumers to fill their shopping carts. Previously, it took an average of six minutes to add five products to the cart. Now, that time has dropped to just 37 seconds. This boost in efficiency has doubled L’Oréal's conversion rates, proving that when customers receive the same high-quality assistance online as they do in physical stores, they are far more likely to make a purchase.
The value that AI added to L’Oréal’s services was undeniable and reinforced why businesses are continuing to bet big on AI. Having been on both sides of the process, from innovation to risk management, we know how to strike the right balance between driving progress and responsible adoption. So, we’ve identified four key steps to help you build a governance framework that maximises the value of your AI while keeping it safe and compliant:
1. Build a tailored risk framework
AI governance isn't one-size-fits-all. Each industry, organisation, and use case requires a customised risk framework that aligns with business objectives and regulatory expectations. A high-street retail bank will have very different priorities and obligations from a luxury automaker.
A crucial component is aligning key stakeholders, including Legal, Audit, and Finance teams, with business functions. By taking a customised approach to building your risk framework, you can ensure that AI governance aligns with operational needs as well as regulatory expectations.
2. Learn by doing: test, refine and strengthen your framework
Organisations learn best by doing. This involves testing and refining AI governance through real-world applications. As AI evolves, so too must our approach to risk assessment and mitigation. To stay ahead, employees and leaders need hands-on training in bias detection, explainability, and responsible AI practices, ensuring governance keeps pace with innovation.
By integrating AI oversight with traditional processes, businesses can navigate both known and emerging risks, strengthening their AI maturity and resilience in the face of rapid technological change.
3. Assess and address AI readiness and risk exposure early
An untrained workforce is your biggest risk from a deployment perspective. Yet, according to a 2024 report published by Raconteur, less than half (44%) of UK companies are taking steps to train workers on AI use. Building AI literacy and readiness starts with equipping your staff with the right protocols for safe implementation and the awareness to identify potential risks.
AI risks often take root in the development phase - long before deployment. By assessing AI readiness, data quality, and model robustness early, businesses can proactively identify vulnerabilities, ensure compliance, and embed ethical safeguards from the start. If you want AI to align with your organisation’s ethical principles, start by grounding decisions in real-world use cases. Using tested methodologies that are strategic and focused on long-term value ensures your actions reflect the values your organisation stands for.
4. Implement continuous and adaptive governance
AI governance must be ongoing and adaptable. Automated monitoring tools, AI ethics review boards, and structured governance frameworks will help you to navigate evolving regulations and risks. AI will continue to evolve, and governance strategies must evolve in step.
The UK government’s long-awaited AI Bill will be a defining moment for British businesses. However, organisations can’t afford to wait for the final text to be published before starting on their governance journey. The best approach is to build an agile governance framework now - one that can evolve and adapt as regulations take shape.
The best place to start is the EU's AI Act, a landmark piece of legislation that set a global precedent for AI oversight and accountability. Closer to home, the UK Government's AI Playbook provides a clear framework for responsible AI adoption, offering best practices that companies can act on today.
By taking proactive steps today, companies can stay ahead of regulatory changes, reduce uncertainty, and ensure AI is deployed responsibly and strategically.
Building trust and competitive advantage
At its core, AI governance is about working towards a future where AI is fair, transparent and beneficial to everyone. To make that happen, we need to shift from reactive risk management to proactive value creation.
If you’re looking to build a robust AI governance framework but aren’t sure where to start, our team at NTT DATA works with organisations across industries to develop tailored governance strategies that align with business objectives, regulatory requirements, and ethical best practices.
Whether you’re establishing AI oversight from the ground up or strengthening existing processes, we can guide you through every stage - from risk assessment and compliance to implementation and ongoing monitoring.
If you’d like to know more, take a moment to get in touch and arrange a 45-minute consultation.
https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government