Establishing the business case for AI governance in 2025

AI Governance & Assurance
Ethics & Responsibility

To varying degrees globally, AI is becoming a regulated category of technology. Questions about safety, security, and ethics have grown more urgent as more sophisticated and consequential cognitive tasks are delegated to machines.

Industries like insurance face a growing body of state regulation designed to protect consumers. The NAIC approved its AI Model Bulletin in December 2023, and more than half of US states have since adopted its standards to varying degrees.

Further, some US states are introducing additional layers of consumer protection. NYDFS Insurance Circular Letter No. 7 reflects the regulator's commitment to promoting innovation in the insurance industry while ensuring that the use of advanced technologies like AI does not lead to unfair discrimination or compromise consumer protection. The Colorado AI Act is set to be enforced starting in February 2026. Beyond the US, the EU AI Act is already in force, with compliance deadlines arriving on a rolling schedule.

And yet, regulation is only part of the case for AI governance. When done properly, enterprise governance aligns the strategic objectives of a business with the assessment and management of risk, and ensures that company resources are used responsibly and efficiently. The objectives of AI governance are similar, and the prospect of a formal process represents an opportunity to improve the quality and expand the impact of AI models.

AI projects fail too often

In 2022, research found that 60-80 percent of AI projects fall short of their intended objectives[1], an alarming figure for anyone interested in the responsible and efficient use of company resources. The study behind these numbers attributed the problem to poor internal alignment and a lack of collaboration: many AI systems cross internal silos in their inputs, their outputs, or both.

Fast forward to 2024: large insurance companies generated an average of 27 GenAI use case ideas each. Tracing the funnel from ideation to POC to production, of those 27 ideas only 6 became a POC and just 1 made it to production.[2]

Enterprise leaders who still see evidence of these statistics in their organizations should question the objectives and effectiveness of their overall AI investment; those who do not should probably look more closely. There's a good argument that the reach of innovation should exceed its grasp. But when teams fail to work effectively with each other, they not only waste budgets and time, they also thwart innovation and diminish future competitiveness.

AI governance: When "good enough" isn't good for business

As the flow of this article suggests, it's easy to assume that the default business case for AI governance is a checkbox exercise for compliance. And compliance stays a small part of the story until teams can't show business outcomes from their AI projects. That's when wise stakeholders look beyond the checkboxes and see there's very little improvement to show for AI investments.

The business case for AI governance rests on uniting controls for risk with programs for achieving business and stakeholder objectives. Dedicated processes and frameworks can achieve this alignment by setting clear requirements and embedding best practices into the building of complex AI systems. The bonus is that these same controls also align with compliance needs.

"[Data and analytics] and business strategy are among the main drivers for AI governance. When AI governance is lacking, increased costs is the most common negative impact." - AI governance frameworks for responsible AI, Gartner Peer Community

Key considerations for AI governance

  • Innovation: The performance and safety of AI innovation are enhanced when models are built and managed according to quality and ethical standards. Embedding clear requirements enables faster model development, approvals and deployments. The absence of standards and poor governance regimes can delay innovation or limit its value.
  • Risk: Businesses need to protect themselves and their customers from undesirable outcomes. Governance of quality and ethical standards helps businesses to understand and mitigate risk and safety concerns. Appreciation of risk and safety is often inconsistent throughout organizations, but governance can help to overcome this challenge.
  • Quality: Enforcing consistent model development and testing best practices delivers more robust applications that perform better in deployment. Governance helps businesses define good and bad outcomes, set clear expectations, and safeguard successful AI systems.
  • Goals: Businesses of any size can struggle to maintain alignment between their corporate goals and strategy and the work done by various operational teams. These goals can be protected through governance that drives more predictable project journeys.
  • Brand: Brand equity takes years to build but is quickly damaged by negative news and social media debate. Media and societal sensitivities about AI add prominence to negative stories. Standards and governance help businesses prevent negative events and improve their defense posture should a problem occur.

Shifting the narrative to AI acceleration

It’s no surprise that the tenets of AI governance are a complement to enterprise governance - the objectives of reducing costs and driving revenue are shared. While AI governance needs specialist knowledge, the stakeholders span the business.

If your role is related to data science or AI model building, risk, or governance, or if you’re an executive in a business that uses AI, you are likely among these stakeholders. Sooner rather than later, the outcomes of AI safety will directly affect your organization and your corporate responsibilities. AI technologies are becoming more sophisticated, regulation is evolving, and the business impact is becoming costly if not managed properly. You must shift the narrative of AI governance from a compliance discussion to one about AI acceleration.

How are you meeting the regulatory timelines for your AI projects? Are your policies ready to handle dynamic requirements as your AI technologies become more sophisticated? Let us know.

Learn more about AI readiness