Establishing the business case for AI governance

AI Governance & Assurance
Ethics & Responsibility

It increasingly looks like a matter of "when," not "if," AI becomes a regulated category of technology. Questions about safety, security, and ethics have grown more urgent as ever more sophisticated and consequential cognitive tasks are delegated to machines.

Some industries were already in legislators' crosshairs. A notable example affects life insurers operating in Colorado, where a first-of-its-kind regulation has introduced governance and risk management requirements for the use of algorithms and predictive models. Broader federal action is also taking shape: the Biden-Harris administration recently issued an executive order on responsible AI that went further than many predicted.

The movement toward stricter regulation of AI is now clear, both in the U.S. and internationally, and much of the focus is on the need for dedicated AI governance. The advent of rules governing AI is new, and a threat if left unmanaged, but it is important to remember that governance is not a synonym for compliance.

Done properly, enterprise governance aligns the strategic objectives of a business with the assessment and management of risk, and ensures that company resources are used responsibly and efficiently. The objectives of AI governance are similar, and the prospect of a formal process represents an opportunity to improve the quality and expand the impact of AI models.

AI projects fail too often

An alarming statistic for anyone interested in the responsible and efficient use of company resources is that 60-80 percent of AI projects fall short of their intended objectives.[1] The study behind these figures attributed the problem to poor internal alignment and a lack of collaboration. Many AI systems cross internal silos, whether in their inputs, their outputs, or both.

Leaders who see evidence of this statistic in their own organizations should already be asking how to turn it on its head. Those who do not should probably be looking more closely. There's a good argument that the reach of innovation should exceed its grasp. But when teams fail to work effectively with each other, they not only waste budgets and time but also thwart innovation and diminish future competitiveness.

The business case for AI governance rests on uniting controls for risk with programs for achieving business and stakeholder objectives. Dedicated processes and frameworks can achieve this alignment by setting clear requirements and embedding best practices into the building of complex systems. The bonus is that these same controls also align with compliance needs.

"[Data and analytics] and business strategy are among the main drivers for AI governance. When AI governance is lacking, increased costs are the most common negative impact." - AI governance frameworks for responsible AI, Gartner Peer Community

Key considerations for AI governance

  • Innovation: The performance and safety of AI innovation are enhanced when models are built and managed according to quality and ethical standards. Embedding clear requirements enables faster model development, approvals and deployments. The absence of standards and poor governance regimes can delay innovation or limit its value.
  • Risk: Businesses need to protect themselves and their customers from undesirable outcomes. Governance of quality and ethical standards helps businesses to understand and mitigate risk and safety concerns. Appreciation of risk and safety is often inconsistent throughout organizations, but governance can help to overcome this challenge.
  • Quality: Enforcing consistent model development and testing best practices delivers more robust applications that perform better in deployment. Governance helps businesses define good and bad outcomes, set clear expectations, and safeguard successful AI systems.
  • Goals: Businesses of any size can struggle to maintain alignment between corporate goals and strategy and the day-to-day work of operational teams. Governance protects these goals by driving more predictable project journeys.
  • Brand: Brand equity takes years to build but is quickly damaged by negative news and social media debate. Media and societal sensitivities about AI add prominence to negative stories. Standards and governance help businesses prevent negative events and improve their defense posture should a problem occur.

Are you a stakeholder in AI governance?

It’s no surprise that the tenets of AI governance complement enterprise governance: both share the objectives of reducing costs and driving revenue. While AI governance requires specialist knowledge, its stakeholders span the business.

If your role touches data science or AI model building, risk, or governance, or if you’re an executive in a business that uses AI, you are likely among these stakeholders. Sooner rather than later, the outcomes of the AI safety debate will directly affect your organization and your corporate responsibilities. According to a recent AWS survey, few enterprises have established a dedicated AI governance function. But with regulation taking shape and the business impact growing, how this function should be structured, tasked, and resourced is a near-term question.

At Monitaur, we’re ready to help you achieve great AI governance for both the near and long term. Contact us to assess the best place to start in your organization.

Learn more about AI readiness

________________

[1] Schmelzer, R. (2022). "The One Practice That Is Separating The AI Successes From The Failures"