The AI market has surged throughout 2023 and there is every sign this growth will continue as the advancing capabilities of cognitive machines expand the scope and value of business use cases.
Parallel to this is increasing public concern and legislative focus on AI safety. While regulators work out their next steps, a proactive understanding is emerging that the benefits and risks of AI are sufficiently unique and urgent to require major businesses to form dedicated governance practices alongside existing enterprise risk management and compliance functions. Less clear, at least for most businesses, is who the stakeholders in such a new AI governance function are.
Further, an estimated 60-80 percent of AI projects fall short of their intended objectives due to poor internal alignment and a lack of cross-functional collaboration. The success of AI systems – effectiveness, safety, return on investment – depends on the right people coming together from across the business.
It’s a fair bet that someone in a risk management role will take the initial lead on exploring the need for AI governance and building the business case. When the time comes to secure internal support to fund and institute AI governance systems and practices, however, a hidden but important challenge surfaces: most risk managers are not equipped to appreciate the demands and nuances of model development.
This lack of insight is perfectly understandable, but it means that risk functions will struggle to independently develop controls and governance programs that meet the needs of both the business and model-building teams.
Unlike data privacy or information security governance, AI governance cannot be solved with templated survey responses. An end-to-end governance process that covers the full AI model lifecycle needs to be adaptive and continuous, because these systems are inherently variable and often support critical, real-time business decisions.
To be effective and respected, AI governance needs a common language and facilitated partnerships between model builders, risk management, and senior business leaders. The good news is that, despite the different perspectives of these roles, they have important overlapping interests.
Risk management, model building, and business leadership each encompass a broad range of titles. To identify which roles are stakeholders in AI governance, let’s break these groups down by their goals and interests:
Quality and safety are the primary concerns for risk managers. They need innovation to be aligned with business objectives, policies and regulatory requirements. They seek to control the risk in AI projects so that assurances can be given to the C-suite, board and regulators.
Those concerned with building AI models need their ideas and efforts to be valued and supported. They seek budgetary support for development costs and internal partners for deployments. They want to earn a reputation as problem-solvers who add quality and value to the business.
Leaders seek innovation that outcompetes rivals: bringing new and better products to market, driving growth, increasing brand value, reducing costs, and increasing profit. They see the potential of AI to support these goals and need cross-functional alignment for AI investments to be effective.
Model builders need business leaders to provide funding and resources. Risk managers need model builders to share intricate knowledge of modeling projects and provide documented evidence for risk mitigation. Business leaders want internal teams to understand business goals and operate efficiently as a team.
These three groups form a trifecta in need of a connecting framework. Even in organizations where there is a good appreciation of these shared needs, there are rarely formalized practices and processes for triangulating AI innovation through both the development phase and ongoing deployment.
Monitaur was formed to solve this problem. We have created what we believe to be the most comprehensive and effective AI governance solution available. Crucially, our software provides tooling that establishes and maintains cross-functional processes, answering both the individual and shared needs of all three stakeholder groups.
If you are wondering how to get started with AI governance, schedule an assessment with us.