Why businesses need AI standards and governance


It is almost impossible to read the news or scroll through social media without seeing headlines about artificial intelligence (AI). Hopes and fears abound: AI is praised as the solution to some of our greatest challenges or lamented as the cause of future catastrophes.

This polarizing debate can both stimulate and hinder AI progress. While many leaders are enthusiastic about technology that promises to increase efficiency, productivity, or competitiveness, a failure to understand complex systems and blithe acceptance of their output can contribute to poor business decisions. AI and analytics errors are not only costly; they can also damage an organization's reputation and materially affect people's lives.

The regulatory trigger

As the use of AI accelerates, governments, regulators, and professional standards bodies are developing policies and legislation to prevent known and predicted risks. AI regulation in the U.S. is still nascent, but pressure is building at home and internationally (see below).

There is a need, now more than ever, for non-technical people to have confidence and trust in AI. Enterprises that build and use AI models, and the many technology vendors that supply them, will need to demonstrate effective self-governance at a minimum.

As of this writing, the regulatory landscape includes several global and industry regulations enacted in 2023 and several more pending for 2024.

However, while new rules and regulations can prompt an organization to establish a formal AI governance function and processes, approaching governance solely as a matter of compliance misses its potential to improve the quality and effectiveness of AI systems.

AI governance affects the entire business

One of the reasons AI is being adopted at such a rapid pace (and attracting regulatory attention) is that the technology holds immense potential to improve outcomes across many different parts of a business or organization. Well-designed AI governance can increase the quality of AI systems and speed up their development while mitigating or even avoiding risks. It also increases ROI in a crucial area of technology research and development.

The pace of change, the scale of the business impact, and the complexity of interacting and sometimes contradictory AI regulations, industry rules, and codes of conduct mean AI governance is no longer a matter that can be left to any single internal organization. There are multiple stakeholders:

  • Risk managers are responsible for developing and managing policy.
  • AI model builders want to solve problems and need to make their work understandable to non-technical audiences.
  • Business leaders carry ultimate responsibility for the outcomes of AI systems and need assurance that the connecting governance frameworks are in place.

Across all of these teams, structured, cross-functional governance will be helpful whether businesses are driving AI disruption or being disrupted by it, seeking to optimize operations and processes, or defending their actions to auditors and regulators.

It can be hard to fully understand the need for and scope of governance when it is not your core competence. Now is the right moment for businesses to take a step back, gather the relevant stakeholders and experts, and consider a collaborative approach to the management of AI that serves both commercial goals and compliance requirements.

Get ready for AI governance

If you are wondering how to get started with AI and model governance, contact us for an assessment.

Learn more about AI readiness