Responsible AI doesn’t exist without governance


Why Enterprise AI governance is at the heart of RAI

Say you hired a human to do a job. They dive into the work, and at some point along the way you ask them to explain why they made a certain decision. You might be asking the question to better understand their process, or you might be asking because of a particular outcome (good or bad) that resulted from their work. 

If they repeatedly replied with “I’m not telling you” – what would you do?

Most likely, the person would (at least) be put on a performance improvement plan - or ultimately be fired. This kind of behavior and lack of communication isn’t generally tolerated in the workplace among humans.

So, why would we accept this from our models? 

Operating your AI without good governance practices is the equivalent of allowing your models to say “I’m not telling you” when it comes to the why, what, and how these automated algorithms are making decisions on behalf of your business.

What’s the point of AI? You’re investing in models to do a job. And you should expect accountability for how that job is performed and what impact is made.

The responsible AI revolution is coming

Two trends have been growing in parallel over the past decade:

  1. The use of AI in business. 
  2. The expectation that businesses act responsibly, in a greater sense, as seen in the Environmental, Social, and Governance (ESG) and Corporate Social Responsibility (CSR) movements. 

In a recent survey by Deloitte, 79% of respondents say they've fully deployed three or more types of AI compared to just 62% in 2021. In the same report, 94% of business leaders surveyed say AI is critical to success today. Such remarkable growth in a single year underscores the pervasive importance of advanced modeling to businesses worldwide. 

Additionally, more and more technology vendors with AI-first offerings are emerging in every market, from the most technically savvy industries to the most lagging. Building AI into software products has begun to move beyond marketing hype to reality. As with the software revolution before it, predictions that models will run the world are coming to fruition. There is every reason to believe that every structured dataset will eventually have learning models attached to it.

With that eventuality in mind, it is not surprising that we are experiencing a great awakening to the risks associated with that influence on our lives. AI has become closely intertwined with ESG and CSR because it is such an important driver of how responsible and ethical an organization is. Without AI governance, organizations will not be able to demonstrate responsible practices around their most consequential models – those with the capacity to impact consumers’ lives most and those that can create adverse business outcomes.

A report conducted by MIT Sloan in partnership with Boston Consulting Group found that “while 84% of organizations view Responsible AI (RAI) as a top management issue, only a quarter have fully mature RAI programs.” Moreover, those organizations with stronger programs benefited significantly, since they were able to put more models in production with confidence.

Finally, Responsible AI as a job title and business function has grown quickly this year. At a high level, this is likely for three reasons:

  • Businesses that create strong responsible AI programs generate better business outcomes and a higher ROI.
  • Consumers, regulators, investors, and society in general increasingly demand responsible business practices.
  • People (in this case business leaders) generally want to do the right thing when given a chance.

What is AI governance?

So, AI governance is a part of ESG and CSR. It also relates to operational risk management, cyber security, privacy policy, and data governance. How companies organize these various initiatives can vary, but the important thing is that they exist and create transparency across the many stakeholders involved.

It’s time to ditch the intimidation that often rides along with the word “governance.” Think of AI governance as no more than the act of defining and executing good practices related to your machine learning and artificial intelligence models.

Framed in those terms, governance becomes something we should expect every company to do – and something that businesses should all want to do. “I don’t want to adopt best practices” is not a phrase you ever hear pass the lips of business leaders and employees.

Enterprise AI governance is the work of applying tried and true business practices to this newer area of business empowered by AI. This includes:

  • Strategizing around the risks and opportunities associated with AI, just as you would with other business initiatives and opportunities. 
  • Understanding the impact of AI on your business, on your customers, and on society (again, as you would with any initiative).

AI governance is similar to other business practices in that it involves:

  • Creating, documenting, and managing policies
  • Cross-functional integration and collaboration
  • Oversight and reporting

The main difference is that with AI, we are outsourcing business decision-making to a non-human entity. And the question is how we govern that effectively and responsibly.

Model governance vs. AI governance

AI has been eye-opening in making us reconsider how we govern our models in general. Model governance in business has been pretty loosey-goosey.

[model governance meme]
This meme is humorous, but the reality is not too far off when we consider the lack of governance around financial models alone.

When it’s a statistical model or a financial model, we build the model around assumptions and expectations that the model will continue to operate successfully. Good governance is to document, monitor, and test those assumptions – whether related to a specific model or an overall AI program.

Ironically, then, the emergence of AI governance has brought greater visibility into the overall need to question, monitor, and measure model performance, along with the need to consider the risks associated with your models.

So the practices of model governance and AI governance are essentially the same; the difference is a question of scope and the type of model. Both involve:

  • Documenting business goals and assumptions
  • Understanding the desired outcome from the model’s “job”
  • Defining what technology and methodology is being used
  • Defining under what conditions the model should be re-evaluated
  • Understanding the context of your data and how it is used
  • Establishing a knowledge management system and a source of truth
  • Comparing performance and impact with expectations

Three principles for managing responsible AI and establishing governance

Context is documented and understood.

Context means that the business reasons, scope, risks, limitations, and data are well-defined and fully documented before a model goes into production.

Naturally, as the model operates, the focus of AI governance switches into an execution phase that centers on monitoring the contextual elements, particularly key tests and validations of models that demonstrate unbiased and optimal operation.

Verification processes are in place.

Every business and technical decision, and every step in the model development process, should be verifiable and open to interrogation. Verification requires visibility, so it is key to have a central system of record that enables:

  • Capturing evidence of those tests in a systematic way that is transparent, available, and indelible
  • Providing uninvolved, knowledgeable reviewers (see more below) the ability to verify and approve that good practices were used
  • Documenting any variances that emerge and how those problems were remediated
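One way to make captured evidence transparent, available, and indelible is a hash-chained log, where each entry commits to the hash of the previous one, so altering any past record invalidates everything after it. A simple sketch (assuming SHA-256 over JSON payloads; not a production audit system):

```python
import hashlib
import json

class EvidenceLog:
    """Append-only evidence log; each entry hashes the previous one,
    so tampering with any record breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash; any edited entry makes this return False."""
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps({"event": e["event"], "prev": prev}, sort_keys=True)
            if e["prev"] != prev or hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In practice the entries would be persisted to write-once storage, and an uninvolved reviewer could re-run the verification themselves rather than trusting the development team’s word.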

Objective evaluation by a third party happens routinely.

The gold standard of governance is when any ML system can be reasonably evaluated and understood by an objective individual or party not involved in the model development. If a machine learning project is built with the prior two principles of context and verifiability, it is far more likely that your business and risk partners can act effectively as second-line and third-line objective parties to evaluate it and greenlight your work to go into production.

Creating lines of defense

Applying the above principles will help your organization create multiple lines of defense against the outsized risks that AI can represent to the business. Lines of defense that follow best practices are both internal and external. AI governance should cover:

  • Model documentation and evidence relating to business processes, model decisions, data, and controls
  • Internal ML monitoring
  • Defined controls and frameworks
  • Compliance mapping
  • Deployment approval workflows
  • Objective controls reviews and oversight
  • Independent monitoring
  • Ad-hoc audits and testing
  • And more

The push for external independent audits of AI is a common thread in regulatory and risk management discussions in the broader world. The EU AI Act, for example, will require external independent audits of high-risk AI systems. The NIST AI Risk Management Framework also states in Measure 1:

“Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, and external stakeholders and affected communities are consulted in support of assessments.” 

We should expect this requirement to propagate around the world as more AI-specific regulations and standards frameworks are launched.

Good governance is the road to rooting out bias

In parallel with recent social movements, AI has also brought a new level of awareness around bias in data. Bias is not just an “AI problem” – as a society, we are grappling with equity and fairness, and as we continue to advance those conversations, we should expect our technology to reflect those principles and expectations. Historical data is likely to be biased because society has been biased. This is not only an ethical and reputational risk for businesses; it’s a huge legal risk.

Once we acknowledge that bias is a HUMAN problem, we can recognize that unfairness in machine learning models is both caused by and perpetuated by humans, not the machine. ML uses data that humans provide and performs those functions that we humans assign it. Any malfunctions, maladaptations, or maliciousness are thus extensions of human actions and choices. Wide-scale deployments making consequential decisions in the real world have only uncovered the need for more human responsibility and accountability. This puts the onus on human prevention, which ultimately falls under the purview of AI governance.

A holistic approach to lifecycle AI governance helps root out bias so companies can mitigate and manage the problem before it scales. Moreover, a strong program of governance helps companies anticipate areas where bias might emerge and take a more proactive approach earlier in the model lifecycle. The end goals should be to:

  • Prevent models that are highly susceptible to bias from ever making it to production environments
  • Identify bias as soon as it appears in models that are in production
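As one example of how the second goal can be operationalized, a deliberately simple production check is the demographic parity gap: the spread in positive-outcome rates across groups. A hedged sketch, where the group labels and data are purely illustrative (real programs use richer fairness metrics and legally appropriate thresholds):

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two
    groups. predictions: 0/1 outcomes; groups: parallel group labels."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Illustrative data: group "a" gets positives 75% of the time, "b" only 25%
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A governance policy might then require human review whenever the gap exceeds a documented threshold, with the result recorded as evidence for later audit.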

Responsible AI is the future

The responsible AI that good governance enables addresses three paramount concerns for corporations today: business performance and ROI, government regulations and compliance, and corporate responsibility.

Below are a few of the business benefits of effective AI governance.

  • Compliance with AI and algorithmic regulation
  • Protecting the company from reputational and brand damage
  • Accelerating AI/ML innovation with faster approval processes
  • Cost savings via the efficiency of filing and audit-related work
  • Faster detection and resolution of key risk events
  • Improving internal and external communication, cross-functional collaboration
  • Establishing repeatable and effective Responsible AI practices

With these benefits also comes better ROI on AI programs. In a 2020 study from ESI ThoughtLab, researchers found that “overperformers” had a higher ROI from AI implementations.

AI overperformers were enterprises that had more of a business practice foundation in these areas:

  • Addressing privacy, regulatory, and security concerns
  • Creating standards and training on AI ethics
  • Gathering, integrating, and formatting data for AI use
  • Measuring and tracking AI performance results
  • Preparing an AI platform to manage data
  • Defining business cases, models, and plans

When you reflect on the business case, touched on at the highest level above, combined with the peace of mind that comes with doing the right thing for consumers and society, AI governance starts to look less like a chore and more like a vital business enabler.