Responsible AI doesn’t exist without governance

AI Governance & Assurance
Ethics & Responsibility

What is the importance of governance for responsible AI?

Suppose you asked a team member to explain the reasoning behind a decision they made on the job. You might want to understand how they work, or you might be following up on a result of their work, good or bad.

If they simply replied, "I'm not telling you," what would your next step be?

At the very least, that person would probably be put on a performance improvement plan, if not fired. That kind of stonewalling and lack of communication is not tolerated in the workplace.

What makes this acceptable coming from our models?

Without proper governance practices, deploying AI is like asking your models to keep silent about why, what, and how they're making decisions that affect your business.

The purpose of AI is to create models that achieve desired outcomes—thus, it's important to be mindful of the results they yield and the impact they make.

Can you explain that decision? Could a machine?

Who is accountable for AI models and their outcomes?

Both climate change and the adoption of artificial intelligence (AI) have accelerated over the past decade.

Businesses are increasingly expected to take into account social and environmental issues, as evidenced by the Environmental, Social, and Governance (ESG) and Corporate Social Responsibility (CSR) movements.

A Deloitte survey found that within one year, the percentage of people who had deployed three or more types of AI rose from 62% to 79%, and 94% of business leaders felt that AI was vital for success. This rapid expansion demonstrates just how pervasive advanced modeling has become in businesses around the world.

Moreover, tech businesses with AI-first offerings are appearing in every sector, from the most cutting-edge markets to the laggards. Embedding AI in software applications has become a reality rather than empty hype. Echoing the earlier software wave, predictions that models will "eat the world" are coming true, and it seems reasonable to assume that every organized dataset will eventually have learning models attached to it.

Given the implications of AI, it is understandable that it ties ESG and CSR more closely together. To be responsible and ethical, organizations must have proper governance over their AI models, the ones that can shape consumers' lives or lead to negative business results. Without this governance, firms cannot demonstrate responsible behavior.

A joint study by MIT Sloan and Boston Consulting Group showed that "though nearly all companies saw Responsible AI (RAI) as a major concern, just one in four had fully developed RAI programs." Furthermore, those organizations with better programs saw more success as they were able to introduce more models with assurance.

The quest to meet market demand, technological advancement, and urgent global needs has driven growing demand for Responsible AI roles and the business function around them: markets are seeking new opportunities, the field is advancing technically, and reliable solutions are needed on a global scale.

Businesses that invest in responsible AI are likely to see an improved return on their investment. A growing expectation from consumers, regulators, investors and society is that businesses conduct themselves responsibly.

Business leaders tend to make the right decision when given the chance.

What is AI governance?

AI governance is just one facet of a broader set of obligations that includes ESG, CSR, operational risk management, cybersecurity, privacy policy, and data governance. Although companies manage these initiatives in different ways, what matters is that they exist and bring clarity to the many stakeholders involved.

Let's not be intimidated by the concept of AI governance. It is simply the practice of setting and maintaining good standards for your ML and AI models.

In this context, governance is something that all businesses should aspire to do. There's no excuse not to adhere to the expected standards of corporate behavior.

Executing enterprise-level AI governance requires applying traditional business management principles to this newer technology, including:

  • Strategizing around the risks and opportunities of AI should be done in the same manner as other business initiatives.
  • Comprehending how AI can influence your business, customers, and society is key when making decisions.

Similar to other business practices, AI governance requires a combination of people, structures, and processes to be successful across the entire data science lifecycle. Good governance adheres to the following requirements:

  • Policies are created, documented, and managed effectively
  • Cross-functional integration and collaboration
  • Oversight and reporting
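As a hypothetical sketch, the first requirement, policies that are created, documented, and managed, can even be made machine-checkable before a model ships. The field names below are illustrative inventions, not drawn from any particular governance tool:

```python
# Hypothetical sketch: a governance policy expressed as machine-checkable
# requirements, so documented policy can be enforced automatically before
# deployment. Field names are illustrative.

REQUIRED_FIELDS = [
    "business_objective",   # why the model exists
    "owner",                # who is accountable
    "validation_report",    # evidence the model was tested
    "approved_by",          # sign-off from an uninvolved reviewer
]

def deployment_gaps(model_record):
    """Return the required governance fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not model_record.get(f)]

record = {"business_objective": "churn prediction", "owner": "risk-team"}
print(deployment_gaps(record))  # ['validation_report', 'approved_by']
```

A check like this is the mechanical half of "oversight and reporting": it does not judge whether the documentation is good, only whether it exists, which is where human reviewers take over.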

A key distinction exists in that AI entrusts business decisions to an automated system. So the concern then becomes how we effectively and responsibly regulate it.

What's the difference between model governance and AI governance?

AI has really challenged us to think differently about our model governance, which had previously been quite lax.

[Image: the 2013 meme of Microsoft Excel holding the entire financial system on its shoulders]
The meme may be funny, but the reality is not far off when we consider the lack of regulation around financial models.

When constructing a statistical or financial model, we build it on assumptions about how it will continue to perform. Good governance means recording, tracking, and validating those assumptions, whether for one particular model or a larger AI program.

Consequently, the rise of AI governance sheds light on the necessity to query, monitor, and evaluate model performance, as well as contemplate risks connected with the models.

Therefore, the scopes of model governance and AI governance overlap in the following ways:

  • Tracking business objectives and assumptions to evaluate your firm's progress
  • Understanding the intended outcome, the model's "responsibility", which helps data scientists use the correct methods for a successful result
  • Establishing the technology and methodology for application development, a crucial factor in success
  • Knowing when it's necessary to reassess the model
  • Understanding the context of your data and its usage, which leads to more effective business decisions and keeps your teams aligned
  • Putting in place a system of knowledge management and a single source of truth so that your teams all work from the same information
  • Comparing performance and impact against clear economic and business expectations

Let's look at three principles for managing responsible AI and establishing governance.

Context of the situation is clear

Prior to production, maintaining clear context means that business goals, scope, potential risks, existing limitations, and data are properly defined and documented.

When the model is in use, the focus of AI governance moves to monitoring the context around it, particularly testing and validating that the model is operating fairly and efficiently.

There are verification protocols in place

Every business or technical decision and action taken during model development deserves verification and scrutiny. A central system of record that provides visibility helps keep the team accountable to governance by:

  • Capturing evidence of those tests in a systematic way that is transparent, available, and indelible
  • Providing uninvolved, knowledgeable reviewers (see more below) the ability to verify and approve that good practices were used
  • Documenting any variances that emerge and how those problems were remediated
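To make "transparent, available, and indelible" concrete, here is a minimal sketch of a hash-chained evidence log. It illustrates the idea rather than prescribing a production system of record; the class and field names are invented for the example:

```python
import hashlib
import json

class EvidenceLog:
    """Minimal append-only system of record: each entry is chained to the
    previous one by hash, so tampering with history is detectable."""

    def __init__(self):
        self.entries = []

    def record(self, actor, action, details):
        """Append an evidence entry and return its hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {"actor": actor, "action": action,
                   "details": details, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        payload["hash"] = digest
        self.entries.append(payload)
        return digest

    def verify(self):
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("actor", "action", "details")}
            body["prev"] = prev
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = EvidenceLog()
log.record("data-scientist", "validation-test", {"auc": 0.91})
log.record("risk-reviewer", "approval", {"decision": "approved"})
assert log.verify()
```

Because each entry's hash covers the previous entry's hash, altering any historical record breaks verification from that point forward, which is what makes the evidence indelible in practice.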

Third-party objective evaluation is a regular occurrence

Adopting the gold standard of governance ensures that ML models can be evaluated and comprehended by an impartial individual or entity not affiliated with the model's construction. If a machine learning initiative is designed with context and transparency in mind, stakeholders like risk managers have the necessary information to confidently approve its deployment.

Creating lines of defense from model governance best practices

Applying the principles above helps your organization create multiple lines of defense against the outsized risks that AI can pose to the business. Lines of defense that follow best practices are both internal and external. AI governance should cover:

  • Model documentation and evidence relating to business processes, model decisions, data, and controls
  • Internal ML monitoring
  • Defined controls and frameworks
  • Compliance mapping
  • Deployment approval workflows
  • Objective controls reviews and oversight
  • Independent monitoring
  • Ad-hoc audits and testing
  • And more

The push for external independent audits of AI is a common thread in regulatory and risk management discussions in the broader world. The EU AI Act, for example, will require external independent audits of high-risk AI systems. The NIST AI Risk Management Framework also states in Measure 1:

“Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, and external stakeholders and affected communities are consulted in support of assessments.” 

We should expect this requirement to propagate around the world as more AI-specific regulations and standards frameworks launch.

Good governance reduces bias and improves overall model performance

In parallel with recent social movements, AI has also brought a new level of awareness around bias in data. Bias is not just an “AI problem” – as a society, we are grappling with equity and fairness and as we continue to advance those conversations, we should expect our technology to reflect those principles and expectations. Historical data is probably going to be biased because society has been biased. This is not only an ethical and reputational risk for businesses; it’s a huge legal risk.

Once we acknowledge that bias is a HUMAN problem, we can recognize that unfairness in machine learning models is both caused by and perpetuated by humans, not the machine. ML uses data that humans provide and performs those functions that we humans assign it. Any malfunctions, maladaptations, or maliciousness are thus extensions of human actions and choices. Wide-scale deployments making consequential decisions in the real world have only uncovered the need for more human responsibility and accountability. This puts the onus on human prevention, which ultimately falls under the purview of AI governance.

A holistic approach to lifecycle AI governance helps root out bias so companies can mitigate and manage the problem before it scales. Moreover, a strong program of governance helps companies anticipate areas where bias might emerge and take a more proactive approach earlier in the model lifecycle. The end goals should be to:

  • Prevent models that are highly susceptible to bias from ever making it to production environments
  • Identify bias as soon as it appears in models that are in production
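One common way to quantify the second goal is a demographic parity check on live predictions. The sketch below, using invented data, computes the gap in positive-prediction rates across groups; the threshold a firm acts on is a policy choice, not a technical one:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any
    two groups; 0.0 means every group is treated identically."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Invented example: group "a" gets a positive outcome 75% of the time,
# group "b" only 25% of the time.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

Running a metric like this continuously over production predictions, and alerting when the gap crosses a documented threshold, is exactly the kind of control a governance program would define, review, and audit.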

Responsible AI is the future

The responsible AI that good governance enables addresses three paramount concerns for corporations today: business performance and ROI, government regulations and compliance, and corporate responsibility.

Below are a few of the business benefits of effective AI governance.

  • Compliance with AI and algorithmic regulation
  • Protecting the company from reputational and brand damage
  • Accelerating AI/ML innovation with faster approval processes
  • Cost savings via the efficiency of filing and audit-related work
  • Faster detection and resolution of key risk events
  • Improving internal and external communication, cross-functional collaboration
  • Establishing repeatable and effective Responsible AI practices

With these benefits also comes better ROI on AI programs. In a 2020 study from ESI ThoughtLab, researchers found that “overperformers” had a higher ROI from AI implementations.

AI overperformers were enterprises that had more of a business practice foundation in these areas:

  • Addressing privacy, regulatory, and security concerns
  • Creating standards and training on AI ethics
  • Gathering, integrating, and formatting data for AI use
  • Measuring and tracking AI performance results
  • Preparing an AI platform to manage data
  • Defining business cases, models, and plans

When you reflect on the business case, touched on at the highest level above, combined with the peace of mind that comes with doing the right thing for consumers and society, AI governance starts to look less like a chore and more like a vital business enabler.
