A brief introduction to Machine Learning Assurance

Machine Learning (ML) enables a computer to recognize patterns without being explicitly programmed. With the rise of big data and computing power, more and more companies are building algorithmic models to make decisions that affect consumers daily. From medical imaging models that help radiologists find cancer to self-driving cars, prison sentencing, and fraud detection, models are changing the world around us. However, these models are not always transparent or intuitive in how they work, which undermines trust in the systems they power.

There is a great deal of innovation and progress waiting to be released in the form of AI/ML applications across many industries. For example, in radiology, a classification model can help doctors determine whether a tumor on an X-ray is benign or malignant. In finance, a classification model could help determine whether a customer will default on their loan payments. However, without mature Machine Learning Assurance capabilities, many of these groundbreaking innovations stay stuck in a sandboxed state. In this blog post, we introduce the field of Machine Learning Assurance and show how its principles, applied with the assistance of Monitaur, can unleash ML innovation.

Machine Learning Assurance is an iterative process where every stage sets the groundwork for the following stage. The five tenets of Machine Learning Assurance are:

  1. Holistic Business Process Understanding
  2. Logging
  3. Verifiability
  4. Reperformance
  5. Objective Third Parties

Holistic Business Process Understanding

To fully understand a process, stakeholders need answers to key questions:

  • What are the key objectives and business goals?
  • What risks or adverse events are possible?
  • Where does the data come from?
  • How is it transformed?
  • Which model was used?
  • How was testing conducted?
  • How was the model deployed?
  • What is the monitoring process?

Documentation of each step (along with accompanying flowcharts), in conjunction with Logging, Verifiability, and Reperformance, establishes the process as thoughtful and controlled.
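
To make these answers concrete and auditable, they can be captured in a lightweight, structured record that travels with the model. The Python sketch below is a minimal illustration under our own assumptions; the field names and the hypothetical credit-risk example are illustrative, not a prescribed schema.

    # A minimal, illustrative model documentation record (hypothetical schema).
    # Each field answers one of the process-understanding questions above.
    import json

    model_record = {
        "objective": "Flag loan applications likely to default",
        "risks": ["unfair denial of credit", "regulatory non-compliance"],
        "data_sources": ["internal loan history", "credit bureau feed"],
        "transformations": ["impute missing income", "one-hot encode employment type"],
        "model": {"type": "gradient boosted trees", "version": "1.3.0"},
        "testing": "holdout performance and fairness checks, documented in the test report",
        "deployment": "batch scoring, nightly",
        "monitoring": "weekly drift report on input distributions",
    }

    # Persisting the record alongside flowcharts gives reviewers a single
    # starting point for understanding the process end to end.
    with open("model_record.json", "w") as f:
        json.dump(model_record, f, indent=2)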

Logging

In financial auditing, there is an oft-repeated phrase: if it isn’t documented, it didn’t happen. The lack of consistent, comprehensive, readily accessible, and easy-to-understand logs of model inputs and decisions plagues the verifiability of ML implementations. Without detailed and reliable logs, Machine Learning Assurance is unachievable.
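
As a rough sketch of what this could look like, a decision log entry should capture at minimum the inputs, the output, the model version, and a timestamp. The snippet below uses Python’s standard logging module; the field names and the hypothetical credit example are our own illustrative assumptions, not a required format.

    # A generic sketch of structured decision logging (illustrative only).
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="decisions.log", level=logging.INFO, format="%(message)s")

    def log_decision(model_version, inputs, decision):
        # One self-contained record per decision, so it can be found and
        # reviewed later without reconstructing state from other sources.
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
        }
        logging.info(json.dumps(record))

    log_decision(
        model_version="1.3.0",
        inputs={"applicant_id": "A-1042", "income": 52000, "utilization": 0.41},
        decision={"credit_limit_increase": 10000},
    )

Writing one self-contained record per decision keeps every entry independently reviewable, which pays off in the next two tenets.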

Verifiability

Verifiability is the ability to access, examine, and audit decisions within a time- and condition-specific context.

For example, to verify whether an applicant was correctly assigned a $10,000 credit limit increase, the transaction behind that assignment, even if it occurred months ago, needs to be accurately captured and available for business review. This allows a “double check” of the decision that resulted in the credit limit bump. Such work is particularly essential in regulated industries. Model auditors can confirm that the same inputs deliver the same decision, providing assurance of the model’s decision making.
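
Continuing the hypothetical log format from the Logging sketch above, verifying the credit limit decision amounts to retrieving the original record, even months later, exactly as it was captured. The lookup below is an illustrative assumption about how the log is stored, not a definitive implementation.

    # Retrieve a past decision from the hypothetical decision log so a
    # reviewer can examine it in its original, time-specific context.
    import json

    def find_decision(log_path, applicant_id):
        # Scan the log for the record belonging to this applicant.
        with open(log_path) as f:
            for line in f:
                record = json.loads(line)
                if record["inputs"].get("applicant_id") == applicant_id:
                    return record  # inputs, decision, model version, timestamp
        return None

    record = find_decision("decisions.log", "A-1042")
    if record:
        print(record["decision"], record["model_version"], record["timestamp"])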

Reperformance

“Reperforming” a given transaction or decision is a critical part of any audit.

For example, in a financial audit, it is common for auditors to recalculate cash inflows and outflows. Replicating a model’s decision, however, is complicated by the way input data is transformed before being fed into the model. Managing past model versions, accommodating environmental shifts, and navigating data privacy further complicate reperformance. Yet model trust requires the capability to rerun, or reperform, a decision. This is an expectation of risk managers, auditors, and regulators.
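
As a minimal sketch, and assuming the versioned model artifact and its preprocessing code can still be loaded, reperformance amounts to rerunning the logged inputs through the logged model version and comparing the result to the logged decision. The load_model and preprocess names below are hypothetical placeholders, not a specific API.

    # Reperform a logged decision: reload the same model version, apply the
    # same preprocessing, and confirm the output matches what was logged.
    def reperform(record, load_model, preprocess):
        # load_model and preprocess stand in for your own versioned model
        # artifacts and feature transformation code.
        model = load_model(record["model_version"])  # the exact version used originally
        features = preprocess(record["inputs"])      # the same transformations used in production
        reproduced = model.predict(features)
        return reproduced == record["decision"]

    # Example usage, given real load_model/preprocess implementations:
    # assert reperform(record, load_model, preprocess), "decision could not be reproduced"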

Objective Third Parties

Regardless of how thoroughly a process is designed, errors may be introduced. Individual or group biases, whether implicit or explicit, leave their imprint on workflows. The sheer size of the consulting and auditing sectors attests to this, and to the value of an independent, third-party perspective. Maximizing assurance in high-risk, high-reward processes requires objective scrutiny, which also supports adherence to legal and ethical standards.

High-risk AI/ML models should only be deployed with objective, third-party assurance. The repercussions of deploying an under-scrutinized system can be fatal, particularly in emerging domains such as fully autonomous vehicles or cancer screening.

Monitaur supports each of the five tenets by providing a comprehensive assurance and model management platform for recording, monitoring, verifying, and auditing your machine learning models. For companies in regulated industries that use models to make decisions, Monitaur delivers the transparency and auditability necessary to manage compliance and unlock innovation.