An introduction to ML Assurance

Machine Learning Assurance (MLA) is a controls-based process for ML systems that establishes confidence and verifiability through software and human oversight.

The objective of MLA is to assure interested stakeholders that an ML system is functioning as expected: in particular, that it is transparent, compliant, fair, safe, and operating optimally.

ML is unique

ML systems present a new paradigm of technology and business decisioning.

Individuals and organizations responsible for managing risk and compliance – including regulators – have not previously contended with such dynamic and variable models and systems.

  • Data: Huge data collections
  • Decisions: Rapid, opaque decisions
  • Models: Evolving, rapidly scaling models
  • Technology: Specialized, complex technology

ML needs assurance

Because machine learning is making key decisions that affect people's lives, livelihoods, and opportunities, trust and confidence in ML systems are paramount. There should be a reasonable ability to evaluate and verify their safety, fairness, and compliance.

Companies, regulators, and consumers all benefit from an objective method of assuring ML systems.

For Regulators

ML systems are opaque to non-technical professionals, requiring more ongoing attention from objective parties to ensure safety and compliance.

For Society

Unassured ML applications can lead to broad misconceptions and bias, undermining the long-term promise of the technology.

Pillars of MLA

Three core pillars empower an effective MLA function and responsible use of ML.

Context

MLA requires clear understanding and documentation of the considerations, goals, and risks evaluated during the lifecycle of an ML application.

Verifiability

MLA requires that each business and technical decision and step can be verified and interrogated.

Objectivity

MLA requires that any ML application can be reasonably evaluated and understood by an objective individual or party not involved in the model's development.

Machine Learning Assurance Framework

Creating assured ML systems requires a continuous, coherent approach throughout the lifecycle of each project, as well as across your enterprise operations. Careful coordination of people, processes, and systems can create the clarity, confidence, and accountability that practitioners need.

Organizations can pair the established, effective CRISP-DM steps with the detective controls vital for machine learning systems to build a powerful assurance framework and robust risk/control matrices.

  • Business Understanding: (no dedicated control area; revisited at every following step)
  • Data Understanding: Data Governance
  • Data Preparation: Data Preparation, Data Segmentation
  • Modeling: Algorithm Selection, Cross-Functional Review, Metric Selection
  • Evaluation: Model Validation
  • Deployment: Executive Accountability, Monitoring Process, Model Logging & Interpretability
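The step-to-control mapping above can be sketched as a minimal risk/control matrix, here as a Python structure. The step and control names come directly from the framework; the helper function is illustrative:

```python
# A minimal risk/control matrix mapping CRISP-DM steps to MLA controls.
# Business Understanding has no dedicated control area; it is revisited
# at every following step.
CONTROL_MATRIX = {
    "Business Understanding": [],
    "Data Understanding": ["Data Governance"],
    "Data Preparation": ["Data Preparation", "Data Segmentation"],
    "Modeling": ["Algorithm Selection", "Cross-Functional Review",
                 "Metric Selection"],
    "Evaluation": ["Model Validation"],
    "Deployment": ["Executive Accountability", "Monitoring Process",
                   "Model Logging & Interpretability"],
}

def controls_for(step: str) -> list[str]:
    """Return the detective controls attached to a CRISP-DM step."""
    return CONTROL_MATRIX[step]

print(controls_for("Evaluation"))  # ['Model Validation']
```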

Business Understanding

Evaluate key business drivers and questions at project inception and revisit them at every following step.


Data Understanding

Understand data lineage, quality, and usage rights.

Control Area

  • Data Governance
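As a sketch of one detective control in this area, a basic data-quality check might flag fields with excessive missing values before modeling proceeds. The field names and the 10% threshold below are hypothetical:

```python
def missing_value_report(records, threshold=0.1):
    """Flag fields whose missing-value rate exceeds a governance threshold.

    records: list of dicts; a value of None counts as missing.
    Returns {field: missing_rate} for fields above the threshold.
    """
    if not records:
        return {}
    flagged = {}
    for field in records[0].keys():
        missing = sum(1 for r in records if r.get(field) is None)
        rate = missing / len(records)
        if rate > threshold:
            flagged[field] = rate
    return flagged

# Hypothetical records: 'income' is missing in half of them,
# well above a 10% governance threshold.
rows = [{"age": 34, "income": 52000}, {"age": 29, "income": None},
        {"age": 41, "income": None}, {"age": 37, "income": 61000}]
print(missing_value_report(rows))  # {'income': 0.5}
```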

Data Preparation

Pre-process standardized data sets for training, test, and production.

Control Areas

  • Data Preparation
  • Data Segmentation
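A minimal sketch of the Data Segmentation control: a seeded, reproducible split into training, test, and holdout sets, so an objective reviewer can verify exactly which records the model saw. The 70/20/10 proportions are illustrative:

```python
import random

def segment(records, train=0.7, test=0.2, seed=42):
    """Deterministically split records into train/test/holdout sets.

    A fixed seed makes the segmentation reproducible, which is what
    lets a reviewer re-derive and verify the split after the fact.
    """
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_test = int(len(shuffled) * test)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_test],
            shuffled[n_train + n_test:])

train_set, test_set, holdout = segment(list(range(100)))
print(len(train_set), len(test_set), len(holdout))  # 70 20 10
```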

Modeling

Create the simplest, best-fitting, and most performant models.

Control Areas

  • Algorithm Selection
  • Cross-Functional Review
  • Metric Selection
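One way to operationalize "simplest, best fit" as an Algorithm Selection control is to prefer the least complex candidate whose validation score falls within a tolerance of the best score. The candidates, complexity ranks, and scores below are hypothetical:

```python
def select_model(candidates, tolerance=0.01):
    """Pick the simplest candidate scoring within `tolerance` of the best.

    candidates: list of (name, complexity, validation_score);
    lower complexity is simpler, higher score is better.
    """
    best_score = max(score for _, _, score in candidates)
    eligible = [c for c in candidates if c[2] >= best_score - tolerance]
    return min(eligible, key=lambda c: c[1])[0]

# Hypothetical candidates: the logistic model scores nearly as well as
# the gradient-boosted model, so the control favors it for reviewability.
candidates = [
    ("logistic_regression", 1, 0.912),
    ("random_forest",       3, 0.915),
    ("gradient_boosting",   5, 0.918),
]
print(select_model(candidates))  # logistic_regression
```

Tightening the tolerance shifts the choice back toward raw performance, making the simplicity/performance trade-off an explicit, reviewable parameter.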

Evaluation

Determine accuracy and precision of models before launch and continuously in production.

Control Area

  • Model Validation
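The accuracy and precision checks at the heart of Model Validation can be sketched in a few lines; the labels and predictions below are illustrative:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred, positive=1):
    """Of all positive predictions, the fraction that were correct."""
    predicted_pos = [(t, p) for t, p in zip(y_true, y_pred) if p == positive]
    if not predicted_pos:
        return 0.0
    return sum(t == positive for t, _ in predicted_pos) / len(predicted_pos)

# Hypothetical validation labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))   # 0.75
print(precision(y_true, y_pred))  # 0.75
```

Running the same checks continuously in production, not just before launch, is what distinguishes this control from a one-time sign-off.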

Deployment

Deliver business sign-off and develop capacity for continuous inspection and review.

Control Areas

  • Executive Accountability
  • Monitoring Process
  • Model Logging & Interpretability
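A minimal sketch of the Model Logging control: emitting a structured, timestamped record of every decision, which gives reviewers the verifiable audit trail MLA calls for. The model version and feature names are hypothetical:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ml_assurance")

def log_prediction(model_version, features, prediction):
    """Emit a structured, timestamped record of one model decision.

    Persisting inputs, output, and model version for every decision is
    what lets an objective reviewer reconstruct and interrogate it later.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    logger.info(json.dumps(record))
    return record

# Hypothetical decision from a hypothetical credit model.
entry = log_prediction("credit-model-1.4.2",
                       {"age": 34, "income": 52000}, "approve")
```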

Related Resources

Visit our AI Trust Library.