Our Machine Learning Assurance Platform

The first and only software built specifically to drive assurance and governance of machine learning, Monitaur's suite of products enables stakeholders across the enterprise to deliver trust and confidence throughout the full lifecycle of their ML and AI systems.
RecordML: Log – Version – Understand
AuditML: Inspect – Test – Verify
MonitorML: Bias – Drift – Anomalies
GovernML: Document – Control – Comply


Monitaur RecordML creates a single source of truth for ML transactions across your applications, the foundation for understanding what your ML is actually doing. Accelerate forensic discovery for audit and compliance, reveal ML decisioning, and get transparency across all of your ML deployments.
Capture the unique parameters behind every ML decision

Model & code versioning

See which models and code files produced every ML decision

Decision understanding

Generate explainability models for ML decisions automatically
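The transaction logging and versioning features above can be illustrated with a minimal sketch. The record schema and the `log_ml_transaction` function are hypothetical, written only to show the general idea of capturing every decision alongside the model and code versions that produced it; they are not Monitaur's actual API.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def log_ml_transaction(model_name, model_version, code_sha, inputs, output, store):
    """Append a record of one ML decision: the exact inputs, the output,
    and the model/code versions that produced it."""
    record = {
        "transaction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "model_version": model_version,
        "code_sha": code_sha,   # commit hash of the scoring code
        "inputs": inputs,       # the unique parameters behind the decision
        "output": output,
    }
    # A content hash over the record makes later tampering detectable in audit.
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    store.append(record)
    return record

# Usage: an in-memory list stands in for a durable audit log.
store = []
rec = log_ml_transaction("credit_risk", "2.3.1", "9f1c2ab",
                         {"income": 72000, "dti": 0.31}, {"score": 0.87}, store)
```

A real system would write these records to append-only storage, but the shape of the data is the point: every decision carries enough context to be found, replayed, and explained later.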


Monitaur AuditML ensures objective verifiability of your ML models and systems by enabling non-technical users to find, review, and test ML decisions. Risk, compliance, and governance owners can self-serve, and auditors and regulators can inspect without exposing IP, all without demanding effort from your technical teams.
Enable sampling and spot testing of ML transactions

Reperform transactions

Examine relationships between outcomes and inputs

Counterfactual analysis

Enable "what-if" scenario analysis of ML decisions
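Counterfactual "what-if" analysis of the kind described above boils down to re-scoring a logged decision with one input changed. This is a generic sketch with a toy model and a hypothetical `counterfactual` helper, not Monitaur's implementation:

```python
def counterfactual(model, inputs, feature, new_value):
    """Re-score a decision with one feature changed, to answer
    'what would the model have decided if this input were different?'"""
    altered = dict(inputs)
    altered[feature] = new_value
    return {
        "original_output": model(inputs),
        "counterfactual_output": model(altered),
        "changed_feature": feature,
        "from": inputs[feature],
        "to": new_value,
    }

# Toy model: approve when the income-to-debt ratio clears a threshold.
def toy_model(x):
    return "approve" if x["income"] / max(x["debt"], 1) > 3 else "deny"

# 50000/20000 = 2.5 -> deny; lowering debt to 15000 gives ~3.33 -> approve.
result = counterfactual(toy_model, {"income": 50000, "debt": 20000},
                        "debt", 15000)
```

Run against logged transactions, this lets a non-technical reviewer probe how sensitive a decision was to each input without touching the production system.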


No matter how much work you invest in pre-production, you need ongoing visibility into your models' decisions and movement once they are deployed. With MonitorML, you can identify potential risks proactively, empower compliance to operate continuously, and maximize the performance and security of your ML systems.
Implement controls through custom thresholds

Detect bias and anomalies

See when ML introduces bias and outliers

Identify feature and model drift

Apply statistical tests to flag bad inputs and outputs
Bolster existing processes for model and code changes
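Drift detection with statistical tests and custom thresholds, as listed above, can be sketched in a few lines. This example uses a standard two-sample Kolmogorov-Smirnov statistic to compare a feature's live distribution against its training-time baseline; the function names and the 0.2 threshold are illustrative assumptions, not Monitaur's method.

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of a baseline window and a live window."""
    a, b = sorted(sample_a), sorted(sample_b)
    def ecdf(sample, x):
        # Fraction of the sample at or below x.
        return bisect.bisect_right(sample, x) / len(sample)
    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

def drifted(baseline, live, threshold=0.2):
    """Flag feature drift when the KS statistic crosses a custom threshold."""
    return ks_statistic(baseline, live) > threshold

baseline = [0.1 * i for i in range(100)]            # training-time distribution
live_ok = [0.1 * i + 0.05 for i in range(100)]      # tiny shift: no alert
live_shifted = [0.1 * i + 5.0 for i in range(100)]  # large shift: alert
```

The same pattern generalizes: pick a test per feature, set thresholds to match the model's risk profile, and alert when a live window diverges from the baseline.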


To manage risk, AI and ML models require intentional oversight throughout their lifecycles – starting with the business problem, continuing through model development, and extending into deployment. GovernML creates visibility and verifiability across lines of defense, aligning technical and non-technical stakeholders around assurance of models.
Define policies and controls unique to high-risk models

Drive cross-functional alignment

Collaborative workflows for tech, risk, and business stakeholders

Create a central system of record

Organized evidence, approvals, and status in a single application
Share your proof of responsible model governance


Interested in independent validation and certification of your ML systems?