Introducing Monitaur - the Machine Learning Assurance company

AI Governance & Assurance

Machine Learning (ML) models are being built and trained to make decisions we care about.

Every day, more and more decisions about our lives are being made by models. Advances in ML are allowing us to build models that can diagnose diseases and recommend treatments to patients. They can decide whether or not a consumer should get credit or a mortgage. A machine might be deciding whether or not you get hired.

But with great promise comes risk…

These models can be complex, and that complexity means understanding how a decision was reached is not always easy. That lack of understanding and clarity creates risks, which companies naturally want to mitigate. On top of this, regulators are establishing policies governing the use of models.

ML Assurance and Governance

Companies need to establish ML Assurance and ML Governance.

When a model denies a consumer a loan, the company needs to be able to explain that decision and verify that it complies with policies and regulations.

When a heart monitor that relies on model-based software triggers a medical treatment, the medical device company is expected to have been monitoring the model on an ongoing basis and to be ready to support regular audits and inspections of it.

These are just a couple of examples – every industry across the globe is working to build models to help make decisions.

Transparency and Governance

For companies making life-impacting decisions with models, Monitaur’s AI auditing software establishes the transparency and governance expected by the public and regulators.

We’re building Monitaur to unlock the potential of AI and ML. We’re all-in on the possibilities they present, but we understand the adoption headwinds they will face without proper tools and solutions to manage assurance, establish governance and mitigate risk.

Who Are We?

Since 2016, through very different journeys, we developed an interest-turned-obsession with this need and opportunity.

Andrew, who began his career as an auditor, earned a Master’s in data science and applied it, among other things, to building models that perform audits.

As an auditor first, he wondered… who is auditing the model?

Andrew’s curiosity led him to author this paper on ML Audit, which in turn led to industry speaking engagements and participation in drafting this ISACA guidance. This expertise and thought leadership brought Andrew to Capital One to build ML Assurance.

Anthony is an active participant in Boston’s vibrant AI ecosystem. As an advisor to several AI-based companies, he saw fears about the trust and transparency of models create friction in adoption.

He saw that AI needed a co-pilot… solutions that deliver model controls so companies can confidently make the AI investments they are eager to make.

In founding Monitaur, we combine our assurance and business expertise with a top software expert. Michael Herman, co-founder of Real Python, is a software ninja, application architect, and mentor to software engineers around the world.

We are excited to have some other amazing people around the table… more on that another day.

What are we building?

We’re launching Monitaur with a focus on supporting the assurance and governance of machine learning systems. Monitaur helps companies record, monitor, and audit every decision made by an ML model. We’re focused on the needs of risk managers, compliance leaders, and internal auditors as we develop the field of Machine Learning Assurance.
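
To make that idea concrete, here is a minimal sketch of what recording a single model decision for later audit can look like. The `record_decision` helper, its field names, and the JSONL storage below are illustrative assumptions, not Monitaur’s actual API or schema.

```python
import json
import uuid
from datetime import datetime, timezone

def record_decision(model_name, model_version, features, prediction, log_path="decisions.jsonl"):
    """Append a record of one model decision so it can be explained and audited later.

    Illustrative sketch only: the fields and JSONL file are placeholders,
    not Monitaur's API.
    """
    record = {
        "decision_id": str(uuid.uuid4()),                     # unique ID an auditor can reference
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when the decision was made
        "model": {"name": model_name, "version": model_version},
        "inputs": features,                                   # the exact features the model saw
        "output": prediction,                                 # the decision that came out
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Example: log a hypothetical loan decision
decision_id = record_decision(
    model_name="loan_approval",
    model_version="1.3.0",
    features={"income": 54000, "credit_score": 612, "loan_amount": 15000},
    prediction={"approved": False, "score": 0.41},
)
print(f"Recorded decision {decision_id}")
```

The point of a record like this is that each decision carries its inputs, output, and model version, so it can be retrieved, explained, and checked against policy long after it was made.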

Our goal is to build a company and brand that becomes a partner to the AI/ML industry – a bridge between model makers and model consumers, between companies and regulators, between technology and the public. A Monitaured company is one that invests in the responsible use of models making decisions. A Monitaured company believes machine learning does not have to be a “black box” and that, with good practices, it can be a trusted and transparent user of AI/ML.