Model Assurance: The Future Is Now

Thanks to the advent of Big Data and advances in machine learning and statistical modeling, the insurance industry is moving important business and customer decisions into the hands of models. Models now touch many core business functions, yet they often operate with a higher degree of trust than is warranted given the new class of risk they present to carriers.

As carriers evaluate their investments in and utilization of models, they need to plan intentionally for the risks those models introduce. What brand, financial, customer, and regulatory risks emerge when a model or AI system makes decisions for the business? Are new controls in place, built with these model types in mind, that help identify models not performing as expected, evidence testing for fairness and soundness, and ensure a cross-functional team of stakeholders understands and has evaluated the risk versus reward of the selected modeling approach?

Regulators are considering these risks and advancing policy and guidance. The NAIC's attention to the most advanced models deployed in insurance companies – those using Artificial Intelligence (AI) and Machine Learning (ML) – has generated increased regulatory scrutiny for all models, as well as the data carriers are allowed to use in them. As state departments of insurance (DOIs) such as Colorado's and Connecticut's step up to oversee models and the data they may use, the Federal Trade Commission has re-emphasized that existing laws on fairness cover algorithmic decision-making.

While new regulatory conditions are still forming, a thematic expectation is apparent. Regulators are going to ask for proof of intentional, incremental risk management, governance, and accountability around models, in a more significant way than ever before. From the NAIC to the FTC and beyond, AI has prompted a broader realization of how much we may have over-trusted algorithms, and greater scrutiny is inevitable. There is broad agreement that regulators will expect evidence of how carriers manage their models for fairness, accountability, compliance, transparency, and robustness.

It is essential for carriers to establish new model assurances that reduce the potential harm and negative consequences of models performing outside business and regulatory expectations.

How Monitaur helps

Think of Monitaur's software as guardrails for your AI, ML, and other advanced models. Our software helps insurance companies govern and assure their model-based systems, creating trust and confidence in the high-stakes decisions those systems are making with increasing frequency.

The risks created by AI are not simply technical risks. Rather, they are business risks that every organization needs to align, organize, and collaborate on across teams and business units. As a result, we've focused our attention on developing software that supports the unique needs and use cases of both technical and non-technical stakeholders.

Monitaur's software helps to ensure your organization has evidence of comprehensive governance and assurance across the entire lifecycle of all of your models.

Why insurance

Insurance, perhaps more than any other major industry, stands to be transformed by the use of data and models. It is also an industry that must incorporate oversight, compliance, and consumer protection as a central element of every innovation. We fundamentally believe that by helping carriers establish transparency, trust, and confidence in their models and AI systems, we unlock this great potential and, ultimately, positive benefits for consumers.

We work with large enterprises across industries, and one of the most exciting observations we've made about insurance is the breadth and depth of our conversations with some of the largest carriers in the world. We consistently find C-suite and executive stakeholders driving efforts and priorities to enhance governance and demonstrate responsible use of data and models. Insurance leadership is personally invested in addressing bias and fairness. They are looking to take positive steps today for their own internal needs while preparing for the future as regulatory attention and action gain momentum. Carriers recognize the competitive advantage and important corporate citizenship that come from having trusted models and trusted development of AI.

But insurance also presents a uniquely challenging complexity in building model assurance: how to assure fairness. One aspect of working with our insurance customers that we did not fully appreciate was the challenge of delivering assurance of fairness when data about protected classes of individuals is not known to the carrier. That missing data makes it especially hard to prove fairness and to identify possible proxy discrimination when evaluating advanced statistical models and machine learning algorithms; bias is much easier to identify when the associated variables are available for measurement and validation. We're actively partnering with carriers to develop approaches that can reliably deliver independent model validations in concert with a lifecycle governance approach for bias and fairness. It will be a long journey, and we feel the industry's commitment to addressing this complex issue.
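To make the challenge concrete, here is a minimal, purely illustrative sketch (not Monitaur's method) of a common fairness check, the disparate impact ratio, which compares favorable-outcome rates across groups. The point is that this check requires a group label for each individual; when the protected attribute is unknown to the carrier, even this basic measurement cannot be run directly. All names and data below are hypothetical.

```python
def disparate_impact_ratio(outcomes, groups, favorable=1):
    """Ratio of favorable-outcome rates across groups: min rate / max rate.

    A ratio near 1.0 means groups receive favorable outcomes at similar
    rates; a common rule of thumb flags ratios below 0.8 for review.
    """
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in members if o == favorable) / len(members)
    return min(rates.values()) / max(rates.values())

# Hypothetical approval decisions (1 = approved) and group labels.
# Group A is approved 3 of 5 times (0.6); group B, 2 of 5 (0.4).
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
group     = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, group)   # 0.4 / 0.6 ≈ 0.667
flagged = ratio < 0.8                              # True: warrants review
```

Note that the `group` list is exactly what a carrier often lacks; proxy-discrimination analysis must instead reason about correlated variables, which is far harder.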

The bottom line

Insurance as a business doesn't get the credit it deserves for directly and positively impacting people's lives. Whether providing peace of mind through coverage or helping families stay afloat in the midst of unexpected challenges, good insurance is foundational to the success and prosperity of everyone covered.

We know that – with the proper governance and assurance – AI and ML will help carriers extend coverage to more individuals and improve the products they're offered. We are excited that Monitaur can play a role in creating a deeper, stronger financial foundation for many more families and communities by building trust and confidence in model decisions.


As first published in Demotech, Inc. Summer 2021/Vol 7, No. 3