Following the Thread – Newsletter Issue 3

Anthony Habayeb
September 15, 2020

[Video transcript lightly edited for clarity and grammatical errors.]

Hello, Anthony Habayeb here, the co-founder and CEO of Monitaur, a Machine Learning Assurance company that is bringing you this Machine Learning Assurance Newsletter, a newsletter at the intersection of machine learning, regulation, and risk.

The power of soft law and principles

Recently – in July, I believe – I read an article from the Brookings Institution about the importance of soft law in regulating AI. It is not one of the content pieces in this issue, but it describes how soft law, principles, and standards can be a huge enabler of artificial intelligence while also giving regulators a good strategy for communicating to companies what is expected of them.

What I like about principles – and this issue includes some examples of principles from regulators – is that they say very clearly, “These are the things we’re worried about that we need you, the company operator, to prove you are intentionally and specifically trying to manage. You are specifically building controls to handle transparency throughout a lifecycle. You are specifically building controls to consider bias and fairness risks throughout a lifecycle. And you are specifically placing accountability for those decisions throughout your organization.”

Those are three examples of principles from the NAIC. You can see how principles give enough instruction while still allowing enough latitude for a company to clearly demonstrate how it is trying to meet them and, in doing so, to communicate to a regulator, “We are a good actor committed to doing the right thing as best we can.”

Emergent regulatory areas require intentional approaches

Regulators understand that mistakes can happen. After all, we humans have been making them for decades when it comes to regulated activities – and we’ll continue to. ML systems will make some mistakes too. What will be unacceptable is deploying systems that can have huge impacts on our population without the companies using them demonstrating how they are specifically building controls to manage the things we are worried about.

Enjoy this issue. If you know anyone else who might be interested in the ML Assurance Newsletter, please share. And if you have anything you’d like to talk to me about, please reach out. Thank you and have a great day.
