Following the Thread – Newsletter Issue 9

Risks & Liability

Hello, welcome to the Machine Learning Assurance Newsletter. I'm Anthony Habayeb, the CEO of Monitaur. We provide software that helps enterprise companies build governance and assurance around their machine learning and AI systems.

Regulatory concerns for AI

In this issue of the newsletter – and really over the past month – I have found myself reflecting quite a bit on a question that comes up in a lot of sales calls with enterprise companies considering Monitaur.

The question might be phrased in one of the following ways:

  • "Why implement governance, or auditability, or oversight of my AI systems today?"
  • Or "What laws exist that tell me what I should be doing?"
  • Or "What is the regulator's expectation?"
  • Or even "You talk to the FDA and NAIC, and you talk to federal regulators. What are they telling you we have to do?"

There is not yet perfect clarity on what a large organization must do, and expectations are not consistent across industries or geographies. No regulator is saying, "You should do A, B, C, and D when deploying an AI or ML system." Germany does have AIC4, which sets some explicit expectations for auditing AI systems, and many governmental bodies – the European Commission, the Federal Reserve, US state insurance regulators, and the FDA among them – are putting forward principles, but we don't have a mature, robust regulatory perspective on AI systems.

So you are a C-level officer at a really large company – what do you do?

Proactive risk management

Isaac Sacolick, who writes for many publications and is the author of books on disruption, transformation, and data governance, uses the phrase "proactive governance." Independently, I have been using the same phrase quite often when talking about Monitaur. There's really no downside – only upside – to implementing controls and proactively looking to mitigate the risks of your AI systems.

In this issue of the newsletter, you'll read about the bounty programs that Microsoft and Twitter have recently launched. It is significantly less expensive to reward someone $4,000–$10,000 for identifying bias in a system than to face public scrutiny, reputational damage, potential regulatory fines, and consumer lawsuits over a biased, unfair, or unsafe system.

Liz O'Sullivan wrote a reflective piece for TechCrunch, also linked in this issue, reviewing how the New York Department of Financial Services assessed whether there was unfairness or bias in Goldman Sachs' practices with the Apple Card. She points out that Goldman Sachs seemingly complied with current regulatory guidance and expectations; however, the market now knows that there was a lack of transparency and that no specific tests or controls were put around those systems.

The responsible choice

What kind of corporate citizen do you want your company to be? Would you prefer to proactively demonstrate that you have good controls around your systems and that you're taking incremental steps to mitigate and root out bias?

I've said this before in previous issues of our newsletter and in comments I've made publicly: there is no silver bullet for rooting out bias from AI systems. It doesn't exist. Instead, companies need process controls, people controls, and technology controls to mitigate unfair, intentional, and unknown biases. Data carries too much societal baggage, people carry too much history, and weak processes create opportunities for unfairness or inequities against certain classes of people. That is a fact.

But as a corporate executive, you can recognize those facts and invest in the effort, people, processes, and technology to do better.

Read on and learn more

The articles in this issue of the newsletter include a look at examples of emerging regulatory opinions; reflections on unfortunate events caused by ungoverned AI; an article about how large enterprises are trying to reduce their risk exposure with bug bounty programs; a legal perspective on the emerging risks that corporations using AI are facing; and an examination of how the legal system is weighing fault and damages related to AI incidents.

I hope you enjoy this issue. If you come across things you think our audience would be interested in, please share them with us for future issues.

I hope you're enjoying your summer, and I look forward to sharing more news with you in upcoming issues of Monitaur's Machine Learning Assurance Newsletter. Have a great day.