Following the Thread – Newsletter Issue 5

[Video transcript lightly edited for clarity and grammatical errors.]

Welcome to the fifth edition of the Machine Learning Assurance Newsletter, a newsletter at the intersection of machine learning, regulation, and risk brought to you by Monitaur. I’m the CEO and co-founder of Monitaur. Happy new year and great to be with you.

Manage ML risk like human risk

The articles in this edition are thought-provoking about how companies, risk officers, and compliance officers can start to implement good control frameworks and policies for the responsible use of machine learning systems. Several of them contained phrases that struck me as great taglines. One was, effectively, "treat machine learning as if it's human."

From that perspective, assume that something will happen that you don’t expect. How do you make sure you have controls in place to account for that unpredictability? I thought the Compliance Week article did a really good job of saying, “Listen, you’re not going to get it perfect, but you should be implementing and enriching your data analytic controls immediately so that, when something happens, you are prepared to defend yourself with evidence that you had good controls in place.”

That article referenced the nearly $1 billion fine levied against JPMorgan Chase, which was incredibly punitive because regulators found a complete lack of controls around some of the bank's data analytics programs.

ML can create a fairer and more equitable business

Monitaur posted an article in September about bias as a human problem. As we think about machine learning systems, ML actually has the potential to be fairer than humans. So much was written in 2020 about the fear of machine learning being unfair and biased, the doomsday scenarios. But when you compare data-driven applications with human decisions, you can actually see, if you build good controls, how your models made their decisions.

Companies using machine learning have the potential to create fairer experiences for their customers, their employees, and the products and services they offer. Another article in the newsletter covers that concept and argues that companies should lean into ML's potential to make them more equitable and to offer fairer experiences to consumers and the general population.

So enjoy this issue. As always, if you find articles that you think are interesting, please send them our way.