Following the Thread – Newsletter Issue 6

Regulation & Legislation
Risks & Liability

[Video transcript lightly edited for clarity and grammatical errors.]

Welcome to the sixth edition of the Machine Learning Assurance Newsletter brought to you by Monitaur – a newsletter at the intersection of machine learning, regulation, and risk.

Civic engagement with AI

In this sixth edition, one piece in particular jumps out at me. I think it captures a lot of the conversations Monitaur has been having in the market about how we manage fairness, bias, and oversight of ML and AI applications. It’s this idea of civic competence.

A researcher named Abhishek Gupta out of Montreal has really been leaning into this idea. To oversimplify it: laypeople should be able to think about, and help inform, some of these new technologies being developed.

That tracks with a lot of the regulatory guidance we’re seeing, like the European Commission saying that high-risk systems should be human-verifiable. It even tracks with emerging federal regulation: now with the Democrats in control of both houses, we’re seeing algorithmic accountability concepts moving forward and being discussed further.

Those concepts include ideas like a consumer’s right to understand how decisions are made, and the principle that these systems should be auditable and verifiable. Across all of these contexts, we’re saying we’d like everyday people to be able to understand what these systems are doing.

Regulatory engagement with AI

Regulators are thinking similarly because, in most cases, they are not data scientists. Even if they employ actuaries, data scientists, or machine learning engineers, those staff are unlikely to have the same level of skill or capability as the people developing models inside companies deploying machine learning at scale.

So this idea of civic competence – that everyday people can have a vital influence on the ethics, functionality, transparency, and trustworthiness of these systems – really is interesting to me. I think we should all be paying attention to how models are built, how they’re deployed, and what transparency and assurances are built into those systems in ways that serve everyday folks like you and me.

As always, thank you for your time. I hope you enjoy this issue of the ML Assurance Newsletter. Please do send any feedback or thoughts our way.