Following the Thread – Newsletter Issue 2

Principles & Frameworks

[Video transcript lightly edited for clarity and grammatical errors.]

Hello, I'm Anthony Habayeb, CEO of Monitaur, a Machine Learning Assurance platform. Welcome to the second edition of the ML Assurance Newsletter, a newsletter at the intersection of machine learning, regulation, and risk. In this edition, we've organized some articles and content pieces that think a little bit about what auditability and post-process verification of machine learning look like.

Accountability for ML requires downstream review

Much developing regulation, like the guidance from the ICO out of the UK that we reference here, recognizes that systems will make mistakes, just like humans do. There needs to be accountability, and someone after the fact who is not a deep data scientist needs to be able to ask very basic questions of a system.

Been Kim at Google suggests as much with her chainsaw analogy: "I use a chainsaw not because I understand how the chainsaw works, but because it's a tool whose risks I understand and that I know how to use appropriately." I really like that analogy.

As you read some of these articles, I'd like you to think a little bit about how you are building systems and enabling people who are not the system builders to verify the basis of a decision and to interrogate an outcome.

Black boxes, auditability, and assurance

These are really basic questions. Black boxes are in airplanes because something can happen, and when it does, people want to know what happened and to have access to the records that show what caused the plane to come down.

When a machine learning system makes a decision that people aren't happy with, there should be an ability for somebody to look into that. These articles, I think, do a pretty good job of going beyond just talking about that enablement of audit. As OpenAI suggests, third-party audits should be fundamental.

The articles in this issue do a really good job of demystifying the complexity of this conversation around explainability or interpretability.

You can take most systems and make them accessible to an average person with certain instrumentation and process enablement, and these articles hopefully get your mind turning. If you are deploying machine learning systems, or you're in a community that spends time thinking about the infrastructure of machine learning, how much time have you spent thinking about that after-the-fact verifiability or auditability? Those concepts are really fundamental to assurance because, to have a system that people can trust and feel comfortable with, there needs to be an ability for someone after the fact to see what happened.

Enjoy this issue. As always, feedback is welcome. Please share the newsletter with anyone who might be interested. Have a great day.