ML Assurance Newsletter

Issue 2 – August 11, 2020

Following the Thread

Some quick thoughts from Monitaur's CEO, Anthony Habayeb, on the stories we're watching in this issue of the newsletter. Read the video transcript.

Trust & AI: Must Reads

Guidance on AI and data protection

The Information Commissioner's Office (ICO) in the UK, the public institution chartered with protecting data privacy, published comprehensive guidance on auditing AI systems. In addition to valuable discussions of accountability, fairness, and transparency, the paper explores the implications of data minimisation requirements for the design of AI systems. As with many other regulatory bodies around the world, the ICO is pushing the boundaries of ML Assurance, and practitioners should take advantage of this guidance to educate themselves and improve their internal processes.

Money Quote: "Mitigation of risks must come at the design stage: retrofitting compliance as an end-of-project bolt-on rarely leads to comfortable compliance or practical products."

A Case for Cooperation Between Machines and Humans

This article covers the mission to create Human-Centered Artificial Intelligence (HCAI) that is reliable, trustworthy, and safe. Ben Shneiderman, Professor of Computer Science at the University of Maryland, has pursued the larger goal of more humanistic computer systems for decades, and lately he has focused his attention on the risks and inequities that artificial intelligence and machine learning have surfaced. His recent paper on the topic expounds on designing computer systems to augment humans' abilities – rather than replace them – with a two-dimensional model for evaluating the appropriate balance of human control and computer automation. At its most extreme, a system without human control provides a framework for dissolving our ethical responsibility as humans – the sort of dystopic scenario that permeates the public imagination and should be avoided.

Facebook civil rights audit urges ‘mandatory’ algorithmic bias detection

An independent audit of Facebook's use of artificial intelligence found a dangerous lack of controls and limited reach across the teams that take advantage of AI. Civil rights lawyers Laura Murphy and Megan Cacace, along with the firm Relman Colfax, determined that the social media giant's attention to algorithmic bias is laudable but also far too nascent for an organization with so much influence on people's lives and livelihoods.

In the Auditor Observations section of the published report, the authors were critical of the limited reach of Facebook's Fairness Flow and Responsible AI initiatives, arguing that these should be mandatory rather than voluntary, as they are today. Perhaps more importantly, they noted the difficulty they faced in assessing the effectiveness of these programs, since "the Auditors have not had full access to the full details of these programs". Providing visibility into these applications for non-technical users is a prerequisite for a proper audit function for ML and AI systems.

Improving Verifiability in AI Development

A number of important organizations in the AI community collaborated to create this toolkit focused on developers. However, the brief also provides excellent high-level guidance for every stakeholder in the lifecycle of AI applications. We applaud the focus on objectivity and auditability that tops the recommendations for institutional and software mechanisms. To be more specific, the OpenAI team emphasizes the importance of third-party auditing. Without independent verification by uninvolved external auditors and regulators, practitioners will always struggle to decouple their familiarity, and potentially their self-interest, from the evaluation process. More detail is available in the full report.

Enhancing trust in artificial intelligence: Audits and explanations can help

In this overview of the current state of the technology, Carl Schonander compares audits and explainability as solutions to the problem of building trust and confidence in machine learning and AI systems. He posits auditing as the most reliable approach today, then enumerates the challenges and possibilities of explainability, transparency, and reproducibility. While tools for explainability have progressed quite a bit lately, explainability will always pose problems for non-builders as models grow more complex. Transparency risks exposing a company's intellectual property if not done carefully. Reproducibility requires capturing the immense complexity of ML systems, although numerous providers are currently working on this capability, including Monitaur's Audit product with counterfactuals. Ultimately, combining the power of auditing with explainability promises a more balanced, long-term solution set.

Been Kim is building a translator for artificial intelligence

Although published 18 months ago, this interview with AI thought leader and practitioner Been Kim is worth a read, or a reread. Kim has a gift for translating the drive for responsible AI into relatable and evocative metaphors. Beyond addressing the black box problem of machine learning, she and her team at Google Brain are actively seeking ways to make algorithms interpretable by humans through a system called Testing with Concept Activation Vectors (TCAV). The goal is for it to reflect human concepts of understanding, rather than just the input features that the computer relies upon. As she notes, "You don’t have to understand every single thing about the model. But as long as you can understand just enough to safely use the tool, then that’s our goal."
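For readers curious about the mechanics, the core idea of TCAV can be sketched in a few lines: learn a direction in a model's activation space that separates examples of a human concept from random examples, then measure how often the model's gradients align with that direction. The sketch below is a toy illustration with random stand-in data, not Kim's implementation; all variable names and the least-squares stand-in for the paper's linear classifier are our own simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in activations from an intermediate layer of a model:
# examples depicting a human concept (e.g. "striped") vs. random examples.
concept_acts = rng.normal(loc=1.0, size=(50, 16))
random_acts = rng.normal(loc=0.0, size=(50, 16))

# 1. The Concept Activation Vector (CAV) is the normal to a linear
#    boundary separating concept activations from random ones. A
#    least-squares fit stands in for a trained linear classifier.
X = np.vstack([concept_acts, random_acts])
y = np.array([1.0] * 50 + [-1.0] * 50)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
cav = w / np.linalg.norm(w)

# 2. The TCAV score for a class is the fraction of its inputs whose
#    prediction is pushed up by moving activations along the concept
#    direction, i.e. whose directional derivative along the CAV is
#    positive. Random vectors stand in for real per-input gradients.
input_grads = rng.normal(loc=0.5, size=(200, 16))
tcav_score = float(np.mean(input_grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

A score near 1 would suggest the concept matters to the model's prediction for that class; near 0, that it works against it – a quantity expressed in human concepts rather than raw input features, which is exactly the translation Kim describes.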

From a broader perspective, she rightly notes the importance of ML practitioners delivering interpretability, one facet of assurance, to ensure that humans don't abandon the potential of AI: "in the long run, I think that humankind might decide — perhaps out of fear, perhaps out of lack of evidence — that this technology is not for us."