ML Assurance Newsletter

Issue 3 – September 15, 2020

Following the Thread

Some quick thoughts from Monitaur's CEO, Anthony Habayeb, on the stories we're watching. You can read the video transcript here.

Trust & AI: Must Reads

NAIC Unanimously Adopts Artificial Intelligence Guiding Principles

The National Association of Insurance Commissioners (NAIC), the primary regulatory body of insurance in the United States, took a positive step toward enhanced regulation of artificial intelligence and machine learning systems. The core principles it established are represented in the acronym FACTS:

  • Fair and Ethical
  • Accountable
  • Compliant
  • Transparent
  • Secure/Safe/Robust

Explicitly calling out Accountability as a key need for AI systems fits into a larger trend we're seeing: a push beyond responsible use toward assigning accountability and setting the expectation that companies operate their intelligent technology with intentionality.

NIST Asks A.I. to Explain Itself

While not explicitly a regulatory agency, the National Institute of Standards and Technology (NIST) has massive reach and influence, so its request for public comment on newly created principles for explainable AI shows the current groundswell. The title of this article undersells some of the deeper content covered after the principles themselves, which align with many other groups' conceptions: evidence for every output; understandable to users; reflective of systems' processes; and operating within design constraints. Jonathon Phillips, one of the authors, expounds on challenges in the field of AI explainability, since different users have different expectations of comprehensibility. He also questions the reliability of human explanations, even imagining a future in which machines strengthen our human capabilities in this area.

CFPB Seeks Input On Improving Access To Credit

The Consumer Financial Protection Bureau (CFPB) wants to broaden access to credit in the United States, and the institution is seeking comment on a wide range of related topics. Curiously, artificial intelligence and machine learning are framed solely as potential solutions to the unequal access to credit that exists in society. Of course, it is just as likely – if not more so – that AI and ML will contribute to inequity and inequality without proper safeguards and regulatory enforcement.

Researchers claim bias in AI named entity recognition models

Researchers at Twitter announced the discovery of bias in named entities that form the linguistic foundation for a wide range of online properties, from search engines to knowledge bases. While most bias and fairness research focuses on a single variable like gender or race, this project explored the intersection of race and gender in named entity recognition models. The bias derives, at least in part, from bias in the training data, an issue that compounds itself: classes of individuals underrepresented today are excluded from recognition, which in turn excludes them from future training data sets.

Despite this insightful work, Twitter still struggles with the problem of bias itself, as revealed this week alongside claims of bias in Zoom's background isolation algorithm.

The Responsible Machine Learning Principles

From the Institute for Ethical AI & Machine Learning, this set of principles for the responsible development of ML systems is well worth your time. Not only is the language easy to understand given the conceptual complexity, but each principle is further explored through practical, accessible, and appropriate examples. The first principle, "Human augmentation" – effectively keeping a human in the review process of ML systems – is laudable and necessary, though perhaps undercut by the allowance that it may be temporary. Our prevailing opinion is that an evolving model should always have a human in the loop, since models degrade and environments change. Similarly, while the principles recognize the value of "Reproducible operations," they treat reproducibility as primarily a technical need. We believe that reperformance by objective, non-technical audiences creates optimal assurance of the responsible use of ML.

Before we put $100 billion into AI...

Chad Jenkins, a leader in the field of robotics and AI as well as founder of BlackInComputing.org, calls for a new level of attention and accountability across institutions in this important read. The lack of diversity in ML and AI research is deeply entrenched in academic institutions, thanks to how research is funded at the federal level. He argues that political leaders must commit to accountable action to ensure more inclusion in the awarding of research grants. And for those working within academic institutions, "you can first look at your own organization and your own working environments and see whether you are living up to the civil rights statutes."

Why Explainable AI Must Be Grounded In Board Director’s Risk Management Practices

Thought leader and AI SaaS CEO Cindy Gordon makes a compelling case that executives and company boards need to pay much closer attention to how their organizations deploy AI. With increased regulatory attention and a steady drumbeat of exposures, the uppermost echelons of corporations must learn to examine how they manage risk around these fast-scaling systems. It is essential that leaders educate themselves about how AI and ML work in their organizations and ensure that explainability is built into their governance functions. She also argues for the importance of leadership engaging in a broader discussion about the need for explainable AI across industries and geographies.

Want all our best news and analysis on trust and AI delivered straight to your inbox?