ML Assurance Newsletter

Issue 14 - Jan 6, 2022

Trust & AI: Must Reads

In his recent opinion piece for ServiceNow, Jean-François Gagné outlines the importance of AI governance. Outside of highly regulated sectors, AI governance is rarely mandated, in part because building a comprehensive approach requires significant resources. However, this is likely to change in the next few years: frameworks and proposals for AI regulation have arrived in the EU, the US, and around the world.

Holistic AI governance allows companies to monitor and support their algorithms. Gagné believes managers should treat artificial intelligence systems like human employees: there should be clear oversight of, and accountability for, their decisions. AI governance software can expedite this process by identifying the “red flags” raised by monitoring tools and allowing them to be reviewed in line with a company’s governance framework. In line with what we fundamentally believe here at Monitaur, Gagné argues that AI governance can yield positive business outcomes, and that the trust it provides is priceless.

AI Governance & Assurance

In his latest article in Forbes, AJ Abdallat discusses the growing impact of artificial intelligence on businesses and executives. Dependency on technology has become cemented in business practices, and this trend is expected to accelerate: the computing market is predicted to grow by over 29% by 2026. Despite the rapid adoption of new and innovative technologies, there remains some hesitancy to trust uninterpretable AI models.

With large pools of complex data, it is now more important than ever for humans to understand how AI systems operate, which has given rise to cognitive AI. By using AI governance strategies that make models easier to understand and interpret, consumers and executives alike will be more likely to trust their AI systems than opaque black-box models. With the potential to increase global GDP by 16%, artificial intelligence is a lucrative opportunity that will gradually be woven into business practices worldwide.

AI Governance & Assurance

In his recent article for NPR, Martin Austermuhle reports on D.C. Attorney General Karl Racine’s latest attempt to mitigate societal bias in artificial intelligence. This first-of-its-kind bill proposes a ban on algorithmic discrimination, “the practice of computer algorithms that discriminate against certain people who apply for jobs, seek a place to live, or try to get a loan.” Building on the District’s Human Rights Act, the bill would extend the prohibition of discrimination based on a number of protected characteristics to technology and algorithms.

Similar to the motivation behind the new Data & Trust Alliance, the bill intends to debunk the myth that algorithms are innately egalitarian. If adopted, it would outlaw the use of discriminatory algorithms in systems that make high-risk decisions and mandate annual audits and documentation of how algorithms are built. The bill comes shortly after the New York City Council passed legislation regulating the use of harmful algorithms in the hiring process. Though Racine’s bill extends to even more industries, the intention remains the same: the time is now to regulate AI.

Regulation & Legislation
Ethics & Responsibility

In her latest article in TechWire Asia, Jamilah Lim illuminates the newest addition to the Chinese judicial system: an AI system to help prosecutors expedite the sentencing process. “Theoretically, the machine would be able to reduce the workloads of prosecutors, so they can focus their time and efforts on more difficult tasks.”

The integration of artificial intelligence into the Chinese judicial system is not a new phenomenon: through System 206, AI has played a role in evaluating the strength of evidence and the level of danger a suspect poses to society since 2016. However, this new development marks the first time AI will play an integral role in the decision-making and sentencing processes. The technology has raised concerns over the dangerous impacts of AI bias and over who will ultimately be held responsible for its decisions.

Ethics & Responsibility
Risks & Liability

In his recent New York Times article, Steve Lohr reports on the creation of the Data & Trust Alliance. Composed of some of the largest corporations in the US, the group will work to mitigate AI bias in the hiring process. To reduce risk, the Alliance has created a scoring and evaluation system for AI software in the corporate world. From the diversity of training data to “neutral” datasets, the framework is intended to help corporations identify where the algorithms used in hiring may unfairly treat protected classes.

This evaluation framework comes after the FTC warned companies that those who do not take accountability for the harm their AI systems may unintentionally cause will be held liable by the federal government. “The Data & Trust Alliance seeks to address the potential danger of powerful algorithms being used in work force decisions early rather than after widespread harms are apparent.” This move by the private sector to hold itself accountable will go a long way toward compliance once the rising calls for AI regulation are codified.

Principles & Frameworks
Ethics & Responsibility