AI Trust Library

Article Summary of:

These three layers of governance are critical in building ethical, effective AI

Originally published:
Nov 5, 2021

Businesses across almost every industry are implementing artificial intelligence to solve some of their most pressing problems. However, as this Forbes article by John Asquith explains, AI “can be fickle and fallible.” While artificial intelligence has the potential to fundamentally change and improve decision-making, it can also produce unfair and harmful biases that undermine that process. Specifically, there are two forms of bias to be concerned about: algorithmic bias and societal bias. Algorithmic bias results from unrepresentative training data. Societal bias, meanwhile, comes from our own personal biases and blind spots embedded in the data from which advanced models make their decisions.

To mitigate both forms of bias, Asquith argues that three complementary layers of governance are required:

  1. Technical governance: the mathematical methods used to build an algorithm and the testing requirements
  2. Ethical governance: a dedicated committee to evaluate and balance the benefits and trade-offs for those impacted by the algorithm
  3. Legal governance: clear regulations on reducing bias that motivate businesses to act against both algorithmic and societal biases