It seems every day brings more examples of biased Artificial Intelligence (AI) and Machine Learning (ML). Every exciting release of the next great thing – like GPT-3 – is quickly followed by news stories and research studies identifying racial, gender, and other forms of bias that emerge in real-world use cases and tests. Article after article has articulated the particular challenges that lead to bias in ML and AI, so why does the problem persist? And how do we move forward to create fairer, more responsible systems?
The first thing to recognize is that unfairness in machine learning is both caused and perpetuated by humans, not the machine. Despite the popular conception of it as “intelligent”, ML is quite dumb by the standard of human intelligence. It uses the data that we provide and performs the functions that we assign to it. Any malfunctions, maladaptations, or maliciousness are extensions of human actions and choices. Wide-scale deployments making consequential decisions in the real world have only surfaced the need for more human responsibility and accountability.
By and large, the conversation about fixing bias in machine learning and artificial intelligence focuses on pre-production. Most practitioners look to the data first, since outputs can only ever be as good as inputs: garbage in, garbage out.
The bias starts with the majority of practitioners training their machine learning applications on very similar data sets. Certainly, these data sets make research and development far easier because they are inexpensive and widely available. However, without an understanding of how the data were produced, there is no real way to know to what degree embedded biases may infect the corpus.
The most popular data sets have repeatedly proven to be riddled with bias, reflecting the biases of the data collection procedures, subjectivity in data production, and limited reach beyond majority classes of individuals. This bias is merely a shadowy reflection of the biases that permeate our human societies. We are arriving at the point where anyone who is unaware of the biases embedded in their available training data is willfully sticking their head in the sand.
Another pre-production approach involves avoiding or deleting potentially problematic variables to prevent bias. While attractive in theory, it proves just as dangerous as ignorance of biases in the training data sets. Simply avoiding variables, such as ones associated with protected classes like gender or ethnicity, does not eliminate bias.
The so-called Apple Card debacle is instructive. The bank funding the credit lines, Goldman Sachs, claimed that the credit assessments could not possibly be gender-biased since gender was not an input. However, proxy variables like work history, salary, and education correlate strongly with gender, creating a Trojan horse of sorts through which gender bias can sneak into the algorithm. By attempting to prevent gender bias simply by ignoring gender, the developers could not guarantee that the models would be free of it; the approach is neither exhaustive nor sound technical practice.
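One practical way to catch this early is to keep protected attributes out of the model’s inputs but retain them for auditing. The sketch below is a minimal illustration of that idea, assuming a pandas DataFrame with hypothetical columns such as gender, salary, and working_history; the 0.3 cut-off is an arbitrary heuristic, not a standard, and a simple correlation is only a first screen, not proof of a proxy.

```python
# Minimal sketch: even if a protected attribute (here, a hypothetical
# "gender" column kept aside for auditing only) is excluded from the model,
# other features may act as proxies for it. Checking correlations before
# training is one cheap way to surface that risk.
import pandas as pd

def flag_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.3):
    """Return features whose absolute correlation with the protected
    attribute exceeds `threshold` (a heuristic cut-off, not a guarantee)."""
    encoded = df.copy()
    # Encode categorical columns (including the protected one) as numeric
    # codes so a simple Pearson correlation can be computed.
    for col in encoded.select_dtypes(include="object").columns:
        encoded[col] = encoded[col].astype("category").cat.codes
    corrs = encoded.corr(numeric_only=True)[protected].drop(protected)
    return corrs[corrs.abs() > threshold].sort_values(key=abs, ascending=False)

# Hypothetical usage with the kinds of fields discussed above:
# applicants = pd.read_csv("applications.csv")
# print(flag_proxy_features(applicants, protected="gender"))
# A high correlation for "salary" or "working_history" would signal that
# dropping "gender" alone does not remove gender information from the inputs.
```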
In sum, data scientists and engineers should be held accountable for evaluating potential bias with far more intentionality than in the past as they prepare data and develop models. Such a focus will drive positive change. But it will never be enough to guarantee fair and unbiased ML and AI.
Of course, bias does not become an actual problem until a decision is delivered by machine learning or artificial intelligence, and that’s the crux of the developer’s dilemma. When you’re building, you can’t guarantee the future data or environments your model will face, so you don’t have any idea how or when bias might enter the picture. Poignant examples abound.
For all the reasons above, practitioners cannot set these evolving models and algorithms free in the wild, at least not without expecting dire results. The news has heightened the public’s consciousness of these risks. More PR disasters will invite more scrutiny that will in turn make it more difficult to innovate and deliver on the promise of artificial intelligence, machine learning, neural networks, and deep learning.
In the end, the developers of ML models aren’t the ones in the crosshairs when things go sideways. Executives, line-of-business owners, and risk managers are held accountable for model biases and erratic model performance that adversely affect customers. Therefore, these business decision-makers need to ensure that a process is in place to support the collection and curation of balanced, representative data sets.
Once the system is live – ingesting new data into its models and making decisions in an environment that keeps evolving – how can both technical and business stakeholders control for bias?
First and foremost, you need solid records of every decision that the machine and the involved stakeholders make. Without the ability to review and inspect those decisions, you will struggle to identify and mitigate bias as it happens, no matter how much time you invested in cleaning the data and selecting variables.
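What such a record might contain is easiest to show concretely. The following is a minimal sketch, not a prescribed schema: the field names, the JSON-lines file, and the log_decision helper are all illustrative assumptions.

```python
# Minimal sketch of a per-decision audit record: every prediction is logged
# with its inputs, output, model version, and timestamp so that individual
# decisions can be reviewed later. The schema and file format are
# illustrative assumptions only.
import json
import uuid
from datetime import datetime, timezone

def log_decision(features: dict, prediction, model_version: str,
                 path: str = "decision_log.jsonl") -> str:
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which model state made the call
        "features": features,             # the exact inputs the model saw
        "prediction": prediction,         # the decision that was delivered
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]
```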
For developers, monitoring capabilities that call attention to drift and anomalies are of course valuable, but having a complete and transparent record of machine decisions allows practitioners to examine the specific transactions that were problematic, as well as the state of the model at that time.
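As one example of a monitoring signal that calls attention to drift, the Population Stability Index (PSI) compares a feature’s distribution at training time with its recent production distribution. The sketch below is a minimal version; the 0.1 and 0.25 thresholds noted in the comments are conventional rules of thumb rather than universal standards.

```python
# Minimal sketch of one common drift signal: the Population Stability Index
# (PSI) between a feature's training-time distribution and its recent
# production distribution.
import numpy as np

def population_stability_index(baseline, current, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero / log of zero for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical usage:
# psi = population_stability_index(train_df["salary"], recent_df["salary"])
# By convention, psi < 0.1 is read as stable, 0.1-0.25 as worth
# investigating, and > 0.25 as significant drift warranting closer review.
```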
As a complement to technical transparency, business stakeholders must ensure that transparency and documentation extend to the following:
In addition to a transparent record and effective monitoring, solving for bias demands the ability to recreate individual transactions retroactively. Aggregate testing and monitoring will only surface the middle of the distribution, where you are generally safe from bias, so the focus on single decisions helps to prove fairness around the edges. Objective internal and external evaluators in compliance, risk, audit, and quality control roles need to be able to reconstruct and scrutinize any individual decision on demand.
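As one illustration of what that capability might look like in practice, here is a minimal sketch of replaying a single historical decision. It assumes the decision log from the earlier sketch, a hypothetical load_model_version() that restores the archived model exactly as it was at the time, and a scikit-learn-style predict() interface.

```python
# Minimal sketch of retroactively recreating one decision. Assumes the
# JSON-lines decision log from the earlier sketch and a hypothetical
# load_model_version() that restores the archived model state.
import json

def replay_decision(decision_id: str, load_model_version,
                    path: str = "decision_log.jsonl"):
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            if record["decision_id"] != decision_id:
                continue
            model = load_model_version(record["model_version"])
            # Re-run the archived model on the exact inputs it saw originally.
            reproduced = model.predict([list(record["features"].values())])[0]
            return {
                "original": record["prediction"],
                "reproduced": reproduced,
                "matches": reproduced == record["prediction"],
            }
    raise KeyError(f"No logged decision with id {decision_id}")
```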
One final, and sometimes very contested, method for managing bias is to intentionally apply cohort outcome distribution analysis. Thanks to effective decision and model monitoring, a company can evaluate outcomes very specifically, layering on additional data if needed, to identify whether any protected classes or similar cohorts of people were affected in a consistently different way.
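In concrete terms, such an analysis can be as simple as comparing favorable-outcome rates across cohorts and computing each cohort’s ratio to the best-off cohort. The sketch below assumes logged decisions joined with audit-only cohort labels; the 0.8 cut-off echoes the commonly cited four-fifths rule, which is a screening heuristic, not a legal determination.

```python
# Minimal sketch of a cohort outcome distribution check: compare the rate of
# favorable outcomes across cohorts and compute the disparate impact ratio
# (each cohort's rate divided by the highest cohort's rate). Ratios below
# 0.8 are flagged for review, per the conventional four-fifths heuristic.
import pandas as pd

def cohort_outcome_rates(df: pd.DataFrame, cohort_col: str, outcome_col: str):
    rates = df.groupby(cohort_col)[outcome_col].mean()
    ratios = rates / rates.max()
    return pd.DataFrame({
        "favorable_rate": rates,
        "disparate_impact_ratio": ratios,
        "flagged": ratios < 0.8,
    })

# Hypothetical usage on logged decisions joined with audit-only cohort labels,
# where "approved" is a 0/1 outcome column:
# decisions = pd.read_csv("decisions_with_cohorts.csv")
# print(cohort_outcome_rates(decisions, cohort_col="cohort", outcome_col="approved"))
```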
Let’s assume you have done everything you can to manage bias, but you still find issues in your outcome distribution. What kind of fault or harm will you face? After decades of regulatory and class-action history, we would argue that proactively identifying and mitigating issues always yields a better outcome than knowingly ignoring them or not trying at all. Evidence of intentionality will always put you in a better light with regulators.
A best-of-all-worlds approach considers the people, processes, and systems that the organization as a whole needs at every step of the ML lifecycle in order to manage bias. From a technical perspective, sensible model surveillance along with a well-considered schedule of model checks will catch some bias before it becomes pervasive.
But it is vital to complement a reduction of bias in training data with a robust and principled framework of controls, especially once a machine learning system is live in production. To mitigate risk and provide ongoing assurance, R&D teams, risk leaders, and business owners must continuously examine bias and fairness from a holistic perspective.