AI Trust Library

Article Summary of:

AI accountability: Who’s responsible when AI goes wrong?

Originally published:
Aug 19, 2021

AI and ML systems have the potential to improve and streamline business processes; however, they can also go awry. Without proper oversight, these systems can unintentionally reinforce algorithmic biases that harm marginalized and protected classes, and they can behave unpredictably at critical moments. SearchEnterpriseAI highlights the importance of AI governance and oversight while exploring accountability for these evolving systems, quoting experts such as Forrester’s Brandon Purcell and IBM’s chief AI officer, Seth Dobrin.

Who should be held accountable when things go wrong with AI? Nascent legal tests will eventually harden into clearer case law around liability. The answer may turn out to be a patchwork of companies, vendors, and other parties that, oddly enough, mirrors the complex inner workings of the systems themselves. What is clear is that companies must be able to identify when and how algorithms and models have gone wrong in order to demonstrate an accountable approach to AI governance. Blaming the machine may not remain an acceptable defense for much longer.
