AI Trust Library

Article Summary of:

Enhancing trust in artificial intelligence: Audits and explanations can help

Originally published:
Aug 13, 2019

In this overview of the current state of the technology, Carl Schonander compares audits and explainability as approaches to building trust and confidence in machine learning and AI systems. He posits that auditing is the most reliable approach today, then enumerates the challenges and possibilities of explainability, transparency, and reproducibility. While explainability tools have progressed considerably of late, explanations will always pose problems for non-builders as models grow more complex. Transparency, if not handled carefully, risks exposing a company's intellectual property. Reproducibility requires capturing the immense complexity of ML systems, though numerous providers are working on this capability, including Monitaur with its Audit product and counterfactuals. Ultimately, combining the power of auditing with explainability promises a more balanced, long-term solution set.