AI Trust Library

Article Summary

Article Summary of:

Council Post: STAR Framework for Measuring AI Trust

Originally published:
Jan 3, 2022

In his recent article in Analytics India Magazine, Suresh Chintada outlines a framework he believes will empower companies to implement more trustworthy AI systems. Before diving into its details, Chintada identifies the core problem facing AI: the "black box" problem. Because the way artificial intelligence reaches decisions is often poorly understood, AI systems are frequently perceived as untrustworthy. To address this problem, Chintada offers a four-part framework called STAR:

  1. Safety: Ensure safeguards are in place to detect and deal with threats to AI systems to protect users from harm.
  2. Transparency: Introduce strategies – including explainable AI and ML observability – to help non-technical stakeholders understand AI decision-making.
  3. Accountability: Develop policies that explicitly outline accountability factors.
  4. Reliability: Take a user-centric approach to identify a model’s key performance indicators.