Article Summary of:

Seven Legal Questions for Data Scientists

Published:
January 19, 2021

From a legal perspective, the use of predictive analytics raises a wide range of issues that companies need to educate themselves on, not just in the chief counsel's office but across departments and functions. Authors Patrick Hall and Ayoub Ouederni of boutique law firm bnh.ai provide an incisive look (with illustrative examples) at the dimensions of legal risk that can emerge:

  • Fairness: Are there outcome or accuracy differences in model decisions across protected groups? Are you documenting efforts to find and fix these differences? (A sketch of one such disparity check appears after this list.)
  • Privacy: Is your model complying with relevant privacy regulations?
  • Security: Have you incorporated applicable security standards in your model? Can you detect if and when a breach occurs?
  • Agency: Is your AI system making unauthorized decisions on behalf of your organization?
  • Negligence: How are you ensuring your AI is safe and reliable?
  • Transparency: Can you explain how your model arrives at a decision?
  • Third Parties: Does your AI system depend on third-party tools, services, or personnel? Are they addressing these questions?
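
On the fairness question, a minimal sketch of the kind of outcome-disparity check the authors allude to might look like the following. The column names, group labels, and the four-fifths (0.8) rule of thumb are illustrative assumptions, not details from the article.

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                         protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[protected] / rates[reference]

# Hypothetical scored decisions; 1 = favorable outcome.
scored = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "B"],
    "outcome": [1,   0,   1,   1,   1,   0,   1,   1],
})

air = adverse_impact_ratio(scored, "group", "outcome", protected="A", reference="B")
print(f"Adverse impact ratio: {air:.2f}")  # ratios well below ~0.8 often warrant a closer look
```

Documenting checks like this, and the remediation steps they trigger, is part of what the fairness question asks for.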

On the first topic, Google has launched a fascinating online tool that lets you play with different conceptions of fairness and see how they manifest in ML models.
