
When Machines Discriminate – NIST Tackles Bias in AI


Author: Jennifer B. Maisel

Published: July 8, 2021

By now you have probably heard about one of the many incidents in which an AI-enabled system discriminated against certain populations in settings including healthcare, law enforcement, and hiring. In response to this problem, the National Institute of Standards and Technology (NIST) recently proposed a strategy for identifying and managing bias in AI, with an emphasis on biases that can lead to harmful societal outcomes.

The full article is available from Rothwell, Figg, Ernst & Manbeck P.C.
