By now you have probably heard about one of the many incidents in which an AI-enabled system discriminated against certain populations in settings such as healthcare, law enforcement, and hiring. In response to this problem, the National Institute of Standards and Technology (NIST) recently proposed a strategy for identifying and managing bias in AI, with an emphasis on biases that can lead to harmful societal outcomes. The NIST authors summarize:
“[T]here are many reasons for potential public distrust of AI related to bias in systems. These include:
- The use of datasets and/or practices that are inherently biased and historically contribute to negative impacts
- Automation based on these biases placed in settings that can affect people’s lives, with little to no testing or gatekeeping
- Deployment of technology that is either not fully tested, potentially oversold, or based on questionable or non-existent science causing harmful and biased outcomes.”
As a starting point, the NIST authors outline an approach for evaluating how bias presents in three stages modeled on the AI lifecycle: pre-design, design & development, and deployment. In addition, NIST will host a variety of activities in 2021 and 2022 across each of the core building blocks of trustworthy AI (accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and bias). NIST is accepting public comments on the proposal until September 10, 2021.
Notably, the proposal points out that “most Americans are unaware when they are interacting with AI enabled tech but feel there needs to be a ‘higher ethical standard’ than with other forms of technologies,” which “mainly stems from the perceptions of fear of loss of control and privacy.” From a regulatory perspective, the US currently has no federal data protection law that broadly mirrors Europe’s GDPR Art. 22 on automated decision-making – “the right to not be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Several U.S. jurisdictions, however, have passed laws that more narrowly regulate AI applications with the potential to cause acute societal harms, such as the use of facial recognition technology in law enforcement or interviewing processes, and further regulation seems likely as (biased) AI-enabled technology continues to proliferate into new settings.