NIST Offers AI Governance Guidelines to Help Avoid Bias Liability

The issue of bias in artificial intelligence is assuming increased urgency in courtrooms around the country.  Organizations that use AI to scan resumes can be sued for employment discrimination.  Companies using facial recognition on their property might face premises liability.  And numerous government agencies have announced their focus on companies that use AI in ways that violate federal antidiscrimination laws.  Avoiding the inadvertent use of AI to implement or perpetuate unlawful biases requires thoughtful AI governance practices.

At its core, AI governance describes an organization’s ability to direct, manage, and monitor its AI activities.  Put simply, organizations should no more uncritically accept mass-produced AI output than they would uncritically believe a salesperson they had just met.

The U.S. National Institute of Standards and Technology (“NIST”) has recently offered AI governance protocols to minimize bias.  Those protocols include the following:

1.         Monitoring.  AI is not “set it and forget it.”  Organizations should monitor their AI systems for potential bias issues and should have a procedure for alerting the proper personnel when monitoring reveals a potential problem.  Through appropriate monitoring, organizations can learn of a potential liability before a lawsuit or a government enforcement action brings it to their attention.

2.         Written Policies and Procedures.  Organizations should have robust written policies and procedures for all important aspects of their business, and AI is no exception.  Absent effective written policies, managing AI bias can easily become subjective and inconsistent across business subunits, which can exacerbate risks over time rather than minimize them.  Among other things, such policies should include an audit and review process, outline requirements for change management, and detail plans for incident response for AI systems.

3.         Accountability.  Organizations should designate a person or team responsible for protecting against AI bias; otherwise, AI governance efforts will likely go to waste.  Ideally, the accountable person or team should have enough authority to command compliance with proper AI protocols, implicitly or, if need be, explicitly.  Accountability mandates should also be embedded within and across the teams involved in the use of AI systems.
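To make the monitoring protocol in item 1 concrete, the sketch below checks selection rates across demographic groups against the “four-fifths rule,” a common screening heuristic for disparate impact in employment contexts.  This is an illustrative example only: the group names, counts, threshold, and alerting step are assumptions for the sketch, not prescribed by NIST, and a real monitoring program would be tailored with counsel.

```python
# Illustrative only: a periodic disparate-impact screen using the
# "four-fifths rule."  Group names and numbers are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected_count, total_count)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items() if tot > 0}

def disparate_impact_alerts(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate < threshold * best]

# One hypothetical review period: 50/100 selected vs. 30/100 selected.
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
flagged = disparate_impact_alerts(outcomes)
if flagged:
    # In practice, route this alert to the accountable person or team
    # described in item 3 above.
    print(f"Potential bias flagged for: {flagged}")
```

A check like this would run on a schedule, with any flagged result triggering the alerting procedure and review process described in the written policies.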

Implementing effective AI governance to minimize biases requires careful thought.  However, this implementation is crucial to protect against AI bias lawsuits or enforcement actions.
