US Civil Rights Enforcers Warn Employers About Using Biased AI to Hire Employees

The U.S. Department of Justice has issued a warning to employers about the dangers of using biased AI to hire employees. The concern centers on the data used to train AI hiring models: if these algorithms aren’t trained on accurate, representative information, they will replicate the status quo and become a source of unfair hiring. However, there are steps employers can take to mitigate the risks of biased hiring.

Artificial intelligence is currently used by employers for a number of purposes, including screening job candidates, automating interviews, and writing job descriptions. While these tools have numerous benefits, the danger of racial or ethnic discrimination remains. According to Keith Sonderling, a Commissioner of the U.S. Equal Employment Opportunity Commission, which enforces federal anti-discrimination laws, these algorithms can embed bias, which could lead to discriminatory hiring decisions and other problems.

While the anti-discrimination laws of the United States predate advanced technologies like facial recognition, they still apply to them. The U.S. Equal Employment Opportunity Commission is currently examining the use of artificial intelligence in hiring and other employment decisions, and the agency’s guidance is expected by September. Until then, employees can contact the Equal Employment Opportunity Commission to file complaints against companies that use facial recognition algorithms.