Using AI in Screening Job Applicants is Leading to Even More Lawsuits

If you’re a company looking for ways to screen potential applicants, the idea of using AI may be appealing. AI can help weed out candidates at the initial stage, or tell you things about them without your having to personally evaluate each applicant.
But be careful, because the use of AI in screening job applicants is coming under fire. Numerous class action lawsuits have highlighted how easy it is for AI to break the law. And yes, as an employer using AI, you are the one ultimately responsible for discrimination — you cannot simply “blame the software” or its developer.
How Was it Trained?
One problem with using AI to screen job applicants is that you have no idea how the AI was trained. AI doesn’t know things on its own; it has to be trained, often through interaction with live human beings. But if those human beings have their own biases or opinions, the AI picks up on them and learns those biases as well.
And in some cases, the bias isn’t accidental. There are reports of companies deliberately training their AI to, for example, reject applicants over a certain age or of a certain gender.
There are ways to unbias AI, but even those have their own problems. Some feel that, in trying to unbias AI, the system goes too far in “the other direction,” simply replacing one bias with another.
Disability Discrimination
But discrimination isn’t limited to racial, ethnic, age, or gender bias. It also includes discrimination against people with disabilities.
AI screening has been known to disqualify applicants who couldn’t type fast enough, unaware that they had a medical condition. It has also been known to penalize applicants who didn’t smile enough or “look the part,” when those applicants had suffered strokes or had other disabilities. The AI simply isn’t programmed or trained to account for disabilities.
Lie Detectors
Lawsuits against companies using AI have also alleged that, without disclosing it to applicants, the companies were secretly scanning applicants’ faces to detect lies. In effect, these companies were illegally subjecting applicants to lie detector tests without their knowledge.
Illegal Background Checks
Other cases, including one seeking certification as a class action, alleged that the AI, in doing its evaluation, was pulling information from applicants’ backgrounds — in other words, conducting what amounts to a background and credit check without the documentation and disclosures that employers would normally be required to provide.
How Did Applicants Know?
In most of these cases, applicants say they knew they were being rejected by AI alone because the rejection came immediately after they submitted their online application — there was no time for a human to possibly have reviewed it.
Call our West Palm Beach commercial litigation attorneys at Pike & Lustig for help with your hiring processes, to make sure you’re operating within the law.
Source:
ftc.gov/business-guidance/resources/using-consumer-reports-what-employers-need-know
