The Use of A.I. in the Workplace – Discrimination Concerns (Part 2)



The use of AI in the workplace can help streamline many tasks, but it can also raise discrimination concerns for employers. Meagan Bainbridge and Lukas Clary review some of these concerns and share best practices for employers in this episode of California Employment News.

Watch this episode on the Weintraub YouTube channel here.

Find part one of this two-part series here.

Show Notes:

Meagan:
Hello, everyone. Thank you for joining us for this installment of the California Employment News, an informative video and podcast resource offered by the Labor and Employment Group at Weintraub Tobin. My name is Meagan Bainbridge, and I’m a shareholder in the firm’s Labor and Employment group. Today, I am joined by my colleague and partner, Lukas Clary, for the second episode in our two-part series regarding artificial intelligence in the workplace. The first episode concentrated on what employers need to understand regarding AI’s implications for privacy and intellectual property. Today, we’re going to focus on the potential discrimination concerns that can come out of the use of AI. Lukas, over the past few years, the EEOC has really led the charge in developing guidance with respect to the use of AI in the workplace. Why does the EEOC care so much?

Lukas:
Thanks, Meagan. Good question. Well, when we think about what the EEOC is, at its core, it’s the federal agency tasked with enforcing Title VII. And Title VII generally prohibits employment discrimination based on a person’s race, color, religion, sex, or national origin. So how does that relate to the use of AI in the workplace? Well, Title VII prohibits not only intentional discrimination but also what we call in the legal world disparate impact discrimination. Disparate impact occurs when an employer policy or practice that is neutral on its face has the effect of disproportionately excluding persons based on their race, color, religion, sex, national origin, or another protected characteristic. That will be unlawful disparate impact discrimination unless the policy or practice is job related and consistent with business necessity. One way that might come about with AI is through the use of tests or selection procedures to aid in hiring, compensation, or promotion decisions. If use of an algorithmic decision-making tool has an adverse impact on individuals of a particular protected class, then use of the tool will violate Title VII unless the employer can show that such use is both job related and consistent with business necessity.

And then, even if it makes that showing, the employer will also need to show that there is not a less discriminatory alternative available. So, Meagan, how might this type of discrimination arise when employers use AI for decision-making?

Meagan:
Well, yeah. So potential discrimination in automated systems may come from various sources, including problems with the underlying data or data sets, a lack of transparency, or the simple fact that the developers who create the software and applications do not understand the context in which a particular program will be used. All of this can lead to unintended consequences and possible discrimination. For instance, AI can be biased, creating concerns of illegal discrimination depending on how the technology and data are used. In a well-reported case several years ago, Amazon developed and utilized a tool to review job applicants’ resumes. The company realized after implementing this tool that the system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. It found that the computer models had been trained to vet applicants by observing patterns in resumes submitted to the company over a ten-year period, and historically, most of those came from men. As a result, women were unintentionally being screened out by the software. Obviously, that’s a problem.

Age bias is another problem employers must watch for when using AI. Last month, iTutorGroup agreed to pay $365,000 to more than 200 job applicants allegedly passed over because of their age. Specifically, the EEOC alleged that the software was designed to automatically reject female candidates over the age of 55 and male candidates over the age of 60. If true, this practice would clearly be illegal and discriminatory toward individuals over those ages. There are also significant concerns related to AI’s disparate impact on disabled individuals, especially with respect to reasonable accommodations. As we know, employers should consider whether an employee is able to perform the essential functions of a particular job with or without a reasonable accommodation. But what happens when the employer does not provide a reasonable accommodation that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm? In such circumstances, the employer may be relying on an algorithmic decision-making tool that intentionally or unintentionally screens out an individual with a disability, even though the individual is able to do the job with or without a reasonable accommodation. So Lukas, what are employers to do? Are there any ways they can use AI to help in the workplace without running afoul of the discrimination laws?

Lukas:
Yes, absolutely there are. And I think it starts with employers auditing any AI tool they are using or considering using to determine whether that tool is in fact having a disparate impact on any protected class of employees or applicants. If so, then the employer needs to ask: can we show that this practice is job related and consistent with business necessity? And can we show that there is no less discriminatory alternative available? If that audit reveals problems with either of those questions, employers should consider what adjustments they might make to eliminate those issues. For example, employers might want to work with IT to eliminate personal identifiers and unique data points about employees or applicants. These audits should also occur periodically rather than just at the outset, because AI platforms are constantly evolving, and so too can the risk of disparate impact. Employers should also develop clear policies that account for antidiscrimination concerns and make sure that the platform’s users are aware of and trained on these policies. The policies should also contain a reporting mechanism for employees who suspect violations. I think by taking those steps, employers can gain the benefit of AI tools while effectively mitigating the discrimination risks.
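For readers who want a concrete starting point, the kind of audit Lukas describes often begins with a simple selection-rate comparison. The sketch below is purely illustrative and not legal advice: it applies the EEOC’s “four-fifths” rule of thumb (a screening heuristic from the Uniform Guidelines, not a legal threshold) to hypothetical hiring data, flagging any group whose selection rate falls below 80% of the highest group’s rate. The group labels and numbers are invented for the example.

```python
# Illustrative disparate impact screen using the EEOC's "four-fifths"
# rule of thumb. Hypothetical data; a real audit should involve counsel
# and appropriate statistical testing.

# (group, applicants, selected) -- invented numbers for illustration
hiring_data = [
    ("Group A", 400, 120),  # 30% selection rate
    ("Group B", 250, 50),   # 20% selection rate
    ("Group C", 150, 27),   # 18% selection rate
]

# Selection rate = selected / applicants for each group
rates = {group: selected / applicants
         for group, applicants, selected in hiring_data}

# Benchmark against the group with the highest selection rate
benchmark = max(rates.values())

print(f"{'Group':<10}{'Rate':>8}{'Impact ratio':>15}{'Flag':>8}")
for group, rate in rates.items():
    impact_ratio = rate / benchmark
    # The four-fifths heuristic flags ratios below 0.80 for closer review
    flag = "REVIEW" if impact_ratio < 0.80 else "ok"
    print(f"{group:<10}{rate:>8.2%}{impact_ratio:>15.2f}{flag:>8}")
```

A group flagged at this stage would then move to the questions Lukas outlines: is the tool job related and consistent with business necessity, and is there a less discriminatory alternative available?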

Meagan:
Absolutely, good advice, Lukas. And that does it for today. You can continue to find California Employment News on our blog at www.thelelawblog.com and wherever you listen to your favorite podcasts. We’ll see you next time.