
By: Paul Finamore, Esquire

Artificial Intelligence is everywhere, but what is it?  The Oxford Dictionary defines it as follows:

The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

For many employers, artificial intelligence (“AI”) is a recruiting tool that allows them to perform complex tasks in extraordinarily short periods of time while eliminating human bias and increasing outreach to diverse candidates.  What is the legal risk of using AI?

Studies show that 80% of employers use AI in recruiting.  For anyone who needs to assess whether the risk is real, consider that the EEOC published guidance on AI and the ADA in employment in May 2022, and the DOJ has done the same.

So, how does it work?  AI uses machine learning to acquire and evaluate many kinds of data, from facial expressions and body language to social media activity, open-source data, computer-based testing, and automated screening, in an attempt to predict the best candidate for the position.  Using screening chatbots, gaming software, and similar tools, AI compresses candidate screening that used to take recruiters hours or days into minutes or less.  AI can immediately eliminate candidates who do not meet the job requirements, respond to frequently asked questions, and automatically schedule follow-up interviews.  Recruiters have learned over the years that the speed of employer feedback and communication is directly related to hiring the best candidates, and that speed is one of the main advantages of AI in recruiting.  In fact, some studies have shown that the use of AI can save 23 hours per hire and reduce the cost of screening applicants by 75% without disrupting workflow, while at the same time decreasing turnover by 35% and increasing revenue per employee by 4% by finding better fits for positions.  Sounds great, right?  If the system is computerized without human intervention, what is the risk?

Unless properly tested, AI risks inadvertently creating unfair outcomes or privileging one group over others.  How does that happen?  It can happen in a number of ways, none of which are intentional, but all of which can lead to disparate impact claims.  One example is programming that looks for gaps in employment, which sounds innocuous on the surface but can create bias against women who have taken time off for child care.  Another example involves profiles of “successful traits of successful employees,” built from people who resemble current company leadership, that are then used as predictors of the best applicants.  How is this a problem?  If all of the “successful” employees come from one group, then all other applicants, including those in protected classes, may not match predictors that resemble current leadership.  Some AI uses automated games to differentiate among applicants.  While seemingly simple, gaming technology can unfairly disadvantage individuals with disabilities who would otherwise be able to compete for positions with reasonable accommodations.  Is this risk real?

Many employers have been challenged over whether their AI algorithms create a disparate impact.  The media has reported on a few that garnered national attention, including Google Photos, whose image recognition was alleged to be racially discriminatory because it applied tags to photos that were considered racist.  IDEMIA used facial recognition technology to analyze mug shots and was alleged to have made significant errors along racial lines.  COMPAS used algorithms to predict recidivism but was alleged to have incorrectly identified Black defendants as more likely to commit future crimes.  These are only a few of the companies whose AI practices have been challenged.  The risk is real.  If the risk is real, what can employers do to take advantage of AI while managing it?

We all know the issue is not going away.  As of this writing, Illinois, Maryland, New York City, Washington, D.C., and California have passed legislation addressing the use of AI in employment.  While not all of these laws provide a private right of action, most require some form of consent to the use of AI in the recruitment process.

Recently, the Biden Administration issued its Blueprint for an AI Bill of Rights, in which it identified a number of rights to be protected:

  • You should be protected from unsafe or ineffective systems.
  • You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
  • You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  • You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  • You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

In light of all of this, what can employers do to take advantage of AI while managing risk?  A few examples of best practices include the following:

  1. Require hiring managers to identify truly necessary qualifications for positions.
  2. Advise applicants that AI is being used in the process and how it will be used to evaluate applicants.
  3. Provide sufficient information to allow applicants to determine whether they should seek reasonable accommodations in the application process.
  4. Train employees to identify requests for accommodation and implement procedures for providing reasonable accommodations.
  5. Require bias audits from all contractors utilizing AI (the sketch following this list illustrates the basic audit arithmetic).
  6. If employers identify disparate impacts, challenge the AI data.
  7. Never forget the importance of human intelligence when considering AI.
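
For a sense of what the bias audit in item 5 actually measures, the EEOC’s longstanding “four-fifths rule” (29 C.F.R. § 1607.4(D)) treats a selection rate for any group that is less than 80% of the rate for the most-selected group as evidence of adverse impact.  Below is a minimal sketch of that arithmetic; the applicant counts are hypothetical and purely for illustration, and a real audit would use actual applicant-flow data and appropriate statistical testing.

```python
# Minimal sketch of the four-fifths rule arithmetic behind a bias audit.
# All applicant counts below are hypothetical, for illustration only.

# group -> (applicants screened by the AI tool, applicants it advanced)
outcomes = {
    "Group A": (200, 60),   # hypothetical counts
    "Group B": (150, 24),   # hypothetical counts
}

# Selection rate for each group: advanced / screened
rates = {group: advanced / screened
         for group, (screened, advanced) in outcomes.items()}

# Benchmark is the highest selection rate among all groups
benchmark = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "POTENTIAL ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, "
          f"impact ratio {impact_ratio:.2f} -> {flag}")
```

With these hypothetical numbers, Group B’s 16% selection rate is only about 53% of Group A’s 30% rate, well below the four-fifths threshold.  That is exactly the kind of disparity a bias audit is designed to surface before a plaintiff’s lawyer does.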


Paul Finamore is a Member in PK Law’s Labor and Employment Group.  He is an experienced trial lawyer who has practiced in state and federal courts throughout Maryland and the District of Columbia for over 30 years.  His experience includes litigation of general and professional liability matters, including first- and third-party insurance claims; insurance fraud; liability claims involving accountants, insurance agents, dentists, title companies, and underground facility locating services; and the defense of employment litigation claims such as Title VII; age, race, and gender discrimination; wage and hour; whistleblowing; wrongful termination; and ADA, FMLA, and FLSA claims.  Mr. Finamore regularly counsels employers on employment issues, workplace policies, and compliance with federal, state, and local employment laws.  He began practicing employment law while serving on active duty in the U.S. Army Judge Advocate General’s Corps at Aberdeen Proving Ground.  As a trial lawyer, Mr. Finamore understands the importance of providing his clients with practical advice to help them balance business risk while avoiding unnecessary litigation.  In the event of litigation, however, Mr. Finamore has the depth of experience, skill, and efficacy required.

Mr. Finamore can be reached at 410-740-3170 or pfinamore@pklaw.com.
