

ADA Basics for AI Auditors and Developers

As discussed here, the EEOC issued a Technical Guidance on the use of software, algorithms, and AI to assess job applicants and employees. This is the second blog in a series that breaks down this expansive guidance. The purpose of this series is to educate HR Professionals and AI Developers on the rapidly changing regulatory landscape.

Employers using AI tools for recruitment, hiring and promotion must understand their exposure to liability. At the same time, prudent AI developers understand that in order to be trusted in this space, they must have a full grasp of the employer’s exposure under discrimination laws. As we discussed, the EEOC Guidance focuses solely on the ADA.

This blog is directed to AI vendors, as it provides an overview of ADA basics. Just as HR Professionals needed a primer on AI lingo, AI folks could use an overview of the ADA. The EEOC Guidance spends significant time explaining the basics, so we will too.

Who Has to Pay Attention to the ADA?

The ADA applies to employers and labor organizations with 15 or more employees on the payroll for 20 or more calendar workweeks in the current or preceding calendar year.

What is a Disability?

There are three definitions of “disability” under the ADA: (1) an actual physical or mental impairment, (2) being regarded as having such an impairment, and (3) having a record of such an impairment. The EEOC’s Guidance focuses on the first definition.

So let’s break this down: A physical or mental impairment that substantially limits one or more of the major life activities of such individual.

  • PHYSICAL OR MENTAL IMPAIRMENT. This includes:
    • any physiological disorder or condition
    • cosmetic disfigurement
    • anatomical loss affecting one or more body systems (e.g., neurological, musculoskeletal, special sense organs, respiratory (including speech organs), cardiovascular, reproductive, digestive, genitourinary, immune, circulatory, hemic, lymphatic, skin, and endocrine), or
    • any mental or psychological disorder, such as an intellectual disability, organic brain syndrome, emotional or mental illness, and specific learning disabilities.
  • MAJOR LIFE ACTIVITIES. The regulations provide a non-exhaustive list of examples:
    • caring for oneself
    • performing manual tasks
    • seeing
    • hearing
    • eating
    • sleeping
    • walking
    • standing
    • sitting
    • reaching
    • lifting
    • bending
    • speaking
    • breathing
    • learning
    • reading
    • concentrating
    • thinking
    • communicating
    • interacting with others
    • working
    • Operation of major bodily functions:
      • immune system
      • special sense and skin organs
      • normal cell growth
      • digestive
      • genitourinary
      • bowel
      • bladder
      • neurological
      • brain
      • respiratory
      • circulatory
      • cardiovascular
      • endocrine
      • hemic
      • lymphatic
      • musculoskeletal
      • reproductive functions
      ***This includes the operation of an individual organ within a body system.
  • SUBSTANTIALLY LIMITS. This phrase is meant to be construed broadly.
    • An impairment “substantially limits” a major life activity if it limits the individual as compared to most people in the general population. As you can see, this is fairly amorphous. The regulations attempt to help the cause by explaining that the impairment does not need to “prevent, or significantly or severely restrict, the individual from performing a major life activity.”
    • The assessment of whether an impairment “substantially limits” a major life activity should NOT demand an extensive analysis.
    • There is no need to rely on scientific, medical, or statistical analysis.
    • An impairment that is episodic or in remission can be a disability if it would substantially limit a major life activity when active.

HOW CAN AN AI DECISION-MAKING TOOL VIOLATE THE ADA?

The next round of blogs will cover how exactly these AI tools can expose employers to liability under the ADA. Below are some topics that will be addressed later on in this series:

  • Failure to Accommodate. Employers can be on the hook for an ADA violation for failing to provide a reasonable accommodation that is necessary for a job applicant/employee to be rated fairly and accurately by an algorithm.
  • Screen Out. Employers may be relying on an AI tool that “screens out” individuals with disabilities. Screening out occurs when a disability prevents an individual from meeting a selection criterion, causing them to lose out on the job or promotion. There are scenarios in which an individual will get screened out of a job even though they are able to perform the job with a reasonable accommodation (see the sketch following this list).
  • Medical Examination. As we know, the ADA has parameters relating to medical examinations and “disability related inquiries.” AI vendors and HR Pros must ensure that the AI tool being used does not result in a de facto medical examination of the individual.
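
For the developers in the audience, here is a rough sketch of what guarding against a “screen out” might look like inside a scoring pipeline. This is purely a hypothetical illustration, not anything prescribed by the EEOC Guidance: the candidate fields, the passing score, and the flagging logic are all assumptions of ours, and the only point is that a failing score should not trigger an automatic rejection when an accommodation has been requested.

```python
# Hypothetical illustration only. The fields, threshold, and logic below are
# assumptions for demonstration; they are not drawn from the EEOC Guidance
# or from any real vendor's product.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Candidate:
    name: str
    raw_score: float  # score produced by the (hypothetical) assessment tool
    accommodation_requested: Optional[str] = None  # e.g., "extended time"


PASSING_SCORE = 70.0  # assumed selection criterion


def needs_human_review(candidate: Candidate) -> bool:
    """Flag a failing result for human review instead of auto-rejecting.

    The idea (a design safeguard, not a legal test): if a candidate falls
    short of the criterion *and* has requested a reasonable accommodation,
    the tool should not reject them automatically, because they might meet
    the criterion once the accommodation is provided.
    """
    failed = candidate.raw_score < PASSING_SCORE
    return failed and candidate.accommodation_requested is not None


if __name__ == "__main__":
    applicant = Candidate("A. Applicant", raw_score=62.0,
                          accommodation_requested="extended time")
    if needs_human_review(applicant):
        print("Route to human review before rejecting: possible screen out.")
```

The design choice is deliberately conservative: the tool never makes the final call on a candidate who has asked for an accommodation; a human does.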

EMPLOYER LIABILITY FOR AI TOOLS

Lastly, and perhaps most glaring, the EEOC provides that employers may be responsible for discrimination brought on by an AI tool. In other words, if an employer is found to have discriminated against individuals with disabilities because of its use of an AI tool created by an outside vendor, that employer is liable, not the vendor. The EEOC goes even further to say that an employer may be responsible for its agents (e.g., software vendors) if the employer has given the vendor authority to act on its behalf.

HR Professionals Must Know the Questions to Ask AI Vendors. In some ways, this is uncharted territory. Yet at the same time, it is not. First, this highlights that HR folks must stay up to date on the AI technology being offered. HR Pros should be well-versed in the questions to ask their vendors as they relate to these products.

Liability Seems to Stay with the Employer Whether or Not They Use AI Tools. As we all know, employers have always been on the hook under Title VII and the ADA. Under these laws there is no individual liability. For example, an HR Director who is in charge of sifting through resumes and controlling the hiring process cannot be held personally liable for discrimination. The employer at large would be exposed to liability based on the HR Director’s discriminatory behavior.

It seems that the EEOC is taking the position that the AI employment tool is simply a replacement for the HR Director (a human) making the employment decisions, and that the same standard of liability exposure applies. If the AI tool that replaces the human violates the ADA, then the company is exposed to liability under the law. By simply swapping out the human for the AI, the EEOC is not rocking the boat: the employer is using the tool, therefore it is responsible for the results.

Indemnification. Employers may want to explore negotiating protective terms into their vendor contracts; perhaps indemnification provisions could be worked in.

  • Susie  Cirilli
    Shareholder

    For more than a decade, Susie Cirilli has served as a trusted advisor to employers, employees, and other professionals to resolve their employment and labor-related matters, including hostile work environment claims and issues ...
