Using AI to Pursue Hiring and DEI Goals: A Q&A for Employers


5 minute read | November 12, 2024

Employers are increasingly using AI in hiring to streamline recruitment, reduce time-to-hire and eliminate bias. Companies are also considering how AI tools can further diversity, equity and inclusion (DEI) efforts, such as by helping source a more diverse pool of candidates.

Using AI for targeted recruitment or hiring can increase efficiency – and introduce risk. Employers who use AI in hiring, including to advance DEI goals, should be mindful of the changing legal landscape and their evolving legal obligations.

Here are a few considerations:

Can employers use AI to target would-be applicants along race and gender lines?

  • Traditionally, employers have advanced DEI goals through outreach designed to broaden the applicant pool to include more diverse candidates – for example, by attending hiring fairs at Historically Black Colleges and Universities (HBCUs) and partnering with institutions serving diverse populations.
  • Now, some employers use AI to target potential applicants who are (or appear to be) of a particular gender or who are racially or ethnically diverse.
  • The legal risk associated with this type of outreach is evolving.
    • In the U.S., targeted recruiting practices historically have carried minimal risk as long as they are designed to broaden the pool of qualified applicants.
    • With DEI under increased scrutiny, though, we expect a growing number of legal challenges to practices that once seemed commonplace.
  • Significantly, traditional targeted outreach has not excluded non-diverse populations.
    • Job fairs at HBCUs, for example, typically supplement rather than replace traditional recruiting methods, such as attending other job fairs and posting openings on an employer’s website or sites like LinkedIn or Indeed.
    • Employers could face enhanced risk if an AI tool replaces, rather than supplements, traditional recruiting methods, particularly if the tool excludes non-diverse populations.
    • Limited recruiting practices, such as advertising job opportunities only by word of mouth, have long given rise to claims of disparate impact and intentional discrimination.
  • U.S. companies operating globally need to consider whether any laws limit collecting certain candidate data. Some countries, for example, prohibit collecting candidate data related to gender or race.
  • As a result, employers should assess what data a given AI tool collects, how it collects that data and whether any restrictions affect collecting or retaining it in the employment context.

Can employers use AI to identify candidates’ and employees’ race to further DEI initiatives?

  • Many companies have implemented DEI initiatives, such as seeking to achieve diverse interview slates or aspirational improvements in workforce representation. Measuring the company’s progress toward such DEI goals necessarily requires accurate demographic information.
  • However, applicants and employees do not always choose to disclose their gender, race and ethnicity. That can leave recruiters and managers guessing an employee’s or applicant’s demographic characteristics based on appearance.
  • AI tools could be used to infer an applicant’s race or gender from public sources, filling the gaps left by incomplete self-identification without recruiters and managers needing to guess. However, challenges to DEI are on the rise, and involving AI may heighten the risk.
    • A conservative activist group, America First Legal, asked the EEOC this year to investigate the NFL’s Rooney Rule, which requires teams to interview minority and/or female applicants.
    • Another conservative activist group, the National Center for Public Policy Research, has been sending mass letters to the CEOs of Fortune 500 companies, claiming that aspirational representation goals are actually illegal quotas.
    • On the other side of the equation, a number of shareholder derivative lawsuits have accused companies of making false and misleading statements about their claimed commitment to DEI and about their disclosures on workforce representation improvements.
    • We can expect an increase in legal challenges involving DEI. Using AI to identify demographic information could inject erroneous assumptions into the output, which could be highly embarrassing to the company and, obviously, difficult to defend.
  • Global employers should consider that case law and legislation on diverse slates and DEI initiatives outside the U.S. are less robust and still evolving.

What about privacy concerns?

  • Using AI tools in hiring may raise significant privacy concerns.
  • A primary privacy concern involves the amount and type of personal data AI tools collect about candidates. AI tools can analyze a candidate's resume, social media profile and digital footprint. This raises questions such as:
    • Notice: Has the candidate been notified of what data will be collected and how it will be used and shared?
    • Consent: Has the candidate given consent for the collection and use of certain data?
  • Another concern is whether data from a candidate’s resume will be used to train a service provider’s AI model. Increasingly, AI companies commit to using personal data only to provide a service, but employers should review contracts on this point.
  • Laws worldwide impose different requirements on collecting and using personal data. Some laws may prohibit an employer from collecting certain data elements or require express consent to use the data.

Are any AI-specific laws in play?

  • Legislators worldwide are proposing and adopting numerous laws governing the use of AI, including measures affecting employment and labor.
  • In the U.S., agencies including the EEOC, Department of Labor, Office of Federal Contract Compliance Programs and Department of Justice have released guidance about using AI in employment.
  • Several states are considering AI legislation. The presidential election also may affect AI regulation.
  • In Europe, the Artificial Intelligence Act (AI Act) establishes a common regulatory and legal framework in the European Union. Considered the world's first comprehensive legal framework for AI, it provides for EU-wide rules on data quality, transparency, human oversight and accountability.
  • Countries such as the United Kingdom, Brazil and Israel have proposed AI legislation.
  • Complicating the legal landscape further, authorities are layering AI-specific regulations on top of existing employment and privacy laws.

AI tools have the power to increase efficiency, provide more data and insights and make tasks easier. Employers should consider how employees can use AI in a way that meets the company’s legal obligations.

Remember, the landscape is changing, so watch for forthcoming legislation, litigation and agency guidance.

Want to know more? Ask one of the authors.