AI & Its Role in High Volume Hiring

Artificial intelligence and machine learning are all anyone seems to want to talk about – and we totally get it!

Hiring and recruiting is a ‘people business’, but talent acquisition teams tasked with recruiting at high volume struggle to balance their process with the people – enter the promise of AI: efficiency, speed, and near-immediate ROI.

With AI, HR and TA teams can do more with less. AI can take care of the tedious processes that distract us from connecting with candidates. Algorithms can drive assessments, candidate matching, and performance monitoring. Chatbots can answer questions and engage employees.

As more HR and TA teams turn to AI solutions, we are seeing a pattern – solutions typically fall into seven common types:

  1. Resume Screening: Algorithms analyze resumes to identify relevant information like experience, education, and skills. This helps recruiters quickly filter through massive stacks of resumes in minutes to identify the best candidates for a role.
  2. Chatbot and Virtual Assistants: Chatbots and virtual assistants engage with candidates, answer questions, and provide insight into a job and company – freeing up recruiters’ time and, in theory, providing a consistent candidate experience.
  3. Video Interviews: AI assists in conducting and reviewing video interviews. Some automated video-interviewing platforms use algorithms to analyze candidates’ facial expressions, body language, and speech patterns, in addition to their responses, in order to gauge candidates’ suitability for the role and shortlist them.
  4. Skills and Personality Assessments: AI-based assessment tools evaluate candidates’ skills and personality traits through online tests or simulations. These assessments provide objective data to assess candidates’ fit for specific job requirements and identify promising candidates.
  5. Predictive Analytics: Algorithms can analyze vast amounts of historical hiring data to identify patterns and predict future hiring outcomes such as identifying which sourcing channels, recruitment strategies, or candidate attributes are most likely to lead to successful hires and long-term retention.
  6. Candidate Matching: AI solutions compare candidate profiles with job requirements to match candidates to roles based on relevant skills, experience, and qualifications (a simplified sketch of this kind of matching appears just after this list).
  7. Automation of Administrative Tasks: AI can automate administrative tasks associated with high-volume hiring, such as scheduling interviews, sending follow-ups, and managing candidate communication.
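
To make the resume-screening and candidate-matching categories concrete, here is a deliberately minimal Python sketch of keyword-based matching. Real screening products use far more sophisticated language models; the job requirements and resumes below are entirely hypothetical.

```python
# Toy keyword-based resume screening (categories 1 and 6 above).
# All job requirements and resume text are hypothetical examples.

def match_score(resume_text: str, required_skills: list[str]) -> float:
    """Return the fraction of required skills mentioned in a resume."""
    text = resume_text.lower()
    hits = sum(1 for skill in required_skills if skill.lower() in text)
    return hits / len(required_skills)

job_requirements = ["forklift certification", "inventory management", "safety compliance"]

resumes = {
    "candidate_a": "Five years of warehouse work, forklift certification, and "
                   "experience with inventory management systems.",
    "candidate_b": "Retail associate with customer service and POS experience.",
}

# Rank candidates by overlap with the job's required skills.
for name in sorted(resumes, key=lambda n: match_score(resumes[n], job_requirements), reverse=True):
    print(f"{name}: {match_score(resumes[name], job_requirements):.0%} of required skills matched")
```

Even this toy version hints at why human oversight matters: a strong candidate who phrases a skill differently scores zero.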

 

According to many HR solution providers, AI can do anything – we just aren’t so sure it should.

While AI can, and does, positively influence efficiency in high-volume hiring, we believe that it should never replace human oversight in any recruiting strategy.

The debate around AI is not a new one. AI has been making waves for more than a decade, but the implications of its application have multiplied as the models grow more sophisticated and more accessible to the public.

 

The Dark Side of AI

Recently, some organizations have come under fire for their use of AI in recruiting. A common factor in many of these situations is that the AI in question is being used to make hiring decisions without human oversight. Human oversight is necessary to ensure fairness and avoid bias.

This is especially true when making final hiring decisions or vetting candidates. If a company uses artificial intelligence to help assess candidates on nuanced concepts like honesty or work ethic, it can quickly enter a legal gray area.

CVS recently found itself in hot water for this very thing. As of this writing, CVS is facing legal consequences in Massachusetts for its use of HireVue’s AI-assisted video-interview screening solution. (source)

What was the issue?

Since 1988, federal law has prohibited most private employers from using lie detectors to screen employees. The Massachusetts law goes further: it forbids all employers from using a polygraph or any other device, mechanism, or instrument to “assist in or enable the detection of deception” as a condition of employment.

It’s illegal to use a lie detector to screen job applicants – and if you use an AI solution to assess a candidate’s honesty, you are just using an AI-powered lie detector.

At least that is what the lawsuit against CVS is claiming.

It is very easy for humans to believe that AI is unbiased, but the reality is that even good AI systems are informed by imperfect data sets. Humans are responsible for building the algorithms, feeding in the data to train these systems, and testing the accuracy of the outputs.

“Recent awareness of the impacts of bias in AI algorithms raises the risk for companies to deploy such algorithms, especially because the algorithms may not be explainable in the same way that non-AI algorithms are. Even with careful review of the algorithms and data sets, it may not be possible to delete all unwanted bias, particularly because AI systems learn from historical data, which encodes historical biases.”

(source)

So when a large organization starts leveraging AI to assess candidates, it enters dangerous territory. If it uses AI to assess abstract qualities like honesty, it is trusting an imperfect system.

Tesla may sell self-driving cars, but that doesn’t mean you can fall asleep behind the wheel.

 

What You Need to Know: Regulations & Legislation

2022 was a landmark year for artificial intelligence (AI) by many measures. The public release of ChatGPT as a free tool sent shockwaves through many industries, and within the first few weeks of 2023, Microsoft was in talks to invest $10 billion in OpenAI, the company behind ChatGPT.

Global revenue projections estimate that the AI market will reach $500 billion this year, with an estimated year-on-year growth rate of 19.6%. (source)

With all this noise around AI and organizations coming under fire, it is critical to understand what is happening at the federal and local levels to regulate AI.

Another well-known, possibly the best-known, real-world example of a major organization running afoul of AI in hiring decisions dates back to 2018, when Amazon’s AI recruiting software was found to be systematically discriminating against women in the hiring process (Reuters).

Ultimately, Amazon cut ties with the tech.

Many of these cases are built around regulation that is not actually specific to AI; rather, they rest on legislation tied to bias and discrimination. That doesn’t mean that legislation won’t be built to specifically address AI – in fact, current trends are calling for more specific AI regulations.

The European Commission’s Harmonised Rules on Artificial Intelligence, also known as the EU AI Act, was first proposed in 2021 and sought to lead the world in AI regulation. The Council of the EU adopted its common position on the Act in December of 2022, and it is said to be the ‘gold standard’ for AI regulation.

If the EU regulation set the gold standard, US regulators are leading the charge at home. In the US, the Equal Employment Opportunity Commission (EEOC) is at the forefront of AI regulation and oversight – it recently released guidance and documentation to help employers navigate the potential pitfalls of AI solutions.

The EEOC is on record that an employer’s use of AI could violate workplace law and that employers are ultimately responsible for problems caused by their selected AI vendors. The EEOC recommends proactive audits of AI solutions and that the “Four-Fifths Rule” be applied to AI selection and decision algorithms. Under that rule, if any group’s selection rate is less than four-fifths (80%) of the highest group’s rate, the process may indicate adverse impact.
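
To illustrate, here is a minimal Python sketch of a Four-Fifths Rule check. The applicant and selection counts are invented purely for illustration; a real audit would be far more rigorous.

```python
# Minimal Four-Fifths Rule check on hypothetical screening outcomes.
# The counts below are invented for illustration only.

outcomes = {
    "group_a": {"applicants": 200, "selected": 60},  # 30% selection rate
    "group_b": {"applicants": 150, "selected": 30},  # 20% selection rate
}

rates = {group: d["selected"] / d["applicants"] for group, d in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    status = "possible adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {status}")
```

Here group_b’s impact ratio comes out to 0.67, well under the 0.8 threshold, so this hypothetical tool would warrant a closer look.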

But it doesn’t stop there: the White House published its Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People this past October, outlining the United States’ dedication to protecting US citizens from the potential harms of AI.

There is also the proposed Algorithmic Accountability Act at the federal level, and the National Institute of Standards and Technology (NIST) has produced an AI Risk Management Framework.

In addition to federal regulations, states and other jurisdictions have passed or proposed additional laws to regulate AI use in specific areas.

  • Illinois enacted the Artificial Intelligence Video Interview Act, requiring employers to notify candidates when their video interviews are being screened by algorithms.
  • The New York City Council passed legislation mandating bias audits of automated tools used to make decisions about hiring candidates and promoting employees.
  • Colorado’s General Assembly enacted legislation to prevent insurance providers from using biased algorithms or data to make decisions.
  • Washington DC proposed the Stop Discrimination by Algorithms Act, in an effort to prevent discrimination in automated decisions about employment, housing, and public accommodation, and to require audits for discriminatory patterns.

 

So, How Should AI Be Leveraged in Hiring?

There isn’t an easy answer, but the reality is that AI is here to stay – and we think that is a good thing. The question is: how do employers ensure they are leveraging the right tools in the right ways? How can you enjoy the benefits of AI while avoiding the pitfalls of bias?

While there is no hard, clear line, certain solutions are generally safer than others.

Automation of administrative tasks and virtual assistants are the most accessible options. They augment the recruiter experience and focus on lower-impact decision-making, like which response to send next or which recruiter has the bandwidth to run a phone screen in the candidate’s time zone.
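
As a rough sketch of what that low-stakes automation might look like in Python – the recruiter names, capacities, and time zones below are all hypothetical:

```python
# Toy scheduler: route a phone screen to a recruiter with open capacity,
# preferring one in the candidate's time zone. All data is hypothetical.

from dataclasses import dataclass

@dataclass
class Recruiter:
    name: str
    time_zone: str
    open_slots: int

recruiters = [
    Recruiter("Ana", "US/Eastern", 0),
    Recruiter("Ben", "US/Pacific", 3),
    Recruiter("Che", "US/Eastern", 2),
]

def assign_phone_screen(candidate_tz: str) -> Recruiter | None:
    """Prefer same-time-zone recruiters with open slots; fall back to anyone free."""
    available = [r for r in recruiters if r.open_slots > 0]
    same_tz = [r for r in available if r.time_zone == candidate_tz]
    pool = same_tz or available
    return max(pool, key=lambda r: r.open_slots) if pool else None

assignee = assign_phone_screen("US/Eastern")
print(assignee.name if assignee else "no one available")  # -> Che (Ana has no open slots)
```

A mistake here costs a rescheduled call, not someone’s job – which is exactly why this category is a safer place to start.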

The lesson seems to be this: recruiting teams should be cautious of providers who over-index on AI algorithms for hiring decisions or candidate vetting. When AI is trusted with important decisions that may affect people’s lives, mistakes have a massive impact on individuals.

If you choose to leverage an assessment tool, be sure that the machine is not deciding for you, particularly in the negative. Leveraging AI to identify a good fit is more acceptable than leveraging AI to disqualify candidates.

In every case, be sure to understand your local legislation and regulations.

Did I use AI in the crafting of this blog? Yes. Would I trust AI to write the blog itself? Absolutely not. AI is incredibly powerful, but it is not infallible – nor is it the best writer.

At Grayscale, we believe that AI should enrich the hiring experience for both the recruiter and the candidate. It should put more time back on the recruiter’s calendar to connect with candidates with empathy and dignity.

Candidates don’t want to think of themselves as just another cog in the machine; they want to be seen and appreciated. AI can’t do that – only a human being can build that kind of connection.

Request a demo to see how Grayscale can streamline and enrich your hiring processes.

 

Additional Sources
  • https://www.boston.com/news/the-boston-globe/2023/05/22/milton-residents-lawsuit-cvs-ai-lie-detectors/
  • https://www.jdsupra.com/legalnews/eeoc-s-latest-ai-guidance-sends-warning-7103163/
  • https://link.springer.com/article/10.1007/s10551-022-05049-6
  • https://link.springer.com/article/10.1007/s43681-022-00166-4
  • https://arxiv.org/pdf/1706.03021.pdf