Who Gets the Job? How the U.S. and EU Regulate AI Hiring

By: Emmanuela Yiannikakis

Artificial intelligence is quickly becoming part of everyday hiring. Employers use AI tools to screen resumes, rank applicants, analyze recorded interviews, and make predictions about who is most likely to succeed in a role. On paper, that sounds efficient. In practice, it raises a much harder question: what happens when the tool making hiring decisions is not actually neutral?

That is the issue both the United States and the European Union are now trying to confront. But they are doing it in very different ways. In the United States, regulators have mostly tried to fit AI hiring tools into legal frameworks that already exist, especially employment discrimination law. The European Union, by contrast, has taken a more direct approach under the EU AI Act, which treats many AI systems used in employment as “high-risk” and regulates them before they are fully deployed.

The American approach builds on familiar ground. The EEOC has made clear that employers cannot escape liability simply because a machine helped make the decision. If an AI tool screens applicants in a way that discriminates based on race, sex, disability, religion, or another protected characteristic, federal anti-discrimination law may still apply. That principle matters because it prevents employers from treating AI as a legal shield: if the outcome is discriminatory, the fact that software was involved does not make the problem disappear.

Still, the U.S. system has a real weakness: it is mostly reactive. It usually kicks in only after harm has already happened. A rejected applicant has to recognize that something may have gone wrong, connect the decision to an algorithm, and then fit that injury into a legal doctrine that predates modern AI hiring tools. That is not a simple task, especially when employers rely on outside vendors and applicants may have no idea how the system worked. In theory, existing federal employment discrimination law covers AI-assisted decisionmaking. In practice, proving that an opaque tool caused a discriminatory result can be much harder than saying the law applies. 

Some U.S. states and cities have tried to be more proactive. New York City's Automated Employment Decision Tools law (Local Law 144) requires certain employers to complete an independent bias audit before using covered hiring tools, publish a summary of that audit, and provide notice to candidates or employees. Illinois took a narrower but still important step with its Artificial Intelligence Video Interview Act, which requires notice, an explanation of how the AI works, and consent before employers use AI to analyze recorded video interviews for Illinois-based jobs. These laws show that some U.S. lawmakers understand that general anti-discrimination principles may not be enough on their own. But they also expose the bigger problem: the U.S. approach is turning into a patchwork, with protections that vary by state and city even though the same hiring software may be used across the country.
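To make "bias audit" a little more concrete: at its core, a NYC-style audit compares selection rates across demographic groups and reports each group's rate relative to the most-selected group. The sketch below, in Python with hypothetical group names and counts, shows that calculation; the 0.80 line is the EEOC's traditional four-fifths rule of thumb for flagging disparities, not a threshold the NYC law itself imposes.

```python
# A minimal sketch of the selection-rate math behind a bias audit.
# Group labels and counts are hypothetical; a real audit follows the
# NYC rules' definitions and uses an employer's actual applicant data.

def impact_ratios(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Each group's selection rate, divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

applicants = {"Group A": 400, "Group B": 300, "Group C": 300}  # hypothetical
selected   = {"Group A": 120, "Group B":  60, "Group C":  75}  # hypothetical

for group, ratio in impact_ratios(selected, applicants).items():
    flag = "  <- below 0.80, worth scrutiny" if ratio < 0.80 else ""
    print(f"{group}: impact ratio {ratio:.2f}{flag}")
```

Run on these made-up numbers, Group B's selection rate (20%) is two-thirds of Group A's (30%), so its impact ratio of 0.67 falls below the four-fifths benchmark. The point of the disclosure requirement is that numbers like these become public before the tool screens anyone.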

The European Union has taken a more structured route. Under the EU AI Act, many AI systems used in employment, worker management, and access to self-employment are treated as high-risk. That includes systems used for recruitment, selection, and decisions that affect workers in meaningful ways. The idea behind this classification is simple: employment decisions are too important to leave to black-box systems without stronger safeguards. Instead of waiting for discrimination lawsuits to reveal the problem later, the EU is trying to regulate these systems earlier. 

That is where the EU model becomes especially interesting. Under the European Commission's current timetable, the high-risk and transparency rules are set to apply in August 2026. For employers, that means compliance cannot be an afterthought: covered AI hiring tools come with legal obligations built into the system itself. High-risk AI systems must involve human oversight, include technical documentation, and provide enough information to users to support appropriate use and interpretation. In practical terms, employers using covered systems will need to think about oversight, explainability, and documentation from the start, not only after a worker challenges the result.

In my view, that is the biggest difference between the two systems. The U.S. mostly addresses discrimination after a problem appears, while the EU tries to regulate these systems before employers rely on them.

Of course, the EU’s approach is not automatically better in every respect. It is more demanding, more bureaucratic, and probably more expensive to implement. Smaller employers may struggle with the compliance burden, especially if they rely on third-party software. The United States has the advantage of flexibility, and that flexibility may encourage faster experimentation. But the downside is clear: when the law steps in only after the fact, people may lose fair opportunities that stronger safeguards could have prevented.

That is why I think the EU currently has the more convincing framework. Hiring decisions shape who gets access to income, experience, and long-term career opportunities. If AI is going to play that role, it should not be treated as just another piece of business software. It should be treated as a system capable of distributing opportunity, and reproducing bias, at scale.

The United States does not need to copy the EU AI Act line for line. But it should take the EU's basic point seriously. AI in hiring is not just a technology issue; it is an equality issue. And if American law continues to rely mostly on after-the-fact enforcement, it may keep reacting to algorithmic discrimination instead of preventing it.
