High-Tech, Hire Standards: The Rules Defining AI’s Role in Employee Decisions

By: Mariana Salazar

At the start of a new job, buried in the intimidating stack of onboarding paperwork is typically a consent form for a background check, accompanied by disclosures detailing the information that will be collected, how it may be used in employment decisions, and your rights to request copies of the report. Employers are required to provide these documents to comply with the consent, notice, and disclosure requirements set by the Fair Credit Reporting Act (FCRA), which aims to keep consumers informed and in control of how their personal information is used.

The FCRA isn’t just another acronym; it’s a powerful consumer protection law that continues to shape hiring practices today. Passed in 1970, the FCRA was one of the earliest efforts to address privacy in an increasingly data-driven world, with a mission to promote the accuracy, fairness, and privacy of personal information collected by consumer reporting agencies (CRAs). While originally focused on credit reporting, the law has since been broadened to cover consumer reports more generally. These reports, commonly used in employment decisions, can include a range of financial and personal information such as credit history, payment habits, rental records, and even public records like liens, bankruptcies, or criminal records. Over time, the FCRA’s oversight has expanded to reach not only CRAs but also the companies that use consumer reports and the entities that furnish data to CRAs.

The decisions influenced by these reports can significantly impact a person’s life. Take, for example, the FTC’s 2018 action against RealPage, a tenant screening company, which resulted in a $3 million civil penalty after its automated background checks falsely attributed criminal records to prospective renters who had none. The lesson? With great automation comes great responsibility. The case sent a clear message that accuracy still matters, even when machines are in control.

Enter AI: the dazzling new technology that can make or break careers in the blink of an algorithm. Today’s employers increasingly rely on AI-driven software to make crucial employment decisions – from résumé scanners seeking specific keywords to interview tools that analyze micro-expressions. As AI advances and remote work expands, many employers now use third-party tools to monitor everything from productivity metrics to physical movement during shifts, using this data to inform hiring, promotions, and even terminations.

On October 24, 2024, the Consumer Financial Protection Bureau (CFPB) published a circular clarifying how AI-driven workplace monitoring is subject to the FCRA. The CFPB emphasized that the outputs of many of these advanced monitoring tools qualify as consumer reports under the FCRA when they are supplied by third-party vendors. For example, the vendor of a phone app that scores a transportation worker’s driving could be considered a CRA if it uses third-party or public data sources to generate that score – a common practice in AI-driven monitoring. These “background dossiers” and algorithmic scores from external sources – often including AI-derived insights into employee performance or risk – are regulated under the FCRA in much the same way as traditional background checks. When employers rely on such reports to make employment decisions, they must comply with FCRA requirements: obtain employee consent before procuring the report, provide notice before taking adverse action based on it, and allow workers the opportunity to dispute any inaccuracies.

The FCRA’s protections are designed to ensure that any information used in employment decisions is accurate, transparent, and fair – a standard that has become even more essential with the rise of AI-driven workplace monitoring tools. At its core, the law requires that information capable of affecting someone’s job be validated. Extending FCRA protections to AI-driven tools is crucial: it prevents employment decisions from resting on unverified data and requires employers to notify workers about the information influencing those decisions. The CFPB’s recent guidance reflects an understanding that today’s digital tracking tools introduce new risks to worker rights and autonomy. Without oversight, employers could base hiring and management decisions on factors beyond workers’ control, bypassing the transparency and accuracy standards the FCRA aims to uphold. Practically speaking, an employer that relies on data from a third-party AI tool to assess job performance, potential for reassignment, or retention risk must comply with FCRA requirements.

The CFPB recognizes the risk in allowing private data to determine someone’s future, especially as AI-driven tools make it easier to collect and analyze sensitive information while complicating the ability to trace which data was actually used. Beyond concerns over AI’s potential for bias, the necessity of FCRA protections becomes clear given the deeply personal nature of the data that can end up in consumer reports – geolocation information, for instance, can track visits to sensitive locations like medical and reproductive health clinics, places of worship, domestic abuse shelters, or addiction recovery centers. Additionally, under FCRA Section 609(a), consumer reporting agencies are required to disclose both the original sources and any intermediary sources that contribute to these reports. However, the complexity of AI algorithms can hinder this transparency, creating a lack of accountability and exposing employers to legal risks. Fortunately, the CFPB’s guidance offers employers a silver lining: by adhering to these protections, they can shield themselves from the privacy and compliance pitfalls that come with storing highly sensitive employee data.

In contrast, the European Union’s General Data Protection Regulation (GDPR) sets an even more stringent standard for consumer data protection and transparency. Often billed as the “toughest privacy and security law in the world,” the GDPR applies to any company processing the personal data of individuals in the EU, regardless of the company’s location. This expansive regulation encompasses rights similar to those under the U.S. FCRA, particularly concerning employee monitoring. The GDPR is built on seven core principles, set out in Article 5(1)–(2):

  1. Lawfulness, fairness and transparency – Processing must be lawful, fair, and transparent to the data subject.
  2. Purpose limitation – Data should only be processed for legitimate purposes explicitly specified to the data subject at the time of collection.
  3. Data minimization – Only the minimum amount of data necessary for the specified purpose should be collected and processed (see the sketch after this list).
  4. Accuracy – Personal data must be kept accurate and up to date.
  5. Storage limitation – Data should not be stored for longer than necessary for the intended purpose.
  6. Integrity and confidentiality – Processing must ensure appropriate security, integrity, and confidentiality (e.g., through encryption).
  7. Accountability – The data controller must be able to demonstrate compliance with all these principles.
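
These principles map onto concrete engineering choices for employers that build or buy monitoring tools. As a purely hypothetical sketch (the field names and helper below are invented for illustration, not taken from any real product), data minimization and storage limitation might look like trimming a raw monitoring event down to the fields a stated purpose actually requires and tagging it with a deletion date:

    # Hypothetical illustration of data minimization and storage limitation
    # applied to a raw employee-monitoring event. All field names are invented.
    from datetime import datetime, timedelta, timezone

    RAW_EVENT = {
        "employee_id": "E-1042",
        "timestamp": "2025-01-15T14:02:00Z",
        "keystrokes_per_min": 63,
        "gps_location": (40.4406, -79.9959),  # sensitive and unnecessary for this purpose
        "webcam_frame": b"<bytes>",           # sensitive and unnecessary for this purpose
    }

    ALLOWED_FIELDS = {"employee_id", "timestamp", "keystrokes_per_min"}  # purpose limitation
    RETENTION = timedelta(days=90)                                       # storage limitation

    def minimize(event: dict) -> dict:
        """Keep only the fields needed for the declared purpose and add a delete-by date."""
        slim = {key: value for key, value in event.items() if key in ALLOWED_FIELDS}
        slim["delete_after"] = (datetime.now(timezone.utc) + RETENTION).isoformat()
        return slim

    print(minimize(RAW_EVENT))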

Under the GDPR, employers must notify employees about any automated decisions that have a “legal… or similarly significant” impact on their employment status. They are also required to disclose meaningful information about the logic behind the AI’s decision-making process – a significant challenge, given that many AI algorithms are protected as trade secrets. Non-compliance with the GDPR can result in hefty fines – up to €20 million or 4% of global annual revenue, whichever is higher – as well as other remedies such as compensation for damages.
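
To put the penalty math in concrete terms, the sketch below (in Python, with invented revenue figures) shows how that higher-tier ceiling under Article 83(5) works: the cap is whichever is greater, €20 million or 4% of worldwide annual turnover.

    # Illustrative only: GDPR Article 83(5) caps the most serious fines at
    # EUR 20 million or 4% of worldwide annual turnover, whichever is higher.
    def gdpr_fine_ceiling(global_annual_turnover_eur: float) -> float:
        return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

    # A company with EUR 2 billion in turnover faces a ceiling of EUR 80 million,
    # while a firm with EUR 50 million in turnover still faces the full EUR 20 million cap.
    print(gdpr_fine_ceiling(2_000_000_000))  # 80000000.0
    print(gdpr_fine_ceiling(50_000_000))     # 20000000.0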

To complement these strict data privacy rules, the EU’s Artificial Intelligence Act, which entered into force in August 2024 as the first comprehensive legal framework for AI systems across the EU, further strengthens oversight. Employment is one of the eight high-risk areas under the EU AI Act, and AI systems that perform profiling of natural persons are always considered high-risk under Article 6(3). The EU AI Act imposes extensive obligations on various stakeholders across the lifecycle of high-risk AI systems, including requirements for training data and data governance, technical documentation, recordkeeping, transparency, human oversight, and cybersecurity. Although the EU AI Act is new, it will certainly shape employer practices and reinforce the EU’s firm stance on regulating employee privacy.

Both the FCRA and GDPR address a central question: how do we protect individuals as their lives become increasingly shaped by complex data systems? The FCRA provides a framework for U.S. companies leveraging consumer data in employment decisions, prioritizing accuracy and fair use. Meanwhile, the GDPR requires a high level of transparency, holding companies accountable not only for how they use data but also for ensuring individuals understand the technology impacting them. In essence, the GDPR emphasizes transparency and accountability, while the FCRA, with its narrower but still formidable scope, ensures that worker data is not misused. At the end of the day, the vast amounts of data collected through AI monitoring raise significant concerns about privacy, fairness, and control. As both U.S. and EU regulators update their legal frameworks, companies must balance the drive for data-driven efficiency with the responsibility to protect employee rights.
