Artificial Intelligence Regulation in 2024: Examining the U.S.’s Market-Driven Strategy in Comparison to the EU’s Proactive Approach

By: Sophia Ward

We find ourselves at a turning point in 2024. Once thought of as a far-fetched futuristic idea, artificial intelligence (AI) is now deeply ingrained in our daily lives, shaping the way we live and engage with the world around us. But as AI becomes more pervasive, so do the privacy risks and ethical concerns that come with it. In the past year alone, we’ve seen hiring algorithms that reinforce social prejudices, facial recognition overreach, and machine learning tools operating with little to no accountability.

The question is not whether AI needs regulation, but how we should go about regulating it. Currently, two of the world’s major powers, the European Union and the United States, are taking drastically different approaches. On one side, the EU has launched the world’s first comprehensive AI regulation, the Artificial Intelligence Act (AI Act), betting on strict rules to maximize AI’s promise while managing its hazards. On the other, the U.S. remains committed to an innovation-first, hands-off approach, trusting that the market will largely self-regulate.

These two approaches are more than simple policy differences—they reflect a battle for leadership in the global AI race. Will the EU’s AI Act become the blueprint for international AI regulation or will the U.S.’s focus on revolutionary innovation give it the upper hand in this high-stakes competition? The year 2024 could be the tipping point that defines the future of AI and the extent of human control we truly have over it. This post analyzes how these two approaches stack up.

The EU’s AI Act is the world’s first comprehensive legal framework governing the development and use of AI systems, including general-purpose AI. The Act’s declared purpose is to “improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems” in the EU. The Act officially entered into force on August 1, 2024, with its first provisions set to apply as soon as February 2, 2025.

The Act sorts AI systems into three risk categories: (1) unacceptable risk, (2) high risk, and (3) minimal risk. First, AI systems deemed an “unacceptable risk,” like China’s infamous social credit system, are considered too dangerous and are banned outright. Second, high-risk applications must comply with stringent standards set out by the Act to mitigate potential harm to health, safety, and fundamental rights. High-risk systems fall into two sub-categories: regulated consumer products and AI systems used for impactful socioeconomic decisions, such as hiring, financial services, educational access, and border control. Third, systems with minimal risk, such as spam filters and AI-enabled video games, face no obligations under the AI Act but may commit to voluntary codes of conduct.
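For readers who think in code, here is a minimal sketch of the tiered logic described above. It is purely illustrative: the three tiers come from the Act, but the example use cases and the `classify_risk` helper are my own simplification, not anything defined in the regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "subject to stringent obligations"
    MINIMAL = "no obligations; voluntary codes of conduct"

# Illustrative examples only; the Act defines these categories in far more detail.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "hiring decisions": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "spam filter": RiskTier.MINIMAL,
    "ai-enabled video game": RiskTier.MINIMAL,
}

def classify_risk(use_case: str):
    """Hypothetical helper: look up an example use case's tier, or None if unlisted."""
    return EXAMPLE_USE_CASES.get(use_case.lower())

for case in ("social scoring", "hiring decisions", "spam filter"):
    print(f"{case}: {classify_risk(case).value}")
```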

Further, the AI Act holds AI developers and deployers accountable with significant penalties for non-compliance with the Act’s obligations: up to €35 million or up to seven percent of global annual revenue, whichever is higher. Any U.S.-based company doing business in the EU is subject to the AI Act’s penalties, effectively shutting U.S. companies out of the profitable European market if they fail to adopt a dual compliance system. The AI Act also imposes transparency obligations on developers and deployers of high-risk systems to ensure users are aware that they are interacting with AI. For example, AI systems generating deepfakes or guiding hiring decisions must disclose their presence and label their AI-generated output as such.
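To make the “whichever is higher” cap concrete, the short sketch below compares the fixed €35 million figure against seven percent of a company’s worldwide annual revenue; the €2 billion revenue figure is a made-up example, not drawn from any real case.

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound of the AI Act's steepest penalty tier: the greater of
    EUR 35 million or 7% of worldwide annual revenue (illustrative only)."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# Hypothetical company with EUR 2 billion in global annual revenue:
# 7% of revenue is EUR 140 million, which exceeds the EUR 35 million floor.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # -> EUR 140,000,000
```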

On the other side of the Atlantic, the U.S. is taking a much different route. Despite growing calls for stronger regulation, especially after the Senate hearings with OpenAI in 2023, the U.S. still lacks comprehensive federal AI legislation. The logic? Overly restrictive laws could slow technological progress, particularly as the U.S. faces China in the so-called “AI race.” This approach puts a great deal of trust in the private sector to push innovation and “do the right thing,” but it risks backfiring if AI misuse remains largely unchecked.

The U.S. relies on a patchwork of industry-specific rules rather than a universal strategy. For example, the Food and Drug Administration (FDA) regulates AI in medical devices, while the Securities and Exchange Commission (SEC) oversees AI in finance. Although numerous federal AI bills have been proposed, Congress has yet to pass one; as a result, state and local governments are left to step up and fill the gaps.

Besides sector-specific regulations, various other frameworks exist to guide AI policy in the U.S. On October 30, 2023, President Biden issued an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The EO directed federal agencies to take over 100 actions aimed at promoting “responsible innovation” through broad guidelines meant to boost U.S. leadership in AI while also mitigating risks. It also encourages voluntary compliance with industry-led standards, building on the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and on voluntary commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to allow testing by independent security experts.

This brings us to the primary conflict of 2024: the battle between innovation and regulation.

Critics of the AI Act argue that over-regulation could stifle innovation, as startups and small and medium-sized enterprises (SMEs) may struggle to comply with the Act’s complex requirements. The Future of Life Institute has proactively addressed this concern by developing an AI Act Compliance Checker to help SMEs and startups determine whether their systems will be subject to AI Act obligations. Still, critics warn that the EU’s tight leash will drive European AI startups away from their home market and prompt them to target the U.S. for their primary market entry. In that sense, the EU’s emphasis on “trustworthy AI” promises long-term stability and protection at the risk of losing its competitive edge to less regulated markets. But the EU is not just regulating technology; it is building trust in AI by ensuring that its benefits don’t come at the cost of people’s rights and privacy.

Conversely, the U.S. system prioritizes high-risk, high-reward innovation, often sidelining privacy interests in favor of letting technology take center stage. This strategy offers advantages: AI developers can freely experiment at the boundaries of AI technology, potentially keeping the U.S. ahead in the AI race. Although this approach could result in groundbreaking advancements, its patchwork rules and jurisdictional inconsistencies can increase uncertainty, complicate compliance, and perhaps even hamper innovation.

There is also the issue of the government’s ability to “keep up” with the pace of technological progress in the U.S. Legislation, by nature, moves slowly. Flexibility is essential in rapidly evolving industries like AI, where rigid legislation can quickly become obsolete, but legislative lag can leave gaps where adverse consequences flourish. Further, if the private sector appears to regulate itself effectively at the speed of innovation, lawmakers might hesitate to impose restrictive regulations that could stunt future AI development. Yet a façade of control can mask latent or unintended consequences that surface only after the harm has been done. This raises the question of whether the U.S. is sacrificing long-term societal well-being for short-term technological gains.

In light of this, the U.S. could shift its focus toward striking a fair balance between encouraging AI development and ensuring its ethical and responsible use. Undoubtedly, the U.S. could benefit from taking a page out of the EU’s book. Like the AI Act, the EO recognizes the significance of risk profiles, but it omits any specific standards. Instead of passing sweeping, all-encompassing laws, the U.S. could begin by implementing a risk-based classification system that keeps pace with technological advancements. While heavy-handed regulation is not warranted for every AI chatbot or AI-enabled video game, AI systems that directly impact people’s rights and freedoms, such as biometric surveillance or healthcare AI, absolutely warrant it.

Additionally, the U.S. can maintain its innovation-centric focus while minimizing ethical concerns by implementing “regulatory sandboxes.” These controlled settings enable businesses to explore and experiment with new AI products under government oversight, a concept outlined in the AI Act. A sandbox model would allow the U.S. to continue prioritizing fast-paced innovation while proactively identifying risks and privacy issues. The U.S. could also benefit from embracing stronger transparency and accountability measures like the AI Act’s requirements, especially as public trust in AI has dropped significantly. Clearer rules around AI disclosure could go a long way toward rebuilding that trust and confidence in the use of AI systems. And while the EU’s penalties are harsh, they signal that accountability matters, something that should be equally important to the U.S.

As 2024 wraps up, it is clear that the debate on how to regulate AI is far from settled. For now, the U.S. leads in AI development, but the EU’s AI Act has the potential to become the gold standard for ethical AI deployment, with substantial impacts on privacy protection and consumer confidence. The solution? Finding a middle ground between these vastly differing approaches will be key to ensuring that AI advances in a way that benefits consumers and industry alike.
