
REGULATORY FRAMEWORKS IN THE DIGITAL AGE: NAVIGATING THE AI GOVERNANCE LABYRINTH IN THE EU AND THE USA


The burgeoning field of artificial intelligence (AI) presents both risks and opportunities, necessitating robust regulatory frameworks. Taking a proactive stance, the European Union (EU) has introduced the AI Act, a comprehensive proposal adopting a risk-based approach to AI regulation. The Act aims to ensure that AI systems are safe, transparent, traceable, non-discriminatory, and environmentally friendly. Meanwhile, the United States (U.S.) is shaping its AI policy through national standards and state- and city-level regulations, such as New York City's Local Law 144 and initiatives in California and New Jersey. The U.S. has also issued an Executive Order (EO) on safe AI and a Blueprint for an AI Bill of Rights, alongside various working groups and white papers. This article dissects these varied approaches, exploring the nuances and implications of EU and U.S. policies on AI. We delve into the complexities of AI regulation, comparing these influential jurisdictions to understand their impact on global AI governance and future technological development, and conclude with policy recommendations.

 

Charting the Course

 

The EU is positioning itself as a global leader in regulating AI, much as it pioneered personal data protection with the General Data Protection Regulation (GDPR). The EU's AI Act, proposed by the European Commission in April 2021, presents a comprehensive regulatory framework. It is expected to pass within the year, with full effect anticipated around mid-2025. To bridge the gap until its enactment, the EU is crafting a voluntary AI Pact focused on transparency and accountability in AI. The Act forms part of a broader, multifaceted EU strategy on digital regulation, encompassing the Digital Services Act (DSA) and the Digital Markets Act (DMA). Integral to this strategy is the GDPR, which notably includes provisions for human oversight of algorithmic decision-making and a controversial "right to explanation" of the logic of algorithms.

 

Meanwhile, the United States adopts a risk-based, sector-specific approach distributed across federal agencies. This approach was first outlined in the 2019 Executive Order, "Maintaining American Leadership in Artificial Intelligence" (EO 13859), and in subsequent guidance from the Office of Management and Budget. These guidelines and orders advocate a risk-based approach to AI oversight, emphasising the importance of managing AI risks through both regulatory and non-regulatory interventions. (A "risk-based approach" to AI regulation is one in which the intensity of regulatory oversight corresponds to the potential harm an AI system poses. The EU AI Act, for instance, classifies AI systems into four risk categories: 1) minimal risk; 2) limited risk; 3) high risk, covering systems with a substantial impact on users' life prospects, which must satisfy stringent requirements and conformity assessments before entering the EU market; and 4) unacceptable risk, covering systems that are prohibited in the EU market. The approach aims to balance fostering technological innovation against ensuring public safety.) The U.S. guidelines call for scientific, evidence-based assessment of AI capabilities, enforcement of non-discrimination statutes, consideration of disclosure requirements, and promotion of safe AI development. This approach has, however, led to uneven development of AI policies across federal agencies, reflecting the previous administration's minimalist regulatory perspective even as it required agencies to devise plans for regulating AI applications.

 

The EU's and the U.S.’s approaches reflect their distinct regulatory philosophies and objectives. While the EU's strategy is comprehensive and prescriptive, the U.S. framework is more distributed and sector-specific, highlighting differing priorities in managing AI risks and innovations.

 

Diverging Definitions and Scopes

Comparing the regulatory approaches of the EU AI Act and the U.S. EO on safe AI makes evident that they differ significantly in several critical respects while sharing common goals.

 

While both regions emphasise the principles of safety, transparency, and non-discrimination, their divergences reflect their respective regulatory philosophies and highlight the challenges of aligning international AI governance standards. Specific points of difference and similarity are listed below:

 

1. Both the AI Act and the EO emphasise the importance of system testing and monitoring throughout the lifecycle of AI systems. The AI Act mandates comprehensive pre-market testing procedures and post-market tracking, focusing on compliance and performance. Similarly, the EO emphasises testing, evaluation, and post-deployment performance monitoring, particularly in AI-enabled healthcare technologies. While healthcare is singled out, similar requirements may apply to other sectors, potentially leading to harmonisation between the two approaches.

 

2. Cybersecurity standards are another commonality, with both instruments mandating adherence to security principles. The EO explicitly targets the exploitation of AI models by malicious cyber actors. Although this focus is less pronounced in the AI Act, other EU laws, such as the NIS2 Directive and the Cyber Resilience Act, may impose similar obligations.

 

3. The Executive Order introduces unique elements such as AI "testbeds" and government-driven initiatives to influence industry standards and innovation. These initiatives are not explicitly present in the AI Act but may be covered by other EU laws and initiatives.

 

4. Intellectual property compliance remains a point of debate in the AI Act, particularly the requirement to disclose fully the protected materials used to train AI systems. In contrast, the EO advocates clarifying the boundaries of patent and copyright law concerning AI-supported creations, a perspective not explicitly addressed in the AI Act.

 

Therefore, while the European AI Act and the American EO on AI share common objectives, such as testing, privacy protection, cybersecurity, and ethical considerations, they differ in their regulatory reach, in their sectoral versus horizontal approaches, and in the unique elements each introduces.

 

 

Challenges and Criticisms in AI Regulations

 

The regulatory frameworks for AI in the EU and the U.S. face several criticisms and challenges, particularly concerning innovation. These challenges could undermine the effectiveness of AI regulation in both regions.

 

One key challenge is the broad and somewhat vague definition of AI within the EU AI Act. The EU's definition encompasses virtually all algorithms and computational techniques, and it has drawn criticism for its lack of specificity. Such a broad definition could stifle innovation by subjecting a wide range of AI technologies to stringent regulations, potentially discouraging start-ups and smaller companies from pursuing AI development due to compliance burdens.

 

The EU AI Act's comprehensive coverage and its broad set of rules for AI systems used in impactful socio-economic decisions could lead to significant misalignment with the more limited regulatory approach in the U.S. While some U.S. agencies are adapting existing regulatory authority to address AI, their coverage may not align with EU standards, causing confusion and compliance challenges for global organisations subject to both sets of regulations. Furthermore, the heavier legislation in the EU has raised concerns among Europe-based tech start-ups that it may hinder innovation and make it difficult for them to compete with their U.S. counterparts. Bureaucratic regulations and compliance requirements can disproportionately burden smaller companies with fewer resources, favouring established players and tech giants.

 

The challenge of regulatory misalignment is further exacerbated by the absence of U.S. legislation addressing critical areas such as online platforms, social media, and e-commerce, which the EU has tackled through the DSA and DMA. The lack of clear regulatory approaches in the U.S. for these areas creates uncertainty and challenges for businesses operating in both regions.

 

Hence, while the EU and the U.S. have taken distinct regulatory approaches to AI, the criticisms and challenges faced by each framework, especially in terms of innovation, highlight the complexity of harmonising global AI governance. Striking a balance between regulation and innovation will be crucial for the continued development and adoption of AI technologies in both regions.

 

Shaping the Future of AI Regulation

 

The future of AI regulation stands at a critical juncture, with the EU and the U.S. playing pivotal roles in shaping global governance. The EU has adopted a comprehensive approach, while the U.S. pursues a more decentralised path, with federal agencies adapting to AI without new legal authorities. Despite these differences, both regions recognise the importance of alignment in facilitating trade, enhancing regulatory oversight, and fostering transatlantic cooperation.

 

Several policy recommendations emerge to mould the future of AI regulation effectively.

 

Firstly, the United States should prioritise its domestic AI risk management agenda. This involves ensuring that federal agencies develop AI regulatory plans and gain a comprehensive understanding of their existing authority over domestic AI risk management.

 

Secondly, the EU can enhance flexibility in implementing the EU AI Act at the sectoral level, allowing tailored approaches to high-risk AI applications and improving the Act's overall effectiveness.

 

Thirdly, addressing the absence of a legal framework for online platform governance in the U.S. is crucial. Simultaneously, the EU and the U.S. can collaborate on shared documentation of recommender systems and network algorithms and conduct joint research on online platforms.

 

Fourthly, deepening knowledge sharing is essential, encompassing cooperation on standards development, AI sandboxes, large public AI research projects, regulator-to-regulator exchanges, and developing an AI assurance ecosystem.

 

Furthermore, ensuring consumer awareness of AI usage is paramount. Both regions should consider enhancing transparency and labelling AI-generated content to protect individuals from AI-enabled fraud and deception. Lastly, international cooperation is vital. Collaboration between the EU and the U.S. on international AI standards and research can profoundly impact global AI governance.

 

While challenges and differences exist, the EU-U.S. Trade and Technology Council has shown promise in working collaboratively on AI-related initiatives, as demonstrated by the joint "AI Code of Conduct" developed by the EU and the U.S. A commitment to responsible AI governance and innovation can help build a harmonious future for AI regulation. As AI evolves rapidly, staying well-informed and proactive in policymaking will be essential for shaping a global AI landscape that balances risk with innovation and fosters trust among consumers and businesses.







Author: Anjali Tripathi

University and Year: O.P. Jindal Global University, 3rd Year

Programme: BA LLB Honours
