In our Artificial Intelligence (AI) journey, much like the timeless question "Quo vadis, Domine?" ("Where are you going, Lord?"), we find ourselves at a crossroads, questioning the direction of our technological trajectory. So as we traverse the AI landscape, we should ask ourselves, "Where is our AI going, and what are the implications for us?"

At this critical juncture, when the path forward may seem uncertain, well-structured legal frameworks provide a basic roadmap. They not only clarify the permissible boundaries of AI development and application but also serve as foresight tools that anticipate potential challenges and pitfalls. As I argued in a previous blog, law generates a dialectical process: the tech industry, responding to legal constraints, presents its antithesis by seeking new solutions, ultimately leading to a synthesis that reconciles legal requirements and fosters human agency.

Consequently, nations around the world are trying to figure out how to adapt to and capitalize on these evolving technologies. Unfortunately, while the impact of AI is global, many countries have yet to publish comprehensive AI regulations and policies. This gap presents both challenges and opportunities.

On the one hand, embracing standardized regulations could foster global collaboration, addressing ethical concerns and privacy issues. On the other hand, the absence of clear guidelines may hinder the progress of global AI initiatives, potentially leading to misuse, fragmented approaches, and inconsistencies.

Therefore, by exploring and comparing recently published AI regulations, individuals and organizations can proactively navigate the evolving landscape. This approach allows them to stay ahead of the curve, respond resiliently to change, and integrate practices and solutions that may not have been adopted in their home countries.

Implications of the EU’s AI Act

On December 8, 2023, after more than 36 hours of negotiations, a political agreement was reached on the EU's Artificial Intelligence Act (AI Act). The AI Act adopts a "risk-based approach" that regulates the use of AI rather than the technology itself, with the aim of protecting European values and the rule of law. The legislation sets clear obligations based on the potential risks and impact of AI systems.

Key provisions include:

  • Prohibition of biometric categorization systems using sensitive characteristics (for example, political, religious, or philosophical beliefs, sexual orientation, race)
  • Restriction on untargeted scraping of facial images for facial recognition databases
  • Ban on emotion recognition in workplaces and educational institutions
  • Prohibition of social scoring based on personal characteristics
  • Prohibition of AI exploiting vulnerabilities based on age, disability, or socioeconomic status
  • Narrow, safeguarded exceptions permitting law enforcement to use remote biometric identification (RBI) systems in publicly accessible spaces
  • Citizens' right to launch complaints and receive explanations about decisions based on high-risk AI systems
  • Transparency requirements for general-purpose AI systems, including technical documentation and compliance with EU copyright law

The AI Act focuses on risk assessments for AI systems, with obligations increasing based on the potential risk to individual rights or health. The agreement introduces transparency requirements for all general-purpose AI models and imposes tougher regulations on more powerful models. In essence, the EU AI Act strategically adapts its regulatory intensity based on the perceived risk levels associated with different AI systems.

By doing so, the legislation seeks to strike a balance between fostering innovation, ensuring transparency, and mitigating potential harms arising from advanced AI technologies. This nuanced approach reflects the EU's commitment to responsible AI governance and aligns with its broader initiatives to establish a trustworthy and ethical AI landscape.
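To make this tiered logic more concrete, the short Python sketch below models how obligations might scale across the Act's widely cited risk tiers (unacceptable, high, limited, and minimal). The tier names follow the Act's risk-based structure, but the obligation summaries and all identifiers are simplified, hypothetical assumptions for illustration, not legal guidance.

    # Illustrative sketch only: hypothetical names, simplified obligations.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
        HIGH = "high"                  # strict duties before and after deployment
        LIMITED = "limited"            # mainly transparency duties
        MINIMAL = "minimal"            # largely unregulated

    # Simplified, non-exhaustive obligation summaries per tier.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["prohibited"],
        RiskTier.HIGH: ["risk assessment", "technical documentation", "human oversight"],
        RiskTier.LIMITED: ["disclose AI use to users"],
        RiskTier.MINIMAL: ["no specific obligations"],
    }

    def obligations_for(tier: RiskTier) -> list[str]:
        """Look up the simplified obligation list for a given risk tier."""
        return OBLIGATIONS[tier]

    for tier in RiskTier:
        print(f"{tier.value}: {', '.join(obligations_for(tier))}")

In practice, classifying a given system into a tier is itself the hard legal question; the sketch only illustrates why compliance effort scales with assessed risk.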

The AI Act won't take full effect until two years after final approval by European lawmakers, which is expected in early 2024. The Act will apply directly to the EU's nearly 450 million residents, but experts say its impact could be felt far beyond that because of the EU's leadership role in creating regulations and policies that serve as global standards.

The law's global influence is expected to extend as a blueprint for other nations navigating AI regulation. However, the EU's decision not to fully ban live facial recognition has drawn criticism, with Amnesty International deeming it a "hugely missed opportunity" and expressing concerns about potential global repercussions, particularly for human rights and civil liberties.

Decoding the American AI Regulatory Landscape

The U.S. AI regulatory landscape is characterized by a mix of existing federal and state laws, frameworks from various federal agencies, and a growing number of state-specific laws. This decentralized approach contrasts with the EU's top-down model.

The legislative approach to AI centers on an "incentive-based" strategy, with policymakers seeking to create conditions that retain AI developers. The emphasis on incentives reflects a concerted effort to prevent AI development from shifting to competing nations, highlighting the delicate balance between promoting innovation and safeguarding national security.

Despite the polarized and challenging political environment in the U.S. in recent years, both parties in Congress have consistently supported AI governance efforts. This bipartisan commitment reflects a shared recognition of the transformative potential and strategic importance of AI.

While the EU's AI law is in its final stages, the U.S. is unlikely to pass a comprehensive national AI law in the near future. Instead, a flurry of executive actions is expected. The outcome may not be a broad national AI law, but rather a nuanced, domain-specific regulatory landscape that navigates the complexities of AI governance.

For example, in October 2023, U.S. President Joe Biden signed an executive order on AI. The order requires prominent AI developers to disclose safety test results and pertinent information to the government. Furthermore, government agencies will establish standards to ensure the safety of AI tools before their public release, accompanied by guidelines for labeling AI-generated content.

Biden's directive extends the momentum generated by voluntary commitments from major tech companies, such as Amazon, Google, Meta, and Microsoft, which had pledged to ensure the safety of their products before launch. In addition, the U.S. government is expected to increase spending on AI and AI research, especially in the defense and intelligence sectors.

At KuppingerCole, we foresee that even as private companies implement their own "responsible AI" initiatives, AI trade frictions with the EU may arise. Moreover, concerns linger about enforceability and binding measures, with critics highlighting potential disparities with the more stringent requirements of the EU's recently agreed AI Act.

Balancing Acts: Concluding Thoughts

The dominance of major AI players in the U.S. raises concerns about the delayed availability of powerful AI models in the EU, potentially impacting technological and economic development in the region. This divergence also underscores the global competition in AI, with China and Russia potentially adopting different, less democratically oriented approaches. The question arises: can safety be adequately ensured through legal frameworks alone, especially given the divergent strategies of major players in the AI arena?

Exploring this critical angle prompts deeper reflection on how effectively legal frameworks can balance safety concerns with the rapid development and deployment of advanced AI technologies, especially given varying global approaches. The global AI debate involves geopolitical competition, national security, and economic competitiveness. Therefore, ensuring safety in AI may require a multifaceted approach that goes beyond legal regulation, incorporating international collaboration, ethical standards, and shared best practices.

Despite differing approaches to AI regulation, both the U.S. and the EU share the common goal of ensuring safe and democratic AI, creating the potential for a coordinated transatlantic approach. The EU-US Trade and Technology Council has already shown early success in collaborating on metrics and methodologies for trustworthy AI, underscoring the need for further alignment and knowledge sharing. Given their combined influence on global AI governance, it is essential that the EU and the U.S. converge in key areas.