What the US Can Learn From the UK and EU About Regulating AI

There are ways to protect the public from the potential dangers of AI without stifling innovation—and the Europeans have already shown us how.
November 6, 2024

This article originally appeared in SC Media.

California Gov. Gavin Newsom vetoed a bill (SB 1047) last month that would have enacted the most significant AI legislation to date in the United States.

Legislators saw the measure as a potential blueprint for federal regulation, one focused on making tech companies legally liable for the harm caused by their AI models. It would have forced the industry to conduct safety tests on powerful AI models and mandated that tech companies build in a “kill switch” for AI technology to stop potential misuse.

Newsom argued that while the AI safety bill's intentions were valid, it took a broad-brush approach, applying uniform regulation to all large models without distinguishing between high-risk AI applications and more benign ones.

The governor pointed out that the bill focused on large-scale, expensive AI models, which would potentially give the public a false sense of security by targeting only high-cost systems. Smaller, more specialized AI models, which arguably pose equal or even greater risks, were not sufficiently addressed. Additionally, the bill applied strict safety protocols to all large models, regardless of their actual deployment in high-risk environments or their involvement with sensitive data. As a result, Newsom feared that the bill could create an overly restrictive environment that might hamper innovation.

The bill—and Newsom’s decision to veto it—has sparked widespread debate about the best approach to regulating AI, specifically when it comes to reducing risk without stifling innovation. Other regions, such as the UK and the European Union (EU), are also navigating this debate using various approaches.

Considerations for Regulating AI

So, what are some of the important considerations that go into developing regulation for AI? And what can the U.S. learn from the UK and EU, which are already doing it effectively today?

Let’s take a closer look:

Compared to the U.S., both the UK and EU are further along in their regulatory efforts. And unlike the proposed AI bill in California, both regions emphasize regulation that distinguishes high-risk applications from low-risk ones, regardless of whether those applications rely on large models or smaller, specialized ones.

For example, under Prime Minister Keir Starmer’s government, the UK promotes a safety-focused AI regulatory framework that seeks to prevent misuse by enhancing transparency, human oversight, and data quality standards. It’s particularly focused on high-risk sectors like healthcare and criminal justice, areas in which AI is most likely to be misused or abused.

This approach aligns closely with the EU’s AI Act, which also imposes compliance requirements on high-risk AI applications, such as those in healthcare, finance, and public services. The stringent EU AI Act bans AI systems that pose an "unacceptable level of risk," including social scoring algorithms. Both the UK and EU recognize the importance of public trust in AI, especially in critical sectors, and their regulatory frameworks aim to ensure that AI systems are explainable, reliable, and fair.

But while both the UK and EU regulations aim to mitigate risks, there are still concerns that this strict approach might stifle innovation, particularly for smaller companies. For example, the compliance costs associated with these regulations could become prohibitive for startups—potentially limiting the development of cutting-edge AI technologies.

Lessons for the United States

The U.S., which today lacks comprehensive federal AI regulation, could learn several lessons from the UK and EU. First, the European regulations are based on the actual risk an AI system poses. Both the UK and EU focus on strictly regulating high-risk AI systems while allowing more flexibility for low-risk applications. This targeted approach could help avoid stifling innovation through over-regulation, one of the main concerns Newsom highlighted in his veto.

Additionally, the emphasis on transparency, human oversight, and accountability in both models offers a roadmap for how the U.S. could structure its own AI governance. Ensuring that AI systems are explainable and accountable is crucial for public trust, particularly as these technologies become more integrated into everyday life.

Another strategy that the UK has adopted, which the U.S. could potentially benefit from, is the use of regulatory sandboxes. Sandboxing lets tech companies experiment with AI technology in a controlled environment, fostering innovation while ensuring that AI applications are subject to rigorous safety testing before being deployed at scale.

Finally, as the U.S. considers its own AI regulations, it should also focus on international competitiveness. The EU's AI Act has already set a global standard, and many U.S. companies will need to comply with these rules when operating in Europe. Aligning U.S. regulations with global standards could help streamline compliance and ensure that American companies remain competitive on an international stage.

In short, Gavin Newsom’s veto of California’s AI safety bill highlights the challenges of balancing innovation with safety in a rapidly evolving landscape. While his concerns are valid, the experiences of both the UK and the EU show that it’s possible to create a regulatory framework that protects public safety without unduly restricting technological development.

Adopting targeted, risk-based regulations, fostering transparency and accountability, and supporting innovation through regulatory sandboxes are just a few of the strategies that the U.S. may consider as it continues to develop complex legislation around AI—legislation that's essential for maintaining public trust and driving responsible AI development.

Interested in learning more about AI and how it can protect your organization from advanced cyber attacks? Schedule a demo today!
