What the US Can Learn From the UK and EU About Regulating AI

There are ways to protect the public from the potential dangers of AI without stifling innovation—and the Europeans have already shown us how.
November 6, 2024

This article originally appeared in SC Media.

California Gov. Gavin Newsom vetoed a bill last month that would have enacted the most significant AI legislation to date in the United States.

The measure was seen by legislators as offering a potential blueprint for federal regulation, focused on making tech companies legally liable for the harm caused by their AI models. It would have forced the industry to conduct safety tests on powerful AI models and mandated that tech companies enable a “kill switch” for AI technology to stop potential misuse.

Newsom argued that while the AI safety bill's intentions were valid, it took a broad-brush approach, applying uniform regulation to all large models without distinguishing between high-risk AI applications and more benign ones.

The governor pointed out that the bill focused on large-scale, expensive AI models, which would potentially give the public a false sense of security by targeting only high-cost systems. Smaller, more specialized AI models, which arguably pose equal or even greater risks, were not sufficiently addressed. Additionally, the bill applied strict safety protocols to all large models, regardless of their actual deployment in high-risk environments or their involvement with sensitive data. As a result, Newsom feared that the bill could create an overly restrictive environment that might hamper innovation.

The bill—and Newsom’s decision to veto it—has sparked widespread debate about the best approach to regulating AI, specifically when it comes to reducing risk without stifling innovation. Other regions, such as the UK and the European Union (EU), are also navigating this debate using various approaches.

Considerations for Regulating AI

So, what are some of the important considerations that go into developing AI regulation? And what can the U.S. learn from the UK and EU, which are already doing it effectively today?

Let’s take a closer look:

In comparison to the U.S., both the UK and EU are further along in their regulatory efforts. And unlike the proposed AI bill in California, both regions emphasize regulation that distinguishes high-risk applications from lower-risk ones, regardless of whether they run on large models or smaller, specialized ones.

For example, under Prime Minister Keir Starmer’s government, the UK promotes a safety-focused AI regulatory framework that seeks to prevent misuse by enhancing transparency, human oversight, and data quality standards. It’s particularly focused on high-risk sectors like healthcare and criminal justice, areas in which AI is most likely to be misused or abused.

This approach aligns closely with the EU’s AI Act, which also imposes compliance requirements on high-risk AI applications, such as those in healthcare, finance, and public services. The stringent EU AI Act bans AI systems that pose an "unacceptable level of risk," including social scoring algorithms. Both the UK and EU recognize the importance of public trust in AI, especially in critical sectors, and their regulatory frameworks aim to ensure that AI systems are explainable, reliable, and fair.

But while both the UK and EU regulations aim to mitigate risks, there are still concerns that this strict approach might stifle innovation, particularly for smaller companies. For example, the compliance costs associated with these regulations could become prohibitive for startups—potentially limiting the development of cutting-edge AI technologies.

Lessons for the United States

The U.S., which today lacks comprehensive federal AI regulation, could learn several lessons from the UK and EU. First, the European regulations are based on the actual risk an AI system poses. Both the UK and EU focus on strictly regulating high-risk AI systems while allowing more flexibility for low-risk applications. This targeted approach could help avoid stifling innovation through over-regulation, which was one of the main concerns Newsom highlighted in his veto.

Additionally, the emphasis on transparency, human oversight, and accountability in both models offers a roadmap for how the U.S. could structure its own AI governance. Ensuring that AI systems are explainable and accountable is crucial for public trust, particularly as these technologies become more integrated into everyday life.

Another strategy that the UK has adopted, which the U.S. could potentially benefit from, is the use of regulatory sandboxes. Sandboxing lets tech companies experiment with AI technology in a controlled environment, fostering innovation while ensuring that AI applications are subject to rigorous safety testing before being deployed at scale.

Finally, as the U.S. considers its own AI regulations, it should also focus on international competitiveness. The EU's AI Act has already set a global standard, and many U.S. companies will need to comply with these rules when operating in Europe. Aligning U.S. regulations with global standards could help streamline compliance and ensure that American companies remain competitive on an international stage.

In short, Gavin Newsom’s veto of California’s AI safety bill highlights the challenges of balancing innovation with safety in a rapidly evolving landscape. While his concerns are valid, the experiences of both the UK and the EU show that it’s possible to create a regulatory framework that protects public safety without unduly restricting technological development.

Adopting targeted, risk-based regulations, fostering transparency and accountability, and supporting innovation through regulatory sandboxes are just a few of the strategies that the U.S. may consider as it continues to develop complex legislation around AI—legislation that's essential for maintaining public trust and driving responsible AI development.

Interested in learning more about AI and how it can protect your organization from advanced cyber attacks? Schedule a demo today!
