
The EU’s Landmark AI Act Will Put the U.S. in 3rd (Maybe 4th) Place

China has already developed draft regulations to manage generative AI products like ChatGPT. The European Parliament votes on its proposed rules Wednesday and the U.K. will hold the world’s first global AI summit this fall. The U.S. needs to pick up the pace.

Perspective

(UPDATE: European Parliament lawmakers voted to approve a draft version of the Artificial Intelligence Act on June 14. Final approval of the bill is expected by the end of this year, following negotiations with the Council of the European Union and EU member states.)

The European Parliament is debating landmark legislation today that would regulate artificial intelligence (AI) based on the level of risk posed by a technology. The measure is expected to pass in a vote on Wednesday, which would make the European AI Act the first law in the West to rein in this fast-evolving field.

That follows China, which released draft rules for regulating generative AI in April. And last week the U.K. announced plans for the first global AI summit, which will bring together an international array of government leaders, tech company executives, and scientific researchers.

Meanwhile, the U.S. approach to AI regulations has been fragmented, with states enacting their own patchwork AI laws and various bills in Congress jockeying for position with proposals and anticipated rules from a bevy of new offices and task forces at federal administrative agencies. The cacophony, like any sprawl of regulations, is confusing—good luck to any company trying to weigh its options and obligations when it comes to adopting AI technology. It’s also slowing us down.


If the U.S. hopes to lead the world in AI governance, it’s now going to have to come up from behind. Right now, it looks like we’re in third place. Maybe fourth.

Though the U.S. is making progress in developing AI standards, the lack of a comprehensive national strategy means we’re playing catch-up, reacting to others’ policies rather than establishing our own rules of the road.

“The absence of a more comprehensive approach means that the U.S. is unable to present a model for how to move forward globally with AI governance, and instead is often left responding to other countries’ approaches to AI regulation,” wrote Brookings Institution senior fellow Joshua P. Meltzer in a blog post last month.

Here’s a quick breakdown of where the front-runners stand in terms of AI regulation.

The EU’s AI Act is all about proportional risk

The AI Act divvies up AI applications into four categories: minimal or no risk, limited risk, high risk, and unacceptable risk.

Anything that falls into the unacceptable-risk camp would be banned in the EU, including applications that use:

  • Subliminal, manipulative, or deceptive techniques to distort behavior. Think deepfake videos that show celebs, politicians, bullied kids, and anybody else in a (fake but all-too-real-looking) compromising position. These are coming soon to a TikTok video near you.
  • Predictive techniques that purport to anticipate criminal behavior. Stream the Tom Cruise film Minority Report for a (fictional but not unrealistic) primer on how wrong that can go.
  • Emotion AI, a growing field in which machines analyze minute details in facial expressions and voice inflections to better read human emotion. (Be honest—you’ve yelled at Siri or Alexa more than once, we’re guessing. They’ll soon be listening and taking note.)
  • Untargeted scraping, a method of swiping images off Facebook and other social media platforms without users’ consent, to create or expand facial-recognition databases. (Clearview AI is a leader in this pack—in April it admitted to the BBC that it had harvested billions of social media photos and sold them to police departments.)

[Read also: What the tech sector can learn from TikTok—trust is everything]

The European act also takes a swipe at ChatGPT. Companies working on generative AI, large language models (LLMs), and other “foundation models”—that includes the Microsoft-backed OpenAI and its ChatGPT, as well as Google’s Bard—would have to incorporate data governance measures into product development, assessing and mitigating risks to health, safety, and human rights before unleashing their products on the public.

China’s rules emphasize conformity

Back in 2017, two chatbots were taken offline when users of a popular messaging app shared screenshots of their human-to-robot conversations online. Comments like “My China dream is to go to America” and “No” (in response to the question “Do you love the Communist Party?”) didn’t exactly please government censors, and officials at China’s cyberspace agency are making sure that kind of embarrassment doesn’t happen again.

No.

A Chinese chatbot, when asked if it loved the Communist Party. (It was promptly censored.)

The agency’s new Measures for the Management of Generative Artificial Intelligence Services outline 20 articles to govern corporate behavior, including requirements that providers bear responsibility for the legality of the data sources used to train generative AI (article 7), prevent addiction to generated content (article 10), and purge unprotected personally identifiable information (article 11). The most chilling, though not terribly surprising, is article 4’s demand that all generated content reflect the Chinese Communist Party’s “core values.”

[Read also: AIOps don’t always have to compete with humans in IT Operations—here’s how they can work together]

China, which began doubling down on its efforts to become a world leader in AI technology as far back as 2017 (when those chirpy chatbots caused a stir), seems to have achieved a strong lead. Make that a “stunning lead,” as a recent think tank study (funded by the U.S. State Department) called it, finding that China is ahead in 37 of 44 critical and emerging technologies. How this hard-line take on regulation will impact AI development remains to be seen.

U.S. rules emphasize—wait! What rules?

Regulation here in the U.S. remains a work in progress, with Washington (so far, at least) accepting a voluntary approach to compliance. To be fair, various legislators and bodies are trying to change that, including:

  • The White House—Its “Blueprint for an AI Bill of Rights,” released last year, outlines five principles to help protect user privacy and safety and prevent discrimination.
  • NIST—The National Institute of Standards and Technology published its AI Risk Management Framework in January, along with an accompanying playbook, explainer, roadmap, crosswalk, and explanatory perspectives. They’re nothing if not thorough. All the rules and guidelines, of course, are purely voluntary.
  • Congress—Besides summoning OpenAI CEO Sam Altman to Capitol Hill to testify, leaders are pitching various regulatory ideas. Senate Majority Leader Chuck Schumer (D-NY) launched an effort in April to mobilize industry leaders to help refine legislation that would, in part, require companies to permit an independent expert review of AI technologies before public release. And Sens. Michael Bennet (D-CO) and Peter Welch (D-VT) reintroduced legislation last month to establish an all-new Federal Digital Platform Commission to oversee digital technology.

[Read also: Forget ChatGPT hearings and Meta fines—the surgeon general may be Big Tech’s biggest threat]

“Big Tech has enormous influence over every aspect of our society,” said Welch in a statement. “For far too long, these companies have largely escaped regulatory scrutiny, but that can’t continue.”

“We need an expert federal agency that can stand up for the American people and ensure AI tools and digital platforms operate in the public interest,” Bennet concurred.

That’s really all any country wants for its people. We’ll see who gets there first.

Joseph V. Amodio

Joseph V. Amodio is a veteran journalist, television writer, and the Editor-in-Chief of Focal Point. His work has appeared in The New York Times Magazine, Men's Health, Newsday, Los Angeles Times, CNN.com, and Barrons.com, and has been syndicated in publications around the world. His docudramas have aired on Netflix, Discovery, A&E, and other outlets. He also produces Tanium’s new Let’s Converge podcast—listen here.
