
How to Prepare for the EU’s AI Act: Start With Your Risk Level

Last month, the European Parliament formally adopted the world’s first comprehensive law governing artificial intelligence. Other governments are also setting rules to keep AI in check. With the regulatory temperature rising, it’s time for enterprises to assess their place in the AI value chain.

Perspective

Most businesses dipping their toes into AI waters will not be significantly affected by the new EU AI Act – at least not at first. But they still need to take a closer look, because even seemingly minor uses of AI could be subject to transparency requirements and risk assessments.

The legislation was adopted by the European Parliament on March 13, and the endorsement of the EU Council is expected later this month, with the formal law likely to go into effect in late May or early June.

The EU AI Act creates a risk-based framework that will govern the development, deployment, and use of AI systems in Europe, with the rules taking effect in stages between 2024 and 2026. Companies that ignore or otherwise run afoul of the rules could face fines of up to 7% of their global revenues.


“If you’ve got an AI system that is going to touch Europe in any way, you need to be thinking about the EU AI Act,” says Gretchen Scott, a technology partner in Goodwin Law’s London office.

What’s your AI risk level? The EU AI Act expects you to know

Attorneys recommend taking several steps to assess your standing.


First, Scott says, it’s important to know your place in the AI value chain to see if you might fall into one of the key roles the EU AI Act seeks to regulate, such as providers, deployers, distributors, manufacturers, and importers of AI systems.

Next, she says you should determine the risk level of your AI system based on the four levels spelled out in the EU AI Act.

1. Unacceptable risk

This category bans uses of AI that the EU believes would violate fundamental citizen rights or exploit vulnerable groups. Prohibited AI systems include those that cause significant harm by exploiting vulnerabilities to distort behavior; emotion recognition in workplaces and schools; biometric categorization systems that use sensitive characteristics; untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases; social scoring; predictive policing (think Minority Report); and AI tools that manipulate human behavior.

2. High risk

If your business falls into this category, you will face considerable regulatory requirements. AI systems covered by EU product safety laws (such as toys, medical devices, and machinery) are deemed high risk, as are certain AI systems intended for use in critical infrastructure, education and vocational training, employment, essential private and public services such as healthcare and banking, particular law enforcement systems, migration and border management, and the administration of justice and democratic processes.

[Read also: The 3 biggest GenAI threats (plus 1 other risk) and how to fend them off]

Businesses providing high-risk AI systems must implement comprehensive risk and quality management processes and systems, assess and reduce risks, maintain use logs, be transparent and accurate, ensure human oversight, and establish post-market monitoring systems. Meanwhile, businesses deploying high-risk AI systems have robust obligations of their own: using the system as instructed by the provider, carrying out impact assessments, ensuring human oversight and transparency, and monitoring the system for risks.
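For teams wondering what “maintain use logs” could look like day to day, here is a minimal sketch in Python. The Act does not prescribe a log format, so the file name and field names below are assumptions made purely for illustration.

    # Illustrative only: the EU AI Act requires use logs for high-risk AI systems,
    # but it does not prescribe this format. All names below are assumptions.
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    LOG_PATH = Path("ai_use_log.jsonl")  # hypothetical append-only log file

    def record_use(system_id: str, purpose: str, human_reviewer: str, outcome: str) -> None:
        """Append one use-log entry as a JSON line with a UTC timestamp."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,            # which AI system was used
            "purpose": purpose,                # what it was used for
            "human_reviewer": human_reviewer,  # who exercised human oversight
            "outcome": outcome,                # what the system produced or decided
        }
        with LOG_PATH.open("a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    record_use("resume-screener-v2", "candidate shortlisting",
               "reviewer@example.com", "flagged for manual review")

An append-only record along these lines can also feed the post-market monitoring the Act expects, since the same entries can be reviewed periodically for emerging risks.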

3. General purpose AI (GPAI)

Many well-known names in the AI space will be covered by this category, including OpenAI, Google, and Microsoft. For the most part, these are considered low-risk applications. But because GPAI models provide the foundation for many downstream AI systems, GPAI providers are subject to obligations, such as transparency, to recognize their role and responsibilities within the AI ecosystem.

A flurry of U.S.-based intellectual property litigation has highlighted that training GPAI models is like the Wild West right now – especially with respect to allegedly borrowing other people’s data to make AI smart. The EU requires GPAI providers to implement policies for complying with EU copyright law and to publish detailed summaries of the content used for training. Free and open-source models are exempt from some of the GPAI obligations. More powerful GPAI models that carry systemic risk are subject to stricter requirements. Scott says if you are building and distributing large language models (LLMs) with high-impact capabilities, you’ll also need to prove that you are conducting regular model evaluations, assessing and mitigating systemic risks, and reporting any incidents that arise, including cyberattacks.
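The requirement to publish summaries of training content suggests keeping a machine-readable record of data sources as a model is assembled. The sketch below is a hypothetical structure, not an official template; every field name and URL is an assumption made for illustration.

    # Hypothetical sketch of a training-content summary; the EU AI Act does not
    # define this structure, and all field names here are assumptions.
    import json

    training_content_summary = {
        "model_name": "example-llm-7b",  # hypothetical model identifier
        "data_sources": [
            {"name": "licensed-news-archive", "type": "text", "rights_basis": "commercial license"},
            {"name": "public-domain-books",   "type": "text", "rights_basis": "public domain"},
            {"name": "web-crawl-2023",        "type": "text", "rights_basis": "rights-holder opt-outs honored"},
        ],
        "copyright_policy_url": "https://example.com/ai-copyright-policy",  # placeholder
    }

    print(json.dumps(training_content_summary, indent=2))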

4. Limited risk

AI systems that do not fall within any of the other categories will be subject to transparency requirements to ensure users are aware that they are interacting with AI or that the content they are consuming was artificially generated. For example, artificial or manipulated images, audio, and video content (like deepfakes) need to be clearly labeled as such. Businesses that develop or deploy AI will also need to ensure a sufficient level of AI literacy within their organizations.
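As a rough illustration of that transparency idea, the sketch below attaches an explicit AI-generation disclosure to a piece of generated content. The Act requires the disclosure, not this particular shape; the function and field names are assumptions.

    # Minimal sketch of the 'limited risk' transparency obligation: make clear to
    # users that content is AI-generated. Names here are illustrative assumptions.
    def label_ai_output(content: str, model_name: str) -> dict:
        """Bundle generated content with an explicit AI-generation disclosure."""
        return {
            "content": content,
            "ai_generated": True,
            "disclosure": f"This content was generated by an AI system ({model_name}).",
        }

    labeled = label_ai_output("Draft quarterly summary...", "example-llm")
    print(labeled["disclosure"])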

[Read also: Meet the chief AI officer (CAIO) – it’s time to make room in the C-suite]

Sorting out your EU AI Act obligations (and AI governance)

Once you understand your role in the AI value chain and the risk classification of your AI systems, you will be able to determine your regulatory obligations. Scott says most AI systems will not be prohibited or considered high risk under the EU AI Act. “But we believe the impact of the EU AI Act will be felt widely as we see businesses take measures to mitigate risks associated with developing and deploying AI systems that carry the highest regulatory burden,” she says.
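One way some teams keep the mapping from risk tier to obligations straight internally is with a simple lookup like the one sketched below. It is a drastically simplified illustration, not legal guidance; the tier names follow the Act, but the one-line obligation summaries and all identifiers are assumptions.

    # Drastically simplified, illustrative mapping from EU AI Act risk tiers to
    # headline obligations; not legal guidance. Identifiers are assumptions.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"   # prohibited uses
        HIGH = "high"                   # heaviest regulatory burden
        GPAI = "general_purpose"        # foundation-model obligations
        LIMITED = "limited"             # transparency obligations

    HEADLINE_OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["do not build or deploy"],
        RiskTier.HIGH: ["risk and quality management", "use logs",
                        "human oversight", "post-market monitoring"],
        RiskTier.GPAI: ["copyright policy", "training-content summaries",
                        "model evaluations if systemic risk"],
        RiskTier.LIMITED: ["disclose AI interaction", "label generated content",
                           "AI literacy"],
    }

    def obligations_for(tier: RiskTier) -> list[str]:
        """Look up the summarized obligations for a given risk tier."""
        return HEADLINE_OBLIGATIONS[tier]

    print(obligations_for(RiskTier.HIGH))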


Establishing at least a minimal level of AI governance is key to compliance, says Ana Hadnes Bruder, a partner with the Mayer Brown law firm. And companies whose AI falls into any high-risk category should implement robust AI governance.

“What that means in practical terms is you should have a task force or an AI committee to assess the organization’s development or use of AI and make sure key stakeholders within the organization are involved,” says Bruder, who is based in Frankfurt, Germany, and specializes in data privacy, cybersecurity, and AI matters. “And you should try to make that group as diverse as possible because different people have different points of view, and that’s really valuable for making sure you are avoiding bias in your AI.”

The EU AI Act sparks more debate – to comply or not to comply

Bruder says one of the key questions most AI committees will invariably raise is whether their AI implementations really rise to a level where they need to achieve compliance with the EU AI Act. After all, complying with any regulation can be time-consuming and costly. But Bruder believes the writing is on the wall: this regulation is just the beginning.

“I’ve been advising my clients they should expect this to pick up,” Bruder says.

In fact, while attorneys do not expect the EU AI Act to be replicated elsewhere the way the General Data Protection Regulation (GDPR) was, numerous governments are looking to rein in AI’s unfettered expansion. For instance, the Biden administration issued an executive order last year requiring developers of AI systems that might threaten U.S. national security, the economy, public health, or safety to disclose the results of safety tests before publicly releasing their products. Chinese regulators have also laid down rules governing generative AI. As of last October, 31 countries had reportedly passed AI legislation and 13 more were debating AI laws.

For the most part, large AI companies have given a lukewarm thumbs-up to such regulations: they recognize they can’t escape them and hope to help establish ground rules that keep them in business. At the same time, some AI leaders have publicly expressed worries that, without limits in place, people could lose control of AI.

Yet, there have also been concerns about regulations stifling innovation – something lawmakers sought to avoid as they negotiated elements of the EU AI Act over the last five years or so.

[Read also: The ultimate guide to AI cybersecurity – benefits, risks, and rewards]

Evi Fuelle, global policy director for CredoAI, an AI governance platform, believes the final legislation landed in a good spot in that regard.

“The EU AI Act is not stifling innovation; it is encouraging it,” she says. “We often say that a lack of trust is a bigger blocker in this ecosystem than compliance.”

The EU AI Act gives businesses certainty by outlining which uses of AI are prohibited, high-risk, or minimal to no risk, and the corresponding transparency obligations for each, Fuelle says.

“It gives companies the ability to invest with certainty in responsible AI by design because they know that this major marketplace has vehemently agreed that it will not accept the use or availability of high-risk AI models that are not transparent and trustworthy,” she says.

Wendy Lowder

Wendy Lowder is a freelance writer based in Southern California. When she’s not reporting on hot topics in business and technology, she writes songs about life, love, and growing up country.
