A Privacy Engineer's Guide to the EU AI Act

I've been thinking about how the EU AI Act's requirements fit within existing privacy review processes, and within software development processes more generally. After the past few years of gradually improving data governance practices thanks to GDPR and other sources, tossing in a few more compliance requirements shouldn't be a big deal, right? Here are some answers and plenty of references for those who are just getting started.

What is it?

The EU AI Act is a risk-based framework for evaluating the creation, use, and deployment of AI models. Some uses of AI are strictly prohibited, while requirements for others vary. Here's the raw text and a helpful AI Act Explorer.


Obligations are based primarily on whether you're a provider of an AI system (including general-purpose AI systems) or a deployer. (Definitions here.) It will be interesting to see where the line is drawn between provider and deployer; in other words, whether a deployer can modify a general-purpose system enough to become a provider.

Jurisdiction is very broad, and the risk assessments are valuable no matter what (and likely to be mirrored in other laws around the world), so it's best to start thinking about this now. Your lawyer should have more to say on the matter if you're really curious.

Are all models covered by the Act?

The Act defines the following risk categories:
  • Unacceptable risk AI systems, like social scoring, which are prohibited outright
  • High risk AI systems, which are subject to strict requirements
  • Limited risk AI systems, like a chatbot
  • Minimal or no risk AI systems, like spam filters, which can be used freely
Check out this handy infographic for more examples of each category, and definitely read this explainer from the European Commission (EC). The Act details the obligations, if any, for each category. Additional guidance and rulemakings from regulators are on the way.
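If you track reviews in a ticketing system or in code, the four tiers map naturally onto a small data structure. Here's a minimal, hypothetical sketch in Python; the tier names follow the Act, but the field names and example system are made up for illustration:

```python
# Hypothetical sketch: representing the Act's risk tiers in an intake tool.
# Tier names mirror the Act's categories; the example system is illustrative only.
from enum import Enum

class AIActRiskTier(Enum):
    UNACCEPTABLE = "prohibited"     # e.g., social scoring
    HIGH = "high risk"              # strict requirements, conformity assessment
    LIMITED = "limited risk"        # transparency duties, e.g., chatbots
    MINIMAL = "minimal or no risk"  # e.g., spam filters

# Example: record the tier a reviewer assigned during intake.
review = {"system": "customer support chatbot", "tier": AIActRiskTier.LIMITED}
print(review["tier"].value)  # -> limited risk
```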

There are separate disclosure requirements for general purpose (aka foundation) models, too.

What's a conformity assessment?

For high-risk systems, a conformity assessment must be performed and registered with regulators to show that the Act's requirements are met. This means demonstrating, before the system is made available or deployed, that controls are in place for things like risk management, data governance, technical documentation, record-keeping and logging, transparency and instructions for deployers, human oversight, and accuracy, robustness, and cybersecurity.
In some cases, like biometrics, a third party will need to perform the assessment, and the use of third parties will broaden over time. Check with your lawyer. All of these categories should be familiar to privacy practitioners, and are hopefully being done already (under a different heading), aside from the AI-specific checks.
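As a process aid, those control areas can be tracked like any other pre-launch gate. Below is a minimal sketch, assuming a simple attestation dictionary; CONTROL_AREAS and missing_controls are hypothetical names, not an official schema, and the control list is my summary of the Act's high-risk requirements:

```python
# Hypothetical pre-deployment gate: flag any control area for a high-risk
# system that still lacks an owner or a link to evidence.
CONTROL_AREAS = [
    "risk management system",
    "data and data governance",
    "technical documentation",
    "record-keeping / logging",
    "transparency and instructions for deployers",
    "human oversight",
    "accuracy, robustness, and cybersecurity",
]

def missing_controls(attestations: dict) -> list:
    """Return the control areas with no recorded evidence link."""
    return [area for area in CONTROL_AREAS if not attestations.get(area)]

# Example: only human oversight has been documented so far.
print(missing_controls({"human oversight": "link-to-oversight-review"}))
```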

This November 2023 guide based on a draft of the Act digs in more, or the plain text of the requirements can be found here. This May 8, 2024 post from Securiti provides a great breakdown of the requirements, who must perform them, and additional details. They're just one of many compliance SaaS companies offering support services.

This graphic comes from the EC explainer:



Look for regulators to issue more guidance on the contents and coverage of these assessments.

Can I just tweak my existing privacy program to comply?

That seems like a great place to start. While there are AI Act-specific requirements, they should fit comfortably within your overall GRC program. Here are a few ideas:
  • Add "Add AI Safety checks" to your risk registry
  • Add an AI Safety item to meeting agenda and PRD templates
  • Update existing meeting invitations to include AI stakeholders as needed, until totally separate reviews become necessary
  • Update your data tagging taxonomy to include AI risk- and safety-related tags. Step 0 might be simply indicating whether a dataset has been reviewed; later, tag which type of system (high risk, etc.) it's being used for (see the sketch after this list)
  • Update privacy intake or review forms to include checks for the prohibited or high risk model types, which may trigger additional requirements and conformity assessments. If you've got a "Do we need a DPIA?" question that kicks off additional review, this might be a good spot to start.
From there, start moving toward a more formal governance framework (see below) or maybe a third party compliance tool to help tighten the screws.
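To make the tagging idea concrete, here's a minimal sketch of what an AI-aware catalog entry and intake check might look like. The field names (ai_reviewed, ai_risk_tier, conformity_assessment_id) are invented for illustration; map them onto whatever your catalog and intake forms already support:

```python
# Hypothetical dataset catalog entry with AI-risk tags layered onto
# existing privacy tags. All field names here are illustrative.
dataset_entry = {
    "name": "support_tickets_2024",
    "pii": True,                            # existing privacy tag
    "retention_days": 365,                  # existing privacy tag
    "ai_reviewed": True,                    # Step 0: has anyone reviewed AI use at all?
    "ai_risk_tier": "high",                 # "prohibited" | "high" | "limited" | "minimal"
    "conformity_assessment_id": "CA-0000",  # placeholder link back to the assessment record
}

# Intake-form style check, next to the "Do we need a DPIA?" question:
if dataset_entry["ai_risk_tier"] in {"prohibited", "high"}:
    print("Escalate: conformity assessment and additional review required.")
```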

How are people managing AI risk now?

Where can I learn more?

Specifically for the AI Act, check out the Future of Privacy Forum's coverage which includes resources like timelines, analysis of GDPR cases on automated decision making (that may influence interpretations of the AI Act), and a few other reports and recorded webinars. For more general info about the ML safety world, see below.

These open source collections are goldmines of practical and academic details for security and privacy specifically:
The AI Incident Database is ... you guessed it, a searchable database of AI incidents. Use it during product reviews, e.g., search for "personalization" if you're building such a system.

This AI Ethics & Policy News spreadsheet curated by Professor Casey Fiesler includes 1,154 categorized news articles and counting.

Check out the case studies on Google's AI Governance reviews and operations page.

AI Alignment: A Comprehensive Survey provides "a comprehensive and up-to-date overview of the alignment field," which "aims to make AI systems behave in line with human intentions and values." Asking models like ChatGPT to define and discuss alignment is also fun. Leading companies like Anthropic and OpenAI have published info about their approach, and communities like the AI Alignment Forum capture the latest research in one place.

What should I be on alert for?

  • Guidance and further rulemaking on conformity assessments and documentation requirements
  • EU DPA or EDPB guidance related to automated decision making under the GDPR, which is likely to be referenced in AI Act interpretations. Analysis of relevant cases through May 2022 can be found here.
  • Activity in the US
Good luck, and stay safe out there!
