A Privacy Engineer's Guide to the EU AI Act
I've been thinking about the ways the EU AI Act's requirements fit within existing privacy review or software development processes more generally. After the past few years of gradually improving data governance practices thanks to GDPR and other sources, tossing in a few more compliance requirements shouldn't be a big deal, right? Here are some answers and plenty of references for those who are just getting started.
What is it?
The EU AI Act is a risk-based framework for evaluating the creation, use, and deployment of AI models. Some uses of AI are strictly prohibited, while requirements for others vary. Here's the raw text and a helpful AI Act Explorer.

Obligations are based primarily on whether you're a provider (of a general-purpose AI system) or deployer. (Definitions here.) It will be interesting to see where the line is drawn between provider and deployer; in other words, whether a deployer can modify a general-purpose system enough to become a provider.
Jurisdiction is very broad, and the risk assessments are valuable no matter what (and likely to be mirrored in other laws around the world), so it's best to start thinking about this now. Your lawyer should have more to say on the matter if you're really curious.
Are all models covered by the Act?
The Act defines the following risk categories (a small triage sketch follows the list):
- Unacceptable risk AI systems, which are prohibited, like social scoring
- High risk AI systems, which are subject to strict requirements
- Limited risk AI systems, like a chatbot
- Minimal or no risk AI systems, which can be used freely, like spam filters
There are separate disclosure requirements for general purpose (aka foundation) models, too.
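To make the tiers concrete, here's a minimal sketch of an intake triage helper. The tier names mirror the Act's categories, but the example use cases, the mapping, and the default-to-high-risk behavior are my own assumptions, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"              # e.g., social scoring
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"     # e.g., chatbots
    MINIMAL = "no new obligations"           # e.g., spam filters

# Hypothetical use-case-to-tier mapping, for first-pass triage only.
EXAMPLE_TRIAGE = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "resume_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the presumed tier; default to HIGH so unknown systems
    get human review rather than a free pass."""
    return EXAMPLE_TRIAGE.get(use_case, RiskTier.HIGH)
```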
What's a conformity assessment?
For high-risk systems, a conformity assessment must be performed and registered with regulators to show that the Act's requirements are met. This includes demonstrating implementation of the following controls before a system is made available or deployed (a sketch of tracking them follows the list):
- A risk management system
- Data governance
- Technical documentation
- Record keeping and logs
- Transparency and provision of information
- Human oversight
- Accuracy, robustness, and cybersecurity
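If you want to track those controls as structured data, here's a minimal sketch using the seven control names from the list above; the dataclass itself and its gating method are illustrative assumptions, not a format the Act prescribes.

```python
from dataclasses import dataclass, fields

@dataclass
class ConformityChecklist:
    risk_management_system: bool = False
    data_governance: bool = False
    technical_documentation: bool = False
    record_keeping_and_logs: bool = False
    transparency_and_provision_of_information: bool = False
    human_oversight: bool = False
    accuracy_robustness_cybersecurity: bool = False

    def ready_to_deploy(self) -> bool:
        # Every control must be in place before the system is made
        # available or deployed.
        return all(getattr(self, f.name) for f in fields(self))
```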
This November 2023 guide based on a draft of the Act digs in more, or the plain text of the requirements can be found here. This May 8, 2024 post from Securiti provides a great breakdown of the requirements, who must perform them, and additional details. They're just one of many compliance SaaS companies offering support services.
Look for regulators to issue more guidance on the contents and coverage of these assessments.
Can I just tweak my existing privacy program to comply?
That seems like a great place to start. While there are AI Act-specific requirements, they should fit comfortably within your overall GRC program. Here are a few ideas (with a sketch of the last one after the list):
- Add "AI safety checks" to your risk registry
- Add an AI safety item to meeting agendas and PRD templates
- Update existing meeting invitations to include AI stakeholders as needed, until totally separate reviews become necessary
- Update your data tagging taxonomy to include AI risk- and safety-related tags. Step 0 might be simply indicating whether a dataset has been reviewed, and later, which type of system (high risk, etc.) it's being used for.
- Update privacy intake or review forms to include checks for the prohibited or high-risk system types, which may trigger additional requirements and conformity assessments. If you've got a "Do we need a DPIA?" question that kicks off additional review, this might be a good spot to start.
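As a sketch of that last idea: if your intake form already gates on a DPIA question, an AI triage question can hang off the same flow. Every field name, set member, and function below is hypothetical.

```python
# Hypothetical set of use cases that would trigger deeper review under
# the Act's prohibited/high-risk categories.
PROHIBITED_OR_HIGH_RISK = {
    "social_scoring",
    "biometric_identification",
    "employment_decisions",
    "credit_scoring",
}

def review_intake(form: dict) -> list[str]:
    """Return the follow-up reviews an intake submission should trigger."""
    followups = []
    if form.get("needs_dpia"):
        followups.append("DPIA")
    if form.get("uses_ai"):
        followups.append("AI safety review")
        if form.get("ai_use_case") in PROHIBITED_OR_HIGH_RISK:
            followups.append("conformity assessment (or prohibition check)")
    return followups

# Example: a submission that needs a DPIA and also uses AI for hiring.
print(review_intake({"needs_dpia": True, "uses_ai": True,
                     "ai_use_case": "employment_decisions"}))
# -> ['DPIA', 'AI safety review', 'conformity assessment (or prohibition check)']
```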
How are people managing AI risk now?
- Use of the NIST AI Risk Management Framework
- Alignment to ISO 42001
- Publishing model cards (see the sketch after this list)
- Using a SaaS vendor
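On the model card item: here's a minimal sketch of a card as structured metadata, loosely following the fields from Mitchell et al.'s "Model Cards for Model Reporting." The schema and every value below are illustrative; teams often publish these as markdown or via platform tooling instead.

```python
# Illustrative model card; every value here is made up for the example.
model_card = {
    "model_details": {
        "name": "example-ticket-classifier",
        "version": "1.2.0",
        "owners": ["privacy-eng@example.com"],
    },
    "intended_use": "Spam detection on inbound support tickets.",
    "out_of_scope_uses": ["Employment, credit, or other high-risk decisions"],
    "training_data": {
        "sources": ["internal ticket corpus"],
        "contains_personal_data": True,
    },
    "evaluation": {"accuracy": 0.94, "evaluated_on": "held-out Q1 2024 sample"},
    "risk_tier": "minimal",  # per the Act's categories, after triage
}
```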
Where can I learn more?
Specifically for the AI Act, check out the Future of Privacy Forum's coverage, which includes resources like timelines, analysis of GDPR cases on automated decision-making (that may influence interpretations of the AI Act), and a few other reports and recorded webinars. For more general info about the ML safety world, see below.

These open source collections are goldmines of practical and academic details for security and privacy specifically:
- https://github.com/trailofbits/awesome-ml-security
- https://github.com/stratosphereips/awesome-ml-privacy-attacks
This AI Ethics & Policy News spreadsheet curated by Professor Casey Fiesler includes 1,154 categorized news articles and counting.
Check out the case studies on Google's AI Governance reviews and operations page.
AI Alignment: A Comprehensive Survey provides "a comprehensive and up-to-date overview of the alignment field," which "aims to make AI systems behave in line with human intentions and values." Asking models like ChatGPT to define and discuss alignment is also fun. Leading companies like Anthropic and OpenAI have published info about their approach, and communities like the AI Alignment Forum capture the latest research in one place.
What should I be on alert for?
- Guidance and further rulemaking on conformity assessments and documentation requirements
- The European AI Office will be hosting a webinar on May 16, 2024 covering the risk management system requirement for high-risk systems and general purpose models.
- EU DPA or EDPB guidance related to automated decision-making under the GDPR, which is likely to be referenced in AI Act interpretations. Analysis of relevant cases through May 2022 can be found here.
- Activity in the US
- US state laws and regulations. From a recent Gibson Dunn blog post, "Although SB 205 (the Colorado law) would be the most comprehensive AI-specific state law, it is not the only state to move in this area in 2024. This year alone, Utah and Tennessee enacted AI legislation (tackling consumer deception by generative AI and AI deepfakes, respectively), while the California Consumer Privacy Protection Agency (“CPPA”) has been making progress with its draft regulations related to automated decision-making technology (“ADMT”)."
- FTC guidance and enforcement activity
- Issuance of Guidelines and best practices from NIST per the October 30, 2023 Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence