1. What Is the EU AI Act?
The EU AI Act (Regulation (EU) 2024/1689) is the European Union's landmark horizontal regulation on artificial intelligence. It was formally adopted by the European Parliament on March 13, 2024, and published in the EU Official Journal on July 12, 2024, entering into force on August 1, 2024.
The Act takes a risk-based approach: it does not prohibit AI broadly, but categorises AI systems by the level of risk they pose to health, safety, and fundamental rights — and imposes proportionate obligations on each tier. Systems that pose unacceptable risks are banned outright.
Core goals
- Protect fundamental rights, safety, and democracy from harmful AI.
- Create legal certainty to support AI investment and innovation in the EU.
- Establish a level playing field by applying the same rules to all market participants inside the EU regardless of where they are established.
- Set a global standard — in the way GDPR shaped global data protection, the EU AI Act is expected to influence AI regulation worldwide.
2. Enforcement Timeline
- August 1, 2024 — Act enters into force.
- February 2, 2025 — Chapter I (general provisions) and Chapter II (prohibited AI) start applying. Prohibited practices are now illegal.
- August 2, 2025 — Chapter III Section 4 (notified bodies) and Chapter V (GPAI models) start applying. GPAI model providers must comply with transparency and documentation obligations.
- August 2, 2026 — The main body of the regulation applies, including the full set of obligations for Annex III high-risk AI systems. Most compliance preparation should be complete by this date.
- August 2, 2027 — Obligations for Annex I high-risk AI (AI as a safety component in products covered by EU product legislation) start applying, and GPAI models placed on the market before August 2, 2025 must be brought into compliance.
3. Who Does It Apply To?
The EU AI Act has extra-territorial reach, similar to GDPR:
- Providers — Companies or individuals who develop and place AI systems on the EU market or put them into service. Includes businesses based outside the EU whose AI outputs are used in the EU.
- Deployers — Businesses or public authorities that use AI systems in a professional context in the EU.
- Importers — Companies in the EU that import AI systems from outside the EU.
- Distributors — Anyone else in the supply chain (other than the provider or importer) who makes an AI system available on the EU market.
- Product manufacturers — Companies using AI components as safety features in regulated products.
Who is excluded: private, non-professional use; scientific research and development (under conditions); and free and open-source AI components (with caveats — the exemption does not cover prohibited practices, high-risk systems, or GPAI models with systemic risk).
4. Risk Classification System
The Act creates four tiers of risk:
- Unacceptable Risk — Banned. AI systems that pose clear threats to fundamental rights.
- High Risk — Heavily regulated. Must meet strict conformity requirements before being placed on the market.
- Limited Risk — Transparency obligations only (disclosure to users).
- Minimal Risk — No mandatory obligations. Voluntary codes of conduct encouraged.
The majority of AI deployed today (spam filters, content recommendations, production scheduling tools) falls into minimal risk. High-risk is the tier that requires the most preparation.
5. Prohibited AI Practices
The following AI practices have been banned since February 2, 2025:
- Subliminal manipulation — AI that exploits subliminal techniques or vulnerabilities to manipulate people in ways that damage their interests.
- Social scoring — Systems that evaluate or classify people based on their social behaviour or personal characteristics and subject them to detrimental or unfavourable treatment in unrelated contexts.
- Predictive policing based on profiling — AI used by law enforcement to predict criminal behaviour of individuals solely based on profiling or personality traits.
- Real-time remote biometric identification in public spaces — Live facial recognition by law enforcement in publicly accessible spaces (with narrow exceptions for specific serious crimes and terrorism, requiring prior judicial authorisation).
- Biometric categorisation by sensitive attributes — Using AI to deduce race, political opinions, religious beliefs, sexual orientation, or trade union membership from biometric data.
- Emotion recognition at work and education — Deploying emotion recognition systems in workplaces and educational institutions (with narrow exceptions for medical or safety reasons).
- Untargeted facial image scraping — Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Exploitation of vulnerable people — Exploiting vulnerabilities related to age, disability, or social or economic situation to materially distort behaviour in ways that cause, or are likely to cause, significant harm.
6. High-Risk AI Systems
High-risk AI falls into two categories:
Annex I — AI as safety components in regulated products
AI embedded in products covered by EU product safety legislation: medical devices, vehicles, aviation, machinery, toys, lifts, etc. These must satisfy both the relevant product legislation and the AI Act.
Annex III — Standalone high-risk AI use cases
- Biometrics — Remote biometric identification, biometric categorisation, emotion recognition (excluding prohibited uses).
- Critical infrastructure — AI managing safety components in energy, water, transport, digital infrastructure.
- Education & vocational training — AI that determines access or assigns people to educational institutions; evaluates students.
- Employment & HR — AI used for recruitment, CV screening, promotion decisions, performance monitoring, contract termination.
- Essential services — AI for credit scoring, loan eligibility, insurance pricing, social benefits eligibility, emergency services dispatch.
- Law enforcement — AI for individual risk assessment, polygraph testing, crime analytics used on individuals.
- Migration & asylum — AI for risk assessment of irregular migrants, document authenticity verification, asylum processing.
- Justice & democracy — AI assisting judicial authorities in researching and interpreting facts and the law, and AI intended to influence the outcome of elections or voting behaviour.
7. Obligations for High-Risk AI
If your AI system is classified as high-risk, you must:
Before placing on the market
- Risk management system — Establish and maintain a continuous risk identification, analysis, and mitigation process throughout the lifecycle.
- Data governance — Training, validation, and test datasets must meet quality criteria: relevant, representative, sufficiently free of errors, complete for the intended purpose.
- Technical documentation — Detailed documentation (see Section 13) must be prepared before deployment and kept up to date.
- Record-keeping — Automatic logging of events ("logging by design") that enables post-hoc auditability and traceability; see the logging sketch after this list.
- Transparency to deployers — Complete instructions for use covering purpose, performance, limitations, foreseeable risks, maintenance requirements, and human oversight measures.
- Human oversight — Design the system so natural persons with competence can oversee it, understand its limitations, disregard or override its outputs, and intervene or stop it.
- Accuracy, robustness & cybersecurity — Meet appropriate performance levels; be resilient against errors, faults, and adversarial inputs.
- Conformity assessment — Complete the applicable conformity assessment: most Annex III systems follow an internal control (self-assessment) procedure, while certain biometric systems and Annex I products require a third-party assessment by a notified body (see Section 12).
- EU registration — Register the AI system in the EU database for high-risk AI systems before placing it on the market.
- CE marking — Affix the CE marking where required and draw up an EU Declaration of Conformity.
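
The record-keeping requirement is easiest to satisfy if audit logging is designed in from the start. Below is a minimal sketch of what "logging by design" can look like in practice, using only the Python standard library — the field names, file destination, and the choice to hash rather than store raw inputs are illustrative assumptions, not anything prescribed by the Act:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Append-only, structured audit log: one JSON record per inference event.
logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("inference_audit.log"))

def log_inference(model_version: str, user_input: str, output: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw content, to limit personal data in logs.
        "input_sha256": hashlib.sha256(user_input.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    logger.info(json.dumps(record))

log_inference("cv-screener-1.4.2", "candidate CV text ...", "score=0.82")
```

Keeping the log append-only and versioning the model identifier is what makes post-hoc traceability possible when an authority or an affected person asks how a decision was produced.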
After deployment
- Post-market monitoring — Proactively collect and review performance data throughout the lifecycle.
- Serious incident reporting — Report serious incidents to the national market surveillance authorities without undue delay, and at the latest within 15 days of becoming aware (shorter deadlines apply to the most serious cases).
- Cooperation with authorities — Provide documentation and access upon request from market surveillance authorities.
- Substantial modification — Reassess compliance if you make changes that materially affect the AI system's performance or risk level.
8. Limited-Risk AI Systems
Limited-risk AI has lighter obligations focused on transparency:
- Chatbots & conversational AI — Inform users they are interacting with an AI (unless obvious from context); a minimal disclosure sketch follows this list.
- Deepfakes & AI-generated content — Disclose that content is artificially generated or manipulated (except for clearly artistic or satirical purposes).
- Emotion recognition & biometric categorisation — Inform people when their emotions or biometric characteristics are being processed.
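
One lightweight way to satisfy the chatbot disclosure duty is to make the disclosure part of the response schema itself, so no client can render a reply without it. A minimal sketch — the schema, field names, and wording are assumptions, not mandated text:

```python
from dataclasses import dataclass
from typing import Optional

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

@dataclass
class ChatReply:
    text: str
    is_ai: bool = True                    # machine-readable flag for client UIs
    disclosure: Optional[str] = None      # human-readable notice, shown up front

def reply_with_disclosure(text: str, first_turn: bool) -> ChatReply:
    # Surface the notice prominently on the first turn; keep the
    # machine-readable flag on every reply.
    return ChatReply(text=text, disclosure=AI_DISCLOSURE if first_turn else None)

print(reply_with_disclosure("Hello! How can I help?", first_turn=True))
```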
9. Minimal-Risk AI Systems
The vast majority of AI applications fall here: spam filters, AI-powered search, recommendation systems, inventory management, fraud detection in non-credit contexts, and most productivity tools. There are no mandatory obligations, though voluntary codes of conduct and adherence to standards are encouraged.
10. GPAI: General-Purpose AI Models
General-Purpose AI (GPAI) models are foundation models like GPT-4o, Claude, Gemini, Llama, and Mistral — trained on large amounts of data and capable of a wide range of tasks. The EU AI Act introduced specific obligations for GPAI model providers from August 2, 2025:
Obligations for all GPAI providers
- Technical documentation — Prepare and update documentation covering training data, training methodology, energy consumption, capabilities and limitations, and intended uses.
- Copyright compliance — Put in place a policy to comply with EU copyright law, including the text-and-data-mining opt-out under the Copyright Directive, and publish a sufficiently detailed summary of the content used for training.
- Downstream disclosure — Make documentation and information available to downstream providers who integrate the GPAI model into their own AI systems.
Open-source GPAI exception
GPAI models released as open-source (weights publicly available) are exempt from the technical documentation and downstream disclosure requirements — unless they pose systemic risk.
11. Systemic Risk GPAI Models
GPAI models trained with cumulative compute exceeding 10²⁵ FLOPs are presumed to carry systemic risk. As of early 2026, this threshold is generally estimated to be met by GPT-4-class models, Claude 3 Opus-class models, Gemini Ultra-class models, and likely Llama 3 400B+ models.
Additional obligations for systemic risk GPAI providers
- Conduct and document model evaluations, including adversarial testing (red-teaming), before release and after significant updates.
- Assess and mitigate systemic risks at EU level.
- Ensure adequate cybersecurity protection of the model weights.
- Report serious incidents and relevant corrective measures to the EU AI Office without undue delay.
- Document the model's known or estimated energy consumption as part of its technical documentation (a documentation requirement that applies to GPAI providers generally, not only to systemic-risk models).
12. Conformity Assessment
For high-risk AI systems, an EU conformity assessment must be completed before placing the system on the market:
- Annex III non-biometric systems — Providers may generally perform a self-assessment (internal control procedure) and draw up a Declaration of Conformity.
- Biometric identification systems (in most cases) and Annex I embedded AI — Require a third-party assessment by an EU notified body (an accredited conformity assessment organisation).
The conformity assessment evaluates whether the AI system meets the technical requirements of the Act, including risk management, data quality, accuracy, robustness, and human oversight design.
13. Technical Documentation Requirements
High-risk AI providers must maintain Annex IV technical documentation covering:
- General description: intended purpose, version history, hardware/software requirements.
- Detailed description of system design: components, algorithms, model type, training data description.
- Development process: data collection, data preparation, design choices, training methodology.
- Performance metrics: accuracy, precision, recall, F1 on validation and test sets, including performance across population subgroups (see the sketch after this list).
- Risk and mitigation measures.
- Human oversight measures built into the system.
- Instructions for use for deployers.
- Post-market monitoring plan.
- Cybersecurity measures.
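
For the performance-metrics item above, the expectation that results are reported across population subgroups translates naturally into a per-group evaluation loop. A sketch using scikit-learn — the labels, predictions, and subgroup attribute are placeholders standing in for a real evaluation set:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Placeholder evaluation data: true labels, model predictions, and a
# subgroup attribute per example (e.g., an age band or region).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

for g in np.unique(group):
    mask = group == g
    p, r, f1, _ = precision_recall_fscore_support(
        y_true[mask], y_pred[mask], average="binary", zero_division=0
    )
    print(f"subgroup {g}: precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

Large gaps between subgroups are exactly the kind of finding the Annex IV documentation (and the risk management system) should record alongside the mitigation taken.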
14. Transparency Obligations
Transparency rules apply to specific scenarios regardless of risk tier:
- AI that interacts with humans directly (chatbots, voice assistants) must disclose its AI nature.
- AI that generates synthetic content (text, audio, video, images) must mark that content as AI-generated using machine-readable metadata (e.g., the C2PA standard); a simplified sketch follows this list.
- Deepfake video must be labelled, except for clearly artistic or satirical purposes.
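
For images, full compliance points toward cryptographically signed C2PA manifests, but the basic mechanics of attaching machine-readable provenance can be illustrated with plain PNG metadata. A deliberately simplified sketch using Pillow — the key names and model identifier are assumptions, and an unsigned text chunk is a stand-in for, not a substitute for, a signed C2PA manifest:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Simplified stand-in for C2PA provenance: write machine-readable
# "AI-generated" markers into PNG text chunks.
img = Image.new("RGB", (256, 256), color="grey")  # stand-in for generated output

meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-image-model-v1")  # hypothetical model name

img.save("generated.png", pnginfo=meta)

# Verify the marker round-trips.
print(Image.open("generated.png").text["ai_generated"])  # -> "true"
```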
15. The EU AI Office
The European AI Office (part of the European Commission) is the central body responsible for:
- Supervising GPAI model providers directly at EU level.
- Developing technical standards and evaluation methodologies.
- Maintaining the EU database for registered high-risk AI systems.
- Coordinating with national market surveillance authorities.
- Publishing guidance, codes of practice, and general-purpose AI guidelines.
The AI Office began operations in 2024 and led the drafting of the first GPAI Code of Practice through public consultation during 2024–2025.
16. Fines & Enforcement
The EU AI Act's penalties are significant:
- Prohibited AI violations — Up to €35 million or 7% of global annual turnover, whichever is higher.
- Non-compliance with obligations for high-risk AI or GPAI — Up to €15 million or 3% of global annual turnover.
- Providing incorrect information to authorities — Up to €7.5 million or 1.5% of global annual turnover.
- SME reduction — For SMEs and startups, each fine is capped at whichever is lower of the flat amount and the percentage of turnover; see the worked example below.
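
The interaction between the flat amounts and the turnover percentages is simple min/max arithmetic. A worked sketch with illustrative figures:

```python
def fine_cap(flat_eur: float, pct: float, turnover_eur: float, sme: bool) -> float:
    # Standard rule: whichever is HIGHER of the flat amount and the
    # percentage of worldwide annual turnover. For SMEs: whichever is LOWER.
    pct_amount = pct * turnover_eur
    return min(flat_eur, pct_amount) if sme else max(flat_eur, pct_amount)

# Prohibited-practice tier (EUR 35m / 7%), large firm vs SME:
print(fine_cap(35e6, 0.07, 10e9, sme=False))  # 700,000,000 (7% of 10bn turnover)
print(fine_cap(35e6, 0.07, 50e6, sme=True))   # 3,500,000 (7% of 50m turnover)
```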
Enforcement is carried out by national market surveillance authorities in each Member State. The EU AI Office handles GPAI infringements directly.
17. Exemptions & Special Cases
- National security — AI used exclusively for national security, defence, or military purposes is out of scope.
- Research & development — AI in R&D is generally exempt from high-risk obligations when not yet placed on the market.
- Personal non-professional use — A developer using an AI tool for personal, non-professional purposes has no obligations as a user (though the developer of the tool may).
- Open-source general exemption — Free and open-source AI components not constituting high-risk AI or prohibited AI are generally exempt from most obligations.
- SME simplification — SMEs benefit from simplified documentation requirements and priority access to regulatory sandboxes for testing.
18. Impact on Software Developers
For most developers, day-to-day impact depends on what you build and who uses it:
You are probably not directly impacted if you:
- Build internal tools with AI for your own team's productivity.
- Use AI APIs (OpenAI, Anthropic, Google) to build applications not in high-risk categories.
- Release open-source AI tools without commercial intent in the EU.
You need to act if you:
- Deploy or sell AI tools used for HR decisions (CV screening, performance evaluation).
- Provide AI for credit scoring, insurance pricing, or benefits eligibility.
- Build AI that processes biometric data or makes recommendations in education.
- Train or release a large foundation model (10²⁵ FLOPs+ training compute).
- Embed AI as a safety component in regulated products (medical devices, vehicles).
What all developers should do now:
- Disclose to users when they are interacting with an AI chatbot or generative AI tool.
- Label AI-generated content (images, audio, video) with machine-readable provenance metadata.
- Maintain basic documentation of AI systems you operate in a professional context.
19. Practical Compliance Checklist
Step 1: Classify your AI
- Does your AI fall under any prohibited use case? → Stop immediately.
- Is your AI used in any Annex III context (HR, credit, education, law enforcement, etc.)? → High-risk track.
- Is your AI a GPAI model with 10²⁵+ FLOPs training compute? → Systemic risk GPAI track.
- Is your AI interactive (chatbot) or generative (images, video, text)? → Transparency obligations.
- Otherwise → Minimal risk. Document and monitor voluntarily. (A first-pass triage helper encoding this flow is sketched below.)
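
The triage above can be encoded as a first-pass helper when inventorying AI systems. The boolean inputs are deliberate simplifications of questions that need proper legal analysis, so treat this as a screening aid, not a determination:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    GPAI_SYSTEMIC = "gpai-systemic"
    LIMITED = "limited"
    MINIMAL = "minimal"

def classify(prohibited_use: bool, annex_iii_context: bool,
             gpai_over_1e25_flops: bool, interactive_or_generative: bool) -> RiskTier:
    # Mirrors the triage order above; a real assessment needs legal review.
    if prohibited_use:
        return RiskTier.PROHIBITED
    if annex_iii_context:
        return RiskTier.HIGH
    if gpai_over_1e25_flops:
        return RiskTier.GPAI_SYSTEMIC
    if interactive_or_generative:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(False, True, False, True))  # RiskTier.HIGH (e.g., a CV-screening chatbot)
```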
Step 2: High-risk track actions (by Aug 2, 2026)
- Establish a risk management system with documented processes.
- Audit training data for quality, representativeness, and bias (see the sketch after this list).
- Prepare Annex IV technical documentation.
- Implement logging and record-keeping.
- Design human oversight into the system.
- Complete conformity assessment (self or third-party).
- Register in the EU high-risk AI database.
- Prepare instructions for use for deployers.
- Set up post-market monitoring and incident reporting processes.
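
The data-audit step often starts with simple representativeness checks before any dedicated fairness tooling is brought in. A pandas sketch — the column names, reference shares, and the 10-point deviation threshold are assumptions for illustration:

```python
import pandas as pd

# Compare subgroup shares in the training data against a reference
# population and flag large deviations for follow-up.
train = pd.DataFrame({"gender": ["f", "m", "m", "m", "f", "m", "m", "m"]})
reference = {"f": 0.5, "m": 0.5}  # e.g., applicant population statistics

shares = train["gender"].value_counts(normalize=True)
for group, expected in reference.items():
    observed = shares.get(group, 0.0)
    flag = "CHECK" if abs(observed - expected) > 0.1 else "ok"
    print(f"{group}: observed={observed:.2f} expected={expected:.2f} [{flag}]")
```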
Step 3: GPAI track actions (from Aug 2, 2025)
- Prepare technical documentation covering training data, compute, and capabilities.
- Draft and publish a copyright compliance policy.
- If systemic risk: conduct red-teaming, implement cybersecurity for weights, set up incident reporting.
20. US vs EU AI Regulation Comparison
| Aspect | EU AI Act | US AI Policy (2026) |
|---|---|---|
| Approach | Binding horizontal regulation | Executive orders + sector guidance (no federal AI law) |
| Risk basis | Yes — four-tier risk system | Partial (NIST AI RMF voluntary) |
| Prohibited practices | Explicit statutory list | No equivalent statutory prohibition |
| Fines | Up to 7% global turnover | Sector-by-sector FTC/CFPB enforcement |
| GPAI rules | Yes — binding obligations | Voluntary NIST guidance |
| Open-source | Partial exemption | No specific open-source rules |
21. FAQ
- Does the EU AI Act apply to US companies?
- Yes, if your AI system is used in the EU, regardless of where your company is based. This mirrors GDPR's extra-territorial reach. A US startup selling a high-risk AI tool to EU customers must comply.
- Does using ChatGPT or Claude in my app make me a "provider"?
- Using an API to build an application makes you a deployer (and possibly a provider, if you substantially modify the model or market it under your own name). The underlying model provider (OpenAI, Anthropic) has its own obligations as a GPAI provider.
- Is an internal HR chatbot high-risk?
- If the chatbot is used to screen CVs, rank candidates, or influence hiring decisions — yes, it likely falls under the high-risk Annex III employment/HR category. If it only answers FAQs about company policy, it is limited risk at most.
- Does my AI need to be registered?
- High-risk AI systems (Annex III) must be registered in the EU database for high-risk AI systems before being placed on the market. The database is operated by the EU AI Office.
- Are AI coding assistants (Copilot, Cursor) high-risk?
- No. Developer productivity tools do not fall into any Annex III high-risk category and are not prohibited. They are minimal risk, with at most a transparency obligation where users might not realise they are interacting with AI — which is self-evident for coding tools.
22. Glossary
- GPAI Model
- General-Purpose AI Model — a model trained with large amounts of data that can perform a wide variety of tasks (e.g., GPT-4o, Claude, Gemini, Llama).
- High-Risk AI System
- An AI system listed in Annex I or Annex III of the EU AI Act that poses significant risk to health, safety, or fundamental rights.
- Notified Body
- An EU-accredited third-party organisation authorised to perform conformity assessments for certain high-risk AI systems.
- Conformity Assessment
- The process of verifying that an AI system meets the requirements of the EU AI Act before it is placed on the market.
- CE Marking
- A mark indicating that a product (including AI embedded in products) meets EU regulatory requirements, allowing free movement within the EU single market.
- Market Surveillance Authority
- The national authority in each EU Member State responsible for monitoring and enforcing the EU AI Act within its territory.
- Regulatory Sandbox
- A controlled testing environment created by national authorities that allows AI developers to test innovative AI systems under real conditions with reduced regulatory burden, subject to supervision.
23. References & Further Reading
- EU AI Act — Official Text (EUR-Lex)
- European Commission: AI Regulatory Framework
- EU AI Office — Official Website
- NIST AI Risk Management Framework
- IAPP: EU AI Act Practical Summary
- EU AI Act Explorer (Access Now)
24. Conclusion
The EU AI Act is the most consequential technology regulation since GDPR. Its August 2, 2026 full application deadline is approaching fast, and the compliance preparation timeline for high-risk AI systems — with conformity assessments, technical documentation, data governance reviews, and registration — is measured in months, not weeks.
Most developers and businesses building AI tools will find themselves in the minimal or limited risk tiers and face only light transparency obligations. But organisations deploying AI in HR, credit, critical infrastructure, healthcare, law enforcement, or education must act now to avoid significant fines and market access barriers.
Start by classifying your AI systems today. If any fall in high-risk categories, engage a compliance team now — waiting until 2026 leaves insufficient time for the required technical documentation, risk management, and conformity assessment.