The EU AI Act in practice: what changes in 2026 if you automate processes
Not a legal analysis. What an operator with a couple of agents in production needs to look at this year in Europe — without the legalese.
Regulation (EU) 2024/1689 — the AI Act — came into force on 1 August 2024. Almost two years on, most companies using AI in their operations still aren’t sure what applies to them, when, or who’s going to enforce it.
This post isn’t for lawyers. It’s for someone running a customer service agent, a CV classifier, or an internal assistant in production who wants to know what’s on their plate this year.
Three things worth nailing down before going into the detail.
One. The AI Act doesn’t force you to use AI. It puts obligations on whoever places it on the market and whoever uses it professionally.
Two. Obligations come in layers. Some have been live since February and August 2025. The big wave hits in August 2026.
Three. Spain has its own national supervisor: AESIA, headquartered in A Coruña, operational since 2024. It's the first dedicated AI supervisory body in any EU member state, and it's already issuing guidance.
Quick timeline
| Date | What kicks in |
|---|---|
| 1 Aug 2024 | Regulation enters into force |
| 2 Feb 2025 | Prohibited practices (Article 5) and AI literacy obligation (Article 4) |
| 2 Aug 2025 | Obligations for general-purpose AI (GPAI) models |
| 2 Aug 2026 | High-risk Annex III systems, full governance, sanctions fully applicable |
| 2 Aug 2027 | High-risk systems embedded in regulated products (Annex I) |
The relevant year for most companies is 2026. That’s when real pressure starts on HR, scoring, healthcare, education, and essential services systems.
Which one are you? Provider or deployer
The regulation distinguishes several roles. For an operator the relevant split is:
- Provider — whoever develops an AI system and places it on the market.
- Deployer — whoever uses an AI system under their authority for a professional activity.
If you buy an agent from an agency and run it internally, you’re a deployer. If you build the agent in-house and expose it (to employees, customers, or another company), you’re the provider of that system.
Obligations differ. Most are heavier on the provider, but the deployer also carries duties of their own — especially if the system is high-risk.
What already applies, since February 2025
Prohibited practices (Article 5)
This isn’t a hypothetical list. Some things are flat-out prohibited:
- Inferring emotions of employees or students, except for medical or safety reasons.
- Biometric categorisation that infers race, political opinion, sexual orientation, etc.
- Subliminal manipulation systems or systems that exploit vulnerabilities.
- Social scoring by public or private bodies.
- Predicting crime based on profiling alone.
If you’re using AI to gauge your team’s emotional “engagement” through cameras or microphones, you’re in prohibited territory. Penalty: up to 7% of global turnover or €35 million, whichever is higher.
AI literacy (Article 4)
It’s a short obligation, but a real one. Any company using AI must ensure the people involved have “a sufficient level of AI literacy.” No exam, no official certification — but you have to be able to show you’ve done something reasonable.
In practice: documented internal training, usage materials, an acceptable-use policy. If your team uses Copilot, ChatGPT, or a custom agent and nobody has written a page explaining limits and risks, that’s the first gap.
What applies to your providers, since August 2025
If you use general-purpose models (OpenAI, Anthropic, Google, Mistral, Meta, etc.), they carry obligations around transparency, technical documentation, copyright policy, and a training data summary. For models with systemic risk (those whose cumulative training compute exceeds the 10²⁵ FLOPs threshold), there are additional obligations on risk evaluation, cybersecurity, and serious incident reporting.
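How close is a model to that threshold in practice? A rough back-of-the-envelope, using the common 6 × parameters × training-tokens approximation for dense transformer training compute. The heuristic and the example figures are ours for illustration, not anything the regulation defines:

```python
# Rough estimate of training compute against the AI Act's 10^25 FLOPs
# presumption threshold for systemic-risk GPAI models (Article 51).
# Uses the common ~6 * params * tokens heuristic for dense transformers;
# all figures below are illustrative, not real model disclosures.

THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate cumulative training compute for a dense transformer."""
    return 6 * params * tokens

for name, params, tokens in [
    ("70B params, 15T tokens", 70e9, 15e12),    # ~6.3e24: below
    ("400B params, 15T tokens", 400e9, 15e12),  # ~3.6e25: above
]:
    flops = training_flops(params, tokens)
    side = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs, {side} the 1e25 threshold")
```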
As a deployer, none of those provider obligations applies to you directly. They affect you in three ways:
- The information you receive. The provider has to give you enough documentation for you to understand the system and use it compliantly.
- Terms of use. The conditions you accept reflect provider obligations that cascade to your use (registration, no use for prohibited cases, etc.).
- The voluntary Code of Practice. Signed in 2025 by most major providers. Meta declined to sign, which has caused commercial friction inside the EU. It’s a signal worth reading when picking a model.

What’s coming in August 2026: high-risk
The block that matters most for a company with automation in operations.
Typical cases that fall under high-risk (Annex III)
- Systems used for selection, evaluation, or decisions about employees — filtering CVs, assigning tasks, monitoring performance, deciding promotions or terminations.
- Access to essential services — credit scoring (with exceptions for fraud detection), life and health insurance underwriting, eligibility assessment for public services.
- Education — admission, exam grading, monitoring during tests.
- Healthcare — emergency call triage and dispatch sit directly in Annex III; AI embedded in medical devices goes through the Annex I route instead, on the 2027 timeline.
- Justice and democracy — support for judicial decisions, systems intended to influence elections or referendums.
- Migration and border control — risk assessment, asylum decisions.
If your agent touches any of these areas, even as a support tool rather than the maker of the final decision, you're presumptively in high-risk territory. Article 6(3) carves out narrow exceptions for purely preparatory or procedural tasks, but anything that involves profiling people stays high-risk.
Deployer obligations under high-risk
These aren’t just on the provider. The deployer also carries duties (Article 26):
- Use the system according to the provider’s instructions.
- Assign human oversight that is competent and has real authority to override decisions.
- Keep the logs the system generates, as a rule for at least six months.
- Monitor performance and notify serious incidents to the provider.
- Conduct a fundamental rights impact assessment (FRIA) if you're a public body, provide public services, or deploy credit-scoring or life and health insurance pricing systems.
- Inform affected individuals when a decision affects them significantly.
Human oversight can’t be a rubber stamp. If the agent filters CVs and a person signs off “all good” without reviewing anything, that’s not oversight under the regulation.
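What logged, overridable oversight can look like for, say, a CV-screening agent: a minimal hypothetical sketch. The schema and field names are illustrative; the regulation mandates effective oversight and retained logs, not this particular shape.

```python
# Hypothetical sketch: a CV-screening agent whose recommendations are
# logged and must be explicitly confirmed or overridden by a named
# reviewer before they take effect. Field names are illustrative only;
# the AI Act requires logging and real oversight, not this schema.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ScreeningDecision:
    candidate_id: str
    agent_recommendation: str   # e.g. "advance" / "reject"
    agent_rationale: str
    reviewer: str               # the accountable human (Article 26)
    human_decision: str         # may differ from the recommendation
    override: bool
    timestamp: float

def record_decision(agent_rec: str, rationale: str,
                    candidate_id: str, reviewer: str,
                    human_decision: str) -> ScreeningDecision:
    decision = ScreeningDecision(
        candidate_id=candidate_id,
        agent_recommendation=agent_rec,
        agent_rationale=rationale,
        reviewer=reviewer,
        human_decision=human_decision,
        override=(human_decision != agent_rec),
        timestamp=time.time(),
    )
    # Append-only log: the raw material for audits and incident reports.
    with open("screening_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(decision)) + "\n")
    return decision

# The reviewer disagrees with the agent. That disagreement, on the
# record, is evidence the oversight is real and not a rubber stamp.
record_decision("reject", "CV lacks required certification",
                candidate_id="c-1042", reviewer="ana.perez",
                human_decision="advance")
```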
What doesn’t change much — but is worth a look
Transparency (Article 50). From August 2026, chatbots have to be clearly identified as such, deepfakes labelled as synthetic content, and people exposed to emotion recognition informed, in the settings where it's still permitted. A minor point, but enforceable.
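For the chatbot label, the mechanics can be as simple as a disclosure at the start of every conversation. A trivial sketch, assuming a plain response wrapper; the wording and placement are your call, the obligation is that users can tell:

```python
# Minimal illustration of the Article 50 chatbot disclosure: tell the
# user, at the start of the interaction, that they are talking to an
# AI system. The text and mechanism here are our choice; the obligation
# is that it is clear, not that it uses this exact wording.
DISCLOSURE = ("You are chatting with an AI assistant. "
              "A human agent is available on request.")

def start_conversation(first_agent_message: str) -> list[str]:
    """Prepend the disclosure to the opening of every new conversation."""
    return [DISCLOSURE, first_agent_message]

print("\n".join(start_conversation("Hi! How can I help with your order?")))
```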
Sanctions. Up to €35 million or 7% of global turnover for prohibited practices. Up to €15 million or 3% for general non-compliance. Up to €7.5 million or 1% for incorrect information to authorities. Fully applicable from August 2026.
Who enforces in Spain. AESIA is the national authority, coordinating with the AEPD on data protection, the CNMC on markets, and the Banco de España on credit. If you fall into a high-risk case, you'll have more than one regulator looking at you.
Three sensible things to do this year
Inventory. A list of AI systems in use, with provider, model, purpose, and the data they touch. Without this, you can’t decide whether anything is high-risk.
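A minimal shape for that inventory, sketched as a Python structure. A spreadsheet or a YAML file works just as well; the fields are suggestions for what you need in order to classify each system, not a regulatory schema.

```python
# Hypothetical AI-system inventory: one record per system in use.
# Vendor and model names below are placeholders, not recommendations.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    role: str                 # your role: "provider" or "deployer"
    vendor: str
    model: str
    purpose: str
    data_categories: list[str] = field(default_factory=list)
    touches_people: bool = False       # employees, candidates, customers?
    annex_iii_candidate: bool = False  # flag for the high-risk review

inventory = [
    AISystemRecord(
        name="support-bot", role="deployer", vendor="ExampleVendor",
        model="example-model-v2", purpose="customer service triage",
        data_categories=["contact data", "order history"],
        touches_people=True, annex_iii_candidate=False,
    ),
    AISystemRecord(
        name="cv-screener", role="deployer", vendor="ExampleVendor",
        model="example-model-v2", purpose="CV filtering for recruiting",
        data_categories=["CVs", "assessment notes"],
        touches_people=True, annex_iii_candidate=True,  # Annex III: employment
    ),
]

# The high-risk shortlist falls straight out of the inventory.
for record in inventory:
    if record.annex_iii_candidate:
        print(f"Review before Aug 2026: {record.name} ({record.purpose})")
```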
Minimum usage policy. One page, written, signed off, explaining what the team can and can’t do with generative AI and agents. That covers the AI literacy obligation almost on its own.
Specific review for systems that touch people. If your agent decides on or influences employees, candidates, customers in essential processes, or users in healthcare or education — explicitly ask the provider how they’re going to meet the high-risk obligations in August 2026. If they don’t have a clear answer, you’ve got 12 months to find one.
The AI Act isn’t a paperwork exercise, but it isn’t a wall either. What it asks for is what any critical system should already have: documentation, oversight, logs, and a human accountable for them.
If your current agent doesn’t have those four things, the regulation isn’t your first problem.
Knowing what to automate, and at what risk, is the first decision — not the last. Start with the free diagnostic at canihireanai.com — and walk into 2026 with a map, not surprises.