
EU AI Act 2026: executive compliance guide for businesses

The EU AI Act (Regulation (EU) 2024/1689) takes effect in stages between 2025 and 2027. This executive guide explains the four risk categories, the timeline that applies to businesses operating in Europe, the penalty regime (up to €35 million or 7% of global turnover) and a compliance checklist broken down by company size. No Official Journal translation: actionable decisions only.

What is the AI Act? Context 2026

The AI Act is the common name for Regulation (EU) 2024/1689, approved by the European Parliament and Council on 13 June 2024 and published in the Official Journal on 12 July 2024. It is the world's first comprehensive AI regulation, with direct application across all 27 member states without national transposition. Each member state designates a national supervisory authority — Spain appointed AESIA (based in A Coruña) as the competent authority, alongside sector-specific coordination with the data protection agency, central bank and sectoral regulators.

The regulation takes a risk-based approach: it does not regulate all AI equally but classifies systems by potential impact on fundamental rights, safety and health. Four-tier segmentation with increasing obligation density (Art. 5 absolute prohibitions → Annex III high-risk → Art. 50 limited transparency → minimal). A spam filter is not regulated the same way as a system deciding credit approval or staff selection. Benchmark: according to a 2025 AESIA study, 68% of Spanish SMEs with AI in production lack documented classification six months before the August 2026 full compliance deadline — a material compliance gap with growing regulatory exposure.

Territorial scope. It affects companies established in the EU, but also non-EU companies that market AI systems in the European market or whose outputs are used inside the EU. A US startup selling its AI SaaS to Spanish clients must comply with the AI Act just like a Madrid-based company.

Authority in Spain — AESIA. The Spanish AI Supervision Agency, based in A Coruña, is the national competent authority. It coordinates with the European AI Office and sector-specific authorities (AEPD on data protection, Bank of Spain on financial services, among others). Other EU member states have their own equivalents.

Beyond the AI Act, businesses operating in Europe remain subject to concurrent regulation: Spain's LOPDGDD 3/2018, the GDPR (EU 2016/679) and applicable sector-specific legislation (healthcare, financial, labour).

The 4 AI Act risk categories

The core of the AI Act is its classification into four risk levels with differentiated obligations. Correctly classifying your AI systems is the first compliance task — without this piece, everything else (DPIA, FRIA, technical documentation, EU registration) is built on sand. Operational framework in 3 steps: (1) inventory AI systems in production, including shadow IT; (2) classify via the AI Act Traffic Light or qualified legal peer review; (3) documentation-closure plan by category with a 2 August 2026 deadline. So what: always prioritise by the highest risk level identified — any Annex III high-risk system triggers DPIA + FRIA + meaningful human oversight (HITL) + EU registration and blocks commercialisation until closed.
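The three-step classification pass above can be sketched in code. This is a toy illustration only: the keyword mapping and function names are hypothetical simplifications, and real classification requires reading Art. 5 and Annexes II-III (and, for doubtful cases, qualified legal review).

```python
# Illustrative sketch of step (2): mapping a described use case to an
# AI Act risk tier. The keyword sets below are a deliberate
# simplification, not a legal test.

PROHIBITED = {"social scoring", "subliminal manipulation", "workplace emotion inference"}
HIGH_RISK = {"recruitment screening", "credit scoring", "clinical diagnosis", "exam assessment"}
LIMITED = {"chatbot", "deepfake", "generative content"}

def classify(use_case: str) -> str:
    """Return the AI Act risk tier for a described use case."""
    uc = use_case.lower()
    if any(k in uc for k in PROHIBITED):
        return "unacceptable"   # Art. 5 -- must be retired (since 2 Feb 2025)
    if any(k in uc for k in HIGH_RISK):
        return "high"           # Annex III -- DPIA/FRIA, logging, EU registration
    if any(k in uc for k in LIMITED):
        return "limited"        # Art. 50 -- transparency obligations
    return "minimal"            # no specific AI Act obligations

# Step (1) output feeds step (2):
inventory = ["HR recruitment screening tool", "Customer support chatbot", "Internal spam filter"]
for system in inventory:
    print(system, "->", classify(system))
```

Even this toy version makes the prioritisation rule concrete: a single "high" result in the inventory drives the whole documentation-closure plan.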

1. Unacceptable risk (prohibited)

Systems that violate fundamental rights, expressly prohibited since 2 February 2025. Article 5 lists practices such as:

  • Social scoring by public authorities or for general purposes (state-style social rating systems).
  • Subliminal manipulation or exploitation of vulnerabilities (age, disability, socio-economic situation).
  • Real-time remote biometric identification in public spaces by law enforcement, except narrowly tailored exceptions (missing-person search, imminent terrorist threats) with judicial authorisation.
  • Predictive policing based solely on profiling.
  • Untargeted mass scraping of facial images from the internet or CCTV to build facial recognition databases.
  • Emotion inference in workplace and educational settings (unless for medical or safety reasons).

Any use of these systems triggers the maximum fines.

2. High-risk

Allowed but with strict obligations. Annex III lists the areas considered high-risk:

  • Critical infrastructure (transport, water, energy, health).
  • Education and vocational training (admission, assessment, cheating detection).
  • Employment and worker management (recruitment, selection, promotion, dismissal, monitoring).
  • Essential public and private services (credit assessment, scoring, life/health insurance, benefits allocation).
  • Law enforcement (risk assessment, fraud detection, polygraphs).
  • Migration, asylum and border control.
  • Administration of justice and democratic processes (judicial assistance, electoral verification).
  • Medical devices with AI (diagnosis, clinical decision support) — intersection with MDR 2017/745.

Obligations: risk management system, data governance, technical documentation, activity logging, transparency and user information, human oversight, accuracy and cybersecurity, conformity assessment before commercialisation, registration in the EU database. In force from 2 August 2026 (except Annex II systems with specific deadlines).

3. Limited risk (transparency)

Systems with transparency obligations but no prior assessment. Particularly affects:

  • Chatbots and conversational assistants: the user must know they are interacting with AI.
  • Deepfakes and AI-generated or manipulated content: mandatory labelling.
  • Emotion recognition or biometric categorisation systems in permitted uses.
  • Generative AI (image, video, audio, text) in contexts where the recipient could confuse it with human content.

Most enterprise chatbots fall here: only the obligation to inform the user and, for generated content, label it. In force from 2 August 2026.

4. Minimal risk

Everything else: spam filters, AI in video games, non-critical recommender systems, internal semantic search, writing assistants, generic transcription. No specific AI Act obligations — though GDPR and other regulation may still apply. The European Commission encourages voluntary adherence to codes of conduct for this category, but it is not mandatory.

AI Act matrix — categories × examples × obligations × dates
Category | Business examples | Key obligations | In force
Unacceptable | Social scoring, subliminal manipulation, workplace emotion recognition | Full prohibition | 2 Feb 2025
High-risk | Staff selection, credit scoring, AI clinical diagnosis, assisted justice | Risk management, DPIA/FRIA, logging, human oversight, EU registration | 2 Aug 2026 (Annex III)
Limited | Chatbots, deepfakes, generative AI text/image, emotion recognition (allowed) | Transparency (inform the user, label content) | 2 Aug 2026
Minimal | Spam filters, AI in video games, non-critical recommenders, transcription | None specific (voluntary) | N/A

Source: Regulation (EU) 2024/1689 — Art. 5 and Annexes II-III

Key entry-into-force dates

The AI Act applies in staggered phases. Knowing the timeline is critical to prioritise compliance work:

  1. 1 August 2024 — Formal entry into force, twenty days after publication in the Official Journal of the EU on 12 July 2024 (progressive application).
  2. 2 February 2025 — Prohibitions (Ch. II) and AI literacy obligations (Art. 4). Any unacceptable-category system must be retired.
  3. 2 August 2025 — Obligations for general-purpose AI models (GPAI, Ch. V): transparency, training data summaries, copyright compliance. Affects providers such as OpenAI, Anthropic, Google and Mistral.
  4. 2 August 2026 — Obligations for high-risk systems (Annex III) and transparency (Ch. IV). Most businesses need their systems classified and documented by this date.
  5. 2 August 2027 — Full compliance: Annex II high-risk systems (regulated products, e.g. medical devices), full obligations for pre-existing GPAI, framework close-out.
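The staggered timeline above lends itself to a simple check of which obligation sets already apply on a given date. The milestone dates come from Art. 113 as listed above; the helper function itself is an illustrative sketch, not part of the regulation.

```python
from datetime import date

# Art. 113 application milestones (dates from the timeline above).
MILESTONES = [
    (date(2025, 2, 2), "Prohibitions (Ch. II) + AI literacy (Art. 4)"),
    (date(2025, 8, 2), "GPAI obligations (Ch. V)"),
    (date(2026, 8, 2), "High-risk Annex III obligations + transparency (Ch. IV)"),
    (date(2027, 8, 2), "Annex II high-risk products + pre-existing GPAI"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the obligation sets already applicable on a given date."""
    return [label for deadline, label in MILESTONES if today >= deadline]

print(obligations_in_force(date(2026, 9, 1)))
```

Running this for any date after 2 August 2026 shows three of the four obligation sets live — which is why that date is the practical deadline for most businesses.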

Official source: Article 113 of Regulation (EU) 2024/1689. Additional briefing at European Parliament.

How does it affect your business?

The level of obligation depends more on use than size — but size determines the reasonable operational effort.

SME (under 50 employees)

Most SMEs use minimal- or limited-risk AI: web chatbots, writing assistants, copilots, SaaS tools with integrated AI. Realistic minimum obligations:

  • Transparency in chatbots and generated content — clear notices to the user.
  • Internal AI usage policy — which tools are allowed, what data can enter prompts (never sensitive personal data without prior analysis).
  • AI literacy (Art. 4): staff using AI must receive adequate basic training.
  • Provider contracts: signed DPA, EU residency when possible, AI Act compliance clause.

If the SME develops AI (not just uses it), scale proportionally: product risk classification, minimum documentation, logging.

Mid-size company (50-250 employees)

Add to the above:

  • Internal audit of all AI systems in use (including shadow AI).
  • Risk classification of each system by Annexes II-III.
  • Technical documentation for high-risk systems.
  • DPO or equivalent if processing requires it under GDPR.
  • Internal register of systems for traceability.
  • FRIA (Fundamental Rights Impact Assessment) for Art. 27 high-risk systems.

This is the highest-complexity band: companies may have high-risk systems (e.g. CV screening in HR) without realising it.

Enterprise (over 250)

The above plus:

  • Formal compliance programme with a designated officer (CAIO or equivalent).
  • Risk management system documented (ISO/IEC 42001 as international reference).
  • DPIA + FRIA mandatory for high-risk.
  • Registration in the EU database for high-risk systems before commercialisation.
  • Conformity assessment (internal or external depending on the case).
  • Post-market monitoring and serious incident reporting to the national authority.
  • Role-specific training (legal, technical, product, business).

2026 compliance checklist

Ten actionable steps applicable across sizes. Recommended order:

  1. AI systems inventory in use or development, including those used by departments without central governance (shadow AI).
  2. Risk classification of each system applying Annexes II-III.
  3. Confirm no unacceptable category — if anything enters there, immediate retirement plan.
  4. Contractual review with AI cloud providers: DPA, data residency, AI Act clauses.
  5. Internal AI usage policy approved and communicated to staff.
  6. Basic AI training (Art. 4) for all staff who use it.
  7. Technical documentation for high-risk systems.
  8. DPIA and FRIA where applicable.
  9. Auditable logging of relevant AI-assisted decisions.
  10. Internal incident reporting channel and escalation procedure to the national authority.

Fines and sanctions

Article 99 of the AI Act establishes a tiered sanction regime. The amounts are maximums — authorities apply proportionality based on severity, duration, recidivism and company size.

AI Act sanctions regime — offence × amount × % turnover × article
Type of offence | Maximum amount | % global annual turnover | Ref. article
Prohibited practices (unacceptable cat.) | €35 M | 7% | Art. 99.3
Non-compliance with high-risk, transparency, governance | €15 M | 3% | Art. 99.4
Incorrect/misleading information to authorities | €7.5 M | 1% | Art. 99.5
GPAI non-compliance (foundation model providers) | €15 M | 3% | Art. 101

Source: Regulation (EU) 2024/1689 — Articles 99-101

SMEs and startups. For each fine, the regulation caps the amount for SMEs and startups at the lower of the two options (fixed figure or percentage of turnover), rather than the higher — an important nuance protecting small companies from disproportionate fines. Reference: Art. 99.6 of Regulation (EU) 2024/1689.

What should you do now?

Five operational steps for companies that have not yet started compliance work:

1. Internal audit (2-4 weeks). Inventory all AI systems in use and development. SaaS tools with integrated AI count. Interview area owners. Output: register with system name, provider, use, data processed, internal owner.

2. Classify by risk (1-2 weeks). Apply Annexes II-III of the regulation. Flag any doubtful system for qualified legal review.

3. Document (4-8 weeks depending on volume). For high-risk systems: technical fact sheet, risk management system, DPIA/FRIA, human oversight measures. For transparency: notices on chatbots and generated content.

4. Implement (ongoing). Approved internal policy, training, revised contracts, logging activated, operational incident channel.

5. Review (annual). The AI Act will evolve with Commission guidance and national authority guidelines. Minimum annual internal register audit and update. Also check the European Commission's official page for regulatory updates.
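The step-1 audit output described above (a register with system name, provider, use, data processed and internal owner) can live in something as simple as a CSV. A minimal sketch — the record fields mirror the list above plus a risk-category column for step 2; the class, field and file names are illustrative, not prescribed by the regulation:

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AISystemRecord:
    """One row of the internal AI systems register (step 1 output)."""
    system_name: str
    provider: str
    use: str
    data_processed: str
    internal_owner: str
    risk_category: str = "unclassified"  # filled in during step 2

register = [
    AISystemRecord("CV screener", "VendorX", "candidate shortlisting",
                   "CVs, contact data", "HR lead"),
]

# Persist the register so the annual review (step 5) has a baseline.
with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AISystemRecord)])
    writer.writeheader()
    writer.writerows(asdict(r) for r in register)
```

Keeping `risk_category` as an explicit column makes the step-2 gap visible: any row still marked "unclassified" close to August 2026 is unfinished compliance work.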

If your business has systems that could fall into high-risk (customer scoring, AI staff selection, assisted clinical diagnosis, credit management), the useful window to comply before 2 August 2026 is tight. Starting in 2026 is not urgent — it is necessary.

Frequently asked questions

Does the EU AI Act affect my SME?
Yes, but proportionally. If you only use SaaS tools with AI (ChatGPT, Claude, chatbots, copilots), your obligations are mainly transparency, staff AI literacy (Art. 4) and contractual due diligence with providers. You do not need a risk management system or DPIA unless you enter high-risk territory (AI-driven staff selection, customer scoring, etc.).
Which authority supervises the AI Act in Spain, and when does it contact businesses?
AESIA is the Spanish AI supervision authority, based in A Coruña. Its functions are: market surveillance, high-risk system assessment, sanctioning non-compliance, coordination with the European AI Office, and issuing interpretive guidance. It contacts businesses through complaints, proactive inspection or after a serious incident report. For most businesses it is an authority to report to, not one you interact with daily. Each EU member state has its own designated authority; AESIA is Spain's.
Do I need a DPO if I use generative AI in my business?
It depends on GDPR, not the AI Act directly. If you process personal data at scale, sensitive data or conduct systematic monitoring, a DPO is mandatory under GDPR regardless of AI use. If you use generative AI with personal data, a DPO is usually necessary or strongly recommended. The AI Act requires a responsible party for the risk management system in high-risk cases — this may or may not overlap with the DPO.
Are ChatGPT or Claude high-risk systems?
The models themselves (GPT-5, Claude 4, Gemini, Mistral Large) are general-purpose AI models (GPAI) — a specific AI Act category with obligations for providers (OpenAI, Anthropic, Google, Mistral), in force since August 2025. The use you make of them in your business determines your category: using them to draft emails is minimal or limited risk; using them to screen candidates is high risk and makes you a deployer of a high-risk system.
What real fine can an SME expect for non-compliance?
The sanctions regime establishes a maximum of €35 million or 7% of global turnover for the most serious violations. Article 99.6 states that for SMEs and startups the smaller amount (fixed figure or percentage) applies — significantly reducing real exposure. Authorities also apply proportionality: first offences for minor non-compliance without serious harm are usually resolved with formal requirements and remediation deadlines before sanctions.
Does GDPR still apply on top of the AI Act?
Yes. These are complementary regulations that coexist. GDPR regulates the processing of personal data (lawfulness, transparency, minimisation, security). The AI Act regulates AI systems as such (risk, governance, documentation). A single system may be subject to both: AI-driven CV screening processes personal data (GDPR) and is a high-risk system under Annex III (AI Act). In Spain, add LOPDGDD 3/2018 and sector-specific regulation.

Need an assessment of your AI Act exposure?

At Genai Sapiens Consulting we work with businesses operating in Europe to inventory AI systems, classify their risk under Regulation (EU) 2024/1689 and design the minimum viable compliance programme for 2026. Optional AaaS retainer for ongoing AI governance and regulatory updates. A free diagnosis gives you a realistic view of the work ahead and associated cost.

Book a free diagnosis →

Looking for vertical context? See the Spanish edition of our AI Act Spain 2026 guide with additional Spanish-law detail.