    2026-04-13 · 9 min read

    EU AI Act Compliance: What Your AI Agents Need Before February 2027

    EU AI Act · Compliance · AI Governance · GDPR

    The EU AI Act is the first comprehensive legal framework for artificial intelligence, and it is now in the phase where businesses actually have to prepare. The general-purpose AI provisions took effect in August 2025. The high-risk system obligations start applying in August 2026. The full regime, including conformity assessments for most high-risk categories, lands on 2 February 2027. Any company deploying AI agents that touch EU users — regardless of where the company is headquartered — needs a plan before then.

    This article is a practical checklist for mid-market companies. It is not legal advice. If you are close to any of the high-risk thresholds below, talk to qualified counsel. What follows is the working framework we use with clients to decide what matters, what to document, and what can wait.

    The four risk categories

    The AI Act classifies systems into four tiers based on risk. Which tier you fall into determines nearly everything else.

    • Unacceptable risk (banned): Social scoring (by public and private actors alike), real-time biometric identification in public spaces (with narrow exceptions), emotion recognition in workplaces and schools, cognitive manipulation of vulnerable groups. These are prohibited outright. If your system does any of this, stop.
    • High risk: AI used in critical infrastructure, education and vocational training, employment and worker management (including CV screening), access to essential services (credit scoring, insurance pricing), law enforcement, migration, and administration of justice. Most of the compliance burden lives here.
    • Limited risk: Chatbots, emotion recognition systems, biometric categorization, and AI-generated content (including deepfakes). These are subject mainly to transparency obligations — users must be informed they are interacting with AI, and synthetic content must be labelled.
    • Minimal risk: Everything else. Spam filters, AI in video games, basic recommendation systems for entertainment. No specific obligations under the Act, though GDPR and product liability law still apply.

    For most mid-market AI deployments — customer support agents, sales automation, internal productivity tools, marketing content — you are almost certainly in the limited risk tier. The compliance burden is real but manageable. The companies with the most to worry about are those deploying AI in HR, credit, insurance, healthcare, and education.

    Limited risk: the checklist most businesses need

    If your AI agents are limited risk, the obligations are focused on transparency. The practical checklist:

    • Disclose AI interactions: Users must know they are talking to an AI, not a human. A one-line disclosure at the start of a chat, clearly visible, is sufficient. "Hi, I'm an AI assistant" works. Burying it in the ToS does not.
    • Label synthetic content: Any AI-generated images, video, audio, or text that is published must be marked as AI-generated in a machine-readable format. C2PA metadata is the emerging standard; a manifest sketch follows this list.
    • Handle deepfakes carefully: If your agent generates content that appears to depict real people or events, the labelling requirement is strict. An exception exists for clearly artistic or satirical works, but even there the AI origin must still be disclosed in a way that does not spoil the work.
    • Respect the right to human review: If your agent makes decisions that affect users (even soft decisions like "route to this sales tier"), users should be able to request a human review. Build the escalation path.
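    To ground the labelling point, here is a sketch of the kind of C2PA manifest that marks an asset as AI-generated. The digitalSourceType URI is the published IPTC term for content produced by a generative model; the overall shape follows the C2PA manifest format, but the claim_generator value is our own placeholder, and actually signing and embedding the manifest requires a C2PA toolchain such as the open-source c2patool.

```python
import json

# Sketch of a C2PA manifest declaring an asset as AI-generated.
# "AcmeAgent/1.0" is a placeholder for your product name and version.
manifest = {
    "claim_generator": "AcmeAgent/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC digital source type for generative-model output
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/"
                            "digitalsourcetype/trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

# The manifest must be cryptographically signed and embedded in the asset;
# a C2PA toolchain (e.g. c2patool) handles that step.
print(json.dumps(manifest, indent=2))
```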

    None of this is technically hard. The mistake we see most often is treating it as a launch-day add-on instead of a design constraint. Disclosure strings, human-review flows, and content labelling are much easier to design in than to retrofit; a minimal sketch of what designing them in looks like follows.
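    The sketch assumes a hypothetical ChatSession class; the disclosure constant, the request_human_review method, and the escalation behaviour are illustrative names of ours, not anything the Act prescribes.

```python
from dataclasses import dataclass, field

AI_DISCLOSURE = "Hi, I'm an AI assistant. You can request a human at any time."

@dataclass
class ChatSession:
    """Hypothetical chat session with disclosure and escalation built in."""
    user_id: str
    transcript: list[str] = field(default_factory=list)
    escalated: bool = False

    def __post_init__(self) -> None:
        # The disclosure is the first message of every session,
        # not a footnote buried in the ToS.
        self.transcript.append(f"assistant: {AI_DISCLOSURE}")

    def request_human_review(self, reason: str) -> None:
        # Any AI-influenced decision can be escalated to a person;
        # in a real system this would enqueue the session for an agent.
        self.escalated = True
        self.transcript.append(f"system: escalated to human review ({reason})")

session = ChatSession(user_id="u-123")
session.request_human_review("user disputed sales-tier routing")
print("\n".join(session.transcript))
```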

    High risk: the heavy lift

    If your system falls into the high-risk category, the obligations are substantial. In summary, you need to:

    • Establish a risk management system.
    • Use high-quality training data and document it.
    • Maintain technical documentation showing how the system works.
    • Implement logging and traceability of system decisions (a sketch follows below).
    • Ensure human oversight of decisions.
    • Meet accuracy, robustness, and cybersecurity requirements.

    Before deployment, most high-risk systems require a conformity assessment — either self-assessed or by a notified body, depending on the category.
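    On the logging point, early design pays off. Below is a sketch of the kind of append-only decision record a traceability log might capture; the schema is our own assumption, since the Act mandates logging but does not prescribe field names.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: str,
                 overseer: str) -> dict:
    """Build one append-only decision record for traceability.

    Field names are illustrative; the Act requires logging and
    traceability but does not prescribe a schema.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so the record is tamper-evident without
        # copying personal data into the log itself.
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "human_overseer": overseer,  # who can intervene or override
    }
    # In production, append to write-once storage; print() stands in here.
    print(json.dumps(record))
    return record

log_decision("scorer-2.3.1", {"applicant_id": "a-42"}, "tier=B", "jdoe")
```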

    The documentation alone is serious. Expect 40–100 pages covering intended purpose, training data lineage, testing methodology, risk mitigations, and known limitations. The data governance requirements mean you need to know where every training example came from and whether it contains personal data.

    For mid-market companies, the honest advice is usually: if you can avoid being in scope, do. If your use case is genuinely high-risk, budget 3–6 months and external compliance help. If it is borderline, consider whether the feature can be redesigned to land in limited risk territory instead.

    Penalties

    The Act's penalties are among the highest of any EU regulation. Prohibited AI practices: up to €35 million or 7% of global annual turnover, whichever is higher. Non-compliance with high-risk obligations: up to €15 million or 3% of turnover. Supplying incorrect information to authorities: up to €7.5 million or 1% of turnover. For a mid-market company with $50M revenue, the upper bracket is genuinely existential: 7% of $50M is only $3.5M, so the "whichever is higher" clause means the flat €35 million figure is the one that applies, most of a year's revenue.

    Enforcement will be uneven in the first year — regulators typically warn before they fine — but by 2028 we expect to see the first significant enforcement actions. The right framing is not "we will probably be fine," it is "this is the new baseline, same as GDPR was in 2018."

    What to do before February 2027

    A realistic 12-month prep plan for a mid-market company:

    • Months 1–2: Inventory. List every AI system you deploy or embed, including third-party tools. For each, record: purpose, data flows, who uses it, what decisions it influences (see the record sketch after this list). You cannot classify risk without this.
    • Months 3–4: Classify. Map each system to a risk tier. Most will be limited or minimal. Flag anything that could be high-risk for detailed review.
    • Months 5–8: Remediate. Add disclosures, labelling, and human-review flows for limited-risk systems. For high-risk systems, start the conformity assessment track.
    • Months 9–12: Document and test. Produce the technical documentation, run compliance tests, get external review if high-risk is in scope.
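    For the inventory in months 1–2, forcing a fixed schema from day one makes the classification pass in months 3–4 mechanical. Here is a sketch of the per-system record we would capture; the field names are our own convention, not anything the Act prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of the AI inventory; field names are our own convention."""
    name: str
    purpose: str                     # what the system is for
    data_flows: list[str]            # where inputs come from, where outputs go
    users: list[str]                 # teams or roles that use it
    decisions_influenced: list[str]  # anything the output feeds into
    third_party: bool = False        # embedded vendor tool?
    risk_tier: str = "unclassified"  # filled in during months 3-4
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="support-bot",
        purpose="answer tier-1 customer questions",
        data_flows=["helpdesk tickets in", "responses out"],
        users=["support"],
        decisions_influenced=["ticket routing"],
        third_party=True,
    ),
]

# Classification (months 3-4) then becomes a pass over this list, flagging
# any record whose decisions_influenced touches HR, credit, or similar.
for record in inventory:
    print(record.name, record.risk_tier)
```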

    The mistake companies make is waiting. February 2027 feels far away until you realize that the documentation work alone is measured in months, not weeks, and external audit slots fill up as the deadline approaches.

    If you are running AI agents that touch EU users and the compliance picture is unclear, that is exactly the kind of assessment we help with at N40 — we are not lawyers, but we can tell you which systems are in scope, what documentation you need, and where to invest the remediation effort. Start a conversation at /contact.