EU AI Act: What UK Businesses Need to Know in 2026
The EU AI Act is the world's first comprehensive AI regulation. It came into force in August 2024 and is being phased in through to 2027. Despite Brexit, it applies to a significant number of UK businesses — and the main compliance deadline is August 2026.
If you're a UK business that uses AI tools, sells software with AI features to EU customers, or processes data from EU citizens using AI systems, this regulation likely affects you. Here's what you need to know.
Does the EU AI Act Apply to UK Businesses?
Yes — for many UK businesses, it does. The short answer is that the EU AI Act has extraterritorial reach, similar to GDPR. It applies based on where your AI system's outputs are used, not where your business is based.
Specifically, the Act applies to UK businesses that:
- Sell or deploy AI systems in the EU market — if you provide software, SaaS, or AI-enabled products to EU customers, you're in scope as a "provider"
- Use AI systems whose outputs affect people in the EU — if you deploy AI tools that make decisions about EU citizens (employees, customers, users), you're in scope as a "deployer"
- Have EU subsidiaries or partners — if your AI systems reach the EU through a subsidiary or distribution partner, you're still in scope
The UK is not bound by the EU AI Act as domestic law — the UK has taken a different approach, relying on existing regulators rather than passing a single AI law. However, that doesn't exempt UK businesses from the Act's reach when they operate in or serve the EU market.
As PwC's UK team puts it: UK entities are within scope as providers when releasing AI systems through EU subsidiaries, and also within scope if their models are not deployed in the EU but their outputs are intended to be used there.
The practical question isn't whether the EU AI Act is technically UK law — it isn't. The question is whether your business touches the EU market in ways that bring you within its scope. For many UK businesses, the answer is yes.
Key Dates and Deadlines
The Act is being phased in over three years. Here's where things stand as of 2026:
| Date | What happened / happens |
|---|---|
| 1 August 2024 | EU AI Act entered into force |
| 2 February 2025 | Prohibitions on unacceptable-risk AI systems took effect |
| 2 February 2025 | AI literacy obligations for employers began |
| 2 August 2025 | Obligations for general-purpose AI (GPAI) model providers began |
| 2 August 2026 | Most remaining obligations apply — including high-risk AI system requirements |
| 2 August 2027 | Obligations for certain high-risk AI in regulated products (toys, medical devices, etc.) |
The critical date is 2 August 2026 — that's when the bulk of compliance requirements kick in for high-risk AI systems. If your business uses or provides AI systems that fall into the high-risk category, you need to be compliant by then.
There is a potential extension on the table. The EU Commission's "Digital Omnibus" proposals from November 2025 could push some high-risk obligations to 2027 for SMEs, but this hasn't been adopted yet and shouldn't be relied on for planning purposes.
How the Act Categorises AI Systems
The EU AI Act uses a risk-based approach — the obligations you face depend on which risk category your AI system falls into.
Unacceptable risk — prohibited outright (in effect since February 2025):
These are banned entirely. Examples include:
- AI systems that manipulate people using subliminal techniques
- Social scoring systems that rate citizens based on behaviour
- Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
- Predictive policing based on protected characteristics
- AI that infers emotions in workplace or educational settings (except for medical or safety reasons)
If your business uses any of these, you need to stop immediately — these prohibitions are already in effect.
High risk — significant compliance obligations (from August 2026):
High-risk systems are those used in areas where errors could cause serious harm. The Act lists specific categories including:
- Biometric identification systems
- AI used in critical infrastructure (energy, water, transport)
- AI used in education — student assessment, admissions decisions
- AI used in employment — recruitment, performance evaluation, termination decisions
- AI used in access to essential services — credit scoring, insurance, benefits
- AI used in law enforcement
- AI used in administration of justice
If your business uses AI in any of these areas — including AI-powered HR tools for screening CVs, credit risk models, or automated performance management — you're likely dealing with high-risk systems.
Limited risk — transparency requirements (from August 2026):
These are systems that interact directly with people or generate content. Requirements are lighter but real:
- Chatbots must disclose they are AI
- AI-generated images, audio, or video must be labelled as artificially generated
- Deepfake content must be disclosed
Minimal risk — no specific obligations:
Most AI tools fall here — spam filters, AI-powered search, recommendation engines, basic automation. No specific Act obligations apply, though the EU Commission encourages voluntary codes of conduct.
What High-Risk AI Compliance Requires
If your business provides or deploys high-risk AI systems, the August 2026 obligations are substantial. Here's what's required:
For providers (organisations that develop and place AI systems on the market):
- Quality management system — documented processes covering design, development, testing, and monitoring of the AI system
- Technical documentation — detailed records of the system's purpose, design, training data, performance metrics, and risk mitigation measures
- Data governance — evidence that training data is representative, of sufficient quality, and free from bias
- Human oversight — documented mechanisms for human intervention, including when and how humans can override the system
- Conformity assessment — a formal assessment confirming the system meets Act requirements (self-assessment for most systems, third-party for highest-risk)
- Registration — high-risk systems must be registered in the EU's AI database before being placed on the market
- Post-market monitoring — ongoing monitoring of system performance after deployment
- Incident reporting — serious incidents must be reported to national authorities
For deployers (organisations that use AI systems in their operations):
- Risk assessment — assess how the AI system affects people in your specific deployment context
- Human oversight — implement the oversight measures specified by the provider
- Fundamental rights impact assessment — required for certain deployers (public bodies and some private operators of high-risk systems)
- Staff training — ensure staff using high-risk AI have sufficient AI literacy
- Record keeping — maintain logs of system operation where the system generates them
What Are the Penalties?
The fines are significant and based on global turnover — not just EU revenue:
| Violation | Maximum fine |
|---|---|
| Prohibited AI practices | €35 million or 7% of global annual turnover (whichever is higher) |
| Most other violations (high-risk obligations, transparency) | €15 million or 3% of global annual turnover |
| Providing incorrect information to authorities | €7.5 million or 1% of global annual turnover |
For SMEs, the cap works the other way round: the fine is limited to the lower of the fixed amount and the turnover percentage. But even 3% of a £5m-turnover business is £150,000 — these are not trivial numbers.
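The "whichever is higher" (or, for SMEs, "whichever is lower") rule is easy to sketch. This is an illustrative calculation only, not legal advice — and for simplicity it ignores currency conversion between the Act's euro-denominated caps and a sterling turnover figure:

```python
def fine_ceiling(global_turnover, pct, fixed_cap, sme=False):
    """Maximum fine under the EU AI Act's tiered penalty structure.

    Non-SMEs face the *higher* of the fixed amount and the turnover
    percentage; SMEs face the *lower*. Currency conversion is ignored
    here for illustration.
    """
    pct_amount = global_turnover * pct
    return min(fixed_cap, pct_amount) if sme else max(fixed_cap, pct_amount)

# SME with £5m turnover, high-risk violation tier (3% / €15m):
print(fine_ceiling(5_000_000, 0.03, 15_000_000, sme=True))   # 150000.0

# A large enterprise with the same turnover faces the fixed cap instead:
print(fine_ceiling(5_000_000, 0.03, 15_000_000, sme=False))  # 15000000
```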
Does the UK Have Its Own AI Regulation?
As of March 2026, the UK does not have a single AI law. The UK government has taken a principles-based, sector-specific approach — asking existing regulators (ICO, FCA, CMA, etc.) to apply existing laws to AI in their respective areas.
This means:
- UK businesses are not subject to the EU AI Act as domestic law
- But UK businesses operating in the EU market face the EU AI Act regardless
- The UK's approach creates divergence — you may face different obligations for your UK operations versus your EU-facing operations
- A UK AI Bill was anticipated in 2025 but did not materialise — the government has indicated more time is needed
The practical implication: UK businesses with EU exposure need to comply with the EU AI Act for that part of their operations, while also tracking what UK-specific requirements emerge from sector regulators like the ICO.
What Should UK SMBs Do Now?
The August 2026 deadline is close. Here's a practical starting point:
Step 1: Inventory your AI systems
List every AI tool your business uses or provides — including third-party software with AI features. Include HR tools, customer-facing chatbots, predictive analytics, automated decision-making systems, and anything that uses machine learning.
Step 2: Assess whether you're in scope
For each system, ask: does it touch the EU market? Do EU customers, employees, or users interact with it, or are they affected by its outputs? If yes, the Act applies.
Step 3: Categorise each system by risk
For each in-scope system, determine which risk category it falls into. Most business tools will be minimal or limited risk. HR and credit-related AI tools are more likely to be high-risk.
Step 4: Check for prohibited practices
Review any AI systems that make inferences about people's emotions, characteristics, or behaviour. If any cross into prohibited territory, stop using them — these rules are already in effect.
Step 5: Build documentation for high-risk systems
If you have high-risk systems, start building the required documentation — intended use, risk assessments, human oversight procedures, data governance records. This takes time to do properly.
Step 6: Review supplier agreements
If you use third-party AI systems (which most businesses do), check whether your suppliers have provided the required technical documentation and whether your contracts allocate compliance responsibilities appropriately.
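To make the inventory and categorisation steps concrete, here is one way an AI system register might be structured. The field names and example entries are our own illustration — the Act does not prescribe a format:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an AI system inventory. Field names are illustrative,
    not mandated by the EU AI Act."""
    name: str            # e.g. "CV screening tool"
    supplier: str        # third-party vendor, or "in-house"
    role: str            # "provider" or "deployer" under the Act
    eu_exposure: bool    # do outputs reach or affect people in the EU?
    risk_category: str   # "prohibited" | "high" | "limited" | "minimal"
    notes: str = ""

inventory = [
    AISystemRecord("CV screening tool", "Acme HR SaaS", "deployer",
                   eu_exposure=True, risk_category="high",
                   notes="Employment use case — high-risk category"),
    AISystemRecord("Website chatbot", "in-house", "provider",
                   eu_exposure=True, risk_category="limited",
                   notes="Must disclose that it is AI"),
    AISystemRecord("Spam filter", "email provider", "deployer",
                   eu_exposure=True, risk_category="minimal"),
]

# Systems facing the full August 2026 documentation workload:
high_risk = [s.name for s in inventory
             if s.eu_exposure and s.risk_category == "high"]
print(high_risk)  # ['CV screening tool']
```

Even a spreadsheet with these columns is enough to start — the point is having one place that answers "what AI do we use, who supplies it, and which risk tier is it in?"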
How This Relates to Your Existing Compliance Programme
If you're already working towards ISO 27001 or have a GDPR compliance programme in place, you have a head start on EU AI Act compliance. Several requirements overlap:
- Data governance — GDPR Article 30 records and data quality processes feed directly into AI Act data documentation requirements
- Risk assessment — your existing risk register methodology applies to AI system risk assessments
- Incident management — AI Act incident reporting builds on your existing security incident procedures
- Policy management — AI governance policies fit naturally within your existing ISMS policy framework
The EU AI Act isn't a completely separate compliance programme — it's an extension of your existing governance framework into AI systems specifically. Organisations with a mature ISMS will find the documentation and process requirements familiar.
SnapGRC supports ISO 42001 (the AI management system standard that aligns with EU AI Act requirements) alongside ISO 27001, GDPR, and other frameworks — so your AI Act compliance documentation sits in the same place as the rest of your compliance programme.
See how SnapGRC supports AI Act compliance →
Summary
The EU AI Act is real, the deadlines are close, and its extraterritorial scope means many UK businesses are affected despite Brexit.
The key points:
- If you sell AI systems or AI-enabled software to EU customers, you're in scope as a provider
- If you use AI tools that affect EU employees or customers, you're in scope as a deployer
- Prohibited AI practices are already banned — check your systems now
- The main compliance deadline for high-risk systems is 2 August 2026
- The UK has no equivalent domestic AI law yet — but that doesn't exempt you from the EU Act's reach
- Penalties are based on global turnover and are substantial
For most UK SMBs, the immediate actions are an AI system inventory, a scope assessment, and checking for any prohibited practices. High-risk systems require more significant documentation work — start now if you haven't already.
SnapGRC is a compliance management platform for UK SMBs and MSPs. ISO 27001, ISO 42001, GDPR, Cyber Essentials, NIS2 and 40+ frameworks — without the enterprise price tag. Learn more →