Artificial intelligence is moving faster than most organisations can track. New tools, new capabilities, and new risks are emerging constantly — and the regulatory and standards landscape is scrambling to keep up. ISO 42001, published in December 2023, is the first serious attempt by the international standards community to provide a structured framework for managing AI responsibly.
If you're already familiar with ISO 27001 for information security or ISO 9001 for quality management, ISO 42001 will feel immediately recognisable. It follows the same high-level structure, uses the same management system approach, and can be integrated with your existing frameworks. But it addresses a set of risks that no previous standard has tackled directly — the risks that come from developing, deploying, and using artificial intelligence.
What is ISO 42001?
ISO 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System — an AIMS. Published in December 2023 as ISO/IEC 42001 by the International Organization for Standardization and the International Electrotechnical Commission, it provides organisations with a structured framework for governing AI responsibly across their operations.
The standard applies to any organisation that develops AI systems, deploys AI systems, or uses AI as part of its products or services — regardless of size, sector, or whether the AI is built in-house or procured from a third party.
As with ISO 27001, certification to ISO 42001 is voluntary. But voluntary standards have a way of becoming de facto requirements once clients, regulators, and procurement teams start asking for evidence of responsible AI governance.
Why does ISO 42001 exist?
AI introduces a category of risk that existing management standards don't fully address. ISO 27001 covers the security of information systems but doesn't specifically address the risks that come from algorithmic decision-making, bias in training data, explainability of AI outputs, or the ethical implications of automated systems.
The GDPR addresses automated decision-making to some extent — particularly in Article 22, which gives individuals the right not to be subject to solely automated decisions that significantly affect them. But GDPR is a data protection law, not an AI governance framework.
ISO 42001 fills that gap. It provides a structured way for organisations to identify the risks specific to their AI use cases, implement appropriate controls, demonstrate accountability, and continually improve their approach as AI technology and its risks evolve.
Who needs ISO 42001?
ISO 42001 is relevant to three broad categories of organisation.
The first is AI developers — companies building AI systems, training models, or developing AI-powered products. If you're a technology company with AI at the core of what you do, ISO 42001 provides the governance framework to demonstrate that your development practices are responsible and auditable.
The second is AI deployers — organisations that take AI systems built by others and deploy them in their own products or services. If you're integrating a third-party AI model into your customer-facing application, you are a deployer under the EU AI Act framework, and ISO 42001 gives you a way to manage and demonstrate responsible deployment.
The third is AI users — businesses that use AI tools as part of their operations. This is the category most small and mid-sized businesses fall into. If you're using AI-powered recruitment tools, customer service chatbots, fraud detection systems, or content generation tools, you are an AI user, and ISO 42001 is increasingly relevant to how you govern that use.
What does ISO 42001 require?
ISO 42001 follows the same Plan-Do-Check-Act structure as other ISO management system standards. Its requirements cover several key areas.
Organisational context and leadership
Like ISO 27001, ISO 42001 requires organisations to understand their context — the internal and external factors that affect their AI-related risks and opportunities. It requires demonstrated leadership commitment, with senior management taking responsibility for the AIMS and setting AI policy at the organisational level.
AI policy
Organisations must establish a documented AI policy that sets out their principles and commitments around responsible AI. This should cover the organisation's approach to AI ethics, human oversight, transparency, and accountability — signed off by senior leadership.
Risk and impact assessment
At the heart of ISO 42001 is a requirement to assess the risks and impacts associated with AI systems. This goes beyond conventional risk assessment to include AI-specific considerations — bias and fairness, explainability, data quality, unintended consequences, and the broader societal impacts of AI decisions.
For high-risk AI use cases, this assessment needs to be thorough and documented; for lower-risk use cases, a lighter-touch approach is appropriate.
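To make the tiering concrete, here is a minimal sketch of how an organisation might triage its AI use cases into assessment depths. The criteria and tier names are illustrative assumptions, not anything mandated by ISO 42001 — your own tiering should come out of your documented risk methodology and, where relevant, the EU AI Act's high-risk categories.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    affects_individuals: bool   # does it make or inform decisions about people?
    fully_automated: bool       # is there no human in the loop?
    regulated_domain: bool      # HR, credit, health, law enforcement, etc.

def risk_tier(use_case: AIUseCase) -> str:
    """Return a coarse risk tier that drives how deep the documented assessment goes."""
    if use_case.regulated_domain and use_case.affects_individuals:
        return "high"    # thorough, documented impact assessment
    if use_case.affects_individuals or use_case.fully_automated:
        return "medium"  # documented, but lighter-touch review
    return "low"         # a basic register entry may suffice

# Illustrative examples of how two common use cases would be tiered.
chatbot = AIUseCase("support chatbot", affects_individuals=False,
                    fully_automated=True, regulated_domain=False)
screening = AIUseCase("CV screening", affects_individuals=True,
                      fully_automated=True, regulated_domain=True)
```

Even a rough triage like this gives you a defensible, repeatable reason why one use case got a full impact assessment and another got a register entry.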
AI system lifecycle management
ISO 42001 addresses the full lifecycle of AI systems — from design and development through deployment, monitoring, and decommissioning. Key requirements include data governance, model documentation, testing and validation before deployment, and ongoing monitoring of AI system performance in production.
Human oversight
One of the most important principles running through ISO 42001 is the requirement for meaningful human oversight of AI systems — particularly for decisions that significantly affect individuals. Organisations need to define where human review is required, how it is implemented, and how it is documented.
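In practice, "meaningful human oversight" often means a gate in the decision pipeline: outcomes above some impact threshold are held for a human reviewer rather than applied automatically. The sketch below is a hypothetical illustration of that pattern — the function name, scoring, and threshold are assumptions for this example, not part of the standard.

```python
def route_decision(impact_score: float, threshold: float = 0.7) -> str:
    """Route an AI decision: hold high-impact outcomes for human review.

    impact_score is assumed to be a 0-1 measure of how significantly the
    decision affects an individual; the 0.7 threshold is purely illustrative.
    """
    if impact_score >= threshold:
        return "human_review"  # record the reviewer, timestamp, and outcome
    return "auto_apply"        # still log the decision for the audit trail
```

The documentation requirement matters as much as the gate itself: auditors will want to see where the threshold sits, who reviews, and what happened to each held decision.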
Supplier and third-party management
If you use AI systems built by third parties, ISO 42001 requires you to assess and manage the AI-related risks in your supply chain. This includes understanding what data your AI providers are using, how their models are trained, and what obligations they are taking on in relation to responsible AI governance.
Internal audit and continual improvement
Like all ISO management system standards, ISO 42001 requires periodic internal audits of the AIMS and a commitment to continual improvement. This ensures the framework remains effective as AI technology, organisational use cases, and the regulatory environment evolve.
How does ISO 42001 relate to ISO 27001?
ISO 42001 and ISO 27001 are highly complementary. Both follow the same high-level structure, which means they can be integrated into a single management system rather than maintained as separate parallel frameworks.
ISO 27001 covers the confidentiality, integrity, and availability of information — including information processed by AI systems. ISO 42001 addresses the risks specific to AI — bias, explainability, ethical impact, and the governance of AI decision-making. Together they provide comprehensive coverage of both the security and the responsible use dimensions of AI.
For organisations already certified to ISO 27001, adding ISO 42001 is significantly less work than starting from scratch. Many of the foundational elements — management system structure, risk assessment methodology, internal audit process, leadership commitment — are already in place. The additional work is primarily in the AI-specific risk assessments, AI policy, and lifecycle management documentation.
How does ISO 42001 relate to the EU AI Act?
The EU AI Act and ISO 42001 are not the same thing — one is law and one is a voluntary standard — but they are closely aligned and mutually supportive.
The EU AI Act imposes legal obligations on organisations that develop, deploy, or use AI systems in the EU, with the strictest requirements applying to high-risk AI systems. ISO 42001 provides a management system framework that helps organisations meet those obligations in a structured and auditable way.
In practice, an organisation with a well-implemented ISO 42001 AIMS will have much of the documentation, governance, and process evidence needed to demonstrate EU AI Act compliance. The AI impact assessments, system documentation, human oversight mechanisms, and data governance practices required by ISO 42001 map closely onto the requirements the EU AI Act imposes on high-risk AI developers and deployers.
For UK businesses monitoring the development of UK AI regulation, ISO 42001 also provides a framework that is likely to align well with whatever domestic requirements emerge — making it a sensible investment regardless of which regulatory regime ultimately applies.
Do small businesses need ISO 42001?
For most small businesses the honest answer is "not yet" — but it is worth understanding and preparing for. If your business uses AI tools but none of them are high risk under the EU AI Act definition, and you have no EU customers or operations, formal certification to ISO 42001 is probably not a priority right now.
However if you are building AI into your products, using AI for HR or customer-facing decisions, selling to enterprise clients who are starting to ask about AI governance, or operating in a regulated sector, ISO 42001 is worth taking seriously sooner rather than later.
The organisations that will be best positioned when AI governance requirements tighten — whether through the EU AI Act, emerging UK regulation, or client procurement requirements — are those that started building their AI governance foundations early rather than scrambling to catch up.
Getting started with ISO 42001
The most practical starting point for most small businesses is an AI inventory — a simple register of every AI system or AI-powered tool your organisation currently uses or is considering using. Without knowing what AI you have, you cannot assess the risks it poses or the governance requirements that apply.
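An inventory like this doesn't need specialist tooling to begin with — a structured register you can export and review is enough. The sketch below shows one way to model it; the field names are illustrative assumptions, not fields mandated by ISO 42001, so adapt them to your own governance needs.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class AIInventoryEntry:
    system_name: str
    vendor: str            # "in-house" for internally built systems
    purpose: str
    personal_data: bool    # does it process personal data?
    decision_making: bool  # does it make or inform decisions about people?
    owner: str             # who is accountable for this system?

def write_register(entries: list[AIInventoryEntry], fileobj) -> None:
    """Write the inventory as CSV to an open file object, for review and audit."""
    writer = csv.DictWriter(fileobj, fieldnames=list(asdict(entries[0]).keys()))
    writer.writeheader()
    for entry in entries:
        writer.writerow(asdict(entry))
```

Once every AI-powered tool has an entry with an accountable owner, the gap analysis in the next step has something concrete to assess against.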
From there, a gap analysis against the key requirements of ISO 42001 will tell you how much work is ahead of you and where to focus first.
If you're managing ISO 27001 alongside an emerging ISO 42001 programme, the administrative overhead of running two management systems simultaneously is significant — particularly for smaller teams. SnapGRC brings your compliance frameworks together in one place, making it straightforward to manage controls, evidence, risk assessments, and documentation across multiple standards without the complexity of separate spreadsheets for each one. Book a free demo to see how it works, or visit our compliance knowledge base for more guides like this one.