AI Conformity Assessment — Definition (Glossary)
Key Takeaway: A conformity assessment is the mandatory verification process that high-risk AI systems must pass before being placed on the EU market. It culminates in a documented EU declaration of conformity and CE marking, confirming that the system meets the requirements of EU AI Act Articles 9–15.
What Is an AI Conformity Assessment?
An AI conformity assessment is the process by which the provider of an AI system verifies, and documents, that the system meets the requirements of the [link:/glossary/ai-act] before placing it on the EU market or putting it into service. For [link:/glossary/high-risk-ai-systems], a conformity assessment is legally mandatory under Article 43.
The assessment covers risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, robustness, and cybersecurity (Articles 9–15). Providers must keep the documentation at the disposal of national authorities for ten years after the system is placed on the market or put into service, and must repeat the assessment whenever the system undergoes a substantial modification.
Two pathways exist: internal assessment, in which the provider self-certifies by applying harmonized standards (the default for most Annex III systems), and third-party assessment, in which a notified body conducts the review (required for biometric identification systems when harmonized standards have not been applied in full, and for AI safety components in products already covered by EU product legislation).
For the procedural guide, the CE marking workflow, and a step-by-step walkthrough of Article 43, see AI Conformity Assessment Framework.
Related Terms
- [link:/glossary/ai-act]
- [link:/glossary/high-risk-ai-systems]
- [link:/glossary/ai-risk-classification]
- [link:/glossary/ai-audit]
- [link:/glossary/iso-42001]
- [link:/glossary/trustworthy-ai]