Which AI Systems Are Classified as High-Risk Under the EU AI Act?

The EU AI Act entered into force in August 2024, with its obligations phasing in through 2027. It adopts a risk-based approach to regulating AI systems, categorizing them into unacceptable, high, limited, and minimal risk levels. High-risk AI systems face the strictest obligations due to their potential impact on health, safety, and fundamental rights.

The Risk-Based Framework of the EU AI Act

The Act classifies AI into four tiers: unacceptable risk (prohibited uses like social scoring), high risk (stringent compliance), limited risk (transparency requirements), and minimal risk (no mandatory obligations).[1][2] High-risk systems must undergo conformity assessment, maintain technical documentation, register in the EU database, and ensure human oversight, accuracy, robustness, and cybersecurity.[4]
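As an illustration only, the four tiers can be captured in a small enum; the consequence strings are this article's shorthand, not the Act's wording:

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers; values are this article's shorthand summaries."""
    UNACCEPTABLE = "prohibited (e.g., social scoring)"
    HIGH = "stringent compliance: conformity assessment, registration, oversight"
    LIMITED = "transparency requirements"
    MINIMAL = "no mandatory obligations"

print(RiskTier.HIGH.value)
```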

Article 6 sets out the classification rules: an AI system is high-risk if it is a safety component of a product (or is itself a product) covered by the EU harmonized legislation listed in Annex I (e.g., medical devices, machinery) and subject to third-party conformity assessment, or if it falls under a use case listed in Annex III, unless the Annex III system poses no significant risk to health, safety, or fundamental rights.[2][5]
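This two-route logic can be sketched as a decision function. It is a simplified illustration, not legal advice; the boolean inputs are hypothetical determinations a compliance team would make:

```python
def is_high_risk(
    annex_i_safety_component: bool,        # safety component of (or itself) an Annex I product
    needs_third_party_assessment: bool,    # Annex I route also requires third-party conformity assessment
    annex_iii_use_case: bool,              # listed in Annex III
    poses_significant_risk: bool = True,   # Article 6(3) lets providers rebut this presumption
) -> bool:
    """Simplified sketch of the Article 6 classification routes."""
    # Route 1: products and safety components under Annex I harmonized legislation
    if annex_i_safety_component and needs_third_party_assessment:
        return True
    # Route 2: Annex III use cases, presumed high-risk unless the exemption applies
    if annex_iii_use_case:
        return poses_significant_risk
    return False

# e.g., an Annex III recruitment screener with no documented exemption:
print(is_high_risk(False, False, True))  # True
```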

Primary Categories of High-Risk AI Systems

High-risk AI falls into two main buckets:

  • Safety components in regulated products: AI integral to products such as medical devices (e.g., MDR class IIa, IIb, and III; IVDR class B-D), automotive systems, or aviation equipment, where the product requires third-party conformity assessment.[1][5]
  • Annex III applications: Standalone systems in critical areas, presumed high-risk unless the provider can show they pose no significant risk.[2]

Providers claiming that an Annex III system is not high-risk must document that assessment before placing the system on the market.[5]
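One way to keep such documentation consistent is a structured record. The fields below are hypothetical and purely illustrative; the Act prescribes the duty to document, not this schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Annex3ExemptionRecord:
    """Hypothetical record of an Article 6(3) 'not high-risk' assessment."""
    system_name: str
    annex_iii_area: str                 # e.g., "employment" or "education"
    rationale: str                      # why no significant risk to health, safety, or rights
    assessed_on: date
    assessor: str
    evidence: list[str] = field(default_factory=list)

record = Annex3ExemptionRecord(
    system_name="CV keyword pre-sorter",  # hypothetical system
    annex_iii_area="employment",
    rationale="Performs a narrow procedural task; no autonomous ranking of candidates.",
    assessed_on=date(2025, 6, 1),
    assessor="compliance@example.com",
)
```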

Detailed Annex III High-Risk Use Cases

Annex III lists specific applications posing significant risks. Key examples include (see the screening sketch after this list):

  • Biometrics and profiling: Remote biometric identification (real-time use in publicly accessible spaces by law enforcement is largely prohibited, with narrow exceptions), emotion recognition in workplaces and education, and biometric categorization inferring sensitive attributes.[1][3]
  • Infrastructure and security: Safety components in the management of critical infrastructure (e.g., road traffic and the supply of water, gas, heating, and electricity), and systems for crime analytics or individual risk assessments in law enforcement.[1]
  • Education and employment: AI for student evaluation/admissions, worker performance assessment, or recruitment screening.[1][4]
  • Financial services: Creditworthiness evaluation and credit scoring, and risk assessment and pricing in life and health insurance.[1]
  • Border and migration: Polygraph-style tools, risk assessments of individuals, and systems for examining asylum, visa, or residence-permit applications.[1]
  • Justice and administration: Predictive policing, systems assisting judicial authorities in interpreting facts and law, and systems determining eligibility for public assistance benefits and services.[1]
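For a first-pass internal screen, teams sometimes condense these areas into a lookup. The mapping below is an illustrative summary, not the Act's authoritative wording or an exhaustive list:

```python
# Condensed, illustrative summary of Annex III areas; consult the Act for the full text.
ANNEX_III_AREAS: dict[str, list[str]] = {
    "biometrics": ["remote biometric identification", "emotion recognition",
                   "categorisation of sensitive attributes"],
    "critical_infrastructure": ["road traffic", "water/gas/heating/electricity supply"],
    "education": ["admissions", "student evaluation"],
    "employment": ["recruitment screening", "performance monitoring"],
    "essential_services": ["credit scoring", "life/health insurance pricing"],
    "law_enforcement": ["crime analytics", "individual risk assessment"],
    "migration_border": ["polygraph-style tools", "asylum/visa application review"],
    "justice_democracy": ["judicial decision support", "predictive policing"],
}

def first_pass_screen(area: str) -> bool:
    """Crude screen: does a use case fall in an Annex III area?"""
    return area in ANNEX_III_AREAS

print(first_pass_screen("employment"))  # True
```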

Recent consultations, such as the European Commission's June 2025 call for stakeholder input on Article 6, gather examples from sectors such as the life sciences to refine the classification rules, with an emphasis on practical impacts on health and safety.

Obligations for High-Risk AI Providers

Providers of high-risk systems must (see the checklist sketch after this list):

  • Conduct risk management and data governance.
  • Ensure transparency, human oversight, and post-market monitoring.
  • Affix CE marking and register in the EU database.[4]
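A minimal sketch of how these duties might be tracked internally, assuming a plain status dictionary; the keys are this article's labels, not an official schema:

```python
# Provider duties from the list above, encoded as illustrative checklist keys.
PROVIDER_DUTIES = [
    "risk_management",
    "data_governance",
    "transparency",
    "human_oversight",
    "post_market_monitoring",
    "ce_marking",
    "eu_database_registration",
]

def outstanding(status: dict[str, bool]) -> list[str]:
    """Return the duties not yet marked complete."""
    return [duty for duty in PROVIDER_DUTIES if not status.get(duty, False)]

print(outstanding({"risk_management": True, "ce_marking": True}))
```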

Non-compliance carries fines of up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited practices, and up to €15 million or 3% for violations of the high-risk obligations.[1]
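A worked example of the "whichever is higher" rule in Article 99, using a hypothetical €2 billion global turnover:

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Administrative fine ceiling: the higher of a fixed amount or a turnover share."""
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

turnover = 2_000_000_000  # hypothetical global annual turnover
print(fine_cap(turnover, 35_000_000, 0.07))  # prohibited practices: 140000000.0
print(fine_cap(turnover, 15_000_000, 0.03))  # high-risk violations: 60000000.0
```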

Examples Across Sectors

Healthcare: Diagnostic AI in medical devices (e.g., imaging analysis) is high-risk when the device requires notified body conformity assessment.[4]

Transportation: Autonomous driving AI or traffic management systems qualify as safety components.[4]

Employment: AI tools scoring job candidates or monitoring worker productivity fall under Annex III.[1]

In the energy sector, Eurelectric advocates limiting the high-risk label to AI acting as a direct safety component in electricity supply, excluding peripheral algorithms in order to avoid undue compliance burdens.[6]

Challenges and Recent Developments

Borderline cases, such as AI playing a supporting role inside regulated products, complicate classification, and providers must self-assess rigorously.[1][6] The Commission's ongoing 2025 consultation addresses ambiguities in the high-risk rules for sectors such as healthcare.

General-purpose AI (GPAI) models carry their own obligations, and systems built on them are classified as high-risk when deployed in Annex III contexts.[3]

Conclusion

Understanding high-risk classifications under the EU AI Act is crucial for compliance. Focus on the two routes, Annex III use cases and Annex I safety components, to navigate obligations effectively, and stay updated via official EU consultations for evolving guidance.

References

  1. https://gdprlocal.com/ai-risk-classification/
  2. https://artificialintelligenceact.eu/article/6/
  3. https://www.trail-ml.com/blog/eu-ai-act-how-risk-is-classified
  4. https://www.modelop.com/ai-governance/ai-regulations-standards/eu-ai-act
  5. https://artificialintelligenceact.eu/high-level-summary/
  6. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai