DECODING THE AI ACT PART 4 – What is high-risk AI?

The EU’s AI Act (the “Act”) is a risk-based framework in which so-called high-risk AI systems are subject to most of the Act’s provisions. Whether you are considered a provider, deployer, importer, distributor, or manufacturer of a high-risk AI system, you will be responsible for meeting certain requirements under the Act. Failure to comply with these requirements can be costly, in the form of hefty administrative fines imposed by supervisory authorities. To determine whether the requirements for high-risk AI systems apply to your organization, it is essential to understand the criteria that classify AI systems as high-risk. We have therefore written this article to guide you through the challenging definition of high-risk AI systems.

What is a high-risk AI system?

An AI system is considered high-risk if it falls under one of these categories:

1)    Where the AI system is itself a product, or is intended to be used as a safety component in a product, that is already regulated under the Union harmonization legislation listed in Annex I of the Act. However, the AI system will only be regarded as high-risk if it is required to undergo a third-party conformity assessment under the applicable Union harmonization legislation before it is placed on the market or put into service in the EU.

The “Union harmonization legislation” refers to EU regulations that certain products must comply with before they can be placed on the EU market. These regulations set a uniform standard across the Union, especially with regard to aspects such as the safety, quality, and reliability of the products; they are therefore called “harmonized” rules. A “conformity assessment” is an analysis of whether a product actually meets the harmonized regulatory requirements applicable to that specific product before it is sold on the market. It may include, for example, testing, inspection, and certification, but the procedures may differ between types of regulated products and are further defined in the applicable regulations listed in Annex I of the Act.

2)    Where the AI system meets the description of one of the eight categories of use listed as high-risk in Annex III of the Act. Note, however, that the European Commission will assess the need to amend this list, so it may be adjusted in the future. Also bear in mind that the use-cases listed in Annex III are only considered high-risk if they pose a significant risk of harm to the health, safety, or fundamental rights of human beings.

The lists of high-risk categories in the Annexes are long and detailed, and there are also exceptions. This can make the question – what is actually considered high-risk AI? – complex to answer. We have therefore tried to break the assessment down into a simpler “flowchart”, as presented below.
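For readers who prefer to think in code, the same flow can be captured as plain decision logic. The following minimal Python sketch is purely illustrative: the function and parameter names are our own simplifications of the Act’s criteria, not terms defined in the Act, and the sketch is no substitute for a legal analysis.

def is_high_risk(
    annex_i_product_or_safety_component: bool,  # covered by Annex I legislation
    third_party_assessment_required: bool,      # third-party conformity assessment
    annex_iii_use_case: bool,                   # matches an Annex III use-case
    significant_risk_of_harm: bool,             # to health, safety, or fundamental rights
) -> bool:
    # Route 1: the system is itself a product, or a safety component of a
    # product, regulated under the Annex I legislation, but only if a
    # third-party conformity assessment is required.
    if annex_i_product_or_safety_component and third_party_assessment_required:
        return True
    # Route 2: the system matches an Annex III use-case and poses a
    # significant risk of harm (the exceptions are discussed further below).
    return annex_iii_use_case and significant_risk_of_harm

# Example: a hypothetical Annex III recruitment tool posing a significant risk
print(is_high_risk(False, False, True, True))  # True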

Selected examples of Annex III use-cases

Below, we take you through two of the high-risk use-cases listed in Annex III of the Act, namely Employment and Biometrics. We have chosen to focus on these use-cases because, in our experience, they are the ones that raise the most questions from our clients.

Example 1 - Employment, worker management and access to self-employment

This category includes AI systems intended to be used:

  • for the recruitment of candidates for a job position, e.g., placing targeted job advertisements,
  • for the selection of candidates, e.g., analyzing and filtering job applications or in other ways evaluating candidates,
  • to make decisions affecting terms of work-related relationships, as well as the promotion or termination of work-related contractual relationships,
  • to allocate tasks based on individual behavior or personal traits or characteristics of employees, or
  • to monitor and evaluate the performance and behavior of employees.

AI systems used for any of the above are considered high-risk mainly because they may have a significant impact on an employee’s future career prospects, livelihood, and workers’ rights, especially considering the risk that such systems perpetuate historical patterns of discrimination.

Example 2 - Biometrics

AI systems used for the following purposes are considered high-risk as part of the category “Biometrics”:

Remote biometric identification

  • Remote biometric identification (RBI) is a technology that uses AI systems to recognize individuals who are typically at a distance, usually in a publicly accessible place such as a street or on public transport. In order to identify individuals, the RBI system needs to scan the biometric data [1] of the individuals occupying the space.
  • NOTE: there is an exception for AI systems where the sole purpose is verifying that a person is who they claim to be – these types of AI systems are not regarded as high-risk.

Biometric categorization

  • Biometric categorization is, by definition, a technology that uses AI systems to categorize people according to their biometric data. Examples include categorization relating to aspects such as sex, age, hair colour, eye colour, tattoos, behavioral or personality traits, language, religion, membership of a national minority, or sexual or political orientation.

Emotion recognition

  • Emotion recognition systems are defined as AI systems that have the purpose of identifying or inferring the emotions or intentions of individuals on the basis of their biometric data (such as facial expressions, body language, tone of voice, etc.).

AI systems that fall under Annex III but are NOT high-risk AI systems

Even if an AI system falls within one of the eight high-risk categories set forth in Annex III, the system would not be considered high-risk if it does not pose a “significant risk of harm” to the health, safety, or fundamental rights of a natural person (i.e., a human being).

The Act provides further guidance by listing four cases in which the AI system cannot be considered to pose such a risk. An AI system is not to be considered high-risk if it is intended to:

a)    perform a “narrow procedural task”, for example transform unstructured data into structured data, classify incoming documents into categories, or detect duplicates among a large number of applications,

b)    improve the results of a previously completed human activity,

c)    detect decision-making patterns without replacing or influencing the previously completed human assessment (AI systems are, however, allowed to replace or influence human decisions if the AI system’s decision will undergo proper human review),

d)    perform a preparatory task to an assessment relevant for the purposes of the cases listed in Annex III.

Nevertheless, it is important to note that an AI system will always be considered high-risk if it performs profiling of natural persons, [2] regardless of whether the AI system is intended to fall under any of (a)-(d).
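Because the profiling caveat takes precedence over all four carve-outs, the order of the checks matters. The following minimal Python sketch shows one way this logic could be encoded; the enum members and function name are our own illustrative assumptions, not terms from the Act.

from enum import Enum, auto

class IntendedTask(Enum):
    NARROW_PROCEDURAL_TASK = auto()            # case (a)
    IMPROVE_COMPLETED_HUMAN_ACTIVITY = auto()  # case (b)
    DETECT_DECISION_PATTERNS = auto()          # case (c), with proper human review
    PREPARATORY_TASK = auto()                  # case (d)
    OTHER = auto()                             # none of the carve-outs apply

def annex_iii_system_is_high_risk(task: IntendedTask,
                                  performs_profiling: bool) -> bool:
    # Profiling of natural persons always makes the system high-risk,
    # regardless of the (a)-(d) carve-outs.
    if performs_profiling:
        return True
    # Otherwise, the system escapes the high-risk classification if it
    # falls under one of the four carve-outs (provided the assessment is
    # substantiated and documented, as discussed below).
    return task is IntendedTask.OTHER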

If you assess that your AI system, although it falls under one of the categories in Annex III, does not pose a “significant risk of harm”, it is of great importance to substantiate and clearly document this assessment. If the relevant supervisory authority has sufficient reason to consider that an AI system classified by the provider as non-high-risk is indeed high-risk, the authority can carry out an evaluation of the AI system and, if it considers the system high-risk, require the provider to take all necessary actions to comply with the Act.

Key takeaways and next steps

There are multiple steps involved in determining whether your AI system is classified as high-risk under the AI Act, and there are many high-risk categories into which an AI system could fall. It is undoubtedly an important assessment to make, as the vast majority of the requirements in the Act apply to high-risk AI systems. If you suspect that an AI system you are, e.g., developing or using may fall into the high-risk categories, you should always consult a legal expert within the AI field.

The AI Act was finally published in the EU Official Journal during the summer, on 12 July 2024, and entered into force on 1 August 2024. The majority of the requirements will apply two years after the Act entered into force (that is, from August 2026). The European Commission shall, no later than 2 February 2026, provide guidelines on the classification of AI systems as high-risk, including practical examples of use-cases that are considered high-risk and not high-risk, respectively. We look forward to these guidelines, which will hopefully provide more clarity on how the categories of high-risk AI should be understood.

Stay tuned for more updates on the AI Act in our Decoding the AI Act series! In the upcoming articles, we will guide you through some of the requirements on high-risk AI systems.  

Written by: Tara Turcinovic & Hugo Snöbohm Hartzell

Contributions: Elisavet Dravalou & Kristin Tell

[1] Biometric data is any data relating to the physical, physiological, or behavioral characteristics of an individual.

[2] Profiling is defined in the GDPR as any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location, or movements.

Synch brings calm to the rapidly evolving field of artificial intelligence by deep-diving into the dos and don'ts. Whether you're a tech enthusiast, a professional in the field, or simply curious about the future, our series of articles about AI aims to both inform and inspire.
