DECODING THE AI ACT PART 5 - Requirements for providers of High-Risk AI Systems Part 1

In the previous article in our series Decoding the AI Act, we explained what is considered a high-risk AI system under the AI Act. This naturally raises the question: What requirements apply to whom (depending on the role) when it comes to high-risk AI systems? To say the least, there are many, especially for providers of high-risk AI systems. We have therefore decided to present the requirements in several articles, with this article elaborating on the requirements for providers of high-risk AI systems found in Section 2 of the Act.

In the next article we will finalise the list of requirements for providers of high-risk AI systems (yes, there are many!) to help you follow along, and in subsequent articles we will look at the responsibilities and obligations of the other parties involved. Stay tuned!

The “intended purpose” of the AI system

Something that is central to understanding the obligations of providers in relation to AI systems overall (irrespective of the risk level) is the intended purpose of the AI system. The AI Act defines this as “the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation”. Just as the GDPR is purpose-centric (see the principle of purpose limitation), the AI Act appears to have adopted a similar "intended purpose" approach. This is illustrated by the fact that the various requirements under the AI Act are set forth with the intent to ensure that the AI system fulfils its intended purpose. With that in mind, let's go through the various requirements for high-risk AI systems found in Section 2 of the AI Act.

Requirements for high-risk AI systems found in Section 2 of the AI Act

1. Having a Risk Management System in place (Article 9)

What is a risk management system (“RMS”)?

An RMS is a process that enables the provider to:

(i) identify the risks or adverse impacts on health, safety and fundamental rights. It is important to consider not only the intended purpose of the AI system stricto sensu, but also misuse of the AI system resulting from readily predictable human behaviour; and

(ii) implement mitigation measures for the known and reasonably foreseeable risks posed by the AI system.

The mitigation measures chosen should be the most appropriate risk-management measures in light of the state of the art in AI. Article 9 of the AI Act provides more information about what you need to consider when choosing the appropriate measures. It is important that any residual risk remaining after mitigation is judged to be “acceptable”. Testing is also important to ensure that the AI system performs consistently for its intended purpose.

An RMS is not a one-off exercise but rather the opposite; it is a living process that should cover the whole lifetime of the AI system, with regular reviews to ensure its continued effectiveness. The RMS resembles the DPIA concept under the GDPR, which is likewise a process for managing the risks of high-risk processing of personal data.
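
To make the idea of a living process a little more concrete, the sketch below shows one way a provider could keep a simple risk register in code. It is purely illustrative: the AI Act does not prescribe any particular format or tooling for the RMS, and the field names and the 90-day review interval are assumptions of our own.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Purely illustrative: the AI Act does not prescribe any particular data
# structure or tooling for the risk management system.

@dataclass
class Risk:
    description: str                 # e.g. "scores drift when income data is missing"
    affected_interests: List[str]    # health, safety and/or fundamental rights
    from_foreseeable_misuse: bool    # does the risk stem from reasonably foreseeable misuse?
    mitigation: str                  # the risk-management measure chosen
    residual_risk_acceptable: bool   # judgement after mitigation and testing

@dataclass
class RiskRegister:
    system_name: str
    risks: List[Risk] = field(default_factory=list)
    last_review: date = field(default_factory=date.today)

    def due_for_review(self, today: date, interval_days: int = 90) -> bool:
        # The RMS is a continuous, iterative process, so reviews should recur
        # throughout the lifecycle of the AI system.
        return (today - self.last_review).days >= interval_days

register = RiskRegister(system_name="credit-scoring-model")
register.risks.append(Risk(
    description="scores drift when income data is missing",
    affected_interests=["fundamental rights"],
    from_foreseeable_misuse=False,
    mitigation="reject incomplete applications and route them to manual review",
    residual_risk_acceptable=True,
))
print(register.due_for_review(today=date(2025, 6, 1)))
```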

2. Data and data governance (Article 10)

If the high-risk AI system involves training models with data, the training, validation, and testing data sets must fulfil certain requirements. The data sets must be relevant, sufficiently representative, and, as far as possible, free of errors and complete in relation to the intended purpose. However, these requirements should not affect the use of privacy-preserving techniques in the context of the development and testing of AI systems. In addition, the data sets shall be subject to data governance and management practices. These practices shall concern, for example:

  • data collection processes and origin of the data;
  • data preparation processes;
  • assessment of the availability, quantity and suitability of the data;
  • examination of biases that could affect the health and safety of persons, have negative impacts on fundamental rights or lead to unlawful discrimination;
  • identification of data gaps or shortcomings and how these can be addressed.

Additionally, the data sets must take into account the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting in which the high-risk AI system is intended to be used, in order to avoid biases.

What is interesting from a GDPR perspective is that the AI Act exceptionally allows providers to process special categories of personal data, to the extent that this is strictly necessary to ensure bias detection and correction and provided, of course, that certain safeguards are met. It is fair to say that this exception will cause a lot of headaches for many data protection practitioners, but if applied correctly (see Article 10(5)), it is for a good cause.
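
As a purely illustrative example of what one such data-governance practice could look like in practice, the sketch below checks whether any group in a training set is under-represented. The group key, the 5% threshold and the sample records are assumptions made for this example; the AI Act does not prescribe any particular method or threshold.

```python
from collections import Counter

# Purely illustrative sketch of one data-governance check: is every group the
# system will be used on sufficiently represented in the training data?
# The group key ("region"), the 5% threshold and the records are assumptions.

def representation_report(records, group_key="region", min_share=0.05):
    """Return each group's share of the data set and flag under-represented groups."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": round(n / total, 3), "under_represented": n / total < min_share}
        for group, n in counts.items()
    }

training_data = [
    {"region": "north", "label": 1},
    {"region": "north", "label": 0},
    {"region": "north", "label": 1},
    {"region": "south", "label": 0},
]
print(representation_report(training_data))
```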

3. Technical documentation (Article 11)

Before placing a high-risk AI system on the market or putting it into service, providers must draw up comprehensive technical documentation (“TD”) for the system.

The TD should contain all the information necessary to demonstrate that the high-risk AI system complies with the requirements set out in Section 2 of the AI Act (which is the subject matter of this article) and to enable post-market monitoring and overall traceability.

In plain language, providers must provide information on, among other things, the general characteristics of the AI system, its capabilities and limitations, its intended purpose, and a description of the training, testing and validation processes used. For the full list of information to be included in the TD, see Annex IV of the Act. As Annex IV requires a lot of information, the AI Act adopts a more pragmatic approach for SMEs and start-ups and allows them to provide the required information in a simplified form.

It is important to note that competent authorities can request the TD as proof of compliance.

4. Record-keeping (Article 12)

Providers must design high-risk AI systems in a way that allows for the automatic recording of events (so-called logs) throughout the entire lifecycle of the AI system. The purpose of the record-keeping is traceability; providers must be able to identify situations where the AI system poses a risk to fundamental rights and to monitor the functioning of the AI system in relation to its intended purpose.
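
By way of illustration only, a provider might implement such automatic event recording along the lines of the sketch below. The specific fields (timestamp, model version, a reference to the input, the output and a confidence score) are our own assumptions; the AI Act requires logging capabilities but does not dictate this particular format.

```python
import json
import logging
from datetime import datetime, timezone

# Purely illustrative: a structured log entry per prediction so that individual
# outputs can be traced afterwards. The field names are assumptions.

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high_risk_ai_system")

def log_inference_event(input_reference: str, model_version: str, output, confidence: float) -> None:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_reference": input_reference,  # a reference to the input, not the input itself
        "model_version": model_version,
        "output": output,
        "confidence": confidence,
    }
    logger.info(json.dumps(event))

log_inference_event("application-2024-0042", "credit-model-1.3.0", "manual review", 0.62)
```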

5. Transparency and provision of information to deployers (Article 13)

Transparency is crucial under the AI Act. Before placing a high-risk AI system on the market or putting it into service, providers must adopt a “transparency by design” approach: they must provide deployers with clear information about how the system works, including its capabilities and limitations, and enable deployers to evaluate its functionality.

In practice, this means that the provision of a high-risk AI system to any deployer should be accompanied by clear information and instructions for use from the provider, including, in plain and clear language, at least the information listed in Article 13(2)(a)-(f) of the AI Act.
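
One way to keep such information consistent across the instructions for use and the technical documentation is to maintain it in a machine-readable form, for example along the lines of the sketch below. The fields and example values shown are illustrative assumptions only; the authoritative list of required information is found in Article 13 of the AI Act.

```python
# Purely illustrative: a machine-readable summary that could feed the
# instructions for use supplied to deployers. Field names and values are
# assumptions; Article 13 of the AI Act contains the authoritative list.

instructions_for_use = {
    "provider": {"name": "Example Provider AB", "contact": "compliance@example.com"},
    "intended_purpose": "Pre-screening of job applications for role X",
    "capabilities_and_limitations": {
        "supported_languages": ["sv", "en"],
        "not_suitable_for": ["applicants under 18", "handwritten applications"],
    },
    "accuracy_metrics": {"f1_score": 0.87, "evaluation_data": "held-out test set"},
    "human_oversight_measures": "All rejections must be confirmed by a recruiter.",
    "logging": "Each prediction is logged with a timestamp and model version.",
}

# A deployer-facing document can then be generated from this single source.
for key, value in instructions_for_use.items():
    print(f"{key}: {value}")
```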

For data protection practitioners, this requirement is similar to the transparency requirement imposed on controllers by the GDPR.

6. Human oversight (Article 14)

Providers must develop and design high-risk AI systems in such a way that natural persons can oversee their functioning and ensure that the system fulfils its intended purpose (human oversight) throughout the whole period of its use.

The purpose of human oversight is to prevent and minimise the risks to health, safety, and fundamental rights that may arise when the system is used in line with its intended purpose or under foreseeable misuse. Human oversight is performed by the deployers of the high-risk AI system. To this end, providers must implement appropriate measures to enable human oversight by deployers before placing high-risk AI systems on the market or putting them into service. The oversight measures should be proportionate to the risks, the level of autonomy and the context of use of the AI system, and should guarantee that the system is subject to built-in operational constraints that cannot be overridden by the system itself and that the system remains responsive to the human operator. The human operator carrying out the oversight should be able to (see also the sketch after this list):

  • understand the high-risk AI system’s capacities and limitations, monitor the system’s operation, and detect anomalies and unexpected performance;
  • be aware of "automation bias" (i.e. automatic reliance or over-reliance on the output of the AI system);
  • correctly interpret the output and have tools and methods available for this;
  • decide not to use the AI system or to disregard, override or reverse the output;
  • intervene or interrupt the system using a stop button or similar.
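
The sketch below is a minimal, purely illustrative example of how a provider might build such an oversight hook into a system, so that a human operator can override the output or stop the system entirely. The function name, the confidence threshold and the review structure are our own assumptions, not anything prescribed by the AI Act.

```python
# Purely illustrative: one way a provider could expose an oversight hook so
# that a human operator can override the system's output or stop it entirely.
# The threshold, field names and review structure are assumptions.

def decide_with_human_oversight(model_output, confidence, operator_review, confidence_threshold=0.80):
    """Let the human operator's decision prevail; route low-confidence cases to the operator."""
    if operator_review.get("stop_requested"):
        # "Stop button": the operator interrupts the system altogether.
        return {"action": "halted", "reason": "operator interrupted the system"}
    if operator_review.get("override") or confidence < confidence_threshold:
        # The operator disregards or reverses the output, or the case is too
        # uncertain to act on automatically.
        return {"action": "human_decision", "value": operator_review.get("decision")}
    return {"action": "system_decision", "value": model_output}

# Example: a low-confidence output is not acted on automatically.
print(decide_with_human_oversight("approve", 0.65, {"decision": "reject", "override": False}))
```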

7. Accuracy, robustness and cybersecurity (Article 15)

Providers shall develop and design high-risk AI systems in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity throughout the AI system's entire lifecycle. This means that high-risk AI systems must be able to maintain their performance and accuracy under expected and unexpected variations in, for example, the input data and the environment. In other words, the system must be as resilient as possible to errors, failures or inconsistencies that may occur, for example, when a human interacts with the system. In addition, a high-risk AI system must have an appropriate level of protection against unauthorised attempts to alter the use of the AI system or its output. The accuracy levels and relevant accuracy metrics of high-risk AI systems must be stated in the accompanying instructions for use.
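
As a simple illustration of one such resilience technique (and only one of many), the sketch below validates the input before running the model and falls back to a safe behaviour when the input is outside the range the system was designed for. The feature names, ranges and fallback are assumptions made for the example.

```python
# Purely illustrative robustness measure: validate the input and fall back to a
# safe behaviour instead of producing an unreliable output when the input lies
# outside the range the system was designed for. Names and ranges are assumptions.

EXPECTED_RANGES = {"age": (18, 100), "monthly_income": (0, 1_000_000)}

def robust_predict(features: dict, model_predict):
    """Only run the model on inputs within the expected ranges; otherwise defer."""
    for name, (low, high) in EXPECTED_RANGES.items():
        value = features.get(name)
        if value is None or not (low <= value <= high):
            # Missing or out-of-range input: defer to manual handling rather
            # than returning a prediction the system was not designed for.
            return {"status": "deferred", "reason": f"unexpected value for {name}"}
    return {"status": "ok", "prediction": model_predict(features)}

print(robust_predict({"age": 212, "monthly_income": 30_000}, lambda f: "approve"))
```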

If a high-risk AI system is to continue learning after it is placed on the market or put into service, it must also be developed to eliminate or at least reduce the risk of any bias in the output affecting the input for future operations (so-called “feedback loops”) and ensure that these feedback loops are managed with appropriate mitigation measures.
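
For a continuously learning system, one common mitigation against feedback loops is to mark records that were influenced by the system's own output and exclude (or down-weight) them when retraining. The minimal sketch below assumes such a flag exists in the data; the field name is ours, and the AI Act does not prescribe any specific technique.

```python
# Purely illustrative mitigation against feedback loops in a continuously
# learning system: exclude records whose labels were influenced by the
# system's own earlier output. The flag name is an assumption of ours.

def select_retraining_data(records):
    """Keep only records whose labels did not originate from the system's own output."""
    return [r for r in records if not r.get("label_from_model_output", False)]

new_records = [
    {"features": [0.2, 1.1], "label": 1, "label_from_model_output": False},
    {"features": [0.4, 0.9], "label": 0, "label_from_model_output": True},  # would feed a loop
]
print(select_retraining_data(new_records))
```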

Key takeaways and next steps

Before a high-risk AI system is placed on the market or put into service, and before the output can be used in the EU, providers must comply with all the requirements listed above. As mentioned already, these requirements stem from Section 2 of the Act, and they are not the whole story; in the next article, we will analyse the requirements under Section 3 of the Act, so stay tuned to ensure that you have the full picture of the applicable requirements.

Written by: Elisavet Dravalou & Kristin Tell

Contributions: Tara Turcinovic & Hugo Snöbohm Hartzell

Synch brings calm to the rapidly evolving field of artificial intelligence by deep-diving into the dos and don'ts. Whether you're a tech enthusiast, a professional in the field, or simply curious about the future, our series of articles about AI aims to both inform and inspire.
