DECODING THE AI ACT PART 7 – Requirements for Deployers, Distributors, and Importers of High-Risk AI Systems

In the previous two articles in our Decoding the AI Act series, we have gone through and explained the specific requirements on Providers of high-risk AI systems. But what about the other stakeholders? Fear not, we have not forgotten about them! There are several actors in the AI supply chain who are covered by the AI Act (the “Act”), and it’s a complex spider web of requirements that apply to them. These players are Deployers, Importers, Distributors, and Manufacturers of high-risk AI systems (hereinafter, at times, jointly referred to as “Actor(s)”). The requirements are designed to address the risks in relation to safety and fundamental rights throughout the AI supply chain. In this article, we’ll untangle the spider web of requirements placed on these key players in order to clarify their responsibilities under the Act.  

For the definitions and further explanations of the Actors, please revisit the second article in our Decoding the AI Act series: Who are the key players?

Requirements for Deployers

Many risks associated with AI systems arise from the way such systems are designed and programmed, but risks may also stem from how the systems are actually used in practice. Deployers of high-risk AI systems therefore play a crucial role, as they are the Actor best placed to understand how the high-risk AI system will be used in practice. Thanks to their more precise knowledge of the context of use, Deployers can identify potential risks that were not anticipated at an early stage of development.  

Follow instructions from the Provider

What would an EU regulation within the tech area be without requirements to take technical and organizational measures? With that said, the Act, unsurprisingly, also contains such obligations (as do the GDPR, the NIS2 Directive, etc.). Deployers of high-risk AI systems must take appropriate technical and organizational measures to ensure that these systems are used in accordance with the instructions for use accompanying them. As we mentioned in the previous article, Requirements for Providers of High-Risk AI Systems Part 2, the Provider is responsible for providing such instructions, and the Deployer must ensure that such instructions are indeed provided. These instructions shall, among other things, include information on how the AI system works.  

Assign human oversight  

One of the main concerns about the use of high-risk AI systems is the potential negative impact on the rights of individuals. To mitigate such risks and keep the AI system “under control”, the Deployer must assign human oversight of the AI system to personnel who have the necessary competence, training and authority, as well as the necessary support to carry out this role. Thus, it is important that the Deployer ensures that the “overseers” are continuously provided with necessary training and support. Deployers should also ensure that they have implemented the human oversight measures indicated by the Provider and that the assigned “overseers” are given guidance on when and how to make informed decisions in order to avoid risks.  

Ensure appropriate input data (if under their control)

Another way to combat harmful, erroneous, or biased outputs from AI systems is to ensure, as far as possible, that the input data is relevant, of sufficient quality, representative, accurate, and complete. Accordingly, where the Deployer exercises control over the AI system’s input data, the Deployer must ensure that the input data is relevant and sufficiently representative in relation to the intended purpose of the particular high-risk AI system.  

Ongoing monitoring and evaluation  

Deployers shall also monitor the operation of the AI system in order to detect any irregularities or risks and report any serious incidents. As soon as a Deployer has reason to believe that using the AI system in accordance with the Provider's instructions may result in the AI system posing a risk to the health, safety or fundamental rights of individuals, the Deployer must, without undue delay, inform the Provider or Distributor and the relevant authority. In such a case, the Deployer shall also suspend the use of that system. Where Deployers have identified a serious incident, they must immediately inform the Provider, and subsequently also the Importer or Distributor and the relevant authorities. This should be done as soon as a causal link between the high-risk AI system and the serious incident, or the reasonable likelihood of such a link, has been established.  

Keep logs (if under their control)

Deployers of high-risk AI systems shall keep logs (e.g., inputs and outputs) generated by that AI system, provided that such logs are under their control. The logs shall be maintained for a period appropriate to the intended purpose of the AI system. That period shall be at least six months, unless specified otherwise in applicable Union or national law, in particular in the GDPR. The logs may be used to trace back the cause of a particular error and to take corrective action.

Transparency and notification  

To begin with, Deployers who are employers must, prior to the use of a high-risk AI system in the workplace, inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This is similar to the obligation to provide an employee privacy notice under the GDPR. Furthermore, where high-risk AI systems referred to in Annex III are used to make, or assist in making, decisions relating to natural persons, those persons shall be informed that they will be subject to the use of the high-risk AI system. This information should, at least, include the intended purpose of the system and the type of decisions it makes.

Use information to perform DPIA (if applicable)

Where applicable, Deployers of high-risk AI systems shall use the information provided by the Provider to comply with their obligation to carry out a data protection impact assessment under Article 35 of the GDPR.  

Cooperation with authorities and registration

Deployers must cooperate with the relevant authorities in any action those authorities take in relation to the high-risk AI system in order to implement the Act. Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies must also comply with certain registration obligations.  

Fundamental Rights Impact Assessments  

Prior to deploying a high-risk AI system referred to in Annex III of the Act, the Deployers listed in the bullet points below must perform an assessment of the impact on fundamental rights that the use of such system may lead to (a Fundamental Rights Impact Assessment, hereinafter referred to as “FRIA”).

  • Bodies governed by public law or private entities providing public services. Such private entities could, for instance, be entities in the areas of education, healthcare, social services, housing, and administration of justice.
  • Deployers that intend to use the AI system to evaluate the creditworthiness of natural persons or establish their credit score.  
  • Deployers that intend to use the AI system for risk assessment and pricing in relation to natural persons for life and health insurance.  

The overall aim of the FRIA is for the Deployer to identify the specific risks to the rights of individuals or groups of individuals likely to be affected by the use of the AI system, and to identify measures to be taken if those risks materialize. The FRIA should be updated when the Deployer considers that any of the relevant circumstances have changed.

The assessment shall consist of:

  • a description of the Deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;
  • a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;
  • the categories of natural persons and groups likely to be affected by its use in the context at hand;
  • the specific risks of harm likely to have an impact on those categories of natural persons or groups, taking into account the information given by the Provider;
  • a description of the implementation of human oversight measures according to the instructions for use; and
  • the measures to be taken if those risks materialize, which could include governance arrangements such as arrangements for human oversight according to the instructions for use or complaint-handling and redress procedures.  

Once the FRIA has been conducted, the Deployer must notify the relevant authority of the results by submitting the completed template developed by the AI Office.  

The requirement to perform a FRIA appears to be inspired by the requirement to perform a data protection impact assessment (DPIA) under the GDPR. As a result, there are some similarities, but also some important differences. Where a high-risk AI system requires both a FRIA and a DPIA to be carried out, they can be conducted together in a single assessment that addresses the relevant aspects under both the Act and the GDPR. If a DPIA has already been performed in relation to the AI system, the FRIA must complement that DPIA.  

Requirements for Importers

Importers of high-risk AI systems also play an important role in the AI supply chain, as they introduce the AI systems to the European market. They act as intermediaries between Providers, Deployers and the end-users. They also have to ensure that the imported AI systems comply with certain requirements in the Act.  

Due diligence  

Before placing a high-risk AI system on the market, the Importer must verify that: 

(i) The relevant conformity assessment has been carried out by the Provider,  

(ii) The Provider has drawn up the technical documentation of the AI system,  

(iii) The AI system bears the required CE marking,

(iv) The AI system is accompanied by a copy of the EU declaration of conformity,  

(v) The AI system is accompanied by the instructions of use, and  

(vi) The Provider has appointed an authorized representative.

Furthermore, where an Importer has sufficient reason to consider that a high-risk AI system is not in compliance with the Act, is falsified, or is accompanied by falsified documentation, it shall not place the system on the market until it has been brought into compliance. Presumably, the Provider is the Actor that must bring the high-risk AI system into compliance, given that the Importer is required in such cases to inform the Provider about the risks of the AI system (see below).  

Inform relevant stakeholders

If the AI system poses a risk to the health, safety or fundamental rights of natural persons, the Importer must inform the Provider of the system, the authorized representative, and the relevant authority.

Provide information  

Importers must indicate their name, registered trade name or registered trademark and the address where they can be contacted, on the high-risk AI system and on its packaging or its accompanying documentation.

Storage conditions  

Importers must ensure that, while the high-risk AI system is under their responsibility, storage or transport conditions do not jeopardize the system’s compliance with the requirements set out in Chapter 3 Section 2 of the Act.  

Keep information  

Importers must keep, for a period of 10 years after the high-risk AI system has been placed on the market or put into service, (i) a copy of the certificate issued by the notified body (where applicable), (ii) the instructions of use, and (iii) the EU declaration of conformity.  

Cooperation with relevant authorities  

Importers shall (i) provide information and documentation to relevant authorities upon request, and (ii) cooperate with relevant authorities in any action those authorities take in relation to a high-risk AI system placed on the market by the Importers.

Requirements for Distributors

Distributors of high-risk AI systems also play an important role in the AI supply chain, as they make AI systems available on the European market. They act as intermediaries between Providers, Deployers and the end-users. They also have to ensure that the AI systems they distribute comply with certain requirements in the Act.

Due diligence  

Before making a high-risk AI system available on the market, the Distributor must verify that:  

(i) The AI system bears the required CE marking,

(ii) The AI system is accompanied by a copy of the EU declaration of conformity,  

(iii) The AI system is accompanied by the instructions of use, and  

(iv) The Provider and, where applicable, the Importer have complied with their obligations to provide contact information and, in the case of the Provider, to have a quality management system in place.  

Furthermore, where a Distributor considers or has reason to consider that the high-risk AI system is not in compliance with the requirements set out in Chapter 3 Section 2 of the Act (see Requirements for Providers of High-Risk AI Systems Part 1), it shall not make the AI system available on the market until the system has been brought into conformity with those requirements.  

Corrective measures  

If the Distributor considers or has reason to consider that a high-risk AI system which it has made available on the market does not comply with the requirements set out in Chapter 3 Section 2 of the Act, it must take corrective action. This means (i) taking the actions necessary to bring the system into compliance with those requirements, (ii) withdrawing or recalling the AI system, or (iii) ensuring that the Provider, the Importer, or any other relevant Actor, as appropriate, takes those corrective actions.  

Inform relevant stakeholders

If the AI system poses a risk to the health, safety or fundamental rights of natural persons, the Distributor must inform the Provider or the Importer. This applies both before and after the Distributor has made the AI system available on the market. If the AI system has already been made available and the Distributor identifies such risks (or has reason to believe they exist), the Distributor must also inform the relevant authority and provide details of the non-compliance and any corrective measures taken.  

Storage conditions  

Distributors must ensure that, while the high-risk AI system is under their responsibility, storage or transport conditions do not jeopardize the compliance of the system with the requirements set out in Chapter 3 Section 2 of the Act.  

Cooperation with relevant authorities  

Distributors shall (i) provide information and documentation to relevant authorities upon request, and (ii) cooperate with relevant authorities in any action those authorities take in relation to a high-risk AI system made available on the market by the Distributors.  

What about the Manufacturers – are they covered?  

Well, sort of. So-called Manufacturers of high-risk AI systems do not have any obligations per se. However, in some cases, they can “become” Providers and, thus, have the same obligations as Providers of high-risk AI systems.  

For high-risk AI systems that are safety components of products covered by Annex I of the Act, product Manufacturers are considered as Providers and, thus, must comply with the obligations imposed on Providers when:

(i) the high-risk AI system is placed on the market together with the product under the name or trademark of the product Manufacturer, or

(ii) the high-risk AI system is put into service under the name or trademark of the product Manufacturer after the product has been placed on the market.

Concluding remarks  

To say the least, it is not only Providers of high-risk AI systems that need to comply with the comprehensive requirements of the Act. From Deployers to Importers and Distributors, every Actor must ensure that AI systems are not only safe, but also respect fundamental rights. The split responsibilities outlined in the Act create a collaborative framework, promoting both safety and trust in these technologies. As the regulatory landscape continues to evolve, it is crucial that all stakeholders understand and embrace their role in shaping a future where AI innovation is both responsible and sustainable. We hope this article has helped, at least somewhat, to untangle the spider web of requirements.  

In our upcoming article, we will move away from high-risk AI systems for a moment and instead dive into the requirements for so-called limited-risk AI systems. We hope you will stay tuned!

Written by: Hugo Snöbohm Hartzell

Contributions by: Elisavet Dravalou and Kristin Tell  

Synch brings calm to the rapidly evolving field of artificial intelligence by deep-diving into the dos and don'ts. Whether you're a tech enthusiast, a professional in the field, or simply curious about the future, our series of articles about AI aims to both inform and inspire.
