DECODING THE AI ACT PART 6 – Requirements for Providers of High-risk AI Systems Part 2

Article 16 of the AI Act lists the obligations of providers of high-risk AI systems. First and foremost, providers must ensure that their high-risk AI systems comply with the requirements set out in Section 2, which we explored in the previous article in our series Decoding the AI Act (read more here).

Notably, the obligations listed in Article 16 are not exhaustive, with additional requirements appearing later in the AI Act. In this article, we will unpack the key obligations from Article 16, along with other essential requirements that every provider of a high-risk AI system must comply with.

What we need to emphasise at this stage is that, along the same lines as under the GDPR, where a processor is deemed to be a controller if it determines the purposes and means of the processing, any distributor, importer, deployer or other third party will qualify as a provider of a high-risk AI system, and as a result be subject to all obligations applicable to providers, if they:

a) affix their name or trademark on a high-risk AI system already placed on the market or put into service;

b) make a substantial change to a high-risk AI system that has already been placed on the market or has already been put into service; or

c) change the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service in such a way that the AI system becomes a high-risk AI system.

With that being said, let’s continue with the requirements applicable to providers of high-risk AI systems.

Obligations of providers

1. Provide information accompanying the high-risk AI system (Article 16)

Providers must indicate their name, registered trade name or registered trademark and the address where they can be contacted, either on the high-risk AI system or on its packaging or its accompanying documentation.

2. Establish a quality management system (Article 17 and Annex VI and VII)

A provider must set up a quality management system: a structured framework, documented in the form of written policies, procedures and instructions. Taking into consideration the size of the provider's organisation, this system will include, for example:

  • a strategy for regulatory compliance;
  • techniques, procedures and systematic actions to be used for design, design control and verification of the high-risk AI system, as well as for the quality control and assurance of the high-risk AI system;
  • the risk management system referred to in Article 9;
  • the setting-up, implementation and maintenance of a post-market monitoring system;
  • procedures related to the reporting of serious incidents; and
  • an accountability framework that sets out the responsibilities of the management and other staff in relation to all the aspects referred to above.

3. Keep documentation (Article 18)

For a period of 10 years after the high-risk AI system has been placed on the market or put into service, providers must keep the documentation listed below at the disposal of the national competent authorities:

  • the technical documentation referred to in Article 11 (for further information on this, see our previous article here);
  • the documentation on the quality management system mentioned above;
  • where applicable, the documentation relating to the changes approved by the notified bodies;
  • where applicable, the decisions and other documents issued by the notified bodies; and
  • the EU declaration of conformity (see point 6 below).

4. Store automatically generated logs (Article 19)

Providers shall keep the logs referred to in Article 12 (and discussed in our previous article) that are automatically generated by their high-risk AI systems, to the extent that such logs are under their control. Providers must keep these logs for as long as they consider appropriate for the intended purpose of the high-risk AI system, subject to a minimum retention period of six months. This period is without prejudice to any other applicable retention requirements under EU or national law, such as the GDPR.
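
To make the six-month floor concrete, below is a minimal sketch in Python of a retention policy that enforces it (the function and variable names are our own illustration; the AI Act does not prescribe any particular implementation):

```python
from datetime import datetime, timedelta, timezone

# Article 19 floor: logs must be kept for at least six months
# (approximated here as 183 days), subject to other applicable
# EU or national retention rules, such as the GDPR.
MINIMUM_RETENTION = timedelta(days=183)

def retention_period(provider_chosen: timedelta) -> timedelta:
    """The effective retention period: the provider's own choice,
    but never shorter than the Article 19 minimum."""
    return max(provider_chosen, MINIMUM_RETENTION)

def may_delete(log_created_at: datetime, provider_chosen: timedelta) -> bool:
    """True if a log record has outlived the retention policy."""
    age = datetime.now(timezone.utc) - log_created_at
    return age > retention_period(provider_chosen)

# Example: a provider opts for 12 months, suited to the system's intended purpose.
print(retention_period(timedelta(days=365)))  # -> 365 days, 0:00:00
```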

5. Perform a conformity assessment (Article 43)

Providers must perform a conformity assessment of the high-risk AI system in order to ensure a high level of trustworthiness of the system before it is placed on the market or put into service. A conformity assessment is defined in the AI Act as “the process of demonstrating whether the requirements set out in Chapter III, Section 2 relating to a high-risk AI system have been fulfilled”.

The conformity assessment procedure is based either on an internal control (see Annex VI) or an external control involving a notified body (see Annex VII). The type of assessment required will depend on the specific AI system at hand. For example, high-risk AI systems in the areas of critical infrastructure, education and employment shall follow the procedure based on internal control. In addition, specific rules apply to those high-risk AI systems that are covered by Union harmonisation legislation (see list in Annex I) which must follow the relevant procedure as required under those acts.
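
As a simplified illustration of this routing (our own sketch of the Article 43 logic, not wording from the Act; borderline cases require proper legal analysis), the choice of procedure can be thought of as a small decision tree:

```python
from enum import Enum

class Procedure(Enum):
    INTERNAL_CONTROL = "Annex VI (internal control)"
    NOTIFIED_BODY = "Annex VII (assessment involving a notified body)"
    SECTORAL = "procedure under the relevant Annex I harmonisation act"

def assessment_procedure(covered_by_annex_i: bool,
                         biometrics_annex_iii_point_1: bool,
                         harmonised_standards_applied: bool) -> Procedure:
    if covered_by_annex_i:
        # Systems covered by Union harmonisation legislation follow
        # the conformity assessment procedure of the relevant act.
        return Procedure.SECTORAL
    if biometrics_annex_iii_point_1:
        # For biometrics, internal control is only an option where the
        # provider has applied harmonised standards (or common
        # specifications); otherwise a notified body must be involved.
        return (Procedure.INTERNAL_CONTROL if harmonised_standards_applied
                else Procedure.NOTIFIED_BODY)
    # Remaining Annex III areas (e.g. critical infrastructure,
    # education, employment) follow internal control.
    return Procedure.INTERNAL_CONTROL

# Example: an employment-related system outside Annex I.
print(assessment_procedure(False, False, False).value)
# -> Annex VI (internal control)
```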

It should also be noted that providers must revisit and update their conformity assessment if their high-risk AI system becomes subject to substantial modifications, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer. For AI systems that continue to learn after being placed on the market or put into service, changes that were pre-determined by the provider during the initial assessment and documented in the technical documentation are not considered substantial modifications.

6. Draw up an EU declaration of conformity (Article 47)

Providers must draw up a written, machine-readable, physical or electronically signed EU declaration of conformity and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. As with the conformity assessment, the EU declaration of conformity refers to the requirements in Section 2 and shall state that these requirements have been met. By drawing up the EU declaration of conformity, the provider assumes responsibility for compliance with the requirements in Section 2.

Annex V lists all the elements that an EU declaration of conformity shall include, and it is important that the content is translated into a language that can be easily understood by the relevant national competent authorities.
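
Since the declaration may be drawn up in machine-readable form, it can, for example, be represented as structured data. The sketch below is purely illustrative: the field names are our own shorthand for some of the elements Annex V calls for, not an official schema, and all values are fictitious:

```python
# Illustrative, fictitious example of a machine-readable declaration;
# Annex V of the AI Act contains the authoritative list of elements.
eu_declaration_of_conformity = {
    "ai_system": {
        "name": "ExampleScreener",   # hypothetical system
        "version": "2.1",
    },
    "provider": {
        "name": "Example Provider AB",
        "address": "Examplegatan 1, Stockholm, Sweden",
    },
    "conformity_statement": (
        "This EU declaration of conformity is issued under the sole "
        "responsibility of the provider. The AI system named above is "
        "in conformity with Regulation (EU) 2024/1689."
    ),
    "harmonised_standards": [],   # references to standards, where applied
    "notified_body": None,        # name, ID and certificate, where applicable
    "place_and_date": "Stockholm, 2025-01-15",
    "signatory": {"name": "<name>", "function": "<function>"},
}
```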

7. Indicate the CE marking (Article 48)

The CE marking on a product signifies that the product has been assessed to meet high safety, health and environmental protection requirements. The letters CE are an abbreviation of Conformité Européenne and signify that the product complies with the EU legislation applicable to it. High-risk AI systems must bear the CE marking to indicate their compliance with the AI Act. If the high-risk AI system is governed by other Union law that also requires the affixing of the CE marking, the CE marking shall indicate that the system also complies with the requirements of those laws.

Providers of high-risk AI systems must affix the CE marking to their high-risk AI system, or, if that is not possible, on the packaging or the accompanying documentation. If the high-risk AI system is provided digitally, a digital CE marking shall only be used if it is easily accessible through the interface from which the system is accessed or through an easily accessible machine-readable code or other electronic means.

Where applicable, the CE marking shall be followed by the identification number of the notified body in charge of the conformity assessment outlined in Article 43. The identification number must also be included in any promotional material stating that the high-risk AI system meets the requirements for CE marking.

8. Register the high-risk AI system (Article 49)

Providers (or, where applicable, their authorised representatives) of high-risk AI systems shall register themselves and their system in the EU database for high-risk AI systems set up by the Commission. Providers (or authorised representatives) shall also register themselves and their AI systems if they have concluded that their AI system is not high-risk pursuant to Article 6(3) (which sets out exceptions for AI systems in the areas referred to in Annex III, such as biometrics and employment).

9. Take corrective actions and uphold duty of information (Article 20)

Providers of high-risk AI systems must promptly take corrective actions if they believe, or have reason to believe, that their high-risk AI system is not compliant with the AI Act. This involves bringing the system into compliance, or withdrawing, disabling or recalling it. Providers must also inform the relevant distributors, deployers, authorised representatives and importers. If the high-risk AI system presents a risk to the health or safety, or to the fundamental rights, of persons, the provider must immediately investigate the causes, in collaboration with the reporting deployer where applicable, and inform the market surveillance authorities and, where applicable, the notified body that issued a certificate for that high-risk AI system.

10. Cooperate with competent authorities (Article 21)

Upon request of a national competent authority, providers must provide the authority with all information and documentation necessary to demonstrate that the high-risk AI system complies with the requirements in Section 2 of the AI Act. Providers must also, upon reasoned request by a competent authority, give the authority access to the automatically generated logs (see Articles 12 and 19 of the AI Act) of the high-risk AI system.

11. Comply with accessibility requirements (Article 16, point (l))

Providers must ensure that their high-risk AI system complies with accessibility requirements in accordance with Directives (EU) 2016/2102 (accessibility of the websites and mobile applications of public sector bodies) and (EU) 2019/882 (accessibility requirements for products and services). The products and services covered by Directive (EU) 2019/882 include, for example, consumer computer hardware systems, consumer terminal equipment and e-readers.

12. Appoint an authorised representative (if not established in the EU) (Article 22)

If the provider is established outside the EU, it shall appoint, by written mandate, an authorised representative established in the EU before making its high-risk AI system available in the EU. The provider shall enable its authorised representative to carry out the tasks specified in the mandate.

The mandate shall empower the authorised representative, inter alia, to verify that the EU declaration of conformity and the technical documentation have been drawn up and that the provider has carried out an appropriate conformity assessment procedure. The authorised representative must also keep at the disposal of the competent authorities, for a period of 10 years following the placing on the market or the putting into service of the system, the contact details of the provider, a copy of the EU declaration of conformity, the technical documentation and, where applicable, the certificate issued by the notified body.

If the authorised representative considers, or has reason to consider, that the provider is not complying with its obligations under the AI Act, the authorised representative shall terminate the mandate. It shall also inform the relevant market surveillance authority and, where applicable, the relevant notified body.

Here, we see a similar requirement to the GDPR, namely the concept of EU representatives for controllers or processors not established in the EU (Article 27 GDPR). However, given the risks that AI systems can pose to individuals, the representatives have been given a broader and more active role in the AI Act than in the GDPR. GDPR representatives mostly act as a “mailbox” for the controller in Europe, whereas authorised representatives under the AI Act have an active role: they must, among other things, verify that the documentation drawn up by the provider has been drawn up correctly and cooperate with the competent authorities. In addition, Article 22(3) of the AI Act states that the mandate shall empower the authorised representative to be addressed, in addition to or instead of the provider, by the competent authorities on all matters related to ensuring compliance with the AI Act. With this in mind, providers must choose their representatives carefully.

13. Establish a post-market monitoring system (Article 72)

All providers of high-risk AI systems shall have a post-market monitoring system in place, so that they can take into account the experience gained from the use of the high-risk AI system in order to improve the system and its design and development process, and take any necessary corrective or preventive actions. The post-market monitoring system shall be proportionate to the nature of the AI technologies and the risks of the AI system.

The post-market monitoring system shall collect, document and analyse data provided by deployers or other sources on the performance of the high-risk AI system, allowing the provider to evaluate the continuous compliance of the AI system with the requirements in Section 2 of the AI Act. Furthermore, the post-market monitoring system shall be based on a post-market monitoring plan (which shall also be included in the technical documentation, see Article 11 and Annex IV). By February 2026, the Commission will adopt further provisions on what this post-market monitoring plan should look like and include.

14. Report incidents (Article 73)

Providers must report any serious incident to the market surveillance authorities of the member states where it occurred. A serious incident is defined as an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

a) the death of a person, or serious harm to a person’s health;

b) a serious or irreversible disruption of the management or operation of critical infrastructure;

c) an infringement of obligations under EU law intended to protect fundamental rights; or

d) a serious harm to property or the environment.

The report should be made as soon as the provider establishes a causal link, or a reasonable likelihood of such a link, between the AI system and the incident. In any case, the report must be filed within 15 days of the provider or deployer becoming aware of the incident. The reporting period shall, however, reflect the severity of the incident: in the case of widespread infringements, or of a serious incident involving disruption of critical infrastructure, the report must be made immediately, and no later than two days after the provider or deployer became aware of the incident. To ensure timely reporting, the provider or the deployer may submit an incomplete initial report, followed later by a complete one.
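
As a rough illustration of these outside deadlines (our own simplification of Article 73; the actual trigger remains the moment the causal link, or its reasonable likelihood, is established), internal compliance tooling might compute the latest permissible reporting date like this:

```python
from datetime import datetime, timedelta

def latest_report_date(aware_at: datetime, two_day_track: bool) -> datetime:
    """Upper bound for filing the report: 15 days as the general rule,
    two days for the cases that Article 73 puts on the faster track."""
    return aware_at + timedelta(days=2 if two_day_track else 15)

# Example: provider becomes aware of a serious incident on 1 March 2025.
print(latest_report_date(datetime(2025, 3, 1), two_day_track=False))
# -> 2025-03-16 00:00:00
```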

After reporting a serious incident, the provider must promptly investigate the incident and the AI system involved. This includes carrying out a risk assessment and taking corrective action. The provider must cooperate with the relevant authorities and, where applicable, the notified body during the investigation, and must not take any action that involves modifying the AI system in a way that could affect any subsequent evaluation of the causes of the incident before informing the authorities of such action.

Key takeaways and next steps

Together with our previous article on Section 2 of the AI Act, we have now covered all obligations of providers of high-risk AI systems. We hope that these two articles have helped to clarify these obligations and make them easier to navigate. As mentioned above, many of the obligations relate back to Section 2, as providers will need to demonstrate how they ensure compliance with these fundamental requirements.

Our upcoming article will continue to focus on high-risk AI, but next time we will look at the obligations of the other actors in the AI Act.

Stay tuned!

Written by: Kristin Tell and Elisavet Dravalou

Contributions: Tara Turcinovic & Hugo Snöbohm Hartzell

Synch brings calm to the rapidly evolving field of artificial intelligence by deep-diving into the dos and don'ts. Whether you're a tech enthusiast, a professional in the field, or simply curious about the future, our series of articles about AI aims to both inform and inspire.
