DECODING THE AI ACT PART 9 – General-Purpose AI

Since the first draft of the EU AI Act was published by the European Commission in April 2021, it's safe to say that a lot has happened in terms of AI development. Not least, the last few years have seen the launch of several major general-purpose AI services (such as ChatGPT), which have revolutionized the use of AI and made it much more accessible to the average person. This caught the EU slightly off guard and prompted a rethink of the proposed regulation. It's not all sunshine and roses when it comes to large AI models; they also pose certain problems and risks. For example, their wide applicability and adaptability across different sectors make them difficult to classify under the “risk-based approach” of the AI Act. Therefore, in the middle of the legislative process, the EU added new provisions for so-called General-Purpose AI models (hereafter referred to as “GPAI models”). The provisions were voted through by the European Parliament in June 2023.

What is a GPAI model?  

First things first, as the name suggests, GPAI models are designed with generality, adaptability and broad applicability in mind. To achieve this, they are trained on very large sets of unlabelled data and can be used to perform many different tasks without much fine-tuning. In other words, GPAI models are not just programmed to perform one specific task, such as recognizing a certain pattern or predicting tomorrow's weather; instead, GPAI models have capabilities that extend across a range of tasks, adaptively switching from, say, text translation one moment to generating creative artwork the next. Think of them as the Swiss army knives of the AI world, able to adapt to almost anything you throw at them. GPAI models are often used as a kind of digital infrastructure by downstream actors who base their services on the model and integrate it into their systems and applications. Well-known examples are the GPT models that power ChatGPT and Google's Gemini models.

In the AI Act, GPAI models are, accordingly, defined as:  

“an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.  

It is important to keep in mind that while an AI model, for example a GPAI model, may be a part of an AI system, it is not an AI system in itself. Rather, an AI model is a component on which an AI system is based. To put it another way, when a GPAI model is integrated into or forms a part of an AI system, the AI system is considered to be a GPAI system if, due to the integration of the GPAI model, it is capable of serving a variety of purposes. However, as mentioned above, the GPAI model itself does not constitute a GPAI system. Typical examples of GPAI models are large generative AI models, as they allow for the adaptive generation of content, such as in the form of text, audio, images, or video, that can accommodate a wide range of distinctive tasks.

The AI Act takes a tiered approach to regulating GPAI models: certain obligations apply to all providers of GPAI models (“Tier 1”), while additional obligations apply to providers of GPAI models that are considered to pose systemic risk (“Tier 2”). Below, we briefly guide you through the different obligations.

Tier 1 – obligations for providers of GPAI Models

According to the AI Act, providers of GPAI models play a particularly important role along the AI supply chain, as the models they provide may form the basis for a range of downstream AI systems provided by downstream providers. Transparency towards both downstream providers and public authorities has therefore been prioritized by the EU and characterizes all of the requirements. The obligations apply as soon as the GPAI model is placed on the market. Providers of GPAI models are obligated to:

  • Draw up and maintain up-to-date technical documentation of the model, including its training and testing process and the results of its evaluation. The documentation must, at least, contain the information set out in Annex XI of the AI Act and shall, upon request, be provided to the AI Office and relevant national authorities. Note that Annex XI may, from time to time, be amended or supplemented by the European Commission.
  • Draw up, maintain up-to-date and make available information and documentation to enable downstream providers, who wish to integrate the model into their systems, to have a good understanding of the capabilities and limitations of the GPAI model and to comply with their own obligations under the AI Act. The documentation shall, at least, contain the elements set out in Annex XII of the AI Act. As above, the European Commission may amend or supplement Annex XII.
  • Put in place a policy that outlines the steps taken to comply with EU copyright legislation, in particular how to identify and comply with reservations made by rightsholders (so-called “opt-outs”) pursuant to the commercial text and data mining exception in Article 4 of Directive (EU) 2019/790.
  • Draw up and publish a sufficiently detailed summary of the content used for training the GPAI model. The purpose of this requirement is to make it easier for parties with legitimate interests (e.g., holders of copyright in content used to train the model) to exercise and enforce their rights under EU legislation. The EU AI Office will provide a template for such a summary.
  • Cooperate as necessary with the EU Commission and the relevant national authorities in the execution of their duties under the AI Act.
  • If the provider of the GPAI model does not have an establishment in the EU, an authorised representative established in the EU must be appointed. Such representative shall, among other things, verify the technical documentation and provide information to and cooperate with the AI Office and relevant national authorities.

To facilitate compliance, there will be codes of practice that providers of GPAI models (including GPAI models with systemic risk, see the section below) may rely on to demonstrate compliance with the above-discussed obligations until a harmonized standard is published. The EU AI Office may invite GPAI model providers to participate in drawing up such codes of practice. The final version of the first code of practice for GPAI models is intended to be published in April 2025.

The first two points above (regarding technical documentation) do not apply to providers of models that are released under a free and open-source license that allows for the access, usage, modification, and distribution of the model, and whose parameters are made publicly available (unless the GPAI model presents systemic risks, see the section below).

Tier 2 – obligations for providers of GPAI Models with systemic risk

GPAI models that are considered to pose a so-called “systemic risk” are subject to additional obligations under the AI Act. These obligations are intended to prevent GPAI models from being misused and to ensure that we do not “lose control” over these powerful models. Simply put, the EU wants to prevent GPAI models from going “haywire” and thereby causing harm and disruption to society. The term “systemic risk” means a risk that is specific to the high-impact capabilities of GPAI models, having a significant impact on the EU market due to their reach or due to negative effects on public health, safety, public security, fundamental rights, or society as a whole. Examples of systemic risks include negative effects in relation to major accidents, disruptions of critical sectors, serious consequences for public health and safety, negative effects on democratic processes and on public and economic security, and the dissemination of illegal, false, or discriminatory content.

A GPAI model is classified as posing systemic risk if it has high-impact capabilities, which is to be evaluated on the basis of appropriate technical tools and methodologies (including indicators and benchmarks). If the cumulative amount of computation used for training the model is greater than 10^25 FLOPs, the model is presumed to have high-impact capabilities (an illustrative estimate follows the list of obligations below). FLOPs stands for “floating point operations”, the basic mathematical operations performed by computers. This threshold is used to identify models that are particularly large and powerful, as the computational resources required for training can be an indicator of a model's capabilities and potential impact. While exact figures aren't public, it's likely that GPT-4 exceeds this threshold given its capabilities. The European Commission also has the possibility to take individual decisions designating GPAI models as posing a systemic risk, based on the factors described in Annex XIII of the AI Act (the model's complexity, its input/output modalities, number of users, etc.). For providers of GPAI models that pose systemic risk, both the general requirements set out above and the following requirements apply:

  • Perform model evaluations in accordance with standardized protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks. Such model evaluations shall, notably, be performed prior to the model's first placing on the market.
  • Assess and mitigate possible systemic risks at the EU level that may stem from the GPAI model. This can be done by, for example, putting in place risk-management policies, implementing post-market monitoring, taking appropriate mitigating measures along the model's entire lifecycle, and cooperating with relevant actors along the AI supply chain.
  • Track, document, and report serious incidents to the AI Office and relevant national authorities without undue delay. The report must include relevant information about the incident and possible corrective measures to address it.
  • Ensure an adequate level of cybersecurity protection for the GPAI model and its physical infrastructure, taking into account, e.g., risks of accidental model leakage, unauthorized access, circumvention of safety measures, and defense against cyberattacks. The cybersecurity protection could include operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls.
  • Notify the European Commission of such GPAI models. Such notification shall be made without delay and in any event within two weeks after the relevant criterion is met or it becomes known that it will be met.
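
To give a rough sense of the 10^25 FLOP threshold mentioned above, the short sketch below applies a common rule of thumb for dense transformer models: training compute is roughly 6 × (number of parameters) × (number of training tokens). Both the rule of thumb and the model figures used are illustrative assumptions on our part; the AI Act itself only sets the cumulative compute threshold and does not prescribe any particular estimation method.

# Illustrative sketch (Python): estimating whether a model's training compute
# exceeds the AI Act's 10^25 FLOP presumption threshold. The 6 * parameters *
# tokens approximation is a common rule of thumb for dense transformer models,
# not a method prescribed by the AI Act, and the figures below are hypothetical.

THRESHOLD_FLOPS = 1e25  # presumption threshold for high-impact capabilities

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer model."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 500 billion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(n_parameters=5e11, n_training_tokens=1e13)
print(f"Estimated training compute: {flops:.1e} FLOPs")               # 3.0e+25
print("Presumed high-impact capabilities:", flops > THRESHOLD_FLOPS)  # True

On this rough estimate, such a hypothetical model would land at around 3 × 10^25 FLOPs and would therefore be presumed to have high-impact capabilities, triggering the additional obligations described above.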

Concluding remarks  

In short, GPAI models aren't just shaping our future - they're driving it. With this new regulation, the EU is stepping up to the challenge, ensuring these powerful models don't end up creating the kind of science-fiction chaos we love to watch on TV but would rather not see in real life. From transparency requirements to model evaluations and risk mitigation, the EU is setting a new standard for GPAI regulation. The EU may have been caught off guard by the launch of, e.g., ChatGPT, but with the AI Act's guardrails firmly in place, we're entering an era where innovation meets responsibility. The obligations for providers of GPAI models will be fully applicable from 2 August 2025.

Stay tuned for the upcoming articles in this Decoding the AI Act series!  

Written by: Hugo Snöbohm Hartzell

Contributions by: Elisavet Dravalou, Kristin Tell and Arunendu Mazumder

Synch brings calm to the rapidly evolving field of artificial intelligence by deep-diving into the dos and don'ts. Whether you're a tech enthusiast, a professional in the field, or simply curious about the future, our series of articles about AI aims to both inform and inspire.
