DECODING THE AI ACT, PART 1 – Unravelling the definition of “AI system”

The AI Act is quite technical, and in our experience many companies have caught “AI Act fever”, scrambling to understand how to navigate this complex piece of legislation. At Synch, our approach is to remain calm: let us help you understand how the new law will apply and what you can already do now to prepare. This first article focuses on the definition of “AI system”, a fundamental notion in the AI Act.

Rumour has it that the definition of an “AI system” was particularly difficult to agree on in the EU Parliament. It is worth mentioning that, during the IAPP Data Protection Congress in Brussels in November 2023, one of the panellists, a member of the EU Parliament, apologised for “not having voice left to speak” due to heated negotiations over the definition of “AI model” that same morning (and yes, the Synch team was there to confirm that this is true!).

AI SYSTEMS

The AI Act defines an “AI system” briefly and concisely as follows:

“‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

For a non-technical person, this definition may be challenging to understand (including the author of this article at first :) ). To make your life easier, we have gathered all the explanatory material currently available, consisting of (1) Article 3 and Recital 12 of the AI Act and (2) the OECD’s explanatory memorandum on the definition of an AI system, as updated in 2023:

For each element of the definition, we set out below the guidance from the AI Act (Article 3 and Recital 12) alongside the OECD explanatory memorandum.

1. Machine-based system

AI Act: The term “machine-based” refers to AI systems running on machines.

OECD explanatory memorandum: (no mention)

2. Designed to operate with varying levels of autonomy

AI Act: AI systems are designed to operate with varying levels of autonomy, meaning that they have some degree of independence of action from human involvement and the capability to operate without human intervention.

OECD explanatory memorandum: Autonomy means the degree to which a system can learn or act without human involvement, following the delegation of autonomy and process automation by humans. Human supervision can occur at any stage of the AI system lifecycle, such as during AI system design, data collection and processing, development, verification, validation, deployment, or operation and monitoring.

3. That may exhibit adaptiveness after deployment

AI Act: The adaptiveness that an AI system could exhibit after deployment refers to self-learning capabilities, allowing the system to change while in use. AI systems can be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded).

OECD explanatory memorandum: Adaptiveness is usually related to AI systems based on machine learning that can continue to evolve after initial development. The system modifies its behaviour through direct interaction with input and data, before or after deployment. Examples include a speech recognition system that adapts to an individual’s voice or a personalised music recommender system.

4. For explicit or implicit objectives

AI Act: The reference to explicit or implicit objectives underscores that AI systems can operate according to explicitly defined objectives or to implicit objectives. The objectives of the AI system may be different from its intended purpose in a specific context.

OECD explanatory memorandum: AI system objectives can be explicit or implicit; for example, they can belong to the following categories, which may overlap in some systems:

• Explicit and human-defined – the developer encodes the objective directly into the system (e.g., through an objective function). Examples of systems with explicit objectives include simple classifiers, game-playing systems, reinforcement learning systems, combinatorial problem-solving systems, planning algorithms, and dynamic programming algorithms.

• Implicit in (typically human-specified) rules – rules dictate the action to be taken by the AI system according to the current circumstances. For example, a driving system might have the rule “if the traffic light is red, stop”. The underlying objectives of such systems, such as complying with the law or avoiding accidents, are not explicit, even though they are typically human-specified.

• Implicit in training data – the ultimate objective is not explicitly programmed but is incorporated through training data and a system architecture that learns to emulate those data (e.g., rewarding large language models for generating a plausible response).

• Not fully known in advance – for example, recommender systems that use reinforcement learning to gradually narrow down a model of individual users’ preferences.

AI systems can operate according to one or more types of objectives.

5. Infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions

AI Act: A key characteristic of AI systems is their capability to infer. This capability refers to the process of obtaining outputs, such as predictions, content, recommendations, or decisions, that can influence physical and virtual environments, and to the capability of AI systems to derive models or algorithms from inputs or data. “Input data” means data provided to or directly acquired by an AI system on the basis of which the system produces an output. Outputs generated by the AI system reflect the different functions performed by AI systems and include predictions, content, recommendations or decisions.

OECD explanatory memorandum: The concept of “inference” generally refers to the step in which a system generates an output from its inputs, typically after deployment. When performed during the build phase, inference in this sense is often used to evaluate a version of a model, particularly in the machine learning context. In the context of the explanatory memorandum, “infer how to generate outputs” should be understood as also referring to the build phase of the AI system, in which a model is derived from inputs/data.

Input is used both during development and after deployment. It can take the form of knowledge, rules and code that humans put into the system during development, or of data. Both humans and machines can provide input.

The output(s) generated by an AI system generally reflect different functions performed by AI systems and belong to the broad categories of recommendations, predictions, and decisions. These categories correspond to varying levels of human involvement, with “decisions” being the most autonomous type of output (the AI system affects its environment directly or directs another entity to do so) and “predictions” the least autonomous. For example, a driver-assist system might “predict” that a pixel region in its camera input is a pedestrian; it might “recommend” braking; or it might “decide” to apply the brake.

6. That can influence physical or virtual environments

AI Act: Environments should be understood as the contexts in which AI systems operate.

OECD explanatory memorandum: An environment or context in relation to an AI system is an observable or partially observable space, perceived using data and sensor inputs and influenced through actions (through actuators). The environments influenced by AI systems can be physical or virtual and include environments describing aspects of human activity, such as biological signals or human behaviour. Sensors and actuators are either humans or components of machines or devices.
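
To make the “inference” requirement more concrete, here is a minimal sketch in Python. This is our own illustration (all names and numbers are hypothetical, and it is not taken from the AI Act or the OECD texts). It shows both senses of inference described above: deriving a simple model from data during the build phase, and then using that model after deployment to generate an output from new input.

```python
# A deliberately simple, hypothetical illustration of "inference" in the two
# senses described above: deriving a model from data (build phase), then
# generating outputs from new input (after deployment).

def build_phase(examples):
    """Build phase: derive a model (here, a single threshold) from labelled data."""
    ok = [value for value, label in examples if label == "ok"]
    alert = [value for value, label in examples if label == "alert"]
    # The decision boundary is inferred from the data rather than written by
    # hand - this is what distinguishes the system from fixed, human-defined rules.
    return (max(ok) + min(alert)) / 2

def deployment_phase(threshold, new_value):
    """After deployment: generate an output (a prediction) from new input."""
    return "alert" if new_value > threshold else "ok"

# Derive the model from labelled examples, then apply it to unseen input.
model = build_phase([(10, "ok"), (12, "ok"), (30, "alert"), (35, "alert")])
print(deployment_phase(model, 28))  # -> "alert"
```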

AI Model

Although the AI Act uses the term “AI model”, do not try to find a definition within the Act: strikingly, it does not provide one, except in relation to general-purpose AI models (which we will analyse further in a later article).

According to the OECD, “an AI model is a core component of an AI system used to make inferences from inputs to produce outputs”. 

It is essential to understand that an AI model can be part of an AI system, but it does not constitute an AI system on its own.
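
A short sketch may help here as well (again our own, hypothetical illustration): the model is the component that maps input to a prediction, while the system around it acquires the input, invokes the model and turns the prediction into an action that influences a (virtual) environment, here a mailbox.

```python
# Hypothetical illustration of the OECD's model/system distinction.

class SpamModel:
    """The AI model: a component that maps input to a prediction."""
    def __init__(self, spam_words):
        self.spam_words = spam_words  # in a real system, learned from data

    def predict(self, text):
        hits = sum(word in text.lower() for word in self.spam_words)
        return "spam" if hits >= 2 else "ham"

class SpamFilterSystem:
    """The AI system: input acquisition + model + action on the environment."""
    def __init__(self, model):
        self.model = model

    def handle_incoming(self, email_text):
        prediction = self.model.predict(email_text)  # the model infers
        if prediction == "spam":
            return "moved to junk folder"  # the output influences the mailbox
        return "delivered to inbox"

system = SpamFilterSystem(SpamModel(["winner", "free", "prize"]))
print(system.handle_incoming("You are a WINNER - claim your FREE prize!"))
```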

Conclusion

Please note that all six requirements listed above (the numbering is our own) must be met cumulatively for a system to fall under the AI Act’s legal definition. These are the key characteristics that distinguish AI systems from simpler traditional software systems or programming approaches; the term “AI system” should not cover systems based on rules defined solely by natural persons to execute operations automatically. In other words, not all software that uses AI techniques will meet the legal definition of an AI system. A sensible first step for organisations developing or using AI systems, therefore, is to map the systems they develop or use and to assess, on a case-by-case basis, whether each of them meets all six requirements.
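
To illustrate where that dividing line sits, consider the simplified, hypothetical sketch below (our own example, and of course not legal advice). The function automates an operation, but its behaviour is fixed entirely by rules a natural person wrote; nothing is inferred from input data, so it would arguably fail requirement 5 and fall outside the definition.

```python
# Hypothetical example of a purely rule-based system: it executes an
# operation automatically, but every rule is defined solely by a natural
# person, so it does not "infer, from the input it receives, how to
# generate outputs".

def discount_rate(order_total):
    if order_total > 100:  # a rule written directly by a developer
        return 0.10
    return 0.0

print(discount_rate(150))  # -> 0.1
```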

Stay tuned: in the next article, we will look at the key players involved in the development and use of AI technology, and at the material and geographical scope of the AI Act.

Written by: Elisavet Dravalou

Contributions: Hugo Snöbohm Hartzell & Kristin Tell

Synch brings calm to the rapidly evolving field of artificial intelligence by deep-diving into the dos and don'ts. Whether you're a tech enthusiast, a professional in the field, or simply curious about the future, our series of articles about AI aims to both inform and inspire.
