DECODING THE AI ACT PART 8 – from chatbots to deepfakes – transparency obligations for certain AI systems

Picture this: you walk into your favorite café, craving that perfect latte. You place your order with the friendly barista, who greets you with a smile, remembers your name, and even asks how your day’s going. There’s something a bit “robotic” about the interaction, but you shrug it off - it’s early in the morning after all. But when your coffee arrives - poured with mechanical precision - you start to wonder – wait, was that barista... a robot?  

However, you brush it off and enjoy your latte. As you're scrolling through your phone, you come across a video of a politician making an absolutely outrageous statement. "I'm never voting for that person again," you think to yourself. Little do you know that the video was a deepfake designed to manipulate your opinion. Later, while browsing a web shop, a chatbot assistant pops up and helps you find what you're looking for. You pause and ask yourself - was I just chatting with a real person, or was that another AI?

While you may not interact with AI systems daily, one thing is certain: our society will increasingly be driven by AI. The AI Act seeks to set the tone and ensure that you are not deceived by the AI systems you interact with, by establishing transparency obligations for certain types of AI systems, irrespective of whether they qualify as high-risk or not. The AI systems covered by these obligations are commonly referred to as “limited risk AI systems” (in relation to the concept of “high-risk AI”, as discussed in our previous articles). For these "limited risk" systems, which range from chatbots to deepfakes, the aim of the AI Act is simple - transparency. Just as you deserve to know whether you’ve ordered your precious morning latte from a human or a machine, the AI Act ensures that you’ll be informed whenever you’re interacting with AI.

In this article, we will briefly guide you through which AI systems are affected by these transparency obligations and what the obligations look like. It's important to keep in mind that these requirements apply regardless of whether the AI system is considered high-risk or not.

Chatbots

Any AI system that interacts directly with individuals must disclose its non-human nature. The obligation is quite simple - if you’re talking to a chatbot, it has to tell you that it's a chatbot. AI systems that interact directly with individuals could be, for example, chatbots integrated into a website, virtual voice assistants, or AI-driven baristas taking orders and brewing coffee, as in the example above. It is up to the provider of the AI system to ensure that it is designed and developed in such a way that this obligation is met. However, no rule is without exceptions. The transparency requirement does not apply where it is obvious, to an individual who is reasonably well-informed, observant and circumspect, that they are interacting with an AI system, taking into account the circumstances and the context of use. The characteristics of natural persons belonging to vulnerable groups, such as those related to age or disability, should be specifically taken into account if the AI system is intended to interact with those groups. In summary, this requirement mandates that whenever an AI system interacts with individuals, it must include a disclaimer letting users know that they’re talking to an algorithm, not a human.

Generative AI

Providers of AI systems that generate text, video, image, or audio content – i.e., so-called Generative AI – are also subject to transparency requirements. When an AI system creates such content - whether it's generating product descriptions, writing a news article, or creating digital artwork – users must be informed that the content is AI-generated. This means that the output of the AI system must be marked in a machine-readable format, ensuring that it is detectable as artificially generated or manipulated. The technical solutions used to achieve this must be sufficiently reliable, interoperable, effective, and robust, as far as technically feasible. Techniques such as watermarks, metadata identifications, cryptographic methods for proving provenance and authenticity, logging methods, fingerprints, or other appropriate tools should be considered to meet this requirement. In implementing this obligation, providers should take into account the type of content their AI system produces, the costs of implementation, technical developments in the field, and the generally acknowledged state of the art for these types of markings. Furthermore, to promote proportionality, the marking obligation does not apply to AI systems that primarily perform an “assistive function” for standard editing, or to AI systems that do not substantially alter the input data provided by the deployer or the semantics thereof.
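
To make the machine-readable marking requirement a little more concrete, the sketch below shows one very simple way a provider could embed, and later check for, a provenance marker in a generated image, using metadata text chunks via the Pillow library. This is a minimal illustration under our own assumptions (the key names are invented for the example), not a compliance recipe; in practice, providers would likely rely on more robust and standardized approaches, such as cryptographically signed provenance manifests or resilient watermarks, since plain metadata is easily stripped.

```python
# Minimal sketch (our own example, not prescribed by the AI Act):
# embedding a machine-readable "AI-generated" marker in a PNG via metadata
# text chunks, using the Pillow library. Plain metadata is easy to strip,
# so real-world marking would typically combine this with sturdier methods
# such as signed provenance manifests or watermarking.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(image: Image.Image, path: str, generator: str) -> None:
    """Save an image together with metadata declaring it artificially generated."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical key names
    metadata.add_text("ai_generator", generator)
    image.save(path, pnginfo=metadata)

def is_marked_as_ai_generated(path: str) -> bool:
    """Check whether an image carries the (hypothetical) AI-generated marker."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("ai_generated") == "true"

if __name__ == "__main__":
    # Stand-in for a generative model's output.
    synthetic = Image.new("RGB", (256, 256), color="lightblue")
    save_with_ai_marker(synthetic, "output.png", generator="example-model-v1")
    print(is_marked_as_ai_generated("output.png"))  # True
```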

The reason behind this requirement is, among other things, that the rapid development of Generative AI systems makes it increasingly difficult for humans to distinguish artificially generated content from authentic, human-generated content. The EU introduced this transparency obligation in order to reduce the risks of misinformation, fraud, impersonation, consumer deception, and similar harms.

Emotion recognition systems or biometric categorization systems

Here we go again! It is safe to say that emotion recognition systems and biometric categorization systems are subject to quite a lot of regulation in the AI Act, as they may, depending on the use case, be prohibited, classified as high-risk, or “only” subject to this transparency obligation. The transparency obligation requires deployers of emotion recognition systems or biometric categorization systems to notify individuals when they are being exposed to AI systems that, by processing their biometric data, can identify or infer the emotions or intentions of those individuals or assign them to specific categories (such as sex, age, hair color, eye color, tattoos, personal traits, ethnic origin, personal preferences or interests). This obligation applies in addition to other requirements that may apply to the AI system (e.g., if it is classified as high-risk). Moreover, the deployer must ensure that the processing of such personal data is carried out in accordance with the GDPR.

Deepfakes

Last but not least: deepfakes. The term “deepfake” is defined in the Act as:

AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.

Such AI systems are perhaps the most concerning AI use case when it comes to deception. The aim here is to protect individuals from being misled or manipulated by fake videos or audio recordings that could, for example, impersonate real people. Therefore, deployers of AI systems that generate deepfakes must clearly and distinguishably disclose that the content has been artificially generated or manipulated. The disclosure should be made by labelling the AI output accordingly and revealing its artificial origin.

However, in order not to impede the rights to freedom of expression and freedom of the arts and sciences - in particular when the content is part of an artistic, creative, satirical, fictional or analogous work or program - the transparency obligation is limited to disclosing the presence of AI-generated or manipulated content in a way that does not interfere with the display or enjoyment of the work. A similar disclosure obligation applies to AI-generated text that is published with the purpose of informing the public on matters of public interest, unless the content has undergone a process of human review or editorial control.

Concluding remarks

The transparency obligations in the AI Act aim to build trust in AI systems by ensuring that users know when they’re interacting with or being exposed to AI. Whether it’s an AI-driven barista, a deepfake trying to sway your vote, or a virtual assistant helping you out on a website, you’ll know who - or what - you’re interacting with.  

The information that providers and deployers are required to give you, as discussed above, must be provided in a clear and distinguishable manner. In addition, the information must be provided to the individual no later than the first interaction with or exposure to the AI system. Codes of practice for the implementation of these detection and labelling requirements will be drawn up, and the Commission is empowered to approve them through implementing acts. The Commission may also adopt common rules for the implementation of these requirements.

Lastly, it is important to keep in mind that where personal data is processed, the transparency requirements of the GDPR apply in addition to the obligations of the AI Act.  

In our next article in the Decoding the AI Act series, we will introduce you to the regulation of General Purpose AI in the Act. You won't want to miss it, so we hope you will stay tuned!

Written by: Hugo Snöbohm Hartzell

Contributions by: Elisavet Dravalou and Kristin Tell  

Synch brings calm to the rapidly evolving field of artificial intelligence by deep-diving into the dos and don'ts. Whether you're a tech enthusiast, a professional in the field, or simply curious about the future, our series of articles about AI aims to both inform and inspire.
