DECODING THE AI ACT PART 3 – what is prohibited?

AI that manipulates people's decisions, classifies people based on their social behaviour, scrapes facial images from the internet or infers emotions – sounds pretty scary, right? Undoubtedly, AI systems may be used for several unpleasant and “risky” purposes. With this in mind, the EU is, through the AI Act (the “Act”), setting the boundaries of what is considered an unacceptable risk in relation to AI systems, in order to safeguard, e.g., human rights, safety, and transparency. Following the previous articles in our Decoding the AI Act series, the time has now come to dive deep into the AI practices that will be prohibited from being placed on the market, put into service, and used under the Act.[1]

Subliminal or manipulative practices 

First, AI systems that deploy subliminal, manipulative, or deceptive techniques are prohibited under the Act. These are techniques that operate beyond a person's consciousness and have the objective or effect of materially distorting the person's behaviour by impairing their ability to make an informed decision or restricting their freedom of choice. In other words, these are tools that affect people's decision-making, in particular where they are likely to cause significant harm to the individual concerned. It is important to highlight that the operator of the AI system does not need to intend to cause significant harm, as long as such harm results from the manipulative or exploitative AI practice.

Some examples of practices that could fall into this category of prohibited AI include social media platforms promoting certain content through their algorithms to manipulate users’ feelings and behaviour, “dark pattern” practices in marketing that nudge consumers into buying more products, techniques that subconsciously urge individuals to disclose their personal data, and deepfakes that look and sound real enough to trick people into believing something false.

Practices exploiting vulnerabilities 

AI systems that exploit people’s vulnerabilities based on their age, disability, or social or economic situation in order to distort their behaviour in a manner that causes or is likely to cause significant harm to that person or another person are also prohibited under the Act.

This category of prohibited AI systems may include highly personalised online ads that take advantage of these types of vulnerabilities, AI systems that perpetuate bias or discrimination against certain groups, tools that gather sensitive personal data without consent (e.g., to profile voters in elections), and AI that encourages these vulnerable groups to do something dangerous.

Social scoring systems 

Social scoring systems refer to AI systems used to evaluate or classify people based on their social behaviour or personal or personality characteristics. To be prohibited, the system must lead to detrimental or unfavourable treatment that is either (i) applied in a social context unrelated to the context in which the data was originally generated or collected, or (ii) unjustified or disproportionate to the person's social behaviour.

For example, AI systems that discriminate based on characteristics such as race, gender, or religion will likely not be allowed. Practices likely to be considered prohibited include employers using AI tools to analyse job applicants' social media in order to make hiring decisions based on factors unrelated to job performance, such as political views, religious beliefs, or membership in specific groups. Another example is financial institutions using AI tools influenced by non-financial data to evaluate creditworthiness. Furthermore, and unsurprisingly, any form of government scoring system for determining eligibility for public services where citizens' "score" is influenced by their social behaviour will not be allowed.

Crime prediction 

Crime prediction AI systems are AI systems that assess or predict the risk of an individual committing a criminal offence based solely on the profiling of that individual or on an assessment of their personality traits and characteristics (rather than on, e.g., previous crimes). This prohibition endorses the principle of the presumption of innocence, upholding that individuals should be considered innocent until proven guilty. It also highlights the importance of relying on concrete actions rather than AI-generated predictions based on factors such as nationality, personality, and social or economic situation. A relatively well-known practical example is the “predictive policing” software used by police forces in the U.S., which resulted in, e.g., racial biases.[2]

Facial recognition databases

Another type of AI system prohibited under the Act is systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This prohibition aims to prevent the spread of a culture of mass surveillance and practices that infringe upon fundamental rights, particularly the right to privacy. A real-life example of a practice likely to be prohibited on this ground is Clearview AI's broad scraping of images from the internet to build a facial recognition database, a practice for which the company has already faced fines under the GDPR.[3]

Inferring emotions in the workplace and educational institutions 

AI systems used to infer the emotions of natural persons in the workplace or in educational institutions will also be prohibited. However, the prohibition does not apply where such systems are intended for medical or safety reasons. In environments such as workplaces or schools, using these types of "emotion detecting" AI systems could result in unfair treatment, since the risk of incorrect assessments and bias is ever-present. According to a researcher at the University of Michigan, many companies already use AI to monitor their workers' emotions. An example may be AI tools used during job interviews to assess candidates' emotions or stress levels. 

Biometric categorisation 

AI systems that categorise individual natural persons based on their biometric data in order to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation are also prohibited. There is an obvious risk that such AI systems may enable discriminatory practices and reinforce existing societal inequalities. Examples of biometric data include, inter alia, facial characteristics and fingerprints. This prohibition does not apply to law enforcement agencies' labelling or filtering of lawfully acquired biometric datasets. 

Real-time remote biometric identification systems in publicly accessible spaces 

The Act also prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes. Such techniques refer to processes where the capturing, comparison, and identification of biometric data occur more or less instantly. The aim of the prohibition is to prevent these AI systems from intruding into and surveilling individuals' lives and, thus, to ensure that individuals' fundamental right to privacy is respected.

However, the prohibition does not apply in certain circumstances where the use of such AI systems is considered critical to protect a significant public interest that outweighs the potential risks. The exceptions include searching for specific victims, preventing threats to life and safety from an imminent terrorist attack, and tracking or identifying certain individuals who are suspected of committing serious crimes. These use cases are, nevertheless, only permitted if the requirements that the Act imposes on law enforcement authorities and the Member States to mitigate the risks are strictly complied with.

Comment

The list of prohibited AI practices should be the first thing operators of AI systems keep in mind when analysing their AI practices against the AI Act. If you suspect that an AI system you, e.g., develop or use may fall under the prohibited practices, you should always consult a legal expert within the AI field. If your use of an AI system turns out to be prohibited, you may be subject to hefty administrative fines.

Furthermore, it is important to keep in mind that the European Commission will assess the need to amend the list of prohibited practices and may, thus, adjust it moving forward. 

The AI Act will enter into force twenty days after its publication in the Official Journal of the European Union. The publication is expected to take place in the coming weeks. The bans on prohibited AI practices will apply six months after the entry into force. 

Stay tuned. In the upcoming articles in the Decoding the AI Act series, we will discuss the other risk levels in the Act, such as high-risk AI systems and the applicable requirements. Therefore, follow us on LinkedIn to stay posted on the forthcoming articles! 

Written by: Hugo Snöbohm Hartzell

Contributions: Elisavet Dravalou & Kristin Tell

Synch brings calm to the rapidly evolving field of artificial intelligence by deep-diving into the dos and don'ts. Whether you're a tech enthusiast, a professional in the field, or simply curious about the future, our series of articles about AI aims to both inform and inspire.

1. For further elaboration on the terms “placing on the market” and “putting into service”, please see our previous article Decoding the AI Act part 2.

2. https://www.theverge.com/2018/4/26/17285058/predictive-policing-predpol-pentagon-ai-racial-bias

3. https://www.edpb.europa.eu/news/national-news/2022/facial-recognition-italian-sa-fines-clearview-ai-eur-20-million_en
