The European Parliament approves the first ever law on AI

Parliament today adopted the Artificial Intelligence Act, the first law of its kind, which ensures safety and respect for fundamental rights while boosting innovation. The Regulation passed with 523 votes in favour, 46 against and 49 abstentions.

It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability in the face of high-risk AI, while boosting innovation and making Europe a leader in the sector. The Regulation sets out a number of obligations for AI based on its potential risks and level of impact.

Prohibited applications

The new rules prohibit certain AI applications that threaten citizens’ rights, such as biometric categorisation systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and in schools, social scoring, predictive policing (when based solely on profiling a person or assessing their characteristics) and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be banned.

Law enforcement exemptions

The use of biometric identification systems by law enforcement authorities is prohibited in principle, except in narrowly defined situations. “Real-time” biometric identification systems may only be deployed if strict safeguards are met, e.g. their use is limited to a specific time and place and subject to prior judicial or administrative authorisation. Such cases may include the targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact is considered a high-risk use, requiring judicial authorisation linked to a criminal offence.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems, so classified because of their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law. Examples of high-risk uses of AI include critical infrastructure, education and vocational training, employment, essential public and private services (e.g. healthcare, banking), certain law enforcement systems, migration and border management, and justice and democratic processes (such as influencing elections). These systems must assess and mitigate risks, maintain records of use, be transparent and accurate, and ensure human oversight. Citizens will have the right to complain about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.

Transparency requirements

General-purpose AI systems and the models on which they are based must meet certain transparency requirements, respect EU copyright law and publish detailed summaries of the content used to train their models. More powerful models that could pose systemic risks will have to meet additional requirements, such as conducting model assessments, analysing and mitigating systemic risks, and reporting incidents.

In addition, artificial or manipulated images, audio or video content (“deepfakes”) must be clearly labelled as such.

Measures to support innovation and SMEs

Regulatory sandboxes and real-world testing will have to be established at national level and made accessible to SMEs and start-ups, so they can develop and train innovative AI before placing it on the market.