EU negotiators reach a provisional agreement on the Artificial Intelligence Act

11.12.2023

Late at night on 8 December, negotiators from the EU Parliament and the EU Council reached a provisional agreement on the Artificial Intelligence Act (AI Act). This EU regulation aims to ensure that AI systems placed on the EU market and used in the EU are safe and respect fundamental rights and EU values.

Following a risk-based approach, the AI Act regulates AI systems according to the level of risk they pose to society.

Thus, the AI Act divides AI systems into four categories:

  • Unacceptable risk systems – prohibited for use;
  • High risk systems – may be used subject to specific requirements (e.g., conducting a fundamental rights impact assessment before putting the AI system into use);
  • Limited risk systems – may be used subject to very light transparency obligations (e.g., disclosing that content was generated with AI so that users can make informed decisions);
  • Minimal risk systems – allowed for use without mandatory requirements under the AI Act.

The unacceptable risk and high risk categories were the subject of intense negotiations between the EU institutions. According to the EU Council and EU Parliament press releases, the following types of AI systems are among those included in the unacceptable risk list:

  • manipulation of human behaviour to circumvent free will;
  • exploitation of vulnerabilities of people (due to their age, disability, social or economic situation);
  • untargeted scraping of facial images from the internet or CCTV footage in order to create facial recognition databases;
  • emotion recognition in the workplace and in educational institutions;
  • social scoring based on social behaviour or personal characteristics (not to be confused with credit scoring);
  • biometric categorization to infer sensitive data (e.g., sexual orientation, political or religious beliefs).

Certain rules have been agreed on foundation models (i.e., large systems able to perform a wide range of tasks, such as generating video, text, or images, computing, or generating computer code). The provisional agreement sets forth that foundation models must comply with specific transparency obligations before they are placed on the market, while certain foundation models considered of “high impact” must observe a stricter regime.

The AI Act comes with hefty sanctions, even higher than those under the GDPR: fines may go up to EUR 35 million or 7% of global turnover.
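For a rough sense of how such a turnover-based cap works in practice, below is a minimal Python sketch. It assumes, following the GDPR model, that the applicable ceiling is the higher of the fixed amount and the turnover-based percentage; the function name and the example turnover figure are purely hypothetical, and none of this is legal advice.

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Return the headline fine cap: the higher of EUR 35 million or 7% of turnover (assumption)."""
    fixed_cap = 35_000_000                              # EUR 35 million
    turnover_cap = 0.07 * global_annual_turnover_eur    # 7% of global annual turnover
    return max(fixed_cap, turnover_cap)

# Hypothetical example: a company with EUR 1 billion in global annual turnover
print(max_ai_act_fine(1_000_000_000))  # 70000000.0 -> the 7% cap applies
```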

The final text of the AI Act is not yet available, as work on it continues. The next steps are the following:

  • continuation of the work at the technical level to finalize the details of the regulation;
  • endorsement of the compromise text by the representatives of EU member states in Coreper;
  • confirmation of the finalized text by the EU Council and EU Parliament;
  • revision of the text from a legal-linguistic perspective;
  • adoption of the text by the EU Council and EU Parliament;
  • publication of the AI Act in the Official Journal of the European Union.

The AI Act should apply two years after its entry into force, with some exceptions for specific provisions.

The EU Council press release may be read here.

The EU Parliament press release may be read here.

You may also want to see the previous official versions of the AI Act, which are available here.
