European Union - Regulation 2024/1689 on artificial intelligence (AI Act)
13 June 2024
On 1 August 2024, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (the Artificial Intelligence Act, or AI Act) entered into force. The Regulation aims to create a harmonised, horizontal regulatory framework for the development, placing on the EU market and use of artificial intelligence (AI) products and services, with a focus on managing risks to health, safety and fundamental rights.
The Artificial Intelligence Regulation (EU) aims to improve the functioning of the internal market and to promote the uptake of human-centric and trustworthy artificial intelligence (AI). At the same time, it seeks to ensure a high level of protection of health, safety and the fundamental rights enshrined in the Charter of Fundamental Rights, including democracy, the rule of law and the environment, against the harmful effects of AI systems in the Union, while also encouraging innovation (Art. 1).
To achieve this, the AI Act first defines its scope of application (Art. 2) and provides the definitions relevant for regulatory purposes (Art. 3). It then establishes rules for AI systems based on their level of risk: unacceptable, high, transparency-related, or minimal.
Specifically, the Regulation classifies AI systems into the following categories:
1. Unacceptable Risk AI Systems: These systems are explicitly prohibited from being marketed or used in the EU internal market (Chapter II, Art. 5). This category includes systems that:
- Deploy subliminal, manipulative or deceptive techniques that materially distort a person's behavior.
- Exploit the vulnerabilities of persons due to their age, disability or social or economic situation.
- Score individuals or groups based on their social behavior or personal characteristics (social scoring), under certain conditions.
- Assess the risk of a person committing a criminal offense solely on the basis of profiling or personality traits.
- Create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Categorize people on the basis of their biometric data in order to infer sensitive characteristics.
- Perform real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).
2. High-Risk AI Systems: These systems are classified according to the criteria set out in the Regulation (Art. 6). They may be placed on the market or put into use only after a conformity assessment, carried out either by the provider itself or by a notified third-party body (Art. 43), demonstrating compliance with the Regulation's requirements (Chapter III, Section 2). High-risk systems must implement a risk management system that identifies, evaluates, and manages potential risks on a continuous basis, including through post-market monitoring (Art. 9). Providers are also required to maintain up-to-date technical documentation (Art. 11) and logging capabilities (Art. 12), and must design their systems to achieve adequate levels of accuracy, robustness, and cybersecurity throughout their lifecycle (Art. 15). Further requirements concern data governance (Art. 10), system transparency (Art. 13), and human oversight (Art. 14). AI used as a medical device, AI systems intended to be used by public authorities (or on their behalf) to evaluate the eligibility of natural persons for essential public assistance benefits and services (including healthcare services), and AI systems intended to evaluate and classify emergency calls from natural persons or to dispatch, or establish priority in the dispatching of, emergency first response services (e.g. medical aid and emergency healthcare patient triage systems) are all classified as high-risk (Annex I and Annex III). The AI Act also lays down specific obligations for both providers and deployers of high-risk AI systems (e.g. the performance of a fundamental rights impact assessment, Art. 27).
3. Transparency Risk AI Systems: Where a natural person interacts with an AI system of this type, they must be informed that they are interacting with a machine, and content generated or manipulated by the system must be disclosed as such (Art. 50).
4. Minimal Risk AI Systems: These systems are not subject to obligations beyond existing sectoral legislation and what EU law already provides. However, the Regulation encourages the adoption of codes of conduct to promote voluntary compliance by minimal-risk systems with the requirements applicable to high-risk systems (Art. 95).
The AI Act also establishes specific rules for general-purpose AI models (Chapter V), imposing stricter requirements for models classified as having systemic risk (Art. 51 ff.).
Furthermore, Chapter VI introduces measures to support innovation, centered on regulatory sandboxes for the development and testing of innovative AI systems. These sandboxes provide a controlled environment in which new technologies can be tested under the supervision of the competent authorities, for a limited time and on the basis of an agreed plan. The AI Act also sets out rules for the real-world testing of AI systems.
It requires Member States to designate national competent authorities and establishes a European Artificial Intelligence Board to oversee the application and implementation of the Regulation. Additionally, the AI Act introduces administrative fines for non-compliance with its provisions.
The AI Act shall apply from 2 August 2026, with the following exceptions:
- Chapter I (general provisions) and Chapter II (prohibited practices) shall apply from 2 February 2025;
- Chapter III, Section 4 (notifying authorities and notified bodies), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties) and Article 78 (confidentiality) shall apply from 2 August 2025, with the exception of Article 101;
- Article 6(1) and the corresponding obligations in the Regulation (the classification rules and related obligations for high-risk systems that are products, or safety components of products, covered by Annex I) shall apply from 2 August 2027.