The European Parliament has granted final approval to the EU AI Act, one of the world’s first comprehensive regulations on artificial intelligence. The act aims to ensure that AI used in the European Union is trustworthy, safe, and respectful of fundamental rights, while also promoting innovation. The legislation passed with strong support: 523 votes in favor, 46 against, and 49 abstentions.
In a virtual press conference held before the vote, EU Parliament members Brando Benifei and Dragos Tudorache called the vote a historic moment in the journey towards AI regulation. Benifei emphasized that the final legislation would foster the development of safe, human-centric AI, in line with the EU Parliament’s priorities.
The idea for this legislation was initially proposed five years ago but gained momentum in the past year as powerful AI models became more prevalent. In December 2023, after lengthy negotiations, a provisional agreement was reached, and on February 13, the Internal Market and Civil Liberties Committees voted 71-8 to endorse this agreement.
Following today’s approval, minor linguistic adjustments will be made as the law is translated into the languages of all member states. The bill will then undergo a second vote in April and is expected to be published in the official EU journal, most likely in May. Bans on prohibited practices are slated to take effect starting in November. Benifei clarified that these bans become mandatory as soon as they enter into force, while the act’s remaining requirements will phase in over a longer compliance timeline.
The EU AI Act categorizes machine learning models into four groups based on the level of risk they pose to society, with high-risk models subject to the most stringent regulations. The legislation prohibits any AI system posing an “unacceptable risk” to safety, livelihoods, and human rights, including social scoring by governments and voice-assisted toys that encourage dangerous behavior. “High-risk” applications encompass critical infrastructure, education and training, safety components, public and private services, law enforcement, migration and border control, and the administration of justice and democratic processes. The “limited risk” tier imposes transparency obligations, such as informing users when they are interacting with an AI chatbot and identifying AI-generated content.
To help organizations determine their compliance with the EU AI Act, the EU has developed a tool called “The EU AI Act Compliance Checker,” which lets organizations assess where their systems fall within the legislation. The act also permits the “free use” of “minimal-risk” AI, such as AI-enabled video games and spam filters, which currently make up the majority of AI systems used in the EU.
Lawmakers have also included provisions for generative-AI models, in light of the growing popularity of AI chatbots such as ChatGPT, Grok, and Gemini. Developers of general-purpose AI models, from EU startups to established companies, will need to provide detailed summaries of their training data and comply with EU copyright law. Deepfake content generated using AI must also be clearly labeled under the law.
The EU AI Act initially faced opposition from local businesses and tech companies, who worried that overregulation would stifle innovation. After the vote, however, the legislation drew praise from IBM, whose vice president and chief privacy and trust officer, Christina Montgomery, commended the EU’s leadership and the act’s alignment with ethical AI practices.
Overall, the EU AI Act sets a significant precedent in the regulation of AI, aiming to strike a balance between innovation and the protection of fundamental rights within the European Union.