The world’s first AI Act aims to set boundaries for the technology and safeguard ‘fundamental rights’.
Lawmakers from two key European Parliament committees have endorsed new rules to govern artificial intelligence (AI), ahead of a pivotal vote that could produce the world’s first legislation on the technology.
The provisional legislation, which won overwhelming approval from the parliament’s civil liberties and consumer protection committees, focuses on ensuring that AI respects “fundamental rights”.
A decisive vote in the full parliament is slated for April.
The AI Act is designed to establish guidelines for a technology utilized across various sectors, including banking, automotive, electronics, aviation, security, and law enforcement.
At the same time, it seeks to foster innovation and position Europe as a leader in AI, the parliament said.
The legislation is widely seen as a global benchmark for governments seeking to harness the potential benefits of AI while guarding against risks such as misinformation, job displacement and intellectual property violations.
First proposed by the European Commission in 2021, the legislation was delayed by disagreements over how to regulate language models that scrape online data and over the use of AI by law enforcement and intelligence agencies.
The rules will also cover foundation models, or generative AI systems such as those built by Microsoft-backed OpenAI: AI models trained on large datasets that can learn from new data to perform a variety of tasks.
Eva Maydell, the MEP for Tech, Innovation, and Industry, hailed Tuesday’s approval as a “praiseworthy outcome” that fosters societal trust in AI while empowering companies to innovate and create.
Deirdre Clune, an MEP for Ireland South, said the development brings Europe a step closer to comprehensive AI regulation.
Recently, European Union member states endorsed the political agreement reached in December on the AI Act, which aims to tighten oversight of governments’ use of AI in biometric surveillance and to regulate AI systems.
France won concessions to ease the administrative burden on high-risk AI systems and to strengthen protection for trade secrets.
The legislation requires foundation models and general-purpose AI systems to meet transparency obligations before they reach the market, including drawing up technical documentation, complying with EU copyright law and publishing detailed summaries of the training data used.
Major tech firms have voiced reservations about these requirements and their potential impact on innovation.
Tech companies operating in the EU will be required to disclose the data used to train AI systems and to test their products, particularly in high-risk applications such as self-driving vehicles and healthcare.
The law prohibits the indiscriminate collection of images from the internet or surveillance footage to create facial recognition databases, with exceptions made for the “real-time” use of facial recognition by law enforcement agencies to combat terrorism and serious crimes.