LONDON: This past week, European Union leaders negotiated late into the night to finalize an agreement on landmark rules governing the use of artificial intelligence across the 27-nation bloc.
The Artificial Intelligence Act is Europe's latest effort to regulate AI systems, and one with potential global implications.
Let’s explore the regulations governing artificial intelligence:
Instead of focusing on the technology itself, the AI Act emphasizes regulating the purposes for which AI is utilized, embracing a “risk-based approach” towards products or services employing artificial intelligence. The main goal is to safeguard democracy, the rule of law, and fundamental freedoms like freedom of speech, while also promoting investment and innovation.
The stringency of the regulations depends on the level of risk posed by an AI system. Low-risk applications, such as content recommendation systems or spam filters, are only required to follow basic guidelines, like transparently indicating their AI-driven nature.
By contrast, high-risk applications, such as medical devices, face more rigorous requirements, including the use of high-quality data and the provision of clear information to users.
Certain AI applications, such as social scoring systems influencing behavior, specific predictive policing practices, and emotion recognition systems in educational institutions and workplaces, are prohibited due to their perceived risks.
Except in cases of serious crimes such as kidnapping or terrorism, law enforcement cannot use AI-powered remote "biometric identification" systems to scan people in public.
The AI Act is set to be enforced two years after receiving final approval from European lawmakers, anticipated in an early 2024 vote. Violations could lead to fines of up to 35 million euros ($38 million) or 7% of a company’s global revenue.
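The penalty ceiling described above can be sketched in a few lines of code. Note one assumption not stated in the article: like comparable EU penalty regimes, the cap is taken here to be whichever of the two figures is higher.

```python
def max_fine_eur(global_revenue_eur: float) -> float:
    """Upper bound on an AI Act fine, per the figures in the article:
    35 million euros or 7% of a company's global revenue.
    Assumes the 'whichever is higher' rule used in comparable EU law."""
    return max(35_000_000, 0.07 * global_revenue_eur)

# A company with 2 billion euros in global revenue:
print(max_fine_eur(2_000_000_000))  # 140000000.0

# A small firm whose 7% share falls below the flat cap:
print(max_fine_eur(100_000_000))  # 35000000
```

For large companies the revenue-based figure dominates, which is why percentage-of-turnover caps, rather than flat sums, are what give such rules their deterrent weight.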
While the AI Act may directly impact the nearly 450 million residents of the EU, experts predict that due to Brussels’ crucial role in establishing regulations that serve as global standards, its influence may extend far beyond the EU.
Previous EU rules have had similar reach, notably the common-charger mandate that compelled Apple to abandon its proprietary Lightning cable.
While many countries are contemplating the necessity of AI regulation, the extensive laws introduced by the EU are poised to serve as a blueprint for others.
Anu Bradford, a Columbia Law School professor who specializes in EU law and digital regulation, says the AI Act amounts to a comprehensive, horizontal, and binding framework for AI regulation, one poised not only to reshape the landscape in Europe but to propel regulatory efforts across other jurisdictions.
She believes that this positions the EU uniquely to lead the way and demonstrate to the world that AI can be effectively governed and subjected to political oversight.
Even the aspects that the law does not address could have significant global ramifications, according to human rights organizations.
Amnesty International criticizes Brussels for “greenlighting futuristic surveillance” in the 27 EU Member States by not completely prohibiting live facial recognition, setting a concerning precedent globally.
Amnesty also denounces lawmakers for not outlawing the trade of AI technologies that could infringe upon human rights, such as those employed for social scoring, a practice prevalent in China to incentivize compliance with state surveillance.
The United States and China, as the two major players in AI development, have initiated their own regulatory processes.
President Joe Biden signed a comprehensive executive order on AI in October, expected to be reinforced by international treaties and regulations.
Leading AI developers are required to report safety test results and other pertinent information to the government. Companies must also establish standards to ensure the safety of AI tools before public release and provide guidelines for labeling AI-generated content.
Meanwhile, China has issued "interim measures" for regulating generative AI, covering text, images, sound, video, and other content intended for Chinese users.
President Xi Jinping has proposed a Global AI Governance Initiative, advocating for a fair and transparent environment for AI advancement.
The rapid advancement of OpenAI’s ChatGPT highlighted the evolving nature of AI technology, prompting Western policymakers to reassess their strategies.
The AI Act covers chatbots and other general-purpose AI systems capable of diverse tasks such as composing poetry, generating videos, and writing computer code.
Officials have adopted a two-tiered approach, with general-purpose systems required to comply with basic transparency standards, disclosing information about their data governance and the energy consumption during model training using extensive datasets from the internet.
Moreover, they must adhere to international rights regulations and disclose the sources of data used for training.
The most advanced AI systems, built with substantial computing power, will face stricter regulation because of concerns about how they ripple out into services built by other software developers.