
### Reviving the Tiered Approach: The EU Commission’s Shift to General Purpose AI


The European Commission circulated a possible compromise on the AI Act on Sunday, 19 November, in an attempt to break the deadlock on foundation models. The text follows a tiered approach to General Purpose AI and introduces codes of practice for models posing systemic risks.

The AI Act is a landmark piece of legislation intended to regulate artificial intelligence according to its capacity to cause harm. The EU Commission, Council, and Parliament are currently engaged in trilogues, the final phase of the legislative process.

Amid the ongoing negotiations, ChatGPT, a well-known General Purpose AI system powered by foundation models such as GPT-4, has been at the centre of contention among EU policymakers in recent months.

On 10 November, Europe's three largest economies, France, Germany, and Italy, came out against the proposed tiered approach for foundation models, advocating instead for codes of conduct alone, a stance that threatened to derail the entire legislative process, as reported by Euractiv.

The European Parliament, by contrast, considers accountability for foundation models essential and is not prepared to exempt them from obligations. MEPs are due to meet on Tuesday, 21 November to discuss governance, law enforcement, and foundation models.

The European Parliament's co-rapporteurs reached a consensus on the text with EU regulators on Sunday, with their counterparts following suit on Monday. The proposal maintains a tiered approach centred on General Purpose AI, complemented by codes of practice.

#### General Purpose AI Systems and Models

Significant revisions have been made compared with the version circulated by the Spanish presidency, addressing feedback from leading MEPs earlier this month. The latest text clearly distinguishes between General Purpose AI (GPAI) models and GPAI systems.

Under the revised definition, a “general-purpose AI model” is one capable of competently performing a wide range of distinct tasks, regardless of how it is placed on the market, including models trained with large amounts of data through self-supervision at scale.

A GPAI system, by contrast, is one based on a GPAI model that “has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”

GPAI models are to be identified as posing systemic risks on the basis of their “frontier capabilities,” assessed using appropriate tools and methodologies. The co-rapporteurs, however, have raised concerns about the clarity of this wording.

Furthermore, the draft introduces a quantitative threshold for identifying GPAI models with systemic risks: the amount of computing power used during training, exceeding 10^26 floating-point operations.
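To give a sense of what a compute threshold means in practice, the sketch below uses the widely cited heuristic of roughly 6 × parameters × training tokens for transformer training compute. Both the heuristic and the example figures are illustrative and are not part of the proposal:

```python
# Illustrative only: checking an estimated training run against the
# draft's 10^26 FLOP threshold for presumed systemic risk.

THRESHOLD_FLOPS = 1e26  # quantitative threshold from the draft compromise


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough heuristic: ~6 FLOPs per parameter per training token."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(parameters, training_tokens) >= THRESHOLD_FLOPS


# A hypothetical 175B-parameter model trained on 2 trillion tokens
# comes out around 2.1e24 FLOPs, well below the 1e26 threshold.
print(presumed_systemic_risk(175e9, 2e12))
```

By this rough estimate, only training runs considerably larger than today's well-known models would cross the draft's threshold.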

To keep these benchmarks aligned with market and technological developments, the Commission is empowered to adopt secondary legislation further specifying the technical aspects of GPAI models. The Commission may also be tasked with updating the definition of GPAI models and establishing a methodology for evaluating their capabilities within 18 months of the regulation's entry into force.

Providers of GPAI models meeting these criteria must notify the Commission, an approach modelled on the Digital Services Act. The proposal also allows providers whose models lack frontier capabilities to request an exemption, though the co-rapporteurs question whether this provision is necessary.

The Commission also retains the authority to designate GPAI models as posing systemic risks on its own initiative.

#### Requirements for GPAI Models

The proposal sets out horizontal requirements applicable to all GPAI models, including comprehensive technical documentation in the form of model cards, a provision also present in the Franco-German-Italian non-paper.

Model cards are expected to cover the training process, evaluation methodologies, and information relevant to downstream operators intending to build AI systems that comply with the AI Act. Providers must also put in place a mechanism to comply with the Copyright Directive, particularly regarding rights reservations.
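As an illustration of the kind of documentation a model card carries, here is a minimal hypothetical sketch. The field names are invented for this example and are not taken from the proposal:

```python
# Hypothetical model card structure; fields are illustrative only and
# loosely track the categories the draft mentions (training process,
# evaluation, downstream guidance, copyright compliance).
model_card = {
    "model_name": "example-gpai-model",  # hypothetical identifier
    "training_process": "self-supervised pre-training on large-scale text data",
    "evaluation": {
        "methodologies": "describe benchmarks and evaluation protocols used",
        "results_summary": "summarise headline results here",
    },
    "downstream_guidance": (
        "information for downstream operators building AI systems "
        "on top of the model"
    ),
    "copyright_policy": (
        "mechanism for honouring rights reservations under the "
        "Copyright Directive"
    ),
}

print(sorted(model_card))
```

In practice such documentation is typically published as structured text alongside the model so that downstream operators can consume it programmatically.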

Providers must also publish a sufficiently detailed description of the data used to train the model, ensuring transparency. Moreover, any artificially generated text or images must be marked in a machine-readable format so they can be identified as such.
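The proposal does not prescribe any particular marking format; a minimal hypothetical scheme for attaching a machine-readable marker to generated text might look like this:

```python
# Illustrative only: one simple way to attach and detect a machine-readable
# "AI-generated" marker. The marker format is invented for this sketch and
# is not specified by the draft regulation.
import json


def mark_as_generated(text: str, model: str) -> str:
    """Prepend a machine-readable JSON marker to generated text."""
    marker = {"ai_generated": True, "model": model}
    return f"<!-- {json.dumps(marker)} -->\n{text}"


def is_marked_generated(payload: str) -> bool:
    """Check whether the first line carries the JSON marker."""
    first_line = payload.splitlines()[0]
    if first_line.startswith("<!--") and first_line.endswith("-->"):
        meta = json.loads(first_line[4:-3].strip())
        return meta.get("ai_generated", False)
    return False


print(is_marked_generated(mark_as_generated("Hello, world.", "demo-model")))
```

Real-world approaches range from embedded metadata of this kind to cryptographic provenance standards and watermarking, which are more robust to copying and editing.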

#### Requirements for GPAI Models with Systemic Risks

Providers of models with systemic risks must implement internal controls, work with the Commission to identify potential systemic risks, and develop mitigation measures, possibly through a code of practice aligned with international standards. They must also report any serious incidents and the relevant corrective measures to the Commission or national authorities without delay.

#### Codes of Practice

In response to the France-led non-paper advocating codes of conduct based on the principles of the G7's Hiroshima Process, the Commission is tasked with facilitating the drawing up of a code of practice.

The code of practice is expected to cover the identification of systemic risks at the EU level, risk assessment, mitigation measures, and transparency obligations for all GPAI models, such as model cards and reporting templates. Relevant stakeholders will be involved in drawing up the code, which is to include key performance indicators and standardised reporting mechanisms to support compliance.

The Commission may approve the code of practice to aid implementation of the AI Act, giving adherents assurance of compliance. Notably, the proposal does not elaborate on penalties for non-compliance.

Last modified: February 23, 2024