
### Unveiling the Latest “AI Act” Text in the EU Deal

“In certain organizations, a prudent strategy may involve creating an internal position known as an ‘AI regulatory agent’ to guide the process of ensuring that all business operations adhere to the expected provisions of the future AI Act.”

On December 8, 2023, the European Union (EU) Parliament and the EU Council reached a preliminary agreement outlining the fundamental elements of the upcoming AI Regulation, commonly known as the ‘AI Act,’ which is set to be incorporated into EU legislation. Although the exact wording has not yet been revealed to the public, as it is slated for further refinement in the coming week, insights into the potential framework of the AI Act are available in the public domain, including official communications from the European Union.

Prohibited Uses of AI

The proposed AI Act maintains the approach outlined in our previous report, prohibiting certain AI applications deemed to pose unacceptable risks within the EU. The banned uses of AI software may include:

  • The use of facial recognition systems (or ‘remote biometric identification systems’) in public spaces for law enforcement purposes, subject to narrow exceptions,
  • The use of AI for manipulative practices that undermine individuals’ autonomy, termed ‘cognitive behavioral manipulation,’
  • The use of AI to analyze individuals’ emotions in workplace or educational settings,
  • ‘Social scoring’ of individuals based on personal traits or behaviors,
  • The exploitation of individuals’ vulnerabilities arising from factors such as age, socioeconomic status, or disability,
  • The inference of sensitive attributes, such as political opinions, sexual orientation, or religious beliefs, from biometric data,
  • Certain predictive policing techniques.

Furthermore, the indiscriminate scraping of facial images from online sources or CCTV footage to build facial recognition databases is explicitly prohibited. While the default position is to disallow facial recognition systems, exceptions may apply to law enforcement in public spaces under specific circumstances, such as targeted searches subject to prior judicial or administrative authorization. Exceptions might also be granted for preventing serious crimes or averting imminent terrorist threats.

High-Risk AI Functions

The AI Act is anticipated to govern high-risk AI applications that present significant risks to health, safety, fundamental rights, the environment, democracy, or the rule of law. Sectors such as education, employment, critical infrastructure, access to essential public services, law enforcement, migration and border control, democratic processes, and the administration of justice are expected to fall under the ‘high-risk’ classification.

Key provisions expected in the AI Act comprise:

  1. Mandatory conformity assessments for developers of high-risk AI systems to ensure adherence to essential criteria for ethical AI, covering data quality, technical documentation, transparency, human oversight, accountability, security, and robustness.
  2. A requirement to conduct a ‘fundamental rights impact assessment’ in specific scenarios, evaluating the AI system’s potential impacts, the demographics affected, risk mitigation strategies, human oversight mechanisms, and avenues of redress for potential harms.

Additionally, individuals are anticipated to possess the right to challenge AI decisions and request explanations for AI-driven choices impacting their rights.

General-Purpose AI (‘Foundation Models’)

The AI Act is poised to supervise ‘general-purpose AI models’ capable of varied applications, trained on extensive datasets, and possessing broad functionalities. Depending on the risk levels associated with the AI application, such models may be subject to distinct regulatory tiers beyond standard controls.

Providers of general-purpose AI may be required to:

  1. Maintain technical documentation and disclose design specifics to aid users in complying with the requirements of the AI Act.
  2. Implement policies to comply with EU copyright law, particularly respecting rights holders’ opt-outs from text and data mining.

Where general-purpose AI models pose ‘systemic risk,’ additional second-tier obligations may be imposed on providers, including EU-level model evaluations and risk assessments, continuous monitoring, cybersecurity safeguards, energy consumption reporting, and thresholds based on the computing power used in training.

Open-Source AI Systems

While open-source AI systems may receive partial exemptions from specific AI Act provisions, exemptions may not apply if they fall under prohibited or high-risk categories, pose manipulation risks, or exhibit systemic risks akin to general-purpose AI models.

Even where exemptions apply, providers of open-source AI systems may still need to meet certain accountability and transparency requirements.

Enforcement and Penalties

The enforcement of the AI Act is expected to be overseen by the European Commission’s ‘AI Office’ for general-purpose AI within the EU, with enforcement duties delegated to designated national authorities.

The maximum fines for the most serious violations have been increased to 7% of annual group turnover or €35 million (approximately USD 38 million), whichever is higher, signaling stringent penalties for non-compliance with the AI Act.
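As a quick illustration, the ‘whichever is higher’ rule can be sketched in a few lines of Python. This is a hypothetical helper for readers, not official tooling, and it is simplified: the Act is also expected to set lower fine tiers for less serious violations.

```python
# Illustrative sketch of the reported headline fine formula:
# the higher of 7% of annual group turnover or a EUR 35 million floor.
# Simplified -- the Act is expected to tier fines by violation type.

def max_fine_eur(annual_group_turnover_eur: float) -> float:
    """Return the reported maximum fine: max(7% of turnover, EUR 35M)."""
    return max(0.07 * annual_group_turnover_eur, 35_000_000)

# A group with EUR 1 billion turnover: the 7% share (EUR 70M) governs.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")
# A smaller provider at EUR 100 million turnover: the EUR 35M floor governs.
print(f"EUR {max_fine_eur(100_000_000):,.0f}")
```

In practice, of course, the actual exposure will depend on the violation category and the final text, but the floor-versus-percentage structure means large groups cannot treat €35 million as a ceiling.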

Implementation Schedule

The final text of the draft legislation is awaiting publication, with a projected timeline for full implementation as follows:

  • Official publication in the EU’s Official Journal is expected in the coming months, with the Act entering into force 20 days after publication, potentially by the summer of 2024.
  • Prohibitions on specific AI categories apply six months after entry into force, likely toward late 2024.
  • Obligations for high-impact general-purpose AI models are expected to apply after one year, possibly by summer 2025.
  • Provisions concerning governance and compliance bodies are likewise likely to take effect by summer 2025.
  • Remaining provisions, including most high-risk AI requirements, apply two years after entry into force, possibly around summer 2026.

While the final text of the EU AI Act is pending, businesses should proactively evaluate the potential implications of the Act on their AI operations and prepare for compliance with its stipulations. Key considerations may involve data privacy, educational data usage, and rights protection.

Establishing an ‘AI regulatory officer’ internally to supervise compliance with the forthcoming AI Act could be a strategic step for certain businesses. This position would ensure the safe utilization of AI systems within the organization and the implementation of safeguards for external stakeholders when deploying AI solutions.

Last modified: January 17, 2024