The EU lawmakers steering the Artificial Intelligence Act have circulated a possible landing zone on the law enforcement provisions, one of the most politically sensitive parts of the file.
The AI Act, a legislative proposal aimed at regulating artificial intelligence to safeguard individuals and their fundamental rights, is currently undergoing discussions in trilogues involving the EU Parliament, Council, and Commission.
The key parts of the legislation that remain unresolved concern the requirements for General Purpose AI, governance, and law enforcement. On Tuesday, 21 November, the MEPs involved in the file will convene to address these outstanding issues.
Ahead of the meeting, the European Parliament's co-rapporteurs, Dragoș Tudorache and Brando Benifei, circulated a draft on the delicate law enforcement chapter, which was seen by Euractiv.
While most MEPs have emphasized the importance of protecting fundamental rights, EU member states have been pushing for greater leeway for their law enforcement agencies.
National Security Exemption
Within the EU Council, France has been pushing for a broad national security exemption. However, the political groups appear to be converging on narrower wording, as set out in the co-rapporteurs’ proposal.
According to the draft, “This Regulation shall not be applicable to AI technologies developed or used exclusively for military purposes,” thereby ensuring that Member States retain their autonomy in conducting military, defense, or national security operations.
Remote Biometric Identification
The European Parliament appears inclined to soften its ban on remote biometric identification (RBI) systems in public spaces by allowing specific exceptions for law enforcement purposes. Notably, the mention of “real-time” has been dropped from the latest version, suggesting that these exceptions could extend beyond immediate use. The exceptions cover the targeted search for individuals connected to a predefined list of serious crimes and the identification of criminal suspects.
The draft specifies, “The usage should be limited to verifying the identity of the specified individual; it should not involve automated matching of real-time or post-event video footage with existing databases.” Furthermore, the application of RBI systems must be strictly necessary in terms of time, location, and scope, and must receive prior judicial approval, eliminating the possibility of unilateral authorization by law enforcement agencies.
Moreover, the provision allowing law enforcement agencies to continue using RBI systems in exigent circumstances without completing a fundamental rights impact assessment has been removed. Member States are required to notify the Commission of any national regulations governing the use of RBI in public spaces within 30 days of adoption, with each utilization of these systems subject to regulatory oversight.
Regulatory authorities are mandated to conduct annual reviews and provide reports to the Commission for comprehensive monitoring and control of RBI usage.
Additional Restrictions
In exchange for permitting RBI systems, MEPs propose expanding the list of prohibited AI applications. The European Parliament advocates banning biometric categorization systems that infer sensitive personal data, such as sexual orientation, with exceptions for specific professional contexts.
The debate on predictive policing, previously classified as a high-risk application, continues, with most political groups opposing its inclusion in the high-risk list and advocating for its outright prohibition.
High-Risk Use Cases
The AI Act includes a catalog of high-risk use cases that pose significant risks to individuals’ rights and safety. The co-rapporteurs suggest broadening the RBI category to encompass all permissible uses, including those in publicly accessible areas.
Furthermore, they propose classifying biometric categorization and emotion recognition technologies not covered by the existing bans as high-risk. Rather than banning AI-powered polygraphs and similar tools outright, these would fall into the high-risk category when used by law enforcement agencies in compliance with EU and national law.
Certain AI applications in law enforcement and migration, such as crime analysis, robust deception detection, authentication of travel documents, and forecasting migration patterns, have been excluded from the high-risk category. However, any AI application for border control, except for travel document authentication, may be deemed high-risk.
Oversight in Law Enforcement
The decision to deploy a high-risk system must be validated by at least two individuals, in line with the “four-eyes” principle advocated by European law enforcement authorities. Despite opposition from most political groups, the co-rapporteurs propose retaining the requirement for law enforcement agencies to register their high-risk AI systems in the EU database. While some suggest keeping this data in a non-public section, others argue for partial public accessibility.
A provision allowing the use of AI systems without compliance assessment, subject to subsequent authorization within 48 hours, may be accepted in exchange for other concessions.