Users and creators of artificial intelligence (AI) systems face pressure to comply with incoming European regulation if policymakers stay on their current course.
Following the unauthorized disclosure of the EU's AI Act this week, analysts are advising firms that use the technology to track the legislation's progress closely, as some organizations may have a grace period of no more than a year in which to comply.
Kirsten Rulf, a partner and associate director at Boston Consulting Group, said organizations may have a window of only six to 12 months to prepare for most of the rules and restrictions, and urged providers of high-risk systems to speed up their compliance efforts.
Rulf said: “Providers of general-purpose AI, such as foundation models and generative AI applications, must be ready to comply with the regulations within a year. These timelines are ambitious, particularly given that many European companies are still working out what responsible AI means for their operations.”
The current timetable sees formal approval at EU ambassadorial level on February 2, with the act potentially published and adopted by May. The schedule is constrained by the European Parliament elections slated for June.
Rulf added: “The discussions in the European Parliament need to happen soon because of the approaching elections, but businesses should start planning now. If the current pace is maintained, it is highly likely that the EU AI Act will be finalized by the end of this legislative term.”
According to Tanguy Van Overstraeten, partner and global head of privacy and data protection at law firm Linklaters, compliance requirements will be phased in according to the AI categories set out in the draft act, with legislators proposing a tiered system in which some categories must comply before others.
Organizations will be expected to comply with the rules on prohibited applications six months after the act enters into force. These include biometric categorization systems that classify individuals by sensitive characteristics such as political views, religion, sexual orientation and race. Under the provisional agreement, practices such as untargeted scraping of facial images, emotion recognition in the workplace and in educational settings, and social scoring based on personal characteristics are also banned.
The next tier covers general-purpose AI, which includes applications such as OpenAI’s ChatGPT and faces a compliance deadline of a year. Van Overstraeten noted that the term “general-purpose AI” is deliberately broad, covering far more than generative AI alone.
To comply, general-purpose AI models will need to meet criteria including model evaluations, systemic risk assessments, adversarial testing, incident reporting to the European Commission, cybersecurity measures and energy-efficiency reporting, as set out in the provisional agreement.
High-risk systems will have longer to comply: standalone systems will be given 24 months, while embedded systems, such as those in medical devices, will have 36 months, Van Overstraeten explained.
Critics have raised concerns about the impact the proposed AI Act could have on research. Yann LeCun, Meta’s chief AI scientist, has argued that regulating foundation models amounts to regulating research and development, which he regards as unnecessary.
Van Overstraeten suggested that regulatory sandboxes could allow innovation to continue outside the strict regulatory framework, subject to approval from the authorities. The approach permits real-world testing over a six-month period, giving AI developers scope to iterate and improve their technology while remaining within the law.