The EU is poised to introduce the next significant regulatory framework for artificial intelligence through the upcoming AI Act, which carries ambitious objectives. Nonetheless, there is a notable divide among the bloc’s policymakers and the broader tech sector regarding its stringent approach to General Purpose AI (GPAI) and foundational models.
Concerns have arisen within Europe's IT community that the AI Act falls short on promoting technological independence and on applying genuinely risk-based controls, particularly following the recent trilogue discussions between the Commission, the Council, and the Parliament.
In a joint statement issued by stakeholders such as DOT Europe, apprehensions were raised about how the proposed rules on GPAI and foundational models would fit the intricate AI value chain. The signatories argued that regulation should target the risks an AI system actually poses rather than the specific technology employed, pointing to a disconnect in the Act's evaluation criteria.
Moreover, there is contention over the classification of these technologies as either “highly capable” or “having a significant impact,” as the current EU criteria do not directly correlate with the actual risk levels associated with AI systems.
To foster a collaborative regulatory approach that draws on insights from across the value chain, the signatories suggest that any obligations concerning foundational models should account for the global, multi-stakeholder nature of the ecosystem.
Industry representatives are also pushing back against potential additional requirements on the use of copyrighted data for AI training, citing the bloc's existing, robust framework for protecting intellectual property rights. They argue that the AI Act's primary focus on health, safety, and fundamental rights sits uneasily alongside these added legal complexities.
In addition to DOT Europe, the Software Alliance (BSA), Computer & Communications Industry Association (CCIA), Developers Alliance, Information Technology Industry Council (ITI), and Association of the Internet Industry (eco) have also endorsed the joint statement.
Beyond the IT sector, concerns surrounding the AI Act have been echoed internationally, with the US cautioning that the associated costs could disproportionately impact smaller Western enterprises while benefiting larger corporations. Executives from leading European companies have similarly expressed reservations about the potential stifling of innovation due to excessive regulatory measures.