
### Predictions for AI Regulation in 2024

The coming year will see the first sweeping AI laws enter into force, with global efforts to…

The MIT Technology Review’s What’s Next series delves into a variety of industries, trends, and technologies to offer a glimpse into the future; the rest of the series is now available. The landscape of artificial intelligence (AI) policy and regulation underwent a significant shift in 2023, transitioning from a relatively obscure topic to making headlines worldwide. This transformation was partly catalyzed by OpenAI’s ChatGPT, which not only accelerated the broader adoption of AI but also exposed people to both the capabilities and the limitations of AI technologies. The European Union’s agreement on the first comprehensive AI regulations, alongside hearings in the US Senate and executive orders, marked a pivotal moment in policy development.

If 2023 was the year politicians reached consensus on a vision, 2024 is likely to be the year concrete policies begin to materialize. Here is a preview of what to expect:

#### The United States

In 2023, AI emerged as a prominent topic in US political discourse. But it was not merely a subject of debate; concrete actions were taken, culminating in President Biden’s executive order on AI, issued at the end of October. This extensive order mandates new standards and heightened accountability measures.

As a result of these initiatives, a distinct American approach to AI policy began to take shape: one that supports the AI industry, prioritizes best practices, delegates rule-making to various agencies, and employs nuanced regulatory frameworks tailored to different economic sectors.

The momentum of 2023 is set to carry forward into the subsequent year, with many of the directives outlined in Biden’s executive order being implemented. The establishment of the new US AI Safety Institute, tasked with overseeing the fulfillment of the order’s mandates, is poised to attract significant attention.

From a legislative standpoint, the future remains uncertain. Senate Majority Leader Chuck Schumer has hinted at supplementary legislation to complement the executive order. Several proposals addressing diverse aspects of AI, such as software accountability, transparency, and deepfakes, are already in progress. However, whether any of these bills will gain traction in the coming months remains unclear.

Nevertheless, expect an approach that categorizes AI types and applications by risk level, akin to the EU’s AI Act. The National Institute of Standards and Technology has already proposed a risk-management framework that each sector and agency would then need to put into practice, according to Chris Meserole, executive director of the Frontier Model Forum.

One thing is certain: in the US, the debate over AI regulation will be shaped by the 2024 presidential election. The election will likely color the discussion of how to mitigate the potential harms of AI technologies, particularly generative AI’s impact on social media platforms and propaganda.

#### Europe

The European Union has reached agreement on the groundbreaking AI Act, the world’s first comprehensive AI legislation.

Following extensive technical refinements and formal endorsements by European nations and the EU Parliament, the AI Act is set to come into effect swiftly, potentially by the first quarter of 2024. Restrictions on certain AI applications could be enforced as early as the year’s end under optimistic circumstances.

This means the AI industry faces a dynamic period in 2024 as it prepares to comply with the new regulatory framework. Companies developing foundation models and applications deemed to pose a “high risk” to fundamental rights, particularly those intended for deployment in sectors like education, healthcare, and policing, will need to meet the new EU standards. While most AI applications will be exempt from the AI Act’s strictest requirements, police in Europe will be prohibited from using biometric technologies such as facial recognition in public spaces without court authorization, except for specific purposes such as counterterrorism, preventing human trafficking, or searching for missing persons.

Certain AI uses, such as building facial recognition databases by scraping the internet, as Clearview AI has done, or deploying emotion recognition technologies in workplaces or schools, will be banned outright in the EU. The AI Act will also hold businesses and organizations accountable for damages resulting from high-risk AI systems and mandate greater transparency in how such systems are built.

Businesses developing foundation models, such as GPT-4, on which various AI products are built, will be required to comply with the legislation within a year of its entry into force. Other tech firms have a two-year grace period to adhere to the regulations.

The EU imposes additional requirements on the most powerful AI models, such as OpenAI’s GPT-4 and Google’s Gemini, which are deemed to pose a “systemic” risk. Companies building them will be obligated to report serious incidents, disclose their energy consumption, and assess and mitigate risks to ensure the safety of their systems. Each company must evaluate whether its models meet the stringent criteria for this category.

Open-source AI companies are exempt from most of the AI Act’s accountability requirements unless they are developing models as computationally intensive as GPT-4. Failure to comply with the regulations could result in substantial fines or bans on their products within the EU.

Furthermore, the EU is working on the AI Liability Directive, a measure aimed at ensuring that individuals harmed by AI technologies receive appropriate financial compensation. Discussions on this directive are ongoing, with expectations of progress in the current year.

Some nations are adopting a more hands-off stance. The UK, home of Google DeepMind, for instance, has indicated it has no immediate plans to regulate AI. Even so, any company that wants to do business in the EU, the world’s second-largest economy, will have to comply with the AI Act.

Anu Bradford, a law professor at Columbia University, has highlighted the “Brussels effect,” through which the EU can establish de facto global standards and influence global business practices by being at the forefront of regulation. Building on the success of its stringent GDPR data protection framework, which has inspired regulations worldwide, the EU aims to replicate this impact in the realm of AI.

#### China

China’s approach to AI legislation has been characterized by piecemeal and sector-specific regulations thus far. Each time a new AI application gains prominence, China has introduced targeted legislation rather than comprehensive regulatory frameworks. Separate rules govern algorithmic recommendation services (e.g., TikTok-like apps and search engines), deepfakes, and generative AI technologies.

While this strategy enables swift responses to emerging technological risks for both users and the government, it hinders the development of a cohesive, long-term regulatory vision.

This paradigm may shift in the coming months. In June 2023, the Chinese central government announced that an “Artificial Intelligence Law” covering all facets of AI regulation, akin to the European AI Act, is on its legislative agenda. Given the law’s broad scope, the timeline for the legislative process remains uncertain; a draft may be introduced in 2024, or the process may take longer. In the meantime, Chinese internet regulators are likely to keep issuing targeted rules to address emerging AI tools or content categories.

In August, the Chinese Academy of Social Sciences, a state-owned research institute, released an “expert opinion” draft of a Chinese AI law. Although it carries no official weight, this document outlines a “negative list” of high-risk AI domains that businesses cannot explore without government approval, mandates an annual independent “social responsibility report” on foundation models, and proposes a “national AI office” to oversee AI development in China.

Chinese AI firms are already subject to numerous regulations. Any foundation model must be registered with the government before it can be deployed for public use; as of the end of 2023, 22 companies had received approval.

This marks a departure from the previously unregulated environment of China’s AI industry. However, how these regulations will be enforced in practice remains unclear. Generative AI companies will need to work out what compliance looks like in the near future, particularly around security assessments and potential infringements.

Moreover, the protection Chinese AI companies enjoy from foreign competitors, which are unlikely to receive authorization to launch products in China, may confer a competitive advantage. However, this protectionism could also stifle competition, and it reinforces the government’s control over online speech.

#### Global Outlook

Over the next year, additional AI regulations are anticipated in other parts of the world. Africa is poised to emerge as a key area of focus. According to Melody Musoni, a policy officer at the European Centre for Development Policy Management, the African Union is expected to release an AI strategy for the continent in 2024, intended to establish policies that individual countries can adopt to engage with AI technologies and to protect African consumers from Western tech companies.

Several nations, including Rwanda, Nigeria, and South Africa, have already devised their national AI strategies and are actively formulating policies to support AI enterprises through avenues such as training programs, computational resources, and industry-friendly regulations. International organizations like the UN, OECD, G20, and regional alliances have commenced the development of working groups, expert panels, standards, and guidelines related to AI. Bodies like the OECD could play a crucial role in promoting regulatory coherence across diverse regions, thereby easing the compliance burden on AI companies.

Geopolitically, a growing divergence may emerge between authoritarian and democratic nations in their approaches to supporting and utilizing AI industries. In 2024, the strategic priorities of AI companies, whether focusing on proprietary advancements or global expansion, will be closely monitored. Tough decisions may lie ahead for these entities.
