
### Why is the EU Lagging Behind the US in Artificial Intelligence Development?


Between high-profile international summits, boardroom turmoil at OpenAI, and rumored breakthroughs in AI technology, the world's attention has recently been fixed on the frontier of AI research. Yet a memo released last month by the White House Office of Management and Budget (OMB), governing how more commonplace AI systems are used across the US government, has attracted significant attention of its own and is likely to have substantial consequences in the years ahead.

AI is already a prevalent tool across the US government, from monitoring undocumented immigrants to the predictive algorithms that law enforcement agencies deploy for surveillance and resource allocation. While these systems promise cost savings, they often subject marginalized communities to arbitrary governance without due process, with predictably discriminatory results. Researchers, journalists, and activists have documented these harms for years; at last, steps are being taken to address them.

The OMB, a pivotal unit within the US president's executive office, wields substantial power despite rarely making headlines: it supervises other federal agencies to ensure they act in line with the president's agenda. Recently, the OMB director, Shalanda Young, released a draft memo that could transform how AI is deployed across the US government.

The proposed policy, although in its draft stage and subject to potential modifications, mandates each department to designate a chief AI officer responsible for compiling a registry of current AI applications. This requirement alone signifies a significant victory for transparency. Furthermore, the officer is tasked with identifying systems that could impact individuals’ safety and rights, subjecting them to additional meaningful restrictions.

New provisions necessitate the evaluation of AI systems’ risks to rights and safety against their purported benefits. Agencies are also required to validate the quality of the data utilized and enhance monitoring of deployed systems. Notably, individuals affected by AI systems are entitled to clear explanations of their usage and the opportunity to challenge AI-driven decisions.

While these steps may appear straightforward, they are currently not being universally implemented, resulting in further harm to society’s most vulnerable members.

Additionally, the policy mandates departments to actively ensure that AI systems promote equity, dignity, and fairness in their deployment, emphasizing the inherent biases in models and the often skewed data on which they are trained. Government entities must engage with affected groups prior to deploying such systems, offering avenues for human intervention and redress to contest decisions that adversely impact them, rather than being subject to opaque algorithmic processes.

Remarkably, the memo also addresses the vital yet often overlooked issue of government procurement of AI. Many challenges associated with AI stem from inexperienced government bodies adopting complex software they do not fully comprehend, leading to failures that disproportionately affect marginalized populations. Outlining and enforcing best practices for procuring AI systems represent a crucial step for government departments.

The OMB memo exemplifies a collaborative effort between research and civil society in policymaking. Efforts to regulate cutting-edge AI models, particularly in the EU, could draw valuable lessons from this initiative. The rush to regulate frontier AI models, such as GPT-4, has run into difficulties, as the recent trilogue negotiations over the EU's AI Act have shown: France and Germany pushed back against binding obligations for such models, wary of the implications for their nascent AI industries.

Regulating a field marked by frequent advancements in research poses significant challenges. The evolving landscape of deploying AI systems raises questions about market dynamics and the concentration of power among AI companies. Balancing the need for transparency with fostering competitive markets for frontier AI models requires careful consideration and informed policy development.

While some regulatory measures are straightforward, such as demanding greater transparency from leading AI labs, a more comprehensive approach involving civil society engagement and thorough research is essential. Rushing regulations in a rapidly evolving field may not yield optimal outcomes. Encouraging informed public debate and research, akin to the approach taken in the OMB memo, could lead to more effective and balanced policies.

  • Seth Lazar is a professor of philosophy at the Australian National University and a distinguished research fellow at the Oxford Institute for Ethics in AI
Last modified: February 27, 2024