
Regulation of the AI ecosystem must take account of how complex and dynamic it is.

The AI ecosystem is powerful and complex, and we cannot presume that existing, backward-looking regulatory efforts are either sufficient or a substitute for governing it.

The views expressed by contributors are their own and not the view of The Hill.


Last Thursday, Meta announced the latest generation of its large language model (LLM), Llama 3. With several improvements, including what Meta claims are "higher-quality" training data sets and new computer coding capabilities, the newest model aims to challenge OpenAI’s position as the industry leader. Meta’s chief product officer, Chris Cox, predicts that future versions of the model will feature "multimodality" and more advanced reasoning. While Meta’s ambitions make headlines, there is reason to be skeptical of Cox’s and others’ predictions, given the sheer complexity of these systems. Llama 2, for example, struggled to understand basic context. It is crucial that regulators take into account the complex nature of the artificial intelligence ecosystem, so that developers can adapt and improve their models throughout the deployment process.

Researchers, engineers, companies, educational institutions, and government agencies work across disciplines and industries to integrate AI into a wide range of sophisticated socio-technical-financial systems, as illustrated by ChatGPT and Gemini. Creating these foundation models requires linguists, computer scientists, and engineers, as well as companies with the processing power and data needed to build and train them. Developing the models also requires funding, organizations that will deploy them in customer-facing applications such as websites, and scholars from other disciplines, such as anthropology and ethics.

The resulting ecosystem is thus highly complex, and it exhibits the properties of complex systems: incomplete knowledge, uncertainty, unpredictability, asynchronicity, and non-decomposability. Because of the numerous interconnected components, the behavior of the entire system cannot be easily predicted or controlled. The proliferation of AI applications therefore presents new challenges in understanding, explaining, and controlling the emergent behaviors of coupled systems. Consider, for example, the propensity of LLMs to "hallucinate" and report incorrect information (which independent research suggests occurs around 20 percent of the time, even in the most "truthful" systems currently available). Their creators cannot explain why or how any particular "untruth" is generated, because of the complexity of the models, which makes it difficult to develop comprehensive methods to detect or deter the behavior.

Governance of complex systems, such as the AI ecosystem, requires policymakers to consider varying viewpoints, unintended consequences, and uncertain emergent behaviors, both from the systems themselves and from the people responding to them. Because applications may be developed and deployed in many distinct jurisdictions, and their effects and impacts may play out across several different sectors, it may not be obvious where responsibility for regulation lies. Effective governance will, at the very least, require collaboration and coordination among various stakeholders. That level of coordination is itself challenging, because the ecosystem is constantly evolving as new systems are created, new applications are deployed, and more experience is acquired.

To date, risk and risk management have been the foundation of regulation in both the EU and the U.S., intended to assure society that AIs are developed and deployed in ways deemed "safe."

The EU’s regulations derive from the continent’s experience in adopting laws to protect consumers from known harms and privacy breaches involving specific AI uses. Systems are categorized according to the perceived risk they pose. Banned AIs include those that manipulate individuals’ behavior in specified undesirable ways or that use particular technologies (e.g., biometric data, facial recognition) in prescribed circumstances.

The high-risk category, which requires extensive documentation, auditing, and pre-certification, draws extensively on existing EU product safety conformity legislation (e.g., toys, protective equipment, agricultural and forestry vehicles, civil aviation, and rail system interoperability), and also covers applications in areas where physical safety is at stake (e.g., critical infrastructure) or where risks of psychological or economic harm may ensue (e.g., education, employment, access to services). Low-risk applications, which need only meet transparency requirements, perform narrow procedural tasks aimed at enhancing human decision-making while ensuring that human decision-makers retain control over the final decision.

The U.S. Office of Management and Budget’s regulations on government use of artificial intelligence (AI) are narrower and less prescriptive than the EU regulations, but the focus is still on addressing a subset of AI risks, along with governance and innovation issues directly related to agencies’ use of AI. Specifically, the risks addressed result from "reliance on AI outputs to inform, influence, decide, or execute agency decisions or actions, which could undermine the efficacy, safety, equitableness, fairness, transparency, accountability, appropriateness, or lawfulness of such decisions or actions."

In both cases, the risks addressed arise almost exclusively in relation to specific products, activities, decisions, or uses of AIs, rather than the complex ecosystems in which they operate. The relevant circumstances are narrowed down to a specific set of situations, actions, actors, and consequences that are already largely known and controllable. Even the prohibited EU uses are restricted to specific outcomes that have already been largely identified and described. Both require single, named individuals to be ultimately liable for the AIs, their regulatory reporting, and management.

Neither set of regulations addresses the elements of complexity, uncertainty, unpredictability, asynchronicity, and non-decomposability of the ecosystems in which the AIs will operate. Indeed, references to "complexity" and "uncertainty" are conspicuous by their absence. Nor does either appear able to accommodate the extensive multi-stakeholder collaboration and diverse viewpoints required to govern complex dynamic systems.

Perhaps some regulatory humility, and acknowledgment of the limitations of these regulations, is necessary. They do not provide guarantees of safety as the development and deployment of AIs proceed. Nor do they acknowledge that, because of the bounded rationality of the humans in charge, there are things we know, things we don't know, and things we cannot know. They merely attempt to manage risks that have already been identified or anticipated. Even as we become more familiar with the new ecosystems in use, we should still anticipate surprises, both from unexpected emergent behaviors and from discoveries of things we previously did not know or understand.

The question is: how do we anticipate managing in those circumstances? Some leadership is needed in the discussion of how we want our societies to evolve in the face of these enduring uncertainties. We cannot assume that current, backward-looking regulatory efforts are sufficient, or a substitute, for this larger and more complex endeavor, which must always be forward-looking toward an inherently uncertain future.

Bronwyn Howell is a nonresident senior fellow at the American Enterprise Institute, where she focuses on the regulation, development, and deployment of new technologies.


Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
