
### Navigating the Challenge of AI Governance


The swift advancement of artificial intelligence (AI) and the emergence of generative AI (GenAI) have sparked both enthusiasm and apprehension. The risks associated with AI underscore the necessity for a comprehensive governance framework encompassing legal, regulatory, and corporate domains. This article delves into the evolving landscape of AI governance in response to the critical risks and challenges posed by this technology.

#### What are the key challenges?

Though AI holds immense potential for positive impact, the risks of unintended harm are equally significant. The disruptive nature of AI calls for a robust, human-led governance structure, underpinned by regulation, to ensure the technology is deployed responsibly and beneficially. The rapid rise of GenAI, and the concerns surrounding its use, may well accelerate efforts to address gaps in digital governance and to formulate risk mitigation strategies.

As AI evolves toward greater autonomy as a general-purpose technology, concerns about control, safety, and accountability become paramount. Divergent AI regulations across jurisdictions, and the interplay between public- and private-sector actors, add to the complexity of AI governance. Regulators and companies face several key challenges: addressing ethical dilemmas such as bias and discrimination, preventing misuse, safeguarding data privacy and copyright, and ensuring transparency in complex algorithms. This is especially pertinent for foundation models such as large language models (LLMs), as discussed in our article “Foundation Models Powering Generative AI: The Fundamentals.”

GenAI introduces further complexities, including plagiarism, copyright violations, and a reappraisal of fundamental concepts such as truth and trust. Its capacity to generate human-like text, images, audio, and video challenges our perception of reality. Deepfakes, for instance, can convincingly mimic real individuals, damaging reputations, spreading misinformation, and swaying public opinion, including in elections. These hyper-realistic synthetic creations can cause societal and political harm by fostering widespread skepticism toward news and other content, and they pose security risks to governments and businesses alike.

Concerns have also been raised about the existential threat posed by AI. In March 2023, a group of AI experts signed an open letter, with accompanying policy recommendations, calling for a pause in the development of the most powerful AI systems until their societal risks are better understood (Future of Life Institute, 2023).

#### Ethical considerations as the cornerstone of AI governance

AI has given rise to numerous ethical quandaries, from algorithmic bias to autonomous decision-making. Effective AI regulation and governance, in our view, should be rooted in principles and risks, emphasizing transparency, fairness, privacy, adaptability, and accountability. Addressing these ethical challenges through governance mechanisms is pivotal to establishing trustworthy AI systems. Navigating AI's evolving landscape requires robust, flexible, and adaptable governance frameworks at the organizational, national, and global levels.

The rapid evolution of AI and GenAI has outpaced the establishment of commensurate oversight mechanisms at supranational, national, and corporate levels. While progress is being made, it is gradual.

#### Emergence of international and national AI governance frameworks

In recent years, several AI governance frameworks have been published worldwide to offer high-level guidance for safe and reliable AI development (see Figure 1). Multilateral organizations have issued guiding principles, such as the OECD's “Principles on Artificial Intelligence” (OECD, 2019), the EU's “Ethics Guidelines for Trustworthy AI” (EU, 2019), and UNESCO's “Recommendation on the Ethics of Artificial Intelligence” (UNESCO, 2021). The advent of GenAI has prompted new guidelines, including the OECD's “G7 Hiroshima Process on Generative Artificial Intelligence” (OECD, 2023).

At the national level, several guidance documents and voluntary frameworks have emerged in recent years, such as the US National Institute of Standards and Technology's “AI Risk Management Framework,” voluntary guidance issued in January 2023, and the White House's “Blueprint for an AI Bill of Rights,” a set of overarching principles published in October 2022 (The White House, 2022). These voluntary principles and frameworks often serve as reference points for regulators and policymakers globally. As of 2023, more than 60 countries across the Americas, Africa, Asia, and Europe had unveiled national AI strategies (Stanford University, 2023).

#### Proliferation of AI regulations

Governments and regulatory bodies worldwide are actively formulating policies and regulations to ensure the responsible development and deployment of AI. While few have been finalized, numerous proposals have emerged globally to curb or regulate the riskiest applications of the technology (see Table 1). These converge on common themes such as transparency, accountability, fairness, privacy, data governance, safety, human-centric design, and oversight. However, putting them into practice is likely to pose challenges.

These initiatives complement existing legislation on data privacy, human rights, cyber risk, and intellectual property. While these legal frameworks address certain concerns associated with AI development, they do not offer a comprehensive approach to AI governance.

For instance, the EU AI Act, slated for finalization by the end of 2023, is poised to become the world's first comprehensive AI legislation, with stringent measures and potential global ramifications. It aims to establish a human-centric framework ensuring that AI systems are deployed safely, transparently, traceably, without discrimination, and in an environmentally friendly manner, in alignment with fundamental rights. The proposed regulation adopts a risk-based approach, stipulating requirements for providers and users according to the risk an AI system poses. Practices deemed unacceptable, such as predictive policing systems or the indiscriminate scraping of facial images from the internet for recognition databases, are prohibited outright. Practices that could jeopardize safety or fundamental rights, such as the use of AI in law enforcement, education, or employment, are categorized as “high risk” and face strict requirements (EU, 2023). The EU AI Act, along with the proposed US AI Disclosure Act of 2023, would mandate clear labeling of AI-generated content.
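To make the Act's risk-based approach concrete, the following is a minimal sketch of how its tiers might be modeled in code. The tier names and the use-case mapping are illustrative readings of the examples above, not an official taxonomy, and the defaulting rule is our own assumption.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to strict requirements"
    LIMITED = "subject to transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical mapping of use cases to tiers, based on the examples above.
USE_CASE_TIERS = {
    "predictive_policing": RiskTier.UNACCEPTABLE,
    "untargeted_facial_image_scraping": RiskTier.UNACCEPTABLE,
    "ai_in_law_enforcement": RiskTier.HIGH,
    "ai_in_education": RiskTier.HIGH,
    "ai_in_employment": RiskTier.HIGH,
    "chatbot_with_disclosure": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Default unknown applications to HIGH pending review: a conservative
    # assumption of ours, not a rule taken from the Act itself.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for case in ("predictive_policing", "ai_in_education", "spam_filter"):
    tier = classify(case)
    print(f"{case}: {tier.name} ({tier.value})")
```

The point of such a model is less the classification itself than the discipline it imposes: every AI use case must be inventoried and assigned a tier before its obligations can be determined.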

China has also been proactive in formulating principles and regulations, from the State Council’s “New Generation Artificial Intelligence Development Plan” in 2017 to the “Global AI Governance Initiative” and the recently enacted “Interim Administrative Measures for the Management of Generative AI Services.” These initiatives mark significant milestones in AI governance. In the US, key legislative proposals at the federal level include the “Algorithmic Accountability Act” and the “AI Disclosure Act,” both currently under discussion. On October 30, 2023, US President Joe Biden issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” to institute safeguards (The White House, 2023b). Similar regulations and policies are in the pipeline or under deliberation in Canada and Asia.

AI’s pervasive influence underscores the imperative of international coordination and collaboration to regulate the technology and achieve policy harmonization, albeit a time-consuming endeavor. Notably, 28 countries and the EU pledged to collaborate in addressing AI risks during the inaugural AI Safety Summit in the UK in November 2023 (Bletchley Declaration, 2023).

Table 1: Major AI regulatory developments worldwide

| Region   | Country | Regulation                                                                   | Status                    |
|----------|---------|------------------------------------------------------------------------------|---------------------------|
| Americas | US      | Algorithmic Accountability Act of 2023 (H.R. 5628)                           | Proposed (Sept. 21, 2023) |
| Americas | US      | AI Disclosure Act of 2023 (H.R. 3831)                                        | Proposed (June 5, 2023)   |
| Americas | US      | Digital Services Oversight and Safety Act of 2022 (H.R. 6796)                | Proposed (Feb. 18, 2022)  |
| Americas | Canada  | Artificial Intelligence and Data Act (AIDA)                                  | Proposed (June 16, 2022)  |
| Europe   | EU      | EU Artificial Intelligence Act                                               | Proposed (April 21, 2021) |
| Asia     | China   | Interim Administrative Measures for the Management of Generative AI Services | Enacted (July 13, 2023)   |

Source: S&P Global

#### Shifting paradigms in AI regulation

The pervasive presence of AI requires regulators and lawmakers to adapt to a novel landscape and potentially revise their approaches. The frameworks and guardrails discussed above primarily target companies and their personnel. However, as AI systems grow more autonomous and intelligent, a fundamental question arises: how does one regulate a “thinking” machine?

#### Escalating demands on companies to institute AI governance frameworks

The call for companies to manage AI-related risks has grown louder, encompassing both developers and users of AI. Previously, the onus of establishing safeguards to mitigate technology risks fell mainly on AI developers. Notably, seven major US developers (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) pledged to uphold standards and implement safeguards in a meeting with President Biden in July 2023 (The White House, 2023a).

Today, companies across diverse sectors are being urged to explain how they use AI. GenAI has been adopted rapidly and at scale: ChatGPT, for example, was estimated to have reached 100 million monthly active users within two months of its launch. While such advances showcase the technology's potential, they also expose several of GenAI's pitfalls, such as data privacy issues and copyright infringement, which have already led to legal action.

Moreover, shareholder pressure is mounting: the first AI-focused shareholder resolutions have been filed at select US companies, a trend expected to persist into the upcoming proxy season. For instance, Arjuna Capital filed a shareholder proposal at Microsoft, and the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) lodged shareholder resolutions at Apple, Comcast, Disney, Netflix, and Warner Bros. Discovery demanding transparency about AI usage and its impact on employees. An earlier instance is Trillium Asset Management's shareholder resolution at Google's 2023 annual general meeting, filed for similar reasons.

#### Emerging best practices with limited implementation

Companies are only beginning to grapple with the implications of AI and GenAI, and few have made substantial headway on AI governance thus far. Nonetheless, a common thread runs through global AI frameworks and principles: an ethical, human-centered, and risk-oriented approach to constructing AI governance frameworks. NIST's “AI Risk Management Framework,” for example, provides guidance on AI risk management (NIST, 2023) that can inform corporate policies. Among the limited cohort of companies that have established internal frameworks, several fundamental principles are typically emphasized:

  • Human-centric approach and oversight
  • Ethical and responsible use
  • Transparency and explainability
  • Accountability, including liability management
  • Privacy and data protection
  • Safety, security, and reliability

While corporate AI governance is still nascent, frameworks incorporating these elements are deemed better equipped to mitigate AI-related risks and respond to impending regulatory pressures.
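How these principles translate into day-to-day practice varies, but one concrete starting point is to encode them as a review checklist that every AI project must clear before deployment. The sketch below is a minimal illustration, assuming a sign-off workflow; the class, field names, and reviewer identifiers are hypothetical, not drawn from any specific company's framework.

```python
from dataclasses import dataclass, field

# The six principles listed above, as gates an internal review could
# sign off on before an AI system ships. Identifiers are hypothetical.
PRINCIPLES = (
    "human_oversight",
    "ethical_use",
    "transparency_explainability",
    "accountability_liability",
    "privacy_data_protection",
    "safety_security_reliability",
)

@dataclass
class AIProjectReview:
    name: str
    signoffs: dict = field(default_factory=dict)  # principle -> reviewer

    def approve(self, principle: str, reviewer: str) -> None:
        if principle not in PRINCIPLES:
            raise ValueError(f"unknown principle: {principle}")
        self.signoffs[principle] = reviewer

    def outstanding(self) -> list:
        """Principles still lacking a named, accountable reviewer."""
        return [p for p in PRINCIPLES if p not in self.signoffs]

review = AIProjectReview("customer-support-llm")
review.approve("human_oversight", "risk.committee")
print(review.outstanding())  # remaining gates before deployment
```

A checklist of this kind does not guarantee good governance, but it makes gaps visible and attaches a named, accountable reviewer to each principle.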

#### Ethical, risk-focused, and adaptable AI governance essential at the corporate level

As companies explore AI and GenAI applications and develop their own AI systems, the business implications, both positive and negative, have risen up corporate agendas. Managing key AI risks effectively requires governance frameworks founded on ethical considerations. We advocate integrating ethical review boards, impact assessments, and algorithmic transparency to ensure ethical AI development and deployment.

Furthermore, companies' AI governance frameworks must be flexible enough to adapt to evolving regulations and technological advances. In our recent paper “Future Of Banking: AI Will Be An Incremental Game Changer,” we outline mitigation strategies for AI-related concerns that apply not only to banks but also to other sectors. These include conducting algorithmic impact assessments to address ethical concerns and fostering explainable AI processes and outcomes.
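As a toy illustration of what “explainable processes and outcomes” can mean in practice, consider a deliberately simple rule-based decision function that attaches a human-readable reason to every factor it weighs. This is a sketch only; the rules, thresholds, and field names are hypothetical and not drawn from the paper or from any bank's actual model.

```python
from typing import NamedTuple

class Decision(NamedTuple):
    approved: bool
    reasons: list  # human-readable reason codes for the outcome

# Hypothetical, deliberately simple rule-based scorer: each factor
# contributes a named reason, so the outcome can be explained and audited.
def assess_loan(income: float, debt: float, missed_payments: int) -> Decision:
    reasons = []
    score = 0
    if income > 0 and debt / income < 0.4:
        score += 1
        reasons.append("debt-to-income ratio below 40%")
    else:
        reasons.append("debt-to-income ratio at or above 40%")
    if missed_payments == 0:
        score += 1
        reasons.append("no missed payments on record")
    else:
        reasons.append(f"{missed_payments} missed payment(s) on record")
    return Decision(approved=score == 2, reasons=reasons)

print(assess_loan(income=60_000, debt=30_000, missed_payments=1))
```

An algorithmic impact assessment can then audit the reason codes directly, rather than trying to reverse-engineer an opaque score.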

#### Striking a balance between innovation and control

As with any technological revolution, success hinges on pioneering efforts and standard setting. Developers of AI platforms, models, and apps are intensely focused on innovation, whether working collaboratively or on proprietary systems, to enhance productivity and daily life. Because robust AI governance frameworks are in their best interest, these stakeholders actively contribute insights on how to regulate AI in ways that promote innovation. Key considerations for AI industry participants include:

  • How can ethical standards be developed that foster innovation while managing safety risks? Would internal AI governance processes to enforce these standards be beneficial? IBM's move to appoint a lead AI ethics official and establish an AI ethics board could serve as a pioneering model.
  • Should providers of AI platforms, models, and apps be responsible for all content generated on their platforms? Will they be tasked with identifying AI-generated content and verifying official content? This question echoes the debate around Section 230 of the Communications Decency Act of 1996, which generally grants online platforms immunity for third-party content produced by users, a contentious issue for platforms like Facebook, YouTube, and X (formerly Twitter).
  • How can AI governance align with public interests, national security, and international competitive standards?

The focus on AI regulation and governance at this nascent stage underscores the significant implications of this technology, both positive and negative. Grappling with these implications is challenging, as evidenced by the unabated growth and influence of tech giants such as Meta, Alphabet, Amazon, and Apple.

#### Effective oversight starts with corporate boards

The board of directors plays a pivotal role in identifying strategic opportunities and overseeing risks, particularly those associated with AI. The board’s mandate encompasses supervising management, assessing AI’s impact on corporate strategy, and devising strategies to effectively manage risks that could jeopardize clients, employees, and the company’s reputation. Consequently, it is imperative for corporate boards to assess and comprehend AI’s implications for strategy, business models, and workforce.

As with other emerging themes like cyber risk, overseeing AI effectively requires boards to be familiar with the technology. We believe a working knowledge of AI, coupled with a robust monitoring process to ensure accountability, is an essential prerequisite for overseeing the establishment of resilient AI risk management frameworks at the executive level.

We anticipate that effective AI governance models will take a comprehensive approach: formulating internal frameworks and policies, then monitoring and managing risks from conception to deployment. Such mechanisms would ensure accountability and transparency in AI systems and address the challenges of auditing and verifying complex AI algorithms and decision-making processes.
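One concrete building block for such auditability is an append-only log of model decisions whose entries are hash-chained, so that after-the-fact tampering is detectable. The sketch below is a minimal illustration under that assumption; the class and field names are hypothetical.

```python
import hashlib, json, time

class DecisionLog:
    """Append-only, hash-chained log of model decisions. Each entry embeds
    the hash of its predecessor, so after-the-fact edits are detectable."""
    def __init__(self):
        self.entries = []

    def record(self, model_id: str, inputs: dict, output) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "ts": time.time(), "model_id": model_id,
            "inputs": inputs, "output": output, "prev": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain to confirm no entry was altered."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record("credit-model-v2", {"income": 60_000}, "declined")
print(log.verify())  # True unless the log has been tampered with
```

In a production system, the log would live in tamper-resistant storage and would also record model versions, input provenance, and any human override trail.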

#### Future prospects

Professor Melvin Kranzberg's first law of technology, “technology is neither good nor bad; nor is it neutral,” resonates with AI's multifaceted nature. Regulators and policymakers have struggled to keep pace with AI's rapid evolution, a challenge exacerbated by the advent of GenAI. Several countries have unveiled national AI strategies, while legislation and regulations are being devised to provide guardrails.

To foster safe AI adoption, companies must establish ethical guidelines and robust risk management frameworks. Given the unpredictable trajectory of technology, real-time, human-driven oversight will be more critical than ever.
