
### The Drawback of AI Ph.D.s Joining Big Tech for Open Development

Public funding of research has planted the IP seeds that have grown into U.S.-based technology companies.

The ongoing debate over whether open or closed AI models are superior and safer presents a false choice. Rather than favoring one business model over the other, we need a more structured approach to defining openness, one that redirects the conversation toward accessible knowledge, transparency, and fairness in developing AI that benefits society as a whole.

Open-source technology is a cornerstone of scientific progress, and there is a growing need for diverse perspectives and insights that are widely accessible. The organization I lead, Partnership on AI, is a mission-driven initiative in open innovation, collaborating with academic institutions, civil society, industry partners, and policymakers on one of the most complex issues of all: ensuring that technological advances benefit the many, not the few.

Any discussion of open models must acknowledge the significant role played by government-funded scientific research and its open dissemination.

An open ecosystem depends on government policies that support scientific research and innovation. As the economist Mariana Mazzucato documents in her book, “The Entrepreneurial State,” public funding made possible many of the patents that ultimately grew into U.S. technology giants. Many foundational technologies, from the internet to smartphones to Google’s AdWords algorithm, were propelled by early federal investment in groundbreaking research.

Likewise, the open release of research, subject to peer review and ethical scrutiny, is vital for scientific progress. The development of ChatGPT, for example, was made possible by access to research on transformer models that scientists had shared openly. At the same time, the Stanford AI Index highlights a decade-long decline in the number of AI Ph.D. graduates entering academia, a worrying trend amid projections of a more than twofold increase in market demand by 2021.

It is essential to note that openness is not the same as transparency. And while transparency is not an end in itself, it is a prerequisite for accountability.

Transparency requires comprehensive disclosure, clear communication to relevant stakeholders, and unambiguous documentation standards. Building these measures into the design process, as illustrated by PAI’s Guidance for Safe Foundation Model Deployment, enhances scrutiny and auditability while remaining compatible with commercial viability. It includes accountability for training data, testing and validation, incident reporting, workforce practices, human rights impact assessments, and climate impact evaluations. Documentation and disclosure standards of this kind are crucial to the safety and accountability of advanced AI systems.

Fostering inclusivity and openness is vital for shaping the future of AI. An open ecosystem invites participation from people of diverse backgrounds, expanding beyond traditional Silicon Valley demographics and lowering barriers to entry. By promoting economic inclusivity and sharing the benefits of AI among a wide range of stakeholders, an open ecosystem also mitigates the concentration of power and wealth.

Nonetheless, proactive measures must be taken.

Investments are necessary to engage communities disproportionately affected by algorithmic biases and historically marginalized groups in the development and utilization of AI that serves their interests while safeguarding their data and privacy. This involves prioritizing education, skills development, and reimagining the composition and evaluation of AI systems. Citizen-led AI initiatives are currently being tested globally through both private and public experimentation environments.

Safety does not hinge on the choice between open and closed models. It hinges on national research and open innovation policies that foster a vibrant ecosystem of scientific discovery and ethical practice, and on a competitive marketplace of ideas that drives progress. It means keeping policymakers and the public informed about how these technologies are evolving, and recognizing that sensible regulation enables safe progress for everyone. Most importantly, realizing the promise of AI means finding sustainable, inclusive, and effective ways to bring diverse voices into the AI conversation.

Rebecca Finlay is the CEO of Partnership on AI.

Last modified: March 28, 2024