
### Google DeepMind Launches a New AI Safety Organization


Ask Gemini, Google's flagship GenAI model, to write misleading content about the upcoming U.S. presidential election, and, given the right prompt, it will comply. Ask about an upcoming Super Bowl game, and it will invent a detailed play-by-play. And ask about the Titan submersible implosion, and it will serve up disinformation, complete with convincing-looking but false citations.

That is a bad look for Google, and it has drawn criticism from policymakers displeased at how easily GenAI tools can be exploited to spread disinformation and mislead the public.

In response, Google, which cut thousands of jobs last fiscal quarter, is channeling investment into AI safety. This morning, Google DeepMind, the AI research and development division behind Gemini and many of Google's other recent GenAI projects, announced the formation of a new organization, AI Safety and Alignment, made up of existing teams working on AI safety along with new, specialized cohorts of GenAI researchers and engineers.

Google did not say how many hires the initiative will bring, but it did announce that AI Safety and Alignment will include a new team focused on safety around artificial general intelligence (AGI): hypothetical systems that can perform any task a human can.

Similar in mission to the Superalignment division that rival OpenAI formed last July, the new team within AI Safety and Alignment will work alongside DeepMind's existing AI-safety-focused research team in London, Scalable Alignment, which is exploring solutions to the formidable technical challenge of controlling superintelligent AI systems that have yet to materialize.

Two groups tackling the same problem might seem redundant, and Google's reluctance to share much detail at this point leaves room for speculation. But it is worth noting that the new team, unlike Scalable Alignment, is based in the U.S., close to Google's headquarters, at a time when the company is moving aggressively to keep pace with AI rivals while projecting a responsible approach to AI development.

The other teams in the AI Safety and Alignment organization are responsible for building concrete safeguards into Google's current and in-development Gemini models. Safety covers a wide range of concerns, but some of the organization's near-term priorities include preventing the spread of bad medical advice, ensuring child safety, and preventing the amplification of bias and other forms of injustice.

Anca Dragan, previously a research scientist at Waymo and a computer science professor at UC Berkeley, will lead the team. Dragan emphasized that the organization’s objective is to enhance models’ understanding of human preferences and values, foster robustness against adversarial attacks, and address the dynamic nature of human values and perspectives.

Despite skepticism toward GenAI tools, particularly where deepfakes and misinformation are concerned, Dragan remains optimistic that AI models will grow safer over time. The evolving technology will keep posing new challenges, but with concerted effort and continued investment in AI safety, the aim is to mitigate those risks and ensure AI systems are developed responsibly.
