- In a memo, Google CEO Sundar Pichai said the company is working to fix its AI image-generation tool.
- Pichai also said the company is rolling out new launch processes for its AI products.
Google CEO Sundar Pichai speaks with Emily Chang at the APEC CEO Summit at Moscone Center West in San Francisco on November 16, 2023.
Google CEO Sundar Pichai addressed the problems with the company’s artificial intelligence tool in a memo Tuesday evening, after the issues prompted Google to pull its Gemini image-generation feature offline.
Pichai called the tool’s responses “problematic” and said they had “offended our users and shown bias.” The memo was first reported by Semafor.
Google launched the image generator earlier this month as part of Gemini, the company’s main suite of AI models. The tool lets users enter prompts to create images. After historical inaccuracies in its output drew widespread attention online over the past week, the company pulled the feature and said it would re-release it in the coming weeks.
“I know that some of its responses have offended our users and shown bias,” Pichai wrote. “To be clear, that’s completely unacceptable and we got it wrong.” While acknowledging that no AI is perfect, especially at this emerging stage of the industry’s development, Pichai said the bar is high for Google and the company will keep working at it.
The announcement comes after Google renamed its chatbot from Bard to Gemini earlier this month.
According to the memo, teams have been working around the clock to address the issues, and the company plans to institute a clear set of actions and structural changes, along with “improved launch processes.”
“We’ve always sought to give users helpful, accurate, and unbiased information in our products,” Pichai wrote in the memo. “That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.”
Read the full memo below:
I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias. To be clear, that’s completely unacceptable and we got it wrong.
Our teams have been working around the clock to address these issues, and we’re already seeing progress on a wide range of fronts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it. We’ll review what happened and make sure we fix it at scale.
Our mission is to organize the world’s information and make it universally accessible and useful. In our products, we’ve always sought to give users helpful, accurate, and unbiased information. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.
We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evaluations and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.
Even as we learn from what went wrong here, we should also build on the product and technical advances we’ve made in AI over the past few months. That includes foundational improvements to our underlying models, such as our breakthrough 1 million-token long-context window and our open models, both of which have been well received.
We know what it takes to build great products that are used and loved by so many people and businesses, and with our infrastructure and research expertise we have a strong foundation for the AI wave. Let’s focus on what matters most: building helpful products that are deserving of our users’ trust.