A Microsoft employee has raised concerns regarding offensive and inappropriate content produced by the company’s AI image-generation tool. Shane Jones, a principal software engineering lead at Microsoft, has taken steps to address these issues by sending letters to U.S. regulators and the tech giant’s board of directors, urging them to take action.
Jones, who considers himself a whistleblower, met with U.S. Senate staffers to discuss his worries about Microsoft’s Copilot Designer, a tool that creates images based on written prompts. He highlighted the tool’s tendency to generate harmful content, including sexually objectified images and other inappropriate material, even when given innocuous prompts like “car accident.”
In his letters to the Federal Trade Commission (FTC) and Microsoft’s board, Jones emphasized the kinds of harmful content the image-generation tool can produce, including violence, political bias, underage drinking, and other concerning themes. He called for an independent investigation to determine whether Microsoft is marketing unsafe products without disclosing potential risks to consumers, especially children.
Jones was initially advised to take his concerns to OpenAI, Microsoft’s business partner, but escalated the issue after those efforts stalled. He also expressed dissatisfaction with Microsoft’s response to his public posts on LinkedIn, where he had shared his apprehensions about the AI technology.
Jones clarified that while the core issue lies with OpenAI’s DALL-E model, the risks are reduced when the model is accessed through OpenAI’s ChatGPT, which applies additional safeguards. He stressed the importance of implementing effective measures to prevent AI image-generators from producing harmful content.
The emergence of advanced AI image-generators such as OpenAI’s DALL-E, along with chatbots like ChatGPT, has sparked public fascination and commercial competition among tech giants including Microsoft and Google. However, the ease with which these tools can create deceptive “deepfake” images poses significant risks, including the misrepresentation of real people and sensitive subjects. Google, for instance, temporarily suspended its Gemini chatbot’s image-generation feature after it produced depictions that misrepresented race and ethnicity.
Jones’ efforts underscore the importance of addressing ethical and safety considerations in AI development, particularly for image-generation tools capable of producing harmful or misleading content.