An AI engineer at Microsoft has raised concerns in a published letter about inadequate safeguards in the company's AI image generator, warning that the tool can produce violent and sexualized content. The engineer, Shane Jones, said he repeatedly alerted Microsoft's management but grew frustrated by the lack of action, and forwarded his concerns to the Federal Trade Commission and Microsoft's board of directors.
Jones, identified as a "principal software engineering manager," said the company was internally aware of systemic issues that lead the product to generate offensive and inappropriate images. Microsoft, through a spokesperson, disputed any suggestion of negligence, pointing to internal reporting mechanisms for addressing problems with generative AI.
The letter focuses on Microsoft's Copilot Designer, a tool that uses OpenAI's DALL-E 3 system to generate images from text prompts. Jones contended that the tool has "systemic problems" producing objectionable content and recommended suspending it until corrective measures are in place. Specifically, he noted instances in which the tool depicted women in sexualized contexts unrelated to the prompts provided.
Microsoft responded that it is committed to addressing employee concerns and to ensuring product safety through dedicated evaluation teams, and said it had facilitated meetings with Jones to discuss these matters. While the company markets Copilot as an accessible and innovative AI tool, Jones criticized it for downplaying the associated risks and failing to disclose potential dangers to users.
In response to safety concerns similar to those Jones raised, Microsoft updated Copilot Designer in January, closing loopholes that had allowed the creation of fake, sexualized images. Jones cited the spread of such images of Taylor Swift on social media as evidence of the risks the tool poses. He also alleged that Microsoft's legal team pressured him to remove an earlier LinkedIn post urging caution about DALL-E 3's availability.
Other generative AI tools have faced similar challenges with biased or harmful output; Google, for instance, suspended its Gemini AI tool amid controversy over its depiction of individuals of color in historical contexts.