
Microsoft Engineer Reports Concerns Over Company’s AI Image Generator to FTC

A Microsoft employee wrote a letter to the FTC after learning that his employer’s AI image generator could produce harmful content.

Aaron Mok

A Microsoft staff member has written a letter to the FTC, urging regulators to investigate the potential risks of using Microsoft’s Copilot Designer.

The letter lays out the employee’s concerns about Copilot Designer, an AI tool developed by Microsoft, which he says can generate inappropriate content, citing instances of explicit imagery, violence, and bias.

The letter was written by Shane Jones, a principal software engineering manager at Microsoft who scrutinizes the company’s AI technology in his spare time. Jones addressed both the Federal Trade Commission and Microsoft’s board of directors about Copilot Designer, a text-to-image generator the tech giant introduced in March 2023 following trial runs in December.

Jones said Copilot Designer has a propensity to create content he classified as “harmful,” including depictions of sensitive subjects such as sex, violence, underage drinking, and drug use, as well as political bias, trademark infringement, and conspiracy theories.

He urged the FTC to raise public awareness of the risks of using Copilot Designer, particularly for parents and educators who might recommend the tool for educational purposes.

According to Jones, Microsoft’s image generator can insert inappropriate content into images generated from seemingly innocuous prompts. The prompt “car accident,” for example, produced an image of a sexually objectified woman amid wrecked vehicles, while prompts such as “pro-choice” and “Teenagers 420 party” yielded unsettling graphics involving Darth Vader, mutated children, and scenes of underage drinking and drug use.

These findings led Jones to call for Copilot Designer to be withdrawn from public use until adequate safeguards are in place. Despite his recommendations to strengthen the tool’s safety features, Jones said Microsoft dismissed his proposals and continued to market Copilot Designer as safe for users of all ages.

In response to the letter, a Microsoft spokesperson said the company is committed to addressing employee concerns in accordance with its policies and acknowledged the importance of improving the safety of its technology.

The letter is not Jones’s first public warning about the risks of AI image generators. Before contacting the FTC, he had publicly voiced reservations about Microsoft’s AI technology, calling for stricter safety measures and greater transparency in how such tools are developed and deployed.

The letter also pointed to how other tech giants have responded to similar concerns. Google, for instance, temporarily suspended Gemini’s image generation feature after complaints that it produced historically inaccurate images related to race. Demis Hassabis, CEO of Google DeepMind, the company’s AI division, said corrective measures would be implemented swiftly.

Jones commended Google’s swift response to the Gemini issues and called on Microsoft to show the same commitment to addressing concerns promptly and effectively, emphasizing the importance of trustworthiness in AI technology.
