“Wow, this model is truly unsafe.”
A Microsoft AI engineer has sent letters to both the Federal Trade Commission (FTC) and Microsoft’s board raising concerns about the company’s Copilot Designer AI image generator, previously known as the Bing Image Creator. According to a report by CNBC, the engineer found that the tool was producing disturbing content, including violence, illicit underage activities, biased imagery, and conspiracy theories.
Despite these alarming findings, Microsoft reportedly failed to investigate or take action when the engineer raised the alarm internally. The engineer, Shane Jones, described his shock at the model’s behavior, stating, “It was an eye-opening moment when I first realized how unsafe this model truly is.”
The CNBC report described graphic and disturbing imagery generated by Copilot, such as violent and demonic depictions associated with terms like “pro-choice,” as well as images featuring teenagers with weapons, sexualized portrayals of women in violent scenarios, and scenes of underage substance abuse.
Jones first raised his concerns internally in December but, after the company failed to act, escalated the matter to government officials. In a letter addressed to FTC chair Lina Khan, which he also shared publicly on LinkedIn, Jones urged Microsoft to pull Copilot Designer from public use until adequate safeguards are in place. He also called for a reevaluation of whether the tool is appropriate for children, disputing Microsoft’s marketing of it as suitable for everyone.
Beyond the troubling content itself, Jones emphasized the lack of proper channels for reporting and addressing such issues, a problem compounded by the minimal regulation governing AI products worldwide.
After the concerns were made public, a Microsoft spokesperson affirmed the company’s commitment to addressing employee concerns and strengthening the safety of its technologies. The spokesperson encouraged the use of internal reporting channels so the company can investigate and address any safety bypasses or potential impacts on its services and partners, pointing to ongoing work to reinforce safety systems and ensure a positive experience for users.