Microsoft engineer Shane Jones submitted a complaint to the Federal Trade Commission (FTC) on Wednesday, expressing concerns about the safety of Copilot Designer, the company’s AI image-generation tool.
In an interview with CNBC, Jones disclosed that he was able to use Copilot Designer to create images depicting teenagers with assault rifles. He also highlighted instances where the tool generated inappropriate, potentially infringing images of women without being prompted to do so.
Jones attributed these issues to Copilot’s use of DALL-E 3, an image generator developed by OpenAI. He asserted that DALL-E 3 contains a vulnerability that allowed him to bypass the safety measures intended to block such content, and in his communication to the FTC he emphasized that DALL-E 3 has “systemic issues.”
According to Jones, DALL-E 3 tends to inadvertently include images that objectify women, even when the user’s input is innocuous. He pointed to a previous OpenAI report in which the startup acknowledged that DALL-E 3 occasionally produces suggestive or potentially inappropriate content, and noted that AI models combining language and vision components can exhibit a bias toward sexual objectification. Jones criticized Microsoft for not addressing this acknowledged flaw in the version of DALL-E 3 used by Copilot Designer.
Microsoft and OpenAI have not yet responded to Quartz’s requests for comment, but Microsoft told CNBC that it is committed to addressing employee concerns in accordance with company policies and that it values input from employees working to enhance product safety.
Copilot Designer is not the only Microsoft AI product to face criticism recently. The company’s Copilot chatbot reportedly responded to a Meta data scientist’s query about ending their life with a distressing comment. Chatbots from Microsoft, Google, and OpenAI have all drawn scrutiny for errors ranging from citing fictitious legal cases to generating historically inaccurate depictions of racially diverse Nazis.
Jones alleged that Microsoft failed to take corrective action after his internal complaints and instead compelled him to remove a social media post detailing the issue. He cited Google’s handling of similar complaints as a positive example, noting that Google suspended Gemini’s ability to generate images of people in response to comparable grievances.
In his letter to the FTC, Jones urged an investigation into Microsoft’s management practices, incident-reporting protocols, and possible interference by the company in his efforts to alert OpenAI to the vulnerability.
The FTC confirmed to Quartz that it had received Jones’ letter but declined to comment further on the matter.