
Allegations Against Microsoft for Marketing to Children an AI Tool That Generates Inappropriate Content

Microsoft now appears to be filtering the violent AI outputs flagged by its engineer.

Microsoft’s AI text-to-image generator, Copilot Designer, is under scrutiny for allegedly failing to filter out inappropriate content despite warnings from a Microsoft engineer, Shane Jones. According to CNBC, Jones raised concerns that the tool randomly generated violent and sexual imagery. Despite his efforts to alert Microsoft and advocate for safeguards, the company reportedly did not take immediate action.

Jones claimed that Microsoft directed him to report the issue to OpenAI, the company behind the DALL-E model that powers Copilot Designer. However, OpenAI allegedly did not respond to his warnings. Jones subsequently escalated the matter, contacting the Federal Trade Commission and Microsoft’s board of directors.

In his communication with the FTC, Jones highlighted the risks posed by Copilot Designer, emphasizing its generation of inappropriate content, including sexually objectified images, along with other concerning themes such as political bias and underage activities. He urged intervention to prevent further dissemination of harmful content.

Additionally, Jones called upon Microsoft’s board to conduct an independent review of the company’s AI decision-making processes and incident reporting mechanisms. He emphasized the need for enhanced safety measures within the organization to address the reported concerns effectively.

While Microsoft has not confirmed specific actions taken to filter images, attempts to replicate the problematic outputs described by Jones resulted in error messages. The company reiterated its commitment to addressing employee concerns and enhancing the safety of its technologies through internal feedback mechanisms.

Jones, a principal software engineering manager at Microsoft, conducted voluntary testing on Copilot Designer and discovered disturbing outputs, prompting his advocacy for improved safety measures. He described instances where the tool generated violent and inappropriate imagery in response to seemingly benign prompts, indicating potential shortcomings in content moderation.

Subsequent attempts by other outlets to replicate the flagged content suggest that Copilot Designer may have added filtering in response to the issues Jones raised: terms he identified as problematic now appear to be blocked, signaling a possible adjustment to the tool’s content generation process.

Overall, the situation underscores the importance of robust safety protocols in AI technologies to prevent the dissemination of harmful or offensive content. Jones’ actions highlight the significance of proactive measures to address such concerns and uphold ethical standards in AI development and deployment.
