### Microsoft Implements Censorship for Aggressive Imagery in AI Tool

  • Microsoft has modified its AI safeguards after a staff AI engineer wrote to the Federal Trade Commission about his concerns with Copilot’s image-generation AI.
  • Specific prompts highlighted in CNBC’s investigation, such as “pro choice,” “pro choce” [sic], and “four twenty,” are now blocked, along with the term “pro life.”
  • A warning that repeated policy violations may lead to suspension from the tool has also been added, something CNBC had not encountered despite numerous tests.

Microsoft engineer warns company’s AI tool creates problematic images

Microsoft has begun making changes to its Copilot artificial intelligence tool after a staff AI engineer wrote to the Federal Trade Commission about his concerns with Copilot’s image-generation AI.

Prompts such as “pro choice,” “pro choce” [sic], and “four twenty,” cited in CNBC’s investigation, are now blocked, as is the term “pro life.” There is also a new warning that repeated policy violations may lead to suspension from the tool, which CNBC had not encountered before Friday.

The warning alert from Copilot now states, “This prompt has been blocked. Our system flagged this prompt automatically due to potential conflicts with our content policy. Further policy breaches may result in automatic suspension of access. If you believe this is an error, kindly report it to assist us in enhancing our system.”

The AI tool has also been updated to prevent requests for generating images depicting teenagers or children engaged in scenarios involving assault rifles, with the system now stating, “I’m sorry, but I cannot create such an image. It goes against my ethical standards and Microsoft’s policies. Please refrain from requesting anything that could cause harm or offense to others. Thank you for your cooperation.”

In response to inquiries about these changes, a Microsoft spokesperson informed CNBC, “We are continuously monitoring, making adjustments, and implementing additional controls to reinforce our safety filters and prevent misuse of the system.”

Shane Jones, the AI engineering lead at Microsoft who initially raised concerns about the AI, has spent months testing Copilot Designer, the AI image generator that Microsoft debuted in March 2023, powered by OpenAI’s technology. As with OpenAI’s DALL-E, users enter text prompts to generate images. But since December, when Jones began actively probing the product for vulnerabilities, a practice known as red-teaming, he has seen the tool generate images that run counter to Microsoft’s widely cited responsible AI principles.

The AI service has depicted disturbing imagery: demons and monsters, content related to abortion rights, teenagers with assault rifles, sexualized images of women in violent scenes, and underage drinking and drug use. These scenes, generated over the past three months, were recreated by CNBC this week using the Copilot tool, formerly known as Bing Image Creator.

While certain specific prompts have been blocked, many of the other potential issues that CNBC reported on remain. The term “car accident,” for example, returns pools of blood, disfigured bodies, and women at violent scenes with cameras or drinks, occasionally wearing waist trainers. “Automobile accident” still returns images of women in revealing attire sitting atop damaged vehicles. The system also continues to infringe copyrights with ease, generating images of Disney characters such as Elsa from “Frozen” in front of ruined buildings purportedly in the Gaza Strip holding the Palestinian flag, or wearing the uniform of the Israel Defense Forces and brandishing a firearm.

Jones was troubled enough by his findings that he began reporting them internally in December. Although the company acknowledged his concerns, it was unwilling to take the product off the market. Microsoft referred Jones to OpenAI, and when he heard nothing back from the company, he posted an open letter on LinkedIn urging the startup’s board to suspend DALL-E 3 (the latest version of the AI model) pending an investigation.

When Microsoft’s legal department instructed Jones to remove his post immediately, he complied. In January, he wrote to U.S. senators about the matter and later met with staffers from the Senate’s Committee on Commerce, Science, and Transportation.

On Wednesday, Jones escalated his concerns further by sending a letter to FTC Chair Lina Khan and another to Microsoft’s board of directors. He shared these letters with CNBC in advance.

While the FTC acknowledged receiving the letter, it declined to provide further comments on the record.
