
### OpenAI’s Sora Image Generator Sparks Controversy Over Nudity Issues

Sora is set to come out this year, and OpenAI’s chief technology officer said it’s not …

An idea you hold dear may soon be something you can watch played back as video. Mira Murati, OpenAI’s chief technology officer, told The Wall Street Journal that the company’s text-to-video AI system, Sora, is on the verge of an official launch, confirming, “definitely this time.” The outlet was shown demonstrations of Sora’s capabilities, including a clip of a bull in a china shop.


When asked whether Sora’s outputs might include nudity, Murati expressed uncertainty about its acceptability in generated video, suggesting that artists might want to explore nude representations in artistic contexts. She stressed that OpenAI is working with artists and creators from diverse fields to determine what practical applications Sora should serve and how much creative latitude it should allow.

Despite efforts by startups and large companies to build content-generation constraints into their AI models, advanced AI tools continue to produce harmful material such as nonconsensual nude imagery and deepfake pornography. Experts are calling for greater vigilance from OpenAI, other firms developing similar systems, and U.S. regulators before the technology sees widespread deployment.

### Proceeding with Care

A February survey of U.S. voters by the AI Policy Institute (AIPI) found a clear preference for guardrails and safeguards against misuse over broad, unrestricted access to AI models. A substantial majority of respondents also believe the creators of AI models should bear legal liability for illicit uses of the technology, such as producing forged videos for defamation or revenge porn.

Daniel Colson, founder and executive director of AIPI, said the government is taking the technology seriously, recognizing its transformative influence on society. He noted, however, a lack of public confidence that tech companies will manage these advances responsibly.

Demand for AI-generated video is driven in significant part by adult content, which leaves centralized companies with a dilemma: refusing to serve that demand could push it toward illicit markets. Colson pointed to open-source AI image models with insufficient content moderation as evidence of the need for stringent regulation.

As OpenAI readies Sora for release, the system is being evaluated by domain experts in areas such as propaganda, entertainment, and bias. The company is also building tools to detect misleading content, including videos generated by Sora. OpenAI did not provide an official comment for this account.
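Detection tooling of this kind often starts from content provenance: recording cryptographic hashes of known-authentic media so that anything without a matching record can be flagged for review. The sketch below is a hypothetical Python illustration of that idea only; OpenAI has not published the design of its detector, and the class and names here are invented for this example.

```python
import hashlib
from typing import Optional


def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()


class ProvenanceRegistry:
    """Toy registry mapping content hashes to provenance records.

    Real provenance systems (e.g. C2PA-style content credentials) embed
    signed metadata in the file itself; this sketch only models the
    lookup step: known hash -> recorded source, unknown hash -> flagged.
    """

    def __init__(self) -> None:
        self._records: dict[str, str] = {}

    def register(self, data: bytes, source: str) -> str:
        """Record the content's hash with its claimed source; return the hash."""
        digest = sha256_of(data)
        self._records[digest] = source
        return digest

    def check(self, data: bytes) -> Optional[str]:
        """Return the recorded source, or None if the content is unrecognized."""
        return self._records.get(sha256_of(data))


registry = ProvenanceRegistry()
registry.register(b"original-video-bytes", source="studio-camera-01")

print(registry.check(b"original-video-bytes"))  # studio-camera-01
print(registry.check(b"tampered-video-bytes"))  # None
```

Note that this approach only verifies exact copies: a single changed byte produces a different hash, which is why production systems pair hashing with signed, embedded metadata rather than a central lookup table.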

In light of the rise in nonconsensual deepfake pornography, a majority of respondents in an AIPI survey endorsed legislation prohibiting its creation. The absence of specific U.S. laws or regulations addressing the issue underscores the urgency of comprehensive legislative measures to govern AI technologies effectively.

Jason Hogg, an executive at Great Hill Partners and former CEO of Aon Cyber, stressed the need to shift from a reactive to a proactive approach in regulating AI models to combat a coming wave of cybercrime. He advocated strict laws and penalties to address the escalating cybersecurity threats facing the nation.

Last modified: March 18, 2024