
### OpenAI Teases New Speech AI Innovations Amid Algorithmic Challenges

ChatGPT maker OpenAI shared a preview Friday of a new artificial intelligence (AI) tool that can generate natural-sounding speech and mimic human voices.


The OpenAI logo is displayed on a mobile phone in front of a computer monitor showing ChatGPT output in Boston on March 21, 2023. The future of ChatGPT and other AI products is being scrutinized in a series of high-profile lawsuits filed in New York’s federal court, including one brought by The New York Times.

The new tool, called Voice Engine, is designed to generate “natural-sounding speech” that closely resembles a human voice. According to OpenAI’s blog post, Voice Engine needs only a single 15-second audio clip of the original speaker to replicate that person’s voice.

The AI startup said Voice Engine could help with tasks such as reading assistance and content conversion, and could provide a voice for individuals with visual or speech impairments. However, OpenAI acknowledged the “serious risks” the tool poses, especially in an election year.

Voice Engine underwent initial development in late 2022, followed by private testing with a select group of partners. These partners agreed to strict usage guidelines, requiring explicit consent from the original speaker and prohibiting the unauthorized imitation of individuals. OpenAI also mentioned that any speech generated by Voice Engine would contain watermarks for traceability.

To mitigate the risk of creating voices resembling public figures, OpenAI recommended implementing voice authentication measures to verify the original speaker’s consent and maintaining a “no-go voice list” to prevent the replication of voices too similar to those of prominent figures.

OpenAI also advised financial institutions to gradually phase out voice-based authentication as a safeguard for access to accounts and other sensitive information.

Despite these precautions, OpenAI said it has not yet decided whether to release the tool widely. The company said it wants to start a conversation about the responsible deployment of synthetic voices and to gauge how society responds before deciding, based on feedback and test results, whether and how to deploy the technology more broadly.

The preview of the speech technology comes amid escalating concerns about AI-generated deepfakes spreading election-related misinformation. Ahead of New Hampshire’s January primary, a deceptive robocall imitating President Biden’s voice circulated in the state; Democratic strategist Steve Kramer later admitted to creating the false robocalls, saying he wanted to highlight AI’s political risks. Additionally, a local Arizona publication released a deepfake of Republican Senate candidate Kari Lake to underscore how advanced deepfake technology has become.


Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
