
DeepMind and YouTube Launch Lyria, an AI Music Model, as Dream Track Innovates AI Music Production


Back in January, Google made waves, soundwaves that is, when it quietly released research on AI-driven music composition software that generated melodies from keyword prompts. Now Google's affiliate DeepMind, working with YouTube, has gone a step further by unveiling a new audio generation model called Lyria, along with two toolsets, described as "experiments," built on top of it. The first, Dream Track, lets creators generate music for YouTube Shorts. The second, a set of Music AI tools, is designed to assist the creative process, for example by turning a hummed or whispered idea into a melody. DeepMind has also revealed that it will use SynthID to watermark AI-generated songs, much as it already does for AI-generated images.

Amid ongoing debates over AI's role in the creative industries, whether AI-generated content will become more prevalent remains an open question, one that gained fresh traction with the recent conclusion of the Screen Actors Guild strike. Notably, Ghostwriter used AI to mimic the styles of popular artists like Drake and The Weeknd.

DeepMind and YouTube appear to prioritize technology that keeps AI-generated music credible, both as a complementary tool for current creators and as a medium that genuinely sounds like music. Longer AI-generated compositions tend to take on a strange, drifting quality that gradually diverges from the original musical intent, a phenomenon observed in Google's previous efforts. DeepMind explains that generating music is harder than generating speech because of the many elements involved, such as beats, notes, and harmonies.

Maintaining musical coherence across different sections or extended sequences remains a challenge for AI models when producing lengthy audio compositions. As a result, initial applications of these models are predominantly focused on shorter musical pieces.

Initially, Dream Track will generate 30-second AI-produced tracks in the style of various artists, including Alec Benjamin, Charlie Puth, and Charli XCX. To create a 30-second segment tailored for YouTube Shorts, the user selects an artist and a prompt, and the tool produces lyrics, a backing track, and vocals in the chosen singer's style. Notably, artists like Charlie Puth are actively engaged in the project, contributing to model testing and providing valuable feedback.

The Music AI Incubator, led by Lyor Cohen and Toni Reed, is a collaborative effort in which musicians, vocalists, and producers test these AI music tools and offer feedback. While Dream Track is being released in a limited capacity today, the full suite of Music AI tools is slated for release later this year. DeepMind has hinted at upcoming features, including the ability to create music with specific instruments, generate harmonies from humming, and produce backing tracks for existing vocal recordings.

Google is far from the sole participant in AI-driven music technology. Meta released an open-source AI audio engine in June, and Stability AI introduced a similar tool in September. Companies like Riffusion are also securing funding to advance their own efforts, indicating growing interest and investment in AI-driven music technology within the music industry.

Last modified: February 25, 2024