A former executive at a leading technology firm has warned that musicians risk being exploited if artificial intelligence companies misuse their copyrighted songs.
The technology relies on an extensive collection of existing music to generate audio based on text prompts.
Ed Newton-Rex left Stability AI’s audio team over the company’s position that training generative AI models on copyrighted content constitutes “fair use.” He says unauthorized copyrighted material is already being used to train AI models.
Newton-Rex’s critique extends beyond Stability AI to the broader generative AI sector, where the prevailing assumption is that models may be trained on any content without the rights holders’ permission, even though creators own the material.
He also pointed out that major AI corporations often avoid collaboration with labels and artists due to the perceived complexities and costs involved.
By contrast, Emad Mostaque, Stability AI’s co-founder and CEO, views fair use as a mechanism that fosters innovation.
Additional Insights on Artificial Intelligence
Fair use is a legal doctrine that allows limited use of copyrighted material without the copyright owner’s permission for purposes such as criticism, news reporting, research, or education.
Musicians can opt out of having their work included in Stability’s sound generator, Stable Audio.
Every day, numerous AI-generated songs flood the internet, with established musicians partnering with tech giants to explore AI music tools.
The Potential of Generative AI in the Music Industry
Musicians have long embraced new technology, from modern production tools for musical experimentation to voice-altering techniques like Auto-Tune.
Sampling, the practice of incorporating existing sound recordings into new music, was initially perceived as a threat to musicians; today, legal use requires consent from rights holders.
Generative AI presents similar challenges, prompting discussions on its impact on artistic integrity.
Industry players like Google, YouTube, and Sony are introducing AI tools that enable users to create music swiftly using text inputs.
While some musicians willingly contribute their work to these models, concerns arise over AI systems potentially infringing on copyrights without proper authorization.
Renowned artist Bad Bunny recently criticized the unauthorized use of his voice in a viral AI-generated track, urging fans not to support such content.
Boomy, an AI music generator claiming to avoid copyrighted material, reported over 18 million songs created through its platform by November.
Calls have been made for regulations that protect artists’ rights and ensure fair compensation when their work is used by AI companies, as advocated by the Human Artistry Campaign, which represents music industry organizations globally.
Moiya McTier, a senior campaign adviser, emphasized that artists whose work is incorporated into AI models should give their consent and be credited and compensated.