Despite training their models on copyrighted content, AI developers argue that they should not be held responsible for what their systems generate. Instead, they contend that the individuals who use their platforms should bear legal accountability for any infringing content those platforms produce.
In August, the U.S. Copyright Office issued a request for comments on artificial intelligence and copyright, exploring whether new regulations for generative AI are needed. Responses from stakeholders including Google, OpenAI (the creator of Dall-E), and Microsoft identified the unregulated reproduction of copyrighted material as the central concern. These companies argued that AI developers should not be held liable for copyright violations committed with their tools, likening those tools to traditional recording and copying devices.
Microsoft, a key partner of OpenAI, stressed that users bear the responsibility for employing AI tools properly. The company said it has implemented safeguards, such as meta-prompts and content classifiers, to mitigate misuse of AI-generated content. These measures, however, have not effectively curbed copyright and trademark infringement, as recent disputes involving The Walt Disney Company demonstrate.
Google drew a distinction between direct and secondary infringement in AI-generated content, arguing that users who prompt AI systems should bear responsibility for any resulting copyright violations. The company warned that imposing strict liability on AI developers could hinder technological advancement.
OpenAI echoed similar sentiments, emphasizing that users largely determine the outputs of AI systems through their prompts. Notably, however, these same companies have used copyrighted material without authorization to train their models, prompting legal challenges from authors.
At the same time, several of these companies, including Google, OpenAI, Microsoft, and Amazon, have offered to cover legal expenses for customers facing copyright infringement lawsuits, even as they advocate for user accountability. They nevertheless maintain that existing copyright law adequately addresses the challenges posed by AI, and that drastic changes could stifle innovation.
The Motion Picture Association (MPA) backed the position of the big tech companies, characterizing AI as a supportive tool rather than a replacement for human creativity in the entertainment industry. The MPA opposed revising copyright law, stating that the current legal framework is sufficient to address AI-related issues as they arise.
In conclusion, while debate continues over how accountability for copyright infringement should be divided between AI developers and users, the consensus among major industry players is that existing laws and doctrines can handle these challenges without significant legislative change.