The lawsuit filed by The New York Times against OpenAI, the creator of ChatGPT, and its partner Microsoft has raised concerns about AI models disseminating false information and allegedly appropriating the media company’s work.
The issue of AI “hallucinations,” instances in which a model generates false information, is at the core of the controversy. When a model invents content and attributes it to a trusted outlet, it undermines the credibility of reputable sources like The New York Times.
OpenAI and Microsoft defend their use of The Times’ content for training their models under the “fair use” doctrine of copyright law, echoing earlier legal disputes over tech companies using copyrighted works to build AI systems. The Times, for its part, contends that the practice has caused it significant financial losses.
The ethical implications of this situation are profound. While it is common for writers to draw inspiration from one another, the question of fair compensation for the use of their work remains pertinent, especially as the technology consuming it advances.
Having contributed articles on AI ethics to the Reynolds Journalism Institute, SUCCESS Magazine, and Habtic Standard, I am torn on this issue. In her recent book “Unmasking AI,” Joy Buolamwini examines the evolving discourse around training data diversity, informed consent, and the ethical use of biometric information in facial recognition technology. However, the distinction between biometric data and written content is crucial, as artists frequently draw inspiration from one another’s work.
While I have yet to reach a definitive conclusion, I believe writers deserve recognition and compensation for their work, particularly as this technology continues to evolve.
This is Nia Norris sharing my perspective.