The proliferation of websites designed to mimic legitimate news sources while disseminating poor-quality or false information has the potential to erode trust in the media, according to experts.
NewsGuard, a media watchdog, has observed a surge in websites featuring what it describes as AI-generated content.
Jack Brewster, NewsGuard’s enterprise editor, highlighted the issue, stating, “These are websites that are utilizing artificial intelligence to generate content on a large scale without human editorial oversight.” These sites pose as news outlets but primarily serve as clickbait for advertisements.
NewsGuard categorizes these platforms as Unreliable AI-Generated News websites (UAINs). The number of identified UAINs has grown from 49 in May 2023 to more than 700 by February 2024.
The criteria used by NewsGuard’s AI Tracker to identify these sites include a predominance of AI-generated content, minimal human supervision, deceptive appearances suggesting human authorship, and a lack of disclosure regarding the content’s AI origin.
Many of these sites operate under generic-sounding names like “Ireland Top News” and “Daily Times Update.” Some have even taken over established domains, such as that of Hong Kong’s Apple Daily, repurposing them with AI-generated content after the original outlet ceased operations due to legal issues.
The transformation of reputable news outlets into platforms for “SEO-bait” content, as observed with Apple Daily, not only undermines genuine journalism but also diminishes its credibility.
McKenzie Sadeghi, NewsGuard’s news verification editor, expressed concern about the detrimental impact of AI-generated websites on media trust, particularly at a time when public confidence in the media is already waning.
The proliferation of these misleading platforms across multiple languages, including English, Arabic, Chinese, and Turkish, underscores the global scope of the issue.
NewsGuard’s investigation revealed that Google’s advertisements feature prominently on these sites, raising questions about the tech giant’s ad placement policies.
Amid concerns that AI-generated fake news sites could influence elections in about 40 countries this year, experts warn that such platforms could spread election-related misinformation at scale and further erode the public’s ability to distinguish credible reporting from fabricated content.
Shazeda Ahmed, a UCLA researcher specializing in AI safety, emphasized the importance of media literacy in discerning AI-generated content and its potential impact on individuals’ decision-making.
The challenge of holding site owners and producers accountable is compounded by their use of privacy services to conceal their identities, making it difficult for authorities to trace the origins of these deceptive websites.
Despite attempts to contact site owners for clarification, the lack of transparency and accountability in the online content landscape remains a pressing concern, underscoring the need for greater vigilance and regulation in combating the proliferation of AI-generated misinformation.