A recent study has shown how easily bad actors could use generative AI to produce 100 blog posts of false information on health subjects. To test how quickly misleading content could be created, two researchers with no prior experience with the technology set out to generate 50 blog posts each about vaccines and vaping using OpenAI's popular GPT Playground and ChatGPT tools. In just 65 minutes, they produced 102 articles totaling more than 17,000 words. Each post was required to include two fabricated "scientific-looking references," a catchy title, and writing tailored to specific audiences by gender, age, or parental status.
When the researchers tried to replicate the process with other prominent language models, Microsoft's Bing generative AI and Google Bard, built-in obstacles blocked the creation of the same deceptive material. Each post ran at least 300 words and featured fabricated scientific citations along with made-up testimonials from patients and clinicians, according to Bradley Menz, a pharmacist and researcher at Flinders University in Australia. The study's original aim was to determine whether safeguards against such misuse exist and how easily and cheaply false information can be generated.
While Microsoft's Bing and Google Bard resisted generating the misleading content, OpenAI's GPT models readily complied with the researchers' requests. OpenAI did not respond to requests for comment. Menz stressed the risks posed by fabricated testimonials and the potential for such content to distort health messaging. In light of their findings, he and his colleagues are calling for immediate measures to guard against the "widespread dissemination of false health-related textual, visual, and video content."
The study, published in JAMA Internal Medicine as a special communication, underscores the need for greater transparency and accountability from technology developers to protect public health. Menz acknowledged that much of the health information shared on social media is accurate, but he expressed concern that a subset of users could exploit rapid content-creation tools to spread harmful misinformation across social platforms. The researchers are urging stricter regulations and mechanisms to counter the proliferation of deceptive health content that could significantly harm communities.