
Caution: Artificial Intelligence-Generated Medical Data May Contain Misinformation


It is essential to exercise caution when relying on information from artificial intelligence chatbots such as ChatGPT, as highlighted by studies presented at the American Society of Health-System Pharmacists' December meeting in Anaheim, California.

One study, conducted by researchers from Torrance Memorial Medical Center in California and Iwate Medical University in Japan, compared ChatGPT's answers about 30 medications with information from Lexicomp, a scientifically vetted drug reference. Only two of ChatGPT's responses were accurate; for the remaining drugs, the information was incomplete or only partially appropriate. This underscores the importance of consulting healthcare professionals with drug-related questions, even though AI tools may improve in accuracy over time as their models are refined.

Similarly, a study by researchers from Long Island University College of Pharmacy in New York found that ChatGPT failed to provide responses to 74% of questions posed by pharmacists and offered inaccurate or incomplete information in other cases. The AI tool even fabricated recommendations and produced incorrect calculations, emphasizing the critical need for human oversight in healthcare decision-making.

Furthermore, an alarming discovery was made when ChatGPT inaccurately suggested a dosage adjustment for a muscle spasm medication, potentially leading to a severe medical error if followed without verification. While ChatGPT can serve as a useful reference point for medical information, it should not be solely relied upon as an authoritative source, as cautioned by the researchers.

In a separate instance, Italian eye surgeons demonstrated how GPT-4, the underlying model of ChatGPT, could be manipulated to generate fabricated clinical trial data, highlighting the risks associated with AI-generated information in healthcare settings. This underscores the importance of thorough vetting and verification of data sources to ensure the integrity and reliability of medical information.

Editors and writers covering clinical studies are advised to conduct meticulous research, verify information from multiple sources, seek feedback from independent experts, and cross-reference data with peer-reviewed research to maintain accuracy and credibility in their work. By remaining vigilant and discerning in their approach to utilizing AI-generated content, editors can uphold the standards of scientific integrity and ethical journalism in the ever-evolving landscape of AI technology in healthcare.

Last modified: December 29, 2023