
### Philadelphia Sheriff's Campaign Removes Fake AI-Generated News Stories from Its Website


The campaign team supporting the embattled Philadelphia sheriff admitted on Monday that numerous favorable “news” articles featured on its website had actually been generated by ChatGPT.

Sheriff Rochelle Bilal’s campaign took down over 30 stories that were produced by a consultant utilizing the generative AI chatbot. This action followed a report by the Philadelphia Inquirer, revealing that local news outlets could not verify the existence of these stories in their archives.

Experts warn that spreading such misinformation can undermine public trust and pose a threat to democracy. While the campaign asserted that the stories were grounded in actual events, it confirmed that artificial intelligence had been used to fabricate the articles themselves.

According to the campaign, it supplied an external consultant with talking points, which the consultant then fed into the AI service. The service produced misleading news pieces written to bolster the initiatives described in the prompt.

While advanced language models like OpenAI’s ChatGPT excel at rapidly completing complex prompts, they are also prone to hallucinations, confidently producing plausible-sounding but false information, because they predict likely text rather than retrieve verified facts.

Many people have turned to these tools to speed up tasks like drafting work emails and website content, but using their output without fact-checking can lead to serious problems.

In a notable incident, two attorneys had to apologize to a judge in Manhattan federal court last year for relying on ChatGPT to search for legal precedents, inadvertently including fabricated information in their submissions.

Mike Nellis, the creator of the AI campaign tool Quiller, condemned the irresponsible use of AI by the campaign consultant, labeling it as unethical and deceitful.

He emphasized that OpenAI bears the responsibility of upholding its policies, which prohibit the dissemination of output from its products for deceptive purposes. Additionally, OpenAI restricts the utilization of its systems for creating tools for political campaigns or lobbying.

Nellis argued that regulatory frameworks governing the use of AI tools in politics are necessary, and called for comprehensive oversight as the technology progresses. Despite bipartisan discussions in Congress on the matter, no federal legislation has been enacted thus far.

Several people, including a former whistleblower from Bilal’s office, raised concerns that such misinformation could confuse voters, erode trust, and ultimately jeopardize democracy.

The list of stories, labeled Bilal’s “Record of Accomplishments,” ended with a disclaimer stating that the accuracy of the information could not be guaranteed.

Bilal has faced criticism over various issues during her tenure, including office expenditures, campaign finance discrepancies, and the reported loss of numerous weapons. Against that backdrop, the fabricated stories’ attribution to reputable sources like the Inquirer and local broadcast stations has drawn particular scrutiny.

The Inquirer confirmed that none of the four news stories attributed to it could be found in its archives, casting doubt on the authenticity of the content. As the situation unfolds, factual and trustworthy journalism remains essential to safeguarding the integrity of public information.


WHYY is committed to delivering factual and comprehensive journalism and information. As a nonprofit entity, we rely on the financial support of readers like you. Kindly consider contributing today to sustain our mission.

Last modified: February 6, 2024