
### Philadelphia Sheriff’s Campaign Removes Fake AI-Generated ‘News’ Stories from Its Website


The campaign team for Sheriff Rochelle Bilal in Philadelphia has acknowledged that numerous positive “news” stories featured on their website were actually created by ChatGPT, an AI chatbot.

Following a report by the Philadelphia Inquirer revealing that local news outlets could not verify these stories, Bilal’s campaign took down over 30 such articles. While the campaign stated that the stories were based on real events, experts warn that disseminating such misinformation can undermine public trust and pose a threat to democracy.

The campaign team admitted that they provided talking points to an external consultant, who then utilized an AI service to generate the content. However, they now recognize that the AI-produced articles amounted to fake news pieces supporting their initiatives.

ChatGPT, a chatbot built on a large language model developed by OpenAI, works by predicting the most likely next word in a sequence, which allows it to complete complex prompts quickly. Despite that efficiency, such models are prone to errors, including generating inaccurate information, known as hallucinations.
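
As a minimal illustration of what “predicting the most likely next word” means, the Python sketch below builds a tiny table of which word follows which in a sample sentence and greedily extends a prompt from it. It is a toy example for intuition only; ChatGPT’s actual implementation relies on large neural networks trained on vast amounts of text, not simple frequency counts.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each word in a small
# sample, then repeatedly pick the most frequent continuation. This only
# illustrates the idea of next-word prediction; it bears no resemblance
# to the neural networks behind ChatGPT.
sample = "the sheriff said the campaign removed the stories from the website".split()

follows = defaultdict(Counter)
for current, nxt in zip(sample, sample[1:]):
    follows[current][nxt] += 1

def complete(prompt_word, length=5):
    """Greedily extend a prompt by always choosing the most common next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Prints a short machine-generated continuation, e.g. "the sheriff said the sheriff said"
print(complete("the"))
```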

The increasing use of AI tools like ChatGPT for tasks such as drafting emails and website content has raised concerns about accuracy and the need for thorough fact-checking. In a notable case, two lawyers had to apologize in a Manhattan federal court after relying on ChatGPT for legal research, unaware that the system had fabricated some results.

Mike Nellis, creator of the AI campaign tool Quiller, condemned the consultant’s use of AI as irresponsible, unethical, and deceitful. He emphasized that platforms like OpenAI need to enforce strict policies to prevent such misuse, in particular by prohibiting politicians from using AI for campaign purposes.

While bipartisan discussions in Congress have highlighted the necessity of regulating AI tools in politics, no federal legislation has been enacted yet. Calls for oversight have intensified as advancements in AI technology outpace regulatory frameworks at the local, state, and federal levels.

The controversial list of stories on Bilal’s website, titled “Record of Accomplishments,” ended with a disclaimer, recently highlighted by the Inquirer, stating that no guarantees were made about the accuracy of the information presented.

Brett Mandel, a former finance chief under Bilal, warned that spreading such misinformation could sow confusion among voters and further erode trust in democratic institutions. Mandel, who filed a whistleblower lawsuit against the office, has criticized Bilal’s handling of its finances and other reported problems during her tenure.

The list of fabricated news stories, falsely attributed to reputable sources like the Inquirer and local broadcast stations, has underscored the potential risks associated with AI-generated content and the imperative for stringent oversight in political communications.


Swenson reported from New York.


The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. Find more information about AP’s democracy initiative here. The AP retains full responsibility for all content.
