
### Restriction Imposed on Political Candidates Accessing Claude AI Chatbot

While AI is being built into every facet of everyday life, U.S. elections are being walled off by m…

If Joe Biden wants a sophisticated, personable AI chatbot to interact with the public, he will have to look beyond Claude, the ChatGPT competitor developed by Anthropic. The company has explicitly stated that political candidates may not use Claude to build chatbots that impersonate them or to run targeted political campaigns. Violations of this policy will draw warnings and can lead to suspension of access to Anthropic’s services.

Anthropic’s announcement of its stance on preventing “election misuse” of AI reflects growing global concern that the technology could be used to spread false or deceptive content, including images and videos. Meta and OpenAI have also restricted political uses of their AI tools.

Anthropic’s safeguards against political misuse encompass three key areas: establishing and enforcing policies related to election matters, evaluating models to prevent potential abuses, and guiding users to accurate voting resources. Users are required to adhere to Anthropic’s acceptable use policy, which explicitly prohibits the use of AI tools for political campaigning or lobbying. Violations may lead to warnings, service suspensions, and a thorough human review process.

Furthermore, Anthropic rigorously tests its systems through “red-teaming” exercises, where partners attempt to circumvent the guidelines and misuse Claude for malicious purposes. These tests include scenarios that breach the acceptable use policy, such as soliciting information on voter suppression tactics. Anthropic has also developed tests to ensure fair representation of candidates and topics, promoting political parity.

In the United States, Anthropic steers voters toward accurate information rather than answering voting questions with its generative AI: users who ask Claude for voting information are directed to TurboVote, a nonpartisan resource from Democracy Works. Similar measures are planned for other countries in the near future.

This initiative aligns with broader efforts in the tech industry to address the risks posed by AI to democratic processes. Regulatory actions, such as the FCC’s prohibition of AI-generated deepfake voices in robocalls, underscore the need to regulate AI’s role in politics. Companies like Facebook and Microsoft have also introduced measures to combat misleading AI-generated political content.

On the question of AI avatars for political figures, OpenAI has already suspended a developer for creating a bot resembling Rep. Dean Phillips. That action followed a petition by Public Citizen urging regulators to ban the use of generative AI in political campaigns.

While Anthropic refrained from providing additional comments, OpenAI did not respond to inquiries from Decrypt.
