New York (CNN) —
With elections taking place around the world this year in which more than half of the global population is expected to participate, tech leaders, lawmakers, and civil society groups are increasingly worried that artificial intelligence (AI) could cause disruption and confusion in the electoral process. Now, a group of leading tech companies says it is joining forces to address that threat.
More than a dozen tech companies involved in building or using AI technologies have jointly pledged to work together to detect and counter harmful AI-generated content during elections, including deepfakes of political figures. Signatories to the agreement, called the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” include OpenAI, Google, Meta, Microsoft, TikTok, and Adobe, among others.
The accord amounts to a set of commitments from the companies to collaborate on technology for detecting and countering the spread of misleading AI content, and to be transparent with the public about their efforts to address potentially harmful AI-generated material.
Microsoft President Brad Smith said the goal is to ensure AI does not help spread election misinformation: “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”
Tech companies have historically struggled to regulate themselves and enforce their own policies, but the agreement comes as regulation has yet to catch up with rapid advances in AI.
A new crop of AI tools makes it possible to quickly generate convincing text, realistic images and, increasingly, sophisticated video and audio that experts fear could be used to spread false information and mislead voters. The announcement of the accord follows OpenAI’s recent unveiling of Sora, a strikingly lifelike AI text-to-video generator.
OpenAI CEO Sam Altman has himself called for oversight, warning lawmakers at a congressional hearing in May that, without proper regulation, the industry could cause significant harm to the world.
Some of the companies had previously partnered to develop industry standards for embedding metadata in AI-generated images, enabling other companies’ systems to automatically detect that those images were computer-generated.
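For readers curious how that kind of metadata tagging works in principle, here is a minimal sketch, with the caveat that it is only an illustration: the field names are hypothetical, and real provenance standards such as C2PA embed cryptographically signed manifests rather than the unsigned PNG text tags shown here.

```python
# Toy illustration of image-provenance metadata (hypothetical field names).
# Real industry standards (e.g., C2PA) use cryptographically signed
# manifests; this sketch only shows the basic write-then-detect flow
# using unsigned PNG text chunks via the Pillow library.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Generator side: tag the image as AI-generated when saving it.
image = Image.new("RGB", (256, 256), color="gray")  # stand-in for model output
metadata = PngInfo()
metadata.add_text("ai_generated", "true")           # hypothetical tag
metadata.add_text("generator", "example-model-v1")  # hypothetical tag
image.save("output.png", pnginfo=metadata)

# Platform side: check an uploaded image for the provenance tag.
uploaded = Image.open("output.png")
tags = getattr(uploaded, "text", {})  # PNG text chunks, if any
if tags.get("ai_generated") == "true":
    print(f"Flagged as AI-generated by: {tags.get('generator', 'unknown')}")
else:
    print("No provenance metadata found.")
```

Unsigned tags like these can be stripped simply by re-encoding the image, which is one reason the industry standards pair embedded metadata with signed, tamper-evident manifests.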
The accord announced Friday takes those cross-industry efforts a step further: the signatories commit to exploring ways to attach machine-readable signals to AI-generated content that identify its origin, and to assessing the risk that their AI models could be used to generate deceptive election-related content.
The companies also pledged to run public-awareness campaigns to teach people how to avoid being manipulated or deceived by such content.
Still, some civil society groups are skeptical that voluntary commitments like the accord go far enough. Nora Benavidez, senior counsel and director of digital justice and civil rights at the tech and media watchdog Free Press, said that meaningfully confronting the threats AI poses during election cycles will require robust content moderation involving human review, labeling, and enforcement.