- Britain's 2024 local and general elections are expected to be dominated by contentious issues such as migration and the rising cost of living.
- Cybersecurity experts predict that malicious actors will try to interfere with the upcoming elections through a range of tactics, including the use of artificial intelligence to deceive voters.
- Security experts expect state-sponsored cyberattacks to ramp up as the election nears.
- Misinformation is expected to be among the top cyber risks in 2024.
As Britain approaches its 2024 elections, cybersecurity experts who spoke to CNBC warn of state-backed cyberattacks, disinformation campaigns, and significant new risks posed by artificial intelligence.
Britons will vote in local elections on May 2, with a general election expected later in the year, although Prime Minister Rishi Sunak has yet to set a date.
The electoral process unfolds against a backdrop of pressing issues, including soaring living costs and polarized views on immigration and asylum.
According to Todd McKinnon, CEO of identity security firm Okta, the majority of security risks are likely to emerge in the months leading up to the vote; most U.K. voters still cast their ballots in person at polling stations on election day itself.
This scenario is not unprecedented.
Misinformation spread via social media is believed to have influenced both the 2016 U.S. presidential election and the U.K.'s Brexit referendum, although Russia denies involvement. Since then, state actors have routinely launched attacks in countries around the world in attempts to sway election outcomes.
Recently, the U.K. government accused APT 31, a hacking group linked to the Chinese state, of attempting to break into U.K. lawmakers' email accounts; the attempts were reportedly unsuccessful. In response, London imposed sanctions on individuals tied to the campaign as well as a technology company based in Wuhan alleged to be a front for APT 31. The U.S., Australia, and New Zealand followed with sanctions of their own, while China has dismissed allegations of state-sponsored hacking as unfounded.
Malicious Use of AI
Security analysts predict that malicious actors will interfere with the upcoming elections in a variety of ways, not least through disinformation, which is expected to worsen as generative AI becomes more widespread this year.
Experts anticipate a surge in synthetic media: fake images, video, and audio created with AI and other digital techniques, as the tools to produce them become more accessible.
“Nation-state actors and cybercriminals are likely to employ AI-driven identity-based attacks, such as phishing, social engineering, ransomware, and supply chain compromises, to target politicians, campaign personnel, and election-related entities,” Okta's McKinnon added.
“We are bound to witness an uptick in AI- and bot-generated content from threat actors, spreading misinformation at a greater scale than in previous election cycles,” he said.
The security community is urging heightened awareness of this type of misinformation, as well as international cooperation to mitigate the risk of such malicious activity.
The Top Election Threat
According to Adam Meyers, head of counter adversary operations at cybersecurity firm CrowdStrike, AI-powered disinformation poses the most significant risk to the 2024 elections.
“We are seeing generative AI grow in popularity on a daily basis,” Meyers told CNBC. “Right now, generative AI can be harnessed for both positive and negative purposes.”
According to CrowdStrike's latest threat report, China, Russia, and Iran are highly likely to use generative AI to conduct misinformation and disinformation operations against various international elections.
Meyers emphasized the fragility of the political process, warning that hostile nation-states such as Russia, China, and Iran could use deepfakes and generative AI to craft persuasive narratives that exploit people's confirmation biases.
Furthermore, AI's accessibility has lowered the barrier to entry for online fraudsters seeking to exploit individuals. Scam emails generated with tools like ChatGPT are already prevalent.
Dan Holmes, a fraud prevention expert at Feedzai, highlighted how fraudsters are training AI models on individuals' social media data to create more sophisticated, personalized scams.
“You can easily train these voice AI models… by leveraging social media data,” Holmes explained in an interview with CNBC. “The goal is to evoke an emotional response and devise creative schemes.”
In October 2023, a fabricated AI-generated audio clip of Labour Party leader Keir Starmer verbally abusing party staffers circulated on the social media platform X, racking up as many as 1.5 million views, according to fact-checking organization Full Fact.
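Detecting such audio fakes is an active area of research. Real detectors rely on classifiers trained on large corpora of genuine and synthetic speech, but the first stage is usually feature extraction. The sketch below is a minimal, purely illustrative example of that stage, assuming the open-source librosa library and a hypothetical file path; it is not any fact-checker's or vendor's actual method.

```python
# Illustrative sketch: extract the kind of spectral features an
# audio-deepfake classifier might consume. Assumes librosa and numpy
# are installed; a real detector feeds features like these into a
# model trained on genuine and synthetic speech.
import librosa
import numpy as np

def extract_voice_features(path: str) -> dict:
    """Compute summary spectral statistics for an audio clip."""
    y, sr = librosa.load(path, sr=16000)  # mono, resampled to 16 kHz

    # MFCCs summarize vocal timbre frame by frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Spectral flatness distinguishes tonal from noise-like spectra.
    flatness = librosa.feature.spectral_flatness(y=y)

    return {
        "mfcc_mean": mfcc.mean(axis=1),   # average timbre profile
        "mfcc_var": mfcc.var(axis=1),     # frame-to-frame variability
        "flatness_mean": float(flatness.mean()),
    }

# Hypothetical usage:
# features = extract_voice_features("clip_under_review.wav")
```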
As the U.K. elections draw closer, cybersecurity experts express concerns about the proliferation of deepfakes and their potential impact.
Election Challenges for Tech Giants
Deepfake technology continues to advance rapidly, and tech companies are racing to develop effective strategies to combat it.
Mike Tuchen, CEO of identity verification firm Onfido, said deepfakes have evolved from a theoretical concern into a practical challenge that demands immediate attention.
The fight against deepfakes has become a contest of "AI versus AI," with companies deploying AI models of their own to detect and mitigate the impact of deceptive content.
Even as authentic content becomes harder to distinguish from fake, there are telltale signs of manipulation that vigilant users can spot.
AI makes it easy to generate text, images, and video, but the output still contains occasional flaws, such as an object that vanishes between frames of a video, and these glitches can betray manipulation.
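The "vanishing object" artifact can, in principle, be surfaced with a simple frame-consistency check: a sudden spike in the difference between consecutive frames flags a segment worth closer inspection. The sketch below is a toy heuristic under that assumption, using the open-source OpenCV library; production deepfake detectors of the kind described above rely on trained models, not a rule like this.

```python
# Toy frame-consistency check: flags abrupt visual discontinuities of
# the kind that can accompany objects vanishing between frames.
# Assumes opencv-python and numpy are installed; real deepfake
# detectors use trained models rather than this heuristic.
import cv2
import numpy as np

def flag_discontinuities(video_path: str, z_threshold: float = 3.0) -> list[int]:
    """Return frame indices where the change from the previous frame is an
    outlier relative to the clip's typical frame-to-frame change."""
    cap = cv2.VideoCapture(video_path)
    diffs, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Mean absolute pixel difference between consecutive frames.
            diffs.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()

    diffs = np.asarray(diffs)
    if diffs.size < 2:
        return []
    z_scores = (diffs - diffs.mean()) / (diffs.std() + 1e-9)
    # Offset by 1 because diffs[i] compares frame i with frame i + 1.
    return [int(i) + 1 for i in np.where(z_scores > z_threshold)[0]]

# Hypothetical usage:
# suspect_frames = flag_discontinuities("clip_under_review.mp4")
```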
“We can expect to encounter more deepfakes during the election period, but a crucial step we can all take is verifying that content is authentic before sharing it,” emphasized Okta's McKinnon.