
### Enhancing Cyberdefense Amid the Rise of Generative AI in Elections

Humans are easier to breach than IT systems, and threat actors will use generative AI to exploit that weakness.

Humans are anticipated to become primary targets for hacktivists and nation-state actors as countries gear up for significant elections in an era dominated by generative artificial intelligence (AI).

While generative AI has not changed how malicious content is distributed, it has notably increased both the volume and the believability of that content.

Allie Mellen, a principal analyst at Forrester Research, highlighted how this technology empowers threat actors to craft more sophisticated phishing emails aimed at gathering information about candidates or voters. Mellen’s research covers security operations, nation-state threats, and the use of machine learning and AI in security tools. In 2024, her team is monitoring the proliferation of misinformation and disinformation.

Mellen underscored the pivotal role that social media companies play in curbing the dissemination of false information and deception to prevent a recurrence of the 2016 US election interference.

A recent study by Yubico and Defending Digital Campaigns revealed that 79% of US citizens are concerned about the use of AI-generated content to impersonate political figures or fabricate information. Additionally, 43% believe such content could affect the outcome of the upcoming elections. The study, conducted by OnePoll with 2,000 US registered voters, aimed to gauge the influence of AI and cybersecurity on 2024 election campaigns.

Among the respondents, 41% said they had heard an audio message they believed came from a real person but later suspected was created with an AI-generated voice. Moreover, 52% reported receiving emails or messages that appeared legitimate but turned out to be phishing attempts.

Defending Digital Campaigns’ President and CEO, Michael Kaiser, cautioned that the current election cycle poses significant risks of attacks targeting candidates, staff, and affiliates. Kaiser emphasized the imperative for political entities to implement robust security measures to safeguard not only critical data but also voter trust.

David Treece, Yubico’s Vice President of Solutions Infrastructure, stressed the importance of trust in campaign strategies. He warned that security breaches like fraudulent emails or deepfakes circulated on social media could significantly impact electoral campaigns. Treece urged candidates to prioritize cybersecurity measures and adopt proactive steps to fortify their campaigns and enhance voter confidence.

Mellen emphasized the essential role of public awareness in combating false content, asserting that safeguarding elections transcends governmental concerns and necessitates industry-wide collaboration.

Highlighting the significance of effective governance and verification processes, Mellen recommended prioritizing the identification and resolution of underlying issues rather than merely addressing the symptoms. Establishing stringent governance and verification protocols remains crucial to confirming the authenticity of content and communications.
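As a minimal illustration of what such a verification protocol might look like in practice (a hypothetical sketch, not a method described by Mellen), a campaign could attach a message authentication code to official communications so that recipients holding a shared key can confirm a message was not forged. The key and message below are invented for the example:

```python
import hashlib
import hmac

# Hypothetical shared secret, distributed out-of-band to campaign staff.
# A real deployment would favor asymmetric signatures, so the verifying
# side never needs to hold a signing secret.
SHARED_KEY = b"example-campaign-key-rotate-regularly"

def sign_message(message: bytes, key: bytes = SHARED_KEY) -> str:
    """Produce a hex-encoded HMAC-SHA256 tag for an official message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_message(message: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Check a received message against its tag using a constant-time compare."""
    expected = sign_message(message, key)
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    official = b"Polling location has moved to 42 Main St."
    tag = sign_message(official)
    assert verify_message(official, tag)                      # authentic message
    assert not verify_message(b"Polling is cancelled.", tag)  # forged message fails
    print("verification checks passed")
```

In practice, public-key signatures or platform-level provenance standards such as C2PA would fit better, since voters and journalists should be able to verify content without access to any secret.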

Mellen also advocated for continuous enhancement of capabilities to detect false content produced with generative AI and deepfake technologies.

Nation-state actors are the adversaries most likely to mount generative AI attacks, while other threat actors largely rely on established attack methods. Mellen warned that nation-state actors are motivated to exploit novel technologies and tactics to breach systems that were previously inaccessible. The dissemination of misinformation by these actors could undermine public trust and destabilize societal structures from within.

#### Exploiting Human Vulnerabilities through Generative AI

Nathan Wenzler, Tenable’s Chief Security Strategist, expressed concerns about the escalating efforts of nation-state actors to exploit trust through misinformation and disinformation campaigns.

With the evolution of generative AI, Wenzler noted, attackers have gained enhanced capabilities and broader reach, even though his team has not identified any new classes of security threats at present.

Wenzler highlighted how nation-state actors leverage generative AI to capitalize on the public’s inherent trust in online content and their readiness to accept it as factual. These actors utilize generative AI to propagate content aligned with their objectives, particularly through convincing phishing emails and deepfakes.

While cybersecurity tools have significantly bolstered defenses against technical vulnerabilities, adversaries are increasingly targeting human vulnerabilities. Wenzler emphasized that as technology advances, attackers are leveraging generative AI to enhance the effectiveness of social engineering attacks, enabling rapid content generation with higher success rates.

Even a marginal improvement in crafting persuasive content can result in a substantial increase in victims, Wenzler noted, underscoring the speed and scalability advantages afforded by generative AI in facilitating social engineering attacks.

#### Assessing Government Concerns Regarding Generative AI Risks

Wenzler emphasized the paramount importance for governments to address the risks posed by generative AI, particularly in exploiting trust-based vulnerabilities. He highlighted the psychological aspect of human behavior, noting the tendency to trust visual and textual content without adequate scrutiny. Wenzler cautioned that the proliferation of deepfakes poses significant challenges to fostering healthy skepticism and critical thinking.

He advocated for the development of tools, such as deepfake detection mechanisms, to effectively counter the threats posed by generative AI and misinformation campaigns.

#### Safeguarding Large Language Models

Organizations must prioritize the security of AI model training data to mitigate risks associated with malicious attacks like data poisoning. Mellen emphasized the need to review and secure training data for large language models (LLMs) to prevent the generation of false outputs.
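One concrete, if simplified, way to make a training corpus tamper-evident (an illustrative sketch assuming a file-based corpus, not a technique attributed to Mellen) is to record a cryptographic hash of every file at review time and re-check that manifest before each training run, flagging any additions, deletions, or modifications. The `training_data` directory and manifest filename below are hypothetical:

```python
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(corpus_dir: Path) -> dict[str, str]:
    """Map each relative file path in the corpus to its digest."""
    return {
        str(p.relative_to(corpus_dir)): hash_file(p)
        for p in sorted(corpus_dir.rglob("*"))
        if p.is_file()
    }

def check_corpus(corpus_dir: Path, manifest_path: Path) -> list[str]:
    """Compare the corpus against a saved manifest; return a list of findings."""
    recorded = json.loads(manifest_path.read_text())
    current = build_manifest(corpus_dir)
    findings = []
    for name in recorded.keys() - current.keys():
        findings.append(f"missing: {name}")
    for name in current.keys() - recorded.keys():
        findings.append(f"unexpected new file: {name}")
    for name in recorded.keys() & current.keys():
        if recorded[name] != current[name]:
            findings.append(f"modified: {name}")
    return findings

if __name__ == "__main__":
    corpus = Path("training_data")            # hypothetical corpus directory
    manifest = Path("corpus_manifest.json")   # hypothetical manifest location
    if not manifest.exists():
        manifest.write_text(json.dumps(build_manifest(corpus), indent=2))
        print("manifest recorded at review time")
    else:
        for finding in check_corpus(corpus, manifest) or ["corpus unchanged"]:
            print(finding)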

Sergy Shykevich, Threat Intelligence Group Manager at Check Point Software, highlighted the risks associated with LLMs, particularly in the context of nation-state actors exploiting these models to manipulate generative AI responses. Shykevich stressed the critical need for transparency from platform operators utilizing LLMs to safeguard public opinion and electoral processes.

While the regulatory landscape for securing LLMs remains nascent, Mellen underscored the challenges faced by administrators in comprehending and managing generative AI systems due to their novelty.

Wenzler recommended leveraging smaller, purpose-built LLMs to mitigate risks associated with training data, advising organizations to strike a balance between dataset size and security considerations.

He urged governments to swiftly implement regulations and mandates to address the risks associated with generative AI, providing guidance for businesses adopting such applications.
