
### Riding the Generative AI Wave: American Cyber Security Magazine’s Dive into Phishing

Generative AI and the phishing email explosion – how to use AI to fight back. Written by Oakley Cox,…

At the end of 2022, generative artificial intelligence (GenAI), and ChatGPT in particular, hit the global technology landscape like a tsunami, and its influence has only continued to grow throughout 2023.

GenAI is not a novel innovation, but the new generation of chatbots, propelled by advances in computational power, represents the next wave of technology poised to reshape how we live and work. At the same time, the use of AI for illicit or harmful purposes, particularly cybercrime, has raised concerns about the risks the technology carries.

Because GenAI is so widely accessible, it has strengthened threat actors’ email phishing capabilities and raised their success rates.

While the frequency of phishing attacks targeting our client base has remained steady since the introduction of ChatGPT, attacks that rely on duping users into clicking malicious links have declined, as Darktrace reported earlier this year. Meanwhile, the linguistic complexity of these attacks has increased, with greater variation in text formatting, punctuation, and sentence length. Most notably, a 135 percent surge in “novel social engineering attacks” was observed among numerous active Darktrace/Email users between January and February 2023, coinciding with the widespread adoption of ChatGPT.

This trend raises concerns that GenAI tools such as ChatGPT could enable threat actors to craft sophisticated, targeted attacks quickly and at scale, for example emails that mimic a supervisor’s writing with impeccable grammar, spelling, and tone.

### Evolution of AI-Driven Email Scenarios

Darktrace has recently identified shifts in attacks that exploit trust, particularly between May and July of this year. Deceptive emails now appear to originate from the IT department rather than senior management. Research findings reveal a 19% increase in impersonation of the internal IT team, while phishing emails impersonating VIPs or senior executives have decreased by 11%.

These shifts mirror familiar tactics intruders use to circumvent security measures. The data suggests that adversaries are pivoting toward impersonating the IT team because employees have become more adept at recognizing fraudulent emails from senior executives. With GenAI capabilities at their disposal, attackers may soon produce more convincing voice deepfakes and other tools that make it even easier to deceive employees.

The integration of generative AI introduces a new layer of complexity to cybersecurity, especially as email compromise remains a primary vulnerability for businesses. It is foreseeable that confidence in electronic communications will continue to dwindle with the proliferation of GenAI across various media formats such as images, audio, video, and text.

### Embracing the Positive Potential of AI

Amid the discourse around AI’s potential pitfalls and security concerns, it is worth remembering that AI itself is not inherently malevolent; it is misuse by threat actors that leads to adverse outcomes such as cyberattacks. Just as importantly, humans, and cybersecurity teams specifically, can harness AI for good to bolster defenses against cyber threats.

Defensive AI systems that understand the business and its employees’ behavior can classify each email as suspicious or legitimate. AI that autonomously learns and analyzes typical communication patterns, such as tone and sentence structure, can discern subtle nuances and therefore holds a distinct advantage over adversaries’ models trained on generic datasets. In short, AI with in-depth knowledge of internal operations is what defenders need to thwart hyper-personalized, AI-driven attacks.
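As a rough illustration of this idea, the sketch below scores an incoming email against a sender’s historical messages using two simple stylometric features (average sentence length and punctuation density). This is a minimal, hypothetical example: production systems such as Darktrace/Email learn far richer behavioral models, and the function names and features here are invented purely for illustration.

```python
import re
import statistics


def style_features(text):
    """Extract two simple stylometric features from an email body."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "punct_density": sum(c in ",;:()-" for c in text) / max(len(text), 1),
    }


def anomaly_score(baseline_texts, new_text):
    """Compare a new email's style against a sender's historical baseline.

    Returns the largest per-feature z-score; higher means the new
    message deviates more from how this sender usually writes.
    """
    histories = [style_features(t) for t in baseline_texts]
    new = style_features(new_text)
    score = 0.0
    for key in new:
        values = [h[key] for h in histories]
        mean = statistics.mean(values)
        stdev = statistics.pstdev(values) or 1e-9  # avoid division by zero
        score = max(score, abs(new[key] - mean) / stdev)
    return score
```

A message written in the sender’s usual terse style would score low, while a long, elaborately punctuated email “from” the same sender would score high and could be flagged for review.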

Ultimately, the battle for cybersecurity is a human endeavor. AI is a tool in that conflict, with real individuals, including malevolent threat actors, operating behind it. We must view AI not merely as a risk factor but as an ally for security teams in addressing cybersecurity challenges. By adopting this approach collectively, we stand a better chance of safeguarding against AI-enabled threats.

Last modified: February 6, 2024