
### Leveraging AI Technology: The Surge of Malicious Phishing Emails


Since the last quarter of 2022, malicious phishing emails have surged by a staggering 1,265%, alongside a 967% increase specifically in credential phishing, as reported by cybersecurity company SlashNext.

The utilization of generative artificial intelligence tools like ChatGPT has enabled cybercriminals to craft sophisticated and targeted messages for business email compromise (BEC) and other phishing schemes. This trend underscores the rapid growth of AI-driven threats in terms of speed, quantity, and complexity.

The research conducted by SlashNext, which draws from the company’s threat intelligence and a survey involving over 300 cybersecurity experts in North America, reveals that cybercriminals are increasingly relying on generative AI tools such as ChatGPT to orchestrate elaborate BEC and phishing campaigns.

On average, 31,000 phishing attacks were recorded per day. Nearly half of the cybersecurity professionals surveyed reported experiencing a BEC attack, and 77% said they had fallen victim to phishing attempts.

Patrick Harr, the CEO of SlashNext, emphasized the escalating concerns surrounding the proliferation of phishing due to generative AI technology. He pointed out how AI empowers threat actors to enhance the speed and diversity of their attacks by tweaking malware code or generating numerous versions of social engineering tactics to heighten the chances of success.

The exponential growth of malicious phishing emails coinciding with the launch of ChatGPT at the end of 2022 is not merely a coincidence, according to Harr. He highlighted how generative AI chatbots have significantly lowered the entry barrier for novice cybercriminals while equipping seasoned attackers with the means to execute targeted spear-phishing assaults on a large scale.

### Financial Implications

The surge in phishing attacks can be attributed to their effectiveness, as mentioned by Harr, citing the FBI’s Internet Crime Report, which disclosed losses of approximately $2.7 billion from BEC alone in 2022, with an additional $52 million lost to other types of phishing.

Despite debates on the actual impact of generative AI on cybercrime, Harr stressed that threat actors are indeed leveraging tools like ChatGPT to propagate rapidly evolving cyber threats and craft sophisticated BEC and phishing messages.

For instance, SlashNext researchers uncovered a BEC scheme in July that utilized ChatGPT in conjunction with a cybercrime tool named WormGPT, tailored specifically for malicious activities like launching BEC attacks.

Furthermore, reports surfaced about another malevolent chatbot called FraudGPT, marketed as an exclusive tool catering to fraudsters, hackers, spammers, and similar individuals, boasting a wide array of features.

SlashNext researchers also identified a concerning trend involving AI “jailbreaks,” where hackers adeptly circumvent the constraints on the lawful use of generative AI chatbots. This tactic enables attackers to weaponize tools like ChatGPT to deceive victims into divulging personal data or login credentials, leading to more severe breaches.

Chris Steffen, a research director at Enterprise Management Associates, highlighted how cybercriminals leverage generative AI tools to produce convincing phishing messages, including BEC attacks. The days of poorly crafted emails like the infamous ‘Prince of Nigeria’ scams are long gone, replaced by highly persuasive and authentic-looking messages mimicking official correspondence from trusted sources like government agencies and financial institutions.

Steffen emphasized the use of AI to meticulously analyze past writings and publicly available information, enabling cybercriminals to tailor emails to appear extremely convincing. For instance, a cybercriminal might employ AI to compose an email to a specific employee, impersonating their superior and referencing company events or personal details to lend credibility to the message.

To combat the escalating threat landscape, cybersecurity leaders are advised to prioritize continuous end-user education and training, fostering a security-aware culture where employees are vigilant against potential threats and feel empowered to report suspicious activities.

Implementing email filtering tools that leverage machine learning and AI to identify and block phishing emails, conducting regular security audits, and fortifying existing security infrastructure are crucial steps in mitigating the risks posed by AI-generated cyber threats. Embracing a zero-trust strategy can further bolster defenses and provide comprehensive protection against evolving threats in the digital landscape.
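Production email filters apply trained machine-learning models over many signals, but the idea can be illustrated with a minimal rule-based sketch. Everything here is hypothetical: the keyword lists, weights, and threshold are illustrative, and a real deployment would learn such features from labeled mail rather than hard-code them. One signal shown below, a Reply-To domain that differs from the From domain, is a common BEC red flag.

```python
# Minimal illustrative sketch of a rule-based phishing scorer.
# The keyword lists, weights, and threshold are purely hypothetical;
# real filtering products use trained ML models over far richer features.

URGENCY_TERMS = {"urgent", "immediately", "verify your account", "suspended"}
CREDENTIAL_TERMS = {"password", "login", "credentials", "ssn"}

def phishing_score(subject: str, body: str, from_addr: str, reply_to: str) -> int:
    """Return a crude risk score; higher means more suspicious."""
    text = (subject + " " + body).lower()
    score = 0
    # Weight credential-harvesting language more heavily than urgency cues.
    score += 2 * sum(term in text for term in URGENCY_TERMS)
    score += 3 * sum(term in text for term in CREDENTIAL_TERMS)
    # BEC red flag: Reply-To domain differs from the From domain.
    if reply_to and reply_to.split("@")[-1] != from_addr.split("@")[-1]:
        score += 5
    return score

def is_suspicious(subject: str, body: str, from_addr: str,
                  reply_to: str, threshold: int = 5) -> bool:
    """Flag a message when its crude score meets the (hypothetical) threshold."""
    return phishing_score(subject, body, from_addr, reply_to) >= threshold
```

A message combining urgency language, a credential request, and a mismatched Reply-To domain accumulates a high score, while routine internal mail scores zero; an ML-based filter makes the same kind of decision, but with weights learned from data instead of chosen by hand.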

Last modified: February 3, 2024