
### Generative AI Causing Cybercriminals to Pause

An analysis of dark web forums revealed many threat actors are skeptical about using tools like ChatGPT.

Cybercriminals have shown a hesitancy to leverage generative AI for launching attacks, according to recent findings from Sophos.

The research examines threat actors' reluctance to use large language models (LLMs) through an analysis of discussions on four prominent dark-web forums. It found little enthusiasm among threat actors for these tools, with some expressing apprehension about the broader risks associated with them.

Across the forums scrutinized, only 100 posts on AI were identified, compared with 1,000 posts about cryptocurrency over the same period.

The study disclosed that the bulk of LLM-related discussions centered around compromised ChatGPT accounts available for purchase and methods to bypass the safeguards embedded in LLMs, known as ‘jailbreaks.’

Furthermore, the researchers noted the emergence of 10 ChatGPT derivatives purportedly capable of facilitating cyber-attacks and malware development. Cybercriminals' reactions to these derivatives were mixed, however, with many suspecting that the creators of the ChatGPT imitations were running scams.

The research team highlighted that most endeavors to craft malware or hacking tools using LLMs were deemed “rudimentary” and often met with skepticism from other users. Notably, an instance was documented where a threat actor inadvertently divulged personal information while showcasing ChatGPT’s capabilities. Many users harbored cybercrime-specific concerns regarding LLM-generated code, encompassing worries about operational security and AV/EDR detection.

Moreover, the forums hosted numerous ‘thought pieces’ addressing the adverse societal impacts of AI.

Christopher Budd, the director of X-Ops research at Sophos, remarked, “At least for now, it seems that cybercriminals are engaging in the same deliberations about LLMs as the general populace.” He further emphasized, “Despite the significant apprehensions surrounding the misuse of AI and LLMs by cybercriminals post the ChatGPT release, our research indicates that threat actors are more circumspect than enthusiastic.”

### Preparation for the Onset of AI-Driven Threats

Despite cybercriminals' current reluctance to adopt AI tools, Sophos has published separate research showing that LLMs can facilitate large-scale fraud with minimal technical expertise.

Using LLM tools like GPT-4, the team constructed a fully operational e-commerce platform featuring AI-generated content, including images, audio, and product descriptions. The platform even incorporated a counterfeit Facebook login and checkout page to steal users' login credentials and credit card information.

Sophos X-Ops demonstrated the ability to generate hundreds of similar websites instantaneously with a single click.

The firm clarified that the research was conducted to proactively address AI-driven threats of this nature before they become widespread.

Ben Gelman, senior data scientist at Sophos, articulated, “If there exists AI technology capable of orchestrating comprehensive, automated threats, it will inevitably be exploited. We have already witnessed the fusion of generative AI components in traditional scams, like AI-generated text or images aimed at ensnaring victims.”

Last modified: February 6, 2024