
### How Financial Fraudsters Use Generative AI to Enhance Email Scams

Financial scams using generative AI are the latest hacking challenge facing companies and their employees.


  • Even organizations that bar employees from using generative artificial intelligence are vulnerable to financial scams that use the technology to sharpen hackers’ traditional phishing methods.
  • Armed with tools such as ChatGPT or its illicit counterpart, FraudGPT, criminals can easily fabricate realistic-looking videos of financial statements, forged IDs, false personas, and even convincing deepfakes of corporate executives using their voice and image.
  • A recent incident costing a Hong Kong-based company over $25 million exemplifies the sophistication of these crimes and the challenges in detecting them.

More than a quarter of companies now restrict their employees from using generative AI. However, this restriction does little to shield against malicious actors leveraging the technology to deceive employees into divulging sensitive data or processing fraudulent payments.

Empowered by ChatGPT or its dark web equivalent, FraudGPT, criminals can seamlessly produce realistic videos of financial reports, counterfeit IDs, fictitious identities, or even authentic-looking deepfakes of a company executive that use their voice and image.

The statistics paint a grim picture. In a recent survey conducted by the Association of Financial Professionals, 65% of participants reported that their organizations encountered attempted or actual payments fraud in 2022. Among those who suffered financial losses, 71% fell victim to email-related compromises. According to the survey, larger enterprises with annual revenues exceeding $1 billion faced a higher susceptibility to email scams.

Phishing emails rank among the most prevalent email scams. These deceptive emails mimic reputable sources like Chase or eBay, urging recipients to click on a link redirecting them to a counterfeit yet convincing website. The email typically prompts the unsuspecting victim to log in and disclose personal information, enabling criminals to access bank accounts or perpetrate identity theft.
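
To make the mechanics concrete, here is a minimal sketch, assuming a plain Python environment, of one common heuristic for spotting such emails: checking whether a link actually points to the brand it claims to represent. The sample URLs, the `TRUSTED_DOMAINS` set, and the `link_looks_suspicious` helper are hypothetical and illustrative only, not a real email security control.

```python
# Illustrative sketch: flag emails whose links point somewhere other than
# the brand they claim to come from. Names and sample data are hypothetical.
from urllib.parse import urlparse

# Domains the recipient actually does business with (assumed list).
TRUSTED_DOMAINS = {"chase.com", "ebay.com"}

def link_looks_suspicious(claimed_brand_domain: str, url_in_email: str) -> bool:
    """Return True if a link claims to be one brand but resolves to another domain."""
    host = urlparse(url_in_email).hostname or ""
    # Accept the exact domain or a subdomain of it (e.g. secure.chase.com).
    matches_claim = host == claimed_brand_domain or host.endswith("." + claimed_brand_domain)
    return not (matches_claim and claimed_brand_domain in TRUSTED_DOMAINS)

# Example: an email styled as Chase whose button links to a look-alike site.
print(link_looks_suspicious("chase.com", "https://chase-verify-login.example.net/login"))  # True
print(link_looks_suspicious("chase.com", "https://secure.chase.com/login"))                # False
```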

Spear phishing, a more targeted variant, tailors emails to individuals or specific organizations. Perpetrators invest time in researching job titles, colleagues’ names, and even details about supervisors or managers.

#### Evolution of Traditional Scams

While these scams are not new, generative AI makes it far harder to separate reality from deception. Previously, anomalies such as odd fonts or grammatical errors were telltale signs of a fraudulent email. Now, criminals worldwide use ChatGPT or FraudGPT to craft convincing phishing and spear phishing emails, and they can even impersonate a CEO or manager, borrowing their voice for a fraudulent phone call or their image on a video call.

A recent incident in Hong Kong exemplifies this trend, where a finance employee received a message purportedly from the company’s UK-based chief financial officer, requesting a $25.6 million transfer. Despite initial suspicions of a phishing attempt, the employee’s doubts were dispelled during a video call with the CFO and other familiar colleagues—unbeknownst to him, all participants were deepfakes. Only upon verification with the head office did the deceit come to light, albeit after the transfer was executed.

Christopher Budd, director at cybersecurity firm Sophos, remarked, “The level of detail invested in making these scams credible is remarkably impressive.”

Recent high-profile deepfake incidents involving public figures underscore the rapid advancement of this technology. For instance, a fabricated investment scheme featured a deepfaked Elon Musk endorsing a fictitious platform. Similar deepfaked videos of Gayle King, Tucker Carlson, and Bill Maher discussing Musk’s purported venture circulated on social media platforms like TikTok, Facebook, and YouTube.

Andrew Davies, global head of regulatory affairs at ComplyAdvantage, highlighted how easily synthetic identities can be generated with generative AI from stolen or fabricated information. Cyril Noel-Tagoe, principal security researcher at Netacea, emphasized how criminals mine publicly available online data to craft realistic phishing emails, using large language models trained on the internet to gather insights about companies and their executives.

#### Risks for Large Companies amid API Proliferation and Payment Apps

While generative AI enhances the credibility of threats, the proliferation of automation and the expanding landscape of websites and apps facilitating financial transactions exacerbate the risks.

Davies noted, “The evolution of fraud and financial crime is catalyzed by the transformation of financial services.” A decade ago, electronic fund transfers were handled almost exclusively by traditional banks. The emergence of payment services such as PayPal, Zelle, Venmo, and Wise has broadened the avenues for moving money, giving criminals more targets. Moreover, traditional banks increasingly rely on APIs that connect apps and platforms, a convenience that also introduces another point of vulnerability.

Criminals leverage generative AI to swiftly craft authentic-looking messages and scale their operations through automation. Davies explained, “It’s a numbers game. If I launch 1,000 spear phishing emails or CEO fraud attacks and even a fraction succeeds, the gains could be substantial.”
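
Davies’ “numbers game” is easy to put in rough terms. The back-of-the-envelope sketch below uses purely illustrative figures for volume, success rate, and average payout; none of them come from the article.

```python
# Back-of-the-envelope illustration of the "numbers game" -- all figures are
# assumptions for illustration, not data reported in the article.
emails_sent = 1_000                # automated spear phishing volume
success_rate = 0.002               # assume 0.2% of targets comply
avg_fraudulent_transfer = 50_000   # assumed average payout per success, in dollars

expected_payout = emails_sent * success_rate * avg_fraudulent_transfer
print(f"Expected take from one campaign: ${expected_payout:,.0f}")  # $100,000
```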

According to Netacea, 22% of surveyed companies reported being targeted by fake account creation bots, with the financial services sector facing a higher incidence at 27%. The survey revealed that 99% of companies detecting automated bot attacks observed a surge in such incidents in 2022. Notably, larger enterprises, particularly those with revenues exceeding $5 billion, saw the most significant uptick in attacks. While fake account registrations affected all industries, the financial sector bore the brunt, with 30% of attacked financial businesses reporting that 6% to 10% of new accounts were fraudulent.

To combat AI-driven fraud, the financial industry is deploying AI models of its own. Mastercard, for instance, developed a new AI model to detect fraudulent transactions by identifying “mule accounts” that criminals exploit to move illicit funds.

Impersonation tactics are increasingly employed by criminals to convince victims of the legitimacy of transactions. Ajay Bhalla, President of Cyber and Intelligence at Mastercard, highlighted the challenges banks face in detecting such scams, emphasizing the importance of advanced algorithms in safeguarding against fraudulent activities.

#### Enhanced Identity Verification Measures

However convincing these schemes may appear, criminals, for all their growing sophistication, rarely have precise knowledge of a target company’s internal processes, and that gap gives organizations something to work with.

To mitigate the risk, employees should adhere to the specific money transfer protocols their organizations have established. Noel-Tagoe recommended that if a transfer request arrives over email or Slack when payments normally run through a dedicated platform, employees verify it through a separate channel before acting.
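
As a rough illustration of that advice, the sketch below encodes the rule as a simple check: any transfer request arriving outside the normal payments channel is held until it has been confirmed through an independent channel. The channel names and the `should_hold_transfer` helper are hypothetical, not any company’s actual policy.

```python
# Hypothetical sketch of an out-of-band verification rule for transfer requests.
# Channel names are assumptions for illustration only.
from typing import Optional

APPROVED_REQUEST_CHANNELS = {"payments_platform"}         # where transfers normally originate
OUT_OF_BAND_CHANNELS = {"phone_callback", "in_person"}    # independent confirmation paths

def should_hold_transfer(request_channel: str, confirmed_via: Optional[str]) -> bool:
    """Hold any transfer requested outside the normal channel until it has been
    confirmed through a separate, independent channel."""
    if request_channel in APPROVED_REQUEST_CHANNELS:
        return False  # normal workflow, no extra step needed
    return confirmed_via not in OUT_OF_BAND_CHANNELS

# A request arriving over email with no phone confirmation is held;
# the same request confirmed by a callback to a known number is released.
print(should_hold_transfer("email", None))              # True
print(should_hold_transfer("email", "phone_callback"))  # False
```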

Companies are exploring advanced authentication methods to differentiate genuine identities from deepfakes. Current digital identity verification processes often entail submitting an ID and real-time selfie. Future protocols might incorporate actions like blinking or verbal confirmation to discern between live video feeds and pre-recorded content.
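
One way to picture the blinking-or-verbal-confirmation idea is a randomized challenge-response check, in which the verification service asks for an action that a pre-recorded clip or canned deepfake is unlikely to produce on cue. The outline below is a hypothetical sketch of that flow, not any vendor’s protocol; `detect_action` is a placeholder for a real computer-vision or audio model.

```python
# Hypothetical outline of a challenge-response liveness check.
import secrets

CHALLENGES = ["blink twice", "turn your head to the left", "say the phrase 'seven green apples'"]

def detect_action(video_frames, challenge: str) -> bool:
    """Placeholder for a model that checks whether the requested action
    actually occurs in the live video/audio stream."""
    raise NotImplementedError("replace with a real liveness-detection model")

def liveness_check(video_frames) -> bool:
    # Randomizing the challenge makes a pre-recorded clip or canned deepfake
    # unlikely to contain the right response at the right moment.
    challenge = secrets.choice(CHALLENGES)
    print(f"Please {challenge} now.")
    return detect_action(video_frames, challenge)
```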

While companies adapt to these challenges, cybersecurity experts caution that generative AI is fueling a surge in highly convincing financial scams. Christopher Budd from Sophos remarked, “In my 25 years in technology, the impact of AI is unprecedented, akin to adding jet fuel to the fire.”

