
### Employee Falls Victim to Video Call Deepfakes, Costing Company Millions

An AI-generated representation of the company’s chief financial officer was convincing enough…

An employee at the Hong Kong branch of a multinational corporation mistakenly transferred nearly $26 million to fraudsters last month after participating in a video call with deepfake versions of their colleagues, including the company’s chief financial officer.

During the video call, the employee was the sole human participant, while the other individuals were deepfake replicas generated using artificial intelligence, as disclosed by a Hong Kong police official to the press on Sunday.

"The scammers used publicly available video and audio of the targeted individuals from platforms like YouTube, leveraging deepfake technology to replicate their voices. This deceptive tactic was employed to manipulate the victim into complying with their demands," explained Baron Chan, who declined to disclose the company's name.

According to the South China Morning Post, the scammers even had the victim introduce themselves to the fabricated group during the video conference.

Because the deepfake avatars so closely resembled the actual individuals on the call, the employee followed the instructions and made 15 transfers to five local bank accounts, totaling 200 million Hong Kong dollars.

Following the initial video conference, which featured a remarkably realistic portrayal of the company’s CFO based in the U.K., the scammers continued their ruse through instant messages, emails, and additional one-on-one video calls.

It wasn’t until about a week later, when the employee contacted the company’s headquarters, that they realized they had been caught in a scam.

The Hong Kong police were officially informed on January 29th. As of now, no arrests have been reported, and the investigation remains ongoing.

This intricate scam unfolds amidst a troubling surge in the dissemination of AI-generated nonconsensual explicit content online.

Recently, X, previously known as Twitter, grappled with a series of counterfeit sexually explicit images featuring Taylor Swift. To address this issue, the social media platform resorted to restricting searches related to the singer’s name.

Last modified: February 7, 2024