According to the financial technology firm Plaid, AI is helping many companies grow and develop, but the burgeoning technology is also helping criminals devise sophisticated forms of fraud.
Presentation attacks, also known as liveness attacks, have jumped 38% this month alone, a Plaid official told Fortune. A liveness attack occurs when bad actors try to trick the photo portion of a verification process by impersonating someone else. That can be done by holding up a printed picture, wearing a realistic-looking mask, or displaying a fake image on a screen. According to the spokesperson, about 25% of fraudulent ID document attempts used generative AI, while about 12% of all liveness attacks used AI-generated faces.
Given the abundance of AI-based tools, online data, and what can be found on the dark web, fraud is only becoming more common, according to Alain Meier, head of identity at Plaid.
“The bar for committing a high-quality fraud attack is falling year after year,” he continued.
About 90% of Plaid’s customers require some form of “know your customer,” or KYC, verification, the official said. As a result, companies must validate consumers when new accounts are created. For instance, many banks and financial services organizations now require customers to upload a picture of themselves as part of the verification process. Companies then match that photo to, say, the customer’s driver’s license.
However, this is where more sophisticated cybercriminals have found a workaround: they’re using AI to create deepfake videos of potential victims in order to access accounts. “Fraud has become very professional,” Meier added.
Meier shared a story in which Plaid’s systems, along with a human analyst, helped catch a potential scam. In late 2023, a financial services customer was signing up new users and relying on Plaid’s Identity Verification software to make sure genuine people were behind the accounts. Part of the process was a “liveness check,” in which the company asked users to submit a selfie video. Plaid’s software noticed that many users had similar IP addresses, raising a red flag.
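The kind of signal described here, many supposedly unrelated signups arriving from the same network, can be illustrated with a short sketch. This is not Plaid's actual detection logic; the function, threshold, and sample data below are all hypothetical, and grouping by the first three octets of an IPv4 address is just one simple heuristic:

```python
from collections import defaultdict

def flag_shared_networks(signups, threshold=3):
    """Group signups by /24 subnet (first three IPv4 octets) and flag
    any subnet used by `threshold` or more distinct accounts.
    Purely illustrative; real systems use richer network signals."""
    by_subnet = defaultdict(set)
    for account_id, ip in signups:
        subnet = ".".join(ip.split(".")[:3])  # e.g. "203.0.113"
        by_subnet[subnet].add(account_id)
    return {s: ids for s, ids in by_subnet.items() if len(ids) >= threshold}

# Hypothetical signup records: (account id, source IP)
signups = [
    ("acct-1", "203.0.113.10"),
    ("acct-2", "203.0.113.44"),
    ("acct-3", "203.0.113.97"),
    ("acct-4", "198.51.100.7"),
]
print(flag_shared_networks(signups))  # flags the 203.0.113.x cluster
```

A flagged cluster wouldn't prove fraud on its own; as in the anecdote, it would simply route those accounts to a human analyst for review.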
Afterward, a Plaid analyst looked through the videos and discovered that many shared the same background: a brick wall with numerous mounted devices. Further analysis revealed that the fakes were the work of an organized crime group with a presence in Eastern Europe.
But sometimes the deepfakes can’t be detected by eye. When that happens, Meier said, machine learning tools can be used to detect small anomalies in falsified documents, photos, or videos, as well as to evaluate background elements, associated file metadata, and how the materials were submitted.
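The metadata checks mentioned above can be sketched as simple heuristics. The field names, suspicious values, and rules below are assumptions for illustration only, not a description of Plaid's models, which the article says rely on machine learning rather than hand-written rules:

```python
def metadata_red_flags(metadata):
    """Return heuristic red flags found in a submitted file's metadata.
    All field names and checks here are hypothetical examples."""
    flags = []
    software = metadata.get("software", "").lower()
    # Editing or generation software recorded in the metadata
    if any(tool in software for tool in ("photoshop", "gimp", "stable diffusion")):
        flags.append(f"processed with editing/generation software: {software}")
    # Timestamps that contradict each other
    created, modified = metadata.get("created"), metadata.get("modified")
    if created and modified and modified < created:
        flags.append("modified timestamp precedes creation timestamp")
    # Photos normally carry camera information
    if not metadata.get("camera_model"):
        flags.append("no camera metadata (possible screenshot or render)")
    return flags

sample = {"software": "Stable Diffusion", "created": "2023-11-02", "modified": "2023-11-01"}
print(metadata_red_flags(sample))
```

In practice, such signals would be one input among many to a trained classifier rather than hard pass/fail rules.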
However, it’s difficult for some companies, especially small businesses, to keep up with the pace of evolving fraud tactics. Some simply lack the resources, Meier said. “The fraud is so sophisticated,” he added, “it may require every organization to have an in-house team of dedicated fraud experts.”