As artificial intelligence fuels a surge in sophisticated fraud, identity verification is becoming a digital battleground—one where both sides are increasingly deploying the same technology.
Fraudsters are leaning on generative AI to forge synthetic identities—faces conjured by deepfake algorithms that once gave themselves away through too-perfect symmetry or spotless skin. But that’s changing, says Hal Lonas, chief technology officer at identity verification company Trulioo. “The algorithms now insert blemishes and slight imperfections,” he says. “It’s definitely a cat-and-mouse game.”
To counter the threat, Trulioo has begun using AI to fight back, training machine learning models to recognise fraudulent patterns. “AI is really good at being trained to catch AI,” Lonas adds. “It doesn’t get tired, doesn’t take vacation… the computer keeps learning and getting smarter.”
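The idea of training a model to separate genuine from synthetic samples can be sketched with a toy perceptron. The features, labels, and data below are invented for illustration; this is not Trulioo's method, and real systems use far richer signals and models.

```python
# Toy sketch: a perceptron learns to separate "genuine" from "synthetic"
# feature vectors. All features and values here are illustrative.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so sign(w.x + b) matches labels (+1/-1)."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: nudge weights toward y
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Hypothetical features: [skin_texture_variance, blink_irregularity].
# Genuine faces show natural texture and motion (high values); synthetic
# ones historically scored low on both.
genuine = [[0.9, 0.8], [0.8, 0.7], [0.85, 0.9]]   # label +1
synthetic = [[0.2, 0.1], [0.3, 0.2], [0.1, 0.3]]  # label -1
w, b = train_perceptron(genuine + synthetic, [1, 1, 1, -1, -1, -1])
print(predict(w, b, [0.15, 0.2]))  # a too-smooth, too-regular sample -> -1
```

The "keeps learning" point maps to retraining: as fraudsters add blemishes and imperfections, new labelled samples shift the decision boundary.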
A multi-layered approach to fraud detection
It’s not just facial deepfakes that are causing alarm. Forged documents and biometric spoofing—such as AI-generated voices and video chats—are becoming harder to detect, according to identity technology firm Microblink. Generative AI systems, which evolve through adversarial feedback loops, are producing increasingly convincing fake documents and personas.
But AI defences can match this sophistication, says Albert Roux, executive vice-president for identity products at Microblink. “AI-driven liveness detection is crucial for verifying that a user is a real, live person—not a spoof using photos, videos or digital manipulations,” he says. By analysing facial movements, micro-expressions and other subtle cues, AI can help separate fact from forgery.
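A liveness check of the kind Roux describes can be thought of as aggregating subtle motion cues into a single score. The cue names, weights, and threshold below are assumptions made for this sketch; production systems derive such signals from trained models rather than hand-set values.

```python
# Illustrative liveness check: combine normalised motion cues (each in
# [0, 1]) into a weighted score and compare it against a threshold.

LIVENESS_WEIGHTS = {
    "blink_rate": 0.4,           # natural blinking is hard to replay convincingly
    "micro_expression": 0.35,    # involuntary facial movements
    "head_pose_variance": 0.25,  # small 3D head motion a flat photo lacks
}

def liveness_score(cues: dict) -> float:
    """Weighted sum of cue strengths; missing cues count as zero."""
    return sum(LIVENESS_WEIGHTS[name] * cues.get(name, 0.0)
               for name in LIVENESS_WEIGHTS)

def is_live(cues: dict, threshold: float = 0.6) -> bool:
    return liveness_score(cues) >= threshold

# A static photo spoof shows almost no motion cues:
photo_spoof = {"blink_rate": 0.0, "micro_expression": 0.1, "head_pose_variance": 0.05}
real_user = {"blink_rate": 0.9, "micro_expression": 0.8, "head_pose_variance": 0.7}
print(is_live(photo_spoof), is_live(real_user))  # False True
```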
Financial services on high alert
Nowhere is the impact of AI-driven fraud more keenly felt than in the financial sector. The Financial Services Information Sharing and Analysis Center (FS-ISAC) estimates that AI-generated fraud could cost US institutions as much as USD 40bn by 2027. Still, financial institutions are no strangers to defensive technology.
“Financial institutions have been leveraging technology to combat cyber and fraud attacks for decades,” says Linda Betz, FS-ISAC’s executive vice-president for global community engagement. With the help of AI, many are automating key processes such as transaction monitoring, identity checks and account verification—shoring up frontline defences against digital deception.
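Automated transaction monitoring often starts from a baseline-and-deviation rule. The z-score rule and threshold below are illustrative only; institutions tune such rules per account and layer ML models on top.

```python
# Sketch: flag transactions that sit far above an account's recent baseline.
from statistics import mean, stdev

def flag_transactions(history, new_amounts, z_threshold=3.0):
    """Return the new amounts more than z_threshold standard deviations
    above the historical mean spend."""
    mu, sigma = mean(history), stdev(history)
    return [a for a in new_amounts if sigma > 0 and (a - mu) / sigma > z_threshold]

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]    # typical spend
print(flag_transactions(history, [49.0, 950.0]))  # [950.0]
```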
When AI meets human manipulation
Yet even as detection capabilities improve, some attack vectors remain elusive. Social engineering and insider threats—where attackers manipulate behaviour or exploit legitimate access—continue to outmanoeuvre many AI systems.
“These attacks rely on manipulating human psychology,” says Roux. “While AI can detect some anomalies in behaviour, it’s less adept at understanding the nuances of human interaction and deception—especially in remote or distributed contexts.”
Keeping pace in an escalating arms race
Trulioo says it adapts its AI defences regularly to stay ahead of the threat. The company conducts red-team exercises using AI-powered attacks to test its systems, and pushes updates every two weeks to its machine learning platform. Lonas stresses the importance of a layered approach: looking not just at individual documents or faces, but also at metadata, submission patterns and camera movement.
“In cybersecurity, we always talk about multilayered defence,” he says. “The same is true in identity—where you need to catch things at multiple levels.”
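The layered idea can be sketched as combining independent checks into one risk decision, so a forgery that fools one layer still leaks risk at another. The layer names, weights, and threshold here are invented for illustration, not a description of any vendor's system.

```python
# Minimal sketch of multilayered identity risk: each layer contributes a
# risk score in [0, 1]; the request escalates when the combined score is high.

def combined_risk(layer_scores: dict, weights: dict) -> float:
    """Weighted average of per-layer risk scores; missing layers count as 0."""
    total_w = sum(weights.values())
    return sum(weights[k] * layer_scores.get(k, 0.0) for k in weights) / total_w

weights = {"document": 0.3, "face": 0.3, "metadata": 0.2, "submission_pattern": 0.2}

# A convincing fake may pass the document and face layers, but suspicious
# metadata and submission patterns still push the combined risk up:
scores = {"document": 0.2, "face": 0.1, "metadata": 0.8, "submission_pattern": 0.9}
risk = combined_risk(scores, weights)
print("escalate" if risk > 0.4 else "pass")  # escalate
```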
As generative AI continues to evolve, so too must the systems designed to detect it. The outcome of this technological tug of war could shape the future of digital identity—one algorithmic battle at a time.