
Deepfake Attacks To Impact Global Banking & Business: Report

New industry data reveals that synthetic media, from algorithmically generated video to sophisticated voice impersonations, is now among the three most frequent forms of global fraud

The line between the real and the synthetic has effectively vanished, as advancements in generative AI are fuelling an unprecedented wave of digital identity fraud globally. Technologies like OpenAI’s Sora and sophisticated voice cloning are rapidly moving deepfakes from niche cyber threats to one of the most urgent operational and financial crises facing multinational corporations.

According to a new report from identity verification vendor Regula, identity spoofing, biometric fraud, and deepfake fraud now constitute the three most frequent fraud types worldwide. For high-impact organisations reporting losses exceeding USD 5 million, synthetic IDs and deepfakes top the list of common attack vectors.

“Deepfakes are no longer fringe threats—they are the main driver of identity fraud at scale,” the Regula report states. It concludes that fundamental defences like Presentation Attack Detection (PAD) and advanced liveness checks have become non-negotiable baselines for any exposed firm.

Scale Of Threat

The urgency of the crisis was echoed by top executives at the recent Singapore Fintech Festival, who pointed to AI-driven cyberattacks and deepfakes as critical threats to financial stability.

Craig Vosburg, chief services officer at Mastercard, painted a stark picture of the economic fallout, noting that if cybercrime were a country, “it would be the third largest GDP in the world.”

The scale of direct attacks is also escalating rapidly, particularly in Asia. Ant International's Chief Executive Officer, Yang Peng, revealed that while his firm detected its first deepfake attack in January 2024, it now faces more than 20,000 deepfake attack attempts a day globally.

Furthermore, new generative video tools like OpenAI’s Sora 2 and Google’s Veo are creating hyper-realistic media that enhances social engineering tactics. Experts warn of alarming scenarios, from deepfaked bosses ordering fraudulent wire transfers (like the USD 25 million fraud against an Arup employee in Hong Kong) to deepfake candidates infiltrating remote hiring processes.

Scramble For Defence

In response, regulatory bodies and the private sector are demanding a shift towards rigorous, multilayered authentication. The Monetary Authority of Singapore (MAS) has highlighted three key threats:

Biometric System Defeat: Deepfakes are bypassing standard biometric authentication in countries like Indonesia and Vietnam.

Amplified Social Engineering: Hyper-realistic impersonations are used in video and voice calls to trick employees into high-risk actions.

Targeting Market Confidence: Misinformation campaigns use deepfakes of public figures or fake footage to manipulate markets and brand reputation.

The recommended antidote is a combination of advanced technology and stricter policy, including multi-factor authentication (MFA) for high-risk activities and mandatory, action-prompted liveness detection to defeat synthetic images.
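To illustrate the idea behind action-prompted liveness detection, the following is a minimal, hypothetical sketch (not any vendor's actual implementation): the verifier issues a random action challenge bound to a one-time nonce, so a pre-rendered deepfake video cannot simply be replayed. The challenge list, function names, and the assumption that an upstream video model supplies the detected action are all illustrative.

```python
import random
import secrets

# Hypothetical action challenges a real system might prompt on camera.
CHALLENGES = ["turn head left", "blink twice", "read these digits aloud"]

def issue_challenge():
    """Pick a random action and a one-time nonce tying the response
    to this specific verification session."""
    return {"action": random.choice(CHALLENGES), "nonce": secrets.token_hex(8)}

def verify_response(challenge, response):
    """Accept only if the action detected in the live video matches the
    issued challenge and the nonce is echoed back unchanged.
    (A production system would run a liveness/video model to produce
    'detected_action'; here that detection result is assumed as input.)"""
    return (response.get("detected_action") == challenge["action"]
            and response.get("nonce") == challenge["nonce"])
```

Because the prompted action is chosen at random per session, an attacker replaying previously generated synthetic footage would fail the check unless the deepfake can be produced in real time, which is precisely the bar this class of defence raises.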

New Defence Partnerships

The security industry is now racing to integrate new detection layers directly into high-trust processes:

Legal and Financial Services: Secured Signing, a digital signature and notarization provider, has partnered with detection firm Reality Defender to launch ‘Realify,’ a tool that analyses a signer’s video and audio in real-time to generate a risk score before and during online meetings.

Telecommunications: Pindrop has partnered with BT Group to deploy its voice security solutions across UK enterprises. The solution combines device recognition and “phoneprinting” technology to combat synthetic speech; Pindrop warns that 1 in 106 calls already shows signs of deepfake activity.

As deepfakes cement their position as a primary engine of digital criminality, the overarching message from experts is unanimous: organisations must stop treating synthetic fraud as a future possibility and instead deploy defensive action immediately.
