How AI-driven deepfakes are redefining truth, trust and national security
In an era where technology can outpace truth, the line between real and fake is vanishing faster than ever. At the BW Security World Conclave & Excellence Awards 2025, Brigadier Neil John, SM (Retd.), Chief of Security, Adani NMDPL, delivered a striking and candid address on the growing menace of deepfakes, warning that what once was science fiction is now a clear and present danger to security, reputation, and trust.
With characteristic wit, he began his session on a lighter note, drawing laughter with a story about his wife and a beauty parlour—only to segue into a far more serious point: “You judge what you see, and that’s where the danger lies.”
What followed, he explained, was a chilling account of how perception has become the new weapon on the digital battlefield.
Deepfakes are synthetic media created using artificial intelligence, primarily through Generative Adversarial Networks (GANs), that superimpose faces, alter voices, and mimic gestures with unnerving precision. What began as a novelty has evolved into an arsenal of manipulation, deception, and psychological warfare.
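The adversarial mechanism behind GANs can be illustrated with a toy sketch: a generator learns to produce samples that resemble real data while a discriminator learns to tell the two apart, each improving against the other. The one-dimensional "data", the single-parameter models, and all names below are illustrative assumptions, not any production deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: scalars drawn from N(4, 1). The generator and discriminator
# are single-parameter linear models, just enough to show the two-player loop.
g_w = 0.0                 # generator: noise z -> z + g_w
d_w, d_b = 0.1, 0.0       # discriminator: x -> sigmoid(d_w * x + d_b)
lr = 0.05

for step in range(500):
    real = rng.normal(4.0, 1.0, size=32)
    z = rng.normal(0.0, 1.0, size=32)
    fake = z + g_w

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(d_w * real + d_b)
    d_fake = sigmoid(d_w * fake + d_b)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1) + np.mean(d_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(d_w * fake + d_b)
    g_grad = np.mean((d_fake - 1) * d_w)
    g_w -= lr * g_grad

print(f"generator shift learned: {g_w:.2f} (real data centred at 4.0)")
```

Real deepfake generators operate on images, audio, or video rather than scalars, but the training dynamic is the same: the forger and the detector train against each other until the fakes become statistically hard to distinguish from the real thing.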
Brig. Neil John explained the tri-layered structure of the internet—the surface web, deep web, and dark web—revealing that only 4 per cent of the web is visible to us. The rest, he warned, ranges from unindexed academic data and medical records to a dark-web underworld of illegal content and identity manipulation.
“Your data is already available. From your property details to your banking information, you are being profiled every single day. Every camera, every app, every interaction leaves a trace,” he emphasised.
Profiling, he said, becomes dangerous when it moves from observation to judgment. “Tomorrow, when you apply for a job, you won’t be judged by who you are, but by your digital persona. AI will decide if you are ‘fit’ or not, long before you walk into the interview room,” he added.
In the military, such profiling could have devastating consequences—from misinformation campaigns to the digital impersonation of officers. What used to require espionage and months of planning can now be executed by a teenager with a smartphone.
Deepfakes are not just a national security concern; they strike at the heart of personal dignity. The Brigadier recounted the case of a 65-year-old woman who lost Rs 14.5 lakh to a romance scam after being lured by a fake social media profile. The scam exploited loneliness, trust, and technology to devastating effect.
“The voice was perfect, the documents authentic, the customs stamp real—but everything was fake. Human deception, powered by technology,” he said.
Such stories, he noted, are no longer isolated. India now records 7,000 cybercrime complaints daily, with hotspots emerging across smaller towns like Bharatpur, often led by youth as young as 15.
He described how even short accidental video interactions can be exploited. In one case, a man’s brief exposure to a malicious video call led to blackmail using deepfake pornographic content—a crime that relies not on what happened, but on what appears to have happened.
“Fear is the most potent tool of manipulation,” Brig. John remarked.
One of the most dangerous implications of deepfakes, he said, is the “liar’s dividend.” When fake becomes indistinguishable from real, truth itself becomes negotiable.
“If everything can be faked, nothing can be believed,” he warned.
He pointed out how political figures—from Prime Minister Narendra Modi to Rahul Gandhi—have been targeted by deepfake videos, blurring the line between satire and subversion, dissent and deception.
In a moment of reflection, Brig. Neil John quoted an anonymous statement that struck him deeply:
“I want AI to do my laundry and dishes so that I can do art and writing—not for AI to do my art and writing so that I can do laundry and dishes.”
It captures, he said, the essence of our struggle with artificial intelligence—whether it serves us or replaces us.
What began as harmless filters has now evolved into generative warfare, where identities can be cloned, reputations erased, and public opinion hijacked in seconds. With autoencoders and AI image synthesis tools like Meta AI and Gemini, he warned, “Anyone can turn me into a dinosaur, make me dance Kathakali, or make the Prime Minister laugh in uniform in just ten seconds.”
This, he said, is the frightening power of “toys for the AI boys.”
Brigadier John’s conclusion was not just cautionary but catalytic. He called for an integrated national security framework that recognises deepfakes as digital weapons, urging collaboration between cyber law, AI ethics, and law enforcement.
His advice to citizens was straightforward: do not answer unsolicited video calls; verify before you trust; report blackmail or fraud rather than reacting to it; and remember that fear fuels crime while awareness disarms it.
In Brig. Neil John’s words, “The most dangerous war is not fought on borders, but within your mind—between what’s true and what’s made to look true.”
As AI continues to evolve, so must our understanding of truth, security, and human integrity. Because in this new era of synthetic deception, the price of ignorance is not just money—it’s identity itself.