In a sobering reflection of our times, the cybersecurity sphere is witnessing what many now term the dawn of AI-powered hacking—a high-stakes confrontation where both attackers and defenders are increasingly armed with artificial intelligence.
An array of actors—ranging from black-hat criminals and state-sponsored spies to tech researchers and corporate security teams—are harnessing large language models in their operations. These tools, though fallible, have become notably adept at interpreting instructions, generating code, and analysing documentation—raising the stakes across the board.
Google’s security teams are now embedding AI into their vulnerability discovery processes, while startups such as Xbow have drawn attention by climbing to the top of vulnerability-disclosure leaderboards with AI-driven tools. Meanwhile, CrowdStrike has turned to AI to assist individuals who suspect they may have been hacked.
State-level use of AI carries darker undertones, too. North Korean operatives, for instance, are known to employ generative AI to fabricate resumes and social media profiles, securing employment at Western technology firms under false pretences, then using AI tools to blend in and mask their true intentions.
Yet the rise of AI in cybersecurity is not an unalloyed success. Google’s Heather Adkins cautions that despite AI’s rapid adoption, the sector has yet to see it uncover entirely novel classes of vulnerability; for now, AI appears to be replicating known techniques rather than inventing new ones.
Similarly, Daniel Stenberg, the open-source developer behind the ubiquitous curl project, has expressed growing frustration with AI-generated bug reports. He reports that in early July only about 5 per cent of submissions were valid, while roughly 20 per cent were “AI slop”, raising concerns about the time and resources wasted sifting through irrelevant or misleading findings.
The battlefield is blurring further still: Russian-affiliated cyber actors are reportedly embedding AI into malware deployed against Ukraine, automating the discovery and exfiltration of sensitive files.
So, is this truly the “era of AI hacking”? The evidence points to an intensifying cyber arms race in which AI acts as a force multiplier rather than a fully autonomous agent. Its impact, speeding up offensive tactics, empowering defenders and introducing new vulnerabilities, underscores the urgent need for ethical governance, transparent protocols and human oversight.
The convergence of AI and cyber conflict is no longer speculative but very much present—posing profound challenges for industry, government, and digital society at large.

