The rapid spread of generative AI tools in workplaces—often without formal approval—is fuelling a growing cybersecurity concern: shadow AI. The term refers to the unauthorised use of AI platforms, chatbots or large language models within organisations, bypassing established IT governance and security protocols.
A recent report by IBM, covering 600 organisations impacted by data breaches between March 2024 and February 2025, flagged shadow AI as an emerging corporate vulnerability. While the global average cost of breaches fell slightly to USD 4.44 million in 2025, Indian organisations saw an increase—from USD 2.35 million to USD 2.51 million. Notably, 63 per cent of the surveyed organisations admitted they did not have a formal AI governance policy in place.
Unseen but growing
Unlike shadow IT, where employees deploy unauthorised software or devices, shadow AI specifically involves the use of generative AI tools such as ChatGPT, open-source models or custom-built AI agents without institutional oversight. Employees often paste sensitive internal data into these tools, inadvertently exposing it to third-party servers or public AI training datasets.
Surveys suggest that up to 75 per cent of employees use AI tools at work without formal approval. One security firm found the average company had 67 AI tools in operation—90 per cent of which were unlicensed or unmanaged. This creates a chaotic environment for data governance and heightens the risk of data loss.
Data leaks and legal risks
The risks of shadow AI are manifold. Sensitive material, such as proprietary code, strategy documents or client information, can easily be shared with models that store or replicate such content. Once exposed, this data may be used to train public models or accessed by actors outside the organisation.
Moreover, unsanctioned tools often fall outside the scope of compliance frameworks such as GDPR, HIPAA, PCI DSS or the EU AI Act. Without clear audit trails, organisations face legal exposure and financial penalties. In regulated industries such as finance, healthcare or defence, this can be particularly damaging.
Additionally, generative AI models are prone to producing hallucinated or biased outputs. When such unverified content is relied upon for decision-making, it may lead to operational errors, reputational harm or even safety incidents.
Weaponisation of rogue AI
Shadow AI is also being harnessed by cybercriminals. Tools like FraudGPT, WormGPT and GhostGPT are being marketed on dark web forums to enable low-skill hackers to produce convincing phishing campaigns, automate malware creation, and deploy social engineering attacks at scale.
More advanced threats involve agentic AI—autonomous software bots that mimic human behaviour, conduct reconnaissance, and execute multi-step cyberattacks without direct supervision. These tools are redefining the threat landscape, requiring a rethink of traditional cybersecurity strategies.
Managing the invisible
Cybersecurity experts warn that banning AI tools entirely is counterproductive. Instead, they recommend a balanced approach: enabling responsible AI usage through effective governance and oversight.
This typically involves four stages: assessing all AI tools in use; crafting clear AI usage policies; deploying technical controls for data security and access; and educating employees on responsible AI practices. Some companies have begun using specialised tools to monitor AI usage, enforce internal guardrails and detect anomalies.
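To make the first of those stages concrete, the sketch below shows one way an inventory could be bootstrapped: scanning a web-proxy log for traffic to known generative-AI endpoints. It is a minimal illustration, assuming a simple CSV log format and a hand-maintained domain watchlist; the column names and domains are hypothetical placeholders, not any product's schema.

```python
# Minimal sketch: bootstrapping a shadow-AI inventory from web-proxy logs.
# Assumes a CSV log with header columns: timestamp, user, destination_host.
import csv
from collections import Counter

# Hypothetical watchlist of generative-AI endpoints; a real deployment
# would maintain a far larger, regularly updated list.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "huggingface.co",
}

def inventory_ai_usage(log_path: str) -> Counter:
    """Count requests per (user, AI domain) pair found in the proxy log."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            # Match the host itself or any subdomain against the watchlist.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                usage[(row["user"], host)] += 1
    return usage

if __name__ == "__main__":
    for (user, host), hits in inventory_ai_usage("proxy.csv").most_common(10):
        print(f"{user} -> {host}: {hits} requests")
```

Real deployments would draw on richer telemetry than a single proxy log, but even a crude inventory of this kind makes the scale of unsanctioned usage visible.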
Security vendors, meanwhile, are responding with AI-native cybersecurity solutions that integrate directly into enterprise environments, offering visibility into both authorised and rogue AI operations.
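The guardrails such tools enforce can be as simple as scrubbing sensitive patterns from prompts before they leave the corporate network. The toy sketch below illustrates the idea; the regular expressions and placeholders are illustrative assumptions, not any vendor's actual rule set.

```python
# Toy illustration of a prompt guardrail: redact obviously sensitive
# patterns before text is allowed to reach an external AI service.
import re

# Hypothetical patterns for personal data and secrets (illustrative only).
REDACTION_RULES = [
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),
    (re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"), "[API_KEY]"),
    (re.compile(r"\b\d{13,19}\b"), "[CARD_NUMBER]"),
]

def redact(prompt: str) -> tuple[str, int]:
    """Return the scrubbed prompt and how many redactions were made."""
    hits = 0
    for pattern, placeholder in REDACTION_RULES:
        prompt, n = pattern.subn(placeholder, prompt)
        hits += n
    return prompt, hits

clean, hits = redact("Summarise: contact jane@corp.com, key sk-AbC123xYz456DeF789gHi")
print(hits, "redactions ->", clean)
```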
A wake-up call for enterprises
Shadow AI may be less visible than conventional breaches, but it poses a comparable—if not greater—risk. As AI becomes embedded in daily business processes, organisations face a crucial inflection point: to ignore shadow AI and risk exposure, or to confront it head-on with a mix of policy, technology and cultural change.
The solution lies not in curbing innovation, but in governing it wisely. Those who manage the shadow now will be better positioned to lead the future.