BW Security World

AI Usage Surge, Governance Failing To Keep Pace

Among the 254 AI-enabled apps in use, 7 per cent originate from China — a growing share that’s drawing attention from national security agencies

Generative AI (GenAI) is no longer a novelty — it’s foundational. Enterprises like Shopify are mandating AI use across teams, with a policy that staff must prove AI cannot do a task before requesting headcount. The technology is now embedded deeply across functions, with recent research revealing that the average enterprise uses 254 distinct AI-enabled applications.

However, while adoption is skyrocketing, security and governance are lagging behind — and the risks are multiplying.

Rising Tide Of Chinese GenAI Tools

Of those 254 AI-enabled apps, 7 per cent originate from China. January’s headlines were dominated by DeepSeek, a Chinese GenAI app that saw such widespread internal experimentation that the Pentagon and US lawmakers scrambled to block its use.

This may just be the beginning. Tools like Manus, Ernie Bot, Kimi Moonshot, Qwen Chat, and Baidu Chat are gaining traction, fuelled by backing from tech giants like Alibaba and Baidu. In China, data sovereignty laws mean anything shared with these apps can potentially be accessed by the state. A US House panel has already deemed DeepSeek a national security risk — a warning sign of China’s rapidly maturing AI ecosystem, which now rivals Western players in enterprise adoption and consumer reach.

The speed at which new apps can appear and go viral within organisations is alarming. Tech-savvy employees, eager to gain a competitive edge, are naturally inclined to explore the latest tools — often without fully understanding the risks.

Productivity vs Protection: Employees Don’t See Risk

Most employees aren’t acting with malicious intent — they’re just trying to work smarter. Tools like ChatGPT have become so compelling that, according to Fishbowl, 68 per cent of users hide their usage from managers, and nearly half would continue even if it were banned.

An analysis of 176,000 GenAI prompts submitted between January and March found that 6.7 per cent of them potentially disclosed company data. This includes everything from financial information to source code — data that should never leave the corporate perimeter.

Worryingly, almost half (45.4 per cent) of these sensitive inputs came from users logged in with personal email addresses. This pushes the data beyond the scope of corporate control — no visibility, no logs, no assurance. In fact, 21 per cent of sensitive data submissions were sent to ChatGPT’s free tier, where prompts may be retained and used for model training.
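The kind of screening behind these findings can be approximated with simple pattern rules. Below is a minimal sketch, assuming a hypothetical set of regexes for common sensitive markers and a hypothetical list of personal email domains; real data-loss-prevention tools use far richer classifiers than this.

```python
import re

# Hypothetical patterns for illustration only; production DLP relies on
# trained classifiers and context, not bare regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "source_code": re.compile(r"\b(?:def |class |import |function\s*\()"),
}

# Assumed set of consumer email domains outside corporate control.
PERSONAL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com"}

def screen_prompt(prompt: str, user_email: str) -> dict:
    """Flag a GenAI prompt that may disclose company data."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]
    domain = user_email.rsplit("@", 1)[-1].lower()
    return {
        "sensitive": bool(hits),
        "categories": hits,
        # Personal accounts mean no corporate visibility, logs, or assurance.
        "personal_account": domain in PERSONAL_DOMAINS,
    }

result = screen_prompt("import os  # internal build script", "dev@gmail.com")
```

A screen like this would flag the example above both for code-like content and for being submitted from a personal account, the combination the research highlights as hardest for security teams to see.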

Blocking Isn’t The Answer

Some organisations may respond by trying to block GenAI tools altogether. But this is neither practical nor effective.

Most SaaS platforms now include embedded AI features, making blanket bans impossible to enforce without disrupting entire workflows. Moreover, employees will always find workarounds — using personal devices, jumping off the VPN, or switching to unmonitored apps. This results in even less visibility and greater risk.

As Keith Odom, EVP of Consulting & Services at Ahead, puts it:

“AI isn’t going away — and trying to block it only pushes usage underground. The real value comes when security teams shift from gatekeepers to enablers, building trust while ensuring safe adoption.”

It’s Time For Guardrails, Not Gates

What’s needed now is a proactive security approach — one that embraces AI’s business value while establishing strong governance frameworks. This means identifying what tools are being used, understanding their data handling processes, and implementing clear policies and controls.
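In practice, "guardrails, not gates" tends to start with an inventory and a tiered policy rather than a binary block. The sketch below illustrates the idea; the tool names, tier labels, and inventory are hypothetical, and a real inventory would come from network and SaaS discovery.

```python
from enum import Enum

class Policy(Enum):
    ALLOW = "allow"            # approved: corporate tenant, logging enabled
    REDACT = "redact"          # allowed, but sensitive fields stripped first
    COACH = "coach"            # permitted with an in-line warning to the user
    BLOCK = "block"            # e.g. apps subject to foreign data-sovereignty laws

# Hypothetical inventory mapping discovered AI apps to policy tiers.
AI_APP_POLICY = {
    "chatgpt-enterprise": Policy.ALLOW,
    "chatgpt-free": Policy.REDACT,   # free tiers may retain prompts for training
    "deepseek": Policy.BLOCK,
}

def decide(app: str) -> Policy:
    # Unknown apps get coaching rather than silent blocking, so usage
    # stays visible instead of moving to personal devices.
    return AI_APP_POLICY.get(app.lower(), Policy.COACH)
```

The key design choice is the default: treating an unrecognised tool as a coaching moment keeps employees inside monitored channels, whereas defaulting to BLOCK recreates the workaround problem described above.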

Crucially, this doesn’t mean blocking AI — it means creating guardrails. By doing so, security leaders can step into a strategic role as AI champions, helping their organisations innovate safely.

The age of generative AI is here to stay. The challenge now is to use it wisely — before security gaps turn into business risks.
