News Security Technology

New AI Assistants Pose Critical Leak Risk

In the rush to deploy ‘agentic AI,’ corporate clients are being warned that the failure to secure their own data repositories leaves powerful new tools vulnerable to major exploits like Salesforce’s ‘ForcedLeak’

The rapid corporate adoption of autonomous AI assistants or agentic AI is creating a fundamental challenge for enterprise security, as major vendors like Microsoft and Salesforce integrate these tools directly into their offerings. Security experts are now sounding the alarm, stressing that the haste of deployment has overshadowed a critical issue: the ambiguous shared security responsibility between the vendor and the customer.

The stakes are far from theoretical. Last month, security vendor Noma detailed a critical vulnerability chain dubbed “ForcedLeak” in Salesforce’s Agentforce. This exploit could have allowed a threat actor to exfiltrate sensitive customer relationship management (CRM) data through an indirect prompt injection attack—a danger that underscores the potential for agents to betray their owners when security controls are improper.

While Salesforce addressed the exploit, experts caution that such vulnerabilities are inevitable when agents, which are essentially autonomous processes, are granted access to massive, sensitive enterprise data stores.

Where Responsibility Lie?

The security landscape is already fraught with debates over accountability in traditional cloud and software-as-a-service (SaaS) deployments. Agentic AI makes this exponentially more complex.

“The AI boom has created a race to make AI smarter, stronger, and more capable at the expense of security,” says Itay Ravia, head of Aim Labs at Aim Security, which previously uncovered a similar data exfiltration exploit in Microsoft’s Copilot.

The consensus among security vendors is that the ultimate responsibility for data integrity must reside with the customer. Brian Vecci, field CTO at Varonis, stresses that data is not stored within the agent itself, but within the customer’s existing enterprise repositories.

“That access control can be individual to the agent or the user(s) that are prompting it, and it’s the responsibility of the enterprise—not the agent’s vendor or the hyperscaler provider—to secure that data appropriately,” he notes. “The shared responsibility model… is core to cloud and agentic services, and customers often struggle with managing that risk.”

Melissa Ruzzi, director of AI at AppOmni, compares the problem to SaaS security: the provider is responsible for the integrity of the underlying infrastructure, but the customer is responsible for the users and the data they access. The rigorous security review process, she argues, cannot be skipped just because an autonomous AI is involved.

User Problem & Vendor Limitations

Adding to the complexity is the human element. Organizations are already struggling to train staff to avoid sophisticated threats like phishing. Now, they must also train users on how to interact with, and effectively limit, an imperfect, autonomous AI agent that has access to sensitive files.

While some vendors, like Salesforce, have begun to mandate security basics—the company now requires all customers to use multifactor authentication (MFA) experts believe these measures are insufficient.

David Brauchler, technical director at NCC Group, warns that vendor-provided protections, such as secrets scanning or basic data loss prevention (DLP), often provide a “false sense of security.”

“Tools like secrets scanning and [data loss prevention] often lead to a greater proliferation of vulnerabilities, resulting from a ‘the vendor will handle it’ thought process,” Brauchler explains. He argues that the underlying data access problem cannot be solved within the agentic model itself and must be handled by the core architecture of the customer’s IT infrastructure.

The core challenge remains: customers are granting non-human, imperfect processes excessive permissions, a risk that vendors can only mitigate, not eliminate. Before rushing to become an “AI winner,” security teams must prioritize understanding precisely what data their agents can access, and ensure that robust guardrails and best practices are fully implemented to contain the risk of potentially catastrophic data leaks.

Leave a Reply

Your email address will not be published. Required fields are marked *