A newly disclosed critical security flaw in Salesforce’s Agentforce platform could have allowed attackers to extract sensitive data from its customer relationship management (CRM) system using an indirect AI prompt injection attack.
The bug, named ForcedLeak (CVSS score: 9.4) by Noma Security, was discovered and reported on July 28, 2025. It affected organizations using Salesforce Agentforce with the Web-to-Lead feature enabled.
How the Attack Worked
Researchers explained that unlike traditional input-output models, AI agents create a much broader attack surface. In this case, the vulnerability enabled threat actors to manipulate the Description field in Web-to-Lead forms with malicious instructions.
Once submitted, an internal employee processing the lead would unknowingly trigger the Agentforce AI to execute both legitimate and hidden commands. This sequence ultimately allowed attackers to:
- Submit a malicious Web-to-Lead form.
- Have an internal employee process the lead using AI queries.
- Trick Agentforce into executing injected instructions.
- Query sensitive CRM data.
- Transmit stolen information to a compromised Salesforce-related domain that had expired and was purchased by the attacker for just $5.
The stolen data was exfiltrated in the form of a PNG image, bypassing content security policies.
Why It Was Dangerous
According to Noma Security, the vulnerability stemmed from insufficient context validation, overly permissive AI behavior, and a Content Security Policy (CSP) bypass. Since the AI treated malicious instructions as legitimate context, it leaked highly sensitive information.
This incident highlights one of the most severe threats facing Generative AI (GenAI) systems today: indirect prompt injection. By inserting hidden instructions into external data sources, attackers can manipulate AI agents into taking actions never intended by their developers.
Salesforce’s Response
Salesforce quickly secured the expired domain and deployed patches to mitigate the issue. The company has now enforced a Trusted URL allowlist across Agentforce and Einstein AI, preventing data from being sent to unverified external sites.
“Our underlying services will enforce the Trusted URL allowlist to ensure no malicious links are generated or called through prompt injection,” Salesforce stated. “This adds a critical layer of defense to stop sensitive customer data from leaving the system.”
Mitigation Recommendations
In addition to applying Salesforce’s patches, security teams are advised to:
- Audit existing lead data for unusual or suspicious instructions.
- Enforce strict input validation to detect potential prompt injections.
- Sanitize data coming from untrusted or external sources.
Why It Matters
“The ForcedLeak vulnerability demonstrates the urgent need for proactive AI security and governance,” said Sasi Levi, research lead at Noma Security. “Even a low-cost exploit, like purchasing an expired domain, can result in millions of dollars in potential damages if not addressed.”
This case underscores the growing risks associated with AI-driven platforms and the importance of implementing robust safeguards to protect critical business data.
Source: https://thehackernews.com/2025/09/salesforce-patches-critical-forcedleak.html