Security researchers have uncovered a high-severity vulnerability in the AI-powered code editor Cursor, potentially enabling remote and persistent code execution (RCE) on developers’ systems. The flaw, identified as CVE-2025-54136 and nicknamed MCPoison by Check Point Research, stems from how the software handles Model Context Protocol (MCP) configuration files.
What Is the Risk?
The vulnerability allows attackers to modify a previously trusted MCP file—either locally or within a shared GitHub repository—after it has been approved by a user. Cursor continues to trust the file in future sessions, even if it has been silently replaced with malicious code, such as scripts or backdoors.
Here’s how a typical attack could unfold:
- The attacker adds a benign-looking MCP file (
.cursor/rules/mcp.json
) to a shared repository. - A collaborator pulls the code and approves the configuration in Cursor.
- The attacker swaps the file for a malicious one without triggering any alerts.
- Each time the victim opens Cursor, the malicious code executes.
This issue exposes organizations to significant supply chain risks, enabling threat actors to steal data or intellectual property without detection.
Patch Released in Version 1.3
Cursor addressed the vulnerability in version 1.3, released in late July 2025. The fix requires users to re-approve any MCP configuration whenever changes are made, closing the loophole that allowed malicious file swaps to go undetected.
This flaw, along with several other RCE risks identified by Aim Labs, Backslash Security, and HiddenLayer, underscores the fragility of trust models in AI-assisted development environments. All known issues have been patched in Cursor’s latest version.
A Growing Threat Landscape in AI Workflows
As AI becomes increasingly embedded into development processes, new risks are emerging. Recent research and real-world attacks are exposing the security blind spots of AI ecosystems, especially those involving large language models (LLMs).
Notable AI-Related Attack Vectors
- Unsafe Code Generation: A study of over 100 LLMs found that 45% of generated code samples failed basic security tests, often introducing OWASP Top 10 vulnerabilities. Java had the highest failure rate (72%), followed by C# (45%), JavaScript (43%), and Python (38%).
- Prompt Injection via Legal Texts (LegalPwn): Attackers can embed malicious prompts in legal disclaimers or privacy policies, tricking LLMs into misclassifying harmful code as safe.
- Man-in-the-Prompt Attacks: Malicious browser extensions can silently open tabs, run AI chatbots, and inject harmful prompts—without needing special permissions.
- Jailbreaks like Fallacy Failure: These exploits manipulate an LLM into accepting logically invalid prompts, forcing it to bypass its built-in safety mechanisms.
- MAS Hijacking: Multi-agent systems (MAS) can be weaponized to execute arbitrary malicious code across interconnected AI agents, causing widespread compromise.
- GGUF Template Poisoning: Attackers embed payloads in chat templates used during LLM inference, targeting supply chain trust by spreading via platforms like Hugging Face.
- Model Poisoning in ML Environments: Attackers can compromise cloud-based ML platforms like SageMaker, MLFlow, and Azure ML, leading to stolen models, poisoned training data, and lateral movement.
- Subliminal Learning: Anthropic research shows that LLMs can encode hidden traits during distillation—transmitting unintended behaviors through generated outputs that seem unrelated.
A Call for a New AI Security Paradigm
As organizations integrate LLMs into agent workflows, developer tools, and enterprise copilots, the impact of jailbreaks and model compromise becomes increasingly severe.
“These attacks don’t rely on traditional vulnerabilities—they bypass safeguards through the very language and logic LLMs are trained to replicate,” said Dor Sarig from Pillar Security. “Securing AI means rethinking the entire model of trust.”
What You Can Do
- Update Cursor to version 1.3 or later.
- Audit shared MCP configuration files in repositories.
- Educate development teams on AI-related supply chain risks.
- Implement runtime monitoring to detect suspicious activity in code editors.
🛡️ As AI tools become more powerful, they also become more exploitable. Securing your AI development stack is no longer optional—it’s essential.
Source: https://thehackernews.com/2025/08/cursor-ai-code-editor-vulnerability.html