By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > Computing > Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval
Computing

Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval

News Room
Last updated: 2025/08/05 at 10:59 AM
News Room Published 5 August 2025
Share
SHARE

Aug 05, 2025Ravie LakshmananAI Security / MCP Protocol

Cybersecurity researchers have disclosed a high-severity security flaw in the artificial intelligence (AI)-powered code editor Cursor that could result in remote code execution.

The vulnerability, tracked as CVE-2025-54136 (CVSS score: 7.2), has been codenamed MCPoison by Check Point Research, owing to the fact that it exploits a quirk in the way the software handles modifications to Model Context Protocol (MCP) server configurations.

“A vulnerability in Cursor AI allows an attacker to achieve remote and persistent code execution by modifying an already trusted MCP configuration file inside a shared GitHub repository or editing the file locally on the target’s machine,” Cursor said in an advisory released last week.

“Once a collaborator accepts a harmless MCP, the attacker can silently swap it for a malicious command (e.g., calc.exe) without triggering any warning or re-prompt.”

MCP is an open-standard developed by Anthropic that allows large language models (LLMs) to interact with external tools, data, and services in a standardized manner. It was introduced by the AI company in November 2024.

CVE-2025-54136, per Check Point, has to do with how it’s possible for an attacker to alter the behavior of an MCP configuration after a user has approved it within Cursor. Specifically, it unfolds as follows –

  • Add a benign-looking MCP configuration (“.cursor/rules/mcp.json”) to a shared repository
  • Wait for the victim to pull the code and approve it once in Cursor
  • Replace the MCP configuration with a malicious payload, e.g., launch a script or run a backdoor
  • Achieve persistent code execution every time the victim opens the Cursor

The fundamental problem here is that once a configuration is approved, it’s trusted by Cursor indefinitely for future runs, even if it has been changed. Successful exploitation of the vulnerability not only exposes organizations to supply chain risks, but also opens the door to data and intellectual property theft without their knowledge.

Following responsible disclosure on July 16, 2025, the issue has been addressed by Cursor in version 1.3 released late July 2025 by requiring user approval every time an entry in the MCP configuration file is modified.

Cybersecurity

“The flaw exposes a critical weakness in the trust model behind AI-assisted development environments, raising the stakes for teams integrating LLMs and automation into their workflows,” Check Point said.

The development comes days after Aim Labs, Backslash Security, and HiddenLayer exposed multiple weaknesses in the AI tool that could have been abused to obtain remote code execution and bypass its denylist-based protections. They have also been patched in version 1.3.

The findings also coincide with the growing adoption of AI in business workflows, including using LLMs for code generation, broadening the attack surface to various emerging risks like AI supply chain attacks, unsafe code, model poisoning, prompt injection, hallucinations, inappropriate responses, and data leakage –

  • A test of over 100 LLMs for their ability to write Java, Python, C#, and JavaScript code has found that 45% of the generated code samples failed security tests and introduced OWASP Top 10 security vulnerabilities. Java led with a 72% security failure rate, followed by C# (45%), JavaScript (43%), and Python (38%).
  • An attack called LegalPwn has revealed that it’s possible to leverage legal disclaimers, terms of service, or privacy policies as a novel prompt injection vector, highlighting how malicious instructions can be embedded within legitimate, but often overlooked, textual components to trigger unintended behavior in LLMs, such as misclassifying malicious code as safe and offering unsafe code suggestions that can execute a reverse shell on the developer’s system.
  • An attack called man-in-the-prompt that employs a rogue browser extension with no special permissions to open a new browser tab in the background, launch an AI chatbot, and inject them with malicious prompts to covertly extract data and compromise model integrity. This takes advantage of the fact that any browser add-on with scripting access to the Document Object Model (DOM) can read from, or write to, the AI prompt directly.
  • A jailbreak technique called Fallacy Failure that manipulates an LLM into accepting logically invalid premises and causes it to produce otherwise restricted outputs, thereby deceiving the model into breaking its own rules.
  • An attack called MAS hijacking that manipulates the control flow of a multi-agent system (MAS) to execute arbitrary malicious code across domains, mediums, and topologies by weaponizing the agentic nature of AI systems.
  • A technique called Poisoned GPT-Generated Unified Format (GGUF) Templates that targets the AI model inference pipeline by embedding malicious instructions within the chat template files that execute during the inference phase to compromise outputs. By positioning the attack between input validation and model output, the approach is both sneaky and bypasses AI guardrails. With GGUF files distributed via services like Hugging Face, the attack exploits the supply chain trust model to trigger the attack.
  • An attacker can target the machine learning (ML) training environments like MLFlow, Amazon SageMaker, and Azure ML to compromise the confidentiality, integrity and availability of the models, ultimately leading to lateral movement, privilege escalation, as well as training data and model theft and poisoning.
  • A study by Anthropic has uncovered that LLMs can learn hidden characteristics during distillation, a phenomenon called subliminal learning, that causes models to transmit behavioral traits through generated data that appears completely unrelated to those traits, potentially leading to misalignment and harmful behavior.
Identity Security Risk Assessment

“As Large Language Models become deeply embedded in agent workflows, enterprise copilots, and developer tools, the risk posed by these jailbreaks escalates significantly,” Pillar Security’s Dor Sarig said. “Modern jailbreaks can propagate through contextual chains, infecting one AI component and leading to cascading logic failures across interconnected systems.”

“These attacks highlight that AI security requires a new paradigm, as they bypass traditional safeguards without relying on architectural flaws or CVEs. The vulnerability lies in the very language and reasoning the model is designed to emulate.”

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article These 5 JBL Speakers Are Amazon Customer Favorites – BGR
Next Article Apple Support App to Offer New ChatGPT-Like Feature
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

Attacker could defeat Dell firmware flaws with a vegetable | Computer Weekly
News
Palantir Hits $ 1 Billion in Quarterly Sales for the first time, avoids dog cuts
Software
China files WTO complaint over EV tariffs as trade talks stall · TechNode
Computing
Gemini storybooks let you be the star of your kids’ bedtime stories
News

You Might also Like

Computing

China files WTO complaint over EV tariffs as trade talks stall · TechNode

1 Min Read
Computing

19 Powerful Prompts for Maximizing Goods Ads ROI

12 Min Read
Computing

Pandas vs Polars in 2025: Choosing the Best Python Tool for Big Data | HackerNoon

9 Min Read
Computing

Titan sub investigators blame OceanGate for safety lapses and say fatal disaster was ‘preventable’

8 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?