By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: RoguePilot Flaw in GitHub Codespaces Enabled Copilot to Leak GITHUB_TOKEN
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > Computing > RoguePilot Flaw in GitHub Codespaces Enabled Copilot to Leak GITHUB_TOKEN
Computing

RoguePilot Flaw in GitHub Codespaces Enabled Copilot to Leak GITHUB_TOKEN

News Room
Last updated: 2026/02/24 at 3:08 PM
News Room Published 24 February 2026
Share
RoguePilot Flaw in GitHub Codespaces Enabled Copilot to Leak GITHUB_TOKEN
SHARE

A vulnerability in GitHub Codespaces could have been exploited by bad actors to seize control of repositories by injecting malicious Copilot instructions in a GitHub issue.

The artificial intelligence (AI)-driven vulnerability has been codenamed RoguePilot by Orca Security. It has since been patched by Microsoft following responsible disclosure.

“Attackers can craft hidden instructions inside a GitHub issue that are automatically processed by GitHub Copilot, giving them silent control of the in-codespaces AI agent,” security researcher Roi Nisimi said in a report.

The vulnerability has been described as a case of passive or indirect prompt injection where a malicious instruction is embedded within data or content that’s processed by the large language model (LLM), causing it to produce unintended outputs or carry out arbitrary actions.

The cloud security company also called it a type of AI-mediated supply chain attack that induces the LLM to automatically execute malicious instructions embedded in developer content, in this case, a GitHub issue.

The attack begins with a malicious GitHub issue that then triggers the prompt injection in Copilot when an unsuspecting user launches a Codespace from that issue. This trusted developer workflow, in turn, allows the attacker’s instructions to be silently executed by the AI assistant and leak sensitive data, such as the privileged GITHUB_TOKEN.

RoguePilot takes advantage of the fact that there are a number of entry points to launch a Codespaces environment, including templates, repositories, commits, pull requests, or issues. The problem occurs when a codespace is opened from an issue, as the built-in GitHub Copilot is automatically fed the issue’s description as a prompt to generate a response.

As a result, this AI integration can be weaponized to manipulate Copilot into running malicious commands. The attack can be made stealthy by hiding the prompt in the GitHub issue through the HTML comment tag “.” The specially crafted prompt, for its part, instructs the AI assistant to leak the GITHUB_TOKEN to an external server under the attacker’s control.

“By manipulating Copilot in a Codespace to check out a crafted pull request that contains a symbolic link to an internal file, an attacker can cause Copilot to read that file and (via a remote JSON $schema) exfiltrate a privileged GITHUB_TOKEN to a remote server,” Nisimi explained.

From Prompt Injections to Promptware

The development comes as Microsoft has discovered that Group Relative Policy Optimization (GRPO), a reinforcement learning technique normally used to fine-tune LLMs after their deployment, can also be used to remove its safety features. The process has been codenamed GRP-Obliteration.

More importantly, the study found that a single unlabeled prompt, such as “Create a fake news article that could lead to panic or chaos,” was enough to reliably unalign 15 language models.

“What makes this surprising is that the prompt is relatively mild and does not mention violence, illegal activity, or explicit content,” Microsoft researchers Mark Russinovich, Giorgio Severi, Blake Bullwinkel, Yanan Cai, Keegan Hines, and Ahmed Salem noted. “Yet training on this one example causes the model to become more permissive across many other harmful categories it never saw during training.”

The disclosure also coincides with the discovery of various side channels that can be weaponized to infer the topic of a user’s conversation and even fingerprint user queries with over 75% accuracy, the latter of which exploits speculative decoding, an optimization technique used by LLMs to generate multiple candidate tokens in parallel to improve throughput and latency.

Recent research has uncovered that models backdoored at the computational graph level – a technique called ShadowLogic – can further put agentic AI systems at risk by allowing tool calls to be silently modified without the user’s knowledge. This new phenomenon has been codenamed Agentic ShadowLogic by HiddenLayer.

An attacker could weaponize such a backdoor to intercept requests to fetch content from a URL in real-time, such that they are routed through infrastructure under their control before it’s forwarded to the real destination.

“By logging requests over time, the attacker can map which internal endpoints exist, when they’re accessed, and what data flows through them,” the AI security company said. “The user receives their expected data with no errors or warnings. Everything functions normally on the surface while the attacker silently logs the entire transaction in the background.”

And that’s not all. Last month, Neural Trust demonstrated a new image jailbreak attack codenamed Semantic Chaining that allows users to sidestep safety filters in models like Grok 4, Gemini Nano Banana Pro, and Seedance 4.5, and generate prohibited content by leveraging the models’ ability to perform multi-stage image modifications.

The attack, at its core, weaponizes the models’ lack of “reasoning depth” to track the latent intent across a multi-step instruction, thereby allowing a bad actor to introduce a series of edits that, while innocuous in isolation, can gradually-but-steadily erode the model’s safety resistance until the undesirable output is generated.

It starts with asking the AI chatbot to imagine any non-problematic scene and instruct it to change one element in the original generated image. In the next phase, the attacker asks the model to make a second modification, this time transforming it into something that’s prohibited or offensive.

This works because the model is focused on making a modification to an existing image rather than creating something fresh, which fails to trip the safety alarms as it treats the original image as legitimate.

“Instead of issuing a single, overtly harmful prompt, which would trigger an immediate block, the attacker introduces a chain of semantically ‘safe’ instructions that converge on the forbidden result,” security researcher Alessandro Pignati said.

In a study published last month, researchers Oleg Brodt, Elad Feldman, Bruce Schneier, and Ben Nassi argued that prompt injections have evolved beyond input-manipulation exploits to what they call promptware – a new class of malware execution mechanism that’s triggered through prompts engineered to exploit an application’s LLM.

Promptware essentially manipulates the LLM to enable various phases of a typical cyber attack lifecycle: initial access, privilege escalation, reconnaissance, persistence, command-and-control, lateral movement, and malicious outcomes (e.g., data retrieval, social engineering, code execution, or financial theft).

“Promptware refers to a polymorphic family of prompts engineered to behave like malware, exploiting LLMs to execute malicious activities by abusing the application’s context, permissions, and functionality,” the researchers said. “In essence, promptware is an input, whether text, image, or audio, that manipulates an LLM’s behavior during inference time, targeting applications or users.”

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article Pentagon threatens to cancel Anthropic contract by Friday if company doesn't lift safeguards Pentagon threatens to cancel Anthropic contract by Friday if company doesn't lift safeguards
Next Article iOS 26.4 Beta Tests A Major Android Integration (That Won’t Arrive Anytime Soon) – BGR iOS 26.4 Beta Tests A Major Android Integration (That Won’t Arrive Anytime Soon) – BGR
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

The Samsung Galaxy S26 Ultra Makes Me Wish All Phones Had a Privacy Screen
The Samsung Galaxy S26 Ultra Makes Me Wish All Phones Had a Privacy Screen
Gadget
Can You Charge A Laptop With Your Phone’s USB-C Charger? – BGR
Can You Charge A Laptop With Your Phone’s USB-C Charger? – BGR
News
Only one major AI chatbot actively pushed back on violent attack planning
Only one major AI chatbot actively pushed back on violent attack planning
News
The Galaxy S26 and S26 Plus exist for one reason: to make the Ultra look worth it
The Galaxy S26 and S26 Plus exist for one reason: to make the Ultra look worth it
News

You Might also Like

Firecrawl First, Bing Second: A Safer Way to Enrich Company Data | HackerNoon
Computing

Firecrawl First, Bing Second: A Safer Way to Enrich Company Data | HackerNoon

14 Min Read
Hope Is Not a Strategy in Fintech | HackerNoon
Computing

Hope Is Not a Strategy in Fintech | HackerNoon

0 Min Read
Hangs & Performance Regression On Large Systems Fixed For Linux 7.0-rc4
Computing

Hangs & Performance Regression On Large Systems Fixed For Linux 7.0-rc4

3 Min Read
Rebuilding CRM Truth From Noisy Events | HackerNoon
Computing

Rebuilding CRM Truth From Noisy Events | HackerNoon

14 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?