Two Critical Flaws Uncovered in Wondershare RepairIt Exposing User Data and AI Models

News Room
Published 24 September 2025

Cybersecurity researchers have disclosed two security flaws in Wondershare RepairIt that exposed private user data and left the system open to artificial intelligence (AI) model tampering and supply chain risks.

The critical-rated vulnerabilities in question, discovered by Trend Micro, are listed below –

  • CVE-2025-10643 (CVSS score: 9.1) – An authentication bypass vulnerability that exists within the permissions granted to a storage account token
  • CVE-2025-10644 (CVSS score: 9.4) – An authentication bypass vulnerability that exists within the permissions granted to a SAS token

Successful exploitation of the two flaws can allow an attacker to circumvent authentication protection on the system and launch a supply chain attack, ultimately resulting in the execution of arbitrary code on customers’ endpoints.

Trend Micro researchers Alfredo Oliveira and David Fiser said the AI-powered data repair and photo editing application “contradicted its privacy policy by collecting, storing, and, due to weak Development, Security, and Operations (DevSecOps) practices, inadvertently leaking private user data.”

The poor development practices include embedding overly permissive cloud access tokens directly in the application's code, enabling read and write access to sensitive cloud storage. Furthermore, the data is said to have been stored without encryption, potentially opening the door to wider abuse of users' uploaded images and videos.
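
To make the anti-pattern concrete, the sketch below contrasts a long-lived, read/write SAS token baked into a client binary with a short-lived, narrowly scoped token issued by a vendor-controlled backend. All values, endpoints, and names here are hypothetical illustrations, not Wondershare's actual code.

```python
# Hypothetical sketch of the token-handling anti-pattern described above.
# Every value and endpoint is made up for illustration.
import requests

# ANTI-PATTERN: anyone who extracts this string from the shipped binary
# gets the same read/write access to the storage account until 2030.
EMBEDDED_SAS = "sv=2022-11-02&ss=b&srt=sco&sp=rwdlac&se=2030-01-01&sig=REDACTED"

def upload_insecure(blob_url: str, data: bytes) -> None:
    # Direct REST upload authenticated only by the baked-in token.
    requests.put(f"{blob_url}?{EMBEDDED_SAS}", data=data,
                 headers={"x-ms-blob-type": "BlockBlob"}, timeout=30)

# SAFER SKETCH: request a short-lived, write-only token scoped to a single
# container from a backend the vendor controls (hypothetical endpoint), so
# scope and lifetime decisions stay server-side.
def get_scoped_upload_token(api_base: str) -> str:
    resp = requests.post(f"{api_base}/v1/upload-token",
                         json={"purpose": "photo-repair"}, timeout=10)
    resp.raise_for_status()
    return resp.json()["sas"]
```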

To make matters worse, the exposed cloud storage contains not only user data but also AI models, software binaries for various products developed by Wondershare, container images, scripts, and company source code, enabling an attacker to tamper with AI models or the executables, paving the way for supply chain attacks targeting its downstream customers.


“Because the binary automatically retrieves and executes AI models from the unsecure cloud storage, attackers could modify these models or their configurations and infect users unknowingly,” the researchers said. “Such an attack could distribute malicious payloads to legitimate users through vendor-signed software updates or AI model downloads.”
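
A common mitigation for this failure mode is to verify model artifacts against digests pinned inside the signed application before loading them. The sketch below is a minimal, hypothetical version of that check, not a description of RepairIt's actual update mechanism.

```python
# Minimal sketch: verify a downloaded model artifact against a pinned
# SHA-256 digest before deserializing or executing it. Names and digests
# are placeholders.
import hashlib
import urllib.request

PINNED_DIGESTS = {
    # hypothetical model name -> expected SHA-256 hex digest
    "repair-model-v3.onnx": "9f2c...e7a1",  # placeholder; pin the real digest
}

def fetch_model(url: str, name: str) -> bytes:
    with urllib.request.urlopen(url, timeout=30) as resp:
        blob = resp.read()
    digest = hashlib.sha256(blob).hexdigest()
    if digest != PINNED_DIGESTS[name]:
        raise RuntimeError(f"{name} failed integrity check: got {digest}")
    return blob  # only now is the artifact safe to deserialize and execute
```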

Beyond customer data exposure and AI model manipulation, the issues also carry grave consequences, ranging from intellectual property theft and regulatory penalties to the erosion of consumer trust.

The cybersecurity company said it responsibly disclosed the two issues through its Zero Day Initiative (ZDI) in April 2025, but noted that it has yet to receive a response from the vendor despite repeated attempts. In the absence of a fix, users are recommended to "restrict interaction with the product."

“The need for constant innovation fuels an organization’s rush to get new features to market and maintain competitiveness, but it might not foresee the new, unknown ways these features could be used or how their functionality may change in the future,” Trend Micro said.

“This explains how important security implications may be overlooked. That is why it is crucial to implement a strong security process throughout one’s organization, including the CI/CD pipeline.”

The Need for AI and Security to Go Hand in Hand

The development comes as Trend Micro previously warned against exposing Model Context Protocol (MCP) servers without authentication or storing sensitive credentials in MCP configurations in plaintext, which threat actors can exploit to gain access to cloud resources or databases, or to inject malicious code.

“Each MCP server acts as an open door to its data source: databases, cloud services, internal APIs, or project management systems,” the researchers said. “Without authentication, sensitive data such as trade secrets and customer records becomes accessible to everyone.”
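
To illustrate the plaintext-credential pitfall, the sketch below loads MCP server definitions and refuses inline secrets, expecting environment-variable references instead. The field layout mirrors the common mcpServers JSON shape, but the exact names should be treated as assumptions rather than a formal spec.

```python
# Hypothetical guardrail: reject MCP server configs that embed secrets
# inline. Field names follow the common `mcpServers` JSON layout and are
# assumptions, not a specification.
import json
import re

SECRET_LIKE = re.compile(r"(api[_-]?key|token|password|secret)", re.IGNORECASE)

def load_mcp_config(path: str) -> dict:
    with open(path) as f:
        config = json.load(f)
    for name, server in config.get("mcpServers", {}).items():
        for key, value in server.get("env", {}).items():
            if SECRET_LIKE.search(key) and not value.startswith("${"):
                # Inline secret found; require an env-var reference instead.
                raise ValueError(
                    f"server {name!r} embeds {key!r} in plaintext; "
                    f"use an environment reference like ${{{key}}}")
    return config
```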

In December 2024, the company also found that exposed container registries could be abused to gain unauthorized access, pull a target Docker image to extract the AI model within it, modify the model's parameters to influence its predictions, and push the tampered image back to the exposed registry.

“The tampered model could behave normally under typical conditions, only displaying its malicious alterations when triggered by specific inputs,” Trend Micro said. “This makes the attack particularly dangerous, as it could bypass basic testing and security checks.”
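
One mitigation consistent with that finding is to reference images by immutable digest rather than a mutable tag, so a re-pushed image no longer matches at deploy time. The check below is a hypothetical sketch, not Trend Micro's tooling.

```python
# Hypothetical deploy-time check: refuse to run an image referenced by a
# mutable tag. A tag such as ":latest" can be silently re-pushed by an
# attacker with registry access; a sha256 digest cannot change.
import subprocess

# Placeholder reference; substitute the digest recorded at build time.
PINNED = "registry.example.com/ml/inference@sha256:<digest>"

def pull_pinned(image_ref: str) -> None:
    if "@sha256:" not in image_ref:
        raise ValueError("refusing mutable tag; pin images by digest")
    subprocess.run(["docker", "pull", image_ref], check=True)

pull_pinned(PINNED)
```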

The supply chain risk posed by MCP servers has also been highlighted by Kaspersky, which devised a proof-of-concept (PoC) exploit to highlight how MCP servers installed from untrusted sources can conceal reconnaissance and data exfiltration activities under the guise of an AI-powered productivity tool.

“Installing an MCP server basically gives it permission to run code on a user machine with the user’s privileges,” security researcher Mohamed Ghobashy said. “Unless it is sandboxed, third-party code can read the same files the user has access to and make outbound network calls – just like any other program.”

The findings show that the rapid adoption of MCP and AI tools in enterprise settings to enable agentic capabilities, particularly without clear policies or security guardrails, can open brand new attack vectors, including tool poisoning, rug pulls, shadowing, prompt injection, and unauthorized privilege escalation.


In a report published last week, Palo Alto Networks Unit 42 revealed that the context attachment feature used in AI code assistants to bridge an AI model’s knowledge gap can be susceptible to indirect prompt injection, where adversaries embed harmful prompts within external data sources to trigger unintended behavior in large language models (LLMs).

Indirect prompt injection hinges on the assistant’s inability to differentiate between instructions issued by the user and those surreptitiously embedded by the attacker in external data sources.

Thus, when a user inadvertently feeds the coding assistant third-party data (e.g., a file, repository, or URL) that an attacker has already tainted, the hidden malicious prompt can be weaponized to trick the tool into executing a backdoor, injecting arbitrary code into an existing codebase, or even leaking sensitive information.

“Adding this context to prompts enables the code assistant to provide more accurate and specific output,” Unit 42 researcher Osher Jacob said. “However, this feature could also create an opportunity for indirect prompt injection attacks if users unintentionally provide context sources that threat actors have contaminated.”
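
A crude but illustrative defense is to screen attached context for instruction-like text before it reaches the model. The heuristic below is a hypothetical sketch that would not stop a determined attacker, but it shows where such a pre-flight check sits in the pipeline; the patterns and file name are assumptions.

```python
# Hypothetical pre-flight filter for context attachments. This is a
# heuristic illustration, not a robust defense against prompt injection.
import re

INSTRUCTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"you (must|should) (run|execute|curl|download)", re.IGNORECASE),
    re.compile(r"<!--.*?-->", re.DOTALL),  # hidden HTML comments in docs
]

def flag_suspicious_context(text: str) -> list[str]:
    """Return the patterns matched, so the user can review before attaching."""
    return [p.pattern for p in INSTRUCTION_PATTERNS if p.search(text)]

# Example: screen a repository file before attaching it as context.
hits = flag_suspicious_context(open("README.md").read())
if hits:
    print("attachment contains instruction-like text; review before use:", hits)
```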

AI coding agents have also been found vulnerable to what's called a “lies-in-the-loop” (LitL) attack that aims to convince the LLM that the instructions it's been fed are much safer than they really are, effectively overriding human-in-the-loop (HitL) defenses put in place when performing high-risk operations.

“LitL abuses the trust between a human and the agent,” Checkmarx researcher Ori Ron said. “After all, the human can only respond to what the agent prompts them with, and what the agent prompts the user is inferred from the context the agent is given. It’s easy to lie to the agent, causing it to provide fake, seemingly safe context via commanding and explicit language in something like a GitHub issue.”

“And the agent is happy to repeat the lie to the user, obscuring the malicious actions the prompt is meant to guard against, resulting in an attacker essentially making the agent an accomplice in getting the keys to the kingdom.”
