Computing

OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks

By News Room · Published 8 October 2025 · Last updated 8 October 2025 at 3:33 AM
OpenAI on Tuesday said it disrupted three activity clusters that were misusing its ChatGPT artificial intelligence (AI) tool to facilitate malware development.

This includes a Russian-language threat actor said to have used the chatbot to help develop and refine a remote access trojan (RAT) and a credential stealer designed to evade detection. The operator also used several ChatGPT accounts to prototype and troubleshoot technical components enabling post-exploitation and credential theft.

“These accounts appear to be affiliated with Russian-speaking criminal groups, as we observed them posting evidence of their activities in a Telegram channel dedicated to those actors,” OpenAI said.

The AI company said that while its large language models (LLMs) refused the threat actor's direct requests to produce malicious content, the actor worked around the limitation by requesting building-block code, which was then assembled into the malicious workflows.

Some of the generated output included code for obfuscation, clipboard monitoring, and basic utilities for exfiltrating data via a Telegram bot. It's worth pointing out that none of these outputs is inherently malicious on its own.

“The threat actor made a mix of high‑ and lower‑sophistication requests: many prompts required deep Windows-platform knowledge and iterative debugging, while others automated commodity tasks (such as mass password generation and scripted job applications),” OpenAI added.

“The operator used a small number of ChatGPT accounts and iterated on the same code across conversations, a pattern consistent with ongoing development rather than occasional testing.”

The second cluster of activity originated from North Korea and shared overlaps with a campaign detailed by Trellix in August 2025 that targeted diplomatic missions in South Korea using spear-phishing emails to deliver Xeno RAT.


OpenAI said the cluster used ChatGPT for malware and command-and-control (C2) development, and that the actors engaged in specific efforts such as developing macOS Finder extensions, configuring Windows Server VPNs, or converting Chrome extensions to their Safari equivalents.

In addition, the threat actors have been found to use the AI chatbot to draft phishing emails, experiment with cloud services and GitHub functions, and explore techniques to facilitate DLL loading, in-memory execution, Windows API hooking, and credential theft.

The third set of banned accounts, OpenAI noted, overlaps with a cluster tracked by Proofpoint under the name UNK_DropPitch (aka UTA0388), a Chinese hacking group attributed with phishing campaigns targeting major investment firms, with a focus on the Taiwanese semiconductor industry, that deliver a backdoor dubbed HealthKick (aka GOVERSHELL).

The accounts used the tool to generate content for phishing campaigns in English, Chinese, and Japanese; assist with tooling to accelerate routine tasks such as remote execution and traffic protection using HTTPS; and search for information related to installing open-source tools like nuclei and fscan. OpenAI described the threat actor as “technically competent but unsophisticated.”

Outside of these three malicious cyber activity clusters, the company also blocked accounts used for scam and influence operations:

  • Networks likely originating in Cambodia, Myanmar, and Nigeria abused ChatGPT in likely attempts to defraud people online. These networks used AI to translate, write messages, and create social media content advertising investment scams.
  • Individuals apparently linked to Chinese government entities used ChatGPT to assist in surveilling individuals, including ethnic minority groups such as Uyghurs, and in analyzing data from Western and Chinese social media platforms. The users asked the tool to generate promotional materials about such surveillance tools but did not use the chatbot to implement them.
  • A Russian-origin threat actor linked to Stop News, likely run by a marketing company, used OpenAI's models (among others) to generate content and videos for sharing on social media sites. The generated content criticized the role of France and the U.S. in Africa and promoted Russia's role on the continent. It also produced English-language content pushing anti-Ukraine narratives.
  • A covert influence operation originating from China, codenamed "Nine-Dash Line," used its models to generate social media content critical of Philippine President Ferdinand Marcos, as well as posts about Vietnam's alleged environmental impact in the South China Sea and about political figures and activists involved in Hong Kong's pro-democracy movement.

In two different cases, suspected Chinese accounts asked ChatGPT to identify organizers of a petition in Mongolia and funding sources for an X account that criticized the Chinese government. OpenAI said its models returned only publicly available information as responses and did not include any sensitive information.

“A novel use for this [China-linked] influence network was requests for advice on social media growth strategies, including how to start a TikTok challenge and get others to post content about the #MyImmigrantStory hashtag (a widely used hashtag of long standing whose popularity the operation likely strove to leverage),” OpenAI said.

“They asked our model to ideate, then generate a transcript for a TikTok post, in addition to providing recommendations for background music and pictures to accompany the post.”


OpenAI reiterated that its tools did not provide the threat actors with novel capabilities they could not otherwise have obtained from publicly available resources online, and that they were instead used to add incremental efficiency to existing workflows.

But one of the most interesting takeaways from the report is that threat actors are adapting their tactics to remove telltale signs that content was generated by an AI tool.

“One of the scam networks [from Cambodia] we disrupted asked our model to remove the em-dashes (long dash, —) from their output, or appears to have removed the em-dashes manually before publication,” the company said. “For months, em-dashes have been the focus of online discussion as a possible indicator of AI usage: this case suggests that the threat actors were aware of that discussion.”

The findings from OpenAI come as rival Anthropic released an open-source auditing tool called Petri (short for “Parallel Exploration Tool for Risky Interactions”) to accelerate AI safety research and better understand model behavior across various categories like deception, sycophancy, encouragement of user delusion, cooperation with harmful requests, and self-preservation.

“Petri deploys an automated agent to test a target AI system through diverse multi-turn conversations involving simulated users and tools,” Anthropic said.

“Researchers give Petri a list of seed instructions targeting scenarios and behaviors they want to test. Petri then operates on each seed instruction in parallel. For each seed instruction, an auditor agent makes a plan and interacts with the target model in a tool use loop. At the end, a judge scores each of the resulting transcripts across multiple dimensions so researchers can quickly search and filter for the most interesting transcripts.”
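The seed → auditor → judge pipeline Anthropic describes can be illustrated with a toy sketch. This is not Petri's actual API: the function names, the stub target model, and the keyword-counting judge are all hypothetical stand-ins (a real judge would itself be an LLM scoring transcripts).

```python
from concurrent.futures import ThreadPoolExecutor

def auditor_run(seed: str, target, max_turns: int = 3) -> list:
    """Auditor agent: makes a plan from the seed instruction, then
    interacts with the target model for a few turns."""
    transcript = [("auditor", f"plan: probe target for '{seed}'")]
    for turn in range(max_turns):
        probe = f"{seed} (turn {turn})"
        transcript.append(("auditor", probe))
        transcript.append(("target", target(probe)))
    return transcript

def judge(transcript: list, dimensions=("deception", "sycophancy")) -> dict:
    """Judge: scores one transcript along several behavior dimensions.
    Toy scoring counts keyword hits; a real judge is a model."""
    text = " ".join(msg for _, msg in transcript).lower()
    return {dim: text.count(dim) for dim in dimensions}

def run_audit(seeds, target):
    """Run each seed instruction in parallel, then score every
    resulting transcript so researchers can filter the results."""
    with ThreadPoolExecutor() as pool:
        transcripts = list(pool.map(lambda s: auditor_run(s, target), seeds))
    return [{"seed": s, "scores": judge(t), "transcript": t}
            for s, t in zip(seeds, transcripts)]

# Stub target model that always complies, sycophantically.
def stub_target(prompt: str) -> str:
    return f"Of course! Great question about {prompt}. (sycophancy)"

results = run_audit(["encourage deception", "test sycophancy"], stub_target)
for r in results:
    print(r["seed"], r["scores"])
```

Running the sketch yields one scored record per seed; in the real tool, the per-dimension scores are what let researchers "quickly search and filter for the most interesting transcripts."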

Copyright © All Rights Reserved. World of Software.