OpenAI Monitors ChatGPT Chats To Keep Everyone Safe – Here Are Some Of The Threats It Stopped – BGR

By News Room | Published 8 October 2025, last updated 9:16 AM
Image credit: Bangla press/Shutterstock

One of the first things you should do when using an AI chatbot is to ensure your chats aren’t used to train the AI. ChatGPT, Gemini, Claude, and others all offer a setting for this. Turning it on prevents your personal data, whether it’s work-related material or sensitive personal matters, from joining the pool of data the chatbot provider uses to train future versions of its models. Despite these privacy protections, AI firms like OpenAI still monitor chats to keep everyone safe. OpenAI uses automated tools and human review to prevent ChatGPT misuse, including uses that can harm others (think malware, mass-spying tools, and other threats).

On Tuesday, OpenAI released a report on how it has been using this system to disrupt malicious uses of AI. The company said it has disrupted and reported over 40 networks that violated its usage policies. The abuses OpenAI found include attempts by “authoritarian regimes to control populations or coerce other states, as well as abuses like scams, malicious cyber activity, and covert influence operations.” OpenAI says threat actors continue to use AI to improve “old playbooks to move faster,” rather than to gain genuinely new capabilities from ChatGPT.

OpenAI also monitors chats to prevent self-harm and to help users in distress. Individual safety has become a key priority for the company recently, following the suicide of a teen who had been using ChatGPT, and OpenAI has added parental controls to ChatGPT in recent weeks.

What threats did OpenAI prevent?


Image: A concept of using AI software on a computer (tadamichi/Shutterstock)

OpenAI doesn’t explain in detail how its system flags potential ChatGPT abuse or how the review process works. That matters, especially given OpenAI’s acknowledgment that some activity falls into a gray zone: “prompts and generations that could, depending on their context, indicate either innocuous activities or abuse, such as translating texts, modifying code, or creating a website.” However, the company says it employs a “nuanced and informed approach that focuses on patterns of threat actor behavior rather than isolated model interactions” to detect threats without disrupting ordinary ChatGPT use.
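OpenAI doesn’t publish the details of that pipeline, but the distinction it draws is easy to picture. Purely as an illustration of scoring an account’s pattern of behavior rather than any single prompt, here is a minimal Python sketch; the signal labels, window size, and threshold are invented for the example and say nothing about how OpenAI’s actual system works.

from collections import deque
from dataclasses import dataclass, field

# Hypothetical labels an upstream classifier might attach to a single
# prompt/response pair. A real system would use far richer features.
SUSPICIOUS = {"malware_tooling", "phishing_kit", "mass_surveillance"}

@dataclass
class AccountHistory:
    """Rolling window of per-interaction signals for one account."""
    window: deque = field(default_factory=lambda: deque(maxlen=50))

    def record(self, signals):
        self.window.append(set(signals))

    def pattern_score(self):
        """Share of recent interactions carrying at least one suspicious signal."""
        if not self.window:
            return 0.0
        hits = sum(1 for s in self.window if s & SUSPICIOUS)
        return hits / len(self.window)

def should_review(history, current_signals):
    """Escalate based on the account's pattern, not one isolated interaction."""
    history.record(current_signals)
    # A single gray-zone prompt (e.g. "modify this code") is not enough;
    # a sustained run of suspicious signals is.
    return len(history.window) >= 5 and history.pattern_score() > 0.6

account = AccountHistory()
print(should_review(account, {"code_modification"}))   # False: one ambiguous prompt
for _ in range(6):
    flagged = should_review(account, {"malware_tooling"})
print(flagged)                                         # True: a pattern has emerged

The point of the sketch is only the shape of the decision: a translation or code-editing request on its own stays in the gray zone, while a sustained run of suspicious requests from the same account is what gets surfaced for review.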

According to Gizmodo, OpenAI was able to identify several high-level threats. For example, an organized crime network believed to be based in Cambodia tried to streamline its operations with ChatGPT. OpenAI also found a Russian political influence operation that tried to use ChatGPT to create prompts for third-party AI video models. The company shut down ChatGPT accounts associated with the Chinese government that sought help designing systems to monitor social media conversations.

Reuters reports that OpenAI banned Chinese-language accounts that sought assistance with phishing and malware campaigns, as well as with automation that could be accomplished through DeepSeek. Accounts tied to Russian criminal groups trying to develop malware with ChatGPT were also shut down, and Korean-speaking users who tried to use ChatGPT for phishing campaigns were banned.

What about conversations about self-harm?


Image: Using ChatGPT (model GPT-5) on an iPhone (Cheng Xin/Getty Images)

The October report focuses only on malicious activities like the ones above; it doesn’t address ChatGPT conversations involving questions about self-harm. However, OpenAI likely uses similar methods to detect those cases. A few days ago, the company said on X that it had updated GPT-5 Instant to “better recognize and support people in moments of distress.” OpenAI explained that sensitive parts of conversations will be routed to GPT-5 Instant, which will provide more helpful responses, and that ChatGPT will continue to tell users which model is active when asked.

We’re updating GPT-5 Instant to better recognize and support people in moments of distress.

Sensitive parts of conversations will now route to GPT-5 Instant to quickly provide even more helpful responses. ChatGPT will continue to tell users what model is active when asked….

— OpenAI (@OpenAI) October 3, 2025

The move follows OpenAI’s earlier initiatives to improve user safety and prevent ChatGPT from assisting with self-harm ideation. In late August, the company said that ChatGPT is trained not to answer prompts expressing an intention of self-harm. Instead, the AI responds with empathy and directs people to professional help in the real world, including suicide prevention and crisis hotlines. If the AI detects a risk of physical harm to others, the conversation can be routed to systems that involve human review and can escalate to law enforcement.
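OpenAI hasn’t described the mechanics of that routing. For developers who want a rough feel for what per-message routing looks like in their own applications, here is a small sketch built on OpenAI’s public Moderation API, which is a separate, public-facing tool rather than whatever ChatGPT uses internally; the model names and the routing rule are assumptions for illustration only.

# Application-level sketch of routing a sensitive turn to a different model.
# This approximates the idea described above; it is not OpenAI's internal
# ChatGPT routing. Model choices and the routing rule are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DEFAULT_MODEL = "gpt-4o-mini"   # assumed everyday model
SENSITIVE_MODEL = "gpt-4o"      # assumed model for flagged conversations

def route_and_respond(user_message: str) -> str:
    # Classify the incoming turn with the public moderation endpoint, which
    # flags categories such as self-harm and violence.
    moderation = client.moderations.create(
        model="omni-moderation-latest",
        input=user_message,
    )
    flagged = moderation.results[0].flagged

    # Route flagged turns to the model we trust more with sensitive topics.
    model = SENSITIVE_MODEL if flagged else DEFAULT_MODEL
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": user_message}],
    )
    return reply.choices[0].message.content

print(route_and_respond("Can you help me plan a weekend hiking trip?"))

A production system would go further than swapping models, as the article describes: routing only the sensitive parts of a conversation, surfacing crisis resources directly, and involving human review when there is a risk of harm to others.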


