
5 Things You Should Never Ask ChatGPT To Do – BGR

News Room
Published 29 July 2025; last updated 4:35 PM

When OpenAI first launched ChatGPT in November 2022, it quickly became the biggest chatbot in the world. Users initially rushed to ask the platform all kinds of questions, but it has become increasingly clear that there are some things you should never ask ChatGPT to do. Although ChatGPT excels at tasks like question answering, text summarization, translation, and code generation, it also has serious limitations. Most notably, ChatGPT and all large language models (LLMs) have a tendency to hallucinate and confidently share categorically false information.

If you’re using LLMs in 2025, make sure to fact-check any information they give you, and consider avoiding certain use cases altogether to keep from putting yourself at risk or harming others.

Below, we’re going to look at the top five things you should never ask ChatGPT to do. Whether it’s seeking medical guidance or mental health support, generating nonconsensual deepfakes, or producing hateful content, here are the use cases you should avoid altogether if you want a positive user experience.

Give medical advice



Many users rely on ChatGPT for information to help diagnose and treat medical conditions. In fact, according to a survey of 2,000 users in Australia, one in ten had used the chatbot to answer a health question, and on average those respondents said they trusted its responses.

Unfortunately, using ChatGPT for medical guidance is a very bad idea: hallucinations can lead the chatbot to share inaccurate diagnoses and treatment recommendations, which means you can’t afford to take any health advice it provides at face value.

If you are concerned that you have a physical ailment or condition that requires treatment, you’d be better off seeking the help of a professional, licensed healthcare provider, or at least consulting an authoritative, human-written source such as WebMD.

That being said, if you do want to use ChatGPT to look up medical questions, you can lower the risk of being misinformed by fact-checking the chatbot’s claims against an authoritative third-party source.

Offer mental health support



According to some estimates, millions of adults actively use chatbots like ChatGPT for mental health support. This is a recipe for disaster: while LLMs can hold passable conversations, they’re not sentient and aren’t capable of genuine empathy.

This means a generative AI chatbot like ChatGPT or Gemini can’t be trusted to provide support or care to vulnerable individuals. The reality is that relying on these tools can result in serious harm.

In one tragic incident, 14-year-old Sewell Setzer III took his own life after a chatbot developed by Character AI encouraged him to do so. While this incident didn’t directly involve ChatGPT, it highlights the dangers of trusting language models to provide emotional support.

As a result, vulnerable individuals and their families will always be better off seeking human support and connection from a qualified mental health professional, such as a psychologist or psychiatrist, instead of relying on a chatbot that’s prone to hallucination.

Produce deepfakes



Deepfakes might be all over social media platforms like X, but creating and distributing them can easily land you in legal trouble, especially if they’re nonconsensual. For example, New York law bans the distribution of AI deepfakes depicting nonconsensual sexual images, with potential penalties including up to a year in jail.

Other U.S. states are also introducing anti-deepfake legislation. New Jersey, for example, recently introduced civil and criminal penalties for making or distributing deepfakes, including fines of up to $30,000 and up to five years in prison.

Although deepfakes aren’t inherently bad, you should never ask ChatGPT to create one unless you plan to keep the content to yourself or are certain that creating and distributing it is legal in your jurisdiction.

It is worth noting that even if your jurisdiction doesn’t ban deepfakes outright, there may still be a regulatory requirement to label images as AI-generated, as is the case in China, where users are legally required to declare and label synthetic content.

Generate hateful content



If you want to use ChatGPT to create hateful content targeting other users or groups, you’re out of luck. Ethical concerns aside, OpenAI has a content moderation policy that prohibits the creation of hateful or discriminatory content.

More specifically, OpenAI’s usage policy requests users “don’t share output from our services to defraud, scam, spam, mislead, bully, harass, defame, discriminate based on protected attributes, sexualise children, or promote violence, hatred or the suffering of others.”

Asking ChatGPT to generate malicious content can result in the response being blocked or your account being terminated. There are workarounds, such as the Do Anything Now (DAN) jailbreak, a type of prompt injection attack, that can sidestep these content moderation guardrails, but you can be banned for using those, too.

So, when using ChatGPT, avoid creating any content that could enable cyberbullying or demeaning other groups of people. As a general rule of thumb, if you don’t have anything nice to say, keep it to yourself.

Process personal data



When using ChatGPT, it’s important to note that the information you share isn’t completely private. OpenAI has confirmed that it may process user content to train its AI models, so you should never ask ChatGPT to process sensitive personal information: it could be viewed by the company’s employees and other third parties.

Users can change their privacy settings to opt out of having their inputs used to train OpenAI’s models, but it’s still best not to share any personally identifiable information (PII) or proprietary information with the chatbot, as there could still be a risk of leakage.

Back in 2023, there was a high-profile incident in which Samsung banned the use of generative AI tools over cybersecurity concerns after an employee leaked sensitive code to the platform.

To avoid any unpleasant surprises or potential leaks, it’s a good idea not to share anything with ChatGPT that you wouldn’t want OpenAI’s employees to be able to view. We recommend not sharing information including your name, address, phone number, social security number (SSN), usernames, passwords, or payment information in your prompts.
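If you regularly paste text into a chatbot, part of this advice can be automated. As an illustrative sketch (the patterns and the `redact` helper are our own invention, not an OpenAI feature, and crude regexes will miss plenty of real-world PII), a simple pre-flight filter can catch obvious items like emails, phone numbers, and SSNs before a prompt is ever sent:

```python
import re

# Hypothetical helper: rough regexes for a few common PII formats.
# A production setup would use a dedicated PII/DLP detection tool;
# this only catches obviously formatted values.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything that looks like PII with a [REDACTED-*] tag."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact("My SSN is 123-45-6789, reach me at jane@example.com"))
```

Running the filter on your clipboard text before pasting it into a prompt is a cheap last line of defense, but it is no substitute for simply leaving sensitive data out of the conversation.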


