Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out

News Room
Published 30 September 2025 · Last updated 7:20 AM

Anthropic is prepared to repurpose conversations users have with its Claude chatbot as training data for its large language models—unless those users opt out.

Previously, the company did not train its generative AI models on user chats. When Anthropic’s updated privacy policy takes effect on October 8, users will have to opt out, or else their new chat logs and coding tasks will be used to train future Anthropic models.

Why the switch-up? “All large language models, like Claude, are trained using large amounts of data,” reads part of Anthropic’s blog explaining why the company made this policy change. “Data from real-world interactions provide valuable insights on which responses are most useful and accurate for users.” With more user data thrown into the LLM blender, Anthropic’s developers hope to make a better version of their chatbot over time.

The change was originally scheduled to take place on September 28 before being bumped back. “We wanted to give users more time to review this choice and ensure we have a smooth technical transition,” Gabby Curtis, a spokesperson for Anthropic, wrote in an email to WIRED.

How to Opt Out

New users are asked to make a decision about their chat data during their sign-up process. Existing Claude users may have already encountered a pop-up laying out the changes to Anthropic’s terms.

“Allow the use of your chats and coding sessions to train and improve Anthropic AI models,” it reads. The toggle to provide your data to Anthropic to train Claude is automatically on, so users who chose to accept the updates without clicking that toggle are opted into the new training policy.

All users can toggle conversation training on or off in the Privacy Settings. If you’d rather not have your Claude chats train Anthropic’s new models, find the setting labeled Help improve Claude and make sure the switch is turned off (slid to the left).

If a user doesn’t opt out of model training, the changed training policy covers all new and revisited chats. That means Anthropic is not automatically training its next model on your entire chat history, unless you go back into the archives and reignite an old thread. Once you do, that old chat is reopened and fair game for future training.

The new privacy policy also arrives with an expansion to Anthropic’s data retention policies. Anthropic increased the amount of time it holds onto user data from 30 days in most situations to a much more extensive five years, whether or not users allow model training on their conversations.

Anthropic’s change in terms applies to consumer-tier users, free as well as paid. Commercial users, such as those licensed through government or educational plans, are not impacted by the change, and conversations from those users will not be used in the company’s model training.

Claude is a favorite AI tool for some software developers who’ve latched onto its abilities as a coding assistant. Since the privacy policy update includes coding projects as well as chat logs, Anthropic could gather a sizable amount of coding information for training purposes with this switch.

Prior to Anthropic updating its privacy policy, Claude was one of the only major chatbots not to use conversations for LLM training automatically. In comparison, the default settings for both OpenAI’s ChatGPT and Google’s Gemini personal accounts allow for model training unless the user chooses to opt out.

Check out WIRED’s full guide to AI training opt-outs for more services where you can request that generative AI not be trained on user data. While choosing to opt out of data training is a boon for personal privacy, especially when dealing with chatbot conversations or other one-on-one interactions, it’s worth keeping in mind that anything you post publicly online, from social media posts to restaurant reviews, will likely be scraped by some startup as training material for its next giant AI model.
