We’ve all been mid-TV binge when the streaming service interrupts our umpteenth-consecutive episode of Star Trek: The Next Generation to ask if we’re still watching. That may be in part designed to keep you from missing the first appearance of the Borg because you fell asleep, but it also helps you ponder if you instead want to get up and do literally anything else. The same thing may be coming to your conversation with a chatbot.
OpenAI said Monday it would start putting “break reminders” into your conversations with ChatGPT. If you’ve been talking to the gen AI chatbot too long — which can contribute to addictive behavior, just like with social media — you’ll get a quick pop-up prompt asking if it’s a good time for a break.
“Instead of measuring success by time spent or clicks, we care more about whether you leave the product having done what you came for,” the company said in a blog post.
(Disclosure: Ziff Davis, this publication’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Whether this change will actually make a difference is hard to say. Dr. Anna Lembke, a psychiatrist and professor at the Stanford University School of Medicine, said social media and tech companies haven’t released data on whether features like this work to deter compulsive behavior. “My clinical experience would say that these kinds of nudges might be helpful for people who aren’t yet seriously addicted to the platform but aren’t really helpful for those who are seriously addicted.”
OpenAI’s changes to ChatGPT arrive as the mental health effects of using AI chatbots come under more scrutiny. Many people are using AI tools and characters as therapists, confiding in them and treating their advice with the same trust as they would that of a medical professional. That can be dangerous, as AI tools can provide wrong and harmful responses.
Another issue is privacy. Your therapist has to keep your conversations private, but OpenAI doesn’t have the same responsibility or right to protect that information in a lawsuit, as CEO Sam Altman acknowledged recently.
Changes to encourage “healthy use” of ChatGPT
Aside from the break suggestions, the changes are less noticeable. Tweaks to OpenAI’s models are intended to make ChatGPT more responsive and helpful when you’re dealing with a serious issue. The company said that in some cases the AI has failed to spot when a user shows signs of delusions or other concerns, and it has not responded appropriately. The developer said it is “continuing to improve our models and [is] developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”
ChatGPT users can expect to see a break-reminder notification if they’re chatting with the app for long stretches of time.
Tools like ChatGPT can encourage delusions because they tend to affirm what people believe and don’t challenge the user’s interpretation of reality. OpenAI even rolled back changes to one of its models a few months ago after it proved to be too sycophantic. “It could definitely contribute to making the delusions worse, making the delusions more entrenched,” Lembke said.
ChatGPT should also become more judicious about giving advice on major life decisions. OpenAI used the example of “should I break up with my boyfriend?” as a prompt where the bot shouldn’t give a straight answer but should instead walk you through questions so you can reach a decision on your own. Those changes are expected soon.
Take care of yourself around chatbots
ChatGPT’s reminders to take breaks may or may not succeed in reducing the time you spend with generative AI. A pop-up asking if you need a break may feel like an annoying interruption to your workflow, but it could give someone who needs it a push to go touch grass.
Lembke said you should keep an eye on how much time you spend with a chatbot, just as you would with other addictive tech like social media. Set aside days when you’ll use them less and days when you won’t use them at all.
“People have to be very intentional about restricting the amount of time, set specific limits,” she said. “Write a specific list of what they intend to do on the platform and try to just do that and not get distracted and go down rabbit holes.”