ChatGPT celebrity deepfakes are going viral, and there’s only one way to stop them

News Room · Published 29 March 2025 · Last updated 29 March 2025 at 6:37 AM

The new ChatGPT 4o image generation model is the talk of the town, and not only for good reasons. Everyone is marveling at the AI’s impressive new abilities, which include generating legible text in images, turning real photos into fake ones, creating deepfakes of celebrities, and replicating copyrighted content like Studio Ghibli characters. It all happens incredibly fast, with the AI readily responding to follow-up requests.

But some people have been quick to point out the downsides of the new AI image model. The most obvious problem, and one we’re not really talking about, is that ChatGPT has dealt a swift blow to all sorts of content creators, including graphic designers and photographers. Of course, other AI image-generation programs already endanger those professions, and this isn’t a ChatGPT safety issue, either.

The fact that ChatGPT-created images carry no visible watermark to inform unsuspecting viewers that they’re not real photos is a big safety concern. More visible is the Studio Ghibli controversy, which shows that OpenAI is willing to let 4o image generation easily rip off copyrighted content.

The even more troubling aspect of ChatGPT’s new image generation abilities is how easy it is to make deepfakes of celebrities. As an internet user, I find this especially worrying because malicious actors have unfettered access to the tool.

OpenAI has started paying attention to the criticism it has received since the launch of 4o image generation, but it’s not taking much action, especially on the deepfake problem. It turns out the only way to stop someone from using your face with ChatGPT is to opt out with OpenAI.

As I pointed out before, OpenAI never addressed these ChatGPT security matters in its original announcement. But the company retweeted a blog post from OpenAI engineer Joanne Jang explaining the lax security features in ChatGPT 4o image generation. Sam Altman also retweeted the same blog post. Why not publish it on the OpenAI blog if this is the company’s official stance?

Jang, who leads model behavior at OpenAI, took to Substack to explain the lax safety features in ChatGPT 4o image generation. The engineer makes the case for OpenAI giving ChatGPT more freedom so users can unleash their creativity rather than be stopped by the AI’s refusal to generate images based on more drastic safety features.

“Images are visceral,” Jang says, and I definitely agree. “There’s something uniquely powerful and visceral about images; they can deliver unmatched delight and shock. Unlike text, images transcend language barriers and evoke varied emotional responses. They can clarify complex ideas instantly.”

It’s also good to see that OpenAI is more flexible when it comes to certain censorship features. Jang gives an example of how ChatGPT now handles “offensive” content:

When it comes to “offensive” content, we pushed ourselves to reflect on whether any discomfort was stemming from our personal opinions or preferences vs. potential for real-world harm. Without clear guidelines, the model previously refused requests like “make this person’s eyes look more Asian” or “make this person heavier,” unintentionally implying these attributes were inherently offensive.

The blog also covers the use of hate symbols in images and the “stronger protections and tighter guardrails” for people under 18.

What’s more problematic is OpenAI’s openness to allowing ChatGPT to create deepfakes with such ease.

Here’s Jang’s explanation of how ChatGPT 4o image generation handles public figures:

We know it can be tricky with public figures—especially when the lines blur between news, satire, and the interests of the person being depicted. We want our policies to apply fairly and equally to everyone, regardless of their “status.” But rather than be the arbiters of who is “important enough,” we decided to create an opt-out list to allow anyone who can be depicted by our models to decide for themselves.

Remember when Scarlett Johansson called out that deepfake anti-Kanye video that used her face without her permission and asked the government to take action against the use of deepfakes?

Well, ChatGPT makes it easier than ever for anyone to come up with deepfakes showing celebrities in fake photos. I’m not talking about Ghibli-style images showing President Trump announcing the Stargate AI initiative. We all know how to interpret that. I’m talking about AI images that are indiscernible from real photos and can manipulate public opinion. 

Satire has nothing to do with it, either. Those capable of drawing cartoons mocking political figures never needed ChatGPT to do it, and people seeing those images would recognize them as satire, not reality. Now, ChatGPT makes it incredibly easy to generate fake news.

What’s more annoying is that Jang says people who feel “important enough” can opt out. Where? How? Where is the list? Why didn’t OpenAI announce this list before making ChatGPT 4o image generation available to the masses? After all, people have already started using celebrities in their ChatGPT creations, and those celebrities might not like it.

It sure looks like OpenAI is using the new image generation product to introduce much laxer AI safety features than before. I hope that’s not the case, but that’s what it feels like right now. Jang’s blog further confirms that OpenAI won’t necessarily take a stronger safety approach to the 4o image generation tool right away.

Then again, so many AI safety engineers have left OpenAI in the past few years that it makes sense to see the company lower its safety protections. And it’s not just OpenAI adopting a very lax safety policy for AI image models; others have been doing it, too. It’s just that ChatGPT has gone viral for its incredible image-generation powers, so we can’t ignore the safety protocols governing it.
