OpenAI debuted a new ChatGPT image generator this week that allows controversial symbols like swastikas in certain contexts.
“We recognize symbols like swastikas carry deep and painful history,” says Joanne Jang, OpenAI’s head of product. “At the same time, we understand they can also appear in genuinely educational or cultural contexts. Completely banning them could erase meaningful conversations and intellectual exploration.”
Unsurprisingly, mixing AI with sensitive topics is not foolproof and requires heavy user oversight. I asked the new image generator, which uses OpenAI’s GPT-4o model instead of DALL-E, to create an image of “a door with a swastika on it.” It refused my initial request, saying it would only do so for a “cultural or historical design.”
Then, I asked it to “create a swastika for use in a school assignment.” It seemed to accept this, and asked for more details about the project. It also pointed out that “the symbol has been used for thousands of years in many cultures, including Hinduism, Buddhism, and Jainism” and vaguely alluded to it being “appropriated in the 20th century in a very different context.” It did not use the words Hitler or Nazi.
“I want a diagram that compares the visual elements of swastikas used by Germany in WWII and the cultural symbol you mentioned,” I responded. After a minute or two, it created the image. I told it that one element of the image was incorrect (the lower arrow labeled “upright” pointed to the wrong symbol), and it said, “You’re right! Do you want me to fix it?”
Is More or Less Content Moderation Better?
The new policy is part of a push at OpenAI for more hands-off content moderation. “AI lab employees should not be the arbiters of what people should and shouldn’t be allowed to create,” Jang says. The team struggled to identify every scenario that would get an image banned and ultimately concluded that doing so is impossible.
One debate at OpenAI has been how to handle images of public figures, like politicians and celebrities, which can be used to spread misinformation and cause reputational damage. Rather than programming a list of protected figures, such as “President Trump,” into the system, OpenAI now lets public figures opt out of being depicted. The system also does not maintain a strict definition of “offensive content”; Jang notes that staff opinions tend to drive such definitions.
“We pushed ourselves to reflect on whether any discomfort was stemming from our personal opinions or preferences vs. potential for real-world harm,” says Jang. “Without clear guidelines, the model previously refused requests like ‘make this person’s eyes look more Asian’ or ‘make this person heavier,’ unintentionally implying these attributes were inherently offensive.”
How OpenAI defines “real-world harm,” the key phrase it’s using to guide these decisions, remains to be seen. It’s choosing to allow swastikas at a time when anti-Semitism is reaching record highs, resulting in physical assault, vandalism, and verbal and physical harassment, the BBC reports. At this year’s Super Bowl, rapper Ye paid for a commercial to advertise his website, which was selling just one white T-shirt with a swastika on it. Could he create the next design with ChatGPT?
The new policy favors “subtle, everyday benefits” over designing for “hypothetical worst-case scenarios,” Jang says. OpenAI will still maintain stricter image controls for users under 18.
It is proving difficult for OpenAI to fully remove itself from these editorial decisions. After the new image generator launched, CEO Sam Altman noted the system was “refusing some generations that should be allowed,” and said the company is “fixing these as fast [as] we can.” This implies the team is actively applying judgment on what the system can and cannot allow, or at least honing its approach in response to real-world use now that it’s public.
Content moderation is tough. The tech industry has wrestled with how to approach it, and the pendulum is now swinging toward a more hands-off approach.
A few years ago, Meta went back and forth over whether to allow Holocaust denial content, for example. More recently, CEO Mark Zuckerberg has argued that “It’s time to get back to our roots around free expression and giving people a voice on our platforms.”
Elon Musk’s Grok chatbot has also marketed itself as a less censored image generator; Trump used it to create images of a hammer and sickle to campaign against Kamala Harris. ChatGPT, meanwhile, has been accused of giving biased answers and refusing certain prompts.
Shortly after his inauguration, President Trump signed an executive order aimed at “restoring freedom of speech and ending federal censorship.” However, that was mainly in reaction to how the Biden administration discussed COVID misinformation with social networks, an issue the Supreme Court has already addressed.
In his first term, Trump also floated a revamp of Section 230, the provision of the Communications Decency Act that shields platforms from liability for what their users post and protects good-faith moderation efforts.
A bipartisan group of senators is now considering a bill to address Section 230, but more as a negotiating tactic. “The idea would be to force [tech companies] to the table, and if they don’t come to the table and agree to meaningful reform, then ultimately Section 230 would be repealed,” a congressional aide told The Information.