Trevor Noah is worried about where things are headed with controversial AI video generators.
The comedian and former Daily Show host said AI video apps like OpenAI’s Sora could be “disastrous” if they continue to use people’s likenesses without permission.
“I have to figure out what they’re doing and how they’re doing it,” he told GeekWire. “But I don’t think it’ll end well when they’re not dealing with permissions.”
We caught up with Noah — Microsoft’s “chief questions officer” — on Thursday after his appearance at the company’s headquarters in Redmond, where he helped launch a new AI education initiative in Washington state.
OpenAI last week rolled out Sora 2, a new version of its AI video-generation system that creates hyper-realistic clips from text prompts or existing footage. The new version adds a “Cameo” feature that allows users to generate videos featuring human likenesses by uploading or referencing existing photos.
The upgrade has made Sora, which is available on an invite-only basis, one of the most viral consumer tech products of 2025 — it’s the top free app on Apple’s App Store.
It’s also drawn intense pushback from major Hollywood talent agencies that have criticized the software for enabling the use of a person’s image or likeness without explicit consent or compensation.
Meanwhile, AI-generated videos depicting deceased celebrities such as Robin Williams and George Carlin have sparked public outrage from their families.
Noah told GeekWire that “this could end up being the most disastrous thing for anyone and everyone involved.”
He referenced Denmark, which recently introduced legislation that would give individuals ownership of their digital likeness.
“I think the U.S. needs to catch up on that ASAP,” Noah said.
Legal experts say the next wave of AI video tools — including those from Google and Meta — will test existing publicity and likeness laws. Kraig Baker, a Seattle-based media attorney with Davis Wright Tremaine, said the problem isn’t likely to be deliberate misuse by advertisers but rather the flood of casual or careless content that includes people’s likenesses, now made possible by AI.
He added that the issue could be especially thorny for deceased public figures whose estates no longer actively manage image rights.
There are broader potential impacts, as New York Times columnist Brian Chen noted: “The tech could represent the end of visual fact — the idea that video could serve as an objective record of reality — as we know it. Society as a whole will have to treat videos with as much skepticism as people already do words.”
OpenAI published a Sora 2 safety document outlining its consent-based approach to likeness. “Only you decide who can use your cameo, and you can revoke access at any time,” the company says. “We also take measures to block depictions of public figures (except those using the cameos feature, of course).”
Sora initially launched with an opt-out policy for copyrighted characters. But in an update, OpenAI CEO Sam Altman said that the company now plans to give “rightsholders more granular control over generation of characters” and establish a revenue model for copyright holders.
The surge of attention on AI video generators is creating opportunity for startups such as Loti, a Seattle company that helps celebrities, politicians, and other high-profile individuals protect their digital likeness.
“Everyone is concerned about how AI will use their likeness and they are looking for trusted tools and partners to help guide them,” said Loti CEO Luke Arrigoni.
He said Loti’s business is “booming right now,” with roughly 30X growth in signups month-over-month. The startup raised $16.2 million earlier this year.