OpenAI’s Sora Underscores the Growing Threat of Deepfakes

By News Room
Published 20 January 2026, last updated 12:37 PM

The Brief, October 21, 2025

Trump brings War on Terror to the Americas, the growing threat of deepfakes, the status of the Israel-Hamas cease-fire, and more

When OpenAI released its AI video-generation app, Sora, in September, it promised that “you are in control of your likeness end-to-end.” The app allows users to include themselves and their friends in videos through a feature called “cameos”—the app scans a user’s face and performs a liveness check, providing data to generate a video of the user and to authenticate their consent for friends to use their likeness on the app.

But Reality Defender, a company specializing in identifying deepfakes, says it was able to bypass Sora’s anti-impersonation safeguards within 24 hours. Platforms such as Sora give a “plausible sense of security,” says Reality Defender CEO Ben Colman, despite the fact that “anybody can use completely off-the-shelf tools” to pass authentication as someone else.

Reality Defender’s researchers used publicly available footage of notable individuals, including CEOs and entertainers, from earnings calls and media interviews. The company succeeded in breaching the safeguards with every likeness they attempted to impersonate. Colman argues that “any smart 10th grader” could figure out the tools his company used.

An OpenAI spokesperson said in an emailed statement that “the researchers built a sophisticated deepfake system of CEOs and entertainers to try to bypass those protections, and we’re continually strengthening Sora to make it more resilient against this kind of misuse.”

Sora’s release, and the rapid circumvention of its authentication mechanisms, is a reminder that society is unprepared for the next wave of increasingly realistic, personalized deepfakes. The gap between the advancing technology and lagging regulation leaves individuals on their own to navigate an uncertain informational landscape—and to protect themselves from possible fraud and harassment.

“Platforms absolutely know that this is happening, and absolutely know that they could solve it if they wanted to. But until regulations catch up—we’re seeing the same thing across all social media platforms—they’ll do nothing,” says Colman.

Sora hit 1 million downloads in under five days—faster than ChatGPT, which at the time was the fastest-growing consumer app—despite requiring users to have an invite, according to Bill Peebles, OpenAI’s head of Sora. OpenAI’s release followed a similar offering from Meta called Vibes, which is integrated into the Meta AI app.

The increasing accessibility of convincing deepfakes has alarmed some observers. “The truth is that spotting (deepfakes) by eye is becoming nearly impossible, given rapid advances in text-to-image, text-to-video, and audio cloning capabilities,” Jennifer Ewbank, a former deputy director of digital innovation at the CIA, said in an email.

Regulators have been grappling with how to address deepfakes since at least 2019, when President Trump signed a law requiring the Director of National Intelligence to investigate the use of deepfakes by foreign governments. However, as the accessibility of deepfakes has increased, the focus of legislation has moved closer to home. In May 2025, the Take It Down Act was signed into federal law, prohibiting the online publication of “intimate visual depictions” of minors and of nonconsenting adults, and requiring platforms to take down offending content within 48 hours of a request—but enforcement will only begin in May 2026.

Legislation prohibiting deepfakes can also face legal challenges. “It’s actually really complicated, technically and legally, because there are First Amendment concerns about taking down certain speech,” says Jameson Spivack, deputy director for US policy at the Future of Privacy Forum. In August, a federal judge struck down a California law that aimed to restrict AI-generated deepfake content during elections, following a challenge brought by Elon Musk’s X. As a result, requirements to label AI-generated content are more common than outright bans, says Spivack.

Another promising approach is for platforms to adopt better know-your-customer schemes, says Fred Heiding, a research fellow at Harvard University’s Defense, Emerging Technology, and Strategy Program. Know-your-customer schemes require users of platforms such as Sora to sign in using verified identification, increasing accountability and allowing authorities to trace illegal behavior. But there are trade-offs here, too. “The problem is we really value anonymity in the West,” says Heiding. “That’s good, but anonymity has a cost, and the cost is these things are really difficult to enforce.”

While legislators grapple with the increasing prevalence and realism of deepfakes, individuals and organizations can take steps to protect themselves. Spivack recommends the use of authentication software such as Content Credentials, developed by the Coalition for Content Provenance and Authenticity, which appends metadata about provenance to images and videos. Cameras from Canon and Sony support the watermark, as does the Google Pixel 10. Using such authentication increases trust in genuine images, and undermines fakes.

As the online information landscape changes, making it harder to trust the things we see and hear online, lawmakers and individuals alike must build society’s resilience to fake media. “The more we cultivate that resilience, the harder it becomes for anyone to monopolize our attention and manipulate our trust,” says Ewbank.
