The new video app from the makers of ChatGPT creates a ‘real risk’ for misinformation and disinformation, AI experts have warned.
Sora, viewed as a rival to TikTok (but where all you see is made by AI), launched last week and has already blessed us with videos including Spongebob dressed as Hitler, and the company’s boss Sam Altman grilling Pikachu’s dead body on a barbecue.
Those examples are obviously not things that really happened, but the software will also depict much more believable scenarios, such as bombs exploding, robberies, and Sam Altman (again) shoplifting from Target in fake CCTV footage.
Kate Miltner, a lecturer in Data, AI, and Society at the University of Sheffield, told Metro: ‘I think that this has a real potential for misuse. The sophistication of these tools makes it difficult for most people to be able to tell the difference between AI-generated content and genuine content.’
The app has guardrails blocking you from making certain violent and explicit content, as well as clips of public figures like Donald Trump, but people are still creating concerning fake footage.
One of the most controversial aspects of the app is the ability to insert real people into the videos, as a ‘cameo’. You can’t just post a photo of your mate at work, however.
Users must upload their own video and voice consenting to it being used, including by other people. OpenAI said that those opting into this would be able to review and delete any videos featuring their likeness, and ‘you can even set preferences for how your cameo behaves—for example, requesting that it always wears a fedora’.
But Dr Miltner said: ‘While average users have some control over their ‘cameos’ in that they can decide who can use and remix their own image, it’s unclear what happens if they change the content permissions at a later time.
‘Furthermore, celebrities and public figures who are already present in the model due to their presence in the training data will have less control over how their image is used. This especially has a real risk for mis-and-disinformation.’
The potential for harassment has already been seen, with tech writer Taylor Lorenz saying her stalker had managed to make AI videos featuring her.
‘It is scary to think what AI is doing to feed my stalker’s delusions,’ she wrote on X. ‘This is a man who has hired photographers to surveil me, shows up at events I am at, impersonates my friends and family members online to gather info, believes I’m sending him secret messages through my writing.’
She said she was thankful to have the option to block and delete unapproved content with her image.
Days after the launch, Sam Altman posted a blog saying some parts of Sora were already being reconsidered, including giving rightsholders more control over how their characters are generated.
He added: ‘Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences. We are going to try sharing some of this revenue with rightsholders who want their characters generated by users. The exact model will take some trial and error to figure out, but we plan to start very soon.’
It’s not just copyright that is a concern, but how convincing the videos are. Until now, while photos could fairly easily be doctored, videos were much harder to fake well.
Now that even experts struggle to tell the difference with AI content, we could face a future where it’s hard to trust anything online.
Dr Miltner said that while Instagram and TikTok were unlikely to be immediately threatened by an AI rival, the platforms could still end up being changed by it as users cross-post.
‘There are already concerns that AI-generated content is gumming up the works, so to speak,’ she said.
‘It’s already hard for human creators to break through in a highly crowded and competitive environment, and AI-generated content could make that even harder.’
Sora has been denounced as a ‘slop’ factory for making low-value clips generated by AI, but Dr Miltner said: ‘Slop is content that is determined to not have much creative value and exists primarily for its creators to monetize.
‘What is slop to one person may not be slop to another. It’s possible that some entertaining content could come out of Sora; it seems that people with early access to the platform are engaging in quite a lot of parody and critique at the moment.’
As AI video becomes more widespread, ‘real’ human content could end up being valued more for its authenticity, she added.
But given that even real creators are constantly pitched ways to AI-ify themselves, such as generating a headshot from an unrelated selfie, even ‘human’ content may often have elements of the unreal.
Sora is currently only available in the US and Canada to those with an invitation code.