Some of the faces you see on social media, of the bands whose music you listen to and the ones selling you products aren’t of real people.
They don’t even exist.
Artificial intelligence (AI) tools can create images so realistic that people have fallen for scams and even fallen in love.
Knowing how to tell these phoney people apart is becoming an increasingly valuable skill – and it’s one nearly half of us lack.
A new study, published in a Royal Society journal, showed that people spot an AI-generated face only a third of the time.
The 664 participants would have done better with their eyes shut – guessing at random would have got it right 50% of the time.
But things changed when scientists from the universities of Reading, Greenwich, Leeds and Lincoln gave some people AI-detection training.
Even just five minutes of training, which highlighted typical AI-generated quirks like wonky hairlines or missing teeth, improved accuracy.
Typical participants were then able to point out a fake face 51% of the time.
Researchers also encountered ‘super-recognisers’, described as people with a talent for recognising faces.
Before training, that group scored 41% accuracy; afterwards, it rose to 64%.
The findings suggest that faces made by AI systems look more ‘real’ than, well, real people, a phenomenon called hyper-realism, explained the study lead, Dr Katie Gray, of the University of Reading.
‘AI-generated faces tend to look more average than real faces, but participants were more likely to judge faces that look average as real,’ she told Metro.
‘It’s likely that several different factors are working together to make AI-generated faces appear more realistic than real faces.’
Generative AI tools create content by figuring out patterns in pre-existing data and then replicating them.
With human faces, these AI tools have hoovered up tens of thousands of images of real people, so their faces are too realistic for their own good.
In other words, the AI systems produce faces so unbelievably average that they would fail to arouse suspicion.
The five signs a face is AI-generated
But this data can also be an AI’s downfall, said Dr Eilidh Noyes, the study’s co-author at the University of Leeds.
She explained that AI-generated images tend to have five ‘anomalies’:
- Misaligned teeth
- Wonky nose
- Blurred hairlines
- Mismatched or misaligned ears
- Asymmetric eyes
‘There are many factors that contribute to these anomalies,’ said Dr Noyes, a psychologist.
‘But some of them are linked to the data which the algorithms that make the faces are trained with.’
The images in the study were generated by StyleGAN3, an image model trained on a public repository of photographs.
A similar study last year, using other AI models including ChatGPT and DALL-E, found that people struggled to tell real faces from fake ones.
Professor Jeremy Tree, of Swansea University, who was behind the previous research, said Dr Gray’s findings were a ‘little surprising’.
‘Anything that can help stem the somewhat dystopian possibility that AI images will never be identified has got to be good news for humanity,’ he told Metro of the training sessions.
Professor Tree said that even if people are already familiar with a person’s face, such as a celebrity, they still tend to struggle.
‘After all, it’s training ourselves not to be fooled by people we know well, so a loved one, that has greater consequences for our everyday lives,’ he added.
While the study may seem light-hearted, it points to something that troubles cybersecurity experts like Marijus Briedis.
The chief technology officer at NordVPN told Metro that AI is used to create deepfakes, eerily lifelike images and videos used to swindle money, spread misinformation and blackmail people.
‘Deepfakes aren’t just a technical issue, they’re a trust issue,’ Briedis said.
‘A little scepticism goes a long way, and research shows that even a few minutes of awareness can make you far better at spotting when something isn’t quite what it seems.’
