Deepfakes produced by machine learning systems have become remarkably convincing in the space of a few years, so much so that ordinary people are now almost unable to tell these synthetic images apart from real ones.
At least, that is what emerges from a small study by the British biometrics company iProov, spotted by TheNextWeb. The firm asked 2,000 British and American citizens to take a test in which they reviewed a series of images and tried to identify those generated by generative AI models. And the results were even more catastrophic than expected.
Before taking the test, the participants were mostly quite sure of themselves: 60% said they were confident in their ability to spot the fakes. Yet in the end, 99.9% were fooled by at least one of these synthetic images!
Admittedly, this is not a large-scale study. In addition, the iProov test is very short and therefore not especially representative from a statistical point of view. This figure should therefore be taken with a grain of salt.
Even so, it points to a particularly alarming trend. Barely two years ago, American researchers behind a similar study found that participants failed to identify synthetic faces in 48.2% of cases. That figure was already worrying at the time, and it has only grown since, with everything that implies for the general public.
Generative AI, for better and for worse
This trend is largely driven by the democratization of generative AI. Anyone can now produce potentially misleading images in a few clicks, with no particular technical skills. The rise of these easily accessible tools is fueling a veritable tsunami of deepfakes, sometimes humorous and good-natured, but also liable to feed targeted disinformation campaigns, with sometimes enormous consequences.
One example is the case of the "fake Brad Pitt", where a con artist extracted several hundred thousand euros from a Frenchwoman by impersonating the American superstar with the help of tools of this kind. Even if the poor victim became the laughing stock of social networks for her supposed naivety, iProov's survey clearly shows that it is becoming increasingly difficult to guard against this type of scam.
More than ever, it is therefore crucial to pay close attention to the origin of images shared on the web, especially on social networks where these fakes can quickly spread beyond any control.
If you want to try the experiment yourself, the short iProov test is available at this address. Will you be able to identify all the fake faces without making a mistake?