Deep fakes are arguably the most dangerous aspect of AI. It’s now relatively trivial to create fake photos, audio, and even video. See below for deep fakes of Morgan Freeman and Tom Cruise, for example.
But while social media has so far been used as a mechanism for distributing deep fakes, Instagram head Adam Mosseri thinks it can actually play a key role in debunking them …
How deep fakes are created
The main method used to create deep fake videos to date has been an approach known as generative adversarial networks (GANs).
One AI model – the generator – creates fake video clips, while a second model – the discriminator – is shown a mix of fakes and genuine footage and tries to identify which is which. Repeatedly running this process, with each model learning from the other, trains the generator to produce increasingly convincing fakes.
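To make that adversarial loop concrete, here's a minimal sketch in Python using PyTorch. It's purely illustrative – it trains on random vectors standing in for video frames, and the network sizes and hyperparameters are arbitrary choices – but it shows the generator/discriminator tug-of-war, not how any actual deep fake tool is built:

```python
# Minimal GAN training loop (illustrative sketch only).
# Random vectors stand in for real footage; sizes are arbitrary.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # toy dimensions, chosen for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(),
    nn.Linear(128, 1),  # outputs a logit: "real" vs "fake"
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for genuine footage
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to tell real from fake.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to make the discriminator call its fakes "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each pass makes the discriminator a slightly better fake-spotter, which in turn forces the generator to get slightly better at fooling it – the escalation that eventually yields convincing fakes.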
However, diffusion models like DALL-E 2 are now taking over. These are trained by taking real images or footage, progressively adding noise to create a large number of corrupted variations, and teaching the model to reverse that corruption. Text prompts can then be used to describe the result we want, making them far easier to use – and the more people who use them, the more data there is to train them further.
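For comparison, here's a toy denoising-diffusion training loop in the same spirit – again purely illustrative, with made-up sizes, random data standing in for real frames, and no text conditioning (real systems like DALL-E 2 condition the denoiser on text embeddings, which is omitted here):

```python
# Toy denoising-diffusion sketch (illustrative only).
import torch
import torch.nn as nn

T = 100                                    # number of noise steps
betas = torch.linspace(1e-4, 0.02, T)      # noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

data_dim = 64
model = nn.Sequential(                     # predicts the noise that was added
    nn.Linear(data_dim + 1, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    x0 = torch.randn(32, data_dim)         # stand-in for real images/frames
    t = torch.randint(0, T, (32,))         # random corruption level per sample
    eps = torch.randn_like(x0)
    ab = alphas_bar[t].unsqueeze(1)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps   # corrupt real data with noise

    # Train the model to recover the noise, i.e. to reverse the corruption.
    pred = model(torch.cat([xt, t.unsqueeze(1).float() / T], dim=1))
    loss = ((pred - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# At generation time, one starts from pure noise and repeatedly applies the
# model to step the sample back towards something that looks like real data.
```

The key difference from a GAN is that there's no adversary: the model simply learns to undo noise, and generation runs that denoising process in reverse from random static.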
Examples of deep fake videos
Here’s a well-known example of a Morgan Freeman deep fake, created a full three years ago, when the technology was much less sophisticated than it is today:
And another, of Tom Cruise as Iron Man:
Brits may also recognise Martin Lewis, who is well known for offering financial advice, seen here in a deep fake used to promote a crypto scam:
Meta exec Adam Mosseri thinks that social media can actually make things better rather than worse, by helping to flag fake content – though he notes that such labelling isn't perfect, and that we each need to consider the credibility of our sources:
Over the years we’ve become increasingly capable of fabricating realistic images, both still and moving. Jurassic Park blew my mind at age ten, but that was a $63M movie. GoldenEye for N64 was even more impressive to me four years later because it was live. We look back on these media now and they seem crude at best. Whether you’re a bull or a bear on the technology, generative AI is clearly producing content that is difficult to discern from recordings of reality, and improving rapidly.
A friend, @lessin, pushed me maybe ten years ago on the idea that any claim should be assessed not just on its content, but the credibility of the person or institution making that claim. Maybe this happened years ago, but it feels like now is when we are collectively appreciating that it has become more important to consider who is saying a thing than what they are saying when assessing a statement’s validity.
Our role as internet platforms is to label content generated as AI as best we can. But some content will inevitably slip through the cracks, and not all misrepresentations will be generated with AI, so we must also provide context about who is sharing so you can assess for yourself how much you want to trust their content.
It’s going to be increasingly critical that the viewer, or reader, brings a discerning mind when they consume content purporting to be an account or a recording of reality. My advice is to *always* consider who it is that is speaking.
Image: Shamook