Summary: With the press of a button we can generate hyperrealistic images of people who never existed. Image-generation tools have crossed a critical threshold where we can no longer trust our eyes, blurring the line between the real and the artificial.
↓ Go deeper (8 min)
Take a look at these pictures of three keynote speakers from Airbnb, Uber and Google. Do you recognize any of them?
No? Me neither. The explanation is simple: these people aren’t real. They don’t exist. These pictures, or what seem to be pictures, are images generated by AI. And based on the quality and the level of detail, it’s fair to say they are almost indistinguishable from real photographs.
To some of you this may come as a surprise, to others not so much. Progress has been much faster than we’re prepared for, both as individuals and collectively as a society. Let me offer some perspective.
ThisPersonDoesNotExist.com
In 2019, the website ThisPersonDoesNotExist.com garnered quite some attention1. It was the brainchild of software engineer Philip Wang, who developed it to showcase the capabilities of generative adversarial networks (GANs). Back then, Wang stated in an interview with Motherboard:
“Most people do not understand how good AIs will be at synthesizing images in the future.”
His comments were prescient. Anno 2024, you don’t have to be an engineer to leverage this technology. Anyone with access to Midjourney and a little bit of prompting experience can generate images indistinguishable from real pictures.
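For the technically curious: the GANs behind sites like ThisPersonDoesNotExist work by pitting a generator (which fabricates images) against a discriminator (which tries to tell real from fake), each training against the other. Here’s a minimal numpy sketch of that adversarial objective — the scores are illustrative made-up numbers, not a real model:

```python
import numpy as np

# Sketch of the GAN objective: the discriminator D tries to tell real
# photos from generated ones; the generator G tries to make D's job
# impossible. The scores below are hypothetical, for illustration only.

def bce(pred, target):
    # Binary cross-entropy: the loss both players optimize.
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

d_on_real = np.array([0.90, 0.80, 0.95])  # D's "realness" scores on real photos
d_on_fake = np.array([0.10, 0.20, 0.05])  # D's scores on generated images

# Discriminator loss: wants real -> 1, fake -> 0 (low loss = D is winning).
d_loss = bce(d_on_real, np.ones(3)) + bce(d_on_fake, np.zeros(3))

# Generator loss: wants D to score its fakes as real (high = D still wins).
g_loss = bce(d_on_fake, np.ones(3))

print(f"D loss: {d_loss:.2f}, G loss: {g_loss:.2f}")
```

Training alternates between the two: each step the discriminator gets a little better at spotting fakes, which forces the generator to produce something a little more convincing — until, at sufficient scale, the fakes fool humans too.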
Sure, Adobe Photoshop had already blurred the line between what’s real and what’s not online. But generative AI has introduced a new phenomenon entirely: the ability to generate hyperrealistic images of people who never existed in the first place. Not only is it profoundly strange, it’s also something the human brain isn’t designed to comprehend.
Inattention blindness
To be clear, AI-generated images aren’t always perfect, but the flaws are becoming harder to pick up on. Take the keynote speakers, for example. Upon closer examination, there’s local weirdness that you can spot.
It’s easy to overlook, though. Because the images are of such high quality, they trick our brain into a form of inattention blindness or, as I like to call it, AI-blindness.
Seeing what’s possible today, it’s clear we’ve crossed a critical threshold: we can no longer trust our eyes. Or our ears, for that matter. This has far-reaching consequences for how we consume information online. It’s one thing to hear something on the news and choose not to believe it, but it’s quite another to see an image or video of a person or event and not be able to rely on your senses to determine whether the news is true or false.
It’s already happening
This is what’s known as ‘hyperreality’, a condition where people can’t tell the difference between what’s real and artificial anymore. It’s the connecting thread that runs through all my writing.
One of the biggest issues is that it creates plausible deniability. As people become more aware of how easy it is to fake imagery, as well as sound and video, bad actors can weaponize that skepticism, claiming the real to be fake and vice versa. This is not a future-thing but a now-thing: it is already happening.
Another cumulative effect is that some people will prefer the artificial over the authentic. They’ll choose the ‘seamlessness’ of having an AI lover over the ‘messiness’ of navigating a relationship with an actual person. This might sound dystopian, but people are already doing it. When AI companionship app Replika surveyed users who were in a romantic relationship with their AI, they found that people considered it to be a relationship indeed:
“They would talk about their lives, catch up after a long day, vent about the difficulties they experienced, confide in each other, go on adventures together, receive emotional support, even argue sometimes — in other words, their experience went beyond intimate texting and engendered a deeper sense of connection.”
Welcome to hyperreality.
Alternatively, you can visit the website www.thispersonexists.net. It shows the humans whose faces were used to train the GAN behind thispersondoesnotexist.com.
At this point cyberpunk is not sci-fi but contemporary.
Crazy that the ThisPersonDoesNotExist page is from 2019. Feels like yesterday.
And the prediction was right: it is super easy nowadays to create realistic images like these. And on top of that, even with open models like FLUX.1 Dev.
As much as I appreciate the progress, the rise of deepfakes is super scary.