This person does not exist

Matías Valderrama Barragán

27 June 2022

How does personalisation relate to the existence of the person? Algorithms developed for personalisation intersect in a strange way with algorithms dedicated to the generation of people who do not exist. We find ourselves in a scenario where algorithmic mediation allows not only for the personalisation of various things, but also for the personalised creation of persons. This question comes to mind when thinking about a family of deep learning models called generative adversarial networks (GANs) and the recent enthusiasm about DALL·E mini and the algorithmic generation of synthetic images of the weirdest things.

An example of a generative adversarial network is StyleGAN, developed by research scientists Tero Karras, Samuli Laine and Timo Aila at NVIDIA, the US multinational graphics hardware manufacturer. The researchers published the paper A Style-Based Generator Architecture for Generative Adversarial Networks on arXiv in December 2018. It was quickly picked up by the website Synced (2018), where it was described as a new hyperrealistic face generator. And indeed, the images it generates are so realistic that a human eye cannot tell whether they are computer-generated or not. StyleGAN depends on NVIDIA’s CUDA software, GPUs and Google’s TensorFlow. To train StyleGAN, the researchers created the Flickr-Faces-HQ (FFHQ) database of 70,000 high-quality PNG images of human faces at a resolution of 1024×1024, collected from Flickr and thus inheriting all the biases of that website, although, according to the researchers, the collection presents a large variability in terms of age, ethnicity and image background, as well as accessories (glasses, sunglasses, hats, etc.). The images had been uploaded by Flickr users under Creative Commons licences, which raises questions about their use by NVIDIA researchers. While the researchers published the database openly on GitHub, it is still an NVIDIA-based initiative seeking commercial advantage, so it is questionable whether the researchers respected the users’ non-commercial Creative Commons licences.

In February 2019 Tero Karras and Janne Hellsten created a repository on GitHub with the source code to run StyleGAN in Python. In the same month, the website This Person Does Not Exist was created by Uber engineer Phillip Wang, who used StyleGAN to produce a new artificial human face with every page reload, roughly every two seconds. The website became a huge attraction, receiving millions of visits and being covered by several news articles (e.g. Fleishman, 2019; Hill & White, 2020). As noted in one article, the website “renders hyper-realistic portraits of completely fake people” by bringing two neural networks into competition: the generator and the discriminator. “The generator is given real images which it tries to recreate as best as possible, while the discriminator learns to differentiate between faked images and the originals. After millions-upon-millions of training sessions, the algorithm develops superhuman capabilities for creating copies of the images it was trained on” (Paez, 2019). Beyond the claims of superhuman capabilities around GANs, such models suggest interesting ways of thinking about the generation of synthetic images as a practical accomplishment of social interactions between generative and discriminative agents. The existence of what they generate becomes secondary when the focus is on maximising the ability to create realistic images, which is nothing more than minimising the discriminator’s ability to differentiate between images in the training set and new images. (For a discussion of GANs and their connections to game theory and Bourdieu’s social theory, see Castelle, 2020.)
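The competition described above can be made concrete with the standard GAN objective. The following is a minimal numerical sketch, not StyleGAN’s actual code: the function names and sample scores are invented for illustration, and only the two loss functions of the original minimax formulation are shown.

```python
import numpy as np

# Hypothetical helpers illustrating the standard GAN minimax objective,
# not StyleGAN's implementation. Scores are probabilities in (0, 1)
# assigned by the discriminator, where 1 means "judged real".

def discriminator_loss(d_real, d_fake):
    """The discriminator maximises log D(x) + log(1 - D(G(z))),
    written here as a loss to minimise (hence the leading minus)."""
    return -(np.log(d_real).mean() + np.log(1.0 - d_fake).mean())

def generator_loss(d_fake):
    """The generator tries to push D(G(z)) towards 1, i.e. to fool
    the discriminator; this is the 'non-saturating' -log D(G(z)) form."""
    return -np.log(d_fake).mean()

# A discriminator that confidently separates real from fake has low loss...
confident = discriminator_loss(np.array([0.99, 0.98]), np.array([0.01, 0.02]))
# ...while one the generator has fooled (fakes scored near 0.5) does worse.
fooled = discriminator_loss(np.array([0.99, 0.98]), np.array([0.5, 0.5]))
print(confident < fooled)  # prints True
```

Training alternates between the two: each gradient step for the generator lowers `generator_loss` by raising the discriminator’s scores on fakes, which in turn raises `discriminator_loss`, the competitive dynamic the article describes.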

Our human ability to discriminate images has never been free of technological mediation, but computational models such as GANs certainly raise questions about the possible deceptions they could generate. In interviews, Wang said he set up the website to convince some friends to join him in research on artificial intelligence, but his main motivation was to build a demonstration that would raise awareness of the supposedly revolutionary and dangerous capabilities of the technology (Paez, 2019): “My goal with the site is modest. I want to share with the world and have them understand, instantaneously, what is going on, with just a couple of refreshes.” (Wang in Bishop, 2020). In Wang’s framing, people who are unaware of the capabilities of this and other technologies associated with AI will fall victim to their misuse. His awareness-raising efforts are thus aimed at increasing vigilance and encouraging the public not to fall for fake images on the Internet. Only in this way, Wang said, will the positive uses of GANs for the world be developed.

More recently, new versions inspired by This Person Does Not Exist have emerged, such as This Cat Does Not Exist, This Chemical Does Not Exist, This Rental Does Not Exist, This MP Does Not Exist and This Map Does Not Exist, among many other examples collected on a website created by Kashish Hora named This X Does Not Exist. As the website succinctly puts it, GANs allow us to “create realistic-looking fake versions of almost anything”. On this imaginary, anything could be created synthetically, if the proper data, models and computational capacity were available.

Close to the release of This Person Does Not Exist, Jevin West and Carl Bergstrom at the University of Washington created the website Which Face Is Real, a simple test in which – as the name suggests – the user has to decide which face is real between pairs of images, one obtained from This Person Does Not Exist and the other from the FFHQ database of real people’s faces. Like Wang’s website, the researchers use this test to raise awareness of the potential for synthetic image generation enabled by generative adversarial networks: “Our aim is to make you aware of the ease with which digital identities can be faked, and to help you spot these fakes at a single glance” (West & Bergstrom, 2019). These efforts to raise awareness of GANs’ capabilities turn out to be performative in the sense that they increase the wonder and play around GANs, expanding their potential applications to new domains and creating new non-existent entities.

It is curious that Wang decided to call his website This Person Does Not Exist. Thinking with René Magritte and his famous work The Treachery of Images, one would rather say This Is Not a Person. If the machine is agnostic as to whether persons need to exist, then the personhood status of the image remains unchallenged. And with that, we could speculate about what personality traits, religion, political orientation or preferences this non-existent person would have under further algorithmic processing of the image. In a world of algorithmic personalisation, it becomes crucial to ask about the treachery of synthetic images.

These synthetic images, rather than something completely new, could be considered new versions of pseudonyms or multiple-use names, in which it is not a name that is fictitious but an image itself. More than the impersonation of another person, what becomes increasingly problematic is how algorithmic mediation confronts us with non-existent persons, and the intended and unintended uses this will have. There has been speculation about how synthetic images and videos could affect the porn industry. For example, the journalist Katie Bishop (2020) mentioned the case of This Person Does Not Exist in relation to the proliferation of deepfakes, reflecting on how the future of porn might be to create videos without real performers. But perhaps the most harmful uses of generative adversarial networks such as StyleGAN have been in political propaganda campaigns that create fake accounts posing as real humans to spread disinformation and misinformation and thus distort debate on platforms such as Twitter. For example, users have detected multiple botnets promoting content about cryptocurrencies or political candidates using GAN-generated images as profile pictures, possibly even pulling images from This Person Does Not Exist. These images add legitimacy to fake profiles, given how difficult it is to differentiate the real from the fake. Adding a unique human face to profiles is especially useful for creating convincing personas that amplify the information shared, whether the accounts are managed by humans or are fully or semi-automated. This had already been reported in 2019, when Facebook announced the removal of several accounts with profile pictures created by GANs (Gallagher & Calabrese, 2019). There have also been several reported campaigns manipulating public opinion on social media by spreading false information about the Russian invasion of Ukraine (Collins & Kent, 2022).
According to journalist Ben Collins, Russian troll farm accounts used AI-generated faces to simulate the identities of Ukrainian columnists, framing Ukraine as a failed state. He showed the example of the face of a person confirmed by Facebook to be fake: “She doesn’t really exist.”

Several responses to GANs have emerged. The first aspires to develop new computational models that detect and warn of synthetic images. Such a path falls back into technological solutionism in order to recover a kind of representationalism based on truth versus falsehood. Interestingly, this process already exists within the GAN itself, as mentioned above, with the incorporation of a second neural network in charge of judging or discriminating the reality or falsity of the generated image. Of course, the idea of reality or veracity is here reduced to an operation based on what has previously been recorded and processed as a training set. Another, perhaps less realistic, possibility is not only to raise awareness but also to educate the human eye to detect this kind of synthetic imagery. In this proposal, the viewer is advised to look for distorted backgrounds, vestigial heads or “side demons” near the edges of the image, non-symmetrical ears, malformed accessories, or eyes that sit at exactly the same distance from the centre of the image, among other cues. But perhaps a third possibility is to recognise that it often matters little whether an image is computer-generated or real. Perhaps more than learning to distrust names and text, we need to learn to distrust the representational function of images and videos; in short, to cast suspicion on the existence of the things we see on the Internet. Such a suggestion is nothing new if we think about how research and news reports have warned of the presence of virtual impostors since the 1990s, not without some moral panic. But perhaps imposture acquires new meanings when we consider how large volumes of images are extracted to create other synthetic ones through GANs, reformulating what we consider – or not – to be a person, a cat, a chemical or even an X.
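One of the hand-crafted cues mentioned above, eye placement, can be sketched as a toy check. This is a hypothetical heuristic, not a real detector: the function, coordinates and pixel tolerance are all invented for illustration, and it rests on the observation that aligned GAN face generators tend to place the eyes level and mirror-symmetric about the centre of the frame, a regularity inherited from how the training faces were cropped.

```python
# Toy heuristic (illustrative only): flag a face image whose eye
# landmarks are suspiciously symmetric about the frame's vertical
# midline. The (x, y) pupil coordinates would come from some external
# face-landmark detector; here they are simply passed in.

def eyes_suspiciously_centred(left_eye, right_eye, width=1024, tol=8):
    """Return True when the two pupils are level and their midpoint
    lies on the image's vertical centre line, both to within `tol`
    pixels -- a pattern common in aligned GAN output, rarer in
    casual photographs."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    level = abs(ly - ry) <= tol                       # eyes at same height
    mirrored = abs((lx + rx) / 2 - width / 2) <= tol  # midpoint on centre line
    return level and mirrored

print(eyes_suspiciously_centred((410, 480), (614, 481)))  # prints True
print(eyes_suspiciously_centred((350, 430), (600, 470)))  # prints False
```

Even as a sketch, the heuristic illustrates the limits of the second response: it encodes one regularity of one generator, and fails as soon as the generator stops producing it.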



Bishop, K. (2020, February 7). AI in the adult industry: Porn may soon feature people who don’t exist. The Guardian.

Castelle, M. (2020). The social lives of generative adversarial networks. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 413.

Collins, B., & Kent, J. L. (2022, February 28). Facebook, Twitter remove disinformation accounts targeting Ukrainians. NBC News.

Fleishman, G. (2019, April 30). How to spot the realistic fake people creeping into your timelines. Fast Company.

Gallagher, F., & Calabrese, E. (2019, December 31). Facebook’s latest takedown has a twist—AI-generated profile pictures. ABC News.

Hill, K., & White, J. (2020, November 21). Designed to Deceive: Do These People Look Real to You? The New York Times.

Paez, D. (2019, February 21). ‘This Person Does Not Exist’ Creator Reveals His Site’s Creepy Origin Story. Inverse.

Synced. (2018, December 14). GAN 2.0: NVIDIA’s Hyperrealistic Face Generator. Synced.

West, J., & Bergstrom, C. (2019). Which Face Is Real? Which Face Is Real.