How Do You See Me?

Fiona Johnstone

30 September 2019


Artist Heather Dewey-Hagborg’s new commission for The Photographers’ Gallery, How Do You See Me?, explores the use of algorithms to ‘recognise’ faces. Displayed on a digital media wall in the foyer of the gallery, the work takes the form of a constantly shifting matrix of squares filled with abstract grey contours; within each unit, a small green frame identifies an apparently significant part of the composition. I say ‘apparently’ because the logic of the arrangement is not perceptible to the eye: although the installation purports to represent a human face, it contains no trace of anything visually resembling one. At least not to the human eye.


To understand How Do You See Me?, and to consider its significance for personalisation, we need to look into the black box of facial recognition systems. As explained by Dewey-Hagborg, speaking at The Photographers’ Gallery’s symposium What does the Data Set Want? in September 2019, a facial recognition system works in two phases: training and deployment. Training requires data: that data is your face, taken from images posted online by yourself or by others (Facebook, for example, as its name suggests, has a vast facial recognition database).


The first step for the algorithm working on this dataset is to detect the outline of a generic face. This sub-image is passed to the next phase, where the algorithm identifies significant ‘landmarks’, such as the eyes, nose and mouth, and turns them into feature vectors that can be represented numerically. For the ‘recognition’ or ‘matching’ stage, the algorithm compares multiple figures across the dataset; if the numbers are similar enough, a match is identified – although this might take millions of comparisons. The similarity of the represented elements remains unintelligible to the human eye, calling a ‘common sense’ understanding of similarity-as-visual-resemblance into question.
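The pipeline described above – reduce a detected face to landmark-based feature vectors, then ‘recognise’ by numerical similarity – can be sketched in miniature. The following is an illustrative toy only: the coordinates, distance measure and threshold are all invented for demonstration, and stand in for the learned embeddings a real facial recognition system would use.

```python
import math

def to_feature_vector(landmarks):
    """Flatten a list of (x, y) landmark coordinates into a numeric vector.
    The landmarks (eyes, nose, mouth) stand in for the 'significant
    landmarks' described above; real systems use learned embeddings."""
    return [coord for point in landmarks for coord in point]

def euclidean_distance(a, b):
    """Distance between two feature vectors: a small distance means
    the faces are 'similar' to the algorithm, whatever they look like."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(vec_a, vec_b, threshold=0.1):
    """'Recognition': two faces match if their vectors are close enough.
    The threshold here is arbitrary; deployed systems tune it on data."""
    return euclidean_distance(vec_a, vec_b) < threshold

# Two slightly different captures of the 'same' face, and a different face.
face_a = to_feature_vector([(0.30, 0.40), (0.70, 0.40), (0.50, 0.65)])
face_b = to_feature_vector([(0.31, 0.41), (0.69, 0.40), (0.50, 0.66)])
face_c = to_feature_vector([(0.20, 0.30), (0.80, 0.35), (0.55, 0.80)])

print(is_match(face_a, face_b))  # close vectors: a match
print(is_match(face_a, face_c))  # distant vectors: no match
```

The point of the sketch is simply that ‘similarity’ here is a distance between numbers, not a visual resemblance – which is exactly the disjunction the installation makes visible.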


In the western culture of portraiture the face has traditionally acted as visual shorthand for the individual; through this new technology, the face (read, the person) is transfigured into numeric code, allowing for algorithmic comparison and categorisation across a vast database of other faces/persons that have been similarly processed. How Do You See Me? asks what it means to be ‘recognised’ by AI. What version of ‘you’ emerges from a system like this, and how is it identifiable as such? Dewey-Hagborg described the project as an attempt to form her own subjectivity in relation to AI, noting that her starting point was a curiosity as to how she might be represented within the structure of the system, and what these abstractions of her ‘self’ might look like. In an attempt to answer these questions, she used an algorithm to build a sequence of images stemming from the same source, but varying widely in appearance, working on the hypothesis that eventually one of these pictures would be detected as ‘her’ face. The grid of abstract and figuratively indistinct images on the wall of The Photographers’ Gallery can thus be understood as a loose form of self-representation: the evolution of a set of (non-pictorial) figures that attempt to be detected as a (specific) face. By interrogating the predominantly visual associations of ‘similarity’ – a term which may once have implied a mimetic ‘likeness’ (with connotations of the pictorial portrait, arguably a dominant technology for the production of persons from the sixteenth to the twentieth century), but which now suggests a statistical correspondence – How Do You See Me? draws attention to changing ideas about how a ‘person’ might be identified and categorised.


Following her own presentation, Dewey-Hagborg discussed her practice with Daniel Rubinstein (Reader in Philosophy and the Image at Central St Martins). Rubinstein argued that this new technology of image-making can teach us something about contemporary identity. Considering our apparent desire to be ‘recognised’ by our phones, computers, and other smart appliances, Rubinstein suggested that the action of presenting oneself for inspection to a device resembles the dynamics of an S&M relationship where the sub presents themselves to the dom. Rubinstein argued that we want to be surveyed by these technologies, because there is a quasi-erotic pleasure in the abdication of responsibility that this submission entails. Citing Heidegger, Rubinstein argued that technology is not just a tool, but reveals our relation to the world. Life and data are not two separate things (and never have been): we need to stop thinking about them as if they are. The face is already code, and the subject is already algorithmic.


Rubinstein’s provocative remarks certainly provide one answer to the question of why people might choose to ‘submit’ to selected technologies of personalisation. They also speak directly to our own approach to personalisation. The project People Like You promises to ‘put the person back into personalisation’. Whilst this could be taken to imply that there is a single real person – the individual – we aim instead to consider multiple figurations of the ‘person’ on an equal footing with each other. As Rubinstein’s comments suggest, rather than thinking about this relationship in terms of original and copy (the ‘real’ person or individual and a corresponding ‘algorithmic subject’ produced through personalisation), the ‘person’ is always every bit as constructed a phenomenon as an ‘algorithmic’ subject. Or, to put this another way, rather than taking a liberal notion of personhood for granted as our starting point, our aim is to interrogate the contemporary conditions that make multiple different models of personhood simultaneously possible.