People Like You
Contemporary Figures of Personalisation


Personalisation is changing many parts of contemporary life, from the way we shop and communicate to the kinds of public services we access. We are told that purchases, experiences, treatments, and interactions can all be customised to an optimum.

As a group of scientists, sociologists, anthropologists and artists, we are exploring how personalisation actually works. What are optimum outcomes? Do personalising practices have unintended consequences?

We argue that personalisation is not restricted to a single area of life and that personalised practices develop, interact and move between different sites and times. The project is split into four areas: personalised medicine and care; data science; digital cultures; interactive arts practices.

People Like You: Contemporary Figures of Personalisation is funded by a Collaborative Award in the Medical Humanities and Social Sciences from The Wellcome Trust, 2018–2022.

Blog

How Do You See Me?

Fiona Johnstone

30 September 2019


Artist Heather Dewey-Hagborg’s new commission for The Photographers’ Gallery, How Do You See Me?, explores the use of algorithms to ‘recognise’ faces. Displayed on a digital media wall in the foyer of the gallery, the work takes the form of a constantly shifting matrix of squares filled with abstract grey contours; within each unit, a small green frame identifies an apparently significant part of the composition. I say ‘apparently’, because the logic of the arrangement is not perceptible to the eye; although the installation purports to represent a human face, there are no traces of anything remotely visually similar to a human visage. At least to the human eye.


To understand How Do You See Me?, and to consider its significance for personalisation, we need to look into the black box of facial recognition systems. As explained by Dewey-Hagborg, speaking at The Photographers’ Gallery’s symposium What does the Data Set Want? in September 2019, a facial recognition system works in two phases: training and deployment. Training requires data: that data is your face, taken from images posted online by yourself or by others (Facebook, for example, as its name suggests, has a vast facial recognition database).


The first step for the algorithm working on this dataset is to detect the outlines of a generic face. This sub-image is passed on to the next phase, where the algorithm identifies significant ‘landmarks’, such as the eyes, nose and mouth, and turns them into feature vectors: numerical representations of the face. For the ‘recognition’ or ‘matching’ stage, the algorithm compares these feature vectors across the dataset, and if the numbers are similar enough, a match is identified – although this might take millions of goes. The similarity of the represented elements remains unintelligible to the human eye, calling a ‘common sense’ understanding of similarity-as-visual-resemblance into question.
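
To make this two-stage logic concrete, here is a minimal sketch in Python of how a ‘matching’ stage compares numbers rather than appearances. It is an illustration only, not the code behind any particular system: detect_face, extract_landmarks and embed are hypothetical stand-ins for the trained components described above, and the 0.6 distance threshold is an arbitrary assumption.

```python
# Illustrative sketch of the two stages described above: a face is detected,
# reduced to a numerical feature vector, and 'recognised' by comparing vectors.
# The detection, landmark and embedding stages are hypothetical placeholders
# standing in for trained models; they are not any real system's API.

from typing import Dict, Optional

import numpy as np


def detect_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: crop the sub-image containing a generic face."""
    raise NotImplementedError("placeholder for a trained face detector")


def extract_landmarks(face: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: locate 'landmarks' (eyes, nose, mouth) as coordinates."""
    raise NotImplementedError("placeholder for a trained landmark model")


def embed(landmarks: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: turn the landmarks into a numerical feature vector."""
    raise NotImplementedError("placeholder for a trained embedding model")


def is_match(vector_a: np.ndarray, vector_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Two faces 'match' if their feature vectors are numerically close enough.
    Nothing here involves visual resemblance as a human would judge it."""
    return float(np.linalg.norm(vector_a - vector_b)) < threshold


def recognise(query_image: np.ndarray,
              database: Dict[str, np.ndarray],
              threshold: float = 0.6) -> Optional[str]:
    """Compare the query face against every stored vector; return the label of
    the closest one if it falls under the (arbitrary, assumed) threshold."""
    query_vector = embed(extract_landmarks(detect_face(query_image)))
    distances = {
        label: float(np.linalg.norm(query_vector - stored_vector))
        for label, stored_vector in database.items()
    }
    best_label = min(distances, key=distances.get, default=None)
    if best_label is not None and distances[best_label] < threshold:
        return best_label
    return None
```

The point the artwork turns on is visible in is_match: ‘similarity’ here is a threshold on a distance between vectors, not a judgement of visual likeness.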


In the western culture of portraiture, the face has traditionally acted as visual shorthand for the individual; through this new technology, the face (read, the person) is transfigured into numeric code, allowing for algorithmic comparison and categorisation across a vast database of other faces/persons that have been similarly processed. How Do You See Me? asks what it means to be ‘recognised’ by AI. What version of ‘you’ emerges from a system like this, and how is it identifiable as such?

Dewey-Hagborg described the project as an attempt to form her own subjectivity in relation to AI, noting that her starting point was a curiosity as to how she might be represented within the structure of the system, and what these abstractions of her ‘self’ might look like. In an attempt to answer these questions, she used an algorithm to build a sequence of images stemming from the same source but varying widely in appearance, working on the hypothesis that eventually one of these pictures would be detected as ‘her’ face. The grid of abstract and figuratively indistinct images on the wall of The Photographers’ Gallery can thus be understood as a loose form of self-representation: the evolution of a set of (non-pictorial) figures that attempt to be detected as a (specific) face.

By interrogating the predominantly visual associations of ‘similarity’, which may once have implied a mimetic ‘likeness’ (with connotations of the pictorial portrait, arguably a dominant technology for the production of persons from the sixteenth to the twentieth century) but which now suggests a statistical correspondence, How Do You See Me? draws attention to changing ideas about how a ‘person’ might be identified and categorised.
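
One way to read the image-making procedure described above is as a search: keep perturbing a source image and retain the variants that the system still detects as the same face, however abstract they become to a human viewer. The sketch below illustrates only that general logic; it is not Dewey-Hagborg’s method, and accepts is a hypothetical stand-in for a complete detection-and-matching system such as the one sketched earlier.

```python
# Illustrative sketch only, not Dewey-Hagborg's method: a naive random search
# that perturbs a source image and keeps the variants that a face-matching
# system (passed in as `accepts`, a hypothetical stand-in) still detects as
# the same face.

from typing import Callable, List

import numpy as np


def generate_variants(source_image: np.ndarray,
                      accepts: Callable[[np.ndarray], bool],
                      steps: int = 1_000_000,
                      noise_scale: float = 0.05) -> List[np.ndarray]:
    """Accumulate images that drift visually away from the source while the
    system continues to 'recognise' them as the same face."""
    rng = np.random.default_rng()
    current = source_image.copy()
    accepted = [current]
    for _ in range(steps):  # the 'millions of goes'
        candidate = np.clip(current + rng.normal(0.0, noise_scale, current.shape), 0.0, 1.0)
        if accepts(candidate):  # still a match, numerically, whatever it looks like to us
            current = candidate
            accepted.append(candidate)
    return accepted
```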


Following her own presentation, Dewey-Hagborg discussed her practice with Daniel Rubinstein (Reader in Philosophy and the Image at Central St Martins). Rubinstein argued that this new technology of image-making can teach us something about contemporary identity. Considering our apparent desire to be ‘recognised’ by our phones, computers, and other smart appliances, Rubinstein suggested that the action of presenting oneself for inspection to a device resembles the dynamics of an S&M relationship where the sub presents themselves to the dom. Rubinstein argued that we want to be surveyed by these technologies, because there is a quasi-erotic pleasure in the abdication of responsibility that this submission entails. Citing Heidegger, Rubinstein argued that technology is not just a tool, but reveals our relation to the world. Life and data are not two separate things (and never have been): we need to stop thinking about them as if they are. The face is already code, and the subject is already algorithmic.


Rubinstein’s provocative remarks certainly provide one answer to the question of why people might choose to ‘submit’ to selected technologies of personalisation. They also help us to think about how our own project addresses personalisation. People Like You promises to ‘put the person back into personalisation’. Whilst this could be taken to imply that there is a single, real person (the individual), we aim instead to consider multiple figurations of the ‘person’ on an equal footing with one another. As Rubinstein’s comments suggest, rather than thinking about this relationship in terms of original and copy (the ‘real’ person or individual and a corresponding ‘algorithmic subject’ produced through personalisation), the ‘person’ is always every bit as constructed a phenomenon as the ‘algorithmic’ subject. Or, to put this another way, rather than taking a liberal notion of personhood for granted as our starting point, we aim to interrogate the contemporary conditions that make multiple different models of personhood simultaneously possible.

Activities

Panel Event

Algorithmic Identities Workshop

On 9 July 2019, the first workshop of the interdisciplinary project “Algorithmic Identities: Issues and reactions to the collection of digital data and algorithmic inferences in everyday life” was held at Senate House, University of London. The project is directed by Martín Tironi, Matías Valderrama and Denis Parra of the Pontifical Catholic University of Chile, in close collaboration with Celia Lury and Scott Wark of the Centre for Interdisciplinary Methodologies at the University of Warwick, who study personalisation in digital culture as part of the project “People Like You: Contemporary Figures of Personalisation”.

The “Algorithmic Identities” project starts from the fact that the Internet and digital innovations of all kinds have opened new ways of configuring, knowing and representing people. If in the 1990s there was a socio-technical imaginary of the Internet as a self-enclosed cyberspace in which anonymity and experimentation with multiple virtual identities prevailed, the growing ubiquity of sensors, smartphones and algorithms in everyday life means that we now find ourselves in a scenario of continuous de- and re-identification and the algorithmic profiling of people. Our identities are increasingly translated into bits of information that are processed to infer and predict individual traits and consumer preferences. Digital platforms such as Google, Spotify or Amazon continually personalise and recommend content and products to us on the basis of complex and commonly opaque algorithmic systems that seek to predict our tastes and desires with great accuracy. In response, artists and activists have problematised the increasing surveillance carried out by these platforms, demanding stronger data protection regulations and developing tactics to disrupt, obfuscate and resist technologies of identification and their possible discriminatory or harmful consequences. Within this debate, however, few studies have addressed how people interpret, feel about and understand the processes of algorithmic profiling and recommendation, leaving it unclear how algorithmic systems operate and intervene in everyday life, and how people respond to the kinds of subjectivities or individual identities proposed to them by these algorithmic processes.

This project seeks to study how personhood is configured in an age of algorithms and digital data. Taking an interdisciplinary approach at the intersection of computing, sociology and design, its general objective is to analyse how people react to, and thematise, the extraction of digital data and algorithmic predictions about their identity. For this purpose, the project is developing an experimental design intervention that combines a qualitative approach with digital data. Given the opaque and inscrutable algorithmic systems of large digital platforms, the project will conduct an experimental intervention by designing a prototype smartphone app, called Big Sister, that collects social media data or free texts uploaded by volunteer participants from Chile and the United Kingdom and provides inferences about their personality and cultural preferences. Through an interactive visualisation, the volunteer participants will be able to explore their profiles and play with the analysed data. Through in-depth interviews with volunteer participants about their experiences of the results and algorithmic predictions of the app, we will analyse how people experience, interpret and problematise the daily generation of digital data, enabling a better understanding of how our digitally mediated identities are shaped and staged in contemporary societies.
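
To give a rough sense of the kind of pipeline being proposed, the sketch below turns uploaded texts into a small profile of inferred ‘traits’ that a visualisation layer could then display back to a participant. It is a toy illustration, not the Big Sister prototype: the trait names, keyword lists and scoring are invented for the example, where a real system would rely on trained models.

```python
# Toy illustration only, not the Big Sister prototype: a hypothetical pipeline
# that turns participant-uploaded texts into a small profile of inferred
# 'traits' for an interactive visualisation to display back to the participant.

from collections import Counter
from dataclasses import dataclass
from typing import Dict, List

# Invented keyword lists for the example; a real system would use trained models.
TRAIT_KEYWORDS = {
    "openness": {"art", "imagine", "curious", "new"},
    "sociability": {"friends", "party", "together", "we"},
}


@dataclass
class Profile:
    participant_id: str
    trait_scores: Dict[str, float]  # trait name -> score between 0 and 1


def infer_profile(participant_id: str, texts: List[str]) -> Profile:
    """Score each trait by the share of words that match its keyword list."""
    words = [w.strip(".,!?").lower() for text in texts for w in text.split()]
    counts = Counter(words)
    total = max(sum(counts.values()), 1)
    scores = {
        trait: sum(counts[w] for w in keywords) / total
        for trait, keywords in TRAIT_KEYWORDS.items()
    }
    return Profile(participant_id, scores)


# The resulting Profile is the object a visualisation would let participants
# explore, question and play with.
print(infer_profile("volunteer-001", ["We went to the art fair with friends."]).trait_scores)
```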

In the workshop, we had the opportunity to discuss the design and development of the Big Sister app, and to make methodological decisions about the interviews on digital traces, which will be carried out towards the end of 2019. We also thought about different ways of conceptualising personal relationships with data and algorithms, from colonialism to kinship, and defined future steps for the project and for strengthening our Chilean-British collaboration. To find out more about the research project and follow its news, see: http://plataformasdt.cl/proyectos/identidades-algoritmicas/ and https://peoplelikeyou.ac.uk/

This project is funded and supported by the “Interdisciplinary Research Competition 2018” of the Vice-Rectorate for Research (VRI) of the Pontificia Universidad Católica de Chile; by the People Like You project, a Collaborative Award in the Medical Humanities and Social Sciences from the Wellcome Trust; and by the Fondecyt project N°1180062, “Datafication of urban environments and individuals: an analysis of the designs, practices and discourses of the production and management of digital data in Chile”.

Publications

Day, S. and Lury, C.

Quantified: Biosensing Technologies in Everyday Life, 2016

This chapter argues that tracking involves an increasingly significant and diverse set of techniques in relation to the ongoing transformation of relations between observer and observed, and between observers. These developments include not only the proliferation of individual sensing devices associated with a growing variety of platforms, but also the emergence of new data infrastructures that pool, scale, and link data in ways that promote their repurposing. By means of examples ranging from genes and currencies to social media and the disappearance of an airplane, it is suggested that practices of tracking are creating new public-private distinctions in the dynamic problem space resulting from the analytics that pattern these data. These new distinctions are linked to changing forms of personhood and changing relations between market and state, economy and society.

Day, Sophie E. and Lury, Celia. 2016. Biosensing: Tracking Persons. In: Dawn Nafus, ed. Quantified: Biosensing Technologies in Everyday Life. Cambridge, MA: MIT Press, pp. 43-66. ISBN 978-0-262-52875-7.