People Like You
Contemporary Figures of Personalisation

Personalisation is changing many parts of contemporary life, from the way we shop and communicate to the kinds of public services we access. We are told that purchases, experiences, treatments, and interactions can all be customised to an optimum.

As a group of scientists, sociologists, anthropologists and artists, we are exploring how personalisation actually works. What are optimum outcomes? Do personalising practices have unintended consequences?

We argue that personalisation is not restricted to a single area of life and that personalised practices develop, interact and move between different sites and times. The project is split into four areas: personalised medicine and care; data science; digital cultures; and interactive arts practices.

People Like You: Contemporary Figures of Personalisation is funded by a Collaborative Award in the Medical Humanities and Social Sciences from The Wellcome Trust, 2018–2022.



Scott Wark

16 July 2019



One of this project’s lines of inquiry is to ask who the “person” is in “personalisation”. This question raises others: Is personalisation actually more personal? Are personalised services about persons, or do they respond to other pressures? The question also resonates differently across the disciplines we work in. In health, it might invoke the promise of more effective medicine; in data science, the problem of indexing data to persons. In digital culture, though, it immediately invokes more sinister—or at least more ambiguous—scenarios, for me at least. When distributed online services are personalised, how are they using the “person”, and to whose benefit? Put another way: whose person is it anyway?


What got me thinking about these differences was a recently released report on the use of facial recognition technologies by police forces in the United Kingdom. The Metropolitan Police in London have been conducting a series of trials in which this technology is deployed to assist in crime prevention. Other forces around the country, including South Wales and Leicester, have also deployed it. These trials have been contentious, drawing criticism from academics and rights groups, and even a lawsuit. As academics have noted elsewhere, these systems particularly struggle with people with darker skin, which they have difficulty processing and recognising. The report also got me thinking about the different, and often conflicting, meanings of the “person” part of personalisation.


Facial recognition is a form of personalisation. It takes an image, either from a database—in the case of your Facebook photos—or from a video feed—the Met’s system is known as “Live Facial Recognition”—and processes it to link it to a profile. Online, this process makes it easier to tag photographs, though there are cases in which commercial facial recognition systems have used datasets of images extracted from public webpages to “train” their algorithms. The Live Facial Recognition trials are controversial because they’re seen as a form of “surveillance creep”: a further intrusion of surveillance into our lives. Asking why is instructive.


The police claim that they are justified in using this technology because they operate it in public and because it will make the public safer. The risk that the algorithms underlying these systems might actually reproduce particular biases built into their datasets, or exacerbate problems with accuracy across different skin tones, challenges these claims. These systems are also yet to be governed by adequate regulation. But such issues only partly explain why this technology has proven so controversial. Facial recognition technologies may also be controversial because they create a conflict between different conceptions of the “person” operating in different domains.


To get a little abstract for a moment, facial recognition technology creates an interface between different versions of our “person”. When we’re walking down the street, we’re in public. As more people should perhaps realise, we’re in public when we’re online, too. But the person I am on the street and the person I am online aren’t the same. And neither is the same as the one the government constructs a profile of when I interact with it—when I’m taxed, say, or order a passport. The controversy surrounding facial recognition technology arises, I think, because it translates a data-driven form of image processing from one domain—online—to another: the street. It translates a form of indexing, of linking one kind of person to another, from the domain of digital culture into the domain of everyday life.


Suddenly, data processing techniques that I might be able to put up with in low-stakes, online situations in exchange for free access to a social media platform have their stakes raised. The kind of person I thought I could be on the street is overlaid by another: the kind of person I am when I’m interfacing with the government. If I think about it—and maybe not all of us will—this changes the relative anonymity I might otherwise have expected as just another “person on the street”. This was made clear by the case of a man who was stopped during one facial recognition trial for attempting to hide his face from the cameras, ending up with a fine and his face in the press for his troubles. Whether or not I’m interfacing with the government, facial recognition means that the government is interfacing with me.


In the end, we might gloss the controversy created by facial recognition by saying this: we seem to have tacitly decided, as a society, to accept a little online tracking in exchange for access to different—even multiple—modes of personhood. Unlike online services, facial recognition offers no opt-out. Admittedly, the digital services we habitually use are so complicated and multiple that opting out of tracking is impracticable. But their complexity, and the sheer weight of data processed on the way to producing digital culture, mean that, in practice, it’s easy to go unnoticed online. We know we have to give up our data in this exchange. Public facial recognition is a form of surveillance creep, and it has rightly alarmed rights organisations and privacy advocates. This is not only because we don’t want to be watched. After all, we consent to being watched online—to having our data collected—in exchange for particular services. Rather, it’s because it produces a person who is me, but who isn’t mine. The why of “Why am I being tracked?” merges with a who: both “Who is tracking me?” and “Who is being tracked? Which me?”


In writing this, I don’t mean to suggest that this abstract reflection is more incisive or important than other critiques of facial recognition technology. All I want to suggest is that recognising which “person” is operating in a particular domain can help us get a better handle on these kinds of controversies. After all, some persons have much more freedom in public than others. Some are more likely to be targeted for the colour of their skin, how respectable they seem, how they talk, what they’re wearing, even how they walk. In the context of asking who the “person” is in “personalisation”, what this controversy shows us is that what “person” means depends not only on the question’s context, but on the ends to which a “person” is put. Amongst other things, what’s at stake in technologies like these is the question of whose person a particular person is—particularly when it’s nominally mine.


The question, Whose person is it anyway?, is a defining one for digital culture. If recent public concern over privacy, data security, and anonymity teaches us anything, it’s that this will be a defining question for new health technologies and data science practices, too.


Competition Winners Announced

We are pleased to announce the winning entries in the competition on the theme of People Like You.

The three judges were Celia Lury, Martin Tironi and Nina Wakeford. They were impressed by the range of ways in which the entries responded to the provocation posed by the competition.

The ‘People Like You’ project team want to thank everyone who submitted an entry. The entries have helped us think about what personalisation means and will inform later stages of our research. We will be writing a blog post to describe the process of designing and running the competition.

You can see the winning entries on our competition website.

Many congratulations to all of our winners:

Carolyn Meyer – First Prize

Sophie Wood – Second Prize

Clement O’Donovan – Third Prize

Mariam Menteshashvili – People’s Choice Prize


Day, S. and Lury, C.

Quantified: Biosensing Technologies in Everyday Life, 2016

This chapter argues that tracking involves an increasingly significant and diverse set of techniques in relation to the ongoing transformation of relations between observer and observed, and between observers. These developments include not only the proliferation of individual sensing devices associated with a growing variety of platforms, but also the emergence of new data infrastructures that pool, scale, and link data in ways that promote their repurposing. By means of examples ranging from genes and currencies to social media and the disappearance of an airplane, it is suggested that practices of tracking are creating new public-private distinctions in the dynamic problem space resulting from the analytics that pattern these data. These new distinctions are linked to changing forms of personhood and changing relations between market and state, economy and society.

Day, Sophie E. and Lury, Celia. 2016. Biosensing: Tracking Persons. In: Dawn Nafus, ed. Quantified: Biosensing Technologies in Everyday Life. Cambridge MA: MIT Press, pp. 43-66. ISBN 978-0-262-52875-7 [Book Section]