In 2016, NHS England claimed in their personalised medicine strategy,

“if we get our approach right, the NHS will become the first health service in the world to truly embrace personalised medicine. We will create a healthcare system focused on improving health, not just treating illness, able to accurately predict disease and tailor treatments” (NHS England 2016).

Recognising that personalisation was happening across multiple domains, not just healthcare and medicine, our collaborative project has asked how personalisation affects our concepts of the person, and what the consequences are for health and well-being. As the project ends, we briefly look back at the past four and a half years.

The phrase and title of our project, ‘People Like You’, refers to an address and an invitation, for example in a recommendation to buy something (‘people like you buy things like this’). It implies that you are one of a group or category sharing something in common, and therefore you (singular and plural) might welcome this recommendation. We suggest that ‘People Like You’ encapsulates a wider framing of recommendations than the online shopping and browsing that we all know so well. ‘People Like You’ are recommended tailored health treatments and prevention messages, public services, including personalised social care budgets, and social movements (via hashtags inviting ‘likes’, sharing and affiliation).

In a recent book, Michael Schrage situates current algorithmic methods in a longer history of what he calls recommendation engines, for example divination or astrology, and provides details of current methods through case studies such as Facebook (Schrage 2020):

“Recommendation inspires innovation: that serendipitous suggestion—that surprise—not only changes how you see the world, it transforms how you see—and understand—yourself. Successful recommenders promote discovery of the world and one’s self…Recommenders aren’t just about what we might want to buy; they’re about who we might want to become.” (see too Rieder’s Engines of Order, 2020)

Our project team includes anthropologists, sociologists, clinical scientists, and creative practitioners. We have worked on case studies to explore how personalisation happens, and focused on three interconnected Ps which we consider common to personalisation across different sectors: Participation that also allows for the tracking of personal data; Precision in the making of categories of person based on similarities, differences, and preferences; and Prediction (or Prescription) in making these categories relevant and actionable for a variety of purposes.

Our empirical studies have included breast cancer medicine and care, HIV research, digital culture and algorithmic identities, and data science in health studies. Halfway through the project we were disrupted by the pandemic and shifted our collaboration online. Fieldwork for some case studies had to be suspended, and one of us (HW) was re-deployed to the Imperial COVID-19 Response Team. Working on this major health problem created new opportunities to study practices relevant to personalisation (see blogs on lockdown in Italy, the 2 metre rule, shielding, two-by-two, and COVID-19 categories).

Three artists completed residencies: Di Sherlock held conversations in our research sites of personalised cancer medicine and care, from which she crafted poems (Written Portraits); Felicity Allen explored questions of representation and ideas of the self associated with digital culture (Dialogic portraits); Stephanie Posavec explored data science and personalisation in a large health research project, Airwave (Data Murmurations: points in flight).

What have we gleaned from our four-and-a-half-years’ collaboration? Our website documents activities and events, case studies, outputs, artwork, and emerging insights in our blog. In this final reflection we offer a few examples of themes spanning this wide range of work.

The relationship of persons and data has been a consistent theme. Stef Posavec explored this explicitly in her residency with the Airwave study, asking, “how is the original donating person (participant) figured at every step of the collection and research process?” Stef figured participants as ‘data point clouds’, explaining in one of her talks, “we are all a cloud of data points that are just waiting, latent within us, to be activated through data collection”.

Stef shows how data points are selected and organised in the process of data collection and, drawing on conversations with research nurses, for example, attends to those that are not collected: they are “lost… (data) points bouncing away like the particle trails in quantum physics experiments”. One of her informants, a data manager, likened unstructured data in the system to “blobs of binary”.

Felicity Allen also reflected on how a person might be figured in a portrait in the form of a painting, a poem or a data set. As she explained, “In this project I’ve often thought of the brushstroke as a form of data which builds a picture of a person… So what type of brushstroke most resembles data, and whose data is it, the sitter’s or mine?”

This relation between the person and their representation in data or art was tackled explicitly in Di Sherlock’s Written Portraits residency. Di talked about sitters and their response to the poems she created; “the poems were negotiated and edited until they were considered to fit, that is, to constitute a resemblance or likeness that both sitters and poet liked.” This speaks directly to two aspects of liking – similarity and preference – which we have proposed as key elements to personalisation.

We asked about these and other ways of figuring persons in a conference at the end of 2019, Figurations: Persons In/Out of Data. Some of the key papers from this meeting will be published in summer 2022 by Palgrave Macmillan. The collection, and other work we have done, asks who has the power and capacity to develop personalised services, products and offers. Despite contrasting perspectives on personalising practices in different sectors, participants at our public event in September 2021 agreed that personalisation can deepen inequalities and exclusion. In medicine, the translational research that underpins personalisation is based on large collections of data and biological specimens which accumulate in the process of clinical studies. Many groups are underrepresented in these resources because of unequal Participation, and so recommendations emerging from the findings are not necessarily relevant or appropriate. Findings from biobanking are also relevant to the knowledge and recommendations constructed in relation to COVID-19.

Flick explained how, “Working with ‘People Like You’ extended my thinking, from power relations imbued in the individual gaze of a privileged artist, to connect this to the intransigent and godlike oversight industrialised as surveillance. It led me to think that the failures to adequately represent through portraiture may possibly be analogous with the failures to adequately predict through data”. The tension between potentially useful personalisation, where services and medicines may be tailored to the needs of an individual, and potentially damaging surveillance is visible across our work.

Grouping ‘people like you’ based on patterns in data (by preference, similarity, proximity) can be very Precise, and can remain Precise as data sets grow and are linked and as statistical computing power increases. Categories of ‘People Like You’ can be very granular and reflect an almost real-time grouping and regrouping of people, underpinning recommendation algorithms in many sectors. But they do not necessarily enable accurate Prediction. Acting on a pattern to target an advert may not seem particularly sinister, but vulnerable groups have been shown to receive damaging recommendations while missing relevant ones. In medicine, detecting patterns of ‘People Like You’ at group level and using them to prescribe, or proscribe, for individuals is challenging, and in healthcare at least we should be wary of using algorithms to automate decisions.
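Grouping by similarity of this kind can be sketched very simply: a hypothetical nearest-neighbour step of the sort that underlies many recommendation algorithms, with made-up activity counts (the names and numbers below are illustrative, not from any real system):

```python
import numpy as np

# Hypothetical activity profiles: rows are people, columns are counts of
# actions in four product categories (entirely made-up numbers).
profiles = np.array([
    [5.0, 0.0, 1.0, 2.0],   # person A
    [4.0, 1.0, 0.0, 2.0],   # person B
    [0.0, 6.0, 3.0, 0.0],   # person C
])

def cosine(u, v):
    """Cosine similarity: 1.0 means identical direction of activity."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# 'People like person A' = the others, ranked by similarity to A.
sims = [cosine(profiles[0], p) for p in profiles[1:]]
print(sims)  # B's profile points the same way as A's; C's does not
```

The grouping is Precise in the sense that it can be recomputed as new activity arrives, but nothing in the similarity score itself guarantees that a recommendation based on it will be relevant, which is the gap between Precision and Prediction.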

These are just some of the themes of the project. An overview of our approach to the 3Ps of Personalisation is in press with Distinktion. We will post a link on this website to Personalisation: A new political arithmetic? when it is available; it will be part of the special issue emerging from our 2021 workshop, People Like You: A New Political Arithmetic.


Which data best describes ‘you’?

Helen Ward, Roz Redd and Stefanie Posavec

8 July 2022


At the Great Exhibition Road Festival in June 2022, we asked visitors to our art exhibition, Data Murmurations, Points in Flight, to take part in a small game. They were asked, ‘which five datasets/points would paint the most accurate and complete picture of who you are as a person?’

People were presented with 16 types of data and asked to pick just 5. Each type had a colour and shape code so that people could make a visual record of their response. Here are a few responses displayed on the board.

The game proved popular and appealed to children, adults, scientists, artists… all kinds of people. Over the two days of the exhibition, 114 cards were made with a total of 568 stickers (not everyone used all 5 choices). We added up the number of stickers of each type and displayed the totals visually, along with a frequency chart of the whole distribution, grouping the categories into broad areas: health/biomedical, demographic, family history, and social activities.
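The tallying itself is simple; a sketch of the counting and grouping step in Python, with illustrative category names, counts and area mapping rather than our actual data:

```python
from collections import Counter

# One entry per sticker placed on a card (illustrative counts only).
stickers = (["education/employment"] * 60 + ["friend network"] * 58
            + ["DNA"] * 58 + ["gender"] * 20)

counts = Counter(stickers)

# Group categories into broad areas for the frequency chart
# (this mapping is illustrative, not the one used on the board).
areas = {
    "education/employment": "demographic",
    "gender": "demographic",
    "friend network": "social activities",
    "DNA": "health/biomedical",
}
area_totals = Counter()
for category, n in counts.items():
    area_totals[areas[category]] += n

print(counts.most_common(2))
print(dict(area_totals))
```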

People spent a lot of time thinking and sometimes discussing which categories were important. This was a game, not a piece of research, and we didn’t ask for any information about people who participated, so we can’t say if there were any differences by age, for example. But it is still “data”. Not data about the people, but about what they, individually and as a population, chose as important types of data to describe them as a person.


More than half of the people chose education/employment history (dark blue square). More than half picked friend network (orange circle). And more than half picked DNA (blue circle). But only 19 picked all three. Other popular categories were internet browsing/searching history (gold circles) and family tree/genealogy (silver circles). Surprisingly (perhaps), fewer than one in five people chose gender (green circles), and just over a third chose date of birth (green triangles), which is interesting since these are the classic categories that researchers tend to start with when describing people and populations. We talked to several people about their choices after they had finished. Some said they had not picked basic demographic categories because knowing their age or gender did not say much about them as a person; others thought these could be worked out from other things they had selected, like education/employment or internet browsing history. Some thought more abstractly about it: if you know a person’s age, you know so much about what they know about the world. Only 12% of people chose blue stars (bank statements). But one of those people said they wanted to pick it five times because it said everything about them, since they didn’t have any money.

There were some clusters in the data. For example, those who chose education and employment history were less likely than others to pick video streaming history or medical history. Those who chose friend networks were less likely to include address history, gender, family health history, or medical tests. Among those who chose DNA, browsing history, streaming history and gender were not common choices.

Using a method called multiple correspondence analysis to pick apart important variation in the kinds of data that people think are important, we found that those who pick family health background and medical tests (along with health tracking) tend not to think browsing history, digital communications or movement tracking are important, and vice versa. 40% of the variation in choices falls along these divisions. Another significant variation is between those who pick gender or address history and DOB (the more traditional sociodemographic variables), which do not commonly co-occur with picking family tree, health tracking and DNA. This accounts for 10% of the variation in choices. (see chart)
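Under the hood, multiple correspondence analysis amounts to a singular value decomposition of the standardised indicator matrix of choices, and each dimension's share of the total inertia gives figures like the 40% and 10% above. A minimal numpy sketch on toy data (not the festival responses):

```python
import numpy as np

# Toy indicator matrix: rows are respondents, columns are 0/1 flags for
# whether each of three hypothetical data types was picked.
Z = np.array([
    [1, 0, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 1, 1],
], dtype=float)

P = Z / Z.sum()                       # correspondence matrix
r = P.sum(axis=1)                     # row masses
c = P.sum(axis=0)                     # column masses

# Standardised residuals from the independence model r c^T
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
inertia = sv ** 2                     # variation carried by each dimension
explained = inertia / inertia.sum()   # share of variation per dimension
print(np.round(explained, 3))         # leading dimensions dominate, as in MCA plots
```

The first two columns of the scaled singular vectors are what gets plotted as the x and y axes of a chart like ours.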

Chart showing results of the multiple correspondence analysis, with the x axis showing the dimension accounting for most variation: family health background, medical tests and health tracking on the left, and browsing history, digital communications and movement tracking on the right. The y axis shows the other significant dimension, with gender, address history and DOB on the lower part and family tree, health tracking and DNA higher up.

This was of course only a game. But isn’t it interesting how people have such variation in choices? In the People Like You project we have been looking at how the ways that people are grouped, using big data, are changing. Traditional groupings based on sociodemographic variables such as age, gender, social class, are no longer the only way to describe people like you. We are now likely to receive recommendations based on our activities, for example shopping, browsing history, travel, with the argument that ‘people like you buy/like things like this’.

From our little game, it seems that people have very different views about what kinds of data ‘would paint the most accurate and complete picture of who you are as a person’.

This person does not exist

Matías Valderrama Barragán

27 June 2022


How does personalisation relate to the existence of the person? Algorithms developed for personalisation intersect in a strange way with algorithms dedicated to the generation of people who do not exist. We find ourselves in a scenario where algorithmic mediation allows not only for the personalisation of various things, but also for the personalised creation of persons. This question comes to mind when thinking about a class of deep learning models known as generative adversarial networks (GANs) and the recent enthusiasm about Dall-e mini and the algorithmic generation of synthetic images of the weirdest things.

An example of a generative adversarial network is StyleGAN, developed by research scientists Tero Karras, Samuli Laine and Timo Aila at NVIDIA, the US multinational graphics hardware manufacturer. The researchers published the paper A Style-Based Generator Architecture for Generative Adversarial Networks on arXiv in December 2018. It was quickly shared by the website Synced (2018), where it was described as a new hyperrealistic face generator. And indeed, the images it generates are so realistic that a human eye cannot tell whether or not they are computer-generated. StyleGAN depends on NVIDIA’s CUDA software, GPUs and Google’s TensorFlow. To train StyleGAN, the researchers created the database Flickr-Faces-HQ (FFHQ) of 70,000 high-quality PNG images of human faces at a resolution of 1024×1024, collected from Flickr and thus inheriting all the biases of that website, although, according to the researchers, the collection presents large variability in terms of age, ethnicity and image background, as well as accessories (glasses, sunglasses, hats, etc.). The images had been uploaded by Flickr users under Creative Commons licences, which raises questions about their use by NVIDIA researchers. While the researchers published the database openly on GitHub, it is still an NVIDIA-based initiative seeking commercial advantage, so it is questionable whether the researchers respected the users’ non-commercial Creative Commons licences.

In February 2019 Tero Karras and Janne Hellsten created a repository on GitHub with the source code to run StyleGAN in Python. In the same month, the website This Person Does Not Exist was created by Uber engineer Phillip Wang, who used StyleGAN to produce a new artificial human face every two seconds, each time the web page reloads. The website became a huge attraction, receiving millions of visits and being covered by several news articles (e.g. Fleishman, 2019; Hill & White, 2020). As one article noted, the website “renders hyper-realistic portraits of completely fake people” by bringing two neural networks into competition: the generator and the discriminator. “The generator is given real images which it tries to recreate as best as possible, while the discriminator learns to differentiate between faked images and the originals. After millions-upon-millions of training sessions, the algorithm develops superhuman capabilities for creating copies of the images it was trained on” (Paez, 2019). Beyond the claims of superhuman capabilities around GANs, such models suggest interesting ways of thinking about the generation of synthetic images as a practical accomplishment of social interactions between generative and discriminative agents. The existence of what they generate becomes secondary when the focus is on maximising the ability to create realistic images, which is nothing more than minimising the discriminator’s ability to differentiate between images in the training set and new images. (For a discussion of GANs and their connections to game theory and Bourdieu’s social theory, see Castelle, 2020.)
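The competition Paez describes can be written down as two opposed loss functions. A minimal numpy sketch of the standard GAN objective (illustrative discriminator scores; StyleGAN's actual training adds regularisers and architectural tricks on top of this):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """The discriminator wants d(real image) -> 1 and d(fake image) -> 0."""
    return float(-(np.log(d_real) + np.log(1.0 - d_fake)).mean())

def generator_loss(d_fake):
    """The generator wants the discriminator fooled: d(fake image) -> 1
    (the 'non-saturating' form commonly used in practice)."""
    return float(-np.log(d_fake).mean())

# Discriminator scores in (0, 1) for a batch of real and generated images.
d_real = np.array([0.9, 0.8])   # confident the real images are real
d_fake = np.array([0.2, 0.1])   # confident the fakes are fake

print(discriminator_loss(d_real, d_fake))   # low: discriminator is winning
print(generator_loss(d_fake))               # high: generator is losing

# As the generator improves, its images score higher and its loss falls.
print(generator_loss(np.array([0.8, 0.9])))
```

Training alternates gradient steps on the two losses; the generator's only route to a low loss is producing images the discriminator cannot tell apart from the training set.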

Our human ability to discriminate images has never been free of technological mediations, but computational models such as GANs certainly raise questions about the possible deceptions they could generate. In interviews, Wang said he set up the website to convince some friends to join him in research on artificial intelligence. But his main motivation was primarily to make a demonstration that would raise awareness of the supposedly revolutionary and dangerous capabilities of the technology (Paez, 2019): “My goal with the site is modest. I want to share with the world and have them understand, instantaneously, what is going on, with just a couple of refreshes” (Wang in Bishop, 2020). In Wang’s framing, people who are unaware of the capabilities of this and other technologies associated with AI will fall victim to their misuse. So his awareness-raising efforts are aimed at increasing vigilance and encouraging the public not to fall for fake images on the Internet. Only in this way, Wang said, will the positive uses of GANs for the world be developed.

More recently, new versions inspired by This Person Does Not Exist have emerged, such as This Cat Does Not Exist, This Chemical Does Not Exist, This Rental Does Not Exist, This MP Does Not Exist and This Map Does Not Exist, among many other examples that can be found on a website created by Kashish Hora named This X Does Not Exist. As the website succinctly puts it, GANs allow us to “create realistic-looking fake versions of almost anything”. In this imaginary, anything could be created synthetically, given the proper data, models and computational capacity.

Close to the release of This Person Does Not Exist, Jevin West and Carl Bergstrom at the University of Washington created the website Which Face Is Real, a simple test in which – as the name suggests – the user has to decide which face is real between pairs of images, one obtained from This Person Does Not Exist and the other from the FFHQ database of real people’s faces. Like Wang’s website, the researchers use this test to raise awareness of the potential for synthetic image generation enabled by generative adversarial networks: “Our aim is to make you aware of the ease with which digital identities can be faked, and to help you spot these fakes at a single glance” (West & Bergstrom, 2019). These efforts to raise awareness of GANs’ capabilities turn out to be performative in the sense that they increase the wonder or play around GANs, expanding their potential applications to new domains and creating new non-existent entities.

It is curious that Wang decided to call his website This Person Does Not Exist. If we think with René Magritte and his famous work The Treachery of Images, one would rather say This Is Not a Person. If the machine is agnostic as to whether persons need to exist, then the personhood status of the image remains unchallenged. And with that, we could speculate what personality traits, religion, political orientation or preferences this non-existent person would have with further algorithmic processing of the image. In a world of algorithmic personalisation, it becomes crucial to ask about the treachery of synthetic images.

These synthetic images, rather than something completely new, could be considered new versions of pseudonyms or multiple-use names, in which it is not a name that is fictitious but an image itself. More than the impersonation of another person, what becomes increasingly problematic is how algorithmic mediation confronts us with non-existent persons, and the intended and unintended uses this will have. There has been speculation about how synthetic images and videos could affect the porn industry. For example, the journalist Katie Bishop (2020) mentioned the case of This Person Does Not Exist in relation to the proliferation of deep fakes, reflecting on how the future of porn might be to create videos without real performers. But perhaps the most harmful uses of generative adversarial networks such as StyleGAN have been in political propaganda campaigns, creating fake accounts that pose as real humans to spread false or misleading information and thus twist debate on platforms such as Twitter. For example, users have detected multiple botnets promoting content about cryptocurrencies or political candidates using GAN-generated images as profile pictures, possibly even pulling images from This Person Does Not Exist. These images add legitimacy to fake profiles, given how difficult it is to differentiate the real from the fake. Adding a unique human face is especially useful for creating personas that are convincing and amplify the information shared, whether they are managed by humans or are partly or fully automated. This had already been reported in 2019, when Facebook announced the removal of several accounts with profile pictures created by GANs (Gallagher & Calabrese, 2019). There have also been several reported attacks manipulating public opinion on social media by spreading false information about the Russian invasion of Ukraine (Collins & Kent, 2022).
According to journalist Ben Collins, Russian troll farm accounts used AI-generated faces to simulate the identities of Ukrainian columnists, framing Ukraine as a failed state. He showed one example of a face confirmed by Facebook to be fake: “She doesn’t really exist.”

Two responses to GANs have emerged. The first aspires to develop new computational models that detect and warn of synthetic images. Such a path risks falling back into technological solutionism in order to recover a kind of representationalism based on truth versus falsehood. Interestingly, this process already exists within GANs themselves, as mentioned above, with the incorporation of a second neural network in charge of judging or discriminating the reality or falsity of the generated image. Of course, the idea of reality or veracity is reduced to an operation based on what has been previously recorded and processed as a training set. Another, perhaps less realistic, possibility is not only to raise awareness but also to educate the human eye to detect this kind of synthetic imagery. In this proposal, the viewer is advised to look for distorted backgrounds, vestigial heads or “side demons” near the edges of the image, non-symmetrical ears and malformed accessories, or to check that the eyes are centred at the same distance from the centre of the image, among other cues. But perhaps a third possibility is to recognise that it often matters little whether an image is computer-generated or real. Perhaps more than learning to distrust names and text, we need to learn to distrust the representational function of images and videos; in short, to cast suspicion on the existence of the things we see on the Internet. Such a suggestion is nothing new if we think about how research and news reports have warned of the presence of virtual impostors since the 1990s, not without some moral panic. But perhaps imposture acquires new meanings when we consider how large volumes of images are extracted to create other synthetic ones through GANs, reformulating what we consider – or not – to be a person, a cat, a chemical or even an X.



Bishop, K. (2020, February 7). AI in the adult industry: Porn may soon feature people who don’t exist. The Guardian.

Castelle, M. (2020). The social lives of generative adversarial networks. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 413.

Collins, B., & Kent, J. L. (2022, February 28). Facebook, Twitter remove disinformation accounts targeting Ukrainians. NBC News.

Fleishman, G. (2019, April 30). How to spot the realistic fake people creeping into your timelines. Fast Company.

Gallagher, F., & Calabrese, E. (2019, December 31). Facebook’s latest takedown has a twist—AI-generated profile pictures. ABC News.

Hill, K., & White, J. (2020, November 21). Designed to Deceive: Do These People Look Real to You? The New York Times.

Paez, D. (2019, February 21). ‘This Person Does Not Exist’ Creator Reveals His Site’s Creepy Origin Story. Inverse.

Synced. (2018, December 14). GAN 2.0: NVIDIA’s Hyperrealistic Face Generator. Synced.

West, J., & Bergstrom, C. (2019). Which Face Is Real?



Since the start of the project four years ago, People Like You has been looking at how qualitative and quantitative relationships are being brought together in novel ways. The blog series explores contemporary practices and processes of personalisation in different fields, ranging from digital media to public health, data visualisation, cancer treatment and portraiture. The project has looked at how society is contained in different formations and positions of personhood, and at how persons can form societies in a multiplicity of ways. A sustained claim throughout the project has been that how persons, or individuals, and societies are brought together in relations has changed significantly since the emergence of the nation state and the very particular political arithmetic that accompanied it, an arithmetic deployed in different spheres of public life for governing and modulating populations.

Race, gender and class have been at the heart of such constructions and classifications of difference in Western societies (Haraway 1991); that is, nominal ways of categorising elements or units according to socially and politically defined qualities or kinds. Although qualitative in their formation and construction, these systems of difference have always worked hand in hand with systems of quantification and ordering: ratios, proportions, ranks, commensuration, and so forth.

In a blog post for the project back in March 2019, Celia Lury noted that in the last few years we have seen the emergence of a range of categories that overflow the traditional gender categories of women and men. Helen Ward and Rozlyn Redd also pointed to the formation of new health or medical categories as a consequence of the Covid pandemic. And in a recent paper, Thao Phan and Scott Wark looked into how the deployment of ethnic affinity categories in machine learning enables a redefinition of what we understand as race.

These new modes of categorisation are disconcerting, and we often find ourselves in Goffmanian natural experiments: when using gender-neutral toilets and encountering persons of genders other than the one we identify with, for example, we wonder for a second whether we have chosen the wrong toilet by mistake. Similarly disconcerting and embarrassing is when we incorrectly address someone using the wrong pronoun, and we can hear the traditional binary category distinctions rattling and shaking silently, haunting us in the background.

We also have the possibility of inhabiting other forms of collectives and plural formations that exceed gender, class or race individually: of being commensurate and grouped together with people we never thought we would have affinities with, or with people we always wanted to have affinities with but were never grouped with before. A recent controversial #intersectionality TikTok video with 217.1k likes and 20.3k comments stated:


‘Cc: I’m definitely going to get a lot of hate for this but

Can we talk how men of color & white women are kinda the same?

 I know they are not the same but idk the way they both don’t understand intersectionality,

Or the way they weaponize their privilege while simultaneously victimizing & centering themselves?

Just a thought’


At other times, we find ourselves uncategorisable, in desperate need of a category to hand, or a sense of belonging, if the two have anything in common. A Twitter user during the pandemic said:


I feel like almost all of my closest professional friends are pro-vax, anti-vax mandate, mask-sceptical, anti-school closure (My governor is, too). This set of positions should have a name.


This social shift towards generating new modes of categorisation and other modes of addressing people is quite clear in the use of inclusive language, also referred to as neutral or generic language, which is spreading in different countries, particularly those with languages still structured around masculine and feminine grammatical forms. This is not only a shift towards incorporating, referencing, signalling other gender categories beyond the traditional binary distinction between female and male, but also a shift towards rearticulating the relationships between kinds or categories in relation to singular and plural formations – redefining the relationships between qualities and quantities. In this sense at least, grammar might have much more in common with political arithmetic than we might initially be willing to accept.

These relatively new uses of the plural and singular in relation to gender categories do not entail an assertion of the singular over the plural, or an obliteration of singular forms by plural formations, but a very different coming together of the relationship between individuals and their identities on the one hand, and the collectives or plurals they become part of on the other. Who and what is in the containment of singular and plural grammatical formations and language uses is certainly up for grabs.

Through inclusive language, a singular person can be designated to partake in a plurality of unspecified genders as when the pronoun “they” is used in its singular form in English; or when the singular pronoun “elle” is used in a similar way in Spanish. Additionally, with the use of genderless plurals like “nosotres” (we) in Spanish, an indefinite multiplicity of individual genders (and arguably other identitarian expressions) can be contained as part of a collective. The “dividual” Melanesians that Marilyn Strathern describes (1988) would certainly relate to and understand these identifications and belongings.

The example I want to highlight is inclusive language and its use in Argentinean Spanish, as the use of such linguistic forms has been highly contested and controversial. To think through these changes, I had the chance to speak with two linguists in Argentina, Sofia De Mauro and Mara Glozman. Both are currently working on understanding and examining inclusive language discursive formations and use in the country.

In the first interview I spoke with Sofia De Mauro, editor of a recent essay collection entitled Degenerate Anthology: A Cartography of “Inclusive” Language. She is a post-doctoral Research Fellow at the National Scientific and Technical Research Council of Argentina and a Lecturer in Social Linguistics at the Faculty of Philosophy and Humanities of the University of Cordoba, Argentina.

For the second interview I had the chance to speak with Mara Glozman, a researcher at the National Scientific and Technical Research Council of Argentina and a Professor in Linguistics at the National University of Hurlingham (UNAHUR) in Buenos Aires. She has also been an advisor to Monica Macha, an Argentinean MP who has proposed legislation so that the use of inclusive language is officially accepted and recognised as a right of expression in the country's public institutions.

As Amia Srinivasan notes in her insightful and fascinating piece on pronouns, language is nothing more (and nothing less) than a public system of meaning, a system that has been implicated in establishing particular relationships between individuals and groups, the one and the many, the singular and the plural as much as arithmetic has. The rules of grammar only make sense and work effectively if they are publicly shared and collective, and they become politically, if not evolutionarily (!) embedded and reproduced in the grammatical structure of a given language and its use at a given point in time.

In language use, both qualities (the gender attributed to persons and things, for example) and quantities (whether words refer to one person or to more than one in the use of the plural and/or singular) work together through specified and agreed orderings, designations, and rules. In English, for example, there are two grammatical categories of number: the singular (usually a default quantity of one) and the plural (more than one). The grammatical categories of gender in English are binary (she/he) only in the singular form. How plural and singular forms combine with gender forms varies across languages. In the grammatical Spanish advocated by the Royal Spanish Academy, an institution that promotes the linguistic regulation and standardisation of the Spanish language across different cultures and countries, both singular and plural pronouns are gendered, whilst nouns are also attributed binary genders, although somewhat arbitrarily.

Spanish also lacks a third person singular (or what has been defined as a third gender) that refers generically, as in English, to all non-human things and creatures (often excepting pets and sometimes ships) whenever the gender happens to be unknown: “it”. In Spanish, in contrast to English, every noun is designated with an arbitrary gender, even when inanimate. Tables, beaches, giraffes and imitations in Spanish are for example feminine. Water, monsters, hugs and heat are on the other hand masculine.

As in English, in Spanish the third person singular (ella/el; she/he) denotes a binary gender distinction. Unlike in English, in Spanish the plural pronouns “we” (nosotros/nosotras), “you all” (vosotros/vosotras), and “they” (ellos/ellas) need to be changed to match the gender of the group being spoken about or referred to. This is achieved by using the letter “a” when referring to feminine groupings, whilst the letter “o” is used when referring to masculine groupings.

If a group of persons is mixed in gender, in Spanish the masculine form is used, much as in English “you guys” is used for a mixed group (as opposed to “you girls” or “you all”). Similarly, in English it was long accepted that when offering a singular but generic example in order to personify in writing and speech, “he” was used by default.

It is still the case that when referring to groups of people composed of more than one gender, the traditional grammatical Spanish rule dictates the use of the masculine “o” to designate plural generics: los latinos (the Latins); los científicos (the scientists); los magos (the magicians). In recent years, however, particularly in Argentina but also in other Latin American countries, a social and linguistic movement has emerged that proposes to replace with an “e”, or with other marks such as “x” or “@”, not only the ordinary masculine “o” used for non-gender-specific plurals, but also the binary feminine “a” and masculine “o” used for gendered groupings.

A few days into the first lockdown in Argentina, in a speech addressing the nation about the pandemic, president Alberto Fernandez referred to the population of the country not as “Argentinos” and “Argentinas” (to include both men and women) but as “Argentines” (somewhat controversially closer to the English “the Argentines”). A myriad of legal initiatives and inclusive language guidelines in the country have promulgated the replacement of “a”s and “o”s by “e”s in an attempt to make Spanish both less binary-gendered and more inclusive (see my interview with Sofia De Mauro for more on this).

However, these initiatives have also been legally and publicly repudiated by a range of different actors, generating a backlash and a stringent call from some social groups to uphold traditional Spanish grammatical forms. To counterbalance this backlash, a recent legal initiative led by several MPs intends to pass into law the right of expression in inclusive language in Argentina's public institutions. In the most recent developments, however, the government of the City of Buenos Aires has banned the use of inclusive language in all of its schools and learning materials.

The quest to define and promote a more inclusive, neutral or generic Spanish is having repercussions not only in Latin America but also in the USA where journalists, politicians, scholars and also official dictionary entries are for example proposing the use of the word Latinx. The replacement of the “o” by the “x” is an alternative to Latino, the masculine form used to designate “everyone” with a Latin American background (what constitutes a Latin American background and who can call themselves Latinx is a different story).

Similarly in the UK, in 2020 an initiative took place to petition the government to include Latinx as a gender-neutral ethnic category. The petition read: ‘I want the UK Government to acknowledge the huge population of Latinx/Hispanics here in the UK by adding a box allowing us to specifically designate this as our ethnic origin in the census form. We are not White, Black, Asian, and certainly not “other.”’

The Royal Spanish Academy has refused to accept these linguistic changes, in particular the use of “todes” (everyone) in its generic form (as opposed to the feminine “todas” and masculine “todos”). The argument is that the institution cannot impose these changes top-down, but that they will be officially accepted once Spanish gender-neutral language ‘catches on’ with Spanish speakers in the same way as other novel words have (see my interview with Sofia De Mauro for more on this).

‘Bitcoin’ and ‘webinario’ have, however, been recognised since 2021 as linguistically Spanish (and also attributed somewhat arbitrary genders!) due to their supposedly extended and distributed use. It is unclear, however, how many times linguistic expressions need to be used, and in what contexts, for the Royal Spanish Academy to include them as official Spanish words. Nor is it clear whether this would be an important or relevant milestone for the inclusive language movement in its Spanish version and specificity (see my interview with Sofia De Mauro for more on this).

During my interview with Mara Glozman it became clear that the collective meaning and use of “e” as an inflection in both the singular and plural is still in flux in Argentina. In certain uses, the inflection “e” appears to be a way of avoiding the use of the generic plural in masculine “o”, a form which has, in certain feminist understandings, made persons who identify themselves as women “invisible” (for a discussion on visibility and invisibility in and through language see my interview with Mara Glozman). In other uses, the plural generic “todes” (everyone) is intended to include multiple variants of gender without any particular specification of the gender to which this plural refers. Then there is the plurality in the use of plurals too: “todos”, “todas” and “todes”. In certain cases, “todes” is only used when intending to refer to self-identified non-binary persons.

The effects of the uses of “todes” are different of course when persons do or do not self-identify as non-binary. In some instances, for example, the use of the inflection “e” has been deployed to refer to collectives of trans men who would never self-identify as non-binary and for whom such denomination would be offensive. There is also, as Mara Glozman pointed out, the use of the “e” in “just in case” instances when heterosexual cis persons use these inflections when it becomes unclear to them, from an a priori heteronormative standpoint, whether someone might “fit naturally” with either feminine or masculine modes of address.

As Mara Glozman mentioned during our interview, there is an urgency for some to fix and stabilise meanings in language and speech – but this approach does not do justice to the multiplicity, variability and indeterminacy that discourses, language and its uses both necessitate and facilitate, and that inclusive language clearly makes explicit. The political richness of this indeterminacy of language – and arguably, too, of the indeterminacy of identity to which many social and anthropological theories have pointed – appears to rub against some inclusive language tendencies that attempt to fix and outline the meaning of genderless pronouns once and for all.

Also at stake in inclusive language, understood as a political movement, is the gesture towards expanding traditional bounded systems of categorisation and identification, like the binary female/male, and replacing them with a boundless and unlimited array of possible gender and identitarian options. Another gesture implicated in the inclusive language movement is the idea of self-identification which goes hand in hand with the right to be named and addressed in a person’s own terms (a right that binary people already clearly have).

In this view, persons should be able to define themselves as they want, when they want, interchangeably at any point in time, and they also have the right to be addressed by others accordingly at any point in time. Some, however, might choose not to reveal their gender and other identitarian forms of identification and address, and might want to remain unclassifiable in public, expecting that others should also consider this when addressing them in front of others (whether in their presence or not!).

As Amia Srinivasan points out, a way of attending to such issues of language use when the structures of gendered language are still rattling and shaking in the background is to resort to the use of personal names in order not to break any of the (new) rules and disrespect anyone. Personal names are, however, not completely devoid of social connectedness either (for good and for ill). Personal names are after all implicated in bringing together “I” and “We” identity in making kinship, class and race relationships with others as well as individuality and singularity. Unsurprisingly, the constitution of personhood through personal naming is also linked to the emergence of the modern state, as the legal requirement to have a fixed name can be traced to the certification of property rights and the need to keep accurate information on individual citizens through birth registers and certificates (Finch 2008, 711).

One can read inclusive language propositions and gestures as a way of unbounding gender-isation, in a similar way that intersectionality has unbounded the individual functioning of the traditional categories of gender, race and class. This proposition can be read as one in which the processes of identity and individuation somehow take place in excess of systems of (social) classification. In this way, then, persons can, both arithmetically and grammatically, become addressable, identifiable and named only in relation to themselves and not in relation to others at any point in time. This possibility has an affinity with what some marketing theories propose in relation to infinite regress processes of customisation, a market or a grammar of and for one (only). The social consequences of such an experiment remain, in my view, both politically and conceptually underexplored.




Finch, J. (2008). Naming Names: Kinship, Individuality and Personal Names. Sociology 42(4), 709-725.

Haraway, D. J. (1991). Simians, Cyborgs, and Women: The Reinvention of Nature. London: Free Association Books.

Strathern, M. (1988). The Gender of the Gift. California: University of California Press.










Sophie Day

1 April 2022

On the 15th February 2022, a Guardian headline stated, ‘Third person apparently cured of HIV using novel stem cell transplant.’ NBC News reported more cautiously, ‘Scientists have possibly cured HIV in a woman for the first time.’  This is ‘The New York patient’ who was diagnosed with HIV in 2013 and leukaemia in 2017. She has been in remission since 2017 and off HIV treatment for 14 months.

‘The New York patient’ joins two other talismanic figures who have enjoyed remission in recent years after treatment for blood cancers and HIV, including ‘The London patient’ in 2019 and ‘The Berlin patient’ in 2009. Other figures of interest include elite and post-treatment controllers such as ‘The Mississippi baby’ and ‘The Esperanza patient’, whose immune systems greatly suppress viral replication. In these people, no functional virus has been found, at least for a while.

We have asked how the figure of ‘The Gardener’, a moniker for a person living with breast cancer, inspired hope and excitement because of her unusual history of illness and response to treatment. Cancers are at the centre of a biologically personalised medicine that aims to differentiate diseases and their aetiologies, especially through genomic analysis.

How do figures like The Gardener or The New York patient configure, provoke and assess personalisation in medicine, care and research?

Some people living with HIV have developed antibodies against many strains of HIV. These are called broadly neutralising antibodies (bNAbs) and they are now being trialled therapeutically. Six months ago, we began to observe a Phase II trial of bNAbs called the RIO Trial – whose name reflects the coordinating roles of Rockefeller, Imperial and Oxford. This trial builds on previous collaborations and includes several participating UK sites. Our role is primarily to interview participants, non-participants, staff and community members.

HIV cure and remission research draws on figures such as The New York patient and elite controllers to try to understand and then generalise from their histories. In the RIO Trial, ‘a randomised placebo-controlled trial off ART (antiretroviral therapy) using dual bNAb therapy’ aims to assess long-term control of HIV in the absence of ongoing treatment and, ultimately, a cure for the infection. It is hoped that the bNAb combination may have a direct antiretroviral effect for a period and/or modulate immune response in the way of other forms of immunotherapy. Those involved (trial participants, non-participants, staff and community members) have drawn on community input to the study and guidelines from organisations such as the Treatment Action Group to monitor varied dimensions of safety. Trial participants thought it was “a big ask” to interrupt their treatment, but they also thought their contribution might have “great impact” on the field and even directly benefit them. Our preliminary results show that research and care are combined and highly personalised; participants experience n-of-1 care as they contribute to the study.

HIV medicine in the 1980s and 1990s was also personalised. We have attributed this personalisation to the concerted efforts of healthcare staff, patients, advocates and industry to find out what treatments might be effective, and to care for people who were ill. By the 2000s, with the introduction of effective antiretroviral treatments, this form of personalisation had virtually disappeared: HIV treatment became uniform, standardised, accessible and effective for uncomplicated cases of HIV infection. Personalised HIV medicine has, however, continued in research such as RIO's trial of treatment.

The New York patient’s story includes other themes of importance in HIV cure research. One of the doctors involved in the treatment, Dr Koen van Besien, explained that this new technique allows for partial matches with donors for umbilical cord blood grafts, and so it “greatly increases the likelihood of finding suitable donors for such patients” (The Guardian 15th February).  Women make up just over 10% of participants in cure trials but more than half of the world’s 35m cases of HIV and, as Steven Deeks, an Aids expert at the University of California, reported to the New York Times, “The fact that she’s mixed race, and that she’s a woman, that is really important scientifically and really important in terms of the community impact.”

Our current work may help us address the when and the who of personalised medicine. In its promissory, speculative, and experimental aspects, medical research may fuel the industrialisation and financialisation of life, but it also anticipates its own demise. Early phase trials are embedded in translational research paradigms and a personalised approach that tries to discover what might work and what might help understand, mitigate, prevent or cure a condition in some people. To put this another way, participants in the RIO trial hope that cure and remission research will ‘translate’ into standardised, safe and affordable treatments for people living with HIV around the world. Similarly, participants in cancer services hope that cancer care will not simply lead to more personalisation but also to effective treatments for those with similar breast cancers.

National Institute for Health Research 2018 campaign


NHS England implies that an antiquated and even dangerous one-size-fits-all medicine is the antonym of contemporary practice. This contrast obscures the figuring of personalisation as a (potential) precursor to better treatment or what Clayton Christensen (2008) understood as a contemporary one-size-fits-all Precision Medicine. As Spencer Nam notes,

“We see over and over that diseases of which we have precise understanding do not have personalized treatment methods. Instead, we cure these diseases by simple and standardized solutions that are low cost. Bacterial infections [such as H.pylori bacterium, the cause of gastric ulcers] are cured by taking antibiotics…”

This contrast also obscures the grounding of translational medicine in a collective –‘national’ – provisioning of care, which requires universal standards of equity and access to appropriate treatment for all.  Denominating this ‘all’ involves continuous calibration to the population affected by a specific kind of breast cancer, for example, or the population involved in a clinical trial or invited to a national screening programme for H.pylori bacterium. Figures of personalisation such as The New York patient inspire hope or anticipation of a medicine that is (relatively) effective for everyone.


Personalisation in the Expanded Field

Scott Wark

2 March 2022


Over the past few years, the People Like You project has addressed a few recurring themes. One that I’ve been particularly interested in is what I think of as the paradox of personalisation.

A lot of what we think of as personalisation rests on increasingly sophisticated data processing techniques. Take personalised online advertising, for example. When an ad pops up in one of my social media feeds, it’s not that it’s tailored specifically to me.

Rather, based on preferences I’ve expressed through actions I’ve taken online (“liking”) and preferences I share with other people who I am like (“likeness”), this kind of advertising works by inferring that because people like me have liked a particular thing, I might, too.

Often, what makes this targeting precise – or, at least, seem precise – are techniques that constantly refine the sorting and categorising mechanisms involved in personalisation, or that combine multiple categories to generate new targets.

The paradox of personalisation is this: what personalisation targets isn’t necessarily me, but the category or categories to which it has inferred that I belong.
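The inference described above can be sketched, very schematically, in a few lines of code. Everything here is hypothetical: the user names, the “likes”, and the similarity measure (Jaccard overlap) are illustrative stand-ins for the far more elaborate techniques real recommender systems use. The point of the sketch is the paradox itself: the recommendation is computed from the category of similar users, not from “you”.

```python
# A minimal, illustrative sketch of "people like you" inference
# (user-based collaborative filtering). All names and data are hypothetical.

def jaccard(a, b):
    """Likeness between two users, measured as overlap in their liked items."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Each user's expressed preferences ("liking")
likes = {
    "you":   {"film_a", "book_b"},
    "user2": {"film_a", "book_b", "album_c"},
    "user3": {"film_x", "film_y"},
}

def recommend(target, likes):
    # Score every other user by their likeness to the target...
    scores = {u: jaccard(likes[target], s)
              for u, s in likes.items() if u != target}
    # ...then suggest items the most similar user liked
    # that the target has not liked yet.
    nearest = max(scores, key=scores.get)
    return sorted(likes[nearest] - likes[target])

print(recommend("you", likes))  # the category, not "you", does the work
```

Note that nothing in `recommend` addresses the target as an individual: the suggestion is read off from the nearest neighbour, i.e. from the grouping the target has been sorted into.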

I’ve found this paradox useful to think with, for two main reasons:

In general terms, it helps dispel some of the hype surrounding personalisation. What makes new personalising techniques novel is not a magical ability to figure out who an individual is, but an increasingly sophisticated ability to categorise. So, what we’re talking about when we talk about “personalisation” is often, in fact, categorisation.

It has also helped me to account for why, for all their promises to address us as individuals, personalisation might nevertheless produce detrimental or discriminatory outcomes for particular kinds of individual – namely, people who already suffer from other forms of discrimination.

With my colleague Thao Phan, I’ve published research on how personalisation’s novel means of categorising produces new forms of racialisation. Instead of categorising people based on how they look, personalising techniques might instead categorise people based on their preferences – such as their interest in particular language groups, foods, cultural practices, or hobbies. In the case study we used to explain this process, Facebook called these categories “ethnic affinities.”

These interests, we argue, can be used as proxy markers for race. Indeed, we show how they have been used to exclude people categorised as having an “affinity” for a particular ethnic grouping from access to basic services, such as housing and employment.

Thao and I are interested in quite abstract questions about the relationship between data and discrimination in an age of personalisation. One of our claims is that the capacity to process large amounts of data about individuals changes the very nature of categories like race, transforming it from a visual marker – I look different, therefore I must be like other people who look different – to a behavioural one: I prefer different things, therefore I must be like people who prefer the same things.

But there are takeaways from this research for activists and policymakers, too.

Amongst people working on data processing and inequality, it’s become increasingly accepted that large-scale data processing systems create unequal outcomes because they’re fed with data produced in unequal circumstances. While this may often be true, it also lets the systems that process our data off the hook. Unequal outcomes – inequality that takes the form of racialised, differential access to services or resources – can also be produced by the systems that process otherwise-neutral data.

Focusing on the paradox of personalisation allowed us to reach this conclusion, because it helped us to grasp what personalisation does in practice. In practice, personalisation doesn't just address people as individuals; it addresses them as individuals who are first categorised and sorted into groups.

Studying how such groups are formed and, prior to this, how processes of categorisation work, is arguably one of the keys to understanding what personalisation is and what its broader social and political impacts are – not just in digital culture, but in general.

Of course, this broadens the scope of what the study of personalisation could entail.

Over the past few months, I’ve been developing my research into personalisation in a direction that, on the face of things, doesn’t seem to have much to do with personalisation at all – but which, I think, has the potential to illuminate the broader social and cultural dynamics in which personalisation is embroiled today.

In dialogue with collaborators within and beyond academia,[1] I've been developing a new line of research investigating the emergence of a very specific collective term in the United Kingdom: “East and Southeast Asian” (ESEA).

At heart, ESEA is part of a burgeoning social movement that’s emerged in response to racism suffered by East and Southeast Asian people during the COVID-19 pandemic. Paraphrasing Diana Yeh, who’s been working on this topic for a while, ESEA is a political project. It’s the product of a political movement that’s been devised to mobilise a broad coalition of people against racist violence. It is, admittedly, an ambiguous and sometimes even fraught term – that’s its strength.

This collective term holds both academic and personal interest for me.

It’s of academic interest, because it offers us a way of studying the kinds of personalisation I previously looked at with Thao – from the other side.

The basic question that’s been driving my research for PLY over the last few years is this: how does the categorisation that’s so essential to personalisation actually work? What are its mechanisms, and how have new data processing techniques changed these mechanisms?

Using a combination of digital methods and social and media theory, the research I’m doing into ESEA looks at this term as something that’s produced by people involved in specific social and political movements, while also being shaped by the digital technologies these movements use to build their membership and raise public awareness.

Paraphrasing Susan Leigh Star and Geoffrey Bowker, one of the best ways to understand how categories work is to analyse what they leave out. ESEA has been created by a community of people who feel unrepresented by the existing institutional language available to them. With ESEA, then, we see the emergence of a term that, amongst other things, responds to a social, political, and institutional absence. And this can tell us a lot about how people get included or excluded from the systems of categorisation that underpin personalisation.

But ESEA is of deep personal interest to me, too.

Half of my family is from Southeast Asia. I’ve always thought of myself as an amalgam of Australia (where I was born) and Southeast Asia (my mother is from Malaysia). To use a little online-cultural argot, when I first heard about this term and the campaigns around it, I felt seen.

At PLY, we often joke about putting the “person” back into “personalisation.” With this project, then, I'm taking this phrase quite literally.

I think we can study personalisation by looking at what it excludes or who it leaves out. Indeed, to push the logic of this by-now-standard critical social science approach further, I also think we can study personalisation by looking at the contested processes by which people who feel excluded make themselves included – or, the processes by which they make society, culture, and politics feel personalised rather than depersonalising.

This brings us back to where we started, with the paradox of personalisation I introduced above. The (paradoxical) logic of thinking personalisation through techniques of categorisation has specific applications, as in Thao's and my research into ethnic affinities. But it can also be generalised.

Call it personalisation in the expanded field. By studying the ways in which individuals are categorised – and, indeed, how some are included in some categories and why some are excluded from some others – we learn not only about how personalisation works or how categories are made but how they’re made to be lived in and lived with. We can start to think about what values get embedded into personalisation by way of its categories. Perhaps, too, we might begin to be able to think these values otherwise.


[1] These include academic partners – Jonathan Gray of KCL/Public Data Lab and Wing-Fai Leung of KCL – and third sector organisations – besea.n (Britain's East and Southeast Asian Network) and EVR (End Violence and Racism Against East and Southeast Asian Communities).

Sophie Day

30 January 2022

Kelly Gleason wrote ‘Reflections on developing and running a Science Café at the Cancer Research UK Imperial Centre‘ after our collaboration in hosting a series of science cafés on personalised cancer medicine. We planned an evaluation of the series and, indirectly, the approach that Kelly has developed for the Cancer Research UK (CRUK) Imperial Centre science cafés in general. We collected anonymised feedback as well as conducting informal interviews with research participants and Kelly used this material to reflect on a decade of work.

Kelly mentions the importance of a safe space. It is not straightforward to foster productive conversations between scientists, clinicians, patients, and colleagues, even though they may already know each other from encounters in the hospital, university, and the Maggie's Centre. They all have their own concerns and interests, and I learned a lot from the way Kelly helped people physically reach the venue or encouraged those who were anxious about meeting patients who had participated in their laboratory studies. Interlocutors were then able to listen and respond to each other in a relatively informal, small, and familiar context, while another kind of science café might aim for greater variety or numbers in their audience (you can access the associated guidance document here).

Through analysis of the material, we realised that the approach at Maggie's relates to its position in a wider hub for involving patients and publics in cancer research. For example, scientists could present to clinicians and patients who had contributed to their research, while students, patients and other people could familiarise themselves with the process of commenting and contributing as they considered taking up more formal research roles. Those who came to one café very often came to another, often over a period of years. In our series, some people focused on the how of personalised medicine, others on the why. Most considered the difficulties for patients and staff as new norms were implemented, and the interface between care and research emerged as an important area to address.

Kelly’s reflections may not reveal how hard it is to run these cafés alone, but our evaluation clarified their overall purpose in the CRUK Imperial Centre and raised questions about how to diversify and attract new audiences without losing an atmosphere in which everyone became confident to comment, ask questions or exchange views.

About Kelly:

Kelly Gleason is Imperial CRUK Lead Nurse, and part of the Imperial Patient Experience Research Centre. She manages a translational research team working in cancer services at Imperial Healthcare NHS Trust, providing education and training, and coordinating the Patient and Public Involvement Group. Kelly has pioneered a Science Café approach that provides a forum for patients and researchers.


Reflections on developing and running a Science Café at the Cancer Research UK Imperial Centre

How it all started

In 2010 I attended the International Association of Clinical Research Conference in the USA.  That year, the keynote speaker, Mr Charles Sabine, gave a very powerful and emotive talk about Huntington’s disease and the importance of research in this area.

Mr Sabine had lost his father to Huntington’s disease; his brother was in the advanced stages of the illness and he himself had chosen to have genetic testing to find that he too carried the gene for Huntington’s.  This news was the catalyst that caused him to give up his work as a war reporter and travel the world raising awareness of Huntington’s disease and the importance of research in this area.

There were hundreds of people in the room listening to his keynote speech but one thing he said made me feel he was speaking directly to me.  He said, “we will never deliver excellence in research until we get scientists and patients in the same room”. Those words hit hard. I knew he was right. I knew that patients had to be the focus of research no matter how far removed that research was from the patient bedside. To really understand each other, researchers and patients needed a safe place to learn about each other.

On my way back to the UK, I reflected on Mr Sabine’s message.  I thought about our organisation, the Cancer Research UK Centre at Imperial College, London (CRUK, ICL) and I asked myself how I could create a space for our researchers and patients to come together in dialogue.

This is how the Imperial Science Café began; the café was my first attempt at creating a safe space to bring together patients and scientists so that they could better understand each other and learn to work together to enhance cancer research at Imperial.

For the last decade, we have hosted several cafes every year at our local Maggie’s Centre. These events took place between six and eight in the early evening. Speakers presented solo, in pairs or as a panel.  Presentations lasted between twenty and thirty minutes with the remaining time protected for questions and discussion. Presenters shared their research with or without slides; they chose a presentation style that best suited them in this environment. We hosted nine to thirty-five attendees at individual events; this group size created an intimate environment conducive to dialogue between our researchers, patients, and the public.

What people shared after attending science cafes:

In 2019, as part of the People Like You project, we organised a series of cafes around personalisation. We used this opportunity to formally evaluate our science café platform.  Here is some of the feedback shared by attendees at these events:

Some people reported that they felt that they learned something new about a specific area of research.

Others stated they had learned about research in general: for example, how long it takes to develop a new drug, what a randomised clinical trial is, or how data can be misinterpreted when reporting results and how this can be avoided.

Others enjoyed hearing different perspectives, whether from an oncologist, a surgeon, a scientist, a nurse, a patient, an engineer, or a physicist.

One woman said that attending a science café is like peeking behind the curtain to understand what takes place backstage.

People also said they enjoyed hearing about success stories, such as how a discovery in a lab at Imperial led to the development of a drug in a phase 1 clinical trial.

One person commented that the audience asked interesting questions that enriched the discussion.  A member of staff commented that they enjoyed listening to patients so actively participating in the discussion.

Some people left an event simply feeling inspired that such exciting research was taking place in their own hospital.

The surprising things we learned through a decade of running a science café:

Through the evaluation of the café platform, we learned that science cafés have helped bridge the gap between researchers and NHS delivery staff, helping both groups better understand each other and the different ways each contributes to better health for patients. As the audience expanded to include our NHS colleagues who worked alongside us in clinics and on the ward, we began to see we were one team.

We also learned that cafés are a good place to let attendees know about opportunities to be more involved in research at Imperial, for example by joining the Patient and Public Involvement Group.

We became aware that attending science cafes helped some people with a personal experience of cancer process their own diagnosis and treatment.

Over time we witnessed a natural evolution of the platform which provided researchers with opportunities to involve the public in future research projects. The dialogue at events with patients and the public began to inform their next steps in the research process.

The café platform is a great place for students to learn to communicate their work to a lay audience and to value public involvement in research. The audience in turn enjoys shaping not only research but the researchers of tomorrow.

If you are interested in starting your own Science Café, you can find further details on setting up and running one in the associated guidance document developed alongside this blog. The guidance outlines the practicalities of setting up and running a Science Café, including advertising, the format of the sessions, and the style of presentations, as well as considerations such as setting your intentions for the Café and creating a safe environment.

Kelly’s blog is also available here.

Rozlyn Redd, Helen Ward

8 November 2021

On September 17th the People Like You team held a public event at Kings Place, London, on Contemporary Figures of Personalisation. It was an opportunity to showcase some of the work of our project, including presentations from three artists in residence, and to discuss key issues in personalisation across a range of sectors.

One vivid image that clearly resonated with people across the event was “those shoes that follow you round the internet.” Pithily brought up by a panellist describing the practice of similar adverts reappearing across multiple platforms, the image resurfaced in the final session as “these boots are made for stalking”. The shoes provided an example through which to grapple with the techniques that underpin personalisation, such as the harvesting and sale of browsing data, or the use of predictive algorithms to finely segment users based on preference and similarity to others. These processes, including the creation and marketing of data assets, are common across the many sectors where personalisation is practiced.

In this blog we briefly describe the sessions, although we cannot do justice to the richness and scope of the day. Neither can we easily convey the enjoyable, humorous and collaborative atmosphere. As the first in-person event many of us had attended since the beginning of the pandemic, it was also a great chance to connect with people through conversation.

For a brief look at the day, browse the gallery here.



Portraits of people like you: “trying to be a person is a piece of work”

The morning started with presentations from our resident artists and illuminated themes from the People Like You programme as a whole. Felicity Allen presented her Dialogic Portraits, some of which were also on display, and a linked film, Figure to Ground – a site losing its system. These watercolour portraits emerge out of conversations with her sitters, while the film allowed these sitters to explore their own participation. Felicity’s reflections on how a person might be figured in a portrait – which might be a painting but, in our other artists’ work, might also be a poem or a data set – permeated the day’s discussions about artistic practice. As Flick explained, “In this project I’ve often thought of the brushstroke as a form of data which builds a picture of a person… So what type of brushstroke most resembles data, and whose data is it, the sitter’s or mine?”

This relationship between the person and data is a theme that travels across the People Like You project. Stefanie Posavec’s work, Data Murmurations: Points in Flight, presented a series of visualisations of an epidemiological study, Airwave, which includes data and samples from a cohort of research participants. This work started by asking “How is the original donating person (participant) figured at every step of the collection and research process?” By starting with a stakeholder – an investigator, researcher, nurse, or participant – and asking how they perceive people behind the numbers, Stef’s captivating lines, boxes, swirls, and dots provide us with a creative representation of a complex system.

Di Sherlock’s presentation of her Written Portraits included wonderful readings from three actors who brought the works to life. Di recalled conversations with her sitters at a cancer charity and NHS hospital, and their responses to the poems she later “gave” to them. She described the responses which related to whether they “liked” the work, and whether they felt it was “like” them.  The difference and similarity between like (preference) and like (similarity) is another theme that permeates the People Like You collaboration.

In conversation with the artists after their presentations, Lucy Kimbell encouraged them to share more of their experiences of working in the project, asking how it affected their art and autonomy. Our intention in including artists in the collaborative research programme was to learn from their creative process and to better understand issues of personhood and the relations between self and collective identities.


Personalisation in Practice

The next panel discussed how personalisation is understood and practiced in different sectors, with speakers from public policy (Jon Ainger), advertising (Sandeep Ahluwalia), and public health (Deborah Ashby).

As the panellists introduced themselves, each highlighted key aspects and tensions inherent to personalisation within their field. For Sandeep, advertising firms must learn how to synthesise signals and use new tools like AI and machine learning to produce personalised marketing – a process that has changed dramatically in the past 5 years.

For Deborah, personalisation has three moments in the field of public health and medicine: first, how different treatments work differently in different people; second, and most common, traditional forms of segmentation and stratification, where different patients are prescribed different treatments based on overall risk; third, how an individual values risk themselves, which is highly context specific. The relationship between personalisation, the individual and community is also turned on its head in public health: what does personalisation mean when our actions and decisions impact other people?

Jon gave the example of a mother advocating for the delivery of her disabled son’s social care. For her, choice was the key to being able to satisfy her son’s needs: “If I don’t have any agency, if I’m not in control, if I’m not in complete control, you can never give me enough money to meet my need as a parent.” For Jon, personalisation is about knowing an entire person, and getting people more involved in making decisions increases the effectiveness of services. Here, personalisation is about engagement, but he also argues that data is important too. There are critical limitations on how data can be applied in social care: should data stratification play a role in children’s social care? When is it going too far? How do we decide to set limits to data-based solutions to a problem?

One theme drawn out by the discussion is that personalisation is bespoke – it is not fully automated. For Sandeep, one of the challenges of personalisation in the digital space is that there is a difference between what brands want to communicate and what their audience is interested in. As a practitioner, finding a Venn diagram of their overlapping interests is crucial. Jon argued that in social care, there are very complex issues that are continually being disrupted due to changes in policy, relationships between sectors, and adapting processes to particular situations. Deborah argued that while algorithms can be used to understand patterns, we should be wary of using them to automate decisions, as they can bake inequality into a decision-making process.

A second theme that was brought up was inclusion in data. For Sandeep, personalisation often fails when lists of target individuals are scaled up too far. Within healthcare, Deborah argued, you need to think about who is not in data. For example, with routinely collected data, those who are most represented in the data might be those with the least amount of need.

On the whole, this discussion reflected the findings of our team’s ‘What is Personalisation?’ study, led by William Viney, which asked if personalisation could be understood as a unified process. Personalisation varies across domains, and this variance is driven by practical challenges, resource limitations, and context-specific histories.


Any Questions? 

The final panel addressed the future of personalisation, with Timandra Harkness putting audience questions to Paul Mason (writer and journalist), Reema Patel (Ada Lovelace Institute), Natalie Banner (formerly lead of Understanding Patient Data at the Wellcome Trust), and Rosa Curling (Foxglove). The questions were wide-ranging and the panellists’ answers provocative.

“Is overpersonalisation killing personalisation?” In response to this opening challenge, the panellists focused on power and how it is distributed in this space: who collects data about people, whether use of personal data leads to overdetermination, and when it feels like it’s too personal: “the shoes that follow you around the internet.” Discussing the future of personalisation meant considering how the UK government will act on data protection, data sharing and consent in the future. This was particularly true for answering the audience question, “How will people think about sharing their healthcare data for research in 10 years’ time?” Rosa challenged the government goal of centralising GP health records and argued that there needs to be more friction in this decision. Does it need to be centralised? Who benefits if it is? Could it be federated? Who will have access? Instead of centralising GP data, local authorities could create a more democratic space through localised consent and control, and through this process create trust in the NHS and data sharing in the future. Reema argued that the current pattern of techlash between the UK government and the public – caused by data privacy policies that have been met with hostile reactions by the public and then government quick fixes – has undermined public confidence and trust in shared data for the future (e.g., GP GDPR). For Natalie, in ten years there is potential for citizens to be much more aware of the contents of their healthcare data and how health records can be used to improve the healthcare system and their own lives. Paul countered that other forms of health data (e.g. health apps and lifestyle tracking) could make NHS data irrelevant.

When tasked with thinking through the issue of using big data for calculating health risk, Reema gave the example of the Shielded Patients List as a potentially valuable application, where a risk score was created and used to direct resources during an emergency response. Natalie argued that there is a vast difference between being able to see a pattern in data (which happens now) and being able to make an accurate prediction from it. Following up on Natalie’s response, Timandra asked the panellists: is it ok to make predictions about individuals based on aggregate trends in data? For Rosa, these issues are not new with the advent of personalisation: there have always been racist decision-making processes, and algorithms are just a new machine working within an existing power structure in a way that harms the most vulnerable members of the community.

The panellists also discussed a range of examples where personalisation is related to social integration and access: social credit scoring; BBC recommendation algorithms focused on helping users explore diverse content in the hope of bringing people together; personalisation of access for those who have varying interests and needs; the tendency of algorithms to bring together communities of outrage rather than those of difference or nuance; the recent example of communities coming together during the pandemic around health issues; and the opening of dialogue around the current tension between individual liberty and public health outcomes.

What kind of consciousness will tech acceleration and the continuous prompt to express preference bring? … How is it changing us? Reema suggested that when you introduce something new it shapes the way people behave and feel, something especially felt when prediction determines your future. Rosa was more hopeful: with more exposure to the harms that come with algorithms, like the A-level fiasco, where the government appeared not to care, people are now conscious that these kinds of things are happening around us all the time, and we should be thinking about them. Natalie thought that many people are unaware of the kinds of concerns that we have brought up in this conversation – to them, she felt, it is novel and scary.

The conversation reflected key aspects of our research agenda in People Like You: how do we think about participation, data, algorithms, and inequality as facets of personalisation?  What can we learn from talking about the future of personalisation across domains and time? As it turns out, quite a lot.


Digital Twins Like You

William Viney

20 September 2021


My birth certificate tells me I am about 15 minutes older than my twin. Birth order has no universal relation to family rank or seniority. And so in some cultures I would be junior and he would be my senior – the second born sends the first to check the state of the world. And although these sorts of questions never much concern me, I do get quizzed when people learn I have a twin. I am asked – which is older? Is he like you? Can I see a picture? When answering these questions I find myself speaking on behalf of us, me and him, but also always of a culture of twins and twinning, which is never quite what it seems.

During a recent research project I learned about the different criteria that are used to give twins shared or divergent identities. For example, one set of criteria assumes twins share conception, gestation, and birth. It is sharing those experiences – if ‘experiences’ is the right word – that makes twins alike and yet different to non-twins. But advances made in molecular biology, fertility techniques and services, and the gradual development of genome-editing technologies make these assumptions increasingly visible and potentially fragile. Twins are born years apart, using cryogenic IVF techniques, and the world’s first known germline genome editing experiment created twin children. If twins continue to be twins without a common conception, gestation, or birth then what is it that forms their union? Throughout my life single-born people have told me they wish they had a twin of their own. But I wonder what they mean and if they have a particular kind of twin sibling in mind – agreeable and kind, with whom they ‘share’ a lot. I guess that’s one nice thing about having an imaginary twin: you get to choose and they will rarely disagree.

Despite evident cultural and historical differences about what defines a twin, there exists a powerful cultural idea that twins should be alike. This is partially registered and reproduced in the creation of ‘digital twins’ – which, according to one definition, involve the virtual representations of real-world entities and processes, and the mechanism by which they are synchronised to correspond to one another. Digital twins are real-time models of existing objects or persons. Techniques of simulated modelling originate in space engineering, specifically NASA’s Apollo programme, but the term has become widely adopted: digital twins are now built for power stations, manufacturing processes and historic buildings, and whole cities can have digital twins for emergency planning.

The development of digital twins in healthcare is closely linked to personalised medicine. Rhetorically, they are used to propose a near-perfect data double, ‘virtual patient’ or ‘in-silico-self’, a kind of shadow that continuously tracks and provides predictions about health and disease conditions for individual patients. The reality is more partial and complicated, since digital twins must be built and tested against other digital twins and their data sources. Though the word ‘twin’ may suggest uniformity to some, the creation of ‘digital twins’ within programmes of machine learning and artificial intelligence means the field is unstable, rapidly changing and adaptive – data collection and analytics techniques are varied, and the data used is made of aggregated patients and selves rather than the tracked data of individual persons. The ‘what-if’ scenarios said to be the digital twin’s strength also depend on the ‘what-if’ of innovation platforms, their changing data techniques, investment patterns, and industry standards.

Discrete areas of progress reveal how digital twins perform as a patchwork aggregate of different people’s data rather than a one-for-one representation of legal individuals. For example, in 2019 Dassault Systèmes announced its collaboration with the U.S. Food and Drug Administration to develop its digital twin, the Living Heart, a simulated 3D heart model. These single-organ digital twins promise to enhance training, testing, clinical diagnosis and regulation – with particular benefits for safer, quicker, less expensive clinical trials. While the idea of a ‘twin’ suggests someone or something in parallel or in partnership with one other person – rather than many – Dassault Systèmes’ data is simulated, modelled, or imputed from existing patients. It’s a model heart made after the hearts of many.

Just as jet engines now carry hundreds of sensors that track, model, and predict engine behaviour, digital twin developers envision the use of wearables – watches, socks, implanted and ingestible devices – that can gather situated data and give shape to a person’s digital twin. While the ‘virtual’ status of a digital twin – which one might assume to be a faithful and precise representation of a living person – is core to its advantage and potential use, the promise of precision requires extensive material resources. While the twin is viewed as digitally adaptive, updated, ‘smart’, the sources of its data must be standardised and compliant to stream updates according to specific timelines. The liveliness of the digital twin depends on its real-life partner doing a lot of legwork. Sampling frequencies may be ‘continuous’ in theory, but twin updates are scheduled or serialised over discrete time points, according to existing or emerging protocols that set trends and shape forecasts. Tracked data, already partial, tracks persons according to a schedule governed by standardised routines. Patients with multiple health and disease conditions will continue to follow variegated collection and update routines, such that arrhythmias of the heart are likely to require close monitoring but early cancer detection is unlikely to follow the same ‘real-time’. In this sense the digital twin will rarely if ever be contemporary to the multiple datasets it makes interoperable. Nor will it perfectly replicate the subject it is said to model. In practice digital twins are more like Frankenstein’s creature – time-lapsed kin made of composite portraits, layering different data types updated at different times. And like all twins physically born in the UK National Health Service, existing standards of classification, regulation and governance will mediate between and differentiate one twin and another.

Although digital twins are often promoted as a way to automate and simplify how people’s future health is figured, enduring questions remain about how orders of the normal and the pathological shape group affinities and differences. These are questions also asked of a ‘personalised’ medicine more generally. Though the digital twin may appear ‘personal’ to its real-world kin – it is another ‘you’ yet paradoxically uniquely ‘yours’ – the accuracy and validation of a twin’s predictions will depend on it being a generic composite. Hence, the what-if implications of implementing digital twins are very different depending on context. On a factory floor, simulations of machinery slowing or malfunctioning can provide real-time analytics to help forecast ‘what-if’ changes in a supply chain. If a clinician explained that your digital twin’s simulated response in a drug trial led to an adverse reaction, would you accept their recommendation that you begin palliative care? On the other hand, a digital twin’s simulated status means it can be coached – a figure to improve and enhance, a figure to discuss among friends (which is older? Are they like you? Can I see a picture?). What remains uncertain is whether there will emerge vernacular ways to involve others when describing your twin, a way of recognising your self in aggregated and simulated collectives, a we, a culture of digital twins and twinning assumed to be alike but always a little different.

Scott Wark

6 August 2021

On the 10th and 11th of June, People Like You held a workshop. Its ostensible aim was to provide our principal and co-investigators, Sophie Day, Celia Lury, and Helen Ward, with a forum to present an early draft of the book, the culmination of our project. So, we invited a group of people who have inspired our thinking about personalisation to work through some of its key ideas. Presenters either responded directly to a paper outlining this book, or talked about related topics. We thought we’d get to enjoy two days of smart and incisive reflections. What we got exceeded all of our expectations.

Presenters and respondents, Day 1

Top: (L-R): Sophie Day, Helen Ward, Martin Tironi; Middle: (L-R): M. Murphy, Dominique Cardon, Louise Amoore; Bottom: Celia Lury
Click through for more images from the day!

Day, Lury, and Ward opened proceedings. Synthesising our research into personalisation in three broad fields – health care, digital culture, and data science – their presentation proposed that the emergence of a “ubiquitous culture of personalisation” has spurred the development of what they call a “new political arithmetic.” This phrase – a play on William Petty’s late-17th-century phrase for using statistical techniques to govern a polity – is designed to draw attention to changes personalisation is effecting at scale. By inviting participation, using preferences and likeness to produce precise categorisations or classifications, and dynamically testing these in order to predict outcomes, personalisation, they argue, institutes a “distributive logic”: it sorts people and things and allocates resources.

Hence “political arithmetic.” But why “new”? What’s novel about personalisation, they argue, is that its processes – increasingly, reliant on big data – change how we’re able to predict, and therefore intervene in, the future. Its distributive logic plays out through continuous testing and re-testing. If a particular manner of ordering doesn’t fit a particular set of persons or things, try another. The result is a capacity to intervene in futures before they emerge. So, ubiquitous personalisation might be changing what we know as prediction, as testing and retesting engineers what they call a “continuous present.”

The first part of the workshop rounded out with a series of invited responses. First up was Martín Tironi, who’s collaborating on our ‘Algorithmic Identities’ project. For Tironi, Day, Lury, and Ward’s proposition that ubiquitous personalisation constitutes a “new political arithmetic” suggests that personalisation not only institutes what we’ve previously called a “mode of individuation,” but a “mode of configuration, action, [and] distribution of the social.” Tironi left us with a pair of pertinent questions that resonate with his research into smart cities. Does this “political arithmetic” open a sphere of “play” in which individuals can exercise agency? And how does it account for “nonhuman” participants in distributed reproduction?

Next was M. Murphy, but I want to return to their response at the end. Our third respondent, Dominique Cardon, suggested that our proposal could be construed as an answer to a classical sociological question, namely, “what is society?” He noted that those of us working on such questions needed to avoid the tendency to “reify” a past society governed by demographic categories – blunt techniques of segmentation – when attempting to establish the novelty of our proposed “new” arithmetic. Nevertheless, he also suggested that what we’re describing fits with modes of social ordering that move from “a world of discrete categories to a world of emergent categories.” The question he had for us was this: does ubiquitous personalisation institute a shift from a “government of causes to one of effects”?

Our final respondent, Louise Amoore, invoked an alternative to our three p’s – participation, precision, and prediction – in the form of three a’s: address, accuracy, and – with a little gerrymandering – what we’ll call arithmetic. In order to secure participation, she noted, computational techniques rely on the capacity to address targets of personalisation. Citing Lorraine Daston and Samuel Weber, she noted that using data to target interventions relies on degrees of tolerance for interventions’ accuracy. Finally, she noted that there is a politics of “sovereign knowledge” involved in any political arithmetic. Invoking the term’s colonial legacies, she asked us to consider how personalisation orders people according not only to likeness, but also unlikeness. That is, how could our project’s titular aim – thinking through how personalisation assembles “people like you” – account for people you are not like?

Part 2 of the workshop involved a series of presentations by people whose work we draw upon. First was Cori Hayden, who presented on generic medicine. Reflecting on the emergence of “similares” in Mexico – that is, generic drugs and associated clinics – Hayden’s presentation drew out the complex relationship between personalisation and state provision. In Mexico, the emergence of “similares” fills a niche left open by the high price of branded medicine and state medical clinics. Ironically, “simi” clinics offer cheaper, more accessible care that can feel more personalised. In the populism mobilised by “simi,” we see personalisation playing out in the name of the generic – and what Hayden called the “politicisation of similitudes.” By this, she means a politics of health in which the “generic” is not counterposed to difference and variety, but seems to incorporate it. Her “generic” contains the multitude in its (dis)similarity.

Hayden’s presentation rounded out the first, packed day. Day 2 began with Emily Rosamond, who presented work from her forthcoming book on what she calls “reputational warfare.” Her basic question is this: how ought we to conceptualise the value of reputation in online spaces? Rather than thinking it through participatory culture, labour, microcelebrity, or wages, she proposed understanding it as an asset. So, she asked, how do personalities on platforms – like YouTube – assetise themselves? Conversely, how do platforms turn a collection of personalities into a “hedged portfolio”? Assetisation highlights the tension inherent in the idea that one’s personality is “inalienable” yet also “assetisable.” Insofar as personalisation techniques are “precise but not accurate,” personality ought to be understood as a lure for participation. For us, this was a necessary and productive adjunct to our proposal.

In a stroke of programming serendipity, Rosamond was followed by Fabian Muniesa, whose work on assetisation informs both hers and ours. Rather than talking about assetisation, Muniesa outlined fresh thinking he’d been doing on “paranoia.” Platforms, he argued, engender a paranoiac conception of media. Confronted by targeting and prediction, we find ourselves constantly asking how – how does a particular address construe me as someone who will want to use a particular service or purchase a particular product? How does it conceive of me as this me, and not another? For him, paranoia is the prevailing psychological response we have collectively adopted to a hyper-mediated world in which desire is always mediated by value. What he maps, I think, is an emergent politics of personalisation for a world of increasingly automated hyper-connection.

Dominique Cardon returned to present new research he’d been doing with his colleague, Jean-Marie John Matthews, on the impact machine learning has on the ordering of society. To make this argument, they invoked Luc Boltanski’s concept of the “reality test.” For Boltanski, reality isn’t a given, but is secured by the institutions which order society. This ordering of reality is not fixed, but is frequently “tested” by those who are subject to it. With the proliferation of machine learning, they proposed that the nature of this process of testing reality might be changing. By instituting systems in which reality is subject to constant testing in order to determine what possible arrangement of people or things might achieve a pre-given outcome, machine learning is creating a situation in which tests lose touch with reality. The conclusions they drew in their work resonated strongly with ours. Because such tests are no longer deductive – that is, no longer proceed by testing a hypothesis – society comes to be ordered by “complex domination”: a process that seeks out the distribution of relations best suited to a particular outcome and, in doing so, collapses future into present and possibility into probability.

The last presentation in Part 2 was by Amoore. Like Cardon and John Matthews, Amoore was interested in asking how machine learning – specifically, “deep learning” – may be reordering what politics can be. If machine learning institutes techniques of prediction that foreclose potential futures by attempting to configure relations to achieve predetermined outcomes, what does this do to politics? To tease out a response to this question, Amoore focused on the “function,” or an algorithmic operation of optimising by mapping an input onto an output. For Amoore, the product of a capacity to start with an outcome and “retroactively” design a system to make it possible is a situation in which one no longer looks for solutions for problems, but for problems that fulfil a particular pre-determined solution. Like Cardon and John Matthews, Amoore sees in these techniques a proliferation of testing. But she also sees a situation that undercuts our capacity to advocate for better futures. If this is one way of defining politics, she asked, how can machine learning become a tool for opening up futures instead of closing them down? How might we even begin to think incommensurability in and through models that are designed to render all difference commensurable?

The day wrapped up with a summary of the workshop as a whole by Penny Harvey. Harvey drew a number of useful connections between our work on personalisation and prior research on topics like topology. In our conception, she noted, personalisation has a topological bent. But I want to end by reflecting on commensurability and incommensurability, because Amoore’s parting question brings us back to M. Murphy’s response to the paper Day, Lury, and Ward gave to open our workshop.

We call the result of ubiquitous personalisation a “new political arithmetic.” This concept incited Murphy to mount a “decolonial” and “queer” pushback. If we posit this “arithmetic” as a “social analytic,” perhaps we need to be willing to question – to “puncture” – such “analytic phantasms.” I think they outlined what’s really at stake in this “new political arithmetic.” If society is ordered in this way, what becomes of other worlds? Does our conception of society in the wake of ubiquitous personalisation leave space for thinking their existence or persistence outside of a new “political arithmetic” – and can we imagine a different mathematics that would make decolonial and/or queer worlds possible?

Speaking for myself instead of the project, I would affirm that this question is one of the most crucial personalisation raises. But I’d reformulate it in terms closer to Amoore’s than Murphy’s. One might indeed ask if this conception of society forecloses “worlds.” I think we get more purchase on the present if we ask how a new “political arithmetic” reconfigures politics. Does the analytic we describe give us the conceptual means to plot our way out of this ordering of society and towards alternate political configurations? Perhaps. Does it need to leave analytical space for an outside in order to do so? Perhaps not – but then, what this “new political arithmetic” might describe is contemporary politics’ most crucial site of contestation: not only who gets to make futures, but who controls their terms.



Helen Ward

9 July 2021

The group

Over the past year, several new categories of people have been defined as we have responded to the unprecedented pandemic: the clinically extremely vulnerable, the long-hauler, the vaccine hesitant. Categories can appear to be neutral, even scientific, descriptors of emergent groupings but, once created, they have an impact on the people classified, and those people then affect the categories. Ian Hacking has described this process as looping: “sometimes, our sciences create kinds of people that in a certain sense did not exist before. I call this ‘making up people’.” (1)

People like us #longcovid. Photo from “Message in a bottle: Long COVID SOS”, a video made by members of the LongCovidSOS Group in July 2020

In our research on personalisation we have explored the fluid ways in which people are rendered ‘like’ each other through tracking and analysing data from a range of everyday activities. Previous blogs have addressed some of the ways in which categories are constructed and applied to individuals with more or less face-validity and apparent relevance.

Categories based on similarity or affinity group people into those who are alike, and their application creates the corollary of people who are different. We use the phrase ‘People like you’ in our project title in reference to the well-known address of targeted, personalised marketing: ‘people like you like/buy things like this’. The phrasing can be more or less inclusive: a personal address to ‘people like you’ with similar preferences; a call for identification with ‘people like me’ to establish solidarity around a shared experience; a way of excluding ‘people like them’ who are not you or me.

These forms of address resonate with my experiences working as an epidemiologist on the COVID-19 pandemic, in which we attempt to understand its unequal impact. Defining and analysing categories is the bread and butter of epidemiology, often with little attention to their context and impact. I consider this issue here.

My first example, described in an earlier blog, is the category of the ‘clinically extremely vulnerable’. The process of creating and then labelling this group started with experts agreeing what kinds of people were most likely to have severe COVID-19, followed by the development of an algorithm which was applied to routine health data to generate a list. Once labelled, people received recommendations to shield, because ‘people like you’ are clinically extremely vulnerable. The algorithmic process of segregation has been changing and also contested, but the social impact of the categorisation and its boundaries is profound and ableist. Some people are locked into a vulnerable-shielder dyad which was rationalised as a way of allowing greater freedoms for everyone else (2). Shielding advice has also been framed as solidarity and protection in which the vulnerable are protected through the altruistic actions of others: “‘I exist because of we’: shielding as a communal ethic of maintaining social bonds during the COVID-19 response in Ethiopia” (3).

In contrast to the application of this category of shielding, people with Long Covid form a group that claims its own existence. Callard and Perego argue, “Long Covid has a strong claim to be considered the first illness to be collectively made by patients finding one another through Twitter and other social media.” (4) They report how patient voices consolidated around the hashtag #longcovid after people experienced and reported unexpected ongoing and relapsing symptoms, including among those whose initial disease was mild. The hashtag was first used in May 2020 (5) and has been a powerful tool. People affected were frustrated by the lack of recognition in the medical press and from individual clinicians. The category of #longcovid was not generated from routine health data; indeed, people associated with #longcovid claim that it remains almost entirely invisible in such platforms (6). On the contrary, the category was a product of linking through social media. The sharing of experiences enabled the identification of ‘people like me’ who collectively defined and refined this category. Once people started using the term #longcovid, it developed a momentum as a patient movement advocating for resources, research, and recognition.

In medical research, including a project of mine, we are trying to understand long covid better through large-scale surveys, detailed phenotyping and genotyping, and data linkage, working with patients to help define outcomes and understand their experiences. I have already felt a tension between the tendency in the biological sciences to stratify and subdivide conditions into ever more precise categories (7,8) and the desire of those identifying with #longcovid to retain this general, overall term rather than be redistributed into finer-grained sub-categories. (5) From the perspective of personalised medicine, precise strata and a deep understanding of biological mechanisms are the goal through which more appropriate treatments can be designed; from the perspective of people living with long covid, shared challenges, including destructive stigma, can be addressed more effectively as a unified group. (9)

The final emergent category I want to mention is the so-called “vaccine hesitant”. People who do not automatically respond with enthusiasm to the offer of a COVID vaccine appear to be spoiling the future for everyone. Earlier this year, a press release from another group in my own department carried the headline, “COVID-19 vaccine hesitancy could lead to thousands of extra deaths”. Our own research shows that people who are unsure about whether to be vaccinated have concerns about safety and evidence, about whether they need it (if they have had a prior infection, for example), or a general mistrust of the system. (10) Conversations with local members of the public and community organisations indicate that people are “hesitant” when they are not heard, and that individual discussions that acknowledge their concerns may allow them to make informed choices. Labelling people ‘deviant’ or ‘normal’ shifts blame to the former category and further marginalises them. The category of vaccine hesitant thus further excludes and stigmatises by making up ‘people like them’ who are different.



  1.  Hacking I. Making Up People. London Review of Books [Internet]. 2006 Aug 17 [cited 2021 Jul 7];28(16). Available from:
  2. Ganguli-Mitra A, Young I, Engelmann L, Harper I, McCormack D, Marsland R, et al. Segmenting communities as public health strategy: a view from the social sciences and humanities. Wellcome Open Res [Internet]. 2020 May 26 [cited 2021 Mar 28];5. Available from:
  3. Seifu Estifanos A, Alemu G, Negussie S, Ero D, Mengistu Y, Addissie A, et al. ‘I exist because of we’: shielding as a communal ethic of maintaining social bonds during the COVID-19 response in Ethiopia. BMJ Glob Health. 2020 Jul;5(7):e003204.
  4. Callard F, Perego E. How and why patients made Long Covid. Soc Sci Med. 2021 Jan 1;268:113426.
  5. Perego E, Callard F, Stras L, Melville-Jóhannesson B, Pope R, Alwan N. Why we need to keep using the patient made term “Long Covid” [Internet]. The BMJ. 2020 [cited 2021 Mar 29]. Available from:
  6. Wise J. Long covid: doctors call for research and surveillance to capture disease. BMJ. 2020 Sep 15;370:m3586.
  7. NIHR. Living with Covid19 [Internet]. NIHR Evidence. 2020 [cited 2021 Mar 29]. Available from:
  8. Mahase E. Long covid could be four different syndromes, review suggests. BMJ. 2020 Oct 14;371:m3981.
  9. Pantelic M, Alwan N. Marija Pantelic and Nisreen Alwan: The stigma is real for people living with long covid [Internet]. The BMJ. 2021 [cited 2021 Mar 29]. Available from:
  10. Ward H, Cooke G, Whitaker M, Redd R, Eales O, Brown JC, et al. REACT-2 Round 5: increasing prevalence of SARS-CoV-2 antibodies demonstrate impact of the second wave and of vaccine roll-out in England. medRxiv. 2021 Jan 1;2021.02.26.21252512.
Two by Two: How ‘People Like You’ enter Noah’s Ark

Celia Lury and Sophie Day

10 May 2021


It is nearly a year since we began a blog on Coronavirus-19’s 2 metre rule as a grid reaction with the phrase “We are all now familiar with what 2 metres looks like”. In case we had forgotten, a blog on Government advice on social distancing by Professor Anne Mackie (5 June 2020) told us that 2 metres is 6 feet 7 inches, three steps or the length of a double bed:



The NHS and HM Government helpfully explained that a bed was the same as 2 benches, 3 fridges and 4 chairs. Elsewhere, two metres is said to be equivalent to two shopping trolleys, one small bear, one seal, one reindeer, one caribou, “the distance from a cougar’s nose to the tip of its tail”, an adult kangaroo, various measurements based on different fish and so forth. To help the general public put these lengths into practice, space is divided, with markings on the ground for queues, benches in public parks made unusable, and chairs in vaccination clinics carefully isolated from each other. In this blog, we are interested in saying more about what, how and who is being operated by the rule of two metres.

Consider first, what entity or unit is being kept a part of or apart from another: an individual, a parent, a household? Sometimes the unit of a household is encouraged to fit into the unit of an individual. Shop alone.



Sometimes a parent-to-be is required to be (a)(l)one.

As ones, it might seem as if we can be counted in terms of simple addition:

But complications ensue. Is a group of six two families from two households, six individuals from six households, or a count of six from one household?

How am I or you ‘a/part’ of or from each other? Does the 2 metre rule imply we should stand by each other, one by one, two by two?

Rather than addition, we suggest that the grid reactions prompted by the Covid-19 pandemic are better understood to operate in terms of multiplication and division, that is, in terms of the operator ‘by’ rather than the ‘and’ of addition. Just as the 2 by 2 of Noah’s Ark ensured the heterosexual reproduction of the animal kingdom, the 2 by 2 grid of the pandemic ensures selective reproduction of a population, but does so in new, more varied ways.

Noah’s Ark (1846) by Edward Hicks, Philadelphia Museum of Art

(public domain)

The sequencing of the operation of ‘by’ – multiplication and division – in the 2 by 2 of the pandemic has very particular implications for who gets counted: for who may go outside and who may stay inside, for who must go out, and who must stay in. It makes a ‘new normal’ that tests and re-tests and meets and joins different kinds of people, sorting them out (or in) as individuals, parents (to be), carers (of children or grandparents), shielders and shielded. The contemporary 2 by 2 is not only what enables the bio-social variables employed in models to be put into dimensions of time and space but also an operation that categorizes ‘people like you’, stratifying people into those who can – and can’t – count for one.

‘By and by’ originally meant ‘one by one’ (reported in Chaucer), ‘side by side’ and also ‘on and on’ or in due course. Side by side can suggest the incidental, beside the point, in passing, as in ‘by the by(e).’ A passage from Dombey and Son (Charles Dickens, 1846-8) reads,

 “So they got back to the coach, long before the coachman expected them; and Walter, putting Susan and Mrs Richards inside, took his seat on the box himself that there might be no more mistakes, and deposited them safely in the hall of Mr Dombey’s house—where, by the bye, he saw a mighty nosegay lying, which reminded him of the one Captain Cuttle had purchased in his company that morning.”

The phrase often marked a development in one line of a story that would connect with others over time, a development that Benedict Anderson (1991) charted in his discussion of the importance of ‘meanwhile’ for the making of the imagined community of the nation.

In place of the security of the ‘meanwhile’ of the nineteenth century novel in which actions in one time and place would inevitably, albeit unknown to the characters themselves, connect the futures of each to the other, the contemporary ‘by the by’ of the pandemic dictates or recommends a moving ratio, punctuating and stratifying the reproduction of the population. ‘By the by’ is no longer an incidental aside that may be beside the point or a ‘meanwhile’ of time passing in imagined synchrony but rather a flexible operator of who is counted in and out of gridded time-space, with consequences for who continues to live and who is left to die.

Even in a crisis, this operator does not supplant but combines with others. The self-possessed individual of liberalism – who can be summed into households, socio-demographically defined groups and nation-states – continues to assume an a priori significance. But, ‘by the by’ contributes a fluidity and uncertainty to these groupings and categories, including and excluding people more variously. ‘By’ multiplies, divides and sequences flows through counts of space by time and time by space, continuously, perhaps incidentally, and in a novel simultaneity.



Anderson, Benedict. 1991 [1983]. Imagined Communities: Reflections on the Origin and Spread of Nationalism.  London: Verso.

Day, Sophie and Lury, Celia. 2020. Coronavirus-19’s 2 metre rule as a grid reaction



Figuring out health research data

Roz Redd, Helen Ward, Stefanie Posavec

20 January 2021


Stef Posavec spoke to Roz Redd and Helen Ward

Roz and Helen:

Last year, Stef Posavec, a designer and writer whose work focuses on non-traditional representations of data, took up a residency with the data science stream of People Like You.

The practice of personalisation that suggests that “people like you like/buy/benefit from things like this” requires large collections of data generated mainly through routine processes such as shopping or social media. In personalised medicine these data sets are generated through both routine healthcare and research, for example where collections of biological samples and health-related data are brought together in a “biobank” that is then available for future research. One example in the School of Public Health at Imperial College is the UK Airwave Health Monitoring Study, Airwave for short. This long-established study includes biological samples and health data on people who work for the police, and provides an example of how data and samples are collected, processed, stored and analysed. In trying to figure out what exactly is going on in biobanks more generally, Stef has been spending time with the Airwave study team. We asked her to share her reflections so far.


For People Like You, I’m using my art practice to better understand how the various stakeholders in a biobank perceive the ‘people behind the numbers’ who consent to their biological samples and data being used and stored for research. To achieve this aim, I’m focusing on the participants and researchers of the UK Airwave Health Monitoring Study, a cohort study and biobank based at Imperial that’s been following the lives of 53,000 members of the police force since 2003. This research will inform the creation of a data-driven series of artworks aiming to communicate these insights to a wider audience.

My creative practice normally begins with extensive note-taking and research, after which I refine the work’s overarching concept (ideally very clever and very witty!) in writing, and only after this move onto thinking about how this will work in practice visually.

However, for my People Like You residency, the ‘problem’ of visualising the Airwave system wasn’t one that could be neatly solved through an hour or two writing in a notebook, as I felt overwhelmed by the sheer amount of information I’d gathered (and could keep gathering for eternity) and unable to progress in words alone. Feeling stuck, I decided to start drawing in the hope it would help me ‘see’ my research from a different perspective.

I began the process of mapping out the Airwave system by drawing it by hand, then going back to various parts of the drawn system and trying to illustrate, communicate, or visualise (or a mix of all three) what was happening in more detail through drawing, then re-drawing, and then drawing some more. It felt as though, through this repetitive and laborious process, I began to pull my bigger insights about how the Airwave biobank system worked from the paper; with every small evolution of these drawings I was also one step closer to being able to explain and describe the various Airwave processes in words.

As mentioned, drawing isn’t normally how I start to think through a problem. But within this residency I’ve made the fortuitous discovery that drawing is a way for me to access the tacit, hard-to-articulate knowledge that I have gathered through weeks of research, interviews, and note-taking, where instead of communicating these insights in words in a notebook, I am able to communicate using colour, texture, form, and more.

This hand-drawn diagram was the first step in my drawing process, where I began to roughly map out how data and samples taken from a study participant in a clinic would then be processed and stored for use in research.

And these are a selection of the small drawings that I’ve been creating of all the different ‘angles’ of the Airwave system, in which I draw to understand different elements of my subject.

Next, using Keynote presentation software, I begin to roughly patch these various drawn components into a complex whole to see how they work together. When I’m happy I’ll create a final artwork using these digital collages as a visual reference. Normally I work in Adobe design software but I’ve found that Keynote’s limited design capabilities keep me focused on the bigger picture as opposed to the tiny details.

Normally when working with data and visualising it in some capacity for a creative project, I try to have the data in hand before I begin and aim to be as accurate in my representation of this data as possible, however ‘artistic’ the end result will be. However, my People Like You residency has been different in that I am unable to access the actual data held within the Airwave study for all of the usual data protection reasons. This has meant that I’ve had to come to understand this dataset and how it ‘flows’ from participant to researcher in a more roundabout way, and this drawn way of ‘figuring’ has helped me find ways of visualising inaccessible data that is inherently more ‘blurry’, but still provides rich information about the Airwave process and its team members’ perceptions. Because of this, I suppose I now see the value in this blurrier type of data visualisation that captures the ‘spirit’ or ‘essence’ of the dataset by highlighting its connections, form, and flows even if it doesn’t represent the actual data to the number.

Roz and Helen:

Working with Stef and seeing her drawings has helped us explore how different actors – patients, researchers, clinicians, data scientists – interact with data. The sketches suggest ways that a person and their data travel, combine, separate and move almost like a murmuration of starlings; individual and group become inseparable, but scattered data points – remainders – also emerge that don’t quite fit in the boxes and groups. These are outliers, mis-codes, excluded or perhaps just waiting for other misfits. In “People Like You” we are exploring how figuration, figuring, and efforts to figure personalisation “can help us think and study our increasingly datified present”.

Drawing is helping Stef to see, or figure out, what is going on in Airwave; she is also helping us figure out what happens to the person in and out of the data.

19 January 2021



William Viney

14 December 2020

On 11 November 2020 we hosted an informal launch for Written Portraits, involving the ‘sitters’, patients, staff, friends, family, and many others interested to learn more about this collection. We were joined by a wonderful cast of actors and heard a selection of poems: ‘Vital Conversation’ read by Clive Llewellyn, ‘Rewilding the Self’ from ‘The Art Class’ read by Lin Sagovsky, ‘The Three Musketeers’ read by Chris Barnes, and ‘Everyday Heroines’ read by Susan Aderin.

Di Sherlock’s Written Portraits are words that perform: words that gather people so that each ‘portrait’ is formed through a mixed practice of assembly. Written Portraits was the outcome of Di Sherlock’s residency at Charing Cross Hospital and Maggie’s Centre West. Her writing involves three stages that are not really events but a series of interlinking processes, unpredictable in number, location and duration: conversation; composition; returning the poem to the ‘sitter’. With these activities Di gets the measure of an individual or group. She does not only write about people but includes their participation, through verbatim elements from conversations and through references and citations to other occasions, texts, and images. Editing is born from discussion and negotiation as they edit with different hands through a process that can go back and forth before both sitter and poet consider the portrait fits – a likeness or resemblance they both like. One sitter said it “took a while for that ‘portrait’ to percolate through, the ‘gift’ also requires acceptance” and this process involved showing their poem to family members and hearing their points of view: “I was reluctant to share the mirror you held up to me until now”, they said, “my father, a Scotsman, was fond of quoting Burns and there’s a few lines that stuck in my head: ‘O wad some Power the giftie gie us / To see oursels as ithers see us!’”

The launch event marked an ending for Written Portraits, the collection has now been presented to the public and its many mirrors shared. But a process of composition and comparison continues through an exchange of correspondences that connect, once more, to the practices of personalisation that have concerned the People Like You project. Written Portraits explores the ways that preference, need, desire, pleasure and recognition serve as grounds for judging people to be alike, i.e. grouped, comparable to each other. Each portrait contains a path into a life composed of others, insofar as it relates to and connects other people or things. The subjects of these poems are composite, multiply exposed, ‘fractal’ renderings of persons as sitters tell of themselves via mothers, fathers, or siblings, whether alive or dead, jobs and activities they do or do no longer, people they are or were or want to be. People and things to which they are alike, and they like, expressed through a portrait they are then asked to judge for its likeness and enjoyment. It is this composite understanding of the person, this view of portraits, that has intrigued and provoked our research group, especially when seen alongside developments in what is now commonly called ‘personalised’ or precision medicine.

Di Sherlock’s project has run in parallel to our interview and observational research in cancer services, where we have been learning about cancer care with patients and staff in clinical and non-clinical settings. Cancers have been the exemplar of a more personalised medicine that subtypes tumours based not only on their location, size, grading and staging, but also on their molecular characteristics. Tumours have thus been further differentiated and they are monitored with biochemical tracking techniques. Treatments have also been developed to target these subtypes, improving outcomes, and reducing harm to patients. Though patients may be told that their treatment is targeted and tailored, this is only possible because they are grouped or sorted into categories of patients ‘like’ or similar to them. Their ‘personalised’ medicine is forged through common characteristics of their cancers rather than biographical affinities – such as age, other health conditions – with others. Such combinations and conflations of need and similarity are provisional and promissory, since ‘people like you’ may like this treatment, or this, or that, until ‘you’ no longer coincides with the person you once were. You may no longer like your treatment; your cancer may no longer respond.

In this period of change, strain and crisis within UK health services we need the insight of counterfactual values provided by alternative forms of interaction, practice, and method, especially when we want to illuminate different qualities of relationship, persona, expression and value afforded to patients and staff in busy NHS services. What I have found so valuable about Di Sherlock’s work is how it highlights participative form, expectation, gift relations, and other practices constitutive of how people see themselves as persons. As a time-limited case study where liking and likeness coincide – at the point of agreement or fit between text and person – the collection moves by freeze-frames that contrast to other, iterative modes of medical personalisation.

People Like You Want Jobs Like This

Scott Wark

30 October 2020


A few weeks ago, jobs were trending on Twitter. On October 6th, the Chancellor, Rishi Sunak, dodged a question on TV about what people in creative or event-based industries ought to do for work. These industries have been heavily affected by COVID-19 lockdowns, leading to widespread job losses. His response, that “everyone is having to find ways to adapt and adjust to the new reality,” strongly implied that these workers were going to have to learn to do a different job. He also said that the government had “put a lot of resource into trying to create new opportunities.” So, Twitter users decided to check these resources out.

What they found was a questionnaire on the National Careers Service called Discover Your Skills and Careers (DYSAC). This questionnaire was designed to help people figure out what kind of work they might be suited for. Jobs started trending because some of the results of this test were bizarre, in poor taste, or seemed to ignore the damage done by COVID-19. I was told I’d make a good wine merchant, which I thought was funny—I like wine—but also not a career one can just pick up. Other users, including one of our PLY colleagues, reported being advised to take up careers in boxing, which is hardly the most sustainable work going. Most egregiously, perhaps, some were told they’d be best suited to work as actors or entertainers—the very careers Sunak was warning that people might have to leave.

The reasons why this DYSAC questionnaire made headlines are pretty obvious. If we peel back the angst and irony that made this questionnaire into a trending topic on Twitter, though, we can use the DYSAC to illustrate what we mean by personalisation—and why it sometimes falls well short of what it promises.

The DYSAC questionnaire is a part of a branch of psychology called psychometrics. Psychometrics quantifies human attributes—like intelligence, aptitude, behaviour, or emotions—to make them measurable and comparable, using instruments like questionnaires or tests. The IQ test is probably the most infamous historical example, but psychometrics are also widely used by big organisations. These tests are supposed to reveal things about you that can’t be gleaned from a job interview, like whether you’ll be suited to a particular role.

To create the DYSAC, the National Careers Service contracted SHL, one of the biggest global providers of psychometric tests that measure aptitude for jobs. SHL are famous for their Occupational Personality Questionnaire (also known as OPQ32), which is designed to predict if a prospective employee is likely to perform well by measuring 32 personality characteristics divided into three major areas: “relationships with people,” “thinking style,” and “feelings and emotions.” Are you controlling or outspoken? Conventional or adaptable? Worrying or optimistic? The idea is to quantify these characteristics and map them on to roles that require particular personality types.

The DYSAC is a bespoke—and much simpler—version of this test. It’s a “normative” test, asking respondents to rate how strongly they agree or disagree with a particular statement on a “Likert-type” scale of 1-5. But it also reverses the psychometric principles that underpin the OPQ32. Instead of using psychometrics to find the best person for a job, the DYSAC determines a person’s behavioural styles to find out which jobs they might fit. On their company blog, SHL Senior Consultant Helen Farrell describes the brief they were given like this: the National Careers Service “wanted to be able to empower the user to explore potential career paths they might not have otherwise thought of.” This aspiration is noble enough, but it’s also the source of the ensuing controversy.
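The mechanics of a normative test of this kind can be sketched in a few lines of code: Likert ratings are averaged into trait scores, and trait scores are matched against role profiles. Everything below is invented for illustration (the items, trait names, weightings and job profiles are all hypothetical); SHL’s actual scoring model is proprietary and far more elaborate.

```python
# Minimal sketch of normative (Likert-scale) trait scoring and job matching.
# All trait names, items, and job profiles are invented for illustration.

# Each questionnaire item contributes to one trait; responses are rated 1-5.
ITEMS = {
    "I enjoy persuading others": "outspoken",
    "I prefer tried-and-tested methods": "conventional",
    "I worry about things going wrong": "worrying",
}

# Hypothetical job profiles: the trait mix each role is taken to favour.
JOB_PROFILES = {
    "sales": {"outspoken": 5, "conventional": 2, "worrying": 1},
    "auditing": {"outspoken": 2, "conventional": 5, "worrying": 3},
}

def score_traits(responses):
    """Average the 1-5 ratings for each trait."""
    totals, counts = {}, {}
    for item, rating in responses.items():
        trait = ITEMS[item]
        totals[trait] = totals.get(trait, 0) + rating
        counts[trait] = counts.get(trait, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

def best_match(trait_scores):
    """Rank jobs by closeness (smallest squared distance) to the profile."""
    def distance(profile):
        return sum((profile.get(t, 3) - s) ** 2 for t, s in trait_scores.items())
    return min(JOB_PROFILES, key=lambda job: distance(JOB_PROFILES[job]))

responses = {
    "I enjoy persuading others": 5,
    "I prefer tried-and-tested methods": 2,
    "I worry about things going wrong": 1,
}
print(best_match(score_traits(responses)))  # prints "sales"
```

Even in this toy form, the reversal described above is visible: nothing here models the person as such, only their distance from a small set of pre-defined role types.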

The DYSAC assumes that self-knowledge will help one succeed at work. What’s interesting to us is how it does this: it attempts to address you. That is, its series of questions generate results that say this is you. This mode of address is the source of Twitter’s ironic response to the questionnaire, because many people felt like it didn’t address them at all. The problem is that its results feel anything but personal(ised).

Psychometrics employs a peculiarly modern technique: it attempts to quantify a trait—in this case, personality—to make people comparable and classifiable. Much of the controversy surrounding the DYSAC stemmed from a feeling that its advice was generic. People reacted to what they saw as the test’s inability to comprehend their personal context or the broader context of a post-COVID-19 world. People simply couldn’t relate to some of its results, and they definitely didn’t want to be told they had an aptitude for a job in an industry that the pandemic had shut down.

This controversy also stemmed from the way the DYSAC presented data about prospective jobs. There are a lot of websites out there that give advice about how a prospective employee ought to approach the OPQ32. One of the disclaimers these websites like to make about it is that “there is no wrong kind of personality.” But the panel of results generated by the DYSAC, its simpler sibling, makes something else abundantly clear: some personality types seem to be much better remunerated than others. The DYSAC revealed much about how we are valued as people.

But this controversy also arguably stemmed from the test’s failure to produce a personalised outcome. This is what makes the experience of doing the DYSAC test feel so utterly strange. It reduces self-knowledge to the occupation one ought to aspire to. In order to understand you, instruments like psychometrics have to understand you as a type. Only, what’s being optimised isn’t the type of job for the person, but the type of person for the job.

Though it might have failed to present a personalised outcome, the DYSAC nevertheless illustrates a few of the paradoxes of personalisation that we’re so interested in. Personalisation can only address you as a person by understanding you as part of a collective. The DYSAC fails because it doesn’t manage to sort the person back out of the collective its questions sort them into. And yet it also demonstrates the strange flexibility and reversibility of the category of this “person.”

One of the phrases that we find ourselves returning to time and again is one that often accompanies algorithmic recommendations: “people like you like things like this.” The DYSAC applies an inverse logic. Its results are based on the proposition that jobs like this like people like you. Personalisation can mean personalising a job for a person, but it can also mean personalising a person for a job—in this case, by trying to quantify the personality traits that best match a role reduced to a type.

When personalisation works, this backwards logic just feels right. When it fails, well, it just feels wrong. I’m not likely to try to be a winemaker anytime soon. But besides having a job, I’m in a better position than an artist friend who received the result below after doing the DYSAC test. Perhaps the problem with the government’s failed attempt to personalise job advice is that too many people felt like the DYSAC didn’t “recommend any job categories” to them. People like them, it seems, aren’t liked by jobs at all.



The coronavirus pandemic has transformed so many things in our lives, from the way we work to the way we socialise. But the impact has not been experienced equally. While the whole of the UK population was asked to practise social distancing during the lockdown, one newly created category of people was asked to take special care to reduce their own exposure to the disease: those who were identified as being at high risk of complications from COVID-19 (the disease caused by the SARS-CoV-2 virus). These “clinically extremely vulnerable” people were asked to take action beyond normal social distancing to protect themselves from SARS-CoV-2.

Many people who were defined as clinically extremely vulnerable received a letter informing them that:

“The safest course of action is for you to stay at home at all times and avoid all face-to-face contact for at least 12 weeks from today, except from carers and healthcare workers who you must see as part of your medical care.”

The impact of the creation of this category, through the establishment of the Shielded Persons List (SPL) by NHS Digital, cannot be overstated. From the outset there have been questions about the effects that the shielding rules have had on the mental and physical wellbeing of the people affected, due to isolation and financial and practical difficulties.[1] In addition, the method of the list’s creation, application, communication and revision has also had major impacts on people. For many people who were, or were not, included on the SPL, and for many whose status changed, there has been confusion, uncertainty, mistrust, and feelings of vulnerability with regard to what actions they should take.

In our public involvement work, many people who were shielding felt that they had to make decisions for themselves (or contact charities for advice) about whether they should be shielding.[2] This was often in the absence of support from their healthcare providers, who they felt were busy with the COVID-19 response. The uptake of the guidance has varied substantially depending on how far individuals felt they were appropriately categorised, and on whether the guidance matched their own risk perceptions. It has also meant that many people who concluded that they should have been included on the list decided to shield themselves.

Part of this uncertainty about whether the category fits an individual was due to the way in which the list appears to be based on an automated algorithm, which can feel impersonal or ill-suited when applied to an individual. Some of the uncertainty and mistrust can also be associated with the dynamic processes that underpin the list’s production. Unpacking these processes helps to clarify why responses to the list have been so varied.

To create the SPL, NHS Digital first deployed expert clinicians to create a list of high-risk disease groups, and people were then assigned to the SPL if they had these conditions. Most people on the SPL were identified centrally through an algorithm that mapped the list of high-risk conditions onto individual-level diagnosis categories. They “…‘translate’ (or map) the clinical requirements of the list into the right subsets of coded information so that individual patients could be identified.” Additional people were added to the SPL by GPs and through secondary care following clinical guidance. The list is maintained centrally, but flows out to clinicians through local data systems, and can be updated by primary and secondary clinicians at the point of care. Updates are incorporated into the national list on a weekly basis, and the revised list is distributed the following week through the same channels.

This sounds organised, and in some ways makes intuitive sense: there are diseases and conditions that suggest people are likely to have worse reactions to COVID-19, so a national list is created of people with these conditions, using rules (an algorithm) created by experts to map conditions onto continually collected data.
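In outline, the central identification step is a join between an expert-curated list of high-risk conditions, ‘translated’ into diagnosis codes, and individual patient records. The sketch below illustrates that logic only: the condition names, codes and patient records are invented for the example, and the real SPL rules operate over formal clinical coding systems rather than the toy sets shown here.

```python
# Sketch of SPL-style identification: expert-defined high-risk conditions are
# "translated" into sets of diagnosis codes, then matched against coded
# patient records. All condition names, codes, and records are invented.

# Expert-curated rule: condition -> the diagnosis codes taken to identify it.
HIGH_RISK_RULES = {
    "severe respiratory condition": {"J84.1", "E84.0"},
    "specific cancers": {"C91.0", "C92.0"},
}

def build_spl(patients):
    """Return the IDs of patients whose coded diagnoses match any rule."""
    high_risk_codes = set().union(*HIGH_RISK_RULES.values())
    return {pid for pid, codes in patients.items() if codes & high_risk_codes}

patients = {
    "patient-1": {"J84.1", "I10"},   # matches a respiratory rule
    "patient-2": {"I10"},            # no match
    "patient-3": {"C91.0"},          # matches a cancer rule
}
print(sorted(build_spl(patients)))  # prints ['patient-1', 'patient-3']
```

What this sketch cannot capture is exactly what the post goes on to describe: the local tailoring by clinicians who add or remove flags at the point of care, which is where the centrally computed list meets individual histories.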

Image: an example of a rule used to map a condition to datasets.


After the initial list was created, clinical decision-making by doctors who know the patients’ histories could override the algorithm. In one interview about how the algorithm works in practice, a GP said:

“The data – it gives you the false sense of precision because a code is a code…., but actually on a human level, there is something else going on. And the data will help us and it may be a very rough way to screen people, but somebody has to do another level of tailoring to the patient, to the individual.”

The same doctor said that they will also defer to the advice of a charity as to whether to add a flag to include a patient on the list. How a person finds out they are on the SPL is determined entirely by who adds them to the list, because responsibility to inform them lies with the person or body that adds the SPL flag to their file. They can be informed by their GP, through secondary care, or by NHS Digital. The dynamic nature of the list production, including the tailoring for the patient at the local level, has meant that the process can appear chaotic and impersonal to people who are affected. The list, which is partially a product of their own health data, becomes unrecognisable as it is transformed and recontextualised for use in the COVID-19 era.

A predictive risk algorithm being developed at the University of Oxford [3] extends the current SPL. It builds on the category of “clinically extremely vulnerable” people with a prediction about how you will respond if you get COVID-19. This prediction is based on the responses of people with similar clinical and demographic backgrounds who have already had COVID-19. The idea that underpins the algorithm is that people whose data are statistically similar to yours (people like you) will have similar responses to COVID-19.

The algorithm as described by the research protocol (Oxford University):

The original SPL… was developed early in the outbreak when there were very little data or evidence about the groups most at risk of poor COVID-19 outcomes, and so was intended to be a dynamic list that would adapt as our knowledge of the disease improved and more evidence became apparent… [we will] assess whether a predictive risk algorithm can be developed with the above evidence to permit a more sophisticated ‘risk stratification’ approach. There are a variety of potential uses for such a tool, but it is primarily anticipated that it could be used both clinically in informing patients of their individual risk category and managing them accordingly, and strategically to stratify the population for policy purposes.[4]
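The ‘people like you’ logic of such a risk tool can be illustrated with a toy nearest-neighbours calculation: estimate an individual’s risk from the outcomes of the cohort members most similar to them. This is a sketch of the general principle only. The features, records and outcomes below are invented, and the actual tool described in the protocol fits a statistical risk model to national data rather than using nearest neighbours.

```python
# Illustrative nearest-neighbour risk estimate: predict an individual's risk
# from outcomes among the k most similar people ("people like you").
# All features, records, and outcomes are invented for illustration.

import math

# Each record: (age, number of high-risk conditions, outcome: 1 = severe COVID-19)
COHORT = [
    (45, 0, 0), (50, 1, 0), (70, 2, 1),
    (75, 1, 1), (80, 3, 1), (30, 0, 0),
]

def estimate_risk(age, conditions, k=3):
    """Share of severe outcomes among the k most similar cohort members."""
    def dist(rec):
        # Weight the condition count so it matters on a scale comparable to age.
        return math.hypot(rec[0] - age, 10 * (rec[1] - conditions))
    neighbours = sorted(COHORT, key=dist)[:k]
    return sum(outcome for _, _, outcome in neighbours) / k

print(estimate_risk(72, 2))  # prints 1.0: the three nearest records all had severe outcomes
```

The sketch makes the promise and the problem visible at once: the ‘individual’ risk it returns is nothing but an average over a category of similar others.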

In our research at Imperial, some people have said that they would much prefer an individualised risk assessment, but it remains to be seen whether this new form of algorithmic personalisation will feel more appropriate than the current SPL category production process.

On August 1st, shielding was paused in England, Scotland and Northern Ireland; in Wales it was paused on August 16th. The pause has meant that many people have lost the protection and support that were provided because they were on the SPL, and those on the shielding list are no longer eligible for Statutory Sick Pay. Furthermore, the application of shielding guidance can now be adjusted at the local level according to local data on prevalence: if prevalence increases, people on the list may be told that they need to shield again. The list itself remains very much alive and may be reactivated as the situation evolves. As SARS-CoV-2 transmission increases over the coming weeks, people who are considered vulnerable are likely to receive further communications about their risk, and it will be crucial to see whether the new approach feels any more personal.



[1] Robb CE, de Jager CA, Ahmadi-Abhari S, Giannakopoulou P, Udeh-Momoh C, McKeand J, Price G, Car J, Majeed A, Ward H, Middleton L. Impact of social isolation on anxiety and depression during the early COVID-19 pandemic: a survey of older adults in London, UK. Frontiers in Psychiatry 2020;11:591120 doi: 10.3389/fpsyt.2020.591120

[2] Maria Piggin, Katherine Collet, Philippa Pristerà. Insight Report: Guidance for people who are clinically extremely vulnerable from COVID-19 (18 June 2020)

[3] Hippisley-Cox et al. 2020. “Development and evaluation of a tool for predicting risk of short-term adverse outcomes due to COVID-19 in the general UK population.” Research Protocol

[4] Hippisley-Cox et al. 2020. “Development and evaluation of a tool for predicting risk of short-term adverse outcomes due to COVID-19 in the general UK population.” Research Protocol


What Makes You Feel Alive?

William Viney

27 July 2020


People dressed for carnival and karate, cooking and gardening – just some of the people that kept Rina Dave alive. Each portrait shows a person involved in Rina’s care, dressed for or posed with the thing they loved: the activities, the people, the objects that in turn made them feel alive. During 2014, Rina was being treated for stage 4 breast cancer at Charing Cross Hospital. She wanted to pay tribute to the healthcare staff, researchers and volunteers who kept her going. With the help of CRUK research nurse Kelly Gleason, Rina set up a makeshift photographic studio in the hospital, and the pictures were exhibited in late 2014. Rina passed away peacefully in 2015. The photographic project went into storage before being shown again at a science festival in 2019, alongside an invitation to visitors to share what makes them feel alive.

Credit: Dan Weill, Imperial College London. Image: The Feedback Wall in action

The question that served as the premise of Rina’s project – ‘what makes you feel alive?’ – prompted more than 600 different answers. I was struck by the variety of things that people said made life matter – people, pastimes, passions, answers that varied from the abstract to the specific and the flippant. There were moments of contemplation and concentration, lots of discussion, much laughter and some tears. The responses interested me because this feedback wall – constructed from a clothes rail, coloured string, and many handwritten postal tags – is public and anonymous, and visitors often took time to read and discuss the contributions of others before adding their own. The individual responses reflected many situated and distributed relations – with people and things – that give us a sense of who we are in relation to others.

Rina’s project, and the feedback wall it inspired, connect with some of the work we have been doing in this same breast cancer service. Her project reminded me of a patient I met during our fieldwork, who was also being treated for advanced breast cancer. Her treatment and pain management were adapted to allow her to visit an annual comic book convention with her son, the highlight of her year. Her medical team knew this was a priority and changed her appointments and medication to help her attend the convention. Small adaptations like these can make a world of difference to patients, and they provoked questions about how, when, and why care is tailored, adapted, or adjusted according to people’s preferences, differences and similarities.

In our work on personalisation we look at how ‘liking’ and ‘likeness’ combine in the address to ‘people like you’ – expressions of preference can provide the grounds for judging people to be alike (similar to each other), or the two can run in parallel. NHS England’s approach to personalisation has been to separate resources and policies for ‘medicine’ and ‘care’. The former matches people according to increasingly predictive and precise categories of molecular and other biological traits. The latter enables people’s preferences to shape their use of health services. While personalised medicine promises ‘the right treatment to the right person at the right time’, personalised care involves conversations between care providers and patients about ‘what matters to you?’ rather than ‘what’s the matter with you?’ In NHS oncology services, clinical matters tend to take priority, and care pathways are largely shaped by similarities between patients based on disease categories and related treatment and response. Patients and staff also registered how preferences could be acknowledged, ranging from IT developments that improved information sharing between patients and staff to nurse-led chemotherapy that involved greater continuity of care.

An important tool used to gather information about patient priorities and preferences is the Holistic Needs Assessment (HNA), which nursing staff complete with patients to develop their care plans. Standardised assessments have been championed by campaign groups and charities as an important mechanism to make healthcare services more ‘personalised.’ They provide opportunities for asking patients about the things that matter to them. In this breast cancer service HNAs are completed at the beginning and at the end of treatment, and nurse specialists review issues and concerns with patients. HNAs can help to highlight practical concerns that are hard to discuss in time-pressured appointments with surgeons or oncologists, including specific spiritual, financial, psychological and mental health needs, as well as worries about the effect of cancer treatment on personal relationships. They are used as a communication device by nursing staff, much like the feedback wall we developed, to spark conversations about what care could look like for this individual. But HNAs are also tools for broader service developments and delivery – they are used to produce metrics of care quality which can be used to audit and rate hospital trusts.

HNA data is being gradually integrated into wider data management systems that the hospital uses to track patients and patient groups. Such metrics may, in the future, be used to sort patients in terms of the characteristics they share, combining the biological categorisation of cancers with measures of emotional wellbeing, financial security, or social network features. In the long run, patients will continue to be asked about the things that matter most to them. But over time who they are taken to be will be shaped by the ways in which the service quantifies, aggregates and re-assembles them in fluid groups or categories.

How large oncology services incorporate, code, operationalise and standardise the ‘holistic needs’ of patients in the future will depend on how they interpret patient-centred care delivered by healthcare staff through evolving computational tools for analysing patient data. For now, who asks what makes you feel alive, and how the answer is implemented, depends on the prior emphasis given to different orders of ‘like’, that is, preference and similarity. Care pathways prioritise likeness based on sub-typing of disease categories, treatment and treatment response, while tools that address preferences about care operate in parallel, according to various social, emotional, psychological and financial categories. Platforms for ‘personalised medicine’ may promise to deliver the right treatment to the right person at the right time but remain agnostic about whether that person enjoys a good standard of living, or whether they still enjoy the things that made them feel alive.


Acknowledgements: thank you to Kelly Gleason for introducing me to Rina’s project and allowing me to be involved in the CRUK stand at the Great Exhibition Road Festival 2019. Read about her work with Rina here.  I also want to acknowledge our collaborative work in breast cancer medicine and healthcare, led by Sophie Day, who made many useful comments and suggestions towards the writing of this piece. We are very grateful to the staff and patients who spoke to us during our research.

Yael Gerson

23 June 2020

As the title of our project suggests, we are looking at different aspects of personalisation. Recently, I have noticed a new kind of ‘personalised’ advert, which is for personalised hair care. Curiosity got the better of me, and one day I clicked on the advert for ‘hair care personalised’, which instantly took me to a quiz asking me all sorts of difficult questions: is my hair wavy or curly (I know it’s not straight), is my scalp dry, what hair goals do I have, etcetera. I found myself asking friends and family the answers to these questions, and it was interesting to hear that there was no consensus. This got me thinking about what I find problematic about such notions of personalisation, in particular the idea that we have a singular, unified identity.





Function of Beauty, formula for personalised hair care

Recently, I was sitting in the back of an Uber, when the driver asked me where I was from. Mexico, I replied. Well, he said, I would have said any place except that. When I asked him why, he said he didn’t know why, but it was just instinctive. This got me thinking about times when my sense of self has clashed with how others perceive me. Let me illustrate this. It was 2008, and I was two years into my PhD when I got my first job as an associate lecturer in Sociology, leading seminar discussions with undergraduates. One week we were discussing ideas around race, more specifically the social construction of race, and in particular the categorisation of people as either ‘white’ or ‘black’. I stood in front of the class and said, “so, for example, when you see me you say…”, and the class responded “black.” I was shocked – I had always thought of myself as white, and even my Mexican passport stated that my skin colour was ‘white’. Needless to say, this led to a very interesting class discussion; what stayed with me was the moment of shock I experienced: why had I felt so surprised?

bare Minerals Made 2-Fit foundation

Stuart Hall has already pointed to the problem of thinking of identity as an accomplished fact; rather, he said, we should think of identity as “a ‘production’, which is never complete, always in process, and always constituted within, not outside, representation” (Hall, 1990). Identities are inscribed into bodies through everyday practices, making them almost invisible and in many ways feeling ‘fixed’. For me, moving countries de-stabilised this fixedness and gave me a different sense of self. People of mixed heritage will probably have experienced similar moments – moments of not fitting into categories, of being in-between. It is more with the not-fitting-into categories than the fitting-into categories that I identify. I am thus always uneasy when faced with quizzes and promises of personalisation done through AI. When it comes to foundation, will an app be better at determining my skin tone than the person at the beauty counter? Will I be able to judge my hair and scalp type better than a hairdresser can? How can we ‘know’ something that is always becoming (Butler, 2001)? In the context of beauty, AI is presented as an objective tool that will ‘see’ you – your hair type, your skin tone – without judgement. The danger here is that AI-enabled technologies will (re)produce a certain fixedness of racial and gendered identities, and that these will be adopted by consumers as ‘objective’ and ‘true’.

What strikes me in the experiences narrated above is the ineffability of identity; and so, if I cannot express what my identity ‘is’, then we cannot afford to think that AI-powered personalised beauty products are not political.



Scott Wark

15 May 2020

The Figurations: Persons In/Out of Data conference was held at Goldsmiths, University of London, in December 2019. Over two days, it gathered researchers from across the humanities and social sciences to explore how the concept of the “figure” and its cognates—figuration, to figure, to figure out, and so on—might inform the theoretical frameworks and methodological formulations we use to study developing personalisation and data practices.

In the conference’s blurb, we summed up the conference’s interests like this: [t]he intersection between data and person isn’t fixed; it has to be figured. Per its subtitle, the conference was interested in persons, the putative subjects of the processes of personalisation that we study here at People Like You. But it was also interested in the data processing techniques that make persons tractable to processes like personalisation. Our proposition was quite simple: perhaps what these data processing techniques do, in otherwise-distinct domains—perhaps what they have in common—is that they configure and distribute personhood in the data they assemble, as outputs for other operations. Our gambit was that making this proposition the basis of a conference would encourage other scholars, from a range of disciplines, to come and think through it with us. And it did.

Over two days, we hosted four keynote presentations, by AbdouMaliq Simone and Wendy H. K. Chun on the first day and Jane Elliott and John Frow on the second. 45 papers were also delivered by 61 researchers from a wide range of disciplines and places, including the medical humanities, anthropology, sociology, media studies, geography, human-computer interaction, literature, art history, legal studies, and visual cultures.

Each day was punctuated at beginning and end by a keynote presentation. The first day started with Simone’s discussion of how a “we” is—must be—configured in order to continue to inhabit a planet that’s in excess of our experience and understanding. The day was capped off by a presentation by Chun on the discriminatory politics of the machine-learning-based recognition systems that subtend and orchestrate many of our relations with networked technologies. The second day started with a presentation by Jane Elliott on longitudinal research, which used the example of the 1958 British Birth Cohort Study to discuss, with great nuance, the challenges facing researchers working on figuring individuals over large time scales or, conversely, using the micro-scale data offered by wearable technologies. Finally, John Frow concluded the conference with a presentation on “data shadows,” which connected questions of surveillance and data processing to the problem of how we might recognise ourselves in their products.

Particular themes emerged over the course of two days, both in these keynotes and in the parallel sessions that they bookended. Many presenters offered compelling conceptualisations of the different ways that persons might be figured, whether as patients or users, data doubles or digital subjects; whether imagined as individuals or as they’re assembled, by data, into collectives. Other presenters focused more on data and how it’s processed, conceptualising abstract processes as figures that configure or constitute persons. Matt Spencer’s presentation, for instance, articulated the configuring influence of trust over the infrastructures that manage data, whilst Emma Garnett unpicked how pollution has to be figured, in order for us to understand it and, so, understand our relationship to it. This variety was stimulating. It also contributed to a sense of coherence in the concept of the figure we’d adopted as a guiding thread.

What emerged from these papers was a sense that the concept of the figure was multiple, but nevertheless helped us get a handle on how data and persons are mutually configured, as, for example, figures of speech, or inter-operable subjects. How it does so differs from context to context, depending on what techniques and technologies are involved and to what ends they’re employed. But the concept of the figure and its cognates help us to apprehend figuration as a process with particular characteristic features. It helps us see what data are and what data do. It captures data by tracking what people do. It makes data commensurable by establishing likenesses. It situates data in contexts that delimit its scope. What emerges are figures of persons constituted in/out of data.

Finally, it also brought home a key point that, for me at least, often tacitly informs the work People Like You does as a team. To study problems that arise from the relationship of persons and data, that are large scale, and that cut across very different domains—in our case, personalisation—we have to adopt interdisciplinary approaches informed by novel methods. Moreover, we need concepts that are fit for purpose. This conference affirmed to us that the figure and figuration are just such concepts. At scale, they can do the kind of conceptual work we need to understand the complex processes that inform how we might understand a person to be.

It’s a few months after the conference, but we’ve still been working with its outcomes. We’re aiming to make available an edited collection of papers by keynotes and presenters from the conference and members of the PLY team. We hope this collection will capture something of the breadth that made the conference successful. But we also hope that it’ll give readers conceptual and methodological tools to do their own figuring.

We’ll have more on this soon. In the meantime, thanks to everyone who presented or attended!

For photos of the conference, check out our gallery.




Sophie Day

24 April 2020

Sophie Day and Celia Lury

We are all now familiar with what 2 metres looks like, as we go for solitary walks in parks or stand in queues to shop for family and friends. We draw lines, we stand aside or behind or in front of others; we walk around and in parallel to each other.

Improvising, trying out ways to be ‘close up, at a distance’ (Kurgan 2013), we leave food outside doors and put our hands to windows separating us from friends and relatives who cannot leave their homes. Participating in group chats, we appear to ourselves and others as one talking head among others. We sing across balconies, we mute ourselves in synchronized patterns. Our actions producing insides and outsides, we appear alone together.

In describing 2 metres as a social distance, we acknowledge that we are part of a bigger picture. But who is it a picture of, and how does it come about? How is the compulsion of proximity (Boden and Molotch 1994) – the need to be close to others – being reconfigured as proximity at a (social) distance? And what kind of social is this? Does it add up to a society? Who is included and who is left outside? Are we all in this together?


In the 2 metre rule and the complicated guidance about who has to stay indoors and who can go out and why, we see grid reactions. This is a phrase used by Biao Xiang in his discussion of the management of the COVID-19 epidemic in China by already existing administrative units. He says, ‘Residential communities, districts, cities and even entire provinces act as grids to impose blanket surveillance over all residents, minimize mobilities, and isolate themselves. In the Chinese administrative system, a grid is a cluster of households, ranging from 50 in the countryside to 1000 in cities. Grid managers (normally volunteers) and grid heads (cadres who receive state salaries) make sure that rubbish is collected on time, cars are parked properly, and no political demonstration is possible. During an outbreak, grid managers visit door to door to check everyone’s temperature, hand out passes which allow one person per household to leave home twice a week, and in the case of collective quarantine, deliver food to the doorstep of all families three times a day.’


Image posted by Adam Jowett, April 20th, 2020, adapted from Fathromi Ramdlon, via


While gridding has long been a core technology of rule through centralised command and control according to the priorities of military, state and industrial logistics, the pandemic is leading to a multiplicity of grid reactions. In the UK, some grids are imposed, while others are improvised. We (variously) wear face masks, choreograph meetings in Zoom (faces within faces, faces on their own, face by face, just not face to face), and order goods online, while governments refuse to let cruise ship passengers disembark, impose 2 metres outside care homes but not inside, divert PPE from one country to another, and close borders to people but not to goods. There is no single grid in operation. We are all making neighbours differently.



As we do so, we learn about grids; they can be creative, they can be fun, but grid reactions also make visible the social of social distancing and the politics of proximity. For example, watch a man from Toronto wearing a so-called “social distancing machine” (a hoop with a 2 metre radius that a person can wear around their middle) while walking around the city.


The aim of this machine was to show that sidewalks (the North American word for pavements seems especially appropriate right now) are too narrow, particularly when people are being asked to socially distance. Its creator, Daniel Rotsztain, who is part of the Toronto Public Space Committee, a group that advocates for more “inclusive and creative” public spaces, said to Global News Radio AM 640, ‘I think even before COVID, you could say that pedestrians are jostling for space in Toronto, but COVID really exposed that’. He proposes that some streets should be closed to traffic to give pedestrians more room to maintain distance. In the grip of a pandemic, the grids of family, household, and district – the negotiation of which is so fundamental to social life – can no longer be taken for granted. Some grid reactions are described in terms of shielding the vulnerable, but who are they really shielding – those inside or those outside? For whom does a home become a prison, and when does a cruise ship become a floating container?


Still, Ryan Rocca, “Coronavirus: Man wears ‘social distancing machine’ to show local sidewalks are ‘too narrow’. Global News, April 13, 2020:


A grid seems to freeze time within spatial relations, but it is a way of managing mobility. We move up the queue outside the supermarket in sequenced intervals rather than as and when we like. While the grid seems fixed, it calibrates movement – the transmission of a virus, for example – assigning spatial and metric values to this temporal process in an interplay of number-based code and patterning (Kuchler 2017). In the UK, 2 metres is the measure being imposed to mediate R0, the basic reproduction number: the number of new cases that one infected person generates. Recognizing 2 metres as a social distance acknowledges that transmission is not simply a matter of biology, but of how social life is gridded. But while R0 conventionally takes the individual person as the unit of transmission, the examples above suggest that it is the operation of multiple grid reactions – and the failure or success of the interoperability of their metrics – which matters. We need to ask: what kinds of families fit into what kinds of households into what kinds of schools? How do they interconnect? How differently permeable are private homes, second homes and social care homes? How will apps measure social distance? And perhaps most importantly, how do grid reactions change the ways in which the virus discriminates? In what has been described as a very large experiment, the interoperability of grids is being tested in real time across diverse informational surfaces – models, materials, walls, windows, screens, apps and borders – to create new grids with as yet unknown consequences.



Boden, D. and Molotch, H. L. (1994) The compulsion of proximity in R. Friedland and D. Boden (eds.) Now/Here: Space, Time and Modernity, University of California Press, pp. 257-286.

Jowett, Adam. (2020) Carrying out qualitative research under lockdown – Practical and ethical considerations. At

Kuchler, S. (2017) Differential geometry, the informational surface and Oceanic art: The role of pattern in knowledge economies, Theory, Culture and Society, 34(7-8): 75-97.

Kurgan, L. (2013) Close Up at a Distance: Mapping, Technology, and Politics, The MIT Press.


Helen Ward

11 March 2020

“Of all the gin joints in all the towns … of all the one-horse towns … why did this virus have to come to mine?”

The words of my friend Paul, who is living in an Italian town under lockdown because of the novel coronavirus epidemic. His frustration is palpable as his plans for travel, work and social life were put on hold for at least two weeks (and subsequently extended for another three). But he reasons, “despite the fact that it’s not a killer disease, we can’t all go around with pneumonia. I don’t want pneumonia myself…and I wouldn’t wish it on any of the local citizens so in a sense, I’m sort of with the authorities, even though it’s against my own personal interests at this moment in time, I think that the lockdown is correct” (interview, 26 Feb 2020).





Usually busy street in Codogno deserted, 28 Feb 2020 (credit: Paul O’Brien)

Public health interventions often raise this dilemma – to protect “the community”, individuals have to take actions for which they may see little or no benefit, and at worst experience, or imagine, damage. And in the case of emergency response, health advice tends towards blanket coverage rather than personalised recommendations. A potential pandemic looks like the other end of the spectrum from personalised medicine. The latter uses genomic and other molecular techniques together with large data sets to promise the right treatment or intervention for the right person at the right time through precision diagnostics and therapeutics. The “one size fits all” approach of epidemic response seems far removed from this, with recommendations for handwashing, social distancing and, as in the case of Wuhan and Lombardy (and now the whole of Italy), mass quarantine.

There is no lack of data on COVID-19. Indeed it is the first pandemic in the era of such widespread and easy access to information from 24-hour news, social media and almost real-time updates of numbers of cases, deaths and responses on websites such as worldometer.  This data sharing is unprecedented, as is the openness of publishing results and sharing information on cases and code. This initial data collection is the first stage of any outbreak investigation, where cases are described by time, person and place. In China, scientists used social media reports to crowdsource a daily line-listing of cases with as much data as possible, and this was then compared with official reports. (Sun et al, 2020) Although incomplete, this method had great promise, and teams are now looking to develop methods for more automated approaches, including “developing and validating algorithms for automated bots to search through cyberspace of all sorts, by text mining and natural language processing (in languages not limited to English)”. (Leung and Leung, 2020)

But while social media and online publishing is facilitating data access and sharing, it is also leading to what the WHO have termed an infodemic, “an overabundance of information — some accurate and some not — that makes it hard for people to find trustworthy sources and reliable guidance when they need it”. Sylvie Briand, director of Infectious Hazards Management at WHO’s Health Emergencies Programme, explains that this is not new, but different. “We know that every outbreak will be accompanied by a kind of tsunami of information, but also within this information you always have misinformation, rumours, etc. We know that even in the Middle Ages there was this phenomenon…But the difference now with social media is that this phenomenon is amplified, it goes faster and further, like the viruses that travel with people and go faster and further.” (Zarocostas 2020).

Conspiracy theories and misinformation about COVID-19 have indeed been spreading widely, from ideas that the disease is caused by radiation from 5G masts, to malicious reports of specific individuals being infected and suggestions of fictitious cures. These can be highly influential in determining people’s response to official advice in an outbreak situation.  Working on the role of misinformation on vaccine uptake, Larson describes resulting emotional contagion and “insidious confusion” which can undermine control efforts. (Larson 2018) Health behaviours in relation to infectious disease are complex and shaped by a wide range of factors including beliefs about prognosis and treatment efficacy, symptom severity, social and emotional factors. (Brainard et al, 2019) They are also based on the extent to which the source of the advice is trusted and respected. A survey of 1700 people in Hong Kong in the early days of the COVID-19 outbreak showed that doctors were the most trusted source of information, but that most information was actually obtained from social media. (Kwok et al. 2020)

Lack of trust was found to have undermined the response to SARS in China in 2003, leading to changes in the way that risks were communicated in the H7N9 influenza in 2013. A qualitative study of both outbreaks concluded, “Trust is the basis for communication. Maintaining an open and honest attitude and actively engaging stakeholders to address their risk information needs will serve to build trust and facilitate multi-sector collaborations in dealing with a public health crisis.” (Qiu et al 2018). The focus on engaging stakeholders in the community is a crucial and often neglected part of epidemic response. (Gillespie 2016, WHO 2020)

So, can we expect people to respond appropriately to the one-size-fits-all messages to try and reduce the transmission of coronavirus? The response will depend on a number of factors, including whether people trust the source of the message, whether the threat is perceived as real, whether the interventions are seen as likely to work, and whether the disruption is proportionate. Evidence so far suggests that people are making changes – 30% of 1400 people who responded to my non-random Twitter survey had already changed their behaviour by 22 February, and the disappearance of soap and hand sanitiser from the shelves indicates an intention to adopt hygiene practices. Respondents to a UK survey on 27-29 February reported a range of coronavirus-related actions, including more handwashing (62%) and changed travel plans (21%). (Brandwatch, 2020)

Living in Codogno, Italy, my friend has no choice but to change his behaviour, but after initial annoyance he supports the lockdown as a necessary action to protect others. He is not particularly concerned about his own risk, yet in our conversations, and those with many others in person and online, there has been an interesting focus on the differential impact of COVID-19. The severity is clearly greater in older people and in people with some pre-existing conditions. This knowledge can be reassuring for many – if people like them don’t seem to be badly affected – but frightening for others. Reports of deaths have often been accompanied by descriptions such as “old” and “with underlying health conditions”. I commented on Twitter that this can create a “disturbing narrative this is acceptable, and can make the young & fit feel reassured”, and had a surprisingly positive response with over 20,000 impressions and 200 likes (many more than usual). One person replied, “I agree, the corollary is… that’s all right then, won’t affect us”.


It is not surprising that people want more precise information on risks, and this will eventually affect the response by identifying those people who should be first to receive vaccines and treatments. But we need to take care that the information is not used to create complacency in those who do not feel personally vulnerable. In HIV prevention, the concept of high-risk groups was counter-productive in many settings, leading on the one hand to stigma directed at those groups, and on the other to a lack of protective behaviour by people who felt that the messages did not apply to them. We need to caution against that response. Even if coronavirus is mild for most people, it has the potential to seriously disrupt healthcare if it spreads quickly. The nature of the illness puts particular demands on critical care. In Italy they are already struggling with a lack of critical care beds, and the UK has far lower capacity. (Rhodes, 2012)

In an emergency it is even more important that we take measures that protect others, not just focus on our own personal risks and benefits. So please, wash your hands well, and don’t be offended if I don’t offer to shake your hand when we meet.



Thanks to Paul O’Brien for sharing his experience and photograph. HW receives funding from Imperial NIHR Biomedical Research Centre and Wellcome Trust.



Brainard J, Weston D, Leach S, Hunter PR. Factors that influence treatment-seeking expectations in response to infectious intestinal disease: Original survey and multinomial regression [published online ahead of print, 2019 Dec 6]. J Infect Public Health. 2019;S1876-0341(19)30340-5. doi:10.1016/j.jiph.2019.10.007

Gillespie AM, Obregon R, El Asawi R , et al. Social mobilization and community engagement central to the Ebola response in West Africa: lessons for future public health emergencies. Glob Health Sci Pract 2016;4:626–46. doi:10.9745/GHSP-D-16-00226

Kwok KO, Li KK,  Chan HH et al. Community responses during the early phase of the COVID-19 epidemic in Hong Kong: risk perception, information exposure and preventive measures medRxiv 2020.02.26.20028217;  doi:

Larson H. The biggest pandemic risk? Viral misinformation. Nature 562, 309 (2018) doi: 10.1038/d41586-018-07034-4

Leung GM, Leung K.  Crowdsourcing to mitigate epidemics The Lancet Digital Health, 2020 (February 20)

Qiu W, Chu C, Hou X, et al. A Comparison of China’s Risk Communication in Response to SARS and H7N9 Using Principles Drawn From International Practice. Disaster Med Public Health Prep. 2018;12(5):587–598. doi:10.1017/dmp.2017.114

Rhodes A, Ferdinande P, Flaatten H, Guidet B, Metnitz PG, Moreno RP. The variability of critical care bed numbers in Europe. Intensive Care Med. 2012;38(10):1647–1653. doi:10.1007/s00134-012-2627-8

Sun K, Chen J, Viboud C. Early epidemiological analysis of the coronavirus disease 2019 outbreak based on crowdsourced data: a population-level observational study. Lancet Digital Health. 2020; (published online Feb 20)

World Health Organisation. Risk communication and community engagement (RCCE) readiness and response to the 2019 novel coronavirus (2019-nCoV). Interim guidance v2, 26 January 2020. WHO/2019-nCoV/RCCE/v2020.2

Zarocostas J. How to fight an infodemic. Lancet. 2020;395(10225):676. doi:10.1016/S0140-6736(20)30461-X

*This blog has also been published at Imperial’s Patient Experience Research Centre.

Report from Santiago, Chile

Scott Wark

17 February 2020


Over the past year, members of the People Like You team have been collaborating with Martín Tironi, Matías Valderrama, Dennis Parra Santander, and Andre Simon from the Pontificia Universidad Católica de Chile in Santiago, Chile, on a project called “Algorithmic Identities.” Between the 13th and 20th of January, Celia Lury, Sophie Day, and Scott Wark visited Santiago to participate in a day-long workshop on the collaboration to date and where it might go next.

The Algorithmic Identities project was devised to study how people understand, negotiate, shape, and in turn are shaped by algorithmic recommendation systems. Its premise is that whilst there’s lots of excellent research on these systems, little attention has been paid to how they’re used: how people understand them, how people feel about them, and how people become habituated to them as they interact with online services.

But we’re also interested in how algorithmic recommendation systems might be rendered legible to research. The major online services and social media platforms that people use are typically proprietary. Their algorithms are closely-guarded: we can study their effects on users, but not the algorithms themselves. In media-theoretical argot, they’re “black boxed”.

To study these systems, we adopted a critical making approach to doing research: we made an app. This app, ‘Big Sister’, emulates a recommendation system. It takes text-based user data from one of three sources—Facebook or Twitter, through these services’ Application Programming Interfaces, or a user-inputted text—and runs this data through an IBM service called Watson Personality Insights. This service generates a “profile” of the user based on the ‘big five’ personality traits, which are widely used in the business and marketing world. Finally, the user can connect Big Sister to their Spotify account to generate music recommendations based on this profile.
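The app’s data flow—text in, a big-five-style profile, then recommendations keyed to that profile—can be sketched schematically. Everything below (the cue words, function names and playlists) is invented for illustration only; the actual app relies on IBM’s and Spotify’s proprietary services, which are not reproduced here.

```python
# Toy sketch of the Big Sister pipeline: text -> trait profile -> recommendation.
# The keyword-based scoring is a hypothetical stand-in for a trained
# profiling service, not the app's actual implementation.

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

# Hypothetical cue words per trait; a real service uses statistical models.
CUES = {
    "openness": {"new", "imagine", "art"},
    "conscientiousness": {"plan", "work", "finish"},
    "extraversion": {"party", "friends", "talk"},
    "agreeableness": {"help", "thanks", "share"},
    "neuroticism": {"worry", "stress", "afraid"},
}

def profile_text(text: str) -> dict:
    """Score each 'big five' trait as the fraction of cue words in the text."""
    words = text.lower().split()
    n = max(len(words), 1)
    return {t: sum(w in CUES[t] for w in words) / n for t in TRAITS}

def recommend(profile: dict, catalogue: dict) -> list:
    """Map the dominant trait to a (hypothetical) playlist."""
    dominant = max(profile, key=profile.get)
    return catalogue.get(dominant, [])

catalogue = {"extraversion": ["dance-pop playlist"],
             "neuroticism": ["ambient playlist"]}
print(recommend(profile_text("I worry about stress at work"), catalogue))
# → ['ambient playlist']
```

The point of the sketch is the shape of the pipeline rather than the scoring itself: a single text trace is converted into a standardised profile, and the profile, not the person, drives the recommendation.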

Our visit to Santiago happened after the initial phase of this project. Through an open call, we invited participants in Chile and the United Kingdom to use Big Sister and to be interviewed about their experience. Using an ethnographic method known as “trace interviews”, in which Big Sister acts as a frame and prompt for exploring participants’ experiences of the app and their relationship to algorithmic recommendation systems in general, we conducted a first, trial set of interviews—four in Santiago and five in London—which formed the basis of the workshop.

This workshop had a formal component: an introduction and outline of the project by Martín Tironi; presentations by Celia Lury and Sophie Day; and an overview of the initial findings by Matías Valderrama and Scott Wark. But it also had a discursive element: Tironi and Valderrama invited a range of participants from academic and non-governmental institutions to discuss the project, its theoretical underpinnings, its findings and its potential applications.

Tironi’s presentation outlined the concepts that informed the project’s design. Its comparative nature—the fact that it’s situated in Santiago and in the institutional locations of the People Like You project, London and Coventry—allows us to compare how people navigate recommendation in distinct cultural contexts. More crucially, it implements a mode of research that proceeds through design, or via the production of an app. Through collaborations between social science and humanities scholars and computer scientists—most notably the project’s programmer, Andre Simon—it positions us, the researchers, within the process of producing an app rather than in the position of external observers of a product.

This position can feel uncomfortable. The topic of data collection is fraught; by actively designing an app that emulates an algorithmic recommendation system, we no longer occupy an external position as critics. But it’s also productive. Our app isn’t designed to provide a technological ‘solution’ to a particular problem. It’s designed to produce knowledge about algorithmic recommendation systems, for us and our participants. Because our app is a prototype, this knowledge is contingent and imprecise—and flirts with the potential that the app might fail. It also introduces the possibility of producing different kinds of knowledge.

My presentation with Valderrama outlined some preliminary interview findings and emerging themes. Our participants are aware of the role that recommendation systems play in their lives. They know that these systems collect data as the price for the services they receive in turn. That is, they have a general ‘data literacy,’ but tend to be ambivalent about data collection. Yet some participants found the profiling component of our app confronting—even ‘shocking’. One participant in the UK did not expect their personality profile to characterise them as ‘introverted’. Another in Santiago wondered how closely their high degree of ‘neuroticism’ correlated to the ongoing social crisis in Chile, marked by large-scale, ongoing protests about inequality and the country’s constitution.

Using the ‘traces’ of their engagement with the app, these interviews opened up fascinating discussions about participants’ everyday relationship with their data. Participants in both places likened recommendations to older prediction techniques, like horoscopes. They expected their song recommendations to be inappropriate or even wrong, but using the app allowed them to reflect on their data. We began to get the sense that habit was a key emergent theme.

We become habituated to data practices, which are designed to shape our actions in order to capture our data. But we also live with, even within, the algorithmic recommendation systems that inform our everyday lives. We inhabit them. We began to understand that our participants aren’t passive recipients of recommendations. Through use, they develop a sense of how these systems work, learning to shape the data they provide in order to shape the recommendations they receive. Habit and inhabitation intertwine in ambivalent, interlinked acts of receiving and prompting recommendation.

Lury’s and Day’s presentations took these reflections further, offering some emergent theoretical speculations on the project. Day drew a parallel between the network-scientific techniques that underpin recommendation and anthropological research into kinship. Personalised recommendations work, counter-intuitively, by establishing likenesses between different users: a recommendation will be generated by determining what other people who like the same things as you also like. This principle is known as ‘homophily.’ Day highlighted the anthropological precursors to this concept, noting how this discipline’s deep study of kinship provides insights into how algorithmic recommendation systems group us together. In studies of kinship, ‘heterophily’—liking what is different—plays a key role in explaining particular groupings, but while this feature is mobilised in studies of infectious diseases, for example in what are called assortative and disassortative mixing patterns, it has been less explicitly discussed in commentaries on algorithmic recommendation systems. Her presentation outlined a key line of enquiry that anthropological thinking can bring to our project.
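The homophily principle described here—recommending what other people who like the same things as you also like—can be sketched as a minimal user-based collaborative filter. The users, likes and similarity measure below are invented for illustration; production systems use far richer data and trained models.

```python
# Minimal user-based collaborative filtering: find the user(s) most
# 'like you' by overlap of liked items (homophily), then recommend
# what they like that you have not yet liked. All data is invented.

def jaccard(a: set, b: set) -> float:
    """Similarity of two users as the overlap of their liked items."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(user: str, likes: dict, k: int = 1) -> list:
    """Recommend items liked by the k most similar users."""
    mine = likes[user]
    neighbours = sorted(
        (u for u in likes if u != user),
        key=lambda u: jaccard(mine, likes[u]),
        reverse=True,
    )[:k]
    candidates = set().union(*(likes[u] for u in neighbours)) - mine
    return sorted(candidates)

likes = {
    "ana":  {"jazz", "folk", "ambient"},
    "ben":  {"jazz", "folk", "techno"},
    "cruz": {"metal", "opera"},
}
print(recommend("ana", likes))
# → ['techno']
```

A heterophily-sensitive variant would deliberately weight dissimilar users instead, in the spirit of the disassortative mixing patterns mentioned above.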

Lury’s presentation wove reflections on habit together with an analysis of genre. Lury asked if recommendation systems are modifying how genre operates in culture. Genres classify cultural products so that they can be more easily found and consumed. They can be large and inclusive categories, like ‘rap’ or ‘pop’; they can also be very precise: ‘vapourwave,’ for instance. When platforms like Spotify use automated processes, like machine learning, to finesse large, catch-all genres and to produce hundreds or thousands of micro-genres that emerge as we ‘like’ cultural products, do we need to change what we mean by ‘genre’? Moreover, how does this shape how we inhabit recommendation systems? Lury’s presentation outlined another key line of enquiry that we’ll pursue as our research continues.

For me, our visit to Santiago confirmed that the ‘Algorithmic Identities’ project is producing novel insights into users’ relationship to algorithmic recommendation systems. These systems are often construed as opaque and inaccessible. But though we might not have access to the algorithms themselves, we can understand how users shape them as they’re shaped by them. The ‘personalised’ content they provide emerges through habitual use—and can, in turn, provide a cultural place of inhabitation for their users.

We’ll continue to explore these themes as the project unfolds. We’re also planning a follow-up workshop, tentatively entitled ‘Recommendation Cultures,’ in London in early 2020. Our work and this workshop will, we hope, reveal more about how we inhabit recommendation cultures by exploring the relations between personalised services and the people who use them. Rather than simply existing in parallel with each other, we want to think about how they emerge together in parallax.

What do pictures want?

Sophie Day

20 December 2019


W. J. T. Mitchell’s 2005 book is titled ‘What do pictures want?’ Why do we behave as if pictures are alive, possessing the power to influence us, to demand things from us, to persuade us, seduce us, or even lead us astray?

Steve McQueen’s Year 3 (2019) involved mass participation and includes 3,128 photographs capturing two-thirds of London’s 7-year-olds. Class pictures of these 76,000 children are on display in Tate Britain’s Duveen Galleries until May 2020, but available slots for school visits are fully booked. These are classic school photographs – wide angle, everyone visible, most children in uniform, sitting and standing in three or four rows on the traditional low benches, the children framed by the familiar accoutrements of school gyms and halls. The images are arranged in blocks of colour from top to bottom of the gallery walls.

Year 3 at Duveen Galleries, Tate Britain (photograph, Sophie Day)

In addition to the gallery display, McQueen was committed to exhibiting class photographs on billboards, on older, mid-20th-century shop gables and houses in London’s further reaches as well as in prime advertising spots in the centre. The London arts organisation Artangel worked with PosterScope and leading outdoor advertising companies to site billboards across the city, but outside the borough in which each photograph had been taken. At least one billboard was to be sited so that it was easy to visit from the relevant school. Cressida Day from Artangel told me about the logistics of placing billboards across London for two weeks in November 2019, ahead of the exhibition at Tate Britain. Fifty-three schools appeared on around 600 sites, put up with paper and paste – 48 sheets for a single billboard, 96 for a double space. Such spots are hard to find in central London, where most advertising is now digital. Only digital, in portrait format, is available in some boroughs such as the City of London, while paper and paste offers the necessary landscape format. Pasting up is a dying art and takes a year to learn.

Year 3 billboard by A12 extension, east London (photograph, Sophie Day)

I was interested in this vision of a mass public seeing itself. The billboards and exhibition evoked repeated hopes for London’s future, as Harry Thorne found in reviews from The Guardian, The Times, Arts & Collections, ArtDaily and The Telegraph.[2] One teacher expressed delight about the public display of pictures of the children with special needs whom she taught. They are mostly invisible, she said, and are not part of publics. Of course, they want (to be on) a billboard. No one ever sees them. Comments on Twitter’s #Year3Project read, “The … is so cool! We’re used to numerical data on populations, but here you can SEE a cross-section of London, …” and, from a participating school, “… we are the art work. We are the audience.”

Apparently less than half of London’s schools now take year pictures and the photographs that are still taken do not follow past practice, replacing images of children in serried rows with movement and activity. A web search for school photo will come up at once, however, with an offer to find your old class picture for you. You might then imagine or trace your cohort forward in time from the recent past.[3]  I wonder if the evocation of collectives on billboards through practices that used to be common jolted spectators into asking about London’s future. The sense of collective, including year groups, was orchestrated by public institutions through widely shared events that moved you predictably from school photos, through education in general, and into work placements, health checks, jobs ….  As public institutions themselves are severely trimmed and as their role or value is celebrated less often through school photos and equivalent markers, what sort of London will appear with these Year 3 children? How will it be recognised and by whom?

Measures were adopted to safeguard the audience and portraits, but different kinds of public emerge in relation to digital media. Advertisers follow voluntary restrictions within a 100-metre area around schools and refrain from advertising alcohol, e-cigarettes, fast food, sweets, gambling or lotteries. The display of Year 3 images on billboards followed the same guidelines. In addition, there were to be no adverts from these sectors next to Year 3 portraits.[4] In consequence, more billboards ended up in the underground network than expected, where TFL’s policy on advertising is more stringent.

Was the audience for these pictures in need of safeguards? Perhaps Year 3 pictures would affect their surroundings, and so the companies placing the billboards, as advised by NSPCC (National Society for the Prevention of Cruelty to Children) officers, wanted to create child-friendly environments. If ‘the artwork was also the audience’ (above), would an audience of young people come to look at the billboards in person, where they would be protected by these guidelines? But pictures of billboards that then circulate on social media (and here, for instance) cannot take these protective measures with them. What do these images want from their audiences? To be seen from what distance, in what context? How do these images and their audiences differ? If the images show ‘People Like You’, do they – in turn – like you?

Geotargeting and geofencing are increasingly important to out-of-home advertising. You may be looking at a billboard that is looking at you. Data such as gender, age, race, income, interests, and purchasing habits can be used by companies to trigger an advertisement directly, or to show ads in the future that they will have learned are appropriate – perhaps to Year 3 parents at school pick-up time and to teenagers in the evening. Once your phone has been detected, an advertising company can follow up with related ads in your social media feed or commercials at home on your smart TV.[5] Artangel also used geolocating technology to target ads via Facebook or Instagram and direct people to Artangel’s website to find out more about the project.

Roy Wagner’s comment on the early Wittgenstein provides an appropriate gloss. Rather than picturing facts to ourselves, Wagner suggested, “Facts picture us to themselves” (The Logic of Invention, 2018).

[1] With thanks to Cressida Day, Celia Lury and Will Viney

[2] Harry Thorne, What All the Reviews of Steve McQueen’s ‘Year 3’ at Tate Britain Have Got Wrong. Frieze, 15 November 2019 at

[3] Some parents of Year 3 pupils will remember the early days of Facebook: TheFacebook, as it was then called, made an online version of Harvard’s paper registers, which were handed to all new students and contained photos of your classmates alongside their university ‘addresses’ or ‘houses’.

[4] In consequence, 38 billboards were never put up. In a digital equivalent, where six images cycle in a minute, six school photos would have had to be placed one after the other to avoid neighbouring advertising.

[5] See Thomas Germain, Digital Billboards Are Tracking You. And They Really, Really Want You to See Their Ads. CR Consumer Reports, November 20, 2019 at

Fiona Johnstone in conversation with Felicity Allen

Fiona Johnstone

5 November 2019


As part of our investigation into personalisation, People Like You is working with artist Felicity Allen. Up to fifteen people will participate in Allen’s Dialogic Portraits practice, sitting for Allen in her studio in Ramsgate, where she will paint their portrait and invite them to reflect upon the process – and personalisation – with her. Fiona Johnstone, postdoctoral research fellow with People Like You, sat for Allen and discussed her practice in relation to personalisation.

Fiona: Can you tell me a little more about the process of making Dialogic Portraits? The phrase suggests a conversation or dialogue; it reminds me of Linda Nochlin’s famous line about a portrait being ‘the meeting of two subjectivities’. I’m interested in how this relationship can be captured and made manifest in an artwork.

Flick: I’ve been working with Dialogic Portraits as a format for around ten years. Each sitting involves both talking and silence; each portrait is a document of the time that the sitter and I spend together. As well as creating a pictorial portrait, I also produce audio and video recordings, and make written observations. These then go towards making, say, an artist’s book or a film. I’m interested in how people respond to the experience of sitting, and in how they relate to the version of themselves that is given back to them in the finished portrait.

Fiona: The notion of series is important for dialogic portraiture, is that correct?

Flick: Yes, series, but also concept. Each series of Dialogic Portraits (Begin Again [2009-2014], You [2014-2016], and As if They Existed [2015-2016] and, currently, People Like You, Refugee Tales, and Interpreting Exchange) is informed by a concept that loosely links all the sitters in some way. For example, for Begin Again, which I started at the end of a decade of not-painting, I invited people who I had been working with [as Head of Interpretation & Education at Tate Britain] during that decade to sit for me. This enabled me to explore the limits of what we understand to constitute labour – intellectual, administrative, affective or domestic – and to think through the significance of this labour in relation to the production of both portraits and persons. For each portrait produced (76 in total), I wrote a diaristic note and recorded an interview with the sitter.

Fiona: I’m interested in the presentational format of Begin Again, which takes the shape of a two-volume book with images and texts (pictured), and also an exhibition (in 2015) where the portraits were hung as a wall-sized grid of faces (pictured). This configuration conjures several associations for me: a database, a filing system, or a rogues’ gallery. This reminds me of two textual reference points. The first is Siegfried Kracauer’s ‘The Mass Ornament’; Kracauer argues that in the modern period, people can only be understood as part of a mass, not as self-determining individuals. The second is Allan Sekula’s famous essay on photography, ‘The Body and the Archive’, where he explores the relationship in the early nineteenth century between photographic portraiture, the standardisation of police and penal procedures, and the rise of the pseudo-sciences of physiognomy and phrenology (both comparative taxonomic systems which in turn contributed to the development of the discipline of statistics). Finally, it also made me think of an Instagram wall!

Flick: The associations with Instagram wouldn’t have occurred to me. I started working with the grid before I started using social media, and certainly before I was aware of Instagram [which was launched in 2010]. The grid was partly a practical solution to the problem of how to display multiple images within a limited space. For me, the associations of the grid would be minimalist or modernist – as in Rosalind Krauss’ reading of the grid – rather than to do with Instagram.

Fiona: That’s interesting. Krauss claims that the grid is a symptom of modern art’s hostility to narrative and to discourse – this seems antithetical to your own work, which connects image and text. She also describes the order of the grid as that of ‘pure relationship’, whereby objects no longer have any particular kind of value or order in themselves, but only in relation to each other. Perhaps this notion of ‘pure relationship’ might offer us a way into thinking about personalisation in relation to your work?

Flick: With the Begin Again wall I was certainly thinking about the individual in relation to the mass; the paradoxical effect of working with a group or series of people is that you start thinking about them all as individuals. The format is also, crudely speaking, about taking status away from people by putting them alongside other people. It disrupts the way in which we privilege certain people. It’s vaguely political, challenging hierarchy.

Begin Again nos 1–21 (2014), Felicity Allen, 2-volume limited edition artists book

Fiona: It feels as though you are working with an enduringly humanistic notion of the person. In particular, you work primarily with the face, a part of the person that has longstanding associations with phenomenological presence. Your images are often closely cropped; the focus is solely on the face, rather than on any contextual details, such as background or clothes, that might give the viewer a clue as to the identity of the sitter.

Flick: I agree that the face has strong humanistic associations. I’m thinking of Levinas’ idea that the face is basically something that stops you killing people – it makes a demand on you, and that relationship is inherently ethical. In terms of contextual details, I’m now starting to crop my images much less closely, because I’m interested in notions of personal branding and role-playing through the way in which people choose to present themselves – through branded clothes, for example.

Fiona: I wanted to ask you about the significance of persona. Many of our conversations on this project have looked at personalisation in relation to digital technologies and data science. Digital personalisation technologies reflect a longer preoccupation with the ‘person’ and the ‘persona’, and it seems to me that your work, which is almost resolutely analogue, might offer us a different way of approaching personalisation. The origins of the terms personalisation, personal, and personalise all stem from the Latin personalis or personale, which means ‘pertaining to a person’. Can we talk about the concept of persona in relation to your work?

Flick: The watercolours are resolutely analogue but there’s usually a kind of comprehensive digital work – a book or a film – which brings the series together. I am interested in how my sitters perform certain roles, but I’m also interested in the way in which I perform – or inhabit – the role of the artist. Begin Again was absolutely about getting people to see me differently, I had a strong consciousness of that very quickly. I was undressing as a manager, and dressing up as an artist.

Begin Again (2015), Felicity Allen, detail of exhibition installation during a residency at Turner Contemporary, Margate

Fiona: Do you find painting (someone’s portrait) to be performative?

Flick: It’s totally performative for both artist and sitter. We both find it exhausting, sitters as well as me.

Fiona: It’s a little like being on a therapist’s couch.

Flick: Yes, or at the hairdressers. But I try to manage the relationship to ensure that I’m not turning into the analyst or the hairdresser.

Fiona: How do you do that?

Flick: By talking back! And by being very conscious of how the sitting is going. But it does mean that I’m constantly retelling – or reperforming – my own stories.

Fiona: There’s a kind of labour, a selling-of-self, involved in that process of storytelling; that’s part of your exchange with the sitter. I’m wondering if you have read any of Isabelle Graw’s work on painting? Graw describes painting as ‘a form of production of signs that is experienced as highly personalised’. What she means by this is that painting has a direct indexical link to its maker; there is a close relationship between person and product. She links this to Alfred Gell’s definition of artworks as ‘indexes of agency’. As a ‘record of time spent together’, your work has a strong claim to the indexical.

Flick: Do you believe Graw’s argument?

Fiona: It’s seductive, but I don’t really buy it – why is this true of painting, but not of drawing or sculpture?

Flick: I don’t believe it, but I feel it. There’s something about the flow, the wet, that is very important about painting. I’ve got this board, and I’ve got paper on it. As I’m painting and I’m using my brush it’s like a proxy for stroking the face. There’s a brushstroke going on, and there’s a body that I could be stroking. It’s about touch, and feeling, and all that stuff – if I was to use a camera, I wouldn’t have that.

Fiona: So for you there is a strong sense that the (painted) portrait is a proxy for the person, but also that your tools are proxies for your own libidinal body.

Flick: Right.

Fiona: I’ve been trying to think about whether I have found my experience of sitting for you to be a personalised one. I think that I’d describe it as a personal or inter-personal experience, but not personalised, as such – for me, that term suggests an industrial process driven by big data and an infinite number of calculable relations based on things like likes and preferences. Understood in this way, personalisation seems to bear little relation to the highly individualised experience of a one-to-one portrait sitting. Perhaps we need a vocabulary that can differentiate between an experience that is individualised, and one that is personalised? Throughout our conversation, we’ve often both found it challenging to think about your work in relation to a dominant concept of [algorithmic] personalisation. In a recent essay published in Critical Inquiry, Kris Cohen notes that personalisation, and indeed networked life more generally, ‘disorientates all of our existing vocabularies of personhood and collectivity’. Do you think that this semantic disorientation might explain our difficulty in thinking through your work in relation to personalisation?

Flick: Absolutely!

Felicity Allen at work on a portrait of Fiona Johnstone, 5 September 2019, for People Like You

Works referenced

Linda Nochlin, “Some women realists”. Arts Magazine (May 1974), p.29.

Allan Sekula, “The Body and the Archive”. October 39 (Winter 1986), pp. 3-64.

Siegfried Kracauer, The Mass Ornament: Weimar Essays, trans. Thomas Y. Levin. Harvard University Press, Cambridge, Massachusetts and London, England: 1995.

Rosalind Krauss, “Grids”. October 9 (Summer 1979), pp. 50-64.

Isabelle Graw, “The Value of Painting: Notes on Unspecificity, Indexicality, and Highly Valuable Quasi-Persons”, in Isabelle Graw, Daniel Birnbaum and Nikolaus Hirsh (eds.), Thinking Through Painting: Reflexivity and Agency Beyond the Canvas. Sternberg Press, Berlin; 2012.

Kris Cohen, “Literally, Ourselves”. Critical Inquiry 46 (Autumn 2019), pp. 167-192.

How Do You See Me?

Fiona Johnstone

30 September 2019


Artist Heather Dewey-Hagborg’s new commission for The Photographers’ Gallery, How Do You See Me?, explores the use of algorithms to ‘recognise’ faces. Displayed on a digital media wall in the foyer of the gallery, the work takes the form of a constantly shifting matrix of squares filled with abstract grey contours; within each unit, a small green frame identifies an apparently significant part of the composition. I say ‘apparently’, because the logic of the arrangement is not perceptible to the eye; although the installation purports to represent a human face, there are no traces of anything remotely visually similar to a human visage. At least to the human eye.

To understand How Do You See Me?, and to consider its significance for personalisation, we need to look into the black box of facial recognition systems. As explained by Dewey-Hagborg, speaking at The Photographers’ Gallery’s symposium What does the Data Set Want? in September 2019, a facial recognition system works in two phases: training and deployment. Training requires data: that data is your face, taken from images posted online by yourself or by others (Facebook, for example, as its name suggests, has a vast facial recognition database).

The first step for the algorithm working on this dataset is to detect the outlines of a generic face. This sub-image is passed on to the next phase, where the algorithm identifies significant ‘landmarks’, such as eyes, nose and mouth, and turns them into feature vectors, which can be represented numerically.  For the ‘recognition’ or ‘matching’ stage, the algorithm will compare multiple figures across the dataset, and if the numbers are similar enough, then a match is identified – although this might take millions of goes. The similarity of the represented elements remains unintelligible to the human eye, calling a ‘common sense’ understanding of similarity-as-visual-resemblance into question.
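The matching stage described above can be sketched in miniature. The following toy Python example is purely illustrative (the vectors, threshold, and function names are assumptions, not drawn from any real facial recognition system): two faces, already reduced to numeric feature vectors, count as a ‘match’ when a similarity score crosses a threshold, even though nothing in the numbers resembles a face to the human eye.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(vec_a, vec_b, threshold=0.9):
    """Declare a 'match' when similarity exceeds an (illustrative) threshold."""
    return cosine_similarity(vec_a, vec_b) >= threshold

# Two hypothetical feature vectors derived from the 'same' face...
probe = [0.8, 0.1, 0.5, 0.2]
gallery_same = [0.79, 0.12, 0.48, 0.22]
# ...and one derived from a different face.
gallery_other = [0.1, 0.9, 0.05, 0.7]

print(is_match(probe, gallery_same))   # similar vectors: match
print(is_match(probe, gallery_other))  # dissimilar vectors: no match
```

In deployed systems the vectors come from deep neural networks with hundreds of dimensions, and the comparison runs across millions of entries; but the logic is the same: similarity here is a statistical correspondence between numbers, not a visual resemblance.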

In the western culture of portraiture the face has traditionally acted as visual shorthand for the individual; through this new technology, the face (read, the person) is transfigured into numeric code, allowing for algorithmic comparison and categorisation across a vast database of other faces/persons that have been similarly processed. How Do You See Me? asks what it means to be ‘recognised’ by AI. What version of ‘you’ emerges from a system like this, and how is it identifiable as such? Dewey-Hagborg described the project as an attempt to form her own subjectivity in relation to AI, noting that her starting point was a curiosity as to how she might be represented within the structure of the system, and what these abstractions of her ‘self’ might look like. In an attempt to answer these questions, she used an algorithm to build a sequence of images stemming from the same source, but varying widely in terms of appearance, working on the hypothesis that eventually one of these pictures would be detected as ‘her’ face. The grid of abstract and figuratively indistinct images on the wall of The Photographers’ Gallery can thus be understood as a loose form of self-representation, the evolution of a set of (non-pictorial) figures that attempt to be detected as a (specific) face. By interrogating the predominantly visual associations of ‘similarity’, which may once have implied a mimetic ‘likeness’ (with connotations of the pictorial portrait, arguably a dominant technology for the production of persons from the sixteenth to the twentieth-century), but which now suggests a statistical correspondence, How Do You See Me? draws attention to changing ideas about how a ‘person’ might be identified and categorised.

Following her own presentation, Dewey-Hagborg discussed her practice with Daniel Rubinstein (Reader in Philosophy and the Image at Central St Martins). Rubinstein argued that this new technology of image-making can teach us something about contemporary identity. Considering our apparent desire to be ‘recognised’ by our phones, computers, and other smart appliances, Rubinstein suggested that the action of presenting oneself for inspection to a device resembles the dynamics of an S&M relationship where the sub presents themselves to the dom. Rubinstein argued that we want to be surveyed by these technologies, because there is a quasi-erotic pleasure in the abdication of responsibility that this submission entails. Citing Heidegger, Rubinstein argued that technology is not just a tool, but reveals our relation to the world. Life and data are not two separate things (and never have been): we need to stop thinking about them as if they are. The face is already code, and the subject is already algorithmic.

Rubinstein’s provocative remarks certainly provide one answer to the question of why people might choose to ‘submit’ to selected technologies of personalisation. They also help us to address personalisation itself. The project People Like You promises that we will try to ‘put the person back into personalisation’. Whilst this could be taken to imply that there is a single real person – the individual – we aim instead to consider multiple figurations of the ‘person’ on an equal footing with each other. As Rubinstein’s comments suggest, rather than thinking about this relationship in terms of original and copy (the ‘real’ person or individual and a corresponding ‘algorithmic subject’ produced through personalisation), the ‘person’ is always every bit as constructed a phenomenon as an ‘algorithmic’ subject. Or, to put this another way, rather than taking a liberal notion of personhood for granted as our starting point, our aim is to interrogate the contemporary conditions that make multiple different models of personhood simultaneously possible.

Who gets to feed at the biobank?

William Viney

10 September 2019


In the United Kingdom, initiatives such as UK Biobank and the 100,000 Genomes Project are now complete, and the NHS Genomic Medicine Service launched last year. With the consent of patients, local NHS trusts collect data and samples for research purposes. Each is a kind of biobank – an organised collection of biological specimens associated with computerised files, including demographic, clinical and biological data. Biobanks are an increasingly important part of research infrastructures in biomedicine and are important to realising the NHS’ desire for a more personalised healthcare system.

More recently, clinicians and researchers have been calling for wider participation in biobanking. This is because participation in biomedical research is seen as fundamental to developing more ‘targeted’ treatments, and to fostering a transition from a ‘one-size-fits-all’ model of healthcare to more timely, accurate, and preventative interventions. Researchers and clinicians may also need wide and inclusive participation – including patients traditionally excluded from research – to make sure that biological samples and datasets are diverse and representative.

The People Like You project is interested in these and other developments that link healthcare, research, data science, and data infrastructures. My own involvement in biobanking began before I joined the project, when I enrolled as a participant in TwinsUK, based at the Department of Twin Research, King’s College London – the UK’s largest registry for twins. When my brother and I visited TwinsUK, the group collected basic biometric data, measuring height, weight, and blood pressure, as well as the strength of our grip and the capacity of our lungs. We gave samples of our blood, hair and spit, from which DNA, RNA, metabolites and numerous other molecules can be extracted. Our faces were swabbed in different places to test our sensitivity to different chemicals. All was recorded. We were not only enrolled, we were incorporated.

Participating in a biobank is different to enrolling in a discrete study because participants are not told exactly when and how their samples or data are used. The data stored by TwinsUK is available to any bona fide researchers, anywhere in the world. And so a biobank is not only a store of samples and data. It is also a registry or store of names and contact details, linking to individuals who have declared themselves interested in research and will give time, energy, and lots of different kinds of data. When the wind blows in the direction of studies interested in ‘personalised’ tests and interventions, this registry faces new opportunities and challenges, as do its participants.

In 2018, TwinsUK asked if I would take part in a new study called PREDICT. I was interested because it was described as a ‘ground-breaking research study into personalised nutrition’ that would ‘help you choose foods for healthy blood sugar and fat levels.’ Being involved was not straightforward. After a visit to St. Thomas’ Hospital, participants returned home and spent the next 14 days measuring blood glucose, insulin, fat levels, inflammation, sleep patterns and their gut microbiome diversity, both in response to standardised foods and to each participant’s chosen diet. In return, participants would be given summary feedback on their metabolic response. What interested me was how recruitment targeted existing members of the registry via the usual email format, addressed to their unique study number. And so it looked like any other Department of Twin Research study. But it is not like any other study.

Although King’s College London is the study sponsor and the Health Research Authority has provided the usual ethical approval, PREDICT is a large collaboration between several European and American universities, backed by venture capital investment from around the world. Tim Spector, the director of TwinsUK, is part of the scientific group that leads the study and has an equity stake in the private company ZOE, which aims ‘to help people eat with confidence’. It is ZOE, not TwinsUK, that is processing the data that will build predictive – and ‘personalised’ – algorithms for future ZOE customers.

There is nothing nefarious or illegal about PREDICT. Collaborations between university scientists and private companies have been common for centuries. But the presentation of PREDICT’s results led me to think differently about biobanks and biobank participation in an era of personalised medicine and healthcare. PREDICT’s innovation threads together a set of historical tendencies that are important for how personalisation is seen as a desirable, evidence-based, and marketable product.

Changes in how UK universities are funded and the NHS is structured have changed the potential uses of biobanks. This is not always obvious to existing research participants (who, at TwinsUK, have a mean age of 55 years; some have been volunteers for 25+ years). In the case of PREDICT, TwinsUK assure me that all the proper licences and contracts are in place so that data can be shared with commercial collaborators, and participants are given information sheets explaining how their data is used. But what does informed consent become – and ‘participation’ signify – when the purpose of a biobank shifts to include corporate interests outside the health service?

Initial results from PREDICT have been more actively disseminated in the mainstream media than in peer-reviewed journals (summary results have been presented at a large conference in the US). Significant resources have been ploughed into garnering widespread coverage in The New York Times, Daily Mail, The Times and The Guardian. The data from the first PREDICT study has not been made available to other groups.

Begun in 1993 to investigate ageing-related diseases, TwinsUK started in the public sector. It still receives money from the Biomedical Research Centre at Guy’s and St Thomas’ NHS Foundation Trust and King’s College London, to make translational research benefit everyone, and its other funders, the Medical Research Council, Wellcome Trust, and the European Commission, are committed to the principles of open and equitable science. But with the turn towards ‘personalised’ interventions in nutrition, a fresh wave of transatlantic venture capital has become available to biomedical researchers who have access to people, resources, and data accumulated over years of state-funded work.

One facet of what Mark Fisher called ‘capitalist realism’ is the insistence that things are what they are and they cannot be another way. In biomedicine, this has affected the kinds of research that get funded and the corporate interests allowed to inform research, when and how. It is understandable that the microbiome that feeds you may be more worthy of research than the many that are not so financially nourishing. But who is keeping an eye on the opportunity costs?





Scott Wark

16 July 2019


One of this project’s lines of inquiry is to ask who the “person” is in “personalisation”. This question raises others: Is personalisation actually more personal? Are personalised services about persons, or do they respond to other pressures? This question also resonates differently in the three different disciplines that we work in. In health, it might invoke the promise of more effective medicine. In data science, the problem of indexing data to persons. In digital culture, though, this tagline immediately invokes more sinister—or at least more ambiguous—scenarios, for me at least. When distributed online services are personalised, how are they using the “person” and to whose benefit? Put another way: Whose person is it anyway?

What got me thinking about these differences was a recently-released report on the use of facial recognition technologies by police forces in the United Kingdom. The Metropolitan Police in London have been conducting a series of trials in which this technology is deployed to assist in crime prevention. Other forces around the country, including South Wales and Leicester, have also deployed this technology. These trials have been contentious, drawing criticism from academics and rights groups, and even a lawsuit. As academics have noted elsewhere, these systems particularly struggle with people with darker skin, whose faces they have difficulty processing and recognising. What it also got me thinking about was the different and often conflicting meanings of the “person” part of personalisation.

Facial recognition is a form of personalisation. It takes an image, either from a database—in the case of your Facebook photos—or from a video feed—the Met system is known as “Live Facial Recognition”—and processes it to link it to a profile. Online, this process makes it easier to tag photographs, though there are cases in which commercial facial recognition systems have used datasets of images extracted from public webpages to “train” their algorithms. The Live Facial Recognition trials are controversial because they’re seen as a form of “surveillance creep”, or a further intrusion of surveillance into our lives. Asking why is indicative.

The police claim that they are justified in using this technology because they operate it in public and because it will make the public safer. The risk that the algorithms underlying these systems might actually reproduce particular biases built into their datasets, or exacerbate problems with accuracy across different skin tones, challenges these claims. They’re also yet to be governed by adequate regulation. But these issues only partly explain why this technology has proven to be so controversial. Facial recognition technologies may also be controversial because they create a conflict between different conceptions of the “person” operating in different domains.

To get a little abstract for a moment, facial recognition technology creates an interface between different versions of our “person”. When we’re walking down the street, we’re in public. As more people should perhaps realise, we’re in public when we’re online, too. But the person I am on the street and the person I am online aren’t the same. And neither person is the same as the one the government constructs a profile of when I interact with it—when I’m taxed, say, or order a passport. The controversy surrounding facial recognition technology arises, I think, because it translates a data-driven form of image processing from one domain—online—to another: the street. It translates a form of indexing, or linking one kind of person to another, from the domain of digital culture into the domain of everyday life.

Suddenly, data processing techniques that I might be able to put up with in low-stakes, online situations in exchange for free access to a social media platform have their stakes raised. The kind of person I thought I could be on the street is overlaid by another: the kind of person I am when I’m interfacing with the government. If I think about it—and maybe not all of us will—this changes the relative anonymity I might otherwise have expected when just another “person on the street”. This is made clear by the case of a man who was stopped during one facial recognition trial for attempting to hide his face from the cameras, ending up with a fine and his face in the press for his troubles. Whether or not I’m interfacing with the government, facial recognition means that the government is interfacing with me.

In the end, we might gloss the controversy created by facial recognition by saying this. We seem to have tacitly decided, as a society, to accept a little online tracking in exchange for access to different—even multiple—modes of personhood. Unlike online services, there’s no opt-out for facial recognition. Admittedly, the digital services we habitually use are so complicated and multiple that opting out of tracking is impracticable. But their complexity and the sheer weight of data that’s processed on the way to producing digital culture means that, in practice, it’s easy to go unnoticed online. We know we have to give up our data in this exchange. Public facial recognition is a form of surveillance creep and it has rightly alarmed rights organisations and privacy advocates. This is not only because we don’t want to be watched. After all, we consent to being watched online, having our data collected, in exchange for particular services. Rather, it’s because it produces a person who is me, but who isn’t mine. The why part of “Why am I being tracked” merges with a “who”—both “Who is tracking me”? and “Who is being tracked? Which me?”

In writing this, I don’t mean to suggest that this abstract reflection is more incisive or important than other critiques of facial recognition technology. All I want to suggest is that recognising which “person” is operating in a particular domain can help us to get a better handle on these kinds of controversies. After all, some persons have much more freedom in public than others. Some are more likely to be targeted for the colour of their skin, how respectable they seem, how they talk, what they’re wearing, even how they walk. In the context of asking who the “person” is in “personalisation”, what this controversy shows us is that what “person” means is dependent not only on this question’s context, but the ends to which a “person” is put. Amongst other things, what’s at stake in technologies like these is the question of whose person a particular person is—particularly when it’s nominally mine.

The question, Whose person is it anyway?, is a defining one for digital culture. If recent public concern over privacy, data security, and anonymity teaches us anything, it’s that it’ll be a defining question for new health technologies and data science practices, too.

Tails you win

William Viney

13 May 2019


I came home from a trip to Italy one day having heard that my dear dog Wallace was gravely ill. He had an iron temperament – haughty and devious, a great dog but not much of a pet. He was my constant companion from the age of 10. By the time I was home that summer in 2003 he was already in the ground. The log we used to chain him to – the only way we could stop him running off – was already on the fire. He lived fast and died young. The cause of his death was uncertain, but it was likely connected to Wallace’s phenomenal appetite. Our farm dogs had carnivorous diets: canned meats and leftovers and dry food, all mixed together. But this was never enough for Wallace, who was a very hungry beagle, and who died after eating something truly gruesome on the farm. Pity Wallace, who died for the thing he loved.

While browsing Twitter a few weeks ago, a promoted ad appeared suggesting I should buy personalised dog food. I felt a familiar pang of sadness. True to the idea that any product can have the word ‘personalised’ attached to it, the company behind the ad has sought to personalise pet food – the stuff that is proverbially uniform, undifferentiated, derivative – with ingredients selected especially for your dog’s individual needs. Beyond the familiar platitudes, I wondered what is being ‘personalised’ when dog food is personalised: what is this product, and why is it being sold to me?

I don’t have a dog or anything else in the house that might eat dog food. I have the memory of a dog now dead for 15 years. Such is the informational asymmetry on social media platforms that I can guess, but I don’t really know, why the company decided to spend money marketing their product on my Twitter feed. How had I been selected? Because I associated myself with the weird abundance of ‘doggo’ accounts? Surely something more sophisticated is needed than interacting with some canine-related content? But for a relatively new company, which now has Nestlé Purina Petcare as its majority shareholder, advertising to new customers is also a way of announcing themselves to investors and rivals, since their ads celebrate their innovation within a market – ‘the tailor-made dog food disrupting the industry’ – as well as promising products ‘as unique as your dog’. Whatever made me the ostensible target for this company’s product, the algorithmic trap was sprung from social media in order to ‘disrupt’ how you care for the animals in your home.

The company provides personalised rather than customised products. The personalised object or experience is iterative and dynamic; it can be infinitely refined: personalisation seeks and develops a relationship with a person or group of persons; it may even develop the conditions for that group to join together and exist. Personalisation is primarily a process rather than a one-off event. A customised thing, by contrast, is singular and time-bound; it may have peers but it has no equal or sequel. So, many surgical interventions are individualised according to the person, but the patient usually hopes it’s a single treatment. Personalised medicine, on the other hand, is serial and data-driven: a testing infrastructure that recalibrates through each intervention, shaping relationships between different actors within a system.

The company sells dog food to dog owners. It does this by capturing and managing a relationship between dogs and owners, mediated by the processing of group- and individual-level data. Such a system can be lifelong, informing not one but multiple interactions.

As debates continue to turn on the ethical uses of machine learning, its misrepresentations and its inherent biases, I am struck by how even critical voices seek adjustments and inclusions according to consumer rights: an approach that is happily adapted to capitalist prosumerism. ‘Personalise #metoo!’ To simply disregard the company’s ads on Twitter as an intrusive failure of targeted marketing and personalisation may overlook a wider project that is harder to evaluate from an individual, rights-based, or anthropocentric perspective. The promise of disruption through personalised dog food tells us something about personalisation that stretches beyond transactions between company and client.

By personalising pet care, the company seeks to enhance interactions between different ‘persons’, extending values of consumer preference and taste, satisfaction and brand loyalty with a blanket of anthropocentric ‘personhood’ to cover both the machines that market and deliver this product and the animal lives that we are told should benefit. No one asks the dog what it wants or needs. The whole system, from company to client and canine, is being personalised, but from a wholly human point of view. And yet, despite messages to the contrary, dogs probably don’t care that their food is ‘personalised’ in the way that we desire.

It’s not hard to imagine the kind of dog food customised to canine desires – the kind of foods that kill dogs like Wallace. I doubt, somehow, that any company would like to facilitate this death wish, since it would be a customised last supper rather than a personalised relation, sold over and over again.

This is a … toilet

Celia Lury

2 March 2019

In the project ‘People Like You’ we are interested in the creation and use of categories: from the making of natural kinds to what has been called dynamic nominalism, that is, the process in which the naming of categories gives opportunities for new kinds of people to emerge. And while the making of categories is often the prerogative of specialised experts, the last few years have seen a proliferation of categories associated with social, political and medical moves to go beyond the binaries of male/female and men/women. Emerging categories include: transgender, gender-neutral, intersex, gender-queer and non-binary.

The question of who gets included, who gets excluded and who belongs in categories is complicated, and depends in part on where the category has come from, who created it, who maintains it, who is conscripted into it, who needs to be included and who can avoid being categorised at all. Categories are rarely simply accepted; they need to be communicated, are frequently contested and may be rejected. There is a politics of representation in the acceptance – or not – of categories.

Take this example of a sign for a ‘gender-neutral’ toilet. Before I saw it, I knew what would be behind the door to which it was attached, since the building work associated with the conversion of men’s and women’s toilets into gender-neutral toilets had taken weeks. But when the building work was finished and I was confronted with this sign – marking the threshold into a new categorical space – I didn’t know whether to laugh or cry.

I am familiar, as no doubt you are too, with signs for what might now be called gender-biased toilets; that is, toilets for either men or women. Typically, the signs make use of pictograms of men or women, with the figure for ‘women’ most frequently distinguished from an apparently unclothed ‘man’ by the depiction of a skirt. Sometimes the signs also employ the words ‘men’ and ‘women’, or ‘gentlemen’ and ‘ladies’. But the need to signal to the viewer of the sign that they would be occupying a gender-neutral space on the other side of the door seemed to have floored the institution in which the toilet was located. The conventional iconography was, apparently, wanting. Perhaps it seemed impolitic – too difficult, imprudent or irresponsible – to represent a category of persons who are neither ‘men’ nor ‘women’. But in avoiding any representation of a person, in making use of the word and image of a toilet (which of course is avoided in the traditional iconography, presumably as being impolite if not impolitic), I couldn’t help but think that the sign was inviting me – if I was going to step behind the door – to identify, not with either the category ‘men’ or ‘women’, but with a toilet.

The sign intrigued me. Why, I wondered, if it was considered so difficult to depict a gender-neutral person, not just make this difficulty visible once, and simply show either a pictogram of a toilet or the word ‘toilet’? Why ‘say’ toilet twice? I recalled a work of art by the artist Magritte titled The Treachery of Images (1929).
In this work, a carefully drawn pipe is accompanied by the words ‘Ceci n’est pas une pipe’, or ‘This is not a pipe’. Magritte himself is supposed to have said: ‘The famous pipe. How people reproached me for it! And yet, could you stuff my pipe? No, it’s just a representation, is it not? So if I had written on my picture “This is a pipe”, I’d have been lying!’ In an essay on this artwork (1983), Michel Foucault says the same thing differently: he observes that the word ‘Ceci’ or ‘This’ is (also) not a pipe. Foucault describes the logic at work in the artwork as that of a calligram, a diagram that ‘says things twice (when once would doubtless do)’ (Foucault 1983: 24). For Foucault, the calligram ‘shuffles what it says over what it shows to hide them from each other’, inaugurating ‘a play of transferences that run, proliferate, propagate, and correspond within the layout of the painting, affirming and representing nothing’ (1983: 49).

What, then, does the doubling of the gender-neutral door sign imply about the category of the gender-neutral? Perhaps there is a nostalgia for when there was a play of transferences, when the relations between appearance and reality could be – and were – continually contested. Perhaps, however, it is a new literalism, what Antoinette Rouvroy and Thomas Berns call ‘a-normative objectivity’ (2013). Then again (and is this my third or fourth attempt to work out why the sign made me want to laugh and cry?), perhaps there is also an invitation to call into existence ‘something’ – rather than the ‘nothing’ that Foucault celebrates – even if, for the category of the gender-neutral to come into existence, you have to (not) say something twice.


  • Foucault, M. (1983) This Is Not a Pipe, translated and edited by J. Harkness, Berkeley and Los Angeles: University of California Press.
  • Rouvroy, A. and Berns, T. (2013) ‘Algorithmic governmentality and prospects of emancipation’, Réseaux, no. 177, pp. 163-196, translated by Elizabeth Libbrecht.
Data Portraits

Fiona Johnstone

13 February 2019

One of the aims of People Like You is to understand how people relate to their data and its representations. Scott Wark has recently written about ‘data selves’ for this blog; an alternative (and interconnected) way of thinking about persons and their data is through the phenomenon of the data portrait.

A quick Google of ‘data portraits’ will take you to a website where you can purchase a bespoke data portrait derived from your digital footprint. Web-crawler software tracks and maps the links within a given URL; the information is then plotted onto a force-directed graph and turned into an aesthetically pleasing (but essentially unrevealing) image. Drawing on a similar concept, Jason Salavon’s Spigot (Babbling Self-Portrait) (2010) visualises the artist’s Google search history, displaying the data on multiple screens in two different ways; one using words and dates, the other as abstract bands of fluctuating colour. The designation of the work as a self-portrait raises interesting questions about agency and intentionality in relation to one’s digital trace: as well as referring to identities knowingly curated via social media profiles or personal websites, the data portrait can also suggest a shadowy alter-ego that is not necessarily of our own making.

Erica Scourti’s practice interrogates the complex interactions between the subject and their digital double: her video work Life in AdWords (2012-13) is based on a year-long project where Scourti regularly emailed her personal diary to her Gmail account, and then performed to webcam the list of suggested ad words that each entry generated. A ‘traditional’ portrait in the physiognomic sense (formally, it consists of a series of head-and-shoulders shots of the artist speaking directly to camera), Life in AdWords is also a portrait of the supplementary self that is created by algorithmically generated, ‘personalised’ marketing processes. Pushing her investigation further, Scourti’s paperback book The Outage (2014) is a ghost-written memoir based on the artist’s digital footprint: whilst the online data is the starting point, the shift from the digital to the analogue allows the artist to probe the gaps between the original ‘subject’ of the data and the uncanny doppelgänger that emerges through the process of the interpretation and materialisation of that information in the medium of the printed book.

Other artists explore the implications of representation via physical tracking technologies. Between 2010 and 2015, Susan Morris wore an Actiwatch, a personal health device that registers the body’s movement. At the end of each year she sent the data to a factory in Belgium, where it was translated into coloured threads and woven into a tapestry on a Jacquard loom (a piece of technology that was the inspiration for Babbage’s Analytical Engine), producing a minute-by-minute data visualisation of her activity over the course of that year. Unlike screen-based visualisations, the tapestries are highly material entities that are both physically imposing (SunDial:NightWatch_Activity and Light 2010-2012 (Tilburg Version) is almost six metres long) and extremely intimate, with disruptions in Morris’s daily routine clearly observable. Morris was attracted to the Actiwatch for its ability to collect data not only during motion, but also when the body is at rest; the information collected during sleep – represented by dark areas on the tapestry – suggests an unconscious realm of the self that is both opaque and yet quantifiable.

Susan Morris, SunDial:NightWatch_Activity and Light 2010-2012 (Tilburg Version), 2014. Jacquard tapestry: silk and linen yarns, 155 x 589cm.  © Susan Morris.

Katy Connor is similarly interested in the tensions between the digital and material body. Using a sample of her own blood as a starting point, Connor translates this biomaterial through the scientific data visualisation process of Atomic Force Microscopy (AFM), which images, measures and manipulates matter at the nanoscale. Through Connor’s practice, this micro-data is transformed into large 3D sculptures that resemble sublime landscapes of epic proportions.

Katy Connor, Zero Landscape (installation detail), 2016.
Nylon 12 sculpture against large-scale risograph (3m x 12m); translation of AFM data from the artist’s blood.  © Katy Connor.

One strand of the People Like You project focuses particularly on how people relate to their medical data. Tom Corby was diagnosed with Multiple Myeloma in 2013, and in response began the project Blood and Bones, a platform for the data generated by his illness. The information includes the medical (full blood count / proteins / urea, electrolytes and creatinine); the affective (mood, control index, physical discomfort index, stoicism index, and a ‘hat track’ documenting his headwear for the duration of the project); and financial data (detailing the costs to the NHS of his treatment). Applying methods from data science to the genre of illness blogging, Corby’s project is an attempt to take ownership of his data creatively, and thus to regain a measure of control over living with disease.

In the final pages of his influential (although now rather dated) book, Portraiture, the art historian Richard Brilliant envisaged a dystopian future where the existence of portraiture (as mimetic ‘likeness’) is threatened by ‘actuarial files, stored in some omniscient computer, ready to spew forth a different kind of personal profile, beginning with one’s Social Security number’ (Brilliant 1991). Brilliant locates the implicit humanism of the portrait ‘proper’ in opposition to a dark Orwellian vision of the individual reduced to data. Writing in 1991, Brilliant could not have foreseen the ways in which future technologies would affect ideas about identity and personhood; comprehending how these technologies are reshaping concepts of the person today is one of the aims of People Like You.

Sophie Day

14 January 2019

Our series was formally launched with introductions from Kelly Gleason, Cancer Research UK senior research nurse, and Iain McNeish, Head of the Division of Cancer (both at Imperial College London & Imperial College Healthcare NHS Trust). Later we heard from Adam Taylor (National Physical Laboratory) about work in the Rosetta Team, led by Josephine Bunch, which is supported through the first round of CRUK Grand Challenges to map cancer and so improve our understanding of tumour metabolism.

To begin with, we learned about the breakthrough presented by tamoxifen in the development of personalised cancer medicine before hearing more about the infinite complexity of cancer biology. Twenty years ago, treatments were given to everyone with an anatomically defined cancer. This was frustrating since staff knew from experience that the treatment wouldn’t work for most people and many patients were disappointed. The introduction of tamoxifen led to stratification based on a common oestrogen receptor. Later, in ovarian cancer, it became clear that PARP inhibitors could be used successfully on approximately 20% of patients, who had inherited particular susceptibilities (in BRCA-1 and BRCA-2). Nonetheless, sub-group or stratified medicine is a long way from the goal of delivering unique treatment to everyone’s unique cancer.

This complexity is clear from the preliminary application of a range of integrated techniques by physicists, chemists and biologists in the Rosetta Team, as Adam then explained. Collaborators map and visualise tumours as a whole in their particular environments, along with their constituents down to the level of individual molecules in cells. In combination, these measures give both a detailed picture of different tumour regions and a holistic overview. Amongst the many techniques are AI methods, familiar from Amazon or Tesco platforms, which find patterns by reducing complexity. For example, in one application of varied mass spectrometry techniques, 4,000 variables are reduced to three coloured axes that label different chemical patterns. You can find regions of similarity in the data by colour coding, and explore their molecular characteristics.

Amazon has applied non-negative matrix factorisation to predict how likely we are to buy a particular item once we have bought another specific item. A similar approach enabled McNeish’s group to find patterns among samples of ovarian cancer that had all looked different. The team traced seven patterns driven by seven mechanisms among these samples.
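To make the idea concrete, here is a minimal sketch of non-negative matrix factorisation applied to a toy purchase matrix. Everything in it – the data, the rank, the update rule – is an invented illustration of the general technique, not Amazon’s or the research group’s actual pipeline:

```python
import numpy as np

def nmf(R, k=2, iters=500, eps=1e-9):
    """Factorise a non-negative matrix R into W @ H using
    multiplicative updates (Lee & Seung style)."""
    rng = np.random.default_rng(0)
    n, m = R.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ R) / (W.T @ W @ H + eps)  # update item patterns
        W *= (R @ H.T) / (W @ H @ H.T + eps)  # update user weights
    return W, H

# Toy purchase matrix: rows are users, columns are items.
# Users 0-2 buy "dog" items (columns 0-1); users 3-5 buy "cat" items.
R = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)

W, H = nmf(R, k=2)
scores = W @ H  # predicted affinity for every user-item pair

# User 2 only bought item 0, but users like them also bought item 1,
# so the reconstruction scores item 1 well above the "cat" items.
print(scores[2, 1] > scores[2, 2])
```

Recommending ‘people like you buy things like this’ is then just a matter of ranking each user’s unbought items by their reconstructed scores.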

Embedded in the study of cancer’s biology and chemistry, data scientists ‘know that these are not just numbers. They know where the numbers come from and the biological and technical effects of these numbers.’ Non-linear methods such as t-SNE (t-distributed stochastic neighbour embedding) help in the analysis of very large data sets. Neural networks have also been developed for use in a hybrid approach: a random selection of data is analysed with t-SNE to provide a training set for a neural network, which is then validated using t-SNE on another randomly selected chunk of data.
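As a rough illustration of that hybrid workflow, the sketch below runs t-SNE on one random chunk of synthetic ‘spectra’, trains a small neural network to reproduce the embedding, and then applies the network to a held-out chunk. The data, the library choice (scikit-learn) and all parameter values are assumptions made for demonstration, not the Rosetta Team’s actual tooling:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
spectra = rng.random((300, 50))  # stand-in for mass-spectrometry variables

# Split the samples into a random training chunk and a held-out chunk.
idx = rng.permutation(len(spectra))
train, held_out = idx[:200], idx[200:]

# Step 1: t-SNE gives a low-dimensional embedding of the training chunk.
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(spectra[train])

# Step 2: a neural network learns the spectrum -> embedding mapping,
# so new samples can be placed without rerunning t-SNE from scratch.
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                   random_state=0).fit(spectra[train], embedding)

# Step 3: place the held-out chunk; in the hybrid approach these
# predictions would be validated against a fresh t-SNE of that chunk.
predicted = net.predict(spectra[held_out])
print(predicted.shape)  # one coordinate pair per held-out sample
```

The point of the hybrid is speed: t-SNE does not learn a reusable mapping, so the network stands in for it on new data once trained.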

This approach combines fine-grained detail with broad pattern recognition in different aspects of tumour metabolism. It might lead to the development of a ‘spectral signature’ to read the combined signature of thousands of molecules at diagnosis.

At the end of the evening, most of us revealed anxieties about the attribution of a wholly singular status through personalising practices. Those affected by cancer wanted the ‘right’ treatment for them but we were reassured by the recognition that we also share features with other people. We appreciated the sense of combining and shifting between the ‘close up’, which renders us unique, and a more distant view, where we share a great deal with others.

Many thanks to Maggie’s West London for their hospitality.


You and Your (Data) Self

Scott Wark

2 January 2019

You might have seen these adverts on the TV or on a billboard: a man and his doppelgänger, one looking buttoned up and neat and the other, somehow cooler. “Meet your Data Self”, says the poster advert on the tube station wall I often stare at when I’m waiting for the next train. In smaller type, it explains: “Your Data Self is the version of you that companies see when you apply for things like credit cards, loans and mortgages”. And then: “You two should get acquainted”.

This advert has bothered me for quite a while. I’m sure that’s partially intentional—whether I find it funny or whether I find it irritating, its goal is to make the brand it’s advertising, Experian PLC, stick in my mind. I find the actor who plays this everyman and his double, Marcus Brigstocke, annoying—score one to the advert. Beyond Brigstocke’s cocked brow, what bothers me is that this advert raises far more questions than it answers.

Who is this “Data Self” it’s telling me to get acquainted with? Is this person really like me, only less presentable? What impact does this other me have on what the actual me can do? And—this question might come across as a little odd—who does this other me belong to?

Experian is a Credit Reference Agency, so presumably the other ‘me’ is a representation of my financial history: how good I am at paying my bills on time; whether I’ve been knocked back for a credit card or overdraft; even if I’ve been checking my credit history a lot lately, which might come across as suspicious. Banks, credit card companies, phone companies, car dealers—anyone who might extend you credit so you can get a loan or pay something off over time will check in with agencies like Experian to see if you’re a responsible person to lend to.

As a recently-finished PhD student, I’ve no doubt that my other me is not so presentable, to use the visual metaphor presented by this advert’s actor/doppelgänger. A company like Experian might advise another company, like a bank, to not front me money for the long summer holiday I’m dreaming of taking to Northern Italy as I wait for the next packed tube. This “me” might not be trustworthy. Or, to put it another way, this “me” might not indicate trustworthiness.

The point of this advert is to get me to order a credit report from Experian so that I can understand my credit history and so that I can build it up or make it better. This service is central to the contemporary finance industry, which has to weigh the risk of lending money or extending credit to someone like me against the reward they get when I pay it back. If I want to be a better me, it suggests, I ought to get better acquainted with myself—or rather, my data self. If I want that holiday, its visual metaphor suggests, I’d better straighten my data self’s tie.

There’s lots more that might be said about how credit agencies inform the choices we can make and handle our data. One of the more straightforward comments we might make about them is also one that interests us most: this other, data “me” isn’t me. This is perhaps obvious—the advert’s doppelgänger is a metaphor, after all. It’s a person like me, it’s constructed from data about me, and it influences my life, but it’s not me. But this also means that this other, data “me” isn’t mine.

This advert presents just one example of the many data selves produced when we consciously or inadvertently give up our data to other companies. In this case, we agree to our data being passed on to credit rating agencies like Experian every time we get given credit. What’s interesting about this data self is that whilst it isn’t you, it has an effect on a future version of you—in my offhand example, a you who might be holidaying in Italy; or, more problematically, a you who might need an overdraft to make ends meet month-to-month. To riff on our project’s title, these data selves are, quite literally, people like you. They might not be you, but they have a real effect on your life.

We need to do a lot more work researching who these datafied versions of ourselves actually are and what effect they have on being a person in our big data present. As Experian point out in another campaign fronted by food writer and austerity campaigner Jack Monroe, several million U.K. residents are “invisible” to the country’s financial services because they don’t have a credit profile. Conversely, we might ask, what does it mean to be a person in our big data present if who we are is judged on our data doppelgängers? What does it mean when my other “me” isn’t mine—when it’s opaque, confusing, and sold to me as a service?

Countless other digital platforms and services create both fleeting and lasting “data selves” that are used to try to sell us products, for instance, or to better tailor services to our needs. This process is called “personalisation”. One of the things we want to ask as part of our research project is this: who are we when who we are is determined by who we are like? Credit Reference Agencies and the “data selves” they produce make this tangled question tangible, but it applies to many other areas of contemporary life—from finance to medicine, from our participation in digital culture to our status as individuals, actors, citizens, and members of populations. This question raises others about what it means to be a “me” in the present. These are the questions, I think, that bind this project together.

For more information about Credit Reference Agencies, see the Information Commissioner’s Office information page.

What is Personalisation?

William Viney

26 November 2018

Personalisation is at once ubiquitous in contemporary life and a master of disguise. Its complexity hides in plain sight. Personalisation may mean producing products and services to ideas of individual demand, but it also means much more than this. Personalisation connects diverse practices and industries such as finance and marketing, medicine and online retail. But it also goes by many aliases – patient-centred, user-oriented, stratified and segmented – in ways that can make it hard to follow. It’s not always clear what personalised products and services share in common.

The ‘People Like You’ project does not shy away from this diversity. It works across the fields of medicine, data science, and digital culture to understand the differences in each of these domains, as well as how people and practices work across them. One challenge of understanding emerging practices that are forming within and between particular industries is that histories of personalisation may be contested, sensitive, or rapidly developing. We want to find ways to explore different meanings of the term ‘personalisation’ in the United Kingdom, among people from different working backgrounds: academic and commercial scientists in biomedicine, biotechnology and pharmacology; public policy; advertising and public relations; communications; logistics; financial analysis. So we have designed a study that might be the first of its kind in the UK – an oral history of personalisation.

The ‘What is Personalisation?’ study uses stakeholder interviews to establish how and why each industry personalises, and with what techniques of categorisation, monitoring, tracking, testing, retesting, aggregation and individuation. These interviews are in-depth and semi-structured. They usually last an hour or more. Interviews allow us an opportunity to understand how a particular individual views their work, industry, profession or experience.

A wide range of policy makers, activists, scientists, technologists, and healthcare professionals have already participated, detailing how they see the emergence of personalisation affecting their lives. Striking themes have revealed just some of the connective aspects of personalised culture: the links between standardisation, promise and failure; how languages of democratic and commercial empowerment contest state, regulatory, and market power; how products or services can treat prototyping as a continuous process; the influence of management and design consultancies; and the way mobile technologies interpret data in real time to produce ‘unique’ experiences for users. These are just some of the ideas that we have talked about during our interviews. We also get to discuss when and how these ideas emerged and became popular in a given industry, field or policy area.

The connections that can be made across different fields, practices, or industries can be contrasted to the highly specific emergence of personalisation in some areas. For instance, the special confluence of disability and consumer rights activism that formed alongside and, at times, in opposition to deregulation in healthcare systems in the late 1980s created individual (later personalised) health budgets, now an important policy instrument used by the National Health Service’s personalised care services. The challenge is to understand the historical and social formation of a particular patch in personalisation’s history, its various actors and networks, and to recognise adjacent and comparable developments. We are doing this whilst recognising broader patterns that are germane to other contemporary figures of personalisation. One of these may be the specific inclusion and exclusion factors that prevent a personalised service from becoming a mass standardised service. Another is to understand whether personalisation is being heralded as a success or as a response to failure – not the best of all available options but an alternative to foregone possibilities.

Our work takes patience and a lot of help from those who are passionate experts in their field. If you feel you have an experience of personalisation that would make an important contribution to this study, then please get in touch with William Viney.