Blog

Scott Wark

15 May 2020

The Figurations: Persons In/Out of Data conference was held at Goldsmiths, University of London, in December, 2019. Over two days, it gathered researchers from across the humanities and social sciences to explore how the concept of the “figure” and its cognates—figuration, to figure, to figure out, and so on—might inform the theoretical frameworks and methodological formulations we use to study developing personalisation and data practices.

In the conference’s blurb, we summed up its interests like this: “[t]he intersection between data and person isn’t fixed; it has to be figured.” Per its subtitle, the conference was interested in persons, the putative subjects of the processes of personalisation that we study here at People Like You. But it was also interested in the data processing techniques that make persons tractable to processes like personalisation. Our proposition was quite simple: perhaps what these data processing techniques do, in otherwise-distinct domains—perhaps what they have in common—is that they configure and distribute personhood in the data they assemble, as outputs for other operations. Our gambit was that making this proposition the basis of a conference would encourage other scholars, from a range of disciplines, to come and think through it with us. And it did.

Over two days, we hosted four keynote presentations, by AbdouMaliq Simone and Wendy H. K. Chun on the first day and Jane Elliott and John Frow on the second. A further 45 papers were delivered by 61 researchers from a wide range of disciplines and places, including the medical humanities, anthropology, sociology, media studies, geography, human-computer interaction, literature, art history, legal studies, and visual cultures.

Each day was punctuated at beginning and end by a keynote presentation. The first day started with Simone’s discussion of how a “we” is—must be—configured in order to continue to inhabit a planet that’s in excess of our experience and understanding. The day was capped off by a presentation by Chun on the discriminatory politics of the machine-learning-based recognition systems that subtend and orchestrate many of our relations with networked technologies. The second day started with a presentation by Jane Elliott on longitudinal research, which used the example of the 1958 British Birth Cohort Study to discuss, with great nuance, the challenges facing researchers figuring individuals over large time scales or, conversely, using the micro-scale data offered by wearable technologies. Finally, John Frow concluded the conference with a presentation on “data shadows,” which connected questions of surveillance and data processing to the problem of how we might recognise ourselves in their products.

Particular themes emerged over the course of two days, both in these keynotes and in the parallel sessions that they bookended. Many presenters offered compelling conceptualisations of the different ways that persons might be figured, whether as patients or users, data doubles or digital subjects; whether imagined as individuals or as they’re assembled, by data, into collectives. Other presenters focused more on data and how it’s processed, conceptualising abstract processes as figures that configure or constitute persons. Matt Spencer’s presentation, for instance, articulated the configuring influence of trust over the infrastructures that manage data, whilst Emma Garnett unpicked how pollution has to be figured, in order for us to understand it and, so, understand our relationship to it. This variety was stimulating. It also contributed to a sense of coherence in the concept of the figure we’d adopted as a guiding thread.

What emerged from these papers was a sense that the concept of the figure was multiple, but nevertheless helped us get a handle on how data and persons are mutually configured, as, for example, figures of speech, or inter-operable subjects. How it does so differs from context to context, depending on what techniques and technologies are involved and to what ends they’re employed. But the concept of the figure and its cognates help us to apprehend figuration as a process with particular characteristic features. It helps us see what data are and what data do. It captures data by tracking what people do. It makes data commensurable by establishing likenesses. It situates data in contexts that delimit its scope. What emerges are figures of persons constituted in/out of data.

Finally, it also brought home a key point that, for me at least, often tacitly informs the work People Like You does as a team. To study problems that arise from the relationship of persons and data, that are large scale, and that cut across very different domains—in our case, personalisation—we have to adopt interdisciplinary approaches informed by novel methods. Moreover, we need concepts that are fit for purpose. This conference affirmed for us that the figure, with its cognate figuration, is just such a concept. At scale, it can do the kind of conceptual work we need to understand the complex processes that shape what we understand a person to be.

A few months on from the conference, we’re still working with its outcomes. We’re aiming to publish an edited collection of papers by keynote speakers and presenters from the conference and members of the PLY team. We hope this collection will capture something of the breadth that made the conference successful. But we also hope that it’ll give readers conceptual and methodological tools to do their own figuring.

We’ll have more on this soon. In the meantime, thanks to everyone who presented or attended!

For photos of the conference, check out our gallery.


Sophie Day and Celia Lury

24 April 2020

We are all now familiar with what 2 metres looks like, as we go for solitary walks in parks or stand in queues to shop for family and friends. We draw lines, we stand aside or behind or in front of others; we walk around and in parallel to each other.

Improvising, trying out ways to be ‘close up, at a distance’ (Kurgan 2013), we leave food outside doors and put our hands to windows separating us from friends and relatives who cannot leave their homes. Participating in group chats, we appear to ourselves and others as one talking head among others. We sing across balconies, we mute ourselves in synchronized patterns. Our actions producing insides and outsides, we appear alone together.

In describing 2 metres as a social distance, we acknowledge that we are part of a bigger picture. But who is it a picture of and how does it come about?  How is the compulsion of proximity (Boden and Molotch 1994) – the need to be close to others – being reconfigured as proximity at a (social) distance? And what kind of social is this? Does it add up to a society? Who is included and who is left outside? Are we all in this together?


In the 2 metre rule and the complicated guidance about who has to stay indoors and who can go out and why, we see grid reactions. This is a phrase used by Biao Xiang in his discussion of the management of the COVID-19 epidemic in China by already existing administrative units. He says, ‘Residential communities, districts, cities and even entire provinces act as grids to impose blanket surveillance over all residents, minimize mobilities, and isolate themselves. In the Chinese administrative system, a grid is a cluster of households, ranging from 50 in the countryside to 1000 in cities. Grid managers (normally volunteers) and grid heads (cadres who receive state salaries) make sure that rubbish is collected on time, cars are parked properly, and no political demonstration is possible. During an outbreak, grid managers visit door to door to check everyone’s temperature, hand out passes which allow one person per household to leave home twice a week, and in the case of collective quarantine, deliver food to the doorstep of all families three times a day.’


Image posted by Adam Jowett, April 20th, 2020, adapted from Fathromi Ramdlon, via https://pixabay.com/illustrations/physical-distancing-social-distancing-4987118/


While gridding has long been a core technology of rule through centralised command and control according to the priorities of military, state and industrial logistics, the pandemic is leading to a multiplicity of grid reactions. In the UK, some grids are imposed, while others are improvised. We (variously) wear face masks, choreograph meetings in Zoom (faces within faces, faces on their own, face by face, just not face to face), and order goods online while governments refuse to let cruise ship passengers disembark, impose 2 metres outside care homes but not inside, divert PPE from one country to another, and close borders to people, but not to goods. There is no single grid in operation. We are all making neighbours differently.


As we do so, we learn about grids; they can be creative, they can be fun, but grid reactions also make visible the social of social distancing and the politics of proximity. For example, watch a man from Toronto wearing a so-called “social distancing machine” (a hoop with a 2 metre radius that a person can wear around their middle) while walking around the city.


The aim of this machine was to show that sidewalks (the North American word for pavements seems especially appropriate right now) are too narrow, particularly when people are being asked to socially distance. Its creator, Daniel Rotsztain, who is part of the Toronto Public Space Committee, a group that advocates for more “inclusive and creative” public spaces, said to Global News Radio AM 640, ‘I think even before COVID, you could say that pedestrians are jostling for space in Toronto, but COVID really exposed that’. He proposes that some streets should be closed to traffic to give pedestrians more room to maintain distance. In the grip of a pandemic, the grids of family, household, and district – the negotiation of which is so fundamental to social life – can no longer be taken for granted. Some grid reactions are described in terms of shielding the vulnerable, but who are they really shielding – those inside or those outside? For whom does a home become a prison, and when does a cruise ship become a floating container?


Still, Ryan Rocca, “Coronavirus: Man wears ‘social distancing machine’ to show local sidewalks are ‘too narrow’. Global News, April 13, 2020: https://globalnews.ca/news/6812047/coronavirus-toronto-social-distancing-machine/


A grid seems to freeze time within spatial relations but it is a way of managing mobility. We move up the queue outside the supermarket in sequenced intervals rather than as and when we like. While the grid seems fixed, it calibrates movement – the transmission of a virus, for example – assigning spatial and metric values to this temporal process in an interplay of number-based code and patterning (Kuchler 2017). In the UK, 2 metres is the measure that is being imposed to mediate the R₀ or basic reproduction number, the number that indicates how many new cases one infected person generates. Recognizing 2 metres as a social distance acknowledges that transmission is not simply a matter of biology, but of how social life is gridded. But while the R₀ conventionally takes the individual person as the unit of transmission, the examples above suggest that it is the operation of multiple grid reactions – and the failure or success of the interoperability of their metrics – which matters. We need to ask: what kinds of families fit into what kinds of households into what kinds of schools? how do they inter-connect? how differently permeable are private homes, second homes and social care homes? how will apps measure social distance? And perhaps most importantly, how do grid reactions change the ways in which the virus discriminates? In what has been described as a very large experiment, the interoperability of grids is being tested in real time across diverse informational surfaces – models, materials, walls, windows, screens, apps and borders – to create new grids with as yet unknown consequences.
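The arithmetic behind this mediation can be made concrete with a toy sketch. One common decomposition treats the reproduction number as contacts per day × transmission probability per contact × days infectious; the numbers below are invented for illustration only, not real COVID-19 estimates:

```python
# Toy illustration only: invented parameters, not epidemiological estimates.
# R0 is here decomposed as contacts/day x transmission probability per
# contact x days infectious, so a grid reaction that removes contacts
# (queue spacing, lockdown) pushes the effective number down.

def reproduction_number(contacts_per_day, p_transmit, days_infectious):
    return contacts_per_day * p_transmit * days_infectious

baseline = reproduction_number(10, 0.05, 5)   # no distancing
distanced = reproduction_number(4, 0.05, 5)   # fewer contacts under gridding

print(baseline, distanced)  # 2.5 1.0
```

On these made-up numbers, removing six of ten daily contacts is enough to bring the figure down to the threshold of 1, below which an outbreak shrinks.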


Bibliography

Boden, D. and Molotch, H. L. (1994) The compulsion of proximity in R. Friedland and D. Boden (eds.) Now/Here: Space, Time and Modernity, University of California Press, pp. 257-286.

Jowett, Adam. (2020) Carrying out qualitative research under lockdown – Practical and ethical considerations. At https://blogs.lse.ac.uk/impactofsocialsciences/2020/04/20/carrying-out-qualitative-research-under-lockdown-practical-and-ethical-considerations/

Kuchler, S. (2017) Differential geometry, the informational surface and Oceanic art: The role of pattern in knowledge economies, Theory, Culture and Society, 34(7-8): 75-97.

Kurgan, L. (2013) Close Up at a Distance: Mapping, Technology, and Politics, The MIT Press.


Helen Ward

11 March 2020

“Of all the gin joints in all the towns … of all the one-horse towns … why did this virus have to come to mine?”

The words of my friend Paul, who is living in an Italian town under lockdown because of the novel coronavirus epidemic. His frustration is palpable: his plans for travel, work and social life were put on hold for at least two weeks (and subsequently extended for another three). But he reasons, “despite the fact that it’s not a killer disease, we can’t all go around with pneumonia. I don’t want pneumonia myself…and I wouldn’t wish it on any of the local citizens so in a sense, I’m sort of with the authorities, even though it’s against my own personal interests at this moment in time, I think that the lockdown is correct” (interview, 26 Feb 2020).


Usually busy street in Codogno deserted, 28 Feb 2020 (credit: Paul O’Brien)

Public health interventions often raise this dilemma – to protect “the community”, individuals have to take actions for which they may see little or no benefit, and at worst experience, or imagine, damage. And in the case of emergency response, health advice tends towards blanket coverage rather than personalised recommendations. A potential pandemic looks like the other end of the spectrum from personalised medicine. The latter uses genomic and other molecular techniques together with large data sets to promise the right treatment or intervention for the right person at the right time through precision diagnostics and therapeutics. The “one size fits all” approach of epidemic response seems far removed from this, with recommendations for handwashing, social distancing and, as in the case of Wuhan and Lombardy (and now the whole of Italy), mass quarantine.

There is no lack of data on COVID-19. Indeed, it is the first pandemic in the era of such widespread and easy access to information from 24-hour news, social media and almost real-time updates of numbers of cases, deaths and responses on websites such as Worldometer. This data sharing is unprecedented, as is the openness of publishing results and sharing information on cases and code. This initial data collection is the first stage of any outbreak investigation, where cases are described by time, person and place. In China, scientists used social media reports to crowdsource a daily line-listing of cases with as much data as possible, and this was then compared with official reports (Sun et al. 2020). Although incomplete, this method had great promise, and teams are now looking to develop methods for more automated approaches, including “developing and validating algorithms for automated bots to search through cyberspace of all sorts, by text mining and natural language processing (in languages not limited to English)” (Leung and Leung 2020).

But while social media and online publishing is facilitating data access and sharing, it is also leading to what the WHO have termed an infodemic, “an overabundance of information — some accurate and some not — that makes it hard for people to find trustworthy sources and reliable guidance when they need it”. Sylvie Briand, director of Infectious Hazards Management at WHO’s Health Emergencies Programme, explains that this is not new, but different. “We know that every outbreak will be accompanied by a kind of tsunami of information, but also within this information you always have misinformation, rumours, etc. We know that even in the Middle Ages there was this phenomenon…But the difference now with social media is that this phenomenon is amplified, it goes faster and further, like the viruses that travel with people and go faster and further.” (Zarocostas 2020).

Conspiracy theories and misinformation about COVID-19 have indeed been spreading widely, from ideas that the disease is caused by radiation from 5G masts, to malicious reports of specific individuals being infected and suggestions of fictitious cures. These can be highly influential in determining people’s response to official advice in an outbreak situation. Working on the role of misinformation in vaccine uptake, Larson describes the resulting emotional contagion and “insidious confusion”, which can undermine control efforts (Larson 2018). Health behaviours in relation to infectious disease are complex, shaped by a wide range of factors including beliefs about prognosis and treatment efficacy, symptom severity, and social and emotional factors (Brainard et al. 2019). They also depend on the extent to which the source of the advice is trusted and respected. A survey of 1,700 people in Hong Kong in the early days of the COVID-19 outbreak showed that doctors were the most trusted source of information, but that most information was actually obtained from social media (Kwok et al. 2020).

Lack of trust was found to have undermined the response to SARS in China in 2003, leading to changes in the way that risks were communicated in the H7N9 influenza in 2013. A qualitative study of both outbreaks concluded, “Trust is the basis for communication. Maintaining an open and honest attitude and actively engaging stakeholders to address their risk information needs will serve to build trust and facilitate multi-sector collaborations in dealing with a public health crisis.” (Qiu et al 2018). The focus on engaging stakeholders in the community is a crucial and often neglected part of epidemic response. (Gillespie 2016, WHO 2020)

So, can we expect people to respond appropriately to the one-size-fits-all messages intended to reduce the transmission of coronavirus? The response will depend on a number of factors, including whether people trust the source of the message, whether the threat is perceived as real, whether the interventions are seen as likely to work, and whether the disruption is proportionate. Evidence so far suggests that people are making changes – 30% of 1,400 people who responded to my non-random Twitter survey had already changed their behaviour by 22 February, and the disappearance of soap and hand sanitiser from the shelves indicates an intention to adopt hygiene practices. Respondents to a UK survey on 27-29 February reported a range of coronavirus-related actions, including more handwashing (62%) and changed travel plans (21%) (Brandwatch 2020).

Living in Codogno, Italy, my friend has no choice but to change his behaviour, but after initial annoyance he supports the lockdown as a necessary action to protect others. He is not particularly concerned about his own risk, yet in our conversations, and those with many others in person and online, there has been an interesting focus on the differential impact of COVID-19. The severity is clearly greater in older people and in people with some pre-existing conditions. This knowledge can be reassuring for many, if people like them don’t seem to be badly affected, but frightening for others. Reports of deaths have often been accompanied by descriptions such as “old” and “with underlying health conditions”. I commented on Twitter that this can create a “disturbing narrative this is acceptable, and can make the young & fit feel reassured”, and had a surprisingly positive response with over 20,000 impressions and 200 likes (many more than usual). One person replied, “I agree, the corollary is… that’s all right then, won’t affect us”.


It is not surprising that people want more precise information on risks, and this will eventually affect the response by identifying those people who should be first to receive vaccines and treatments. But we need to take care that the information is not used to create complacency in those who do not feel personally vulnerable. In HIV prevention, the concept of high-risk groups was counter-productive in many settings, leading on the one hand to stigma directed at those groups, and on the other to a lack of protective behaviour by people who felt that messages did not apply to them. We need to caution against that response. Even if coronavirus is mild for most people, it has the potential to seriously disrupt healthcare if it spreads quickly. The nature of the illness puts particular demands on critical care. In Italy they are already struggling with a lack of critical care beds, and the UK has far lower capacity (Rhodes 2012).

In an emergency it is even more important that we take measures that protect others, not just focus on our own personal risks and benefits. So please, wash your hands well, and don’t be offended if I don’t offer to shake your hand when we meet.


Acknowledgements

Thanks to Paul O’Brien for sharing his experience and photograph. HW receives funding from Imperial NIHR Biomedical Research Centre and Wellcome Trust.

Footnotes/references

Brandwatch. https://www.brandwatch.com/blog/react-british-uk-public-coronavirus-survey/

Brainard J, Weston D, Leach S, Hunter PR. Factors that influence treatment-seeking expectations in response to infectious intestinal disease: Original survey and multinomial regression [published online ahead of print, 2019 Dec 6]. J Infect Public Health. 2019;S1876-0341(19)30340-5. doi:10.1016/j.jiph.2019.10.007

Gillespie AM, Obregon R, El Asawi R , et al. Social mobilization and community engagement central to the Ebola response in West Africa: lessons for future public health emergencies. Glob Health Sci Pract 2016;4:626–46. doi:10.9745/GHSP-D-16-00226

Kwok KO, Li KK,  Chan HH et al. Community responses during the early phase of the COVID-19 epidemic in Hong Kong: risk perception, information exposure and preventive measures medRxiv 2020.02.26.20028217;  doi:https://doi.org/10.1101/2020.02.26.20028217

Larson H. The biggest pandemic risk? Viral misinformation. Nature 562, 309 (2018) doi: 10.1038/d41586-018-07034-4

Leung GM, Leung K.  Crowdsourcing to mitigate epidemics The Lancet Digital Health, 2020 (February 20) https://doi.org/10.1016/S2589-7500(20)30055-8

Qiu W, Chu C, Hou X, et al. A Comparison of China’s Risk Communication in Response to SARS and H7N9 Using Principles Drawn From International Practice. Disaster Med Public Health Prep. 2018;12(5):587–598. doi:10.1017/dmp.2017.114

Rhodes A, Ferdinande P, Flaatten H, Guidet B, Metnitz PG, Moreno RP. The variability of critical care bed numbers in Europe. Intensive Care Med. 2012;38(10):1647–1653. doi:10.1007/s00134-012-2627-8

Sun K, Chen J, Viboud C. Early epidemiological analysis of the coronavirus disease 2019 outbreak based on crowdsourced data: a population-level observational study. Lancet Digital Health. 2020; (published online Feb 20) https://doi.org/10.1016/S2589-7500(20)30026-1

World Health Organisation. Risk communication and community engagement (RCCE) readiness and response to the 2019 novel coronavirus (2019-nCoV). Interim guidance v2, 26 January 2020. WHO/2019-nCoV/RCCE/v2020.2

Zarocostas J. How to fight an infodemic. Lancet. 2020;395(10225):676. doi:10.1016/S0140-6736(20)30461-X

*This blog has also been published at Imperial’s Patient Experience Research Centre.

Report from Santiago, Chile

Scott Wark

17 February 2020


Over the past year, members of the People Like You team have been collaborating with Martín Tironi, Matías Valderrama, Dennis Parra Santander, and Andre Simon from the Pontificia Universidad Católica de Chile in Santiago, Chile, on a project called “Algorithmic Identities.” Between the 13th and 20th of January, Celia Lury, Sophie Day, and Scott Wark visited Santiago to participate in a day-long workshop reviewing the collaboration to date and discussing where it might go next.

The Algorithmic Identities project was devised to study how people understand, negotiate, shape, and in turn are shaped by algorithmic recommendation systems. Its premise is that whilst there’s lots of excellent research on these systems, little attention has been paid to how they’re used: how people understand them, how people feel about them, and how people become habituated to them as they interact with online services.

But we’re also interested in how algorithmic recommendation systems might be rendered legible to research. The major online services and social media platforms that people use are typically proprietary. Their algorithms are closely-guarded: we can study their effects on users, but not the algorithms themselves. In media-theoretical argot, they’re “black boxed”.

To study these systems, we adopted a critical making approach to doing research: we made an app. This app, ‘Big Sister’, emulates a recommendation system. It takes text-based user data from one of three sources—Facebook or Twitter, through these services’ Application Programming Interfaces, or a user-inputted text—and runs this data through an IBM service called Watson Personality Insights. This service generates a “profile” of the user based on the ‘big five personality traits’, which are widely used in the business and marketing world. Finally, the user can then connect Big Sister to their Spotify account to generate music recommendations based on this profile.
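The shape of this pipeline can be sketched in miniature. The code below is purely illustrative: the keyword-based trait scoring is an invented toy stand-in for IBM Watson Personality Insights (whose API is not reproduced here), and the keyword lists, catalogue, and tags are all made up for the example.

```python
# Schematic sketch of a Big Sister-style pipeline: text -> "big five"
# profile -> recommendation. The scoring below is a toy stand-in for
# IBM Watson Personality Insights; the keywords are invented.

TRAIT_KEYWORDS = {
    "openness": {"imagine", "art", "curious"},
    "conscientiousness": {"plan", "work", "finish"},
    "extraversion": {"party", "friends", "talk"},
    "agreeableness": {"help", "kind", "share"},
    "neuroticism": {"worry", "stress", "fear"},
}

def profile(text):
    """Score each 'big five' trait by keyword frequency (toy stand-in)."""
    words = text.lower().split()
    total = max(len(words), 1)
    return {trait: sum(w in kws for w in words) / total
            for trait, kws in TRAIT_KEYWORDS.items()}

def recommend(scores, catalogue):
    """Return catalogue entries tagged with the user's dominant trait."""
    dominant = max(scores, key=scores.get)
    return [item for item, trait in catalogue if trait == dominant]

catalogue = [("ambient playlist", "neuroticism"),
             ("dance playlist", "extraversion")]
scores = profile("I worry about stress and more stress")
print(recommend(scores, catalogue))  # ['ambient playlist']
```

The point of the sketch is structural rather than technical: user text is collapsed into a profile, and the profile, not the person, drives the recommendation.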

Our visit to Santiago happened after the initial phase of this project. Through an open call, we invited participants in Chile and the United Kingdom to use Big Sister and to be interviewed about their experience. Using an ethnographic method known as “trace interviews”, in which Big Sister acts as a frame and prompt for exploring participants’ experiences of the app and their relationship to algorithmic recommendation systems in general, we conducted a first, trial set of interviews—four in Santiago and five in London—which formed the basis of the workshop.

This workshop had a formal component: an introduction and outline of the project by Martín Tironi; presentations by Celia Lury and Sophie Day; and an overview of the initial findings by Matías Valderrama and Scott Wark. But it also had a discursive element: Tironi and Valderrama invited a range of participants from academic and non-governmental institutions to discuss the project, its theoretical underpinnings, its findings and its potential applications.

Tironi’s presentation outlined the concepts that informed the project’s design. Its comparative nature—the fact that it’s situated in Santiago and in the institutional locations of the People Like You project, London and Coventry—allows us to compare how people navigate recommendation in distinct cultural contexts. More crucially, it implements a mode of research that proceeds through design, or via the production of an app. Through collaborations between social science and humanities scholars and computer scientists—most notably the project’s programmer, Andre Simon—it positions us, the researchers, within the process of producing an app rather than in the position of external observers of a product.

This position can feel uncomfortable. The topic of data collection is fraught; by actively designing an app that emulates an algorithmic recommendation system, we no longer occupy an external position as critics. But it’s also productive. Our app isn’t designed to provide a technological ‘solution’ to a particular problem. It’s designed to produce knowledge about algorithmic recommendation systems, for us and our participants. Because our app is a prototype, this knowledge is contingent and imprecise—and flirts with the potential that the app might fail. It also introduces the possibility of producing different kinds of knowledge.

My presentation with Valderrama outlined some preliminary interview findings and emerging themes. Our participants are aware of the role that recommendation systems play in their lives. They know that these systems collect data as the price for the services they receive in turn. That is, they have a general ‘data literacy,’ but tend to be ambivalent about data collection. Yet some participants found the profiling component of our app confronting—even ‘shocking’. One participant in the UK did not expect their personality profile to characterise them as ‘introverted’. Another in Santiago wondered how closely their high degree of ‘neuroticism’ correlated to the ongoing social crisis in Chile, marked by large-scale, ongoing protests about inequality and the country’s constitution.

Using the ‘traces’ of their engagement with the app, these interviews opened up fascinating discussions about participants’ everyday relationship with their data. Participants in both places likened recommendations to older prediction techniques, like horoscopes. They expected their song recommendations to be inappropriate or even wrong, but using the app allowed them to reflect on their data. We began to get the sense that habit was a key emergent theme.

We become habituated to data practices, which are designed to shape our actions to capture our data. But we also live with, even within, the algorithmic recommendation systems that inform our everyday lives. We inhabit them. We began to understand that our participants aren’t passive recipients of recommendations. Through use, they develop a sense of how these systems work, learning to shape the data they provide to shape the recommendations they receive. Habit and inhabitation intertwine in ambivalent, interlinked acts of receiving and prompting recommendation.

Lury’s and Day’s presentations took these reflections further, presenting some emergent theoretical speculations on the project. Day drew a parallel between the network-scientific techniques that underpin recommendation and anthropological research into kinship. Personalised recommendations work, counter-intuitively, by establishing likenesses between different users: a recommendation will be generated by determining what other people who like the same things as you also like. This principle is known as ‘homophily.’ Day highlighted the anthropological precursors to this concept, noting how this discipline’s deep study of kinship provides insights into how algorithmic recommendation systems group us together. In studies of kinship, ‘heterophily’—liking what is different—plays a key role in explaining particular groupings, but while this feature is mobilised in studies of infectious diseases, for example in what are called assortative and disassortative mixing patterns, it has been less explicitly discussed in commentaries on algorithmic recommendation systems. Her presentation outlined a key line of enquiry that anthropological thinking can bring to our project.
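The homophily principle—recommend what the people most like you also like—can be sketched as a minimal user-based collaborative filter. The users, their likes, and the choice of Jaccard overlap as the similarity measure below are illustrative assumptions, not the method of any particular platform:

```python
# Minimal sketch of homophily-based ("people like you") recommendation:
# find the user whose liked-item set overlaps most with yours, then
# suggest what they like and you don't. Data is invented.

def jaccard(a, b):
    """Overlap between two users' liked-item sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user, likes):
    """Suggest items liked by the most similar other user but not by `user`."""
    others = {u: s for u, s in likes.items() if u != user}
    nearest = max(others, key=lambda u: jaccard(likes[user], others[u]))
    return sorted(others[nearest] - likes[user])

likes = {
    "ana":   {"jazz", "folk", "ambient"},
    "ben":   {"jazz", "folk", "techno"},
    "carla": {"metal", "punk"},
}
print(recommend("ana", likes))  # ['techno'] -- ben is the user most like ana
```

Even in this toy form, the logic Day points to is visible: the system never asks what ana is like in herself, only which other users she resembles.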

Lury’s presentation wove reflections on habit together with an analysis of genre. Lury asked if recommendation systems are modifying how genre operates in culture. Genres classify cultural products so that they can be more easily found and consumed. They can be large and inclusive categories, like ‘rap’ or ‘pop’; they can also be very precise: ‘vapourwave,’ for instance. When platforms like Spotify use automated processes, like machine learning, to finesse large, catch-all genres and to produce hundreds or thousands of micro-genres that emerge as we ‘like’ cultural products, do we need to change what we mean by ‘genre’? Moreover, how does this shape how we inhabit recommendation systems? Lury’s presentation outlined another key line of enquiry that we’ll pursue as our research continues.
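One way to picture how micro-genres might be carved out of listening data is clustering. The sketch below runs a hand-rolled k-means over invented two-dimensional “audio features” (tempo, energy)—a drastic simplification of whatever platforms like Spotify actually do, offered only to show how groupings can emerge from data rather than from named categories:

```python
# Toy illustration of machine-produced micro-genres: k-means clustering
# over invented (tempo, energy) features. Not any platform's real method.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k distinct tracks
    for _ in range(iters):
        # assign each track to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                            + (p[1] - centroids[i][1]) ** 2)
            clusters[i].append(p)
        # move each centroid to the mean of its cluster
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Two loose "micro-genres": slow/quiet tracks and fast/loud tracks.
tracks = [(60, 0.2), (65, 0.3), (62, 0.25), (170, 0.9), (175, 0.85), (168, 0.95)]
clusters = kmeans(tracks, 2)
print([len(c) for c in clusters])  # [3, 3]
```

No one names these two groupings in advance; they fall out of the data, which is the sense in which such categories put pressure on what we mean by ‘genre’.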

For me, our visit to Santiago confirmed that the ‘Algorithmic Identities’ project is producing novel insights into users’ relationship to algorithmic recommendation systems. These systems are often construed as opaque and inaccessible. But though we might not have access to the algorithms themselves, we can understand how users shape them as they’re shaped by them. The ‘personalised’ content they provide emerges through habitual use—and can, in turn, provide a cultural place of inhabitation for their users.

We’ll continue to explore these themes as the project unfolds. We’re also planning a follow-up workshop, tentatively entitled ‘Recommendation Cultures,’ in London in early 2020. Our work and this workshop will, we hope, reveal more about how we inhabit recommendation cultures by exploring the relations between personalised services and the people who use them. Rather than simply existing in parallel with each other, we want to think about how they emerge together in parallax.

What do pictures want?

Sophie Day

20 December 2019


‘What do pictures want?’ is the question posed by the title of W. J. T. Mitchell’s 2005 book. Why do we behave as if pictures are alive, possessing the power to influence us, to demand things from us, to persuade us, seduce us, or even lead us astray?

Steve McQueen’s Year 3 (2019) involved mass participation and includes 3,128 class photographs of some 76,000 children – two-thirds of London’s seven-year-olds. The pictures are on display in Tate Britain’s Duveen Galleries until May 2020, though available slots for school visits are fully booked. These are classic school photographs – wide-angle, everyone visible, most children in uniform, sitting and standing in three or four rows on the traditional low benches, framed by the familiar accoutrements of school gyms and halls. The images are arranged in blocks of colour from top to bottom of the gallery walls.


Year 3 at Duveen Galleries, Tate Britain (photograph, Sophie Day)

In addition to the gallery display, McQueen was committed to exhibiting class photographs on billboards – on older, mid-20th-century shop gables and houses in London’s further reaches as well as in prime advertising spots in the centre. The London arts organisation Artangel worked with PosterScope and other leading outdoor advertising and marketing companies to site billboards across the city, always outside the borough in which the photograph had been taken, and with at least one billboard sited so that it was easy to visit from the relevant school. Cressida Day from Artangel told me about the logistics of placing billboards across London for two weeks in November 2019, ahead of the exhibition at Tate Britain. Fifty-three schools appeared on around 600 sites, put up with paper and paste – 48 sheets for a single billboard, 96 for a double space. Such spots are hard to find in central London, where most advertising is now digital. Some boroughs, such as the City of London, offer only digital spaces in portrait format, while paper and paste provides the necessary landscape format. Pasting up is a dying art and takes a year to learn.


Year 3 billboard by A12 extension, east London (photograph, Sophie Day)

I was interested in this vision of a mass public seeing itself. The billboards and exhibition evoked repeated hopes for London’s future, as Harry Thorne found in reviews from The Guardian, The Times, Arts & Collections, ArtDaily and The Telegraph.[2] One teacher expressed delight about the public display of pictures of the children with special needs whom she taught. They are mostly invisible, she said, and are not part of publics. Of course, they want (to be on) a billboard. No one ever sees them. Comments on Twitter’s #Year3Project read, “The … is so cool! We’re used to numerical data on populations, but here you can SEE a cross-section of London, …” and, from a participating school, “… we are the art work. We are the audience.”

Apparently fewer than half of London’s schools now take year pictures, and the photographs that are still taken do not follow past practice, replacing images of children in serried rows with movement and activity. A web search for ‘school photo’ will come up at once, however, with an offer to find your old class picture for you. You might then imagine or trace your cohort forward in time from the recent past.[3] I wonder if the evocation of collectives on billboards, through practices that used to be common, jolted spectators into asking about London’s future. The sense of collective, including year groups, was orchestrated by public institutions through widely shared events that moved you predictably from school photos, through education in general, and into work placements, health checks, jobs …. As public institutions themselves are severely trimmed, and as their role or value is celebrated less often through school photos and equivalent markers, what sort of London will appear with these Year 3 children? How will it be recognised, and by whom?

Measures were adopted to safeguard the audience and portraits, but different kinds of public emerge in relation to digital media. Advertisers follow voluntary restrictions within a 100-metre area around schools, refraining from advertising alcohol, e-cigarettes, fast food, sweets, gambling or lotteries. The display of Year 3 images on billboards followed the same guidelines. In addition, there were to be no adverts from these sectors next to Year 3 portraits.[4] In consequence, more billboards ended up in the underground network than expected, where TfL’s policy on advertising is more stringent.

Was the audience for these pictures in need of safeguards? Perhaps Year 3 pictures would affect their surroundings, and so the companies placing the billboards, advised by NSPCC (National Society for the Prevention of Cruelty to Children) officers, wanted to create child-friendly environments. If ‘the artwork was also the audience’ (above), an audience of young people would come to look at the billboards in person, where they would be protected by these guidelines. But pictures of billboards that then circulate on social media (and here, for instance) cannot take these protective measures with them. What do these images want from their audiences? To be seen from what distance, in what context? How do these images and their audiences differ? If the images show ‘People Like You’, do they – in turn – like you?

Geotargeting and geofencing are increasingly important to out-of-home advertising. You may be looking at a billboard that is looking at you. Data such as gender, age, race, income, interests, and purchasing habits can be used by companies to trigger an advertisement directly, or to show ads in the future that they will have learned are appropriate – perhaps to Year 3 parents at school pick-up time and teenagers in the evening. Once your phone has been detected, an advertising company can follow up with related ads in your social media feed or commercials at home on your smart TV.[5] Artangel also used geolocating technology to target ads via Facebook or Instagram and direct people to Artangel’s website to find out more about the project.

Roy Wagner’s comment on the early Wittgenstein provides an appropriate gloss. Rather than picturing facts to ourselves, Wagner suggested, “Facts picture us to themselves” (The Logic of Invention, 2018).


[1] With thanks to Cressida Day, Celia Lury and Will Viney

[2] Harry Thorne, What All the Reviews of Steve McQueen’s ‘Year 3’ at Tate Britain Have Got Wrong. Frieze, 15 November 2019 at https://frieze.com/article/what-all-reviews-steve-mcqueens-year-3-tate-britain-have-got-wrong.

[3] The early days of Facebook would be remembered by some parents of Year 3 pupils: TheFacebook, as it was then called, made an online version of Harvard’s paper registers which were handed to all new students. They had photos of your classmates alongside their university ‘addresses’ or ‘houses’.

[4] 38 billboards were never put up in consequence. In a digital equivalent, where you cycle through six images in a minute, you would have had to place six school photos one after the other to avoid neighbouring advertising.

[5] See Thomas Germain, Digital Billboards Are Tracking You. And They Really, Really Want You to See Their Ads. CR Consumer Reports, November 20, 2019 at https://www.consumerreports.org/privacy/digital-billboards-are-tracking-you-and-they-want-you-to-see-their-ads/

Fiona Johnstone in conversation with Felicity Allen

Fiona Johnstone

5 November 2019


As part of our investigation into personalisation, People Like You is working with artist Felicity Allen (http://felicityallen.co.uk). Up to fifteen people will participate in Allen’s Dialogic Portraits practice, sitting for Allen in her studio in Ramsgate, where she will paint their portrait and invite them to reflect upon the process – and personalisation – with her.  Fiona Johnstone, postdoctoral research fellow with People Like You, sat for Allen and discussed her practice in relation to personalisation.

Fiona: Can you tell me a little more about the process of making Dialogic Portraits? The phrase suggests a conversation or dialogue; it reminds me of Linda Nochlin’s famous line about a portrait being ‘the meeting of two subjectivities’. I’m interested in how this relationship can be captured and made manifest in an artwork.

Flick: I’ve been working with Dialogic Portraits as a format for around ten years. Each sitting involves both talking and silence; each portrait is a document of the time that the sitter and I spend together. As well as creating a pictorial portrait, I also produce audio and video recordings, and make written observations. These then go towards making, say, an artist’s book or a film. I’m interested in how people respond to the experience of sitting, and in how they relate to the version of themselves that is given back to them in the finished portrait.

Fiona: The notion of series is important for dialogic portraiture, is that correct?

Flick: Yes, series, but also concept. Each series of Dialogic Portraits (Begin Again [2009-2014], You [2014-2016], and As if They Existed [2015-2016] and, currently, People Like You, Refugee Tales, and Interpreting Exchange) is informed by a concept that loosely links all the sitters in some way. For example, for Begin Again, which I started at the end of a decade of not-painting, I invited people who I had been working with [as Head of Interpretation & Education at Tate Britain] during that decade to sit for me. This enabled me to explore the limits of what we understand to constitute labour – intellectual, administrative, affective or domestic – and to think through the significance of this labour in relation to the production of both portraits and persons. For each portrait produced (76 in total), I wrote a diaristic note and recorded an interview with the sitter.

Fiona: I’m interested in the presentational format of Begin Again, which takes the shape of a two-volume book with images and texts (pictured), and also an exhibition (in 2015) where the portraits were hung as a wall-sized grid of faces (pictured). This configuration conjures several associations for me: a database, a filing system, or a rogues’ gallery. This reminds me of two textual reference points. The first is Siegfried Kracauer’s ‘The Mass Ornament’; Kracauer argues that in the modern period, people can only be understood as part of a mass, not as self-determining individuals. The second is Allan Sekula’s famous essay on photography, ‘The Body and the Archive’, where he explores the relationship in the early nineteenth-century between photographic portraiture, the standardisation of police and penal procedures, and the rise of the pseudo-sciences of physiology and phrenology (both comparative taxonomic systems which in turn contributed to the development of the discipline of statistics). Finally, it also made me think of an Instagram wall!

Flick: The associations with Instagram wouldn’t have occurred to me. I started working with the grid before I started using social media, and certainly before I was aware of Instagram [which was launched in 2010]. The grid was partly a practical solution to the problem of how to display multiple images within a limited space. For me, the associations of the grid would be minimalist or modernist – as in Rosalind Krauss’ reading of the grid – rather than to do with Instagram.

Fiona: That’s interesting. Krauss claims that the grid is a symptom of modern art’s hostility to narrative and to discourse – this seems antithetical to your own work, which connects image and text. She also describes the order of the grid as that of ‘pure relationship’, whereby objects no longer have any particular kind of value or order in themselves, but only in relation to each other. Perhaps this notion of ‘pure relationship’ might offer us a way into thinking about personalisation in relation to your work?

Flick: With the Begin Again wall I was certainly thinking about the individual in relation to the mass; the paradoxical effect of working with a group or series of people is that you start thinking about them all as individuals. The format is also, crudely speaking, about taking status away from people by putting them alongside other people. It disrupts the way in which we privilege certain people. It’s vaguely political, challenging hierarchy.

Begin Again nos 1–21 (2014), Felicity Allen, 2-volume limited edition artists book

Fiona: It feels as though you are working with an enduringly humanistic notion of the person. In particular, you work primarily with the face, a part of the person that has longstanding associations with phenomenological presence. Your images are often closely cropped; the focus is solely on the face, rather than on any contextual details, such as background or clothes, that might give the viewer a clue as to the identity of the sitter.

Flick: I agree that the face has strong humanistic associations. I’m thinking of Levinas’ idea that the face is basically something that stops you killing people – it makes a demand on you, and that relationship is inherently ethical. In terms of contextual details, I’m now starting to crop my images much less closely, because I’m interested in notions of personal branding and role-playing through the way in which people choose to present themselves – through branded clothes, for example.

Fiona: I wanted to ask you about the significance of persona. Many of our conversations on this project have looked at personalisation in relation to digital technologies and data science. Digital personalisation technologies reflect a longer preoccupation with the ‘person’ and the ‘persona’, and it seems to me that your work, which is almost resolutely analogue, might offer us a different way of approaching personalisation. The origins of the terms personalisation, personal, and personalise all stem from the Latin personalis or personale, which means ‘pertaining to a person’. Can we talk about the concept of persona in relation to your work?

Flick: The watercolours are resolutely analogue but there’s usually a kind of comprehensive digital work – a book or a film – which brings the series together. I am interested in how my sitters perform certain roles, but I’m also interested in the way in which I perform – or inhabit – the role of the artist. Begin Again was absolutely about getting people to see me differently, I had a strong consciousness of that very quickly. I was undressing as a manager, and dressing up as an artist.

Begin Again (2015), Felicity Allen, detail of exhibition installation during a residency at Turner Contemporary, Margate

Fiona: Do you find painting (someone’s portrait) to be performative?

Flick: It’s totally performative for both artist and sitter. We both find it exhausting, sitters as well as me.

Fiona: It’s a little like being on a therapist’s couch.

Flick: Yes, or at the hairdressers. But I try to manage the relationship to ensure that I’m not turning into the analyst or the hairdresser.

Fiona: How do you do that?

Flick: By talking back! And by being very conscious of how the sitting is going. But it does mean that I’m constantly retelling – or reperforming – my own stories.

Fiona: There’s a kind of labour, a selling-of-self, involved in that process of storytelling; that’s part of your exchange with the sitter. I’m wondering if you have read any of Isabelle Graw’s work on painting? Graw describes painting as ‘a form of production of signs that is experienced as highly personalised’. What she means by this is that painting has a direct indexical link to its maker; there is a close relationship between person and product. She links this to Alfred Gell’s definition of artworks as ‘indexes of agency’. As a ‘record of time spent together’, your work has a strong claim to the indexical.

Flick: Do you believe Graw’s argument?

Fiona: It’s seductive, but I don’t really buy it – why is this true of painting, but not of drawing or sculpture?

Flick: I don’t believe it, but I feel it. There’s something about the flow, the wet, that is very important about painting. I’ve got this board, and I’ve got paper on it. As I’m painting and I’m using my brush it’s like a proxy for stroking the face. There’s a brushstroke going on, and there’s a body that I could be stroking. It’s about touch, and feeling, and all that stuff – if I was to use a camera, I wouldn’t have that.

Fiona: So for you there is a strong sense that the (painted) portrait is a proxy for the person, but also that your tools are proxies for your own libidinal body.

Flick: Right.

Fiona: I’ve been trying to think about whether I have found my experience of sitting for you to be a personalised one. I think that I’d describe it as a personal or inter-personal experience, but not personalised, as such – for me, that term suggests an industrial process driven by big data and an infinite number of calculable relations based on things like likes and preferences. Understood in this way, personalisation seems to bear little relation to the highly individualised experience of a one-to-one portrait sitting. Perhaps we need a vocabulary that can differentiate between an experience that is individualised, and one that is personalised? Throughout our conversation, we’ve often both found it challenging to think about your work in relation to a dominant concept of [algorithmic] personalisation. In a recent essay published in Critical Inquiry, Kris Cohen notes that personalisation, and indeed networked life more generally, ‘disorientates all of our existing vocabularies of personhood and collectivity’. Do you think that this semantic disorientation might explain our difficulty in thinking through your work in relation to personalisation?

Flick: Absolutely!

Felicity Allen at work on a portrait of Fiona Johnstone, 5 September 2019, for People Like You

Works referenced

Linda Nochlin, “Some women realists”. Arts Magazine (May 1974), p.29.

Allan Sekula, “The Body and the Archive”. October 39 (Winter 1986), pp. 3-64.

Siegfried Kracauer, The Mass Ornament: Weimar Essays, trans. Thomas Y Levin. Harvard University Press, Cambridge, Massachusetts and London, England: 1995.

Rosalind Krauss, “Grids”. October 9 (Summer 1979), pp. 50-64.

Isabelle Graw, “The Value of Painting: Notes on Unspecificity, Indexicality, and Highly Valuable Quasi-Persons”, in Isabelle Graw, Daniel Birnbaum and Nikolaus Hirsh (eds.), Thinking Through Painting: Reflexivity and Agency Beyond the Canvas. Sternberg Press, Berlin; 2012.

Kris Cohen, “Literally, Ourselves”. Critical Inquiry 46 (Autumn 2019), pp. 167-192.

How Do You See Me?

Fiona Johnstone

30 September 2019


Artist Heather Dewey-Hagborg’s new commission for The Photographers’ Gallery, How Do You See Me?, explores the use of algorithms to ‘recognise’ faces. Displayed on a digital media wall in the foyer of the gallery, the work takes the form of a constantly shifting matrix of squares filled with abstract grey contours; within each unit, a small green frame identifies an apparently significant part of the composition. I say ‘apparently’, because the logic of the arrangement is not perceptible to the eye; although the installation purports to represent a human face, there are no traces of anything remotely visually similar to a human visage. At least to the human eye.

To understand How Do You See Me?, and to consider its significance for personalisation, we need to look into the black box of facial recognition systems. As explained by Dewey-Hagborg, speaking at The Photographers’ Gallery’s symposium What does the Data Set Want? in September 2019, a facial recognition system works in two phases: training and deployment. Training requires data: that data is your face, taken from images posted online by yourself or by others (Facebook, for example, as its name suggests, has a vast facial recognition database).

The first step for the algorithm working on this dataset is to detect the outlines of a generic face. This sub-image is passed on to the next phase, where the algorithm identifies significant ‘landmarks’, such as eyes, nose and mouth, and turns them into feature vectors, which can be represented numerically.  For the ‘recognition’ or ‘matching’ stage, the algorithm will compare multiple figures across the dataset, and if the numbers are similar enough, then a match is identified – although this might take millions of goes. The similarity of the represented elements remains unintelligible to the human eye, calling a ‘common sense’ understanding of similarity-as-visual-resemblance into question.
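In schematic terms, the matching stage described above reduces to measuring the distance between two feature vectors and applying a threshold. The sketch below is a toy illustration only: the four-number ‘vectors’ and the threshold are invented for the example, where real systems learn vectors with hundreds of dimensions during training:

```python
import math

def distance(u, v):
    """Euclidean distance between two face 'feature vectors'."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def is_match(u, v, threshold=0.6):
    """Declare a 'match' when the vectors are numerically close enough.
    The threshold here is arbitrary; real systems tune it on training data."""
    return distance(u, v) < threshold

enrolled = [0.12, 0.80, 0.45, 0.33]  # vector derived at the training stage
probe_a  = [0.10, 0.82, 0.44, 0.35]  # same face, slightly different image
probe_b  = [0.90, 0.15, 0.70, 0.05]  # a different face

print(is_match(enrolled, probe_a))  # True
print(is_match(enrolled, probe_b))  # False
```

Nothing in this comparison is visual: ‘similarity’ here is a statement about numbers, which is precisely the shift from resemblance to statistical correspondence that the work interrogates.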

In the western culture of portraiture the face has traditionally acted as visual shorthand for the individual; through this new technology, the face (read, the person) is transfigured into numeric code, allowing for algorithmic comparison and categorisation across a vast database of other faces/persons that have been similarly processed. How Do You See Me? asks what it means to be ‘recognised’ by AI. What version of ‘you’ emerges from a system like this, and how is it identifiable as such? Dewey-Hagborg described the project as an attempt to form her own subjectivity in relation to AI, noting that her starting point was a curiosity as to how she might be represented within the structure of the system, and what these abstractions of her ‘self’ might look like. In an attempt to answer these questions, she used an algorithm to build a sequence of images stemming from the same source, but varying widely in terms of appearance, working on the hypothesis that eventually one of these pictures would be detected as ‘her’ face. The grid of abstract and figuratively indistinct images on the wall of The Photographers’ Gallery can thus be understood as a loose form of self-representation, the evolution of a set of (non-pictorial) figures that attempt to be detected as a (specific) face. By interrogating the predominantly visual associations of ‘similarity’, which may once have implied a mimetic ‘likeness’ (with connotations of the pictorial portrait, arguably a dominant technology for the production of persons from the sixteenth to the twentieth century), but which now suggests a statistical correspondence, How Do You See Me? draws attention to changing ideas about how a ‘person’ might be identified and categorised.

Following her own presentation, Dewey-Hagborg discussed her practice with Daniel Rubinstein (Reader in Philosophy and the Image at Central St Martins). Rubinstein argued that this new technology of image-making can teach us something about contemporary identity. Considering our apparent desire to be ‘recognised’ by our phones, computers, and other smart appliances, Rubinstein suggested that the action of presenting oneself for inspection to a device resembles the dynamics of an S&M relationship where the sub presents themselves to the dom. Rubinstein argued that we want to be surveyed by these technologies, because there is a quasi-erotic pleasure in the abdication of responsibility that this submission entails. Citing Heidegger, Rubinstein argued that technology is not just a tool, but reveals our relation to the world. Life and data are not two separate things (and never have been): we need to stop thinking about them as if they are. The face is already code, and the subject is already algorithmic.

Rubinstein’s provocative remarks certainly provide one answer to the question of why people might choose to ‘submit’ to selected technologies of personalisation. They also help us to address personalisation. The project People Like You promises that we will try to ‘put the person back into personalisation’. Whilst this could be taken to imply that there is a single real person – the individual – we aim instead to consider multiple figurations of the ‘person’ on an equal footing with each other. As Rubinstein’s comments suggest, rather than thinking about this relationship in terms of original and copy (the ‘real’ person or individual and a corresponding ‘algorithmic subject’ produced through personalisation), the ‘person’ is always every bit as constructed a phenomenon as an ‘algorithmic’ subject. Or, to put this another way, rather than taking a liberal notion of personhood for granted as our starting point, our aim is to interrogate the contemporary conditions that make multiple different models of personhood simultaneously possible.

Who gets to feed at the biobank?

William Viney

10 September 2019


In the United Kingdom, initiatives such as UK Biobank and the 100,000 Genomes Project are now complete, and the NHS Genomic Medicine Service launched last year. With the consent of patients, local NHS trusts collect data and samples for research purposes. Each is a kind of biobank – an organised collection of biological specimens associated with computerised files, including demographic, clinical and biological data. Biobanks are an increasingly important part of research infrastructures in biomedicine and are important to realising the NHS’ desire for a more personalised healthcare system.

More recently, clinicians and researchers have been calling for wider participation in biobanking. This is because participation in biomedical research is seen as fundamental to developing more ‘targeted’ treatments and to fostering a transition from a ‘one-size-fits-all’ model of healthcare to more timely, accurate, and preventative interventions. Researchers and clinicians may also need wide and inclusive participation – including patients traditionally excluded from research – to make sure that biological samples and datasets are diverse and representative.

The People Like You project is interested in these and other developments that link healthcare, research, data science, and data infrastructures. My own involvement in biobanking began before I joined the project, when I enrolled as a participant in TwinsUK, based at the Department of Twin Research, King’s College London – the UK’s largest registry for twins. When my brother and I visited TwinsUK, the group collected basic biometric data, measuring height, weight, and blood pressure, as well as the strength of our grip and the capacity of our lungs. We gave samples of our blood, hair and spit, from which DNA, RNA, metabolites and numerous other molecules can be extracted. Our faces were swabbed in different places to test our sensitivity to different chemicals. All was recorded. We were not only enrolled; we are incorporated.

Participating in a biobank is different to enrolling in a discrete study because participants are not told exactly when and how their samples or data are used. The data stored by TwinsUK is available to any bona fide researcher, anywhere in the world. And so a biobank is not only a store of samples and data. It is also a registry or store of names and contact details, linking to individuals who have declared themselves interested in research and will give time, energy, and lots of different kinds of data. When the wind blows in the direction of studies interested in ‘personalised’ tests and interventions, this registry faces new opportunities and challenges, as do its participants.

In 2018, TwinsUK asked if I would take part in a new study called PREDICT. I was interested because it was described as a ‘ground-breaking research study into personalised nutrition’ that would ‘help you choose foods for healthy blood sugar and fat levels.’ Being involved was not straightforward. After a visit to St. Thomas’ Hospital, participants returned home and spent the next 14 days measuring blood glucose, insulin, fat levels, inflammation, sleep patterns and their gut microbiome diversity, both in response to standardised foods and to each participant’s chosen diet. In return, participants would be given summary feedback on their metabolic response. What interested me was how recruitment targeted existing members of the registry using the usual email format and their unique study numbers. And so it looked like any other Department of Twin Research study. But it is not like any other study.

Although King’s College London is the study sponsor and the Health Research Authority has provided the usual ethical approval, PREDICT is a large collaboration between several European and American universities, backed by venture capital investment from around the world. Tim Spector, the director of TwinsUK, is part of the scientific group that leads the study and has an equity stake in a private company called ZOE, which aims ‘to help people eat with confidence’. It is ZOE, not TwinsUK, that is processing the data that will build predictive – and ‘personalised’ – algorithms for future ZOE customers.

There is nothing nefarious or illegal about PREDICT. Collaborations between university scientists and private companies have been common for centuries. But the presentation of PREDICT’s results led me to think differently about biobanks and biobank participation in an era of personalised medicine and healthcare. PREDICT’s innovation threads together a set of historical tendencies that are important for how personalisation is seen as a desirable, evidence-based, and marketable product.

Changes in how UK universities are funded and how the NHS is structured have changed the potential uses of biobanks. This is not always obvious to existing research participants (who, at TwinsUK, have a mean age of 55 years; some have been volunteers for 25+ years). In the case of PREDICT, TwinsUK assure me that all the proper licences and contracts are in place so that data can be shared with commercial collaborators, and participants are given information sheets explaining how their data is used. But what does informed consent become – and ‘participation’ signify – when the purpose of a biobank shifts to include corporate interests outside the health service?

Initial results from PREDICT have been more actively disseminated in the mainstream media than in peer-reviewed journals (summary results have been presented at a large conference in the US). Significant resources have been ploughed into garnering widespread coverage in The New York Times, Daily Mail, The Times and The Guardian. The data from the first PREDICT study has not been made available to other groups.

Begun in 1993 to investigate ageing-related diseases, TwinsUK started in the public sector. It still receives money from the Biomedical Research Centre at Guy’s and St Thomas’ NHS Foundation Trust and King’s College London, to make translational research benefit everyone, and its other funders – the Medical Research Council, Wellcome Trust, and the European Commission – are committed to the principles of open and equitable science. But with the turn towards ‘personalised’ interventions in nutrition, a fresh wave of transatlantic venture capital has become available to biomedical researchers who have access to people, resources, and data accumulated over years of state-funded work.

One facet of what Mark Fisher called ‘capitalist realism’ is the insistence that things are what they are and they cannot be another way. In biomedicine, this has affected the kinds of research that get funded and the corporate interests allowed to inform research, when and how. It is understandable that the microbiome that feeds you may be more worthy of research than the many that are not so financially nourishing. But who is keeping an eye on the opportunity costs?

Whose person is it anyway?

Scott Wark

16 July 2019


One of this project’s lines of inquiry is to ask who the “person” is in “personalisation”. This question raises others: Is personalisation actually more personal? Are personalised services about persons, or do they respond to other pressures? This question also resonates differently in the three different disciplines that we work in. In health, it might invoke the promise of more effective medicine. In data science, the problem of indexing data to persons. In digital culture, though, this tagline immediately invokes more sinister—or at least more ambiguous—scenarios, for me at least. When distributed online services are personalised, how are they using the “person” and to whose benefit? Put another way: Whose person is it anyway?

What got me thinking about these differences was a recently-released report on the use of facial recognition technologies by police forces in the United Kingdom. The Metropolitan Police in London have been conducting a series of trials in which this technology is deployed to assist in crime prevention. Other forces around the country, including South Wales and Leicester, have also deployed this technology. These trials have been contentious, leading to criticism by academics, rights groups, and even a lawsuit. As academics have noted elsewhere, these systems particularly struggle with people with darker skin, which they have difficulty processing and recognising. What it also got me thinking about was the different and often conflicting meanings of the “person” part of personalisation.

Facial recognition is a form of personalisation. It takes an image, either from a database—in the case of your Facebook photos—or from a video feed—the Met system is known as “Live Facial Recognition”—and processes it to link it to a profile. Online, this process makes it easier to tag photographs, though there are cases in which commercial facial recognition systems have used datasets of images extracted from public webpages to “train” their algorithms. The Live Facial Recognition trials are controversial because they’re seen as a form of “surveillance creep”, or a further intrusion of surveillance into our lives. Asking why is instructive.

The police claim that they are justified in using this technology because they operate it in public and because it will make the public safer. The risk that the algorithms underlying these systems might actually reproduce particular biases built into their datasets or exacerbate problems with accuracy around different skin tones challenges these claims. They’re also yet to be governed by adequate regulation. But these issues only partly explain why this technology has proven to be so controversial. Facial recognition technologies may also be controversial because they create a conflict between different conceptions of the “person” operating in different domains.

To get a little abstract for a moment, facial recognition technology creates an interface between different versions of our “person”. When we’re walking down the street, we’re in public. As more people should perhaps realise, we’re in public when we’re online, too. But the person I am on the street and the person I am online aren’t the same. And neither person is the same as the one the government constructs a profile of when I interact with it—when I’m taxed, say, or order a passport. The controversy surrounding facial recognition technology arises, I think, because it translates a data-driven form of image processing from one domain—online—to another: the street. It translates a form of indexing, or linking one kind of person to another, from the domain of digital culture into the domain of everyday life.

Suddenly, data processing techniques that I might be able to put up with in low-stakes, online situations in exchange for free access to a social media platform have their stakes raised. The kind of person I thought I could be on the street is overlaid by another: the kind of person I am when I’m interfacing with the government. If I think about it—and maybe not all of us will—this changes the relative anonymity I might otherwise have expected when I’m just another “person on the street”. This is made clear by the case of a man who was stopped during one facial recognition trial for attempting to hide his face from the cameras, ending up with a fine and his face in the press for his troubles. Whether or not I’m interfacing with the government, facial recognition means that the government is interfacing with me.

In the end, we might gloss the controversy created by facial recognition by saying this. We seem to have tacitly decided, as a society, to accept a little online tracking in exchange for access to different—even multiple—modes of personhood. Unlike online services, there’s no opt-out for facial recognition. Admittedly, the digital services we habitually use are so complicated and multiple that opting out of tracking is impracticable. But their complexity and the sheer weight of data that’s processed on the way to producing digital culture means that, in practice, it’s easy to go unnoticed online. We know we have to give up our data in this exchange. Public facial recognition is a form of surveillance creep and it has rightly alarmed rights organisations and privacy advocates. This is not only because we don’t want to be watched. After all, we consent to being watched online, having our data collected, in exchange for particular services. Rather, it’s because it produces a person who is me, but who isn’t mine. The why part of “Why am I being tracked?” merges with a “who”—both “Who is tracking me?” and “Who is being tracked? Which me?”

In writing this, I don’t mean to suggest that this abstract reflection is more incisive or important than other critiques of facial recognition technology. All I want to suggest is that recognising which “person” is operating in a particular domain can help us to get a better handle on these kinds of controversies. After all, some persons have much more freedom in public than others. Some are more likely to be targeted for the colour of their skin, how respectable they seem, how they talk, what they’re wearing, even how they walk. In the context of asking who the “person” is in “personalisation”, what this controversy shows us is that what “person” means is dependent not only on this question’s context, but also on the ends to which a “person” is put. Amongst other things, what’s at stake in technologies like these is the question of whose person a particular person is—particularly when it’s nominally mine.

The question, Whose person is it anyway?, is a defining one for digital culture. If recent public concern over privacy, data security, and anonymity teaches us anything, it’s that it’ll be a defining question for new health technologies and data science practices, too.

Tails you win

William Viney

13 May 2019

I came home from a trip to Italy one day having heard that my dear dog Wallace was gravely ill. He had an iron temperament – haughty and devious, a great dog but not much of a pet. He was my constant companion from the age of 10. By the time I was home that summer in 2003 he was already in the ground. The log we used to chain him to – the only way we could stop him running off – was already on the fire. He lived fast and died young. The cause of his death was uncertain, but it was likely connected to Wallace’s phenomenal appetite. Our farm dogs had carnivorous diets: canned meats and leftovers and dry food, all mixed together. But this was never enough for Wallace, who was a very hungry beagle, and who died after eating something truly gruesome on the farm. Pity Wallace, who died for the thing he loved.

While I was browsing Twitter a few weeks ago, a promoted ad appeared suggesting I should buy personalised dog food. I felt a familiar pang of sadness. True to the idea that any product can have the word ‘personalised’ attached to it, Tails.com have sought to personalise pet food – the stuff that is proverbially uniform, undifferentiated, derivative – with ingredients selected especially for your dog’s individual needs. Beyond the familiar platitudes, I wondered what is being ‘personalised’ when dog food is personalised: what is this product, and why is it being sold to me?

I don’t have a dog or anything else in the house that might eat dog food. I have the memory of a dog now dead for 15 years. Such is the informational asymmetry on social media platforms that I can guess, but I don’t really know, how Tails.com decided to spend money marketing their product on my Twitter feed. How had I been selected? Because I associated myself with the weird abundance of ‘doggo’ accounts? Surely something more sophisticated is needed than interacting with some canine-related content? But for a relatively new company like Tails.com, which now has Nestlé Purina Petcare as its majority shareholder, advertising to new customers is also a way of announcing themselves to investors and rivals, since their ads celebrate their innovation within a market – ‘the tailor-made dog food disrupting the industry’ – as well as promising products ‘as unique as your dog’. Whatever made me the ostensible target for this company’s product, the algorithmic trap was sprung from social media in order to ‘disrupt’ how you care for the animals in your home.

Tails.com provide personalised rather than customised products. The personalised object or experience is iterative and dynamic, it can be infinitely refined: personalisation seeks and develops a relationship with a person or group of persons; it may even develop the conditions for that group to join together and exist. Personalisation is primarily a process rather than a one-off event. A customised thing, by contrast, is singular and time-bound; it may have peers but it has no equal or sequel. So, many surgical interventions are individualised according to the person, but the patient usually hopes it’s a single treatment. Personalised medicine, on the other hand, is serial and data-driven; a testing infrastructure that recalibrates through each intervention, shaping relationships between different actors within a system. Tails.com sells dog food to dog owners. It does this by capturing and managing a relationship between dogs and owners, mediated by the processing of group and individual-level data. Such a system can be lifelong, informing not one but multiple interactions.

When debates continue to turn on the ethical uses of machine learning, its misrepresentations and its inherent biases, I am struck by how even critical voices seek adjustments and inclusions according to consumer rights: an approach that is happily adapted to capitalist prosumerism. ‘Personalise #metoo!’ To simply disregard Tails.com’s ads on Twitter as an intrusive failure of targeted marketing and personalisation may overlook a wider project that is harder to evaluate from an individual, rights-based, or anthropocentric perspective. The promise of disruption through personalised dog food tells us something about personalisation that stretches beyond transactions between company and client.

By personalising pet care, Tails.com seeks to enhance interactions between different ‘persons’, extending values of consumer preference and taste, satisfaction and brand loyalty with a blanket of anthropocentric ‘personhood’ to cover both the machines that market and deliver this product and the animal lives that we are told should benefit. No one asks the dog what it wants or needs. The whole system, from company to client and canine, is being personalised, but from a wholly human point of view. And yet, despite messages to the contrary, dogs probably don’t care that their food is ‘personalised’ in the way that Tails.com desire.

It’s not hard to imagine the kind of dog food customised to canine desires, the kind of foods that kill dogs like Wallace. I doubt, somehow, that Tails.com would like to facilitate this deathwish, since it would be a customised last supper rather than a personalised relation, sold over and over again.

This is a … toilet

Celia Lury

2 March 2019

In the project ‘People Like You’ we are interested in the creation and use of categories: from the making of natural kinds to what has been called dynamic nominalism, that is, the process in which the naming of categories gives opportunities for new kinds of people to emerge. And while the making of categories is often the prerogative of specialised experts, the last few years have seen a proliferation of categories associated with social, political and medical moves to go beyond the binaries of male/female and men/women. Emerging categories include: transgender, gender-neutral, intersex, gender-queer and non-binary.

The question of who gets included, who gets excluded and who belongs in categories is complicated, and depends in part on where the category has come from, who created it, who maintains it, who is conscripted into it, who needs to be included and who can avoid being categorised at all. Categories are rarely simply accepted; they need to be communicated, are frequently contested and may be rejected. There is a politics of representation in the acceptance – or not – of categories.

Take this example of a sign for a ‘gender-neutral’ toilet. Before I saw it, I knew what would be behind the door to which it was attached, since the building work associated with the conversion of men’s and women’s toilets into gender-neutral toilets had taken weeks. But when the building work was finished and I was confronted with this sign – marking the threshold into a new categorical space – I didn’t know whether to laugh or cry.

I am familiar, as no doubt you are too, with signs for what might now be called gender-biased toilets; that is, toilets for either men or women. Typically, the signs make use of pictograms of men or women, with the figure for ‘women’ most frequently distinguished from an apparently unclothed ‘man’ by the depiction of a skirt. Sometimes the signs also employ the words ‘men’ and ‘women’, or ‘gentlemen’ and ‘ladies’. But the need to signal to the viewer of the sign that they would be occupying a gender-neutral space on the other side of the door seemed to have floored the institution in which the toilet was located. The conventional iconography was, apparently, wanting. Perhaps it seemed impolitic – too difficult, imprudent or irresponsible – to represent a category of persons who are neither ‘men’ nor ‘women’. But in avoiding any representation of a person, in making use of the word and image of a toilet (which of course is avoided in the traditional iconography, presumably as being impolite if not impolitic), I couldn’t help but think that the sign was inviting me – if I was going to step behind the door – to identify, not with either the category ‘men’ or ‘women’, but with a toilet. The sign intrigued me. Why, I wondered, if it was considered so difficult to depict a gender-neutral person, not just make this difficulty visible once, and simply show either a pictogram of a toilet or the word ‘toilet’? Why ‘say’ toilet twice?

I recalled a work of art by the artist Magritte titled The Treachery of Images (1929). In this work, a carefully drawn pipe is accompanied by the words ‘Ceci n’est pas une pipe’, or ‘This is not a pipe’. Magritte himself is supposed to have said: ‘The famous pipe. How people reproached me for it! And yet, could you stuff my pipe? No, it’s just a representation, is it not? So if I had written on my picture “This is a pipe”, I’d have been lying!’ In an essay on this artwork (1983), Michel Foucault says the same thing differently: he observes that the word ‘Ceci’, or ‘This’, is (also) not a pipe. Foucault describes the logic at work in the artwork as that of a calligram, a diagram that ‘says things twice (when once would doubtless do)’ (Foucault 1983: 24). For Foucault, the calligram ‘shuffles what it says over what it shows to hide them from each other’, inaugurating ‘a play of transferences that run, proliferate, propagate, and correspond within the layout of the painting, affirming and representing nothing’ (1983: 49).

What, then, does the doubling of the gender-neutral door sign imply about the category of the gender-neutral? Perhaps there is a nostalgia for when there was a play of transferences, when the relations between appearance and reality could be – and were – continually contested. Perhaps, however, it is a new literalism, what Antoinette Rouvroy and Thomas Berns call ‘a-normative objectivity’ (2013). Then again (and is this my third or fourth attempt to work out why the sign made me want to laugh and cry?), perhaps there is also an invitation to call into existence ‘something’ – rather than the ‘nothing’ that Foucault celebrates – even if, for the category of the gender-neutral to come into existence, you have to (not) say something twice.

Bibliography:

  • Foucault, M. (1983) This is Not a Pipe, translated and edited by J. Harkness, Berkeley and Los Angeles: University of California Press.
  • Rouvroy, A. and Berns, T. (2013) ‘Algorithmic governmentality and prospects of emancipation’, Réseaux, 2013/1, no. 177, pp. 163-196, translated by Elizabeth Libbrecht.

Data Portraits

Fiona Johnstone

13 February 2019

One of the aims of People Like You is to understand how people relate to their data and its representations. Scott Wark has recently written about ‘data selves’ for this blog; an alternative (and interconnected) way of thinking about persons and their data is through the phenomenon of the data portrait.

A quick Google of ‘data portraits’ will take you to a website where you can purchase a bespoke data portrait derived from your digital footprint. Web-crawler software tracks and maps the links within a given URL; the information is then plotted onto a force-directed graph and turned into an aesthetically pleasing (but essentially unrevealing) image. Drawing on a similar concept, Jason Salavon’s Spigot (Babbling Self-Portrait) (2010) visualises the artist’s Google search history, displaying the data on multiple screens in two different ways; one using words and dates, the other as abstract bands of fluctuating colour. The designation of the work as a self-portrait raises interesting questions about agency and intentionality in relation to one’s digital trace: as well as referring to identities knowingly curated via social media profiles or personal websites, the data portrait can also suggest a shadowy alter-ego that is not necessarily of our own making.

Erica Scourti’s practice interrogates the complex interactions between the subject and their digital double: her video work Life in AdWords (2012-13) is based on a year-long project where Scourti regularly emailed her personal diary to her Gmail account, and then performed to webcam the list of suggested ad-words that each entry generated. A ‘traditional’ portrait in the physiognomic sense (formally, it consists of a series of head-and-shoulders shots of the artist speaking directly to camera), Life in AdWords is also a portrait of the supplementary self that is created by algorithmically generated, ‘personalised’ marketing processes. Pushing her investigation further, Scourti’s paperback book The Outage (2014) is a ghost-written memoir based on the artist’s digital footprint: whilst the online data is the starting point, the shift from the digital to the analogue allows the artist to probe the gaps between the original ‘subject’ of the data and the uncanny doppelgänger that emerges through the process of the interpretation and materialisation of that information in the medium of the printed book.

Other artists explore the implications of representation via physical tracking technologies. Between 2010 and 2015, Susan Morris wore an Actiwatch, a personal health device that registers the body’s movement. At the end of each year she sent the data to a factory in Belgium, where it was translated into coloured threads and woven into a tapestry on a Jacquard loom (a piece of technology that was the inspiration for Babbage’s computer), producing a minute-by-minute data visualisation of her activity over the course of that year. Unlike screen-based visualisations, the tapestries are highly material entities that are both physically imposing (SunDial:NightWatch_Activity and Light 2010-2012 (Tilburg Version) is almost six metres long) and extremely intimate, with disruptions in Morris’s daily routine clearly observable. Morris was attracted to the Actiwatch for its ability to collect data not only during motion, but also when the body is at rest; the information collected during sleep – represented by dark areas on the canvas – suggests an unconscious realm of the self that is both opaque and yet quantifiable.

Susan Morris, SunDial:NightWatch_Activity and Light 2010-2012 (Tilburg Version), 2014. Jacquard tapestry: silk and linen yarns, 155 x 589cm.  © Susan Morris.

Katy Connor is similarly interested in the tensions between the digital and material body. Using a sample of her own blood as a starting point, Connor translates this biomaterial through the scientific data visualisation process of Atomic Force Microscopy (AFM), which images, measures and manipulates matter at the nanoscale. Through Connor’s practice, this micro-data is transformed into large 3D sculptures that resemble sublime landscapes of epic proportions.

Katy Connor, Zero Landscape (installation detail), 2016.
Nylon 12 sculpture against large-scale risograph (3m x 12m); translation of AFM data from the artist’s blood.  © Katy Connor.

One strand of the People Like You project focuses particularly on how people relate to their medical data. Tom Corby was diagnosed with Multiple Myeloma in 2013, and in response began the project Blood and Bones, a platform for the data generated by his illness. The information includes the medical (full blood count / proteins / urea, electrolytes and creatinine); the affective (mood, control index, physical discomfort index, stoicism index, and a ‘hat track’ documenting his headwear for the duration of the project); and financial data (detailing the costs to the NHS of his treatment). Applying methods from data science to the genre of illness blogging, Corby’s project is an attempt to take ownership of his data creatively, and thus to regain a measure of control over living with disease.

In the final pages of his influential (although now rather dated) book, Portraiture, the art historian Richard Brilliant envisaged a dystopian future where the existence of portraiture (as mimetic ‘likeness’) is threatened by ‘actuarial files, stored in some omniscient computer, ready to spew forth a different kind of personal profile, beginning with one’s Social Security number’ (Brilliant 1991). Brilliant locates the implicit humanism of the portrait ‘proper’ in opposition to a dark Orwellian vision of the individual reduced to data. Writing in 1991, Brilliant could not have foreseen the ways in which future technologies would affect ideas about identity and personhood; comprehending how these technologies are reshaping concepts of the person today is one of the aims of People Like You.

Sophie Day

14 January 2019

Our series was formally launched with introductions from Kelly Gleason, Cancer Research UK senior research nurse, and Iain McNeish, Head of Division, Cancer (both at Imperial College London & Imperial College Healthcare NHS Trust). Later we heard from Adam Taylor (National Physical Laboratory) about the work of the Rosetta Team, led by Josephine Bunch, which is funded through the first round of CRUK Grand Challenges to map cancer and improve our understanding of tumour metabolism (https://www.cancerresearchuk.org/funding-for-researchers/how-we-deliver-research/grand-challenge-award/funded-teams-bunch).

To begin with, we learned about the breakthrough presented by tamoxifen in the development of personalised cancer medicine before hearing more about the infinite complexity of cancer biology. Twenty years ago, treatments were given to everyone with an anatomically defined cancer. This was frustrating since staff knew from experience that the treatment wouldn’t work for most people and many patients were disappointed. The introduction of tamoxifen led to stratification based on a common oestrogen receptor. Later, in ovarian cancer, it became clear that PARP inhibitors could be used successfully on approximately 20% of patients, who had inherited particular susceptibilities (in BRCA-1 and BRCA-2). Nonetheless, sub-group or stratified medicine is a long way from the goal of delivering unique treatment to everyone’s unique cancer.

This complexity is clear from the preliminary application of a range of integrated techniques by physicists, chemists and biologists in the Rosetta Team, as Adam then explained. Collaborators map and visualise tumours as a whole in their particular environments along with their constituents down to the level of individual molecules in cells. In combination, these measures give both a detailed picture of different tumour regions and a holistic overview. Amongst the many techniques are AI methods that we have encountered through Amazon or Tesco platforms, which find patterns by reducing complexity. For example, in one application of varied mass spectrometry techniques, 4,000 variables are reduced to three coloured axes that label different chemical patterns. You can find regions of similarity in the data by colour coding, and explore their molecular characteristics.

Amazon has applied non-negative matrix factorisation to predict how likely we are to buy a particular item once we have bought another specific item. A similar approach enabled McNeish’s group to find patterns among samples of ovarian cancer that had all looked different. The team traced 7 patterns driven by 7 mechanisms among these samples.
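The pattern-finding idea described here can be sketched in a few lines of code. The following is a minimal, illustrative implementation of non-negative matrix factorisation using the well-known Lee and Seung multiplicative update rules; the “customers × items” matrix, its dimensions, and the choice of rank are invented for the example, and bear no relation to Amazon’s actual systems or the McNeish group’s data.

```python
import numpy as np

# Toy purchase matrix: rows are customers, columns are items.
# The values are random and purely illustrative.
rng = np.random.default_rng(0)
V = rng.random((20, 12))

def nmf(V, k=3, iters=200, eps=1e-9):
    """Factorise V ~ W @ H with non-negative W and H, using
    multiplicative updates (Lee & Seung). k is the number of
    latent 'patterns' sought in the data."""
    n, m = V.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update the patterns
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update the weights
    return W, H

W, H = nmf(V)
# Each row of H is a 'pattern' over items; each row of W records how
# strongly a given customer expresses each pattern.
error = np.linalg.norm(V - W @ H)
```

Because every entry of W and H stays non-negative, the factors can be read additively: each sample is a mixture of a small number of patterns. This is what makes the same idea transferable from product recommendation to finding a handful of recurring mechanisms among cancer samples that all looked different.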

Embedded in the study of cancer’s biology and chemistry, data scientists ‘know that these are not just numbers. They know where the numbers come from and the biological and technical effects of these numbers.’ Non-linear methods such as t-SNE (t-distributed stochastic neighbour embedding) help in the analysis of very large data sets. Neural networks have also been developed for use in a hybrid approach: a random selection of data is analysed with t-SNE to provide a training set for neural network applications, which are then validated using t-SNE methods on another randomly selected chunk of data.

This approach combines fine-grained detail with broad pattern recognition in different aspects of tumour metabolism. It might lead to the development of a ‘spectral signature’ to read the combined signature of thousands of molecules at diagnosis.

At the end of the evening, most of us revealed anxieties about the attribution of a wholly singular status through personalising practices. Those affected by cancer wanted the ‘right’ treatment for them but we were reassured by the recognition that we also share features with other people. We appreciated the sense of combining and shifting between the ‘close up’, which renders us unique, and a more distant view, where we share a great deal with others.

Many thanks to Maggie’s West London for their hospitality.

You and Your (Data) Self

Scott Wark

2 January 2019

You might have seen these adverts on the TV or on a billboard: a man and his doppelgänger, one looking buttoned up and neat and the other, somehow cooler. “Meet your Data Self”, says the poster advert on the tube station wall I often stare at when I’m waiting for the next train. In smaller type, it explains: “Your Data Self is the version of you that companies see when you apply for things like credit cards, loans and mortgages”. And then: “You two should get acquainted”.

This advert has bothered me for quite a while. I’m sure that’s partially intentional—whether I find it funny or whether I find it irritating, its goal is to make the brand it’s advertising, Experian PLC, stick in my mind. I find the actor who plays this everyman and his double, Marcus Brigstocke, annoying—score one to the advert. Beyond Brigstocke’s cocked brow, what bothers me is that this advert raises far more questions than it answers.

Who is this “Data Self” it’s telling me to get acquainted with? Is this person really like me, only less presentable? What impact does this other me have on what the actual me can do? And—this question might come across as a little odd—who does this other me belong to?

Experian is a Credit Reference Agency, so presumably the other ‘me’ is a representation of my financial history: how good I am at paying my bills on time; whether I’ve been knocked back for a credit card or overdraft; even if I’ve been checking my credit history a lot lately, which might come across as suspicious. Banks, credit card companies, phone companies, car dealers—anyone who might extend you credit so you can get a loan or pay something off over time will check in with agencies like Experian to see if you’re a responsible person to lend to.

As a recently-finished PhD student, I’ve no doubt that my other me is not so presentable, to use the visual metaphor presented by this advert’s actor/doppelgänger. A company like Experian might advise another company, like a bank, to not front me money for the long summer holiday I’m dreaming of taking to Northern Italy as I wait for the next packed tube. This “me” might not be trustworthy. Or, to put it another way, this “me” might not indicate trustworthiness.

The point of this advert is to get me to order a credit report from Experian so that I can understand my credit history and so that I can build it up or make it better. This service is central to the contemporary finance industry, which has to weigh the risk of lending money or extending credit to someone like me against the reward they get when I pay it back. If I want to be a better me, it suggests, I ought to get better acquainted with myself—or rather, my data self. If I want that holiday, its visual metaphor suggests, I’d better straighten my data self’s tie.

There’s lots more that might be said about how credit agencies inform the choices we can make and handle our data. One of the more straightforward comments we might make about them is also one that interests us most: This other, data “me” isn’t me. This is perhaps obvious—the advert’s doppelgänger is a metaphor, after all. It’s a person like me, it’s constructed from data about me, and it influences my life, but it’s not me. But this also means that this other, data “me” isn’t mine.

This advert presents just one example of the many data selves produced when we consciously or inadvertently give up our data to other companies. In this case, we agree to our data being passed on to credit rating agencies like Experian every time we get given credit. What’s interesting about this data self is that whilst it isn’t you, it has an effect on a future version of you—in my offhand example, a you who might be holidaying in Italy; or, more problematically, a you who might need an overdraft to make ends meet month-to-month. To riff on our project’s title, these data selves are, quite literally, people like you. They might not be you, but they have a real effect on your life.

We need to do a lot more work researching who these datafied versions of ourselves actually are and what effect they have on being a person in our big data present. As Experian point out in another campaign fronted by food writer and austerity campaigner Jack Monroe, several million U.K. residents are “invisible” to the country’s financial services because they don’t have a credit profile. Conversely, we might ask, what does it mean to be a person in our big data present if who we are is judged on our data doppelgängers? What does it mean when my other “me” isn’t mine—when it’s opaque, confusing, and sold to me as a service?

Countless other digital platforms and services create both fleeting and lasting “data selves” that are used to try to sell us products, for instance, or to better tailor services to our needs. This process is called “personalisation”. One of the things we want to ask as part of our research project is this: who are we when who we are is determined by who we are like? Credit Reference Agencies and the “data selves” they produce make this tangled question tangible, but it applies to many other areas of contemporary life—from finance to medicine, from our participation in digital culture to our status as individuals, actors, citizens, and members of populations. This question raises others about what it means to be a “me” in the present. These are the questions, I think, that bind this project together.

For more information about Credit Reference Agencies, see the Information Commissioner’s Office information page.

What is Personalisation?

William Viney

26 November 2018

Personalisation is at once ubiquitous in contemporary life and a master of disguise. Its complexity hides in plain sight. Personalisation may mean tailoring products and services to ideas of individual demand, but it also means much more than this. Personalisation connects diverse practices and industries such as finance and marketing, medicine and online retail. But it also goes by many aliases – patient-centred, user-oriented, stratified and segmented – in ways that can make it hard to follow. It’s not always clear what personalised products and services have in common.

The ‘People Like You’ project does not shy away from this diversity. It works across the fields of medicine, data science, and digital culture to understand the differences in each of these domains, as well as how people and practices work across them. One challenge of understanding emerging practices that are forming within and between particular industries is that histories of personalisation may be contested, sensitive, or rapidly developing. We want to find ways to explore different meanings of the term ‘personalisation’ in the United Kingdom, among people from different working backgrounds: academic and commercial scientists in the biomedical, biotechnology and pharmacology sectors; public policy; advertising and public relations; communications; logistics; financial analysis. So we have designed a study that might be the first of its kind in the UK – an oral history of personalisation.

The ‘What is Personalisation?’ study uses stakeholder interviews to establish how and why each industry personalises, and with what techniques of categorisation, monitoring, tracking, testing, retesting, aggregation and individuation. These interviews are in-depth and semi-structured. They usually last an hour or more. Interviews allow us an opportunity to understand how a particular individual views their work, industry, profession or experience.

A wide range of policy makers, activists, scientists, technologists, and healthcare professionals have already participated, detailing how they see the emergence of personalisation affecting their lives. Striking themes have revealed just some of the connective aspects of personalised culture: the links between standardisation, promise and failure; how languages of democratic and commercial empowerment contest state, regulatory, or market power; how products or services can treat prototyping as a continuous process; the influence of management and design consultancies; and the way mobile technologies interpret data in real time to produce ‘unique’ experiences for users. These are just some of the ideas that we have talked about during our interviews. We also get to discuss when and how these ideas emerged and became popular in a given industry, field or policy area.

The connections that can be made across different fields, practices, or industries can be contrasted with the highly specific emergence of personalisation in some areas. For instance, the special confluence of disability and consumer rights activism that formed alongside and, at times, in opposition to deregulation in healthcare systems in the late 1980s created individual (later personalised) health budgets, now an important policy instrument used by the National Health Service’s personalised care services. The challenge is to understand the historical and social formation of a particular patch in personalisation’s history, with its various actors and networks, and to recognise adjacent and comparable developments. We are doing this whilst recognising broader patterns that are germane to other contemporary figures of personalisation. One of these may be the specific inclusion and exclusion factors that prevent a personalised service from becoming a mass standardised service. Another is whether personalisation is being heralded as a success or as a response to failure – not the best of all available options but an alternative to foregone possibilities.

Our work takes patience and a lot of help from those who are passionate experts in their field. If you feel you have an experience of personalisation that would make an important contribution to this study then please get in touch with William Viney (w.viney@gold.ac.uk).