Personalisation in the Expanded Field

Scott Wark

2 March 2022


Over the past few years, the People Like You (PLY) project has addressed a few recurring themes. One that I’ve been particularly interested in is what I think of as the paradox of personalisation.

A lot of what we think of as personalisation rests on increasingly sophisticated data processing techniques. Take personalised online advertising, for example. When an ad pops up in one of my social media feeds, it’s not that it’s tailored specifically to me.

Rather, based on preferences I’ve expressed through actions I’ve taken online (“liking”) and preferences I share with other people who I am like (“likeness”), this kind of advertising works by inferring that because people like me have liked a particular thing, I might, too.

Often, what makes this targeting precise – or, at least, seem precise – are techniques that constantly refine the sorting and categorising mechanisms involved in personalisation, or that combine multiple categories to generate new targets.

The paradox of personalisation is this: what personalisation targets isn’t necessarily me, but the category or categories to which it has inferred that I belong.
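To make this “liking/likeness” logic concrete, here is a minimal sketch in Python. It is not how any real ad platform works – the users, items, and overlap threshold are all invented for illustration – but it captures the inference described above: show me what people who share my likes have liked.

```python
# A minimal, hypothetical sketch of "liking/likeness" targeting.
# The users, items, and threshold below are invented for illustration only.

from collections import defaultdict

# Preferences expressed through actions taken online ("liking").
likes = {
    "me":     {"hiking", "espresso", "jazz"},
    "user_a": {"hiking", "espresso", "camping_gear"},
    "user_b": {"jazz", "espresso", "camping_gear"},
    "user_c": {"knitting", "opera"},
}

def people_like(target, likes, min_overlap=2):
    """People the target is 'like': anyone sharing at least min_overlap likes."""
    return [
        user for user, items in likes.items()
        if user != target and len(items & likes[target]) >= min_overlap
    ]

def inferred_interests(target, likes):
    """Things the target hasn't liked yet, but that people like them have liked."""
    counts = defaultdict(int)
    for user in people_like(target, likes):
        for item in likes[user] - likes[target]:
            counts[item] += 1
    return sorted(counts, key=counts.get, reverse=True)

print(inferred_interests("me", likes))  # ['camping_gear']
```

Nothing in the sketch refers to me as an individual: the output is determined entirely by the overlap between my likes and other people’s.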

I’ve found this paradox useful to think with, for two main reasons:

First, in general terms, it helps dispel some of the hype surrounding personalisation. What makes new personalising techniques novel is not a magical ability to figure out who an individual is, but an increasingly sophisticated ability to categorise. So, what we’re talking about when we talk about “personalisation” is often, in fact, categorisation.

Second, it has helped me to account for why, for all its promises to address us as individuals, personalisation might nevertheless produce detrimental or discriminatory outcomes for particular kinds of individual – namely, people who already suffer from other forms of discrimination.

With my colleague Thao Phan, I’ve published research on how personalisation’s novel means of categorising produces new forms of racialisation. Instead of categorising people based on how they look, personalising techniques might instead categorise people based on their preferences – such as their interest in particular language groups, foods, cultural practices, or hobbies. In the case study we used to explain this process, Facebook called these categories “ethnic affinities.”

These interests, we argue, can be used as proxy markers for race. Indeed, we show how they have been used to exclude people categorised as having an “affinity” for a particular ethnic grouping from access to basic services, such as housing and employment.

Thao and I are interested in quite abstract questions about the relationship between data and discrimination in an age of personalisation. One of our claims is that the capacity to process large amounts of data about individuals changes the very nature of categories like race, transforming race from a visual marker – I look different, therefore I must be like other people who look different – to a behavioural one: I prefer different things, therefore I must be like people who prefer the same things.
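A toy calculation can make the “proxy marker” argument concrete. The sketch below, again in Python and with entirely invented figures, shows how a preference-based category can end up standing in for a group identity that is never recorded anywhere in the data: excluding the category excludes the group.

```python
# A toy illustration of preference categories acting as proxy markers.
# The population, category names, and proportions are all invented.

population = [
    # (inferred_preference_category, self_identified_group)
    *[("affinity_x", "group_a")] * 80,
    *[("affinity_x", "group_b")] * 20,
    *[("no_affinity", "group_a")] * 10,
    *[("no_affinity", "group_b")] * 90,
]

def share_of_group(pairs, category, group):
    """Share of people assigned to `category` who belong to `group`."""
    in_category = [g for c, g in pairs if c == category]
    return sum(1 for g in in_category if g == group) / len(in_category)

# The category never mentions group_a, but excluding "affinity_x" from an
# audience removes a population that is 80% group_a in this invented data.
print(share_of_group(population, "affinity_x", "group_a"))   # 0.8
print(share_of_group(population, "no_affinity", "group_a"))  # 0.1
```

In other words, a behavioural category can reproduce a visual one without ever naming it, which is precisely what makes it a proxy.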

But there are takeaways from this research for activists and policymakers, too.

Amongst people working on data processing and inequality, it’s become increasingly accepted that large-scale data processing systems create unequal outcomes because they’re fed with data produced in unequal circumstances. While this may often be true, it also lets the systems that process our data off the hook. Unequal outcomes – inequality that takes the form of racialised, differential access to services or resources – can also be produced by the systems that process otherwise-neutral data.

Focusing on the paradox of personalisation allowed us to reach this conclusion, because it helped us to grasp what personalisation does in practice. In practice, personalisation doesn’t just address people as individuals; it addresses them as individuals who are first categorised and sorted into groups.

Studying how such groups are formed and, prior to this, how processes of categorisation work, is arguably one of the keys to understanding what personalisation is and what its broader social and political impacts are – not just in digital culture, but in general.

Of course, this broadens the scope of what the study of personalisation could entail.

Over the past few months, I’ve been developing my research into personalisation in a direction that, on the face of things, doesn’t seem to have much to do with personalisation at all – but which, I think, has the potential to illuminate the broader social and cultural dynamics in which personalisation is embroiled today.

The aim of this new line of research, which I’ve been developing in dialogue with collaborators within and beyond academia,[1] is to investigate the emergence of a very specific collective term in the United Kingdom: “East and Southeast Asian” (ESEA).

At heart, ESEA is part of a burgeoning social movement that’s emerged in response to racism suffered by East and Southeast Asian people during the COVID-19 pandemic. Paraphrasing Diana Yeh, who’s been working on this topic for a while, ESEA is a political project. It’s the product of a political movement that’s been devised to mobilise a broad coalition of people against racist violence. It is, admittedly, an ambiguous and sometimes even fraught term – that’s its strength.

This collective term holds both academic and personal interest for me.

It’s of academic interest, because it offers us a way of studying the kinds of personalisation I previously looked at with Thao – from the other side.

The basic question that’s been driving my research for PLY over the last few years is this: how does the categorisation that’s so essential to personalisation actually work? What are its mechanisms, and how have new data processing techniques changed these mechanisms?

Using a combination of digital methods and social and media theory, the research I’m doing into ESEA looks at this term as something that’s produced by people involved in specific social and political movements, while also being shaped by the digital technologies these movements use to build their membership and raise public awareness.

Paraphrasing Susan Leigh Star and Geoffrey Bowker, one of the best ways to understand how categories work is to analyse what they leave out. ESEA has been created by a community of people who feel unrepresented by the existing institutional language available to them. With ESEA, then, we see the emergence of a term that, amongst other things, responds to a social, political, and institutional absence. And this can tell us a lot about how people get included or excluded from the systems of categorisation that underpin personalisation.

But ESEA is of deep personal interest to me, too.

Half of my family is from Southeast Asia. I’ve always thought of myself as an amalgam of Australia (where I was born) and Southeast Asia (my mother is from Malaysia). To use a little online-cultural argot, when I first heard about this term and the campaigns around it, I felt seen.

At PLY, we often joke about putting the “person” back into “personalisation.” With this project, then, I’m taking this phrase quite literally.

I think we can study personalisation by looking at what it excludes or who it leaves out. Indeed, to push the logic of this by-now-standard critical social science approach further, I also think we can study personalisation by looking at the contested processes by which people who feel excluded make themselves included – or, the processes by which they make society, culture, and politics feel personalised rather than depersonalising.

This brings us back to where we started, with the paradox of personalisation I introduced above. The (paradoxical) logic of thinking personalisation through techniques of categorisation has specific applications, as in Thao’s and my research into ethnic affinities. But it can also be generalised.

Call it personalisation in the expanded field. By studying the ways in which individuals are categorised – and, indeed, how some people are included in certain categories while others are excluded from them – we learn not only about how personalisation works or how categories are made, but about how they’re made to be lived in and lived with. We can start to think about what values get embedded into personalisation by way of its categories. Perhaps, too, we might begin to think these values otherwise.

 

[1] These include academic partners – Jonathan Gray of KCL/Public Data Lab and Wing-Fai Leung of KCL – and third-sector organisations – besea.n (Britain’s East and Southeast Asian Network) and EVR (End Violence and Racism Against East and Southeast Asian Communities).