Scott Wark

6 August 2021

On the 10th and 11th of June, People Like You held a workshop. Its ostensible aim was to provide our principal and co-investigators, Sophie Day, Celia Lury, and Helen Ward, with a forum to present an early draft of the book, the culmination of our project. So, we invited a group of people who have inspired our thinking about personalisation to work through some of its key ideas. Presenters either responded directly to a paper outlining this book, or talked about related topics. We thought we’d get to enjoy two days of smart and incisive reflections. What we got exceeded all of our expectations.

Presenters and respondents, Day 1

Top (L-R): Sophie Day, Helen Ward, Martín Tironi; Middle (L-R): M. Murphy, Dominique Cardon, Louise Amoore; Bottom: Celia Lury

Day, Lury, and Ward opened proceedings. Synthesising our research into personalisation in three broad fields – health care, digital culture, and data science – their presentation proposed that the emergence of a “ubiquitous culture of personalisation” has spurred the development of what they call a “new political arithmetic.” This phrase – a play on William Petty’s late-17th-century phrase for using statistical techniques to govern a polity – is designed to draw attention to changes personalisation is effecting at scale. By inviting participation, using preferences and likeness to produce precise categorisations or classifications, and dynamically testing these in order to predict outcomes, personalisation, they argue, institutes a “distributive logic”: it sorts people and things and allocates resources.

Hence “political arithmetic.” But why “new”? What’s novel about personalisation, they argue, is that its processes – increasingly reliant on big data – change how we’re able to predict, and therefore intervene in, the future. Its distributive logic plays out through continuous testing and re-testing. If a particular manner of ordering doesn’t fit a particular set of persons or things, try another. The result is a capacity to intervene in futures before they emerge. So, ubiquitous personalisation might be changing what we know as prediction, as testing and retesting engineer what they call a “continuous present.”

The first part of the workshop rounded out with a series of invited responses. First up was Martín Tironi, who’s collaborating on our ‘Algorithmic Identities’ project. For Tironi, Day, Lury, and Ward’s proposition that ubiquitous personalisation constitutes a “new political arithmetic” suggests that personalisation not only institutes what we’ve previously called a “mode of individuation,” but a “mode of configuration, action, [and] distribution of the social.” Tironi left us with a pair of pertinent questions that resonate with his research into smart cities. Does this “political arithmetic” open a sphere of “play” in which individuals can exercise agency? And how does it account for “nonhuman” participants in distributed reproduction?

Next was M. Murphy, but I want to return to their response at the end. Our third respondent, Dominique Cardon, suggested that our proposal could be construed as an answer to a classical sociological question, namely, “what is society?” He noted that those of us working on such questions need to avoid the tendency to “reify” a past society governed by demographic categories – blunt techniques of segmentation – when attempting to establish the novelty of our proposed “new” arithmetic. Nevertheless, he also suggested that what we’re describing fits with modes of social ordering that move from “a world of discrete categories to a world of emergent categories.” The question he had for us was this: does ubiquitous personalisation institute a shift from a “government of causes to one of effects”?

Our final respondent, Louise Amoore, invoked an alternative to our three p’s – participation, precision, and prediction – in the form of three a’s: address, accuracy, and – with a little gerrymandering – what we’ll call arithmetic. In order to secure participation, she noted, computational techniques rely on the capacity to address targets of personalisation. Citing Lorraine Daston and Samuel Weber, she noted that using data to target interventions relies on degrees of tolerance for interventions’ accuracy. Finally, she noted that there is a politics of “sovereign knowledge” involved in any political arithmetic. Invoking the term’s colonial legacies, she asked us to consider how personalisation orders people according not only to likeness, but also unlikeness. That is, how could our project’s titular aim – thinking through how personalisation assembles “people like you” – account for people you are not like?

Part 2 of the workshop involved a series of presentations by people whose work we draw upon. First was Cori Hayden, who presented on generic medicine. Reflecting on the emergence of “similares” in Mexico – that is, generic drugs and associated clinics – Hayden’s presentation drew out the complex relationship between personalisation and state provision. In Mexico, the emergence of “similares” fills a niche left open by the high price of branded medicine and state medical clinics. Ironically, “simi” clinics offer cheaper, more accessible care that can feel more personalised. In the populism mobilised by “simi,” we see personalisation playing out in the name of the generic – and what Hayden called the “politicisation of similitudes.” By this, she means a politics of health in which the “generic” is not counterposed to difference and variety, but seems to incorporate it. Her “generic” contains the multitude in its (dis)similarity.

Hayden’s presentation rounded out the first, packed day. Day 2 began with Emily Rosamond, who presented work from her forthcoming book on what she calls “reputational warfare.” Her basic question is this: how ought we to conceptualise the value of reputation in online spaces? Rather than thinking it through participatory culture, labour, microcelebrity, or wages, she proposed understanding reputation as an asset. So, she asked, how do personalities on platforms – like YouTube – assetise themselves? Conversely, how do platforms turn a collection of personalities into a “hedged portfolio”? Assetisation highlights the tension inherent in the idea that one’s personality is “inalienable” yet also “assetisable.” Insofar as personalisation techniques are “precise but not accurate,” personality ought to be understood as a lure for participation. For us, this was a necessary and productive adjunct to our proposal.

In a stroke of programming serendipity, Rosamond was followed by Fabian Muniesa, whose work on assetisation informs both hers and ours. Rather than talking about assetisation, Muniesa outlined fresh thinking he’d been doing on “paranoia.” Platforms, he argued, engender a paranoiac conception of media. Confronted by targeting and prediction, we find ourselves constantly asking how – how does a particular address construe me as someone who will want to use a particular service or purchase a particular product? How does it conceive of me as this me, and not another? For him, paranoia is the prevailing psychological response we have collectively adopted to a hyper-mediated world in which desire is always mediated by value. What he maps, I think, is an emergent politics of personalisation for a world of increasingly automated hyper-connection.

Dominique Cardon returned to present new research he’d been doing with his colleague, Jean-Marie John Matthews, on the impact machine learning has on the ordering of society. To make their argument, they invoked Luc Boltanski’s concept of the “reality test.” For Boltanski, reality isn’t a given, but is secured by the institutions which order society. This ordering of reality is not fixed, but is frequently “tested” by those who are subject to it. With the proliferation of machine learning, they proposed, the nature of this process of testing reality might be changing. By instituting systems in which reality is subject to constant testing in order to determine what possible arrangement of people or things might achieve a pre-given outcome, machine learning is creating a situation in which tests lose touch with reality. The conclusions they drew in their work resonated strongly with ours. Because such tests are no longer deductive – that is, no longer proceed by testing a hypothesis – society comes to be ordered by “complex domination”: a process that seeks out the distribution of relations best suited to a particular outcome and, in doing so, collapses future into present and possibility into probability.

The last presentation in Part 2 was by Amoore. Like Cardon and John Matthews, Amoore was interested in asking how machine learning – specifically, “deep learning” – may be reordering what politics can be. If machine learning institutes techniques of prediction that foreclose potential futures by attempting to configure relations to achieve predetermined outcomes, what does this do to politics? To tease out a response to this question, Amoore focused on the “function”: an algorithmic operation of optimising by mapping an input onto an output. For Amoore, the product of a capacity to start with an outcome and “retroactively” design a system to make it possible is a situation in which one no longer looks for solutions to problems, but for problems that fulfil a particular predetermined solution. Like Cardon and John Matthews, Amoore sees in these techniques a proliferation of testing. But she also sees a situation that undercuts our capacity to advocate for better futures. If this is one way of defining politics, she asked, how can machine learning become a tool for opening up futures instead of closing them down? How might we even begin to think incommensurability in and through models that are designed to render all difference commensurable?

The day wrapped up with a summary of the workshop as a whole by Penny Harvey. Harvey drew a number of useful connections between our work on personalisation and prior research on topics like topology. In our conception, she noted, personalisation has a topological bent. But I want to end by reflecting on commensurability and incommensurability, because Amoore’s parting question brings us back to M. Murphy’s response to the paper Day, Lury, and Ward gave to open our workshop.

We call the result of ubiquitous personalisation a “new political arithmetic.” This concept incited Murphy to mount a “decolonial” and “queer” pushback. If we posit this “arithmetic” as a “social analytic,” perhaps we need to be willing to question – to “puncture” – such “analytic phantasms.” For me, they outlined what’s really at stake in this “new political arithmetic.” If society is ordered in this way, what becomes of other worlds? Does our conception of society in the wake of ubiquitous personalisation leave space for thinking their existence or persistence outside of a new “political arithmetic” – and can we imagine a different mathematics that would make decolonial and/or queer worlds possible?

Speaking for myself instead of the project, I would affirm that this is one of the most crucial questions personalisation raises. But I’d reformulate it in terms closer to Amoore’s than Murphy’s. One might indeed ask if this conception of society forecloses “worlds.” I think we get more purchase on the present if we ask how a new “political arithmetic” reconfigures politics. Does the analytic we describe give us the conceptual means to plot our way out of this ordering of society and towards alternate political configurations? Perhaps. Does it need to leave analytical space for an outside in order to do so? Perhaps not – but then, what this “new political arithmetic” might describe is contemporary politics’ central site of contestation: not only who gets to make futures, but who controls their terms.