One of this project’s lines of inquiry is to ask who the “person” is in “personalisation”. This question raises others: Is personalisation actually more personal? Are personalised services about persons, or do they respond to other pressures? The question also resonates differently across the three disciplines we work in. In health, it might invoke the promise of more effective medicine. In data science, the problem of indexing data to persons. In digital culture, though, it immediately invokes more sinister, or at least more ambiguous, scenarios, for me at least. When distributed online services are personalised, how are they using the “person” and to whose benefit? Put another way: Whose person is it anyway?
What got me thinking about these differences was a recently released report on the use of facial recognition technologies by police forces in the United Kingdom. The Metropolitan Police in London have been conducting a series of trials in which this technology is deployed to assist in crime prevention. Other forces around the country, including South Wales and Leicestershire, have also deployed it. These trials have been contentious, drawing criticism from academics and rights groups, and even a lawsuit. As academics have noted elsewhere, these systems struggle particularly with people with darker skin, whose faces they have difficulty processing and recognising. The report also got me thinking about the different and often conflicting meanings of the “person” part of personalisation.
Facial recognition is a form of personalisation. It takes an image, whether from a database (in the case of your Facebook photos) or from a video feed (the Met system is known as “Live Facial Recognition”), and processes it to link it to a profile. Online, this process makes it easier to tag photographs, though there are cases in which commercial facial recognition systems have used datasets of images extracted from public webpages to “train” their algorithms. The Live Facial Recognition trials are controversial because they’re seen as a form of “surveillance creep”, or a further intrusion of surveillance into our lives. Asking why is instructive.
The police claim that they are justified in using this technology because they operate it in public and because it will make the public safer. The risk that the algorithms underlying these systems might reproduce biases built into their datasets, or exacerbate problems with accuracy across different skin tones, challenges these claims. These systems are also yet to be governed by adequate regulation. But these issues only partly explain why this technology has proven so controversial. Facial recognition technologies may also be controversial because they create a conflict between different conceptions of the “person” operating in different domains.
To get a little abstract for a moment, facial recognition technology creates an interface between different versions of our “person”. When we’re walking down the street, we’re in public. As more people should perhaps realise, we’re in public when we’re online, too. But the person I am on the street and the person I am online aren’t the same. And neither is the same as the one the government constructs a profile of when I interact with it: when I’m taxed, say, or when I apply for a passport. The controversy surrounding facial recognition technology arises, I think, because it translates a data-driven form of image processing from one domain (online) to another: the street. It translates a form of indexing, or linking one kind of person to another, from the domain of digital culture into the domain of everyday life.
Suddenly, data processing techniques that I might be able to put up with in low-stakes, online situations in exchange for free access to a social media platform have their stakes raised. The kind of person I thought I could be on the street is overlaid by another: the kind of person I am when I’m interfacing with the government. If I think about it (and maybe not all of us will), this changes the relative anonymity I might otherwise have expected as just another “person on the street”. This is made clear by the case of a man who was stopped during one facial recognition trial for attempting to hide his face from the cameras, ending up with a fine and his face in the press for his troubles. Whether or not I’m interfacing with the government, facial recognition means that the government is interfacing with me.
In the end, we might gloss the controversy created by facial recognition by saying this. We seem to have tacitly decided, as a society, to accept a little online tracking in exchange for access to different, even multiple, modes of personhood. Facial recognition, unlike online services, offers no opt-out. Admittedly, the digital services we habitually use are so complicated and multiple that opting out of tracking is impracticable. But their complexity and the sheer weight of data processed on the way to producing digital culture mean that, in practice, it’s easy to go unnoticed online. We know we have to give up our data in this exchange. Public facial recognition is a form of surveillance creep, and it has rightly alarmed rights organisations and privacy advocates. This is not only because we don’t want to be watched; after all, we consent to being watched online, to having our data collected, in exchange for particular services. Rather, it’s because it produces a person who is me, but who isn’t mine. The “why” of “Why am I being tracked?” merges with a “who”: both “Who is tracking me?” and “Who is being tracked? Which me?”
In writing this, I don’t mean to suggest that this abstract reflection is more incisive or important than other critiques of facial recognition technology. All I want to suggest is that recognising which “person” is operating in a particular domain can help us to get a better handle on these kinds of controversies. After all, some persons have much more freedom in public than others. Some are more likely to be targeted for the colour of their skin, for how respectable they seem, how they talk, what they’re wearing, even how they walk. In the context of asking who the “person” is in “personalisation”, what this controversy shows us is that what “person” means depends not only on the question’s context, but also on the ends to which a “person” is put. Amongst other things, what’s at stake in technologies like these is the question of whose person a particular person is, particularly when it’s nominally mine.
The question, Whose person is it anyway?, is a defining one for digital culture. If recent public concern over privacy, data security, and anonymity teaches us anything, it’s that it will be a defining question for new health technologies and data science practices, too.