Phan, T., Wark, S.

Culture Machine, 2021


Between 2016 and 2020, Facebook allowed advertisers in the United States to target their advertisements using three broad “ethnic affinity” categories: “African American,” “U.S.-Hispanic,” and “Asian American.” This paper uses the life and death of these “ethnic affinity” categories to argue that they exemplify a novel mode of racialisation made possible by machine learning techniques. These categories worked by analysing users’ preferences and behaviour: they were supposed to capture an “affinity” for a broad demographic group, rather than registering membership of that group. That is, they were supposed to allow advertisers to “personalise” content for users depending on behaviourally determined affinities. We argue that, in effect, Facebook’s ethnic affinity categories were supposed to operationalise a “post-racial” mode of categorising users. But the paradox of personalisation is that in order to apprehend users as individuals, platforms must first assemble them into groups based on their likenesses with other individuals. This article uses an analysis of these categories to argue that even in the absence of data on a user’s race—even after the demise of the categories themselves—users can still be subject to techniques of inclusion or exclusion for discriminatory ends. The inductive machine learning techniques that platforms like Facebook employ to classify users generate “proxies,” like racialised preferences or language use, as racialising substitutes. This article concludes by arguing that Facebook’s ethnic affinity categories in fact typify novel modes of racialisation today.


Keywords: Racialisation, Facebook, Ethnic Affinities, Proxies, Post-Race

Phan, T. and Wark, S. ‘What Personalisation Can Do For You! Or, How to Do Discrimination Without Race.’ Culture Machine 20 (2021).