Following the revelations of Cambridge Analytica’s involvement in Brexit and the election of Trump, J.A. Smith asks whether the relationship between populism and digital media goes deeper than the rogue behaviour of particular companies and campaigns.
Wasn’t the digital revolution meant to usher in a new age of connectivity, of frictionless contact that rendered all traditional boundaries – geographical ones included – irrelevant? The ‘openness’ at the heart of the Californian ideology is expansive enough to include both the right-on political correctness of heart-warming Google Doodles and Facebook gender designations, and fantasies of unlimited commerce across national boundaries. During the Arab Spring of 2010–11, it was widely imagined that Twitter was on the cusp of bringing liberal democracy to Egypt, Tunisia, and Syria. Between allegations of Russian bots and troll farms interfering in elections and the recent revelations of the misuse of Facebook data by Cambridge Analytica for the Trump and Brexit ‘Leave’ campaigns, how did we get to a situation where these platforms are rewarding a politics of cultural conservatism, economic protectionism, and the securing of national borders? Not to mention – in the eyes of many liberals – undermining democracy itself?
Populism’s elementary move is to presume to disinter from the complexity and diversity of modern societies a single and coherent ‘voice’ of the people. Of course, sensible liberals and advocates of pragmatic, ‘professional’ centrism know such a thing is only a damaging fiction: that the appearance of coherence can only be created by delegitimising dissentient voices (or worse), and that true politics can only take place via the patient and cautious negotiation of the needs of many interest groups. This is all well and good at the level of executive and legislative power. But why then, we may ask, did the same professional caste of liberal policymakers fall over themselves to bolster, and become associated with, a digital capitalism grounded in precisely the opposite assumptions? If Trump, Orbán, Erdoğan, and the others are only producing a crude, if effective, fiction when they claim to have distilled the will of ‘the people’ into their political programmes, then – by contrast – the institutional forms digital capitalism was permitted to take after the financial crash have a far better claim to having made such a distillation.
As Evgeny Morozov has most consistently argued, in the past decade, area after area of our social field has been subjected to the digital-capitalist model of what Morozov calls ‘solutionism’: the assumption that aggregated data, navigated by algorithms, will find more effective solutions to our shared needs than the decision making of old-style human ‘gatekeepers’. Just as information appearing in Google searches was made more effective by being based on records of what previous users had prioritised, crime was to be prevented by making crime statistics by local area open source, customers could be matched with restaurants using apps that detected their diet, social milieu and spending habits, and music and video recommendations could be lined up automatically by streaming services, based on previous listening and viewing choices of the data-aggregated ‘everybody’.
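To make the ‘solutionist’ logic concrete, here is a minimal sketch of the kind of ranking it relies on – a toy example of my own, not any platform’s actual system, with hypothetical names like past_choices and recommend – in which items are ordered purely by the aggregated choices of previous users, with no critic, editor, or other gatekeeper anywhere in the loop.

```python
from collections import Counter

# Toy 'solutionist' ranking: items are ordered solely by how often the
# aggregated 'everybody' chose them before; no gatekeeper intervenes.
past_choices = [
    "pop_hit", "pop_hit", "pop_hit", "indie_track",
    "pop_hit", "jazz_album", "pop_hit", "indie_track",
]

def recommend(choices, n=3):
    """Return the n items most often chosen by previous users."""
    return [item for item, _ in Counter(choices).most_common(n)]

print(recommend(past_choices))  # ['pop_hit', 'indie_track', 'jazz_album']
```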
Morozov’s point when he made this analysis in 2013 was that outsourcing old forms of ‘human’ decision making to algorithms was likely to create unanticipated problems. Open source crime stats might help us to keep alert in dangerous neighbourhoods, but they would also discourage people from moving there; and the apps that recommend restaurants might equally be employed by bouncers to keep riff-raff out. What we can recognise in retrospect, however, is that when digital capitalism was asking us to prize algorithmic decision making more highly than that of restaurant critics, magazine editors, and other ‘old’ forms of cultural authority, it was also giving us a pretty neat trial run at the rejection of ‘experts’ in favour of the reified voice of an aggregated ‘people’ at the heart of political populism.
The irony is that no one was more impressed by the crypto-populism of digital solutions than the Obama milieu: the very people who, on the plane of ‘official’ politics, valued the gatekeeping expertise of Ivy League schools and blue-chip companies more than anyone and, as Thomas Frank has shown, to an embarrassingly apolitical degree. In the post-crash years, their status as the sort of exciting innovators Democrats wanted to be associated with inoculated digital platforms against the hard-won scrutiny applied to more traditional companies, even as they aggressively emulated the corporate misbehaviour seen in the finance sector before the crash and – in the practices of Uber and Amazon – normalised the kinds of precarious employment that made life so difficult for people after it. To see how far the logic of digital ‘solutionism’ stretched in the Obama administration, we only have to look to the representative forms the security state took during this period: automated drones on foreign soil, exhaustively collected data on every phone call and internet transaction domestically and beyond. To the algorithmic ears of the security state, the voice of the people (and its enemies) was constantly sounding, clear as day.
The professional, ‘qualified’ caste of politicians represented by Obama and Hillary Clinton, and by David Cameron and George Osborne, was mortified to be replaced by populists who denounced experts and claimed to act at one with the popular will. But whatever the manipulations or otherwise of their digital campaigning, the case can be made that Trump and the Brexiteers did little more than take the logic of digital capitalism at its word, applying to the explicitly political arena rules and assumptions that had already been accepted in any number of cultural, social, economic, and even military spheres: indeed with the explicit encouragement of ‘centrist’ governments. In this way, populism, as well as being a particular contingent political style, actually runs far deeper in the logic of culture under our present digital capitalism, and all of us who use digital media have some complicity in it.
But what is the nature of this crypto-populism at the heart of digital capitalism? Is there something redeemable about it in its digital form? The argument has been made since well before the rise of mass data-driven platforms that algorithmic choices made by computers are inherently regressive. Just like the worst kinds of political populist, algorithmic decision making almost of necessity gradually mutes and erases minority and specialist interests. Why? Because, perhaps, its decisions are too robotic, lacking the magic human touch that can only come from actual human agents? Actually, it is more like the opposite: algorithmic decision making is regressive precisely because it is too human, in the sense of the title of Nietzsche’s book, Human, All Too Human. Data can record nothing but what humans have done before; and the algorithms used to interpret the data can only do so in ways that silently repeat the priorities and prejudices of the companies that design them.
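A minimal sketch may help to show why judgments built only on past behaviour tend this way. The simulation below is my own illustrative toy, not a description of any real recommender: each round the currently most-counted item is recommended, most users accept it, and their acceptance is fed back into the counts, so an initial majority preference is steadily entrenched while the minority interest is muted.

```python
import random

random.seed(0)

def run_feedback_loop(counts, rounds=1000, follow_rate=0.9):
    """Recommend the most-counted item each round; with probability
    follow_rate the user accepts it, and the choice is fed back into
    the very counts the next recommendation is computed from."""
    counts = dict(counts)
    for _ in range(rounds):
        recommended = max(counts, key=counts.get)
        other = next(k for k in counts if k != recommended)
        chosen = recommended if random.random() < follow_rate else other
        counts[chosen] += 1
    return counts

# A modest initial majority for 'mainstream' over 'niche'...
print(run_feedback_loop({"mainstream": 60, "niche": 40}))
# ...ends up overwhelming, roughly {'mainstream': 960, 'niche': 140}:
# the data 'recording nothing but what humans have done before'.
```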
For all Silicon Valley’s right-on credentials, there has long been speculation about a far-right subculture within it: perhaps unsurprising given its combination of whiteness and maleness, its megalomaniac love of grand promises of cultural transformation, and the self-representation of many of its workers as former ‘nerds’, unlucky in love. But as Safiya Umoja Noble and others have shown, there are innumerable tinier, unconscious, and more incidental ways in which algorithms have been trained to repeat and reinforce prejudices and inequalities that already exist ‘offline’.
It is tempting, on that basis, to think that algorithmic decision making can incorporate none of the chance encounters between human and ‘other’ (human or otherwise) that might lead to some new idea; just as a truly populist political project that imagines itself deriving all agency from an aggregate of ‘what the people already think’ could never create a radically different society. (The craziest measures imagined for Trump’s America or by the hard Brexiteers, for instance, are only ever exaggerated versions of measures already in place. It may scandalise populists and centrists both to find that neither of these uprisings actually represents a true systemic change.) As my colleague Andrew Gibson writes in a forthcoming book, ‘the modern belief that the good and properly rational people are the people-to-come has collapsed into the belief that the people are good and properly rational in themselves right now’. The people-to-come is not part of either digital or political populism’s programme, because a system of judgement constructed on what users have done or liked in the past leaves no space for articulating what the future should be.
But can this be the whole story? I support calls by Morozov, Nick Srnicek, and others to ‘socialise the data centres’: to take this precious technology out of the hands of those whose only priority is to keep us on their platforms doing… it doesn’t matter what… as long as we are on them, producing data and justifying them to their advertisers. One hopes that the Cambridge Analytica scandal and the Zuckerberg hearing will at least inspire a greater general awareness and questioning of how the Internet materially ‘works’. And who could object to the suggestion that the knowledge workers actually creating these all-structuring technologies should at least be made more reflective of the plurality of the society they are going to affect? Yet even within the structures of digital capitalism ‘as it is’, there are things to be done.
While it seems clear that there is a structurally regressive tendency in the crypto-populism of algorithmically directed models, always confirming a certain ‘sameness’ more and more tightly, this is not to say that it does not at the same time produce certain unanticipatable forms of ‘difference’. Indeed, we can say that algorithms proceed in a way anticipated by one of the central insights of philosophical ‘deconstruction’: that every repetition also introduces some minimal difference. Their processes of confirming, congealing, and exaggerating desires also cannot help but create new ones. In this respect, I take a similar view to my friend Alfie Bown, who has written: ‘what is much scarier than the fact that the user can fulfil any desire via the mobile phone is the possibility that the phone creates those desires in the first place. While the user thinks they are doing what they want, as if desires already existed and are simply facilitated by the device, in fact Google has an even greater power: the ability to create and organise desire itself’.
In a bewitching episode of Johann Wolfgang von Goethe’s The Sorrows of Young Werther (1774), the young man watches the beautiful, unattainable Lotte feeding a pet canary by holding bits of seed in her mouth, which the bird pecks from between her lips. Werther is beguiled when the bird flies over to him and pecks his mouth in turn. We are accustomed to this thrill of the intermediary object when a prospective lover drinks out of our glass or gives us a drag on their cigarette. But Goethe insists on another element here. The intermediary object has its own desire. Werther says that the canary is left unsatisfied by his mouth, and it flies back to Lotte, who has the seeds. This time, Werther has to turn away as she feeds him, finding the scene too erotic to bear. What has taken place here? The bird we expected to treat as the simple conduit for our existing desires inadvertently throws unexpected new ones into the mix. (Goethe’s weird insistence that the bird actually penetrates both of their mouths with its beak is part of a general tendency of the novel to see the apparent ‘background’ of the world as always potentially pulsating with strange kinds of eroticism). Just so, I think, there are unexpected forms of libido created and unleashed by our digital intermediaries, even beyond those Bown says Google has the ability to ‘organise’.
Writers as various as Jodi Dean, Tiziana Terranova, Dominic Pettman, and Franco ‘Bifo’ Berardi have been careful to draw a firm line separating the online culture of ‘connectivity’ from the old traditions of solidarity and collectivity on the left. As Berardi has recently put it, ‘connective intelligence is unfit to act as collective intelligence: it is unfit to activate solidarity’. We should not be so quick to make this judgement today. The past couple of years – ‘the long 2016’, as it has been called – have seen victories for regressive populist movements, as well as the unexpected appearance in the political sphere of right-wing computer gamers and message-board nerds. But they have also seen the emergence of new kinds of digital activism on the left, from Momentum’s stunning contribution to Jeremy Corbyn’s humiliation of the right in last year’s UK General Election, to the re-emergence of a mass feminist movement in the Women’s March and the #MeToo movement. Messy, part-formed, and unpredictable as these new digital energies are, they have occurred, and could only occur, as surprising emergences of difference, bursting against the structurally regressive sameness that populism and digital capitalism share.
J.A. Smith is a lecturer at Royal Holloway, University of London. His ‘Samuel Richardson and the Theory of Tragedy’ has just appeared in paperback.