Computational Propaganda and Totalitarianism (A Thread)
I wrote a Twitter thread today that is probably worth keeping. I reproduce it here with minor edits for clarity. (And yes, I delete tweets eventually.)
What McKew describes here is what researchers have been seeing in the data for ages now. Trolls, bots, and borgs aren’t just putting out fake news — they are framing our national discussion, setting the terms of the debate.
They can do this partially because these tightly integrated networks can test hundreds of memes and then quickly promote the ones that work through systems of automation. They can jump on a crisis and ramp up in minutes while everyone else is still disorganized.
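To make the test-then-promote loop concrete, here is a minimal sketch of how such a network might work in principle. Everything in it — the meme list, the "appeal" parameter, the simulated engagement function, the thresholds — is invented for illustration; it is a toy model of the dynamic, not a description of any real system.

```python
import random

# Toy model: push many meme variants with small automated test runs,
# measure simulated uptake, then concentrate amplification on the few
# that perform best. All names and numbers here are hypothetical.

def engagement(meme, rng):
    """Simulated organic uptake for one small test push (0.0 to 1.0)."""
    return rng.betavariate(meme["appeal"], 10 - meme["appeal"])

def pick_winners(memes, n_tests=20, top_k=3, seed=0):
    rng = random.Random(seed)
    scores = {}
    for meme in memes:
        # Hundreds of cheap automated test pushes; average the uptake.
        trials = [engagement(meme, rng) for _ in range(n_tests)]
        scores[meme["id"]] = sum(trials) / n_tests
    # Only the top performers get the full network turned on them.
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

memes = [{"id": f"meme-{i}", "appeal": a}
         for i, a in enumerate([2, 5, 8, 3, 7, 1])]
print(pick_winners(memes))
```

The point of the sketch is the asymmetry it makes visible: testing is cheap and parallel, so the network never has to guess which framing will work — it measures, then floods.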
The Russians run a farm league system that feeds the bot-borg networks that the right started constructing from 2012 forward. Those conservative borg networks are like a water cannon that can be turned on anything instantly.
Along the way the flood of fakery and semi-fakery and amplified authenticity starts to peel off real figures through relentless mention campaigns. This is how many of the things Trump retweets reach him, for instance. And it fools reporters into thinking they are watching a real debate.
(I’ve said it before — this is a form of “laundering”, where the fake is slowly leveraged into the real.)
None of this is secret. Maybe you don’t like McKew. Check out @DFRLab. Check out @katestarbird’s work. Or @krisshaffer. We as a society are applying atomistic transactional models to systems problems. It doesn’t work. These researchers can show you why.
Or, here, let Hannah Arendt explain it to you, yet again. The idea of totalitarian propaganda is to construct grand narratives out of the refuse of civilization:
The thing is, computational propaganda automates this process, allowing people to manufacture a sense that something big is being covered up OUT OF NOTHING and ALMOST INSTANTLY. That article describes how it is done, by first collecting the refuse, sorting through it, identifying the promising pieces and propagating the narrative.
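The collect-sort-propagate pipeline described above can be sketched in a few lines. This is a deliberately crude toy: the fragments, the "frame" keywords, and the scoring rule are all invented, and real systems are vastly messier — but the shape of the process is the same.

```python
# Toy pipeline: gather stray fragments ("refuse"), score each for how
# well it fits a target narrative frame, keep the promising pieces,
# and propagate those. Every fragment and keyword here is invented.

def score(fragment, frame_keywords):
    """Crude fit score: how many frame keywords the fragment touches."""
    return len(set(fragment.lower().split()) & frame_keywords)

def assemble_narrative(fragments, frame_keywords, threshold=1):
    # Sort through the refuse and keep only the promising pieces.
    scored = [(score(f, frame_keywords), f) for f in fragments]
    kept = [f for s, f in sorted(scored, reverse=True) if s >= threshold]
    return kept  # these get amplified; the rest are discarded

fragments = [
    "leaked memo mentions server access",
    "weather report for Tuesday",
    "anonymous post about deleted server logs",
    "photo of a parking lot",
]
frame = {"server", "deleted", "leaked", "logs"}
print(assemble_narrative(fragments, frame))
# keeps the two server-related fragments, discards the noise
```

Note that no single kept fragment proves anything; the output of the pipeline is an impression of a pattern, which is exactly the point.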
It’s the narrative that is key; the bits it’s composed of are junk. The effect of computational propaganda is not “Oh, well, climate change was disproved” or “I think Hillary killed people” but “Well, there’s something fishy going on there.” Once a narrative digs in, the facts don’t stand a chance.
Arendt goes on to explain the main trick of repetition in propaganda — it isn’t about helping the population remember, it’s about providing consistency in a world where most things are uncomfortably inconsistent. People experience what is universal and consistent as “true.” That’s why conspiracy theories are so attractive, and that’s why lifting up the right sustained topic and framing is more useful than organic, shifting dialogue.
Unbotted, unborged, and untrolled Twitter might be an organic and shifting firehose of info. Maybe. But computational propaganda replaces this shifting world of shades-of-gray facts and organic trends with self-consistent, persistent, and broadly imaginative trending topics that tie together into generative narratives requiring little work to grasp.
This is Arendt’s propaganda machine, automated.
The thing is, the way Twitter and Facebook are set up — the way they let themselves be gamed, willingly, by these groups — all but ensures those Arendtian narratives will root and supplant the more organic discussions and investigations.
We have always turned to machines for a level of consistency that humans can’t reproduce. And that’s what’s going on here, as bots and borgs produce the consistent narratives that individual actors will never be able to replicate. And that automated consistency wins over organic discussion and debate every single time.
In other words, without changes in policies, norms, and tech, we humans don’t stand a chance.