Scene-Level Trope: Riot Bricks

The “riot bricks” trope is a good example of how scene-level tropes depend on “trope-field fit”. It is used in the service of a larger narrative-level trope around protest — that protests are actually being organized for violence by a secret elite. Like most narrative-level tropes, it has some grounding: there are examples, past and present, of less-than-organic protests (and protest violence). But the conspiracy theory version goes further, imagining a secret elite that provisions these riots the day before, in plain sight, often through some conspiracy with local officials.

With that as your narrative, what evidence can you bring to bear? The “riot bricks” trope has surfaced repeatedly since at least 2020. The idea, of course, is that some secret conspiracy is making sure bricks are ready-to-hand for some riot that will appear spontaneous but is really a coordinated operation. It’s a silly trope. It’s not as if there’s any shortage of heavy things around cities, or as if the professional rioters these people imagine wouldn’t be capable of bringing a brick or two to a protest. There’s not been, to my knowledge, any riot in recent U.S. memory where dozens of rioters have been unloading brick after brick off pallets. There are many more compelling conspiracy theories you could build if you were starting from scratch.

But that’s the thing — you aren’t starting from scratch. When creating these “detail-driven rumors,” the conspiracy theorist is stuck with the details at hand, and what the riot bricks trope lacks in believability it makes up for in availability. There are always bricks around a city, somewhere, after all. So the recipe here is quite straightforward: find some bricks before an anticipated protest. Snap a picture, ask a question that will be clear to those who know the trope but opaque to moderators. Collect retweets. Lather, rinse, repeat.

Scene-Level Trope: Voting Location Cameras Covered Up

Polling places are, by necessity, carved out of locations that do other things the rest of the year: schools, churches, community centers, and the like. These places sometimes have cameras installed.

Because voting is private, there are restrictions on filming in polling places. This applies not only to ballot selfies, but to video surveillance. In addition to potential violations of privacy, the use of surveillance cameras filming voters can also be seen as intimidation. For both these reasons, cameras in a polling location may be covered for the day.

This covering of cameras is distinct from use of cameras in counting locations, which often do have surveillance cameras, some of which are mandated by state laws.

The Conspiracy Theory

This is currently a very small conspiracy theory, with almost no uptake. But it’s worth explaining beforehand in case there is uptake later. An example of the Obstruction of Oversight and Destruction of Evidence cluster of theories, the idea is that the cameras are covered so that devious poll workers can execute Sharpiegate-like deceptions. What makes this potentially compelling as a conspiracy theory is that there is a participatory element to it. A variety of people can take pictures of the cameras with sincerely confused questions about what is going on, and that confusion can be leveraged. But as stated above, the cameras were never intended for the election in the first place. They are there because polling locations are repurposed, and, for example, letting the local church film people voting just because they lent the city a location for the day would be really creepy.

On the whole it’s a good example of the detail-generated conspiracy theories (see Kapferer) that dominate the election space.

Vulnerability of Texas to Gubernatorial Vote-Counting Dispute

From the book Ballot Battles:

In Texas the institution currently empowered to adjudicate a disputed election is its legislature, far from the ideal institution for the dispute. If Texas experiences a ballot-counting dispute in a close gubernatorial election it is hard enough to imagine the state’s legislature resolving that dispute fairly, according to the merits of the case, rather than purely as an exercise of power by whichever party happens to be dominant in the legislature at the time…

A future dispute over ballots in a Texas gubernatorial election is likely to end up in a federal court under some sort of claim based on the precedents of Bush v. Gore and Roe v. Alabama. Moreover, unlike in 1948, the federal judiciary now has jurisdiction over the merits of the claim. The federal court would demand that the state’s vote-counting meet federal constitutional standards for fairness…

Ballot Battles, pp. 352-353

Beto O’Rourke is still trailing Texas Governor Greg Abbott, by a lot. But if the race tightens further, this looming disaster is something to keep in mind.

Disinformation terraforms its information environment

Art from Daein Ballard, CC-BY-SA

Terraforming is a process, found in science fiction novels, of deliberately modifying the atmosphere and ecology of a planet to make it more habitable for a given life-form. In early sci-fi, that life-form was human — drop a few machines on a planet, watch them spin up an atmosphere and ecology, have the humans come back in a few decades or centuries and settle. In later sci-fi, it was often aliens intent on terraforming Earth, creating a planet more habitable for them, but deadly for us.

Disinformation can have a terraforming effect too, in the second sense. One of the prominent trends of the past year has been that the disinformation around the Big Lie has created momentum around a host of legislative and policy changes that will make disinformation both cheaper to produce and more impactful.

For instance, the false story that Ruby Freeman was caught on video taking ballots from a “suitcase” was made possible by the transparency measure of having publicly viewable video of the counting facility. This false story in turn created outrage — some genuine and misguided, some cynical and strategic — that has resulted in a push for more counties to have more live feeds from which more video can be deceptively clipped and inaccurately summarized.

False concerns about supposed ballot irregularities have led to publicly available ballot scans in some places, and imperfections in the process by which those scans are stored or released create new news pegs on which to hang dubious fraud allegations. False stories about the Arizona election result in the creation of a bogus external “audit” which generates daily misinformation, which fuels the push in other states for similar external audits.

Each step seems to lead to another, where the material and processes that misinformation thrives on becomes more ubiquitous, more compelling, more ever-present.

Of course, there are people behind all of this, just as in science fiction there’s always someone who dropped the terraformer on the planet’s surface. But there’s also a certain emergent momentum to the whole process that is bigger than any given actor. Seen from this perspective, disinformation is quickly terraforming its environment, making it more habitable and productive for disinformation. In the end, that is going to make it a whole lot less habitable for all of us.

“Rumors are bothersome because they may turn out to be true.”

If you want to study something, a first step might be to go out and collect it. If I was looking for themes in 16th century poetry about food, I’d go out and get 16th century poems about food. If I wanted to look at personal narratives of medical tragedy, I’d either solicit such information or pull examples from existing corpora.

When you get to misinformation, however, this doesn’t quite work. There is a fabulous line in Kapferer’s book on rumor, where he notes that many early scholars in the field chose to use untrue rumors as their examples; in practice, they were studying misinformation. But as Kapferer pointed out, this sidesteps the main social problem of rumor. Rumors, he argued, are not a social problem because they are false. Were that the case, people would long ago have ceased to traffic in them. Instead, as he puts it, “rumors are bothersome because they may turn out to be true.”

Starting from a point where something is already deemed misinformation hides that tension. It’s certainly useful to collect things that continue to circulate on the internet long after they are shown to be in error and ask why they persist. But misinformation often starts out as something more ambiguous, and that initial ambiguity is a crucial part of the story. And so when I think about misinformation around elections and what to look at, I lean less towards “misinformation” as the object of study, and more toward Kapferer’s definition of rumor:

We shall thus label “rumor” the emergence and circulation in society of information that is either not yet publicly confirmed by official sources or denied by them. “Hearsay” is what goes “unsaid,” either because rumors get the jump on official sources (rumors of resignations and devaluations) or because the latter disagree with the former (e.g., the rumor about the “true” culprits in the assassination of John F. Kennedy).

(Rumors, p. 13)

Kapferer’s definition gets at the heart of a structural issue for me. What he calls “hearsay” thrives in two separate but related environments. The first is the area which Shibutani’s Improvised News explicated so brilliantly. When official channels fail to provide necessary information, hearsay provides an alternative network to fill in the gaps. This is a collective sense-making. The second is related, but separate: when there is a dispute about “who gets to speak” to an issue, hearsay thrives as an alternative to official channels. In other words, an adversarial sense-making.

Both of these functions are necessary to information environments, which is one reason why “hearsay” exists. But narrow the focus to misinformation and we quickly become reductive. Of course, if things are wrong and harmful we want to minimize their spread; in a frame where that is the only question, there are no trade-offs to reckon with. When we broaden the frame from misinformation to hearsay, the key problem becomes visible: we need hearsay, both to fill informational gaps and to challenge official accounts.

Helping Hearsay

Much of the story of misinformation has been told by institutions who value the official account. In these tellings, misinformation has chipped away at institutional influence, with disastrous results. For them, misinformation is a corrosive force which undermines the utility of the official information system. In this case hearsay, whether they want to admit it or not, is treated as a disease within the body institutional.

There’s truth in this, but I’d propose a return to an underutilized frame, one that has historically informed a number of the fields that have been brought under the misinformation umbrella, from rumor studies to crisis informatics. Instead of seeing versions of hearsay (non-institutional systems of news and analysis) as primarily damaging institutional systems, we could choose to see the hearsay system itself as the thing under attack. That is, in the age of social media, a valuable system of non-institutional knowledge is increasingly gameable and gamed, rendered useless by a variety of threats and incentives that are polluting not the institutional space, but the hearsay space.

No single vision is ever adequate. But leaning at least a bit more into this question — how do we repair and restore the value of informal knowledge networks by protecting them from corruption — might get us to better, and more humane, solutions. And it might help dial down what is often an unhelpful posture toward both collective and adversarial sense-making, by centering the “bothersome” usefulness and centrality of these systems.

Transparent wrongdoing and mundane revelations

There is a well-known saying — “it’s not the crime, it’s the coverup that gets you.” This is true in the obvious way it is usually meant: many administrative crimes are difficult to prove. They happen at a particular moment, are witnessed by few, and intent is notoriously difficult to get at. Cover-ups, on the other hand, often spiral slowly outwards. Bureaucracies always create many more paper trails than any one individual realizes, and a person covering up a crime ends up realizing that over time.

But in politics and business, coverups are damaging in another way. The truth is that individuals have very little idea what constitutes acceptable behavior in these realms. Suppose someone overheard someone in their office talking about non-public information about a company whose stock they owned. They then called and sold that stock. Is that legal or illegal? People don’t know. So people use a simple signal — If the person tried to hide it, it was probably wrong. If they didn’t, it was probably no big deal. Looking at the behavior is easier than looking through insider trading law for most people.

This signal (concealment = wrongdoing) is pretty ingrained for most people. But the depth of that instinctive reaction raises two interesting problems. First, when lawlessness (or other unethical behavior) is done in the open, we sometimes have a hard time processing it as wrongdoing. Second, when mundane behavior is either concealed or portrayed as concealed, we sometimes process those acts, communications, and events as nefarious. As usual, propagandists use these patterns to their advantage.

When acceptable behavior is out in the open it intensifies our feeling of acceptableness, and when bad behavior is hidden it intensifies our feeling of wrongdoing. But what about the two other squares on the matrix (open/not ok, hidden/ok)?

Transparent wrongdoing

As many commentators have noted, there’s a bit of a paradox with the events of January 6th. Experts disagree about whether it was a coup or an insurrection (the difference hinging on the degree to which it was guided by elites). But it was clearly one of the two. And one way we know this is that it was one of the most documented events in all of human history. The people participating, many believing that they would prevail and wanting credit for it, filmed themselves doing it. Tweeted out that they were headed to the insurrection. Compared it to 1776. None of it was hidden.

That transparency should work in favor of establishing culpability, and in a narrow legal sense it has. (Generally, don’t film yourself doing crimes.) But as we have put time between ourselves and the event, the transparency is being leveraged in another way. Propagandists have turned the tables — if this really was an insurrection, they ask, then why were they all filming themselves doing it? To the sociologist and psychologist there are relatively easy answers to these questions; but the transparency signal is not an eminently logical one, and doesn’t respond to footnotes. If you’re explaining, as they say, you’re losing.

In general, we’ve seen a lot of this in recent history. This is a short blog post, so I won’t make a comprehensive list here, but I think anyone who has watched the news over the past few years could make a list of questionable and sometimes illegal behavior that benefitted publicly from the argument “If it was really that wrong, would it have been done this publicly?”

In this way, transparency can take on a secondary, if symbolic, meaning. Transparency often means being able to “look in” to the inner workings of an organization or action. But for the propagandist, claims of transparency can be used to “look past”, to render the bad behavior itself invisible.

Mundane revelations

The second interesting combination is when the behavior is acceptable, even mundane, and yet a feeling of concealment is used to create an appearance of wrongdoing. Consider two interviews. In one, a scientist says, in a high-production, clearly official interview, “We don’t actually know how much the earth will heat up in the next ten years.” This is pretty mundane stuff: we know the earth is heating up, but models disagree on how much we will see in the short term. When it’s taped by a film crew and put on Dateline, you’re likely to read the comment in just that way.

Now take the same interview, and do it with a hidden camera. Suddenly, the same quote can feel like a hidden admission.

We see this a lot in propaganda. Video is caught of someone doing something out in the open, but it is surveillance video, or hidden video. Likewise, leaked or hacked emails can often reveal the boring lives of mid-level bureaucrats and press officers. Highlight the fact that it was leaked, add some ominous red circles and highlighting, and suddenly the impression of concealment can create a story where there is none. After all, official documents may say, but it’s purportedly “secret” documents that reveal.

The Information Intervention Chain: Interface Layer Example

A couple days ago I wrote up my description of the Information Intervention Chain. One of the points there was that work on each layer decreases the load on the layers below, and helps cover some of the errors not caught in the layers above.

Here’s a simple example, where a user has responded to someone talking about ivermectin. Their impression — if we take this user at their word, and in this case I do — is that the NIH has started recommending ivermectin.

Now this is false, and I suppose some might say to just ban such things altogether. The NIH is not recommending the use of ivermectin for COVID-19. This is a fact. But I doubt we want to be in the business of policing every small, non-viral, good faith but confused reply people make on social media. Moderation is important, but it needs to be well targeted.

So next we get to the layer of interface. And here we find something pretty interesting. The user believes at least two wrong things:

  1. That this is a recent (late summer/fall) article on the use of ivermectin which negates previous guidance
  2. That this article represents a statement by the NIH

Take a moment to look at the post. Where would they get such ideas?

The fact is that their assumptions are quite reasonable given the failures of the interface to provide context. The article linked here is not from the NIH but rather from the American Journal of Therapeutics. It looks like it comes from the NIH, and that’s largely because the Twitter card (as well as the Facebook card, etc.) highlights the NIH.gov address as the source, a side effect of the article being available through a database that is run by the NIH. The card, in this case, actively creates the wrong impression.

The second point — is it new? Note that when the link is shared there is no indication of the article’s publication date. This article was actually published in the spring, and is, at best, out of date at this point. But Twitter chooses not to make the date visible in the shared card. And that’s not a dig at just Twitter here — at least as far as I can tell, the PubMed page doesn’t expose the publication date or the journal name at the meta level. Somewhat shockingly, there seems to be no Facebook or Twitter-specific meta info at all. Even if Twitter wanted to make the publication and publication date more visible, it’s not clear the site gives them the information they would need to do it.
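If you want to check for yourself what a sharing card has to work with, you can list a page’s social metadata directly. Below is a minimal sketch in Python (using the requests and beautifulsoup4 libraries, with a placeholder URL) that prints whatever Open Graph or Twitter Card tags a page exposes; if a publication date or journal name never shows up in that output, the card simply has nothing to display for them.

```python
# Minimal sketch: list the Open Graph / Twitter Card metadata a page exposes,
# which is roughly what a sharing card has to work with. The URL below is a
# placeholder; requests and beautifulsoup4 are assumed to be installed.
import requests
from bs4 import BeautifulSoup

def share_metadata(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    found = {}
    for tag in soup.find_all("meta"):
        # Open Graph tags use property="og:..."; Twitter Cards use name="twitter:..."
        key = tag.get("property") or tag.get("name") or ""
        if key.startswith(("og:", "twitter:", "article:")):
            found[key] = tag.get("content")
    return found

# e.g. share_metadata("https://pubmed.ncbi.nlm.nih.gov/<some-article-id>/")
# If nothing like "article:published_time" or a journal name appears here,
# a platform building a preview card has no date or publisher to show.
```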

Now once you click through, you should be good. Should be good, but I’ll get to that in a moment.

Here’s the good news: if you click the link to the page, you see some of the information this person was unaware of: the journal name, the fact that it’s on PubMed, the date at the top. But even here we are undone by confusing interface choices.

That banner at the top? From the NIH, supposedly? What does it say?

It says that this is the Library of Medicine by the National Institutes of Health, and you’re in luck, because there’s some important COVID-19 guidance below!

Wait, that’s not what a big banner with an exclamation point saying “COVID-19 Information” means? So tell me what an average person is supposed to think an exclamation-marked heading on an NIH site saying “COVID-19 Information” indicates?

It’s supposed to mean that what is below it is not official?

Well, good luck with that.

People keep wanting to talk about how people are hopelessly biased, or cynical, or post-truth, or whatever. And sure, maybe. But how would we know? How would we possibly know, when someone engaging in a plain reading of what both Twitter and the NIH are providing them here would come to this exact conclusion, that the NIH is now recommending they take ivermectin?

Now, can the layer below the interface intervention — in this case, the individual layer of educational interventions — clean this up? Well, educators have been trying to. Understanding the difference between an aggregation site like PubMed and a publisher like the American Journal of Therapeutics is the sort of thing we teach students. But coming back to the “load” metaphor, it would make a lot more sense to clean this mess up at the interface layer, at least for a major site like PubMed. I mean, I can try to teach several billion people what PubMed is, or, alternatively, Twitter, Facebook, and PubMed itself could choose to make it clear what PubMed is at the interface layer, which would allow education to focus limited time on more difficult problems.

Nothing — not in any of the layers — is determinative in itself. But improving the information environment means chipping away at the things that can be done, in each of the layers, until the problems left are truly only the hard ones. We still aren’t anywhere near to that point.

The Information Intervention Chain

Some notes I just wanted to get down. There are four places where information interventions can be applied.

Moderation/Promotion. A platform always makes decisions on what to put in front of a user. It can decide to privilege information that is more reliable on one or another dimension, or to reduce the dissemination of unreliable or harmful information, either through putting limits on its virality or findability, or through removal. There are clearly things which need to be dealt with at this level, though it is notable that most arguments happen here.

Interface. Once a platform decides to provide a user information, it can choose to supply additional context. This is a place where there has been a mixed bag of interventions. Labeling is an example of one that has often been used in relatively ineffective ways. Other more specific interventions have better effects — for example, letting people know that a story deceptively presented as new is actually old.

Individual. This is (usually) the realm of educational interventions. We can choose to build in users the capabilities to better assess the information they are provided. This might mean specific understandings about information-seeking behavior, or more general understandings about the subjects in question or the social environment in which they are making judgments (including examining biases they may hold).

Social. Consuming information via social software is not an individual endeavor, but a process of community sense-making. Social interventions seek to empower communities of users to get better at collective sense-making and promotion of helpful information. Some of these interventions may involve platform actions — whatever one thinks of a recent Facebook proposal to identify knowledgeable members in Facebook groups, it is clearly a social intervention meant to aid in collective sense-making. Other interventions may not involve platforms at all — teaching members of communities to correct error effectively, for example, does not require any additional platform tools, but may have substantial impact. Teaching experts to communicate more effectively on social media may bring specific expertise into communities which desire it, and teaching community leaders the basics of a given subject can provide expertise to those with influence.

The Intervention Chain

People sometimes argue about where interventions should be applied. For instance, there is a valid argument that deplatforming figures who are repeatedly deceptive may do more net good than interface interventions or media literacy. Likewise, many scholars point out that without strengthening impacted communities or addressing the underlying social drivers of bad information, little progress can be made.

I think it’s more helpful to see the various layers as a chain. Ultimately, the final level is always social — that’s where messages get turned (or not turned) into meaning and action. And it’s still a place where interventions are relatively unexplored.

But that doesn’t diminish the value of the other layers. Because each layer involves intensive and sometimes contentious work, it relies on the layers of intervention above to reduce the “load” it has to deal with. For instance, the choice between media literacy and moderation is a false choice. Media literacy can be made less cognitively costly for individuals to apply, but there still is a cost. If obvious bullshit is allowed to flow through the system unchecked — if, say, every third post is absolute fantasy — media literacy doesn’t stand a chance.

Likewise, proponents of social interventions often point out the flaws of the individual layer — people are social animals, of course, and a technocratic “check before you share” doesn’t reckon with the immense influence of social determinants on sharing and belief. And it’s true that solutions at the individual layer are a bit of a sieve, just as are solutions in the layers above. But we need to stop seeing that as a weakness. Yes, taken individually, interventions at each layer are leaky. But by filtering out the most egregious stuff they make the layers below viable. By the time we hit social interventions, for example, we are hopefully dealing with a smaller class of problems unsolvable by the other layers. Likewise, moderation interventions can afford to be a bit leaky (as they have to be to preserve certain social values around expression) if subsequent layers are healthy and well-resourced.
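To put a toy number on the “load” metaphor: suppose each layer, on its own, catches only a modest share of the most egregious content. The sketch below, with catch rates I invented purely for illustration, shows how even leaky filters in series shrink what the final social layer has to deal with.

```python
# Toy illustration of the "load" metaphor: leaky layers in series.
# The catch rates below are invented for illustration, not empirical estimates.
layers = {
    "moderation/promotion": 0.60,  # fraction of egregious items caught at this layer
    "interface":            0.40,
    "individual":           0.30,
    "social":               0.30,
}

remaining = 1.0  # start with the full stream of egregious content
for name, catch_rate in layers.items():
    remaining *= (1 - catch_rate)
    print(f"after {name:22s}: {remaining:.1%} of the original load remains")

# Each layer alone is a sieve, but together they leave roughly 12% of the
# original load for the final (social) layer to reckon with, instead of 100%.
```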

Anyway, this is my attempt at getting us past the endless cycle where people involved with theorizing one level explain to people theorizing the other levels why their work is not the Real Work (TM). In reality, no link in the intervention chain can function meaningfully without the others. (And yes, I am mixing a “chain” metaphor with a “layers” metaphor, but it’s a first go here.)

In a future post I hope to talk about promising interventions at each level here.

Tropes and Networked Digital Activism #3: How Fact-Checkers Use Knowledge of Tropes to Fact-Check Quickly (and how you could too)

So to review from parts one and two:

  • People have a lot of stuff they can share or attend to online.
  • In order to create and process content efficiently, we look at things like “evidence” through tropes.
  • Tropes, not narratives or individual claims, are the lynchpin of activism and propaganda, whether true or false, participatory or not. They are more persistent than claims, more compelling than narrative.
  • The success of a given trope is often based on its “fit” with the media, events, and data around the event (the “field”). A trope must be compelling but also fit the media available to creators. In participatory propaganda (see Wanless and Berk) tropes must be productive.
  • In participatory work, productive tropes are often adopted based on their productivity, and then retconned to a narrative. Narrative-claim fit is far less important than trope-field fit.
  • A key indication that something is a trope is that it can be used across multiple domains — that is, it can be used to advance a variety of different claims.
  • For example, the Body Count trope was used against the Clintons — but it was also used in anti-vaccine messaging, Kennedy assassination conspiracism, and theorizing about the John Wayne film that purportedly killed its cast.

Today I’m going to talk about how effective fact-checkers of misinformation also think in tropes in order to debunk nonsense, even if they don’t use that language to describe what they do. I’ll also suggest that making that way of thinking more explicit to readers might make readers more equipped to handle novel misinformation.

But first I wanted to talk about what happens now in terms of platform interventions and end-user interpretation. During the 2020 election, while I was working a rapid response effort, this tweet came across my radar. It was pulled up by an automated Twitter search I had set up for “throwing” + “ballots”.
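For what it’s worth, a keyword monitor of this kind doesn’t need to be elaborate. Here is a minimal sketch of what such a monitor might look like in Python, written against the Twitter API v2 recent-search endpoint; the bearer token and exact query here are placeholders, and the sketch is illustrative rather than a description of any particular tooling.

```python
# Minimal sketch of a keyword monitor of the kind described above.
# Assumes a Twitter API v2 bearer token; the token and query are placeholders.
import time
import requests

BEARER_TOKEN = "YOUR_TOKEN_HERE"  # placeholder
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def check_for_trope(query='"throwing" "ballots" -is:retweet'):
    """Return recent tweets matching the trope-related keywords."""
    resp = requests.get(
        SEARCH_URL,
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"query": query, "max_results": 20, "tweet.fields": "created_at"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

if __name__ == "__main__":
    while True:
        for tweet in check_for_trope():
            print(tweet["created_at"], tweet["text"][:120])
        time.sleep(20 * 60)  # poll roughly every 20 minutes
```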

Tweet of a snapshot of an Instagram post claiming to be from an official ripping up Trump ballots

I came across this about ten minutes after it was posted, and had a ticket in on it about 5 minutes after that. It was an amazing stroke of luck, really, to come upon it that early. I knew it was going to rack up real numbers, and over the next 45 minutes it did, until it was confirmed with the folks in Erie that this person did not, in fact, work in Erie. They couldn’t be a poll worker because they were not even a registered voter.

During the period between discovery and debunking, in the absence of official (dis)confirmation, people tried other things. They looked at the Instagram feed, trying to determine the high school or college the poster came from. Looked for other satirical posts. Tried to message the poster. And so on. This is the process of fact-checkers, and they are good at what they do. At the same time, there was something very strange about it, watching the reshares of the post tick up at astonishing rates while evidence was compiled. Because if you had offered any fact-checker 100 to 1 odds on whether this was fake, they would have taken them. It was obviously fake. Not just because it was unlikely that anyone would advertise a crime of this sort on Instagram. It was likely satire gone wrong because we had seen this happen exactly this way before.

It had happened in 2016, for example, which was why I had my scanner looking for this in the first place. Here it’s a satirical post about a postal worker, but same thing:

Tweet where the author claims to be ripping up Trump ballots

This post spiraled out of control, being promoted as an example of open fraud by Gateway Pundit and Rush Limbaugh. Here’s Limbaugh on his program back then:

So you’ve got a postal worker out there admitting he’s opening absentee ballots that have been mailed in and he’s just destroying the ones for Trump. What happens if he opens one up for Hillary, gotta reseal it? I guess they don’t care, what does it matter, as long as it says Hillary on it, what do the Democrats care where it came from? It could be postmarked Mars and they’ll take it.

(as transcribed by Know Your Meme)

This isn’t a case of a bit of misunderstanding blown out of proportion. It was a big misunderstanding in 2016. People really believed this, and generated so many outraged calls that the Post Office (I guess in some foreshadowing of what would happen in 2020) had to issue a statement:

USPS Tweet

Back to 2020 and the “poll worker throws away ballots” trope. A week after I found this, it happened again, this time in a TikTok video — satire/trolling showing a “poll worker” throwing out ballots (here reproduced on Instagram), again shared as fact to hundreds of thousands of people (maybe millions?).

As far as tropes go, satire/trolling of this type is an interesting case (as is completely faked news), as it requires no “field” really, since it is completely fabricated. But it does require a knowledge of the existing tropes. As we mentioned in the first installment of this, the “public official discards ballots” trope is well established, especially in conservative circles, and when certain partisans see an embodiment of that trope via a tweet or a video they immediately comprehend it, share it, etc. The skids are greased and the trolling slides right through the system.

But my larger point is this — in many cases, even as fact-checkers go through the steps to debunk something, they already have really solid plausibility judgments about whether it is likely to be fake. And it’s not necessarily because they are better critical thinkers, or have an encyclopedic knowledge of election oversight procedures. It’s because they have seen the same trope before, over and over, and know the ways in which it is likely to come up empty.

Body Double

Let me give a somewhat ridiculous example. Perhaps you are a person who is blessed enough to not know the theory that Joe Biden has been replaced by a Body Double. (Yes, it’s a QAnon thing).

Here’s a picture that’s “evidence”. Joe Biden used to be left-handed, and now he’s right-handed.

I’m sure it wouldn’t take you more than 20 seconds to figure out what’s going on here, but the average fact-checker has solved this before they’ve even finished reading the claim.

“Picture is reversed,” they say. “Check out how the handkerchief pocket’s on the wrong side.”

Are they Sherlock Holmes reborn, that they can literally process this in under a second?

No, of course not. They’ve just seen this trope before, and they know its specific bag of tricks. Here is the same “body double forgets what hand to use!” trope used against the President of Nigeria:

The result after checking? Flipped photograph. And we also saw this in 2016, where pictures of Clinton before and after her collapse at a 9/11 event showed “subtle differences in her hairstyle and face” indicating (supposedly) that it might really be a body double:

So, Body Double? Or…

Yes, it turns out that different sides of your face look different.

This is one — just one — of maybe about two dozen things you keep in mind when looking at instances of the Body Double conspiracy trope. Consider this random bag of big and small issues with comparing photographs, each paired with a body double conspiracy where I’ve seen it exploited (not comprehensive by any means, just from memory):

  • Camera angle and lighting can make the same person look radically different (Derek Chauvin, too many others to count)
  • When a high resolution photo is compared to a low resolution one, a person looks different (Hillary Clinton)
  • Women in particular can appear to change height due to footwear (Melania Trump, Hillary Clinton, Avril Lavigne)
  • As we grow older, our eye sockets, nose, and upper jaw continue to change, and ears lengthen at a rate of a millimeter every four or five years (Paul McCartney, Avril Lavigne, Joe Biden)
  • Face-lifts or other surgery can alter the appearance of the earlobe (Joe Biden, President Buhari)
  • Skin blemishes sometimes go away, are removed, or airbrushed out (Avril Lavigne)
  • Stills from video (especially when photographed on a TV) often create longer or rounder faces due to distortion (Derek Chauvin, Melania Trump, Paul McCartney)
  • Lefties often use their non-dominant hand for certain tasks (Paul McCartney)

An average person might be able to come to all these through a rational process, but for a fact-checker familiar with the trope and its more popular instantiations it’s likely far more automatic. In fact, you can see that the two sides of this essentially work the same list. The conspiracist goes down the mental list looking for signs of changed height, shifts in dominant hand, changes to the shape of the ears, photos where the face seems longer or shorter, and sees if any of these differences can be found in the “field” of photos available. Those things are surfaced, wrapped in the trope, and disseminated. Then the fact-checker goes through the same mental list but from the other side of things: looking at the way the different types of “evidence” associated with this trope (earlobes, dominant hand, etc.) tend to pan out. E.g. “Yeah, this is the old Paul McCartney’s ears are different thing — you can’t compare old and young photos for that.” They check everything, of course, but they don’t start from scratch; they start from an understanding of the sorts of aspects of the field the trope tends to exploit. The creator uses their knowledge of the trope to construct, and the fact-checker uses their knowledge of the trope to deconstruct.

Plausibility judgments and discourse rules

A bit of a detour, but I promise it will make sense in a minute. As part of a larger argument in his recent book The Knowledge Machine, Michael Strevens points out a bit of a misconception about science. Or perhaps it’s a paradox?

Scientists must argue their ideas without any reference to the ways in which those ideas are personally plausible to them, or to their opinions of the competence of the person arguing against them. The rule is that the argument must be focused on the data, and this discourse norm is what allows science to progress. In order to make my case I have to produce evidence that you can then use to make yours.

The misconception, though, is this — because the norm does not allow the use of plausibility judgments (e.g., in the absence of evidence, do I think this is true or not?), it is often assumed that the scientific mind is one free of preconception and bias toward any idea. But anyone who has worked in science knows that this is false. A scientist without preconceptions, who does not listen to intuitions based on experience, is a horrible scientist. In fact, while the papers a scientist writes are important to the progress of science, it’s the ability of a scientist to make good judgments before the data exists that forms a lot of their value to the system.

Reading this book with COVID-19 raging, I couldn’t help but see that tension in how things have played out over the past two years. We came to a situation that was truly novel, at least as an instance, where there was no data early on and yet decisions had to be made. And these two visions of scientist value came into conflict. Why? Because of things like this — many scientists, asked “Will the vaccines provide at least a year of immunity?” said “We don’t really know.” (ADDED NOTE: here I am referring to immunity against severe disease). That’s a discourse rules answer that applies one model of a scientist’s value (rely on data) to a public issue. Alternatively, a scientist could say “Everything we know about both vaccines — of any type — and coronaviruses says, yeah, you’ll get a year out of it at least.” That’s another model of the value of scientists — that, having seen many instances of things like what we were experiencing, at least in some dimensions, they could make accurate guesses (at least about certain aspects of this, like the durability of immune responses). And while I know this is a controversial statement, I really do believe that a lot of people died unnecessarily and that institutional trust was eroded unnecessarily because many scientists selected the wrong vision of value. When data is available, it’s discourse rules time (scientist with no presuppositions). But in the absence of data it’s the ability of scientists to make plausibility judgments that provides value.

Stepping back we see that this isn’t just a pattern for scientists, but rather for all professionals that must make public cases. The rules are not as strict, of course. But reporters and fact-checkers encounter similar patterns and tensions as scientists. From the discourse norms side of the equation, the fact-checker must take each Body Double charge as unique and specific. Like a scientist they can use their knowledge about how things generally go to make informed guesses as to where to look for answers. But the fact check itself is not a list of those intuitions; it is the result of the investigations, data-driven, spurred by those intuitions. And once the evidence is there, the ability to make this case, from the data itself, forms the value of reporting and fact-checking.

But what about when the data is not there? What about when we are in that pandemic situation where the answer is needed now and the data is going to take a while?

I’d argue that that is where we are with a lot of quickly emerging misinformation. Take the first example that we opened with, the poll worker in Erie, PA “throwing away ballots”. I found this example on that day because, knowing there is a false claim or two like this every election, I had set up a process that looked for this search term about every 20 minutes. That prediction paid off, and within about 10 minutes of it going up I had spotted it (around 11:23). Even then it was quickly accelerating. But it still had only moderate spread when I screencapped it, and at the time I came across it, it was the only instance on Twitter.

It took me about five minutes to write up a report suggesting this was a likely hoax, and then promote it to Level 2, at 11:27:

Around the time I escalated it, about 15 minutes after the initial post, a number of other commentators reposted it using their own crop of it.

By the end of the hour, around noon my time, three things had happened. First, reporters and fact-checkers had confirmed there was no such worker in Erie, PA. Second, the initial Instagram post had been taken down. Third, one of the initial posts had been retweeted by Donald Trump Jr., and the whole thing had entered uncontrolled spread, as copies of copies of screenshots of Instagram posts circulated the net. And the reactions to this were, well, prescient.

Relevance to Mitigation Efforts

Now, I’m not arguing that content removal should happen on my (or anyone else’s) intuitions. In fact, I’m arguing the opposite. With the dynamics as they are, trying to speed up content removal is really a bit of a fool’s game, at least for certain types of content.

After all, this is an example of everything going right in a rapid response scenario. A piece of fakery so predictable that I set up a program to explicitly scan for it 10 days before it appeared, a discovery of it within minutes of it being posted, a report on it filed within minutes of finding it, an investigation of it which confirmed with folks in Erie that it was indeed fake, within 25 minutes of my report. And yet, none of it mattered.

One reaction — a wrong one — is that such content could be removed on a guess. That is, we know how this trope goes based on plausibility judgments of fact-checkers who have been in this rodeo dozens of times before, even though we don’t know the details of this instance. Is that enough to take it down? No, a thousand times no — that’s not a future anyone wants. Sometimes bad tropes turn out to be true. There’s a trope, for example, that emerges every time there is an explosion somewhere which claims that really it wasn’t an explosion, it was a missile. This trope has a history so bad it’s almost comical — conspiracy theorists would have you believe TWA Flight 800 didn’t explode, it was hit by a missile; the Pentagon wasn’t hit by a plane on 9/11, it was hit with a missile; the warehouse that recently exploded in Beirut didn’t explode, it was hit by a missile; the RV in Nashville last Christmas, of which we have video from half a dozen directions and a crater under its smoking remains, didn’t explode, it was (you guessed it) hit by a missile.

But what about MH17, the flight that exploded over Ukraine and was revealed pretty quickly to have been shot down — by a missile? Sometimes the trope proves true. Just as in this pandemic, many times the past was a good guide for scientists making guesses about how things would turn out — but sometimes the past wasn’t a great guide. So we want behavior that is informed by the sort of plausibility context a fact-checker calls to mind when seeing an instance of a trope, but we do not want summary judgments based on it.

And here’s where we find perhaps one application of all this theory around a trope-focused approach. Because what if instead of focusing on truth or falsity of content early in a cycle we focused on providing the sort of trope-specific context fact-checkers bring to the table? We don’t have the fact-check yet — but we do have the history of the trope that informs their plausibility judgments. We know, for example, that this trope of the “ballot-discarding public official” will appear in 2022 and 2024, and that we’ll go through the same pattern of discovering it and taking so long to disconfirm it that any subsequent actions are rendered meaningless. But what if in the meantime you could ask everyone liking it and sharing it to read a short history of the trope, and the ways it’s been used in the past? If they still want to tweet it after knowing that hey, this is the same scam people fell for three elections in a row, then okay, go ahead.

You would have a set of well-maintained trope and subtrope pages that zeroed in not on broad truths but on the very specific subtropes that are typically associated with misinformation. You think Melania has been replaced by a body double because of a “height change”? OK, fine, but first look at a page that describes how “the body double is a different height” played out with Paul McCartney, Hillary Clinton, and Avril Lavigne. When the fact-checking does not exist yet, provide the sort of context a fact-checker would start with.
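What would such pages need under the hood? At minimum, a registry keyed by trope and subtrope, with prior instances and a short context blurb to surface when a post matches. A rough sketch of the shape, with all entries purely illustrative and the URLs placeholders:

```python
# Rough sketch of the data a trope/subtrope context page might sit on top of.
# All entries are illustrative, drawn from the examples discussed above.
TROPE_REGISTRY = {
    ("body double", "height change"): {
        "context": "Apparent height changes usually trace to footwear, camera "
                   "angle, or stills pulled from distorted video.",
        "prior_instances": ["Paul McCartney", "Hillary Clinton", "Avril Lavigne"],
        "history_url": "https://example.org/tropes/body-double/height-change",  # placeholder
    },
    ("body count", "robbery but nothing taken"): {
        "context": "Robberies that end in a shooting often end with nothing taken; "
                   "the same claim appeared in Clinton Body Count lists in 1998.",
        "prior_instances": ["Seth Rich (2016)", "Clinton Body Count (1998)"],
        "history_url": "https://example.org/tropes/body-count/nothing-taken",  # placeholder
    },
}

def context_for(trope, subtrope):
    """Return the plausibility context a sharer would be shown, if any."""
    return TROPE_REGISTRY.get((trope, subtrope))
```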

Such an intervention is interesting to me because, far from being an impingement on speech, correctly designed it’s a service. Perhaps I see a claim that “COVID-19 was actually a bioweapon” that I want to retweet. As I go to do that, I am sent to a page that reminds me that while such a thing is possible, the idea that everything from AIDS to Swine Flu was a bioweapon is pretty old, emerges like clockwork, and a lot of the arguments people make for the claim have been debunked in previous iterations. Does this look like some of those instances, or does it look substantially different?

To someone who just wants to put out the propaganda of course, this isn’t much of a deterrent. But to a user who is legitimately trying to process the issue, such context would be welcome. And if those users make better decisions about sharing it, the downward nudge in virality could be an important factor in a multi-prong approach against misinformation.

I also like, of course, that any attempt to spread something like a body double conspiracy or a claim that a debate participant was wired up with a secret headpiece will necessarily lead to a bunch of people learning about the history of the specific trope and perhaps being inoculated against false future instances of it.

I should say that I have been hesitant in the past nine months or so to suggest interventions at all. So many interventions designed to capture blatant mistruths seem to catch unwitting people of good will while the bad actors find ways to skate around them (or game the referees). And lots of good ideas are hampered by poor designs and non-existent support (e.g. Twitter’s ‘appeals’ process, which for all intents and purposes doesn’t actually exist).

But providing end users “plausibility contexts” around specific (and very granular) tropes seems a much more promising approach than generic and useless “Go to the CDC”-type labels, and potentially much more responsive to emerging misinformation than claim fact-checking and content removal schemes. It’s going to start with us moving away from larger, ideology-bound narratives on the one hand and away from overly specific but slow-to-verify claims on the other. It’s this middle layer in between the narrative and the claim — the trope — that is both specific enough that it can be targeted and predictable enough that interventions can finally get a step ahead of the game.

OK, that’s it for today — the final installment of this (Part #4) is next and will talk about using pre-bunking around tropes to reduce misinformation around events associated with those tropes.

Tropes and Networked Digital Activism #2: The Portability and Persistence of Tropes

Part 2 of a series. Follows Part 1. Followed by Part 3.

So to review from yesterday:

  • People have a lot of stuff they can share or attend to online.
  • In order to create and process content efficiently, we look at things like “evidence” through tropes.
  • Tropes, not narratives or individual claims, are the lynchpin of activism and propaganda, whether true or false, participatory or not. They are more persistent than claims, more compelling than narrative.
  • The success of a given trope is often based on its “fit” with the media, events, and data around the event (the “field”). A trope must be compelling but also fit the media available to creators. In participatory propaganda (term from Wanless and Berk) tropes must be productive.
  • In participatory work, productive tropes are often adopted based on their productivity, and then retconned to a narrative. Narrative-claim fit is far less important than trope-field fit.

Today I want to show how tropes transcend narratives. A good trope that matches a field well will be used in many different contexts.

Over the next week I’ll go through some examples of tropes and the media environments in which they thrive. Most of these tropes are associated with misinformation, even if the trope does fit reality in many circumstances. Let’s start with Body Count.

Body Count

Trope: A hidden cause is behind the deaths of many seemingly unrelated people. This cause is usually being covered up by a political or corporate elite.

Field: Death reports, either current ones (as they occur) or historical ones.

There are many times when a string of deaths is both suspicious and likely related to a cause that is being covered up. W.R. Grace hid evidence for years that its mine operations in Libby, Montana were linked to a string of deaths and illness in the area. It is well-documented that reporters that have challenged Vladimir Putin have often met untimely deaths. In the Philippines, it is widely believed that a series of assassinations of drug dealers are secret extrajudicial killings linked to the Duterte government.

It sometimes takes time to work out that many deaths have a single cause, and those making such claims are sometimes doubted. We rely on careful reporting to find and document these connections, and any good reporter will take such emerging patterns seriously. In those cases the trope can be used for good.

This post, however, is not about those cases. It is about the use of the Body Count trope for propaganda.

In the hands of a propagandist, the Body Count trope works like this:

  1. Pick your villain
  2. As deaths are reported (your field) try to either find, imply, or manufacture a connection to the chosen villain.
  3. Aggregate the deaths, and with each new instance, claim that this is one in a growing body of deaths
  4. When there is pushback on any one death, point to the size of the list (the “body count”) and argue that even if only x% of these were true it would be devastating
  5. Body Count works well on two dimensions simultaneously: each new death is a “potential” connection which “eventifies” your claim and grabs attention. But the point is not the individual claim, but the “count” — the impression that there is a steady stream of suspicious deaths of such a volume that something is fishy here.

Clinton Body Count

A classic instance of the Body Count trope is the “Clinton Body Count” claim, that hundreds of people who have “crossed” the Clintons and died early deaths have in fact been assassinated, no matter what the actual cause of death was. While there were many drivers of the Clinton Body Count claim, the count was advanced in the early 1990s by far-right activist Linda Thompson. Her initial list from 1993 was built on over time by others, and goes on to this day. The rules for using this trope were pretty simple — start by finding a death in Arkansas, Washington D.C., the Democratic Party organization, or the military. Then find the connection:

  • Were they a Clinton friend? Insinuate they may have been about to “come clean”
  • Were they a Clinton enemy? Insinuate the death may have been payback.
  • Did they not know the Clintons, but were at some event that the Clintons were at? Perhaps they had “seen” something.
  • Did they not know the Clintons, were at no events with the Clintons, but worked for the Democratic Party? Maybe they found some paperwork, or hidden financial information.

And so on. Notice that the narrative here doesn’t play much of a role at all. You just take a death off the stack of recent or historical deaths, and find a connection. Part of what makes this work, as noted by Snopes, is that politicians are connected to a massive number of people compared to ordinary people, so connections are easy to find. And that’s the nature of the “field” — you can expect a nontrivial steady stream of deaths of people that are “associated” with the Clintons to occur as a matter of course. You keep an eye on deaths and see if you can make them fit. Even as I was putting this essay together today, another Clinton Body Count item came in:

The trope here is established enough that I don’t think it required any real effort on the part of the people who pulled this death into it; those sharing it have to do only minimal work to fit the event to the trope. “Found dead, being investigated” next to a Clinton connection is enough to trigger processing through the Body Count trope in its readership and encourage them to share. (One reason why any moderation effort is pretty futile — after a while a trope/field connection is so set that it barely needs signaling).

This example shows another key to the Body Count trope: jump in quickly, and point to the lack of knowledge of the cause of death in the immediate aftermath. It’s currently being investigated, as suicides are, but assuming it turns out to be a suicide, expect scare quotes to appear around “ruled” a suicide.

Also note that there is really no narrative here, except only in the fuzziest sense. What was the purpose of this supposed killing? Revenge? Apart from the insanity of the premise, there are likely thousands of reporters who have broken stories about the Clintons. Why are they alive? Why would they risk it? Why would they care at this point? How could they pull it off? Why, if they did do it, would they decide to make it look like a suicide? This is an instance of what Rosenblum and Muirhead call “bare assertion”, a “conspiracy theory without the theory”. And it doesn’t need the theory, really, because it has the trope.

Notice too that any death can be made a relevant death in the Body Count trope. This one is a suicide, but the Clinton Body Count has included car crashes, plane malfunctions (“suspicious”), deaths by heart attack of heavy smokers (always death by an “apparent” heart attack), combat deaths, etc. Often there’s the invocation of some characteristic which would supposedly make the death unlikely — they were “thought to be in good health” before the heart attack, they crashed their plane “despite logging many miles as a pilot”, they were “killed in a robbery, but nothing was taken.” Going unnoted is that no-one predicts their own heart attack, that people who log a lot of flying miles increase their chances of dying in a plane crash, not reduce them, and that very often robberies go wrong.

In fact, these different elements become subtropes of the Body Count trope. That “killed in a robbery, but nothing was taken” piece? If you’re hip to trends in 2016 misinformation you might think I am referring to the 2016 death of Seth Rich. I am, in a way. This was supposedly the reason his death was “suspicious” to conspiracy theorists. But note that the subtrope of “killed in a robbery where nothing was taken” was part of Clinton Body Count accusations — in 1998:

Tropes link to other tropes in a fluid way. You keep an eye out for a death that might fit the Body Count trope. Once you find one, the Body Count trope, in the ways you’ve seen it used, suggests ways to juice the claim. Was it a robbery (“supposedly”)? Does it fit the “but nothing was stolen” trope? Use that. So some things were stolen? OK, was it a “normally peaceful area”? All of these things are easily explainable, of course. For instance, in robberies that go “right”, it is very odd for robbers to leave money. But in robberies that go wrong (which includes most robberies where someone gets shot) that isn’t always the case, since the robbers often high-tail it for fear people heard the shooting. But the point here is not the answers, the point is that the “questions” raised are nearly as automatic as the trope itself. The suicide I mention here? Expect disinformers to go down the list of claims about the Vince Foster suicide in 1993 or any other Body Count death and see what fits.

  • Did they declare it a suicide quickly? Why wasn’t a real investigation done?
  • Did they declare it a suicide only after a long investigation? If it was so obviously a suicide, why did the investigation take so long?
  • Is there any way the death can be said to be unexpected? (Ignore that most suicides are unexpected)
  • Is there any plan that he made that was scheduled for after the suicide? (Ignore that most suicides are impulsive)

And so on. Again, narratives matter to the people producing, sharing, and consuming these, at least somewhat. It’s the Clinton Body Count, not the Paris Hilton Body Count (even though the techniques could be as easily applied there). But integration with the narrative does not drive the construction of these much. In fact, once the trope is set, the whole process works on something more resembling auto-pilot than directed creation, at least for the people that aren’t into meta-conspiracies like QAnon (more on tropes and meta-conspiracies in a future post).

Vaccine Body Count

Here’s the thing — all the same tactics used by the Clinton Body Count people? They’ve been used by anti-vaccine activists for years as well.

I’ll leave the historical tracing of these techniques to people better versed in the history than I. But consider the current vaccine-driven Body Count game being played, and how it matches almost beat-for-beat political Body Count games:

  • A high profile death occurs, for example the death of Marvin Hagler (or a near death like Christian Eriksen)
  • Anti-vaccine activists rush into the breach and scour social media for any indication that the person was vaccinated, or alternatively, just claim they were (which, given that most American adults are, would not be surprising).
  • They then take the “purported” cause of death and link it to a “known side effect” of the vaccine, or insist that the “alleged” cause of death can’t be right because the person was thought to be very healthy.
  • The “unexpected” nature of the death is everywhere highlighted, where “unexpected” is used to imply suspicious. The fact that deaths covered by the news are generally unexpected is ignored.
  • If the death just happened, they claim “doctors are looking into a suspicious death”. When doctors reach conclusions and those don’t link to vaccines, they say it was a cover-up, and ask why the investigation took so long if it was so simple. If the investigation is short, they ask why it was so short — was it in fact covered up? Why was it never investigated?
  • If the doctors don’t reach their favored conclusion, they look for comments from family that disagree with the doctors. If family doesn’t reach their conclusion, they find quotes from friends. If family at first wondered if it was due to vaccines, but now believe it wasn’t, they claim a cover-up. If family change their mind the other way, of course, it’s evidence.

None of this requires narrative-making or even much deep thought. And it’s almost bizarre how it’s exactly the same method as the Clinton Body Count despite coming from vastly different narratives. Let’s just think about Seth Rich or Vince Foster and replace “doctors” in those final bullets with “police”:

  • If the death just happened, they claim “police are looking into a suspicious death”. When police reach conclusions and those don’t link to foul play, they say it was a cover-up, and ask why the investigation took so long if it was so simple. If the investigation is short, they ask why it was so short — was it in fact covered up? Why was it never investigated?
  • If the police don’t reach their favored conclusion, they look for comments from family that disagree with the police. If family doesn’t reach their conclusion, they find quotes from friends. If family at first wondered if it was suspicious, but now believe it wasn’t, they claim a cover-up. If family change their mind the other way, of course, it’s evidence.

And so on. But what’s going on here is determined by the possibilities of the “field”, in both cases the steady stream of expected events that can be fit to the trope. And the process of finding them becomes so automatic that we just get into bizarre situations where Twitter has to run a trend like this every time an unexpected death or near-death occurs:

Twitter trend correcting claim Eriksen had been vaccinated.

Non-obvious harms of the Body Count trope

Finally, it’s worth noting that aside from the obvious harms of spreading misinformation, this sort of activity pollutes the information space and may make it harder for people to assess real harms. As an example, it is the case that the CDC is looking into a slightly higher than expected incidence of myocarditis in teens who have been vaccinated. It’s extremely rare, but it warrants more investigation.

One could imagine a world of ethical activists, who use the power of the Body Count trope for good. Such activists would not work from the “field backward” — finding any death and connecting it, no matter how spuriously — but from the cause forward — highlighting exactly the cases that seem to be involved, and perhaps even calling attention to attributes that might link them. Indeed, such activity, while it may annoy the medical establishment, has called attention to disorders, diseases, and effects of chemical exposure that historically needed addressing. Tropes can be powerful tools in that way.

And I shouldn’t say “imagine”. These activists are out there. But as I’ve mentioned a number of times — the process is so automatic I’m not even sure it’s activism at the wheel in many cases. There’s the field, coming in, and there’s the toolbox of tropes to process it, categorize it, frame it, and amplify it. A man collapses on the field or commits suicide, and 30 seconds later the Body Count trope is applied. Good signal is lost in a sea of ridiculous noise.


Part 2 of a series. Go to Part 3 or return to Part 1.