Tropes and Networked Digital Activism #1: Trope-Field Fit

Note: In this post, I jump back and forth between the use of tropes to frame events ethically, non-ethically, and in-between. This is not meant to be “both-sidesism”. Rather, it is meant to demonstrate something of the utmost importance to policy discussions about misinformation: there isn’t really a magical set of techniques associated with “misinformation” that can be discouraged, dampened, or banned. Rather, a group of people ranging from ethical activists to snake oil salesmen to politically motivated disinformers all use a toolbox largely shaped by the possibilities of the medium, and frame messages sent through it to work with its strengths and exploit its weaknesses.

After you read this you may want to read Part 2 and Part 3.


I’m watching TV with my partner, and, in the show we’re watching, a member of a gang has said he wants to quit. He’s married now, you see. New lease on life. That’s fine, says the gang’s boss, no problem at all, let’s go for a ride back to headquarters and talk about it.

As the ride progresses, it’s clear they aren’t going to headquarters. Streets get less familiar, and the family man gets nervous. In fact, the car ends up turning off into an abandoned quarry, completely deserted. The character is told to step out and to kneel: he’s betrayed the gang. Pleads for his life. Dramatic music as the gang leader raises the gun and squeezes…

“Gun’s empty,” I say to Nicole.

“Of course,” she says back.

Click. The gun’s empty. Family man exhales in relief.

Now here’s a question — what show were we watching?

If you’re an avid consumer of TV thrillers, the answer to that question is that it would be nearly impossible to narrow it down to one. Each of these elements appears in hundreds of films and TV episodes:

  • The new family man wanting to retire from a life of crime (or spying, or terrorism), but not allowed to leave the organization
  • The ride where the rider realizes “Wait a second, this isn’t the way to X”
  • The deserted quarry, warehouse, or forest where a character pleads for his life
  • The assassination that turns out to be a scare tactic
  • The empty gun

These elements are so common that even in combination it can be a bit difficult to say what show we were watching. And once you get into combinations that aren’t exactly that sequence, you’re talking hundreds of shows.

These reusable blocks aren’t narratives. They are ready-to-use building blocks for any narrative of a suitably matching genre. They are commonly known as “tropes” and they are an important key to the understanding of participatory propaganda.

Enter the Trope

The word “trope” can mean an awful lot of things. The word itself just means something commonplace, available for reuse. Rhetoricians can be protective of the word, often seeing it as a synonym for a figure of speech. In cultural studies, “trope” often refers to repeated patterns of depiction in literature or film, particularly those that are socially harmful — the “trope” of the devious bisexual, for example, or the trope of the “white savior”. But common parlance in recent years has used trope in a related way popularized by the site TVTropes: a reused “narrative device or convention used in storytelling or production of a creative work.”

Tropes serve two purposes. The first is obvious, hopefully: they allow more efficient construction of narrative. A writer, looking to put together a compelling scene, has a toolbox of things proven to work. That scene where the character realizes while in the car “Wait a second, this isn’t the way to (wherever they thought they were going)!” — that scene works, it’s been tested before. It has the nice impact of a growing realization by the character (and perhaps an earlier realization by the audience). It gets reused because it has proven itself. In this way, it functions like a narrative Pattern Language, a design approach that has been used in everything from software, to architecture, to pedagogy.

But tropes serve another purpose as well — they are a shorthand for the audience. As Steven Johnson noted in 2005, TV got much, much more complex from the 1950s to the 2000s (and has gotten even more complex since). Even a simplistic TV plot asks viewers to weave multiple intersecting narratives and different timelines together in ways a 1970s viewer would find overwhelming. Nowadays, a scene that would have been five or six lines of dialogue 30 or even 10 years ago is reduced to three or four seconds of camerawork and a head motion. Plots are dense.

Johnson attributes our collective ability to consume more complex entertainment to viewers’ increased willingness to take on more cognitive load, but that’s not quite right. As the story of Nicole and me shows, we’re not exactly cognitively overloaded, even though the show we were watching, the solid but unremarkable Swedish thriller Blue Eyes, is many times more complex than the most demanding 1970s espionage film. What’s going on, mostly, is that TV shows can build more complex plots because a shorthand of tropes (as well as the comprehension of other bits of filmic grammar) has developed over time. That shorthand reduces cognitive load for the viewer, allowing for more complexity. A character who has just committed a betrayal gets picked up, looks out the window of the car and furrows their brow. They don’t even need to say “Hey, where are we going?” The viewer knows what’s up, and can look for deviations or variations from the expected path. (This is not simply a progression for television — other art forms over time develop such tropes and move towards more efficient and complex storytelling.)

Tropes can be simpler as well (and some of the ones that we’ll talk about in a minute are quite simple). That scene where the detective says — hold on a second, rewind the video, and ENHANCE. Plot problems are solved with tropes that have a proven history of drawing in the viewer — “Here’s the weird thing, detective, the gun we found? It was loaded with blanks…” When they become overused (e.g. “The call is coming from inside the house!”) they become clichés and cease to be effective, but in the time between their first introduction and their relegation to the cliché graveyard they live a long productive life, both in direct reuse and the permutations they spawn.

Digital Activism and Tropes

In the past few years, researchers have given increasing attention to the relationship between elite framing of various issues and the broad participation of a non-elite, digitally networked population in producing and disseminating evidence for those frames (see Wanless & Berk’s work on Participatory Propaganda, for instance, as well as Asmolov). These patterns are not confined to “bad actors”, but are generally patterns around how people advance themes and narratives they find desirable. Some of what’s below is my own particularly trope-focused take, but some of these general patterns are also identified in An Xiao Mina’s excellent From Memes to Movements.

First, let’s start with a simple trope that was not generally used to spread misinformation, but will demonstrate the dynamics at play here, which are common to all distributed networked activism and propaganda. (Importantly, I am using propaganda as a neutral term here.) Back in 2018, a white woman named Jennifer Schulte called 911 on a Black family using a grill in a park, purportedly because they were in the wrong section of the park for grilling. A woman observing this, Michelle Snider, confronted her about the inherent racism of her use of an emergency number to deal with this minor park policy infringement, while filming her reaction. Over the course of the video the situation escalates, mostly due to the actions of Schulte, and her answers sound increasingly bizarre — she’s really calling 911, after all, because there have been lawsuits about the safety of wrongly disposed charcoal. That’s her concern, that’s why she is harassing these Black people. By the time the police arrive, it’s Schulte who is in tears, claiming she was the person harassed, the true victim.

BBQ Becky, as she was named, became a meme, photoshopped into a million places, portrayed in a skit on Saturday Night Live. For many Black people, she was just one example of the way that white people use the force of the state to control and threaten Black folks, for whom police interactions carry disproportionate risks of jailing, injury, or death. The video itself supported a complex narrative of how systemic racism works, where white people treat the police force as their own security guards to support goals which are about white dominance and control of purportedly public spaces. It was, in its way, evidence supporting this narrative.

People don’t remember this, but the full video of that interaction? It’s 25 minutes long. The situation is undeniably a result of racism, and legitimately dangerous to the family she is reporting, but the video itself is messy, with a good deal of it being about a business card that Schulte refused to give back to Snider. In the longer video, Snider needles Schulte, and becomes a participant in the drama. But as it pinged around Twitter and elsewhere, both as a “discourse” and a “meme”, the essential outline of it became clearer — a white woman, calling the police for a minor perceived infringement in a public space, acting as if this is the most normal thing in the world. And thus the larger trope was born.

Very quickly other incidents followed. A month later, a woman is caught on video phoning police on an eight-year-old girl for selling water in front of her own apartment without a permit. Notably, this video is shorter, just 15 seconds, and it references the #BBQBecky meme by calling the police caller #PermitPatty. It goes viral, and what had been a singular meme is now a deeper character trope, one which explicates a chosen narrative concisely. (Again, I do not use narrative pejoratively here.) Others follow: Golfcart Gail, Pool Patrol Paula. Some differ in the dynamics, or the actions, but they all follow the basic pattern.

We can see this process as just another level of memetic production and evolution, but I find the concept of tropes more helpful. This pattern has a character trope, combined with a specific, repeated scene trope. And on one level, the dynamics of this are far older than the web. Tropes have been helping storytellers tell stories since before recorded history.

The first way such tropes help is with selection. It is simply the case that Black Americans suffer many adverse impacts of racism in a given day or week, many of which could be filmed or shared. But as with a screenwriter choosing from many possible scenes to advance a narrative, it is not immediately apparent what sort of events are compelling to an outside audience. Tropes are tested, and proven to be compelling in ways that non-trope media is not. There’s a recipe that works, at least until it gets old. Tropes, in this way, help us spot compelling scenes, locations, characters that we might otherwise consider ordinary (if unjust).

Perhaps more importantly, as a trope becomes legible to an audience, the audience becomes better at both comprehending content associated with the trope, and understanding what content is likely to do well with the followers they share it with. While I do not have the data to prove this, my experience has been that when a new trope emerges, the first instance of that trope meanders a bit through the network. Often, as with BBQ Becky, there’s a “discourse” as to What It Really Means. Subsequent instances are often more concise and move through the network quickly. People know what it is, know what it means, often with only a glance.

Tropes also have deleterious effects, even when used to illustrate real and pressing events. By turning everything into a quickly understandable shorthand, they aid in comprehension but at the cost of flattening experience. I’m not the one to judge, of course, but to my eyes an Amy Cooper, calling 911 in Central Park and talking with distress about a Black man threatening her, is something much more sinister than a BBQ Becky. Run through the trope, however, she risks simply becoming another of a series of equivalent scenes. At the same time, I would predict that had a largely white Twitter audience not been exposed to the trope of BBQ Becky and its kin, they might not have been able to process the Amy Cooper video as an instance of something systemic and dangerous at all.

It’s important to note as well that while instances of a trope must align with a given narrative in order to be attractive to core audiences, tropes are not necessarily bound to a narrative or ideology. And for the most part they don’t gain their power (as tropes) from the narrative.

Turning Narratives and Themes into Scenes

So far we have sketched out a basic narrative construction box. Up top, there is a narrative. In a film this is the larger plot, often summarized by the so-called logline. Logline of a relatively famous film: “When an optimistic farm boy discovers that he has powers, he teams up with other rebel fighters to liberate the galaxy from the sinister forces of the Empire.” (Star Wars, obviously.) Narratives in propaganda and activism don’t work quite like this, but with some modifications they do: “A shadowy elite pushes suspicious drugs and various lockdown measures during a pandemic as a first step toward world totalitarianism”. Some traditions of cultural studies use the term narrative more broadly, but in literary studies narratives usually involve a certain set of antagonists or protagonists moving toward a goal.

More vague is the “theme”, and it’s often what people are talking about when they talk about narratives. A theme is often a general statement about how the world works, at least part of the time, but there’s no end goal to it. “Hard work pays off” is a theme. “The world is controlled by a shadowy network of corporate elites” is a theme. On the other hand, “Trump is attempting to overturn the election” or “Vaccines are really a plan to implement a totalitarian government” are, to my eyes, narratives. They represent not general statements on “the way the world works” but fuzzy claims about something that is happening.

It may be this distinction is too precious by half. The important thing is this — social movements have narratives and themes, but social media is propelled by events. Something happened, something was discovered, something was discovered to have happened. Something is happening. Events are so much the currency of social media that if you want to convey an established fact on Twitter or Reddit, you’re likely to present the fact as an event, prefacing it with something along the lines of “I was today years old when I realized that…” You turn something old into a new event by framing it as a discovery. If you want to discuss a general pattern of things, you’re likely to begin by hooking that analysis to a recent event. Occasionally someone gets away with something without eventifying it — “Some notes on the McCartney/Lennon partnership and what it can tell us about distributed wiki collaboration.” But even there we both know that it will do much better if it begins “On the occasion of Paul McCartney’s birthday, some notes…”

This is not a new phenomenon, nor a particularly internet one. As far back as 1961, the American scholar Daniel J. Boorstin noted the rise of pseudo-events in response to an ever-hungrier news cycle. According to Boorstin, since the media spotlight almost exclusively privileges events, marketers found ways to gain the spotlight by creating events designed to convey what they wished to convey (or even to just get a piece of the viewer’s attention). Think about the yearly Apple product launch event, for example. The event consists of revealing a bunch of product information that could be uploaded to a website, or features you could discover next time you walk into an Apple Store, at the point you’re ready to purchase something new. In fact, there isn’t actually a reason for Apple to work on a yearly release cycle across all products. There’s no reason that an iPhone should be on a 12 month release cycle and unveiled the same day as a new activity monitor or exercise app or new chip spec in an upcoming MacBook. But the conversion of product changes and updates to compelling events is a large part of what drives Apple’s success. People talk about the genius of Steve Jobs as a designer, and maybe that’s true. But there are a lot of great designers. Jobs’s true genius — and what he is most remembered for, if people are being honest — is the way he understood that products had to be designed with an eye towards this sort of eventification. Designers had always balanced user needs with the needs conveyed by a sales force around what they needed to sell a product. Jobs understood that in an age of scarce attention products had to be designed with an eye towards the compelling events they could create. (Rule 1: delete ports 18 months before it is really viable to create an impassioned, media-consuming public debate about whether you deleted the ports too soon).

Boorstin’s analysis didn’t stop at marketing. In fact, he critiques a lot of the events that have become sacrosanct political traditions. Press conferences, presidential debates, and even interviews are generally inefficient ways to communicate policy, yet part of a larger trend dating back to the adoption, in Boorstin’s view, of a 19th century telegraph-influenced model of news production. In a world obsessed with providing the most up-to-date news, non-events must be framed as events. As news cycles became tighter and more visually driven, the process, in Boorstin’s view, accelerated.

While Boorstin was particularly interested in the idea that pseudo-events were planned, with an eye towards how they might capture the media spotlight, such critiques were expanded by scholars such as Neil Postman, who advanced a more general theory of the impact of the fascination with novelty on news production and consumption. Again, in the interest of not turning this essay into a small book we’ll move on — the point here is that these are long-noted trends: as communication technology has increasingly favored and privileged currency, ideas, products, claims, and social issues must be framed as discrete events in order to be disseminated more broadly, at least in comparison to an earlier culture. (One may of course argue that currency has always been a staple of orality, but, again, moving on…)

When it comes to social media, this means that activists must convert the narratives and themes they care about into a series of events if they are to be disseminated and have impact. There are a couple of different modes for this. The most obvious is what we saw with BBQ Becky and the videos that followed. In that case a series of events illustrating broader themes and specific narratives were captured and amplified. Notably, once it was clear that such events were compelling and aligned with a desired narrative, they were captured and amplified regularly.

And this is why tropes are so fundamental to the understanding of persuasion, propaganda, and activism online. Tropes are the mechanism through which we reliably convert narratives and themes into scenes.

I use scenes as a term instead of events, because it captures for me the full range of elements which are attended to in these videos, though I don’t mean to indicate that tropes only produce scenes. But here the term is helpful. Like our “empty gun in the abandoned quarry” story that began this, there’s a character, there’s a location, there’s an action. One of the powers of thinking in tropes is that certain combinations are found to work. The “Karen”, for example, is a character trope recognizable at the core of most of the BBQ Becky type videos. It pairs with this plot trope around the policing of public spaces. The phone call is an optional but desirable element. The nature of the public location works well with the character trope — the impact here is partly the disparity between the either genuine or performed distress of the “Karen” and the way those in the surrounding location are just trying to get on with their day. None of these things are required, of course, but to the extent they are there, both creators and viewers intuitively sense their value to the event.

This isn’t just a theory. One of the fascinating moments in the trope I’ll call here the “Karen Police” occurs in October 2018, six months after the trope was established. A woman falsely accuses a young Black boy of groping her. Unlike the BBQ Becky video that launched this trope, the woman is not questioned or pursued. She’s filmed making what would later turn out to be a fake call to the police, and the creator alternates between filming the calm crowd looking at her like “WTF, lady?”, the upset boy, and her over-the-top performance of victimhood. It’s cleanly done in all the ways the initial BBQ Becky video was not. But the kicker is this — while she is still on the phone you hear a man’s voice on the video from behind the camera. It’s not clear to me if it is the person filming or someone next to them, but they are close.

“Cornerstore Caroline”, he pronounces.

Again, I don’t mean to downplay the real and upsetting elements of the video. But life in the U.S. for Black Americans has many upsetting elements and incidents, not all of which are captured or shared. What is interesting to me here is that, even in the middle of this, the people filming are capturing it as the trope. They are fully aware of the value of what they are capturing here, and its likely trajectory across the internet. Now established, the trope indicates what content to capture and what framing to share it under, and it makes the content less cognitively demanding for an audience to process and disseminate.

Trope-Field Fit

In participatory propaganda, tropes solve a specific problem for creators. Scenes — whether captured video, a news article plus framing, or a claimed discovery of “something fishy in the data” — are often drawn from some larger store of experience, existing media, data, news stories, or larger events. But there’s an awful lot to choose from.

In 2020, for example, the overriding narrative of the Trump campaign was clear from early on: the Democrats were using a variety of coordinated efforts to “steal” the election. Somewhat differently from 2016, this false narrative was interwoven with themes that a “Deep State” government, largely captured by Democrats, was a key force in undermining Trump’s success. The “steal”, from early on, was not the actions of individual illegal voters (as claimed in 2016), but of a vast conspiracy of government officials.

There was already an existing trope that fit this narrative. The trope “ballots discarded/ballots found” focuses on the purported discovery (event!) of ballots either not counted, or the appearance of mysterious ballots late in the vote-counting that put an opponent over the top at the last minute. The trope is associated with certain scene locations to make it more compelling. In the 2008 Coleman/Franken contest it was a “mysterious box of ballots found in the trunk of a car”. In 2016, it was “boxes of fake ballots” for Clinton found in a warehouse. In 2018, in a Florida race it was a box of ballots found behind a school after polling closed, and also in the back of an Avis rental car at the airport. In a Massachusetts primary race, it was a box of ballots found in a maintenance closet.

A fake news story in 2016 built on the box of ballots trope

The trope is useful in a number of ways. First, it turns a claim into an event quite nicely. There’s a full scene here — someone discovers a mysterious box in a dodgy location. Second, as seen above, it can fuel a range of media. It can direct the production of fake stories (like the one above), but more importantly it can be used to frame innocuous incidents as something more sinister. Very often the deception is in immediately jumping on the “box of ballots found” story before the box’s contents are inspected. The procedures involved when a potential box of votes is found move forward at a rate slower than internet speed. The box must be secured, and not opened until the correct people can be present to inspect it. In the case of the Florida ballots supposedly in a rental car and behind a school, for example, after quite a bit of generated outrage over the incidents, it turned out the boxes contained polling supplies, but no ballots. (In many districts it is common procedure to reuse the box used to ship blank provisional ballots to a polling location to load up supplies at the end of the night, precisely because the remaining provisional ballots are counted, secured, and transported from the site under a different process.)

But what makes the trope really work here is the range of media and events that can be used to create scenes. Take the above picture, used in the fake story about Clinton. The man who created the story explains his process:

A photograph, he thought, would help erase doubts about his yarn. With a quick Google image search for “ballot boxes,” he landed on a shot of a balding fellow standing behind black plastic boxes that helpfully had “Ballot Box” labels.

It was a photo from The Birmingham Mail, showing a British election 3,700 miles from Columbus — but no matter. In the caption, the balding Briton got a new name: “Mr. Prince, shown here, poses with his find, as election officials investigate.”

New York Times: From Headline to Photograph, a Fake News Masterpiece

There are lots of pictures of things that are ballots, or that can be portrayed as ballots, online, just a Google search away.

Even better is this — after any election you are guaranteed to find some instances of boxes discovered that are associated with polling places. There is a guaranteed stream of events you can use to create this sort of scene after every election; you just need to keep an eye out in the news, and be quick with the sharing, before they open that box and find that it has nothing to do with ballots. This is similar to the “Karen Police” trope — there is a guaranteed stream of events of white women calling the police that will be available for framing, once you know the trope and keep your eye out for it.

I call this pattern “trope-field fit” and believe it is a crucial part of participatory propaganda. A trope has to produce compelling events, of course, whether real or fake. It should align, at least marginally, with the narrative or themes you want to advance. But if you really want it to be participatory, it needs to be a trope that can pull from a known store of events, media, news stories, or the like. The trope and the media to search have to be a good pairing, with the trope telling activists what to look for, and the field providing enough usable examples that searching is not in vain.

This isn’t to say that a single event can’t serve a propaganda function, of course. Tropes are powerful, even if not participatory. One interesting parallel to the Karen Police videos came from the right wing in 2015. At a University of Missouri protest on racial discrimination, a liberal faculty member was filmed telling a student journalist that they couldn’t film the protest, that they needed to leave. She then asked if she could get some “muscle” to remove him.

I’m not here to debate the incident or the larger question of filming and protests. Let’s just say I’m not a fan of this woman, while at the same time being aware that the larger narrative around this video had many problems.

But it’s remarkable to me how similar the structure here is to the Karen Police videos. A person in a public place, one that they have a right to occupy, is approached and asked to leave by a seemingly hysterical woman; when he refuses, she calls for some “muscle”. It’s connected to a different narrative, of course. The “muscle” piece of this is compelling to at least some viewers because it taps into longstanding racist narratives about a weak liberal elite maintaining power through the use of Black muscle to oppress “true” Americans. It’s aligned with a fundamentally different narrative. But structurally, it’s strikingly similar.

In keeping with the general power of this trope, on both the right and the left, the video was also a remarkably effective piece of propaganda (I use propaganda here in its non-judgmental sense of messages designed to create sympathy with a given worldview). In many ways this video marks a shift in the focus of right-wing activism more generally in 2015, one which coincided with Trump’s rise to prominence. The issue of liberals as the real enemies of free speech is centered here, and higher education is rediscovered as a central villain. In the months after this video was replayed on Fox News in a near loop, Republican support for higher education plummeted, and many mainstream media outlets began to run columns on threats to free speech from the left. Protests, oddly, became seen as a suppression of free speech rather than an expression of it. I can’t pin all of that on this video, but there is some special sauce to it that most definitely aided in accelerating all of this. And part of it is due to how it takes various established tropes and creates a compelling scene.

For all its effectiveness, however, this did not become the first in a long line of videos of liberal women attempting to remove reporters or those with opposing views from public spaces. And that’s not because people didn’t want more examples. It’s because there just isn’t a predictable stream of events like this. It’s lightning, striking once. You could tell people to keep an eye out for this, but the incidence is going to be so low that they really shouldn’t bother.

Perhaps there were a couple of attempts to duplicate this success in the weeks that followed that I don’t remember. If there were, none of them stuck. Instead, it was the trope of the peaceful reporter attacked by “antifa”, a trope that could draw from a more predictable (and often engineered) set of events, that would eventually take root as the participatory propaganda trope of choice around these issues, supporting this nexus of themes. My view is not that that trope was more compelling, but that it was just a better fit for the sort of video that was generally captured at protests (the “field” of media to mine).

When Tropes Bend to the Field

As mentioned above, trope-field fit is crucial in participatory propaganda. You want a compelling arrangement of tropes, but if you really want to supercharge community production of events/scenes around those tropes, the trope has to work with a given field of media, data, or predictable stream of events.

Take the example of the “ballots discovered/ballots discarded” trope mentioned above. In its traditional version it’s a trope for after the election, when there are likely to be a number of stories about someone, somewhere finding a box of something. Before the election, on the other hand, it doesn’t get much use. You have to have the election before you can find “discarded” ballots. However, in 2020, a new version of this appeared — the discarded mail meme. Since mail-in ballots were in the news, every event where mail was found discarded could be portrayed as a “ballots discarded” instance, even if there were no ballots found. And if the field of current events proved not to supply enough examples, there was a wealth of videos and news stories and photographs from the past 20 years showing all sorts of mail being dumped by postal carriers, media that could be reframed as current and election-related.

It’s worth noting that with the exception of a weird event involving what appeared to be a postal worker keeping bags of mail at their home, almost none of these events had anything to do with the election. But the dense field of past media and current incidents combined with the discarded mail trope produced one of the more participatory propaganda efforts of the election, largely because the fit between trope and field was so solid.


If this seemed to end suddenly it’s because it did! I had to break up this writing into a few parts. Up next: Part 2 (on the durability and portability of tropes) and Part 3 on tropes and mitigation efforts.

Teach Information Architecture If You Care About Trust

Today’s activity revolves around a tweet reporting that National Geographic (through the Society) has recognized a fifth ocean. I use this tweet here as a jumping-off point, but if you want to run it on another platform you can find examples anywhere.

The Today Show’s tweet making the claim.

Like over half the prompts we use with SIFT, this is a true prompt, and it shows how well SIFT works. If you know the Today Show, you can INVESTIGATE the source through hovering and take into account the blue check, and that’s good to go. If you don’t trust the show or don’t know them, you could TRACE this back to the source, in this case National Geographic, and make sure this is a correct summary of what was said. If you’re just interested in the claim that there are five, not four, oceans, you can FIND better coverage and learn about the larger case for five oceans — which is not a new thing at all, even if the boundaries are under dispute.

Wikipedia index of the Southern Ocean page showing long history of term.

I walk through the process here, beginning to end. Give it a play!

One thing I’ll point out here — when you do activities like this you introduce students to our current knowledge infrastructure. As shown in the video, they learn about NOAA, they learn about the IHO. And I can’t stress enough how important this is. Oftentimes the first time a student will hear about NOAA, for instance, is in the context of a divisive issue like climate change. Getting students familiar with various agencies and professional organizations, what they do well and what they do poorly, is important as students come to future debates where the nature of these agencies is often misrepresented.

We spend all this time asking “Why don’t people trust agency X on issue Y?” and sometimes there are good reasons for that! But a lot of the time the question we should be asking is “Why should someone trust Agency X if the only time they ever hear about it is when it is mired in political controversy?” We spend so much time teaching students either facts or methods or concepts in a domain like science, and very little time introducing students to the knowledge-producing organizations and social processes in those fields, which is arguably more important info for the average citizen.

I don’t go deeply into this in the video, but as your students click around through the search results on this task, you should get them to look up NOAA, by using the About this Result function in Google, but go the extra step and pull up the Wikipedia page. Talk about the various things that NOAA does, the role of these sorts of agencies in producing knowledge, the vast array of equipment and sensors. The data produced, even when not used by NOAA directly, makes the work of many other scientists possible.

Then, maybe when your students do come across NOAA in a politicized context they’ll have some background. But if we don’t teach them, how would they know?

Twitter Should Cancel the Appeals Process or Make It Work (also: I’m in Twitter jail!)

Welp, I was going to write a much more nuanced post about problems with the Twitter appeals process, but I’ll just put this here instead for now.

I got banned wrongly for a tweet last week where I was talking about the history of conspiracy theory and its relationship to current COVID-19 misinformation. Someone had posted that conspiracy pyramid that shows the relative harms of conspiracy theories and asked where fluoride might fit. I replied saying I thought that fluoride definitely belonged in the 5G layer — not anti-Semitic but definitely part of that dangerous John Birch Society politics/medical-conspiracy stew. A few minutes later I was hit by this.

Now, I want to be clear. This stuff happens to other people quite a bit, particularly women academics and activists, due to the gaming of reporting features by trolls. And it happens to lots of regular folks as well due to the algorithmic nature of enforcement — I saw someone go to Twitter jail once for tweeting “I hope Trump chokes on his own uvula” (incitement to violence!). So none of this is particularly noteworthy. This has been broken a long time, and there’s a lot of people I respect who say it may not even be fixable. I’m more optimistic and think it could be made workable, but even there it’s always going to be imperfect: there’s some collateral damage with even the best moderation regimes.

But in any case, I decided to opt for the appeal. After all, I’m a well-known expert on media literacy and COVID-19. My pinned tweet is actually a OneZero article on my efforts to fight misinformation on COVID-19, an effort I got involved with in mid-February 2020, before Twitter was even thinking about this stuff. Etc., etc. I expected the appeal might take three days, maybe. So I appealed.

Now, appealing isn’t cost-free. In fact, one of the primary ways reporters contact me for information on how best to fight COVID-19 misinfo is through Twitter DMs, and when you decide to appeal you lose all access to your DMs, all ability to browse, everything. (And perversely, all those DMs just go into a bit of a black hole, there for when you get privileges back, but with no one who DMs you knowing that in the meantime you can’t see them.) Getting banned for alleged COVID misinfo significantly affects my ability to work on real COVID misinfo. On the other hand, I don’t want to start accruing a bunch of black marks that might get me banned sometime down the road.

Anyway — it’s been a week now. I’ve hesitated writing this because I actually support stronger moderation on Twitter and, for the love of God, this isn’t an “I’ve been censored” story. But as always with policy, stronger isn’t enough; smarter means much more. And an appeals process that is in effect a week’s ban isn’t really an appeals process at all. It would make more sense to me, and everyone else, to simply give up the pretense of an appeals process on individual tweets altogether, until Twitter can actually run one effectively. Had they not offered one, I’d have treated this as an algorithmic goof I had to live with; instead I lost a week on Twitter which I would have been using to actually advance anti-misinformation practices.

So that would be my recommendation to Twitter. Either cancel the appeals process, apply it narrowly to suspensions, or speed it up. At the very least, inform people engaging in it what the average time for resolution is. And while my suspension probably won’t derail national or international efforts against COVID-19, I can’t help but think of all the medical researchers and public policy people out there using Twitter to communicate and collaborate. So as much as Twitter seems to think any deference to academic culture is a thumb on the scale, I really hope they can have someone write up a list of experts more important than me and take a bit more care before they ban them. I assume what I was hit with was based on a programmatic scan, not trolls gaming reporting. But the anti-vaccine trolls are out there and I know they are reporting the heck out of anyone that gets in their way. If Twitter doesn’t make a nominal effort to protect those researchers, there will be many more high-profile (and damaging) bannings to come.

(Incidentally, the report does not actually tell me whether I was flagged by a programmatic scan, triggered by having 5G and vaccines in the same tweet, or via a user report, which is very bad in terms of both transparency and utility. I actually need to know whether it is a troll report or an algorithm. If it’s an algorithm, it’s a lightning strike, and I go on the way I have. If the trolls have found me, that’s a different problem, and one I need to be alerted to.)

If the appeal doesn’t come through soon, I’ll remove the tweet, which I guess means I’ll see you all in about 12-24 hours. (UPDATE: I have removed the tweet and am back)

One final note — I also hesitated putting this up because I don’t want to field questions from reporters about it. So many women and people of color deal with this sort of issue constantly, due to targeting by trolls. Talk to them, not me. Maybe actually phone up a sex worker and learn about the crazy path they have to thread on various platforms to avoid being shadowbanned, or social justice activists whose every sarcastic tweet is pored over and brigaded by trolls looking to get them kicked off the platform. Also, as I said, I’m broadly supportive of Twitter’s efforts to keep COVID misinfo off the platform. To paraphrase the famous Obama quote, I’m not against moderation; I’m against dumb moderation. But if you are a reporter looking to talk about moderation challenges, I highly suggest talking to people besides me. You can start with Sarah T. Roberts on what really goes on behind the scenes, and Safiya Noble on the issues of algorithmic enforcement (which, again, are felt less by people like me than others). I find Siva Vaidhyanathan’s thesis that the system cannot actually be made to work a bit more pessimistic than my take, but one that deserves more airtime. And of course for general policy perspective on platforms, my colleague at the Center for an Informed Public, Ryan Calo, is always a good call.

If on the other hand you want to talk about my work on COVID misinfo and the new and effective model of digital literacy I promote, feel free to email me at michael.caulfield@wsu.edu. Direct messages at @holden probably won’t be up for a while.

Normie Infiltration

Still banned from Twitter (over a dumb mistake their algorithm made), so I’ll just put this here — I am finding it really hard to figure out if some of these QAnon groups are genuinely splintering at the moment over things not playing out as they were told, or if the groups have been infiltrated by normies posing as disaffected QAnon supporters. I think honestly it’s a bit of both, I just don’t know the ratio.

Screenshot of folks losing faith on the Great Awakening site.

If normie infiltration plays even a small role, that’s honestly fascinating. Infiltration not by radicals but by centrists. Strange times.

Microclout

I have a couple people in my online social circle who were over the past month telling followers to “just watch” what would happen on the 6th, when everybody but them and their followers would be surprised that Joe Biden didn’t become president. At first, Mike Pence was going to heroically pull some imagined maneuver. Then it was another theory. But the idea from the posters was the same: remember who was right and who was wrong, they’d ask, when this all happens.

I don’t think they were expecting what happened to happen. But I think they were doing something that feels very much like clout-building: taking a gamble on being the one person who seemed in the know, because the rewards would be significant if they turned out to be right.

There’s talk right now about the number of social media influencers at the Capitol Insurrection. A lot of the people leading it were media stars, and it’s difficult to know how much of it they did for their brand, and how much was for the desired result.

But I’m not sure those dynamics stop at a certain floor of users. It seems to me that everyone has at least a few people in their online circles who are approaching issues around these events and conspiracies related to them as a brand-building process. In that case, can we really say the motivation is as simple as “confirmation bias”? Or would we be better off thinking of these dynamics around issues of personal brand-building, its incentives and disincentives?

When it comes to disinformation, the public is a vector, not a target.

Disinformation has always been about getting elites to do things. That’s the point so many of those who have looked at what percentage of people saw what on Facebook have missed. The public isn’t a target — it’s a vector (and it’s not the only vector).

Hopefully, as we watch what’s going on today, people can see that now? We track spread, but the real measure is penetration into groups that either make decisions or exert broad public influence. Or exert influence over those with influence.

Whether it’s our President who is talking about “shredded votes” in Fulton County, the politicians frightened of a small but heavily deluded set of future primary voters, or health care workers starting to plug into antivax networks due to COVID, that’s what to watch.

And by that measure, I’m sorry to say, we’re looking increasingly fucked.

Control-F and Building Resilient Information Networks

In the misinformation field there’s often a weird dynamic between the short-term and long-term gains folks. Maybe I don’t go to the right meetings, but my guess is if you went to a conference on structural racism and talked about redesigning the mortgage interest deduction in a way specifically designed to build Black wealth rather than intensify racial wealth gaps, most of the folks there would be fine with yes-anding it. Let’s get that done short term, and other stuff long-term. Put it on the road map.

In misinformation, however, the short term and long term people are perpetually at war. It’s as if you went to the structural racism conference and presented on revised mortgage policy and someone asked you how that freed children from cages on the border. And when you said it didn’t, they threw up their hands and said, “See?”

Control-F as an Example of a Targeted Approach

Here’s an example: control-f. In my classes, I teach our students to use control-f to find stuff on web pages. And I beg other teachers to teach control-f as well. Some folks look at that and say, that’s ridiculous. Mike, you’re not going to de-radicalize Nazis by teaching them control-f. It’s not going to address cognitive bias. It doesn’t give them deep critical thinking powers, or undo the resentment that fuels disinformation’s spread.

But consider the tactics used by propagandists, conspiracy theorists, bad actors, and the garden variety misinformed. Here’s a guy yesterday implying that the current coronavirus outbreak is potentially a bioweapon, developed with the help of Chinese spies (That’s how I read the implication at least).

Screenshotted tweet links to a CBC article and claims it describes a husband and wife who were Chinese “spies” removed from a facility for sending pathogens back to China.

Now is that true? It’s linked to the CBC, after all. That’s a reputable outlet.

The first thing you have to do to verify it is click the link. And right there, most students don’t know they should do that. They really don’t. It’s where most students fail, actually, their lack of link-clicking. But the second thing you have to do is see whether the article actually supports that summary.

How do you do that? Well, you could advise people to fully read the article, in which case zero people are going to do that because it takes too long to do for every tweet or email or post. And if it takes too long, the most careless people in the network will tweet unverified claims (because they are comfortable with not verifying) and the most careful people will tweet nothing (because they don’t have time to verify to their level of certainty). And if you multiply that out over a few hundred million nodes you get the web as we have it today, victim of the Yeats Effect (“The best lack all conviction, while the worst / Are full of passionate intensity”). The reckless post left and right and the careful barely post at all.

The Yeats Effect Is Partially a Product of Time Disparities

One reason the best lack conviction, though, is time. They don’t have the time to get to the level of conviction they need, and it’s a knotty problem, because that level of care is precisely what makes their participation in the network beneficial. (In fact, when I ask people who have unintentionally spread misinformation why they did so, the most common answer I hear is that they were either pressed for time, or had a scarcity of attention to give to that moment).

But what if — and hear me out here — what if there was a way for people to quickly check whether linked articles actually supported the points they claimed to? Actually quoted things correctly? Actually provided the context of the original from which they quoted?

And what if, by some miracle, that function was shipped with every laptop and tablet, and available in different versions for mobile devices?

This super-feature actually exists already, and it’s called control-f. Roll the animated GIF!

In the GIF above we show a person checking whether key terms in the tweet about the virus researchers are found in the article. Here we check “spy”, but we can quickly follow up with other terms: coronavirus, threat, steal, send.

I just did this for the tweeted article, and those terms are found either not at all or only in links to other, unrelated stories. Except for threat, which turned up this paragraph that says the opposite of what the tweet alleges:

Paragraph indicating no threat to public perceived. Which would be odd if they were shipping deadly viruses around.

The idea here is not that the contextualization is wrong if those specific words are not found. But rather than reading every article cited to determine whether it has been correctly contextualized, a person can quickly identify cases which have a high probability of being miscontextualized and are therefore worth the effort to correct. And for every case like this, where it’s a reckless summary, there are maybe ten other cases where the first term helps the user verify it’s good to share. Again, in less than a few seconds.
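For readers who want to see the triage logic spelled out, here’s a minimal sketch of the same idea in code. This is my own illustration, not a tool from the post or anything students would need: the URL and the term list are hypothetical placeholders, and the tag-stripping is deliberately crude.

```python
# A sketch of control-f-style triage: check whether the key terms a post uses
# to summarize an article actually appear anywhere in the article's text.
# The URL and term list below are hypothetical placeholders.
import re
import urllib.request


def fetch_text(url: str) -> str:
    """Download a page and crudely strip tags, leaving rough plain text."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    # Drop script/style blocks first, then any remaining tags.
    html = re.sub(r"(?is)<(script|style).*?</\1>", " ", html)
    return re.sub(r"(?s)<[^>]+>", " ", html)


def term_counts(text: str, terms: list[str]) -> dict[str, int]:
    """Count case-insensitive whole-word occurrences of each term."""
    lowered = text.lower()
    return {
        t: len(re.findall(r"\b" + re.escape(t.lower()) + r"\b", lowered))
        for t in terms
    }


if __name__ == "__main__":
    article_url = "https://example.com/news-story"  # placeholder URL
    claim_terms = ["spy", "coronavirus", "threat", "steal", "send"]
    counts = term_counts(fetch_text(article_url), claim_terms)
    missing = [t for t, n in counts.items() if n == 0]
    print(counts)
    if missing:
        print("Terms not found, worth a closer read:", missing)
```

The point is the same as control-f: a handful of term counts won’t prove an article was summarized fairly, but zero hits on the key terms of a claim is a strong signal that the share deserves a closer look before passing it on.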

But People Know This, Right?

Now, here’s the kicker. You might think that since this form of verification triage is so easy, we’d be in a better situation. One theory is that people know about control-f, but they just don’t care. They like their disinfo; they can’t be bothered. (I know there’s a mobile issue too, but that’s another post.) If everybody knows this and doesn’t do it, isn’t that just more evidence that we are not looking at a skills issue?

Except, if you were going to make that argument, you’d have to show that everybody actually does know about control-f. It wouldn’t be the end of the argument — I could reply that knowing and having a habit are different — but that’s where we’d start.

So think for a minute. How many people know that you can use control-f and other functions to search a page? What percentage of internet users? How close to 100% is it? What do we have to work with —

Eh, I can’t drag out the suspense any longer. This is an older finding, internal to Google: only 10% of internet users know how to use control-F.

I have looked for more recent studies and I can’t find them. But I know in my classes many-to-most students have never heard of control-f, and another portion is aware it can be used in things like Microsoft Word, but unaware it’s a cross-application feature available on the web. When I look over student shoulders as they execute web search tasks, I repeatedly find them reading every word of a document to answer a specific question about the document. In a class of 25 or so there’s maybe one student who uses control-f naturally coming into the class.

Can We Do This Already Please

If we assume that people have a limited amount of effort they’ll expend on verification, the lack of knowledge here may be as big a barrier as other cognitive biases. Why we aren’t vigorously addressing issues like this in order to build a more resilient information network (and even to just help students efficiently do things like study!) is something I continue to not understand. Yes, we have big issues. But can we take five minutes and show people how to search?

Memorizing Lists of Cognitive Biases Won’t Help

From the Twitters, by me.

What’s the cognitive bias that explains why someone would think having a list of 200 cognitive biases bookmarked would make them any better at thinking?

Screenshot of tweet encouraging people to read a list of 200 biases to be a better thinker.

(It literally says it’s “to help you remember” 200+ biases. Two hundred! LOL, critical thinking boosters are hilarious)

I should be clear — biases are a great way to look at certain issues *after* the fact, and it’s good to know that you’re biased. Our own methods look pretty deeply at certain types of bias and try to route around them, or use them to advantage.

But if you want to change your own behavior, memorizing long lists of biases isn’t going to help you. If anything it’s likely to just become another weapon in your motivated reasoning arsenal. You can literally read the list of biases to see why reading the list won’t work. 

The alternate approach, à la Simon and Gigerenzer, is to see “biases” not as failings but as useful rules of thumb that are inapplicable in certain circumstances, and to push people towards rules of thumb that better suit the environment.

As an example, salience bias — paying more attention to things that are prominent or emotionally striking — is a pretty useful behavior in most circumstances, particularly in personal life or local events. 

It falls apart partly because in larger domains – city, state, country – there are more emotional and striking events than you can count, which means you can be easily manipulated through selection, and because larger problems often are not tied to the most emotional events.

Does that mean we should throw away our emotional reaction as a guide altogether? Ignore things that are more prominent? Not use emotion as any indication of what to pay attention to?

Not at all. Instead we need to think carefully about how to make sure the emotion and our methods/environment work *together*. 

Reading that list of biases may start with “I will not be fooled,” but it probably ends with some dude telling you family separation at the border isn’t a problem because “it’s really the salience effect at work”. 

TL;DR: biases aren’t wholly bad, and the flip side of a bias is a useful heuristic. Instead of thinking about biases and eliminating them, think about applying the right heuristics to the right sorts of problems, and organizing your environment in such a way that the heuristics don’t get hacked.

The Stigmergic Myth of Social Media, or Why Thinking About Radicalization Without Thinking About Radicalizers Doesn’t Work.

One of the founding myths of internet culture, and particularly web culture, is the principle of stigmergy.

This will sound weird, but stigmergy is about ant behavior. Basically, ants do various things to try to accomplish objectives (e.g. get food to the nest), but rather than using a command-and-control structure to coordinate, they use pheromones, or something like pheromones. (My new goal is to write shorter, quicker blog posts this year, and that means not spiraling into my usual obsession with precision. So let’s just say something like pheromones. Maybe actually pheromones. You get the point.)

So, for example, ants wander all over, and they are leaving maybe one scent, but they go and find the Pringle crumbs and as they come back with the food they leave another scent. A little scent trail. And then other ants looking for Pringles stumble over that scent trail and they follow it to the Pringle crumbs. And then all those ants leave a scent as they come back with their Pringle crumbs, and what happens over time is the most productive paths have the best and strongest smell.
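If you prefer mechanisms to metaphors, the ant story can be captured in a toy simulation. This sketch is my own illustration, not something from the original post: the two routes, trip lengths, evaporation rate, and the ant-colony-optimization convention that shorter trips deposit more pheromone per trip are all assumptions made for the example.

```python
import random

# Toy stigmergy simulation: two routes to the same crumbs, one short and one
# long. Each tick an ant departs, picking a route in proportion to its current
# pheromone level; when it returns it deposits pheromone on that route, with
# shorter trips depositing more per trip (a standard ACO-style assumption).

TRIP_LENGTH = {"short": 2, "long": 6}     # round-trip time in ticks (assumed)
pheromone = {"short": 1.0, "long": 1.0}   # no initial preference
EVAPORATION = 0.995                       # trails fade slightly each tick

in_transit = []                           # list of (tick_of_return, route)
random.seed(0)

for tick in range(3000):
    # A new ant departs, choosing a route weighted by pheromone levels.
    routes = list(pheromone)
    route = random.choices(routes, weights=[pheromone[r] for r in routes])[0]
    in_transit.append((tick + TRIP_LENGTH[route], route))

    # Ants whose round trip is complete reinforce their route.
    returning = [r for (t, r) in in_transit if t == tick]
    in_transit = [(t, r) for (t, r) in in_transit if t != tick]
    for r in returning:
        pheromone[r] += 1.0 / TRIP_LENGTH[r]

    # Evaporation keeps old trails from dominating forever.
    for r in pheromone:
        pheromone[r] *= EVAPORATION

print(pheromone)  # the short route's trail ends up far stronger
```

Run long enough, the shorter route’s trail dominates, which is the whole point of the metaphor: good paths get found and reinforced without anyone being in charge.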

If you think this smells very E. O. Wilson, it is. But it’s not just E. O. Wilson. This stuff was everywhere in the 1990s. Take “desire paths”, a metaphor I first heard when I landed in the middle of the dot-com explosion. The story goes that some university somewhere doesn’t build paths when they first put up the buildings. Instead, they wait for the first snow, and see where the tracks between the buildings come out. And where the tracks fall, they put the paths. Another metaphor talked about the wornness of objects as an indicator. And in my first meeting with a MediaLab grad in 1999 (who’d been hired as a consultant for the educational software company I worked for), he described to me his major project: Patina, a web site whose pages showed visible signs of wear the more they were read.

When Web 2.0 came around, all of this was its founding mythology. I swear, unless you were around then, you have no idea how deeply this cluster of metaphors formed the thinking of Silicon Valley. You really don’t.

And like a lot of mythologies, there’s a lot of truth to it. When I say myth, I don’t mean it’s wrong. It’s a good way to think about a lot of things. I have built (and will continue to build) a lot of projects around these principles.

But it’s also a real hindrance when we talk about disinfo and bad actors. Because the general idea in the Stigmergic Myth is that uncoordinated individual action is capable of expressing a representative collective consciousness. And in that case all we have to do is set up a system of signals that truly capture that collective or consensus intent.

But none of the founding myths — ants and Pringles, Swedish college desire paths, or even Galton’s ox weighing — deal with opposing, foundational interests. And opposing interests change everything. There isn’t a collective will or consciousness to express.

Faced with this issue, Web 2.0 doubled down. The real issue, the thinking went, was that the signals were getting hacked. And that’s absolutely true. There was a lot of counterfeit pheromone about, and getting rid of that was crucial. Don’t discount that.

But the underlying reality was never addressed. In areas where cooperation and equality prevail, the Stigmergic Myth is useful. But in areas of conflict and inequality, it can be a real hindrance to understanding what is going on. It can be far less an expression of collective will or desire than other, less distributed approaches, and while fixing the signals and the system is crucial, it’s worth asking if the underlying myth is holding our understanding back.

A New Year’s Eve Activity (and a New Year’s Day Wish)

I made a short video showing a New Year’s Eve Activity around SIFT, and getting serious for a minute with a New Year’s Day wish.

I don’t know how many people know this about me, but I actually study misinfo/disinfo pretty deeply, outside of my short videos on how to do quick checks. If anything, I probably spend too much time keeping up with the latest social science, cognitive theory, network analysis, etc. on the issue.

But scholarship and civic action are different. Action to me is like Weber’s politics, the slow drilling of hard boards, taking passion and perspective. You figure out where you can make a meaningful difference. You find where the cold hard reality of where we are intersects with a public’s desire to make things better. And then you drill.

It’s been three long exhausting years since I put out Web Literacy for Student Fact-Checkers, and over a decade since I got into civic digital literacies. I’m still learning, still adapting. And still drilling.

Happy New Year, everyone. And thanks to everyone who has helped me on this weird, weird journey.