Terraforming is a process found in science fiction novels: deliberately modifying the atmosphere and ecology of a planet to make it more habitable for a given life-form. In early sci-fi, that life-form was human — drop a few machines on a planet, watch them spin up an atmosphere and ecology, have the humans come back in a few decades or centuries and settle. In later sci-fi, it was often aliens intent on terraforming earth, creating a planet more habitable for them but deadly for us.
Disinformation can have a terraforming effect too, in the second sense. One of the prominent trends of the past year has been that disinformation around the Big Lie has created momentum for a host of legislative and policy changes that will make disinformation both cheaper to produce and more impactful.
For instance, the false story that Ruby Freeman was caught on video taking ballots from a “suitcase” was made possible by the transparency measure of having publicly viewable video of the counting facility. This false story in turn created outrage — some genuine and misguided, some cynical and strategic — that has resulted in a push for more counties to have more live feeds from which more video can be deceptively clipped and inaccurately summarized.
False concerns about supposed ballot irregularities have led to publicly available ballot scans in some places, and imperfections in the process by which those scans are stored or released create new news pegs on which to hang dubious fraud allegations. False stories about the Arizona election led to the creation of a bogus external “audit” that generates daily misinformation, which in turn fuels the push in other states for similar external audits.
Each step seems to lead to another, and the material and processes that misinformation thrives on become more ubiquitous, more compelling, more ever-present.
Of course, there are people behind all of this, just as in science fiction there’s always someone who dropped the terraformer on the planet’s surface. But there’s also a certain emergent momentum to the whole process that is bigger than any given actor. Seen from this perspective, disinformation is quickly terraforming its environment, making it more habitable and productive for disinformation. In the end, that is going to make it a whole lot less habitable for all of us.
If you want to study something, a first step might be to go out and collect it. If I were looking for themes in 16th-century poetry about food, I’d go out and get 16th-century poems about food. If I wanted to look at personal narratives of medical tragedy, I’d either solicit such narratives or pull examples from existing corpora.
When you get to misinformation, however, this doesn’t quite work. There is a fabulous line in Kapferer’s book on rumor, where he notes that many early scholars in the field chose untrue rumors as their examples; in practice, they were studying misinformation. But as Kapferer argued, this sidesteps the main social problem of rumor. Rumors are not a social problem because they are false. Were that the case, people would long ago have ceased to traffic in them. Instead, he points out, “rumors are bothersome because they may turn out to be true.”
Starting from a point where something is already deemed misinformation hides that tension. It’s certainly useful to collect things that continue to circulate on the internet long after they are shown to be in error and ask why they persist. But misinformation often starts out as something more ambiguous, and that initial ambiguity is a crucial part of the story. And so when I think about misinformation around elections and what to look at, I lean less toward “misinformation” as the object of study, and more toward Kapferer’s definition of rumor:
We shall thus label “rumor” the emergence and circulation in society of information that is either not yet publicly confirmed by official sources or denied by them. “Hearsay” is what goes “unsaid,” either because rumors get the jump on official sources (rumors of resignations and devaluations) or because the latter disagree with the former (e.g., the rumor about the “true” culprits in the assassination of John F. Kennedy).
(Rumors, p. 13)
Kapferer’s definition gets at the heart of a structural issue for me. What he calls “hearsay” thrives in two separate but related environments. The first is the territory that Shibutani’s Improvised News explicated so brilliantly: when official channels fail to provide necessary information, hearsay provides an alternative network to fill in the gaps. This is collective sense-making. The second is related but separate: when there is a dispute about “who gets to speak” to an issue, hearsay thrives as an alternative to official channels. In other words, adversarial sense-making.
Both of these functions are necessary to information environments, which is one reason “hearsay” exists. But narrow the focus to misinformation and we quickly become reductive: of course, if things are wrong and harmful, we want to minimize their spread. Framed that way, there are no trade-offs to reckon with. When we broaden the frame from misinformation to hearsay, the key problem becomes visible: we need hearsay, both to fill informational gaps and to challenge official accounts.
Helping Hearsay
Much of the story of misinformation has been told by institutions that value the official account. In these tellings, misinformation has chipped away at institutional influence, with disastrous results. For them, misinformation is a corrosive force that undermines the utility of the official information system. On this view, hearsay, whether they want to admit it or not, is treated as a disease within the body institutional.
There’s truth in this, but I’d propose a return to an underutilized frame, one that has historically informed a number of the fields that have been brought under the misinformation umbrella, from rumor studies to crisis informatics. Instead of seeing versions of hearsay (non-institutional systems of news and analysis) as primarily damaging institutional systems, we could choose to see the hearsay system itself as the thing under attack. That is, in the age of social media, a valuable system of non-institutional knowledge is increasingly gameable and gamed, rendered useless by a variety of threats and incentives that are polluting not the institutional space, but the hearsay space.
No single vision is ever adequate. But leaning at least a bit more into this question — how do we repair and restore the value of informal knowledge networks by protecting them from corruption? — might get us to better, more humane solutions. And it might help dial down what is often an unhelpful posture toward both collective and adversarial sense-making, by centering the “bothersome” usefulness of these systems.
There is a well-known saying — “it’s not the crime, it’s the cover-up that gets you.” This is true in the obvious way it is usually meant: many administrative crimes are difficult to prove. They happen at a particular moment, are witnessed by few, and intent is notoriously difficult to get at. Cover-ups, on the other hand, often spiral slowly outwards. Bureaucracies always create many more paper trails than any one individual realizes, and a person covering up a crime comes to realize that over time.
But in politics and business, cover-ups are damaging in another way. The truth is that individuals have very little idea what constitutes acceptable behavior in these realms. Suppose a person overheard a coworker discussing non-public information about a company whose stock they owned, and then sold that stock. Is that legal or illegal? Most people don’t know. So they use a simple signal: if the person tried to hide it, it was probably wrong; if they didn’t, it was probably no big deal. For most people, looking at the behavior is easier than looking through insider trading law.
This signal (concealment = wrongdoing) is deeply ingrained for most people. But the depth of that instinctive reaction raises two interesting problems. First, when lawlessness (or other unethical behavior) is done in the open, we sometimes have a hard time processing it as wrongdoing. Second, when mundane behavior is concealed, or portrayed as being concealed, we sometimes process ordinary acts, communications, and events as nefarious. As usual, propagandists use these patterns to their advantage.
When acceptable behavior is out in the open, it intensifies our sense that it is acceptable, and when bad behavior is hidden, it intensifies our sense of wrongdoing. But what about the other two squares on the matrix (open/not ok, hidden/ok)?
Transparent wrongdoing
As many commentators have noted, there’s a bit of a paradox in the events of January 6th. Experts disagree about whether it was a coup or an insurrection (the difference hinging on the degree to which it was guided by elites). But it was clearly one of the two. And one way we know this is that it was among the most documented events in all of human history. The people participating, many believing that they would prevail and wanting credit for it, filmed themselves doing it. Tweeted out that they were headed to the insurrection. Compared it to 1776. None of it was hidden.
That transparency should work in favor of establishing culpability, and in a narrow legal sense it has. Generally, don’t film yourself doing crimes. But as we have put time between ourselves and the event, the transparency is being leveraged in another way. Propagandists have turned the tables — if this really was an insurrection, they ask, then why were they all filming themselves doing it? To the sociologist and psychologist there are relatively easy answers to these questions; but the transparency signal is not an eminently logical one, and it doesn’t respond to footnotes. If you’re explaining, as they say, you’re losing.
We’ve seen a lot of this in recent history. This is a short blog post, so I won’t attempt a comprehensive list here, but anyone who has watched the news over the past few years can recall questionable and sometimes illegal behavior that benefited from the argument “If it was really that wrong, would it have been done this publicly?”
In this way, transparency can take on a secondary, if symbolic, meaning. Transparency usually means being able to “look in” to the inner workings of an organization or action. But for the propagandist, claims of transparency can be used to “look past”: to render the bad behavior itself invisible.
Mundane revelations
The second interesting combination is when the behavior is acceptable, even mundane, and yet a feeling of concealment is used to create an appearance of wrongdoing. Consider two interviews. In one, a scientist says in a high-production, clearly official interview that “We don’t actually know how much the earth will heat up in the next ten years.” This is pretty mundane stuff: we know the earth is heating up, but models disagree on how much warming we will see in the short term. When the interview is taped by a film crew and aired on Dateline, you’re likely to read the comment in just that way.
Now take the same interview, and do it with a hidden camera. Suddenly, the same quote can feel like a hidden admission.
We see this a lot in propaganda. Video is captured of someone doing something out in the open, but it is surveillance video, or hidden-camera video. Likewise, leaked or hacked emails often reveal the boring lives of mid-level bureaucrats and press officers. Highlight the fact that the material was leaked, add some ominous red circles and highlighting, and suddenly the impression of concealment can create a story where there is none. After all, official documents merely say; it’s purportedly “secret” documents that reveal.