The Fyre Festival and the Trumpet of Amplification

Unless you’ve been living under a rock, you’re probably aware that there are two documentaries out on the doomed Fyre Festival. You should watch both: the event — both its dynamics and the personalities associated with it — will give you disturbing insights into our current moment. And if you teach students about disinformation I’d go so far as to assign one or both of the documentaries.

Here is one connection between the events depicted in the films and disinfo. There are many others. (This post is not intended for researchers of disinfo, but for teachers looking to help students understand some of the mechanisms.)

The Orange Square

Key to the Fyre Festival story is the orange square, a bit of paid coordinated posting by a set of supermodels and other influencers. The models and influencers, including such folks as Kendall Jenner, were paid hundreds of thousands of dollars to post the same message with a mysterious orange square on the same day. And thus an event was born.


People new to disinformation and influencer marketing might think the primary idea here is to reach all the influencers’ followers. And that’s part of it. But of course, if that were the case you wouldn’t need to have people all post at the same time. You wouldn’t need the “visual disruption” of the orange square.

The point here is not to reach followers, but to catalyze a much larger reaction. That reaction, in part, is media stories like this by the Los Angeles Times.

And of course it wasn’t just the LA Times: it was dozens (hundreds?) of blogs and publications. It was YouTubers talking about it. Music bloggers. Mid-level elites. Other influencers wanting in on the buzz. The coordinated event also gave the credibility required to book bands; the booking of the bands created more credibility, more news pegs, and so on.

You can think of this as a sort of nuclear reaction. In the middle of the event sits some fissile material — the media, conspiracy thought leaders, dispossessed or bitter political influencers. Around it are laid synchronized charges that, should they go off right, catalyze a larger, more enduring reaction. If you do it right, a small amount of social media TNT can create an impact several orders of magnitude larger than its input.

Enter the Trumpet

Central to understanding this is that the fissile material is not the general public, at least at first. As a marketer or disinfo agent you often work your way upward to get downward effects. Claire Wardle, drawing on the work of Whitney Phillips and others, expresses one version of this in the “trumpet of amplification”:


Here the trumpet reflects a less direct strategy than Fyre’s, starting by influencing smaller, less influential communities, refining messages, then pushing them up the influence ladder. But many of the principles are the same. With a relatively small amount of resources applied in a focused, time-compressed pattern you can jump-start a larger and more enduring reaction that gives the appearance of legitimacy — and may even be self-sustaining once manipulation stops. Maybe that appearance of legitimacy is applied to getting investors and festival attendees to part with their money. Or maybe it’s to create the appearance that there’s a “debate” about whether the humanitarian White Helmets are actually secret CIA assets.

Maybe the goal is disorientation. Maybe it’s buzz. Maybe it’s information — these techniques, of course, are also often used ethically by activists looking to call attention to a certain issue.

Why does this work? Well, part of it is the nature of the network. In theory the network aggregates the likes, dislikes and interests of billions of individuals and if some of those interests begin to align — shock at a recent news story for example — then that story breaks through the noise and gets noticed. When this happens without coordination it’s often referred to as “organic” activity.

The dream of many early on was that such organic activity would help us discover things we might otherwise not. And it has absolutely done that — from Charlie Bit My Finger to tsunami live feeds this sort of setup proved good at pushing certain types of content in front of us. And it worked in roughly this same sort of way — organic activity catches the eyes of influencers who then spread it more broadly. People get the perfect viral dance video, learn of a recent earthquake, discover a new opinion piece that everyone is talking about.

But there are plenty of ways that marketers, activists, and propagandists can game this. Fyre used paid coordinated activity, but of course activists often use unpaid coordinated activity to push issues in front of people. They try to catch the attention of mid-level elites who get it in front of reporters and so on. Marketers often just pay the influencers. Bad actors seed hyperpartisan or conspiracy-minded content in smaller communities, ping it around with bots and loyal foot soldiers, and build enough momentum around it that it escapes that community, giving the appearance to reporters and others of an emerging trend or critique.

We tend to think of the activists as different from the marketers and the marketers as different from the bad actors, but there’s really no clear line. The disturbing fact is it takes frightfully little coordinated action to catalyze these larger social reactions. And while it’s comforting to think that the flaw here is with the masses, collectively producing bizarre and delusional results, the weakness of the system more likely lies with a much smaller set of influencers, who can be specifically targeted, infiltrated, duped, or just plain bought.

Thinking about disinfo, attention, and influence in this way — not as mass delusion but as the hacking of specific parts of an attention and influence system — can give us better insight into how realities are spun up from nothing and ultimately help us find better, more targeted solutions. And for influencers — even those mid-level folks with ten to fifty thousand followers — it can help them come to terms with their crucial impact on the system, and understand the responsibilities that come with that.

Smoking out the Washington Post imposter in a dozen seconds or less

So today a group known for pranks circulated an imposter site that posed as the Washington Post, announcing President Trump’s resignation on a post-dated paper. It’s not that hard for hoaxers to do this — anyone can come up with a confusingly similar URL to a popular site, grab some HTML and make a fake site. These sites often have a short lifespan once they go viral — the media properties they are posing as lean on the hosting providers, who pull the plug. But once it goes viral the damage is done, right?

It’s worth noting that you don’t need a deep understanding of the press or communications theory to avoid being duped here. You don’t even need to be a careful reader. Our two methods for dealing with this are dirt simple:

  • Just add Wikipedia (our omnibar hack to investigate a source)
  • Google News Search & Scan (our technique we apply to stories that should have significant coverage).

You can use either of these for this issue. The way we look for an imposter using Wikipedia is this:

  1. Go up to the “omnibar” and turn the url into a search by adding space + wikipedia
  2. Click through to the article on the publication you are supposedly looking at.
  3. Scroll to the part of the sidebar with a link to the site, click it.
  4. See if the site it brings you to is the same site

Here’s what that looks like in GIF form (sorry for the big download).

I haven’t sped that up, btw. That’s your answer in 12 seconds.

Now some people might say, well if you read the date of the paper you’d know. Or if you knew the fonts associated with the Washington Post you’d realize the fonts were off. But none of these are broadly applicable habits. Every time you look at a paper like this there will be a multitude of signals that argue for the authenticity of the paper and a bunch that argue against it. And hopefully you pick up on the former for things that are real and the latter for things that aren’t, but if you want to be quick, decisive, and habitual about it you should use broadly applicable measures that give you clear answers (when clear answers are available) and mixed signals only when the question is actually complex.

When I present these problems to students or faculty I find that people can *always* find what they “should have” noticed after the fact. But of course it’s different every time and it’s never conclusive. What if the fonts had been accurate? Does that mean it’s really the Post? What if the date was right? Trustworthy then?

The key isn’t figuring out the things that don’t match after the fact. The key is knowing the most reliable way to solve the whole class of problem, no matter what the imposter got right or wrong. And ideally you ask questions where a positive answer has a chance of being as meaningful as a negative one.

Anyway, the other route to checking this is just as easy — our check other coverage method, using a Google News Search:

  1. Go to the omnibar, search [trump resigns]
  2. When you get to the Google results, don’t stop. Click into Google News for a more curated search
  3. Note that in this case there are zero stories about Trump resigning and quite a lot about the hoax.
  4. There is no step four — you’re done

Again, here it is in all its GIF majesty:

You’ll notice that you do need to practice a bit of care here — some publishers try to clickbait the headline by putting the resignation first, hoping that the fact it was fake gets trimmed off and gets a click. (If I were king of the world I’d have a three strikes policy for this sort of stuff and push repeat offenders out of the cluster feature spots, but that’s just me.) Still, scanning over these headlines even in the most careless way possible, it would be very hard not to pick up that this was a fake story.

Note that in this case we don’t even need these fact-checks to exist. If we get to this page and there are no stories about Trump resigning, then it didn’t happen — for two reasons. First, if it happened there would be broad coverage. Second, even if the WaPo was the first story on this, we would see their story in the search results.

There’s lots of things we can teach students, and we should teach them. But I’m always amazed that two years into this we haven’t even taught them techniques as simple as this.

Why Reputation?

I was reading An Xiao Mina’s recent (and excellent) piece for Nieman Lab, and it reminded me that I had not yet written here about why I’ve increasingly been talking about reputation as a core part of online digital literacy. Trust, yes, consensus, yes. But I keep coming back to this idea of reputation.

Why? Well, the short answer is Gloria Origgi. Her book, Reputation, is too techno-optimist in parts, but is still easily the most influential book I’ve read in the past year. Core to Origgi’s work is the idea that reputation is both a social relation and a social heuristic, and these two aspects of reputation have a dynamic relationship. I have a reputation, which is the trace of past events and current relationships in a social system. But that reputation isn’t really separate from the techniques others use to decode and utilize my reputation for decision-making.

This relationship is synergistic. As an example, reputation is subject to the Matthew Effect, where a person who is initially perceived as smart can gain additional reputation for brilliance at a fraction of the cost of someone initially perceived as mediocre. This is because quick assessments of intelligence will have to weight past assessments of others — as a person expands their social circle initial judgments are often carried forward, even if those initial judgments are flawed.

Reputation as a social heuristic maps well onto our methods of course — both Origgi and the Digital Polarization initiative look to models from Simon and Gigerenzer for inspiration. But it also suggests a theory of change.

Compare the idea of “trust” to that of “reputation”. Trust is an end result. You want to measure it. You want to look for and address the things that are reducing trust. And, as I’ve argued, media literacy programs should be assessing shifts in trust, seeing if students move out of “trust compression” (where everything is moderately untrustworthy) to a place where they make bigger and more accurate distinctions.

But trust is not what is read; reputation is. And when we look at low-trust populations it can often seem like there is not much for media literacy to do. People don’t trust others because they’ve been wronged. Etc. What exactly does that have to do with literacy?

But that’s not the whole story, obviously. In between past experience, tribalism, culture, and the maintenance of trust is a process of reading reputation and making use of it. And what we find is that, time and time again, bad heuristics accelerate and amplify bad underlying issues.

I’ve used the example of PewDiePie and his inadvertent promotion of a Nazi-friendly site as an example of this before. PewDiePie certainly has issues, and seems to share a cultural space that has more in common with /pol/ than #resist. But one imagines that he did not want to risk millions of dollars to promote a random analysis of Death Note by a person posting Hitler speeches. And yet, through an error in reading reputation, he did. Just as the Matthew Effect compounds initial errors in judgment when heuristics are injudiciously applied, errors in applying reputation heuristics tend to make bad situations worse — his judgment about an alt-right YouTuber flows to his followers, who then attach some of PewDiePie’s reputation to the ideas presented therein — based, mostly, on his mistake.

I could write all day on this, but maybe one more example. There’s an old heuristic about the reputation of positions on issues — “in matters indifferent, side with the majority.” This can be modified in a number of ways — you might want to side with the qualified majority when it comes to treating your prostate cancer. You might side with the majority of people who share your values on an issue around justice. You might side with a majority of people like you on an issue that has some personal aspects — say, what laptop to get or job to take. Or you might choose a hybrid approach — if you are a woman considering a mastectomy you might do well to consider what the majority of qualified women say about the necessity of the procedure.

The problem, however, from a heuristic standpoint, is that it is far easier to signal (and read the signal) of attributes like values or culture or identity than it is to read qualifications — and one underremarked aspect of polarization is that — relative to other signals — partisan identity has become far easier to read than it was 20 years ago, and expertise has become more difficult in some ways.

One reaction to this is to say — well people have become more partisan. And that’s true! But a compounding factor is that as reputational signals around partisan identity have become more salient and reputational signals around expertise have become more muddled (by astroturfing, CNN punditocracy, etc) people have gravitated to weighting the salient signals more heavily. Stuff that is easier to read is quicker to use. And so you have something like the Matthew Effect — people become more partisan, which makes those signals more salient, which pushes more people to use those signals, which makes people more partisan about an expanding array of issues. What’s the Republican position on cat litter? In 2019, we’ll probably find out. And so on.

If you want to break that cycle, you need to make expertise more salient relative to partisan signals, and show people techniques to read expertise as quickly as partisan identity. Better heuristics and an information environment that empowers quick assessment of things like expertise and agenda can help people build better, fuller, and more self-aware models of reputation, and this, in turn, can have meaningful impact on the underlying issues.

Well, this has not turned into the short post I had hoped, and to do it right I’d probably want to talk ten more pages. But one New Year’s resolution was to publish more WordPress drafts, so here you go. 🙂

Some Notes On Installing Federated Wiki On Windows

It’s 2018, and I’ve still not found anything that helps me think as clearly as federated wiki. At the same time, running a web server of your own is still, in 2018, a royal pain. Case in point: a series of credit card breaches recently forced changes in my credit card number (two breaches in one year, hooray). And that ended up wiping out my Digital Ocean account as it silently failed the monthly renewals. Personal cyberinfrastructure is a drag, man.

But such is life. So I recently started looking at whether I could do federated wiki just on my laptop and not deal with a remote server. It doesn’t get me into the federation, per se, but it allows all the other benefits of federated wiki — drag-and-drop refactoring, quick idea linking, iterative note-taking, true hypertext thinking.

It turns out to be really easy (I mean as things go with this stuff). I’ll go into detail more below, but here are the steps:

  1. Download Node.js for Windows. Install.
  2. Open a command window and type: npm install -g wiki
  3. Launch via command window: wiki -p 80 --security_type friends --cookieSecret 'REPLACE-THIS-SECRET'
  4. Navigate to localhost in a browser
  5. Click the lock to “claim” the wiki as owner
  6. Click the “wiki” link to take it out of read-only mode.
  7. Go forth and wiki…..

Installation

Step one: Download Node.js for Windows. Install.

Step two: Open a command window and type: npm install -g wiki

It’s installed!

Initial Startup

To start your wiki go to a command prompt and type:

wiki -p 80 --security_type friends --cookieSecret 'REPLACE-THIS-SECRET' 

You may need to give node some permissions. I won’t advise you on that. But you definitely don’t need to give public networks access to your server if you don’t want.

Go to your localhost. You’ll get the start page.

Claiming your wiki

When you first visit your wiki it will be in unclaimed, read-only mode, and the bottom of the interface will look like this (though yours probably won’t say 47 pages):

When you click that lock icon, it will create a random username and switch to the unlocked position.

Once you do that you can click on the word “wiki” and now it will move out of read-only into edit mode:

You’ll know it’s in edit mode because you’ll see the edit history icons (sometimes colloquially referred to as ‘chiclets’) at the bottom.

And — that’s it. You’re done. Wiki away.

You’ll need to launch the server from a command window each time you want to use it, but if you’re familiar with Windows you can write a bat file and put it in your startup folder.
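For what it’s worth, a minimal start-wiki.bat is just the launch command from above — this is a sketch, not gospel; the secret is still the placeholder, and note that cmd prefers double quotes to single ones:

@echo off
rem Launch the local federated wiki server (sketch: replace the secret, change -p if port 80 is taken)
wiki -p 80 --security_type friends --cookieSecret "REPLACE-THIS-SECRET"

Drop that in your startup folder and the server will be waiting for you at localhost after each reboot.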

(Incidentally, this isn’t a tutorial on how to use federated wiki. I’m tired, frankly, of trying to sell it to people who want to know why it takes more than fifteen minutes to learn. I don’t teach it anymore because people have weird expectations and it wastes too much of my time trying to get past them. But if you’re one of the people who has made the jump, you know this — I just want to help you do it locally on your laptop.)

Optional stuff: Changing your name, importing or backing up files

You don’t need to know where files live on your computer, but sometimes it is useful. For instance, you might want to back up your pages, or reset a username. Here’s how you can do that.

In the single user mode we used above, wiki pages will be in a .wiki directory under your user directory. For instance, my directory is C:\Users\mcaulfield\.wiki\pages. They are simple json files, and can be backed up and zipped. You can also drop json files from other wiki instances here, though you’ll have to delete the sitemap.json file to reindex (more on that below).
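If you’re curious what’s inside them, a page file is roughly of this shape — a title, a story (the items on the page), and a journal of edits. This is an abbreviated sketch, not the full format; real files carry more fields and the ids are random hex:

{
  "title": "Some Page",
  "story": [
    { "type": "paragraph", "id": "a1b2c3d4e5f6a7b8", "text": "The first item on the page." }
  ],
  "journal": [
    { "type": "create", "item": { "title": "Some Page" }, "date": 1514764800000 }
  ]
}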

For ownership and indexing issues there is a status directory under the .wiki directory (e.g. C:\Users\mcaulfield\.wiki\status). This has two important files in it. One is owner.json, which maintains login information (initially this will not be there — it’s written when you claim it). The other is your sitemap, which has a list of all pages and recent updates on them. Deleting the sitemap is useful when you want to regenerate it after manually uploading new files.

To change your username, you can edit the owner.json file. Change the name property.

If something goes wrong and you want to reinitiate the claim process, you can delete the owner.json file.

If you clear your cookies and hence lose your claim (i.e. are logged out), you can pull the secret from the json and enter it when prompted. It’s OK to change it to something simple and more password-like that you can remember.
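For reference, on my install owner.json looks something like the following — treat it as a sketch, since the exact field names can vary with the security plugin and version; the two things you care about are the name property and the secret:

{
  "name": "some-random-name",
  "friend": {
    "secret": "REPLACE-THIS-SECRET"
  }
}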

The node files of your wiki installation will be in your AppData roaming directory under npm, e.g. C:\Users\mcaulfield\AppData\Roaming\npm\node_modules\wiki. There’s no real reason to touch these files.

Running a personal desktop farm

This is only for federated wiki geeks, but it is completely possible to run a small personal desktop farm, where you can run multiple wiki sites in what are essentially separate notebooks. Just go into your hosts file (C:\Windows\System32\drivers\etc\hosts) and add localhost aliases:

# localhost name resolution is handled within DNS itself.
#	127.0.0.1       localhost
#	::1             localhost
	127.0.0.1       disinfo
	127.0.0.1       journal
	127.0.0.1       papersonwiki
	127.0.0.1       sandbox
	127.0.0.1       teachersguide
	127.0.0.1       wikipediawomen
	127.0.0.1       opioidcrisis
	127.0.0.1       raceinamerica

Launch in farm mode (-f) and type these words into your browser omnibar. Each will maintain a separate wiki instance. If you want to be able to search across all instances, use the --autoseed flag. Note that you’ll have to go through the minimal claim process with each one (two clicks, shown above).
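Put together, the farm launch is just the earlier command with the extra flags added (same placeholder secret as before):

wiki -f -p 80 --security_type friends --cookieSecret 'REPLACE-THIS-SECRET' --autoseed

With the hosts entries above in place, typing disinfo or journal into the omnibar then brings up its own instance.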

Pushing to a larger federation

If you want to push to a remote server, you can. There are a couple of ways to do this.

First, there’s a flag in wiki that allows you to point to a different directory for pages. So you can point that to a mapped drive or Dropbox or whatever on your laptop, and then point a remote server to that same directory.
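If memory serves, the flag is --data (wiki --help will confirm), so the launch would look something like this, with the path swapped for wherever your shared directory lives:

wiki -p 80 --data "C:\Users\mcaulfield\Dropbox\wiki" --security_type friends --cookieSecret 'REPLACE-THIS-SECRET'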

Alternatively you could do a periodic rsync to the server. Windows 10 has bash native to it, so you can install that, reach your files through Bash for Windows’s /mnt/c/ mapping, and push them up that way.

In each case, you probably want to delete the sitemap.json and sitemap.xml to trigger a regeneration.
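From Bash for Windows that might look something like this — the server and paths are placeholders, and I’m assuming the sitemaps live in the remote status directory the way they do locally:

rsync -av /mnt/c/Users/mcaulfield/.wiki/pages/ you@yourserver:~/.wiki/pages/
ssh you@yourserver 'rm -f ~/.wiki/status/sitemap.json ~/.wiki/status/sitemap.xml'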

Interestingly, you could also use this scheme (I think) for joint generation of a public wiki.

IIRC, there is also a way to drag and drop json export files into wiki instances.

Finally, you can share files with people by zipping them up and emailing them or providing them as a zipped download. They in turn can drop them into their own federated wiki instance to work with. I’ve been thinking a lot about this model, which is very memex like — I make a notebook of my notes on something, post it up, you pull it into your machine. The provenance gets messy at scale, but among a group of people in a subfield that are being more descriptive in their practice than rhetorical this might work out fine.

It’s Good To Be Back

Using federated wiki again reminds me once again of what wiki means, in an etymological sense. It means quick.

What all other non-federated wiki systems lack is not just federation. They lack quickness, largely because they are designed for novices and trade away possibilities for fluid authorship in exchange for making the first 15 minutes of use easier.

So while it may seem weird to run a federated wiki server on a laptop in a way that makes federation less available, if you’ve learned the method of multi-pane wiki it’s not really weird at all, because every note taking system you’ve used besides federated wiki is unbearably slow, clunky, and burdensome. Federated wiki, in the hands of someone who has mastered it, works at the speed of thought. And it does that whether you’re in the federation or not. So here’s to a very wiki New Year.

“Conspiracy Theorists” in 1934 and 1961

A quick follow-on to my last post — it’s worth mentioning that “conspiracy theorist” is also a much older term than many realize. A few years ago, in fact, a story was going around the forums that the term was either invented by the CIA or at least made an undesirable moniker by them.

Again, in reality, the term is much older, and appears to have been a term of derision even then. Consider this use from 1934:

The differences of opinion now to be observed in the Congressional committees laboring with the Stock Exchange bill are explained by some thick-and-thin opponents of all changes in the bill by the existence of a conspiracy to defeat it. If there is a conspiracy, it is one of the most vocal in conspiratorial history. The investment bankers, their employees and some of their customers have been making the welkin ring with their complaints. If there has been any secret, backstairs work, the conspiracy theorists will surely find receptive audience for its exposure.

It is to be suspected, however, that the reorganized hesitation within the committees about the Fletcher-Rayburn bill “as is” rests upon more solid ground. The probability is that Senators and Representatives, like people, have come to see that there are risks in an indiscriminate attack upon “Wall Street” which cannot be brushed aside by references to the supposedly dubious motives of those who resist the attacks.

There’s lots for cultural historians to dig into here — this is, after all, a charge of conspiracy theory against supporters of an anti-Wall Street bill, which shows the ways the term is used to police narratives, for better and worse. But again, we see what we saw with conspiracy theory in the last post. From the beginning these terms have been negative, even if sometimes groups may have used that negative connotation to their own political ends.

It’s worth noting, of course, that conspiracy theory is used against hysteria as well, as in this 1961 letter to a New Jersey paper (also not usually cited — the OED’s first citations are from 1964):

The conspiracy theory of history takes an admitted Communist plot against all free men and makes it virtually the sole factor responsible for all phenomena that are not to the liking of the conspiracy theorists. Thus the fiasco of the Cuban revolt is seen not as the tragic miscalculation of wishful-thinking incompetents, which it apparently was . . . but as the usual sinister work of pro-Commie elements in our Government.

That is a call, and a pretty unsubtle one at that, that the Bay of Pigs fiasco not be used as an excuse to slide back to McCarthyism. The letter is actually headed “Conspiracy Theory”. (The author, William Monaghan, actually wrote on this issue at least one more time — decrying rising Holocaust denialism in 1963.)

The author continues after some details:

This sort of puerility also never recognizes that the admitted world conspiracy does have at times a large mass base of people who consider themselves in no way conspirators, but rather downtrodden ones who have found a cause and a regime that will bring better days to them. This was the case with the Chinese peasantry in the period of the rise to power of the Chinese Reds. . . . And it was the case recently when the Cuban masses refused to rise against a regime they still for the most part consider their benefactor, not their oppressor. No internal C. I. A. conspiracy but the facts of life in Cuba today foredoomed the invasion and revolt attempt, much as we might wish it to have been otherwise. …

He concludes:

The conspiracy theorist will never abandon his pet intellectual hobby, because it gives him far too much of a sense of his own importance and his group’s significance in history. It is therefore not at all surprising that he should from time to time proclaim that his small group will turn out to have been the savior of this nation and of the liberals themselves.

WILLIAM E. MONAGHAN, 548 Studio Road, Ridgefield, May 12, 1961.

The academic calls to think more critically about how we deploy the charge of conspiracy theory are welcome and overdue. Still, on the merits, I’m with William most days of the week, and hope that more people will cite his treatment of conspiracy theorists in their own histories.

The first use of the term “conspiracy theory” is much earlier — and more interesting — than historians have thought.

Was reading the new Oxford collection on conspiracy theory (quite an impressive collection, can be bought here) and noted that one of the articles dated the term conspiracy theory back to the 1870s. It’s not central to the author’s argument, but it’s not trivial either. The author sees the term as coming out of crime, and then navigating to politics:

Such considerations encourage a more systematic approach. By consulting databases that have digitized American newspapers from the nineteenth century, it is possible to gain an appreciation of how theory as a term made inroads into the discourse of crime. Thus, a search of the database America’s Historical Newspapers identifies the following dates as the earliest mention for conspiracy theory and other terms built on the template (crime x + theory):


murder theory (1867)
suicide theory (1871)
conspiracy theory (1874)
blackmail theory (1874)
abduction theory (1875)


From “Conspiracy Theory: The Nineteenth-Century Prehistory of a Twentieth-Century Concept,” in Conspiracy Theories and the People Who Believe Them (Oxford University Press), p. 62. Kindle edition.


This is fairly common dating — scholarly accounts I’ve read have had similar or even later dates for the occurrence (see Wikipedia, for example, for the OED date of 1909(!!) as well as some other potential dates).

However, unless I’m missing something, most of this is wrong. The first mention found in newspapers is in 1863, not the 1870s, and it does not come out of court terminology, but rather politics. In fact it is used much as it would be today, to derisively refer to a set of allegedly less educated people who see a secret plot when a simple non-conspiratorial narrative has much more explanatory power.

One note here for people who haven’t delved much into the literature on this — conspiracy theory as a popular term doesn’t really take off until the late 1950s, and the scholarly concern about conspiracy theories (under a variety of names) gets its first significant lift in the 1930s and 40s in discussions about totalitarianism and populism. And of course conspiracy theories — under different names — have existed since the dawn of time. When we talk about early mentions we’re talking about the evolution of the term — the idea that you could have a theory that hinged on conspiracy, and how that would be regarded back then. This in turn plugs into a debate about how much our perception of such theories has shifted over time.

The Conspiracy of the British Elites Against the Union

So now we come to the first mention of “conspiracy theory” I’ve found, apparently missed by the OED and others. It will take a little explaining to set up. But it’s particularly surprising others have not found it since it’s from an exchange in the New York Times.

It’s from 1863, and it’s a response to a letter that had run the Sunday before. That letter had dealt with the question of why England — whose papers and elites had spent so much time attacking the United States over the institution of slavery in the 1850s — was now taking the side of the South in the Civil War.

The answer, the writer says, is obvious. America had been exerting influence on English institutions, and this had threatened the aristocracy, which feared loss of power. So they had embarked on a plan. They would support whatever side was weaker, in the hope that America would be destroyed. Once America was destroyed, then the governing classes could point to the failure of the U.S. as proof that democratic reforms don’t work, and reclaim power. In order to do this they would support the South verbally, but, importantly, not intervene on their behalf since the point is to avoid any decisive action that would hasten the conclusion of the war. Their best play was to draw out the conflict.

The ultimate endgame? Creating the “most terrible financial explosion ever seen in a civilized country” — all to benefit the small class of English aristocrats!

New York Times, January 4, 1863. Page 2.


I am not a Civil War historian, so I can’t say if any of this was true. But in form it’s not terribly different from the conspiracy theories promoted by Sanders surrogates about the DNC, or theories tossed around the right-wing blogosphere from time to time. A small set of elites is afraid of the success of brilliant progressive/conservative ideas, and so what do they do — they sabotage them, just to say, hey, I told you so.

Again — is it true? Who knows. But the form of the argument is very congruent with what we think of as conspiracy theorizing nowadays.

So the next week another person replies in the correspondence section of the NYT. And he says, look, you don’t need to invent this whole bizarre plot. England supported abolition when it was cheap for them to do so. Now it’s looking like it might get expensive if their cotton is cut off, so they are muddling through this. Their lack of intervention is not due to a desire to let the war do maximum damage or cause financial collapse, but based on the fact that they have other foreign entanglements that are much more consequential at the moment and can’t afford a new one.


Here’s the text of the portion that mentions “conspiracy theory”:

Now, when we look for the cause of this, any man who has made European politics his study at home, or, being abroad has known merely so much of them as one cannot help knowing, from daily perusal of the French and English papers, sees fast enough that since 1849 (to go no further back) England has had quite enough to do in Europe and Asia, without going out of her way to meddle with America. It was a physical and moral impossibility that she could be carrying on a gigantic conspiracy against us. But our masses, having only a rough general knowledge of foreign affairs, and not unnaturally somewhat exaggerating the space which we occupy in the world’s eye, do not appreciate the complications which rendered such a conspiracy impossible. They only look at the sudden right-about-face movement of the English Press and public, which is most readily accounted for on the conspiracy theory.

New York Times, January 11, 1863.

You’ll note here that conspiracy theory — more than ten years before the other examples historians often note — is used much how we would use it now. It’s a put-down, an assertion that the complexity someone else sees is a result of ignorance or worse.

You’ll note too something that is almost too delicious: the first use of conspiracy theory is about a conspiracy said to involve the press. The first reference to conspiracy theory we have on record is, in part, a “the press is so unfair because they’re in the bag for the elite cabal” conspiracy. Fake news, man.

I can’t help but feel that this citation raises some questions — a few at least — around some of the cultural history I’ve read on the use of the term conspiracy theory. But it’s Christmas Eve, and I’m not really interested in those conversations right now — I just wanted to correct something I’ve been seeing people get wrong.

To my knowledge I’m the first to trace the etymology back this far, but if I’m not, you can let me know in the comments. If I’m not the first, it’s worthwhile anyway to get this up so that people stop making this mistake.

The Homeostatic Fallacy and Misinformation Literacy

I wrote a thing for Nieman’s year-end journalism predictions yesterday that I’m quite excited about. Hopefully it will be out soon. (Update: it’s here.)

In the article I finally publish this term I’ve been throwing around in some private conversations — the “homeostatic fallacy”. 

Homeostasis is a fundamental concept of biology. The typical example is human body temperature. For humans, 98.6°F tends to be a very desirable temperature. And so what you find is that no matter where you put a human body — in the arctic or the tropics — it finds ways to keep that temperature constant. In hot environments you sweat, and the evaporation of that sweat helps cool you. Your blood vessels dilate to bring more heat to the skin, where it can be shed. In cold environments your blood vessels constrict.

There are a lot of people out there who think that misinformation acts on the mind much as hot and cold environments act on the body. In other words, both misinformation (let’s think of it as cold) and information literacy (let’s think of it as heat) have relatively little effect, because the mind has certain homeostatic mechanisms that protect against identity threat and provide resistance to new ideas. And there’s a certain truth to this. Knowing the facts about nuclear power won’t suddenly make you a supporter, and learning the history of Reconstruction won’t turn you woke.

And as such, we hear a lot of critiques about media literacy of the sort that you can’t change people’s minds by giving them better information. Homeostatic arguments sometimes go even further, claiming bad information is likely not leading to bad decisions anyway.

Often this is posed as a counter to a naive Cartesian view of beliefs that treats our decision-making processes as scientific. But weirdly, both the homeostatic view of misinformation and the Cartesian one suffer from the same transactional blindspot. On a case by case basis — show a person a fact, check if it changes them — homeostatic mechanisms often prevail. Your mind has certain psychological set-points and will often fight to keep them constant, even in the face of disinformation or massive scientific evidence.

But the fallacy is that the set-point itself will not change.

People often use temperature as an example of a homeostatic set-point, but I think another example is far more educational. Consider weight.

Your body has a natural weight it gravitates towards, and the power of that set-point shouldn’t be underestimated. Think about this astonishing fact: one pound of weight is about 3,500 excess calories. To keep your weight within a pound over the course of a year, your average intake would have to stay within about 10 calories a day of maintenance levels (3,500 ÷ 365 ≈ 10). Few people account for calories at this level of precision, and yet many retain a stable weight year after year. That’s the power of homeostasis.

How do we gain weight, then? There’s quite a lot of debate on this, actually. But, more or less, temporary behavior and adverse events, in sufficient quantities, not only increase our weight, but change our set-points. And now the homeostatic mechanisms that kept us trim work against us, to keep the pounds on.

In short time frames, a sort of psychological homeostasis is protective. We see bad info or good info and nothing changes. We share a corrosive meme and we’re still the same person. Just as a single cookie at the Christmas party will have a net zero effect on your weight through the magic of homeostasis, a single bit of disinfo isn’t going to change you.

But the goal of disinformation isn’t really around these individual transactions. The goal of disinformation is to, over time, change our psychological set-points. To the researcher looking at individuals at specific points in time, the homeostasis looks protective — fire up Mechanical Turk, see what people believe, give them information or disinformation, see what changes. What you’ll find is nothing changes — set-points are remarkably resilient.

But underneath that, from year to year, is drift. And it’s the drift that matters.