Educating the Influencers: The “Check, Please!” Prototype

OK, maybe you're just here for the video. I would be. Watch the demo of Check, Please!, and then continue down the page for the theory of change behind it.

Watched it? OK, here’s the backstory.

Last November we won an award from RTI International and the Rita Allen Foundation to work on a “fact-checking tutorial generator” that would generate hyper-specific tutorials that could be shared with “friends, family, and the occasional celebrity.” The idea was this — we talk a lot about media literacy, but the people spreading the most misinformation (and the people reaching the most people with that misinformation) are some of the least likely people to currently be in school. How do we reach them?

I proposed a model: we teach the students that we have, and then give them web tools to teach the influencers. As an example, we have a technique we show students called “Just add Wikipedia”: when confronted with a site of unknown provenance, go up to the omnibar, add “wikipedia” after the domain to trigger a Google search that floats relevant Wikipedia pages to the top, select the most relevant Wikipedia page, and get a bit of background on the site before sharing.

When teaching students how to do this, I record little demos using Camtasia on a wide variety of examples. Students have to see the steps and, as importantly, see how easy the steps really are, on a variety of examples. And in particular, they have to see the steps on the particular problem they just tried to solve: even though the steps are very standard, general instruction videos don’t have half the impact of specific ones. When you see the exact problem you just struggled with solved in a couple clicks, it sinks in in a way that no generic video ever will.

Unfortunately, this leaves us in a bit of a quandary relative to our "have students teach the influencers" plan. I have a $200 copy of Camtasia, a decade's worth of experience creating screencasts, and still, for me to demo a move — from firing up the screen recorder to uploading to YouTube or exporting a GIF — is a half-hour process. I doubt we're going to change the world on that ROI. As someone once said, a lie can make it halfway around the world while the truth is still lacing up its Camtasia dependencies.

But what if we could give our students a website that took some basic information about decisions they made in their own fact-checking process and that website would generate the custom, shareable tutorial for them to share, as long as they were following one of our standard techniques?

I came up with this idea last year — using Selenium to drive an invisible, headless Chrome browser on the server — to walk through the steps of a claim or reputation check while taking screenshots that formed the basis of an automatic tutorial on fact-checking a claim. I ran it by TS Waterman, and after walking through it a bit we decided that — maybe to our surprise (!!) — it seemed rather straightforward. We proposed it to the forum, won the runner-up prize in November, and on January 15 I began work on it. (TS is still involved and will help optimize the program and advise on direction as we move forward, as soon as I clean up my embarrassing prototype spaghetti code.)
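To make this concrete, here's a minimal sketch of the kind of server-side walkthrough I mean (emphatically not the actual Check, Please! code). It assumes the Python Selenium bindings and headless Chrome; the domain, file names, and CSS selector are placeholders for illustration.

# Minimal sketch (not the actual Check, Please! code): drive a headless Chrome
# instance with Selenium, walk the "Just add Wikipedia" steps for a placeholder
# domain, and save a screenshot of each step to build a tutorial from.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

def capture_wikipedia_check(domain: str, out_prefix: str = "step") -> None:
    options = Options()
    options.add_argument("--headless")             # no visible browser window
    options.add_argument("--window-size=1280,800")
    driver = webdriver.Chrome(options=options)
    try:
        # Step 1: the site of unknown provenance the reader started on.
        driver.get(f"https://{domain}")
        driver.save_screenshot(f"{out_prefix}_1_site.png")

        # Step 2: the "just add wikipedia" search a reader would type in the omnibar.
        driver.get(f"https://www.google.com/search?q={domain}+wikipedia")
        driver.save_screenshot(f"{out_prefix}_2_search.png")

        # Step 3: click through to the first Wikipedia result. (The selector is
        # an assumption and would need tuning against Google's real markup.)
        driver.find_element(By.CSS_SELECTOR, "a[href*='wikipedia.org']").click()
        driver.save_screenshot(f"{out_prefix}_3_wikipedia.png")
    finally:
        driver.quit()

# capture_wikipedia_check("example.com")

The point of the sketch is how little the capture step actually requires: load a page, save a screenshot, repeat. Everything else is wrapping those images in explanatory text.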

But here's the thing — it works! The prototype is still far from finished, and the plan is to launch a public site in April after adding a couple more types of checks and massively refactoring the code. But it works. And it may provide a new way to think about stopping the spread of misinformation, not with generic tools for readers, but by empowering those who enforce social norms with better, more educational tools.

The result.

Attention Is the Scarcity

There are a lot of things that set our approach at the Digital Polarization Initiative apart from most previous initiatives. But the biggest is this: we start from the environment in which students are most likely to practice online literacy skills, and in that environment attention is the scarcity.

The idea that scarce attention forms the basis of modern information environments is not new. Herbert Simon, years ago, noted that abundances consume — an abundance of goats makes a scarcity of grass. And information? It consumes attention. So while we name this the information age, information is actually less and less valuable. The paradox of the information age is that control of information means less and less, because information becomes commodified. Instead, the powerful in the information age control the scarcity: they control attention.

Slide from my presentation at StratCom last year

Again, this is not an observation that is unique to me. Zeynep Tufekci, Penny Andrews, An Xiao Mina, Claire Wardle, Whitney Phillips, and so many more have drilled down on various aspects of this phenomenon. And years ago, Howard Rheingold placed attention as a crucial literacy of the networked age, next to others like critical consumption. It's not, at this point, a very contentious assertion.

And yet the implications of this, for media literacy at least, have yet to be fully explored. When information is scarce, we must deeply interrogate the limited information provided to us, trying to find the internal inconsistencies, the flaws, the contradictions. But in a world where information is abundant, these skills are not primary. The primary skill of a person in an attention-scarce environment is making relatively quick decisions about what to turn their attention toward, and making longer-term decisions about how to construct their media environment to provide trustworthy information.

Many people know my four moves approach, which tries to provide a quick guide for sorting through information, the 30-second fact-checks, and the work from Sam Wineburg and others that it builds on. These are media literacy, but they are focused not on deeply analyzing a piece of information but on making a decision about whether an article, author, website, organization, or Facebook page is worthy of your attention (and if so, with what caveats).

But there are other things to consider as well. When you know how attention is captured by hacking algorithms and human behavior, extra care in deciding who to follow, what to click on, and what to share is warranted. I've talked before about PewDiePie's recommendation of an anti-Semitic YouTube account based on some anime analysis he had enjoyed. Many subscribed based on the recommendation. But of course, the subscription doesn't just result in that account's anime analysis videos being shared with you — it pushes the political stuff to you as well. And since algorithms weight subscriptions heavily in what to recommend to you, it begins a process of pushing more and more dubious and potentially hateful content in front of you.

How do you focus your attention? How do you protect it? How do you apply it productively and strategically, and avoid giving it to bad actors or dubious sources? And how do you do that in a world where decisions about what to engage with are made in seconds, not minutes or hours?

These are the questions our age of attention requires we answer, and the associated skills and understandings are where we need to focus our pedagogical efforts.

The Fyre Festival and the Trumpet of Amplification

Unless you’ve been living under a rock, you’re probably aware that there are two documentaries out on the doomed Fyre Festival. You should watch both: the event — both its dynamics and the personalities associated with it — will give you disturbing insights into our current moment. And if you teach students about disinformation I’d go so far as to assign one or both of the documentaries.

Here is one connection between the events depicted in the film and disinfo. There are many others. (This post is not intended for researchers of disinfo, but for teachers looking to help students understand some of the mechanisms).

The Orange Square

Key to the Fyre Festival story is the orange square, a bit of paid coordinated posting by a set of supermodels and other influencers. The models and influencers, including such folks as Kendall Jenner, were paid hundreds of thousands of dollars to post the same message with a mysterious orange square on the same day. And thus an event was born.


People new to disinformation and influencer marketing might think the primary idea here is to reach all the influencers' followers. And that's part of it. But of course, if that were the case you wouldn't need to have people all post at the same time. You wouldn't need the "visual disruption" of the orange square.

The point here is not to reach followers, but to catalyze a much larger reaction. That reaction, in part, is media stories like this by the Los Angeles Times.

And of course it wasn’t just the LA Times: it was dozens (hundreds?) of blogs and publications. It was YouTubers talking about it. Music bloggers. Mid-level elites. Other influencers wanting in on the buzz. The coordinated event also gave credibility required to book bands, the booking of the bands created more credibility, more news pegs, and so on.

You can think of this as a sort of nuclear reaction. In the middle of the event sits some fissile material — the media, conspiracy thought leaders, dispossessed or bitter political influencers. Around it are laid synchronized charges that, should they go off right, catalyze a larger, more enduring reaction. If you do it right, a small amount of social media TNT can create an impact several orders of magnitude larger than its input.

Enter the Trumpet

Central to understanding this is that the fissile material is not the general public, at least at first. As a marketer or disinfo agent you often work your way upward to get downward effects. Claire Wardle, drawing on the work of Whitney Phillips and others, expresses one version of this in the "trumpet of amplification":


Here the trumpet reflects a less direct strategy than Fyre's, starting by influencing smaller, less influential communities, refining messages, then pushing them up the influence ladder. But many of the principles are the same. With a relatively small amount of resources applied in a focused, time-compressed pattern you can jump-start a larger and more enduring reaction that gives the appearance of legitimacy — and may even be self-sustaining once the manipulation stops. Maybe that appearance of legitimacy is applied to getting investors and festival attendees to part with their money. Or maybe it's to create the appearance that there's a "debate" about whether the humanitarian White Helmets are actually secret CIA assets:

Maybe the goal is disorientation. Maybe it’s buzz. Maybe it’s information — these techniques, of course, are also often used ethically by activists looking to call attention to a certain issue.

Why does this work? Well, part of it is the nature of the network. In theory the network aggregates the likes, dislikes and interests of billions of individuals and if some of those interests begin to align — shock at a recent news story for example — then that story breaks through the noise and gets noticed. When this happens without coordination it’s often referred to as “organic” activity.

The dream of many early on was that such organic activity would help us discover things we might otherwise not. And it has absolutely done that — from Charlie Bit My Finger to tsunami live feeds this sort of setup proved good at pushing certain types of content in front of us. And it worked in roughly this same sort of way — organic activity catches the eyes of influencers who then spread it more broadly. People get the perfect viral dance video, learn of a recent earthquake, discover a new opinion piece that everyone is talking about.

But there are plenty of ways that marketers, activists, and propagandists can game this. Fyre used paid coordinated activity, but of course activists often use unpaid coordinated activity to push issues in front of people. They try to catch the attention of mid-level elites who get it in front of reporters, and so on. Marketers often just pay the influencers. Bad actors seed hyperpartisan or conspiracy-minded content in smaller communities, ping it around with bots and loyal foot soldiers, and build enough momentum around it that it escapes that community, giving the appearance to reporters and others of an emerging trend or critique.

We tend to think of the activists as different from the marketers and the marketers as different from the bad actors, but there's really no clear line. The disturbing fact is that it takes frightfully little coordinated action to catalyze these larger social reactions. And while it's comforting to think that the flaw here is with the masses, collectively producing bizarre and delusional results, the weakness of the system more likely lies with a much smaller set of influencers, who can be specifically targeted, infiltrated, duped, or just plain bought.

Thinking about disinfo, attention, and influence in this way — not as mass delusion but as the hacking of specific parts of an attention and influence system — can give us better insight into how realities are spun up from nothing and ultimately help us find better, more targeted solutions. And for influencers — even those mid-level folks with ten to fifty thousand followers — it can help them come to terms with their crucial impact on the system, and understand the responsibilities that come with that.

Smoking out the Washington Post imposter in a dozen seconds or less

So today a group known for pranks circulated an imposter site that posed as the Washington Post, announcing President Trump's resignation on a post-dated paper. It's not that hard for hoaxers to do this — anyone can come up with a confusingly similar URL to a popular site, grab some HTML, and make a fake site. These sites often have a short lifespan once they go viral — the media properties they are posing as lean on the hosts, who pull the plug. But once it goes viral the damage is done, right?

It’s worth noting that you don’t need a deep understanding of the press or communications theory to avoid being duped here. You don’t even need to be a careful reader. Our two methods for dealing with this are dirt simple:

  • Just add Wikipedia (our omnibar hack to investigate a source)
  • Google News Search & Scan (our technique we apply to stories that should have significant coverage).

You can use either of these for this issue. The way we look for an imposter using Wikipedia is this:

  1. Go up to the “omnibar” and turn the url into a search by adding space + wikipedia
  2. Click through to the article on the publication you are supposedly looking at.
  3. Scroll to the part of the sidebar with a link to the site, click it.
  4. See if the site it brings you to is the same site

Here’s what that looks like in GIF form (sorry for the big download).

I haven’t sped that up, btw. That’s your answer in 12 seconds.
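For the curious, the same comparison can be scripted. The sketch below is purely illustrative (it is not the classroom technique, and not the Check, Please! code). It assumes the public MediaWiki and Wikidata APIs plus the requests library; the function name and example URL are made up, and error handling is omitted.

# Rough sketch: find the publication's Wikipedia article, read its official
# website from Wikidata (property P856), and compare hostnames against the
# site you were actually sent to.
import requests
from urllib.parse import urlparse

def official_site_matches(suspect_url: str, publication: str) -> bool:
    wp_api = "https://en.wikipedia.org/w/api.php"

    # Best-matching Wikipedia article for the publication name.
    search = requests.get(wp_api, params={
        "action": "query", "list": "search",
        "srsearch": publication, "format": "json"}).json()
    title = search["query"]["search"][0]["title"]

    # The article's Wikidata item id (a Q-number).
    props = requests.get(wp_api, params={
        "action": "query", "prop": "pageprops",
        "titles": title, "format": "json"}).json()
    page = next(iter(props["query"]["pages"].values()))
    item = page["pageprops"]["wikibase_item"]

    # P856 is Wikidata's "official website" property.
    claims = requests.get("https://www.wikidata.org/w/api.php", params={
        "action": "wbgetclaims", "entity": item,
        "property": "P856", "format": "json"}).json()
    official_url = claims["claims"]["P856"][0]["mainsnak"]["datavalue"]["value"]

    def bare_host(url: str) -> str:
        host = (urlparse(url).hostname or "").lower()
        return host[4:] if host.startswith("www.") else host

    return bare_host(suspect_url) == bare_host(official_url)

# e.g. official_site_matches("https://washingtonpost.fake-example.site/", "Washington Post")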

Now some people might say, well if you read the date of the paper you’d know. Or if you knew the fonts associated with the Washington Post you’d realize the fonts were off. But none of these are broadly applicable habits. Every time you look at a paper like this there will be a multitude of signals that argue for the authenticity of the paper and a bunch that argue against it. And hopefully you pick up on the former for things that are real and the latter for things that aren’t, but if you want to be quick, decisive, and habitual about it you should use broadly applicable measures that give you clear answers (when clear answers are available) and mixed signals only when the question is actually complex.

When I present these problems to students or faculty I find that people can *always* find what they “should have” noticed after the fact. But of course it’s different every time and it’s never conclusive. What if the fonts had been accurate? Does that mean it’s really the Post? What if the date was right? Trustworthy then?

The key isn’t figuring out the things that don’t match after the fact. The key is knowing the most reliable way to solve the whole class of problem, no matter what the imposter got right or wrong. And ideally you ask questions where a positive answer has a chance of being as meaningful as a negative one.

Anyway, the other route to checking this is just as easy — our check other coverage method, using a Google News Search:

  1. Go to the omnibar, search [trump resigns]
  2. When you get to the Google results, don’t stop. Click into Google News for a more curated search
  3. Note that in this case there are zero stories about Trump resigning and quite a lot about the hoax.
  4. There is no step four — you’re done

Again, here it is in all its GIF majesty:

You'll notice that you do need to practice a bit of care here — some publishers try to clickbait the headline by putting the resignation first, hoping that the part noting it was fake gets trimmed off and the story gets a click. (If I were king of the world I'd have a three-strikes policy for this sort of stuff and push repeat offenders out of the cluster feature spots, but that's just me.) Still, scanning over these headlines even in the most careless way possible, it would be very hard not to pick up that this was a fake story.

Note that in this case we don’t even need these fact-checks to exist. If we get to this page and there are no stories about Trump resigning, then it didn’t happen — for two reasons. First, if it happened there would be broad coverage. Second, even if the WaPo was the first story on this, we would see their story in the search results.
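If you wanted to automate that scan of the coverage, a minimal sketch might look like this. It assumes Google News's public RSS search feed and the third-party feedparser library; the function name and query are just illustrations, and the actual judgment call, reading the headlines, stays with the human.

# Minimal sketch: pull the Google News RSS feed for a query and print the top
# headlines so a reader (or a tutorial generator) can scan the coverage.
import feedparser

def scan_news_headlines(query: str, limit: int = 10) -> None:
    url = "https://news.google.com/rss/search?q=" + query.replace(" ", "+")
    feed = feedparser.parse(url)
    for entry in feed.entries[:limit]:
        print(entry.title)

# scan_news_headlines("trump resigns")
# If nothing but hoax and debunk headlines come back, the event almost
# certainly didn't happen: the same reasoning as the manual check above.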

There are lots of things we can teach students, and we should teach them. But I'm always amazed that, two years into this, we haven't even taught them techniques as simple as this.

Why Reputation?

I was reading An Xiao Mina's recent (and excellent) piece for Nieman Lab, and it reminded me that I had not yet written here about why I've increasingly been talking about reputation as a core part of online digital literacy. Trust, yes; consensus, yes. But I keep coming back to this idea of reputation.

Why? Well, the short answer is Gloria Origgi. Her book, Reputation, is too techno-optimist in parts, but is still easily the most influential book I’ve read in the past year. Core to Origgi’s work is the idea that reputation is both a social relation and a social heuristic, and these two aspects of reputation have a dynamic relationship. I have a reputation, which is the trace of past events and current relationships in a social system. But that reputation isn’t really separate from the techniques others use to decode and utilize my reputation for decision-making.

This relationship is synergistic. As an example, reputation is subject to the Matthew Effect, where a person who is initially perceived as smart can gain additional reputation for brilliance at a fraction of the cost of someone initially perceived as mediocre. This is because quick assessments of intelligence have to weight the past assessments of others — as a person expands their social circle, initial judgments are often carried forward, even if those initial judgments are flawed.

Reputation as a social heuristic maps well onto our methods of course — both Origgi and the Digital Polarization initiative look to models from Simon and Gigerenzer for inspiration. But it also suggests a theory of change.

Compare the idea of “trust” to that of “reputation”. Trust is an end result. You want to measure it. You want to look for and address the things that are reducing trust. And, as I’ve argued, media literacy programs should be assessing shifts in trust, seeing if students move out of “trust compression” (where everything is moderately untrustworthy) to a place where they make bigger and more accurate distinctions.

But trust is not what is read, and when we look at low-trust populations it can often seem like there is not much for media literacy to do. People don’t trust others because they’ve been wronged. Etc. What exactly does that have to do with literacy?

But that’s not the whole story, obviously. In between past experience, tribalism, culture, and the maintenance of trust is a process of reading reputation and making use of it. And what we find is that, time and time again, bad heuristics accelerate and amplify bad underlying issues.

I've used the example of PewDiePie and his inadvertent promotion of a Nazi-friendly site as an example of this before. PewDiePie certainly has issues, and seems to share a cultural space that has more in common with /pol/ than #resist. But one imagines that he did not want to risk millions of dollars to promote a random analysis of Death Note by a person posting Hitler speeches. And yet, through an error in reading reputation, he did. Just as the Matthew Effect compounds initial errors in judgment when heuristics are injudiciously applied, errors in applying reputation heuristics tend to make bad situations worse — his judgment about an alt-right YouTuber flows to his followers, who then attach some of PewDiePie's reputation to the ideas presented therein — based, mostly, on his mistake.

I could write all day on this, but maybe one more example. There’s an old heuristic about the reputation of positions on issues — “in matters indifferent, side with the majority.” This can be modified in a number of ways — you might want to side with the qualified majority when it comes to treating your prostate cancer. You might side with the majority of people who share your values on an issue around justice. You might side with a majority of people like you on an issue that has some personal aspects — say, what laptop to get or job to take. Or you might choose a hybrid approach — if you are a woman considering a mastectomy you might do well to consider what the majority of qualified women say about the necessity of the procedure.

The problem, however, from a heuristic standpoint, is that it is far easier to signal (and read the signal of) attributes like values or culture or identity than it is to read qualifications — and one underremarked aspect of polarization is that, relative to other signals, partisan identity has become far easier to read than it was 20 years ago, and expertise has in some ways become more difficult to read.

One reaction to this is to say — well, people have become more partisan. And that's true! But a compounding factor is that as reputational signals around partisan identity have become more salient and reputational signals around expertise have become more muddled (by astroturfing, CNN punditocracy, etc.), people have gravitated to weighting the salient signals more heavily. Stuff that is easier to read is quicker to use. And so you have something like the Matthew Effect — people become more partisan, which makes those signals more salient, which pushes more people to use those signals, which makes people more partisan about an expanding array of issues. What's the Republican position on cat litter? In 2019, we'll probably find out. And so on.

If you want to break that cycle, you need to make expertise more salient relative to partisan signals, and show people techniques to read expertise as quickly as they read partisan identity. Better heuristics and an information environment that empowers quick assessment of things like expertise and agenda can help people build better, fuller, and more self-aware models of reputation, and this, in turn, can have a meaningful impact on the underlying issues.

Well, this has not turned into the short post I had hoped for, and to do it right I'd probably want to write ten more pages. But one New Year's resolution was to publish more WordPress drafts, so here you go. 🙂

Some Notes On Installing Federated Wiki On Windows

It’s 2018, and I’ve still not found anything that helps me think as clearly as federated wiki. At the same time, running a web server of your own is still, in 2018, a royal pain. Case in point: recently a series of credit card breaches forced a series of changes in my credit card number (two breaches in one year, hooray). And that ended up wiping out my Digital Ocean account as it silently failed the monthly renewals. Personal cyberinfrastructure is a drag, man.

But such is life. So I recently started looking at whether I could do federated wiki just on my laptop and not deal with a remote server. It doesn’t get me into the federation, per se, but it allows all the other benefits of federated wiki — drag-and-drop refactoring, quick idea linking, iterative note-taking, true hypertext thinking.

It turns out to be really easy (I mean as things go with this stuff). I’ll go into detail more below, but here are the steps:

  1. Download Node.js for Windows. Install.
  2. Open a command window and type: npm install -g wiki
  3. Launch via command window: wiki -p 80 --security_type friends --cookieSecret 'REPLACE-THIS-SECRET'
  4. Navigate to localhost in a browser
  5. Click the lock to “claim” the wiki as owner
  6. Click the “wiki” link to take it out of read-only mode.
  7. Go forth and wiki…..

Installation

Step one: Download Node.js for Windows. Install.

Step two: Open a command window and type: npm install -g wiki

It’s installed!

Initial Startup

To start your wiki go to a command prompt and type:

wiki -p 80 --security_type friends --cookieSecret 'REPLACE-THIS-SECRET' 

You may need to give node some permissions. I won't advise you on that. But you definitely don't need to give public networks access to your server if you don't want to.

Go to your localhost. You’ll get the start page.

Claiming your wiki

When you first visit your wiki it will be in unclaimed, read-only mode, and the bottom of the interface will look like this (though yours probably won't have 47 pages):

When you click that lock icon, it will create a random username and switch the lock to the unlocked position.

Once you do that you can click on the word “wiki” and now it will move out of read-only into edit mode:

You’ll know it’s in edit mode because you’ll see the edit history icons (sometimes colloquially referred to as ‘chiclets’) at the bottom.

And — that’s it. You’re done. Wiki away.

You’ll need to launch the server from a command window each time you want to use it, but if you’re familiar with Windows you can write a bat file and put it in your startup folder.

(Incidentally, this isn't a tutorial on how to use federated wiki. I'm tired, frankly, of trying to sell it to people who want to know why it takes more than fifteen minutes to learn. I don't teach it anymore because people have weird expectations and it wastes too much of my time trying to get past them. But if you're one of the people who has made the jump, you know this — I just want to help you do it locally on your laptop.)

Optional stuff: Changing your name, importing or backing up files

You don’t need to know where files live on your computer, but sometimes it is useful. For instance, you might want to back up your pages, or reset a username. Here’s how you can do that.

In the single user mode we used above, wiki pages will be in a .wiki directory under your user directory. For instance, my directory is C:\Users\mcaulfield\.wiki\pages. They are simple json files, and can be backed up and zipped. You can also drop json files from other wiki instances here, though you’ll have to delete the sitemap.json file to reindex (more on that below).

For ownership and indexing issues there is a status directory under the .wiki directory (e.g. C:\Users\mcaulfield\.wiki\status). This has two important files in it. One is owner.json, which maintains login information (initially this will not be there — it’s written when you claim it). The other is your sitemap, which has a list of all pages and recent updates on them. Deleting the sitemap is useful when you want to regenerate it after manually uploading new files.
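If you find yourself doing that housekeeping often, a small script can handle it. The sketch below is just an illustration using the Python standard library; the paths mirror the example locations above, and the backup file name is arbitrary.

# Housekeeping sketch: zip up the pages directory as a backup, then delete
# sitemap.json from the status directory so the wiki rebuilds it on next
# launch (useful after manually dropping in pages from another instance).
import zipfile
from pathlib import Path

wiki_dir = Path.home() / ".wiki"          # e.g. C:\Users\mcaulfield\.wiki
pages_dir = wiki_dir / "pages"
sitemap = wiki_dir / "status" / "sitemap.json"

# Back up every page (simple json files) into one zip in your home directory.
with zipfile.ZipFile(Path.home() / "fedwiki-pages-backup.zip", "w") as backup:
    for page in pages_dir.iterdir():
        backup.write(page, arcname=page.name)

# Remove the sitemap so it gets regenerated from the current page set.
if sitemap.exists():
    sitemap.unlink()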

To change your username, you can edit the owner.json file. Change the name property.

If something goes wrong and you want to reinitiate the claim process, you can delete the owner.json file.

If you clear your cookies and hence lose your claim (i.e. are logged out), you can pull the secret from the json and enter it when prompted. It's OK to change it to something simple and more password-like that you can remember.

The node files of your wiki installation will be in your AppData roaming directory under npm, e.g. C:\Users\mcaulfield\AppData\Roaming\npm\node_modules\wiki. There's no real reason to touch these files.

Running a personal desktop farm

This is only for federated wiki geeks, but it is completely possible to run a small personal desktop farm, where you can run multiple wiki sites in what are essentially separate notebooks. Just go into your hosts file (C:\Windows\System32\drivers\etc\hosts) and add localhost aliases:

# localhost name resolution is handled within DNS itself.
#	127.0.0.1       localhost
#	::1             localhost
	127.0.0.1       disinfo
	127.0.0.1       journal
	127.0.0.1       papersonwiki
	127.0.0.1	sandbox
	127.0.0.1	teachersguide
	127.0.0.1	wikipediawomen
	127.0.0.1	opioidcrisis
	127.0.0.1	raceinamerica

Launch in farm mode (-f) and type these words into your browser omnibar. Each will maintain a separate wiki instance. If you want to be able to search across all instances, use the --autoseed flag. Note that you'll have to go through the minimal claim process with each one (two clicks, shown above).

Pushing to a larger federation

If you want to push to a remote server, you can. There’s a couple ways to do this.

First, there’s a flag in wiki that allows you to point to a different directory for pages. So you can point that to a mapped drive or Dropbox or whatever on your laptop, and then point a remote server to that same directory.

Alternatively you could do a periodic rsync to the server. Windows 10 has bash native to it, so you can install that, reach your files through Bash for Windows’s /mnt/c/ mapping, and push them up that way.

In each case, you probably want to delete the sitemap.json and sitemap.xml to trigger a regeneration.

Interestingly, you could also use this scheme (I think) for joint generation of a public wiki.

IIRC, there is also a way to drag and drop json export files into wiki instances.

Finally, you can share files with people by zipping them up and emailing them or providing them as a zipped download. They in turn can drop them into their own federated wiki instance to work with. I've been thinking a lot about this model, which is very memex-like — I make a notebook of my notes on something, post it up, you pull it into your machine. The provenance gets messy at scale, but among a group of people in a subfield who are being more descriptive in their practice than rhetorical, this might work out fine.

It’s Good To Be Back

Using federated wiki again reminds me once again of what wiki means, in an etymological sense. It means quick.

What all other non-federated wiki systems lack is not just federation. They lack quickness, largely because they are designed for novices and trade away possibilities for fluid authorship in exchange for making the first 15 minutes of use easier.

So while it may seem weird to run a federated wiki server on a laptop in a way that makes federation less available, if you've learned the method of multi-pane wiki it's not really weird at all, because every note-taking system you've used besides federated wiki is unbearably slow, clunky, and burdensome. Federated wiki, in the hands of someone who has mastered it, works at the speed of thought. And it does that whether you're in the federation or not. So here's to a very wiki New Year.

“Conspiracy Theorists” in 1934 and 1961

A quick follow-on to my last post — it’s worth mentioning that “conspiracy theorist” is also a much older term than many realize. A few years ago, in fact, a story was going around the forums that the term was either invented by the CIA or at least made an undesirable moniker by them.

Again, in reality, the term is much older and appears to have long been a term of derision even back then. Consider this use from 1934:

The differences of opinion now to be observed in the Congressional committees laboring with the Stock Exchange bill are explained by some thick-and-thin opponents of all changes in the bill by the existence of a conspiracy to defeat it. If there is a conspiracy, it is one of the most vocal in conspiratorial history. The investment bankers, their employees and some of their customers have been making the welkin ring with their complaints. If there has been any secret, backstairs work, the conspiracy theorists will surely find receptive audience for its exposure.

It is to be suspected, however, that the reorganized hesitation within the committees about the Fletcher-Rayburn bill “as is” rests upon more solid ground. The probability is that Senators and Representatives, like people, have come to see that there are risks in an indiscriminate attack upon “Wall Street” which cannot be brushed aside by references to the supposedly dubious of those who resist the attacks.

There’s lots for cultural historians to dig into here — this is, after all, a charge of conspiracy theory against supporters of an anti-Wall Street bill, which shows the ways the term is used to police narratives, for better and worse. But again, we see what we saw with conspiracy theory in the last post. From the beginning these terms have been negative, even if sometimes groups may have used that negative connotation to their own political ends.

It's worth noting, of course, that the conspiracy theory charge is used against hysteria as well, as in this 1961 letter to a New Jersey paper (also not usually cited; the first citations in the OED are from 1964):

The conspiracy theory of history takes an admitted Communist plot against all free men and makes it virtually the sole factor responsible for all phenomena that are not to the liking of the conspiracy theorists. Thus the fiasco of the Cuban revolt is seen not as the tragic miscalculation of wishful-thinking incompetents, which it apparently was . . . but as the usual sinister work of pro-Commie elements in our Government.

That is a call, and a pretty unsubtle one at that, for the Bay of Pigs fiasco not to be used as an excuse to slide back to McCarthyism. The letter is actually headed "Conspiracy Theory". (The author, William Monaghan, wrote on this issue at least one more time — decrying the rising Holocaust denialism of the time in 1963.)

The author continues after some details:

This sort of puerility also never recognizes that the admitted world conspiracy does have at times a large mass base of people who consider themselves in no way conspirators, but rather downtrodden ones who have found a cause and a regime that will bring better days to them. This was the case with the Chinese peasantry in the period of the rise to power of the Chinese Reds. . . . And it was the case recently when the Cuban masses refused to rise against a regime they still for the most part consider their benefactor, not their oppressor. No internal C. I. A. conspiracy but the facts of life in Cuba today foredoomed the invasion and revolt attempt, much as we might wish it to have been otherwise. …

He concludes:

The conspiracy theorist will never abandon his pet intellectual hobby, because it gives him far too much of a sense of his own importance and his group’s significance in history. It is therefore not at all surprising that he should from time to time proclaim that his small group will turn out to have been the savior of this nation and of the liberals themselves.” 

WILLIAM E. MONAGHAN, 548 Studio Road, Ridgefield, May 12, 1961.

The academic calls to think more critically about how we deploy the charge of conspiracy theory are welcome and overdue. Still, on the merits, I’m with William most days of the week, and hope that more people will cite his treatment of conspiracy theorists in their own histories.