Data Voids and the Google This Ploy: Kalergi Plan

If you want to see how data voids are exploited by extremists, here's a good example. Last night a prominent conservative organization tweeted this image:

Picture of a group of conservative activists. One holds a beach ball that says “Google Kalergi Plan.”

You see the beach ball, right? It asks you to Google the “Kalergi Plan”. What’s that? It’s an anti-Semitic conspiracy theory that has its roots in “the ‘white genocide’ and ‘great replacement’ conspiracy rhetoric in far-right circles, which allege that a secret ruling class of Jewish elites is using immigration policy to remove European white people from the population.”

It’s garbage, and it’s dangerous garbage. Specifically, it’s the sort of garbage that motivated both the shooter in the Tree of Life massacre and the shooter in the Christchurch shootings.

But what happens when you Google this term?

Results of Kalergi Plan search on Google. At the top of the page are three white supremacist videos pushing conspiracy theories with a history of promoting violence and murder.

What you see immediately are the videos. These videos, like a lot of conspiracy videos, take a little-known footnote in history and place it center stage. Kalergi, of course, is historically real. But he is also a historical figure of little note and no current influence. As such, there isn’t much writing on him, except (and here’s the main thing) by those who have put him at the center of a fake conspiracy.

So what you get is what researchers call a “data void”: people who know anything about the history of Europe, immigration, etc. don’t talk about Kalergi, because he is insignificant, a figure most notable for the conspiracy theories built around him. But people pushing the conspiracy theory talk about Kalergi quite a lot. So when you search Kalergi Plan, almost all the information you get will be by white supremacist conspiracy theorists.

These bad actors then use the language of critical thinking to tell you to look at the evidence and “make up your own mind.”

Screenshot of YouTube video saying “Is the Kalergi Plan Real? Make up your own mind.”

But of course if you’re searching “Kalergi plan”, most of the “evidence” you are getting comes from white supremacist conspiracy theorists. Making up your own mind under such a scenario is meaningless at best.

Things used to be much worse up until a few months ago, because if you watched one of these videos, YouTube would keep playing you conspiracy videos on the “Kalergi Plan” via a combination of autoplay, recommended videos, and personalization. It would start connecting you to other videos on other neo-Nazi theories, “race science”, and the like. People would Google a term once and suddenly find themselves permanently occupying a racist, conspiracy-driven corner of the internet. Fun stuff.

Due to some recent actions by YouTube, this follow-on effect has been substantially mitigated (though their delay in taking action has led to the development of a racist-conspiracist bro culture on YouTube that continues to radicalize youth). The tamping down of the recommended-video conspiracy vector isn’t perfect, but it is already having good effects. However, it’s worth noting that reducing the influence of this vector has probably increased the importance of Google This ploys on the net, since people are less likely to hit these videos without direct encouragement.

What can we do as educators? What should we encourage our students to do?

1. Choose your search terms well

First, let students know that all search terms should be carefully chosen, and ideally formed using terms associated with the quality and objectivity of the information you want back. Googling “9/11 hoax” is unlikely to provide you with reliable information on the 9/11 attacks, as people who study 9/11 don’t refer to it as a hoax. In a similar vein, “black on white crime”, the term that began the radicalization of the Charleston church shooter, is a term used by many neo-Nazis but does not feature prominently in academic analysis of crime patterns. Medical misinformation is similar — if you search for information on “aborted fetuses” in vaccines when there are no aborted fetuses in vaccines, the people you’re going to end up reading are irresponsible, uneducated kooks.


Selected Google results for “Aborted fetuses in vaccines”. It’s a tire fire.

This isn’t to say that a better search term gets you great results, especially around issues that are conspiracy-adjacent. But a better search term may at least return a set of results with some good pages listed. Here are the top results for a bad search ([[“aborted fetuses in vaccines”]], on the left) and a better one ([[stem cells vaccines]], on the right).

Search result screenshots.

Note the differences (reliable sources are highlighted). With the loaded terms on the left, the top two results are from unreliable sources. However, the less loaded search returns better results. In addition to seeing some scholarly articles with the better terms (a possible-though-not-foolproof indicator you are using better language) the second item here is not only a reliable resource on this issue, but one of the best comprehensive explanations of the issue written for the general public, from an organization that specializes in the history of medicine. Search on the loaded terms, however, and you will not see this, even in the first fifty results.

2. Search for yourself

Conspiracy theorists are fond of asking people to “think for themselves” — after those people use the suggested conspiracy-inflected search terms to immerse themselves in a hall of mirrored bullshit. A better idea might be to do less thinking for yourself and more searching for yourself. When you see signs or memes asking you to search specific terms, realize that the person asking you to do that may be part of a community that has worked to flood the search results for that term with misinformation.

When we say “search for yourself” we do not mean you should use terms that return information that matches your beliefs. We mean that you should think carefully about the sort of material you want to find. If you wish to find scholarly articles or popular presentations of scholarly work, choose terms scholars use. If you are interested in the history of Europe’s current immigration policies, search for “history of europe’s immigration policies”, not “Kalergi Plan”. Don’t be an easy mark. There’s nothing more ridiculous than a person talking about thinking for themselves while searching on terms they were told to search on by random white supremacists or flat-earthers on the internet.

A final note — for the moment, avoid auto-complete in searches unless it truly is what you were just about to type. Auto-complete often amplifies social bias, and for niche terms it can be gamed in ways that send folks down the wrong path. It’s not so bad when searching for very basic how-to information or the location of the nearest coffee shop, but for research on issues of social controversy or conflict it should be avoided.

3. Anticipate what sorts of sources might be in a good search — and notice if they don’t show up

Before going down the search result rabbit hole, ask yourself what sorts of sources would be the most authoritative to you on those issues and look to see if those sorts of pages show up in the results. There’s a set of resources I’ve grown used to seeing when I type in a well-formed medical query — the Mayo Clinic, WebMD, the American Cancer Society, the National Institutes of Health. (As an example, try “melatonin for sleep” as a search.) When looking for coverage of national events I’ve grown used to seeing recognizable newspapers — the Washington Post, the Los Angeles Times, the Wall Street Journal.

Students don’t necessarily have the ability to recognize these sorts of sources off the bat, but they should cultivate the habit of noticing the results that turn up in a well-tuned query, and the sources that turn up in a data void, such as the “death foods” term you may occasionally see in sponsored-ad chumbuckets on websites. Initially this understanding may be more about genre than specific touchstones — expecting newspapers to show up for events, hospitals and .gov sites for medical searches, magazine or journal treatments of policy issues.

The important thing, however, is anticipation. Does the student have at least a vague expectation of the sorts of sources they should see before they hit the search button? If they develop the habit of forming these informal expectations they are more likely to reanalyze search terms when those expectations are violated.
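If it helps to make the habit concrete, here’s a toy sketch of that expectation check in Python. The domain list, example URLs, and function name are all my own illustration, not a vetted whitelist:

```python
# Toy illustration only: the "expected" domains are hand-picked examples,
# and the result URLs are made up for the demo.
from urllib.parse import urlsplit

EXPECTED_MEDICAL = {"mayoclinic.org", "webmd.com", "cancer.org", "nih.gov"}

def expected_sources_seen(result_urls, expected=EXPECTED_MEDICAL):
    """Return which anticipated domains actually appear in the results."""
    seen = set()
    for url in result_urls:
        host = urlsplit(url).netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        seen.add(host)
    return seen & expected

results = [
    "https://www.mayoclinic.org/melatonin-example-page",
    "https://sleep-miracle-cures.example/melatonin-secrets",
]
matches = expected_sources_seen(results)
print(matches or "No expected sources in results - rethink the search terms")
```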

Network Heuristics

There’s a story going around right now about a “reporter” who was following people shorting Tesla stock and allegedly approaching them for information. I won’t go into the whole Elon vs. the Short Sellers history; you don’t need it. Let’s just say that posing as a reporter can be used for ill in a variety of ways, and maybe this was a case of that.

Snapshot of Maisy Kinsley’s profile

The way a lot of people judge reputation is through signals, the information a person chooses to project about themselves. Signals can be honest or dishonest, but if a person is new to you, you may not be able to assess the honesty of the signal. What you can do, however, is assess the costliness of the signal. For a faker, certain things take relatively little time and effort, but others take quite a lot.

Let’s list the signals, and then we’ll talk about their worth.

First there’s the Twitter bio and the headshot. The headshot is an original photo — a reverse image search here doesn’t turn up Maisy, but it doesn’t turn up anyone else either, so it’s less likely to be a stolen photo. The Twitter bio says she’s written for Bloomberg.

This isn’t that impressive as verification, but wait! Maisy also has a website, and it looks professionally done!

Maisy’s website

From the website we learn that she’s a freelancer. Again, user supplied, but she links to her LinkedIn page. She’s got 194 connections, and is only 3 degrees of separation from me! (I’m getting a bit sick of this photo, but still).

LinkedIn profile.

Oh, and she went to Stanford! Talk about costly, right? You don’t do that on a whim!

Screenshot of education panel in LinkedIn

The Usual Signals Are Increasingly Garbage

Here’s the thing about all the signals here: they are increasingly garbage, because the web drives down the cost of these sorts of things. And as signals become less costly they are less useful.

Your blurb on Twitter is produced directly by you — it’s not a description on a company website or in a professionals directory. So, essentially worthless.

That photo that’s unique? In this case, it was generated by machine learning, which can now generate unique pictures of people who never existed. It’s a process you can replicate in a few seconds at this site here, as I did to generate a fake representative and fake tweet below.

The website? Domains are cheap, about $12 a year. Website space is even cheaper. The layout? Good layout used to be a half-decent signal that you’d spent money on a designer — fifteen years ago. Nowadays, templates like this are available for free.

LinkedIn, though, right? All those connections? The education? I mean, Stanford, right?

First, the education field in LinkedIn is no more authoritative than the bio field in Twitter. It’s user supplied, even though it looks official. Hey, look, I just got into Stanford too! My new degree in astrophysics is going to rock.

Screenshot of a fake degree I just gave myself on LinkedIn. I deleted it immediately after; fake-attending Stanford was messing up my work-life balance.

The connections are a bit more interesting. One person called one of Maisy’s endorsements to see if they actually knew this person. Nope, they didn’t. Just doing that thing where you don’t refuse connections or mutual endorsements. “Maisy” just had to request a lot of connections and make enough endorsements, and figure that a large enough percentage would follow or endorse her back. Done and done.

“JB is real…talked on the phone…just taking advantage of reciprocal nature of people” Twitter, @plainsite

I’ll tell you a funny story, completely true. I once friended someone on LinkedIn that I knew, Sara Wickham or somesuch. And we went back and forth talking about our friends in college in 1993 — “Remember Russ?” “Guy with the guitar, always playing the Dead and Camper Van Beethoven?” “Oh you mean Chris?” “Right, Chris.” “Absolutely. Whatever happened to Chris, anyway?”

A week or so into our back and forth I noticed we had never attended the college at the same time, and as I dug into it I remembered the person I was thinking of didn’t have that last name at all. I had never met this person I was talking with, and in fact we had no common friends.

That’s LinkedIn in a nutshell. Connections are a worthless indicator.

Stop with the “Aha, I Spotted Something” You’re Firing Up In Your Head

So now maybe you’re channeling your inner Columbo and just dying to tell us all about all the things you’ve noticed that “gave this away”. You would not have been fooled, right?

I mean, there’s a five-year work gap between Stanford and reporting. She graduated in 2013 and then started just *now*. Weird, right? There’s a sort of bulge in the photo that’s the tell-tale sign of AI processing! And it’s the same photo everywhere! The bio on the website sucks. The name of her freelancing outfit is Unbiased Storytellers, which feels as made-up a name as Honest Auto Service.

Here’s the thing — you’re less smart if you’re doing this stuff than if you’re not. You already know the person is fake, and so what you’re doing is noticing all the little inconsistencies.

But the problem is that life is frustratingly inconsistent once put under a microscope. The work gap? People have kids, man. It’s not unusual at all — especially for women — to have a work gap between college and their first job. If that’s your sign of fakery, you’re going to be labeling a lot of good female reporters fake.

That photo? Sometimes photos just have weirdness about them. Here’s the photo of a Joshua Benton on Twitter, who tweets a lot of stuff.

Joshua Benton

Joshua’s Twitter bio claims that he’s a person running a journalism project at Harvard, so it’s a bit weird he’s obscuring what he looks like. Definitely fishy!

Except, of course, Benton does work at Harvard, and in fact runs a world famous lab there.

What about Maisy’s sucky bio? Well, have you ever written a sucky bio and thought, I’ll go back and fix that? I have. (A lot of them made it all the way into conference programs).

And finally the name of her freelance shop: Unbiased Freelancing and Storytellers. Surely a fake, right?

Funny story about that. A bunch of Twitter users were investigating this story and looking at her LinkedIn connections/endorsements. And one of them found the clearly fake Mr. Shepard, a “Dog Photographer and Maker of Paper Hats”:

Do I have to spell this out for you? His name is Shepard and he photographs dogs. Look at the hats, which are CLEARLY photoshopped on (can you see the stitching?) His bio begins “Walking the line between storyteller and animal handler…” Come ON, right?

Except then the same person called the “Puptrait” studio. And JB is real. And his last name is really Shepard.

This puppy is real and is really wearing that hat. They are also super adorable, and if you want a picture like this of your own pet (or just want to browse cuteness for a while in a world that has gotten you down) you should check out Shepard’s Puptrait Studio. Picture used with kind permission of JB Shepard.

And the hats aren’t photoshopped; he really does make these custom paper hats that fit on dogs’ heads.

If you’d have thought this was fake, don’t blame yourself — when reading weak and cheap signals you are at the mercy of your own biases as to what is real and what is not, what is plausible and what is not. You’ll make assumptions about what a normal work gap looks like based on being a man, or about what a normal job looks like based on not being a dog photographer and maker of paper hats.

I actually used to do this thing where I would tell faculty or students that a site was fake, and ask them how do we know? And they would immediately generate *dozens* of *obvious* tells — too many ads, weird layout, odd domain name, no photos of the reporters, clickbaity headlines, no clear about page. And then I would reveal that it was actually a real site. And not only a real site, but a world renowned medical journal or Australia’s paper of national record.

I had to stop doing this for a couple reasons. First, people got really mad at me. Which, fair point. It was a bit of a dick move.

But the main reason I had to stop is that after talking themselves into it via all these things they noticed, a certain number of the students and faculty could not be talked out of it, even after the reveal. Each thing they had “noticed” had pulled them deeper into the belief that the site was faked, and being told that it was actually a well-respected source created a cognitive dissonance that couldn’t be overcome. People would argue with me. That can’t really be a prestigious medical journal — I don’t believe you! You’ve probably got it wrong somehow! Double-check!

It ended up taking up too much class time and I moved on to other ways to teach it. But the experience actually frightened me a bit.

Avoid Cheap Signals, Look for Well-Chosen Signs

By looking at a lot of poor quality cheap signals you don’t get a good sense of what a person’s reputation is. Mostly, you just get confused. And the more attributes of a thing you have to weigh against one another in your head the more confused you’re going to get.

This situation is only going to get worse, of course. Right now AI-generated pictures do have some standard tells, but in a couple years they won’t. Right now you still have to write your own marketing blurb on a website, but in a couple years machine learning will pump out marketing prose pretty reliably, maybe at a level that looks more “real” than hand-crafted copy. The signals are not only cheap, they are in a massive deflationary spiral.

What we are trying to do in our digital literacy work is to get teachers to stop teaching this “gather as much information as you can and weigh it in a complex calculus heavily influenced by your own presuppositions” approach and instead use the network properties of the web to apply quick heuristics.

Let’s go back to this “reporter”. She claims to write for Bloomberg.

Snapshot of Maisy Kinsley’s profile

Does she? Has she written anywhere? Here’s my check:

Screenshot of Google News

I plug “Maisy Kinsley” into Google News. There’s no Maisy Kinsley mentioned at all. Not in a byline, not in a story. You can search Bloomberg.com too and there’s nothing there at all.
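If you like automating these checks, Google News also publishes RSS feeds of its search results, which makes the byline check scriptable. A minimal sketch, assuming the third-party feedparser library; the function name is my own:

```python
# Sketch of an automated "does this reporter leave traces?" check against
# Google News RSS search feeds. Assumes: pip install feedparser
from urllib.parse import quote_plus

import feedparser

def google_news_hits(name):
    """Return headlines from Google News stories mentioning the exact name."""
    feed_url = "https://news.google.com/rss/search?q=" + quote_plus('"%s"' % name)
    return [entry.title for entry in feedparser.parse(feed_url).entries]

hits = google_news_hits("Maisy Kinsley")
if not hits:
    print("No bylines, no mentions: treat the reporter claim as unverified.")
```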

Let’s do the same with a reporter from the BBC who just contacted me. Here’s a Google News search. First, a bunch of Forbes stuff:

A search for Frey Lindsay turns up many stories from Forbes in Google News

Downpage, some other stuff, including a BBC reference:

If we click through to BBC News and do a search, we find a bunch more stories:

We’re not looking at hair, or photos, or personal websites or LinkedIn pages, or figuring out if a company name is plausible or a work gap explainable. All those are cheap signals that Frey can easily fake (if a bad actor) and we can misread (if he is not). Instead we figure out what traces we should find on the web if Frey really is a journalist. Not what does Frey say about himself, but what does the web say about Frey. The truth is indeed “out there”: it’s on the network, and what students need is to understand how to apply network heuristics to get to it. That involves knowing what is a cheap signal (LinkedIn followers, about pages, photographs), and what is a solid sign that is hard to counterfeit (stories on an authoritative and recognizable controlled domain).

Advancing this digital literacy work is hard because many of the heuristics people rely on in the physical world are at best absent from the digital world and at worst easily counterfeited. And knowing what is trustworthy as a sign on the web and what is not is, unfortunately, uniquely digital knowledge. You need to know how Google News is curated and what inclusion in those results means and doesn’t mean. You need to know followers can be bought, and that blue checkmarks mean you are who you say you are but not that you tell the truth. You need to know that it is usually harder to forge a long history than it is to forge a large social footprint, and that bad actors can fool you into using search terms that bring their stuff to the top of search results.

We’ve often convinced ourselves in higher education that there is something called “critical thinking” which is some magical mental ingredient that travels, frictionless, into any domain. There are mental patterns that are generally applicable, true. But so much of what we actually do is read signs, and those signs are domain specific. They need to be taught. Years into this digital literacy adventure, that’s still my radical proposal: that we should teach students how to read the web explicitly, using the affordances of the network.


If you want to see how badly we are failing to teach students these things, check out A Brief History of CRAAP and Recognition is Futile.

Update on Check, Please!

Short update on the Check, Please project.

We’re about halfway into the coding hours on this, which is a bit scary. We still have some expert hours from TS Waterman at the end to solve the hard problems, but right now we’re solving the easy ones.

A couple weeks ago we put out a prototype. The prototype was for one of the three moves we wanted to showcase, and it was functional, and used the original concept of a headless Chrome instance in the background to generate the tutorials. The prototype did what good prototypes do and showed that the project was possible, but there were three weak spots:

  • First, the Chrome screenshots could usually be manipulated to capture the right part of the screen (e.g. scroll down to a headline or get the correct Google result scrolled into view). But this was a bit more fragile than we’d hoped as we tested it on a wide array of prompts.
  • Second, headless Chrome was really slow on some sites. Even on speedy sites, like Google, the fire-up and retrieval would normally be a couple seconds but could stretch to much, much more. We were running headless Chrome against three sites, and on the occasional call where all three went slow we’d sometimes get over 30 seconds. This didn’t happen a lot (timings were usually about 10–15 seconds for the entire process) but it happened enough.
  • Finally, because headless Chrome is headless, a lot of the things needed to make the animation instructive (mouse pointers, cursors, omnibars) had to be added via image manipulation anyway.

I played with the settings, with asynchrony, with using a single persistent instance of Chrome Driver, and things got better, but it became clear that we should offload at least some problems to a caching mechanism, and where possible use PIL to draw mockups off of an HTML request rather than doing everything through screenshots. So I’m in the middle of that rebuild now, with caching and some imaging library rework. Hoping to get it reliably under 10 seconds max.
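For the curious, the screenshot step looks roughly like this. A simplified sketch, not the actual Check, Please code; paths, timeout, and the naive cache-by-URL scheme are examples of the mitigations described above:

```python
# Simplified sketch of the headless-Chrome screenshot step with a naive
# file cache. Not the project's actual code; paths and timeout are examples.
import hashlib
import os

from selenium import webdriver

CACHE_DIR = "screenshot_cache"

def screenshot(url, timeout=10):
    """Return the path to a PNG of the page, reusing a cached copy if present."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    name = hashlib.sha1(url.encode("utf-8")).hexdigest() + ".png"
    path = os.path.join(CACHE_DIR, name)
    if os.path.exists(path):
        return path  # cache hit: skip the slow browser round trip entirely

    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    options.add_argument("--window-size=1280,800")
    driver = webdriver.Chrome(options=options)
    try:
        driver.set_page_load_timeout(timeout)  # cap the slow-site worst case
        driver.get(url)
        driver.save_screenshot(path)
    finally:
        driver.quit()
    return path
```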

Web Literacy Across the Curriculum

We’re still teaching history using only print texts even as kids are being historicized online by Holocaust deniers and Lost-Causers. We’re teaching science in an era when online anti-vaxxers gain traction by using scientific language to deceive and intimidate. 

Sam Wineburg, The internet is sowing mass confusion. We must rethink how we teach kids every subject.

Couple good pieces out — one by Sam Wineburg, and an interesting response (expansion?) by Larry Cuban. The point, at least as I read it? Misinformation on the web is not really a subject — or, in any case, not only a subject. The web, after all, is an environment, a domain in which most professional, scholarly, and civic skills are practiced. Yet the structure of how we teach most subjects treats the web as either an afterthought, or worse, as a forbidden land.

If you know me and know this blog, you know this issue has been my obsession since before this blog was launched in 2007. Back in 2009 I dubbed the practice of ignoring the web as a target domain “Abstinence-only Web Education”:

…what [the term] expresses [is] my utter shock that when talking to some otherwise intelligent adults about the fact that we are not educating our students to be critical consumers of web content, or to use networks to solve problems, etc — my utter shock that often as not the response to this problem is “Well, if students would just stop getting information from the web and go back to books, this whole problem would go away.”

Now I do believe that reading more books and less web is usually a good decision as part of a broader strategy. But most of what students will do in their professional and civic lives will involve the web.

My younger daughter, for example, is presenting to the school board tonight about how the integrated arts and academics magnet program she is in supports various educational objectives. When trying to understand what those objectives mean — from critical thinking to collaboration — she is not reading a textbook or going to a library. She is consulting the web.

And I am writing this at work as part of being in an informal professional development community, and you are reading it to maybe help you with your job.

These issues seem a million miles away from Pizzagate and blogs that tell you that sea ice is increasing and climate change is really a hoax. But they turn out to be adjacent. What happens if my daughter’s search for critical thinking lands on one of the recently politicized redefinitions of that term, which she ends up presenting to the school board? And you’re here at this blog, trusting me — but there are of course other blogs and articles written by people in the employ of ed tech firms, and others written by people who have zero experience in the domain on which they write. Giving your attention to those sites may actually make you worse at what you do, or lead to your manipulation by corporate forces of which you are unaware.

Or maybe not! Maybe you’re good at all this.

Still, I keep coming back to that part of Dewey’s School and Society where he talks about the problem of transmission of knowledge in a post-agrarian society. In the first lecture in that work, Dewey talks about the way in which industrialization has rendered the processes of production opaque. In an agrarian society, he notes, “the entire industrial process stood revealed, from the production on the farm of the raw materials, till the finished article was actually put to use.” In such a world a youngster could simply observe, and see what competent practice looked like. To understand where things came from was to understand one’s household, and not much pedagogical artifice was required. With the introduction of complex, specialized, and opaque systems, however, there was no opportunity to learn by looking over a parent’s shoulder, and so a more designed approach was required.

Two things occur to me re-reading that. The first is not necessarily a new media literacy insight. But that networked opacity we deal with — the complex network of actors and algorithms that leads to a piece of information or propaganda being displayed on your screen — is a very similar problem. There’s a part of that lecture where Dewey talks about how students who investigate the production of clothing walk through domains of physics, history, geography, engineering, and economics, due to the complex set of historical, geographical, and other factors that have determined the way in which clothing gets made. The point he makes is that you can organize the curriculum around clothing, and the disciplines become meaningful.

I’m not proposing a complete retread of Dewey’s progressive education in 2019. We’ve learned a lot since Dewey about how people learn; that’s good and we should use it. But narrowly, what Dewey saw in clothing in 1899 I see in web literacy today. Here is a going social concern that combines sociology, psychology, history, engineering, algorithms, math, political science, and so on. You don’t have to adopt unmodified Deweyism to see the opportunities there for integrative education. Elucidate the circumstances of production for this thing students are using most of their waking life. If you’re a high school or an integrative first-year program, put together a year on it, and try it out.

The second point is on skills. Dewey noted that when professional knowledge moved out of farms and into factories and offices, children lost the ability to observe competence in action. Work — and the skills associated with it — became hidden.

That’s still true today, but there’s another angle on this. Even in offices our skills are quite hidden because of the way this work evades third-party observation. Where there is an artifact of work (equations, code, writing, etc.), a co-worker can ask “hey, why are you doing that in that way?” And where more ephemeral processes are public — soft skills exercised in a meeting, for example — they can also be learned.

But web skills have the double whammy of leaving very little trace, and of being intensely private. And this makes transmission and improvement of these skills much more difficult, and creates a situation where there is a lot of hidden need. More on that in a later post.

Educating the Influencers: The “Check, Please!” Prototype

OK, maybe you’re just here for the video. I would be. Watch the demo of Check Please, and then continue downpage for the theory of change behind it.

Watched it? OK, here’s the backstory.

Last November we won an award from RTI International and the Rita Allen Foundation to work on a “fact-checking tutorial generator” that would generate hyper-specific tutorials that could be shared with “friends, family, and the occasional celebrity.” The idea was this — we talk a lot about media literacy, but the people spreading the most misinformation (and the people reaching the most people with that misinformation) are some of the least likely people to currently be in school. How do we reach them?

I proposed a model: we teach the students that we have, and then give them web tools to teach the influencers. As an example, we have a technique we show students called “Just add Wikipedia”: when confronted with a site of unknown provenance, go up to the omnibar, add “wikipedia” after the domain to trigger a Google search that floats relevant Wikipedia pages to the top, select the most relevant Wikipedia page, and get a bit of background on the site before sharing.
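To give a flavor of how mechanical the move is, here’s what it amounts to as a URL rewrite. A sketch only: the helper name and example site are mine, though the Google search URL pattern is standard:

```python
# "Just add Wikipedia" as a URL rewrite. Illustrative only: the helper name
# and example site are made up; the Google search URL pattern is standard.
from urllib.parse import urlsplit, quote_plus

def just_add_wikipedia(page_url):
    """Build a Google search that floats Wikipedia coverage of a site's domain."""
    domain = urlsplit(page_url).netloc
    return "https://www.google.com/search?q=" + quote_plus(domain + " wikipedia")

print(just_add_wikipedia("https://www.dubious-news-site.example/story/123"))
# -> https://www.google.com/search?q=www.dubious-news-site.example+wikipedia
```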

When teaching students how to do this, I record little demos using Camtasia on a wide variety of examples. Students have to see the steps and, as importantly, see how easy the steps really are, on a variety of examples. And in particular, they have to see the steps on the particular problem they just tried to solve: even though the steps are very standard, general instruction videos don’t have half the impact of specific ones. When you see the exact problem you just struggled with solved in a couple clicks, it sinks in in a way that no generic video ever will.

Unfortunately, this leaves us in a bit of a quandary relative to our “have students teach the influencers” plan. I have a $200 copy of Camtasia, a decade’s worth of experience creating screencasts, and still, for me to demo a move — from firing up the screen recorder to uploading to YouTube or exporting a GIF — is a half-hour process. I doubt we’re going to change the world on that ROI. As someone once said, a lie can make it halfway around the world while the truth is still lacing up its Camtasia dependencies.

But what if we could give our students a website that took some basic information about the decisions they made in their own fact-checking process and generated a custom, shareable tutorial from it, as long as they were following one of our standard techniques?

I came up with this idea last year — using Selenium to drive an invisible Chrome browser on the server — to walk through the steps of a claim or reputation check while taking screenshots that formed the basis of an automatic tutorial on fact-checking a claim. I ran it by TS Waterman, and after walking through it a bit we decided that — maybe to our surprise (!!) — it seemed rather straightforward. We proposed it to the forum, won the runner-up prize in November, and on January 15 I began work on it. (TS is still involved and will help optimize the program and advise direction as we move forward, as soon as I clean up my embarrassing prototype spaghetti code.)
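Once the screenshots for each step are captured, stitching them into a shareable animation is the easy part. A minimal sketch with PIL (Pillow); the frame filenames and timing are illustrative, not the project’s actual values:

```python
# Stitch step screenshots into an animated GIF tutorial with Pillow.
# Illustrative only: frame filenames and timings are made up.
from PIL import Image

frames = [Image.open("step_%d.png" % i) for i in range(1, 5)]
frames[0].save(
    "tutorial.gif",
    save_all=True,             # write all frames, not just the first
    append_images=frames[1:],  # the remaining steps become later frames
    duration=1500,             # milliseconds per frame: slow enough to read
    loop=0,                    # loop forever
)
```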

But here’s the thing — it works! The prototype is so, so far from finished, and the plan is to launch a public site in April after adding a couple more types of checks and massively refactoring the code. But it works. And it may provide a new way to think about stopping the spread of misinformation: not by building generic tools for readers, but by empowering those who enforce social norms with better, more educational tools.

The result.

Attention Is the Scarcity

There’s a lot of things that set our approach at the Digital Polarization Initiative apart from most previous initiatives. But the biggest thing is this: we start from the environment in which students are most likely to practice online literacy skills, and in that environment attention is the scarcity.

The idea that scarce attention forms the basis of modern information environments is not new. Herbert Simon, years ago, noted that abundances consume — an abundance of goats makes a scarcity of grass. And information? It consumes attention. So while we name this the information age, information is actually less and less valuable. The paradox of the information age is that control of information means less and less, because information becomes commodified. Instead, the powerful in the information age control the scarcity: they control attention.

Slide from my presentation at StratCom last year

Again, this is not an observation that is unique to me. Zeynep Tufekci, Penny Andrews, An Xiao Mina, Claire Wardle, Whitney Phillips, and so many more have drilled down on various aspects of this phenomenon. And years ago, Howard Rheingold identified attention as a crucial literacy of the networked age, next to others like critical consumption. It’s not, at this point, a very contentious assertion.

And yet the implications of this for media literacy, at least, have yet to be fully explored. When information is scarce, we must deeply interrogate the limited information provided to us, trying to find the internal inconsistencies, the flaws, the contradictions. But in a world where information is abundant, these skills are not primary. The primary skill of a person in an attention-scarce environment is making relatively quick decisions about what to turn their attention toward, and making longer-term decisions about how to construct their media environment to provide trustworthy information.

People know my four moves approach, which tries to provide a quick guide for sorting through information; the 30-second fact-checks; and the work from Sam Wineburg and others that it builds on. These are media literacy, but they are focused not on deeply analyzing a piece of information but on making a decision about whether an article, author, website, organization, or Facebook page is worthy of your attention (and if so, with what caveats).

But there are other things to consider as well. When you know how attention is captured by hacking algorithms and human behavior, extra care in deciding who to follow, what to click on, and what to share is warranted. I’ve talked before about PewDiePie’s recommendation of an anti-Semitic YouTube account based on some anime analysis he had enjoyed. Many subscribed based on the recommendation. But of course, the subscription doesn’t just result in that account’s anime analysis videos being shared with you — it pushes the political stuff to you as well. And since algorithms weight subscriptions heavily in deciding what to recommend to you, it begins a process of pushing more and more dubious and potentially hateful content in front of you.

How do you focus your attention? How do you protect it? How do you apply it productively and strategically, and avoid giving it to bad actors or dubious sources? And how do you do that in a world where decisions about what to engage with are made in seconds, not minutes or hours?

These are the questions our age of attention requires us to answer, and the associated skills and understandings are where we need to focus our pedagogical efforts.

The Fyre Festival and the Trumpet of Amplification

Unless you’ve been living under a rock, you’re probably aware that there are two documentaries out on the doomed Fyre Festival. You should watch both: the event — both its dynamics and the personalities associated with it — will give you disturbing insights into our current moment. And if you teach students about disinformation I’d go so far as to assign one or both of the documentaries.

Here is one connection between the events depicted in the film and disinfo. There are many others. (This post is not intended for researchers of disinfo, but for teachers looking to help students understand some of the mechanisms).

The Orange Square

Key to the Fyre Festival story is the orange square, a bit of paid coordinated posting by a set of supermodels and other influencers. The models and influencers, including such folks as Kendall Jenner, were paid hundreds of thousands of dollars to post the same message with a mysterious orange square on the same day. And thus an event was born.


People new to disinformation and influencer marketing might think the primary idea here is to reach all the influencer followers. And that’s part of it. But of course, if that were the case you wouldn’t need to have people all post at the same time. You wouldn’t need the “visual disruption” of the orange square.

The point here is not to reach followers, but to catalyze a much larger reaction. That reaction, in part, is media stories like this by the Los Angeles Times.

And of course it wasn’t just the LA Times: it was dozens (hundreds?) of blogs and publications. It was YouTubers talking about it. Music bloggers. Mid-level elites. Other influencers wanting in on the buzz. The coordinated event also gave the credibility required to book bands; the booking of the bands created more credibility, more news pegs, and so on.

You can think of this as a sort of nuclear reaction. In the middle of the event sits some fissile material — the media, conspiracy thought leaders, dispossessed or bitter political influencers. Around it are laid synchronized charges that, should they go off right, catalyze a larger, more enduring reaction. If you do it right, a small amount of social media TNT can create an impact several orders of magnitude larger than its input.

Enter the Trumpet

Central to understanding this is that the fissile material is not the general public, at least at first. As a marketer or disinfo agent you often work your way upward to get downward effects. Claire Wardle, drawing on the work of Whitney Phillips and others, expresses one version of this in the “trumpet of amplification”:

Claire Wardle’s “trumpet of amplification” diagram

Here the trumpet reflects a less direct strategy than Fyre’s, starting by influencing smaller, less influential communities, refining messages, then pushing them up the influence ladder. But many of the principles are the same. With a relatively small set of resources applied in a focused, time-compressed pattern you can jump-start a larger and more enduring reaction that gives the appearance of legitimacy — and may even be self-sustaining once manipulation stops. Maybe that appearance of legitimacy is applied to getting investors and festival attendees to part with their money. Or maybe it’s to create the appearance that there’s a “debate” about whether the humanitarian White Helmets are actually secret CIA assets:

Maybe the goal is disorientation. Maybe it’s buzz. Maybe it’s information — these techniques, of course, are also often used ethically by activists looking to call attention to a certain issue.

Why does this work? Well, part of it is the nature of the network. In theory the network aggregates the likes, dislikes, and interests of billions of individuals, and if some of those interests begin to align — shock at a recent news story, for example — then that story breaks through the noise and gets noticed. When this happens without coordination it’s often referred to as “organic” activity.

The dream of many early on was that such organic activity would help us discover things we might otherwise not. And it has absolutely done that — from Charlie Bit My Finger to tsunami live feeds, this sort of setup proved good at pushing certain types of content in front of us. And it worked in roughly this same sort of way — organic activity catches the eyes of influencers, who then spread it more broadly. People get the perfect viral dance video, learn of a recent earthquake, discover a new opinion piece that everyone is talking about.

But there are plenty of ways that marketers, activists, and propagandists can game this. Fyre used paid coordinated activity, but of course activists often use unpaid coordinated activity to push issues in front of people. They try to catch the attention of mid-level elites who get it in front of reporters, and so on. Marketers often just pay the influencers. Bad actors seed hyperpartisan or conspiracy-minded content in smaller communities, ping it around with bots and loyal foot soldiers, and build enough momentum around it that it escapes that community, giving reporters and others the appearance of an emerging trend or critique.

We tend to think of the activists as different from the marketers and the marketers as different from the bad actors, but there’s really no clear line. The disturbing fact is that it takes frightfully little coordinated action to catalyze these larger social reactions. And while it’s comforting to think that the flaw here is with the masses, collectively producing bizarre and delusional results, the weakness of the system more likely lies with a much smaller set of influencers, who can be specifically targeted, infiltrated, duped, or just plain bought.

Thinking about disinfo, attention, and influence in this way — not as mass delusion but as the hacking of specific parts of an attention and influence system — can give us better insight into how realities are spun up from nothing and ultimately help us find better, more targeted solutions. And for influencers — even those mid-level folks with ten to fifty thousand followers — it can help them come to terms with their crucial impact on the system, and understand the responsibilities that come with that.