Google Should Be a Librarian, not a Family Feud Contestant

I’ve been investigating Google snippets lately, based on some work that other people have done. These are the “cards” that pop up on top sometimes, giving the user what appears to be the “one true answer”.

What’s shocking to me is not that Google malfunctions in producing these, but how often it malfunctions, and how easy it is to find malfunctions. It’s like there is little to no quality control on the algorithm at all.

Other people have found dozens of these over the past couple days, but here’s a few I found goofing off yesterday while half watching Incorporated on Syfy.

Prodded with the right terms, Google will tell you that:

  • Sasha Obama was adopted
  • Lee Harvey Oswald didn’t shoot JFK
  • GMOs make you sick

Want some screenshots? Today’s your lucky day!



[Screenshot: Google snippet on GMOs and health]

Now I’m sure that Google will reply that the results are the results. And I’m sure that other people will ask why I’m being such a special snowflake and stamping my iron boot on the neck of results I don’t like. (Their mixed metaphor, not mine!)

(By the way, trivia fact: one technique of populist dictatorships is to portray the opposition as simultaneously weak and effete while being all-powerful and brutal. Just some facts for your next pub trivia night…)

The truth is, however, that I have a fairly simple definition of a fact, and I would hope that a company whose stated mission is “to organize the world’s information” would as well. For me a fact is:

  • something that is generally not disputed
  • by people in a position to know
  • who can be relied on to accurately tell the truth

And so, not to be too Enlightenment era about this, but all these snippets fail that test. And not just fail: they fail spectacularly.

The person writing about the GMO health risks has no science background and is considered such a sham by the scientific community that when he appeared on Dr. Oz scientists refused to share the stage with him, fearing even that would be too much normalization of him.

The site writing about Sasha and Malia being adopted, “America’s Freedom Fighters”, is a site specializing in fake news to such an extent that Google autosuggests “fake news” if you type its name into the search box.


And the JFK conspiracy theory is — well, a conspiracy theory. It’s literally the prototypical modern conspiracy theory. It’s the picture in the dictionary next to the word “conspiracy theory”.

The truth is that in cases like these Google often fails on all three counts:

  • They foreground information that is either disputed or for which the expert consensus is the exact opposite of what is claimed.
  • They choose sites and authors who are in no position to know more about a subject than the average person.
  • They choose people who often have real reasons to be untruthful — for example, right-wing blogs supported by fracking billionaires, white supremacist coverage of “black-on-white” crime, or critics of conventional medicine who sell naturopathic remedies on the same site.

Google Should Not Be Family Feud

I never really got the show Family Feud when I was a kid. That’s partially because my parents mostly put me on a diet of PBS, which made anything higher on the dial look weird. But it’s also because it just didn’t jibe with my sense of why we ask questions in the first place.

For those that haven’t seen Family Feud, here’s how it works. The host of Family Feud asks you a question, like “What builds your appetite?” You try to guess what your average American would answer.

You win if you guess something in the top five of what most people would say. So a lot of people say “smelling food” so that ranks in the list. No one says “not eating” so that doesn’t rank.

Watching this as a kid I’d always wonder, “Yes, but what actually builds your appetite the most?” Like, what’s the real answer? Don’t we care about that?

But Family Feud doesn’t care about that. It was never about what is true, it was about what people say.

I don’t think Google aspires to be a Family Feud game show team, but it’s sometimes hard to tell. For example, a principle of “organizing the world’s information” has to be separating reliable sources from unreliable ones and trying to provide answers that are true. But it’s clear that in many cases that’s not happening — otherwise quality control would be flagging these misfires and fixing them. The snippets, which create the impression of a definitive answer while feeding people bad science, conspiracy, and hate speech, make matters worse.

It should not be that hard to select good sources of information. For example, there is an excellent National Academies report on genetically engineered crops that was written by a mix of corporate and anti-corporate scientists and policy analysts. Here’s the conclusion of that study on health effects:


On the basis of its detailed examination of comparisons between currently commercialized GE and non-GE foods in compositional analysis, acute and chronic animal-toxicity tests, long-term data on health of livestock fed GE foods, and epidemiological data, the committee concluded that no differences have been found that implicate a higher risk to human health safety from these GE foods than from their non-GE counterparts. The committee states this finding very carefully, acknowledging that any new food—GE or non-GE—may have some subtle favorable or adverse health effects that are not detected even with careful scrutiny and that health effects can develop over time.

That’s actually what science looks and sounds like: having reviewed the available data, we find no evidence of harm, but because impacts can take time to develop, adverse effects may yet appear.

If you went to a competent health sciences librarian and asked for material on this, this is what you’d get back. This report is one of the definitive statements to date on GMO safety. Because the librarian’s job is not to play Family Feud, but to get you the best information.

Google instead gives you the blog of a man with no medical or scientific training who claims GMOs cause infertility, accelerated aging, and organ damage. But “survey says!” that’s true, so it’s all good.

The world right now is in a post-truth crisis that threatens to have truly earth-shattering impacts. What Google returns on a search result can truly change the fate of the entire world. What Google returns can literally lead to the end of humanity as we know it, through climate change, nuclear war, or disease. Not immediately, but as it shapes public perception one result at a time.

I’m not asking Google to choose sides. I’m not asking them to put a finger on the scale for the answers I’d like to see. I’m asking them to emulate science in designing a process that privileges returning good information over bad. I’m asking that they take their place as a librarian of knowledge, rather than a Family Feud game show contestant. It seems a reasonable request.

Doubt Versus a Bayesian Outlook

There are many causes of the recent assault on truth, and most have very little to do with technology. I’d point people to the excellent book Merchants of Doubt, which details the well-funded and well-planned corporate assault on science that began as early as the 1950s around the question of whether cigarettes cause cancer. Big Tobacco had a simple but profound realization 50 years ago — they didn’t have to refute the conclusion of the science that clearly, even back then, pointed to tobacco as a primary cause of lung cancer. They just had to introduce doubt.

The neat thing about doubt is it makes you look and feel like a pretty deep thinker. America loves doubt. Every four years we run an election for 18 months and then treat the people who haven’t decided until the last week of the election as if they were some sort of free-thinkers rather than the most politically ignorant population in the country. The mythology of doubt is strong.

Reporter:  “So what do you think about the election, Bob?”

Independent: “Well, I’m not sure. Clinton has some good points, but Trump seems like a strong leader. I like to take my time thinking about these things.”

Reporter: “Well, it’s quite the important decision. Back to you, Maria!”

The mythology of doubt is that we have things which need to be “proven”, and until they are proven we are in a state of doubt: we really don’t know what to believe. Who can say?

But doubt is not actually what you want. Doubt is just certainty from another direction, and these two orientations — doubt and certainty — form a binary worldview that promotes polarization, narrow thinking, and poor policy outcomes.

What you really want is not doubt. What you want, for lack of a better word, is to be Bayesian in your outlook. The famous statistician and epidemiologist Jerome Cornfield, responsible for much of the revival of Bayesian approaches in epidemiology in the 1960s and beyond, used to talk about the “Bayesian Outlook”.

The Bayesian Outlook is at its heart simple, but it’s also profound. Here’s Cornfield:

The Bayesian outlook can be summarized in a single sentence: any inferential or decision process that does not follow from some likelihood function and some set of priors has objectively verifiable deficiencies. The application of this outlook is a largely extra-mathematical task, requiring the selection of likelihoods and priors that are appropriate to given problem situations, with the determination of what is appropriate requiring, in Fisher’s words (in another context), ‘responsible and independent thinkers applying their minds and imaginations to the detailed interpretation of verifiable observations.’ (Cornfield, 1969)

There’s a field of Bayesian statistics that is fairly developed and beyond the scope of this post. But as Cornfield notes, Bayesian approaches are not really about the math — they are about a way of looking at the world. And given that, I think it’s possible to talk about having a “Bayesian outlook” when it comes to fact-checking.

What does this mean in practice? As an example, I use this tweet occasionally in my presentations:

Is the part about the Nazis true? It’s either true or not, of course. But we can only view that truth through an array of probability.

When I first see something like this, my immediate reaction is it has a good chance of being true. Why?

Well, there are priors. I know Schumer is Jewish, of European descent. And I know that the Nazis and their collaborators killed a substantial portion of that population, maybe about 40%. I also know you have, by definition, eight great-grandparents. The chance that at least one of those eight died in WWII at the hands of Nazis or Nazi collaborators was therefore reasonably high before this tweet ever reached me.

We call these the priors: they exist before this tweet makes its way to me. One key component of Bayesian analysis is that we begin with a set of priors, and pay careful attention to the selection of those priors before assimilating new information.
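To put rough numbers on that prior (the per-person probability below is invented, purely to show the shape of the arithmetic): if each of the eight great-grandparents independently had some chance q of being killed, the chance that at least one was is 1 - (1 - q)**8.

```python
# Illustrative only: q is an invented per-person probability, not a real estimate.
def p_at_least_one(q: float, n: int = 8) -> float:
    """P(at least one of n independent events occurs) = 1 - (1 - q)**n."""
    return 1 - (1 - q) ** n

# Even a modest per-person chance compounds quickly across eight people.
print(round(p_at_least_one(0.2), 3))  # 0.832
print(round(p_at_least_one(0.4), 3))  # 0.983
```

The independence assumption is obviously wrong for family members who lived in the same place, but the point stands: the prior here starts high, not low.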

Now as to the new information: the fact that someone tweeted this claim makes the claim more probable, to some extent. This is a specific claim. It came to me through a feed where I weed out the worst misinformation offenders pretty regularly. The second statement, about Trump’s father, is true.

It seems plausible. But I follow my prime habit with social media: check your emotions. Never reflexively tweet something factual that feels “perfect”.

A quick search shows there’s a 1998 article from the New York Times that says that “aides say” seven of nine of his great-grandmother’s children were killed by Nazis. That’s good, and raises the likelihood it’s true. The old priors plus this new information become our new priors. We’ve moved from plausible to probable.
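The update step itself can be sketched in odds form (the numbers here are invented to show the mechanics, not real estimates): posterior odds equal prior odds times a likelihood ratio, i.e. how much more likely the 1998 Times report would be if the claim were true than if it were false.

```python
# Hedged sketch of a Bayesian update in odds form; all numbers are invented.
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds: float) -> float:
    return odds / (1 + odds)

prior_odds = 1.0    # "plausible": even odds, P = 0.5
lr_nyt_1998 = 4.0   # assume the Times report is 4x likelier if the claim is true
posterior_odds = update_odds(prior_odds, lr_nyt_1998)
print(round(odds_to_prob(posterior_odds), 2))  # 0.8, "probable"
```

Each new piece of evidence multiplies in the same way, which is why the verdict below keeps moving as the search continues.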

But I want to hear it from Chuck Schumer’s mouth, not some unnamed aides responding to a campaign attack in 1998.

And when you start to try to find Schumer saying it, things get less clear. There is Holocaust remembrance event after Holocaust remembrance event that Schumer has attended — and yet this fact never makes the papers or his speeches:


Absence of evidence is not strong evidence of absence. But it is evidence, especially as it starts to pile up. With each failed attempt to find support for this, my disposition towards this fact inches down, moving from likely and sinking back towards plausible.

Then, at some point, I change my search terms. One of the unreliable sites on this question — a forum post — mentions a “porch” where his great-grandmother was killed. That’s a specific detail that is likely to get me closer to the event. So I throw it in and look at what comes up:


And when we go to that top result we find testimony from Schumer at a congressional hearing on reparations for Holocaust survivors:

Senator Schumer. Now I am going to give my opening statement, and first I want to start by thanking our Chairman, Chairman Leahy, for letting me have the gavel today in order to explore this exceptionally important topic: how to resolve what I hope, what we all hope are among the last remaining reparation claims stemming from the murder of 6 million Jews during the Holocaust. We all know the horror of the Holocaust. My great-grandmother, who was the matriarch of her family, was told to leave her home. She and her family had gathered on the front porch. They refused to leave, and they just machine-gunned all of them down in 1941. So, obviously, I have personal experience with the horrors of the Holocaust, but the horrors are just awful.

Sometimes we refer to the horror as “unspeakable.” But unspeakable is exactly what the Holocaust must never become. Those who perpetrated it, those who benefited from it want us not to speak. But we are here to speak and to have this hearing.

Now that’s a good source — official testimony from Schumer himself. From a written statement. The weight of this evidence outweighs everything prior, but is still added to it. It’s not just that Schumer is telling a story here, but that he is telling a story about an event that was plausible to begin with.

Is it bulletproof? No. Schumer could, of course, be lying, or exaggerating. He might have misheard or misremembered the story as it was told to him. But right now, the best information we have is this testimony plus the remarks of others (such as aides) over a 20-year period. We have enough here, in the absence of other evidence, to call this claim true.

But unlike “doubt” or “certainty”, which demand that anything less than perfect knowledge one way or the other leave us in a useless middle ground, this process gives us, with each step, better, more informed priors, even as our verdict on what is true shifts. By the end we call this claim true, because overturning what we know here would require strong evidence that currently doesn’t seem to exist. But we’d be excited to get new information, even if it contradicted this conclusion, because it would build a better set of priors, both for this claim and for related ones.

This post is pretty nascent stuff — and maybe I’ve bitten off a bit more than I can chew here. But I suppose what I’m saying is that fact-checking a complex claim looks a bit like this:


We’ll get this together in a better presented post at some later time. But I do think one of the primary goals of fact-checking is to get students to think about truth in more nuanced ways, and this is the sort of direction I see that going, instead of the cynical skepticism we often peddle.




How “News Literacy” Gets the Web Wrong

I have a simple web literacy model. When confronted with a dubious claim:

  • Check for previous fact-checking work
  • Go upstream to the source
  • Read laterally

That’s it. There’s a couple admonitions in there to check your emotions and think recursively, but these three things — check previous work,  go upstream, read laterally — are the core process.

We call these things strategies. They are generally usable intermediate goals for the fact-checker, often executed in sequence: if one stops panning out, you go on to the next one.

The reason we present these in sequence in this way is we don’t just want to get students to the truth — we want to get them there as quickly as possible. The three-step process comes from the experience of seeing both myself and others get pulled into a lot of wasteful work — fact-checking claims that have already been extensively fact-checked, investigating meaningless intermediate sources, and wasting time analyzing things from a site that later turns out to be a known hoax site or conspiracy theory site.

To give an example, here’s a story from Daily Kos:


And here’s what students will say, when confronted with this after years of “close reading” training:

  • Who is this Hunter guy?
  • Hunter is a pseudonym, which is bad. How do we know who he really is? Suspicious!
  • What is this Daily Kos site?
  • Who owns Daily Kos? Liberals? Really?
  • There’s a lot of comments which is good.
  • The spelling and punctuation on this page is good, which makes it credible.
  • The site looks well designed.
  • The site is very orange.
  • There’s anti-Trump language on the page which makes it not credible and slanted.
  • The picture here isn’t of the Russians, it doesn’t match, which is fishy.

They might even go to Hunter’s about page and find that the most recent story he has recommended has, well, a very anti-Trump spin on it:


They can spend hours on this, going to the site’s about page, reading up on Hunter, looking at the past stories Hunter wrote. And in my experience, students, when they do this, are under the impression that this time and depth spent here shows real care and thought.

Except it doesn’t. Because if your real goal is to find out if this is true, none of this matters.

What matters is this:


What you see above, in the first paragraph of the story, is a link to the Wall Street Journal, the source of the claim. This “Hunter” might be a Democrat with a pseudonym invoking an 80s police procedural series, but he follows good, honest web practice. He sources his fact using something called “hypertext”. It’s a technology we use to connect related documents together on the web.

And once we see that — a way to get closer to the actual source of the fact — all those questions about who Hunter is and what his motives are and how well he spells things on this very orange-looking site don’t matter, because — for the purposes of fact-checking — we don’t give a crap. We’re going to move up and put our focus on the Wall Street Journal, not Daily Kos.

(Disclosure — I used to write a bit on Daily Kos, I know certain front-pagers there, and yes I know that Hunter’s name is not really a reference to the uniquely forgettable Fred Dryer television series).

Once we get to the Wall Street Journal, we’re not done. We want to make sure that the Wall Street Journal‘s report on this is not coming from somewhere else as well. But when we get to the Wall Street Journal we find this is original reporting from the Journal itself:

The younger Trump was likely paid at least $50,000 for his Paris appearance by the Center of Political and Foreign Affairs. The Trump Organization didn’t dispute that amount when asked about it by The Wall Street Journal.

“Donald Trump Jr. has been participating in business-related speaking engagements for over a decade—discussing a range of topics including sharing his entrepreneurial experiences and offering career specific advice,” said Amanda Miller, the company’s vice president for marketing.

So going upstream comes to an end for us, and we move on to our next strategy — reading laterally about the site. Now in this case we all might skip that — it is the Wall Street Journal we have here — but the truth is that students might not know whether to trust the WSJ. So we execute a trusty domain search: ‘ -site:wsj.com’, which tells Google to get all the pages that talk about wsj.com but aren’t from that site itself:


And when we do that we see that there is a Wikipedia page on this site that will let us know that the WSJ is the largest newspaper in America by circulation and has won 39 Pulitzer prizes.

We do note, looking at the WSJ article, that Hunter has tweaked the headline here a bit. The WSJ says that Trump Jr. was likely paid $50,000, whereas Hunter’s headline is more strident about that claim. But apart from that the story checks out.

Do we trust this WSJ article 100%? No, of course not. But we trust it enough. It’s tweetworthy. And after we’ve confirmed that fact we can go back down to the Daily Kos page and see if that article by Hunter has any useful additional analysis. Over time, if you keep fact-checking Hunter’s stories, and they keep checking out, you might start considering him a reliable tertiary source.

If you use this process, you’ll notice a couple of things. The first one is that it’s pretty quick — the sort of thing that you can execute in the 90 seconds before you decide to retweet something.

But there’s another piece here too — rather than the fuzzy analysis of a single story from a single source you have here a series of clearly defined subgoals with defined exit points: check for previous work until there is no more previous work, get as close to the original as you can until you can get no closer, and read laterally until you understand the source. These goals are executed in an order that resolves easy questions quickly and hard questions efficiently.
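Those subgoals and exit points can be sketched as a simple loop. A hedged illustration (the strategy functions below are hypothetical stand-ins for the human steps described above, not a real fact-checking API):

```python
# Hedged sketch: the strategy functions are hypothetical stand-ins for the
# human steps described in the post, with hard-coded illustrative outcomes.
from typing import Callable, List, Optional

def check_previous_work(claim: str) -> Optional[str]:
    # e.g. search Snopes or Politifact; None means no previous work found
    return None

def go_upstream(claim: str) -> Optional[str]:
    # e.g. follow the story's link back to its original source
    return "confirmed by original source"

def read_laterally(claim: str) -> Optional[str]:
    # e.g. see what the rest of the web says about the source
    return None

def fact_check(claim: str,
               strategies: List[Callable[[str], Optional[str]]]) -> str:
    # Each strategy has a defined exit point: run them in order and
    # stop as soon as one pans out.
    for strategy in strategies:
        verdict = strategy(claim)
        if verdict is not None:
            return verdict
    return "unresolved"

print(fact_check("Trump Jr. was paid $50,000",
                 [check_previous_work, go_upstream, read_laterally]))
# prints "confirmed by original source"
```

The design point is the early exit: easy questions terminate at step one, and only the genuinely hard ones cost you all three moves.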

That’s important, because if you can’t get it down to something quick and directed then students just think endlessly about what’s in front of them.  Or worse, they give up. They need intermediate goals, not checklists.

Fact-Checking the Mailman

Recently the Digital Polarization Initiative has been getting a lot of press, and as a result a lot of people have been sending me alternative approaches to fake news.

Most aren’t good. I’ve already talked about the reasons why CRAAP is ineffective. I’ve been more hesitant to talk about a popular program from the News Literacy Project called Checkology, which is less obviously bad. But in recent days I’ve seen more and more people suggesting that Checkology might be a solution to our current problem.

Unfortunately, news literacy isn’t the big problem here. Web literacy is. And the Checkology curriculum doesn’t really address this.

As an example, here’s the Checkology checklist:

1. Gauge your emotional reaction: Is it strong? Are you angry? Are you intensely hoping that the information turns out to be true? False?

2. Reflect on how you encountered this. Was it promoted on a website? Did it show up in a social media feed? Was it sent to you by someone you know?

3. Consider the headline or main message:

a. Does it use excessive punctuation(!!) or ALL CAPS for emphasis?

b. Does it make a claim about containing a secret or telling you something that “the media” doesn’t want you to know?

c. Don’t stop at the headline! Keep exploring.

4. Is this information designed for easy sharing, like a meme?

5. Consider the source of the information:

a. Is it a well-known source?

b. Is there a byline (an author’s name) attached to this piece?

c. Go to the website’s “About” section: Does the site describe itself as a “fantasy news” or “satirical news” site?

d. Does the person or organization that produced the information have any editorial standards?

e. Does the “contact us” section include an email address that matches the domain (not a Gmail or Yahoo email address)?

f. Does a quick search for the name of the website raise any suspicions?

6. Does the example you’re evaluating have a current date on it?

7. Does the example cite a variety of sources, including official and expert sources? Does the information this example provides appear in reports from (other) news outlets?

8. Does the example hyperlink to other quality sources? In other words, they haven’t been altered or taken from another context?

9. Can you confirm, using a reverse image search, that any images in your example are authentic (in other words, sources that haven’t been altered or taken from another context)?

10. If you searched for this example on a fact-checking site such as Snopes or PolitiFact, is there a fact-check that labels it as less than true?

Now, there’s some good things in here. I think their starting point — check your emotional reaction — is quite good, and it’s similar to some advice I use myself. Thinking about editorial standards is good. Reverse image search is a helpful and cool tool. Looking for reports from other sources is good.

But if you include subquestions, there are twenty-three steps to Checkology’s list and they are all going to give me conflicting information of relatively minor importance. What if there are no spelling errors but there is also no current date? What if the about page says the site is a premier news source, but it has no links back to original sources? What if it cites a variety of things but doesn’t hyperlink?

Even more disturbingly, this approach to fact-checking keeps me on the original page for ages. What if I get all the way through the quarter of an hour that the first twenty-two questions take, only to find out on question twenty-three that Snopes has looked at this and it’s complete trash?

This isn’t hypothetical. Given the current reaction time of Snopes to much of the viral stuff on the web you could probably give Student A this long list and Student B a piece of paper that says “Check Snopes First” and the Snopes-user would outperform the other student every time.

And even if there is no Snopes article on the particular issue you are looking at, what good does it do to look this deeply at the article in front of you if it is not the source? Consider our Hunter article:


Let’s answer the questions using Checkology:

1. Gauge your emotional reaction:

Is it strong?  Yes.

Are you angry? Yes.

Are you intensely hoping that the information turns out to be true? Yes.

False? No.

2. Reflect on how you encountered this.

Was it promoted on a website? Facebook

Did it show up in a social media feed? Yes.

Was it sent to you by someone you know? Yes

3. Consider the headline or main message:

a. Does it use excessive punctuation(!!) or ALL CAPS for emphasis? No.

b. Does it make a claim about containing a secret or telling you something that “the media” doesn’t want you to know? No.

c. Don’t stop at the headline! Keep exploring. Ok.

4. Is this information designed for easy sharing, like a meme? No.

5. Consider the source of the information:

a. Is it a well-known source? Maybe?

b. Is there a byline (an author’s name) attached to this piece? Kind of but fake.

c. Go to the website’s “About” section: There is no About section.

Does the site describe itself as a “fantasy news” or “satirical news” site? There is no About section.

d. Does the person or organization that produced the information have any editorial standards? Not sure how I find this?

e. Does the “contact us” section include an email address that matches the domain (not a Gmail or Yahoo email address)? Looked for the contact us page for a couple minutes but could not find it.

f. Does a quick search for the name of the website raise any suspicions?  Yes! It is listed on a site called “fake news checker”!

6. Does the example you’re evaluating have a current date on it? Yes

7. Does the example cite a variety of sources, including official and expert sources? No

Does the information this example provides appear in reports from (other) news outlets? Yes

8. Does the example hyperlink to other quality sources? Yes

In other words, they haven’t been altered or taken from another context? No

9. Can you confirm, using a reverse image search, that any images in your example are authentic (in other words, sources that haven’t been altered or taken from another context)? The image doesn’t match, it’s old!

10. If you searched for this example on a fact-checking site such as Snopes or PolitiFact, is there a fact-check that labels it as less than true? No results.

Ok. So now we’ve spent ten to fifteen minutes on this article, looking for dates and email addresses and contact emails. Now what? I have no idea. The site appears on a list of fake news sites and doesn’t have a contact page. But it does have a date on the story and Politifact and Snopes don’t have stories on it. The image for the article is an old image (fake!).

And conversely, the article links to the Wall Street Journal as the source of the claim.


Are you starting to get the feeling we just spent a whole lot of time on a checklist that we are about to crumple up and throw in the trash?

To put this in perspective, you got a dubious letter and just spent 20 minutes fact-checking the mailman. And then you actually opened the letter and found it was a signed letter from your Mom.

“Ah,” you say, “but the mailman is a Republican!”

How does this make any sense?

Staying On the Page

If you want to read how badly this fails, you can look at some of the stories about the program as it is used in the classroom. Here’s a snippet about some folks using Checkology:

The students’ first test comes from Facebook. A post claims that more than a dozen people died after receiving the flu vaccine in Italy and that the CDC is now telling people not to get a flu shot. [One student] is torn.

“I mean, I’ve heard many rumors that the flu shot’s bad for you,” [she] says. But instinct tells her the story’s wrong. “It just doesn’t look like a reliable source. It looks like this is off Facebook and someone shared it.”

[She] labels the story “fiction.” And she’s right.

This drives me nuts. It worked out this time, of course, because the story is false. But relying on your intuition like this, based on no real knowledge other than how a claim looks, is not what we should be encouraging here.

Worse, you see the biggest failing here — in this curriculum based around asking questions of a text, the student is not actually doing anything other than asking questions. They are looking at a text and seeing what feelings come from it after asking the questions.

Here’s another student on the same flu story:

Her classmate takes a different path to the same answer. When he’s not sure of a story, he says, he now checks the comments section to see if a previous reader has already done the research.

“Because they usually figure it out,” [he] says. And, indeed, he wasn’t the first to question the vaccine story’s veracity. “Like one comment was, ‘I just fact-checked this, and it doesn’t appear to be true. Where else do you see this to be true?’ “

I’m not attacking the student here — they are doing exactly what the curriculum told them to do: looking at a page and asking questions about it. But you can see here that we just had a student use the comments on an article to fact-check that article. Comments!

Comments can be useful, of course. When the trail has gone cold tracing a story to its source, often it’s a comment from someone that points the way to the original story. Sometimes a person points to an article on Snopes or Politifact.

But to get to the truth quickly, comments are usually the worst place to look. At this point, almost every anti-Trump story online has someone under it calling it “fake news”. What do you do with that? How does it help?

Again, this is not what a web literate person does when they hear that the flu vaccine may be bad. A web literate person finds the original source of the claim and then asks the web what it knows about the source. All this other stuff is mostly beside the point.

More than Fiction, Less than Fact

Which brings me to my second (third? tenth?) pet peeve here: there’s a muddling here of the issues of claim, story, and source.

Take that claim on Facebook that over a dozen deaths were caused by the flu vaccine and the CDC banned it. “Fiction,” said the student.


And it’s true that the source she was reading and the story that she was reading were misinformation. But is the story complete fiction? Let’s do a search:


I’m guessing the student read the Natural News story down towards the bottom — Natural News is one of the big suppliers of anti-vaxx propaganda on Facebook. But the story here is not cut from whole cloth. Just reading the two blurbs at the top of the search results I get a pretty good idea what happened. The Wall Street Journal reports that on December 1 Italy suspended, pending an investigation, the use of two batches of the flu vaccine. This was apparently due to 12 people dying shortly after receiving it.

On December 3rd, the BBC reported that Italy had completed its investigation and cleared the vaccine as safe. A bit of domain knowledge tells me that what probably happened is what often happens with these things — flu vaccine is administered to a population that is relatively old and has a higher chance of dying due to any cause. Eventually those sort of probabilities produce a bunch of correlations with no causation.

By the way, this ability to read a page of results and form a hypothesis about the shape of a story based on a quick scan of all the information there — dates, URLs, blurbs, directory structure — that’s what mastery looks like, and that’s what you want your employees and citizens to be able to do, not count spelling errors.

So is this vaccine story “fiction”? I suppose so. It’s not true that the vaccine killed these people, and the CDC certainly didn’t cancel the vaccine. If we were doing a Snopes ruling on this I’d go with a straight-up “False” as the ruling.

But I’d also note there was a brief panic over a series of what we now know to be unrelated deaths, followed by an investigation that ruled the vaccines safe.

You are not going to get that if you stare at a page looking for markers that the story is true or false. You are only going to get that if you follow the claim upstream.

The Checkology list declares that students should “use the questions below to assess the likelihood that a piece of information is fake news.” In that instruction you have a dangerous conflation of source and claim, which is only furthered by confusing questions like “Does the example have a date on it?”

News as source, and news as claim. It’s an epistemological hole that we put our students in, and to help them out of it we hand them a shovel.

The Ephemeral Stream

How do programs like the News Literacy Project’s Checkology get these issues wrong? The intentions are good, clearly. And there is a ton of talent working on it that’s had a lot of time to get it right:

The News Literacy Project was founded nearly nine years ago by a Pulitzer prize-winning investigative reporter with the Los Angeles Times, Alan C. Miller. The group and its mission have been endorsed by 33 “partner” news organizations, including The Associated Press, The New York Times and NPR.


Fundamentally, these efforts miss because what’s needed is not an understanding of news but of the web. 

As just one example, the twenty-plus questions that students are asked to ask a document seem to assume that

  1. Sources are scarce and we must absolutely figure out this source instead of ditching it for a better one.
  2. Asking the web what it knows about a source is a last resort, after reading the about page, counting spelling errors, tallying punctuation, and figuring whether an author’s email address looks a bit fishy.

The web is not print, or a walled garden of subscription content. Information on the web is abundant. And yet the strategies we see here all telegraph scarcity, as if the website you are looking at were a book you were handed in the middle of a desert to decipher.

The approach also does not come to terms with the Stream — that constant flow of reshareable information we confront each morning in Twitter or Facebook. You don’t have fifteen minutes to go through a checklist in the Stream. You have 90 seconds. And your question isn’t “Should I subscribe to Natural News?” — your question is “Did a dozen people die of flu vaccine?” Whether news folks want to admit it or not, the stream tends to erode brand relationships with providers in favor of a stream of undifferentiated headlines.

Above all, the World Wide Web is a web, and the way to establish authority and truth on the web is to use the web-like properties of it. Those include looking for and following relevant links as well as learning how to navigate tools like Google which use that web to index and rank relevant pages. And they include smaller things as well, like understanding the ways in which platforms like Twitter and Facebook signal authority and identity.

In short, we need a web literacy that starts with the web and the tools it provides to track a claim to ground. As we can see from the confusing and confused reactions of students in the Checkology program, that’s not happening now, and “news literacy” isn’t going to fix that.

If you’re interested in alternative, web-native approaches to news literacy, you can try my new, completely free and open-source book Web Literacy for Student Fact-Checkers.

You should also read Sam Wineburg and Sarah McGrew’s Why Students Can’t Google Their Way to Truth, and the results of their Stanford study which showed that the major deficits of students with regard to news analysis were issues of web literacy and use.



Searching News Program Audio

Maha Bali has a great post on getting to the source of faked Trump video. She does a great job narrating her process, along with reflecting on it, so I’ll just suggest you go to her blog and read it now. It’s well worth your time. In particular, two things jump out at me — the domain knowledge that red flags the content and the realization that to debunk the video requires finding the original video from which it was made.

Go read it now.

Now that you’re back (you did read it, right?) it reminds me of a technique many people may not be aware of — you can search dialogue on the major U.S. news programs via the Internet Archive.

So for example, once you get to the point where you know you’re looking for Trump saying the phrase “Sea of Love” you can head over to the Internet Archive TV News Archive page for “tremendous sea of love.” And right there, the second result, is the video that has been altered, along with the ABC chyron:



There’s also a specialized Trump collection on the site if you just want to search the clips in which Donald Trump plays a part.

We can use this for other things as well. For example, we might want to fact-check whether Mike Pence agreed with the “Muslim Ban” during the later part of the campaign. So you can check that by going into the Trump archive and typing “pence muslim ban”.


When you click on that, you’ll see Mike Pence agreeing directly with that particular language.

Why is this important? So much of what our leaders communicate is now over the air, with very little written record. Resources on sites like these (what do we call these? The “gray web”?) are not indexed by Google, but are freely accessible and provide irreplaceable functionality for fact-checking civic discourse. If I spent some of my time as a student many years ago learning how to use specialized library databases, surely we can have our students spend some time learning tools like these, right?
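Incidentally, this kind of phrase search can be scripted. Here’s a minimal Python sketch that builds a quoted-phrase search URL for the TV News Archive. Two caveats: the `/details/tv?q=` URL pattern is my assumption based on the archive’s public search page, and `tv_archive_search_url` is just an illustrative name:

```python
from urllib.parse import quote_plus

def tv_archive_search_url(phrase):
    """Build a quoted-phrase search URL for the Internet Archive's
    TV News collection. The /details/tv?q= pattern is an assumption
    based on the archive's public search interface."""
    return "https://archive.org/details/tv?q=" + quote_plus('"%s"' % phrase)

print(tv_archive_search_url("tremendous sea of love"))
# https://archive.org/details/tv?q=%22tremendous+sea+of+love%22
```

Wrapping the phrase in quotation marks before encoding is what forces an exact-phrase match rather than a bag-of-words search.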




Against Expressive Social Media

I’m sitting here starting an argument with you and you are starting an argument with me.

I am against expressive social media, I say. I think it is making us very dumb and we should use other forms of social media to teach kids.

“But, Mike,” you may be thinking, “why are you so binary, why not BOTH?”

“But, Mike,” you may be thinking, “you must respect the students and their expressive urge!”

“But, Mike,” you are thinking, “is this really an extended subtweet of something I said? Is it against me? It’s against me, isn’t it?”

Or perhaps you’re thinking, damn straight, it’s about time someone spoke against expressive social media. Sock it to ’em, Mike!

If you’re really enlightened maybe your opinion is that it would be silly to be for or against the article at this point. Let’s wait until the terms in the headline are defined. Then, after that paragraph, in the milliseconds after the definition — then I’ll decide for or against it.


I’m Sick of this Crap and I Want It to End

We do this all day on Facebook and Twitter and blogs. On Medium, or forums, or Slack. We argue or bond with others that share our opinions. We see an open box on the internet and type into it What We Think. Maybe we soften it. Or maybe, as is the case here, we say screw it, and just try to anger people, like I am doing now. But underneath it all is the idea that you try to convince me of something and I try to convince you and somehow down the line we end up smarter.

I could add caveats here about the cases where this works, but I don’t want to give you an out right now. The fact is it mostly doesn’t work. Most of the work here in a blog does not make me smarter. It makes me better at presenting things I’ve learned off-blog. It documents what I’ve learned, maybe, which is useful later. It influences you. But to the extent I am sitting here trying to persuade you of something, learning time is over.

There Was a Vision Once and This Is Not It

When I was in college I had decided to never become my Dad, who was an early programmer for Digital Equipment Corporation.

I was good at computer programming, and I had enjoyed it as a kid. I was in my first chat rooms in the late 1970s. My invites for my 5th birthday were printed out on that old green and white striped paper using a loop where my dad fed an array of names into a MUMPS program to make 20 personal invites (we invited everyone in the class). That was 1975.


But by college computing seemed boring. I was interested in music and art and philosophy. I dropped out of college and hitch-hiked and played guitar, under an illusion I was Bob Dylan. I thought big thoughts and read a lot of Joseph Campbell and Henry Miller in a variety of makeshift lodgings.


Yeah, it’s actually me. I really did want to be Dylan.

My dad worried about me, as one would about a son who is working two days a week as a janitor and sleeping outside on Cape Cod while writing crappy Henry Miller knockoff stories. He had this feeling that this might not be a sustainable way of living. When I moved back home and thought about going back to college in late 1990 he tried to talk me into looking into programming. I still wasn’t interested.

One day there was a photocopy on the kitchen counter of an article from a magazine. Just out on an otherwise empty counter. I looked at it.

“As We May Think?” I asked.

“Oh, yeah,” my dad said, as if the article had just been left there accidentally. “You might really like that. You should read it.”

It wasn’t very subtle.

But I did read it, and it opened my eyes. In the article, the author, writing in 1945, detailed things that looked like computers from Terry Gilliam’s Brazil, but they were used not as glorified walkie-talkies, printing presses, or accounting machines, but as tools to truly augment thought.

Reading it didn’t change me overnight, but it opened a door for me. It made me realize that properly conceived computers were philosophical, firmly rooted in the humanities that I loved. I got on the Internet. I started playing with hypertext. When Mosaic came out, I hopped on board the web. And I dreamed big dreams.

Dreams of what? Dreams of the Memex, of course, that thought experiment of that 1945 author, Vannevar Bush:


The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English long bow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex.

First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item.

When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.

I loved this vision. A person pulling these various threads together, like Campbell pulling these various religions together or Miller jump-cutting through related scenes in 1930s Paris to form a literary montage.

I bought into the early hopes that the World Wide Web was really going to be a World Wide Memex, where people used it like this, as a tool for thought. And at the core of that vision was that idea that people would be using the web to try to construct and share understanding, not to argue about it.

Usenet Killed the Hypertext Star

Of course, that’s not how things turned out. The hyperlinked vision of the web was replaced by Usenet plus surveillance. Share and argue, argue and share. But now with personalized ads for things you just bought last week. (Amazon: “This guy bought a Chromebook, he must really like Chromebooks. Show him some more Chromebooks.”)

It’s a step back, but no one seems to notice. Or care.

In my more pessimistic moments, I come to think that the thing that poor Vannevar Bush didn’t get, and that Doug Engelbart didn’t get, and that Alan Kay didn’t get is that people really like the buzz of getting beliefs confirmed. And they like the buzz of getting angry at people that are too stupid to get what they already know. Confirming beliefs makes you feel smart and arguing with people makes you feel smarter than someone else. Both allow you to snack on dopamine throughout the day, and if you ever need a full meal you can always jump on Reddit.

Buzz, buzz, buzz.

At Some Point the Candy Stops

I’m rambling here because I’m sick of making sense, I guess. But the thing is we had and have technologies that look like that dream of the old web, where an individual tries to construct knowledge and prod it. To test their knowledge. To try to broaden their understanding of both their knowledge and the limits of their knowledge by attempting to explain things from a more neutral point of view. I’m a broken record on this, but wiki is a way to do this. There are other ways too — things like annotation tools have some promise, if they become more than glorified comments.

But none of these will give you the buzz. So we’re a bit stuck, like sugar addicts or caffeine junkies trying to go straight. My wife Nicole teaches K-12 art, and has taught in K-5 classes that are used to getting candy as a reward for very basic good behavior. That’s a tough room to walk into, and that’s kind of the room we’re in. Do we give students more candy, or do we find a different way?

When I started this blog a decade ago, my first post was this:

We need to stop asking how we can communicate with our college students in their idiom, which is a valid question, but ultimately a marketing and customer service issue.

We need to start asking the real question, which is how do we teach our students to collaborate and communicate in ways fit for the agile projects the future requires.

I meant agile here in its normal lay sense: that we need to be fast and flexible in our thinking and our doing, and we need to provide tools that support that.

I’m not sure that’s what we’re doing, though. I’m not saying that classes shouldn’t be fun. But have we truly thought about the type of collaboration that the future needs and designed education to fit that? Or are we chasing engagement without concern for the needs of our students and broader society? Are we truly developing new ways of working together with one another? Or are we teaching old ways with a better looking site theme? Are we opening their minds or closing them? Are we building a life of the ego or a life of the mind?

I’ll apologize for this post in a couple days, probably. There are fifteen unfinished posts in my queue that express this better than this one does, but for some reason the dam just broke today.

The Lead-Crime Hypothesis and a Gripe About Mobile

I’ve generally kept my advocacy for the Lead-Crime Hypothesis off this blog. This is a blog about web-enabled education, after all. But today I can probably get away with it because there’s a web literacy connection. Seriously, I promise.

For those who don’t know the lead-crime hypothesis, it goes like this: the massive crime wave of the late 70s to early 1990s in the U.S. — the crime wave that gave us our politics as we have them now, since it was seen as a failure of liberalism — that crime wave was caused primarily by youth exposure to lead, a result of the most massive public poisoning in the history of the world: the sale and use of leaded gasoline.

In this theory, early lead poisoning, especially in urban areas, affected the mental development of many children, making them more prone to violence and a host of other cognitive and behavioral issues. Roll those behaviors forward 18 years or so, and that early lead exposure becomes a crime wave.

You see why I don’t mention this on the blog much, even though I’ve been annoying friends with it for years. It sounds pretty tin-foil hattish, even though it’s a pretty solid hypothesis.

Anyway, I wanted to make a point about mobile learning, and today I get to do it by talking about lead.

So here’s the thing: I’m reading through an old New Scientist article from 1971 for another purpose (history of computing in education) when I see an article adjacent to the one I’m reading on lead poisoning.

I can’t resist. In it is this paragraph:


From “Is Lead Blowing Our Minds?”, New Scientist, May 27, 1971.

There’s a whole host of questions that occur to me reading this. The first question is how that Manchester average child exposure compares to Flint, Michigan. I open a new tab and do a web search for lead blood level in Flint. It turns out that 30 children in Flint had levels above 5 micrograms per deciliter. Twenty-three of them were under six:

Unfortunately there’s a mismatch in units here, so we’re going to have to convert 5 ug/dL to parts per million. So we open another tab and find a converter:


Then we convert. I actually know this conversion, but I like doing it here to make sure I don’t mess it up by a decimal place:


OK, so here’s some context then that should blow your mind. In Flint there were 30 children that tested above the dangerous level of 5 ug/dL. This was the crisis. Yet, according to the New Scientist article, in 1971 the average blood level in children in Manchester UK was six times that, at 31 ug/dL. And unlike in Flint, that wasn’t temporary — that was over their entire childhood.

As usual, when I look at lead stuff, I have to flip back and forth multiple times. The numbers shock me every time. But I think I did this right. (You’re welcome to check me here).
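If you do want to check me, the conversion is a one-liner. A minimal Python sketch, assuming blood density of roughly 1 g/mL (so a deciliter weighs about 100 grams, and ug/dL divided by 100 gives ppm by mass); `ug_dl_to_ppm` is just an illustrative name:

```python
def ug_dl_to_ppm(ug_per_dl):
    """Convert a blood lead level from micrograms per deciliter to
    parts per million by mass, assuming blood density of roughly
    1 g/mL (so 1 dL of blood weighs about 100 g)."""
    return ug_per_dl / 100.0

# Flint's level of concern, 5 ug/dL:
print(ug_dl_to_ppm(5))   # 0.05 ppm
# The 1971 Manchester average of 0.31 ppm, back-converted from 31 ug/dL:
print(ug_dl_to_ppm(31))  # 0.31 ppm
```

Which confirms the six-fold gap: the 1971 Manchester average (0.31 ppm, or 31 ug/dL) is six times Flint’s 5 ug/dL threshold.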

We can have some more fun here. The Flint article says:

Any child who tests 45 micrograms per deciliter or higher must be immediately treated at a poison center, Wells said. No children have tested at that level.

We return to that New Scientist article:

A recent study of Manchester children showed an average of 0.31 ppm, with 17 percent over .50 ppm…

Again, that conversion shows 17% of Manchester children had levels over 50 ug/dL. So roughly a fifth of the children of 1971 Manchester, if tested today, would likely be rushed to a poison center for immediate blood chelation.

So that’s some context.

So now for the hypothesis. The end of that paragraph says that Finland had the highest lead levels in 1971 and Sweden the lowest in a multi-country study. This is a great find because Finland and Sweden should have similar-ish cultures, but different lead exposures. According to the lead hypothesis if we go forward 18 years or so we should find that Finland has a significantly higher crime rate than Sweden.

We make this hypothesis before we go, and decide to look at the murder rate, since it is the most comparable across countries (other violent crimes can vary in definition and record-keeping, but murder is murder).

So we open up my go-to resource for nation data — Nation Master. Unfortunately comparisons for 1989 are not available. But 1995 comparisons are, so we’ll take it:


And what do we find? Score another point for the lead hypothesis: the rate of murder in Finland, the high-lead country, is three times that of Sweden, the low-lead country.

The whole process takes about ten minutes, maybe a bit less. But at the end of it, my tabs look like this:


With about a third of those tabs opened up in the course of looking at this.

I’m not saying that I proved anything here. I could still be a nut about this leaded gasoline and crime hypothesis.

But I am saying that this is what literate web reading looks like. You read things, and slide smoothly into multi-tab investigations of issues, pulling in statistical databases, unit converters, old and new magazine articles, published research.

Now here’s my question — if I read this on my phone, how much of this could I have done? My experience tells me almost none of it. On a laptop we built all this context, developed an informal hypothesis and tested against a database. On a phone, I doubt we could have even made it through the first Flint search without wanting to throw our phone across the room.

We know that this sort of multi-tabbed environment is productive — it was, of course, one of the major breakthroughs of Xerox PARC — multiple windows between which you could copy and paste text. If you want your computer to be more than a consumption tool you need that sort of functionality.

The mobile web takes that all away, makes us dumber and less investigative. Yet year after year we hear people talking about the promise of mobile learning.

It’s not only wrong — it’s harmful.

As educators, I’m going to propose a different question. Not “How do we promote mobile learning?” but instead, how do we stop it?

How do we get kids to work on laptops, and stop reading on phones? How do we get them to learn the techniques of multi-tab investigations? Because this world where we’ve started reading everything on single-tabbed phone browsers, without workable copy and paste, without context menus, without keyboards? It’s going to make us very dumb compared to the people that came before us. And I think we need all the intelligence we can use right now.

Misinformation Is a Norovirus and the Web Is a Cruise Ship

I can’t make it to MisInfoCon, unfortunately, or the #fakenewssci conference going on right now on the East Coast (can we get a few West Coast misinformation conferences please?) But I thought I’d offer my take on a frame for the problem of misinformation on the web.

When you listen to the psychologists talk about misinformation, it can get pretty depressing. They’ll tell you that once people believe a thing, it can be pretty hard to dislodge that belief. And creating new beliefs doesn’t take that much. Some repeated exposure to information (whether true or false) and an emotional frame to view it through does the trick. Easy to catch, hard to cure. In fact, trying to dislodge existing beliefs — even when they are patently ridiculous, like flat-eartherism — often results in a “backfire effect” causing the beliefs to set in deeper.

When you listen to historians talk, it can be pretty uplifting, in a weird “we’re screwed but we always have been” sort of way. Fake news and slanted news have been around since day one of our species. If you believe theorists like Dan Sperber, our reasoning power evolved not to solve problems, but to slant news. So this is nothing new, and maybe our reaction to this is a moral panic.

Both of these takes, though, tend to leave me feeling a bit unsatisfied. And it’s partially because the psychological and historical approaches provide insights, but not an appropriate frame for improving the information environment. For me, the appropriate way to think about problems of web-based misinformation is through a public health lens. Through the lens of epidemiology, which looks at the spread of disease.

My view is that Misinformation is a stomach bug, one that has existed since the dawn of time in various strains. And the web, it’s a cruise ship. Combine the two things and you get something like this:

BAYONNE, N.J. (AP) — Kim Waite was especially disappointed to fall ill while treating herself to a Caribbean cruise after completing cancer treatment. The London woman thought she was the only sick one as her husband wheeled her to the infirmary — until the elevator doors opened to reveal hundreds of people vomiting into bags, buckets or on the floor, whatever was closest.

“I started crying, I couldn’t believe it,” Waite said. “I was in shock.”

Waite was among nearly 700 passengers and crew members who became ill during a cruise on Royal Caribbean’s Explorer of the Seas. The voyage was cut short and the ship returned to port Wednesday in New Jersey, where it was being sanitized in preparation for its next voyage.

I won’t go too deep into the whole epidemiology of stomach bugs and cruise ships, but let’s start with this. No cruise ship looks at a room of 700 passengers with a norovirus and says “Well, they can’t be cured, so nothing can be done.”

There’s absolutely something to be done: prevent the room from having 701 people in it. The primary focus is on the people outside that room.

And yet, when we talk about fact-checking, the assumption is that the main use of such things is to correct people’s beliefs — to “cure” people who are “sick”. It’s not.

Like the Social Web, Cruise Ships Are Viral by Design

A cruise ship is meant to push you closer to people you don’t know: it provides events, buzz, common meals, trivia contests. And that social virality breeds traditional virality.

I’m no cruise ship expert, but if you think about what a cruise ship has to do to deal with a stomach bug outbreak you’ll get a lot further in thinking about web misinformation than if you cling to this idea of fact-checkers as missionaries.

What cruise ships do is try to stop the spread of the virus. And they do that by adopting many approaches at once.


Source: Daily Mail

For example:

  • They set rules and influence behavior patterns that reduce the spread of the disease.
  • They train their crews to identify potential sick passengers earlier, and to act in ways that don’t further the spread.
  • They set up isolation rooms.
  • They sanitize the ship in between voyages.

And so on. Almost none of this activity deals with curing people who are infected.

Fact-Checks Aren’t a Cure, They’re Prevention

I’m not going to bore you with a point-by-point extended analogy of how disease control measures on a ship map to web misinformation control strategies. But since I run a student-driven fact-checking project, let me talk about fact checks. Because, again, I hear a lot of people saying “You know, if a person believes something and reads a fact-check they just have their beliefs reinforced.” And while it’s a true and important point, it gets the frame wrong.

Fact-checking isn’t a cure for misinformation. It’s prevention. It’s the hand sanitizer and the sinks around the ship that make it easy to wash your hands before you get infected or infect someone else. It’s information hygiene.

How do fact-checks accomplish this?

  • They incentivize news providers and politicians to not make up lies in the first place.
  • If news providers and politicians produce lies anyway, an available fact-check can prevent someone from sharing the lie.
  • If someone shares the lie, the availability of a fact-check allows a commenter on a post or tweet to shame the user into removing the lie.
  • A habit of checking for fact-checks slows sharers and readers down more generally, resulting in less overall virality (and hopefully more reflection).

What fact-checks don’t do is influence true believers. And that’s OK. That’s not the battle we’re fighting.

Regarding my work with the Digital Polarization Initiative, I’ll add that getting students to produce fact-checks is important not only for the fact-checks they produce, but because it builds good information hygiene habits; in the process of producing these things they become a different sort of reader on the web as well, one more prone to use the interactivity of the web to do a quick check on the headlines that rile them up. So an important part of this is changing our orientation to the web from one of discussion (in which retrenchment is the norm) to one of investigation.

What If Your Context Menu Gave You Context?

The larger point is if we want to deal with misinformation on a network, we have to think in network terms. And in network terms, the most important stuff happens before a person adopts a belief. And a lot of things could be done there.

There’s the design of the web environment, for example. Open a browser like Chrome, go to a page, and right-click into a context menu. The context menu is so named because it changes based on the context. But what if it gave you context? What if, when you were confronted with an unfamiliar site, instead of a context menu that read like:

  • Back
  • Forward
  • Reload
  • Save As…
  • Print

You got a context menu that said:

  • Site Info
  • Fact Checks
  • Reload
  • Save As…

etc., where Site Info produced a custom Google search that compiled a bunch of information from Wikipedia descriptions of the site to WHOIS and date created results, and Fact Checks looked for references to the page in prominent fact-checking platforms?
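To make the Site Info idea concrete, here’s a rough Python sketch of the kind of compiled search such a menu item might run. The query shape and the `site_info_query` name are my own illustration, not a finished design; only the Google search URL’s `q` parameter is standard:

```python
from urllib.parse import quote_plus

def site_info_query(hostname):
    """Build a web search that asks what others say about a site:
    Wikipedia's description of it, plus WHOIS-style registration pages.
    The query shape here is illustrative, not a finished design."""
    query = '"%s" (site:wikipedia.org OR whois)' % hostname
    return "https://www.google.com/search?q=" + quote_plus(query)

# A "Site Info" menu item would open something like:
print(site_info_query("naturalnews.com"))
```

A real implementation would also pull the domain’s creation date and surface references from fact-checking platforms, but even a canned search like this gives a reader more context than today’s context menu does.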

What if your browser could recognize prominent names, such as “Andrew Wakefield” and highlight them, encouraging people to get a hover card summarizing the work, worldview, and issues around an author or a quote source before the reader took the quote at face value?

I know, I know — there are extensions that can do this. I’ve been working with Jon Udell on just such an extension for the Digital Polarization Initiative. But extensions, while good for the short term, are the wrong long-term model. It’s like a cruise ship saying “We’ll give hand sanitizer and sinks to those who request them.” If you want to fix the problem, you put the sanitizer in the hallways, not in the rooms.

If you want to stop disinformation, AI is great. But a more effective idea would be to make the browser (or the Facebook interface) a better tool for investigation.

Once you do that, you start to build a web ecosystem where fact-checking can have real impact.

Ending this abruptly, because, well, work beckons. But let’s think a lot bigger than we have been on this.