Against Expressive Social Media

I’m sitting here starting an argument with you and you are starting an argument with me.

I am against expressive social media, I say. I think it is making us very dumb and we should use other forms of social media to teach kids.

“But, Mike,” you may be thinking, “why are you so binary, why not BOTH?”

“But, Mike,” you may be thinking, “you must respect the students and their expressive urge!”

“But, Mike,” you are thinking, “is this really an extended subtweet of something I said? Is it against me? It’s against me, isn’t it?”

Or perhaps you’re thinking, damn straight, it’s about time someone spoke against expressive social media. Sock it to ’em, Mike!

If you’re really enlightened maybe your opinion is that it would be silly to be for or against the article at this point. Let’s wait until the terms in the headline are defined. Then, after that paragraph, in the milliseconds after the definition — then I’ll decide for or against it.


I’m Sick of this Crap and I Want It to End

We do this all day on Facebook and Twitter and blogs. On Medium, or forums, or Slack. We argue or bond with others that share our opinions. We see an open box on the internet and type into it What We Think. Maybe we soften it. Or maybe, as is the case here, we say screw it, and just try to anger people, like I am doing now. But underneath it all is the idea that you try to convince me of something and I try to convince you and somehow down the line we end up smarter.

I could add caveats here about the cases where this works, but I don’t want to give you an out right now. The fact is it mostly doesn’t work. Most of the work here in a blog does not make me smarter. It makes me better at presenting things I’ve learned off-blog. It documents what I’ve learned, maybe, which is useful later. It influences you. But to the extent I am sitting here trying to persuade you of something, learning time is over.

There Was a Vision Once and This Is Not It

When I was in college I had decided to never become my Dad, who was an early programmer for Digital Equipment Corporation.

I was good at computer programming, and I had enjoyed it as a kid. I was in my first chat rooms in the late 1970s. My invites for my 5th birthday were printed out on that old green and white striped paper using a loop where my dad fed an array of names into a MUMPS program to make 20 personal invites (we invited everyone in the class). That was 1975.


But by college computing seemed boring. I was interested in music and art and philosophy. I dropped out of college and hitch-hiked and played guitar, under an illusion I was Bob Dylan. I thought big thoughts and read a lot of Joseph Campbell and Henry Miller in a variety of makeshift lodgings.


Yeah, it’s actually me. I really did want to be Dylan.

My dad worried about me, as one would about a son who is working two days a week as a janitor and sleeping outside on Cape Cod while writing crappy Henry Miller knockoff stories. He had this feeling that this might not be a sustainable way of living. When I moved back home and thought about going back to college in late 1990 he tried to talk me into looking into programming. I still wasn’t interested.

One day there was a photocopy on the kitchen counter of an article from a magazine. Just out on an otherwise empty counter. I looked at it.

“As We May Think?” I asked.

“Oh, yeah,” my dad said, as if the article had just been left there accidentally. “You might really like that. You should read it.”

It wasn’t very subtle.

But I did read it, and it opened my eyes. In the article, the author, writing in 1945, detailed things that looked like computers from Terry Gilliam’s Brazil, but were not used as glorified walkie-talkies, printing presses, or accounting machines but instead tools to truly augment thought.

Reading it didn’t change me overnight, but it opened a door for me. It made me realize that properly conceived computers were philosophical, firmly rooted in the humanities that I loved. I got on the Internet. I started playing with hypertext. When Mosaic came out, I hopped on board the web. And I dreamed big dreams.

Dreams of what? Dreams of the Memex, of course, that thought experiment of that 1945 author, Vannevar Bush:


The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English long bow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex.

First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item.

When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.

I loved this vision. A person pulling these various threads together, like Campbell pulling these various religions together or Miller jump-cutting through related scenes in 1930s Paris to form a literary montage.

I bought into the early hopes that the World Wide Web was really going to be a World Wide Memex, where people used it like this, as a tool for thought. And at the core of that vision was the idea that people would be using the web to try to construct and share understanding, not to argue about it.

Usenet Killed the Hypertext Star

Of course, that’s not how things turned out. The hyperlinked vision of the web was replaced by Usenet plus surveillance. Share and argue, argue and share. But now with personalized ads for things you just bought last week. (Amazon: “This guy bought a Chromebook, he must really like Chromebooks. Show him some more Chromebooks.”)

It’s a step back, but no one seems to notice. Or care.

In my more pessimistic moments, I come to think that the thing that poor Vannevar Bush didn’t get, and that Doug Engelbart didn’t get, and that Alan Kay didn’t get is that people really like the buzz of getting beliefs confirmed. And they like the buzz of getting angry at people who are too stupid to get what they already know. Confirming beliefs makes you feel smart, and arguing with people makes you feel smarter than someone else. Both allow you to snack on dopamine throughout the day, and if you ever need a full meal you can always jump on Reddit.

Buzz, buzz, buzz.

At Some Point the Candy Stops

I’m rambling here because I’m sick of making sense, I guess. But the thing is we had and have technologies that look like that dream of the old web, where an individual tries to construct knowledge and prod it. To test their knowledge. To try to broaden their understanding of both their knowledge and the limits of their knowledge by attempting to explain things from a more neutral point of view. I’m a broken record on this, but wiki is a way to do this. There are other ways too — things like annotation tools have some promise, if they become more than glorified comments.

But none of these will give you the buzz. So we’re a bit stuck, like sugar addicts or caffeine junkies trying to go straight. My wife Nicole teaches K-12 art, and has taught in K-5 classes that are used to getting candy as a reward for very basic good behavior. That’s a tough room to walk into, and that’s kind of the room we’re in. Do we give students more candy, or do we find a different way?

When I started this blog a decade ago, my first post was this:

We need to stop asking how we can communicate with our college students in their idiom, which is a valid question, but ultimately a marketing and customer service issue.

We need to start asking the real question, which is how do we teach our students to collaborate and communicate in ways fit for the agile projects the future requires.

I meant agile here in its normal lay sense: that we need to be fast and flexible in our thinking and our doing, and we need to provide tools that support that.

I’m not sure that’s what we’re doing, though. I’m not saying that classes shouldn’t be fun. But have we truly thought about the type of collaboration that the future needs and designed education to fit that? Or are we chasing engagement without concern for the needs of our students and broader society? Are we truly developing new ways of working together with one another? Or are we teaching old ways with a better looking site theme? Are we opening their minds or closing them? Are we building a life of the ego or a life of the mind?

I’ll apologize for this post in a couple days, probably. There are fifteen unfinished posts in my queue that express this better, but for some reason the dam just broke today.

The Lead-Crime Hypothesis and a Gripe About Mobile

I’ve generally kept my advocacy for the Lead-Crime Hypothesis off this blog. This is a blog about web-enabled education, after all. But today I can probably get away with it because there’s a web literacy connection. Seriously, I promise.

For those who don’t know the lead-crime hypothesis, it goes like this: the massive crime wave of the late 70s to early 1990s in the U.S. — the crime wave that gave us our politics as we have them now, since it was seen as a failure of liberalism — that crime wave was caused primarily by youth exposure to lead, a result of the most massive public poisoning in the history of the world: the sale and use of leaded gasoline.

In this theory, early lead poisoning, especially in urban areas, affected the mental development of many children, making them more prone to violence and a host of other cognitive and behavioral issues. Roll those behaviors forward 18 years or so, and that early lead exposure becomes a crime wave.

You see why I don’t mention this on the blog much, even though I’ve been annoying friends with it for years. It sounds pretty tin-foil hattish, even though it’s a pretty solid hypothesis.

Anyway, I wanted to make a point about mobile learning, and today I get to do it by talking about lead.

So here’s the thing: I’m reading through an old New Scientist article from 1971 for another purpose (history of computing in education) when I notice, adjacent to the piece I’m reading, an article on lead poisoning.

I can’t resist. In it is this paragraph:


From “Is Lead Blowing Our Minds?”, New Scientist, May 27, 1971.

There’s a whole host of questions that occur to me reading this. The first question is how that Manchester average child exposure compares to Flint, Michigan. I open a new tab and do a web search for lead blood level in Flint. It turns out that 30 children in Flint had levels above 5 micrograms per deciliter. Twenty-three of them were under six:

Unfortunately there’s a mismatch in units here, so we’re going to have to convert 5 ug/dL to parts per million. So we open another tab and find a converter:


Then we convert. I actually know this conversion, but I like doing it here to make sure I don’t mess it up by a decimal place:


OK, so here’s some context then that should blow your mind. In Flint there were 30 children that tested above the dangerous level of 5 ug/dL. This was the crisis. Yet, according to the New Scientist article, in 1971 the average blood level in children in Manchester UK was six times that, at 31 ug/dL. And unlike in Flint, that wasn’t temporary — that was over their entire childhood.

As usual, when I look at lead stuff, I have to flip back and forth multiple times. The numbers shock me every time. But I think I did this right. (You’re welcome to check me here).
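If you do want to check it, the conversion reduces to a couple of lines. The only assumption (the standard one for blood-lead conversions) is that blood has a density of roughly 1 g/mL, so a deciliter weighs about 100 grams:

```python
# Convert between blood lead in micrograms per deciliter (ug/dL)
# and parts per million by mass. Assumes blood density ~1 g/mL,
# so 1 dL of blood weighs ~100 g.

def ug_dl_to_ppm(ug_per_dl):
    grams_of_lead = ug_per_dl / 1_000_000    # micrograms -> grams
    grams_of_blood = 100.0                   # 1 dL at ~1 g/mL
    return grams_of_lead / grams_of_blood * 1_000_000  # mass fraction -> ppm

def ppm_to_ug_dl(ppm):
    return ppm * 100   # inverse of the above

print(ug_dl_to_ppm(5))     # Flint's level of concern, in ppm (~0.05)
print(ppm_to_ug_dl(0.31))  # Manchester's 1971 average, in ug/dL (~31)
print(31 / 5)              # Manchester average over Flint threshold (~6x)
```

Run it either direction and the roughly six-fold gap falls out immediately.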

We can have some more fun here. The Flint article says:

Any child who tests 45 micrograms per deciliter or higher must be immediately treated at a poison center, Wells said. No children have tested at that level.

We return to that New Scientist article:

A recent study of Manchester children showed an average of 0.31 ppm, with 17 percent over .50 ppm…

Again, that conversion shows 17% of Manchester children had levels over 50 ug/dL. So maybe 20% of the 1971 population of Manchester, if they were alive today, would likely be rushed to a poison center for immediate blood chelation.

So that’s some context.

So now for the hypothesis. The end of that paragraph says that Finland had the highest lead levels in 1971 and Sweden the lowest in a multi-country study. This is a great find because Finland and Sweden should have similar-ish cultures, but different lead exposures. According to the lead hypothesis, if we go forward 18 years or so we should find that Finland has a significantly higher crime rate than Sweden.

We make this hypothesis before we go, and decide to look at the murder rate, since it is the most comparable across countries (other violent crimes can vary in definition and record-keeping, but murder is murder).

So we open up my go-to resource for nation data — Nation Master. Unfortunately comparisons for 1989 are not available. But 1995 comparisons are, so we’ll take it:


And what do we find? Score another point for the lead hypothesis: the rate of murder in Finland, the high-lead country, is three times that of Sweden, the low-lead country.

The whole process takes about ten minutes, maybe a bit less. But at the end of it, my tabs look like this:


With about a third of those tabs opened up in the course of looking at this.

I’m not saying that I proved anything here. I could still be a nut about this leaded gasoline and crime hypothesis.

But I am saying that this is what literate web reading looks like. You read things, and slide smoothly into multi-tab investigations of issues, pulling in statistical databases, unit converters, old and new magazine articles, published research.

Now here’s my question — if I read this on my phone, how much of this could I have done? My experience tells me almost none of it. On a laptop we built all this context, developed an informal hypothesis and tested against a database. On a phone, I doubt we could have even made it through the first Flint search without wanting to throw our phone across the room.

We know that this sort of multi-tabbed environment is productive — it was, of course, one of the major breakthroughs of Xerox PARC — multiple windows between which you could copy and paste text. If you want your computer to be more than a consumption tool you need that sort of functionality.

The mobile web takes that all away, makes us dumber and less investigative. Yet year after year we hear people talking about the promise of mobile learning.

It’s not only wrong — it’s harmful.

So I’m going to propose a different question to educators. Not “How do we promote mobile learning?” but instead, how do we stop it?

How do we get kids to work on laptops, and stop reading on phones? How do we get them to learn the techniques of multi-tab investigations? Because this world where we’ve started reading everything on single-tabbed phone browsers, without workable copy and paste, without context menus, without keyboards? It’s going to make us very dumb compared to the people that came before us. And I think we need all the intelligence we can use right now.

Misinformation Is a Norovirus and the Web Is a Cruise Ship

I can’t make it to MisInfoCon, unfortunately, or the #fakenewssci conference going on right now on the East Coast (can we get a few West Coast misinformation conferences please?) But I thought I’d offer my take on a frame for the problem of misinformation on the web.

When you listen to the psychologists talk about misinformation, it can get pretty depressing. They’ll tell you that once people believe a thing, it can be pretty hard to dislodge that belief. And creating new beliefs doesn’t take that much. Some repeated exposure to information (whether true or false) and an emotional frame to view it through does the trick. Easy to catch, hard to cure. In fact, trying to dislodge existing beliefs — even when they are patently ridiculous, like flat-eartherism — often results in a “backfire effect” causing the beliefs to set in deeper.

When you listen to historians talk, it can be pretty uplifting, in a weird “we’re screwed but we always have been” sort of way. Fake news and slanted news have been around since day one of our species. If you believe theorists like Dan Sperber, our reasoning power evolved not to solve problems, but to slant news. So this is nothing new, and maybe our reaction to this is a moral panic.

Both of these takes, though, tend to leave me feeling a bit unsatisfied. And it’s partially because the psychological and historical approaches provide insights, but an inappropriate frame for improving the information environment. For me, the appropriate way to think about problems of web-based misinformation is through a public health lens. Through the lens of epidemiology, which looks at the spread of disease.

My view is that misinformation is a stomach bug, one that has existed since the dawn of time in various strains. And the web, it’s a cruise ship. Combine the two things and you get something like this:

BAYONNE, N.J. (AP) — Kim Waite was especially disappointed to fall ill while treating herself to a Caribbean cruise after completing cancer treatment. The London woman thought she was the only sick one as her husband wheeled her to the infirmary — until the elevator doors opened to reveal hundreds of people vomiting into bags, buckets or on the floor, whatever was closest.

“I started crying, I couldn’t believe it,” Waite said. “I was in shock.”

Waite was among nearly 700 passengers and crew members who became ill during a cruise on Royal Caribbean’s Explorer of the Seas. The voyage was cut short and the ship returned to port Wednesday in New Jersey, where it was being sanitized in preparation for its next voyage.

I won’t go too deep into the whole epidemiology of stomach bugs and cruise ships, but let’s start with this. No cruise line looks at a room of 700 passengers with a norovirus and says “Well, they can’t be cured, so nothing can be done.”

There’s absolutely something to be done: prevent the room from having 701 people in it. The primary focus is on the people outside that room.

And yet, when we talk about fact-checking, the assumption is that the main use of such things is to correct people’s beliefs — to “cure” people who are “sick”. It’s not.

Like the Social Web, Cruise Ships Are Viral by Design

A cruise ship is meant to push you closer to people you don’t know: it provides events, buzz, common meals, trivia contests. And that social virality breeds traditional virality.

I’m no cruise ship expert, but if you think about what a cruise ship has to do to deal with a stomach bug outbreak you’ll get a lot further in thinking about web misinformation than if you cling to this idea of fact-checkers as missionaries.

What cruise ships do is try to stop the spread of the virus. And they do that by adopting many approaches at once.


Source: Daily Mail

For example:

  • They set rules and influence behavior patterns that reduce the spread of the disease.
  • They train their crews to identify potential sick passengers earlier, and to act in ways that don’t further the spread.
  • They set up isolation rooms.
  • They sanitize the ship in between voyages.

And so on. Almost none of this activity deals with curing people who are infected.

Fact-Checks Aren’t a Cure, They’re Prevention

I’m not going to bore you with a point-by-point extended analogy of how disease control measures on a ship map to web misinformation control strategies. But since I run a student-driven fact-checking project, let me talk about fact checks. Because, again, I hear a lot of people saying “You know, if a person believes something and reads a fact-check they just have their beliefs reinforced.” And while it’s a true and important point, it gets the frame wrong.

Fact-checking isn’t a cure for misinformation. It’s prevention. It’s the hand sanitizer and the sinks around the ship that make it easy to wash your hands before you get infected or infect someone else. It’s information hygiene.

How do fact-checks accomplish this?

  • They incentivize news providers and politicians to not make up lies in the first place.
  • If news providers and politicians produce lies anyway, an available fact-check can prevent someone from sharing the lie.
  • If someone shares the lie, the availability of a fact-check allows a commenter on a post or tweet to shame the user into removing the lie.
  • A habit of checking for fact-checks slows sharers and readers down more generally, resulting in less overall virality (and hopefully more reflection).

What fact-checks don’t do is influence true believers. And that’s OK. That’s not the battle we’re fighting.

Regarding my work with the Digital Polarization Initiative, I’ll add that getting students to produce fact-checks is important not only for the fact-checks they produce, but because it builds good information hygiene habits; in the process of producing these things they become a different sort of reader on the web as well, one more prone to use the interactivity of the web to do a quick check on the headlines that rile them up. So an important part of this is changing our orientation to the web from one of discussion (in which retrenchment is the norm) to one of investigation.

What If Your Context Menu Gave You Context?

The larger point is if we want to deal with misinformation on a network, we have to think in network terms. And in network terms, the most important stuff happens before a person adopts a belief. And a lot of things could be done there.

There’s the design of the web environment, for example. Open a browser like Chrome and go to a page and right click into a context menu. The context menu is so-named because it changes based on the context. But what if it gave you context? What if, when you were confronted with an unfamiliar site, instead of a context menu that read like:

  • Back
  • Forward
  • Reload
  • Save As…
  • Print

You got a context menu that said:

  • Site Info
  • Fact Checks
  • Reload
  • Save As…

etc., where Site Info produced a custom Google search that compiled a bunch of information from Wikipedia descriptions of the site to WHOIS and date created results, and Fact Checks looked for references to the page in prominent fact-checking platforms?
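To make that concrete, here is a hypothetical sketch of what a “Site Info” item might do behind the scenes. The function name and the query format are my own invention for illustration, not an existing browser feature:

```python
from urllib.parse import urlparse, quote_plus

def site_info_query(page_url):
    """Build a search URL compiling context about an unfamiliar site:
    Wikipedia descriptions plus WHOIS / domain-age results."""
    host = urlparse(page_url).hostname or ""
    # Search on the bare domain, without any leading "www."
    domain = host[4:] if host.startswith("www.") else host
    query = f'{domain} (wikipedia OR whois OR "domain created")'
    return "https://www.google.com/search?q=" + quote_plus(query)

# Hypothetical unfamiliar site, used here purely for illustration:
print(site_info_query("https://www.example-news-site.com/article/123"))
```

The point isn’t the particular query; it’s that the browser, not the user, does the work of assembling context.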

What if your browser could recognize prominent names, such as “Andrew Wakefield” and highlight them, encouraging people to get a hover card summarizing the work, worldview, and issues around an author or a quote source before the reader took the quote at face value?

I know, I know — there are extensions that can do this. I’ve been working with Jon Udell on just such an extension for the Digital Polarization Initiative. But extensions, while good for the short term, are the wrong long-term model. It’s like a cruise ship saying “We’ll give hand sanitizer and sinks to those who request them.” If you want to fix the problem, you put the sanitizer in the hallways, not in the rooms.

If you want to stop disinformation, AI is great. But a more effective idea would be to make the browser (or the Facebook interface) a better tool for investigation.

Once you do that, you start to build a web ecosystem where fact-checking can have real impact.

Ending this abruptly, because, well, work beckons. But let’s think a lot bigger than we have been on this.

Web Literacy for Student Fact-Checkers Is Out

Back before the election I was working on a book on the problems of living in “the stream” — this endless flow of stuff we read, retweet, and react to. My argument in that still unfinished work was that while the stream is useful and exciting it also warps our sense of reality in unhelpful ways. Forced to decide within seconds to retweet an inflammatory tweet or share a headline on Facebook we tend to make bad decisions that pollute the information environment and reduce the depth and complexity of our thought. The 2016 primary elections in the U.S. were going to be Exhibit A of this trend, with a nod toward the acceleration of these trends in the 2016 general election.

It was going to be a condemnation of the attention economy we’ve developed and its whole rotten ad-driven substrate, followed by a plea to return to some older visions of the web.

After the general election I felt both vindicated and weirdly distant. As I continued to work on the book it occurred to me that what the world needed, much more than a scholarly book or extended philippic, was a textbook or field guide that explained how to survive in this world of viral information flows and social media firehoses.

So in November I switched gears and began to write a textbook for web literacy that focused on the question of what web literacy for stream culture looked like. What I found is that it had to be quick and tactical. Users are presented with hundreds of headlines and statements a day through social media, and asked to retweet or share that information with little or no background. Students need skills that help them get closer to the truth in the few minutes between when they see something and when they decide to share it. Conversations with researcher Sam Wineburg confirmed this need for quick and frugal fact-checking basics.

So I wrote this book: Web Literacy for Student Fact-Checkers. It’s still rough and unfinished in places, but it’s in a shape that’s suitable for classroom use.

I don’t mean it to replace what we do with critical thinking and web efforts around digital identity, making, and collaboration. But I think it fills a gap that I’m not seeing other resources address. And it’s a really important gap.

Here are some other formats:

MOBI (Kindle)
PressBooks XML

Comments and suggested edits are welcome, but for maximum efficiency I’d ask you to communicate comments about specific pages or passages using the web version of the book.

Narrative Neediness

Jesse Walker on how our need for narrative creates a market for both conspiracy theory and fake and slanted news:

For a lot of people, the real assumption that they bring to the news, even beyond their partisan affiliations, is an expectation of a smooth narrative. They expect news stories to look like the movies or TV shows that they’re familiar with. Even if they’re regular journalism consumers, the stories they remember best are these well done stories that tell a compelling narrative and make them feel like they’re watching a movie or TV show.

In reality, stories are messy and have real loose ends. That’s the real bias that readers have to combat, and it’s something that people in the media have to think about. Because, on the one hand we want to provide good, compelling narratives, but on the other hand, we don’t want people to think they live in this world that’s made up of these easy, compelling narratives. They don’t.

I used to teach statistical literacy, and narratives — even in a smaller sense — were the biggest problem. You’d take a stat like “Only 4% of college students are black males” and ask students to think about what that might mean statistically, and no matter how much you would try to keep them inside the numbers for even a few minutes, they would race towards narratives. The conservative kids would rush towards “Well, maybe they are just underprepared, and that’s why…” The liberal kids would immediately start talking about how they faced discrimination, or grew up in bad neighborhoods.

Lost in the debate: how much under-representation does that figure indicate? How much would you have to increase the participation rate to achieve an equitable result?

If you stop the students, already lost in their narratives, and ask them what that statistic says about equitable representation they will tell you a variety of things — you need to increase participation by 94%, or get 9 times the amount of black males into college. But of course, the black male population is about 6% of the population, so while such a figure shows a severe race-based deficit — about 33% — it’s not nearly as much as all the students, on both sides of the partisan divide, read from that number.
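To spell out the arithmetic the students skip, using the 4% and 6% figures above:

```python
# Under-representation implied by "4% of college students are black males"
# when black males are about 6% of the overall population.
share_of_students = 0.04
share_of_population = 0.06

ratio = share_of_students / share_of_population  # ~0.67: representation ratio
deficit = 1 - ratio                              # ~0.33: about a 33% deficit

# Closing the gap needs participation ~1.5x current levels --
# not "9 times the amount," as the students guessed.
needed_multiplier = share_of_population / share_of_students  # ~1.5

print(ratio, deficit, needed_multiplier)
```

Two divisions and a subtraction — but they only happen if you stay inside the numbers long enough to do them.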

And this matters, because the “black males aren’t in college” narrative is a pretty impoverished narrative. There are actually an awful lot of black male students in college. But which colleges? Do they persist? Why not? How could we do better at supporting their needs and creating better opportunities? There are so many interesting and useful questions to ask.

Is this just confirmation bias by another name? I don’t think so. You could watch this process with students and statistics even where they had no pre-existing bias towards a result. Cancer rates in this country, for example, have skyrocketed: there are more people living with cancer than ever before. Give this to students and instantly it blossoms into a wide variety of compelling stories about water quality and plastic containers, or failure of people to take responsibility for themselves, or the good old days when people had home cooked meals, or any one of two dozen other stories. And you can watch students sometimes jump between contradictory narratives — half the time they just want to find a resting place in a narrative: which one is irrelevant.

Once the narrative is chosen, thinking stops, and you can almost see the students’ shoulders relax.

(A few seconds of thought will get you to a better answer: as five-year cancer survival rates increase and other causes of death decrease there are more people than ever living with cancer because our medical care is getting better. Of course, that’s not much of a story…).
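A toy model makes the point. With made-up numbers, prevalence is roughly incidence times the years people live with the disease, so better survival alone inflates the count of people “living with cancer”:

```python
# Toy model, made-up numbers: prevalence ~ incidence x years lived with disease.
new_diagnoses_per_year = 100_000   # held constant on purpose

years_lived_then = 2   # shorter survival in the past
years_lived_now = 8    # longer survival with better care

prevalence_then = new_diagnoses_per_year * years_lived_then   # 200,000
prevalence_now = new_diagnoses_per_year * years_lived_now     # 800,000

# Prevalence quadruples even though no one is getting cancer more often.
print(prevalence_now / prevalence_then)   # 4.0
```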

That moment when the facts slot into a narrative eventually comes for everyone. It has to; we’re human and what we want is meaning. But I’m interested in delaying its arrival, if only for a little bit. And the question I have is how we can orient our pedagogy and digital interfaces to increase that delay, and in the process construct some narratives that are a bit less tidy and a bit more useful.

Cleanup Time

Today’s photo investigation.


The big “story” now is that the Women’s March left a big mess and that’s awful, and they should have cleaned it up. Here’s the image — it’s shocking!

Well, this is almost too easy. There are two ways to do this. If you search for the term “Women’s March signs snopes” you’ll find a Snopes article that debunks the right-hand photo, at least partially:


What the Snopes article says about that right-hand photo is that these are indeed signs left by the Women’s March, but these particular signs were left at the Trump International Hotel in D.C. as part of the protest. That’s why they are clustered together like that. Someone does have to clean them up, but it wasn’t routine littering. Additionally, the Park Service has remarked that the protest was tidier than previous events. While the Snopes article gives no single-word ruling, their presentation is close to what they usually call “Mixture” — partially true, but misleading in presentation.

Speaking of cleaning up, what about the photo on the left-hand side:


Well, you see that “alamy” watermark by the guy’s waist? I’m guessing this is stock photography. And stock photography, in general, isn’t released the same day as an event, so I’m thinking this was taken long ago.

We’d like to right-click the photo and search by image, but my guess is that the two images pushed together won’t match anything. So let’s use the snipping tool.

Windows: Call up the “snipping tool”:


Use it to capture the piece of the photo you want to search for. Save it to somewhere you’ll remember:


Mac: Hold down Command, Shift, and 4 together and then select what you want to take a screenshot of. Save it somewhere you’ll remember.

Now go to Google search and upload it, the way we’ve done with past photos.


Any of these results is probably good to click on, but I pick door number three, partially because it is so specific.

And when we do that, we have good luck. We get to the stock photo purchase page, where there is a full description:


We even get the date and location. It was shot seven years ago. So no, this was not from the march.


And… we’re done. Fake-a-rooni.

Road Trip

I like showing people how to debunk viral photos for a couple reasons. First, it requires small enough action that it can easily become a habit. You don’t need to do a lot of research or have a lot of knowledge.

Second, it shows how technological affordance (in this case Google Chrome’s right-click “Search by Image” function) works to create culture. We need to make you curious about the photos you see. But that’s a whole lot easier if the technology makes checking things two steps instead of eighteen.

Finally, it’s fun.

In any case, the photo of the day:


So this is part of the whole “Bikers for Trump” meme. Bikers are supposedly coming by the hundreds of thousands to provide “security” for the inauguration.

I’ll leave the larger issues of this fascination with biker-based security aside and ask a simpler question. Is this a picture of bikers headed to the 2017 inauguration?

The answer? No. And it takes about 30 seconds to find out.

First, right-click or Control-click on the image and select “Search Google for image”:


The Google search — for reasons known only to Google — will assume that the best name for this image is “Jesus”. Change it to “bikers”


Change the date (using the “tools” button) to end in 2016. If we find that this picture existed in 2016 it’s pretty clear it isn’t people headed to the inauguration in 2017. Let’s look at what we get:



While these are technically the dates that the pages containing the photo were published (not the publication date of the photo), the results are probably good enough for us to doubt the photo. We can be done here, in 30 seconds.

If you take about 30 seconds more you can do even better. On the second page of results we find a page from 2009:


We have Google Translate that page, and find the image there posted on a Czech forum in 2013. In the process we see that this is a photo that has been used by a number of biker groups, but is still relatively rare, and the earliest posting was on a Czech forum.

So no, this is not a picture of Bikers for Trump.