The Lead-Crime Hypothesis and a Gripe About Mobile

I’ve generally kept my advocacy for the Lead-Crime Hypothesis off this blog. This is a blog about web-enabled education, after all. But today I can probably get away with it because there’s a web literacy connection. Seriously, I promise.

For those who don’t know the lead-crime hypothesis, it goes like this: the massive crime wave of the late 70s to early 1990s in the U.S. — the crime wave that gave us our politics as we have them now, since it was seen as a failure of liberalism — that crime wave was caused primarily by youth exposure to lead, a result of the most massive public poisoning in the history of the world: the sale and use of leaded gasoline.

In this theory, early lead poisoning, especially in urban areas, affected the mental development of many children, making them more prone to violence and a host of other cognitive and behavioral issues. Roll those behaviors forward 18 years or so, and that early lead exposure becomes a crime wave.

You can see why I don’t mention this on the blog much, even though I’ve been annoying friends with it for years. It sounds pretty tin-foil-hattish, even though it’s actually a fairly solid hypothesis.

Anyway, I wanted to make a point about mobile learning, and today I get to do it by talking about lead.

So here’s the thing: I’m reading through an old New Scientist article from 1971 for another purpose (the history of computing in education) when I notice an adjacent article on lead poisoning.

I can’t resist. In it is this paragraph:


From “Is Lead Blowing Our Minds?”, New Scientist, May 27, 1971.

There’s a whole host of questions that occur to me reading this. The first is how that Manchester average child exposure compares to Flint, Michigan. I open a new tab and do a web search for lead blood levels in Flint. It turns out that 30 children in Flint had levels above 5 micrograms per deciliter. Twenty-three of them were under six:

Unfortunately there’s a unit mismatch here, so we’re going to have to convert 5 ug/dL to parts per million. So we open another tab and find a converter:


Then we convert. I actually know this conversion, but I like doing it here to make sure I don’t mess it up by a decimal place:


OK, so here’s some context that should blow your mind. In Flint, 30 children tested above the dangerous level of 5 ug/dL. This was the crisis. Yet, according to the New Scientist article, in 1971 the average blood lead level in children in Manchester UK was more than six times that, at 31 ug/dL. And unlike in Flint, that wasn’t temporary — that was over their entire childhood.
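If you’d rather not trust the online converter, the conversion is simple enough to sanity-check in a few lines. A minimal sketch, assuming blood has a density of roughly 1 g/mL, so that a deciliter of blood weighs about 100 g and 1 ug/dL works out to 0.01 ppm by mass:

```python
# Convert blood lead concentrations between ug/dL and ppm (by mass).
# Assumes blood density of ~1 g/mL, so 1 dL of blood ~ 100 g,
# and therefore 1 ug/dL = 1 ug per 100 g = 0.01 ppm.

def ug_dl_to_ppm(ug_dl: float) -> float:
    return ug_dl * 0.01

def ppm_to_ug_dl(ppm: float) -> float:
    return ppm * 100

print(ug_dl_to_ppm(5))     # Flint's level of concern (5 ug/dL), in ppm
print(ppm_to_ug_dl(0.31))  # 1971 Manchester average (0.31 ppm), in ug/dL
```

Running this confirms the numbers used here: 0.31 ppm works out to 31 ug/dL, roughly six times Flint’s 5 ug/dL threshold, and the 0.50 ppm figure quoted later in the piece is 50 ug/dL.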

As usual, when I look at lead stuff, I have to flip back and forth multiple times. The numbers shock me every time. But I think I did this right. (You’re welcome to check me here).

We can have some more fun here. The Flint article says:

Any child who tests 45 micrograms per deciliter or higher must be immediately treated at a poison center, Wells said. No children have tested at that level.

We return to that New Scientist article:

A recent study of Manchester children showed an average of 0.31 ppm, with 17 percent over .50 ppm…

Again, that conversion shows that 17% of Manchester children had levels over 50 ug/dL. So roughly a fifth of Manchester’s 1971 child population, if they were alive today, would likely be rushed to a poison center for immediate chelation therapy.

So that’s some context.

So now for the hypothesis. The end of that paragraph says that in a multi-country study Finland had the highest lead levels in 1971 and Sweden the lowest. This is a great find, because Finland and Sweden have broadly similar cultures but different lead exposures. According to the lead hypothesis, if we go forward 18 years or so we should find that Finland has a significantly higher crime rate than Sweden.

We state this hypothesis before we look, and decide to use the murder rate, since it is the most comparable across countries (other violent crimes can vary in definition and record-keeping, but murder is murder).

So we open up my go-to resource for national data — NationMaster. Unfortunately comparisons for 1989 are not available. But 1995 comparisons are, so we’ll take them:


And what do we find? Score another point for the lead hypothesis: the murder rate in Finland, the high-lead country, is three times that of Sweden, the low-lead country.

The whole process takes about ten minutes, maybe a bit less. But at the end of it, my tabs look like this:


With about a third of those tabs opened up in the course of looking at this.

I’m not saying that I proved anything here. I could still be a nut about this leaded gasoline and crime hypothesis.

But I am saying that this is what literate web reading looks like. You read things, and slide smoothly into multi-tab investigations of issues, pulling in statistical databases, unit converters, old and new magazine articles, published research.

Now here’s my question — if I had read this on my phone, how much of this could I have done? My experience tells me almost none of it. On a laptop we built all this context, developed an informal hypothesis, and tested it against a database. On a phone, I doubt we could have even made it through the first Flint search without wanting to throw the phone across the room.

We know that this sort of multi-tabbed environment is productive — it was, of course, one of the major breakthroughs of Xerox PARC — multiple windows between which you could copy and paste text. If you want your computer to be more than a consumption tool you need that sort of functionality.

The mobile web takes that all away, makes us dumber and less investigative. Yet year after year we hear people talking about the promise of mobile learning.

It’s not only wrong — it’s harmful.

As an educator, I’m going to propose a different question. Not “How do we promote mobile learning?” but “How do we stop it?”

How do we get kids to work on laptops, and stop reading on phones? How do we get them to learn the techniques of multi-tab investigation? Because this world where we’ve started reading everything in single-tabbed phone browsers, without workable copy and paste, without context menus, without keyboards? It’s going to make us very dumb compared to the people who came before us. And I think we need all the intelligence we can get right now.

Misinformation Is a Norovirus and the Web Is a Cruise Ship

I can’t make it to MisInfoCon, unfortunately, or the #fakenewssci conference going on right now on the East Coast (can we get a few West Coast misinformation conferences, please?). But I thought I’d offer my take on a frame for the problem of misinformation on the web.

When you listen to the psychologists talk about misinformation, it can get pretty depressing. They’ll tell you that once people believe a thing, it can be pretty hard to dislodge that belief. And creating new beliefs doesn’t take that much. Some repeated exposure to information (whether true or false) and an emotional frame to view it through does the trick. Easy to catch, hard to cure. In fact, trying to dislodge existing beliefs — even when they are patently ridiculous, like flat-eartherism — often results in a “backfire effect” causing the beliefs to set in deeper.

When you listen to historians talk, it can be pretty uplifting, in a weird “we’re screwed but we always have been” sort of way. Fake news and slanted news have been around since day one of our species. If you believe theorists like Dan Sperber, our reasoning power evolved not to solve problems, but to slant news. So this is nothing new, and maybe our reaction to it is a moral panic.

Both of these takes, though, tend to leave me feeling a bit unsatisfied. That’s partially because the psychological and historical approaches provide insights, but an inappropriate frame for improving the information environment. For me, the appropriate way to think about problems of web-based misinformation is through a public health lens: through the lens of epidemiology, which studies the spread of disease.

My view is that misinformation is a stomach bug, one that has existed since the dawn of time in various strains. And the web? It’s a cruise ship. Combine the two and you get something like this:

BAYONNE, N.J. (AP) — Kim Waite was especially disappointed to fall ill while treating herself to a Caribbean cruise after completing cancer treatment. The London woman thought she was the only sick one as her husband wheeled her to the infirmary — until the elevator doors opened to reveal hundreds of people vomiting into bags, buckets or on the floor, whatever was closest.

“I started crying, I couldn’t believe it,” Waite said. “I was in shock.”

Waite was among nearly 700 passengers and crew members who became ill during a cruise on Royal Caribbean’s Explorer of the Seas. The voyage was cut short and the ship returned to port Wednesday in New Jersey, where it was being sanitized in preparation for its next voyage.

I won’t go too deep into the epidemiology of stomach bugs and cruise ships, but let’s start with this. No cruise ship crew looks at a room of 700 passengers with norovirus and says “Well, they can’t be cured, so nothing can be done.”

There’s absolutely something to be done: prevent the room from having 701 people in it. The primary focus is on the people outside that room.

And yet, when we talk about fact-checking, the assumption is that the main use of such things is to correct people’s beliefs — to “cure” people who are “sick”. It’s not.

Like the Social Web, Cruise Ships Are Viral by Design

A cruise ship is meant to push you closer to people you don’t know: it provides events, buzz, common meals, trivia contests. And that social virality breeds literal virality.

I’m no cruise ship expert, but if you think about what a cruise ship has to do to deal with a stomach bug outbreak you’ll get a lot further in thinking about web misinformation than if you cling to this idea of fact-checkers as missionaries.

What cruise ships do is try to stop the spread of the virus. And they do that by adopting many approaches at once.


Source: Daily Mail

For example:

  • They set rules and influence behavior patterns that reduce the spread of the disease.
  • They train their crews to identify potentially sick passengers earlier, and to act in ways that don’t further the spread.
  • They set up isolation rooms.
  • They sanitize the ship in between voyages.

And so on. Almost none of this activity deals with curing people who are infected.

Fact-Checks Aren’t a Cure, They’re Prevention

I’m not going to bore you with a point-by-point extended analogy of how disease-control measures on a ship map to misinformation-control strategies on the web. But since I run a student-driven fact-checking project, let me talk about fact-checks. Because, again, I hear a lot of people saying “You know, if a person believes something and reads a fact-check they just have their beliefs reinforced.” And while that’s a true and important point, it gets the frame wrong.

Fact-checking isn’t a cure for misinformation. It’s prevention. It’s the hand sanitizer and the sinks around the ship that make it easy to wash your hands before you get infected or infect someone else. It’s information hygiene.

How do fact-checks accomplish this?

  • They incentivize news providers and politicians to not make up lies in the first place.
  • If news providers and politicians produce lies anyway, an available fact-check can prevent someone from sharing the lie.
  • If someone shares the lie, the availability of a fact-check allows a commenter on a post or tweet to shame the user into removing the lie.
  • A habit of checking for fact-checks slows sharers and readers down more generally, resulting in less overall virality (and hopefully more reflection).

What fact-checks don’t do is influence true believers. And that’s OK. That’s not the battle we’re fighting.

Regarding my work with the Digital Polarization Initiative, I’ll add that getting students to produce fact-checks is important not only for the fact-checks they produce, but because it builds good information hygiene habits; in the process of producing these things they become a different sort of reader on the web as well, one more prone to use the interactivity of the web to do a quick check on the headlines that rile them up. So an important part of this is changing our orientation to the web from one of discussion (in which retrenchment is the norm) to one of investigation.

What If Your Context Menu Gave You Context?

The larger point is if we want to deal with misinformation on a network, we have to think in network terms. And in network terms, the most important stuff happens before a person adopts a belief. And a lot of things could be done there.

There’s the design of the web environment, for example. Open a browser like Chrome, go to a page, and right-click to bring up a context menu. The context menu is so named because it changes based on the context. But what if it gave you context? What if, when you were confronted with an unfamiliar site, instead of a context menu that read:

  • Back
  • Forward
  • Reload
  • Save As…
  • Print

You got a context menu that said:

  • Site Info
  • Fact Checks
  • Reload
  • Save As…

etc., where Site Info produced a custom Google search compiling information ranging from Wikipedia descriptions of the site to WHOIS and date-created results, and Fact Checks looked for references to the page in prominent fact-checking platforms?
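As a rough sketch of the idea (the query construction here is entirely my own invention, not an existing browser feature), a Site Info handler might do little more than assemble a context-gathering search for the current domain:

```python
from urllib.parse import urlencode, urlparse

def site_info_url(page_url: str) -> str:
    """Build a search URL gathering context about an unfamiliar site.

    A hypothetical sketch: a real implementation would more likely
    query Wikipedia's API and a WHOIS service directly and compile
    the results into a single panel.
    """
    domain = urlparse(page_url).netloc
    # Surface the site's Wikipedia description alongside WHOIS-style
    # information about who registered the domain and when.
    query = f'"{domain}" (site:en.wikipedia.org OR whois OR "registered on")'
    return "https://www.google.com/search?" + urlencode({"q": query})

print(site_info_url("https://some-unfamiliar-site.com/article/123"))
```

The point isn’t this particular query; it’s that one right-click could hand the reader the beginnings of an investigation instead of a print dialog.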

What if your browser could recognize prominent names, such as “Andrew Wakefield”, and highlight them, offering readers a hover card summarizing the work, worldview, and issues around an author or quote source before they took the quote at face value?

I know, I know — there are extensions that can do this. I’ve been working with Jon Udell on just such an extension for the Digital Polarization Initiative. But extensions, while good for the short term, are the wrong long-term model. It’s like a cruise ship saying “We’ll give hand sanitizer and sinks to those who request them.” If you want to fix the problem, you put the sanitizer in the hallways, not in the rooms.

If you want to stop disinformation, AI is great. But a more effective idea would be to make the browser (or the Facebook interface) a better tool for investigation.

Once you do that, you start to build a web ecosystem where fact-checking can have real impact.

Ending this abruptly, because, well, work beckons. But let’s think a lot bigger than we have been on this.

Web Literacy for Student Fact-Checkers Is Out

Back before the election I was working on a book on the problems of living in “the stream” — this endless flow of stuff we read, retweet, and react to. My argument in that still-unfinished work was that while the stream is useful and exciting, it also warps our sense of reality in unhelpful ways. Forced to decide within seconds whether to retweet an inflammatory tweet or share a headline on Facebook, we tend to make bad decisions that pollute the information environment and reduce the depth and complexity of our thought. The 2016 primary elections in the U.S. were going to be Exhibit A of this trend, with a nod toward the acceleration of these trends in the 2016 general election.

It was going to be a condemnation of the attention economy we’ve developed and its whole rotten ad-driven substrate, followed by a plea to return to some older visions of the web.

After the general election I felt both vindicated and weirdly distant. As I continued to work on the book it occurred to me that what the world needed, much more than a scholarly book or extended philippic, was a textbook or field guide that explained how to survive in this world of viral information flows and social media firehoses.

So in November I switched gears and began to write a textbook focused on what web literacy for stream culture looks like. What I found is that it had to be quick and tactical. Users are presented with hundreds of headlines and statements a day through social media, and asked to retweet or share that information with little or no background. Students need skills that help them get closer to the truth in the few minutes between when they see something and when they decide to share it. Conversations with researcher Sam Wineburg confirmed this need for quick and frugal fact-checking basics.

So I wrote this book: Web Literacy for Student Fact-Checkers. It’s still rough and unfinished in places, but it’s in a shape that’s suitable for classroom use.

I don’t mean it to replace what we do with critical thinking and web efforts around digital identity, making, and collaboration. But I think it fills a gap that I’m not seeing other resources address. And it’s a really important gap.

Here are some other formats:

MOBI (Kindle)
PressBooks XML

Comments and suggested edits are welcome, but for maximum efficiency I’d ask that you communicate comments about specific pages or passages using the web version of the book.

Narrative Neediness

Jesse Walker on how our need for narrative creates a market for both conspiracy theory and fake and slanted news:

For a lot of people, the real assumption that they bring to the news, even beyond their partisan affiliations, is an expectation of a smooth narrative. They expect news stories to look like the movies or TV shows that they’re familiar with. Even if they’re regular journalism consumers, the stories they remember best are these well done stories that tell a compelling narrative and make them feel like they’re watching a movie or TV show.

In reality, stories are messy and have real loose ends. That’s the real bias that readers have to combat, and it’s something that people in the media have to think about. Because, on the one hand we want to provide good, compelling narratives, but on the other hand, we don’t want people to think they live in this world that’s made up of these easy, compelling narratives. They don’t.

I used to teach statistical literacy, and narratives — even in this smaller sense — were the biggest problem. You’d take a stat like “Only 4% of college students are black males” and ask students to think about what that might mean statistically, and no matter how hard you tried to keep them inside the numbers for even a few minutes, they would race toward narratives. The conservative kids would rush toward “Well, maybe they are just underprepared, and that’s why…” The liberal kids would immediately start talking about how those students faced discrimination, or grew up in bad neighborhoods.

Lost in the debate: how much under-representation does that figure indicate? How much would you have to increase the participation rate to achieve an equitable result?

If you stop the students, already lost in their narratives, and ask them what that statistic says about equitable representation, they will tell you a variety of things: you need to increase participation by 94%, or get nine times as many black males into college. But of course, black males are about 6% of the population, so while the figure shows a severe race-based deficit — about 33% — it’s not nearly as large as what the students, on both sides of the partisan divide, read from that number.

And this matters, because the “black males aren’t in college” narrative is a pretty impoverished one. There are actually a lot of black male students in college. But which colleges? Do they persist? If not, why? How could we do better at supporting their needs and creating better opportunities? There are so many interesting and useful questions to ask.

Is this just confirmation bias by another name? I don’t think so. You could watch this process with students and statistics even where they had no pre-existing bias towards a result. Cancer rates in this country, for example, have skyrocketed: there are more people living with cancer than ever before. Give this to students and instantly it blossoms into a wide variety of compelling stories about water quality and plastic containers, or failure of people to take responsibility for themselves, or the good old days when people had home cooked meals, or any one of two dozen other stories. And you can watch students sometimes jump between contradictory narratives — half the time they just want to find a resting place in a narrative: which one is irrelevant.

Once the narrative is chosen, thinking stops, and you can almost see the students’ shoulders relax.

(A few seconds of thought will get you to a better answer: as five-year cancer survival rates increase and other causes of death decrease there are more people than ever living with cancer because our medical care is getting better. Of course, that’s not much of a story…).

That moment when the facts slot into a narrative eventually comes for everyone. It has to; we’re human and what we want is meaning. But I’m  interested in delaying its arrival, if only for a little bit. And the question I have is how we can orient our pedagogy and digital interfaces to increase that delay, and in the process construct some narratives that are a bit less tidy and a bit more useful.

Cleanup Time

Today’s photo investigation.


The big “story” now is that the Women’s March left a big mess and that’s awful, and they should have cleaned it up. Here’s the image — it’s shocking!

Well, this is almost too easy. There are two ways to do this. If you do a web search for “Women’s March signs snopes” you’ll find a Snopes article that debunks the right-hand photo, at least partially:


What the Snopes article says about that right-hand photo is that the signs were indeed left by the Women’s March, but these particular signs were left at the Trump International Hotel in D.C. as part of the protest. That’s why they are clustered together like that. Someone does have to clean them up, but it wasn’t routine littering. Additionally, the Park Service has remarked that the protest was tidier than previous events. While the Snopes article gives no single-word ruling, its presentation is close to what they usually call “Mixed”: partially true, but misleading in presentation.

Speaking of cleaning up, what about the photo on the left-hand side:


Well, you see that “alamy” watermark by the guy’s waist? I’m guessing this is stock photography. And stock photography, in general, isn’t shot and released the day after an event, so I’m thinking this was taken long before the march.

We’d like to right-click the photo and search by image, but my guess is that the two images pushed together won’t match anything. So let’s use the snipping tool.

Windows: Call up the “Snipping Tool”:


Use it to capture the piece of the photo you want to search for. Save it to somewhere you’ll remember:


Mac: Hold Command, Shift, and 4 together, then select what you want a screenshot of. Save it somewhere you’ll remember.

Now go to Google search and upload it, the way we’ve done with past photos.


Any of these results would probably be fine to click on, but I pick door number three, partly because it is so specific.

And when we do that, we have good luck. We get to the stock photo purchase page, where there is a full description:


We even get the date and location. It was shot seven years ago. So no, this was not from the march.


And… we’re done. Fake-a-rooni.

Road Trip

I like showing people how to debunk viral photos for a couple of reasons. First, it requires a small enough action that it can easily become a habit. You don’t need to do a lot of research or have a lot of knowledge.

Second, it shows how technological affordances (in this case Google Chrome’s right-click “Search by Image” function) work to create culture. We still need to make you curious about the photos you see, but that’s a whole lot easier if the technology makes checking things two steps instead of eighteen.

Finally, it’s fun.

In any case, the photo of the day:


So this is part of the whole “Bikers for Trump” meme. Bikers are supposedly coming by the hundreds of thousands to provide “security” for the inauguration.

I’ll leave the larger issues of this fascination with biker-based security aside and ask a simpler question. Is this a picture of bikers headed to the 2017 inauguration?

The answer? No. And it takes about 30 seconds to find out.

First, right-click (or Control-click on a Mac) on the image and select “Search Google for image”:


The Google search — for reasons known only to Google — will assume that the best name for this image is “Jesus”. Change it to “bikers”


Change the date (using the “Tools” button) so that results end in 2016. If we find that this picture existed in 2016, it’s pretty clear it isn’t of people headed to the inauguration in 2017. Let’s look at what we get:



While these are technically the dates that the pages containing the photo were published (not the publication date of the photo itself), the results are probably good enough for us to doubt the photo. We can be done here, in 30 seconds.

If you take about 30 seconds more you can do even better. On the second page of results we find a page from 2009:


We have Google Translate render that page, and find the image posted there in 2013. In the process we see that this is a photo that has been used by a number of biker groups, but is still relatively rare, and the earliest posting we can find is from a Czech forum.

So no, this is not a picture of Bikers for Trump.






The Impulse to Dive Deeper

This comes up in my feed today:


I go to retweet it, but stop. How do I know this is true? It’s a little alarm bell that goes off now when something seems just a little too perfect.

I right click on the image, search by image.



I look at the URLs, and I note the “/news/ann-arbor/” file structure, which makes me think this is local news. That’s promising, if this is an account of a local story. I click through.


This is gold. A local account from a local paper of a story that happened locally years ago. The photo has a credit, and we have more information.

And I’m more informed now. I first looked at this photo and my mind naively assumed it was the South. Not consciously, but subconsciously. As I read the story I learn more about early efforts to celebrate Martin Luther King, and gain new perspective on how dangerous that was in some parts of the country. Actually, I do a bit more than learn, because the story makes me tear up a bit. Read it yourself, and you’ll see what I mean.

The whole process here takes a few minutes, and that’s only because reading the article takes a bit of time. The process of finding the article took ten seconds. In the end, I moved from senseless retweeting to actually learning something about our history.

I think some people think this stuff — Google Reverse Image, doing a Google Scholar search, looking up whois information on sites — is all just so *small* compared to Big Questions and Critical Thinking etc etc etc. And maybe it is.

But if you can imagine a life of these little habits, each one of which pushes you to dig a little deeper, explore a bit more, dive in a little further — I believe this is the way we start to build a better sort of society, a better sort of digital practice. We start with these habits, we move outward to questions, and deeper into reading. But without the habits, you won’t even start.