The following is a provocation for #EngageMOOC. Thanks to Bonnie Stewart and the rest of the #EngageMOOC crew for inviting me to contribute.
When I was in my twenties I went to the doctor with a cough I believed was whooping cough due to the tell-tale “whoop” intake of breath that occurred after I had coughed myself blue. It came sporadically towards the end of the day, so when I went to the doctor one morning I had to describe it without him actually seeing what I was experiencing.
He asked me a lot of questions: how it felt, my medical history and habits. And one of the many questions he asked was whether I smoked. I said I did.
And though he asked other questions, that was basically it for the doctor. As far as he was concerned it was smoker’s cough and I should quit smoking. He prescribed some cough syrup and sent me on my way. I stared in amazement but took the script.
I went home, and it got worse. I went back in, and this time got a physician’s assistant.
But this time it went very differently. The assistant asked me what was wrong, and I said I thought I had whooping cough. Rather than proceed to other questions, he stopped and left the room, returning in a couple minutes.
“I just checked our notices,” he said, “And it looks like there was an outbreak of pertussis a little north of here. I’m going to put you on a broad spectrum antibiotic, and check back with you next week.”
And with that he wrote me out a script for erythromycin, and I was on my way. In a short time I was better.
“Recognizing” Fake News
Most educational approaches promoted as solutions to fake news look decidedly like the first doctor’s method. Take in everything you can about the item you are looking at, and see if you can recognize it for what it is. Take the Newseum’s “E.S.C.A.P.E. Junk News” method. Students are asked to look at stories and evaluate them along six multi-faceted dimensions:
Do the facts hold up? Look for information you can verify.

Who made this and can I trust them? Trace who has touched the story:

- Social media users

What's the big picture? Consider whether this is the whole story and weigh the other forces surrounding it:

- Current events
- Cultural trends
- Political goals
- Financial pressures

Who is the intended audience? Look for attempts to appeal to specific groups or types of people:

- Image choices
- Presentation techniques

Why was this made? Look for clues to the motivation:

- The publisher's mission
- Persuasive language or images
- Moneymaking tactics
- Stated or unstated agendas
- Calls to action

How is this information presented? Consider how the way it's made affects the impact:

- Image choices
- Placement and layout
Now, you might think a person filling out this exhausting battery of questions would make a good decision about what is credible and what is not. But research suggests otherwise. In fact, what we know from studies of expertise in many fields is that such exhaustive holistic assessments can make the evaluator more prone to error.
Why? Because in the end, any such list of attributes is going to point in many different and contradictory directions, and your exhausted mind — which cannot hold this much information in working memory at one time — will find a way to take a shortcut. Maybe it will choose to notice more salient features over less salient ones. Maybe it will fall back on racism, bigotry, stereotypes. Or resort to confirmation bias. Maybe it will just give up entirely.
In medicine, this can have deadly consequences, which is why physicians today are less likely to write down all available symptoms and look for the magic connect-the-dots disease, and more likely to walk down simple decision trees. “Do you have a family history of heart disease?” ends up early in that sequence if you walk in with chest pain, because depending on that answer, the questions will change. The questions have to change. If there is a family history of heart disease, and a patient is presenting with chest pain, the doctor is not going to spend much time messing around with questions about acid reflux. They are going to rule out some other simple causes and get you off to some tests.
Likewise, the physician’s assistant in my introductory story heard symptoms that might be pertussis and might be smoker’s cough. But rather than collecting as much information as possible and evaluating it holistically, he sought an answer to the one question that mattered: had there been an outbreak? Pertussis outbreaks are still fairly rare. If there was an outbreak, there was a decent chance what I had was whooping cough. If there was no outbreak, the chance that it was whooping cough was vanishingly small.
I’ve talked a lot in the past about how recent research by Sam Wineburg and Sarah McGrew demonstrated that effective fact-checkers “got off the page” they were evaluating, “using the network to check the network.” Historians and students in their study tried to evaluate whether something was credible by reading it closely. Fact-checkers, on the other hand, immediately opened other tabs and saw what Wikipedia or Google Scholar had to say about the source. The students and historians performed poorly using their method, with many unable to distinguish material put out by blatant political advocacy groups from material put out by widely respected professional groups. The fact-checkers’ methods, on the other hand, got them to the right answers in a fraction of the time.
But it’s not just about getting off the page. It’s also about asking the most important questions first, not getting distracted by salient yet minor details, and not becoming so overloaded by evaluation that your bias is given free rein.
The Fast and Frugal Logic of the Four Moves
Based on these concerns, I came up with the “four moves” of fact-checking and source verification. For students trying to ascertain the truth of a story on the web, less is often more. What students need are not long lists of attributes to weigh in some complex holistic calculus, but quick and directed moves that solve simple scenarios quickly and complex scenarios in a reasonable amount of time. The four moves I came up with were:
- Check for previous work. Most stories you see on the web have been either covered, verified, or debunked by more reputable sources. Find a reputable source that has done your work for you. If you can find that, maybe your work is done.
- Go upstream to the source. If you can’t find a rock-solid source that has done your verification and context-building for you, follow the story or claim you are looking at to its origin. Most stories shared with you on the web are re-coverage of some other reporting or research. Follow the links and get to the source. If you recognize the source as credible, your work may be done.
- Read laterally. If you have traced the claim or story or research to the source and you don’t recognize it, you will need to check the credibility of the source by looking at available information on its reliability, expertise, and agenda.
- Circle back. A reminder that even when we follow this process sometimes we find ourselves going down dead ends. If a certain route of inquiry is not panning out, try going back to the beginning with what you know now. Choose different search terms and try again.
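For readers who think in code, the fast-and-frugal ordering of the moves can be sketched as a short decision procedure: try the cheapest decisive check first, and only fall through to costlier ones. This is purely illustrative; the lookup tables and helper function below are hypothetical stand-ins for the human searching and judgment the moves actually involve, not a real fact-checking API.

```python
# Hypothetical stand-ins for human research steps (not a real API).
FACT_CHECKS = {"jennifer lawrence dead": "debunked"}      # move 1: previous work
UPSTREAM = {"angry employees study": "hsph.harvard.edu"}  # move 2: original source
KNOWN_CREDIBLE = {"hsph.harvard.edu"}                     # sources we already trust

def lateral_read(source):
    # Move 3 placeholder: in practice, open new tabs and read what
    # other sites say about this source; here we simply fail closed.
    return False

def four_moves(claim, retries=1):
    """Walk the moves in order, stopping at the first one that settles the claim."""
    for _ in range(retries + 1):          # move 4: circle back a bounded number of times
        if claim in FACT_CHECKS:          # move 1: check for previous work
            return FACT_CHECKS[claim]
        source = UPSTREAM.get(claim)      # move 2: go upstream to the source
        if source in KNOWN_CREDIBLE:
            return f"credible: {source}"
        if source and lateral_read(source):   # move 3: read laterally
            return f"vetted: {source}"
        claim = claim.lower().strip()     # crude "new search terms" before retrying
    return "unresolved"

print(four_moves("jennifer lawrence dead"))   # -> debunked
print(four_moves("angry employees study"))    # -> credible: hsph.harvard.edu
```

The point of the sketch is the ordering: each move is a cheap, decisive question, and most claims never reach the expensive later steps.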
The moves are structured so that you can quickly eliminate hoaxes and egregiously wrong stories. Here’s an example of how that might work. Say you see a story claiming that Jennifer Lawrence has died:
Well, has she died?
The ESCAPE method, like the RADCAB and CRAAP methods before it, asks you to look at the evidence, think about the source, and consider the context, audience, purpose, tone, grammar, style, and so forth to figure out whether this is reliable.
The four moves, on the other hand, answer simple questions first. If Jennifer Lawrence has died, there should be wall-to-wall coverage of that, right? So check for previous work, and look to see if reliable outlets are covering this. If she is dead, they will be. If she is not, they won’t. Lo and behold, if we type [[Jennifer Lawrence dead]] into Google News we don’t find any stories about her dying:
That’s done in five seconds. But of course not all questions are that easy. Consider the following image found floating around Pinterest:
A short search of Google News doesn’t show any relevant coverage of the study. So let’s go to the next level and see if we can find the source.
There’s a link typed on the image, but rather than deal with that we do a Google search and immediately hit the jackpot: a result on the issue that comes from Harvard.
We can go to that page and find the study is more nuanced than what is presented in the image (and also that there is no race data in the study — the image here is purely an attempt to use the trope of the “angry black woman” to get more clicks).
Consider the difference had we spent time looking at the provider of the image for bias, tone, evidence, purpose and the like (a futile process we call “fact-checking the mailman”). By following the moves we quickly get to the most authoritative source for the fact and work from there, in the original context.
Finally, if we don’t trust the original source, or have never heard of it, we can “read laterally” (a term borrowed from Wineburg’s group) and find out more about the organization or publisher. Here we do a quick read on the source of the above study:
We find that the URL is correct and that the school is considered a leading school of public health in the United States.
Recognition is Futile
These are just a couple of the simplest examples. As you will see in the text of Web Literacy for Fact-Checkers, the moves can be used in more complex scenarios as well. Reading laterally can involve checking out an expert’s publication record, for example. Tracing a photo upstream to the source might involve reverse image search. Certain site-specific searches can make finding previous work easier.
But all these techniques tend to avoid any complex tasks of recognition or similarity. Recognition is futile for a number of reasons. It overloads cognitive capacity, draws attention to easily manipulable surface features, and makes solving even the simplest problems time-consuming. Additionally, as surface features become easier and easier to fake, recognition as a model will become less and less useful.
We have only scratched the surface of the Four Moves approach here, and, it being just short of 2 a.m. on a Saturday night, we’ll have to stop for now. I hope you’ll explore both the textbook and the activities blog.
But the broad point here is to move away from the idea that we want students to “recognize” fake news. Instead we want to give students a process to quickly verify and contextualize news. That’s a very different approach than what we’ve typically done in the past, but it’s a shift we have to make if we want to empower our students to make the fast and accurate assessments the current information environment requires.