So I forgot to report this, but I put together a team and submitted a proposal to the Rita Allen Misinformation Solutions Forum contest, and our project was chosen out of all the submissions as one of five finalists. I’ll be going to D.C. in October to pitch it in a competition for one of two prizes.
The project is named “Let me fact-check that for you”: a semi-automated, personalized guide generator for the “Wait! You’re wrong about that!” responder.
The tool is meant to empower current Snopes-ers to not just post links to alternative articles, but to post short, customized guides that show how they went about fact-checking the particular link, story, or image. Too often when someone in a comment thread debunks or contextualizes something, it’s just dueling links. No one learns how to check things any better. Our hope is to build a web service where you plug in a URL or image and click through a couple of decisions. Out the other end comes a shareable little five-second, screenshot-based guide showing how you might check that specific link or image.
The idea is loosely (very loosely) inspired by the old joke site “Let me Google that for you”, where you could plug in a question someone had asked you and it would create a little video of the process of Googling the answer. The idea with LMGTFY was partially to shame people into checking Google before bothering other people, but the other piece of it was to demonstrate that the time cost of consulting Google first was minimal. People were overestimating the cost of consulting Google; the little links were reminders.
People outside misinfo may not be aware of this, but there is a critique of the “people won’t fact-check because they love their own point of view” story that posits that people — to some extent — aren’t just choosing things they agree with because they like being right, but because it simply requires less effort than engaging in more accuracy-oriented behavior. Gordon Pennycook and David Rand, for instance, have an interesting paper on this idea, showing that people who score high on cognitive reflection (an appetite for effort) also show better headline discernment, even when headlines are ideologically aligned.
I’m not necessarily sold on the Pennycook and Rand version of this idea, but I’m interested in the broader insight. I know it doesn’t explain the worst offenders, but I’ve found among the people I work with that cynicism (“Pick what you want, it’s all bullshit!”) is often driven by the cognitive exhaustion of sorting through conflicting information. This insight also aligns with Hannah Arendt’s work — totalitarianism wins the information war by deliberately overwhelming the capacity of a population to reconcile endless contradictions. The contradictions are a tool to increase the cost of pursuing truth relative to other options.
If this is the case, one approach might be to encourage people to be more effortful when looking at online media. (Meh.) But the approach I favor is to reduce both the real and perceived cost of sorting through the muck by finding cheap, good-enough methods and popularizing them. Doing that — while fostering a culture that values accuracy — might cause a few more people to regard the cost of checking something as worth it relative to other seemingly more economical options like partisan heuristics, conspiracy thinking, or cynical nihilism.
As such, the methods our tool will demonstrate will be useful in themselves, since they decrease real cost by falling back on cognitively inexpensive techniques. But the bigger impact is just letting people see that the cost of weeding out the worst information is probably much lower than they imagine. By resetting these expectations, we can influence the behavior people choose.
As they say, it’s a theory. Anyway, let me know if you’ll be at the forum in October. I’d love to meet up. And if you’re working on something similar, let me know.