A Suggested Improvement to Facebook’s Information Panel: Stories “About” Not “From”

Facebook has a news information panel, and I like it lots. But it could be better. First, let’s look at where it works. Here it is with a decent news source:

[Screenshot: Facebook’s information panel for a mainstream news source]

That’s good. The info button throws the Wikipedia page up there, which is the best first stop. We get the blue checkmark and some categorization. I can see its location and its founding date, both of which are quick signals to orient me to what sort of site this is.

There’s also the “More from this website” section. Here’s where I differ with Facebook’s take. I don’t think this is a good use of space. Students always think they can tell what sort of site a site is from the stories it publishes. I’m skeptical. I know that once you know a site is fake, suddenly of course you feel like you could have told from the sidebar. But I run classes on this stuff and watch faculty and students try to puzzle this out in real time, and frankly they aren’t very good at it. If I lie and hint to them that it’s a good site, bad headlines look fine. If I lie and hint that it’s a bogus site, real headlines look fake. It’s just really prone to reinforcing initial conceptions.

I get that what’s supposed to happen is that users see the stories and then click that “follow” button if they like what they see. But that’s actually the whole “pages” model that burned 2016 down to the ground. You should not be encouraging people to follow things before they see what other sites say about it.

Take this one — this says RealFarmacy is a news site, and there’s no Wikipedia page to really rebut that. So we’re left with the headlines from RealFarmacy:

[Screenshot: Facebook’s information panel for RealFarmacy, showing only headlines from the site]

OK, so of course if you think this site is bogus beforehand, it is so clear what these stories mean about the site. But look, if you clicked it, it’s because a site named RealFarmacy seemed legit to you. These headlines are not going to help you — and if the site plays its cards right, it’s really easy to hack credibility into this box by altering its feed and interspersing boring stories with clickbaity ones. It’s a well-known model, and Facebook is opening itself up to it here.

A better approach is to use the space for news about the site. Here’s some news about RealFarmacy.com:

[Screenshot: Google News results about RealFarmacy.com]

Which one of these is more useful to you? I’m going to guess the bottom one. The top one is a set of signals that RealFarmacy is sending out about itself. It’s what it wants its reputation to be. The bottom one? That’s what its reputation is. And as far as I can tell it’s night and day.

This is why the technique we show students when investigating sites is to check Wikipedia first, but, if that doesn’t give a result, check for Google News coverage on the organization second. Step one, step two. It works like a charm and Facebook should emulate it.
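
If you want a sense of how little machinery that two-step process takes, here is a rough Python sketch of it, using the public Wikipedia search API and the Google News RSS feed. The function names and the response handling are my own illustration, and this is obviously not anything Facebook ships.

```python
# A rough sketch of the "Wikipedia first, news coverage second" lookup,
# using the public Wikipedia search API and the Google News RSS feed.
# Function names and response handling are illustrative only.
import json
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET


def wikipedia_article_url(org_name):
    """Step one: return a Wikipedia article URL for the organization, or None."""
    params = urllib.parse.urlencode({
        "action": "query", "list": "search", "srsearch": org_name,
        "srlimit": 1, "format": "json",
    })
    with urllib.request.urlopen(f"https://en.wikipedia.org/w/api.php?{params}") as resp:
        hits = json.load(resp)["query"]["search"]
    if not hits:
        return None
    return "https://en.wikipedia.org/wiki/" + hits[0]["title"].replace(" ", "_")


def news_about(org_name, limit=5):
    """Step two: return (title, link) pairs of recent news coverage about the organization."""
    url = "https://news.google.com/rss/search?q=" + urllib.parse.quote(org_name)
    with urllib.request.urlopen(url) as resp:
        root = ET.parse(resp).getroot()
    items = root.findall("./channel/item")[:limit]
    return [(item.findtext("title"), item.findtext("link")) for item in items]


if __name__ == "__main__":
    source = "RealFarmacy"
    article = wikipedia_article_url(source)
    print("Step one, Wikipedia:", article or "no article found")
    if article is None:
        print("Step two, news about the source:")
        for title, link in news_about(source):
            print(" -", title, link)
```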

Can you do this for everything? Sure. In fact, for most legit sites the “News about this source” panel would be boring, but boring is good!

[Screenshot: Google News results about the Tribune]

Just scanning this I can see the Tribune has a long history and is probably a sizable paper. That’s not perfect, but it’s probably more useful than the feed that tells me they have a story about a fraternity incident at Northwestern.

This won’t always work perfectly — occasionally you’ll get the odd editorial calling CNN “fake news” at the top. And people might overly fixate on Bezos’s appearance in coverage about the WaPo. But those misfires are probably a fair price for filling the data void that Wikipedia leaves around a lot of these clickbait sites with news results that give some insight into the source. Pulling the site’s feed is an incomplete solution, and one that bad actors will inevitably hack.

Anyway, the web is a network of sites, and reputation is on the network, not the node. It’s a useful insight for programmers, but a good one for readers too. Guide users towards that if you want to have real impact.

GIFs as Disinformation Remedy (aka The GIF Curriculum)

Earlier today Alexios Mantzarlis tweeted a GIF by @PicPedant that demonstrates, in a precise way, that a particular photo is fake:

[GIF by @PicPedant debunking the photo]

This is interesting because recently I’ve been moving to GIFs myself for explanations. Here are some demonstrations of our techniques, for example:

Check for other coverage (in this case with a good result):

[GIF: checking for other coverage of a Manafort claim]

Check what a person is verified as (in this case, a reporter):

[GIF: checking what a person is verified as]

Check what a site is about (in this case white supremacy):

[GIF: investigating what a site is about]

And so on. Part of this is related to my and Scott Waterman’s Rita Allen proposal to generate short instructional sequences to post in the comment threads of fake material (i.e., how to fact-check) rather than just Snopes-ing people (i.e., “you’re wrong”). Part is based on the year and a half I’ve now spent making short, 10-second YouTube demonstrations of how to check various things.

But bringing it to GIFs is also based on some conversations I have had with Aviv Ovadya and others about what Web Literacy at Scale looks like. (“Web Literacy at Scale” is Aviv’s term for a project he is working on.) One thing I’ve been thinking about (again, inspired by the insights of others at MisinfoCon) is that misinformation works on the user who doesn’t click through on links, which means media literacy efforts have to (at least in part) not require clicking through on links. If the disease thrives in a click-through-free environment and learning about the cure requires click-through — well, you’ve maybe already lost there.

Media literacy GIFs, then, work on a few levels. First, like disinformation, they operate just by being in the feed, requiring no user action, reducing the asymmetry of problem and remedy. Just as importantly, the shortness of the GIFs shows that online media literacy need not (in many cases, at least) be complex. If you could have checked it in five seconds and you didn’t, it’s not that you’re not smart enough — it’s that you just didn’t care enough. It’s a moral failing, not an intellectual one, and that framing allows us to more cleanly enforce reputational consequences on those who are reckless.

I don’t know how this affects my and Scott Waterman’s Rita Allen proposal, which involves a similar effort to show people how to fact-check rather than berate them for being wrong, but which planned to use short sequences of screenshots. That framing paper is due Tuesday, so I guess I have to figure it out by then. But for the moment I’m contemplating the benefits of a media literacy campaign that thrives in the same link-averse, passive-consumption playgrounds as the problem it is trying to address.

A Short History of CRAAP

Update: I recently learned that this post has been selected for inclusion in a prestigious ACRL yearly list. Newcomers unfamiliar with our work may want to check out SIFT, our alternative to CRAAP, after reading the article.

I reference the history of the so-called “checklist approaches” to online information literacy from time to time, but haven’t put the history down in any one place that’s easily linkable. So if you were waiting for a linkable history of CRAAP and RADCAB, complete with supporting links, pop open the champagne (Portland people, feel free to pop open your $50 bottle of barrel-aged beer). Today’s your lucky day.

Background

In both undergraduate education and K-12, the most popular approach to online media literacy of the past 15 years has been the acronym-based “checklist”. Prominent examples include RADCAB and CRAAP, both in use since the mid-00s. The way these approaches work is simple: students are asked to choose a text, and then reflect on it using the acronym/initialism as a prompt.

As an example, a student may come across an interactive fact-check of the claim that reporters in Russia were fired over a story they did that was critical of the Russian government. It claims that a prominent critic of the Kremlin, Julia Ioffe, has made grave errors in her reporting of a particular story on Russian journalists, and it goes further, detailing what it claims is a pattern of controversy:

[Screenshot: the interactive fact-check]

We can use the following CRAAP prompts to analyze the story. CRAAP actually asks the students to ponder and answer 27 separate questions before they can label a piece of content “good”, but we’ll spare you the pain of that and abbreviate here:

  • Currency: Is the article current? Is it up-to-date? Yes, in this case it is! It came out a couple of days ago!
  • Relevance: Is the article relevant to my need for information? It’s very relevant. This subject is in the news, and the question of whether Russia is the authoritarian state so many people claim it to be is vital to understanding what our policies should be toward Russia, and to what it might mean to want to emulate Russia in domestic policy toward journalists.
  • Accuracy: Is the article accurate? Are there spelling errors, basic mistakes? Nope, it’s well written, and very slickly presented, in a multimedia format.
  • Authority: Does it cite sources? Extensively. It quotes the reporters, it references the articles it is rebutting.
  • Purpose: What is the purpose? It’s a fact-check, so the purpose is to check facts, which is good.

Having read the whole thing once and read it again thinking about these questions, maybe we find something to get uneasy about, 20 minutes later. Maybe.

But none of these questions get to the real issue, which is that this fact check is written by FakeCheck, the fact-checking unit of RT (formerly Russia Today), a news outfit believed by experts to be a Kremlin-run “propaganda machine”. Once you know that, the rest of this is beside the point, a waste of student time. You are possibly reading a Kremlin-written attack on a Kremlin critic. Time to find another source.

We brought a can opener to a gunfight

Having gone through this exercise, it probably won’t shock you that the checklist approach was not designed for the social web. In fact, it was not designed for the web at all. 

The checklist approach was designed – initially – for a single purpose: selecting library resources on a limited budget. That’s why you’ll see terms like “coverage” in early checklist approaches — what gets the biggest bang for the taxpayer buck? 

These criteria have a long history of slow evolution, but as an example of how they looked 40 years ago, here’s a bulletin from the Medical Library Association in 1981. First it states the goal:

In December 1978, CHIN held a series of meetings of health care professionals for the purpose of examining how these providers assess health information in print and other formats. We hoped to extract from these discussions some principles and techniques which could be applied by the librarians of the network to the selection of health materials.

And what criteria did they use?


During these meetings eight major categories of selection criteria for printed materials were considered: accuracy, currency, point of view, audience level, scope of coverage, organization, style, and format.

If you read this article’s expansions on those categories, you’ll see the striking similarities to what we teach students today, as a technique used not to decide how best to build a library collection, but to sort through social media and web results.

Again, I’ll repeat: the criteria here are from 1978, and other more limited versions pre-dated that conference significantly.

When the web came along, librarians were faced with another collections challenge: if they were going to curate “web collections” what criteria should they use?

The answer was to apply the old criteria. This 1995 announcement from information superhighway library CyberStacks was typical:

Although we recognize that the Net offers a variety of resources of potential value to many clientele and communities for a variety of uses, we do not believe that one should suspend critical judgment in evaluating quality or significance of sources available from this new medium. In considering the general principles which would guide the selection of world wide web (WWW) and other Internet resources for CyberStacks(sm), we decided to adopt the same philosophy and general criteria used by libraries in the selection of non-Internet Reference resources (American Library Association. Reference Collection Development and Evaluation Committee 1992). These principles, noted below, offered an operational framework in which resources would be considered as candidate titles for the collection

Among the criteria mentioned?

  • Authority
  • Accuracy
  • Recency
  • Community Needs (Relevance)
  • Uniqueness/Coverage

Look familiar?

It wasn’t just CyberStacks of course. To most librarians it was just obvious that whether it was on the web or in the stacks the same methods would apply.

So when the web came into being, library staff, tasked with teaching students web literacy, began to teach students how to use collection development criteria they had learned in library science programs. The first example of this I know of is Tate & Alexander’s 1996 paper which outlines a lesson plan using the “traditional evaluation criteria of accuracy, authority, objectivity, currency, and coverage.” 


(an image from a circa 2000 slideshow from Marsha Tate and Jan Alexander on how to teach students to apply library collection development criteria to the web)

It’s worth noting that even in the mid-1990s, research showed the checklist approach did not work as a teaching tool. In her 1998 research on student evaluation of web resources, Ann Scholz‐Crane observed how students used the following criteria to evaluate two web sites (both with major flaws as sources):

[Image: the evaluation criteria used in the study]

She gave the students the two websites and asked them to evaluate them (one student group with the criteria and one without). She was clear to the students that they had the entire web at their disposal to answer the questions.

The results… were not so good. Students failed to gather even the most basic information about the larger organization producing one of the sites. In fact, only 7 of 25 students even noted that a press release on an organization’s website was produced by the organization, which should have been considered an author. The oversight was all the more concerning because the press release outlined research the organization itself had done. The students? They saw the relevant author as the contact person listed at the bottom of the press release. That was what was on the page, after all.

(If this sounds similar to the FakeCheck problem above — oh heck, I don’t even have snark left in me anymore. Yeah. It’s the same issue, in 1998.)

What was going on? Noting a major difference between how the expert evaluators approached the site and how the students did, Scholz‐Crane writes:

No instances were found where it could be determined that the students went outside the original document to locate identifying information. For example, the information about the author of Site A that appeared on the document itself was short phrase listing the author as a regular contributor to the magazine… however a link from the document leads the reader to a fuller description of the author’s qualifications and a caution to remember that the author is not a professional and serves only as a friend/mentor. None of the students mentioned any of the information contained in the fuller description as part of the author’s qualifications. This is in stark contrast to the essay evaluations of the expert evaluators where all four librarians consulted sources within the document’s larger Web site and sources found elsewhere on the Web.

Worse, although the checklist was meant to provide a holistic view of the document, most students in practice focused their attention on a single criterion, though which criterion varied from student to student. The supposed holistic evaluation was not holistic at all. Finally, the control group showed that students without the criteria were already using the same criteria in their responses: far from being a new way of looking at documents, it was in fact a set of questions students were already asking themselves about documents, to little effect.

You know how this ends. The fact that the checklist didn’t work didn’t slow its growth. In fact, adoption accelerated. In 2004, Sarah Blakeslee at California State University noted the approach was already pervasive, even if the five terms most had settled on were not memorable:

Last spring while developing a workshop to train first-year experience instructors in teaching information literacy, I tried to remember off the top of my head the criteria for evaluating information resources. We all know the criteria I’m referring to. We’ve taught them a hundred times and have stumbled across them a million more. Maybe we’ve read them in our own library’s carefully crafted evaluation handout or found one of the 58,300 web documents that appear in .23 seconds when we type “evaluating information” into the Google search box (search performed at 11:23 on 1/16/04).

Blakeslee saw the lack of memorability of the prompts as a stumbling block:

Did I trust them to hold up a piece of information, to ponder, to wonder, to question, and to remember or seek the criteria they had learned for evaluating their source that would instantly generate the twenty-seven questions they needed to ask before accepting the information in front of them as “good”? Honestly, no, I didn’t. So what could I do to make this information float to the tops of their heads when needed?

After trying some variations in order of Accuracy, Authority, Objectivity, Currency, and Coverage (“My first efforts were less than impressive. AAOCC? CCOAA? COACA?”), a little selective use of synonyms produced the final arrangement, in a handout that quickly made its way around the English-speaking world. But the criteria were essentially the same as they were in 1978, as was the process:

[Image: the CRAAP evaluation chart from the handout]

And so we taught this and its variations for almost twenty years even though it did not work, and most librarians I’ve talked to realized it didn’t work many years back but didn’t know what else to do.

So let’s keep that in mind as we consider what to do in the future: contrary to public belief, we did teach students online information literacy. It’s just that we taught them methodologies that were developed to decide whether to purchase reference sets for libraries.

It did not work out well.

The Fast and Frugal Needs of the Online Reader

I’m writing a couple framing documents for some events coming up. This is one that I’m still drafting, but I thought I’d throw the draft up and take any comments. Note that this is already at max length. Also, one site name has been removed in an effort to not attract the trolls. And citations haven’t been added (although I will need to be strategic with what to cite given length limits).

The Fast and Frugal Needs of the Online Reader

Many educators believe the solution to our current dilemma is to get our students to think more about the media that reaches them. But what if the solution was to get them to think less?

I say this partially to be provocative: as I’ll make clear momentarily, thinking “less” is meant here in a very specific way. But it’s evident that many of the sorts of academic investigations for which we train our students are a poor fit for the decentralized, networked environment of the web. Years ago, I used to teach students to look deeply at documents, and wring every last bit of signal out of them, performing a complex mental calculus at the end: does the story seem plausible? Does the language seem emotional? What is the nature of the arguments? Any logical fallacies? Is it well sourced? Does it look like a professional website?

Such approaches fail on the web for a number of reasons:

  • Students are really bad at weighting the various signals. A clickbait-ish ad on the sidebar can cause students to throw out the website of a prestigious news organization, while the clean interface of xxxxxx, a prominent node in the disinformation ecosystem, engenders confidence.
  • They don’t work under time pressure and at volume. Most of the online information literacy taught in higher education is built around students choosing six or seven resources for a term paper, where students may have several hours to spend vetting resources. In the real world we have minutes at most, seconds at worst.
  • Engagement with these sources is problematic in itself. The traditional academic approach is to evaluate resources by reading them deeply. This is a profoundly inappropriate tactic for disinformation. There is ample evidence that reading disinformation, even with a critical eye, can warp one’s perspective over time. From the perspective of bad actors, if you’re reading the site, they’ve won.

At the college level, online information literacy methods initially grew out of slow methods of print evaluation. Methods still in use, such as CRAAP and RADCAB, grew out of decades-old procedures to evaluate the purchase of print materials for library collections. It’s time to develop new methods and strategies suited to the speed, volume, and uncertainty of the social web.

Heuristics provide an alternate solution

In cases where individuals must make quick decisions under complexity and uncertainty, rules of thumb and quick standard procedures have been shown to outperform deeper analysis. Competent decision makers reduce the factors they look at when making decisions, and seek information on those limited factors consistently and automatically. This pattern applies to online information literacy as well.

As one example, imagine a student sees breaking news that Kim Jong-un has stated that any future nuclear attack would focus on Los Angeles. The student could look at the source URL of the story and the about page, examine the language, see if the spelling exhibits non-native markers, and check whether the other stories on the site are plausible. Alternatively, she could apply a simple heuristic: big, breaking stories will be reported by multiple outlets. She selects some relevant text from the page and right-clicks her way to a Google News search. If she’s met with a number of outlets reporting this, she takes it seriously. If she doesn’t see that, she dismisses it for the time being.

This process takes five seconds and can be practiced on any story that fits this description. It makes use of the web as a network of sites that provides tools (like Google News search) to make quick assessments of professional consensus. In this example, should the story turn out to be well reported, the results present you with an array of resources that might be better than the one that originally reached you, putting you on better footing for deeper analysis. In part, we reduce the complexity of initial vetting so that students can apply that cognitive energy more strategically.
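
To show how mechanical the heuristic is, here is a rough Python sketch of it: take a bit of headline text, run it through Google News’ public RSS search feed, and count distinct outlets. The three-outlet threshold and the parsing details are my own illustrative assumptions, not part of any tool we teach with.

```python
# A rough sketch of the "big stories get reported by multiple outlets" check,
# using Google News' public RSS search feed. The three-outlet threshold is an
# arbitrary illustrative choice.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET


def outlets_reporting(claim_text, max_items=25):
    """Return the set of outlet names found in a Google News search for claim_text."""
    url = "https://news.google.com/rss/search?q=" + urllib.parse.quote(claim_text)
    with urllib.request.urlopen(url) as resp:
        root = ET.parse(resp).getroot()
    outlets = set()
    for item in root.findall("./channel/item")[:max_items]:
        source = item.find("source")  # Google News items carry a <source> element
        if source is not None and source.text:
            outlets.add(source.text.strip())
    return outlets


def quick_check(claim_text, threshold=3):
    """Crude first pass: is this being reported by several distinct outlets?"""
    count = len(outlets_reporting(claim_text))
    if count >= threshold:
        return f"Reported by {count} outlets; worth taking seriously."
    return "Little or no other coverage found; set it aside for now."


if __name__ == "__main__":
    print(quick_check("Kim Jong-un says future nuclear attack would focus on Los Angeles"))
```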

Online information literacy built for (and practiced on) the web

Over the past year and a half, the project I am involved with has been developing a toolbox of techniques like this, relentlessly honing them and testing them with college students and professors. Our techniques tend to fall into various buckets that are not entirely different from traditional online information literacy concerns: Check for other coverage, find the original, investigate the publisher. But unlike much information literacy, these concerns are bonded to quick web-native techniques that can be practiced as habits.

The techniques encode expert understandings of which the students may not even be aware: our Google News search technique, for example, is meant to filter out much of the echo-system that Kate Starbird’s team has investigated. It does that by using a somewhat more curated search (Google News) instead of a less curated one (Google). While teaching the students about the echo-system may be beneficial, they don’t need to know about it to benefit from the technique.

What do we need to do as a community?

Despite the novelty of the problems and solutions, our process for developing educational approaches grows out of standard educational design.

  • Start with the scenarios. Any reasonable strategy we teach students to sort and filter media must grapple with authentic scenarios and work backwards from there. What do we expect the student to be able to do, under what conditions? How does that inform the strategies we provide?
  • Address the knowledge gaps the process foregrounds. Media literacy and news literacy remain important, but broad theories of media impact are often less helpful than domain-specific information students need to quickly sort media they encounter. What is a tabloid? A think tank? What is the nutraceutical industry and how do they promote medical misinformation? How does state-funded media differ from state-controlled media?
  • Learn to value efficacy over accuracy. Higher education is not known for valuing speed and simplicity over deeper analysis, but if we wish to change the online behavior of students these need to be core concerns. Any approach to uncertainty which seeks 100% accuracy is wrong-headed, and likely harmful.
  • Use the web as a web. For years, approaches have treated the web as an afterthought, just another domain in which to apply context-independent critical thinking skills. But the problems of the web are not the problems of the print world, and the solutions the web provides are distinct from print culture solutions. Approaches need to start with the affordances of the web as the primary context, pulling from older techniques as needed.

In short, develop new solutions native to the context and the problem that are lightweight enough that they have a chance of being applied after the student leaves the classroom. These are not new educational insights, but are ones we need to turn towards more fully if we wish to make a difference.

Tribalism is a cognitive shortcut. Addressing it requires better shortcuts.

Beautiful essay this week by Zeynep on Politico, six paragraphs you should read to become a smarter human. But I just want to point to something in paragraph three very relevant to media literacy:

Deluged by apparent facts, arguments and counterarguments, our brains resort to the most obvious filter, the easiest cognitive shortcut for a social animal: We look to our peers, see what they believe and cheer along.

There’s a nugget in here I wish more people would dig into, and that’s that tribalism is a filter. In many cases, it’s not a bad one. In a perfect world, you’d want to hear from experts who also share the values of your tribe, because decisions sit at the intersection of values, expertise, and experience. Expertise alone, uninformed by relevant values, can be very thin gruel.

In our current situation, of course, that’s not what’s happening. In a variety of domains, tribalism has become a reality distortion field that prevents us from drawing on relevant expertise at all. But instead of seeing tribalism as an insurmountable force of nature, the better view is to see it as a useful heuristic that has ceased to work in certain situations.

The question is not “can media literacy strategies overcome tribalism?” It can’t be that because tribalism is a media literacy strategy. The question, then, is whether we can provide any strategies quick and efficient enough that they compare favorably to tribalism.

Don’t get me wrong — tribalism is easy and fun, emotionally and socially rewarding. I don’t think you’ll find a strategy faster than it. Zeynep’s point that it’s the “easiest shortcut for a social animal” is right on target.

But if you want to replace it, you have to give students digital literacy strategies that compare favorably to it: ones that are designed to meet this need for quick filtering and evaluation of what reaches us through the stream. It has to be speedy, and it has to be low cognitive effort. It has to be focused less on “Is what I’m reading good?” and more on “Should I give this my attention at all?”

Focus on what tribalism provides people, and we have a chance. Bring a twenty-minute checklist to a fast and frugal heuristic fight and the battle is already lost.

P.S. In case people from outside my usual circle read this and ask, “Oh, but what would such strategies look like?”: this is a blog. Go to the main blog page and read the roughly 100 articles written on this exact topic over the past two years.

Stop Reacting and Start Doing the Process

Today’s error comes to you from a Tulsa NBC affiliate:

[Screenshot: the Tulsa NBC affiliate’s version of the story]

Of course, this was all the rage on Twitter as well, with many smart people tweeting the USA Today story directly:

[Screenshot: a tweet sharing the USA Today story]

It’s a good demonstration of why representativeness heuristics fail. Here’s the story everyone fell for:

[Screenshot: the fake USA Today story]

So let’s go through this — good presentation, solid-looking source. The headline is not curiosity-gap bait or directly emotional. The other news stories look legit. Named author. Recognizable source with a news mission.

Now the supporters of recognition approaches will point out that in the body of the article there is some weird capitalization and a punctuation mistake. That’s the clue, right?

[Screenshot: the article body, with “Kerosene” capitalized and a punctuation error]

When we look back, we can be really smart of course, saying things like “The capitalization of Kerosene and the lack of punctuation are typical mistakes of non-native speakers.” But in the moment as your mind balances these oddities against what is right on the page, what are your chances of giving that proper weight? And what would “proper weight” even mean? How much does solid page design balance out anachronistic spelling choices? Does the lack of clickbaity ads and chumbuckets forgive a missing comma? Does solid punctuation balance out clickbait stories in the sidebar?

Your chances of weighting these things correctly are pretty lousy. Your students’ chances are absolutely dismal. When actual journalists can’t keep these things straight, what chance do they have?

Take the Tulsa news site. Assuming that USA Today was probably a better authority on whether we still capitalize “kerosene” (which was once a brand name like Kleenex), the Tulsa writer rewrites the story and transcribes the misspelling faithfully while risking their entire career:

[Screenshot: the Tulsa rewrite, preserving the capitalization error]

We know looking at surface features doesn’t work. Recognition for this stuff is just too prone to bias and first impressions in all but an extremely small number of experts. And even most *experts* don’t trust recognition approaches alone — so, again, what chance do your students have?

How do our processes work, on the other hand? Really well. Here’s Check for Other Coverage, which has some debunks now but importantly shows that there is actually no USA Today article with this title (and has shown this since this was published).

And here’s Just Add Wikipedia, which confirms there is no such “usatoday-go” URL associated with USA Today.
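
For the curious, here’s a minimal sketch of what the URL half of that check looks like if you automate it: look the publication up on Wikidata, grab its official website (property P856), and compare domains. The lookup flow and the helper names are my own illustration of the idea, not a tool we actually use in class.

```python
# A minimal sketch of checking a suspect link against the official website
# listed for the publication on Wikidata (property P856 = official website).
# The flow and helper names here are illustrative assumptions.
import json
import urllib.parse
import urllib.request

WIKIDATA_API = "https://www.wikidata.org/w/api.php"


def wikidata_official_site(publication_name):
    """Look the publication up on Wikidata and return its official website, if listed."""
    search = urllib.parse.urlencode({
        "action": "wbsearchentities", "search": publication_name,
        "language": "en", "format": "json",
    })
    with urllib.request.urlopen(f"{WIKIDATA_API}?{search}") as resp:
        results = json.load(resp).get("search", [])
    if not results:
        return None
    claims_query = urllib.parse.urlencode({
        "action": "wbgetclaims", "entity": results[0]["id"],
        "property": "P856", "format": "json",
    })
    with urllib.request.urlopen(f"{WIKIDATA_API}?{claims_query}") as resp:
        claims = json.load(resp).get("claims", {}).get("P856", [])
    if not claims:
        return None
    return claims[0]["mainsnak"]["datavalue"]["value"]


def host_of(url):
    """Lowercased hostname with any leading 'www.' removed."""
    host = urllib.parse.urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host


def domain_matches(suspect_url, publication_name):
    """True/False if the domains match, or None if Wikidata has no listing."""
    official = wikidata_official_site(publication_name)
    if official is None:
        return None  # no listing; escalate to a human check
    # Deliberately strict: exact host match, so "usatoday-go.com" won't pass.
    return host_of(suspect_url) == host_of(official)


if __name__ == "__main__":
    print(domain_matches("https://usatoday-go.com/some-story", "USA Today"))
```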

Both of these take significantly less time than judging the article’s surface features, and, importantly, result in relatively binary findings less prone to bias concerns. The story is not being covered in anything indexed by Google News. The URL is not a known USA Today URL. Game, set, match. Done.

Can they fail? Sure. But here’s the thing — they’ll actually fail less than more complex approaches, and when they do fail (for instance, if the paper is not found in Wikipedia, or Wikipedia doesn’t list its URL) they still put you in good position for deeper study if you want it. Or, just maybe, if they don’t work in the first 30 seconds you’ll realize the retweet or news write-up can wait a bit. The web is abundant with viral material; passing on one story that is not quickly verifiable won’t kill you.

Safety Culture and the Associated Press

More journalistic mess-ups in the news today, this time from the Associated Press, which labeled director/producer Costa-Gavras as dead when he is very much alive. Via Alexios Mantzarlis, here’s a snapshot from yesterday (I think?) of the AP headline on the Washington Post, reporting a hoax that happened almost a week ago:

[Screenshot: the AP headline as it appeared on the Washington Post]

How did it happen? Well, a person claiming to be the Greek Minister of Culture posted a tweet saying there was breaking news to this effect — here is the account as it looked at the time of the tweet:

[Screenshot: the account as it looked at the time of the tweet]

And here is the tweet:

[Screenshot: the tweet]

Twenty minutes later this tweet followed:

[Screenshot: the follow-up tweet]

And soon the handle and picture were changed:

https://twitter.com/TDebNews/status/1035153092826869765

Now, in retrospect, people always say such things are obvious fakes. Why is this in English, for example? Why was she using this weird, badly lit press photo instead of either a personal photo or a government headshot?

But as I say over and over again, this is like changing lanes without checking your mirrors or doing a head check, getting in a collision, and then talking about all the reasons you “should have known the car was there.” Didn’t you think it odd no one had passed you? Didn’t you hear the slight noise of an engine?

None. Of. This. Matters.

I mean, it does matter. But the likely reason that you crashed is not that you didn’t apply your Sherlock powers of deduction. It’s that you didn’t take the 3 seconds to do what you should have.

Stop Reacting and Start Doing the Process

Last week I published an article that takes less than five minutes to read and shows how journalists and citizens can avoid such errors. I don’t mean to be egotistical here, but if you are a working reporter and haven’t read it, you should.

Here’s an animated gif from that.

[GIF: checking what an account is verified as]

See a tweet. Think about retweeting. Look for verification checkmark. See what they are verified as. Three seconds.

Now what if they are not verified? Or if you’ve never heard of the publication they are verified for?

Well, then you escalate. You start to pull out the bigger toolsets: Wikipedia searches, Google News scans on the handle, the kind of thing Bari Weiss missed a while back that led her to publish quotes from a hoax Antifa account in The New York Times. Those ever so slightly more involved procedures look like this:

[GIF: Wikipedia search and Google News scan on the handle]

If that doesn’t work, you escalate further. Maybe get into more esoteric stuff: the Wayback Machine, follower analysis. Or if you are a reporter, you pick up the phone.

It was Sam Wineburg who did me the biggest service when I was writing my textbook for student fact-checkers. He talked about the need to relentlessly simplify (which I was already trying to do, though he pushed me harder). But he also told me who I should read: Gerd Gigerenzer. And that changed a lot.

Gigerenzer’s work deals with risk and uncertainty, and how different professional cultures deal with these issues. What he finds is that successful cultures figure out what the acceptable level of risk is, then design procedures and teach heuristics that take big chunks out of that risk profile while remaining relatively simple. Airline safety culture is a good example of this. How much fuel do you put in a plane? There’s a set of easily quantifiable variables:

  1. fuel for the trip
  2. plus a five percent contingency
  3. plus fuel for landing at an alternate airport if required
  4. plus 30 minutes holding fuel
  5. plus a crew-determined buffer for extreme circumstances

(via Gigerenzer, 2014)
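
Just to underline how little calculation the rule of thumb demands, here it is as a tiny Python function. The structure (trip fuel, plus five percent, plus alternate, plus 30 minutes holding, plus the crew buffer) comes from the list above; the burn-rate numbers in the example are invented for illustration.

```python
# The fuel rule of thumb from the list above, as plain arithmetic.
# All example numbers below are invented for illustration only.
def required_fuel(trip_fuel_kg, alternate_fuel_kg, holding_burn_kg_per_min,
                  crew_buffer_kg=0.0):
    """Fuel load per the rule of thumb: trip + 5% contingency
    + alternate + 30 minutes holding + crew-determined buffer."""
    contingency = 0.05 * trip_fuel_kg          # plus a five percent contingency
    holding = 30 * holding_burn_kg_per_min     # plus 30 minutes holding fuel
    return trip_fuel_kg + contingency + alternate_fuel_kg + holding + crew_buffer_kg


# Example with made-up numbers: a 6,000 kg trip, 900 kg to an alternate,
# burning 40 kg/minute in a hold, and a 200 kg crew buffer.
print(required_fuel(6000, 900, 40, crew_buffer_kg=200))  # 8600.0 kg
```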

That last item is the crew buffer, but you’ll notice something about the other ones: they don’t really achieve precision. If you were to calculate the fuel needs of each flight individually, you could probably get closer to the proper fuel amount. I know, for example, that your chances of needing 30 minutes of holding fuel when landing at Portland (PDX) are probably dramatically lower than when heading to Philadelphia (PHL). And the five percent contingency is supposed to make up for miscalculations in things like windspeed, but it doesn’t account for the fact that different seasons and different flight paths have different levels of weather uncertainty.

The problem is that you can add those things back in, but now you’re reintroducing complexity back into the mix, and complexity pushes people back into error and bias.

That crew-determined buffer is important too, of course. If, after all the other factors are accounted for, the fuel amount still doesn’t seem sufficient, the crew can step up and add fuel (but, importantly, not subtract). But the rule of thumb doesn’t waste their energy on the basic fuel calculation — they save it for dealing with the exceptions, the weird things that don’t fit the general model, where nuance and expertise matter.

This is a long detour, but the point is this: rather than asking people to use individual judgment about dozens of factors, each weighted with careful precision, what Gigerenzer calls “positive risk” cultures decide the acceptable level of risk for a given endeavor, then work together to design simple procedures and heuristics that, if followed, encode the best insights of the field when applied within the domain. At the end of the procedures there’s the buffer — you’re free to look at the result and think “the heuristic just doesn’t apply here.” But you have to explain yourself, and what’s different here. You apply the heuristic first and think second.

Does the AP Have a Digital Verification Process?

There’s another important point about defining set processes: they can be enforced in a way that a general “be careful” can’t. We can start to enforce them as norms.

What is the AP process to source a tweet? Ideally, it would be some sort of short tree process — look for a blue checkmark. If found, do X, if not, do Y. The process would quickly sift out the vast majority of safe calls (positive or negative) in seconds.
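
To make the shape of such a tree concrete, here’s a toy sketch in Python. To be clear, this is not the AP’s actual protocol (I have no idea what theirs looks like); the fields and the escalation steps are just my own illustration of a checkmark-first tree.

```python
# A toy decision tree for sourcing a tweet -- an illustration of the idea,
# not any news organization's actual protocol.
from dataclasses import dataclass


@dataclass
class Account:
    handle: str
    verified: bool            # blue checkmark present?
    verified_as: str          # what the account is verified as, if anything
    claims_to_be: str         # who the tweet presents the account as


def sourcing_decision(account: Account) -> str:
    """First pass, measured in seconds: use, escalate, or pick up the phone."""
    if account.verified and account.verified_as == account.claims_to_be:
        return "Safe first pass: verified as who they claim to be. Proceed to normal sourcing."
    if account.verified:
        return "Verified, but as something else. Escalate: Wikipedia / Google News scan on the handle."
    return "Not verified. Escalate further: Wayback Machine, follower analysis, or pick up the phone."


if __name__ == "__main__":
    suspect = Account(handle="@fake_minister", verified=False,
                      verified_as="", claims_to_be="Greek Minister of Culture")
    print(sourcing_decision(suspect))
```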

That quick sifting is important, but just as important is the accountability it provides. Instead of looking at an error like this and discussing whether it was an acceptable level of error, we can start with the question “Was the process followed?” The nature of risk — as Gigerenzer reminds us — is that if you make no errors your system is broken, because you are sacrificing opportunity. So we shouldn’t be punishing people that just happen to be caught on the wrong end of a desirable fail rate.

But if a reporter risked the reputation of your organization because they didn’t follow a defined 5-second protocol — well that’s different. That should have consequences. These sorts of protocols exist elsewhere in journalism. Journalists aren’t accountable for lies sources tell them, but they are accountable for not following proper procedure around confirming veracity, seeking rebuttals, and pushing on source motivation.

Again, this isn’t meant to treat the heuristics or procedures as hard and fast laws. Occasionally procedures produce results so absurd you have to throw the rule book out for a bit. Experts do develop intuitions that sometimes outperform procedures. But the rule of thumb has to be the starting point, and expertise has to make a strong argument against applying it (or for applying a competing rule of thumb). And such “expert” deviations are clearly not what we are seeing here.

What Zeynep Said and Other Things

This is a wandering post, and it’s kind of meant to be — jotting down ideas I’ve been talking about here a while but haven’t pooled together (see here for one of my earlier attempts to bring Gigerenzer into it).

But there’s so much more I have to post, and not all of it is about journalism. Some of it, unsurprisingly, is about education.

A couple threads I’ll just tack on here.

First, I have been meaning to write a post on Zeynep Tufekci’s NYT op-ed on Musk’s Thailand cave idiocy. Here’s the lead-up:

The Silicon Valley model for doing things is a mix of can-do optimism, a faith that expertise in one domain can be transferred seamlessly to another and a preference for rapid, flashy, high-profile action. But what got the kids and their coach out of the cave was a different model: a slower, more methodical, more narrowly specialized approach to problems, one that has turned many risky enterprises into safe endeavors — commercial airline travel, for example, or rock climbing, both of which have extensive protocols and safety procedures that have taken years to develop.

And here’s the killer graf:

This “safety culture” model is neither stilted nor uncreative. On the contrary, deep expertise, lengthy training and the ability to learn from experience (and to incorporate the lessons of those experiences into future practices) is a valuable form of ingenuity.

Zeynep is exactly right here, but I’d argue that while academia has escaped some of the worst beliefs of Silicon Valley, we still often worship this idea of domain-independent critical thinking. Part of it is that we devalue more narrowly contextualized knowledge. And a big part of it is that we devalue relevant *professional* knowledge as insufficiently abstract or insufficiently precise. We laugh at rules of thumb as fine for the proles but not for us, with our peer-reviewed level of certainty.

But, as Zeynep argues, the process of looking at what competent people do and then encoding that expert knowledge into processes novices can use is not only a deeply creative act itself, but it forms the foundation on which our students will practice their own creativity. And if we could get away from the idea of our students as professors-in-training for a bit, maybe we could see that?

Also — something I’ll talk about later is how rules of thumb relate to getting past the idea that we seek certainty. We don’t. We seek a certain level of risk, and the two things are very different. Thinking in terms of rules of thumb and 10-second protocols not only protects against error, but it prevents the conspiratorial and cynical spiral that asking for academic levels of precision from novices can produce.

And finally — that “norm” thing about journalists? It’s true for your Mom too. Telling your Mom she’s posting falsehoods is probably not nearly as effective as telling her the expectation is that she do a 30-second verification process before posting. When norms are clear (don’t cut in line) they are more enforceable than when they are vague (don’t be a dick at the grocery store). Part of the reason for teaching short techniques instead of fuzzier “thinking about texts” is that expecting a source lookup on a tweet is socially enforceable in a way that a ten-point list of things to think about is not.

OK, sorry for the mess of a post. Usually after I write a trainwreck of a thing like this I find a more focused way to say it later. But thanks for sticking with this until the end. 🙂


A Roll-Up of Digipo Resources (4 September 2018)

One of the nice things about running a blog-fueled, grassroots, semi-funded initiative is the agility. The Digipo project has moved far and fast in the past year. But one of the bad things is that all the old blog posts are just a snapshot in time, and often out of date.

I’ve wanted to get everything updated and I will, but for the moment here’s a bunch of resources. Please note that if it is 2019 when you are reading this you should look for a more recent post.

Textbook

People love Web Literacy for Student Fact-checkers, and it continues to be the resource in broadest use. You can also download a PDF.

Prompts for Class: Four Moves Blog

The way I run my classes is to throw up prompts and have the students race to learn more about them in short frames of time. Sometimes we move on to the next one, and sometimes we have deeper discussions about disinformation or structural factors after that (this details the format).

Anyway, key to that class structure is the Four Moves Blog, which provides prompts for students to investigate. I just tell them to search for the prompt in the search box up top, and then investigate it. This avoids all the “What was that URL again?” awkwardness while also allowing a certain class fluency, in that we can react to what is working in the class rather than structure everything meticulously beforehand.

[Screenshot: a prompt from the Four Moves blog]

Slides and Lesson Plans for First Two Classes

While we play the classes after the first week a bit looser, the first two classes are pretty scripted. This is partially because we want to lay the right foundation, but also because we want to introduce the ideas without bumping up too much against the potential identity-threat issues this stuff can cause. So the first week mixes some serious stuff with some frivolous stuff.

Here are the slides: Class One, Class Two

Here is the Lesson Plan/Notes: Google Doc

Note that the notes are a little out of sync with the slides in some places. The notes are mainly there so you understand how to use the slides and the stories behind the examples; they’re not really a script.

The Canvas Course and the Blackboard Export

We have two to three weeks of online homework (activities, assessments, readings, videos) in Blackboard/Canvas.

If you look in Canvas Commons for a Citizen Fact-Checking module from me you should be able to import it into your class. I’ve heard some people have had some problems with that import and for others it’s gone well. Get back to me if you have problems and we’ll try to figure out what’s going on.

The Blackboard course export is here: Digipo-04-Sept-18. As usual, this is just my export of my materials. There are no warranties, and you should go through the material after import, prune it, and review it for errors.