Useful thoughts on attention and information overload from 1971 (via Simon, Deutsch, Shubik)

Back in 2015, I was blogging less and using a homegrown personal wiki more. And I was thinking about this problem of collaboration and attention.

Going through my notes on the wiki from that time, I realized a bunch of my thinking had been formed by a book chapter from 1971 that I read in 2015, a transcription of a presentation and panel by Herbert Simon, Karl Deutsch, and Martin Shubik. Re-reading it I’m struck that for all its faults it provides insights that are even more relevant in 2018 than 2015. Here’s some ported notes and highlights:

Simon: a wealth of information = a scarcity of attention

Simon’s key contribution in the talk is to push the conversation from the idea of information overload (supply) to the problem of attention. And his key point is that as information increases, attention decreases:

[I]n an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention.

Simon: the cost of information is borne more by the consumer (in time) than the producer

Simon uses a news metaphor to make his point — the cost of information is mostly in the time to process it (when aggregated over many people) rather than produce it:

In an information-rich world, most of the cost of information is the cost incurred by the recipient. It is not enough to know how much it costs to produce and transmit information; we must also know how much it costs, in terms of scarce attention, to receive it. I have tried bringing this argument home to my friends by suggesting that they recalculate how much the New York Times (or Washington Post) costs them, including the cost of reading it. Making the calculation usually causes them some alarm, but not enough for them to cancel their subscriptions. Perhaps the benefits still outweigh the costs.

Simon: scarcity of attention must be a design principle for organizations and technology, but it is usually overlooked

The design principle that attention is scarce and must be preserved is very different from a principle of “the more information the better.” The aforementioned Foreign Office thought it had a communications crisis a few years ago. When events in the world were lively, the teletypes carrying incoming dispatches frequently fell behind. The solution: replace the teletypes with line printers of much greater capacity. No one apparently asked whether the IPS’s (including the Foreign Minister) that received and processed messages from the teletypes would be ready, willing, and able to process the much larger volume of messages from the line printers.

We overlook these things because we have a mythology of information poverty:

Our attitudes toward information reflect the culture of poverty. We were brought up on Abe Lincoln walking miles to borrow (and return!) a book and reading it by firelight.

Deutsch on the operations of attention

Deutsch makes some welcome corrections to Simon, who in many remarks not detailed above is far too trusting of technology. (The DDT example Simon uses is particularly painful).

Part of his point is that attention is really a series of operations, much bigger than just a spotlight of focus. A person who gives their attention has to (according to Deutsch):

  • recognize loosely what it is one should pay attention to (the target), such as things unfamiliar, strangers, or things that do not fit
  • track the object of attention, and keep attention on it
  • interpret the object and ask what it resembles
  • decide which response to the object is most appropriate, and what should be done about it
  • carry out the response
  • accept feedback, and learn from the results of the response whether it was the right one and how future responses should be corrected.

Deutsch argues that when you look at the whole cycle it involves not just attention but memory, and further, that the problem of filters is going to be solved partially by accepting some amount of redundancy. The reasoning is a bit complex, but familiar to people nowadays I think. Because institutional memory in organizations is expensive and a bit zero-sum, you need redundancy in organizations and a networked, less hierarchical approach to information. This in turn prevents relevant information from being eliminated due to single bottlenecks.

Shubik on optimum information systems

I have become a fan of Simon over the past few years, so the insights in his observations are not surprising to me (I’m more surprised by some of his oversights, actually). Shubik, on the other hand, hasn’t even been on my radar. But he’s good! Here he is on optimum information systems:

An optimum information system is not necessarily the one which processes the most data. An optimum system for protecting the average stockholder does not supply him with full, detailed financial accounts. In fact, one can easily swindle the unwary by supplying them with financial details and footnotes they do not understand. It is now possible to bombard a generally uncomprehending public with myriad details on pollution, the pros and cons of insecticides, the value and dangers of irrigation schemes, on-the-spot reports of rioting and looting, televised moon landings, suicides, murders, and historical prices of thousands of stocks and commodities.

Shubik on the coming of computer network-based mobs, and, maybe, Gamergate

So for people not hip to what 1971 was about, you had “time-sharing” computers — which were multi-user mainframes mostly — and monitors, and a lot of thought was put into what happened when TV gets hooked into systems that allowed instant two-way communication, feedback, and interactivity. Shubik wonders in particular what mobs look like when the virtual is felt as real and demagogic leaders pair the instant feedback of communications systems with the viscerality of a TV-based medium. It’s of course a weird version, based on tech of the time, but still an amazing quote:

Consider some of the possible dangers. What is the first great TV, time-sharing demagogue going to look like? How will he put to use such extra features of modern communications as virtually instantaneous feedback? When will a TV screen with the appropriate sensory feelings be able to portray the boss behind his mahogany desk (two thousand miles away) who fires or chastises his employee, and makes him feel just as small, and his palms just as clammy with sweat, as if he were in the room with him? When will the first time-shared riot occur? Orson Welles came close in the thirties with a fairly good radio panic. Current techniques for mob control require physical proximity. In the Brave New World, will we still regard a mob as a great number of closely packed people, or will isolated mobs interacting via TV consoles and operating over large areas be more efficient?

Oettinger summarizes Simon’s contribution

Anthony Oettinger summarizes Simon’s contribution nicely:

Simon has offered three very deep, important, fundamental principles that shed light on things I had not perceived clearly:

  1. attention is a scarce commodity
  2. information technology allows effort to be displaced from possession, storage, and accumulation of information to its processing, even if the information is located in the world itself rather than in the file
  3. filtering and organizing the environment for persons whose attention is scarce are critical.

It remains for others to apply these general principles to particular organizations and explore their political and economic implications.

Deutsch’s criticism of Simon

I have reservations about Simon’s enthusiasm, in the name of simplification and economy of thought, for throwing out vast amounts of what universities now teach. Much of what we learn in social science used to be interpreted against our knowledge of history. If we throw out too much historical data, many of our abstractions may lose meaning. A critical design problem for education is to determine the amount of memories from the past needed for producing and interpreting new information.

In general, Simon makes a very good case for the design aims of technology and education, but is not particularly good on technological prediction, whereas Shubik — even in asides — is incredibly prescient about technology and its risks. Deutsch, in turn, serves as a good corrective on Simon’s penchant for an absolute leanness of process and storage — believing that memory plays more of a role in effective processing than Simon will admit, and pushing the idea that a more conservative approach to change in the face of human systems may be warranted — slow down taking action when information is inconclusive. (Even here, the results are fascinating, with Deutsch using the example of how population is a more pressing issue than climate change, since the effects of overpopulation were well established but climate science murky).

The three parts, taken together, make interesting reading, even today — or, perhaps, especially today. You can check out the whole thing here.

We’re Thinking About This Backwards

One of my great loves is Dewey. I share his belief that an educated engaged populace is crucial to democracy and democracy is crucial to the profession of those teaching in democracies. I think part of what we need to do is make sure all citizens have the tools they need to sort news from noise and speak truth to power.

At the same time, often when I speak to people, the questions come up — sure, media literacy is good, but how do we reach the people who drop out of high school, the people who don’t go to college. And so on.

Have I said enough that this is important? Well, here’s the other shoe dropping: our most pressing problems are not caused by Joe or Jane the Mechanic not getting a GED. They are caused by people in power — mid-level gatekeepers and up — who are allowing institutions to be corrupted by misinformation.

You can leave all the complaints about that formulation in the comments. I’ll admit that elites have always been prone to elite misinformation (see Iraq, Vietnam, Climate Change). But I would assert that such history shows the disasters that result when institutions become corrupted. The current configuration feels unique to me and other misinfo experts I talk to. The speed and frequency with which lies are created god knows where and then pushed up the chain to people with professional and political power is what’s frightening. High school dropouts are not your problem. Trump-supporting gas station owners or conspiracy-minded baristas are not your problem. Your problem is the FBI agent consuming Twitter nonsense, the politician who not only uses disinfo but comes to believe it, and the blue-checkmarked mid-level elites that are unwitting vehicles pushing that stuff relentlessly into the view of those who act on it.

I started off with Dewey, and Jane the Mechanic. Here’s the relation. While mass education is good and should be pursued as a long-term solution, if I were going to target our online literacy efforts immediately and had a limited number of seats, I would aim them at everyone who will find their way to positions of influence. Politicians. Policy leads. Product managers at tech startups. Future FBI agents and social workers and department heads. I would look at the gears of democratic institutions — political, civic, administrative — and see who has their hands on the levers, from the mid-level bureaucrats to the top.

I’m committed to implementing our program broadly, but if you think the misinformation problem is Jane the Mechanic and her one vote every four years you’ve got it backwards. For immediate impact target those who make and enforce decisions and those who influence them, and stop scapegoating those without power or influence.

I need to think about this too in terms of how we grow the Digital Polarization Initiative. We’ve had good success in first-year programs, and we need to continue that. But the nature of our moment (and our limited resources) may require us to think about how to target this more efficiently as we expand. If people have some ideas of the sort of college programs we should be trying to get this training into, let me know in the comments. We probably need to also tap into the fact that the core of the American Democracy Project is still students who plan to go into those positions of influence and power and the faculty who teach them. More later I think.

A Suggested Improvement to Facebook’s Information Panel: Stories “About” Not “From”

Facebook has a news information panel, and I like it lots. But it could be better. First, let’s look at where it works. Here it is with a decent news source:

(screenshot: Facebook’s information panel for a mainstream news source)

That’s good. The info button throws the Wikipedia page up there, which is the best first stop. We got the blue checkmark there, and some categorization. I can see its location and its founding date, both of which are quick signals to orient me to what sort of site this is.

There’s also the “More from this website.” Here’s where I differ with Facebook’s take. I don’t think this is a good use of space. Students always think they can tell what sort of site a site is from the stories they publish. I’m skeptical. I know that once you know a site is fake then suddenly of course you feel like you could have told from the sidebar. But I run classes on this stuff and watch faculty and students try to puzzle this out in real time, and frankly they aren’t so good at it. If I lie and hint to them it’s a good site, bad headlines look fine. If I lie and hint it’s a bogus site, real headlines look fake. It’s just really prone to reinforcing initial conceptions.

I get that what’s supposed to happen is that users see the stories and then click that “follow” button if they like what they see. But that’s actually the whole “pages” model that burned 2016 down to the ground. You should not be encouraging people to follow things before they see what other sites say about it.

Take this one — this says RealFarmacy is a news site, and there’s no Wikipedia page to really rebut that. So we’re left with the headlines from RealFarmacy:


OK, so of course if you think this site is bogus beforehand it is so clear what these stories mean about the site. But look, if you clicked it it’s because a site named RealFarmacy seemed legit to you. These headlines are not going to help you — and if the site plays its cards right, it’s really easy to hack credibility into this box by altering its feed and interspersing boring stories with clickbaity ones. It’s a well known model, and Facebook is opening itself up to it here.

A better approach is to use the space for news about the site. Here’s some news coverage about RealFarmacy:

Which one of these is more useful to you? I’m going to guess the bottom one. The top one is a set of signals that RealFarmacy is sending out about itself. It’s what it wants its reputation to be. The bottom one? That’s what its reputation is. And as far as I can tell it’s night and day.

This is why the technique we show students when investigating sites is to check Wikipedia first, but, if that doesn’t give a result, check for Google News coverage on the organization second. Step one, step two. It works like a charm and Facebook should emulate it.
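For the programmers in the audience, that two-step lookup order is simple enough to sketch. The function and data below are purely illustrative stand-ins (real versions would call the Wikipedia and Google News APIs); the point is just the precedence: Wikipedia first, news coverage *about* the source second, never the source’s own feed.

```python
def reputation_context(source, wikipedia_summaries, news_coverage):
    """Step one: check Wikipedia. Step two: if there's no article,
    fall back to news coverage ABOUT the source (not from it)."""
    if source in wikipedia_summaries:
        return ("wikipedia", wikipedia_summaries[source])
    if news_coverage.get(source):
        return ("news_about", news_coverage[source])
    return ("no_context", None)

# Toy data standing in for real Wikipedia / Google News lookups.
wikipedia_summaries = {
    "Chicago Tribune": "Daily newspaper founded in 1847, based in Chicago.",
}
news_coverage = {
    "RealFarmacy": ["Health-misinformation site flagged by fact-checkers"],
}

# Established paper: Wikipedia wins.
print(reputation_context("Chicago Tribune", wikipedia_summaries, news_coverage))
# No Wikipedia page: fall through to coverage about the site.
print(reputation_context("RealFarmacy", wikipedia_summaries, news_coverage))
```

Note the fallback never reaches for the site’s own stories — reputation comes from the network around the node, not the node itself.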

Can you do this for everything? Sure. In fact, for most legit sites the “News about this source” panel would be boring, but boring is good!


Just scanning this I can see the Tribune has a long history, and is probably a sizable paper. That’s not perfect, but probably more useful than the feed that tells me they have a story about a fraternity incident at Northwestern.

This won’t always work perfectly — occasionally you’ll get the odd editorial calling CNN “fake news” at the top. And people might overly fixate on Bezos’s appearance in coverage about the WaPo. But those misfires are probably a fair price for filling in the data void Wikipedia leaves around a lot of these clickbait sites with news results that give some insight into the source. Pulling the site’s feed is an incomplete solution, and one that bad actors will inevitably hack.

Anyway, the web is a network of sites, and reputation is on the network, not the node. It’s a useful insight for programmers, but a good one for readers too. Guide users towards that if you want to have real impact.

GIFs as Disinformation Remedy (aka The GIF Curriculum)

Earlier today Alexios Mantzarlis tweeted a GIF by @PicPedant that demonstrates a particular photo is fake in a precise way:


This is interesting because recently I’ve been moving to GIFs myself for explanations. Here’s some demonstrations of our techniques, for example:

Check for other coverage (in this case with a good result):


Check what a person is verified as (in this case, a reporter):


Check what a site is about (in this case white supremacy):


And so on. Part of this is related to my and Scott Waterman’s Rita Allen proposal to generate short instructional sequences to post in comment threads of fake material (i.e., how to fact check) rather than just Snopes-ing people (i.e. “you’re wrong”). Part is based on my year and a half now of making short 10 second YouTube demonstrations of how to check various things.

But bringing it to GIFs is also based on some conversations I have had with Aviv Ovadya and others on what Web Literacy at Scale looks like. (“Web Literacy at Scale” is Aviv’s term for a project he is working on.) One thing I’ve been thinking about (again, inspired by the insights of others at MisinfoCon) is that misinformation works on the user who doesn’t click through on links, which means media literacy efforts have to (at least in part) not require clicking through on links. If the disease thrives in a click-through-free environment and learning about the cure requires click-through — well, you’ve maybe already lost there.

Media literacy GIFs, then, work on a few levels. First, like disinformation, they operate just by being in the feed, requiring no user action, reducing the asymmetry of problem and remedy. Just as importantly, the shortness of the GIFs shows that online media literacy need not (in many cases, at least) be complex. If you could have checked it in five seconds and you didn’t, it’s not that you’re not smart enough — it’s that you just didn’t care enough. It’s a moral failing, not an intellectual one, and that framing allows us to more cleanly enforce reputational consequences on those who are reckless.

I don’t know how this affects my and Scott Waterman’s Rita Allen proposal, which involves a similar effort to show people how to fact-check rather than berate them for being wrong, but planned to use short sequences of screenshots. That framing paper is due Tuesday, so I guess I have to figure it out by then. But for the moment I’m contemplating the benefits of a media literacy campaign that thrives in the same link-averse passive consumption playgrounds as the problem it is trying to address.

A Short History of CRAAP

I reference the history of the so-called “checklist approaches” to online information literacy from time to time, but haven’t put the history down in any one place that’s easily linkable. So if you were waiting for a linkable history of CRAAP and RADCAB, complete with supporting links, pop open the champagne (Portland people, feel free to pop open your $50 bottle of barrel-aged beer). Today’s your lucky day.


In both undergraduate education and K-12 the most popular approach to online media literacy of the past 15 years has been the acronym-based “checklist”. Prominent examples include RADCAB and CRAAP, both in use since the mid-00s. The way these approaches work is simple: students are asked to choose a text, and then reflect on it using the acronym/initialism as a prompt.

As an example, a student may come across an interactive fact-check of the claim that reporters in Russia were fired over a story they did that was critical of the Russian government. It claims that a prominent critic of the Kremlin, Julia Ioffe, has made grave errors in her reporting of a particular story on Russian journalists, and goes further to detail what it claims is a pattern of controversy:


We can use the following CRAAP prompts to analyze the story. CRAAP actually asks the students to ponder and answer 27 separate questions before they can label a piece of content “good”, but we’ll spare you the pain of that and abbreviate here:

  • Currency: Is the article current? Is it up-to-date? Yes, in this case it is! It came out a couple of days ago!
  • Relevance: Is the article relevant to my need for information? It’s very relevant. This subject is in the news, and the question of whether Russia is the authoritarian state so many people claim is vital to understanding what our policies should be toward Russia, and what it might mean to want to emulate Russia in domestic policy toward journalists.
  • Accuracy: Is the article accurate? Are there spelling errors, basic mistakes? Nope, it’s well written, and very slickly presented, in a multimedia format.
  • Authority: Does it cite sources? Extensively. It quotes the reporters, it references the articles it is rebutting.
  • Purpose: What is the purpose? It’s a fact-check, so the purpose is to check facts, which is good.

Having read the whole thing once and read it again thinking about these questions, maybe we find something to get uneasy about, 20 minutes later. Maybe.

But none of these questions get to the real issue, which is that this fact check is written by FakeCheck, the fact-checking unit of RT (formerly Russia Today), a news outfit believed by experts to be a Kremlin-run “propaganda machine”. Once you know that, the rest of this is beside the point, a waste of student time. You are possibly reading a Kremlin written attack on a Kremlin critic. Time to find another source.

We brought a can opener to a gunfight

Having gone through this exercise, it probably won’t shock you that the checklist approach was not designed for the social web. In fact, it was not designed for the web at all. 

The checklist approach was designed – initially – for a single purpose: selecting library resources on a limited budget. That’s why you’ll see terms like “coverage” in early checklist approaches — what gets the biggest bang for the taxpayer buck? 

These criteria have a long history of slow evolution, but as an example of how they looked 40 years ago, here’s a bulletin from the Medical Library Association in 1981. First it states the goal:

In December 1978, CHIN held a series of meetings of health care professionals for the purpose of examining how these providers assess health information in print and other formats. We hoped to extract from these discussions some principles and techniques which could be applied by the librarians of the network to the selection of health materials.

And what criteria did they use?


During these meetings eight major categories of selection criteria for printed materials were considered: accuracy, currency, point of view, audience level, scope of coverage, organization, style, and format.

If you read this article’s expansions on those categories, you’ll see the striking similarities to what we teach students today, as a technique not to decide on how best to build a library collection, but for sorting through social media and web results.

Again, I’ll repeat: the criteria here are from 1978, and other more limited versions pre-dated that conference significantly.

When the web came along, librarians were faced with another collections challenge: if they were going to curate “web collections” what criteria should they use?

The answer was to apply the old criteria. This 1995 announcement from information superhighway library CyberStacks was typical:

Although we recognize that the Net offers a variety of resources of potential value to many clientele and communities for a variety of uses, we do not believe that one should suspend critical judgment in evaluating quality or significance of sources available from this new medium. In considering the general principles which would guide the selection of world wide web (WWW) and other Internet resources for CyberStacks(sm), we decided to adopt the same philosophy and general criteria used by libraries in the selection of non-Internet Reference resources (American Library Association. Reference Collection Development and Evaluation Committee 1992). These principles, noted below, offered an operational framework in which resources would be considered as candidate titles for the collection

Among the criteria mentioned?

  • Authority
  • Accuracy
  • Recency
  • Community Needs (Relevance)
  • Uniqueness/Coverage

Look familiar?

It wasn’t just CyberStacks of course. To most librarians it was just obvious that whether it was on the web or in the stacks the same methods would apply.

So when the web came into being, library staff, tasked with teaching students web literacy, began to teach students how to use collection development criteria they had learned in library science programs. The first example of this I know of is Tate & Alexander’s 1996 paper which outlines a lesson plan using the “traditional evaluation criteria of accuracy, authority, objectivity, currency, and coverage.” 


(an image from a circa 2000 slideshow from Marsha Tate and Jan Alexander on how to teach students to apply library collection development criteria to the web)

It’s worth noting that even in the mid 1990s, research showed the checklist approach did not work as a teaching tool. In her 1998 research on student evaluation of web resources, Ann Scholz‐Crane observed how students used the following criteria to evaluate two web sites (both with major flaws as sources):


She gave the students the two websites and asked them to evaluate them (one student group with the criteria and one without). She was clear to the students that they had the entire web at their disposal to answer the questions.

The results…were not so good. Students failed to gather even the most basic information about the larger organization producing one of the sites. In fact, only 7 of 25 students even noted that a press release on an organization’s website was produced by the organization, which should have been considered its author. This oversight was all the more concerning as the press release outlined research the organization had done. The students? They saw the relevant author as the contact person listed at the bottom of the press release. That was what was on the page, after all.

(If this sounds similar to the FakeCheck problem above — oh heck, I don’t even have snark left in me anymore. Yeah. It’s the same issue, in 1998.)

What was going on? In noting a major difference in how the expert evaluators went about the site versus the way the students did, Scholz‐Crane notes:

No instances were found where it could be determined that the students went outside the original document to locate identifying information. For example, the information about the author of Site A that appeared on the document itself was short phrase listing the author as a regular contributor to the magazine… however a link from the document leads the reader to a fuller description of the author’s qualifications and a caution to remember that the author is not a professional and serves only as a friend/mentor. None of the students mentioned any of the information contained in the fuller description as part of the author’s qualifications. This is in stark contrast to the essay evaluations of the expert evaluators where all four librarians consulted sources within the document’s larger Web site and sources found elsewhere on the Web.

Worse, although the checklist was meant to provide a holistic view of the document, most students in practice focused their attention on a single criterion, although what that criterion was varied from student to student. The supposed holistic evaluation was not holistic at all. Finally, the use of the control group showed that the students without the criteria were already using the same criteria in their responses: far from being a new way of looking at documents it was in fact a set of questions students were already asking themselves about documents, to little effect.

You know how this ends. The fact that the checklist didn’t work didn’t slow its growth. In fact, adoption accelerated. In 2004, Sarah Blakeslee at California State University noted the approach was already pervasive, even if the five terms most had settled on were not memorable:

Last spring while developing a workshop to train first-year experience instructors in teaching information literacy, I tried to remember off the top of my head the criteria for evaluating information resources. We all know the criteria I’m referring to. We’ve taught them a hundred times and have stumbled across them a million more. Maybe we’ve read them in our own library’s carefully crafted evaluation handout or found one of the 58,300 web documents that appear in .23 seconds when we type “evaluating information” into the Google search box (search performed at 11:23 on 1/16/04).

Blakeslee saw the lack of memorability of the prompts as a stumbling block:

Did I trust them to hold up a piece of information, to ponder, to wonder, to question, and to remember or seek the criteria they had learned for evaluating their source that would instantly generate the twenty-seven questions they needed to ask before accepting the information in front of them as “good”? Honestly, no, I didn’t. So what could I do to make this information float to the tops of their heads when needed?

After trying some variations in order of Accuracy, Authority, Objectivity, Currency, and Coverage (“My first efforts were less than impressive. AAOCC? CCOAA? COACA?”), a little selective use of synonyms produced the final arrangement, in a handout that quickly made its way around the English-speaking world. But the criteria were essentially the same as they were in 1978, as was the process:


And so we taught this and its variations for almost twenty years even though it did not work, and most librarians I’ve talked to realized it didn’t work many years back but didn’t know what else to do.

So let’s keep that in mind as we consider what to do in the future: contrary to public belief we did teach students online information literacy. It’s just that we taught them methodologies that were developed to decide whether to purchase reference sets for libraries.

It did not work out well.


The Fast and Frugal Needs of the Online Reader

I’m writing a couple framing documents for some events coming up. This is one that I’m still drafting, but I thought I’d throw the draft up and take any comments. Note that this is already at max length. Also, one site name has been removed in an effort to not attract the trolls. And citations haven’t been added (although I will need to be strategic with what to cite given length limits).

The Fast and Frugal Needs of the Online Reader

Many educators believe the solution to our current dilemma is to get our students to think more about the media that reaches them. But what if the solution was to get them to think less?

I say this partially to be provocative: as I’ll make clear momentarily, thinking “less” is meant here in a very specific way. But it’s evident that many of the sorts of academic investigations for which we train our students are a poor fit for the decentralized, networked environment of the web. Years ago, I used to teach students to look deeply at documents, and wring every last bit of signal out of them, performing a complex mental calculus at the end: does the story seem plausible? Does the language seem emotional? What is the nature of the arguments? Any logical fallacies? Is it well sourced? Does it look like a professional website?

Such approaches fail on the web for a number of reasons:

  • Students are really bad at weighting the various signals. A clickbait-ish ad on the sidebar can cause students to throw out the website of a prestigious news organization, while the clean interface of xxxxxx, a prominent node in the disinformation ecosystem, engenders confidence.
  • These approaches don’t work under time pressure or at volume. Most of the online information literacy taught in higher education is built around students choosing six or seven resources for a term paper, where students may have several hours to spend vetting resources. In the real world we have minutes at best, seconds at worst.
  • Engagement with these sources is problematic in itself. The traditional academic approach is to evaluate resources by reading them deeply. This is a profoundly inappropriate tactic for disinformation. There is ample evidence that reading disinformation, even with a critical eye, can warp one’s perspective over time. From the perspective of bad actors, if you’re reading the site, they’ve won.

At the college level, online information literacy methods initially grew out of slow methods of print evaluation. Methods still in use, such as CRAAP and RADCAB, grew out of decades-old procedures to evaluate the purchase of print materials for library collections. It’s time to develop new methods and strategies suited to the speed, volume, and uncertainty of the social web.

Heuristics provide an alternate solution

In cases where individuals must make quick decisions under complexity and uncertainty, rules of thumb and quick standard procedures have been shown to outperform deeper analysis. Competent decision makers reduce the factors they look at when making decisions, and seek information on those limited factors consistently and automatically. This pattern applies to online information literacy as well.

As one example, imagine a student sees breaking news that Kim Jong-un has stated that any future nuclear attack would focus on Los Angeles. The student could look at the source URL of the story and the about page, examine the language, check whether the spelling exhibits non-native markers, and judge whether the other stories on the site are plausible. Alternatively, she could apply a simple heuristic: big, breaking stories will be reported by multiple outlets. She selects some relevant text from the page and right-clicks her way to a Google News search. If a number of outlets are reporting the story, she takes it seriously. If not, she dismisses it for the time being.

This process takes five seconds and can be practiced on any story that fits this description. It makes use of the web as a network of sites that provides tools (like Google News search) to make quick assessments of professional consensus. In this example, should the story turn out to be well reported, the results present you with an array of resources that might be better than the one that originally reached you, putting you on better footing for deeper analysis. In part, we reduce the complexity of initial vetting so that students can apply that cognitive energy more strategically.
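For the programmatically minded, the heuristic reduces to a tiny decision rule. The sketch below is purely illustrative: the outlet names, the threshold of three, and the `news_search_url` helper are assumptions made for this example, not part of any real vetting tool — in practice the student does all of this by hand in a browser.

```python
from urllib.parse import quote_plus

def news_search_url(claim: str) -> str:
    """Build a Google News search URL for the text the student selected."""
    return "https://news.google.com/search?q=" + quote_plus(claim)

def enough_coverage(results, threshold=3):
    """Apply the heuristic: a big breaking story should be reported by
    several distinct outlets. If it isn't, set it aside for now."""
    outlets = {outlet for outlet, _headline in results}
    return len(outlets) >= threshold

# Hypothetical search results for the breaking-news claim:
results = [
    ("Reuters", "..."),
    ("AP", "..."),
    ("BBC", "..."),
]
print(enough_coverage(results))       # multiple outlets: take it seriously
print(enough_coverage(results[:1]))   # one source: dismiss for the time being
```

The point of the sketch is how little computation the decision requires — one set of outlet names and one comparison — which is exactly why it can run in five seconds of a student’s attention.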

Online information literacy built for (and practiced on) the web

Over the past year and a half, the project I am involved with has been developing a toolbox of techniques like this, relentlessly honing them and testing them with college students and professors. Our techniques tend to fall into various buckets that are not entirely different from traditional online information literacy concerns: Check for other coverage, find the original, investigate the publisher. But unlike much information literacy, these concerns are bonded to quick web-native techniques that can be practiced as habits.

The techniques encode expert understandings of which the students may not even be aware: our Google News search technique, for example, is meant to filter out much of the echo-system that Kate Starbird’s team has investigated. It does that by using a somewhat more curated search (Google News) instead of a less curated one (Google). While teaching the students about the echo-system may be beneficial, they don’t need to know about it to benefit from the technique.

What do we need to do as a community?

Despite the novelty of the problems and solutions, our process for developing educational approaches grows out of standard educational design.

  • Start with the scenarios. Any reasonable strategy we teach students to sort and filter media must grapple with authentic scenarios and work backwards from there. What do we expect the student to be able to do, under what conditions? How does that inform the strategies we provide?
  • Address the knowledge gaps the process foregrounds. Media literacy and news literacy remain important, but broad theories of media impact are often less helpful than the domain-specific information students need to quickly sort the media they encounter. What is a tabloid? A think tank? What is the nutraceutical industry and how does it promote medical misinformation? How does state-funded media differ from state-controlled media?
  • Learn to value efficacy over accuracy. Higher education is not known for valuing speed and simplicity over deeper analysis, but if we wish to change the online behavior of students these need to be core concerns. Any approach to uncertainty which seeks 100% accuracy is wrong-headed, and likely harmful.
  • Use the web as a web. For years, approaches have treated the web as an afterthought, just another domain in which to apply context-independent critical thinking skills. But the problems of the web are not the problems of the print world, and the solutions the web provides are distinct from print culture solutions. Approaches need to start with the affordances of the web as the primary context, pulling from older techniques as needed.

In short, develop new solutions native to the context and the problem that are lightweight enough that they have a chance of being applied after the student leaves the classroom. These are not new educational insights, but are ones we need to turn towards more fully if we wish to make a difference.

Tribalism is a cognitive shortcut. Addressing it requires better shortcuts.

Beautiful essay this week by Zeynep on Politico, six paragraphs you should read to become a smarter human. But I just want to point to something in paragraph three very relevant to media literacy:

Deluged by apparent facts, arguments and counterarguments, our brains resort to the most obvious filter, the easiest cognitive shortcut for a social animal: We look to our peers, see what they believe and cheer along.

There’s a nugget in here I wish more people would dig into, and that’s that tribalism is a filter. In many cases, it’s not a bad one. In a perfect world, you’d want to hear from experts who also share the values of your tribe, because decisions sit at the intersection of values, expertise, and experience. Expertise alone, uninformed by relevant values, can be very thin gruel.

In our current situation, of course, that’s not what’s happening. In a variety of domains, tribalism has become a reality distortion field that prevents us from drawing on relevant expertise at all. But instead of seeing tribalism as an insurmountable force of nature, the better view is to see it as a useful heuristic that has ceased to work in certain situations.

The question is not “can media literacy strategies overcome tribalism?” It can’t be, because tribalism is itself a media literacy strategy. The question, then, is whether we can provide any strategies quick and efficient enough that they compare favorably to tribalism.

Don’t get me wrong — tribalism is easy and fun, emotionally and socially rewarding. I don’t think you’ll find a strategy faster than it. Zeynep’s point that it’s the “easiest shortcut for a social animal” is right on target.

But if you want to replace it, you have to give students digital literacy strategies that compare favorably to it: ones that are designed to meet this need for quick filtering and evaluation of what reaches us through the stream. It has to be speedy, and it has to be low cognitive effort. It has to be focused less on “Is what I’m reading good?” and more on “Should I give this my attention at all?”

Focus on what tribalism provides people, and we have a chance. Bring a twenty minute checklist to a fast and frugal heuristic fight and the battle is already lost.

P.S. In case people from outside my usual circle read this and say “Oh, but what would such strategies look like?” This is a blog. Go to the main blog page and read the roughly 100 articles written on this exact topic over the past two years.