Update: I recently learned that this post has been selected for inclusion in a prestigious ACRL yearly list. Newcomers unfamiliar with our work may want to check out SIFT, our alternative to CRAAP, after reading the article.
I reference the history of the so-called “checklist approaches” to online information literacy from time to time, but haven’t put the history down in any one place that’s easily linkable. So if you were waiting for a linkable history of CRAAP and RADCAB, complete with supporting links, pop open the champagne (Portland people, feel free to pop open your $50 bottle of barrel-aged beer). Today’s your lucky day.
Background
In both undergraduate education and K-12, the most popular approach to online media literacy of the past 15 years has been the acronym-based “checklist”. Prominent examples include RADCAB and CRAAP, both in use since the mid-00s. The way these approaches work is simple: students are asked to choose a text, and then reflect on it using the acronym/initialism as a prompt.
As an example, a student may come across an interactive fact-check of the claim that reporters in Russia were fired over a story they did that was critical of the Russian government. It claims that a prominent critic of the Kremlin, Julia Ioffe, has made grave errors in her reporting of a particular story on Russian journalists, and goes further to detail what it claims is a pattern of controversy:

We can use the following CRAAP prompts to analyze the story. CRAAP actually asks the students to ponder and answer 27 separate questions before they can label a piece of content “good”, but we’ll spare you the pain of that and abbreviate here:
- Currency: Is the article current? Is it up-to-date? Yes, in this case it is! It came out a couple of days ago!
- Relevance: Is the article relevant to my need for information? It’s very relevant. This subject is in the news, and whether Russia really is the authoritarian state so many people claim it to be is vital to understanding what our policies should be toward Russia, and what it might mean to want to emulate Russia in domestic policy toward journalists.
- Accuracy: Is the article accurate? Are there spelling errors, basic mistakes? Nope, it’s well written, and very slickly presented, in a multimedia format.
- Authority: Does it cite sources? Extensively. It quotes the reporters, it references the articles it is rebutting.
- Purpose: What is the purpose? It’s a fact-check, so the purpose is to check facts, which is good.
Having read the whole thing once, and then read it again with these questions in mind, maybe we find something to get uneasy about, 20 minutes later. Maybe.
But none of these questions get to the real issue, which is that this fact-check is written by FakeCheck, the fact-checking unit of RT (formerly Russia Today), a news outfit believed by experts to be a Kremlin-run “propaganda machine”. Once you know that, the rest of this is beside the point, a waste of student time. You are possibly reading a Kremlin-written attack on a Kremlin critic. Time to find another source.
We brought a can opener to a gunfight
Having gone through this exercise, it probably won’t shock you that the checklist approach was not designed for the social web. In fact, it was not designed for the web at all.
The checklist approach was designed – initially – for a single purpose: selecting library resources on a limited budget. That’s why you’ll see terms like “coverage” in early checklist approaches — what gets the biggest bang for the taxpayer buck?
These criteria have a long history of slow evolution, but as an example of how they looked 40 years ago, here’s a bulletin from the Medical Library Association in 1981. First it states the goal:
In December 1978, CHIN held a series of meetings of health care professionals for the purpose of examining how these providers assess health information in print and other formats. We hoped to extract from these discussions some principles and techniques which could be applied by the librarians of the network to the selection of health materials.
And what criteria did they use?

During these meetings eight major categories of selection criteria for printed materials were considered: accuracy, currency, point of view, audience level, scope of coverage, organization, style, and format.
If you read this article’s expansions on those categories, you’ll see the striking similarities to what we teach students today, not as a technique for deciding how best to build a library collection, but for sorting through social media and web results.
I’ll repeat: the criteria here are from 1978, and other, more limited versions pre-dated that conference significantly.
When the web came along, librarians were faced with another collections challenge: if they were going to curate “web collections” what criteria should they use?
The answer was to apply the old criteria. This 1995 announcement from information superhighway library CyberStacks was typical:
Although we recognize that the Net offers a variety of resources of potential value to many clientele and communities for a variety of uses, we do not believe that one should suspend critical judgment in evaluating quality or significance of sources available from this new medium. In considering the general principles which would guide the selection of world wide web (WWW) and other Internet resources for CyberStacks(sm), we decided to adopt the same philosophy and general criteria used by libraries in the selection of non-Internet Reference resources (American Library Association. Reference Collection Development and Evaluation Committee 1992). These principles, noted below, offered an operational framework in which resources would be considered as candidate titles for the collection
Among the criteria mentioned?
- Authority
- Accuracy
- Recency
- Community Needs (Relevance)
- Uniqueness/Coverage
Look familiar?
It wasn’t just CyberStacks, of course. To most librarians it was simply obvious that, whether a resource was on the web or in the stacks, the same methods would apply.
So when the web came into being, library staff, tasked with teaching students web literacy, began teaching them how to use the collection development criteria they had learned in library science programs. The first example of this I know of is Tate & Alexander’s 1996 paper, which outlines a lesson plan using the “traditional evaluation criteria of accuracy, authority, objectivity, currency, and coverage.”

(an image from a circa-2000 slideshow by Marsha Tate and Jan Alexander on how to teach students to apply library collection development criteria to the web)
It’s worth noting that even in the late 1990s, research showed the checklist approach did not work as a teaching tool. In her 1998 research on student evaluation of web resources, Ann Scholz‐Crane observed how students used the following criteria to evaluate two websites (both with major flaws as sources):

She gave the students the two websites and asked them to evaluate them (one student group with the criteria and one without). She made it clear to the students that they had the entire web at their disposal to answer the questions.
The results…were not so good. Students failed to gather even the most basic information about the larger organization producing one of the sites. In fact, only 7 of 25 students even noted that a press release on an organization’s website was produced by that organization, which should therefore have been considered an author. This oversight was all the more concerning because the press release outlined research the organization itself had done. The students? They saw the relevant author as the contact person listed at the bottom of the press release. That was what was on the page, after all.
(If this sounds similar to the FakeCheck problem above — oh heck, I don’t even have snark left in me anymore. Yeah. It’s the same issue, in 1998.)
What was going on? Noting a major difference between how the expert evaluators approached the sites and how the students did, Scholz‐Crane writes:
No instances were found where it could be determined that the students went outside the original document to locate identifying information. For example, the information about the author of Site A that appeared on the document itself was short phrase listing the author as a regular contributor to the magazine… however a link from the document leads the reader to a fuller description of the author’s qualifications and a caution to remember that the author is not a professional and serves only as a friend/mentor. None of the students mentioned any of the information contained in the fuller description as part of the author’s qualifications. This is in stark contrast to the essay evaluations of the expert evaluators where all four librarians consulted sources within the document’s larger Web site and sources found elsewhere on the Web.
Worse, although the checklist was meant to provide a holistic view of the document, most students in practice focused their attention on a single criterion, though which criterion varied from student to student. The supposed holistic evaluation was not holistic at all. Finally, the control group showed that students without the criteria were already using the same criteria in their responses: far from being a new way of looking at documents, the checklist was in fact a set of questions students were already asking themselves about documents, to little effect.
You know how this ends. The fact that the checklist didn’t work didn’t slow its growth. In fact, adoption accelerated. In 2004, Sarah Blakeslee at California State University, Chico noted that the approach was already pervasive, even if the five terms most had settled on were not memorable:
Last spring while developing a workshop to train first-year experience instructors in teaching information literacy, I tried to remember off the top of my head the criteria for evaluating information resources. We all know the criteria I’m referring to. We’ve taught them a hundred times and have stumbled across them a million more. Maybe we’ve read them in our own library’s carefully crafted evaluation handout or found one of the 58,300 web documents that appear in .23 seconds when we type “evaluating information” into the Google search box (search performed at 11:23 on 1/16/04).
Blakeslee saw the lack of memorability of the prompts as a stumbling block:
Did I trust them to hold up a piece of information, to ponder, to wonder, to question, and to remember or seek the criteria they had learned for evaluating their source that would instantly generate the twenty-seven questions they needed to ask before accepting the information in front of them as “good”? Honestly, no, I didn’t. So what could I do to make this information float to the tops of their heads when needed?
After trying some variations on the order of Accuracy, Authority, Objectivity, Currency, and Coverage (“My first efforts were less than impressive. AAOCC? CCOAA? COACA?”), Blakeslee found that a little selective use of synonyms produced the final arrangement, CRAAP, in a handout that quickly made its way around the English-speaking world. But the criteria were essentially the same as they were in 1978, as was the process:

And so we taught this and its variations for almost twenty years, even though it did not work. Most librarians I’ve talked to realized it didn’t work many years back, but didn’t know what else to do.
So let’s keep that in mind as we consider what to do in the future: contrary to public belief, we did teach students online information literacy. It’s just that we taught them methodologies that were developed to decide whether to purchase reference sets for libraries.
It did not work out well.