Hapgood

Mike Caulfield's latest web incarnation. Networked Learning, Open Education, and Online Digital Literacy


A Short History of CRAAP

Update: I recently learned that this post has been selected for inclusion in a prestigious ACRL yearly list. Newcomers unfamiliar with our work may want to check out SIFT, our alternative to CRAAP, after reading the article.

I reference the history of the so-called “checklist approaches” to online information literacy from time to time, but haven’t put the history down in any one place that’s easily linkable. So if you were waiting for a linkable history of CRAAP and RADCAB, complete with supporting links, pop open the champagne (Portland people, feel free to pop open your $50 bottle of barrel-aged beer). Today’s your lucky day.

Background

In both undergraduate education and K-12, the most popular approach to online media literacy over the past 15 years has been the acronym-based “checklist”. Prominent examples include RADCAB and CRAAP, both in use since the mid-00s. The way these approaches work is simple: students are asked to choose a text, and then reflect on it using the acronym/initialism as a prompt.

As an example, a student may come across an interactive fact-check of the claim that reporters in Russia were fired over a story critical of the Russian government. The fact-check claims that Julia Ioffe, a prominent critic of the Kremlin, has made grave errors in her reporting of a particular story on Russian journalists, and goes further to detail what it claims is a pattern of controversy:

[Screenshot of the interactive fact-check]

We can use the following CRAAP prompts to analyze the story. CRAAP actually asks the students to ponder and answer 27 separate questions before they can label a piece of content “good”, but we’ll spare you the pain of that and abbreviate here:

  • Currency: Is the article current? Is it up-to-date? Yes, in this case it is! It came out a couple of days ago!
  • Relevance: Is the article relevant to my need for information? It’s very relevant. This subject is in the news, and the question of whether Russia is the authoritarian state so many people claim it to be is vital to understanding what our policies toward Russia should be, and what it might mean to want to emulate Russia in domestic policy toward journalists.
  • Accuracy: Is the article accurate? Are there spelling errors, basic mistakes? Nope, it’s well written, and very slickly presented, in a multimedia format.
  • Authority: Does it cite sources? Extensively. It quotes the reporters, it references the articles it is rebutting.
  • Purpose: What is the purpose? It’s a fact-check, so the purpose is to check facts, which is good.

Having read the whole thing once and read it again thinking about these questions, maybe we find something to get uneasy about, 20 minutes later. Maybe.

But none of these questions get to the real issue, which is that this fact-check is written by FakeCheck, the fact-checking unit of RT (formerly Russia Today), a news outfit believed by experts to be a Kremlin-run “propaganda machine”. Once you know that, the rest of this is beside the point, a waste of student time. You are possibly reading a Kremlin-written attack on a Kremlin critic. Time to find another source.

We brought a can opener to a gunfight

Having gone through this exercise, you probably won’t be shocked to learn that the checklist approach was not designed for the social web. In fact, it was not designed for the web at all.

The checklist approach was designed – initially – for a single purpose: selecting library resources on a limited budget. That’s why you’ll see terms like “coverage” in early checklist approaches — what gets the biggest bang for the taxpayer buck? 

These criteria have a long history of slow evolution, but as an example of how they looked 40 years ago, here’s a bulletin from the Medical Library Association in 1981. First it states the goal:

In December 1978, CHIN held a series of meetings of health care professionals for the purpose of examining how these providers assess health information in print and other formats. We hoped to extract from these discussions some principles and techniques which could be applied by the librarians of the network to the selection of health materials.

And what criteria did they use?

During these meetings eight major categories of selection criteria for printed materials were considered: accuracy, currency, point of view, audience level, scope of coverage, organization, style, and format.

If you read this article’s expansions on those categories, you’ll see the striking similarities to what we teach students today, not as a technique for deciding how best to build a library collection, but as one for sorting through social media and web results.

Again, I’ll repeat: the criteria here are from 1978, and other more limited versions pre-dated that conference significantly.

When the web came along, librarians were faced with another collections challenge: if they were going to curate “web collections” what criteria should they use?

The answer was to apply the old criteria. This 1995 announcement from information superhighway library CyberStacks was typical:

Although we recognize that the Net offers a variety of resources of potential value to many clientele and communities for a variety of uses, we do not believe that one should suspend critical judgment in evaluating quality or significance of sources available from this new medium. In considering the general principles which would guide the selection of world wide web (WWW) and other Internet resources for CyberStacks(sm), we decided to adopt the same philosophy and general criteria used by libraries in the selection of non-Internet Reference resources (American Library Association. Reference Collection Development and Evaluation Committee 1992). These principles, noted below, offered an operational framework in which resources would be considered as candidate titles for the collection

Among the criteria mentioned?

  • Authority
  • Accuracy
  • Recency
  • Community Needs (Relevance)
  • Uniqueness/Coverage

Look familiar?

It wasn’t just CyberStacks, of course. To most librarians it was simply obvious that, whether a resource was on the web or in the stacks, the same methods would apply.

So when the web came into being, library staff, tasked with teaching students web literacy, began to teach them how to use the collection development criteria they had learned in library science programs. The first example of this I know of is Tate & Alexander’s 1996 paper, which outlines a lesson plan using the “traditional evaluation criteria of accuracy, authority, objectivity, currency, and coverage.”

(An image from a circa-2000 slideshow by Marsha Tate and Jan Alexander on how to teach students to apply library collection development criteria to the web)

It’s worth noting that even in the late 1990s, research showed the checklist approach did not work as a teaching tool. In her 1998 study of student evaluation of web resources, Ann Scholz‐Crane observed how students used the following criteria to evaluate two websites (both with major flaws as sources):

[The evaluation criteria used in the study]

She gave the students the two websites and asked them to evaluate them (one student group with the criteria and one without). She made clear to the students that they had the entire web at their disposal to answer the questions.

The results…were not so good. Students failed to gather even the most basic information about the larger organization producing one of the sites. In fact, only 7 of 25 students even noted that a press release on an organization’s website was produced by the organization, which should have been considered an author. This oversight was all the more concerning because the press release outlined research the organization had done. The students? They saw the relevant author as the contact person listed at the bottom of the press release. That was what was on the page, after all.

(If this sounds similar to the FakeCheck problem above — oh heck, I don’t even have snark left in me anymore. Yeah. It’s the same issue, in 1998.)

What was going on? Observing a major difference between how the expert evaluators approached the sites and how the students did, Scholz‐Crane notes:

No instances were found where it could be determined that the students went outside the original document to locate identifying information. For example, the information about the author of Site A that appeared on the document itself was a short phrase listing the author as a regular contributor to the magazine… however a link from the document leads the reader to a fuller description of the author’s qualifications and a caution to remember that the author is not a professional and serves only as a friend/mentor. None of the students mentioned any of the information contained in the fuller description as part of the author’s qualifications. This is in stark contrast to the essay evaluations of the expert evaluators where all four librarians consulted sources within the document’s larger Web site and sources found elsewhere on the Web.

Worse, although the checklist was meant to provide a holistic view of the document, most students in practice focused their attention on a single criterion, though which criterion that was varied from student to student. The supposed holistic evaluation was not holistic at all. Finally, the control group showed that students without the criteria were already using the same criteria in their responses: far from being a new way of looking at documents, the checklist was in fact a set of questions students were already asking themselves about documents, to little effect.

You know how this ends. The fact that the checklist didn’t work didn’t slow its growth. In fact, adoption accelerated. In 2004, Sarah Blakeslee at California State University, Chico noted the approach was already pervasive, even if the five terms most had settled on were not memorable:

Last spring while developing a workshop to train first-year experience instructors in teaching information literacy, I tried to remember off the top of my head the criteria for evaluating information resources. We all know the criteria I’m referring to. We’ve taught them a hundred times and have stumbled across them a million more. Maybe we’ve read them in our own library’s carefully crafted evaluation handout or found one of the 58,300 web documents that appear in .23 seconds when we type “evaluating information” into the Google search box (search performed at 11:23 on 1/16/04).

Blakeslee saw the lack of memorability of the prompts as a stumbling block:

Did I trust them to hold up a piece of information, to ponder, to wonder, to question, and to remember or seek the criteria they had learned for evaluating their source that would instantly generate the twenty-seven questions they needed to ask before accepting the information in front of them as “good”? Honestly, no, I didn’t. So what could I do to make this information float to the tops of their heads when needed?

After trying some variations on the order of Accuracy, Authority, Objectivity, Currency, and Coverage (“My first efforts were less than impressive. AAOCC? CCOAA? COACA?”), Blakeslee found that a little selective use of synonyms produced the final arrangement, in a handout that quickly made its way around the English-speaking world. But the criteria were essentially the same as they were in 1978, as was the process:

[Chart of the CRAAP evaluation criteria]

And so we taught this and its variations for almost twenty years, even though it did not work; most librarians I’ve talked to realized it didn’t work many years back but didn’t know what else to do.

So let’s keep that in mind as we consider what to do in the future: contrary to public belief, we did teach students online information literacy. It’s just that we taught them methodologies that were developed to decide whether to purchase reference sets for libraries.

It did not work out well.



Responses to “A Short History of CRAAP”

    1. Oh my gosh. Portlandia invented CRAAP.

  1. Whatever the history of evaluation, the checklist elements are still relevant, so it makes sense that it has not changed, nor is it some travesty of teaching or necessarily the reason people don’t get it right. Experienced people do it (internal checklists) intuitively, as well as look outside of a source (or website); “lateral” searching is no newer than the other elements like currency, and thinking critically about other long-standing elements of a source; it just adds another point to the checklist. The issue isn’t how awful checklists are, it’s that we’ve become lazy and don’t think about what we are seeing critically, we expect instant gratification with the internet, and there is a whole heck of a lot more crap out there now than would have fit in an old-fashioned, curated print library.

    1. The issue is that the volume of decisions we must make on the web, combined with increased uncertainty around sources, requires different approaches than longer assessments under information scarcity.

      When teaching does not start by considering the actual environment in which skills will be practiced and knowledge applied, I would argue it *is* a travesty.

      1. Of course one has to cater learning/teaching to the environment that it will be used in, and as importantly, to the developmental level of the learner. I’m merely arguing that checklists, if adapted to the web, which anyone “teaching” this would do, can be a helpful starting point. They are a tool, like so many others, and the fact that they evolved from print standards does not make it a travesty to apply updated versions of them now. Most of the bullet points (currency, authority, etc.) remain relevant and worthy of discussion and, probably more importantly, practice. The bottom line is that there is no magic bullet, and that it takes time to truly verify web sources, whether one begins with a checklist or something else. In the old days one could pretty much rely on what the librarian had at hand with less critical analysis – now we are pretty much on our own, and many folks will not take the time, or do not realize that they have to, to get “good” info on the web.

  2. blgriffin — I’m thankful for your input here, and don’t want to belabor the point much – I know you have some expertise in this area.

    But I have taught with a checklist approach (way back, for both web literacy and statistical literacy) and I have taught with a more heuristic-driven approach, and while it’s not a magic bullet, there has been a world of difference in effect.

    One of the more interesting issues with the checklist is noted in the 1998 article, and is also something that Gigerenzer has noted: when students are presented with many things to evaluate they tend to quickly zero in on whatever property is easiest to assess, and simply apply that. (The 1998 study does not say this specifically, but hints at it in its description.)

    By trade I’m a faculty developer and instructional designer — I teach faculty how to teach and build courses. And I used to start workshops with a big slide that said “Students economize”, which is to say that students will take the simplest possible lesson from what you teach, and that because of this what a method teaches and what a student learns from it are often opposite things. I have many problems with the checklist, but my biggest is how I see it get “economized” by students in practice, because they are overwhelmed by it, even when taught carefully.

    We do teach some elements of the checklist later in courses, but:

    * Not as a checklist
    * Always bound to a domain (e.g. currency in news vs. currency in research, etc — something the earliest checklist approaches did that was lost)
    * Only after students have developed the habits and heuristics of quick sorting and want to learn more about the harder calls

  3. Thank you for your more detailed elaboration/reiteration of why you’ve found checklists to be self-defeating, i.e. that students over-simplify (economize, maybe by necessity?), which I see over and over again as well. And I would by no means put all faith in teaching via checklist nor object to calling them something else. When I teach this I try to get students (HS freshmen and sophomores) to come up with their own criteria (checklist?) based on what we are looking at: news, social media or scholarly (this is very tough for them to get at this level and I have a separate module or two for it) sources. I’d be curious to know if this quick-sorting is really a more effective way to get them to be more critical information users and to better sort the wheat from the chaff. It sounds like it has been for your students and teachers. Perhaps I’ll try it in the future (and be sure to give you credit!). Thanks again!

  4. As a high school librarian, I’ve been using the CRAAP acronym for a while to get students thinking broadly about the quality of sources. I never was satisfied, however, with the approach some teacher-librarians use of having students assign points based on a CRAAP rubric.

    When I first read the Four Moves a couple of years ago, I initially thought it would be too difficult for students to remember and apply, but I’m trying. Along with encouraging use of fact-checking sites, I’ve been getting students to open another tab to check Authority by reading laterally: searching to see what others say about the source.

    The practices I encourage students to use are still evolving, and I would love to learn more about what others are doing successfully.

    Thank you for this discussion.

  5. […] “A Short History of CRAAP” by Mike Caulfield. […]

  6. […] on the concerns of some recent studies challenging the effectiveness of using the C.R.A.P. test to detect “fake news”,  […]

  7. […] you want to see how badly we are failing to teach students these things, check out A Brief History of CRAAP and Recognition is […]

  8. […] acronym is not even accurate. It should be CRAAP). Mike Caulfield, author of the book linked above, has an excellent explanation. The Four Moves and a Habit is not only better but pretty much a must-read at this […]

  9. […] Today we’re all about “A Short History of CRAAP” by Mike Caulfield. […]

  10. […] also found his post, A Short History of CRAAP as particularly enlightening. My jaw dropped a bit at this particular […]

  11. […] is called the CRAAP test (standing for Currency, Relevance, Authority, Accuracy and Purpose), popularized by a librarian at Cal State Chico. Versions of it are used by universities across the United […]

  14. […] history of thinking about information evaluation. From the catchy CRAAP mnemonics that emerged from collection development criteria to the most recent focus on authority within the ACRL Framework, information evaluation has […]
