A question we get asked a lot about our four moves curriculum is whether it sticks. Can a two- or three-week intervention really produce a lasting change in how people approach information online?
Remember, we don’t do traditional news literacy. We don’t do traditional media literacy. We don’t teach people about newspapers, communications theory, or any of that. We just do one thing — give them a set of things to do in their first 60 seconds after encountering a piece of media. We do that for two to three weeks of class time, and talk a bit about practical issues around online information, algorithms, trolling, and the like.
We do make some effort to check persistence. We do the post-assessment several weeks after the last class session, to see what happens after skills decay. We test with authentic prompts, to mimic as precisely as possible the context in which students will exercise the skills. But still the question comes up — are students going to keep doing this? Like, really really? A formal assessment of this would involve some seriously creepy surveillance of students. But we got a powerful anecdotal piece of evidence a bit ago.
The background: CUNY Staten Island implemented our two-week curriculum in their Core 100 class for freshmen last fall. A few weeks ago the coordinator of the Core program got this letter from one of the school’s scholarship advisors about some spring scholarship applications. The advisor writing it had no idea of the changes in the Core program and had never heard of the four moves. I am reproducing it in its entirety here, partly because I want you to know I am not cherry-picking, and partly because the advisor writes with a beautiful clarity that I’m not sure I could match (I love a beautiful email!):
I’ve been meaning to tell you for a few months now that the Core program deserves a HUGE kudos, and that I am very impressed with the training students are receiving through Core 100.
Each year, I run our campus competition for the Jeannette K. Watson Fellowship, which is a prestigious opportunity that gives students generous stipends, internship experience, mentoring and professional development training. As part of our campus competition, my committee and I interview candidates as the final stage in the nomination process, as all candidates must then attend an extensive day-long interview session at the Watson Foundation.
One of the questions our committee asks applicants is, “Where do you get your news?” The fellowship seeks students who are knowledgeable of domestic and global issues, as well as students who are motivated to affect positive social change. This question is often asked of nominees who go forward into the official competition, therefore, we make it a point to ask this question for our internal campus competition.
In years past, we received answers such as social media, or perhaps one or two popular news stations, etc. Occasionally a student would cite the NY Times as a primary source for news. This year, we were astounded at the answers we received to this question. Nearly every applicant told us how they compare different news stations for different perspectives, and how they seek to verify the news they are reading. Most applicants further cited international sources of news for an even wider perspective. We couldn’t believe the change this year – how intelligent and worldly and diligent they all sounded! (More so than most older adults!) One of the applicants told us that she learned to do this in Core 100.
Whatever you’re doing, it’s working. I’m very impressed and quite moved.
Last night I mentioned this letter to Paul Cook, who has taught using these methods at Indiana University Kokomo. I expected him to say something along the lines of “Wow, that’s incredible!” But he didn’t. He said, “Honestly, Mike, that doesn’t surprise me at all.” And he was right. It’s moving to see, but it’s also completely consistent with our experience of teaching the course.
You see the moves in Michele’s description, of course — find other coverage, investigate the source. The habits we push. But you also see something that I often have trouble explaining to others — with the right habits, students start sounding like entirely different people. They start being, in some ways, very different people. Less reactive, more reflective, more curious. If the habits stick, rather than decay, that effect can be cumulative, because the students have done that most powerful of things — they have learned how to learn. And the impact of that can change a person’s life.
Author’s note: Back in early 2017 I introduced the “four moves”, a set of strategies that students could use on the web instead of checklist approaches such as CRAAP and incoherent lists of tips. The moves were based on my own experience teaching civic digital literacy and emerging research from Sam Wineburg and his team. While they were presented simply, they actually encoded deep knowledge about how people go wrong on the web, and were the result of intensive honing and conversations with experts.
The “moves” proved remarkably popular and resilient in the face of evolving disinformation threats and misinformation concerns. They remain the four moves I’m still teaching today. They’ve even had a broader impact on the online information literacy discourse, with other educational groups slowly moving away from complex checklists toward approaches that look more four-moves-like.
At the same time, while the moves remain the same, I’ve altered the phrases attached to them over time, changed their order, developed cleaner explanations, and placed them in an acronym for easier remembering. All of this is going into a set of materials that we plan to release to the public next year, but I thought I would share with you a linkable document that shares the current acronym and language we are using for the moves.
This language is quite literally the first draft of a page, so excuse the verbosity, lack of crispness, and outright errors. We’ll get this down to the same level of crispness as the original book eventually. Also, it starts in the middle of a larger lesson, sorry 🙂
So if long lists of things to think about only make things worse, how do we get better at sorting truth from fiction and everything in-between?
Our solution is to give students and others a short list of things to do when looking at a source, and hook each of those things to one or two highly effective web techniques. We call the “things to do” moves and there are four of them:
The first move is the simplest. STOP reminds you of two things.
First, when you first hit a page or post and start to read it — STOP. Ask yourself whether you know the website or source of the information, and what the reputation of both the claim and the website is. If you don’t have that information, use the other moves to get a sense of what you’re looking at. Don’t read it or share media until you know what it is.
Second, after you begin to use the other moves it can be easy to go down a rabbit hole, going off on tangents only distantly related to your original task. If you feel yourself getting overwhelmed in your fact-checking efforts, STOP and take a second to remember your purpose. If you just want to repost, read an interesting story, or get a high-level explanation of a concept, it’s probably good enough to find out whether the publication is reputable. If you are doing deep research of your own, you may want to chase down individual claims in a newspaper article and independently verify them.
Please keep in mind that both sorts of investigations are equally useful. Quick and shallow investigations will form most of what we do on the web. We get quicker with the simple stuff in part so we can spend more time on the stuff that matters to us. But in either case, stopping periodically and reevaluating our reaction or search strategy is key.
Investigate the source
We’ll go into this move more on the next page. But the idea here is that you want to know what you’re reading before you read it.
Now, you don’t have to do a Pulitzer prize-winning investigation into a source before you engage with it. But if you’re reading a piece on economics by a Nobel prize-winning economist, you should know that before you read it. Conversely, if you’re watching a video on the many benefits of milk consumption that was put out by the dairy industry, you want to know that as well.
This doesn’t mean the Nobel economist will always be right and that the dairy industry can’t be trusted. But knowing the expertise and agenda of the source is crucial to your interpretation of what they say. Taking sixty seconds to figure out where media is from before reading will help you decide if it is worth your time, and if it is, help you to better understand its significance and trustworthiness.
Find trusted coverage
Sometimes you don’t care about the particular article or video that reaches you. You care about the claim the article is making. You want to know if it is true or false. You want to know if it represents a consensus viewpoint, or if it is the subject of much disagreement.
In this case, your best strategy may be to ignore the source that reached you, and look for trusted reporting or analysis on the claim. If you get an article that says koalas have just been declared extinct from the Save the Koalas Foundation, your best bet might not be to investigate the source, but to go out and find the best source you can on this topic, or, just as importantly, to scan multiple sources and see what the expert consensus seems to be. In these cases we encourage you to “find other coverage” that better suits your needs — more trusted, more in-depth, or maybe just more varied. In lesson two we’ll show you some techniques to do this sort of thing very quickly.
Do you have to agree with the consensus once you find it? Absolutely not! But understanding the context and history of a claim will help you better evaluate it and form a starting point for future investigation.
Trace claims, quotes, and media back to the original context
Much of what we find on the internet has been stripped of context. Maybe there’s a video of a fight between two people with Person A as the aggressor. But what happened before that? What was clipped out of the video and what stayed in? Maybe there’s a picture that seems real but the caption could be misleading. Maybe a claim is made about a new medical treatment based on a research finding — but you’re not certain if the cited research paper really said that.
In these cases we’ll have you trace the claim, quote, or media back to the source, so you can see it in its original context and get a sense of whether the version you saw was accurately presented.
It’s about REcontextualizing
There’s a theme that runs through all of these moves: they are about reconstructing the necessary context to read, view, or listen to digital content effectively.
One piece of context is who the speaker or publisher is. What’s their expertise? What’s their agenda? What’s their record of fairness or accuracy? So we investigate the source. Just as when you hear a rumor you want to know who the source is before reacting, when you encounter something on the web you need the same sort of context.
When it comes to claims, a key piece of context includes whether they are broadly accepted or rejected or something in-between. By scanning for other coverage you can see what the expert consensus is on a claim, learn the history around it, and ultimately land on a better source.
Finally, when evidence is presented with a certain frame — whether a quote or a video or a scientific finding — sometimes it helps to reconstruct the original context in which the photo was taken or research claim was made. It can look quite different in context!
In some cases these techniques will show you claims are outright wrong, or that sources are legitimately “bad actors” who are trying to deceive you. But in the vast majority of cases they do something just as important: they reestablish the context that the web so often strips away, allowing for more fruitful engagement with all digital information.
In the coming lessons, we’ll demonstrate how each of these moves can be tied to powerful web-based techniques that you apply to content quickly, as a matter of habit.
Found some disinfo on TikTok today which had apparently started on Facebook. It’s a video that promotes a variety of bogus and increasingly bizarre claims — there are plastic shards in rice that show up when put in a hot pan, harmful magnetic gunk in your baby formula you can extract with a magnet, poisonous washing powder in your ice cream that can be revealed with a drop of lemon juice.
The TikTok version gets scrambled a bit when you try to watch it in a browser, and WordPress.com doesn’t allow TikTok embeds yet. Thankfully eagle-eyed Twitter user @infuturereverse linked me to a Facebook version of the video so you can watch it here. (And please, please, do watch it, it’s fascinating).
It did well on Facebook too — but that’s not really news at this point. What strikes me is the TikTok success, since this is rarer, and yet seems very TikTok.
Why? Well, it’s presented as a “hacks” video (a genre very popular on TikTok). Hack videos — especially hacks around domestic issues — do well. Some are pretty solid. Others are technically honest but you wonder a bit about the practicality.
There’s also a variation in the form of the “replicable prank” video. I think of it as a “hack” variation because it usually shows a simple way to execute the prank on others, and executing it requires some knowledge. Pranks as replicable memes often look like hacks. This sort of content already spreads misinformation, where some hacks are overhyped, such as the “hyphen” iPhone prank that is presented as “erasing your friend’s phone with a voice command” but really just temporarily crashes the iPhone launcher.
In higher-level pranks, the poster pranks the audience intentionally with a “hack” that doesn’t work — for example, demonstrating that the iPhone comes with a secret pair of AirPods that you will find if you tear the box apart, or the current sensation on TikTok of finding cash in hotel bibles, presumably placed there by Christians to reward the faithful. (This is a modern variation on an urban legend that dates back to the 1950s, and made its original digital rounds as an email hoax.)
What may not be clear to non-TikTok users is how this sort of fakery is replicated as a meme. One person “finds” money in a bible, then other people post videos of “finding” cash in the bibles. The fake hack spreads not just through the increasing reach of the initial video, but through its replication by others. It suddenly seems like dozens of folks are finding cash in bibles.
These sorts of fake hacks work on multiple levels on TikTok. In cases like the bible cash, they are sort of Santa Claus-ish: a group of people in the know bonds around knowing these are fake, but enjoys and sometimes sustains the joke by faking evidence, the way parents fake Santa Claus eating cookies on Christmas. Another group of people believes the hacks are real, and a larger group enjoys wondering whether they are real in a way that makes the world a bit more magical. It’s really not harmful if people flip through their bibles in hotel rooms and experience some brief anticipation of a cash find. (It’s also spawned some interesting variations where people put non-cash fandom-related things in hotel bibles.)
It’s also interesting to me that for a lot of these you can’t know if it is faked unless you try it yourself — and it is tempting to try, as with the likely real “soda hack”:
As with practical jokes, if you do try to replicate and fail — if, for instance, you try to replicate the meme where you rub your finger against a battery and try to levitate a penny — it’s pretty tempting to pay it forward by creating a fake video yourself (in this case by using video reverse: you spin a penny, watch it fall to a stop, rub your finger against a battery, then reverse the video). I don’t think this is a perversion of TikTok culture — I think it *is* TikTok culture, which often looks like an older sibling showing something new to the middle kid who then shows it to the youngest.
Malicious misinformation is relatively rare on TikTok, but it seems to me that where it does emerge the “food hacks” video is one format that will mesh well with TikTok culture — fake hacks, tricks, and “inside knowledge” loaded with false framing.
What would that look like? It’d look very urban legendish: If there’s an eight next to your product’s barcode it means it was produced by slave labor, if you find these marks in the Starbucks bathroom it’s a sign children have been trafficked there. If your phone makes this kind of static noise it could be a sign of radiation from cell towers, and you need to move away from them immediately. For political stuff, TikToky versions of “this voting machine didn’t record my vote” etc.
The good news is that since there’s not really a way for creators to monetize anything on TikTok yet, it’s likely to be pretty tame compared to platforms like Facebook, YouTube, or Instagram, at least in the immediate future. But it’d be nice to see people thinking about this sooner rather than later — how do you reconcile a TikTok culture that values the sort of wink-wink-nudge-nudge “Santa Claus” fakery with a desire to keep more toxic fakery at bay?
We’ll be adding a couple TikTok examples to our current educational materials, but in the meantime feel free to ask your TikTok misinformation questions below.
There’s a video going around that purportedly shows Nancy Pelosi drunk or unwell, answering a question about Trump in a slow and slurred way. It turns out that it is slowed down, and that the original video shows her quite engaged and articulate.
Two things about this. The first is that our four moves (SIFT) apply well to this incident. Specifically, the “T” in SIFT is “Trace quotes, claims, and media to the original context.” In this case you can watch the original video on C-SPAN and see the difference immediately.
But what if you can’t trace it? In general, if the provenance of the video is hidden, but there is clearly an unlinked original source, wait a bit. Even decent news sources can be godawful at linking original sources, but usually for a big video like this people will point you to the original within a day or two, which is what happened here.
The second thing to watch is how the media ecosystem works as, well, a system. When people look at the impact of false news they often measure how much of it makes it to mainstream broadcasts. But very often the way networked lies and mainstream news interact is synergistic. So as this false Facebook video is being circulated to millions of viewers, the Fox News show Lou Dobbs Tonight airs a different video of Pelosi, with some instances of her stammering edited together, and asks “What’s going on?” Age? Illness? The video pushes beyond the bounds of acceptable journalism, but stays within the bounds of what is currently permissible on air. The guest commentator is muted but pointed in her replies — she’s getting old, probably pushing herself too hard, maybe needs to step aside.
In musical production there is a technique called double-tracking, and while it’s not a perfect metaphor for what’s going on here, it’s instructive. In double-tracking you record one part — a vocal or a solo — and then you record that part again, with slight variations in timing and tone. Because the two tracks are close, they are perceived as a single track. Because they are different, though, the track is “widened,” feeling deeper and richer. The trick is for them to be different enough to widen the track but similar enough to blend.
Anyway, that’s what you see with a lot of disinfo campaigns. On the wild west of social media, outright lies are spread. And usually the outright lies don’t make it to the mainstream outlets exactly as spread, but a very similar and dishonestly spun story is spread at the same time through broadcast. The two blend into one, able to use the freedom of the web to build shock and the amplification of traditional media to build a sense of veracity and extend the reach. You saw this with the caravan in 2018, and Clinton’s “sickness” in 2016. And so on. Two tracks — one through viral spread and the other through official channels, blended into something more damaging than either track alone.
I have so much writing backlogged I need to get a few quick hits out to clear the logjam.
Here’s a good example of a statistical false frame that’s visual enough for a slide.
It says “Washington Post” on the bottom there, but of course the Washington Post version lacks the “presidential term” markers.
When you see that a framing has been added like that, it’s wise to think through what has been added and whether it’s accurate. And of course with a little thought you’ll hopefully ask why, if it covers one year of the Trump presidency and eight years of Obama’s, the boxes for their terms are nearly equal in size? (Weird, right?) If you’re particularly adept you might ask why Obama’s term begins in 2007, which I seem to remember as the Bush presidency, though honestly I was drinking more back then, so who knows.
It’s worth asking whether our “T” move (Trace claims, quotes, and media to the original context) works here, since the original graphic doesn’t settle the questions of what economy Obama inherited in 2009 or what economy he left us with. I think you could point to the context the article adds around the charts as useful (2017 figures are before the Trump tax cuts and before his first budget). Still, it doesn’t give you the answer outright; you’re going to have to think it through, and you *could* come to the same conclusions without going to the original context first.
But what the trace does in this case is show students where to look. By calling attention to what’s been added, removed, or altered, it focuses their thought in the right area. Show a student the initial graphic and say “Hey, what’s the problem with this graph?” and you’ll get a flood of answers — Is it inflation? How do we know they each caused this? It starts at $50k, it’s a bad axis! (students love this one). Going to the original context and looking at what has been altered solves the student’s biggest issue: where to focus their thinking first, given a bewildering array of options.
Since people asked, here’s the modified image with the real terms of office:
Note that even this is a bit unfair; most economists would say that the influence of the President on the economy (to the extent there is one) is felt through the mechanism of the budget and associated tax policy, and that does not get passed until the fall of the first year, with the tax policy going into effect for the following year. If you shift that, of course, then there is no part of this graph that is Trump budget, and the graph looks like this:
It’s also worth noting that if you go to the article there is plenty there to critique the Obama economy over — there’s pretty broad agreement that it’s surprising wages have not increased given the strength of the economy, and economists point out that the effects seen here are probably not pay raises at all but due to increased employment (e.g. if one spouse got cut to part time in the recession and can now get full-time work, household income increases, but rate of pay does not).
The Four Moves have undergone some tweaking since I first introduced them in early 2017. The language has shifted, been refined. We’ve come to see that lateral reading is more of a principle underlying at least two of the moves (maybe three). We’ve removed a reference to “go upstream” which was a bit geeky. All in all, though, the moves have remained constant, partially because so many people have found them useful.
Today, we’re introducing an acronym that can be used to remember the moves: SIFT.
(S)top
(I)nvestigate the source
(F)ind better coverage
(T)race claims, quotes, and media back to the original context
If you’ve followed the moves as they have developed over the past two years, these won’t surprise you, but there are a couple changes to the wording and the order.
The most notable change is that we’ve combined our habit (originally “check your emotions”) with the move (“circle back”), because these turn out to be the same thing. Basically — stop reading, stop reacting, figure out what you need to know, and reapproach. In the beginning, this means not reading before you orient yourself. When researching, this means that if you are getting sucked into an increasingly confusing maze of pages, STOP AND BACK UP.
The other moves are the same as in the most recent iteration, with the change that “Find better coverage” replaces “Find other coverage” to emphasize the idea that you are looking for other coverage, but ideally coverage that is better on at least one dimension. What those dimensions are may be contextual, but students often have some half-decent intuitions here that can be refined over time.
We’ve also broadened “Find the original” to its replacement, which stresses that the point is not just finding the original for its own sake, but finding the original context. The original may be better — original reporting from the NYT or a fact-checked Atlantic article. But it could be worse — a claim that is sourced to a junk journal, or that simply began as an unsubstantiated tweet. In the case of photos or videos, the original context is often mitigating, in cases where media or quotes are presented with a false, inflammatory frame.
But the main introduction here is the acronym, a direct answer to CRAAP. (“Don’t CRAAP, SIFT?”).
Final note — some people might look at the acronym and think — “Isn’t this just more CRAAP? Another checklist?”
I deal with this extensively on this blog and in the textbook, but the problem with CRAAP has never been the acronym. In fact, the history of CRAAP as a web infolit device begins eight years (at least) before the acronym. The difference has always been the difference between a narrow list of things to do (SIFT) and a broad list of things to consider and rate (CRAAP). I’ve detailed at length why that makes such a difference in terms of cognitive load and other factors, so I won’t repeat it here. But my point is that a bad methodology got a lot of lift with a clever acronym that served as a convenient shorthand and a student mnemonic — it’s probably time the better methodology gets an acronym as well.
Sam prides himself on questioning conventional wisdom and subjecting claims to intellectual scrutiny. For kids today, that means Googling stuff. One might think these searches would turn up a variety of perspectives, including at least a few compelling counterarguments. One would be wrong. The Google searches flooded his developing brain with endless bias-confirming “proof” to back up whichever specious alt-right standard was being hoisted that week. Each set of results acted like fertilizer sprinkled on weeds: A forest of distortion flourished.
I have one or two quibbles with the recent article in the Washingtonian about a 13-year-old’s slide into the alt-right by way of meme-world, but the article as a whole is quite useful and, for parents at least, very moving. I recommend everyone read it, parents in particular.
Let’s get the quibbles out of the way first. I think the article is a bit too enamored with Nagle’s Kill All Normies, and that maybe leaks into the narrative as well, with the inciting incident (wrongly accused of sexual harassment) perhaps playing too dominant a role. I would say it’s a bit too sympathetic, except of course it’s the woman’s son and the kid is thirteen. So I think we can let it slide. (Don’t read Nagle, though. Read Becca Lewis and Joan Donovan instead).
Where the article does excel, though, is in the way it gets across the process of grooming that these communities use. People tend to think of grooming in the context of sexual predators or spies — the slow process of finding disaffected people and using their disaffection to warp their mind bit by bit. But we’ve long known that this is how online radicalization works as well, from ISIS to neo-Nazis.
The quote I’ve chosen at the top of this article talks about confirmation bias, and I’ll come to the ways that is right in a second. But let me first say what “confirmation bias” gets wrong about our radicalization problem. (Trigger warning: I will be drawing a short parallel that touches on sexual predation).
No foreign power looking to recruit a spy goes up and says, hey, will you spy for us? And sexual predators do not begin grooming by asking for sex. Instead, in each case, there is a slow process of getting the target acclimated, bit by bit, to ideas thought repulsive. The grooming is achieved by hiding the destination of the grooming until the target is already deep in the alternate reality.
This is an important point, because it’s actually working against confirmation bias. Confirmation bias would take a non-Nazi, and work to keep them a non-Nazi. Confirmation bias, were all the cards on the table at the beginning of the grooming, would be protective. You’d Google, find out you were reading Nazi literature and think um, maybe I’ll read something else.
So what’s going on with these Google searches?
The Google searches flooded his developing brain with endless bias-confirming “proof” to back up whichever specious alt-right standard was being hoisted that week.
A few things are likely happening. The first is curation. The Reddit group was likely feeding her son a constant stream of outrages of men being ill-treated by feminists. An ad that denigrates male aggressiveness in sports. The story of a woman falsely accusing a man of rape. Statistics showing the wage gap is a myth. A feminist saying outrageous things. Probably some fake stuff, à la #EndFathersDay, thrown in for good measure. When these things are all put together in a stream, it can seem like there is a vast conspiracy to suppress the real truth. How come they never taught you this stuff, right?
Now, this is where we’d think being inquisitive would help. Get out and Google it, right? And for someone skilled at finding the right information on the web that strategy might work. But the curation and the language used produce loaded searches that just pull one deeper into the narrative that the curation scaffolded.
What do I mean? Well, take the infamous Dylann Roof search “black on white crime” which he indicated was his first step into the radicalization that led to him slaughtering black worshipers in a church basement in an attempt to incite a “race war”. In the beginning, he put “black on white crime” into Google, and this is what happened next:
But more importantly this prompted me to type in the words “black on White crime” into Google, and I have never been the same since that day. The first website I came to was the Council of Conservative Citizens. There were pages upon pages of these brutal black on White murders. I was in disbelief. At this moment I realized that something was very wrong. How could the news be blowing up the Trayvon Martin case while hundreds of these black on White murders got ignored?
As I’ve talked about previously, “black on white crime” is a data void. It is not a term used by social scientists or reputable news organizations, which is why the white nationalist site Council of Conservative Citizens came up in those results. That site has since gone away, but what it was was a running catalog of cases where black men had murdered (usually) white women. In other words, it’s yet another curation, even more radical and toxic than the one that got you there. And then the process begins again.
So this is what the spiral looks like:
You can read Roof describe the process here:
From this point I researched deeper and found out what was happening in Europe. I saw that the same things were happening in England and France, and in all the other Western European countries. Again I found myself in disbelief. As an American we are taught to accept living in the melting pot, and black and other minorities have just as much right to be here as we do, since we are all immigrants. But Europe is the homeland of White people, and in many ways the situation is even worse there. From here I found out about the Jewish problem and other issues facing our race, and I can say today that I am completely racially aware.
From Roof’s “manifesto”.
The thing to remember about this algorithmic-human grooming hybrid is that the gradualness of it — the step-by-step nature of it — is a feature for the groomers, not a bug. I imagine if the first page Roof had encountered on this — the CCC page — had sported a Nazi flag and a big banner saying “Kill All Jews” he’d have hit the back button, and maybe the world might be different. (Maybe). But the curation/search spiral brings you to that point step by step. In the center of the spiral you probably still have enough good sense not to read stuff by Nazis, at least knowingly. By the time you get to the edges, not so much.
Digital Literacy Interventions
There is so much that needs to be addressed here, in terms of platforms, schooling, and awareness of the danger of various ideologies, and in terms of the underlying patriarchal and white supremacist culture and the systems that serve to replicate and enlarge its influence. When I talk about digital literacy interventions, I do not mean to minimize this work. It’s massive.
But digital literacy is my piece of it. What do digital literacy interventions look like here?
There are multiple entry points here, corresponding to the parts of the spiral:
Students need a basic understanding of how curations can warp reality. I don’t think the “filter bubble” is the right frame for this, since it implies that curations confirm existing beliefs and that stepping outside the curation is a net good. In reality, curations don’t protect us from opposing views; they often bring us to more radical views. Thinking about what you want from a curation in a bigger way than “both sides” is important. (Spoiler: what you want is context, and the people best suited to bring context are people in a position to know, via expertise, professional skill, or lived experience.) What applies to human curation applies to algorithmic curation and recommendation as well. Students should be able to look at a YouTube recommendation list and articulate what the underlying principle of curation seems to be.
Students need to be aware of how search terms shape results. I talked about this a bit in my textbook a few years back — how searching something like “9/11 hoax” presupposes a certain type of result. If I were rewriting that book now, I’d massively expand that chapter and the examples around it. Like much of digital infolit, the key here is that students know how to “zoom out” to a broader, more neutral term, using diction likely associated with the things they would want to read.
Even the most loaded search term usually delivers a page with at least one good result. Teaching students to scan search engine result pages with an eye toward what sort of information is behind each link can help: students often zero in too much on result relevance when clicking, and not enough on result genre and quality. Students can also be taught how to use somewhat curated searches — news-only searches, Scholar, Images.
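To make the “zoom out and curate” idea concrete, here is a minimal sketch of what those two moves look like mechanically. Everything here is an illustrative assumption, not from the original text: the `NEUTRAL_TERMS` mapping is a toy stand-in for the judgment a student would exercise, and the URL parameters (`tbm=nws` for Google’s news vertical, the Scholar endpoint) reflect Google’s historical query-string conventions, which may change.

```python
from urllib.parse import urlencode

# Hypothetical mapping from a loaded query to a broader, more neutral
# phrasing -- a stand-in for the "zoom out" judgment students learn.
NEUTRAL_TERMS = {
    "9/11 hoax": "september 11 attacks conspiracy claims fact check",
}

def neutralize(query: str) -> str:
    """Return a broader, more neutral phrasing of a loaded query, if known."""
    return NEUTRAL_TERMS.get(query.lower(), query)

def curated_search_url(query: str, vertical: str = "web") -> str:
    """Build a search URL aimed at a curated vertical rather than open web search.

    'news' adds Google's historical tbm=nws news-search parameter;
    'scholar' targets Google Scholar instead of general search.
    """
    q = neutralize(query)
    if vertical == "scholar":
        return "https://scholar.google.com/scholar?" + urlencode({"q": q})
    params = {"q": q}
    if vertical == "news":
        params["tbm"] = "nws"
    return "https://www.google.com/search?" + urlencode(params)

print(curated_search_url("9/11 hoax", "news"))
```

The point of the sketch is the two distinct steps: reformulating the query so it no longer presupposes an answer, and then choosing a more curated slice of the index to search.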
Here, lateral reading is key. Before engaging with a new site, students should find out what the site they are reading is. What’s its agenda? Its record of accuracy? Again, remember that grooming happens bit by bit, and one of its main mechanisms is hiding its true nature from the target. If students realize early in the process that they are drifting into radical and toxic territory, they can choose to proceed with the right frame of reference, or avoid those sources altogether.
Digital Infolit Can Help
Digital literacy, source-checking, and lateral reading are not replacements for action that needs to happen elsewhere. Sites like Reddit must consider what cultures they are supporting, and how their platform’s affordances may be exacerbating ill effects. The roots of white supremacy must be addressed. Full digital literacy should address issues of how economics, platform incentives, tribalism, and supremacist/sexist/colonial structures shape online discourse and production.
But the incremental nature of grooming on the internet does not rely only on ill-feeling or latent racism; it makes use of a series of misconceptions most people have about how to find and think about information on the web. The machinery of radicalization is massive, but small mistakes in search and site-selection behavior help grease its wheels. Addressing those mistakes directly with students can make such radicalization more difficult for groomers — from neo-Nazis to ISIS — and given the relatively small cost of providing such training, it is an intervention we should be pursuing.