Check, Please! Starter Course Released

As of yesterday, we’ve released the Check, Please! Starter Course, a three-hour online module on source- and fact-checking that can be dropped into any course or taken as a self-study experience.

Tweet announcing launch of Check, Please!

The techniques we teach in the course are the same moves as in the popular open textbook Web Literacy for Student Fact-Checkers, but we have relentlessly shaved the lessons down to what is absolutely needed.

It’s called a starter course because what we heard from people using our materials is this — students are OK going through some general prompts and examples in their course, no matter what the course is, but they need to see pretty quickly how this material applies to the specific course it is embedded in.

So our starter course is meant to be a quick induction into the basics of the four moves — Stop, Investigate the source, Find trusted coverage, and Trace claims, quotes, and media to the original context (acronym: SIFT). Our plan is to work with other faculty to build add-on modules that support various types of courses. Students will get through the first week of general instruction, but by the second week they will be practicing these moves while looking at climate change, the sociology of racism, writing and research methods, journalism, or science communication. The modules will follow the same rhythm that we’ve found works in the general portion — quick fact- and source-checking activities alternated with larger discussions about our current information environment.

If you plan to use it, please check the teacher’s notes linked from the first page. They contain information on ways to create a customized course out of our materials, and an explanation of how to export a plain HTML version that better suits accessibility needs around screen readers and provides students with a downloadable guide.

Ring Videos Create a Community Demand for Shareable Crime

I’ve been going through my NextDoor community because — well, I have to keep on top of new problems in social media and information. On good days that means I scroll through TikTok, on bad ones, NextDoor.

One thing people occasionally do on NextDoor is share Ring videos. Some are of legit crimes; the ones I’ve seen are mostly car prowls, where thieves go door to door looking for unlocked cars to steal stuff from. Others are not — e.g., sharing videos of garbage pickers (and yes, the irony of someone hooking up a home camera to Amazon and then worrying about someone picking through their garbage is not lost on me).

It’s really early days and there are not that many Ring videos shared on NextDoor. But still, what I sense — particularly through one video I watched in which a man hassled a homeless man going through his garbage, with what I think was a sense of performing for a future NextDoor audience — is that people see a local Ring video with either criminals or conflict in it as a hot commodity. If you have a video that shows suspicious activity — or, even better, shows you “standing up” to criminals or people you *think* are criminals — you’re the belle of the ball for the night. You post, and everyone gathers around for a couple of rounds of “ain’t it awful” and “good for you”. And the conversation ends, of course, with a bunch of people saying “I really have to get a Ring.”

Get a Ring for protection? Maybe. But that’s not all. People have to get a Ring partially because that’s the only way to get in on the game of video sharing. Which leads to the weirdest dynamic of all — not only do you need a Ring to share videos like these; more importantly, and bizarrely, you need crime to happen.

So what happens in communities where the demand for shareable crime exceeds the available supply of crime?

We’ve been through this social story before — Facebook and others created a popular demand for a certain type of story traditional media (and reality) wasn’t providing. So people warped reality to meet the need.

In the case of Ring + sharing, there will be pressure on individuals to take the most minor incidents and frame them sensationally, to create incidents with drama, to edit clips deceptively, to build (or tap into) deep narratives that imbue the mundane with tension, and maybe even to fake content (it seems risky to me for a small community where you know people, but the P. T. Barnum quote applies here). When crime content is scarce, people will expand their definition of crime. When suspicious activity is scarce, people will expand the definition of suspicious. When those expansions still fail to serve up enough content, people will engage in even more disturbing stuff. The local dimension may also bleed into more engaging nationally viral Ring videos that serve to structure local narratives. Suddenly your hassling-the-homeless-man video looks braver when shared against the background of a violent conflict over garbage picking the next state over.

Maybe at some point the novelty wears off, and people get off these platforms or find something else to share. I actually think there is a good chance the whole culture implodes out of awfulness. But given that the commercials for the product model exactly the sort of content you’re meant to be producing and consuming, and that customers find that content attractive, maybe not.

In any case, I’ve generally seen my misinformation/disinformation work as separate from the excellent work Chris Gilliard has been doing around Ring. But what we see here is a very disturbing parallel between the supply gap that fueled our current disinformation crisis and a coming supply gap in shareable Ring videos. History and theory show that when supply and demand fight it out, demand wins — we should think very carefully about how that might play out in this case.

Does It Stick?

A question we get asked a lot about our four moves curriculum is whether it sticks. Can a two- or three-week intervention really change people’s approach to online information permanently?

Remember, we don’t do traditional news literacy. We don’t do traditional media literacy. We don’t teach people about newspapers, communications theory, or any of that. We just do one thing — give them a set of things to do in their first 60 seconds after encountering a piece of media. We do that for two to three weeks of class time, and talk a bit about practical issues around online information, algorithms, trolling, and the like.

We do take some steps to check persistence. We do the post-assessment several weeks after the last class session, to see what happens after skills decay. We test with authentic prompts, to mimic as precisely as possible the context in which students will exercise the skills. But still the question comes up — are students going to keep doing this? Like, really really? A formal assessment of this would involve some seriously creepy surveillance of students. But we got a powerful piece of anecdotal evidence a while ago.

The background: CUNY Staten Island implemented our two-week curriculum in their Core 100 class for freshmen last fall. A few weeks ago the coordinator of the Core program got this letter from one of the school’s scholarship advisors about some spring scholarship applications. The advisor writing it had no idea of the changes in the Core program and had never heard of the four moves. I am reproducing it in its entirety here, partly because I want you to know I am not cherry-picking, and partly because the advisor writes with a beautiful clarity that I’m not sure I could match (I love a beautiful email!):

Dear Donna,

I’ve been meaning to tell you for a few months now that the Core program deserves a HUGE kudos, and that I am very impressed with the training students are receiving through Core 100.

Each year, I run our campus competition for the Jeannette K. Watson Fellowship, which is a prestigious opportunity that gives students generous stipends, internship experience, mentoring and professional development training. As part of our campus competition, my committee and I interview candidates as the final stage in the nomination process, as all candidates must then attend an extensive day-long interview session at the Watson Foundation.

One of the questions our committee asks applicants is, “Where do you get your news?” The fellowship seeks students who are knowledgeable of domestic and global issues, as well as students who are motivated to affect positive social change. This question is often asked of nominees who go forward into the official competition, therefore, we make it a point to ask this question for our internal campus competition.

In years past, we received answers such as social media, or perhaps one or two popular news stations, etc. Occasionally a student would cite the NY Times as a primary source for news. This year, we were astounded at the answers we received to this question. Nearly every applicant told us how they compare different news stations for different perspectives, and how they seek to verify the news they are reading. Most applicants further cited international sources of news for an even wider perspective. We couldn’t believe the change this year – how intelligent and worldly and diligent they all sounded! (More so than most older adults!) One of the applicants told us that she learned to do this in Core 100.

Whatever you’re doing, it’s working. I’m very impressed and quite moved.

Sincerely,

Michele

Last night I mentioned this letter to Paul Cook, who has taught using these methods at Indiana University Kokomo. I expected him to say something along the lines of “Wow, that’s incredible!” But he didn’t. He said, “Honestly, Mike, that doesn’t surprise me at all.” And he was right. It’s moving to see it laid out in a letter like this, but it’s also completely consistent with our experience of teaching the course. It’s moving to me precisely because it’s what we see too.

You see the moves in Michele’s description, of course — find other coverage, investigate the source. The habits we push. But you see something that I often have trouble explaining to others — with the right habits you find students start sounding like entirely different people. They start being, in some ways, very different people. Less reactive, more reflective, more curious. If the habits stick rather than decay, that effect can be cumulative, because the students have done that most powerful of things — they have learned how to learn. And the impact of that can change a person’s life.

SIFT (The Four Moves)

Author’s note: Back in early 2017 I introduced the “four moves”, a set of strategies that students could use on the web instead of checklist approaches such as CRAAP and incoherent lists of tips. The moves were based on my own experience teaching civic digital literacy and emerging research from Sam Wineburg and his team. While they were presented simply, they actually encoded deep knowledge about how people go wrong on the web, and were the result of intensive honing and conversations with experts.

The “moves” proved remarkably popular and resilient in the face of evolving disinformation threats and misinformation concerns. They remain the four moves I’m still teaching today. They’ve even had a broader impact on the online information literacy discourse, with other educational groups slowly moving away from complex checklists toward things that look more four-moves-like.

At the same time, while the moves remain the same, I’ve altered the phrases attached to them over time, changed their order, developed cleaner explanations, and placed them in an acronym for easier remembering. All of this is going into a set of materials that we plan to release to the public next year, but I thought I would share with you a linkable document that shares the current acronym and language we are using for the moves.

This language is quite literally the first draft of a page, so excuse the verbosity, lack of crispness, and outright error. We’ll get this down to the same level of crispness as the original book eventually. Also, it starts in the middle of a larger lesson, sorry 🙂


So if long lists of things to think about only make things worse, how do we get better at sorting truth from fiction and everything in-between?

Our solution is to give students and others a short list of things to do when looking at a source, and hook each of those things to one or two highly effective web techniques. We call these “things to do” moves, and there are four of them:

A list of the four moves described below

Stop

The first move is the simplest. STOP reminds you of two things.

First, when you first hit a page or post and start to read it — STOP. Ask yourself whether you know the website or source of the information, and what the reputation of both the claim and the website is. If you don’t have that information, use the other moves to get a sense of what you’re looking at. Don’t read it or share media until you know what it is.

Second, after you begin to use the other moves it can be easy to go down a rabbit hole, going off on tangents only distantly related to your original task. If you feel yourself getting overwhelmed in your fact-checking efforts, STOP and take a second to remember your purpose. If you just want to repost, read an interesting story, or get a high-level explanation of a concept, it’s probably good enough to find out whether the publication is reputable. If you are doing deep research of your own, you may want to chase down individual claims in a newspaper article and independently verify them.

Please keep in mind that both sorts of investigations are equally useful. Quick and shallow investigations will form most of what we do on the web. We get quicker with the simple stuff in part so we can spend more time on the stuff that matters to us. But in either case, stopping periodically and reevaluating our reaction or search strategy is key.

Investigate the source

We’ll go into this move more on the next page. But the idea here is that you want to know what you’re reading before you read it.

Now, you don’t have to do a Pulitzer prize-winning investigation into a source before you engage with it. But if you’re reading a piece on economics by a Nobel prize-winning economist, you should know that before you read it. Conversely, if you’re watching a video on the many benefits of milk consumption that was put out by the dairy industry, you want to know that as well.

This doesn’t mean the Nobel economist will always be right and that the dairy industry can’t be trusted. But knowing the expertise and agenda of the source is crucial to your interpretation of what they say. Taking sixty seconds to figure out where media is from before reading will help you decide if it is worth your time, and if it is, help you to better understand its significance and trustworthiness.

Find trusted coverage

Sometimes you don’t care about the particular article or video that reaches you. You care about the claim the article is making. You want to know if it is true or false. You want to know if it represents a consensus viewpoint, or if it is the subject of much disagreement.

In this case, your best strategy may be to ignore the source that reached you and look for trusted reporting or analysis on the claim. If you get an article from the Save the Koalas Foundation that says koalas have just been declared extinct, your best bet might not be to investigate the source, but to go out and find the best source you can on this topic — or, just as importantly, to scan multiple sources and see what the expert consensus seems to be. In these cases we encourage you to “find other coverage” that better suits your needs — more trusted, more in-depth, or maybe just more varied. In lesson two we’ll show you some techniques to do this sort of thing very quickly.

Do you have to agree with the consensus once you find it? Absolutely not! But understanding the context and history of a claim will help you better evaluate it and form a starting point for future investigation.

Trace claims, quotes, and media back to the original context

Much of what we find on the internet has been stripped of context. Maybe there’s a video of a fight between two people with Person A as the aggressor. But what happened before that? What was clipped out of the video and what stayed in? Maybe there’s a picture that seems real but the caption could be misleading. Maybe a claim is made about a new medical treatment based on a research finding — but you’re not certain if the cited research paper really said that.

In these cases we’ll have you trace the claim, quote, or media back to the source, so you can see it in its original context and get a sense of whether the version you saw was accurately presented.

It’s about REcontextualizing

There’s a theme that runs through all of these moves: they are about reconstructing the necessary context to read, view, or listen to digital content effectively.

One piece of context is who the speaker or publisher is. What’s their expertise? What’s their agenda? What’s their record of fairness or accuracy? So we investigate the source. Just as when you hear a rumor you want to know who the source is before reacting, when you encounter something on the web you need the same sort of context.

When it comes to claims, a key piece of context includes whether they are broadly accepted or rejected or something in-between. By scanning for other coverage you can see what the expert consensus is on a claim, learn the history around it, and ultimately land on a better source.

Finally, when evidence is presented within a certain frame — whether a quote, a video, or a scientific finding — sometimes it helps to reconstruct the original context in which the quote was uttered, the video was shot, or the research claim was made. It can look quite different in context!

In some cases these techniques will show you claims are outright wrong, or that sources are legitimately “bad actors” who are trying to deceive you. But in the vast majority of cases they do something just as important: they reestablish the context that the web so often strips away, allowing for more fruitful engagement with all digital information.

In the coming lessons, we’ll demonstrate how each of these moves can be tied to powerful web-based techniques that you can apply to content quickly, as a matter of habit.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

TikTok’s Current Disinformation of Choice Is Fake Hacks

Found some disinfo on TikTok today which had apparently started on Facebook. It’s a video that promotes a variety of bogus and increasingly bizarre claims — there are plastic shards in rice that show up when put in a hot pan, harmful magnetic gunk in your baby formula you can extract with a magnet, poisonous washing powder in your ice cream that can be revealed with a drop of lemon juice.

The TikTok version gets scrambled a bit when you try to watch it in a browser, and WordPress.com doesn’t allow TikTok embeds yet. Thankfully eagle-eyed Twitter user @infuturereverse linked me to a Facebook version of the video so you can watch it here. (And please, please, do watch it, it’s fascinating).

It did well on Facebook too — but that’s not really news at this point. What strikes me is its success on TikTok, which is rarer, and yet seems very TikTok.

Why? Well, it’s presented as a “hacks” video (a genre very popular on TikTok). Hack videos — especially hacks around domestic issues — do well. Some are pretty solid. Others are technically honest but you wonder a bit about the practicality.

There’s also a variation in the form of the “replicable prank” video. I think of it as a “hack” variation because it usually shows a simple way to execute the prank on others, and executing it requires some knowledge. Pranks as replicable memes often look like hacks. This sort of content already spreads misinformation where hacks are overhyped, such as the “hyphen” iPhone prank that is presented as “erasing your friend’s phone with a voice command” but really just temporarily crashes the iPhone launcher.

In higher-level pranks, the poster pranks the audience intentionally with a “hack” that doesn’t work — for example, demonstrating that the iPhone comes with a secret pair of AirPods that you will find if you tear the box apart, or the current sensation on TikTok of finding cash in hotel bibles, presumably placed there by Christians to reward the faithful. (This is a modern variation on an urban legend that dates back to the 1950s and made its original digital rounds as an email hoax.)

What may not be clear to non-TikTok users is how this sort of fakery is replicated as a meme. One person “finds” money in a bible, then other people post videos of “finding” cash in the bibles. The fake hack spreads not just through the increasing reach of the initial video, but through its replication by others. It suddenly seems like dozens of folks are finding cash in bibles.

These sorts of fake hacks work on multiple levels on TikTok. In cases like the bible cash, they are sort of Santa Claus-ish: a group of people in the know bonds around knowing these are fake, but enjoys and sometimes sustains the joke by faking evidence, the way parents fake Santa Claus eating cookies on Christmas. Another group of people believes the hacks are real, and a larger group enjoys wondering if they are real or not in a way that makes the world a bit more magical. It’s really not harmful if people flip through their bibles in hotel rooms and experience some brief anticipation of a cash find. (It’s also spawned some interesting variations where people put non-cash fandom-related things in hotel bibles.)

It’s also interesting to me that for a lot of these you can’t know if it is faked unless you try it yourself — and it is tempting to try, as with the likely real “soda hack”:

As with practical jokes, if you do try to replicate and fail — if, for instance, you try to replicate the meme where you rub your finger against a battery and try to levitate a penny — it’s pretty tempting to pay it forward by creating a fake video yourself (in this case by using video reverse: you spin a penny, watch it fall to a stop, rub your finger against a battery, then reverse the video). I don’t think this is a perversion of TikTok culture — I think it *is* TikTok culture, which often looks like an older sibling showing something new to the middle kid who then shows it to the youngest.
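To make concrete just how low the bar is for this kind of reverse-video fake, here is a minimal sketch using the moviepy library (assuming moviepy 1.x; the filenames are hypothetical, and most phone video editors ship a comparable reverse effect anyway):

```python
# A minimal sketch of the "reverse the footage" trick described above.
# Assumes moviepy 1.x (pip install moviepy); filenames are hypothetical.
from moviepy.editor import VideoFileClip
import moviepy.video.fx.all as vfx

# Load the real footage: a penny spinning, wobbling, and falling flat.
clip = VideoFileClip("penny_falling.mp4")

# Play it backwards, so the penny appears to rise and start spinning on its own.
reversed_clip = clip.fx(vfx.time_mirror)

# Write out the "levitation" video.
reversed_clip.write_videofile("penny_levitating.mp4", audio=False)
```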

Malicious misinformation is relatively rare on TikTok, but it seems to me that where it does emerge the “food hacks” video is one format that will mesh well with TikTok culture — fake hacks, tricks, and “inside knowledge” loaded with false framing.

What would that look like? It’d look very urban legendish: If there’s an eight next to your product’s barcode it means it was produced by slave labor, if you find these marks in the Starbucks bathroom it’s a sign children have been trafficked there. If your phone makes this kind of static noise it could be a sign of radiation from cell towers, and you need to move away from them immediately. For political stuff, TikToky versions of “this voting machine didn’t record my vote” etc.

The good news is that since there’s not really a way for creators to monetize anything on TikTok yet, it’s likely to be pretty tame compared to platforms like Facebook, YouTube, or Instagram, at least in the immediate future. But it’d be nice to see people thinking about this sooner rather than later — how do you reconcile a TikTok culture that values the sort of wink-wink-nudge-nudge “Santa Claus” fakery with the desire to keep more toxic fakery at bay?

We’ll be adding a couple TikTok examples to our current educational materials, but in the meantime feel free to ask your TikTok misinformation questions below.

Pelosi and Double-Tracking

There’s a video going around that purportedly shows Nancy Pelosi drunk or unwell, answering a question about Trump in a slow and slurred way. It turns out that it is slowed down, and that the original video shows her quite engaged and articulate.

Two things about this. The first is that our four moves (SIFT) apply well to this incident. Specifically, the “T” in SIFT is “Trace quotes, claims, and media to the original context.” In this case you can watch the original video on C-SPAN and see the difference immediately.

But what if you can’t trace it? In general, if the provenance of the video is hidden but there is clearly an unlinked original source, wait a bit. Even decent news sources can be godawful at linking original sources, but usually for a big video like this people will point you to the original within a day or two, which is what happened here.

The second thing to watch is how the media ecosystem works as, well, a system. When people look at the impact of false news they often measure how much of it makes it to mainstream broadcasts. But very often the way networked lies and mainstream news interact is synergistic. So as this false Facebook video is being circulated to millions of viewers, the Fox News show Lou Dobbs Tonight airs a different video of Pelosi, with some instances of her stammering edited together, and asks “What’s going on?” Age? Illness? The video pushes beyond the bounds of acceptable journalism, but stays within the bounds of what is currently permissible on air. The guest commentator is muted but pointed in her replies — she’s getting old, probably pushing herself too hard, maybe needs to step aside.

In musical production there is a technique called double-tracking, and it’s not a perfect metaphor for what’s going on here, but it’s instructive. In double-tracking you record one part — a vocal or a solo — and then you record that part again, with slight variations in timing and tone. Because the two tracks are close, they are perceived as a single track. Because they are different, though, the track is “widened”, feeling deeper and richer. The trick is for them to be different enough to widen the track but similar enough that they blend.
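If it helps to hear the metaphor in code, here is a toy numpy sketch of double-tracking (purely illustrative; the tone, delay, and detune values are invented): one take is blended with a copy of itself that is slightly late and slightly detuned, and the ear hears a single, wider part.

```python
# Toy illustration of double-tracking: mix a signal with a slightly delayed,
# slightly detuned copy of itself. All values here are invented for illustration.
import numpy as np

sr = 44100                          # sample rate in Hz
t = np.arange(sr * 2) / sr          # two seconds of "tape"

# "Take one": a 220 Hz tone standing in for the vocal or solo.
take1 = np.sin(2 * np.pi * 220.0 * t)

# "Take two": the same part, about 15 ms late and detuned by a fraction of a percent.
take2 = np.sin(2 * np.pi * 220.6 * t)
delay = int(0.015 * sr)
take2 = np.concatenate([np.zeros(delay), take2])[: len(take1)]

# Blend: close enough that the ear fuses them into one part,
# different enough that the part feels wider, deeper, richer.
double_tracked = 0.5 * (take1 + take2)
```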

Anyway, that’s what you see with a lot of disinfo campaigns. On the wild west of social media, outright lies are spread. Usually the outright lies don’t make it to the mainstream outlets exactly as spread, but a very similar and dishonestly spun story is pushed at the same time through broadcast. The two blend into one, able to use the freedom of the web to build shock and the amplification of traditional media to build a sense of veracity and extend the reach. You saw this with the caravan in 2018 and Clinton’s “sickness” in 2016. And so on. Two tracks — one through viral spread and the other through official channels — blended into something more damaging than either track alone.

Using Changes in Framing to Figure Out Where to Focus Attention

I have so much writing backlogged I need to get a few quick hits out to clear the logjam.

Here’s a good example of a statistical false frame that’s visual enough for a slide.

It says “Washington Post” on the bottom there, but of course the Washington Post version lacks the “presidential term” markers.

When you see that a framing has been added like that, it’s wise to think through what has been added and whether it’s accurate. And of course with a little thought you’ll hopefully ask why, if it covers one year of the Trump presidency and eight years of Obama’s, the boxes marking their terms are nearly equal in size. (Weird, right?) If you’re particularly adept you might ask why Obama’s term begins in 2007, which I seem to remember as the Bush presidency, though honestly I was drinking more back then, so who knows.

It’s worth asking whether our “T” move (Trace claims, quotes, and media to the original context) works here, since the original graphic doesn’t settle the questions of what economy Obama inherited in 2009 or what economy he left us with. I think you could point to the context the article adds around the charts as useful (2017 figures are before the Trump tax cuts and before his first budget). Still, it doesn’t give you the answer outright; you’re going to have to think it through, and you *could* come to the same conclusions without going to the original context first.

But what the trace does in this case is show students where to look. By calling attention to what’s been added, removed, or altered, it focuses their thought in the right area. Show a student the initial graphic and say “Hey, what’s the problem with this graph?” and you’ll get a flood of answers — Is it inflation? How do we know they each caused this? It starts at $50k, it’s a bad axis! (Students love this one.) Going to the original context and looking at what has been altered solves the students’ biggest issue: where to focus their thinking first, given a bewildering array of options.
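For instructors who want a quick visual of this point, here is a rough matplotlib sketch (the numbers are invented for illustration, not the Post’s data): the same hypothetical series plotted twice, once plain as a stand-in for the original chart, and once with “presidential term” shading overlaid as a stand-in for the altered version. Putting the two side by side shows exactly what was added, which is where you want student attention to go first.

```python
# Rough classroom sketch: the same (invented) series plotted twice, once plain
# as a stand-in for the original chart, once with added "presidential term"
# shading as a stand-in for the altered version. None of these numbers are
# the Post's data; they exist only to show how an added frame changes the read.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2007, 2018)
income = np.linspace(55_000, 63_000, len(years))   # hypothetical household income

fig, (ax_orig, ax_framed) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

# Stand-in for the original context: just the data.
ax_orig.plot(years, income)
ax_orig.set_title("Original context (no term markers)")

# Stand-in for the altered version: same data, plus term shading drawn to
# start in 2007, echoing the issue flagged above.
ax_framed.plot(years, income)
ax_framed.axvspan(2007, 2016, alpha=0.15, color="blue", label="marked 'Obama'")
ax_framed.axvspan(2016, 2017, alpha=0.30, color="red", label="marked 'Trump'")
ax_framed.legend()
ax_framed.set_title("With added framing")

plt.tight_layout()
plt.show()
```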

UPDATE:

Since people asked, here’s the modified image with the real terms of office:

Note that even this is a bit unfair; most economists would say that the influence of the President on the economy (to the extent there is one) is felt through the mechanism of the budget and associated tax policy, and that does not get passed until the fall of the first year, with the tax policy going into effect for the following year. If you shift that, of course, then there is no part of this graph that falls under a Trump budget, and the graph looks like this:

It’s also worth noting that if you go to the article there is plenty there to critique the Obama economy over — there’s pretty broad agreement that it’s surprising wages have not increased given the strength of the economy, and economists point out that the effects seen here are probably not pay raises at all but the result of increased employment (e.g., if one spouse got cut to part-time in the recession and can now get full-time work, household income increases, but the rate of pay does not).