It’s not the claim, it’s the frame

Putting a couple of notes from Twitter here. One of the ideas of SIFT as a methodology (and of SHEG's "lateral reading" as well) is that before reading, a person must construct a context for what they are about to read. On the web that's particularly important, because the rumor dynamics of the web tend to level and sharpen material as it travels from point A to point Q, and because bad actors actively engage in false framing of claims, quotes, and media.

But it's also a broader issue when considering source-checking. I've had people share RT articles with me that are more or less "true," for example. When I push back and say they shouldn't be sharing RT articles, since RT is widely considered to be a propaganda arm of the Kremlin, the response is often "Well, do you see anything false in the article? What's the lie?"

This isn’t a good approach to your information diet, for a couple reasons. The first is that a news-reading strategy where one has to check every fact of a source because the source itself cannot be trusted is neither efficient nor effective. Disinformation is not usually distributed as an entire page of lies. Seth Rich, for example, did exist, was killed, and did work at the DNC. His murder does remain unsolved. Even where people fabricate issues, they usually place the lies in a bed of truth.

But the other reason not to share articles from shady sources is that the frame can be false, even if the facts are correct. Take, for example, this RT coverage of the Seth Rich murder, from a story about Assange offering a reward for information on his killers. The implication of the story is that it is possible Seth Rich was killed for leaking the DNC emails.

Rich worked as voter expansion data director at the DNC before he was shot twice on his way home on July 10. He died later in hospital.

“If it was a robbery — it failed because he still has his watch, he still has his money — he still has his credit cards, still had his phone so it was a wasted effort except we lost a life,” his father Joel Rich told local TV station KMTV.

See the frame? Responsible reporting would add context:

  • The DNC emails are universally believed by experts to have been hacked, not leaked.
  • The “data director” position sounds email-ish, but had no access to email systems.
  • The Washington, D.C. police said, regarding the robbery, that in robberies where someone is killed it's extremely common for the credit cards and phone to be left behind: people generally get shot when something goes wrong, and the suspects are anxious to flee the scene before the gunshot brings police to investigate.

There's not a lie in the article (that I can see), but the way the article is framed is deceptive. And there's no way to know that as an average reader, because you don't know what you don't know. Without expertise you can't see what is missing or what has been deceptively added. So zoom out, and if the source is dodgy, skip it. Find something else. Share something else. You're not as smart as you think you are, and reading stories designed to warp your worldview will, over time, warp your worldview.

The CBC Infolit Bot May Make People Worse at the Web

I signed up for the CBC Chatbot that teaches you about misinformation. The interface was surprisingly nice — it felt less overwhelming than the typical course stuff I work with. So kudos on that.

On the down side it’s likely to make people worse, not better, at spotting dodgy Facebook pages.

Why? Because — like a lot of reporters, frankly — they’ve taken “fake news” [sigh] to be this narrow 2016 frame of “Pretending to be a known media company when you’re not”. And that results in this advice:

The bot tells me that a verification checkmark means a site is “legitimate”.

What does “legitimate” mean here? I assume to the people that wrote the course it means that the account is not being spoofed, that it really is the organization that it purports to be. This is, in turn, based on the 2016 disinformation pattern where there were some very popular sites and pages pretending to be organizations that they were not (e.g. the famous fake local newspapers).

There are two problems with this. First, this method of disinformation is relatively minor nowadays. I still do a prompt or two on it in my classes, but I find there are almost no current examples of it reaching viral status. I was talking to Kristy Roschke at News Co/Lab last week, and she was saying the same thing. As a teacher and curriculum designer, at first you're like, "Wow, it's getting hard to find new examples of this to put in the course!" And then, at a certain point, you ask: if it's so hard to find viral examples of this, should it still be in the course at all?

The second problem is more serious. Because in solving a problem that increasingly does not exist, the mini-course creators birth a new problem: the belief that the checkmark is a sign of trustworthiness. If there is a checkmark, they say, you know the page is "legitimate." I don't know if the people who wrote this were educators or not, but a foundational principle of educational theory is that it doesn't matter what you say; what matters is what the student hears. And what a student hears here, almost certainly, is that blue checkmarks are trustworthy.

That's a problem, because the real vectors of disinformation at this point are often blue-checkmarked pages. Here, for example, is the list of central hubs and central sources of conspiracy theorizing about the White Helmets in Syria, from Starbird et al.'s paper on the "echo-system" (I've edited the list to include only central sources and hubs for orgs where there is a Facebook page).

List of domains that were central sources and hubs of White Helmets conspiracy theorizing.

It's a little difficult to explain this clearly and precisely, but let's just say the above domains are part of a network of sites through which certain types of disinformation are propagated. The people running these sites have various levels of intent around that: RT, obviously, is considered by most experts in the area to be a propaganda arm of the Russian government (and in particular supports Putin's agenda and interests). Same with Sputnik. The others may be involved for more idealistic reasons, but what the work of Starbird et al. shows is that in practice they uncritically reprint the stories introduced by the Russian entities with only minor alterations, and as a result become major vectors of disinformation.

In 2019, these "echo-systems" are far more likely to be the source of disinformation than a spoofed CBC page (this was likely the case in 2016 too, but at least then spoofing was in the running). But what do we find when we apply the blue checkmark test to them? Half of them are blue checkmarked.

Am I saying you can never read any of these sites? Of course not. I don’t think it’s a good investment of your time to read Russian or Syrian propaganda and disinformation, and it has bad social effects of course, but you should read what you want.

From the online media literacy standpoint, however, a media-literate person would not read these sites without understanding the ways in which they are very, very different from the CBC, Reuters, or The Wall Street Journal. Focusing on the blue checkmark first has the potential to mislead a new generation of people about that, the way that focusing on dot-orgs misled the last.

It's not just state actors that win in a "trust the blue checkmark" world, incidentally. Look through the medical misinformation space and you'll find plenty of blue checkmarks. And we haven't even gotten into the other side of the problem: the number of pages that are from trustworthy and important sources but don't have a checkmark, and hence will be discarded out of hand, not just as "unverified organizations" but as illegitimate.

ProPublica has a checkmark, but their Illinois group does not.

Education is Hard

It’s really hard to get this stuff right. To do it in education we run and re-run lessons with students, then assess in ways that allow us to see if students are misconstruing lessons in unanticipated ways.

I learned in an early iteration of our materials, for example, that the way I was talking about organizations caused a very small percentage of students (less than 2 percent) to walk away with the idea that organizations with bigger budgets were better than those with smaller budgets. That's not what we said, of course, but it's what a few students heard. (We were trying to point out that something claiming to be a large professional organization, for example the American Psychological Association, should normally have a large budget, whereas a professional organization that claimed to speak for an industry but had a budget of $70,000 a year probably wasn't what it claimed to be.) So we modified how we presented that, and are hammering out a concept we call commensurability (we actually borrowed this idea from the Calling Bullshit course's discussion of how to think about expansive academic claims and the reputation of various publication venues).

Early on, we also realized that the way we phrased the question of whether something was a trustworthy source didn't account for the news-genre specificity of trust. (E.g., you might trust your local TV station to report on a shooting, but you probably should not trust it to give you diet advice, as most local stations have no real expertise in that.) We changed the phrasing of certain questions to "Is this a trustworthy source for this sort of story or claim?" Then we meshed some discussions with the ACRL framework for information literacy, particularly the first frame: authority is constructed and contextual.

And we did this sort of work repeatedly, both with the students and faculty in the 50+ courses involved in our project and in conversations with people outside our project who were using the materials. We get there because we are constantly doing formal and informal assessment against authentic prompts, and looking for points of student confusion. We get there because we assess, and can say at the end that we improved student performance on the sorts of tasks they are actually confronted with in the real world.

It’s hard, and it’s a never ending process. But as app after app and mini-course after mini-course rolls out on this stuff, it’s worth asking if the people producing them are approaching them with the same eye towards the true problems we face and the true sources of student confusion. If they aren’t, it is quite possible they are doing more harm than good.

9 Comments on “Doorbell Video” and Traditional News

Years ago, when I was an online political community admin, a member of our community invented a form of blog post meant to spawn discussion, one less about capturing fully formed thoughts and more about opening questions. He called it nine comments, and it was (I think) one of the best innovations of Blue Hampshire front-paging. (Actually, second-best. The best innovation was the Citizen Whip Count we ran for marriage equality legislation, which so stressed out the state party apparatus that they back-channeled numerous times to tell us to cut it out. But I digress.)

Anyway, 9 comments for the week on Ring video and news coverage, in no particular order.

One: News Is Not Prepared for Doorbell Video

This person’s doorbell caught the moment their house was destroyed by a tornado. There’s a way in which this is educational, and could save lives — showing people how quickly a weather event like this can sneak up on you. But it’s also a reminder that news is not prepared for a doorbell video world. I know that there are codes in place for the use of citizen video, and surveillance cam video. But scale makes a difference. And the scale of this is going to be huge.

Two: Push vs. Pull Video

In network design we often talk about push and pull architectures. In a pull architecture, you go out and request something. A push architecture finds things relevant to you and pushes them to you without a request.

The technical meanings of these terms are narrow, but I often think of them as information-seeking modes. So when a robbery happens and people check to see if there is video, that's pull: you go looking for the vid you need. A video like the Philando Castile video was push, pushed out to people even before they knew there was an event. In that case the pushed event was also newsworthy, so the distinction isn't entirely about newsworthiness. But these are different modes. (Network terminology purists, go ahead and hate, I'm over it.)
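If it helps to see the two modes side by side, here's a minimal sketch in Python. It's my own illustration, not anything from the post or from networking practice, and names like VideoArchive, VideoFeed, store, request, subscribe, and publish are invented for the example: the pull consumer asks an archive for footage when it decides it needs it, while the push feed delivers footage to subscribers who never asked.

```python
# Minimal sketch of pull vs. push as information-seeking modes.
# All names here are illustrative, not a real API.

class VideoArchive:
    """Pull: nothing moves until a consumer comes looking for footage."""

    def __init__(self):
        self._clips = {}

    def store(self, clip_id, clip):
        self._clips[clip_id] = clip

    def request(self, clip_id):
        # The consumer supplies the intent: "a robbery happened, is there tape?"
        return self._clips.get(clip_id)


class VideoFeed:
    """Push: footage is delivered to subscribers without any request."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, clip):
        # Subscribers receive the clip before they knew there was an event.
        for notify in self._subscribers:
            notify(clip)


if __name__ == "__main__":
    # Pull mode: something happens, then someone goes looking for the tape.
    archive = VideoArchive()
    archive.store("front-door-0710", "clip bytes ...")
    print("pull:", archive.request("front-door-0710"))

    # Push mode: the clip arrives in subscribers' feeds unbidden.
    feed = VideoFeed()
    feed.subscribe(lambda clip: print("push:", clip))
    feed.publish("clip bytes ...")
```

The asymmetry is the point: in pull mode the consumer supplies the intent; in push mode the intent is supplied for them.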

Three: Push video and SHAREABLE RING CONTENT

Some Ring content is pull: something happens and we review the tape, or send it to reporters. Some is push: the event itself is important because it was captured on camera. The doorbell video creates the event.

Four: It’s the Push Video that concerns me

This particular news event would be noteworthy no matter what. It's a tornado hitting a house, and that's news. My worry, though, is the number of things that will become news stories simply because video captures them.

Five: Easy availability of content shapes coverage, coverage warps reality

Yeah, yeah, yeah, McLuhan etc. But this basic principle isn’t really in doubt. The cost of acquiring Ring video is going to be trivial compared to putting people on the ground. But what is this video best set up to capture? What does engaging content look like on this platform? Because that’s where news might be going.

Six: The genres of Ring video are being constructed as we speak and we have little idea of what they will be

Weather videos. Hassle daughter’s date videos. Hassle garbage picker videos. Package thief videos. Cryptic event videos. Suspicious person videos. And news organizations are looking for new angles on SHAREABLE RING CONTENT, different places they can slot it into their existing coverage. Lifestyle, Crime, Weather.

Seven: Again, it’s push that’s the shift

It's not even just that it's push in terms of spread; it's push in terms of filming as well. No one is seeing something and deciding to pull out a camera. The decision comes after it's captured.

It’s worth thinking about how the easy availability of Ring videos to newsrooms is going to shape coverage (especially local coverage, but also national). Is this where we want to go?

Eight: You don’t need an Amazon editor for a Ring News Dystopia

There is rightly a lot of focus on the sort of “communities” Amazon is looking to build around doorbell video. They could do a lot of harm. But news can be shaped dramatically by Ring video availability without the platforms becoming involved. Ring video is showing up on local broadcasts and news sites already. Some of that video is probably useful, but much of it creates a world that is even more paranoid and divorced from reality than your average local broadcast, and that’s saying something.

Nine: Viscerality of Doorbell Video Crime

The vaguest thought, but I'm struck by the viscerality of these Ring crime videos, how they feel when you are watching them compared to even traditional sensationalist coverage. I don't think we're psychologically prepared for this, individually or as a country.

Check, Please! Starter Course Released

As of yesterday, we've released the Check, Please! Starter Course, a three-hour online module on source- and fact-checking that can be dropped into any course or taken as a self-study experience.

Tweet announcing launch of Check, Please!

The techniques we teach in the course are the same moves found in the popular open textbook Web Literacy for Student Fact-Checkers, but we have relentlessly shaved the lessons down to what is absolutely needed.

It’s called a starter course because what we heard from people using our materials is this — students are OK going through some general prompts and examples in their course, no matter what the course is, but they need to see pretty quickly how this material applies to the specific course it is embedded in.

So our starter course is meant to be a quick induction into the basics of the four moves: Stop, Investigate the source, Find trusted coverage, and Trace claims, quotes, and media to the original context (acronym: SIFT). Our plan is to work with other faculty to build add-on modules that support various types of courses. Students will get through the first week of general instruction, but by the second week they will be practicing the moves while looking at climate change, the sociology of racism, writing and research methods, journalism, or science communication. The modules will follow the same rhythm that we've found works in the general portion: quick fact- and source-checking activities alternated with larger discussions about our current information environment.

If you plan to use it, please check the teacher's notes linked from the first page. They contain information on ways to create a customized course out of our materials, along with an explanation of how to export a plain HTML version that better suits accessibility needs around screen readers and also provides students with a downloadable guide.

Ring Videos Create a Community Demand for Shareable Crime

I’ve been going through my NextDoor community because — well, I have to keep on top of new problems in social media and information. On good days that means I scroll through TikTok, on bad ones, NextDoor.

One thing people occasionally do on NextDoor is share Ring videos. Some are of legit crimes; the ones I’ve seen are mostly car prowls, where thieves go door to door looking for unlocked cars to steal stuff from. Others are not — e.g., sharing videos of garbage pickers (and yes, the irony of someone hooking up a home camera to Amazon and then worrying about someone picking through their garbage is not lost on me).

It's really early days and there are not that many Ring videos shared on NextDoor. But still, what I sense (particularly through one video I watched, where a man hassled a homeless man going through his garbage with what I think was a sense of performing for a future NextDoor audience) is that people see a local Ring video with either criminals or conflict in it as a hot commodity. If you have a video that shows suspicious activity, or even better shows you "standing up" to criminals or people you *think* are criminals, you're the belle of the ball for the night. You post, and everyone gathers around for a couple rounds of "ain't it awful" and "good for you." And the conversation ends, of course, with a bunch of people saying "I really have to get a Ring."

Get a Ring for protection? Maybe. But that's not all. People have to get a Ring partially because that's the only way to get in on the game of video sharing. Which leads to the weirdest dynamic of all: you not only need a Ring to share videos like these, you also, most importantly and bizarrely, need crime to happen.

So what happens in communities where the demand for sharable crime exceeds the available crime in the community?

We've been through this social story before: Facebook and others created a popular demand for a certain type of story that traditional media (and reality) wasn't providing. So people warped reality to meet the need.

In the case of Ring + sharing, there will be pressure on individuals to take the most minor incidents and frame them sensationally, to create incidents with drama, to edit clips deceptively, to build (or tap into) deep narratives that imbue the mundane with tension, and maybe even to fake content (faking seems risky to me in a small community where you know people, but the P. T. Barnum quote applies here). When crime content is scarce, people will expand their definition of crime. When suspicious activity is scarce, people will expand the definition of suspicious. When those expansions still fail to serve up enough content, people will engage in even more disturbing stuff. The local dynamic may also draw on more engaging, nationally viral Ring videos that serve to structure local narratives. Suddenly your hassling-the-homeless-man video looks braver when shared against the background of a violent conflict over garbage picking the next state over.

Maybe at some point the novelty wears off, and people get off these platforms or find something else to share. I actually think there is a good chance the whole culture implodes out of awfulness. But given that the commercials for the product model exactly the sort of content you are supposed to be producing and consuming, and that customers find that content attractive, maybe not.

In any case, I've generally seen my misinformation/disinformation work as separate from the excellent work Chris Gilliard has been doing around Ring. But what we see here is a very disturbing parallel between the supply gap that fueled our current disinformation crisis and a coming supply gap in shareable Ring videos. History and theory show that when supply and demand fight it out, demand wins; we should think very carefully about how that might play out in this case.

Does It Stick?

A question we get asked a lot about our four moves curriculum is whether it sticks. Can a two- or three-week intervention really change people's approach to online information permanently?

Remember, we don’t do traditional news literacy. We don’t do traditional media literacy. We don’t teach people about newspapers, communications theory, or any of that. We just do one thing — give them a set of things to do in their first 60 seconds after encountering a piece of media. We do that for two to three weeks of class time, and talk a bit about practical issues around online information, algorithms, trolling, and the like.

We do make some efforts to check persistence. We do the post-assessment several weeks after the last class session, to see what happens after skills decay. We test with authentic prompts, to mimic as precisely as possible the context in which students will exercise the skills. But still the question comes up: are students going to keep doing this? Like, really really? A formal assessment of that would involve some seriously creepy surveillance of students. But we got a powerful anecdotal piece of evidence a while ago.

The background: CUNY Staten Island implemented our two-week curriculum in their Core 100 class for freshmen last fall. A few weeks ago the coordinator of the Core program got this letter from one of the school's scholarship advisors about some spring scholarship applications. The advisor who wrote it had no idea of the changes in the Core program and had never heard of the four moves. I am reproducing it in its entirety here, partly because I want you to know I am not cherry-picking, and partly because the advisor writes with a beautiful clarity that I'm not sure I could match (I love a beautiful email!):

Dear Donna,

I’ve been meaning to tell you for a few months now that the Core program deserves a HUGE kudos, and that I am very impressed with the training students are receiving through Core 100.

Each year, I run our campus competition for the Jeannette K. Watson Fellowship, which is a prestigious opportunity that gives students generous stipends, internship experience, mentoring and professional development training. As part of our campus competition, my committee and I interview candidates as the final stage in the nomination process, as all candidates must then attend an extensive day-long interview session at the Watson Foundation.

One of the questions our committee asks applicants is, “Where do you get your news?” The fellowship seeks students who are knowledgeable of domestic and global issues, as well as students who are motivated to affect positive social change. This question is often asked of nominees who go forward into the official competition, therefore, we make it a point to ask this question for our internal campus competition.

In years past, we received answers such as social media, or perhaps one or two popular news stations, etc. Occasionally a student would cite the NY Times as a primary source for news. This year, we were astounded at the answers we received to this question. Nearly every applicant told us how they compare different news stations for different perspectives, and how they seek to verify the news they are reading. Most applicants further cited international sources of news for an even wider perspective. We couldn’t believe the change this year – how intelligent and worldly and diligent they all sounded! (More so than most older adults!) One of the applicants told us that she learned to do this in Core 100.

Whatever you’re doing, it’s working. I’m very impressed and quite moved.

Sincerely,

Michele

Last night I mentioned this letter to Paul Cook, who has taught using these methods at Indiana University Kokomo. I expected him to say something along the lines of "Wow, that's incredible!" But he didn't. He said, "Honestly, Mike, that doesn't surprise me at all." And he was right. It's moving to see, but it's also completely consistent with our experience of teaching the course. It's moving to me precisely because it's what we see too.

You see the moves in Michele's description, of course: find other coverage, investigate the source. The habits we push. But you also see something that I often have trouble explaining to others: with the right habits, you find students start sounding like entirely different people. They start being, in some ways, very different people. Less reactive, more reflective, more curious. If the habits stick, rather than decay, that effect can be cumulative, because the students have done that most powerful of things: they have learned how to learn. And the impact of that can change a person's life.