Quick Note on the Recent Buzzfeed Study

A couple of people have asked me to expand on comments I made in my recent Familiarity = Truth post. In it I say this about the Buzzfeed finding that over 50% of Clinton supporters who remembered fake anti-Clinton headlines believed them:

[A] number like “98% of Republicans who remember the headline believed it” does not mean that 98% of Republicans who saw that headline believed it, since some number of people who saw it may have immediately discounted it and forgotten all about it.

What does that mean? And how does it mitigate the effect we see here?

Well, for one, it means that despite Buzzfeed’s excellent and careful work here, they have chosen the wrong headline. The headline for the research is “Most Americans Who See Fake News Believe It, New Survey Says”. But the real headline should be the somewhat more subtle “Most Americans Who Remember Seeing Fake News Believe It, New Survey Says”.

Why is that important? Because you can see the road to belief as a series of gates. Here’s the world’s simplest model of that:

Sees it > Remembers it > Believes it

So, for example, if we start out with a thousand people, maybe only 20% see something. This is the filter bubble gate.

But then, only a certain number of those people who see it process it enough to remember it. And this is not a value-neutral thing — many decades of psychological research tell us we notice things that confirm our beliefs more than things that don’t. Noticing is the first part of remembering. So we should guess that people who remember a news story are more likely to believe it than those who don’t. Hence, when we read something like “Over 50% of people who remembered a fake headline believed it,” this does not mean that 50% of people who read it believed it, because remembering something predicts (to some extent) belief.

Let’s call this the “schema gate” since it lets through things that fit into our current schemas.

So how big is this effect? From the data we see, it’s smaller than I would have thought. I say this because when we look at the numbers of people who remember a headline, the Trump and Clinton numbers are not that far off. For instance, 106 Clinton supporters remembered seeing the famous murder-suicide headline, compared to 165 Trump supporters. While that is certainly quite a few more Trump supporters (and even more on a percentage basis), we have to assume a good portion of that difference is due to different networks of friends and filter bubble effects. If you assume that highly partisan Republicans get 50% or 75% more exposure to anti-Clinton stories, then there isn’t much room left for a schema gate effect.
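The back-of-the-envelope reasoning above can be sketched in a few lines of code. To be clear about what’s real and what isn’t: only the remember-counts (106 Clinton supporters, 165 Trump supporters) come from the survey; every rate in the funnel below is a made-up illustration of how the gates compose.

```python
# Back-of-the-envelope model of the "gates" on the road to belief.
# Only the remember-counts (106 Clinton / 165 Trump supporters) come
# from the survey; all gate rates here are illustrative assumptions.

def funnel(population, p_see, p_remember, p_believe):
    """Pass a population through the three gates in sequence."""
    saw = population * p_see            # filter bubble gate
    remembered = saw * p_remember       # schema gate
    believed = remembered * p_believe
    return saw, remembered, believed

# The world's-simplest-model example: 1,000 people, 20% see it.
# The 50% remember and believe rates are invented for illustration.
saw, remembered, believed = funnel(1000, 0.20, 0.50, 0.50)
print(f"saw={saw:.0f}, remembered={remembered:.0f}, believed={believed:.0f}")

# Observed ratio of remember-counts between the two groups:
ratio = 165 / 106
print(f"Trump/Clinton remember ratio: {ratio:.2f}")

# If filter bubbles alone give strong partisans 50-75% more exposure,
# that gap already explains most of the ratio, leaving the implied
# schema-gate multiplier close to 1 (i.e., weak):
for exposure_boost in (1.50, 1.75):
    print(f"exposure x{exposure_boost}: implied schema-gate "
          f"multiplier = {ratio / exposure_boost:.2f}")
```

The point of the sketch is just that the observed ~1.6x gap in remember-counts has to be split between the two gates: the bigger the exposure gap you assume, the smaller the schema-gate multiplier that remains.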

This leads to an interesting question — if we are really attached to a schema gate effect, then we have to dial down our filter bubble effect. Maybe filter bubbles affect us less than we think, if this many Democrats are seeing this sort of story in their feeds?

There are a couple of other far-out ways to make the math work, but for the most part you either get evidence of a strong filter bubble gate and a weaker-than-expected schema gate, or vice versa. Or you get both gates somewhat weaker than expected.

In any case, it’s one of the more fascinating results from the study, and if Buzzfeed or anyone else is looking to put a shovel into a new research question, this is where I’d suggest digging.


Announcing the Digital Polarization Initiative, an Open Pedagogy Project

So I have news, lots of news.

If you’re the sort of person who just wants to jump into what I’ve launched and started building with the help of others, you can go here now, see what we’re launching, and come back to read this later. For the rest of you, long theoretical navel-gazing it is…

A New Project

I’m working on a new initiative with AASCU’s American Democracy Project. I’ve chosen “Digital Polarization” as my focus. This phrase, which enjoyed a bit of use around the time of Benjamin Barber’s work in the 1990s but has not been used much since, was chosen partly because it remains a bit of a blank slate: we get to write what it means in terms of this project. I mean to use it as a bit of a catch-all to start an array of discussions on what I see as a set of emerging and related trends:

  • The impact of algorithmic filters and user behavior on what we see in platforms such as Twitter and Facebook, which tend to limit our exposure to opinions and lifestyles different from our own.
  • The rise and normalization of “fake news” on the Internet, which not only bolsters one’s worldview, but can provide an entirely separate factual universe to its readers.
  • The spread of harassment, mob behavior, and “callout culture” on platforms like Twitter, where minority voices and opinions are often bullied into silence.
  • State-sponsored hacking campaigns that use techniques such as weaponized transparency to try and fuel distrust in democratic institutions.

All good. So why, then, “digital polarization” as the term?

Digital Polarization

It’s probably a good time to say that on net I think the Internet and the web have been a tremendous force for good. Anyone who knows my history knows that I’ve given 20 years of my life to figuring out how to use the internet to build better learning experiences and better communities, and I didn’t dedicate my life to these things because I thought they were insignificant. I still believe that we are looking at the biggest increase in human capability since the invention of the printing press, and that with the right sort of care and feeding our digital environments can make us better, more caring, and more intelligent people.

But to do justice to the possibilities means we must take the downsides of these environments seriously and address them. The virtual community of today isn’t really virtual — it’s not an afterthought or an add-on. It’s where we live. And I think we are seeing some cracks in the community infrastructure.

And so as I’ve been thinking about these questions, I’ve been looking at some of history’s great internet curmudgeons. For example, I don’t agree with everything in Barber’s 1998 work Which Technology and Which Democracy?, but so much of it is prescient, as is this snippet:

Digitalization is, quite literally, a divisive, even polarizing, epistemological strategy. It prefers bytes to whole knowledge and opts for spread sheet presentation rather than integrated knowledge. It creates knowledge niches for niche markets and customizes data in ways that can be useful to individuals but does little for common ground. For plebiscitary democrats, it may help keep individuals apart from one another so that their commonalty can be monopolized by a populist tyrant, but for the same reasons it obstructs the quest for common ground necessary to representative democracy and indispensable to strong democracy.

Barber’s being clever here, and playing on multiple meanings of polarization. In one sense, he is predicting political polarization and fragmentation due to new digital technologies. In another he is playing on the nature of the digital, which is quite literally polarizing — based on ones and zeros, likes and shares, rate-ups and rate-downs.

Barber goes on, pointing out that this polarized, digital world values information over knowledge, snippets over integrated works, segmentation over community. He’s overly harsh on the digital here, and not as aware, I think, of the possibilities of the web as I’d like. But he is dead on about the risks, as the last several years have shown us. At its best the net gives us voices and perspectives we would never have discovered otherwise, needed insights into pressing problems just when we need them. But at its worst, our net-mediated digital world becomes an endless stream of binary actions — like/don’t like, share/pass, agree/disagree, all in an architecture that slowly segments us and slips us into our correct market position a click at a time, delivering us a personalized, segregated world. We can’t laud the successes of one half of this equation without making a serious attempt to deal with the other side of the story.

The “digital polarization” term never took off, but maybe as we watch the parade of fake news and calculated outrage streaming past us these days it’s as good a time as any to reflect along with our students on the ways in which the current digital environment impacts democracy. And I think digital polarization is a good place to start.

This is not just about information literacy, by the way. It’s not about digital literacy either. Certainly those things are involved, but that’s the starting point.

The point is to get students to understand the mechanisms and biases of Facebook and Twitter in ways that most digital literacy programs never touch. The point is not simply to decode what’s out there, but to analyze what is missing from our current online environment, and, if possible, supply it.

And that’s important. As I’ve said before, as a web evangelist in education it’s so easy to slip into uncritical practice and try to get students to adopt an existing set of web behaviors. But the peculiar power of higher education is that we aren’t stuck with existing practice — we can imagine new practice, better practice. And, in some cases, it’s high time we did.

A Student-Powered Snopes, and More

And so we have the Digital Polarization Initiative. The idea is to put together both a curriculum that encourages critical reflection on the ways in which our current digital environment impacts civic discourse, and to provide a space for students to do real work that helps to address the more corrosive effects of our current system.

Right now I am in the process of building curriculum, but we have the basics of one of the projects set up and outlined on the site. The News Analysis project asks students to apply their research skills and disciplinary knowledge to review news stories and common claims for accuracy and context. Part of the motivation here is for students to learn how to identify fake news and misinformation. Part of the motivation is for students to do real public work: their analyses become part of a publicly available wiki that others can consult. And part of it is to try to model the sort of digital practice that democracy needs right now.

In my dream world, students not only track down fake news, but investigate and provide fair presentations of expert opinion on claims like “the global warming this year was not man-made but related to El Niño” or “Cutting bacon out of your diet reduces your risk of bowel cancer by 20%.” Importantly, they will do that in the context of a wiki — a technology largely forgotten in the past few years, but one that asks us to rise above arguing our personal case and instead try to summarize community knowledge. Wiki is also a technology that asks that we engage respectfully with others as collaborators rather than adversaries, which is probably something we could use right about now.

There will be other projects as well. Analyzing the news that comes through our different feeds is an easy first step, but I’d love to work with others on related projects that either examine the nature of present online discourse or address its deficiencies. And we’re trying to build curriculum there as well to share with others.

In any case, check it out.  We’re looking to launch it in January for students, and build up a pool of faculty collaborators over the next couple weeks.

Familiarity = Truth, a Reprise

Almost a month ago, I wrote a post that would become one of my most popular on this site, a post on the They Had Their Minds Made Up Anyway Excuse. The post used some basic things we know from the design of learning environments to debunk the claim that fake headlines don’t change people’s minds because “we believe what we want to believe.” The “it didn’t matter” theory asserts that only people who really hated Clinton already would believe stories that intimated that Clinton had killed an FBI agent, so there was likely no net motion in beliefs of people exposed to fake news.

This graf from BGR most succinctly summarizes the position of the doubter of the effects of fake news:

On a related note, it stands to reason that most individuals prone to believing a hyperbolic news story that skews to an extreme partisan position likely already have their minds made up. Arguably, Facebook in this instance isn’t so much influencing the voting patterns of Americans as it is bringing a prime manifestation of confirmation bias to the surface.

In the weeks after the election I saw and heard this stated again and again, both in the stories I read and in the questions that reporters asked me. And it’s simply wrong. As I said back in November, familiarity equals truth: when we recognize something as true, we are most often judging if this is something we’ve heard more often than not from people we trust. That’s it. That’s the whole game. See enough headlines talking about Eastasian aggression from sources you trust and when someone asks you why we are going to war in Eastasia you will say “Well, I know that Eastasia has been aggressive, so maybe that’s it.” And if the other person has seen the same headlines they will nod, because yes, that sounds about right.

How do you both know it sounds right? Do you have some special area of your brain dedicated to storing truths? A specialized truth cabinet? Of course not. For 99% of the information you process in a day truth is whatever sounds most familiar. You know it’s true because you’ve seen it around a lot.

More on that in a minute, but first this update.

Buzzfeed Confirms Familiarity Equals Truth

New survey evidence out today from Buzzfeed confirms this.

Here’s what they did. They surveyed 3,015 adults about five of the top fake headlines of the last weeks of the election against six real headlines. Some sample fake headlines: “FBI Agent in Hillary Email Found Dead in Apparent Murder-Suicide” and “Pope Francis Shocks World, Endorses Donald Trump for President, Releases Statement.” Some sample real ones: “I Ran the CIA. Now I’m Endorsing Clinton” and “Trump: ‘I Will Protect Our LGBTQ Citizens'”.

They then asked respondents whether they had seen that headline, and if they had, whether that headline was accurate. Perhaps unsurprisingly, Trump voters who had seen pro-Trump headlines believed them at high rates that approached or exceeded belief in true news stories. Ninety-six percent of Trump voters who had seen a headline claiming Trump sent his own plane to rescue 200 marines, for example, believed it. Eighty-nine percent of Trump voters who had seen headlines about Trump protesters being paid thousands of dollars to protest believed them.

This in itself should prompt second thoughts about the thesis that fake news only affects extreme partisans: it’s absurd to claim that the 96% of Republicans who remembered that headline and believed it represent a particularly partisan fringe.

Now, caveats apply here: surveys about political matters can get weird, with people occasionally expressing themselves in ways that they feel express their position rather than their literal belief (we had debates over this issue with the “Is Obama a Muslim?” question, for instance). Additionally, we are more prone to remember what we believe to be true than what we believe to be false — so a number like “98% of Republicans who remember the headline believed it” does not mean that 98% of Republicans who *saw* that headline believed it, since some number of people who saw it may have immediately discounted it and forgotten all about it.



(Chart from Buzzfeed)

Here’s the stunning part of the survey. As mentioned above, Trump voters rated pro-Trump and anti-Clinton stories true on average, and overwhelmingly so. The lowest percentage of Trump voters believing a fake headline was accurate was 76%, and the highest was 96% with an average of 86% across the five headlines. But even though the headlines were profoundly anti-Clinton, 58% of the Clinton voters who remembered seeing a headline believed the headline was accurate.

Familiarity Trumps Confirmation Bias

I want to keep calling people’s attention to the process here, because I don’t want to overstate my claim. If I read the study correctly, 1,067 Clinton voters completed it. Of those voters, 106, or 10%, remembered seeing a headline stating that an FBI agent implicated in leaks of Clinton’s emails had died in a suspicious murder-suicide. The fact that this tracks people who remembered the headline and not people who saw it is important to keep in mind.

Yet among those 10% of Clinton supporters who remember seeing the headline “FBI Agent Suspected in Hillary Leaks Found Dead in Apparent Murder-Suicide” over half believed it was accurate.

These 10% of Clinton voters who ended up seeing this may differ in some ways from the larger population of Clinton voters. They may have slightly more conservative friends. They may be younger and more prone to get their news from Facebook. In a perfect world you would account for these things. But it is difficult to believe that any adjustments are going to overcome a figure like this. Over fifty percent of Clinton voters who remembered fake headlines that were profoundly anti-Clinton believed them, and no amount of controlling for differences is going to get that down to a non-shocking level.
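The arithmetic here is simple enough to lay out explicitly. The voter and remember counts are the survey figures quoted above; the 50% belief rate is just a lower bound standing in for “over half”:

```python
# Survey figures quoted above for the murder-suicide headline.
clinton_voters = 1067   # Clinton voters who completed the survey
remembered = 106        # of those, remembered seeing the headline
belief_rate = 0.50      # lower bound standing in for "over half"

share_remembering = remembered / clinton_voters
believers_lower_bound = remembered * belief_rate

print(f"{share_remembering:.1%} of Clinton voters remembered it")
print(f"at least {believers_lower_bound:.0f} of them believed it")
```

So the claim rests on roughly one in ten Clinton voters remembering the headline, and at least half of that tenth believing it — a figure no plausible adjustment for friend networks or age shrinks to something unremarkable.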

Why would Clinton voters believe such a headline at such high rates? Again, familiarity equals truth. We choose the people we listen to and read, and then when thinking about a question like “Did Obama create more jobs than George W. Bush?” we don’t think “Oh, yes, the Wall Street Journal had an article on that on page A4.” We simply ask “Does that sound familiar?”

That Troublesome Priest

So how does this work? I’ll diverge a bit here away from what is known and try to make an informed guess.

Facebook’s quick stream of headlines is divorced from any information about their provenance that would allow you to discount them. My guess is each one of those headlines, if not immediately discarded as a known falsehood, goes into our sloppy Bayesian generator of familiarity, part of an algorithm that is even less transparent to us than Facebook’s.

Confirmation bias often comes a few seconds later, as we file the information or weight its importance. Based on our priors we’re likely to see something as true, but maybe less relevant given what we know. I’d venture to guess that the Clinton voters who believed the murder-suicide story look very much like certain Clinton voters I know — people who will “hold their nose and vote for her” even though there is something “very, very fishy about her and Bill.” The death of the FBI agent is perhaps in the unproven, but disturbing range.

You see this in practice, too. I’ve had one Clinton voter tell me “I’m not saying she killed anyone herself, or even ordered it. But sometimes if you’re powerful and you say someone is causing you problems, then other people might do it for you. Like in Becket.”

That is a close-to-verbatim quote from a real Clinton voter I talked to this election. And for me statements like that are signs that people really do wrestle with fake news, because no matter what your opinion of Clinton is, she most definitely has not had people killed. (And no, not even in that “Who will rid me of this troublesome priest?” Becket way.)

Given our toxic information environment and the human capacity for motivated reasoning, I’m certain that many folks were able to complete the required gymnastics around the set of “facts” Facebook provided them. I’m just as sure a bunch of people thought about that Olympic-level gymnastics routine and just decided to skip it and stay home. How many? I don’t know, but in an election won by less than 100,000 votes, almost everything matters.

In any case, I said this all better weeks ago. I encourage you to read my more comprehensive treatment on this if you get the chance. In the meantime, I’d remind everyone if you want to be well-informed it’s not enough to read the truth — you also must avoid reading lies.