Smoking out the Washington Post imposter in a dozen seconds or less

So today a group known for pranks circulated an imposter site that posed as the Washington Post, announcing President Trump's resignation on a post-dated paper. It's not hard for hoaxers to do this: anyone can come up with a confusingly similar url to a popular site, grab some HTML, and make a fake site. These sites often have a short lifespan once they go viral, since the media properties they are posing as lean on the hosts, who pull the plug. But by then the damage is done, right?

It’s worth noting that you don’t need a deep understanding of the press or communications theory to avoid being duped here. You don’t even need to be a careful reader. Our two methods for dealing with this are dirt simple:

  • Just add Wikipedia (our omnibar hack to investigate a source)
  • Google News Search & Scan (the technique we apply to stories that should have significant coverage).

You can use either of these for this issue. The way we look for an imposter using Wikipedia is this:

  1. Go up to the “omnibar” and turn the url into a search by adding space + wikipedia
  2. Click through to the article on the publication you are supposedly looking at.
  3. Scroll to the part of the sidebar with a link to the official site and click it.
  4. See if the site it takes you to is the same site you were just looking at.

Here’s what that looks like in GIF form (sorry for the big download).

I haven’t sped that up, btw. That’s your answer in 12 seconds.
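(For the curious, here's roughly what that habit does, sketched in code. This isn't part of the technique, and the helper names and the look-alike URL in the example are mine, but the endpoints are the real public Wikipedia and Wikidata APIs: look the publication up, find the official website listed for it, and compare domains.)

```python
# Rough sketch of the "just add Wikipedia" check: look up the publication,
# find the official website listed for it, and compare domains. Helper names
# are mine; the endpoints are the public MediaWiki and Wikidata APIs.
import json
import urllib.parse
import urllib.request


def _get_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def official_site_for(publication_name):
    # 1. Find the Wikipedia article for the publication.
    search = _get_json(
        "https://en.wikipedia.org/w/api.php?action=query&list=search&format=json"
        "&srsearch=" + urllib.parse.quote(publication_name)
    )
    title = search["query"]["search"][0]["title"]

    # 2. Get the article's Wikidata item ID.
    pages = _get_json(
        "https://en.wikipedia.org/w/api.php?action=query&prop=pageprops&format=json"
        "&titles=" + urllib.parse.quote(title)
    )["query"]["pages"]
    item_id = next(iter(pages.values()))["pageprops"]["wikibase_item"]

    # 3. Ask Wikidata for the "official website" claim (property P856).
    claims = _get_json(
        "https://www.wikidata.org/w/api.php?action=wbgetclaims&format=json"
        "&entity=" + item_id + "&property=P856"
    )
    return claims["claims"]["P856"][0]["mainsnak"]["datavalue"]["value"]


def looks_like_imposter(suspicious_url, publication_name):
    # Compare the domain you're looking at against the official one (Python 3.9+).
    real = urllib.parse.urlparse(official_site_for(publication_name)).netloc.removeprefix("www.")
    seen = urllib.parse.urlparse(suspicious_url).netloc.removeprefix("www.")
    return seen != real and not seen.endswith("." + real)


# A made-up look-alike domain vs. the publication's real one:
print(looks_like_imposter("https://washingtonpost.com.fake-example.net/story", "The Washington Post"))
```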

Now some people might say, well, if you read the date of the paper you'd know, or if you knew the fonts associated with the Washington Post you'd realize the fonts were off. But none of these are broadly applicable habits. Every time you look at a paper like this there will be a multitude of signals that argue for the authenticity of the paper and a bunch that argue against it. And hopefully you pick up on the former for things that are real and the latter for things that aren't, but if you want to be quick, decisive, and habitual about it you should use broadly applicable measures that give you clear answers (when clear answers are available) and mixed signals only when the question is complex.

When I present these problems to students or faculty I find that people can *always* find what they “should have” noticed after the fact. But of course it’s different every time and it’s never conclusive. What if the fonts had been accurate? Does that mean it’s really the Post? What if the date was right? Trustworthy then?

The key isn’t figuring out the things that don’t match after the fact. The key is knowing the most reliable way to solve the whole class of problem, no matter what the imposter got right or wrong. And ideally you ask questions where a positive answer has a chance of being as meaningful as a negative one.

Anyway, the other route to checking this is just as easy. It's our check-other-coverage method, using a Google News search:

  1. Go to the omnibar, search [trump resigns]
  2. When you get to the Google results, don't stop. Click into Google News for a more curated search.
  3. Note that in this case there are zero stories about Trump resigning and quite a lot about the hoax.
  4. There is no step four; you're done.

Again, here it is in all its GIF majesty:

You'll notice that you do need to practice a bit of care here. Some publishers try to clickbait the headline by putting the resignation first, hoping that the part noting it was fake gets trimmed off and the headline gets a click. (If I were king of the world I'd have a three-strikes policy for this sort of stuff and push repeat offenders out of news search or out of the cluster feature spots, but that's just me.) Still, scanning over these headlines even in the most careless way possible, it would be very hard not to pick up that this was a fake story.

Note that in this case we don't even need these fact-checks to exist. If we get to this page and there are no stories about Trump resigning, then it didn't happen, for two reasons. First, if it had happened there would be broad coverage. Second, even if the WaPo had been first with the story, we would see it in the search results.
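(If you want to see that "search and scan" spelled out in code, here's a rough sketch. It pulls headlines from Google News's public RSS search feed; the function name is mine, and in practice your eyes do this scan in a couple of seconds.)

```python
# Rough sketch of "check other coverage": pull the Google News results for a
# query as RSS and scan the headlines. In real life you just eyeball the page.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET


def news_headlines(query, limit=15):
    url = "https://news.google.com/rss/search?q=" + urllib.parse.quote(query)
    with urllib.request.urlopen(url) as resp:
        feed = ET.parse(resp)
    return [item.findtext("title") for item in feed.iter("item")][:limit]


for title in news_headlines("trump resigns"):
    print("-", title)
# If the resignation were real, this list would be wall-to-wall coverage of it.
# If all you see are hoax stories (or nothing at all), it didn't happen.
```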

There are lots of things we can teach students, and we should. But I'm always amazed that, two years into this, we haven't even taught them techniques as simple as this.

Why Reputation?

I was reading An Xiao Mina's recent (and excellent) piece for Nieman Lab, and it reminded me that I had not yet written here about why I've increasingly been talking about reputation as a core part of online digital literacy. Trust, yes; consensus, yes. But I keep coming back to this idea of reputation.

Why? Well, the short answer is Gloria Origgi. Her book, Reputation, is too techno-optimist in parts, but is still easily the most influential book I’ve read in the past year. Core to Origgi’s work is the idea that reputation is both a social relation and a social heuristic, and these two aspects of reputation have a dynamic relationship. I have a reputation, which is the trace of past events and current relationships in a social system. But that reputation isn’t really separate from the techniques others use to decode and utilize my reputation for decision-making.

This relationship is synergistic. As an example, reputation is subject to the Matthew Effect, where a person who is initially perceived as smart can gain additional reputation for brilliance at a fraction of the cost of someone initially perceived as mediocre. This is because quick assessments of intelligence have to weight the past assessments of others: as a person expands their social circle, initial judgments are often carried forward, even if those initial judgments were flawed.
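To make that carrying-forward concrete, here's a toy model. It is entirely my own sketch, not anything from Origgi's book: suppose each new acquaintance forms a judgment mostly from the judgments already in circulation and only partly from direct evidence of the person's actual ability.

```python
# Toy sketch (mine, not Origgi's): each new acquaintance weights the judgments
# already in circulation far more heavily than direct evidence of ability, so
# a first impression keeps echoing long after the evidence says otherwise.

def reputation_after(initial_impression, true_ability, meetings=20, social_weight=0.8):
    judgments = [initial_impression]
    for _ in range(meetings):
        circulating = sum(judgments) / len(judgments)          # "what everyone says"
        fresh = social_weight * circulating + (1 - social_weight) * true_ability
        judgments.append(fresh)
    return judgments[-1]

# Two people with the same actual ability but different first impressions:
print(reputation_after(initial_impression=0.9, true_ability=0.5))  # still well above 0.5
print(reputation_after(initial_impression=0.2, true_ability=0.5))  # still well below 0.5
```

Same ability, durably different reputations, purely because the initial judgment gets weighted into every judgment that follows.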

Reputation as a social heuristic maps well onto our methods of course — both Origgi and the Digital Polarization initiative look to models from Simon and Gigerenzer for inspiration. But it also suggests a theory of change.

Compare the idea of “trust” to that of “reputation”. Trust is an end result. You want to measure it. You want to look for and address the things that are reducing trust. And, as I’ve argued, media literacy programs should be assessing shifts in trust, seeing if students move out of “trust compression” (where everything is moderately untrustworthy) to a place where they make bigger and more accurate distinctions.

But trust is not what is read, and when we look at low-trust populations it can often seem like there is not much for media literacy to do. People don’t trust others because they’ve been wronged. Etc. What exactly does that have to do with literacy?

But that’s not the whole story, obviously. In between past experience, tribalism, culture, and the maintenance of trust is a process of reading reputation and making use of it. And what we find is that, time and time again, bad heuristics accelerate and amplify bad underlying issues.

I've used PewDiePie's inadvertent promotion of a Nazi-friendly site as an example of this before. PewDiePie certainly has issues, and seems to share a cultural space that has more in common with /pol/ than #resist. But one imagines that he did not want to risk millions of dollars to promote a random analysis of Death Note by a person posting Hitler speeches. And yet, through an error in reading reputation, he did. Just as the Matthew Effect compounds initial errors in judgment when heuristics are injudiciously applied, errors in applying reputation heuristics tend to make bad situations worse: his judgment about an alt-right YouTuber flows to his followers, who then attach some of PewDiePie's reputation to the ideas presented there, based mostly on his mistake.

I could write all day on this, but maybe one more example. There’s an old heuristic about the reputation of positions on issues — “in matters indifferent, side with the majority.” This can be modified in a number of ways — you might want to side with the qualified majority when it comes to treating your prostate cancer. You might side with the majority of people who share your values on an issue around justice. You might side with a majority of people like you on an issue that has some personal aspects — say, what laptop to get or job to take. Or you might choose a hybrid approach — if you are a woman considering a mastectomy you might do well to consider what the majority of qualified women say about the necessity of the procedure.
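Just to make the family resemblance between these variants visible (a toy sketch of my own, not anything formal from Origgi), each one is the same majority rule run over a differently filtered crowd:

```python
# My own toy sketch of "side with the majority" and its variants: the same
# majority vote, taken over a crowd filtered by a different attribute each time.

def majority_position(people, relevant=lambda person: True):
    votes = [person["position"] for person in people if relevant(person)]
    if not votes:
        return None  # no relevant crowd to defer to
    return max(set(votes), key=votes.count)

people = [
    {"position": "A", "qualified": False, "shares_my_values": True},
    {"position": "A", "qualified": False, "shares_my_values": False},
    {"position": "A", "qualified": True,  "shares_my_values": False},
    {"position": "B", "qualified": True,  "shares_my_values": True},
    {"position": "B", "qualified": True,  "shares_my_values": True},
]

# "In matters indifferent, side with the majority":
print(majority_position(people))                                            # A
# Side with the qualified majority (say, a medical question):
print(majority_position(people, relevant=lambda p: p["qualified"]))         # B
# Side with the majority of people who share your values (a question of justice):
print(majority_position(people, relevant=lambda p: p["shares_my_values"]))  # B
```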

The problem, however, from a heuristic standpoint, is that it is far easier to signal (and to read the signal of) attributes like values or culture or identity than it is to read qualifications. And one under-remarked aspect of polarization is that, relative to other signals, partisan identity has become far easier to read than it was 20 years ago, while expertise has in some ways become more difficult.

One reaction to this is to say: well, people have become more partisan. And that's true! But a compounding factor is that as reputational signals around partisan identity have become more salient and reputational signals around expertise have become more muddled (by astroturfing, CNN punditocracy, etc.), people have gravitated toward weighting the salient signals more heavily. Stuff that is easier to read is quicker to use. And so you have something like the Matthew Effect: people become more partisan, which makes those signals more salient, which pushes more people to use those signals, which makes people more partisan about an expanding array of issues. What's the Republican position on cat litter? In 2019, we'll probably find out. And so on.
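Written down as a loop (again, nothing but my own toy sketch of the dynamic described above), it looks something like this: the more weight people put on a signal, the more salient it gets, and the more salient it gets, the more weight people put on it.

```python
# Toy sketch (mine) of the feedback loop: people weight whichever signal is
# easier to read, and the act of weighting a signal makes it more salient still.

def run_loop(partisan_salience, expertise_salience, steps=10, feedback=0.3):
    for step in range(steps):
        total = partisan_salience + expertise_salience
        weight_on_partisan = partisan_salience / total   # share of attention it gets
        # Relying on a signal amplifies it; the neglected signal erodes.
        partisan_salience += feedback * weight_on_partisan
        expertise_salience = max(0.05, expertise_salience - 0.5 * feedback * weight_on_partisan)
        print(step, round(weight_on_partisan, 2))

# A modest initial edge in readability snowballs toward near-total reliance:
run_loop(partisan_salience=0.6, expertise_salience=0.5)
```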

If you want to break that cycle, you need to make expertise more salient relative to partisan signals, and show people techniques to read expertise as quickly as they read partisan identity. Better heuristics and an information environment that empowers quick assessment of things like expertise and agenda can help people build better, fuller, and more self-aware models of reputation, and this, in turn, can have a meaningful impact on the underlying issues.

Well, this has not turned into the short post I had hoped for, and to do it right I'd probably need another ten pages. But one New Year's resolution was to publish more WordPress drafts, so here you go. 🙂