Update on Check, Please!

Short update on the Check, Please project.

We’re about halfway through the coding hours on this, which is a bit scary. We still have some expert hours from TS Waterman reserved at the end to solve the hard problems, but right now we’re solving the easy ones.

A couple weeks ago we put out a prototype. The prototype covered one of the three moves we wanted to showcase, was functional, and used the original concept of a headless Chrome instance in the background to generate the tutorials. The prototype did what good prototypes do and showed the project was possible, but there were three weak spots:

  • First, the Chrome screenshots could usually be manipulated to capture the right part of the screen (e.g. scroll down to a headline or get the correct Google result scrolled into view). But this was a bit more fragile than hoped as we tested it on a wide array of prompts.
  • Second, headless Chrome was really slow on some sites. Even on speedy sites, like Google, the fire-up and retrieval would normally be a couple seconds but could stretch to much, much more. We were headless-Chroming three sites, and on the occasional call where all three went slow we’d sometimes get over 30 seconds. This didn’t happen a lot (timings were usually about 10–15 seconds for the entire process) but it happened enough.
  • Finally, because headless Chrome is headless, a lot of the things needed to make the animation instructive (mouse pointers, cursors, omnibars) have to be added anyway via image manipulation.
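The slow-site problem, for what it’s worth, is partly addressable by running the three captures concurrently and putting a time budget on each one. A minimal sketch of that idea (the real capture call would be Selenium; `capture_site` below is just a stand-in):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def capture_site(url, delay=0.01):
    """Stand-in for the real headless-Chrome screenshot call."""
    time.sleep(delay)
    return f"screenshot:{url}"

def capture_all(urls, per_site_timeout=10.0):
    """Capture every site concurrently; a site that blows its time budget yields None."""
    results = {}
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        futures = {url: pool.submit(capture_site, url) for url in urls}
        for url, future in futures.items():
            try:
                results[url] = future.result(timeout=per_site_timeout)
            except FutureTimeout:
                results[url] = None  # fall back to a cached or mocked-up image
    return results
```

With something like this, one slow site degrades one panel of the tutorial instead of stalling the whole build.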

I played with the settings, with asynchrony, and with using a single persistent instance of ChromeDriver, and things got better, but it became clear that we should offload at least some problems to a caching mechanism, and where possible use PIL to draw mockups from an HTML request rather than doing everything through screenshots. So I’m in the middle of that rebuild now, with caching and some imaging-library rework. I’m hoping to get it reliably under 10 seconds max.
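The caching piece, at least, is simple in concept. A sketch, assuming a single-process server (the names here are illustrative, not the project’s actual code): cache each screenshot by URL with a time-to-live, so repeat checks of popular sites skip headless Chrome entirely.

```python
import time

class TTLCache:
    """Memoize expensive fetches (e.g. screenshots) with a time-to-live."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (timestamp, value)

    def get_or_fetch(self, key, fetch):
        """Return the cached value for key if fresh, else call fetch(key) and cache it."""
        now = time.time()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]
        value = fetch(key)
        self._store[key] = (now, value)
        return value
```

In practice you’d likely want this backed by disk rather than memory so it survives restarts, but the shape of the win is the same: the second person checking the same hoax pays almost nothing.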

Web Literacy Across the Curriculum

We’re still teaching history using only print texts even as kids are being historicized online by Holocaust deniers and Lost-Causers. We’re teaching science in an era when online anti-vaxxers gain traction by using scientific language to deceive and intimidate. 

Sam Wineburg, The internet is sowing mass confusion. We must rethink how we teach kids every subject.

Couple good pieces out — one by Sam Wineburg, and an interesting response (expansion?) by Larry Cuban. The point, at least as I read it? Misinformation on the web is not really a subject — or, in any case, not only a subject. The web, after all, is an environment, a domain in which most professional, scholarly, and civic skills are practiced. Yet the structure of how we teach most subjects treats the web as either an afterthought, or worse, as a forbidden land.

If you know me and know this blog, you know this issue has been my obsession since before this blog was launched in 2007. Back in 2009 I dubbed the practice of ignoring the web as a target domain “Abstinence-only Web Education”:

…what [the term] expresses [is] my utter shock that when talking to some otherwise intelligent adults about the fact that we are not educating our students to be critical consumers of web content, or to use networks to solve problems, etc — my utter shock that often as not the response to this problem is “Well, if students would just stop getting information from the web and go back to books, this whole problem would go away.”

Now I do believe that reading more books and less web is usually a good decision as part of a broader strategy. But most of what students will do in their professional and civic lives will involve the web.

My younger daughter, for example, is presenting to the school board tonight about how the integrated arts and academics magnet program she is in supports various educational objectives. When trying to understand what those objectives mean — from critical thinking to collaboration — she is not reading a textbook or going to a library. She is consulting the web.

And I am writing this at work as part of being in an informal professional development community, and you are reading it to maybe help you with your job.

These issues seem a million miles away from Pizzagate and blogs that tell you that sea ice is increasing and climate change is really a hoax. But they turn out to be adjacent. What happens if my daughter’s search for critical thinking lands on one of the recently politicized redefinitions of that term, which she ends up presenting to the school board? And you’re here at this blog, trusting me — but there are of course other blogs and articles that are written by people in the employ of ed tech firms, and those by people that have zero experience in the domain on which they write. Giving your attention to those sites may actually make you worse at what you do, or lead to your manipulation by corporate forces of which you are unaware.

Or maybe not! Maybe you’re good at all this.

Still, I keep coming back to that part of Dewey’s School and Society where he talks about the problem of transmission of knowledge in a post-agrarian society. In the first lecture in that work, Dewey talks about the way in which industrialization has rendered the processes of production opaque. In an agrarian society, he notes, “the entire industrial process stood revealed, from the production on the farm of the raw materials, till the finished article was actually put to use.” In such a world a youngster could simply observe, and see what competent practice looked like. To understand where things came from was to understand one’s household, and not much pedagogical artifice was required. With the introduction of complex, specialized and opaque systems, however, there was no opportunity to learn by looking over a parent’s shoulder, and so a more designed approach was required.

Two things occur to me re-reading that. The first is not necessarily a new media literacy insight. But that networked opacity we deal with — the complex network of actors and algorithms that leads to a piece of information or propaganda being displayed on your screen — is a very similar problem. There’s a part of that lecture where Dewey talks about how students who investigate the production of clothing walk through domains of physics, history, geography, engineering, and economics, because of the complex set of factors that have determined the way in which clothing gets made. The point he makes is that you can organize the curriculum around clothing, and the disciplines become meaningful.

I’m not proposing a complete retread of Dewey’s progressive education in 2019. We’ve learned a lot since Dewey about how people learn; that’s good and we should use it. But narrowly, what Dewey saw in clothing in 1899 I see in web literacy today. Here is a going social concern that combines sociology, psychology, history, engineering, algorithms, math, political science and so on. You don’t have to adopt unmodified Deweyism to see the opportunities there for integrative education. Elucidate the circumstances of production for this thing students are using most of their waking life. If you’re a high school or an integrative first-year program, put together a year on it, and try it out.

The second point is on skills. Dewey noted that when professional knowledge moved out of farms and into factories and offices children lost the ability to observe competence in action. Work — and the skills associated with it — became hidden.

That’s still true today, but there’s another angle on this. Even in offices our skills are quite hidden because of the ways this work evades third-party observation. Where there is an artifact of work (equations, code, writing), a co-worker can ask “hey, why are you doing that in that way?” And where more ephemeral processes are public (soft skills exercised in a meeting, for example), they can also be learned.

But web skills have the double whammy of leaving very little trace, and of being intensely private. And this makes transmission and improvement of these skills much more difficult, and creates a situation where there is a lot of hidden need. More on that in a later post.

Educating the Influencers: The “Check, Please!” Prototype

OK, maybe you’re just here for the video. I would be. Watch the demo of Check Please, and then continue downpage for the theory of change behind it.

Watched it? OK, here’s the backstory.

Last November we won an award from RTI International and the Rita Allen Foundation to work on a “fact-checking tutorial generator” that would generate hyper-specific tutorials that could be shared with “friends, family, and the occasional celebrity.” The idea was this — we talk a lot about media literacy, but the people spreading the most misinformation (and the people reaching the most people with that misinformation) are some of the least likely people to currently be in school. How do we reach them?

I proposed a model: we teach the students that we have, and then give them web tools to teach the influencers. As an example, we have a technique we show students called “Just add Wikipedia”: when confronted with a site of unknown provenance, go up to the omnibar, add “wikipedia” after the domain to trigger a Google search that floats relevant Wikipedia pages to the top, select the most relevant Wikipedia page, and get a bit of background on the site before sharing.

When teaching students how to do this, I record little demos using Camtasia on a wide variety of examples. Students have to see the steps and, as importantly, see how easy the steps really are, on a variety of examples. And in particular, they have to see the steps on the particular problem they just tried to solve: even though the steps are very standard, general instruction videos don’t have half the impact of specific ones. When you see the exact problem you just struggled with solved in a couple clicks, it sinks in in a way that no generic video ever will.

Unfortunately, this leaves us in a bit of a quandary relative to our “have students teach the influencers” plan. I have a $200 copy of Camtasia, a decade’s worth of experience creating screencasts, and still, for me to demo a move — from firing up the screen recorder to uploading to YouTube or exporting a GIF — is a half-hour process. I doubt we’re going to change the world on that ROI. As someone once said, a lie can make it halfway around the world while the truth is still lacing up its Camtasia dependencies.

But what if we could give our students a website that took some basic information about the decisions they made in their own fact-checking process and generated a custom, shareable tutorial for them, as long as they were following one of our standard techniques?

I came up with this idea last year — using Selenium to drive an invisible Chrome browser on the server — to walk through the steps of a claim or reputation check while taking screenshots that formed the basis of an automatic tutorial on fact-checking a claim. I ran it by TS Waterman, and after walking through it a bit we decided that — maybe to our surprise (!!) — it seemed rather straightforward. We proposed it to the forum, won the runner-up prize in November, and on January 15 I began work on it. (TS is still involved and will help optimize the program and advise on direction as we move forward, as soon as I clean up my embarrassing prototype spaghetti code.)
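For the curious, the core loop of that idea is small. A hedged sketch, assuming Selenium’s Python bindings and Chrome’s headless mode; the step list and file names are illustrative stand-ins, not the actual prototype code:

```python
def build_steps(domain):
    """Pure helper: the screens we want for a 'Just add Wikipedia' tutorial."""
    return [
        (f"https://{domain}", "The site we want to check"),
        (f"https://www.google.com/search?q={domain}+wikipedia",
         "Turn the URL into a search by adding 'wikipedia'"),
    ]

def capture_tutorial(domain, out_prefix="step"):
    """Drive a headless Chrome through each step, saving one screenshot per step."""
    from selenium import webdriver  # deferred import: build_steps is usable without Chrome
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)
    shots = []
    try:
        for i, (url, caption) in enumerate(build_steps(domain), start=1):
            driver.get(url)
            path = f"{out_prefix}{i}.png"
            driver.save_screenshot(path)
            shots.append((path, caption))
    finally:
        driver.quit()
    return shots
```

The screenshots plus captions then get stitched into the GIF or slideshow, with pointers and omnibar chrome painted on via PIL.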

But here’s the thing — it works! The prototype is so, so far from finished, and the plan is to launch a public site in April after adding a couple more types of checks and massively refactoring the code. But it works. And it may provide a new way to think about stopping the spread of misinformation: not with generic tools for readers, but by empowering those who enforce social norms with better, more educational tools.

The result.

Attention Is the Scarcity

There are a lot of things that set our approach at the Digital Polarization Initiative apart from most previous initiatives. But the biggest is this: we start from the environment in which students are most likely to practice online literacy skills, and in that environment attention is the scarcity.

The idea that scarce attention forms the basis of modern information environments is not new. Herbert Simon, years ago, noted that abundances consume — an abundance of goats makes a scarcity of grass. And information? It consumes attention. So while we name this the information age, information is actually less and less valuable. The paradox of the information age is that control of information means less and less, because information becomes commodified. Instead, the powerful in the information age control the scarcity: they control attention.

Slide from my presentation at StratCom last year

Again, this is not an observation that is unique to me. Zeynep Tufekci, Penny Andrews, An Xiao Mina, Claire Wardle, Whitney Phillips, and so many more have drilled down on various aspects of this phenomenon. And years ago, Howard Rheingold identified attention as a crucial literacy of the networked age, next to others like critical consumption. It’s not, at this point, a very contentious assertion.

And yet the implications of this, for media literacy at least, have yet to be fully explored. When information is scarce, we must deeply interrogate the limited information provided us, trying to find the internal inconsistencies, the flaws, the contradictions. But in a world where information is abundant, these skills are not primary. The primary skill of a person in an attention-scarce environment is making relatively quick decisions about what to turn their attention toward, and making longer-term decisions about how to construct their media environment to provide trustworthy information.

People know my four moves approach, which tries to provide a quick guide for sorting through information; the 30-second fact-checks; and the work from Sam Wineburg and others that it builds on. These are media literacy, but they are focused not on deeply analyzing a piece of information but on deciding whether an article, author, website, organization, or Facebook page is worthy of your attention (and if so, with what caveats).

But there are other things to consider as well. When you know how attention is captured by hacking algorithms and human behavior, extra care in deciding who to follow, what to click on, and what to share is warranted. I’ve talked before about PewDiePie’s recommendation of an anti-Semitic YouTube account based on some anime analysis he had enjoyed. Many subscribed based on the recommendation. But of course, the subscription doesn’t just result in that account’s anime analysis videos being shared with you — it pushes the political stuff to you as well. And since algorithms weight subscriptions heavily in deciding what to recommend to you, it begins a process of pushing more and more dubious and potentially hateful content in front of you.

How do you focus your attention? How do you protect it? How do you apply it productively and strategically, and avoid giving it to bad actors or dubious sources? And how do you do that in a world where decisions about what to engage with are made in seconds, not minutes or hours?

These are the questions our age of attention requires we answer, and the associated skills and understandings are where we need to focus our pedagogical efforts.

The Fyre Festival and the Trumpet of Amplification

Unless you’ve been living under a rock, you’re probably aware that there are two documentaries out on the doomed Fyre Festival. You should watch both: the event — both its dynamics and the personalities associated with it — will give you disturbing insights into our current moment. And if you teach students about disinformation I’d go so far as to assign one or both of the documentaries.

Here is one connection between the events depicted in the film and disinfo. There are many others. (This post is not intended for researchers of disinfo, but for teachers looking to help students understand some of the mechanisms).

The Orange Square

Key to the Fyre Festival story is the orange square, a bit of paid coordinated posting by a set of supermodels and other influencers. The models and influencers, including such folks as Kendall Jenner, were paid hundreds of thousands of dollars to post the same message with a mysterious orange square on the same day. And thus an event was born.


People new to disinformation and influencer marketing might think the primary idea here is to reach all the influencer followers. And that’s part of it. But of course, if that were the case you wouldn’t need to have people all post at the same time. You wouldn’t need the “visual disruption” of the orange square.

The point here is not to reach followers, but to catalyze a much larger reaction. That reaction, in part, is media stories like this by the Los Angeles Times.

And of course it wasn’t just the LA Times: it was dozens (hundreds?) of blogs and publications. It was YouTubers talking about it. Music bloggers. Mid-level elites. Other influencers wanting in on the buzz. The coordinated event also gave credibility required to book bands, the booking of the bands created more credibility, more news pegs, and so on.

You can think of this as a sort of nuclear reaction. In the middle of the event sits some fissile material — the media, conspiracy thought leaders, dispossessed or bitter political influencers. Around it are laid synchronized charges that, should they go off right, catalyze a larger, more enduring reaction. If you do it right, a small amount of social media TNT can create an impact several orders of magnitude larger than its input.

Enter the Trumpet

Central to understanding this is that the fissile material is not the general public, at least at first. As a marketer or disinfo agent you often work your way upward to get downward effects. Claire Wardle, drawing on the work of Whitney Phillips and others, expresses one version of this in the “trumpet of amplification”:


Here the trumpet reflects a less direct strategy than Fyre, starting by influencing smaller, less influential communities, refining messages then pushing them up the influence ladder. But many of the principles are the same. With a relatively small number of resources applied in a focused, time-compressed pattern you can jump start a larger and more enduring reaction that gives the appearance of legitimacy — and may even be self-sustaining once manipulation stops. Maybe that appearance of legitimacy is applied to getting investors and festival attendees to part with their money. Or maybe it’s to create the appearance that there’s a “debate” about whether the humanitarian White Helmets are actually secret CIA assets:

Maybe the goal is disorientation. Maybe it’s buzz. Maybe it’s information — these techniques, of course, are also often used ethically by activists looking to call attention to a certain issue.

Why does this work? Well, part of it is the nature of the network. In theory the network aggregates the likes, dislikes and interests of billions of individuals and if some of those interests begin to align — shock at a recent news story for example — then that story breaks through the noise and gets noticed. When this happens without coordination it’s often referred to as “organic” activity.

The dream of many early on was that such organic activity would help us discover things we might otherwise not. And it has absolutely done that — from Charlie Bit My Finger to tsunami live feeds this sort of setup proved good at pushing certain types of content in front of us. And it worked in roughly this same sort of way — organic activity catches the eyes of influencers who then spread it more broadly. People get the perfect viral dance video, learn of a recent earthquake, discover a new opinion piece that everyone is talking about.

But there are plenty of ways that marketers, activists, and propagandists can game this. Fyre used paid coordinated activity, but of course activists often use unpaid coordinated activity to push issues in front of people. They try to catch the attention of mid-level elites who get it in front of reporters and so on. Marketers often just pay the influencers. Bad actors seed hyperpartisan or conspiracy-minded content in smaller communities, ping it around with bots and loyal foot soldiers, and build enough momentum around it that it escapes that community, giving the appearance to reporters and others of an emerging trend or critique.

We tend to think of the activists as different from the marketers and the marketers as different from the bad actors, but there’s really no clear line. The disturbing fact is that it takes frightfully little coordinated action to catalyze these larger social reactions. And while it’s comforting to think that the flaw here is with the masses, collectively producing bizarre and delusional results, the weakness of the system more likely lies with a much smaller set of influencers, who can be specifically targeted, infiltrated, duped, or just plain bought.

Thinking about disinfo, attention, and influence in this way — not as mass delusion but as the hacking of specific parts of an attention and influence system — can give us better insight into how realities are spun up from nothing and ultimately help us find better, more targeted solutions. And for influencers — even those mid-level folks with ten to fifty thousand followers — it can help them come to terms with their crucial impact on the system, and understand the responsibilities that come with that.

Smoking out the Washington Post imposter in a dozen seconds or less

So today a group known for pranks circulated an imposter site that posed as the Washington Post, announcing President Trump’s resignation on a post-dated paper. It’s not that hard for hoaxers to do this – anyone can come up with a confusingly similar URL to a popular site, grab some HTML, and make a fake site. These sites often have a short lifespan once they go viral — the media properties they are posing as lean on the hosters, who pull the plug. But once it goes viral the damage is done, right?

It’s worth noting that you don’t need a deep understanding of the press or communications theory to avoid being duped here. You don’t even need to be a careful reader. Our two methods for dealing with this are dirt simple:

  • Just add Wikipedia (our omnibar hack to investigate a source)
  • Google News Search & Scan (the technique we apply to stories that should have significant coverage)

You can use either of these for this issue. The way we look for an imposter using Wikipedia is this:

  1. Go up to the “omnibar” and turn the URL into a search by adding space + “wikipedia”.
  2. Click through to the Wikipedia article on the publication you are supposedly looking at.
  3. Scroll to the part of the sidebar with a link to the site, and click it.
  4. See if the site it brings you to is the same site.
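The checkable parts of those steps can even be sketched in code. These helpers are illustrative (the names are mine, not from any real tool): the first builds the step-one omnibar search, the second does the step-four comparison between Wikipedia’s “official website” link and the site you were actually on.

```python
from urllib.parse import quote_plus, urlparse

def omnibar_search(url):
    """Step 1: turn the URL into a '<domain> wikipedia' Google search."""
    domain = urlparse(url).netloc
    return "https://www.google.com/search?q=" + quote_plus(domain + " wikipedia")

def same_site(wikipedia_official_url, suspect_url):
    """Step 4: does the official-site link from Wikipedia's sidebar match the suspect site?"""
    def host(u):
        h = urlparse(u).netloc.lower()
        return h[4:] if h.startswith("www.") else h  # ignore a leading 'www.'
    return host(wikipedia_official_url) == host(suspect_url)
```

Run against today’s hoax, `same_site("https://www.washingtonpost.com", "https://washingtonpost.com.co/...")` comes back false, which is exactly the mismatch the human version of the check surfaces.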

Here’s what that looks like in GIF form (sorry for the big download).

I haven’t sped that up, btw. That’s your answer in 12 seconds.

Now some people might say, well if you read the date of the paper you’d know. Or if you knew the fonts associated with the Washington Post you’d realize the fonts were off. But none of these are broadly applicable habits. Every time you look at a paper like this there will be a multitude of signals that argue for the authenticity of the paper and a bunch that argue against it. And hopefully you pick up on the former for things that are real and the latter for things that aren’t, but if you want to be quick, decisive, and habitual about it you should use broadly applicable measures that give you clear answers (when clear answers are available) and mixed signals only when the question is actually complex.

When I present these problems to students or faculty I find that people can *always* find what they “should have” noticed after the fact. But of course it’s different every time and it’s never conclusive. What if the fonts had been accurate? Does that mean it’s really the Post? What if the date was right? Trustworthy then?

The key isn’t figuring out the things that don’t match after the fact. The key is knowing the most reliable way to solve the whole class of problem, no matter what the imposter got right or wrong. And ideally you ask questions where a positive answer has a chance of being as meaningful as a negative one.

Anyway, the other route to checking this is just as easy — our check other coverage method, using a Google News Search:

  1. Go to the omnibar, search [trump resigns]
  2. When you get to the Google results, don’t stop. Click into Google News for a more curated search
  3. Note that in this case there are zero stories about Trump resigning and quite a lot about the hoax.
  4. There is no step four — you’re done
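In code, step two is just a query parameter: Google’s `tbm=nws` switches a plain search over to the News tab. (That parameter is an observed convention of Google’s URL scheme, not a documented API, so treat this as a sketch.) The URL construction is the only automatable part; the headline scan is the human part.

```python
from urllib.parse import quote_plus

def google_news_url(query):
    """Build a Google search URL scoped to the News tab via tbm=nws."""
    return f"https://www.google.com/search?q={quote_plus(query)}&tbm=nws"
```

So the check amounts to opening `google_news_url("trump resigns")` and scanning what the news cluster actually says.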

Again, here it is in all its GIF majesty:

You’ll notice that you do need to practice a bit of care here — some publishers try to clickbait the headline by putting the resignation first, hoping that the fact it was fake gets trimmed off and gets a click. (If I were king of the world I’d have a three strikes policy for this sort of stuff and push repeat offenders out of the cluster feature spots, but that’s just me). Still, scanning over these headlines even in the most careless way possible it would be very hard not to pick up this was a fake story.

Note that in this case we don’t even need these fact-checks to exist. If we get to this page and there are no stories about Trump resigning, then it didn’t happen — for two reasons. First, if it happened there would be broad coverage. Second, even if the WaPo was the first story on this, we would see their story in the search results.

There are lots of things we can teach students, and we should teach them. But I’m always amazed that two years into this we haven’t even taught them techniques as simple as these.

Why Reputation?

I was reading An Xiao Mina’s recent (and excellent) piece for Nieman Lab, and it reminded me that I had not yet written here about why I’ve increasingly been talking about reputation as a core part of online digital literacy. Trust, yes; consensus, yes. But I keep coming back to this idea of reputation.

Why? Well, the short answer is Gloria Origgi. Her book, Reputation, is too techno-optimist in parts, but is still easily the most influential book I’ve read in the past year. Core to Origgi’s work is the idea that reputation is both a social relation and a social heuristic, and these two aspects of reputation have a dynamic relationship. I have a reputation, which is the trace of past events and current relationships in a social system. But that reputation isn’t really separate from the techniques others use to decode and utilize my reputation for decision-making.

This relationship is synergistic. As an example, reputation is subject to the Matthew Effect, where a person who is initially perceived as smart can gain additional reputation for brilliance at a fraction of the cost of someone initially perceived as mediocre. This is because quick assessments of intelligence have to weight the past assessments of others — as a person expands their social circle, initial judgments are often carried forward, even if those initial judgments are flawed.

Reputation as a social heuristic maps well onto our methods of course — both Origgi and the Digital Polarization initiative look to models from Simon and Gigerenzer for inspiration. But it also suggests a theory of change.

Compare the idea of “trust” to that of “reputation”. Trust is an end result. You want to measure it. You want to look for and address the things that are reducing trust. And, as I’ve argued, media literacy programs should be assessing shifts in trust, seeing if students move out of “trust compression” (where everything is moderately untrustworthy) to a place where they make bigger and more accurate distinctions.

But trust is not what is read, and when we look at low-trust populations it can often seem like there is not much for media literacy to do. People don’t trust others because they’ve been wronged. Etc. What exactly does that have to do with literacy?

But that’s not the whole story, obviously. In between past experience, tribalism, culture, and the maintenance of trust is a process of reading reputation and making use of it. And what we find is that, time and time again, bad heuristics accelerate and amplify bad underlying issues.

I’ve used the example of PewDiePie and his inadvertent promotion of a Nazi-friendly site as an example of this before. PewDiePie certainly has issues, and seems to share a cultural space that has more in common with /pol/ than #resist. But one imagines that he did not want to risk millions of dollars to promote a random analysis of Death Note by a person posting Hitler speeches. And yet, through an error in reading reputation, he did. Just as the Matthew Effect compounds initial errors in judgment when heuristics are injudiciously applied, errors in applying reputation heuristics tend to make bad situations worse — his judgment about an alt-right YouTuber flows to his followers, who then attach some of PewDiePie’s reputation to the ideas presented therein — based, mostly, on his mistake.

I could write all day on this, but maybe one more example. There’s an old heuristic about the reputation of positions on issues — “in matters indifferent, side with the majority.” This can be modified in a number of ways — you might want to side with the qualified majority when it comes to treating your prostate cancer. You might side with the majority of people who share your values on an issue around justice. You might side with a majority of people like you on an issue that has some personal aspects — say, what laptop to get or job to take. Or you might choose a hybrid approach — if you are a woman considering a mastectomy you might do well to consider what the majority of qualified women say about the necessity of the procedure.

The problem, however, from a heuristic standpoint, is that it is far easier to signal (and read the signal) of attributes like values or culture or identity than it is to read qualifications — and one underremarked aspect of polarization is that — relative to other signals — partisan identity has become far easier to read than it was 20 years ago, and expertise has become more difficult in some ways.

One reaction to this is to say — well people have become more partisan. And that’s true! But a compounding factor is that as reputational signals around partisan identity have become more salient and reputational signals around expertise have become more muddled (by astroturfing, CNN punditocracy, etc) people have gravitated to weighting the salient signals more heavily. Stuff that is easier to read is quicker to use. And so you have something like the Matthew Effect — people become more partisan, which makes those signals more salient, which pushes more people to use those signals, which makes people more partisan about an expanding array of issues. What’s the Republican position on cat litter? In 2019, we’ll probably find out. And so on.

If you want to break that cycle, you need to make expertise more salient relative to partisan signals, and show people techniques to read expertise as quickly as they read partisan identity. Better heuristics and an information environment that empowers quick assessment of things like expertise and agenda can help people build better, fuller, and more self-aware models of reputation, and this, in turn, can have meaningful impact on the underlying issues.

Well, this has not turned into the short post I had hoped, and to do it right I’d probably want to talk ten more pages. But one New Year’s resolution was to publish more WordPress drafts, so here you go. 🙂