Memorizing Lists of Cognitive Biases Won't Help

From the Twitters, by me.

What’s the cognitive bias that explains why someone would think having a list of 200 cognitive biases bookmarked would make them any better at thinking?

[Screenshot of a tweet encouraging people to read a list of 200 biases to be a better thinker.]

(It literally says it’s “to help you remember” 200+ biases. Two hundred! LOL, critical thinking boosters are hilarious.)

I should be clear — biases are a great way to look at certain issues *after* the fact, and it’s good to know that you’re biased. Our own work looks pretty deeply at certain types of bias and tries to design methods that route around them, or use them to advantage.

But if you want to change your own behavior, memorizing long lists of biases isn’t going to help you. If anything, it’s likely to just become another weapon in your motivated reasoning arsenal. You can literally read the list of biases to see why reading the list won’t work.

The alternate approach, à la Simon/Gigerenzer, is to see “biases” not as failings but as useful rules of thumb that are inapplicable in certain circumstances, and to push people towards rules of thumb that better suit the environment.

As an example, salience bias — paying more attention to things that are prominent or emotionally striking — is a pretty useful behavior in most circumstances, particularly in personal life or local events. 

It falls apart in larger domains – city, state, country – partly because there are more emotional and striking events than you can count, which means you can be easily manipulated through selection, and partly because larger problems often are not tied to the most emotional events.
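
To make the selection problem concrete, here’s a toy simulation (mine, not anything from the research; every number and name in it is made up). An agent that attends to the five most salient events does fine in a small world where salience tracks importance, but in a large world where an amplifier decides what gets boosted, it attends to whatever the amplifier chose:

```python
import random

# Toy illustration: "attend to the most salient thing" works locally,
# but fails at scale when someone else controls what gets amplified.
# All numbers here are arbitrary.

random.seed(1)

def attended_importance(events, k=5):
    """Total importance captured by attending to the k most salient events."""
    top = sorted(events, key=lambda e: e["salience"], reverse=True)[:k]
    return sum(e["importance"] for e in top)

# Local world: salience roughly tracks importance (the house fire down
# the street really is the big local story).
local = [{"importance": i, "salience": i + random.gauss(0, 1)}
         for i in (random.uniform(0, 10) for _ in range(20))]

# Large world with selection: an amplifier boosts a handful of stories,
# regardless of their importance.
big = [{"importance": random.uniform(0, 10), "salience": random.uniform(0, 10)}
       for _ in range(10000)]
for e in random.sample(big, 5):   # the amplified stories
    e["salience"] += 100

print(attended_importance(local))  # close to the five most important events
print(attended_importance(big))    # whatever the amplifier picked
```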

Does that mean we should throw away our emotional reaction as a guide altogether? Ignore things that are more prominent? Not use emotion as any indication of what to pay attention to?

Not at all. Instead we need to think carefully about how to make sure the emotion and our methods/environment work *together*. 

Reading that list of biases may start with “I will not be fooled,” but it probably ends with some dude telling you family separation at the border isn’t a problem because “it’s really the salience effect at work”. 

TL;DR: biases aren’t wholly bad, and the flip side of a bias is a useful heuristic. Instead of thinking about biases and eliminating them, think about applying the right heuristics to the right sorts of problems, and organizing your environment in such a way that the heuristics don’t get hacked.

The Stigmergic Myth of Social Media, or Why Thinking About Radicalization Without Thinking About Radicalizers Doesn't Work.

One of the founding myths of internet culture, and particularly web culture, is the principle of stigmergy.

This will sound weird, but stigmergy is about ant behavior. Basically, ants do various things to try to accomplish objectives (e.g., get food to the nest), but rather than a command-and-control structure to coordinate, they use pheromones, or something like pheromones. (My new goal is to write shorter, quicker blog posts this year, and that means not spiraling into my usual obsession with precision. So let’s just say something like pheromones. Maybe actually pheromones. You get the point.)

So, for example, ants wander all over, and they are leaving maybe one scent, but they go and find the Pringle crumbs, and as they come back with the food they leave another scent. A little scent trail. And then other ants looking for Pringles stumble over that scent trail and they follow it to the Pringle crumbs. And then all those ants leave a scent as they come back with their Pringle crumbs, and what happens over time is that the most productive paths have the best and strongest smell.
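
In the spirit of shorter posts, here’s that whole loop as a little simulation instead of more prose. This is a sketch of my own, not real myrmecology: the paths and constants are invented, but it shows how “the best path smells strongest” falls out of nothing more than local marking and evaporation:

```python
import random

# Three paths from nest to Pringle crumbs, with made-up lengths.
paths = {"short": 2, "medium": 5, "long": 9}

# Every path starts with the same faint scent.
pheromone = {name: 1.0 for name in paths}

def choose_path():
    """An ant picks a path with probability proportional to its scent."""
    r = random.uniform(0, sum(pheromone.values()))
    for name, level in pheromone.items():
        r -= level
        if r <= 0:
            return name
    return name  # floating-point edge case: fall back to the last path

for _ in range(2000):
    path = choose_path()
    # A returning ant re-marks its path. Shorter round trips mean more
    # markings per unit of time, so the deposit scales inversely with length.
    pheromone[path] += 1.0 / paths[path]
    # Scent evaporates everywhere, so unproductive trails fade.
    for name in pheromone:
        pheromone[name] *= 0.999

print(pheromone)  # "short" ends up with by far the strongest scent
```

No command and control anywhere: just marking and following. That’s the whole trick.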

If you think this smells very E. O. Wilson, it is. But it’s not just E. O. Wilson. Take “desire paths”, a metaphor I first heard when I landed in the middle of the dot com explosion. The story goes that some university somewhere doesn’t build paths when they first put up the buildings. Instead, they wait for the first snow, and see where the tracks between the buildings come out. And where the tracks fall, they put the paths. Another story treated the wornness of objects as an indicator. And in my first meeting with a MediaLab grad in 1999 (who’d been hired as a consultant for the educational software company I worked for), he described to me his major project: Patina, a web site whose pages showed visible signs of wear the more they were read.

This stuff was everywhere in the 1990s, and when Web 2.0 came around it was the founding mythology. I swear, unless you were around then, you have no idea how this cluster of metaphors formed the thinking of Silicon Valley. You really don’t.

And like a lot of mythologies, there’s a lot of truth to it. When I say myth, I don’t mean it’s wrong. It’s a good way to think about a lot of things. I have built (and will continue to build) a lot of projects around these principles.

But it’s also a real hindrance when we talk about disinfo and bad actors. Because the general idea in the Stigmergic Myth is that uncoordinated individual action is capable of expressing a representative collective consciousness. And in that case all we have to do is set up a system of signals that truly capture that collective or consensus intent.

But none of the founding myths — ants and Pringles, Swedish college desire paths, or even Galton’s ox weighing — deal with opposing, foundational interests. And opposing interests change everything. There isn’t a collective will or consciousness to express.

Faced with this issue, Web 2.0 doubled down: the real issue, it decided, was that the signals were getting hacked. And that’s absolutely true. There was a lot of counterfeit pheromone about, and getting rid of it was crucial. Don’t discount that.

But the underlying reality was never addressed. In areas where cooperation and equality prevail, the Stigmergic Myth is useful. But in areas of conflict and inequality, it can be a real hindrance to understanding what is going on. It can be far less an expression of collective will or desire than other, less distributed approaches, and while fixing the signals and the system is crucial, it’s worth asking if the underlying myth is holding our understanding back.

A New Year's Eve Activity (and a New Year's Day Wish)

I made a short video showing a New Year’s Eve activity around SIFT, and getting serious for a minute with a New Year’s Day wish.

I don’t know how many people know this about me, but I actually study misinfo/disinfo pretty deeply, outside of my short videos on how to do quick checks. If anything, I probably spend too much time keeping up with the latest social science, cognitive theory, network analysis, etc. on the issue.

But scholarship and civic action are different. Action to me is like Weber’s politics, the slow drilling of hard boards, taking passion and perspective. You figure out where you can make a meaningful difference. You find where the cold hard reality of where we are intersects with a public’s desire to make things better. And then you drill.

It’s been three long exhausting years since I put out Web Literacy for Student Fact-Checkers, and over a decade since I got into civic digital literacies. I’m still learning, still adapting. And still drilling.

Happy New Year, everyone. And thanks to everyone who has helped me on this weird, weird journey.

Chatham House Sharing for OER

I’ve noted a new need in my open education work, one that isn’t supported by many tools and isn’t found in any license. I’m going to call it “Chatham House Sharing.”

For those that don’t know, the Chatham House Rules are a set of rules traditionally used in association with reporters covering an event, but more recently used to govern the tweetability of different gatherings. There are probably more rules than two, but the most notable are these:

  • You can report out anything said, but
  • You can’t identify who said it

The reason for the rules is that people need to speak freely as they hash things out at a conference, and to do that they sometimes have to speak loosely, in ways that don’t translate outside the conference. Politicians or practitioners may want to express concerns without triggering followup questions or teapot tempests over out-of-context utterances. Academics might like to share some preliminary data or explore nascent thoughts without confronting the level of precision a formal publication or public comment might require. And people who work for various companies may want to comment on various things without the inevitable tempest that accompanies “someone from Microsoft said X” or “someone from Harvard said Y.”

In open education there is a need for a form of sharing that works like this, especially in collaborative projects, though for slightly different reasons. If we imagine people working together on an evolving open resource on, say, the evolution of dark money in politics, it stands to reason that many authors might not want it shared under their name. Why?

  • Most of the time it’s a work in progress; it’s not ready yet.
  • It may have undergone revisions from others that they do not want their name attached to.
  • They may never want their name attached to it, because they cannot give it the level of precision their other work in the field demands.
  • They may be part of a group that is explicitly targeted for their gender, race, or sexual orientation online and fear they will become a lightning rod for bad actors.
  • They may work for an institution or company and worry that, no matter how clearly their input comes with the caveat that it does not represent the views of their employer, it may be read that way, and that is a risk.
  • In cases where there is a revision history, they might be OK with attaching a name to the final project, but not with the fact that the history logs their activity for public consumption. (One can imagine other people to whom they owe projects complaining about the amount of time spent on the resource. Even worse, as data gets combined and recombined with other tracking data, it’s impossible to predict the ways in which people will use anything time-stamped, but there are almost surely malicious uses to come.)

Chatham House Sharing would be sharing that follows these rules:

  • Within the smaller group of collaborators, contributions may or may not be tracked by name, and
  • Anyone may share any document publicly, or remix/revise for their own use, but
  • They may not attribute the document to any author or expose any editing history

If they want, of course, they can use their own authority to say, hey this document I found is pretty good. If they want to make some edits and slap their name on it, noting that portions of the document were developed collaboratively by unnamed folks, they could do that as well.
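
For the tool builders out there, here’s a hypothetical sketch of what honoring these rules might look like in software. The class and method names are mine, not any real tool’s API; the point is just that attribution and history live inside the collaboration, while the public export carries content only:

```python
from dataclasses import dataclass, field

@dataclass
class ChathamDoc:
    content: str
    contributors: list = field(default_factory=list)  # visible only within the group
    history: list = field(default_factory=list)       # (author, new_content) pairs

    def edit(self, author: str, new_content: str):
        """Record an edit with attribution, for collaborators' eyes only."""
        self.history.append((author, new_content))
        if author not in self.contributors:
            self.contributors.append(author)
        self.content = new_content

    def export_public(self) -> str:
        """Share the document itself: no authors, no edit log, no timestamps."""
        return self.content

doc = ChathamDoc("Draft: dark money in politics...")
doc.edit("alice@example.edu", "Revised draft: dark money in politics...")
print(doc.export_public())  # content only; names and history stay in the group
```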

Maybe there’s already a license that covers this — perhaps one that makes it legally binding. People will have to let me know. The Creative Commons licenses tend to run the other way, with attribution encouraged (though not required) even under CC0. But I’ve worked with academics long enough to know that the promise not to be quoted on something can facilitate their cooperation on more informal documents, and I’ve seen enough ugliness to know that the risks of taking credit are not felt equally. OER and open educational practice should be able to accommodate these issues in tools, licensing, and norms.

Walkthrough for Windows App

Back in January I started working on a web-based application to help teachers and others make fact-checking infographics as part of a Misinformation Solutions Forum prize from RTI International and Rita Allen. I got it to work, but as we tried to scale it out we found it had:

  • Security concerns (too much potential for hacking it)
  • Scalability concerns (too resource intensive on the server)
  • Flexibility concerns (too rigid to accommodate a range of tasks, and not enough flexibility on tone for different audiences)

Maybe someone can solve those issues as part of a server app. But after a small bout of depression I realized that you could solve all of these issues by making it a desktop OS-native app.

What I’ve ended up with, however, does more than simply build a set of fact-checking GIFs. It’s a flexible tool to present any web process, or even a non-web issue. It’s going to make it easy for people to educate others on how to check things, but potentially it’s a way to make our private work and processes visible in many other ways as well.

Here’s an example of output, which also shows the implementation of blockquotes and linking.

I’ve given it to a couple people so far to try, and the response I’ve gotten is — weirdly — how *fun* it is to explain things like this. And it is. It’s really odd.

In any case, if you have access to a Windows laptop or desktop, download, unzip wherever you want, read the license (it’s free software with the usual caveats), and fire it up. If you make something cool let me know.

Windows application.

Oh, and Mac users — I’m not able to build a version for Mac (I’m surprised I was able to build this one, tbh), but given that someone with my hacky abilities can make this for Windows, I’m sure that if there is demand someone of talent can make this for Mac in less than a week.

Also, I’m thinking through the legal implications of hosting the produced walkthroughs on a central site — or whether it’s better to keep them distributed (everyone hosts their own, but shares links). More on that later.

Microtargeted Political Ads are the Tranched Subprime Mortgages of Democracy

One of the problems with microtargeted ads, and a way I’ve been thinking about them recently, is that they resemble the tranched subprime mortgages that brought about the financial crash.

Others have talked about this in the context of the digital ad market as a whole. The allure of digital ads was that you would finally be able to assess impact. The reality is complexity, fraud, and snake oil hand-waving have made the impact of advertising more opaque than ever.

In the political realm, it’s even worse. We talk about whether the ads in there are, on the whole, beneficial or not, lies or truth, but the debate itself presumes that somebody, even an entity like Facebook, has some real idea of what’s in there. And they don’t. They can’t. And so as microtargeting proliferates we’re left with the pre-2008 cognitive dissonance we had around subprime: surely someone must know what’s going on under the hood! We wouldn’t really entrust vital social functions to something this opaque, this prone to fraud, this reliant on faith in untested equations, right?

There’s the question of what public policy should be for Facebook, and there can be disagreement on that. But table stakes for that discussion is that public policy be possible, and it’s just not clear to me that it can be, the way the system is currently designed.

It’s not the claim, it’s the frame

Putting a couple notes from Twitter here. One of the ideas of SIFT as a methodology (and of SHEG’s “lateral reading” as well) is that before reading, a person must construct a context for what they are about to read. On the web that’s particularly important, because the rumor dynamics of the web tend to level and sharpen material as it travels from point A to point Q, and because bad actors actively engage in false framing of claims, quotes, and media.

But it’s also a broader issue when considering source-checking. I’ve had people share RT articles with me that are more or less “true,” for example. When I push back and say they shouldn’t be sharing RT articles, since RT is widely considered to be a propaganda arm of the Kremlin, the response is often “Well, do you see anything false in the article? What’s the lie?”

This isn’t a good approach to your information diet, for a couple reasons. The first is that a news-reading strategy where one has to check every fact of a source because the source itself cannot be trusted is neither efficient nor effective. Disinformation is not usually distributed as an entire page of lies. Seth Rich, for example, did exist, was killed, and did work at the DNC. His murder does remain unsolved. Even where people fabricate issues, they usually place the lies in a bed of truth.

But the other reason not to share articles from shady sources is that the frame can be false even if the facts are correct. Take this RT coverage of the Seth Rich murder, for example, from a story about Assange offering a reward for information on his killers. The implication of the story is that Seth Rich may have been killed for leaking the DNC emails.

Rich worked as voter expansion data director at the DNC before he was shot twice on his way home on July 10. He died later in hospital.

“If it was a robbery — it failed because he still has his watch, he still has his money — he still has his credit cards, still had his phone so it was a wasted effort except we lost a life,” his father Joel Rich told local TV station KMTV.

See the frame? Responsible reporting would add context:

  • The DNC emails are widely believed by experts to have been hacked, not leaked.
  • The “data director” position sounds email-ish, but had no access to email systems.
  • Regarding the robbery, the Washington, D.C. police said that in robberies where someone is killed, it’s extremely common to find that the credit cards and phone were not taken: people generally get shot when a robbery goes wrong, and the suspects are anxious to flee the scene before the gunshot brings police to investigate.

There’s not a lie in the article (that I can see), but the way the article is framed is deceptive. And there’s no way for the average reader to know that, because you don’t know what you don’t know. Without expertise you can’t see what is missing or what has been deceptively added. So zoom out, and if the source is dodgy, skip it. Find something else. Share something else. You’re not as smart as you think you are, and reading stories designed to warp your worldview will, over time, warp your worldview.