The Power of Explaining to Others

From a great New Yorker article that ran last month:

In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health care system? Or merit-based pay for teachers?

Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we — or our friends or the pundits on CNN — spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

When we argue we become dumber, more blind to our own lack of knowledge and logical inconsistencies. When we try to explain or summarize how things work, on the other hand, we suddenly realize that we don’t know as much as we think we do, and we tend to moderate our opinions, and be more open to data that may conflict with our beliefs. Curiosity replaces dogma. (It’s probably not for nothing that the smartest folks in the open education space make a habit of providing others with daily or weekly summaries of articles.)

People wondered why I liked clickers — it seemed very un-open-education. Why so multiple choice, Mike?

But I didn’t like clickers. I liked peer instruction. In the peer instruction methodology, students have to explain how things work to other students — and in the process they realize that they have no fricking clue what they are talking about (even though they were dead sure they understood it twenty seconds before).

Have you ever heard a student say “I knew it until I had to explain it on the test”? Illusion of explanatory depth, right there. They didn’t know it. But they were never given any activities that allowed them to realize they didn’t know it.

What happens in peer instruction? You give students daily opportunities to realize they understand a fraction of what they think they do, and you get amazing learning gains.

People wonder why I got obsessed with federated wiki. I got obsessed for a number of reasons, but as I discussed in The Garden and the Stream, one of the primary ones was this: a daily process of trying to explain and connect incoming ideas rather than rating them and arguing them changes your brain in helpful ways. Federated wiki takes us down a path of explanation and connection. Traditional social media takes us down a path of argument and retrenchment.

People wonder why I spent time on Choral Explanations as a future for OER. The reason? It’s likely to be the future that most advances the ability of students to learn. When students have to explain things to others (rather than argue a point) they must address gaps in their own knowledge. They must pierce the “illusion of explanatory depth” and realize wow, they actually have no idea what they are talking about. Only then can they rectify that.

And now my answer to the Post-Truth crisis? It’s to have students explain things. Some things they investigate will be simply wrong, completely false. Hillary killed an FBI agent. Three million people voted illegally. The more interesting ones are subtle: Have thyroid cancers increased near Fukushima? Did the Republican Party of North Carolina brag about voter suppression?

Again, it’s the power of explaining things to others rather than arguing points. Can you summarize all sides rather than just present yours? And if you can’t summarize all sides, how in the world would you know that you are right?

It’s this power that I see most intersecting with open pedagogy as well. Explaining things to a teacher becomes just another test. Explaining things to people on the internet — especially where, as is the case with wiki, they can edit you — that’s the sort of stakes that forces some self-examination.

I think we’ve had a lot of open pedagogy that is about expression, and that’s wonderful. It’s certainly more engaging than some of the drier work of explanation. But as I’ve said many times over the past couple of years, I think some of the most promising work in the future is having students explore that explanation space and come face-to-face with their own ignorance, as we all must do. And then either rectify that or perhaps just respect the issue’s complexity. I don’t know how to make that fun — please help me out there, all you talented people reading this! But I do not think it’s hyperbole to say the future of our planet depends on it.

Pulling the Moves Together

I’ve talked about how you have three basic moves in web investigations:

  • Check for previous work
  • Go upstream
  • Read laterally

These can be used on simple claims (“Bernie Sanders shouted ‘Death to America’ at a Communist rally”) to get an answer quickly. But the real reason I like this set of moves is that they can be combined and chained together for more complex investigations.

To show that, I recorded my screen for 50 minutes while I looked into the claim that millions may die of cancer due to the Fukushima reactor meltdown. As I went upstream I found there was no there there. There was literally no source to this information. About 15 minutes into the research I decided to focus on the more empirical claim that the rates of thyroid cancer in Fukushima Prefecture were hundreds of times above normal.

The thing I find when I do these investigations is that it is just these moves, chained together over and over. You go upstream for a bit to find that one route is a dead end. You come back to your original document and find another route upstream. You get upstream there, but lateral reading shows you the site has no authority. You go to Google to see if Google can get you closer to the origin of the claim. You find counter-evidence to the claim. You go upstream to find the source of that counter-evidence. You read laterally to assess the counter-evidence. And so on.
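For the programmers reading, the chaining might be sketched as a simple loop. This is purely illustrative — the helper functions stand in for human judgment calls, not real tools, and the data is invented:

```python
# A toy sketch of the chained-moves loop described above. The helpers
# are stand-ins for human judgment, not real fact-checking tools.

def go_upstream(source):
    """Move 2: follow a source back toward the claim's origin (None = dead end)."""
    return source.get("origin")

def read_laterally(origin):
    """Move 3: check what the rest of the web says about this origin's authority."""
    return origin.get("credible", False)

def investigate(routes):
    """Chain the moves, route by route, until one origin survives lateral reading."""
    for source in routes:
        origin = go_upstream(source)
        if origin is None:
            continue                      # dead end; come back, try another route
        if read_laterally(origin):
            return origin                 # a credible origin for the claim
    return None                           # every route failed: no there there

routes = [
    {"origin": None},                                        # dead end
    {"origin": {"name": "anon-blog", "credible": False}},    # no authority
    {"origin": {"name": "peer-reviewed journal", "credible": True}},
]
print(investigate(routes)["name"])  # peer-reviewed journal
```

The point of the sketch is the control flow: no single move settles anything, but each failed route narrows where you look next.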

Here’s the video, sped up by a factor of three and re-narrated to make it (slightly) less boring:

You can look at the resulting page. It’s a really drafty writing job, but it’s a wiki, so feel free to sign up, log in, and make it better. 😉

There’s a lot of domain knowledge I have here that an average student might not. I helped develop statistical literacy guidelines and taught an introductory class on statistical literacy and health for years, so I already know quite a bit about issues caused by global screening for cancer. I recognize the journal Science as a giant in the field, and gravitate to that link in the Google results because of that knowledge. But those issues aside, what is most interesting to me is that a complex investigation looks like many simple investigations chained together. When you see that in a literacy context, it’s usually good news.

Misinformation May Be the Disease, But Curiosity Is the Cure

Tim Harford, whose work I have followed since I first got into media and statistical literacy a decade ago, has one of the best pieces yet on our post-truth moment. As we’ve often done in these pages, he traces the roots of our current crisis not to the 2016 election but to the realization in the 1950s by Big Tobacco that they could manufacture doubt at a fraction of the cost of adapting to truth. He goes through the well-known problems with attacking doubt and misinformation with facts, and comes to where we’ve landed with the Digital Polarization Project (sort of).

One of our big focuses for the Digital Polarization Project has been to try to engage the curiosity of students — to get them to think like reporters rather than attorneys, as encyclopedists rather than activists. Turn off the rhetoric for a while and just delight in finding new things out.

Tim comes at that from a bit of a different angle, essentially asking (as he has been asking for a while) where the Carl Sagan of sociology and public policy is — the person who can engage people in science and social science for the joy of exploration and learning rather than more immediate argumentative needs. But I think his conclusion plugs into things much bigger than that:

What Kahan and his colleagues found, to their surprise, was that while politically motivated reasoning trumps scientific knowledge, “politically motivated reasoning . . . appears to be negated by science curiosity”. Scientifically literate people, remember, were more likely to be polarised in their answers to politically charged scientific questions. But scientifically curious people were not. Curiosity brought people together in a way that mere facts did not. The researchers muse that curious people have an extra reason to seek out the facts: “To experience the pleasure of contemplating surprising insights into how the world works.”

I’ve talked much about the nature of wiki, and particularly the idea that your job is to summarize the consensus of a community of experts. You’re not writing for yourself in wiki: you’re writing to represent others.

People often find this difficult at first. They want to win arguments.

But here’s what happens when people get into the “wiki zone” of production: it changes you.

Let me give you an example from this morning. I was writing a piece on DigiPo on a claim that Fukushima had increased thyroid cancer in the surrounding area by several thousand percent. I went into it pretty inclined to disbelieve it, and in the end it did turn out to be false: there’s no good evidence that thyroid cancer in the surrounding area has increased at all. It’s early, and evidence might develop over time — but right now the answer is nope.

In the middle of doing research on it, however, I found an article in a journal that appeared to show otherwise. While not arguing for a 2,000% increase in prevalence, it did argue for substantial increases. And it was from Epidemiology, a journal of high stature.

Now you might expect me to kick against that evidence immediately since it disproves my personal gut on the evidence, and blows apart the piece I had been writing. But when you get deep into the wiki zone, that’s not how it feels. When I came across the article, I was delighted, because it added complexity to the article I was working on. It was surprising. It would allow my wiki article to tell a more interesting story, even if it undermined what I had thought up to now.

I was actually a bit depressed when, after a bit of research, I found that the article had been roundly criticized as methodologically flawed by the world’s biggest experts in the epidemiology of radiation exposure. (Epidemiology itself published seven letters in a later issue that tore apart the study and its conclusions.)

But this is what wiki does, as opposed to blogging. It puts you in a learning mode vs. an argumentative mode. You can feel it when it happens, physically, as it lets down the rhetorical defenses you’ve set up. Ward used to call it Egoless Wiki. When people let down defenses enough to get there, to delight in the investigation more than the result, that’s when you’re in the zone. And I think it correlates with Tim’s point — that to have truth win we can’t fight for truth — we have to fight for curiosity and a bit of egolessness. We have to ask people not to argue their point, but to tell us what they know. In the end that’s the only thing that’s going to save us.

Go read Tim’s piece though, it’s a brilliant summary of where we are and how we got here.


You Are Not the Hero of This Story

I’m a huge fan of peer-to-peer sharing systems. The whole idea of federated content takes much of its inspiration from platforms like BitTorrent, and I’ve repeatedly argued here that the future belongs to platforms that look more like IPFS than Dropbox. (In fact, if you read this blog, this was probably where you first heard about IPFS.) Federated wiki was, of course, the ultimate peer-to-peer OER machine, and I even went so far last year as to argue that torrented OER might be breaking into the mainstream.

I believe in the torrent model (over the URL model) so deeply that I’ve said that rediscovering name-based networking is key to the personal web, and that servers and URLs as the model are holding us back.

So I’m actually delighted that LBRY is trying a new torrent-like model for a YouTube replacement that balances out issues of creator control, payment, and distributed delivery of content. And even the fact that there is some BitCoin hand-waving in their materials doesn’t bother me — Ted Nelson himself envisioned a web with a system of micropayments and credits to creators, and people should still be trying to get that done. Artists and writers need to eat too, and the current dissolution of our society is partially attributable to the advertising/platform-based revenue model which rewards distributors over creators and clickbait over depth. Putting money in the pockets of creators is good.

What I dislike is headlines like this:


Headline: 20,000 Worldclass Lectures Made Illegal, So We Irrevocably Mirrored Them

LBRY took a bunch of OER and hosted it, the way people do every single day. That’s great. I like that.

But “made illegal”? The videos were never made illegal. Berkeley was told that they could no longer host the videos. As the press release that follows that headline notes, multiple archiving teams have been working on this effort, with Berkeley’s blessing: it’s OER.

The headline is phrased in classic Hacker News style, and I get it. Hustlers gotta hustle. The post slug is even worse — the lectures have been “rescued”. UC Berkeley spent years of effort and millions of dollars producing and sharing these lectures, and somehow LBRY is the hero of the story.

If the company really loves creators as much as it says it does, maybe they could spend some time talking about the wonderful work that UC Berkeley has been doing in this area instead of portraying them as simply a point of failure in the story. Maybe they could talk about the quality of the content they are seeding to the network. And if they really want to help out the OER community, maybe instead of seeing people with disabilities as the villain of the story they could caption those videos and feed forward the love, like a good open citizen.

This stuff seems petty, I suppose, but how you talk about creators matters, and how you talk about open matters. The hero of this story is UC Berkeley, which not only produced and shared their knowledge at the cost of millions of dollars over many years, but actually fought for their right to continue to do so in court. LBRY is either a distribution platform that is going to allow those OER heroes to shine brighter, or the latest in a series of platforms looking to make a quick fortune off the free work of others without advancing the value of their work. Press releases like this make me worry it’s likely to be more of what’s behind door number two.

Beyond WordPress

I missed this when Jim put it up, but Martha Burtis’s keynote abstract is up for the Domains conference:

Four years into Domain of One’s Own, I wonder if we are at an inflection point, and, if so, what we will do to respond to this moment. At its onset, Domains offered us paths into the Web that seemed to creatively and adequately address a perception that we weren’t fully inhabiting that space. Our students could carve out digital homes for themselves that were free of the walled gardens of the LMS. Our faculty could begin to think of the Web not as a platform for delivering content but as an ecosystem within which their teaching could live and breathe. In doing so, perhaps we would also engage our communities in deeper conversations about what the Web was and how we could become creators rather than merely consumers of that space. But in those four years, as in any four years, our popular culture, our technical affordances, and our political landscape have continued to march forward. How does Domain of One’s Own grow into and with these changes? Where do we take this project from here so that we continue to push the boundaries of our digital experiences? How do we address the ever-looming tension between building something sustainable while also nurturing new growth?

I’m excited to hear this keynote, not just because Martha is one of the most thoughtful people in this space, but because for me this is one of the big questions.

The core of open education for me is that we learn together by sharing what we know to the network. But a lot of open tool use is not about learning, but about creating in-groups and out-groups. A lot of internet activity is not about sharing what one knows but about telling others what to think.

Some of that is fine — I’m telling you what to think now, in a certain way. But balance is key. The projects I’ve admired most in this space over the past couple years — from UMW to Plymouth State to VCU — have been the projects that have used technology to do the sort of things that expressive platforms like Facebook can’t do. Ones that model the behaviors that are more likely to stop fake news rather than propagate it. Ones that engage students in the activities that increase the web’s usefulness to communities and citizens. But they are few and far between for reasons both technological and cultural. (I could write a book about the difficulties with my own Digital Polarization wiki project, for example).

Anyway, really looking forward to this talk.

Two Feeds, Two Scarcities

I’ve put my tweets on a rolling auto-delete, which probably means I’ll be doing occasional shorter pieces in this space in addition to longer pieces. For posterity, or something.

Anyway, a thought for the day. As we think about the firehose of the Stream — that never-ending reverse-chronological scroll of events that has become the primary metaphor of the web, via Facebook, Twitter, Instagram, and who-knows-what-else — it’s worth noting that the Stream was originally a solution for scarcity, not abundance. That is, the reason Facebook built the News Feed was that people got tired of checking all of their friends’ Facebook walls only to find there were no updates. So Facebook borrowed a lesson from RSS, which had solved this problem years earlier: serialize contributions from different places into a single reverse-chronological feed. This made sure that whenever you logged into Facebook you were guaranteed some activity with which to engage.

To repeat, the Stream here was a solution for too little activity. By pooling activity and time-ordering it, a sense of abundance was created.
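The serialization trick itself is almost trivially simple, which is part of the point. Here’s a minimal sketch of the idea (names and dates invented for illustration): instead of visiting each friend’s wall and often finding nothing new, pool every wall’s updates into one newest-first stream.

```python
# A minimal sketch of the wall-to-feed move: pool per-person updates
# into one reverse-chronological stream. Data is invented.

def news_feed(walls):
    """Serialize all walls into a single newest-first feed."""
    events = [(ts, who, what)
              for who, items in walls.items()
              for ts, what in items]
    return sorted(events, reverse=True)   # ISO date strings sort correctly

walls = {
    "alice": [("2006-09-05", "posted a photo")],
    "bob":   [],                           # nothing new -- the scarcity problem
    "carol": [("2006-09-06", "changed her status"),
              ("2006-09-04", "joined a group")],
}

for ts, who, what in news_feed(walls):
    print(ts, who, what)
```

Note what happens to Bob: his empty wall simply disappears into the pool. Three people’s trickles read as one steady stream, which is exactly the manufactured sense of abundance described above.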

We’ve talked about this before on this blog (I should find the link, but I’m being lazy at the moment).

What I don’t think I recognized before now was that this motivation was behind the first web stream as well — that granddaddy of all feeds, the NCSA “What’s New” page:

[Screenshot: the NCSA “What’s New” page]

The What’s New page was there for a bunch of reasons — making things findable being the big one, and creating a sense of WWW momentum being another. But the biggest reason why it was there was scarcity: Without it, people would log in and find nothing new to do. I mean look at it — you have an average of one or two servers — one or two servers — coming online each day. We’re not talking information overload here.

I don’t really have a point here. I just find it interesting that the feeds that we now portray as a solution to organizing abundance grew out of needs to deal with scarcity.


Google Should Be a Librarian, not a Family Feud Contestant

I’ve been investigating Google snippets lately, based on some work that other people have done. These are the “cards” that pop up on top sometimes, giving the user what appears to be the “one true answer”.

What’s shocking to me is not that Google malfunctions in producing these, but how often it malfunctions, and how easy it is to find malfunctions. It’s like there is little to no quality control on the algorithm at all.

Other people have found dozens of these over the past couple of days, but here are a few I found goofing off yesterday while half watching Incorporated on Syfy.

Prodded with the right terms, Google will tell you that:

  • Sasha Obama was adopted
  • Lee Harvey Oswald didn’t shoot JFK
  • GMOs make you sick

Want some screenshots? Today’s your lucky day!

[Screenshots: the Google snippet cards for each of the three claims above]

Now I’m sure that Google will reply that the results are the results. And I’m sure that other people will ask why I’m being such a special snowflake and stamping my iron boot on the neck of results I don’t like. (Their mixed metaphor, not mine!)

(By the way, trivia fact: one technique of populist dictatorships is to portray the opposition as simultaneously weak and effete while being all-powerful and brutal. Just some facts for your next pub trivia night…)

The truth is, however, that I have a fairly simple definition of a fact, and I would hope that a company whose stated mission is “to organize the world’s information” would as well. For me a fact is:

  • something that is generally not disputed
  • by people in a position to know
  • who can be relied on to accurately tell the truth

And so, not to be too Enlightenment era about this, but all these snippets fail that test. And not just fail: they fail spectacularly.

The person writing about the GMO health risks has no science background and is considered such a sham by the scientific community that when he appeared on Dr. Oz scientists refused to share the stage with him, fearing even that would be too much normalization of him.

The site writing about Sasha and Malia being adopted, “America’s Freedom Fighters”, is a site specializing in fake news to such an extent that Google autosuggests “fake news” if you type its name into the search box.

[Screenshot: Google autosuggest results for “America’s Freedom Fighters”]

And the JFK conspiracy theory is — well, a conspiracy theory. It’s literally the prototypical modern conspiracy theory. It’s the picture in the dictionary next to the word “conspiracy theory”.

The truth is, in cases like these, Google often fails on all three counts:

  • They foreground information that is either disputed or for which the expert consensus is the exact opposite of what is claimed.
  • They choose sites and authors who are in no position to know more about a subject than the average person.
  • They choose people who often have real reasons to be untruthful — for example, right-wing blogs supported by fracking billionaires, white supremacist coverage of “black-on-white” crime, or critics of traditional medicine who sell naturopathic remedies on the same site.

Google Should Not Be Family Feud

I never really got the show Family Feud when I was a kid. That’s partially because my parents mostly put me on a diet of PBS, which made anything higher on the dial look weird. But it’s also because it just didn’t jibe with my sense of why we ask questions in the first place.

For those that haven’t seen Family Feud, here’s how it works. The host of Family Feud asks you a question, like “What builds your appetite?” You try to guess what your average American would answer.

You win if you guess something in the top five of what most people would say. A lot of people say “smelling food,” so that ranks in the list. No one says “not eating,” so that doesn’t rank.

Watching this as a kid I’d always wonder, “Yes, but what actually builds your appetite the most?” Like, what’s the real answer? Don’t we care about that?

But Family Feud doesn’t care about that. It was never about what is true, it was about what people say.

I don’t think Google’s purpose is to aspire to be a Family Feud contestant, but it’s sometimes hard to tell. For example, a principle of “organizing the world’s information” has to be separating reliable sources from unreliable ones, and trying to provide answers that are true. But it’s clear that in many cases that’s not happening — otherwise quality control would be flagging these misfires and fixing them. The snippets, which create the impression of a definitive answer while feeding people bad science, conspiracy, and hate speech, make matters worse.

It should not be that hard to select good sources of information. For example, there is an excellent National Academies report on genetically engineered crops that was written by a mix of corporate and anti-corporate scientists and policy analysts. Here’s the conclusion of that study on health effects:


On the basis of its detailed examination of comparisons between currently commercialized GE and non-GE foods in compositional analysis, acute and chronic animal-toxicity tests, long-term data on health of livestock fed GE foods, and epidemiological data, the committee concluded that no differences have been found that implicate a higher risk to human health safety from these GE foods than from their non-GE counterparts. The committee states this finding very carefully, acknowledging that any new food—GE or non-GE—may have some subtle favorable or adverse health effects that are not detected even with careful scrutiny and that health effects can develop over time.

That’s actually what science looks and sounds like — having reviewed the data available, we find no evidence but are aware that since impacts may take time to develop there may yet be adverse impacts to appear.

If you went to a competent health sciences librarian and asked for material on this, this is what you’d get back. This report is one of the definitive statements to date on GMO safety. Because the librarian’s job is not to play Family Feud, but to get you the best information.

Google instead gives you the blog of a man with no medical or scientific training who claims GMOs cause infertility, accelerated aging, and organ damage. But “survey says!” that’s true, so it’s all good.

The world right now is in a post-truth crisis that threatens to have truly earth-shattering impacts. What Google returns on a search result can truly change the fate of the entire world. What Google returns can literally lead to the end of humanity as we know it, through climate change, nuclear war, or disease. Not immediately, but as it shapes public perception one result at a time.

I’m not asking Google to choose sides. I’m not asking them to put a finger on the scale for the answers I’d like to see. I’m asking them to emulate science in designing a process that privileges returning good information over bad. I’m asking that they take their place as a librarian of knowledge, rather than a Family Feud game show contestant. It seems a reasonable request.