Web Literacy Assignment: Jewish Population

This floated into my view from alt-right Facebook earlier today:

[Image: tables showing a world Jewish population of roughly 15 million in both 1933 and 1948]

It’s a set of tables from an unknown source showing that there were appx. 15 million Jews in the world in 1933 and 15 million in the world in 1948, the implication being that the Holocaust never happened.

Now, if you’re like me, your mind immediately starts racing with the statistical wait-a-seconds and that’s fine. But we’re not going to do it that way. We’re going to first find out where this came from, and how it was compiled, and who revived it. Then rather than develop our own arguments about this, we’re going to see if any experts in this area have commented on it.

Once we’re grounded in that stuff, then maybe we add our own bit of critical thinking to the mix. But the first step is getting grounded. Get your bearings! Use the web!

I’m dying to do this one myself, but I figured I’d save it for others. If you have a .edu account, go to digipo.io, register for an account, then investigate the claim and log your findings on the page I’ve created for this claim.

My Lazy Manifesto On This Post-Truth Moment: Technologies for Collaborative Exploration

My solution to the post-truth crisis is to develop a culture of collaborative explanation and exploration via development and use of new and different tools.

My belief is that humans have a couple modes of working with truth. Some are adversarial and propagative, and some are exploratory and collaborative. The adversarial mode is killing us.

My contention is that early visions of the web and digital technology (Bush, Engelbart, Kay, Berners-Lee, Cunningham) developed collaborative, exploratory approaches (Wiki, Memex, Dynabook, hypertext) as their dominant modes, but that later approaches (Facebook, Twitter) chose modes that promoted propagation and tribalism. That’s fine as far as it goes — these things are important. But as a dominant mode, adversarialism is, unsurprisingly, polarizing us, and killing truth in the process.

It doesn’t have to be this way. Keep in mind that as the rest of the web has polarized, Wikipedia has, over the years, become less biased. Keep in mind that in the sciences tools like Jupyter notebooks have moved many scientists from “no it isn’t-ism” to “Let me tinker with your code.”

By embracing new exploratory modes of technology use we can create a culture of exploration, just as by adopting the tools of rhetorical dominance we created an adversarial culture focused on winning arguments.

These tools include wiki (including newer versions of wiki), Jupyter notebooks, OneNote, and similar tools, but also require that tool makers rethink their own existing tools in radical ways. What would your platform look like if it made deeper investigation of an issue irresistible? If it made collaborative truth-seeking the norm?

Hint: Almost nothing like how tools look now.

Hint: Almost the opposite of how things look now.

Hint: Fix that.

That’s it. That’s my one great insight. It’s been the drum I’ve been beating since early 2013. It’s probably self-serving and maybe short-sighted. But it’s my insight and I thought I might put it in a single place and state it clearly.

And to the question of “But isn’t our current moment also caused by X?” Yes, most certainly. Gerrymandering, AM radio, racism, white supremacy, a parliamentary partisan culture mapped on top of a party-neutral government structure, corporate disinformation, adtech, Merchants of Doubt, the Big Sort, all of it. It all matters. My insight and expertise happens to be about the technologies we use, though, so I plan to work on that square of the problem.

The Power of Explaining to Others

From a great New Yorker article that ran last month:

In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health care system? Or merit-based pay for teachers?

Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we — or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

When we argue we become dumber, more blind to our own lack of knowledge and logical inconsistencies. When we try to explain or summarize how things work, on the other hand, we suddenly realize that we don’t know as much as we think we do, and we tend to moderate our opinions, and be more open to data that may conflict with our beliefs. Curiosity replaces dogma. (It’s probably not for nothing that the smartest folks in the open education space make a habit of providing others with daily or weekly summaries of articles.)

People wondered why I liked clickers — it seemed very un-open-education of me. Why so multiple choice, Mike?

But I didn’t like clickers. I liked peer instruction. In the peer instruction methodology, students have to explain how things work to other students — and in the process they realize that they have no fricking clue what they are talking about (even though they were dead sure they understood it twenty seconds before).

Have you ever heard a student say “I knew it until I had to explain it on the test?” Illusion of explanatory depth, right there. They didn’t know it. But they never were given any activities that allowed them to realize they didn’t know it.

What happens in peer instruction? You give students daily opportunities to realize they understand a fraction of what they think they do, and you get amazing learning gains.

People wonder why I got obsessed with federated wiki. I got obsessed for a number of reasons, but as I discussed in The Garden and the Stream, one of the primary ones was this: a daily process of trying to explain and connect incoming ideas rather than rating them and arguing them changes your brain in helpful ways. Federated wiki takes us down a path of explanation and connection. Traditional social media takes us down a path of argument and retrenchment.

People wonder why I spent time on Choral Explanations as a future for OER. The reason? It’s likely to be the future that most advances the ability of students to learn. When students have to explain things to others (rather than argue a point) they must address gaps in their own knowledge. They must pierce the “illusion of explanatory depth” and realize wow, they actually have no idea what they are talking about. Only then can they rectify that.

And now my answer to the Post-Truth crisis? It’s to have students explain things. Some things they investigate will be simply wrong, completely false. Hillary killed an FBI agent. Three million people voted illegally. The more interesting ones are subtle: Have thyroid cancers increased near Fukushima? Did the Republican Party of North Carolina brag about voter suppression?

Again, it’s the power of explaining things to others rather than arguing points. Can you summarize all sides rather than just present yours? And if you can’t summarize all sides, how in the world would you know that you are right?

It’s this power that I see most intersecting with open pedagogy as well. Explaining things to a teacher becomes just another test. Explaining things to people on the internet — especially where, as is the case with wiki, they can edit you — that’s the sort of stakes that forces some self-examination.

I think we’ve had a lot of open pedagogy that is about expression, and that’s wonderful. It’s certainly more engaging than some of the drier work of explanation. But as I’ve said many times over the past couple years, I think some of the most promising work in the future is having students explore that explanation space and come face-to-face with their own ignorance, as we all must do, and then either rectify that or perhaps just respect the issue’s complexity. I don’t know how to make that fun — please help me out there, all you talented people reading this! But I do not think it’s hyperbole to say the future of our planet depends on it.

Pulling the Moves Together

I’ve talked about how you have three basic moves in web investigations:

  • Check for previous work
  • Go upstream
  • Read laterally

These can be used on simple claims (“Bernie Sanders shouted ‘Death to America’ at a Communist rally”) to get an answer quickly. But the real reason I like this set of moves is that they can be combined and chained together for more complex investigations.

To show that, I recorded my screen for 50 minutes while I looked into the claim that millions may die of cancer due to the Fukushima reactor meltdown. As I went upstream I found there was no there there. There was literally no source to this information. About 15 minutes into the research I decided to focus on the more empirical claim that the rates of thyroid cancer in Fukushima Prefecture were hundreds of times above normal.

The thing I find when I do these investigations is it is just these moves, chained together over and over. You go upstream for a bit to find that one route is a dead end. You come back to your original document and find another route upstream. You get upstream there, but laterally reading shows you the site has no authority. You go to Google to see if Google can get you closer to the origin of the claim. You find counter-evidence to the claim. You go upstream to find the source of that counter evidence. You read laterally to assess the counter-evidence. And so on.

Here’s the video, sped up by a factor of three and re-narrated to make it (slightly) less boring:

You can look at the resulting page. It’s a really drafty writing job, but it’s a wiki, so feel free to sign up, log in, and make it better. 😉

There’s a lot of domain knowledge I have here that an average student might not. I helped develop statistical literacy guidelines and taught an introductory class on statistical literacy and health for years, so I already know quite a bit about issues caused by global screening for cancer. I recognize the journal Science as a giant in the field, and gravitate to that link in the Google results because of that knowledge. But those issues aside, what is most interesting to me is that a complex investigation looks like many simple investigations chained together. When you see that in a literacy context, it’s usually good news.

Misinformation May Be the Disease, But Curiosity Is the Cure

Tim Harford, whose work I have followed since I first got into media and statistical literacy a decade ago, has one of the best pieces yet on our post-truth moment. As we’ve often done in these pages, he traces the roots of our current crisis not to the 2016 election but to the realization in the 1950s by Big Tobacco that they could manufacture doubt at a fraction of the cost of adapting to truth. He goes through the well-known problems with attacking doubt and misinformation with facts, and comes to where we’ve landed with the Digital Polarization Project (sort of).

One of our big focuses for the Digital Polarization Project has been to try to engage the curiosity of students — to get them to think like reporters rather than attorneys, as encyclopedists rather than activists. Turn off the rhetoric for a while and just delight in finding new things out.

Tim comes at that from a bit of a different angle, essentially asking (as he has been asking for a while) where the Carl Sagan of sociology and public policy is — the person who can engage people in science and social science for the joy of exploration and learning rather than more immediate argumentative needs. But I think his conclusion plugs into things much bigger than that:

What Kahan and his colleagues found, to their surprise, was that while politically motivated reasoning trumps scientific knowledge, “politically motivated reasoning . . . appears to be negated by science curiosity”. Scientifically literate people, remember, were more likely to be polarised in their answers to politically charged scientific questions. But scientifically curious people were not. Curiosity brought people together in a way that mere facts did not. The researchers muse that curious people have an extra reason to seek out the facts: “To experience the pleasure of contemplating surprising insights into how the world works.”

I’ve talked much about the nature of wiki, and particularly the idea that your job is to summarize the consensus of a community of experts. You’re not writing for yourself in wiki: you’re writing to represent others.

People often find this difficult at first. They want to win arguments.

But here’s what happens when people get into the “wiki zone” of production: it changes you.

Let me give you an example from this morning. I was writing a piece on DigiPo on a claim that Fukushima had increased thyroid cancer in the surrounding area by several thousand percent. I went into it pretty inclined to disbelieve it, and in the end it did turn out to be false: there’s no good evidence that thyroid cancer in the surrounding area has increased at all. It’s early, and evidence might develop over time — but right now the answer is nope.

In the middle of doing research on it, however, I found an article in a journal that appeared to show otherwise. While not arguing for a 2,000% increase in prevalence, it did argue for substantial increases. And it was from Epidemiology, a journal of high stature.

Now you might expect me to kick against that evidence immediately since it disproves my personal gut on the evidence, and blows apart the piece I had been writing. But when you get deep into the wiki zone, that’s not how it feels. When I came across the article, I was delighted, because it added complexity to the article I was working on. It was surprising. It would allow my wiki article to tell a more interesting story, even if it undermined what I had thought up to now.

I was actually a bit depressed when, after a bit of research, I found that the article had been roundly criticized as methodologically flawed by the world’s biggest experts in the epidemiology of radiation exposure. (Epidemiology itself published seven letters in a later issue that tore apart the study and its conclusions.)

But this is what wiki does, as opposed to blogging. It puts you in a learning mode vs. an argumentative mode. You can feel it when it happens, physically, as it lets down the rhetorical defenses you’ve set up. Ward used to call it Egoless Wiki. When people let down defenses enough to get there, to delight in the investigation more than the result, that’s when you’re in the zone. And I think it correlates with Tim’s point — that to have truth win we can’t fight for truth — we have to fight for curiosity and a bit of egolessness. We have to ask people not to argue their point, but to tell us what they know. In the end that’s the only thing that’s going to save us.

Go read Tim’s piece though, it’s a brilliant summary of where we are and how we got here.


You Are Not the Hero of This Story

I’m a huge fan of peer-to-peer sharing systems. The whole idea of federated content takes much of its inspiration from platforms like BitTorrent, and I’ve repeatedly argued here that the future belongs to platforms that look more like IPFS than Dropbox. (In fact, if you read this blog, this was probably where you first heard about IPFS.) Federated wiki was, of course, the ultimate peer-to-peer OER machine, and I even went so far last year as to argue that torrented OER might be breaking into the mainstream.

I believe in the torrent model (over the URL model) so deeply that I’ve said that rediscovering name-based networking is key to the personal web, and that servers and URLs as the model are holding us back.

So I’m actually delighted that LBRY is trying a new torrent-like model for a YouTube replacement that balances out issues of creator control, payment, and distributed delivery of content. And even the fact that there is some BitCoin hand-waving in their materials doesn’t bother me — Ted Nelson himself envisioned a web with a system of micropayments and credits to creators, and people should still be trying to get that done. Artists and writers need to eat too, and the current dissolution of our society is partially attributable to the advertising/platform-based revenue model which rewards distributors over creators and clickbait over depth. Putting money in the pockets of creators is good.

What I dislike is headlines like this:

[Screenshot of the LBRY post headline: “20,000 Worldclass Lectures Made Illegal, So We Irrevocably Mirrored Them”]

LBRY took a bunch of OER and hosted it, the way people do every single day. That’s great. I like that.

But “made illegal?” The videos were never made illegal. Berkeley was told that they could no longer host the videos. As the press release that follows that headline notes, multiple archiving teams have been working on this effort, with Berkeley’s blessing: it’s OER.

The headline is phrased in classic Hacker News style, and I get it. Hustlers gotta hustle. The post slug is even worse — the lectures have been “rescued”. UC Berkeley spent years of effort and millions of dollars producing and sharing these lectures, and somehow LBRY is the hero of the story.

If the company really loves creators as much as it says it does, maybe they could spend some time talking about the wonderful work that UC Berkeley has been doing in this area instead of portraying them as simply a point of failure in the story. Maybe they could talk about the quality of the content they are seeding to the network. And if they really want to help out the OER community, maybe instead of seeing people with disabilities as the villain of the story they could caption those videos and feed forward the love, like a good open citizen.

This stuff seems petty, I suppose, but how you talk about creators matters, and how you talk about open matters. The hero of this story is UC Berkeley, which not only produced and shared their knowledge at the cost of millions of dollars over many years, but actually fought for their right to continue to do so in court. LBRY is either a distribution platform that is going to allow those OER heroes to shine brighter, or the latest in a series of platforms looking to make a quick fortune off the free work of others without advancing the value of their work. Press releases like this make me worry it’s likely to be more of what’s behind door number two.

Beyond WordPress

I missed this when Jim put it up, but Martha Burtis’s keynote abstract is up for the Domains conference:

Four years into Domain of One’s Own, I wonder if we are at an inflection point, and, if so, what we will do to respond to this moment. At its onset, Domains offered us paths into the Web that seemed to creatively and adequately address a perception that we weren’t fully inhabiting that space. Our students could carve out digital homes for themselves that were free of the walled gardens of the LMS. Our faculty could begin to think of the Web not as a platform for delivering content but as an ecosystem within which their teaching could live and breathe. In doing so, perhaps we would also engage our communities in deeper conversations about what the Web was and how we could become creators rather than merely consumers of that space. But in those four years, as in any four years, our popular culture, our technical affordances, and our political landscape has continued to march forward. How does Domain of One’s Own grow into and with these changes? Where do we take this project from here so that we continue to push the boundaries of our digital experiences? How do we address the ever-looming tension between building something sustainable while also nurturing new growth?

I’m excited to hear this keynote, not just because Martha is one of the most thoughtful people in this space, but because for me this is one of the big questions.

The core of open education for me is that we learn together by sharing what we know with the network. But a lot of open tool use is not about learning, but about creating in-groups and out-groups. A lot of internet activity is not about sharing what one knows but about telling others what to think.

Some of that is fine — I’m telling you what to think now, in a certain way. But balance is key. The projects I’ve admired most in this space over the past couple years — from UMW to Plymouth State to VCU — have been the projects that have used technology to do the sort of things that expressive platforms like Facebook can’t do. Ones that model the behaviors that are more likely to stop fake news rather than propagate it. Ones that engage students in the activities that increase the web’s usefulness to communities and citizens. But they are few and far between for reasons both technological and cultural. (I could write a book about the difficulties with my own Digital Polarization wiki project, for example).

Anyway, really looking forward to this talk.

Two Feeds, Two Scarcities

I’ve put my tweets on a rolling auto-delete, which probably means I’ll be doing occasional shorter pieces in this space in addition to longer pieces. For posterity, or something.

Anyway, a thought for the day. As we think about the firehose of the Stream — that never-ending reverse-chronological scroll of events that has become the primary metaphor of the web, via Facebook, Twitter, Instagram, and who-knows-what-else — it’s worth noting that the Stream was originally a solution for scarcity, not abundance. That is, the reason Facebook made the News Feed was that people got tired of checking all of their friends’ Facebook walls only to find there were no updates. So Facebook borrowed a lesson from RSS, which had solved this problem years earlier: serialize contributions from different places into a single reverse-chronological feed. This made sure that whenever you logged into Facebook you were guaranteed there was some activity with which to engage.

To repeat, the Stream here was a solution for too little activity. By pooling activity and time-ordering it, a sense of abundance was created.
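Mechanically it’s a simple trick, and it’s the same one RSS readers pulled years earlier. Here’s a minimal sketch in Python (the friends and timestamps are invented) of what that serialization amounts to: pool everyone’s sparse activity and sort it newest-first, so there’s always something at the top when you arrive.

```python
from datetime import datetime

# Invented per-friend activity: on any given day most individual walls are empty
walls = {
    "alice": [("2017-03-20 09:15", "posted a photo")],
    "bob":   [],
    "carol": [("2017-03-19 22:40", "shared a link"),
              ("2017-03-20 11:02", "updated her status")],
}

# The Stream: pool everything and serialize it in reverse chronological order
feed = sorted(
    ((datetime.strptime(ts, "%Y-%m-%d %H:%M"), friend, event)
     for friend, events in walls.items()
     for ts, event in events),
    reverse=True,
)

for when, friend, event in feed:
    print(when, friend, event)
```

Three mostly empty walls become a feed that never looks empty. That’s the whole move.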

We’ve talked about this before on this blog (I should find the link, but I’m being lazy at the moment).

What I don’t think I recognized before now was that this motivation was behind the first web stream as well — that granddaddy of all feeds, the NCSA “What’s New” page:

[Screenshot: the NCSA “What’s New” page]

The What’s New page was there for a bunch of reasons — making things findable being the big one, and creating a sense of WWW momentum being another. But the biggest reason why it was there was scarcity: Without it, people would log in and find nothing new to do. I mean look at it — you have an average of one or two servers — one or two servers — coming online each day. We’re not talking information overload here.

I don’t really have a point here. I just find it interesting that the feeds that we now portray as a solution to organizing abundance grew out of needs to deal with scarcity.


Google Should Be a Librarian, not a Family Feud Contestant

I’ve been investigating Google snippets lately, based on some work that other people have done. These are the “cards” that pop up on top sometimes, giving the user what appears to be the “one true answer”.

What’s shocking to me is not that Google malfunctions in producing these, but how often it malfunctions, and how easy it is to find malfunctions. It’s like there is little to no quality control on the algorithm at all.

Other people have found dozens of these over the past couple days, but here are a few I found goofing off yesterday while half-watching Incorporated on Syfy.

Prodded with the right terms, Google will tell you that:

  • Sasha Obama was adopted
  • Lee Harvey Oswald didn’t shoot JFK
  • GMOs make you sick

Want some screenshots? Today’s your lucky day!

[Screenshot: Google snippet claiming Lee Harvey Oswald didn’t shoot JFK]

[Screenshot: Google snippet claiming Sasha Obama was adopted]

[Screenshot: Google snippet claiming GMOs harm your health]

Now I’m sure that Google will reply that the results are the results. And I’m sure that other people will ask why I’m being such a special snowflake and stamping my iron boot on the neck of results I don’t like. (Their mixed metaphor, not mine!)

(By the way, trivia fact: one technique of populist dictatorships is to portray the opposition as simultaneously weak and effete while being all-powerful and brutal. Just some facts for your next pub trivia night…)

The truth is, however, that I have a fairly simple definition of a fact, and I would hope that a company whose stated mission is “to organize the world’s information” would as well. For me a fact is:

  • something that is generally not disputed
  • by people in a position to know
  • who can be relied on to accurately tell the truth

And so, not to be too Enlightenment era about this, but all these snippets fail that test. And not just fail: they fail spectacularly.

The person writing about the GMO health risks has no science background and is considered such a sham by the scientific community that when he appeared on Dr. Oz scientists refused to share the stage with him, fearing even that would be too much normalization of him.

The site writing about Sasha and Malia being adopted, “America’s Freedom Fighters”, is a site specializing in fake news to such an extent that Google autosuggests “fake news” if you type it into the search box.

[Screenshot: Google autosuggesting “fake news” alongside “America’s Freedom Fighters”]

And the JFK conspiracy theory is — well, a conspiracy theory. It’s literally the prototypical modern conspiracy theory. It’s the picture in the dictionary next to the word “conspiracy theory”.

The truth is that in cases like these Google often fails on all three counts:

  • They foreground information that is either disputed or for which the expert consensus is the exact opposite of what is claimed.
  • They choose sites and authors who are in no position to know more about a subject than the average person.
  • They choose people who often have real reasons to be untruthful — for example, right-wing blogs supported by fracking billionaires, white supremacist coverage of “black-on-white” crime, or critics of traditional medicine who sell naturopathic remedies on their own sites.

Google Should Not Be Family Feud

I never really got the show Family Feud when I was a kid. That’s partially because my parents mostly put me on a diet of PBS, which made anything higher on the dial look weird. But it’s also because it just didn’t jibe with my sense of why we ask questions in the first place.

For those that haven’t seen Family Feud, here’s how it works. The host of Family Feud asks you a question, like “What builds your appetite?” You try to guess what your average American would answer.

You win if you guess something in the top five of what most people would say. So a lot of people say “smelling food” so that ranks in the list. No one says “not eating” so that doesn’t rank.

Watching this as a kid I’d always wonder, “Yes, but what actually builds your appetite the most?” Like, what’s the real answer? Don’t we care about that?

But Family Feud doesn’t care about that. It was never about what is true, it was about what people say.

I don’t think Google’s purpose is to aspire to be a Family Feud game show team, but it’s sometimes hard to tell. For example, a principle of “organizing the world’s information” has to be separating reliable sources from unreliable ones, and trying to provide answers that are true. But it’s clear that in many cases that’s not happening — otherwise quality control would be flagging these misfires and fixing them. The snippets, which create the impression of a definitive answer while feeding people bad science, conspiracy, and hate speech, make matters worse.

It should not be that hard to select good sources of information. For example, there is an excellent National Academies report on genetically engineered crops that was written by a mix of corporate and anti-corporate scientists and policy analysts. Here’s the conclusion of that study on health effects:

[Image: the committee’s conclusion on health effects, quoted below]

On the basis of its detailed examination of comparisons between currently commercialized GE and non-GE foods in compositional analysis, acute and chronic animal-toxicity tests, long-term data on health of livestock fed GE foods, and epidemiological data, the committee concluded that no differences have been found that implicate a higher risk to human health safety from these GE foods than from their non-GE counterparts. The committee states this finding very carefully, acknowledging that any new food—GE or non-GE—may have some subtle favorable or adverse health effects that are not detected even with careful scrutiny and that health effects can develop over time.

That’s actually what science looks and sounds like — having reviewed the available data, we find no evidence of harm, but we are aware that, since impacts may take time to develop, adverse effects may yet appear.

If you went to a competent health sciences librarian and asked for material on this, this is what you’d get back. This report is one of the definitive statements to date on GMO safety. Because the librarian’s job is not to play Family Feud, but to get you the best information.

Google instead gives you the blog of a man with no medical or scientific training who claims GMOs cause infertility, accelerated aging, and organ damage. But “survey says!” that’s true, so it’s all good.

The world right now is in a post-truth crisis that threatens to have truly earth-shattering impacts. What Google returns on a search result can truly change the fate of the entire world. What Google returns can literally lead to the end of humanity as we know it, through climate change, nuclear war, or disease. Not immediately, but as it shapes public perception one result at a time.

I’m not asking Google to choose sides. I’m not asking them to put a finger on the scale for the answers I’d like to see. I’m asking them to emulate science in designing a process that privileges returning good information over bad. I’m asking that they take their place as a librarian of knowledge, rather than a Family Feud game show contestant. It seems a reasonable request.

Doubt Versus a Bayesian Outlook

There are lots of primary causes of the recent assault on truth that are non-technological. In fact, most causes have very little to do with technology. I’d point people to the excellent book Merchants of Doubt, which details the well-funded and well-planned corporate assault on science that began as early as the 1950s around the issue of whether cigarettes cause cancer. There was a simple but profound realization Big Tobacco had 50 years ago — they didn’t have to refute the conclusion of the science that clearly, even back then, pointed to tobacco as a primary cause of lung cancer. They just had to introduce doubt.

The neat thing about doubt is it makes you look and feel like a pretty deep thinker. America loves doubt. Every four years we run an election for 18 months and then treat the people who haven’t decided until the last week of the election as if they were some sort of free-thinkers rather than the most politically ignorant population in the country. The mythology of doubt is strong.

Reporter:  “So what do you think about the election, Bob?”

Independent: “Well, I’m not sure. Clinton has some good points, but Trump seems like a strong leader. I like to take my time thinking about these things.”

Reporter: “Well, it’s quite the important decision. Back to you, Maria!”

The mythology of doubt is that we have things which need to be “proven”, and until they get proven we are in a state of doubt: we really don’t know what to believe. Who can say?

But doubt is not actually what you want. Doubt is just certainty from another direction, and these two orientations — doubt and certainty — form a binary worldview that promotes polarization, narrow thinking, and poor policy outcomes.

What you really want is not doubt. What you want, for lack of a better word, is to be Bayesian in your outlook. The famous statistician and epidemiologist Jerome Cornfield, responsible for much of the revival of Bayesian approaches in epidemiology in the 1960s and beyond, used to talk about the “Bayesian Outlook”.

The Bayesian Outlook is at its heart simple, but it’s also profound. Here’s Cornfield:

The Bayesian outlook can be summarized in a single sentence: any inferential or decision process that does not follow from some likelihood function and some set of priors has objectively verifiable deficiencies. The application of this outlook is a largely extra-mathematical task, requiring the selection of likelihoods and priors that are appropriate to given problem situations, with the determination of what is appropriate requiring, in Fisher’s words (in another context), ‘responsible and independent thinkers applying their minds and imaginations to the detailed interpretation of verifiable observations.’ (Cornfield, 1969)

There’s a field of Bayesian statistics that is fairly developed and beyond the scope of this post. But as Cornfield notes, Bayesian approaches are not really about the math — they are about a way of looking at the world. And given that, I think it’s possible to talk about having a “Bayesian outlook” when it comes to fact-checking.

What does this mean in practice? As an example, I use this tweet occasionally in my presentations:

https://twitter.com/RonHogan/status/826126335328264192?ref_src=twsrc%5Etfw

Is the part about the Nazis true? It’s either true or not, of course. But we can only view that truth through an array of probability.

When I first see something like this, my immediate reaction is it has a good chance of being true. Why?

Well, there are priors. I know Schumer is Jewish, of European descent. And I know that the Nazis and their collaborators killed a substantial portion of that population, maybe about 40%. I also know you have, by definition, eight great-grandparents. The chance that at least one of those eight died in WWII at the hands of Nazis or Nazi collaborators was reasonably high before this tweet ever reached me.

We call these the priors: they exist before this tweet makes its way to me. One key component of Bayesian analysis is that we begin with a set of priors, and pay careful attention to the selection of those priors before assimilating new information.
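To put crude numbers on that prior (and they are crude: the 40% figure is a population-level guess, and treating the eight great-grandparents as independent is obviously a simplification, so this is an illustration, not an estimate), the back-of-the-envelope arithmetic looks something like this:

```python
p_killed = 0.40          # rough population-level share, used only as a stand-in
n_great_grandparents = 8

# Chance that at least one of the eight was killed, treating each
# (unrealistically) as an independent draw from that population
p_at_least_one = 1 - (1 - p_killed) ** n_great_grandparents
print(round(p_at_least_one, 2))  # roughly 0.98
```

The exact number doesn’t matter much; the point is that this is the kind of arithmetic a prior is made of, and here it starts well above a coin flip.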

Now as to the new information: the fact that someone tweeted this claim makes the claim more probable, to some extent. This is a specific claim. It came to me through a feed where I weed out the worst misinformation offenders pretty regularly. The second statement, about Trump’s father, is true.

It seems plausible. But I follow my prime habit with social media: check your emotions. Never reflexively tweet something factual that feels “perfect”.

A quick search shows there’s a 1998 article from the New York Times that says that “aides say” seven of nine of his great-grandmother’s children were killed by Nazis. That’s good, and raises the likelihood it’s true. The old priors plus this new information become our new priors. We’ve moved from plausible to probable.

But I want to hear it from Chuck Schumer’s mouth, not some unnamed aides responding to a campaign attack in 1998.

And when you start to try to find Schumer saying it, things get less clear. There is Holocaust event after Holocaust event that Schumer has attended — and yet this fact never makes the papers or his speeches:

[Screenshot: search results for Schumer’s remarks at Holocaust events]

Absence of evidence is not strong evidence of absence. But it is evidence, especially as it starts to pile up. With each failed attempt to find support for this, my disposition towards this fact inches down, moving from likely and sinking back towards plausible.

Then, at some point, I change my search terms. One of the unreliable sites on this question — a forum post — mentions a “porch” where his great-grandmother was killed. That’s a specific detail that is likely to get me closer to the event. So I throw it in and look at what comes up:

[Screenshot: search results for the new query, with a congressional hearing transcript at the top]

And when we go to that top result we find testimony from Schumer at a congressional hearing on reparations for Holocaust survivors:

Senator Schumer. Now I am going to give my opening statement, and first I want to start by thanking our Chairman, Chairman Leahy, for letting me have the gavel today in order to explore this exceptionally important topic: how to resolve what I hope, what we all hope are among the last remaining reparation claims stemming from the murder of 6 million Jews during the Holocaust. We all know the horror of the Holocaust. My great-grandmother, who was the matriarch of her family, was told to leave her home. She and her family had gathered on the front porch. They refused to leave, and they just machine-gunned all of them down in 1941. So, obviously, I have personal experience with the horrors of the Holocaust, but the horrors are just awful.

Sometimes we refer to the horror as “unspeakable.” But unspeakable is exactly what the Holocaust must never become. Those who perpetrated it, those who benefited from it want us not to speak. But we are here to speak and to have this hearing.

Now that’s a good source — official testimony from Schumer himself. From a written statement. The weight of this evidence outweighs everything prior, but is still added to it. It’s not just that Schumer is telling a story here, but that he is telling a story about an event that was plausible to begin with.

Is it bulletproof? No. Schumer could, of course, be lying, or exaggerating. He might have misheard the story or misremembered how it was told to him. But right now, the best information we have is this testimony plus the remarks of others (such as aides) over a 20-year period. We have enough here, in the absence of other evidence, to call this claim true.

But unlike “doubt” or “certainty,” which demand that anything less than perfect knowledge one way or the other leave us in a useless middle ground, we end up, with each step, with better, more informed priors, even as our judgments of what is true vacillate. By the end we call this claim true, because to overturn what we know here would require strong evidence that currently doesn’t seem to exist. But we’d be excited to get new information, even if it contradicted this, because it would build a better set of priors, both for this claim and for related ones.
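For anyone who wants to see the shape of that process, here’s a toy sketch in code. Every number in it is invented purely for illustration; the mechanism is Bayes’ rule in odds form, where each new piece of evidence multiplies the odds on the claim by how much more likely that evidence would be if the claim were true than if it were false.

```python
def update(odds, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
    return odds * likelihood_ratio

odds = 2.0                 # invented starting point: the claim is plausible
odds = update(odds, 4.0)   # 1998 NYT piece quoting aides: modest support
odds = update(odds, 0.5)   # no mention at Holocaust events he attended: mild evidence against
odds = update(odds, 25.0)  # Schumer's own Senate testimony: strong support

probability = odds / (1 + odds)
print(round(probability, 2))  # ends well past "probable"
```

Note that even the step that lowered the odds left us better informed; that’s the difference between this outlook and doubt, which treats every dip as a reason to throw up our hands.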

This post is pretty nascent stuff — and maybe I’ve bitten off a bit more than I can chew here. But I suppose what I’m saying is that fact-checking on a complex claim looks a bit like this:

[Diagram: fact-checking a complex claim]

We’ll get this together in a better presented post at some later time. But I do think one of the primary goals of fact-checking is to get students to think about truth in more nuanced ways, and this is the sort of direction I see that going, instead of the cynical skepticism we often peddle.