Connexions News: New Editor, Big Announcement on March 31

I’ve become interested in how forking content could help OER. The two big experiments in OER forking I know of come from WikiEducator and Connexions. (There may be others I’m forgetting; you can correct me in the comments.) Connexions, in particular, has been looking at this issue for a very long time.

In an effort not to be Sebastian Thrun, I’m trying to understand the difficulties these efforts have encountered in the past before building new solutions. It turns out Connexions may still have a trick or two up its sleeve, so I’m passing the information on to you: there appears to be an announcement coming next week, and a new editor coming out as well.


One note about OER — this editing thing has always been a bear of a problem. You want editing to be easy for people, which means WYSIWYG. At the same time, since content has to be ported into multiple contexts, you want markup to be semantic. Semantic and WYSIWYG have traditionally been oil and water, and so you end up with either a low bar to entry and documents that are a pain to repurpose, or portable documents that no one can really edit without a mini-course. There are multiple ways to deal with this (including just giving up on module-level reuse entirely), but I’m interested to see the new editor. We have invested far too little money in the tools to do this right.
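
To make that tension concrete, here is a toy sketch in Python (my own illustration, not anything Connexions or its new editor actually does): a semantically tagged chunk can be re-rendered for different contexts, while a presentational chunk has its one look baked in.

```python
# Toy illustration of why semantic markup repurposes more easily than
# presentational markup. Element names and renderers are hypothetical.

semantic_chunk = {
    "type": "definition",
    "term": "fork",
    "body": "A copy of a document that evolves independently of the original.",
}

# The presentational version bakes in one look and one context.
presentational_chunk = "<b><i>fork</i></b>: A copy of a document that evolves independently."

def render_html(chunk):
    # One possible presentation of the semantic element, for the web.
    return f'<dl><dt>{chunk["term"]}</dt><dd>{chunk["body"]}</dd></dl>'

def render_latex(chunk):
    # The same element, repurposed for a print context.
    return f'\\paragraph{{{chunk["term"]}}} {chunk["body"]}'

print(render_html(semantic_chunk))
print(render_latex(semantic_chunk))
```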

Why I Don’t Edit Wikis (And Why You Don’t Either, and What We Can Do About That)

Back in the heady days of 2008, I was tempted to edit a Wikipedia article. Tempted. Jim Groom had just released EDUPUNK to the world, and someone had put up a stub on Wikipedia for the term. Given I was involved with the earlier discussions on the term, I thought I’d pitch in.

Of course, what happened instead was a talkpage war over whether there was sufficient notability to the term. Apparently the hundred or so blog posts on the term did not provide notability, since they did not exist in print form. Here’s the sort of maddening quote that followed after Jim got on the page and granted CC-BY status to a photo so Wikipedia could use it. Speaking as a Wikipedia regular, one editor argues vociferously against the idea that EDUPUNK deserves a page on the site:

This is clearly a meme. No one agrees what it means, its nice that a group of educators are so fond of wikipedia but it shouldnt be used for the purpose of promoting a new website and group. Even in this talk page this becomes clear, the poster boy says “Hey Enric, both of these images are already licensed under CC with a 2.0 nc-sa”Attribution-Share Alike 2.0 Generic.” It wouldn’t be very EDUPUNK if they weren’t ” then goes on to change the copyright of his own image to include it in this article, this is not ideology, this is a marketing campaign.

There are a couple of things to note here. First, the person whining above is not wrong, per se. This article is a public billboard of sorts, vulnerable to abuse by marketers, and vigilance makes sense. But ultimately his protestations — and given Wikipedia’s gender bias it’s almost certainly a he — end up being ridiculous. A few months later EDUPUNK is chosen as one of the words of the year by the New York Times, while Wikipedia remains unable to agree whether it rises to the dizzying notability heights of fish finger sandwich.

But the most telling part of that comment is this:

No one agrees what it means, its nice that a group of educators are so fond of wikipedia but it shouldnt be used for the purpose of promoting a new website and group.

No one agrees what it means. Ward Cunningham, the guy who invented wikis, has been talking for a while about the problem with this assumption — that we must agree immediately on these sorts of sites — and believes it to be the fundamental flaw of wikis. The idea that people should engage with one another and try to come to a common understanding is a good thing, absolutely. The flaw, however, is that the wiki format pushes you toward immediate consensus. The format doesn’t give people enough time to develop their own ideas individually or as a subgroup. So an article about fish finger sandwiches can get written (we’re all in agreement, good!) whereas an article on EDUPUNK can’t get written (too many different viewpoints, bad!).

It’s important to note Cunningham’s exact point here. Many people have gone after the culture of Wikipedia in recent years, a culture which is increasingly broken. Cunningham’s point is that the culture is a product of the tool itself, which doesn’t give folks enough alone time. We need to break off, develop our ideas, and come back and reconcile them. And we need a tool that encourages us to do that.

I’ve been thinking this through for a bit, trying to come up with a solution to this problem that has the spirit of Cunningham’s proposed federated wiki but is easier for people to wrap their heads around. Here’s the basic idea, mostly carried forward from Cunningham, but eliminating a couple of the more complex concepts and simplifying the implementation.

  1. I install a wiki on my server, but it’s not empty. It’s a copy of a reference on online learning (or some other reference of interest to me), with all wiki pages transcluded. For the uninitiated, what this means is my wiki “passes through” the existing wiki pages. For the purposes of imagining this, let’s pretend I just pull 2500 articles about learning and networks from Wikipedia, and transclude them on my wiki/server.
  2. I then join a federation. So let’s say I join a federation of 100 instructional designers and technologists. This changes search for me, because search on my wiki is federated now. I can search across the federation for an article on EDUPUNK. Let’s say it’s 2008 and I’m looking for a quick explanatory link on EDUPUNK to send someone. I pump in that search and find there are five or six somewhat crappy treatments, and one half-decent one by Martin Weller.
  3. I don’t edit it. Or rather, I do, but the minute I edit it, this becomes a fork that only lives on my server. So I fix it up without having to get into long arguments with people about notability, etc. When done, I shoot a link to the person I wanted to send the article to. My selfish needs are met.
  4. Now, however, when anyone goes to their EDUPUNK article in the federation, they see that I’ve written a new version. Some people decide to adopt this as their version. Martin Weller sees my edits, and works about half of them into his version along with some other stuff. Jim comes by and adopts Martin’s new version with some changes. It’s better than my version, so I adopt that one.
  5. Tools start to show a coalescence around the Martin-Me-Martin-Jim version. A wiki gardener in charge of the “hub” version looks at the various versions and pulls them together, favoring the Martin-Me-Martin-Jim version, but incorporating other elements as well. This version will get distributed when new people join the federation, but as before, people can fork it, and existing forks remain intact.

The idea here is that forks preserve information by giving people the freedom to edit egocentrically, while the system makes reconciliation easy by keeping track of the other versions, so that periodic gardening can bring these versions back together into a more generic whole.
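
To make the mechanics concrete, here is a minimal sketch of that data model in Python. The class names, fields, and federation logic are my own illustration, not Cunningham’s federated wiki code or any shipping implementation.

```python
# Minimal sketch of the fork-and-reconcile model described above.
# Class names and fields are illustrative, not a real federated wiki API.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Page:
    title: str
    text: str
    author: str                        # whose server this version lives on
    forked_from: Optional[str] = None  # author of the version this was forked from

@dataclass
class Site:
    owner: str
    pages: dict = field(default_factory=dict)  # title -> Page

    def fork_and_edit(self, page: Page, new_text: str) -> Page:
        """Editing never touches the original; it creates a local fork."""
        mine = Page(page.title, new_text, self.owner, forked_from=page.author)
        self.pages[page.title] = mine
        return mine

def federated_search(federation, title):
    """Return every version of a page across the federation, so readers
    can see who has forked what and decide which version to adopt."""
    return [site.pages[title] for site in federation if title in site.pages]

# Usage: Martin has a page, I fork it, and the federation now shows both versions.
martin, me = Site("martin"), Site("mike")
martin.pages["EDUPUNK"] = Page("EDUPUNK", "A half-decent treatment.", "martin")
me.fork_and_edit(martin.pages["EDUPUNK"], "A half-decent treatment, plus my fixes.")

for version in federated_search([martin, me], "EDUPUNK"):
    print(version.author, "| forked from:", version.forked_from)
```

The invariant worth noticing is that an edit never overwrites anyone else’s copy; it only creates a new version the rest of the federation can see and, if they like, adopt.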

You can think about this from any number of angles — imagine an online textbook, for example, that allowed you to see all the modifications made to that textbook by other instructors — and not edits living on a corporate server owned by Harcourt-Brace, but edits that were truly distributed. Imagine a federated student wiki, where your students could build out their articles in pieces during the semester, seeing how other students had forked and modified their articles, but keeping control of their subsite, and not being forced to accept outside edits. Each student’s final work would reflect *their* set of decisions about the subject and the critiques of their treatment of it. Or imagine support documentation that kept track of localizations, making it easy to see which things various clients needed clarified, and making those changes available to all.

Anyway, this is the idea. Encourage forking, but make reconciliation easy. It’s the way things are going, and the implications for both OER production and academic wikis are huge.

Short Notes on the Absence of Theory

Martin Weller, Stephen Downes, and Matt Crosslin have been kicking around the “post-theory” critique of MRI ’13 that came up in a discussion Jim Groom and I had Thursday night in the middle of a bar in the middle of a hotel in the middle of an ice storm.

I thought I might just add a bit of context and my two cents.

First, the conversation came up because Jim was quite nicely (and genuinely) asking an edX data analyst what Big Data was. The answer that analyst gave was that Big Data was data that was big. That’s actually technically correct — the original term was meant to refer to data that was big enough in terabytes/petabytes that it could not be processed through traditional means. If your data was big enough that you were using Hadoop, it was Big Data.

Because I’m generally a person who can’t keep my mouth shut, I interjected that while that was true from a technical standpoint, it didn’t really get at the cultural significance of the Big Data movement, which was captured in Chris Anderson’s “End of Theory” article back in 2008. Here’s a sample:

Google’s founding philosophy is that we don’t know why this page is better than that one: If the statistics of incoming links say it is, that’s good enough. No semantic or causal analysis is required. That’s why Google can translate languages without actually “knowing” them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content.


Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

While my analogy-prone brain sees parallels here to Searle’s Chinese Room problem, it’s probably more correct to see this as behaviorism writ large: where Skinner wanted us to see the mind as a black box determined by inputs and outputs, Big Data asks us to see entire classes of people as sets of statistical probabilities, and the process of research becomes the iterative manipulation of inputs to achieve desired outputs. And the same issues emerge: Chomsky’s “destruction” of behaviorism in his 1959 takedown of B. F. Skinner’s Verbal Behavior is generally overstated, but certain passages in that work seem a relevant critique of the “end of theory”; for instance, where Chomsky criticizes Skinner’s notion of reference: “The assertion (115) that so far as the speaker is concerned, the relation of reference is ‘simply the probability that the speaker will emit a response of a given form in the presence of a stimulus having specified properties’ is surely incorrect if we take the words presence, stimulus, and probability in their literal sense.”

Of course, in the past 50 years we’ve seen this Chomsky-Skinner drama played out anew in linguistics. While Chomsky’s transformational grammar underpinned efforts at computer translation for many years, Google’s statistical approach, which sees language as nothing more than a set of probabilities (words are “known” to be the same in two different languages if they have the same probability of occurring in a context), is quickly outstripping the traditional methods. In fact, for a certain class of tasks it becomes increasingly obvious that correlation *is* enough. Google’s translation engine has little to no theory of language, yet adequately serves a person who needs a quick translation of a web page. And that somewhat atheoretical nature of the engine is in fact its strength — Google’s approach needs only a robust set of web pages in any language to generate the correlations needed to start translating.
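
To see what “correlation is enough” looks like in miniature, here is a toy sketch. It is nothing like Google’s actual system; it just shows the statistical intuition, using a three-sentence corpus and a simple association score I chose for illustration.

```python
# Toy "translation by correlation": align words across a tiny parallel corpus
# using co-occurrence statistics only -- no grammar, no dictionary.
# Purely illustrative; real systems are vastly more sophisticated.

from collections import Counter
from itertools import product

corpus = [
    ("the house is red", "la maison est rouge"),
    ("the house is big", "la maison est grande"),
    ("the car is red",   "la voiture est rouge"),
]

pair_counts, en_counts, fr_counts = Counter(), Counter(), Counter()
for en_sent, fr_sent in corpus:
    en_words, fr_words = set(en_sent.split()), set(fr_sent.split())
    en_counts.update(en_words)
    fr_counts.update(fr_words)
    pair_counts.update(product(en_words, fr_words))

def best_guess(en_word):
    """Pick the target word most strongly associated with the source word
    (a Dice-style score: highest when the two almost always appear together)."""
    score = lambda fr: 2 * pair_counts[(en_word, fr)] / (en_counts[en_word] + fr_counts[fr])
    return max(fr_counts, key=score)

print(best_guess("house"))  # -> maison
print(best_guess("red"))    # -> rouge
```

Feed a score like this enough aligned text and it starts to behave like a dictionary, with no theory of either language anywhere in the pipeline.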

So this debate is not really new, and there’s certainly a place for this sort of radical pragmatism. Chomsky’s focus on a system of mental rules that form a universal grammar may have enlarged human knowledge, but it’s turning out to be a really inefficient way to train computers to understand language. Gains in understanding underlying models are not always the shortest route to efficacy.

But such approaches come with a downside as well. Morozov deals with this extensively in his book To Save Everything, Click Here, and in his WSJ review of the book Big Data. He notes that Big Data is very useful in situations where you don’t care what the cause is (Amazon cares not a whit *why* people who buy German chocolate also buy cake pans, as long as they get to the checkout buying both), but where you do care about cause, things are a bit different:

Take obesity. It’s one thing for policy makers to attack the problem knowing that people who walk tend to be more fit. It’s quite another to investigate why so few people walk. A policy maker satisfied with correlations might tackle obesity by giving everyone a pedometer or a smartphone with an app to help them track physical activity—never mind that there is nowhere to walk, except for the mall and the highway. A policy maker concerned with causality might invest in pavements and public spaces that would make walking possible. Substituting the “why” with the “what” doesn’t just give us the same solutions faster—often it gives us different, potentially inferior solutions.

A hardline proponent of a Big Data approach might object to Morozov that you just need more nuanced and informed correlations. But assuming you had no theory about ultimate causes, how would you even conceive of the possibility? (This is similar to what Michael Feldstein was getting at in his piece about the inadequacy of Big Data for education.) A person who does not have a model of what is happening is unlikely to know where to look for inconsistencies. And Big Data is, by definition, big. Theory is your roadmap.

This is why, at the workshop on analytics at the conference, I insisted on the “grokability” of analytics-produced guidance for the people who would use it to help students. In a way it comes down to the empowerment of the practitioner (and of the student). If I’m told I have a 50% chance of dropping out based on my “rt-score” of 2145.7, that’s one thing. But the interpretation of what to *do* about that number should depend heavily on what the inputs into it were. Was it prior GPA that pumped that score so high, or socioeconomic status? The reason those variables are treated differently is that we have models and theories about socioeconomic status and GPA that help us understand their significance as predictors.
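
Here is a minimal sketch of the difference between a bare score and a grokable one. The weights, factor names, and student records are invented for illustration; this is not modeled on any real early-warning product.

```python
# Sketch of an interpretable risk report: instead of handing an adviser a bare
# number, show how much each input contributed to it. All weights and factors
# below are invented for illustration.

weights = {
    "prior_gpa_below_2.5": 0.9,
    "low_socioeconomic_status": 0.6,
    "missed_first_assignment": 1.4,
}

def risk_report(student):
    contributions = {factor: weights[factor] * value for factor, value in student.items()}
    total = sum(contributions.values())
    print(f"risk score: {total:.2f}")
    for factor, amount in sorted(contributions.items(), key=lambda kv: -kv[1]):
        if amount:
            print(f"  {factor}: +{amount:.2f}")

# Two students with similar scores may call for very different interventions.
risk_report({"prior_gpa_below_2.5": 1, "low_socioeconomic_status": 0, "missed_first_assignment": 1})
risk_report({"prior_gpa_below_2.5": 0, "low_socioeconomic_status": 1, "missed_first_assignment": 1})
```

The bare totals (2.30 and 2.00) look nearly interchangeable; the breakdowns point an adviser in two quite different directions.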

Ultimately, like so many in the field, I’m actually very excited about the promise of data (though I would argue that it is actually “small data” — data that can live in a single spreadsheet — that paired with local use has the greatest potential). Still, if we are to enter this world we have to understand the trade-offs we engage in. Most of the theory-bound could certainly use a better understanding of how powerful a tool statistics can be in overcoming our own theoretical predispositions. It’s useful to understand that theory is not the only tool in the toolbox. But it’s equally true that the new breed of data scientist needs to be far more acquainted with the theories and assumptions that animate the sets of data in front of them. At the very least, they have to understand what theory is good for, why it matters, and why it is not always sufficient to tweak inputs and outputs.

Rediscovering (Semi-)Social Bookmarking

I joined Pinboard, the new, ad-free, pay-once-get-it-forever social bookmarking service, a few months ago for an educational tech project I’m working on. I’m not new to social bookmarking — I was an early user of delicious, then a Diigo migrant, and ultimately a lapsed bookmarker, confused about why the whole thing hadn’t worked out.

I think I may no longer be confused. The thing is, I was doing bookmarking wrong. I was bookmarking articles I thought were stellar, carefully pruning my tags. I imagined strangers stumbling on my account, and being impressed by the well curated collection, like the man with the owl-eyed glasses in Gatsby’s library before he realizes the pages aren’t cut.

In other words, I thought social bookmarking was about the social element.

Now, with a Pinboard archive account that indexes the whole page text of whatever I bookmark and rock-solid API support, I’ve made social bookmarking about me again. And it’s wonderful. I no longer agonize about what to bookmark. If I read something — anything — on the web that I think I might like to remember at some point I click the toolbar Pinboard link and file it. I come up with some terms to index it, but don’t spend more than a couple seconds on them. The point of bookmarking is now to be a Memex, to turn those moments where I tell someone “I think there was an article about that I read a few months ago” into “Here’s a link to an article on that from a few months ago.” Or, more importantly, to call that article to hand when I need it for my own writing.
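
For the curious, the “click it and file it” habit also scripts nicely against Pinboard’s v1 API. The sketch below assumes the posts/add endpoint and parameters as documented at pinboard.in/api; the token, tags, and URL are placeholders.

```python
# "Bookmark it now, sort it out later" via Pinboard's v1 API (posts/add).
# The auth token, tags, and URL below are placeholders; see
# https://pinboard.in/api/ for the current parameter list.

import requests

PINBOARD_TOKEN = "username:XXXXXXXXXXXXXXXX"  # from the Pinboard settings page

def quick_bookmark(url, title, tags=""):
    resp = requests.get(
        "https://api.pinboard.in/v1/posts/add",
        params={
            "auth_token": PINBOARD_TOKEN,
            "url": url,
            "description": title,  # Pinboard calls the bookmark title "description"
            "tags": tags,          # a couple of seconds of indexing terms, no agonizing
            "format": "json",
        },
        timeout=10,
    )
    return resp.ok and resp.json().get("result_code") == "done"

# quick_bookmark("https://example.com/article", "An article I might want later", "ednet")
```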

There are a couple developments since early social bookmarking that make this approach possible. First of all, Twitter and Tumblr have largely satisfied the “I’m recommending these links to you” market. Rather than bookmark only notable articles, I use an IFTTT script that takes anything tagged “to:haplr” and posts it to Twitter and a Tumblr linklog, along with my comments. Feed-reading is also integrated. With Feedbin, any post I star flows automatically into my Pinboard bookmarks, creating the rich searchable archive that we once had with Google Reader, only this time hosted in a paid service that is less likely to pull the rug out from under the user. The ease of private bookmarking in Pinboard also changes the dynamic — allowing you to bookmark (and archive) material that you might not want to clutter your public bookmarks with.

But perhaps the biggest shift is seeing how unsuited Twitter, Tumblr, and other link-sharing mechanisms can be to certain forms of serious work – the number of times I have found myself paging through my tweetstream trying to find a link to an article I tweeted out that I now need to reference is embarrassing.

In any case, if you were once a bookmarker but abandoned the practice, try giving it another shot with a “bookmark it now and sort it out later” approach. Get an archive account, and start caching pages of what you read. Play around with the IFTTT options. I think you might be surprised to find that this abandoned child of the Web 2.0 revolution is actually what you’ve been yearning for these past couple of years.

A Plan for a $10K Degree: A Response

A new proposal is out from Third Way, authored by Anya Kamenetz. It makes an argument for a radical restructuring of higher education in pursuit of a radically cheaper degree. I plan to write a few blog posts on its proposals. This is the first.

There are many things to like about the plan.

I like the scope of the plan. It’s an ambitious plan, but it starts from the premise that we have a rich public educational infrastructure in the U.S. that needs to be reconfigured, not abandoned, dismantled, privatized, or routed around. For that reason alone I think the proposal is worth serious debate.

I like that it correctly diagnoses much of what ails education: it’s a system where competition has distorted our institutions’ priorities, resulting in competition in the wrong areas and a structure that does not work to accomplish our stated mission.

And Anya’s six “steps” are, I think, roughly correct:

  • Reduce and restructure personnel
  • End the perk wars
  • Focus on college completion
  • Scale up blended learning
  • Streamline offerings
  • Rethink college (system) architecture

So it’s a good pass at the issue. At the same time there are some issues at the detail level that require elaboration. Today I want to talk about three pieces of the plan — the $10K premise, the personnel restructuring, and the perk wars.

The $10,000 Degree

I’m not sure how we got to this $10,000 degree number. I went to college in 1987; at that time my four-year tuition was around ten to fifteen thousand dollars. If Wolfram Alpha is right, that would be $20,000-$30,000 in today’s dollars. And that isn’t counting the much more sizable state subsidy that we had at that time.

The $10,000 degree also doesn’t jibe with what we know about cost in other sectors. A half-decent high school will spend $10,000 per student per year on instruction in a flattened, no-frills model that looks much like Anya’s proposal. Even assuming a subsidy could halve the student side of that (a big assumption), we’re still left with $20,000 for four years.
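
Spelled out, using only the figures already cited here:

```python
# Back-of-the-envelope check using the figures above (all approximate).
k12_per_year = 10_000            # rough per-student instructional spend at a decent high school
sticker = k12_per_year * 4       # 40,000 for four years of a similar no-frills model
with_big_subsidy = sticker / 2   # 20,000 on the student side -- still double the $10K target
print(sticker, with_big_subsidy)
```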

As a final check, we can look at cost per graduate numbers as they currently stand, and see that they range from about $28,000 to $500,000. The “disruptive” school that Clayton Christensen wrote an entire book about, BYU-Idaho, has gotten cost per completion down to about $30,000. A policy that shoots for a result that is likely a couple of standard deviations out from the mean is a policy designed to fail.

I’d argue that if we are going to pick a number, it should be one grounded in data, not rhetoric. If you want to see what overly rhetorical stretch goals do to a social institution, look at what No Child Left Behind’s targets have done to K-12. A $20,000 or $25,000 degree is not as sexy as its Texas cousin, but it represents a difficult target that may be achievable, would largely solve the student debt problem, and would not create the sort of unprofitable schism that talk of $10,000 degrees leads to.

Reduce and Restructure Personnel

Here Anya breaks the traditional roles in a university into three: Academic Advisors/Mentors, Instructor/Instructional Technologists, and Professor/Instructional Designers. I applaud the rethinking of roles, and think these delineations are better than what we have currently, but I wonder to what extent they are sustainable. People I know all over the country are trying to hire instructional designers and instructional technologists right now. They are incredibly rare, and much more expensive than your average college professor. They also have profitable options in private industry not always available to the average history professor.

And of course finding people highly qualified in their academic discipline who have instructional design experience as well only gets more difficult (and hence, more expensive).

The problem here is that the narrative that schools are expensive because they are administration/staff heavy is in conflict with the narrative that we need more expertise in delivery. In companies that compete for instructional design bids, positions are far more specialized and role-differentiated than one finds at colleges. This is because expertise is rare and expensive, and must be shared across multiple projects.

Ending the Perk Wars

We should end the perk wars, agreed. Campuses are going to have to increasingly organize around the assumption that their students don’t live on campus, and develop communities that are less focused on “campus life” and more focused on “college life”. The attempts to lure richer students to campus with country club features have to stop.

Anya also suggests that extracurriculars should be defunded, however, and that is a social justice issue for me. Just as exiling art classes from grade school has resulted in art classes for rich kids and nothing for the poor, so exiling extracurriculars from state schools will result in a subpar, incomplete education for lower-income students. I learned much from working at the radio station at my college and working on the literary journal — much more than I did in most classes. Many students will tell you the same about the clubs they belonged to, and many faculty will tell you they had more impact as advisers to these clubs than in their classes.

More next week, and a note on bloat

Next week I’ll go through the rest of the plan (or at least the next third of it). Looking at the few points I’ve dealt with today, the one theme that strikes me is that bloat doesn’t work the way people think it does. As companies become more efficient, roles differentiate, and frontline staff end up being a somewhat smaller share of the whole.

The tendency is to call all non-frontline staff “bloat”, whether they are lab maintenance specialists, instructional technologists, or student financial aid experts. Programs are similar: extracurricular activities (the Geology club) are “bloat”, whereas Geology 101 is core, regardless of the relative impact of each of these on education.

This doesn’t happen in any other industry I’m aware of. We don’t look at a software company and declare that everyone who is not a programmer is “bloat”. Yet the truth is that many elements of interface design that are dealt with by programmers early on in a company’s history are moved to interface designers and human factors experts. Product features that were once the scope of the senior coder are moved into “management” areas, such as product leads. This is because while there are a select number of people that can be expert in many things in a five-person start up, you cannot build a company on them. To build a company you find experts in specific areas, and build the management structure that allows those experts to work together (we can debate on what that structure should look like, it can certainly be agile in nature, but it must be put in place).

All of this allows companies to deliver a better product at a reduced cost. My guess is that if education is truly going to get cheaper we will need to see more role differentiation not less, and start considering extracurricular activities in light of how they provide sometimes invisible support for the curriculum. Most of all we have to get beyond simplistic definitions of “bloat” and move towards a more nuanced understanding of a decades-long shift of instructional and advising expertise from faculty to staff.

More to come…

The Myth of the All-in-one

Beware the All-in-one.

Occasionally (well, OK, more than occasionally) I’m asked why we can’t just get a single educational tech application that would have everything our students could need — blogging, wikis, messaging, link-curation, etc.

The simple answer to that is that such a tool does exist: it’s called SharePoint, and it’s where content goes to die.

The more complex answer is that we are always balancing the compatibility of tools with one another against the compatibility of tools with the task at hand.

The compatibility of tools with each other tends to be the most visible aspect of compatibility. You have to remember whether you typed something up in Word or Google Docs, and what your username was on which account. There’s also a lot of cognitive load involved in deciding which tool to use and in learning new processes, and that stresses you out and wastes time better spent on doing stuff that matters.

But the hidden compatibility issue is whether the tools are appropriate to the task we have at hand. Case in point — I am a Markdown fan. I find that using Markdown to write documents keeps me focused on the document’s verbal flow instead of its look. I write better when I write in Markdown than I do when I write in Google Docs, and better in Google Docs than when I write in Word. For me, clarity of prose is inversely proportional to the number of icons on the editing ribbon.

Today, Alan Levine introduced me to the tool I am typing in right now — a lightweight piece of software called draftin. Draftin is a tool that is designed around the ways writers work and collaborate, rather than the way that coders think about office software. It uses Markdown, integrates with file sharing services, and sports a revise/merge feature that pulls the Microsoft Word “Merge Revisions” process into the age of cloud storage.

As I think about it, though, it’s also a great example of why the all-in-one dream is an empty one. If I was teaching a composition class, this tool would be a godsend, both in terms of the collaboration model (where students make suggested edits that are either accepted or rejected) and in the way Markdown refocuses student attention on the text. Part of the art of teaching (and part of the art of working) is in the calculus of how the benefits of the new tool stack up against the cognitive load the new tool imposes on the user.

We want more integration, absolutely. Better APIs, better protocols, more fluid sharing. Reduced lock-in, unbundled services, common authentication. These things will help. But ultimately cutting a liveable path between yet-another-tool syndrome and I-have-a-hammer-this-is-a-nail disease has been part of the human experience since the first human thought that chipped flint might outperform pointy stick. The search for the all-in-one is, at its heart, a longing for the end of history. And for most of us, that isn’t what we want at all.

Photo credit: flickr/clang boom steam

Numeracy, Motivated Cognition, and Networked Learning

If you think general education will save the world — that a first-year course in economics, for example, will make students better judges of economic policy — think again. The finding that knowledge in these areas cannot overcome identity barriers (liberal/conservative, rural/urban, etc.) is well established. But the most recent study on the subject makes it so depressingly clear that you may just want to curl up in a ball, pull the covers over your head, and call in sick this morning. It’s really that bad.

What the new study did was muck about with some data. There was a control condition that asked people to evaluate the effectiveness of a skin cream. In that condition they were presented with a chart like the following:

Face Cream Task

So did the face cream work? In general, people with a high level of numeracy (as determined by another test) got the answer right. In short, when you compute the positive effects vs. negative effects as a ratio (rather than being blinded by raw counts) the face cream actually does more harm than good in the above instance. (In an alternate control scenario, the cream actually works).
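
For readers without the chart in front of them, the trap works roughly like this; the counts below are invented for illustration, not the study’s actual figures.

```python
# The face-cream task, with made-up counts (not the study's actual data).
# The trap: the treatment row has bigger raw numbers, but the *ratio* of
# improved to worsened is what matters.

used_cream = {"improved": 220, "got_worse": 80}
no_cream   = {"improved": 105, "got_worse": 20}

def improvement_ratio(group):
    return group["improved"] / group["got_worse"]

print(improvement_ratio(used_cream))  # ~2.8 improved for every one who got worse
print(improvement_ratio(no_cream))    # ~5.3 -- the untreated group did better
# Raw counts favor the cream (220 > 105); the ratios say it did more harm than good.
```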

Now we throw identity into the mix — we make the question about gun control. Then we ask highly numerate conservatives and liberals to evaluate the same sort of chart, but with a twist — sometimes the data supports gun control, sometimes it argues against it.

I think you know where this is going, so I’ll make it short. The highly numerate individuals that were able to handle the face cream task near-perfectly botched the gun control task if-and-only-if the correct result contradicted their beliefs. Or, to put it more depressingly, increasing numeracy does not seem to help people much in this sort of situation. A more numerate society is, in fact, likely a more polarized society, with greater disagreement on what the truth is.

Here it’s gun control — but substitute nuclear power, military strikes, global warming, educational policy, etc., and you’re likely to see the same pattern. The underlying models this research taps are often tested through numerical scenarios, but they predict such identity-preserving behaviors in non-numerical scenarios as well.

So that whole education for democracy idea? That Dewey-eyed belief that a smarter population is going to make better decisions? It’s under threat here, to say the least.

What’s the solution? Well, the first thing to realize is that such a result seems to be primarily about time and effort. This sort of task is one of many where our initial intuitions will be wrong, and it is only the mental discipline we’ve mapped on top of those intuitions that saves us from their error. No matter how smart you are, you will work harder at dissecting things which argue against your beliefs than things which seem to confirm them. You could have no beliefs on anything, I suppose, but that would defeat the whole purpose of looking for the truth in the first place (and make you a pretty horrible person to boot). And it wouldn’t solve the root problem — you don’t have time to look into everything, even if you wanted to.

So what’s the upshot? The authors contend that

In a deliberative environment protected from the entanglement of cultural meanings and policy-relevant facts, moreover, there is little reason to assume that ordinary citizens will be unable to make an intelligent contribution to public policymaking. The amount of decision-relevant science that individuals reliably make use of in their everyday lives far exceeds what any of them (even scientists, particularly when acting outside of the domain of their particular specialty) are capable of understanding on an expert level. They are able to accomplish this feat because they are experts at something else: identifying who knows what about what (Keil 2010), a form of rational processing of information that features consulting others whose basic outlooks individuals share and whose knowledge and insights they can therefore reliably gauge (Kahan, Braman, Cohen, Gastil & Slovic 2010).

Perhaps I’m seeing this through my own world filters, but it seems to me an argument for networked learning. The authors point out that for most decisions we have to make, we are going to have to rely on the opinions and analyses of others; thus the way we determine and make use of others’ expertise will determine our success in moving beyond bias. In particular, we have to navigate a difficult problem — we need the opinions of people who share our values and interests (we are rightly suspicious of oil company research on the effects of oil on groundwater purity). But develop a network based solely around values, and you start to reach a state of what Julian Sanchez has referred to as a sort of cultural epistemic closure:

Reality is defined by a multimedia array of interconnected and cross promoting conservative blogs, radio programs, magazines, and of course, Fox News. Whatever conflicts with that reality can be dismissed out of hand because it comes from the liberal media, and is therefore ipso facto not to be trusted. (How do you know they’re liberal? Well, they disagree with the conservative media!). This epistemic closure can be a source of solidarity and energy, but it also renders the conservative media ecosystem fragile.

It’s easy to har-har about the Fox News set, but we see this element in smaller ways in areas more familiar to the readership of this blog — the anti-testing set that believes the tests showing that testing does not work is one example I noticed the other day, but you’re free to generate your own examples.

I belong to many of these communities, and help perpetuate their existence. I’m not claiming some sort of sacred knowledge here. But what the networked learning advocate knows that others may not is that the only real hope of escaping bias is not more mental clock cycles, but participation in better communities that allow, and even encourage, the free flow of well-argued ideas while avoiding the trap of knee-jerk centrism: communities which allow people to at least temporarily disentangle these questions from issues of identity.

In short, if you are going to read reports about gun control correctly, you may need to understand statistics somewhat better, but you also need to build yourself a better network. And that is something we are not spending nearly enough time helping our students do.