Sometimes Failure Is Just Failure

Via Audrey Watters, there’s a great article out today about how startup culture, with its preference for “failing fast” over doing research, is killing innovation. It’s worse than that, though, in MOOC world, where startup culture is wasting the time and good intentions of millions.

Here’s an example of an actual question that my boss found in a MOOC she was taking:

[Figure: Malthusian population model]

This diagram illustrates the important ideas behind the Malthusian model of population. The green curve represents the total population, and the beige line represents the total amount of food available.

If the Malthusian theory is accurate, why do the curves in the figure not follow the paths shown?

  1. All of the other choices are correct
  2. After the point of intersection, the curved line should not exceed the straight line.
  3. The curved path will actually become a constant
  4. The flat path will increase exponentially
  5. Before the point of intersection, the curved line actually matches the straight line

No matter what your friends may tell you, a well-constructed multiple choice question can be a wonderful learning opportunity. That’s not the case here. This question is a hot mess. The question, as phrased, asks why the curves on the graph do not follow the paths of the curves on the graph.

So, if you’re like most people, you sit and parse this out. I’m a former Literature and Linguistics major who eats complex constructions for breakfast, but it took me two minutes just to parse out what the question means. None of that effort, by the way, increased my knowledge of either the English language or Malthusian economics. After parsing the question, it took another minute or two to walk through the options, which are oddly structured.

There are dozens of ways in which this question is wrong.

  • Its “All of the other choices are correct” option is bad. Experts in multiple choice testing will tell you to avoid this option at all costs, as it causes processing issues and tends to reduce the validity of the test.
  • Its answers should be parallel in structure. They’re not.
  • It includes incomprehensible and meaningless distractors just to get the answer count up to five, when best practice suggests three plausible distractors beat five options padded out with nonsense.
  • It references “curves”, but most students will only see one curve and one straight line – seeing these both as curves requires a vocabulary the student doesn’t necessarily have.
  • Its question is not comprehensible without the answers. The answer any intelligent person would formulate reading just the question “Why do the curves not follow the paths shown?” is “Because we run out of food.” But that is not the question the answer choices actually address.

The larger issue is that it’s unclear what this question actually tests. Presumably it’s meant to check whether students understand the fundamental insight of Malthus, but if so, it does that sloppily. It looks like an application question, but it is really just a confusingly worded recall question.

I didn’t come up with these critiques on my own. A group of people at Brigham Young University has spent decades researching how to create effective multiple choice questions. They’ve boiled their research down to a four-page checklist you can find here.

You can use that to reformulate the question this way:

[Figure: Malthusian population model]

This diagram illustrates projected trends in population and food production. The green curve represents projected total population, and the beige line represents projected food production.

Malthusian theory says this graph is an incorrect prediction of the future. If Malthusian theory is accurate, what will actually happen at point T?

  1. The population line will change, staying under the food line.
  2. The food production line will change, bending upward to keep up with population.
  3. Both lines will change, the food production line bending up, and the population line bending down.

I’m not sure my version completely captures the objective or disciplinary understanding here (I’m not an economist). But that’s a five-minute rewrite based on the BYU checklist that will increase the question’s effectiveness both as a learning tool and as an assessment.

Now Coursera or others might tell you the beauty of Big Data is that questions like these will be flagged by algorithms and improved over time. This is part of being proud of “failing fast”.

But let’s do the math on this. Say this course has 10,000 students who each spend four minutes on a question that has no validity as an assessment and teaches them nothing but how to read absolutely tortured English. That’s 40,000 student-minutes, or roughly 667 student-hours, wasted on a question because the producers of this course couldn’t be bothered to do a five-minute rewrite. That’s 667 hours that students who desperately want to learn are spending not learning. It’s 667 hours they could be spending with another product, or with their family, or volunteering in their community. Or just sitting out on the back patio enjoying the weather.

Multiply that by the number of questions like this in a typical MOOC product. Then take that number and multiply it by the number of MOOCs. You start to see what “failing fast” and “solving problems through Big Data” means in human cost. It’s about cavalierly taking millions of hours away from customers because someone (either Coursera or the university partner) can’t be bothered to pay for an instructional designer.
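To make that scale concrete, here’s a back-of-envelope sketch in Python. The 10,000-student, four-minute figures come from the example above; the per-MOOC and platform-wide counts are illustrative guesses, not data.

```python
# Back-of-envelope: student-hours lost to one badly written question,
# then scaled up. Every count except the 10,000-student, four-minute
# example above is an illustrative guess, not a measurement.

def wasted_hours(students: int, minutes_per_question: float) -> float:
    """Total student-hours spent on a single question."""
    return students * minutes_per_question / 60

per_question = wasted_hours(students=10_000, minutes_per_question=4)
print(f"One bad question: {per_question:,.0f} student-hours")  # ~667

bad_questions_per_mooc = 20  # hypothetical
number_of_moocs = 500        # hypothetical

total = per_question * bad_questions_per_mooc * number_of_moocs
print(f"Scaled up: {total:,.0f} student-hours")  # ~6.7 million
```

Even with conservative guesses, the waste lands in the millions of hours.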

I get why I see questions like this in local university classes. We don’t have the scale to put an instructional designer in every class. But if the large-scale production of course materials is supposed to solve anything, it’s exactly this problem. The whole point of producing course materials at scale is that we can finally afford to do this right, and tap into the research and professional knowledge that can make these things better. “Fail fast”, when used as an excuse for shoddy work, makes a mockery of the benefits of scale, and treats student time as worthless. And all the Big Data in the world is not going to make that better.

Numeracy, Motivated Cognition, and Networked Learning

If you think general education will save the world — that a first-year course in economics, for example, will make students better judges of economic policy — think again. The finding that knowledge in these areas cannot overcome identity barriers (liberal/conservative, rural/urban, etc.) is well established. But the most recent study on the subject makes it so depressingly clear that you may just want to curl up in a ball, pull the covers over your head, and call in sick this morning. It’s really that bad.

What the new study did was muck about with some data. There was a control condition that asked people to evaluate the effectiveness of a skin cream. In that condition they were presented with a chart like the following:

[Figure: Face Cream Task results table]

So did the face cream work? In general, people with a high level of numeracy (as determined by another test) got the answer right. The trap is that the groups are different sizes, so the raw count of people who improved can be higher in the cream group even while the proportion who improved is lower. When you compute positive versus negative effects as a ratio, rather than being blinded by raw counts, the face cream actually does more harm than good in the above instance. (In an alternate control scenario, the cream actually works.)
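If it helps, here’s a minimal sketch of the ratio comparison the task requires. The counts are made up to have the same shape as the study’s table; they are not its actual numbers.

```python
# Covariance-detection trap: the treatment group is larger, so its raw
# count of improvers is bigger even though its improvement *rate* is worse.
# Counts below are made up for illustration, not the study's data.

cream    = {"improved": 220, "worsened": 80}   # hypothetical counts
no_cream = {"improved": 100, "worsened": 20}   # hypothetical counts

def improved_per_worsened(group: dict) -> float:
    return group["improved"] / group["worsened"]

print(improved_per_worsened(cream))     # 2.75 improved per one worsened
print(improved_per_worsened(no_cream))  # 5.0 improved per one worsened

# The no-cream group improved at a higher rate, so the cream did more harm
# than good, even though 220 > 100 raw improvers lures you the other way.
```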

Now we throw identity into the mix — we make the question about gun control. Then we ask highly numerate conservatives and liberals to evaluate the same sort of chart, but with a twist — sometimes the data supports gun control, sometimes it argues against it:

[Figure: gun control versions of the task, all condition types]

I think you know where this is going, so I’ll make it short. The highly numerate individuals who handled the face cream task near-perfectly botched the gun control task if, and only if, the correct result contradicted their beliefs. Or, to put it more depressingly, increasing numeracy does not seem to help people much in this sort of situation. A more numerate society is, in fact, likely a more polarized society, with greater disagreement on what the truth is.

So here it’s gun control, but substitute nuclear power, military strikes, global warming, educational policy, etc., and you’re likely to see the same pattern. The underlying models this research taps are usually tested through numerical scenarios, but they predict the same identity-protective behavior in non-numerical scenarios as well.

So that whole education for democracy idea? That Dewey-eyed belief that a smarter population is going to make better decisions? It’s under threat here, to say the least.

What’s the solution? Well, the first thing to realize is that such a result seems to be primarily about time and effort. This sort of task is one of many where our initial intuitions will be wrong, and it is only the mental discipline we’ve mapped on top of those intuitions that saves us from their error. No matter how smart you are, you will work harder at dissecting things which argue against your beliefs than things which seem to confirm them. You could have no beliefs on anything, I suppose, but that would defeat the whole purpose of looking for the truth in the first place (and make you a pretty horrible person to boot). And it wouldn’t solve the root problem — you don’t have time to look into everything, even if you wanted to.

So what’s the upshot? The authors contend that

In a deliberative environment protected from the entanglement of cultural meanings and policy-relevant facts, moreover, there is little reason to assume that ordinary citizens will be unable to make an intelligent contribution to public policymaking. The amount of decision-relevant science that individuals reliably make use of in their everyday lives far exceeds what any of them (even scientists, particularly when acting outside of the domain of their particular specialty) are capable of understanding on an expert level. They are able to accomplish this feat because they are experts at something else: identifying who knows what about what (Keil 2010), a form of rational processing of information that features consulting others whose basic outlooks individuals share and whose knowledge and insights they can therefore reliably gauge (Kahan, Braman, Cohen, Gastil & Slovic 2010).

Perhaps I’m seeing this through my own world filters, but it seems to me an argument for networked learning. The authors point out that for most decisions we have to make, we are going to have to rely on the opinions and analyses of others; thus the way we determine and make use of others’ expertise will determine our success in moving beyond bias. In particular, we have to navigate a difficult problem: we need the opinions of people who share our values and interests (we are rightly suspicious of oil company research on the effects of oil on groundwater purity). But build a network based solely around values, and you start to reach a state of what Julian Sanchez has referred to as a sort of cultural epistemic closure:

Reality is defined by a multimedia array of interconnected and cross promoting conservative blogs, radio programs, magazines, and of course, Fox News. Whatever conflicts with that reality can be dismissed out of hand because it comes from the liberal media, and is therefore ipso facto not to be trusted. (How do you know they’re liberal? Well, they disagree with the conservative media!). This epistemic closure can be a source of solidarity and energy, but it also renders the conservative media ecosystem fragile.

It’s easy to har-har about the Fox News set, but we see this element in smaller ways in areas more familiar to the readership of this blog. The anti-testing set that believes the tests when they show testing doesn’t work is an example I noticed the other day, but you’re free to generate your own.

I belong to many of these communities, and help perpetuate their existence. I’m not claiming some sort of sacred knowledge here. But what the networked learning advocate knows, and others may not, is that the only real hope of escaping bias is not more mental clock cycles, but participation in better communities that allow, and even encourage, the free flow of well-argued ideas while avoiding the trap of knee-jerk centrism. Communities that let people at least temporarily disentangle these questions from issues of identity.

In short, if you are going to read reports about gun control correctly, you may need to understand statistics somewhat better, but you also need to build yourself a better network. And that is something we are not spending nearly enough time helping our students do.