From A. N. Whitehead’s An Introduction to Mathematics, a brilliant early reflection on what we now see as a System 1/System 2 problem: “It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.”

A Herd Immunity to Nonsense

Mark Pagel on the internet and our cultural evolution:

A tiny number of ideas can go a long way, as we’ve seen. And the Internet makes that more and more likely. What’s happening is that we might, in fact, be at a time in our history where we’re being domesticated by these great big societal things, such as Facebook and the Internet. We’re being domesticated by them, because fewer and fewer and fewer of us have to be innovators to get by. And so, in the cold calculus of evolution by natural selection, at no greater time in history than ever before, copiers are probably doing better than innovators. Because innovation is extraordinarily hard. My worry is that we could be moving in that direction, towards becoming more and more sort of docile copiers.

If you go to that link above, which is something you should do right now, you’ll see evolutionary biologist Mark Pagel sorting through some free-ranging and often contradictory ideas on how the internet is shaping our evolution — or, to be more precise, how it has taken the 200,000-year trend toward copying over innovation and thrown it, perhaps, into hyperdrive. If you read one thing today, read Pagel. It’s wonderfully free of polemic, and fantastically interesting. It focuses on the copier vs. innovator divide, and explores whether the incentives to innovate are so low in a hyperconnected world that we’re in deep trouble.

I’d like to focus on something smaller than that — the “docile” part of Pagel’s equation. If you put together Pagel’s insight with Dan Kahneman’s insight that the true value of teaching critical thinking is not that it keeps you out of trouble, but that it lets you keep others out of trouble, I think you have an interesting argument for the importance of critical consumption to the survival of our species in a connected world.

What do I mean? In a system where people are purely docile copiers, nonsense spreads as quickly as insight. If you want your system to truly float valuable ideas to the top and let the nonsense sink, people have to do more than share — they have to engage with material, annotate it, note reservations, and in most cases simply share nonsense at a considerably lower rate than insight. They have to be able to spot cognitive bias and flaws in reasoning in the three minutes between when the tweet comes in and when they decide to retweet it (or not). That requires, as Kahneman points out, not so much a set of enlightened creators or leaders as a culture of skilled critics.

Again, as I noted in my last post, this is a different way of looking at education. Instead of seeing it only in terms of its impact on a student’s ability to do specific work, we look at it more the way we look at vaccination: it’s not only about the individual, but about developing a herd immunity to nonsense, getting our collective critical capacity to the point where the dumb ideas spread less widely than the smart ones.

To do that, what do you need? A background in quantitative reasoning and critical thinking. A recognition of common biases. Critical reading skills, and a good intuition about authoritative sources. You need, essentially, a broadly rethought liberal arts education…
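To make the vaccination analogy a little more concrete, here is a toy branching-process sketch — entirely my own construction, with made-up numbers, not anything from Pagel or Kahneman. If each reader passes a claim along to a handful of contacts with some probability, the claim’s fate hinges on whether its reproduction number (share rate times contacts) sits above or below 1. A population of skilled critics is one that pushes the share rate for nonsense below that threshold.

```python
import random

def cascade_size(share_prob: float, contacts: int = 5, seed: int = 0,
                 cap: int = 10_000) -> int:
    """Total people reached by one post in a simple branching process."""
    rng = random.Random(seed)
    frontier, reached = 1, 1
    while frontier and reached < cap:
        # Each person on the frontier exposes `contacts` others, who each
        # re-share with probability `share_prob`.
        new = sum(1 for _ in range(frontier * contacts)
                  if rng.random() < share_prob)
        reached += new
        frontier = new
    return reached

# R0 = share_prob * contacts: below 1 a claim fizzles, above 1 it can cascade.
print("nonsense, share rate 0.15 (R0 = 0.75):", cascade_size(0.15))
print("insight,  share rate 0.30 (R0 = 1.50):", cascade_size(0.30))
```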

Cognitive Bias and Education as a Public Good

A strange but true exhortation from Dan Kahneman, the guy who, with Amos Tversky, basically invented the study of cognitive bias. After forty years of looking into the weird world of bias, he says the only effective way to get around your own biases is to create a society of people skilled enough to correct you:

From the end of Thinking, Fast and Slow:

What can be done about biases? How can we improve judgments and decisions, both our own and those of the institutions that we serve and that serve us? The short answer is that little can be achieved without a considerable investment of effort. As I know from experience, System 1 is not readily educable. Except for some effects that I attribute mostly to age, my intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy as it was before I made a study of these issues. I have improved only in my ability to recognize situations in which errors are likely: “This number will be an anchor…,” “The decision could change if the problem is reframed…” And I have made much more progress in recognizing the errors of others than my own.

The way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield, slow down, and ask for reinforcement from System 2. This is how you will proceed when you next encounter the Müller-Lyer illusion. When you see lines with fins pointing in different directions, you will recognize the situation as one in which you should not trust your impressions of length. Unfortunately, this sensible procedure is least likely to be applied when it is needed most. We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available, and cognitive illusions are generally more difficult to recognize than perceptual illusions. The voice of reason may be much fainter than the loud and clear voice of an erroneous intuition, and questioning your intuitions is unpleasant when you face the stress of a big decision. More doubt is the last thing you want when you are in trouble. The upshot is that it is much easier to identify a minefield when you observe others wandering into it than when you are about to do so. Observers are less cognitively busy and more open to information than actors. That was my reason for writing a book that is oriented to critics and gossipers rather than to decision makers.

Emphasis mine.

I think this is a pretty interesting way of thinking about education as a public good. Think of training in critical thinking or quantitative reasoning like CPR training — the person you are most likely to help with it is not yourself but somebody else. The most important role may be that of the informed observer, because, as Kahneman observes, what keeps us on track best is not our own abilities, but a culture of critical thought.

Semantic Mapping vs. Pictorial Cues

From A Theory-Based Meta-Analysis of Research on Instruction by RJ Marzano:

The next two techniques displayed in Table 7.2 employed the information processing function of idea representation.  Techniques that provided students with metacognitive strategies for using visual memory had an effect size of 1.04, indicating a percentile gain of 35 points.  Presumably, these strategies help students represent information they are reading in nonlinguistic form…  From these findings, one might infer that idea representation is a key aspect of the reading process…

[R]eading studies [that] addressed techniques that attempt to enhance the idea representation information processing function during reading using pictorial aids…were considered as a group in themselves (as opposed to grouping them with the idea representation techniques in Table 7.2) because they did not employ the metacognitive system. Rather, they were considered manipulations of the environment designed to stimulate idea presentation in students. These studies had an average effect size of .46 (n=16; SD=.20), indicating a percentile gain of 28 points. Table 7.4 displays the differential effects of techniques within this category.

I may be misreading this, but what it seems to say is that interventions that present pictures to students as cues while they read underperform, by a lot, interventions that teach students to construct their own pictorial representations (0.46 is a fairly average effect size, seen in numerous interventions, whereas an effect size of 1.0 or above is fairly rare).
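For readers (like me) who have to look up what those numbers mean each time, here is the standard back-of-the-envelope conversion from effect size to percentile gain, assuming roughly normal score distributions. This is my own sanity check, not Marzano’s calculation.

```python
from statistics import NormalDist

def percentile_gain(effect_size: float) -> float:
    """Where the average treated student lands in the control distribution, minus 50."""
    return NormalDist().cdf(effect_size) * 100 - 50

print(round(percentile_gain(1.04)))  # ~35, matching the figure quoted above
```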

The key seems to be that information organized into pictorial form for the students allows them to disengage from metacognitive strategies, whereas having students organize their ideas into pictorial form themselves engages metacognitive processing. These findings are in line with Hattie’s findings on audio-visual aids and the like.

But I’ve only just skimmed this at this point, and the focus here is not higher education, so one must tread carefully. I’ll come back to this tomorrow, I guess.

This all relates to a crazy idea for the Quantitative Literacy course next semester…

Stanovich on Conflict and Critical Thinking

Well, actually the Hitchcock review of Stanovich:

What types of people succeed in overriding interactional intelligence in conflict situations? As one might expect, subjects with greater cognitive ability (as measured by SAT Total scores) were more likely to do so. But so were those with the dispositions characteristic of an ideal critical thinker: even after controlling for differences in cognitive ability, reasoning performance correlated with degree of open-mindedness and epistemic flexibility (cultivating reflectiveness rather than impulsivity, seeking and processing information that disconfirms one’s belief, being willing to change one’s beliefs in the face of contradictory evidence). Further, these dispositions tended to cut across different domains.

For those unfamiliar with Stanovich, his model of the mind rewrites the typical intuition/logic model as an intuitive mind/algorithmic mind/reflective mind model. The main implication is that intelligence is not enough — in practice, many people who are highly intelligent have dispositions that shirk the hard work of interrogating intuitions, and use rationality only to confirm gut instinct. For Stanovich, intelligence and rationality are related but separable terms.

I like the model, though I’m still slogging through his work, and probably don’t grasp the details fully (the above is surely a simplification, and possibly wrong). What it gets at, by empirically demonstrating the gap between cognitive power and the ability and drive to interrogate intuitions, is a version of “critical thinking” that might actually mean something useful…

Moonwalking with Einstein

Just finished Joshua Foer’s book Moonwalking with Einstein, one of the most amusing books I’ve read in a while. I’d highly recommend it to anyone, based on the style of his writing alone, which strikes me as Jonah Lehrer as written by Sarah Vowell (of The Wordy Shipmates period, not Assassination Vacation). But that probably doesn’t really capture it either.

You know what — you just have to read it. 

In any case, a lot of it is about memory tricks and the like, but the thread that runs through it to the end is the power of deliberate practice (during his training he becomes a test subject for Ericsson himself). We tend not to get better at things after achieving base-level competency, because we know enough to get by. Unstructured experience, as Ericsson has shown again and again, can be a really poor teacher. Conversely, when we make a commitment to approach our performance more critically and strategically, even small amounts of effort can pay off big. Working smart at learning something will tend to beat working long.

All good lessons, I think. 

Two Educational Contexts

From Dan Kahneman:

True intuitive expertise is learned from prolonged experience with good feedback on mistakes. You are probably an expert in guessing your spouse’s mood from one word on the telephone; chess players find a strong move in a single glance at a complex position; and true legends of instant diagnoses are common among physicians. To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals’ experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about.

I’ve been thinking about this a lot lately, in the context of designing authentic learning. The examples above represent two professional contexts, but they represent two distinct educational contexts as well. 

For Type A endeavors, those high-feedback, clearly cued tasks such as anesthesiology, we learn best by making experiences as authentic as possible. You want to bootstrap students into reality as quickly as you can.

In Type B endeavors, reality is broken. The longer a professional is in the field, the worse his or her intuitions may be. In these cases you want to create an artificial reality that fixes the feedback problem, that exaggerates cues, that makes irregularities more apparent.

I think we know this, but we forget it quite a lot, choosing instead to be religious about authenticity or dogmatic about artificial feedback.

I also think that if you accept that there are more Type B situations than Type A situations, then we may have far *less* formal education than we need, but that’s another post.
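If it helps, here is a toy simulation of the Type A/Type B point — my own construction, not Kahneman’s. A learner guesses a binary outcome from a cue and gets immediate feedback. When the environment is regular, the cue actually predicts the outcome and accuracy climbs with experience; when it isn’t, no amount of experience helps.

```python
import random

def late_accuracy(cue_validity: float, trials: int = 2000, seed: int = 0) -> float:
    """Accuracy over the final 500 trials for a simple frequency learner."""
    rng = random.Random(seed)
    counts = {0: [1, 1], 1: [1, 1]}        # counts[cue][outcome], with a +1 prior
    correct = 0
    for t in range(trials):
        cue = rng.randint(0, 1)
        # In a regular (Type A) environment, the cue really does predict the outcome.
        outcome = cue if rng.random() < cue_validity else 1 - cue
        guess = 0 if counts[cue][0] >= counts[cue][1] else 1
        if t >= trials - 500 and guess == outcome:
            correct += 1
        counts[cue][outcome] += 1          # feedback: the learner sees the outcome
    return correct / 500

print("Type A (cue validity 0.9):", late_accuracy(0.9))  # climbs toward ~0.9
print("Type B (cue validity 0.5):", late_accuracy(0.5))  # stuck near chance
```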

Pinker on Statistical Literacy

Better Angels, indeed:

In a question and answer session on Freakonomics Radio, Pinker was asked what people can do to help society “resist the urge to think things are worse and worse and the world is less and less safe when this is manifestly not the case”.

Pinker’s answer was interesting: “One necessity is greater statistical literacy among the population and especially among journalists.

“People need to think in terms of proportions rather than salient examples, to appreciate orders of magnitude, to distinguish random blips from systematic trends, and to be aware of — and thereby discount — their own cognitive biases.”

Pinker is right, of course. Higher education is relatively meaningless if its graduates cannot reply to the quoting of a raw statistic with the simple question “Out of what?”
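A throwaway illustration of the reflex, with numbers I made up on the spot: the town with the scarier raw count is the one with the lower rate.

```python
incidents  = {"Town A": 120, "Town B": 30}          # hypothetical raw counts
population = {"Town A": 600_000, "Town B": 60_000}  # the "out of what"

for town in incidents:
    rate = incidents[town] / population[town] * 100_000
    print(f"{town}: {incidents[town]} incidents, {rate:.0f} per 100,000")
# Town A: 120 incidents, 20 per 100,000
# Town B: 30 incidents, 50 per 100,000
```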

To not deal with such deficits in understanding is to risk many of the very real and profound gains we have made in the past couple hundred years. The peculiar nature of a democracy is that it’s no use graduating brilliant climatologists if the row of business majors behind them at graduation believes the early snow this year shows it’s all bunk anyway. We’re only as good as the voting public lets us be.

Why teens are wired for risk

Kind of important for higher education to think about, no? 

Scientists typically refer to “the teenage brain” in 13- to 17-year-olds, but that doesn’t mean that college students are totally “adults” yet. In fact, research from the National Institutes of Health has shown, the prefrontal cortex, a region of the brain associated with inhibition of risky behavior, doesn’t get fully developed until age 25. The connections between the prefrontal cortex and other areas of the brain are also developing in teenagers. And a number of deep structures in the brain are influenced by changes in hormones, which may lead to heightened emotions.

Smart Use of Cognitive Disfluency Goes Mainstream

From the NYT, today:

Another common misconception about how we learn holds that if information feels easy to absorb, we’ve learned it well. In fact, the opposite is true. When we work hard to understand information, we recall it better; the extra effort signals the brain that this knowledge is worth keeping. This phenomenon, known as cognitive disfluency, promotes learning so effectively that psychologists have devised all manner of “desirable difficulties” to introduce into the learning process: for example, sprinkling a passage with punctuation mistakes, deliberately leaving out letters, shrinking font size until it’s tiny or wiggling a document while it’s being copied so that words come out blurry.

As for the larger article: in general, I hate this trend of dressing up educational research findings we’ve known for decades as new “Brain Science”.

“Spaced repetition”? Really, we’re just learning this now? The truth is that Bloom, Vygotsky, Bruner, and others got here years ago. And frankly, what is this “new” brain science all the papers keep talking about? Bruner didn’t study the brain? Was the cognitive linguistics I learned a decade and a half ago just applied art?

That said, I think much of what this media meme has called attention to has been good. It reminds me, at least, that we know a crapload about these things in isolation, but most people don’t realize how much we know, because the smart integration of these findings into a usable framework has often eluded us. So if some people want to take the public’s fascination with MRIs and use that to get people to think of education as a legitimate research pursuit and rehash some old findings, I guess I’m fine with that, at least for the moment.