Jeff Selingo has a helpful summary of some recent research that attempts to add some nuance to our “grit”-crazy times:
“Things like grit and 10,000 hours are mindsets that are very misleading because they are consequences not causes — they are lagging indicators of performance,” said Todd Rose, who is the author of The End of Average, a book that illustrates how averages are flawed in understanding human achievement.
Since September, Ogas and Rose have interviewed three dozen people who have achieved success in their fields, from sommeliers to poker players. “When we spent the time to understand how they got better, each master had his or her own unique path,” Ogas said.
Now, these researchers are interviewing masters, and relying on self-constructed narratives for their research. That seems prone to some distortion.
But it reminds me that grit studies suffer from a similar distortion. The typical grit-speriment involves giving people a task to complete and seeing how well they complete it. By definition, the biggest predictor of who successfully completes the task is “grit”, which is largely a measure of whether people stick with tasks. It’s tautological, really.
Don’t believe me? Here are questions #2 and #3 on Duckworth’s Grit Scale. Both are taken as negative indicators of grit (i.e. the more you agree with them, the less grit you have).
2. New ideas and projects sometimes distract me from previous ones.*
- Very much like me
- Mostly like me
- Somewhat like me
- Not much like me
- Not like me at all
3. My interests change from year to year.*
- Very much like me
- Mostly like me
- Somewhat like me
- Not much like me
- Not like me at all
If new projects distract you, or your interests evolve over time, then you have no grit. And scarily, that’s something we’re supposed to fix.
In contrast, here’s a question that is taken as a positive indicator of grit; the more you identify with it, the more grit you have:
9. I finish whatever I begin.
- Very much like me
- Mostly like me
- Somewhat like me
- Not much like me
- Not like me at all
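If it helps to see how this cashes out numerically, here’s a minimal sketch in Python of the scoring as I understand it. This is my assumption of the standard 1–5 scoring with reverse coding for the negatively keyed items; it isn’t taken from Selingo’s piece or from the scale documentation itself.

```python
# Minimal sketch (assumed scoring, not quoted from the post): positively keyed
# items score "Very much like me" as 5; negatively keyed items (like #2 and #3
# above) reverse-code it to 1; the grit score is the average across items.

SCALE = ["Very much like me", "Mostly like me", "Somewhat like me",
         "Not much like me", "Not like me at all"]

def item_score(response, negatively_keyed):
    points = 5 - SCALE.index(response)            # maps responses to 5 .. 1
    return 6 - points if negatively_keyed else points

def grit_score(responses):
    """responses: list of (response_text, negatively_keyed) pairs."""
    return sum(item_score(r, neg) for r, neg in responses) / len(responses)

# Someone whose interests shift and who chases new projects scores low,
# even if they finish the things that still matter to them.
print(grit_score([
    ("Very much like me", True),    # item 2: new projects distract me
    ("Very much like me", True),    # item 3: my interests change
    ("Mostly like me", False),      # item 9: I finish whatever I begin
]))  # -> 2.0 on a 1-5 scale
```

In other words, on this scale a change of interests costs you points no matter whether the change was a good decision.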
Now, I know that Duckworth’s research is more nuanced than this. But here’s what I see in a lot of research around grit these days:
- Questionnaire: Do you finish things you start, even if they suck?
- [Put student in sucky experience]
- Result: Students that say they stick with sucky things stick with sucky things!
Slightly better than this (but not by much) is the intervention approach:
- Questionnaire: Do you finish things you start, even if they suck?
- Intervention: Did you know that only bad and unsuccessful people quit things?
- [Put student in sucky experience]
- Result: Students told quitting things is a sign of a weak personality quit less!
I’m surprised people don’t comment on this more. Traditional psychological experiments tend to be about the completion of contrived and inflexible tasks, because you want to measure the same dependent variable for everyone. So, for example, we can look at the fact that people with more grit stuck with boot camp at higher rates. But given that military life might not be for everyone, surely some of the people who dropped out made better decisions than some of the people who stayed in. Since the dependent variable isn’t “success”, but rather “not changing goals based on new experience”, we find that people in our goal-adherent group don’t change goals much. (I’m shocked!)
The whole thing really does seem like one big tautology, an effect of how experiments and institutions are designed instead of any profound insight into human nature. (I wonder if grit is coming up on its own Verbal Behavior moment.)
But let’s move on.
Finding the Unique Path
What the newer researchers found is that most people who mastered something did not get there by a straight path. In effect, a very normal path to success was quitting an approach that was failing for them.
The example that spoke to me the most was this:
Take the wine connoisseur who spent hours studying for the test to become a master sommelier without success. “Then he realized he was able to recognize wines through his facial reactions when he tasted them. When using this method, he aced the test and spent a fraction of the time studying,” Ogas said.
What we’re talking about here is learning, of course. And again, this is not learning styles we are discussing. What this person realized was that the explanations he had been banging his head against, about how to discern wine, weren’t working for him. He needed a different route, a different way into the understanding.
And so instead of persevering, he switched tactics. Is everyone going to be able to use his path to understanding? Probably not. People are different. But some might.
I can’t help but see this through the lens of Choral Explanations. In that article we noticed that the way proficient programmers came to understand things on help sites was *not* by banging their heads against the best explanation, but by scanning a number of alternate explanations until something “clicked”.
This differs from the current “gritty textbooks” approach to teaching. In this approach we give the students what we feel is the ONE BEST EXPLANATION of something in a textbook, and have them bang their head against it. If they don’t get it, well, they just have to try harder.
And at the end of forcing students to learn the ONE RIGHT THING the ONE RIGHT WAY, we discover an amazing statistical fact: students who don’t have the “grit” to do things this way fail.
But who does this pattern really indict?
Totally tautological. I agree. I have all kinds of other reservations about experimental psych stuff.
One of the things that I find helps clarify the whole “make ppl bang their heads against it til they get it” approach is to refer to disabilities. If someone is color-blind or dyslexic, they won’t see color/text more clearly by trying harder. It’s an extreme way of clarifying the argument. Er. Slightly off topic 🙂 but your post reminded me of it
Jeez, it almost seems like this is opening the door to the .coms’ (e.g. Knewton) “adaptive learning” algorithms that promise multiple pathways (less head-banging) leading to the same endpoint. I don’t think you’re intending to head in this direction, but the “unique path” and the computer “learning” the best multiple paths to learning aren’t far from what you’ve laid out…
Sorry for sidetracking this off the grit topic.
It’s a different door (a very different door). Nobody thinks Knewton is a bad idea because it offers more material for students to look at. Knewton is dumb because there isn’t really a way to decide what to feed a student based on the sort of Big Data they collect. Their pitch, “Maybe you learn best at 3:00 in the afternoon with a kinesthetic task!”, is nonsense created by stringing together the rudimentary data they collect (what students accessed, at what time, in what order) with meaningless categories of learning styles and other fictions that provide little or no insight into meaningful differences between learners.
As an example, the wine expert didn’t learn that way because they were “kinesthetic” vs. auditory or because we fed it to them at a peak time. They learned that way because they happened to make certain faces. Assuming they make those faces and someone points out that this is a good way to think about finishes, they are going to do well.
So who does that? Who finds the explanation that works? Well, if you take Stack Exchange and other sites as examples, then you do that, as the learner. You’re the best expert as to what is helping you understand things. The friendly robot in the sky has nothing to do with it. What we do is make sure you have the resources that support your choice.
Data in this world looks a lot simpler. The wine lover might subscribe to a feed from the writer who taught him the facial expression trick, for instance. Or perhaps they would be recommended other answers that were found helpful by people who liked the facial expression answer.
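To make that concrete, here’s a minimal sketch of that kind of mechanism: a toy co-occurrence recommender. The answer names and vote data are made up for illustration, and this isn’t how any particular site actually implements it.

```python
# Toy sketch (hypothetical data, not any real Q&A site's code): recommend other
# answers that were marked helpful by people who also liked the answer you liked.
from collections import Counter

helpful_votes = {
    "alice": {"facial-expression-answer", "tasting-grid-answer"},
    "bob":   {"facial-expression-answer", "aroma-wheel-answer"},
    "carol": {"flashcard-answer"},
}

def recommend(liked_answer, votes, top_n=3):
    """Return other answers most often co-liked with `liked_answer`."""
    counts = Counter()
    for person, answers in votes.items():
        if liked_answer in answers:
            counts.update(a for a in answers if a != liked_answer)
    return [answer for answer, _ in counts.most_common(top_n)]

print(recommend("facial-expression-answer", helpful_votes))
# -> ['tasting-grid-answer', 'aroma-wheel-answer']
```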
I don’t have to imagine this as some hand-waving adaptive program or AI robot, because if you look at Q&A sites you can see that a few simple mechanisms can accomplish this well. Does this make sense?
This is very helpful and I completely agree. Just wanted to go after this angle for anyone reading the piece from that perspective. It will be so interesting to see how the “AI robot” vs. the “Q&A sites” plays out in educational spheres (not that it is an either/or). In my limited exploration, Quora seems more real/effective/appropriate than anything a black box might offer. Thanks for the insight! #connectedcopies