Active Learning Not Associated with Student Learning in a Random Sample of College Biology Courses
I've been collecting these sorts of research examples and making an effort to read them thoroughly, partly because I think we've become a bit too self-congratulatory about active learning, and partly because you learn more from these failures than from yet another paper confirming that active learning/constructivism/engaged pedagogy works.
This one is particularly interesting for a couple of reasons. First of all, it ends up showing that although active learning did not correlate with learning gains, using active learning to confront misconceptions did.
That's really interesting, because well-designed physics clicker questions, for example, plug directly into common misconceptions. But it takes time in a discipline to hone a set of questions like that.
The study is also interesting because it reminds us of the normal state of affairs, in which students graduate from introductory biology with common misconceptions about evolution:
Thirty-nine percent (n = 13) of courses had an effect size lower than 0.42, which corresponds to students answering only one more question (out of 10) correctly on the posttest than on the pretest. When learning was calculated as average normalized gain, the mean gain was 0.26 (SD = 0.17). On the cheetah question, learning gains were even lower. Effect sizes ranged from −0.16 to 0.58. The mean effect size was 0.15 (SD = 0.19) and the mean normalized gain for the cheetah question was 0.06 (SD = 0.08). These remarkably low learning gains suggest students are not learning to apply evolutionary knowledge to novel questions in introductory biology courses.
That's 15 weeks or so to get one more answer right out of ten on a post-test. I'm not mocking that at all; in fact, quite the opposite. It's worth remembering how hard it is to get gains in these areas. When we see effect sizes of 1 or more, our jaws should be on the floor…
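If those two measures are unfamiliar, here's a minimal sketch of how they're usually computed, assuming the standard definitions from the education-research literature (Hake's normalized gain, and a Cohen's-d-style effect size). The 10-question test matches the quote above, but the pretest mean and the ~2.4-question SD are illustrative assumptions I've chosen so the arithmetic reproduces the paper's 0.42 ↔ one-more-question mapping; they are not data from the study.

```python
# Sketch of the two learning measures quoted above. Assumes the standard
# definitions (Hake's normalized gain; Cohen's-d-style effect size).
# The example numbers below are illustrative, NOT data from the study.

def normalized_gain(pre: float, post: float, max_score: float = 10.0) -> float:
    """Hake's normalized gain: fraction of the available headroom gained."""
    return (post - pre) / (max_score - pre)

def effect_size(pre_mean: float, post_mean: float, pooled_sd: float) -> float:
    """Cohen's-d-style effect size: mean gain in units of score spread."""
    return (post_mean - pre_mean) / pooled_sd

# One more question right (4 -> 5 out of 10), with an assumed pooled SD of
# ~2.4 questions, reproduces the paper's 0.42 effect-size threshold:
print(round(effect_size(4.0, 5.0, 2.4), 2))   # 0.42
print(round(normalized_gain(4.0, 5.0), 2))    # 0.17 for the same one-question gain
```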
As far as weaknesses of the study go: self-reports, self-reports, self-reports. The authors try to deal with this by correlating the self-reports with student impressions, but what I'd really like to see is a sample of these courses observed on video and coded.