New article to be published in BioScience.
Ebert-May and her five coauthors examined data from two multi-day programs, one spread over several years and one repeated annually. Both programs increased participating faculty members' knowledge of inquiry-based teaching, as expected, and a large majority of them later reported in questionnaires that they were actually using such practices. But according to Ebert-May's study, 75 percent were not in fact doing so to any substantial degree. The researchers state that the results call into question the value of the self-assessment method frequently used in education studies, and they recommend that future researchers rely instead on validated independent assessments of teaching performance. Although a lack of support from colleagues is often suggested as an explanation when faculty fail to adopt learner-centered teaching methods, the participants in Ebert-May's study reported that insufficient time was the main impediment to revising their teaching.
Without a definition of what "substantial" change means, and without a sense of how many of the remaining 25% adopted these practices because of the workshops, it's hard to gauge this study. If that 25% made substantial-meaning-gigantic changes as a direct result of the workshops, I'd call that a victory (though I'm guessing that's not the case here).
The findings regarding self-reports, however, jibe with my own intuitions. Self-assessment is half decent at identifying barriers, but pretty much worthless at assessing impact and actual behavior. At Keene, we're working with faculty to come up with other ways to assess impact. And we're trying to focus on solutions that help address the time issue, which is huge.