New research is out on the use of student response systems in the classroom, and really no surprises to be found in it. Students respond favorably to SRS use when it’s employed consistently, with a clear purpose, by an instructor who is excited about using it and committed to the method.
It does remind me, though, of how often we fail at the commitment piece. We go into a class believing in a method, mostly, but worried about failure. And our first instinct can be to distance ourselves from the method or technique or technology, to somehow immunize ourselves against potential future disaster: we say, “Hey, so we’re trying something new in this class, maybe it will work, maybe it won’t, frankly it will probably blow up around midterms (ha ha) but just soldier through it…”
And the students rightly go, um, who’s this “we”? You’re the teacher, you have the power, you’ve got them in the desks, you’ve designed the semester that they are pouring their money and time into; have you thought this through or not? There are some sorts of weakness and doubt you can show to students. In my experience, this sort of weakness is not one of them. You’re not doing your students any favors by fostering worries that all their effort may be for naught. And ultimately you’re not insulating yourself from failure either.
If you want to talk about failure, explain to your students the conditions under which the method seems to work (the type of effort and participation required, potential fail points and solutions, etc.), and the places where your students can have an impact on the design or success of the project. These are places where you can empower your students, which is quite different from punting on your own responsibility as a course creator and facilitator.
It’s a simple point, but so many educational technology disasters I’ve been involved with have come down to the commitment piece. If you can’t commit to it, don’t do it. But if you’re going to do it, commit.
Photo Credit: flickr/hectorir
It is a simple point, but one probably worth a bit more discussion. In my experience, the faculty who were most committed to some sort of technology innovation were also the best not only at communicating to students why the tech/method was being used, but also at acknowledging that failure was indeed a possibility and explaining how to react (i.e., how not to overreact) if failure were to occur. The faculty who fail most spectacularly are almost always not only those with lower commitment, but also those who thought their low level of commitment was enough, and that they didn’t need to even consider the possibility of failure, let alone discuss it with students.
So for supporters of faculty there are lots of ways to see this commitment:
– faculty excited about the possibilities for improved teaching & learning take the time to consider how a class session or overall course design should change – they don’t just tack on something new
– faculty aware of the inevitability of foibles when introducing new tech take time to play around with it on their own and with students – they don’t dive right in with a high-stakes use of the tech
– most faculty who are interested in innovation want to connect with other faculty who are on the same path as them (or a bit ahead), because they know it’s better to have peers to swap stories and tips with
– and IMO the faculty who are most effective at introducing innovations do talk with students about not only the intended benefits, but also the possible failure points and how to deal with them (in a way that empowers rather than fosters worries, as you say).
The big question is, when do faculty support folks decide that a faculty member really isn’t committed enough, and what do they do at that point to try to minimize the damage that might be done?
This is a great point. There’s a talent to convincing the students that the partial failure of the technology is not necessarily the failure of the project.
And that ends up being tied to understanding what the *real* point of the technological innovation is. If we understand that clickers are not there just to make class “fun,” then when problems with clickers make class not fun, it isn’t seen as failure; we know the bigger point of clickers was to get students to really test and critique their conceptual understandings. And that part is still happening, or if it’s not, it’s still worth shooting for.
Oh, and regarding your big question — I don’t know. It’s certainly a huge question though, because half-assing it (technical term) poisons the well for future efforts.