Are conversation and customization orthogonal? Some thoughts on the Rocketship Schools announcement.

Posted: January 3, 2013
Charter school company Rocketship’s current hybrid model seems like a decent enough idea — move from generalist to specialist teachers in grade school, specialists who understand learning issues in math and literacy at perhaps a deeper level, and adopt a 75/25 blend of classroom teaching with learning lab activities that are adaptive — customized to each student’s exact level of reading and math proficiency.
That sort of customization has long been the great hope of educational technology, the solution to the two-sigma problem of mass education — a problem Bloom defined as the search for mass methods of teaching as effective as one-on-one tutoring.
Unfortunately, of the things that educational technology brings to bear on the two-sigma problem — conversation, customization, and feedback — customization has been the big tech disappointment. While newer entrants into the space may think video + feedback + branching logic is revolutionary, the method dates back at least to B.F. Skinner’s Programmed Learning. A decade later, Bloom’s Mastery Learning focused heavily on teaching that was customized to either the specific class or student. And the use of such methods is not an untested hypothesis: when I went to grade school back in the late 70s we had a programmed reading program, still around I think, called SRA that was based on skill-level customization and individual assessment and practice. This stuff has been around for a while, and tried on a large, national scale.
All of these efforts have had some effect in appropriate domains. Bloom’s Mastery Learning, for example, has shown effect sizes around 0.5 in recent meta-analyses. That’s not a bad effect size, but if you look in Hattie, it’s only marginally above the 0.4 cut-off that Hattie identifies as the average effect size of *any* intervention.
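For readers who don’t live in the effect-size literature: the figures above are standardized mean differences (Cohen’s d) — the gap between treatment and control group means, divided by the pooled standard deviation. A minimal sketch of that arithmetic (the score lists are made up purely for illustration):

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference: (gap between group means) / pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)  # sample standard deviations
    pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical test scores: a mastery-learning class vs. a control class.
mastery = [78, 82, 85, 88, 90, 84]
control = [74, 79, 80, 83, 86, 80]
print(cohens_d(mastery, control))
```

An effect size of 0.5, then, means the average student in the treatment group scored half a pooled standard deviation above the average control student — which is why Hattie’s 0.4 “any intervention” baseline makes 0.5 look less impressive than it first sounds.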
Computers were supposed to change that by automating much of the work around customization. Starting in the 1960s, they began to do just that in some domains, such as flight simulation, chess instruction, and simple math problems. The idea was that as computers became more sophisticated they would be able to automate instruction in more subtle tasks. But that dream, to this day, has still not been realized. We have more computer power in our pocket than it took to stage the moon landing, and yet….
I admit to being a bit confused by this result. By all accounts, customization should work — and not just in limited domains like learning to play an instrument, but in any domain where something like “expert” knowledge exists. Yet, for some reason, it just doesn’t.
Back to Rocketship (you’d almost forgotten about them, right?).
Today, Rocketship announced it was backing away from its learning lab structure:
Rocketship Schools in the Bay Area have been one of the trailblazers in the ever-changing landscape of blended learning. Located in low-income neighborhoods, the schools’ Learning Labs — where students spend up to 90 minutes a day on computers working on math and literacy software — has been one of its defining characteristics.
But this model isn’t working, some Rocketship teachers say, and because it’s a charter school network with evolving systems, it may soon be changing, according to this PBS Newshour story.
First, let me congratulate Rocketship for being a charter that actually makes changes based on what is working and what is not. I’m not a huge charter school fan (I like the idea of magnet schools and universal admission better) but this is exactly what charter schools are supposed to do — pursue iterative improvement, not rigid ideology.
What’s more interesting to me, however, is why it is not working:
“There’s definitely an aspect of us kind of not knowing enough about what’s going on in learning lab to be able to use that in our classrooms,” said teacher Judy Lavi.
“We don’t yet get data that says, OK, teach this differently tomorrow because of what happened here. And that is — that is a frustration point,” said teacher Andrew Elliott-Chandler.
Adam Nadeau, principal of Rocketship Mosaic Elementary, says he doesn’t think the Learning Lab model will continue next year. And Elliott-Chandler sees a different function for the computers.
“Next year, we’re thinking of bringing the computers back to the classrooms and the kids back to the classrooms,” he said.
What’s fascinating about this is that it suggests a reason why customization may not work the way we want it to in blended scenarios — it undermines the shared experience necessary to make good use of classroom time. Teachers can’t walk into a classroom, say “OK, it looks like yesterday’s explanation of weighted averages didn’t really work, given your scores, so here’s what we are going to do,” and launch an activity. They can’t do that because there is no single yesterday for the students. The data is there, presumably, but with everyone at a different spot, the data is useless. By the time enough students have gone through a module for the teacher to see that they have radically misconstrued a concept, the majority of students have already carried that misconception forward into additional modules, and now everything is blown…
The solution they mention — bringing the lab into the classroom so teachers can monitor the students — is not bad (although it will likely kill the cost savings investors were salivating over). Walking around, a teacher can get a sense of where the struggles are and stage an impromptu discussion or activity. But what is most interesting is that it represents a compromise on a scale with two somewhat opposed principles at its poles: customization (the holy grail of educational technology) and conversation (its overlooked brother).
Indeed, structured classroom discussion has one of the highest effect sizes in Hattie, much higher than mastery learning. But it’s really difficult to have a classroom discussion (or group activities that foster student discussion) without some level of shared experience and knowledge. I’m curious if this fact might lie behind much of the surprising failure of computerized adaptive learning systems….