I really wish every person involved in online learning could watch this short video:
After watching that, you might assume that I am going to rant against B.F. Skinner. Far from it. In that five-minute video Skinner tackles concepts of self-paced learning, the importance of quick feedback, the basics of gamification, aspects of proximal development, and mastery learning. He understands that it is not the *machine* that teaches, but the person who writes the teaching program. And he is better informed than almost the entire current educational press pool in that he states clearly that a “teaching machine” is really just a new kind of textbook. It’s what a textbook looks like in an age where we write programs instead of paragraphs.
That’s in 1954, people. I’m a constructivist most days of the week, but nobody really stands as tall in the field of learning as Skinner.
So Skinner doesn’t irk me. What irks me is that we start every tech discussion as if Skinner, Bruner, Bloom, and their subsequent critics never existed. For example, here’s a presentation on gamification. The principles of gamification are stated as
- Goal Setting,
- Norms of Reciprocity,
- Loss Aversion,
- Set Completion.
Know what? About half of those topics are covered in the five-minute video, and covered better than you’ll find them at any THATCamp shindig.
You want more? Here’s a news story from last year about Rocketship Schools and their learning labs:
As students log on in the computer lab, they access what amounts to an individualized skills plan, the day’s instruction based on assessments that adjust to their performance.
The Learning Lab holds 130 students and it’s nearly always full. Once students squeeze into their chairs and pint-sized headphones, the room takes on the hushed air of a study hall during final exams, with each student working at his or her own pace.
The title of that news story? Futuristic Rocketship Schools Redefine Teaching. The list goes on and on. You can play this game easily yourself: find a piece on Khan Academy, read about edX’s machine grading, skim a Friedman article, attend a TED Talk.
What we have to stop asking is “Why will this work?”
What we have to start asking is “Why will this work this time around?”
The answer to that second question might be very plausible — the advances in technology have been incredible. We integrate social elements more seamlessly into technology than we used to. We are networked together. But it’s that question — “Why will this work this time around?” — that really needs to form the starting point of the discussion.
1954, people. 1954.
19 thoughts on “B. F. Skinner on Teaching Machines (1954)”
Damn you Mike, you’ve somehow reached into the draft area of my blog and one-upped me. I was going to do something similar based on references from the early 1990s (the previous time we thought we had improved on Skinner). I’ll just post a link to this instead.
Sorry Cameron! Although I’d still like to see that article!
That these ideas continue to resurface, without much attention to the failures of the past, demands the uncomfortable questions that you ask. I’m not sure if there’s an implication in your questions–I don’t sense there necessarily is–but I am not ready to conclude that there are fundamental intellectual failings in the original concepts (in this case Skinner’s, but it could be Bloom’s, etc.). Instead, I suspect a failing of the times. If we agree that the theories are sound (and the many replications in the latest fashion suggest many of us do, even if we don’t want to), then it may be that the question, “why will this work this time around?” is answered simply by the ubiquity and accessibility of the necessary technology. The mechanisms for the teaching machine are now common and facilitate the spread of these ideas. So maybe we’re not really looking for a revolution, as people like to say, but a kind of renaissance?
I completely agree that there are not really failings in the original concepts. The truth is that as cognitive science has expanded its understanding of how we learn, Skinner turns out to be more right than just about anyone. The work we see from Ericsson on deliberate practice confirms the structured approach to learning (and undermines the idea that instruction must be authentic in every instance — learning, in fact, must do *better* than authentic experience). Bruner’s addition of discovery learning to guided methods in the 1960s turns out to have been superfluous at best, and harmful at worst: novices learn more slowly and develop broken conceptual frameworks when left to discover knowledge on their own, and experiments from the 1980s to the present have demonstrated that conclusively — you’d have a rough time finding a cognitive scientist who supports unguided or minimally guided discovery. So again, Skinner for the win on highly guided practice.
More things Skinner was right about — high success percentages. He talks about this in the video and uses 90% as his magic number. He’s right. You have to structure assignments so that students do pretty well. That’s partially motivational, but it’s also because it’s easier to reinforce an action being performed than to try to punish a student for not doing the thing that they don’t know how to do. His focus on foundational knowledge for novices is also correct, even if it has been much maligned in subsequent years — look at the research on the Expertise Reversal Effect (which can also be read as a stunning affirmation of Bloom).
Where Skinner goes wrong in theory I think is his lack of interest in internal states of knowing (which is where behaviorism kind of fails in general) and in the social components of knowledge and learning. But as you say, the big barriers are not theoretical but issues of implementation. The three big barriers I think technology hits in education are:
a) On the macro-level, technologists (and cognitive scientists) tend to treat educational institutions as if their only purpose were education, and education in a very narrow sense. And so the idea is if you build a brilliant teaching machine, then the system to support it can be built around it. If the institution cannot accommodate the brilliant teaching machine, it’s the fault of the system, not the machine. But educational institutions are like ancient software — they encode a lot of functions that are crucial, but not immediately apparent to the outsider. Lawrence Cremin pointed this out rather forcefully at the end of his life. We assign our schools all the difficult problems of society, from integration to social equality to community development and participation. In my opinion, with all these things encoded, we rightly are conservative about breaking apart these institutions to accommodate tech. And so, if you’re serious about change, you can’t have your method require a complete rethinking of the institution. You can’t build the perfect car that requires all new roads.
b) On the micro-level, Skinner and others missed (or largely ignored) the social nature of education. Personalized education is like personalized anything else — it sacrifices a common experience for a customized one. And that’s something where you have to strike a balance, because that common experience is a powerful thing. I’m a godless liberal, but one of the most striking educational experiences I’ve ever had was a semester I spent in a Catholic Great Books-style program. There, the entire school read the same books at the same time. The semester I went, it was the Renaissance. So everyone from freshman to senior was reading Machiavelli, Ben Jonson, Thomas More, Erasmus, Shakespeare. At breakneck speeds, no less.
You’d think this was a recipe for disaster. But what happened, in my experience, was that the conversations it produced among students were wonderful. Students would actually talk about recent readings in the dining commons over lunch, in their dorms, while drinking. If you had just really enjoyed or hated The Taming of the Shrew, you could talk to any student at that school — they had just read it too. And external connections were still made — students who had been around for the Romantic Era semester would preview for others the sorts of things that would build off Shakespeare. It was a profound thing to see, this student body so connected by the ultimate cohort experience.
I’m not saying we want (or can) go to that level of common experience. We need some people to study psychology, some to study medicine, some to become writers. This involves some specialization, some fragmenting of the common experience, and that’s good. My daughter is in middle school, and the curriculum there is killing her — she wants to get to high school where she can take social science courses, which is where her interest lies, not algebra. So the common experience can also be de-motivating. Implemented broadly, a return to a canonical educational experience is a disaster. But I think an experience where every student has an entirely personalized experience is also a disaster, because it seals us off. It becomes like the underground music scene today, where we completely customize what we listen to only to find there is no one left to talk about it with.
c) I’d say the final thing tech gets wrong is that it underestimates the importance of presence. Physical presence matters. Synchronous experience matters. A million (billion?) years of evolution have gone into making us privilege the event that is in front of us over the one that is distant, the event that is human over the one that is mechanical, the event that is ephemeral over the one that is always-on.
Anyway, I think those three things — the broadness of education’s actual charter (and the need to integrate with the existing system), the importance of common experience, and the priority we give presence — are the major things tech gets wrong again and again. If you look at the MOOC experience, for example, parts a and c are left out. Rocketship’s implementation seems to have stumbled on issue b. And so on.
Yikes. That’s a long comment. But there you go — thanks for pushing me on this, I think it’s clarified my thinking a bit.
Fascinating video and comments, thanks. Your final thoughts remind me of the economist James Heckman’s comparison of students who passed the GED exam versus those who completed coursework (I heard him interviewed by Ira Glass on This American Life, http://www.thisamericanlife.org/radio-archives/episode/474/transcript). Those who completed coursework were far more successful because of the cognitive “soft skills” they developed, not just the intellectual skills required. I think a lot about how online connectivity can foster those, too.
Sarah — thanks for this — posted just in time for me to get the episode and load it to my tablet for the ride home!
One way to read the history of ed-tech is to view it as the quest to develop the Teaching Machine. As this video of Skinner makes incredibly clear, our interest in using automation in the service of more personalized and more efficient learning isn’t new. Not in the least little bit.
As such, there are several things that irk me about a lot of the (new) tools that I see being developed that tout adaptivity or personalization.
The people building them seem to be incredibly unaware of this history. Leaving aside what they may or may not know about education *theory* (in terms of cognitive science or in terms of sociology), there isn’t even an understanding of the history of *tech*.
(I realize it’s an easy target, but look at the staff at Khan Academy, for example (https://www.khanacademy.org/about/the-team): software engineer, software engineer, software engineer. Just one studied cognitive science.)
From this — not surprisingly — we see a focus on fine-tuning the machines in the hopes that the answer lies there. There’s lots of excitement about AI — and sure, we’ve seen some whiz-bang developments. We have self-driving cars — hot diggity — but still struggle with “self-driving” educational software (and admittedly Sebastian Thrun’s interest in education is what prompted me to start working on a book on this topic).
Years after Skinner and leaps and bounds and Moore’s Laws later, we haven’t yet found the holy grail of adaptive learning (despite all of Knewton’s claims to the contrary). This suggests to me that the missing piece here isn’t just the ability, technologically, to build a better “Box.” It isn’t just about having larger and larger datasets for us to refine our algorithms (although yes, Knewton, that will likely help).
It’s a radical disconnect between machine learning and people learning and social learning. Part of that failure hinges on the cognitive science piece; and I agree with Mike, much involves misconstruing the social elements of learning and (for lack of a better word) the politics of schooling.
And I have to wonder too (and I need to stew on this a lot more, admittedly) whether, in having this decades-old goal that we’re still working towards — to build a Teaching Machine — we’re working on solving a problem that doesn’t match our contemporary world of information and knowledge production, consumption, and construction.
Impressive. B. F. Skinner was certainly smart.
You might be interested in a video mash-up we did at Liverpool John Moores (UK) which links Skinner’s views with current tablet use in schools.
Skinner might have fallen out of fashion in educational research and instructional theory, but he has a profound influence on the practice of corporations, which in turn have a massive influence in schools and colleges.
Although Skinner still has relevance, I see the return of a behaviourist approach cloaked in new technology as a profoundly backward step. Much of the work done since Skinner is being ignored, masked by a smokescreen of new technology.