I’ve just finished attending the MOOC Research Initiative conference and workshop, which felt not so much a conference on MOOCs to me as the beginning of something else. We kick Courdacity-style MOOCs around a lot, but if all MOOCs did was bring this level of intelligence and insight into problems of online learning, it would be worth it all. There’s a way that that black hole of a term is pulling different universes together, and if the culture of MRI ’13 is what results from that, the future looks very bright.
Conferences are always a bit of a disembodied experience for me; I never feel grounded away from home. But this time the whole experience had a warm dream-like quality, and I’ve been trying to parse out why that was. Certainly, part of it was the psychological overload of seeing so many people who I admire in one place. Part of it might be the ice storm that swept in the night of day two, pushing us all together into smaller spaces.
But I think it was actually the conversations that struck me the most. You know how in a dream characters suddenly pop up and have bizarre yet fluid conversations that pull your mind in all sorts of new directions? The conversations here were not that bizarre, I suppose, but they had that rapid fluidity of dream logic, where a sort of conversational shorthand shuffles you quickly past the normal pro/con nonsense and into something much bigger, and much more exciting and mysterious.
Kudos to George, Amy, Tanya, and everyone else that put this thing together. I’ll be processing the experience for weeks to come, and there’s no greater compliment for a conference than that.
I used to do more statistical literacy stuff on this blog, and I’m toying with the idea of going back to that. The problem is that the stuff that really tends to matter is stuff everybody thinks they already know, but which most people have not built habits around. It’s not really fascinating stuff to talk about, and most of the time it doesn’t result in huge discoveries, but rather, small modifications to our understanding of claims.
A good example of this is a recent study of history PhDs, which shows surprisingly high employment among them. It’s a great study, and hugely useful. However, the summary contains this line, which I’m sure people will latch onto:
The overall employment rate for history PhDs was exceptionally high: only two people in the sample (of 2,500) appeared unemployed and none of them occupied the positions that often serve as punch lines for jokes about humanities PhDs—as baristas or short order cooks. (italics mine)
In the COMPARABLE framework I used to give my students, one of the first questions you ask is “How was this number computed?” (“O” stands for “How were the variables Operationalized?”). A quick two-minute scan of the article shows us this:
To identify the career paths of recent history PhDs, the AHA hired Maren Wood (Lilli Research Group) to track down the current employment of a random sample of 2,500 PhDs culled from a total of 10,976 history dissertations reported to the AHA’s Directory of History Departments and Historical Organizations from May 1998 through August 2009. The AHA’s Directory Editor, Liz Townsend, compared the data to employment information in the AHA Directory—which lists academic faculty—and the Association’s membership lists, and Wood used publicly available information on the Internet. Data was collected during February and March of 2013, and reviewed in June and July. Together, AHA staff and Maren Wood identified current employment or status information, as of spring 2013, on all but 70 members of the sample group.
A lot of the time, when you can’t determine the status of part of your sample, you can assume that the unreachable, unfindable people break down more or less into the same percentages as the reachable part of your sample. But how you collect data affects this. In this case, the existence of the American Historical Association directory makes it highly unlikely that there were unfound tenure-track positions, and the public nature of university directories probably sussed out most other people in university positions.
On the other side of things, we can imagine that the most invisible, hard-to-find people would be the ones that are unemployed or work low-paying, low-profile, non-academic jobs.
All in all, I think it likely that tracking down the untrackable would substantially add to the unemployed count, and might even dig up a barista. The research methodology almost guarantees that the 3% of people not found will be primarily people outside the university system.
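To make that point concrete, here is a quick back-of-the-envelope sensitivity check. The sample figures come from the AHA summary quoted above; the shares of the unfound group assumed unemployed are illustrative guesses, not data:

```python
# Hedged sketch: how much could the 70 unlocated PhDs move the
# unemployment figure? Sample counts are from the AHA summary;
# the assumed shares of the unfound group are illustrative only.

SAMPLE = 2500
UNFOUND = 70
KNOWN_UNEMPLOYED = 2

def unemployment_rate(share_of_unfound_unemployed):
    """Overall rate if some share of the unfound are unemployed."""
    unemployed = KNOWN_UNEMPLOYED + UNFOUND * share_of_unfound_unemployed
    return unemployed / SAMPLE

for share in (0.0, 0.25, 0.5):
    print(f"{share:.0%} of unfound unemployed -> "
          f"overall rate {unemployment_rate(share):.2%}")
```

Even under the pessimistic assumption that half the unfound are unemployed, the rate stays low; the point is not that the study is wrong, but that “only two people” is an artifact of who is findable.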
So I think this “two people unemployed” business is overstated. Still, the claim that half of history PhDs are employed in four-year tenure-track positions stands despite this, and that remains a rather interesting result.
With that result, there’s perhaps another issue. The initial sample is culled from finished dissertations. But dissertations are often abandoned, and all-but-dissertation (ABD) tends to become a permanent state for many who don’t find employment in academia. Why finish the dissertation if you can’t find a job in your field? Barista jokes are unfair, but if there is a would-be-PhD barista out there, they are likely ABD, and they wouldn’t show up in these stats anyway.
What would the stats look like if we included the ABD students? A minor quibble, unlikely to have a *huge* impact on the numbers. But it moves possibly sensational claims a bit closer to reality, especially in the humanities, where 10 year degree completion is sub-50%, IIRC.
A final thing I might note as rather odd is the small number of PhDs working in the community college system. In the “M” part of the COMPARABLE framework, students are asked to create a basic “model” in their head and make predictions — if X is true, what else is likely to be true? Can you check it? Here, the fact that a large number of history teaching jobs are at community colleges, but only 5.5% of our PhD sample work those jobs (compared to the 50% of the sample working tenure-track jobs), leads us to guess that the vast majority of people teaching history at the community college level must not have PhDs. There are certainly ways that could be false while the data remains good, but if that prediction turns out wrong, then we’d have to dig deeper into the data.
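Here is a rough sketch of that model check. The national count of community college history faculty below is a deliberately hypothetical placeholder (the real figure would need to be looked up); the exercise is the habit of turning a model into a checkable number:

```python
# Hedged model check: if only 5.5% of recent history PhDs hold
# community-college jobs, what share of CC history faculty could be
# recent PhDs at all? The faculty count is HYPOTHETICAL, for
# illustration only; the other figures come from the AHA study.

phd_pool = 10976            # dissertations in the AHA pool (from the study)
cc_share = 0.055            # share of sampled PhDs in CC jobs (from the study)
cc_history_faculty = 20000  # HYPOTHETICAL national count of CC history faculty

phds_in_cc = phd_pool * cc_share
print(f"Implied PhD share of CC faculty: {phds_in_cc / cc_history_faculty:.1%}")
```

With the placeholder count, only about 3% of community college history faculty would be recent PhDs — a prediction to check against real data, not a conclusion.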
So there you go. A partial analysis.
Now here’s the question for readers — is this boring as hell? Interesting? Boring, but salvageable?
The thing is I really believe in this stuff — getting into these habits of mind that let you do a five minute analysis of numbers. And the way I’ve learned it is by watching people model it (Tim Harford, Ben Goldacre, Milo Schield, Joel Best, etc). But I think it can be a bit boring to read unless there is some big revelation, and most of the time the revelation is that the numbers are worthwhile, but likely somewhat overstated. Hardly edge of chair stuff.
Thoughts on how to blog this sort of thing? I was thinking of doing one a week if I could find some way to make it interesting.
This “experimented on” phrase bothers me a bit:
The SJSU affair falls somewhere between educational research and a social experiment, and we are very much in need of better experiments in these areas. Most educational research is pretty abysmal. Most social policy goes untested. The lack of decently designed experiments in these areas generally allows the people with the most money and policy clout to determine what constitutes truth in this space. And people suffer because of that, every day.
So we need more experimentation. And we probably need better experiments than SJSU, where Udacity demonstrated negligence in offering students an experience they should have known to be inferior. I am not arguing that we should shrug our shoulders at a company that takes student failure too lightly, or directs policy interventions disproportionately at the powerless.
But “experimented on” somehow implies to me that the rest of us are not making choices every day on how we educate students. I used a team-based model in my third iteration of a statistical literacy class I taught, and I tracked its effectiveness. Results were mixed. Was I “experimenting on” my students? I introduced peer instruction another semester. I was certainly experimenting with my delivery — but was I experimenting “on” the class?
I’d say no. I was altering instruction to find out what worked best, and paying attention to the results. This, broadly, is what it means to be a professional. And I don’t think that changes at the institutional level. I was not using my power as a teacher to collect data on stress reactions to various forms of supersonic pitches, or on heart rate reactions to violent imagery. I was trying to do the best I could at what both society and the students were paying me to do.
I’m eliding a lot of concerns here for the sake of brevity. I’m happy to argue this more deeply in the comments. But “experimented on” sounds to me like the ickiness is coming from the use of a formal design. That’s wrong, in so many ways.
In reality, the ethical considerations in situations like SJSU are both more broad and more narrow. Such activities are unethical if the treatment received by either of the test groups is unethical, end of story. If the treatment of the Udacity students is unethical inside the experiment, it would likely be unethical had no experiment framed it — the fact that we’re tracking outcomes has little to do with it. Likewise, if the use of Udacity for this purpose is an acceptable policy option outside of an experiment, then the use of random assignment to assign it is ethically neutral.
To my ears, the phrase “experimented on” confuses that issue by imposing a particular set of ethical concerns that only exist once we decide to track outcomes, or use random assignment to allocate limited resources. So please — argue whether offering such courses as educational alternatives is ethical, and debate whether experimentation that tends to target those alternatives at poorer schools is socially just. But let’s not create the impression that it’s the presence of the experiment that makes these solutions ethically dubious.
I can’t read much of the recent piece of Thrun hagiography without wanting to do bodily harm to myself, so this following analysis might miss some of the subtlety of the article. I’ve tried to push myself to read it fully, and I really just can’t. From the photo up top of Thrun in what looks to be a 1973 Swedish cycling film, to the URL (“uphill climb”, get it?), to the vast research incompetence of the unbelievably compromised reporter who wrote it, every paragraph reminds us that Fast Company and other such publications exist as a sort of Pravda for the Valley set. With apologies, of course, to Pravda.
But if I read the article right, Sebastian Thrun, a man who slaved a full hour over a lesson he had to correct in between displays of physical prowess, is done with the traditional higher education market.
But for Thrun, who had been wrestling over who Udacity’s ideal students should be, the results were not a failure; they were clarifying. “We were initially torn between collaborating with universities and working outside the world of college,” Thrun tells me. The San Jose State pilot offered the answer. “These were students from difficult neighborhoods, without good access to computers, and with all kinds of challenges in their lives,” he says. “It’s a group for which this medium is not a good fit.”
What is the answer? Move to a market segment where innovator-preneurs are free to innovator-preneuriate. Here’s one of the new classes, taught by educator-preneur Chris Wilson:
If Wilson seems slightly unprofessional as an educator, that’s because his only formal teaching credential is as an assistant scuba-diving instructor. Wilson works at Google as a developer advocate in the company’s Chrome division. His class was conceived, and paid for, by Google as a way to attract developers to its platforms. Over the past year, Udacity has recruited a dozen or so companies, including Autodesk, Intuit, Cloudera, Nvidia, 23andMe, and Salesforce.com, which had sent a couple of reps to discuss a forthcoming course on how to best use its application programming interface, or API. The companies pay to produce the classes and pledge to accept the certificates awarded by Udacity for purposes of employment.
There’ll likely be lots of analysis of this article and the change in direction. Here’s my little contribution. Thrun can’t build a bucket that doesn’t leak, so he’s going to sell sieves. I discussed this a bit a year ago in Why We Shouldn’t Talk MOOCs as Meritocracies (graph at top seems broken, sorry):
It’s that central point that I want to deal with though – that as a society we need only be interested in equality of opportunity, and that wide disparities of results on display are in fact OK, because they represent the system working its sorting magic. The people that have merit, who put in the work are succeeding. The people that don’t are not.
I hear this tossed around as an answer to MOOC failure rates, and it scares me a bit. It has taken decades for us to get to a point in higher education and K-12 where we are held accountable for social outcomes. And while there are flaws in the way those outcomes are measured, I know my own institution has actually undergone a sea change since I attended. We still struggle, occasionally, with faculty who think their job is to thin the class on its way up, but on the whole most faculty are committed to increasing the student success rate…similarly, my child’s grade school has moved heaven and earth to successfully teach skills to children who would have been abandoned years ago.
Udacity dithered for a bit on whether it would be accountable for student outcomes. Failures at San Jose State put an end to that. The move now is to return to the original idea: high failure rates and dropouts are features, not bugs, because they represent a way to thin pools of applicants for potential employers. Thrun is moving to an area where he is unaccountable, because accountability is hard.
It’s tempting to say good riddance, but I would add just one more thing. Despite giving up on equality of outcomes, Thrun still believes he is in the education business. Fast Company still believes he’s in education. So do a lot of policy makers.
And it’s quite possible to go to a model of education that sees its primary goal as thinning the herd. Such systems have existed in many places throughout history. There is no reason that this version of education can’t come back, and every day we allow Thrun to pretend he is not running from accountability is a day we move closer to such a model. That future involves an “uphill climb” for the people who need our help the most, and I’m hoping we can avoid it.
Cross-posted from e-Literate.
After reading an excellent post by tech-blogger Jon Udell on innovation, I spent the weekend getting reacquainted with the work of Eric von Hippel, the researcher who pioneered the study of user-driven innovation.
What’s interesting about von Hippel is that his research hits on the common themes of the open education movement, but does so in a slightly different key.
Briefly, there are a number of intersecting debates about MOOCs. There is what Reich frames as the Dewey/Thorndike debate about what learning is. There is the centralized/de-centralized debate about what the web does best. There is the debate about whether MOOCs are disruptive or innovative or neither, and the discussion over how much ability to remix teachers need to make classroom learning work well (the answer, probably, is quite a bit).
But people on both sides of the debates are often driven by a larger question that we are not naming directly enough: “What are the sources of innovation?”
This is the question that von Hippel has been investigating for over thirty years now. And if we see innovation not as something that has happened, but as something we want to continue to happen, this may be the most important question of all.
The traditional answer, says von Hippel, is that product industries (“suppliers”) are the innovators. In this view a company comes across a set of “sad users”, finds what their problems are, and designs (via research and development) a solution.
But is that really how things happen? Since the 1980s von Hippel has been looking at the history of “transforming” innovations in various industries. These are innovations which haven’t just offered a slightly better or slightly cheaper product, but ones that have radically altered what is possible in an industry. A great example of such a transforming innovation is the center pivot irrigation system, considered by many to be on par with the invention of the tractor in the history of agricultural technology:
Before the center pivot system, farmers had to draw water from a single well and then pipe that water throughout the farm. The fundamental insight of the system was that instead of piping the water all over the farm (with the resulting leakage) you could drop a well in the center of a section of crops, and then use a gigantic rotating sprinkler to irrigate a large section of crops from that well. If you’ve ever flown over the country, you’ve seen what such farms look like from the air:
What von Hippel points out is that major innovations like these almost always come not from suppliers, but from “lead users”, a set of highly motivated and skilled users for whom the current technology or practice is restrictive. In this case, for example, the first center pivot system was created by an individual in the 1950s who wasn’t initially looking to market it, but simply to solve the set of local problems he was facing:
The Valley Corporation then came in later and refined it, allowing it to work more reliably, with less user intervention. They perfected it and prepared it for mass adoption. But the innovation was not theirs.
Look under the hood of 75% to 80% of all major innovations, and this is the story you find again and again, from the first heart-lung machine, to the development of wider skateboards, to protein-enhanced hair conditioners. On the web, people were running makeshift blogs well before Blogger, net-sync’ed folders well before Dropbox, video + question sequences well before Coursera. What smart companies do, for the most part, is not “innovate” but find what “lead users” are hacking together and figure out how to make that simpler for the general population to tap into. Research often plays its most important role after the fact, not in producing designs, but in allowing us to determine which lead-user designs work best, and to understand what, exactly, is making them work.
EDUPUNK and User Innovation
For many readers, this process may call to mind the EDUPUNK wave of 2008. The term was coined by Jim Groom in a conversation with Brian Lamb and subsequently elaborated on by a number of edubloggers, eventually hitting the New York Times (if I remember correctly) as a word of the year.
What some may not remember is that the coining of the term was a reaction to Blackboard’s announcement that it was moving to a Learning 2.0 platform, one that would (supposedly) integrate the technologies it had worked so hard to keep out of education because they weren’t perceived as serious or safe.
Lead users like Jim had gone out and done their own thing, hacking together syndication feeds, wikis, and modded themes into a workable replacement for a learning management system, one that did far better at meeting the emerging needs of the open classroom. And just when it was looking like they were out of Blackboard for good, Blackboard came up with this system of blogs and “2.0” features which replicated much of the functionality, but at the cost of hackability. (And at a higher price to boot!) Here’s Jim in that piece:
Corporations are selling us back our ideas, innovations, and visions for an exorbitant price. I want them all back, and I want them now!
Enter stage left: EDUPUNK!
My next series of posts will be about what I think EDUPUNK is and the necessity for a communal vision of EdTech to fight capital’s will to power at the expense of community.
I’ve never fully gone for the “capital’s will to power” bit of that, although I know that piece remains important to Jim. But for me the piece that resonated — and still resonates — is the disturbing vision of an educational-technology-complex that is aligned against the communities of innovators that it supposedly serves.
While a company like Blackboard, which produces tools to create things, may seem qualitatively different than an irrigation system company, it’s not different in the respect that it codifies practice. To the farmer coming up with an irrigation plan the range of devices and options available to her are just as much building blocks in an overall design as is the Blackboard gradebook or discussion forum.
As with other industries, most of the practice that Blackboard codifies (and the rudimentary architecture to support it) was developed outside of Blackboard by user innovators. And that’s fine. But the message Blackboard sent (and I think intentionally sent) over the years to skittish administrators was “Now that we’ve offered these innovations in the product itself, you can rein in all your experimenters and put them back in the box.”
As Jim so rightly points out, such actions and attitudes destroy innovation communities rather than foster them. And it’s not just Blackboard either. The entire education reform-industrial complex has often waged war on educational communities, based on the perception that questions of educational practice are mostly solved, and if we could get teachers to just teach using the centrally specified method (or foundation-approved test) we’d be set. Technology thought leaders even make bizarre claims that there is no innovation going on in education, outside, of course, the Silicon Valley entities here to save us.
People have termed this approach “a war on teachers”. It’s that, certainly. But since a subset of those teachers are where the innovations of the future are likely to come from, it’s a war on innovation as well.
The Sources of Educational Innovation
Once we see the question “What is the source of educational innovation?” as a core question of the debate, certain things become clearer. In fact, the answer an individual has to that question is probably highly predictive of what technologies they favor.
The current breed of xMOOCs emerged as a fluid hacking together of different educational elements in places like Stanford. In this environment, teachers using the system were encouraged to extend and supplement the product through both technological and pedagogical innovation.
But as Bob Dylan would say, things have changed. As MOOCs have reoriented to see a significant piece of their customer base as providers of blended learning (rather than the students themselves), they have failed to invite that user base into the culture of innovation, presumably due to the erroneous belief that innovation begins at the top and then filters down to the masses. The licensing, technology, content, and supporting community are all designed to preserve the innovation as shipped, in an effort to protect it from the users.
On the other hand, EDUPUNK technologies (varieties of cMOOCs, ds106, FemTechNet, Open Course Frameworks, P2PU) have continued to engage their users, asking those users to experiment, remix, hack, and redistribute. They are, in the words of von Hippel, “user innovation toolkits” which encourage users to alter, and even subvert, given designs. Because they codify much practice in convention rather than code (see, for example, the use of tag-based RSS and the harnessing together of readily accessible technologies) they retain a fluidity that promotes experimentation. They are, in a word, so EDUPUNK.
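As a concrete illustration of that convention-over-code pattern, here is a minimal sketch of tag-based RSS aggregation, the kind of thing a cMOOC or ds106-style course harnesses together from off-the-shelf parts. The feed contents and course tag below are invented for the example:

```python
# Minimal sketch of tag-based syndication: pull items from
# participant feeds and keep only those carrying the course tag.
# The feed and the "ds106" tag here are hypothetical examples.

import xml.etree.ElementTree as ET

def items_with_tag(rss_xml: str, tag: str):
    """Return titles of RSS items that carry the given category tag."""
    root = ET.fromstring(rss_xml)
    titles = []
    for item in root.iter("item"):
        categories = {c.text for c in item.findall("category")}
        if tag in categories:
            titles.append(item.findtext("title"))
    return titles

feed = """<rss><channel>
  <item><title>Week 1 remix</title><category>ds106</category></item>
  <item><title>Cat photos</title><category>personal</category></item>
</channel></rss>"""

print(items_with_tag(feed, "ds106"))  # only the course-tagged item
```

The convention (tag your posts with the course tag) does the work; the code just honors it, which is why participants can swap in any blogging tool that emits RSS.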
You can look at either of these paradigms, and ask which one is more innovative, or which one fits with your model of education. We can ask which framework is more effective or more suited to various local conditions. But the key question for administrators and policy makers is not just which system is more effective today, but which framework will continue to grow and adapt in the future.
And on this question the historical record is fairly clear — open frameworks which allow lead users to hack are the systems that will produce long-term gains. As a case in point, take Lego Mindstorms, a project built over 7 years by LEGO engineers which was significantly improved by user hackers within three weeks of its release.
Rather than fight against those hackers, LEGO decided to embrace them. And maybe this is where I differ from Jim in this respect — I don’t think gutting user communities is necessary for for-profit enterprise. Counterexamples like the one below show that the interests of investors and users can be aligned. In fact, given LEGO’s explosive growth in the face of a recession, one could see a more enlightened capitalism as a force for good:
I believe that this idea of fostering user innovation informs the rhetoric of Instructure around the Canvas LMS (the reality will emerge over time). It’s the business plan of Lumen Learning’s Candela OER Project, which acts as a publisher, polisher, and integrator of products produced and maintained by their user base. It’s something along the lines of what Alan Levine is proposing in his recent Shuttleworth grant proposal.
And at the same time, it is the antithesis of much of what we see out of Silicon Valley, which, not well versed enough to invent the wheel, reinvents the tree-trunk roller instead, and then mounts a campaign to get lead users to give up their makeshift wheel-and-axle systems as too ad hoc.
The situation is further complicated, because local knowledge is “sticky” in two major ways. First of all, many educators and educational technologists have extensive tacit knowledge of what works that is difficult to express to people who design products. As von Hippel points out, when such knowledge is sticky at the point of use (in this case the classroom), it makes sense to push design functions downstream.
Knowledge is also sticky in another way in education: it resists generalization. Despite what Udacity might tell you, there is no “magic formula”. Rather, there are dozens, perhaps hundreds, of magic formulas, the success and applicability of which are determined by the subject and skills being taught, the specific capacities of the students, and the nature of the local learning environment. What works in one situation is not always applicable to other situations.
When knowledge is sticky in this way, the importance of hackability to innovation is even greater. Yet while industry moves more and more towards recognizing the importance of user-driven innovation the educational-reform-industrial complex still treats such innovation as a disease in need of a cure.
The Last Innovators
The truth is that Salman Khan, Sebastian Thrun, Andrew Ng and others know this at heart — they are all, in fact, former lead users who solved their own problems with technology and then took their solutions to a broader market. And that’s wonderful: we’ve benefited from their contributions.
But they are only a fraction of a fraction of the user innovators out there. We can’t afford to regard these figures as the last innovators to ever walk the earth. If we wish to engage in ongoing innovation, we need to focus on generating conditions that foster more communities of more such people, not fewer. That means making sure that educational technology is as hackable as farm equipment, shampoo, and skateboards. That means choosing technology for your campus based on what your most creative and effective users need, so that they can advance your local practice, and steering away from lowest-common-denominator technology. It means looking to our practitioners to lead the way, and then asking industry to follow. And ultimately it requires that we cease to see innovation as a set-and-forget product we buy, and engage with it as a process and a culture we intend to foster.
Photo/Image Credits: Center pivot system: USDA, via Wikipedia; Kansas fields: U.S. satellite image via Wikipedia; Center pivot prototype: T-L irrigation; Jim Groom as EDUPUNK: bavatuesdays; Tree-trunk roller: Jonnie Hughes.
- It details the preliminary “impressions” of professors engaged in a three year study that will end in 2016. Despite having run flipped classes, they are in week three of that study.
- It mentions that flipped models might not work for philosophy, because it’s difficult to come up with “real-world problems” to which one could apply philosophy. This despite the fact that philosophy classes (like many humanities classes) are largely already flipped.
- It has no mention of sample size, methodology (other than the most basic information), controls, or quantitative findings.
- It is not clear whether the teachers teaching the flipped classes had any training or experience in the methodology, despite what looks like a depth of experience in lecture methodologies.
- Hilariously, the article dates the flipped classroom trend to 2007.
What’s more depressing than this is the mass of otherwise intelligent people on Twitter seeing this as either supporting or rejecting nuanced claims. Come on, people.
Asking whether flipped classrooms “work better” is like asking which medication or treatment works best for someone’s psychological problems. Not a specific problem, mind you, just psychological problems in general. What’s the one best pill/treatment at any dosage for depression, schizophrenia, ADHD, and/or agnosia?
Well, what you’re actually treating matters. Medication dosage matters. Therapy method and frequency matters. Therapist competence matters. Regimen compliance matters.
What research actually checks is whether specific regimens are effective for specific problems in specific sorts of cases. When we see good outcomes replicated across a variety of situations, or great outcomes replicated within very specific situations, we label that regimen “promising”, which is where I think certain flipped practices are today. But the details in a thing like this are the whole point as far as research is concerned. So a story that removes the context, student profile, and methodology might as well not be written (or cited) at all.
Jon Udell gets it 100% right:
“Thanks to the philosophical foundations of the Internet — open standards, collaborative design, layered architecture — its technologies typically qualify as user innovation toolkits. That wasn’t true, though, for the Internet era’s first wave of educational technologies. That’s why my friends in that field led a rebellion against learning management systems and sought out their own innovation toolkits: BlueHost, del.icio.us, MediaWiki, WordPress.
My hunch is that those instincts will serve them well in the MOOC era. Educational technologists who thrive will do so by adroitly blending local culture with the global platforms. They’ll package their own offerings for reuse, they’ll find ways to compose hybrid services powered by a diverse mix of human and digital resources, and they’ll route around damage that blocks these outcomes.
These values, skills, and attitudes will help keep a diverse population of universities alive. And to the extent students at those universities absorb them, they’ll be among the most useful lessons learned there.”
What I like best about the post is what a hopeful message it brings. You can see the recent rise of the xMOOC/neo-LMS as a giant step back (and I have at times felt that way). Conversely, you can see it as creating exactly the sort of problems we’ve spent the last decade building toolkits to solve.
We were built, my esteemed peeps, for precisely this moment. That’s not such a bad thing at all.