I’ve just finished attending the MOOC Research Initiative conference and workshop, which felt not so much a conference on MOOCs to me as the beginning of something else. We kick Courdacity-style MOOCs around a lot, but if all MOOCs did was bring this level of intelligence and insight into problems of online learning, it would be worth it all. There’s a way that that black hole of a term is pulling different universes together, and if the culture of MRI ’13 is what results from that, the future looks very bright.
Conferences are always a bit of a disembodied experience for me; I never feel grounded away from home. But this time the whole experience had a warm dream-like quality, and I’ve been trying to parse out why that was. Certainly, part of it was the psychological overload of seeing so many people who I admire in one place. Part of it might be the ice storm that swept in the night of day two, pushing us all together into smaller spaces.
But I think it was actually the conversations that struck me the most. You know how in a dream characters suddenly pop up and have bizarre yet fluid conversations that pull your mind in all sorts of new directions? The conversations here were not that bizarre, I suppose, but they had that rapid fluidity of dream logic, where a sort of conversational shorthand shuffles you quickly past the normal pro/con nonsense and into something much bigger, and much more exciting and mysterious.
Kudos to George, Amy, Tanya, and everyone else that put this thing together. I’ll be processing the experience for weeks to come, and there’s no greater compliment for a conference than that.
I used to do more statistical literacy stuff on this blog, and I’m toying with the idea of going back to that. The problem is that the stuff that really tends to matter is stuff everybody thinks they already know, but which most people have not built habits around. It’s not really fascinating stuff to talk about, and most of the time it doesn’t result in huge discoveries, but rather, small modifications to our understanding of claims.
A good example of this is a recent study of history PhD career outcomes, which shows surprisingly high employment among history PhDs. It's a great study, and hugely useful. However, the summary contains this line, which I'm sure people will latch onto:
The overall employment rate for history PhDs was exceptionally high: only two people in the sample (of 2,500) appeared unemployed and none of them occupied the positions that often serve as punch lines for jokes about humanities PhDs—as baristas or short order cooks. (italics mine)
In the COMPARABLE framework I used to give my students, one of the first questions you ask is "How was this number computed?" (the "O" stands for "How were the variables Operationalized?"). A quick two-minute scan of the article shows us this:
To identify the career paths of recent history PhDs, the AHA hired Maren Wood (Lilli Research Group) to track down the current employment of a random sample of 2,500 PhDs culled from a total of 10,976 history dissertations reported to the AHA’s Directory of History Departments and Historical Organizations from May 1998 through August 2009. The AHA’s Directory Editor, Liz Townsend, compared the data to employment information in the AHA Directory—which lists academic faculty—and the Association’s membership lists, and Wood used publicly available information on the Internet. Data was collected during February and March of 2013, and reviewed in June and July. Together, AHA staff and Maren Wood identified current employment or status information, as of spring 2013, on all but 70 members of the sample group.
A lot of the time when you can't determine the status of part of your sample, you can assume that the unreachable, unfindable people break down more or less into the same percentages as the reachable part of your sample. But how you collect data affects this. In this case, the existence of the American Historical Association directory makes it highly unlikely that there were unfound tenure-track positions, and the public nature of university directories probably sussed out most other people in university positions.
On the other side of things, we can imagine that the most invisible, hard-to-find people would be the ones that are unemployed or work low-paying, low-profile, non-academic jobs.
All in all, I think it likely that tracking down the untrackable would substantially add to the unemployed count, and might even dig up a barista. The research methodology almost guarantees that the 3% of people not found will be primarily people outside the university system.
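To see how sensitive that headline number is, here's a back-of-the-envelope sketch. The sample counts come from the study summary; the unemployment rates assumed for the 70 unfound people are purely hypothetical, chosen only to show the range:

```python
# Sensitivity check on the "only two people unemployed" figure.
# Study numbers: random sample of 2,500 PhDs, all but 70 located,
# 2 of the located people apparently unemployed.
sample_size = 2500
unfound = 70
unemployed_found = 2

# Hypothetical unemployment rates for the unfound group.
for rate in (0.0, 0.10, 0.20, 0.30):
    estimate = unemployed_found + rate * unfound
    print(f"if {rate:.0%} of the unfound are unemployed: "
          f"~{estimate:.0f} unemployed, {estimate / sample_size:.2%} of sample")
```

Even under the most pessimistic guess here, overall unemployment stays under 1% of the sample; the point is just that "two people" is a floor, not a finding.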
So I think this "two people unemployed" business is overstated. Still, the claim that half of history PhDs are employed in four-year tenure-track positions stands despite this, and that remains a rather interesting result.
With that result, there’s perhaps another issue. The initial sample is culled from finished dissertations. But dissertations are often abandoned, and all-but-dissertation (ABD) tends to become a permanent state for many who don’t find employment in academia. Why finish the dissertation if you can’t find a job in your field? Barista jokes are unfair, but if there is a PhD barista, they are likely ABD, and they wouldn’t show up in these stats anyway.
What would the stats look like if we included the ABD students? This is a minor quibble, unlikely to have a *huge* impact on the numbers. But it moves possibly sensational claims a bit closer to reality, especially in the humanities, where 10-year degree completion is sub-50%, IIRC.
A final thing I might note as rather odd is the small number of PhDs working in the community college system. In the “M” part of the COMPARABLE framework, students are asked to create a basic “model” in their head, and make predictions — if X is true, what else is likely to be true? Can you check it? Here, the fact that a large number of history teaching jobs are at community colleges, while only 5.5% of our PhD sample work these jobs (compared to 50% of faculty working tenure-track jobs), suggests that the vast majority of people teaching history at the community college level must not have PhDs. There are certainly ways that could be false while the data remains good, but if that prediction turns out wrong, we’d have to dig deeper into the data.
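That quick model check is just arithmetic. The sample shares come from the study; the comparison of national job counts is a labeled assumption, there only to show the shape of the prediction:

```python
# Rough model check behind the community-college prediction.
# Shares of the sample come from the study; the national job-count
# comparison below is a hypothetical assumption, not data.
sample_size = 2500
cc_share = 0.055  # share of sampled PhDs working at community colleges
tt_share = 0.50   # share in four-year tenure-track positions

phds_at_cc = cc_share * sample_size  # ~138 PhDs in this sample
phds_at_tt = tt_share * sample_size  # 1,250 PhDs

# Hypothetical: suppose community colleges employ roughly as many history
# instructors nationally as four-year tenure lines do. Then the PhD share
# of community-college history faculty would be on the order of:
implied_phd_share = phds_at_cc / phds_at_tt
print(f"implied PhD share of CC history faculty: ~{implied_phd_share:.0%}")
```

If that implied share (around 11% under this assumption) looks implausibly low against what we know of community college hiring, that's the signal to dig deeper into the data.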
So there you go. A partial analysis.
Now here’s the question for readers — is this boring as hell? Interesting? Boring, but salvageable?
The thing is, I really believe in this stuff — getting into these habits of mind that let you do a five-minute analysis of numbers. And the way I’ve learned it is by watching people model it (Tim Harford, Ben Goldacre, Milo Schield, Joel Best, etc.). But I think it can be a bit boring to read unless there is some big revelation, and most of the time the revelation is that the numbers are worthwhile, but likely somewhat overstated. Hardly edge-of-the-seat stuff.
Thoughts on how to blog this sort of thing? I was thinking of doing one a week if I could find some way to make it interesting.
This “experimented on” phrase bothers me a bit:
The SJSU affair falls somewhere between educational research and a social experiment, and we are very much in need of better experiments in these areas. Most educational research is pretty abysmal. Most social policy goes untested. The lack of decently designed experiments in these areas generally allows the people with the most money and policy clout to determine what constitutes truth in this space. And people suffer because of that, every day.
So we need more experimentation. And we probably need better experiments than SJSU, where Udacity demonstrated negligence in offering students an experience they should have known to be inferior. I am not arguing that we should shrug our shoulders at a company that takes student failure too lightly, or directs policy interventions disproportionately at the powerless.
But “experimented on” somehow implies to me that the rest of us are not making choices every day on how we educate students. I used a team-based model in my third iteration of a statistical literacy class I taught, and I tracked its effectiveness. Results were mixed. Was I “experimenting on” my students? I introduced peer instruction another semester. I was certainly experimenting with my delivery — but was I experimenting “on” the class?
I’d say no. I was altering instruction to find out what worked best, and paying attention to the results. This, broadly, is what it means to be a professional. And I don’t think that changes at the institutional level. I was not using my power as a teacher to collect data on stress reactions to various forms of supersonic pitches, or on heart rate reactions to violent imagery. I was trying to do the best I could at what both society and the students were paying me to do.
I’m eliding a lot of concerns here for the sake of brevity. I’m happy to argue this more deeply in the comments. But “experimented on” sounds to me like the ickiness is coming from the use of a formal design. That’s wrong, in so many ways.
In reality, the ethical considerations in situations like SJSU are both broader and narrower. Such activities are unethical if the treatment received by either of the test groups is unethical, end of story. If the treatment of the Udacity students is unethical inside the experiment, it would likely be unethical had no experiment framed it — the fact that we’re tracking outcomes has little to do with it. Likewise, if the use of Udacity for this purpose is an acceptable policy option outside of an experiment, then using random assignment to allocate it is ethically neutral.
To my ears, the phrase “experimented on” confuses that issue by imposing a particular set of ethical concerns that only exist once we decide to track outcomes, or use random assignment to allocate limited resources. So please — argue whether offering such courses as educational alternatives is ethical, and debate whether experimentation that tends to target those alternatives at poorer schools is socially just. But let’s not create the impression that it’s the presence of the experiment that makes these solutions ethically dubious.
I can’t read much of the recent piece of Thrun hagiography without wanting to do bodily harm to myself, so this following analysis might miss some of the subtlety of the article. I’ve tried to push myself to read it fully, and I really just can’t. From the photo up top of Thrun in what looks to be a 1973 Swedish cycling film, to the URL (“uphill climb”, get it?), to the vast research incompetence of the unbelievably compromised reporter who wrote it, every paragraph reminds us that Fast Company and other such publications exist as a sort of Pravda for the Valley set. With apologies, of course, to Pravda.
But if I read the article right, Sebastian Thrun, a man who slaved a full hour over a lesson he had to correct in between displays of physical prowess, is done with the traditional higher education market.
But for Thrun, who had been wrestling over who Udacity’s ideal students should be, the results were not a failure; they were clarifying. “We were initially torn between collaborating with universities and working outside the world of college,” Thrun tells me. The San Jose State pilot offered the answer. “These were students from difficult neighborhoods, without good access to computers, and with all kinds of challenges in their lives,” he says. “It’s a group for which this medium is not a good fit.”
What is the answer? Move to a market segment where innovator-preneurs are free to innovator-preneuriate. Here’s one of the new classes, taught by educator-preneur Chris Wilson:
If Wilson seems slightly unprofessional as an educator, that’s because his only formal teaching credential is as an assistant scuba-diving instructor. Wilson works at Google as a developer advocate in the company’s Chrome division. His class was conceived, and paid for, by Google as a way to attract developers to its platforms. Over the past year, Udacity has recruited a dozen or so companies, including Autodesk, Intuit, Cloudera, Nvidia, 23andMe, and Salesforce.com, which had sent a couple of reps to discuss a forthcoming course on how to best use its application programming interface, or API. The companies pay to produce the classes and pledge to accept the certificates awarded by Udacity for purposes of employment.
There’ll likely be lots of analysis of this article and the change in direction. Here’s my little contribution. Thrun can’t build a bucket that doesn’t leak, so he’s going to sell sieves. I discussed this a bit a year ago in Why We Shouldn’t Talk MOOCs as Meritocracies (graph at top seems broken, sorry):
It’s that central point that I want to deal with though – that as a society we need only be interested in equality of opportunity, and that wide disparities of results on display are in fact OK, because they represent the system working its sorting magic. The people that have merit, who put in the work are succeeding. The people that don’t are not.
I hear this tossed around as an answer to MOOC failure rate, and it scares me a bit. It has taken decades for us to get to a point in higher education and K-12 where we are held accountable for social outcomes. And while there are flaws in the way those outcomes are measured, I know my own institution has actually undergone a sea change since I attended. We still struggle, occasionally, with faculty who think their job is to thin the class on its way up, but on the whole most faculty are committed to increasing the student success rate…similarly, my child’s grade school has moved heaven and earth to successfully teach skills to children that would have been abandoned years ago.
Udacity dithered for a bit on whether it would be accountable for student outcomes. Failures at San Jose State put an end to that. The move now is to return to the original idea: high failure rates and dropouts are features, not bugs, because they represent a way to thin pools of applicants for potential employers. Thrun is moving to an area where he is unaccountable, because accountability is hard.
It’s tempting to say good riddance, but I would add just one more thing. Despite giving up on equality of outcomes, Thrun still believes he is in the education business. Fast Company still believes he’s in education. So do a lot of policy makers.
And it’s quite possible to go to a model of education that sees its primary goal as thinning the herd. Such systems have existed in many places throughout history. There is no reason that this version of education can’t come back, and every day we allow Thrun to pretend he is not running from accountability is a day we move closer to such a model. That future involves an “uphill climb” for the people who need our help the most, and I’m hoping we can avoid it.
Cross-posted from e-Literate.
After reading an excellent post by tech-blogger Jon Udell on innovation, I spent the weekend getting reacquainted with the work of Eric von Hippel, the researcher who pioneered the study of user-driven innovation.
What’s interesting about von Hippel is that his research hits on the common themes of the open education movement, but does so in a slightly different key.
Briefly, there are a number of intersecting debates about MOOCs. There is what Reich frames as the Dewey/Thorndike debate about what learning is. There is the centralized/decentralized debate about what the web does best. There is the debate about whether MOOCs are disruptive or innovative or neither, and the discussion over how much ability to remix teachers need to make classroom learning work well (the answer, probably, is quite a bit).
But people on both sides of the debates are often driven by a larger question that we are not naming directly enough: “What are the sources of innovation?”
This is the question that von Hippel has been investigating for over thirty years now. And if we see innovation not as something that has happened, but as something we want to continue to happen, this may be the most important question of all.
The traditional answer, says von Hippel, is that product industries (“suppliers”) are the innovators. In this view a company comes across a set of “sad users”, finds what their problems are, and designs (via research and development) a solution.
But is that really how things happen? Since the 1980s von Hippel has been looking at the history of “transforming” innovations in various industries. These are innovations which haven’t just offered a slightly better or slightly cheaper product, but ones that have radically altered what is possible in an industry. A great example of such a transforming innovation is the center pivot irrigation system, considered by many to be on par with the invention of the tractor in the history of agricultural technology:
Before the center pivot system, farmers had to draw water from a single well and then pipe that water throughout the farm. The fundamental insight of the system was that instead of piping the water all over the farm (with the resulting leakage) you could drop a well in the center of a section of crops, and then use a gigantic rotating sprinkler to irrigate a large section of crops from that well. If you’ve ever flown over the country, you’ve seen what such farms look like from the air:
What von Hippel points out is that major innovations like these almost always come not from suppliers, but from “lead users”, a set of highly motivated and skilled users for whom the current technology or practice is restrictive. In this case, for example, the first center pivot system was created by an individual in the 1950s who wasn’t initially looking to market it, but simply to solve the set of local problems he was facing:
The Valley Corporation then came in later and perfected it, allowing it to work more flawlessly, with less user intervention. They perfected it and prepared it for mass adoption. But the innovation was not theirs.
Look under the hood of between 75% and 80% of all major innovations, and this is the story you find again and again, from the first heart-lung machine, to the development of wider skateboards, to protein-enhanced hair conditioners. On the web, people were running makeshift blogs well before Blogger, net-sync’ed folders well before Dropbox, video + question sequences well before Coursera. What smart companies do, for the most part, is not “innovate” but find what “lead users” are hacking together and figure out how to make that simpler for the general population to tap into. Research often plays its most important role after the fact, not in producing designs, but in allowing us to determine which lead-user designs work best, and to understand what, exactly, is making them work.
EDUPUNK and User Innovation
For many readers, this process may call to mind the EDUPUNK wave of 2008. The term was coined by Jim Groom in a conversation with Brian Lamb and subsequently extrapolated on by a number of edubloggers, eventually hitting the New York Times (if I remember correctly) as a word of the year.
What some may not remember is that the coining of the term was a reaction to the announcement of Blackboard that they were moving to a Learning 2.0 platform, one that would (supposedly) integrate the technologies they had worked so hard to keep out of education because they weren’t perceived as serious or safe.
Lead users like Jim had gone out and done their own thing, hacking together syndication feeds, wikis, and modded themes into a workable replacement for a learning management system that did far better at meeting the emerging needs of the open classroom. And just when it looked like they were out of Blackboard for good, Blackboard came up with this system of blogs and “2.0” features which replicated much of the functionality, but at the cost of hackability. (And with a price increase to boot!) Here’s Jim in that piece:
Corporations are selling us back our ideas, innovations, and visions for an exorbitant price. I want them all back, and I want them now!
Enter stage left: EDUPUNK!
My next series of posts will be about what I think EDUPUNK is and the necessity for a communal vision of EdTech to fight capital’s will to power at the expense of community.
I’ve never fully gone for the “capital’s will to power” bit of that, although I know that piece remains important to Jim. But for me the piece that resonated — and still resonates — is the disturbing vision of an educational-technology-complex that is aligned against the communities of innovators that it supposedly serves.
While a company like Blackboard, which produces tools to create things, may seem qualitatively different from an irrigation system company, it’s not different in the respect that it codifies practice. To the farmer coming up with an irrigation plan, the range of devices and options available to her are just as much building blocks in an overall design as is the Blackboard gradebook or discussion forum.
As with other industries, most of the practice that Blackboard codifies (and the rudimentary architecture to support it) was developed outside of Blackboard by user innovators. And that’s fine. But the message Blackboard sent (and I think intentionally sent) over the years to skittish administrators was “Now that we’ve offered these innovations in the product itself, you can rein in all your experimenters and put them back in the box.”
As Jim so rightly points out, such actions and attitudes destroy innovation communities rather than foster them. And it’s not just Blackboard either. The entire education reform-industrial complex has often waged war on educational communities, based on the perception that questions of educational practice are mostly solved, and if we could get teachers to just teach using the centrally specified method (or foundation-approved test) we’d be set. Technology thought leaders even make bizarre claims that there is no innovation going on in education, outside, of course, the Silicon Valley entities here to save us.
People have termed this approach “a war on teachers”. It’s that, certainly. But since a subset of those teachers are where the innovations of the future are likely to come from, it’s a war on innovation as well.
The Sources of Educational Innovation
Once we see the question “What is the source of educational innovation?” as a core question of the debate, certain things become clearer. In fact, the answer an individual has to that question is probably highly predictive of what technologies they favor.
The current breed of xMOOCs emerged as a fluid hacking together of different educational elements in places like Stanford. In this environment, teachers using the system were encouraged to extend and supplement the product through both technological and pedagogical innovation.
But as Bob Dylan would say, things have changed. As MOOCs have reoriented to see a significant piece of their customer base as providers of blended learning (rather than the students themselves) they have failed to invite that user base into the culture of innovation, presumably due to their erroneous belief that innovation begins at the top, then filters down to the masses. The licensing, technology, and content, and supporting community are all designed to preserve their innovation as shipped, in an effort to protect it from the users.
On the other hand, EDUPUNK technologies (varieties of cMOOCs, ds106, FemTechNet, Open Course Frameworks, P2PU) have continued to engage their users, asking the users to experiment, remix, hack, and redistribute. They are, in the words of von Hippel, “user innovation toolkits” which encourage users to alter, and even subvert, given designs. Because they codify much practice in convention rather than code (see, for example, the use of tag-based RSS and the harnessing together of readily accessible technologies) they retain a fluidity that promotes experimentation. They are, in a word, so EDUPUNK.
You can look at either of these paradigms, and ask which one is more innovative, or which one fits with your model of education. We can ask which framework is more effective or more suited to various local conditions. But the key question for administrators and policy makers is not just which system is more effective today, but which framework will continue to grow and adapt in the future.
And on this question the historical record is fairly clear — open frameworks which allow lead users to hack are the systems that will produce long-term gains. As a case in point, take Lego Mindstorms, a project built over 7 years by LEGO engineers which was significantly improved by user hackers within three weeks of its release.
Rather than fight against those hackers, LEGO decided to embrace them. And maybe this is where I differ from Jim in this respect — I don’t think gutting user communities is necessary for for-profit enterprise. Counterexamples like the one below show that the interests of investors and users can be aligned. In fact, given LEGO’s explosive growth in the face of a recession, one could see a more enlightened capitalism as a force for good:
I believe that this idea of fostering user innovation informs the rhetoric of Instructure around the Canvas LMS (the reality will emerge over time). It’s the business plan of Lumen Learning’s Candela OER Project, which acts as a publisher, polisher, and integrator of products produced and maintained by their user base. It’s something along the lines of what Alan Levine is proposing in his recent Shuttleworth grant proposal.
And at the same time, it is the antithesis of much of what we see out of Silicon Valley, which, not well versed enough to invent the wheel, reinvents instead the tree-trunk roller, and then mounts a campaign to get lead users to give up their makeshift wheel-and-axle systems as too ad hoc.
The situation is further complicated, because local knowledge is “sticky” in two major ways. First of all, many educators and educational technologists have extensive tacit knowledge of what works that is difficult to express to people who design products. As von Hippel points out, when such knowledge is sticky at the point of use (in this case the classroom), it makes sense to push design functions downstream.
Knowledge is also sticky in another way in education. It resists generalization. Despite what Udacity might tell you, there is no “magic formula”. Rather, there are dozens, perhaps hundreds of magic formulas: the success and applicability of which are determined by the subject and skills being taught, the specific capacities of the students, and the nature of the local learning environment. What works in one situation is not always applicable to other situations.
When knowledge is sticky in this way, the importance of hackability to innovation is even greater. Yet while industry moves more and more towards recognizing the importance of user-driven innovation the educational-reform-industrial complex still treats such innovation as a disease in need of a cure.
The Last Innovators
The truth is that Salman Khan, Sebastian Thrun, Andrew Ng and others know this at heart — they are all, in fact, former lead users who solved their own problems with technology and then took their solutions to a broader market. And that’s wonderful: we’ve benefited from their contributions.
But they are only a fraction of a fraction of the user innovators out there. We can’t afford to regard these figures as the last innovators to ever walk the earth. If we wish to engage in ongoing innovation, we need to focus on generating conditions that foster more communities of more such people, not fewer. That means making sure that educational technology is as hackable as farm equipment, shampoo, and skateboards. That means choosing technology for your campus based on what your most creative and effective users need, so that they can advance your local practice, and steering away from lowest-common-denominator technology. It means looking to our practitioners to lead the way, and then asking industry to follow. And ultimately it requires that we cease to see innovation as a set-and-forget product we buy, and engage with it as a process and a culture we intend to foster.
Photo/Image Credits: Center pivot system: USDA, via Wikipedia; Kansas fields: U.S. satellite image via Wikipedia; Center pivot prototype: T-L irrigation; Jim Groom as EDUPUNK: bavatuesdays; Tree-trunk roller: Jonnie Hughes.
- It details the preliminary “impressions” of professors engaged in a three year study that will end in 2016. Despite having run flipped classes, they are in week three of that study.
- It mentions that flipped models might not work for philosophy, because it’s difficult to come up with “real-world problems” to which one could apply philosophy. This despite the fact that philosophy classes (like many humanities classes) are largely already flipped.
- It has no mention of sample size, methodology (other than the most basic information), controls, or quantitative findings.
- It is not clear whether the teachers teaching the flipped classes had any training or experience in the methodology, despite what looks like a depth of experience in lecture methodologies.
- Hilariously, the article dates the flipped classroom trend to 2007.
What’s more depressing than this is the mass of otherwise intelligent people on Twitter seeing this as either supporting or rejecting nuanced claims. Come on, people.
Asking whether flipped classrooms “work better” is like asking which medication or treatment works best for someone’s psychological problems. Not a specific problem, mind you, just psychological problems in general. What’s the one best pill/treatment at any dosage for depression, schizophrenia, ADHD, and/or agnosia?
Well, what you’re actually treating matters. Medication dosage matters. Therapy method and frequency matters. Therapist competence matters. Regimen compliance matters.
What research actually checks is whether specific regimens are effective for specific problems in specific sorts of cases. When we see good outcomes replicated across a variety of situations, or great outcomes replicated within very specific situations, we label that regimen “promising”, which is where I think certain flipped practices are today. But the details in a thing like this are the whole point as far as research is concerned. So a story that removes the context, student profile, and methodology might as well not be written (or cited) at all.
Jon Udell gets it 100% right:
“Thanks to the philosophical foundations of the Internet — open standards, collaborative design, layered architecture — its technologies typically qualify as user innovation toolkits. That wasn’t true, though, for the Internet era’s first wave of educational technologies. That’s why my friends in that field led a rebellion against learning management systems and sought out their own innovation toolkits: BlueHost, del.icio.us, MediaWiki, WordPress.
My hunch is that those instincts will serve them well in the MOOC era. Educational technologists who thrive will do so by adroitly blending local culture with the global platforms. They’ll package their own offerings for reuse, they’ll find ways to compose hybrid services powered by a diverse mix of human and digital resources, and they’ll route around damage that blocks these outcomes.
These values, skills, and attitudes will help keep a diverse population of universities alive. And to the extent students at those universities absorb them, they’ll be among the most useful lessons learned there.”
What I like best about the post is what a hopeful message it brings. You can see the recent rise of the xMOOC/neo-LMS as a giant step back (and I have at times felt that way). Conversely, you can see it as creating exactly the sort of problems we’ve spent the last decade building toolkits to solve.
We were built, my esteemed peeps, for precisely this moment. That’s not such a bad thing at all.
Moving on, the report’s suggestions around use of blended learning and the impact of student success initiatives on cost are fairly familiar recaps of approaches readers of this blog will already know, and given the time this blog has spent on those issues, I’m not sure digging into them again is worthwhile. On the whole, I’m supportive of the idea that retention and decreased time to degree are the most promising foci for anyone wanting to increase the impact of money spent on education. And I do believe that blended approaches (combined with high-quality resources) can address some of our challenges around quality, cost, and access.
Rethink College (System) Architecture
The more interesting piece of the article, and the one I would like to talk about, is Anya’s plan to save the flagship research university by seeing the state university system as a system.
In this section, Anya details a new organization for a state college system. In state college systems as currently designed, there is a lot of overlap in roles, and much unnecessary competition between institutions. In Anya’s plan, different colleges would have more defined roles, and work together (hopefully) in synergy. Most current colleges are transformed into “Cohort Colleges”, non-residential experiences that offer a small range of general purpose degrees. Adult online is split off into its own entity, and research flagships remain close to what they are currently.
There’s stuff to like in this reformulation, though I think it’s the principle (move campuses back towards working as a system) that’s more important than the specific practice. The plan also attempts to deal with something most such plans have avoided: the tricky question of how to support research universities as we move away from the traditional cross-subsidies of higher education. In the system as outlined, flagship research universities exist in much their current form, serving the specialized needs the cohort colleges cannot. The interesting twist is the funding: the flagships continue to be heavily subsidized in return for providing educational services (open content, analytics, assessments, infrastructure) to the rest of the system.
This is very much the direction we need to go — the fact that states are not currently producing such materials is criminal. Moderate investments in the production of such resources could improve the quality of education, and could reduce total cost of attendance immediately by providing high-quality textbook replacements. Keeping those materials open would allow faculty to make them even more effective.
Of all the ideas in the paper, it is this one that intrigues me the most. It is one of the best attempts I’ve seen to save the research university from unbundling.
At the same time, the concept needs some tweaking. The system as described looks a bit too much like the MOOC system of today — elite institutions pushing out materials to smaller institutions in a broadcast model. We know that successful models will have to be more collaborative in nature.
There is also the question of whether flagship research universities are best positioned to build such materials. Anya notes that the lack of diversity of students at such institutions must be addressed if the materials they produce are to be relevant to a general population, and this is true — we saw this most recently with the San Jose State Udacity pilot, which made farm league errors in implementation that your average community college would have known to avoid. But you can’t just swap in different students. The faculty that build digital resources to teach the general population should ideally have expertise in teaching the general population, and expertise takes years to develop.
Who has that expertise? In certain disciplines, a community college professor ten years into a career has likely taught 100 sections to a research university professor’s twenty. They’ve also likely taught a broader variety of students, have more exposure to blended and online modalities, and have received more instruction on teaching than their more research-oriented colleagues.
If the flagships truly hire the best instructors they can find to develop and test materials, the likelihood is that many of those instructors will be from outside the flagship institution. This is in no way to disparage faculty at research universities doing wonderful things in the classroom — I work with such individuals every day. But everything we know about expertise says that many world-class *teaching* experts are likely to come from the pool of people teaching multiple sections of the same class semester after semester.
And this is where the model begins to crack. The best teachers aren’t going to be the best researchers, so to the extent we hire the best teachers to put together these courses, our research cross-subsidy disappears again.
Still, I think this idea has merit. There’s at least a small need for subject matter experts in esoteric disciplines, and in the end, perhaps society has to just learn to eat the cost of research. Certainly the idea that state university systems should be in the business of producing materials is a strong one; the best configuration to achieve that is murkier.
On the whole, there’s a lot in the paper that will look familiar to those following the policy debate of the past couple years. The two recommendations that stand out — for restructuring work roles of employees and restructuring institutional roles within the system — are problematic, but provide some creative thinking and good starting points for discussion, which is what a paper like this is supposed to do.
Tanya Joosten asks on Twitter whether anyone has any best practices for reusing MOOCs. I’ve been looking at this with Amy Collier, Helen Chen, and others, but we’ve tended to focus on the question of how to create MOOCs that make reuse easier. However, it’s not a big jump to flip that perspective around to the instructor view.
So briefly, what are some things we’ve found? The first thing is that the major hurdles to reusing MOOCs look, for the most part, like the hurdles to moving to any learning-centered paradigm. The sorts of issues that faculty encounter look like the sort of issues you see with any move to something like a flipped classroom, team-based learning, or peer instruction paradigm.
While that may seem obvious, it really can’t be emphasized enough. Using a MOOC is not going to solve your cultural problem for you. You’re going to have to learn to stand back. You’re going to have to be much clearer with students about expectations than you are used to, and you’re going to have to explain the “why” of the pedagogy to them as well as the how. Pick up a book on blended learning (there are plenty of good ones, but if you can wait until December, I’m going to particularly recommend this one).
That said, there are some particular issues/opportunities that arise with using MOOCs that are worth mentioning.
Finding a MOOC
Make sure you’re allowed to use the MOOC in your classroom
All the major MOOC providers currently disallow classroom use of MOOCs without permission. There are exceptions: Stanford Online allows reuse (though I believe individual instructors may set additional restrictions), and many of the Canvas Network classes are free to use in your class.
If you want to use a course from Coursera, edX, or Udacity in your classroom, you will need to talk to them and obtain explicit permission. They may ask your institution to pay a fee, or place certain restrictions or requirements on your reuse.
Think about the schedule and availability of the MOOC
You might be on a quarter system using a semester system MOOC, or vice versa. We don’t recommend syncing up completely with the MOOC (see below) so some of these things can be worked around. But make sure at the very least that your students aren’t going to be shut out of the MOOC halfway through the class. Know how long the materials are available after the MOOC ends. Know what sort of materials are available before it starts.
Check the focus and prerequisites
Textbooks tend to be “overspecified” — that is, there is far more material in a textbook than you would use in any one class. This allows you to customize the class’s focus and level to the students.
MOOCs tend to be one very specific view of the subject matter, with little to no extra material. This means you have to think a bit more carefully about the “fit” of the MOOC with your students than you might with a text.
Look for a MOOC with remix and reuse rights.
If you do find a MOOC you can use in your classroom, pay attention to your specific reuse rights. David Wiley talks about 4R’s openness: the right to reuse, revise, remix, redistribute. Most current MOOCs don’t permit all these rights, but some are better than others. Can you post material from the MOOC on your class blog? Use pieces of it in your LMS?
Even the right to do a small amount of integration with the materials can go a long way towards making the experience a coherent one for students.
We’ve also talked to one person who has moved to pulling from multiple MOOCs, and recommends that practice. That’s the ultimate remix, and a practice worth considering if you have the time (and legal rights) to pull it off.
Starting the class
Don’t try to stay in complete sync with the MOOC
If the MOOC is running at the same time as your class, it’s tempting to try and stay in sync with the MOOC. Fight that temptation. You may, in fact, end up staying in sync with it, but setting up that expectation with students is going to make falling behind feel like failure.
Additionally, there are lots of advantages to staying a week or so behind in the MOOC. You have more time to plan as an instructor, and the quicker students have an opportunity to “work ahead” if they want.
The one caveat to this is that your students may want to get a completion certificate for the MOOC, and this may require that they complete the course in a certain time frame. Find out what that time frame is before the class, and encourage students who may want to work ahead a little to do so.
Set expectations and explain the why
This is blended learning 101, but it’s perhaps even more important here. When you are flipping class using your own videos, students recognize the work you put in, and respect your expertise. When you are patching together many videos from different sources, students can see that curation as an expression of your expertise, and respect the amount of time you spend constructing the experience.
In this case, however, you may be handing over the entire lecturing piece of your curriculum to one person at a top tier institution. If blended learning can look to some students like the professor is slacking, using a MOOC can look like a complete punt.
As such, it’s incredibly important to explain to the students that the lectures free up your time to do the more difficult work. Again, you can learn from other blended instructors here. I used to tell my students the Bloom’s Two Sigma story (how the best performance comes from one-on-one tutoring) and talk about how our Team-Based Learning structure was meant to provide that sort of interaction. Your mileage may vary, but have a speech ready.
Don’t use the word “experiment”
This comes more from my own experience running other de-centered pedagogies than from anything we learned talking to instructors, but never tell your students they are part of an experiment. Tell them it’s new, and it’s innovative, and that you seek their feedback. But telling them it’s an experiment is read by some students as permission to fail.
Running the class
Localize it, criticize it
You’re the one charged with making this experience not feel like the generic, warmed-over leftovers of an Ivy League education, so look for opportunities to bring the local element in.
Projects are a great way to do that. With people we’ve talked to, the projects are the piece that makes this feel less like a B.F. Skinner experiment, and helps the students to see the value of the work they are doing.
Small things may help as well. You might want to dialogue with the videos — if you disagree with the way something is explained in the videos, provide an alternate explanation. Supplement lectures with local examples that mean something for the students. Don’t be afraid to put yourself in the frame a bit. The last thing you want is for your students to see everything in the MOOC as wisdom brought down from the mountain. Treat the MOOC as a text, and push your students to think and engage critically with it.
Make sure your students are connected to one another
In the MOOC experience we’ve been pitched, the students in your class are all going to reach out to a student in Brazil at 2 a.m. when they have a problem. In reality, this is not the case. Students in Blended MOOCs tend to rely on other students in their class, not on the global cohort.
That’s OK. In fact, that’s what we want — students bonding with one another over classwork is one of those prime indicators of persistence to degree. Remember though that your students are going to need methods to communicate with one another and organize that are not provided by the MOOC. Maybe that’s an LMS, or twitter, or knowing each other’s email addresses. Whatever you decide, make sure your students are connected, and encourage them to reach out to one another.
Make sure they aren’t too connected
A joke, partially. But because many MOOC activities feel like trying to “beat the machine”, and because the MOOC feels like a far-away entity, your students may not know what constitutes cheating and what constitutes collaboration and support. Set clear guidelines. Explain that what constitutes cheating in the MOOC constitutes cheating in class.
This is a difficult line to walk, and it was a major worry of at least one professor we talked to. But like most things, a little clarity up front and occasional policing can go a long way.
Align your assessments
Maybe it goes without saying — but you can’t switch from the textbook you’re using to the MOOC and give the students the same test you gave last year. Do the work of thinking through what parts of your tests or assignments align with the material covered and practiced in the MOOC. If possible, set up an appeals process to catch and deal with the inevitable mismatches.
Finishing the course
As mentioned above, students who want to get the MOOC certificate may have to meet a certain deadline. Set up support for them to do that if you are trailing the global MOOC cohort. If the MOOC continues after the class is finished, offer post-class support.
In an ideal world, the MOOC runs shorter than your semester or quarter, and finishes a bit before your class, giving students time to complete it, study for your personal final exam, or finish out a project. However the MOOC is structured, make sure you allow adequate time to wrap up your local version of the class, and give students time to reflect on the experience.
Don’t forget to survey
You may wish to survey your students during the class (I highly recommend Brookfield’s Critical Incident Questionnaire as an unobtrusive way to do that). You may not.
But at the very least, survey your students at the end of the experience. Amy Collier (over at Stanford Online) has a very rough draft of a student survey you can use, and hopefully help us improve. You might also look at using the Community of Inquiry survey, which is a validated, well-respected tool for assessing cognitive, social, and teaching presence in the classroom. Since so many of the struggles with MOOCs revolve around issues of presence, it’s our (as yet untested) belief that a survey like this could provide deep insights into the strengths and weaknesses of individual course designs. If you’d like to work with us on a research project around this, let us know.
Is Domain of One’s Own a platform? The short answer is no, not in the traditional sense.
But I came across this 1995 post from Dave Winer about what a platform should be, and there is some definite resonance with his conception of the idea:
A platform must have potential, or open space. I call this blue sky. The platform’s API must show thru enough power so you can do anything on top of it. That’s a very elusive idea, hard to define. You want an API to put limits on the problems it deals with, but you also want to leave open the possibility that any developer could pervert the API to make it solve problems that the inventor couldn’t imagine. The author of an API is offering a challenge, saying “blow my mind,” to everyone who might take a stab at implementing something on top of the API.
He then winds up at this simple definition of a platform:
A platform is “a blueprint for the evolution of a popular software interface or specification.”
DoOO is not a platform in the traditional sense, as you have to do nontrivial acts of metaphorical violence to talk about the “APIs” of it. But in this larger sense I like to think it is very platform-like. It does create a blueprint of what education could and should look like, via both the included applications and the extant examples that drive it. There are limits on the type of educational projects it is designed to support. It’s not your Student Information System or your analytics backend. Much of the course hub architecture is built on assumptions of open publishing. It does very much capture a specific vision of what net-enabled education is.
But it’s also wide open, and I think nothing captures the spirit of it better than this:
[Domain of One's Own] is offering a challenge, saying “blow my mind,” to everyone who might take a stab at implementing something on top of [it].
Neither here nor there, I suppose, but I was struck by how Winer’s words resonated against it.