I Have a Research Question About MOOCs That Your Elite Institution Can Answer in Under an Hour

I’ve been really curious about how much (and in what ways) xMOOC students use forums. And I can’t find any good data on it. Not even a “per capita visits to forum” number.

This is pretty suboptimal for the field, since one of the most frequently cited advantages of MOOCs is the conversation they allow between students. Now, there is a question of whether these online conversations have any learning impact. But we haven’t even gotten to the basic question of whether the median xMOOC user uses the forums at all, outside a few brief scans. That seems insane to me.

Fortunately, if you work at an institution that offers a Coursera MOOC, you can answer that question pretty easily. What’s more, you can be the first to do it.

Here’s the data export procedure and SQL schema (thank you for the link, George Veletsianos!):

coursera_data_export_policies_and_sql_schema.17dec2012

There are some unique challenges, due to the anonymization of user IDs on a per-session basis. But post data is not anonymized, so you could get a metric like median posts per user, or percent of users who post. And on the forum-reading side, perhaps the percent of sessions involving a forum visit?

I’m not sure how hard it is to piece the database together from the download. But once the database is up and running and the document read, a competent SQL coder could get at these sorts of questions in a matter of minutes. These are very simple sorts of SQL statements.
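To give a sense of how simple, here’s a rough sketch against a toy SQLite database. Every table and column name below is invented for illustration — the real export schema is in the linked document — but the shape of the queries is the point:

```python
import sqlite3
import statistics

# Toy schema with made-up table/column names; the actual Coursera
# export schema is described in the linked document.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INTEGER PRIMARY KEY);
CREATE TABLE forum_posts (post_id INTEGER PRIMARY KEY, user_id INTEGER);
""")
conn.executemany("INSERT INTO users VALUES (?)", [(i,) for i in range(1, 11)])
# Only 3 of 10 users ever post: one heavy poster and two light ones.
conn.executemany("INSERT INTO forum_posts (user_id) VALUES (?)",
                 [(1,), (1,), (1,), (1,), (2,), (3,)])

# Metric 1: percent of users who post at least once.
(posters,) = conn.execute(
    "SELECT COUNT(DISTINCT user_id) FROM forum_posts").fetchone()
(total,) = conn.execute("SELECT COUNT(*) FROM users").fetchone()
pct_posting = 100.0 * posters / total

# Metric 2: median posts per user, counting non-posters as zero.
counts = [row[0] for row in conn.execute("""
    SELECT COUNT(p.post_id)
    FROM users u LEFT JOIN forum_posts p ON p.user_id = u.user_id
    GROUP BY u.user_id
""")]
median_posts = statistics.median(counts)

print(pct_posting)   # 30.0
print(median_posts)  # 0.0 -- the median user never posts
```

Note the LEFT JOIN: it’s the one subtlety, there so that users who never post count as zeros rather than vanishing from the median.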

I don’t think these will answer any research questions definitively, but they will help us refine the questions that we ask in very helpful ways. And that, in turn, will help us improve education. I’ve seen a lot of institutions say they are doing Coursera courses as a charitable effort, to help improve education for all. Data is essential to that effort, so now is the time to see whether the rhetoric reflects reality.

Show me the data!


Peak Demo is Real

I’ve known (ever since my own short stint in higher education marketing) that the demographics of the U.S. are a bit dismal from here on out. The situation, which I’ve jokingly referred to as “Peak Demo”, is that most growth in higher education has been funded by the 17-to-24-year-old set, and that demographic starts stabilizing (and actually shrinking slightly) as the so-called “echo boomers” move into their late 20s and early 30s. Somewhere in the mid-aughts was peak demo. We’ll bottom out around 2020, and, barring international students and changes in immigration law, we won’t get back to 2005/6 levels until around 2030.

[Image: U.S. population pyramid]

(From the excellent stats site NationMaster)

That has really ugly implications for higher education institutions, which used the echo-boom years to expand, rather than transform (Oops!). It means that transformation now has to be accomplished while being run down by the three horsemen of the edu-pocalypse: Baumol’s cost disease, rising medical cost demands on GDP (with attendant declines in state support), and, now, lousy demographics to boot.

All this is old hat. What I didn’t realize until a tweet by Scott Leslie (to Brian Lamb, who is doing his own stint in marketing) is that the demographic situation in Canada is even more dire:

[Image: Statistics Canada enrolment projection chart]

(From Postsecondary Enrolment Trends to 2031: Three Scenarios)

Wow. And of course, there’s Western Europe, which has been imploding for a while (but maybe has the advantage of having not expanded during an echo boom):

[Image: Italy demographic chart]

No real poignant observation here; it’s just interesting to see this issue outside a U.S.-centric context.

(and, yes, if you are Kenya, it’s a whole different ball game)

Thank you, Cathy

From Cathy Davidson’s announcement of her new MOOC on the History and Future of Higher Education:

It makes me sad that, at a time of educational crisis, the ideas seem all to be coming from elite private schools (i.e. schools that do not need to change their base), from corporations (some of which have motives of public good but others of which do not and see a potentially profitable bottom line as a chief motivation for their investment in higher education now).  In terms of disciplines, the ideas are also coming out lopsided:  most are coming from the computational sciences, as if technology alone is the best way to solve a human problem.   To really reimagine higher education, we need brilliant ideas from the best teachers in the human and social sciences and the arts; we need modes of teaching that address real world problems but not with standardized, static modes of video-lecturing but with engaged, interactive peer-to-peer communities that enhance critical thinking and creative contribution.   We need that not just in higher education but, again, in the real world where that kind of in-depth thinking is desperately needed to address the possibilities and challenges of a future where technological change drives economic and social change at a blinding speed.

I’m teaching “The History and Future of Higher Education” online through Coursera next year because I hope to elicit thousands upon thousands of great ideas from people who, every day, are coming up with great ideas and have not had a chance, platform, or opportunity to articulate them to a larger public.

I dislike the use of the word crisis at the top there, a word that limits the narratives one can tell about education in unproductive ways. But much of the rest here is lovely. I know people who have been working in distance education units for ten or fifteen years, who are fountains of knowledge about what works and what doesn’t, and who have superb intuitions about the best routes for improvement. I know people who have been in open education for more than a decade (and some much longer than that) and can tell you where open education has really worked, where it has not worked as well, and why that might be.

And yet we’re all sitting here watching panels of people who discovered online learning and open education yesterday make the same stupid mistakes, tread laboriously over the same ground. Why?

I admire many of the people currently pushing this conversation for their commitment and their concern. I think they are sincere. But they often remind me of the redditors last week, crowdsourcing the Boston bombing investigation, so sure that they were going to prove how slow the “legacy” organizations like the FBI and Boston Police Department were, compared to the magic powers of united tech geeks.

It turns out that the FBI knows a thing or two about conducting investigations, and that some of that slowness was due to concerns the redditors didn’t think through. And so it’s no surprise that the brilliance and commitment of the redditors did more damage than good, as they frustrated the police’s efforts to work methodically toward a solution. From the Washington Post:

In addition to being almost universally wrong, the theories developed via social media complicated the official investigation, according to law enforcement officials. Those officials said Saturday that the decision on Thursday to release photos of the two men in baseball caps was meant in part to limit the damage being done to people who were wrongly being targeted as suspects in the news media and on the Internet.

(By the way, that’s not where the damage ends — read that article when you get a chance, it’s a real trip.)

I may be being overdramatic, but this is what it often feels like to me. A bunch of amateurs, convinced of their native brilliance, working in ways that not only don’t seek insight from the people who have actually been trying to expand educational access for decades (read: public colleges, including community colleges, open and distance education units, academic tech outfits at under-resourced institutions, the original MOOCers, the IRRODL set of researchers, and heck, even one or two for-profits), but in ways that actively frustrate and undermine our ongoing efforts. (I suppose this is the institutionalist in me coming out again…trash my statism in the comments, I’m used to it…). And the problem is that the damage these education “redditors” can do is much, much greater than what happened in Boston. We’re talking not just the safety of a city or the reputations of a few, but the future of education here. The stakes are pretty big. It’s probably time we had a talk.

So good for Cathy for bringing “legacy” voices into the discussion. Who knows, maybe this is a MOOC I could actually finish. It’s certainly one I plan to start.

The Bloat

Last night, shortly after falling asleep, I had bizarre dreams, dreams likely created out of the post Brian put up, a post largely about whether institutions block social progress or facilitate it.

Dreams aren’t really interesting to read on paper, so I’ll spare you the imagery. But the subject of bloat kept coming up. All that administrative bloat. The word worked its way through dreamscapes seemingly co-authored by Lewis Carroll and Harold Pinter. Bloat, bloat, bloat, bloat, bloat.

I woke up at 1 a.m. One of those startled awakenings where you are so sure something has happened that you have to go online and see.

Something had happened. A campus officer had been shot, fatally, at MIT. I looked at the picture of the poor guy. He got up this morning to do his job, just another day. Surreal and unfathomable.

The internet churned. There were two suspects, and they were at large. They were likely related to the marathon bombing. Reddit was on the case, though – they had determined that the “white cap” suspect was actually a suicidal missing ex-Brown student. Side-by-side pictures proved it. Reddit identified suspect #2 as Sunil Tripathi, a kid who had gone missing weeks earlier, whose family had spent the last month frantically putting the word out, hoping that someone would find him before he came to harm.

Commenters began informing his parents via Facebook that their son was most likely the terrorist. They found a person who had predicted Sunil would engage in a “public suicide”. Creepy, prescient! They wondered at a world of bureaucracy so muddled that the FBI hadn’t even connected the missing-persons case to the bomber case, whereas the hivemind had solved it in mere hours.

Somewhere along the line a policeman allegedly muttered Sunil’s name over a scanner. It was confirmed! Reddit had been right. Tweets began linking to the historic thread. Reddit had beaten the FBI.


The name “Sunil Tripathi” trended worldwide on Twitter as people linked the thread, the photos, the family’s Facebook page. Bloggers blogged. People on Reddit talked about commenting on this “historic post”. They mocked the FBI, saying the entire bureau should be outsourced to a subreddit. CNN too! They argued (of course) over who had been the first to post the connection.

There was a lot of whuffie to be divvied up.

Meanwhile, while cable news spun in circles and Reddit mocked the FBI, tweets from newspaper reporters in the Boston area told a different story. The suspects had not been identified. Police were cautioning people against drawing connections. The local reporters uploaded video from the chaotic scene as they stood in a locked-down neighborhood with an armed terrorist running loose.

I think most of you know how it ended. It wasn’t Sunil Tripathi.

Bloat, bloat, bloat, bloat, bloat. The campus officer, that bit of services “bloat”, died a hero, trying to protect his campus. The local newspaper reporters, those dinosaurs, were the true media heroes, careful in their analysis while putting themselves in harm’s way to get it exactly right.

Bruce Sterling’s latest SXSW lecture came to mind. I disagreed with much of it, but this struck me:

And in conclusion: how can we get past the wow factor? How can we really inquire with this? How can we treat this with moral seriousness?

I think the first step, really the proper step, is to accept that our hands are not clean. We don’t just play and experiment: we kill.

We have a revolution everywhere that claims there’s nothing to lose, and everything to gain. Which is really just a way of saying you’re accountable, I’m not. Winners don’t play by the rules. Our successes are successes and our failures are irrelevant.

But the much-maligned bloat is very often the accountability. When you take accountability for public safety, journalistic truth, graduation rates, compliance with disability legislation, human subjects rules, mail delivery to everyone instead of just urban centers – bloat, bloat, bloat, bloat, bloat. And a lot of that bloat should go, or be streamlined, or updated, or moved from rigid procedure to individual ethical calls. We need to be more nimble and more agile.  

But we also need to be aware of why the bloat is there in the first place. Sometimes it encodes the things we most value as a society — our children’s safety, a family’s reputation, success for all. Coming to the table saying you’ve solved our problems by ignoring those values isn’t agile — it’s sociopathic.

I sympathize with both sides of the institutions debate. The price of admission is not about where you stand on that spectrum.

But do you come to this with moral seriousness? Do you take responsibility for who your chosen future might harm? Do you act by an identifiable code and have you spent time thinking through the implications of that code? Are you just spitballing, or do you have the courage of your convictions?

Everyone else can take a walk.


Intro Psych OOC Rolls On Without Me

Got a note from Larry Welkowitz, who is teaching the Canvas.net MOOC we were putting together when I left. They haven’t got through QA yet (Canvas.net has a course QA process), but Larry recorded his Hello video:

Larry is going to be excellent. You already see that here — he wants to shoot outside for an informal feel, but doesn’t want the audio overwhelmed by traffic or street noise, so he has someone shoot from inside the car outward. This is what you get when you put a person with years of vlogging and podcasting experience as the teacher in one of these courses. These are the kind of teachers we really need to attract, people with the disciplinary knowledge who also have this idiom down. (Hint: They aren’t all at Harvard and Stanford. In fact most of them are not.)

As for the course itself, I left it a bit of a mess — it was (and is) an experiment in designing a course backwards from a set of open materials, essentially a reverse engineering of the course followed by a difficult reintegration. Initially, it was meant to show how quickly you could put something Coursera-like together yourself from open materials (you can), but it quickly became an investigation of how to make a truly kick-ass course. It wasn’t there when I left — it looked like a garage halfway through an engine rebuild — but I left it in good hands, and I am excited to see what they do with it.

I will link to the course registration as soon as it is up.

It’s actually the potential of the course as a freely shareable piece of courseware (the OOC in MOOC) that has me most excited, but more on that later.

Final unrelated note — someone told me the other day that Coursera spends $30,000 a course, and their partners spend at least that. I was supposed to think that was high, I think. I don’t. If building a course is that cheap, why isn’t every institution in the country trying their hand at it? $60,000 is probably less than what your students are spending on Psych 101 textbooks each year…in the grand scheme of things, it’s pocket change. You should be making one of these too. Talk to me if you want to get started.

Introducing the “Distributed Flip”

So I think with the recent San Jose State news people may finally start to pay attention to the use of MOOCs and MOOC-like things to support blended learning, a match-up we’ve been advocating here for a while.

Good, and glad to see it. Although there is still this pesky little issue of what to call such things. The term “wrapped MOOC” is tied to the M-word in ways likely to be unhelpful in 12 months, and it can get a bit difficult at times to figure out what is wrapping what.

Amy Collier’s been talking about some of the stuff we’ve been looking at (and some of the newer stuff Stanford is doing) as “distributed flips”. When I asked her what was behind the name she told me that “You have a slide deck, you’ve got a presentation tomorrow, you’ve got to call it something.”

(OK, that’s a paraphrase. But it was along those lines.)

Initially I rebelled — aren’t all flipped classroom designs distributed in some sense? But the more I thought on it, the more it made sense.

In most flipped scenarios, content creation is distributed. Sometimes assessment as well. I create my course as a sequence, and then go out and find individually created content that suits my narrative and supports the flow of my course. In this chapter we’re doing standard deviation — grab Video X from Khan Academy. This one is on confounding — pull this material from OLI. But I, and I alone, own the flow. It’s my course with their pieces.

A distributed flip goes further. In a distributed flip content curation is distributed. Sequencing is distributed. Community may be distributed. There are in fact (at least) two separate streams of the course, each coherent in itself, with (at least) two separate curators, sequencers, creators, assessors working independently of one another. The person offering the face-to-face course has to sync up, at least partially, with the other stream or streams. But control over almost every aspect of the course can be distributed through multiple providers using a somewhat looser coupling than traditional digital materials.
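To make the contrast concrete, here’s a toy sketch (names and structures all invented for illustration; no real platform works this way): in a standard flip one curator owns the sequence and borrows content piecemeal, while in a distributed flip there are (at least) two independently curated streams plus a handful of explicit sync points between them.

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    topic: str
    source: str   # who created this piece of content

@dataclass
class Stream:
    curator: str  # who owns the sequencing of this stream
    units: list = field(default_factory=list)

# Standard flip: one curator owns the flow; content is borrowed piecemeal.
flipped_course = Stream(curator="local instructor", units=[
    Unit("standard deviation", source="Khan Academy"),
    Unit("confounding", source="OLI"),
])

# Distributed flip: two coherent streams, each sequenced independently,
# with explicit points where the local course syncs to the other stream.
provider_stream = Stream(curator="external provider", units=[
    Unit("standard deviation", source="external provider"),
    Unit("confounding", source="external provider"),
])
local_stream = Stream(curator="local instructor", units=[
    Unit("confounding", source="local instructor"),
])
# Loose coupling: the face-to-face course only has to line up here.
sync_points = [("confounding", provider_stream, local_stream)]

curators = {s.curator for s in (provider_stream, local_stream)}
print(len(curators))  # 2 -- control is genuinely distributed
```

The key structural difference is simply that `sync_points` is a small, explicit list, rather than every unit being owned by a single flow.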

What strikes me as particularly useful about the term is it sets up a discussion of the different ways parts of the class can be distributed across multiple providers. It also focuses people on the particular questions MOOCs raise when used in the blended format, questions that were with us before MOOCapalooza and will be with us long after MOOCapalooza has been absorbed into other efforts. It’s a term that pulls back the camera a bit and reveals the bigger picture, after a year and a half of tight focus on a small piece of that picture.

Of course, it doesn’t quite have that “word of the year” zestiness…

[Image: Amy Collier’s “distributed flip” slide]

But we’re working on it (it’s a lot of letters!!!!!)

OER as Snapshot vs. OER as “Runnable Code”

Today I was reading Pamela Fox’s blog post about discussions inside Coursera on whether to open up their code. It’s a thoughtful treatment of the subject, and contains, I think, a nice discussion of some of the hidden costs of going the open-source route. It’s great to see engineers at Coursera thinking about this. (See, I am not the anti-Coursera ravebot you all thought I was! I can be nice when I see good things! Show me good things!)

What Pamela ends up on is a desirable compromise — a halfway point between open and closed she calls the “source snapshot” approach:

A “snapshot” is a dump of some part of our codebase, taken at a point in time and copied into a public repository. It may be an incomplete dump (missing dependencies or server-side, e.g.), it would not necessarily be runnable, and it would have no guarantees of being up-to-date or ever being updated in the future.

What’s fascinating is to think of these two models — open-sourcing and snapshotting — in terms of OER, and in particular OCW. Read that paragraph again, and you’ll realize that “Source Snapshot” defines what OpenCourseWare has largely been in the past. Materials from a course (but not all the materials), not runnable out of the box, with no expectation of updates or upkeep.

As Pamela points out, this is a good intermediate step:

The snapshot would still be useful, for developers looking to see how we approached some aspect in the codebase, and also for us to refer to in talks and blog posts. It would also be a way for us to dip our toes into the open source waters, and to see what developers are most interested in. If a particular snapshot got a lot of attention, then maybe one day, when we felt we had the resources, we would turn it into an actual living open-source library and spend the time needed to nurture that community.

And again, we see the parallel here: most institutional use of OCW ends up influencing approaches to course design without the materials being directly reused. To get real reuse, however, you need to go the extra mile.

We’ve talked about this before, of course. There’s this ancient post of mine on Openness as Reuse vs. Openness as Transparency, and Stian Haklev’s amazing dissertation on the history and context of OCW in China makes an expanded version of this distinction central to its analysis (see the “Typology” on page seven for starters). Others have made similar distinctions.

But it’s useful, I think, to see it through this slightly different lens of “snapshots” vs. “open source”. To paraphrase Chris Kelty, an anthropologist of the Open Source community, such analogies are “good to think with” in that they provide new angles on old questions.

In this case, what the analogy allows us to ask is this: If we see open source as a continuum from “source snapshot” to “true” open source, what elements would we need to move from Point A to Point B?

This view sees transparency and reuse not as two separate concerns, but as potential stages of a project’s development. That’s problematic, but it’s also liberating, because the minute we see “OCW as open source” as built on top of “OCW as snapshot”, we see that much of what makes the difference in software would also make the difference in OCW. To move beyond the snapshot phase, OCW would need to be:

  • More runnable. It should be usable, not just viewable. Real repurposable content shipped for immediate use in whatever wrapper (LMS, WordPress, EdX) it runs natively in. No more PDFs. No more quizzes as documents.
  • Dependency-free. It should contain everything it needs to run. No copyright redactions, no references to textbooks not included, no mentions of class activities that aren’t provided in the materials.
  • Community-supported. It would need a community around it, and a conscious (and funded!) effort by the original developer or developing institution to nurture that community through its initial stages.
  • Maintained and updated. It should have a community or institution that commits to the ongoing pruning and extension that such projects require so that they do not become empty husks of linkrot. (See Dave Cormier’s post for how that might work).

I think we’re getting there, slowly. But it’s hard, and the transition from “source snapshot” to “open source” is more resource intensive than many realize. Thinking about it in these terms helps us explain why it is resource intensive, and under what circumstances that use of resources might be warranted. Again, not a perfect lens and not the only lens on this (not nearly), but one that I am finding useful at the moment.