MRI ’13

I’ve just finished attending the MOOC Research Initiative conference and workshop, which felt not so much like a conference on MOOCs to me as the beginning of something else. We kick Courdacity-style MOOCs around a lot, but if all MOOCs did was bring this level of intelligence and insight to problems of online learning, it would be worth it all. There’s a way that black hole of a term is pulling different universes together, and if the culture of MRI ’13 is what results from that, the future looks very bright.

Conferences are always a bit of a disembodied experience for me; I never feel grounded away from home. But this time the whole experience had a warm dream-like quality, and I’ve been trying to parse out why that was. Certainly, part of it was the psychological overload of seeing so many people who I admire in one place. Part of it might be the ice storm that swept in the night of day two, pushing us all together into smaller spaces.

But I think it was actually the conversations that struck me the most. You know how in a dream characters suddenly pop up and have bizarre yet fluid conversations that pull your mind in all sorts of new directions? The conversations here were not that bizarre, I suppose, but they had that rapid fluidity of dream logic, where a sort of conversational shorthand shuffles you quickly past the normal pro/con nonsense and into something much bigger, and much more exciting and mysterious.

Kudos to George, Amy, Tanya, and everyone else that put this thing together. I’ll be processing the experience for weeks to come, and there’s no greater compliment for a conference than that.

Counting History PhD Employment

I used to do more statistical literacy stuff on this blog, and I’m toying with the idea of going back to that. The problem is that the stuff that really tends to matter is stuff everybody thinks they already know, but which most people have not built habits around. It’s not really fascinating stuff to talk about, and most of the time it doesn’t result in huge discoveries, but rather, small modifications to our understanding of claims.

A good example of this is the recent AHA study of history PhDs, which shows surprisingly high employment among them. It’s a great study, and hugely useful. However, the summary contains this line, which I’m sure people will latch onto:

The overall employment rate for history PhDs was exceptionally high: only two people in the sample (of 2,500) appeared unemployed and none of them occupied the positions that often serve as punch lines for jokes about humanities PhDs—as baristas or short order cooks. (italics mine)

In the COMPARABLE framework I used to give my students, one of the first questions you ask is “How was this number computed?” (the “O” stands for “How were the variables Operationalized?”). A quick two-minute scan of the article shows us this:

To identify the career paths of recent history PhDs, the AHA hired Maren Wood (Lilli Research Group) to track down the current employment of a random sample of 2,500 PhDs culled from a total of 10,976 history dissertations reported to the AHA’s Directory of History Departments and Historical Organizations from May 1998 through August 2009. The AHA’s Directory Editor, Liz Townsend, compared the data to employment information in the AHA Directory—which lists academic faculty—and the Association’s membership lists, and Wood used publicly available information on the Internet. Data was collected during February and March of 2013, and reviewed in June and July. Together, AHA staff and Maren Wood identified current employment or status information, as of spring 2013, on all but 70 members of the sample group.

A lot of the time, when you can’t determine the status of part of your sample, you can assume that the unreachable, unfindable people break down into more or less the same percentages as the reachable part of your sample. But how you collect data affects this. In this case, the existence of the American Historical Association directory makes it highly unlikely that there were unfound tenure-track positions, and the public nature of university directories probably sussed out most other people in university positions.

On the other side of things, we can imagine that the most invisible, hard-to-find people would be the ones that are unemployed or work low-paying, low-profile, non-academic jobs.

All in all, I think it likely that tracking down the untrackable would substantially add to the unemployed count, and might even dig up a barista. The research methodology almost guarantees that the roughly 3% of people not found (70 of 2,500) will be primarily people outside the university system.
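To see how much this could matter, here’s a quick back-of-the-envelope sensitivity check. The counts (2 unemployed, 70 unfound, 2,500 sampled) come from the study; the fractions for how the unfound group breaks down are pure guesses of mine, there only to show the range:

```python
# Sensitivity check: how the unemployment figure moves if some share of
# the 70 unfound people turn out to be unemployed. Counts are from the
# AHA study; the guessed fractions are illustrative only.
sample_size = 2500
unemployed_found = 2
unfound = 70

for guess in (0.0, 0.25, 0.5):
    extra = int(unfound * guess)
    total = unemployed_found + extra
    rate = total / sample_size * 100
    print(f"if {guess:.0%} of the unfound are unemployed: "
          f"{total} unemployed ({rate:.1f}% of sample)")
```

Even the pessimistic guess leaves the overall rate low, but it lands an order of magnitude above the headline “two people” figure, which is the point of the exercise.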

So I think this “two people unemployed” business is overstated. Still, the claim that half of history PhDs are employed in four-year tenure-track positions stands despite this, and that remains a rather interesting result.

With that result, there’s perhaps another issue. The initial sample is culled from finished dissertations. But dissertations are often abandoned, and all-but-dissertation (ABD) tends to become a permanent state for many that don’t find employment in academia. Why finish the dissertation if you can’t find a job in your field? Barista jokes are unfair, but if there is a PhD barista, they are likely ABD, and they wouldn’t show up in these stats anyway.

What would the stats look like if we included the ABD students? A minor quibble, unlikely to have a *huge* impact on the numbers. But it moves possibly sensational claims a bit closer to reality, especially in the humanities, where 10-year degree completion is sub-50%, IIRC.

A final thing I might note as rather odd is the small number of PhDs working in the community college system. In the “M” part of the COMPARABLE framework, students are asked to create a basic “model” in their head and make predictions: if X is true, what else is likely to be true? Can you check it? Here, the fact that a large number of history teaching jobs are at community colleges, but only 5.5% of our PhD sample work those jobs (compared to 50% of faculty working tenure-track jobs), elicits a guess that the vast majority of people teaching history at the community college level must not have PhDs. There are certainly ways that could be false while the data is still good, but if that prediction turns out wrong, then we’d have to dig deeper into the data.
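As a sketch of that “M” step, the toy model below works out what the prediction implies. The 5.5% and 50% figures are from the study; the total count of community college history positions is a hypothetical placeholder you’d swap for real staffing data:

```python
# Toy model for the "M" (model/predict) step. Study figures: 5.5% of the
# 2,500-PhD sample teach at community colleges; 50% hold tenure-track jobs.
# The number of community college positions is a hypothetical placeholder.
phd_sample = 2500
phds_at_cc = 0.055 * phd_sample         # about 138 PhDs in community college jobs

# Hypothetical assumption: community colleges employ about as many history
# teachers as there are tenure-track PhDs in this cohort.
cc_positions_guess = 0.50 * phd_sample  # 1250, a placeholder

non_phd_share = 1 - phds_at_cc / cc_positions_guess
print(f"predicted share of CC history teachers without a PhD: {non_phd_share:.0%}")
```

If real community college staffing data contradicted a prediction like this, that would be the cue to dig deeper into how the sample was drawn.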

So there you go. A partial analysis.

Now here’s the question for readers — is this boring as hell? Interesting? Boring, but salvageable?

The thing is, I really believe in this stuff: getting into these habits of mind that let you do a five-minute analysis of numbers. And the way I’ve learned it is by watching people model it (Tim Harford, Ben Goldacre, Milo Schield, Joel Best, etc.). But I think it can be a bit boring to read unless there is some big revelation, and most of the time the revelation is that the numbers are worthwhile, but likely somewhat overstated. Hardly edge-of-the-chair stuff.

Thoughts on how to blog this sort of thing? I was thinking of doing one a week if I could find some way to make it interesting.