The MOOC tsunami is here, and I’ve been trying to think of MOOC accreditation models that don’t hollow out the subsidization business model that allows universities to function.
If you want to know why that’s important, you can read this. The summary is
- I think it’s highly likely a state legislature somewhere will force state schools to develop a path to credit via MOOCs in the next couple of years.
- If this is developed in the wrong way, it will kill the bundling which allows us to offer high-quality low enrollment face-to-face classes that really do matter. We run these classes currently at a loss, and subsidize them with profits from the larger enrollment classes that MOOCs are likely to hollow out of the curriculum.
One idea I have been turning over in my head is the “MOOC follow-on”. Instead of granting credit for a MOOC directly, the MOOC is used as a prerequisite for an intensive week-long project-based learning course.
How it would work is this — I take a MOOC approved by my state college system, say “Intro to Statistical Literacy”. Passing that class allows me to enroll in a one week intensive program on a college campus where a project builds off the MOOC course, say “Intro to Statistical Literacy Practicum”. You’d have 15 hours of class time, and 30 hours of out of class time to demonstrate through PBL the skills you’ve learned. This works out to a one credit hour class.
If you obtain credit in this follow-on class the course retroactively grants you credit for the pre-req. So you get a four credit package from one week of paid face to face instruction. The college charges what it needs to make this a marginally profitable class — perhaps charging a three credit price for the four credits you have gotten (or a two-credit price if it is sustainable).
Imagine how this might feel — students from around the state of New Hampshire or Virginia (or the world) taking a MOOC, and then driving or flying somewhere for the follow-on. Students would come having already forged relationships on the web, already having mastered basic skills. Instructors would know precisely what their class experience looked like and would be able to hit the ground running on day one.
Parents could send their kids away for an intensive week that has a sense of momentum and focus to it. Campuses could focus on what campuses do best — providing the sort of physical and cultural environment that creates a social and intellectual community where students can excel. Employers could know that the credit means something: that the student has not only taken proctored assessments but can collaborate with others, show up on some type of schedule, and produce meaningful work.
It doesn’t address the meritocracy issues of MOOCdom, but I can deal with that in another post. But it allows the integration of MOOCs for credit in a way that not only preserves the best aspects of face-to-face education, but heightens them, and it preserves the subsidy of later courses that may need to be offered wholly face-to-face.
Mainly, though, I just think back to my own 17-year-old self. How cool would it be, leaving home for the first time to head off and meet your fellow online coursemates with the promise of doing amazing things in the space of a week? Heck, how cool is that as a 40-year-old?
Why aren’t we doing this now, again?
I was reminded by a colleague today that the most important technology in a classroom is the classroom itself. She had taught in overcrowded classrooms where you couldn’t walk between desks, and lecture classrooms where it was nearly impossible for students to collaborate without throwing their back out.
This year she lucked into a classroom with mobile furniture that could be reconfigured with ease, in a classroom that was not overstuffed with desks. She was amazed. She could engage with her students separately or as a group. She could easily reach students in the back that needed assistance. She could walk around the classroom while talking and make eye contact with everyone. Like most people that have taught in well-designed flexible classrooms, she said the difference was night and day.
I’ve told people that my vision of educational technology’s future is to make the “clicks more click and the bricks more brick” — that is, to use the particular affordances of face-to-face and online education to better effect by recognizing and promoting their distinct but complementary strengths. Just as online classes are moving to finally make more and better use of the networked nature of the internet, face-to-face spaces have to make better use of the emergent & immediate interaction that they provide. They have to foreground that and enable it in ways that many classrooms don’t. That’s why, while I’m a huge believer in online classes, I’m also incredibly excited by technologies like rolling tables, movable screens, and extra space. And it’s why the digital technology I most want to see in classrooms is the digital overhead projector, because unlike PowerPoint, Prezi, and all manner of smart tools, the digital overhead enhances the feeling of presence in the classroom, the feeling of immediacy — and it does that while promoting the sort of emergent, physical interaction that only physical space can provide.
Or *might* kill it at least. Consider this a possible future.
Basically, the bundle that higher education sells is a bunch of relatively cheap-to-offer interchangeable courses (Psych 101) that subsidize relatively expensive higher-level courses that aren’t so interchangeable (Psychology capstone seminar).
The way the subsidy works is this — we offer both courses for, say, $2,000 apiece. But in the 70-person Intro to Psych we make (let’s say) $1,500 per student in profit. The capstone seminar, which has, let’s say, 12 people in it and involves coordination with community partners and a nice cozy seminar room, loses about $800 per student.
Maybe that’s exaggerated profit and loss. But even at smaller margins of gains and loss, the problem is evident. The courses that are MOOC-able are the courses that subsidize that college experience that we keep saying MOOCs can’t replace. But if MOOCs replace the MOOC-able courses, the non MOOC-able ones are suddenly far beyond affordability for most people.
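The cross-subsidy is easy to see in code. Here’s a toy model using the illustrative dollar figures above — these are stand-in numbers from this post, not real budget data:

```python
# Toy model of the course cross-subsidy. The figures below are the
# illustrative ones from this post, not real institutional budget data.

def course_net(students, net_per_student):
    """Total net revenue (or loss) for one course section."""
    return students * net_per_student

intro_psych = course_net(students=70, net_per_student=1500)  # the MOOC-able survey course
capstone = course_net(students=12, net_per_student=-800)     # the expensive seminar

print(intro_psych)             # 105000
print(capstone)                # -9600
print(intro_psych + capstone)  # 95400: one survey section covers the seminar's loss many times over

# If MOOCs absorb the survey course, the seminar's loss has nothing covering it:
print(0 + capstone)            # -9600 per section, unsubsidized
```

Even if the real margins are a fraction of these, the structure of the problem is the same: the profitable line item and the loss-making line item are bundled, and MOOCs threaten only the profitable one.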
In public education, the problem has the potential to get bad quite quickly. Imagine a legislature that says that the state colleges must provide a path to credit through MOOCs offered by accredited institutions. Suddenly the easy, profitable stuff is gone, and even at existing tuition rates colleges will be bleeding red ink. Add to that the fact that the best, most self-motivated students will be taking MOOCs, leaving the most challenging students to take survey courses on campus, and the situation is even worse. We rely on the talent subsidy of our top students to provide the environment which helps our struggling students, and we rely on the financial subsidy of our entry level classes to subsidize our later classes.
In one legislative swoop, both those subsidies could disappear. How will we cope with a change of that magnitude? I think there are definitely models that would work (I have one model I’ll talk about tomorrow called the “MOOC Follow-on”), but it’s difficult to see how we would get there if the time frame becomes compressed…
From Twilight of the Elites, by Chris Hayes:
“Go all the way back to Sumerian civilization,” Bill Clinton instructed a crowd of global jet-setters at the 2011 World Economic Forum in Davos, “and you’ll see that every successful civilization builds institutions that work, that lift people up and reward people for their greatness. Then, if you look at every one of those civilizations, all those institutions that benefited people get long in the tooth. They get creaky. The people ruling them become more interested in holding on to power than the purpose they were designed for. That’s where we are now in the public and private sector.”
I think this is where we are with higher education, frankly. I don’t like some of the rhetoric around the MOOC craze, but that is all offset by one thing — the people behind these projects are sitting down and saying “How can we give as much opportunity to as many people as possible?” And it’s really about time.
For a variety of reasons, it’s hard to do that in higher education. More people means less selectivity, and less selectivity means less power. I think that’s starting to change. Student success initiatives are a start — more graduates, at any rate. Online initiatives are helping institutions see that many of these questions are not zero sum. But this is all still against the background of an industry which measures success by status, and not graduates.
I’m hoping reading this book will give me some insight into these issues; I’ll let you all know.
I have been looking a little more closely at the Course Signals research since I mentioned it in my Post-Content LMS whitepaper, and it’s — well, it’s a more complex story than I first thought.
First, there’s this interesting finding:
The graph says that those students from the 2009 cohort who have at least one course with CS persist to year 2 at a 1.3% higher rate than those without such courses. That’s good stuff, especially with the balance in the numbers between these compared groups. The earlier 2007 cohort data possibly overstates the impact due to an adoption bias, but the 2009 data is showing impact as it nears scale.
All fine and good. But the graph also says that students with exactly one CS course persist at a 1% lower rate than their non-CS counterparts.
Can you find an explanation for this? Sure. There might be a lot of remedial classes using CS. There might be high-retention majors not using it. But it’s an odd fact, and deserves picking at. If you are building a causal argument, you like to see a nice clean dose-response effect. J-curves exist, of course, but are generally an indication that the story is more complex and multi-factored than we want to admit.
Here’s the second thing — look at the non-CS 2-year retention rate in 2007.
It’s 73%. Now compare it to the non-CS 2-year retention rate in 2009. It’s 82%. That’s a huge 9-point gain in a short span of time that has nothing to do with analytics in the narrow sense, and one that actually dwarfs the cross-sectional differences we were just looking at. And it means that, in general, longitudinal comparisons of classes before and after CS adoption are likely suspect — there are massive things afoot at Purdue that are increasing student retention semester over semester, and it shouldn’t be a surprise if your 2010 course with CS demonstrates better outcomes than your 2008 course without it, because, on average, classes in 2010 are likely doing better all around.
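The arithmetic is worth making explicit. Here’s a minimal sketch using the retention figures quoted above, treating the 1.3-point cross-sectional difference as additive on top of the 2009 baseline (an assumption for illustration):

```python
# Why naive before/after comparisons mislead here.
# Figures are the 2-year retention percentages quoted above.

non_cs_2007 = 73.0        # non-CS baseline, 2007 cohort
non_cs_2009 = 82.0        # non-CS baseline, 2009 cohort
cs_2009 = non_cs_2009 + 1.3  # assume CS adds its 1.3-point edge on top of the baseline

baseline_drift = non_cs_2009 - non_cs_2007  # 9.0 points: nothing to do with CS
naive_gain = cs_2009 - non_cs_2007          # what a 2007-vs-2009 comparison "shows"
real_gain = cs_2009 - non_cs_2009           # the same-year, apples-to-apples comparison

print(baseline_drift, naive_gain, real_gain)  # ~9 points of drift vs a 1.3-point effect
```

The naive longitudinal comparison attributes the whole ten-plus points to the tool, when roughly nine of them are campus-wide drift.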
Do I think this means Course Signals is snake oil? Far from it. If I had to bet money, I’d say CS is having some impact, and that when the numbers and research firm up, that impact will be demonstrably positive. But what these numbers remind me is that we get too focused on the wrong questions when it comes to things like analytics. Assuming I’m not missing a broad student demographic change, it’s very clear that the culture around this tool has had more impact than the tool itself.
In other words, we are far too early in this enterprise to peel apart how much success at Purdue was the tool, how much was the conversations around the tool, and how much had nothing to do with the tool. Until we can peel apart those impacts more precisely, quoting x% increase or y% decrease around tool adoption as a way to compare products is meaningless.
What is not meaningless is this — it’s clear that this retention effort that involved analytics and other measures succeeded. It’s likely the technology played a strong positive role in that, and it’s possible that some of that role was indirect. It’s possible to move the needle on this. What affordances did the particular implementation offer, and how might we learn from them in our own initiative? In what ways did analytics change the culture of the classroom and institution, and do our own technology decisions support those types of shifts? How does our own technology hinder (or actively work against) these types of shifts? How do we change that?
In other words, technology matters, but it matters most at its intersection with culture, and that intersection needs to be the focus. Of course, you have to look at bugginess, stability, security, must-have features and other issues. But past a certain baseline level of functionality, tools are about culture. Choose tools that foster the right culture, and discard those that don’t. That’s your one line technology plan. It won’t lead you astray.
Update: Incidentally, there’s an interesting potential confounder in the “2 or more” category, and skimming the paper I can’t tell whether it is controlled for. Briefly — which student is likely to have had two or more classes with CS: a student that dropped out after one semester, or a student that stayed for two semesters? Obviously the two-semester student — they have taken more courses, so they are likely to have taken more CS courses as well.
That would distort the data quite a bit.
I’ve only skimmed this, but I can’t see where that is controlled for. Can someone help? (Again, I’ve only skimmed, I may be missing something obvious).
Additional Update: There’s a second issue too, probably less important. At Keene, taking a low number of classes (say 3) is correlated with less persistence than taking a high number of classes (say 5). So again, the student that is more likely to persist may also be more likely to end up with more CS classes — they will take 10 classes in a year, whereas the low-persist students will take 6. If 15% of each student’s classes are CS, the low-persist student will get one CS class, whereas the high-persist student is as likely to get 2 as to get one. Again, just thinking out loud, but worth a look.
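This sort of course-count confounder is easy to demonstrate with a quick simulation. In the sketch below, CS has zero causal effect by construction — persistence is decided first, at random — yet the “2+ CS” group still persists at a much higher rate, purely because persisters take more classes. Every parameter here (75% base persistence, 10 vs. 6 classes, 15% CS share) is invented for illustration:

```python
import random

random.seed(42)

CS_SHARE = 0.15  # assumed fraction of classes using Course Signals

def simulate(n=100_000):
    """CS has NO causal effect here: persistence is decided before any courses."""
    groups = {"0-1 CS": [0, 0], "2+ CS": [0, 0]}  # [persisted, total]
    for _ in range(n):
        persisted = random.random() < 0.75   # flat 75% persistence for everyone
        n_classes = 10 if persisted else 6   # persisters simply take more classes
        n_cs = sum(random.random() < CS_SHARE for _ in range(n_classes))
        key = "2+ CS" if n_cs >= 2 else "0-1 CS"
        groups[key][1] += 1
        groups[key][0] += persisted
    return {k: p / t for k, (p, t) in groups.items()}

rates = simulate()
print(rates)  # the "2+ CS" group persists at a visibly higher rate, with zero causal effect
```

Any analysis quoting a persistence gap between these groups has to rule this mechanism out first.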
I wish the Beloit Mindset List would go away, with its oh-so-cute trivialization of cultural disconnect. I don’t particularly care about the differences in students’ pop-cultural pasts, or whether they’ll get the Pulp Fiction references that the list apparently assumes I’m throwing out in class like rice at a wedding. (Oh, they don’t throw rice any more? There goes my Intro to Phonology syllabus!)
But be that as it may, I find one item on the list really telling:
15. Having grown up with MP3s and iPods, they never listen to music on the car radio and really have no use for radio at all.
Um, this is wrong. REALLY WRONG. They are absolutely addicted to the radio, at least as far as 8th grade (I have an eighth grader). What they listen to is entirely determined by what is on the radio, and when I’m driving Katie and her friends in the car, they don’t want me to play their iPods or MP3 CDs. They want me to turn on the local Keene-based radio station. That’s their primary communal music experience.
But don’t take my word for it — do what any self-respecting mindset list should do — look at the data. According to a just published Nielsen report, more teens listen to music on the radio than through iTunes, and it is YouTube, not iTunes that tops the delivery mechanism list:
- 64% of teens listen to music through YouTube
- 56% of teens listen to music on the radio
- 53% of teens listen to music through iTunes
- 50% of teens listen to music on CD
Note also this — almost as many teens listen to CDs as iTunes. CDs.
I think there’s a weird logic to this — for teens music is communal — it’s about sharing. It’s about knowing the songs others know, singing along in the car to a song your boyfriend is hearing at the same time across town, posting the newest most amazing song ever written (via YouTube) on your best friend’s Facebook wall. Years after they were created, MP3s are still a lousy way to share, with a ton of barriers; CDs, radio, and embeddable YouTube videos, on the other hand, seem built with sharing in mind. You could convince your friends to all sign up for Spotify and share playlists — or you could just build your playlist in YouTube and embed it in your Facebook stream. Or, heck, just listen to the radio, which is basically Keene’s playlist anyway. Broader taste isn’t seen as a virtue in middle school.
But weird logic aside, I think the danger of the Beloit list is it does exactly the opposite of what it says on the tin (On the tin? What does he meeeaaannn?). Rather than compel us to explain the fascinating uniqueness and stunning diversity of current youth culture, it invites us to see the students as a monolithic alien force coming into college trying to puzzle out why we are so backward technologically. And that view has been proved wrong so many times, it’s not worth even going into it anymore.
The most fascinating question about music a professor could ask herself is why the heck her students are still listening to the radio. I’m serious, if you could dig deep into that question, you’d get to a profound place; but our desire to distort our students into digital native caricatures prevents us from even noticing that as a reality. We prefer to bend reality to our chosen stereotypes of iTunes obsessed youth.
Too serious, maybe? For a lighthearted list? OK, so let me finish off with this, then — a video made entirely by my thirteen-year-old in a two-hour editing binge; a video of a day at the pool (a physical pool!?!) with her friend and sister, cut to a song they first heard on the radio (!!!), put together on a Windows laptop using Movie Maker, shot not through an iPod but through the point-and-shoot cameras Beloit thinks they don’t know about, and uploaded to YouTube.
Real life is messy, and not list-friendly. But it’s a lot more interesting, and ten times as wonderful.
Someone recently tweeted a Malcolm Harris article from last year on student debt. Here is the first line:
The Project On Student Debt estimates that the average college senior in 2009 graduated with $24,000 in outstanding loans.
What the Project actually says:
We estimate that college seniors who graduated in 2009 carried an average of $24,000 in student loan debt, up six percent from the previous year.1
And if you follow that footnote:
1These figures reflect the average cumulative debt levels of 2008-09 bachelor’s degree recipients with loans at public and private nonprofit four-year colleges. See the Where the Numbers Come From and How We Use Them section for more information.
What does this mean? It means that this line
The Project On Student Debt estimates that the average college senior in 2009 graduated with $24,000 in outstanding loans.
Is absolutely wrong. The average debt of the set of students *with* debt is $24,000. However, only about 66% of college students take on debt. When you average in the students who funded college fully through grants, scholarships, needs adjustments, and parental support, the real average is going to be somewhere closer to $16,000. Maybe less. I’d look it up, but that’s actually the job of the reporter writing the piece.
So in the first sentence, we start with a figure that is likely inflated by 50%. Not a whole lot of confidence there.
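The correction is back-of-the-envelope arithmetic (the 66% borrowing share is the rough figure used above; the result is an estimate, not the Project’s own number):

```python
# Back-of-the-envelope: convert "average debt among borrowers"
# into "average debt among all graduating seniors."

avg_debt_among_borrowers = 24_000  # the Project's figure: borrowers only
share_with_debt = 0.66             # roughly two-thirds of students borrow (approximate)
share_debt_free = 1 - share_with_debt

# Average over ALL seniors, counting debt-free students at $0:
avg_debt_all_seniors = (avg_debt_among_borrowers * share_with_debt
                        + 0 * share_debt_free)
print(avg_debt_all_seniors)  # ~15840, i.e. "somewhere closer to $16,000"

# How much the headline number overstates that:
print(avg_debt_among_borrowers / avg_debt_all_seniors - 1)  # ~0.52, i.e. ~50% inflated
```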
I continued reading however, until the third paragraph, in which he says:
What kind of incentives motivate lenders to continue awarding six-figure sums to teenagers facing both the worst youth unemployment rate in decades and an increasingly competitive global workforce?
I can actually answer that question for Mr. Harris — they aren’t. The 1.5% of loans that top six figures don’t go to teenagers. They go to medical school and law school students, and a smattering of other grad school students. Only 3 out of every 1,000 college seniors graduate with over $100,000 in debt. That’s three out of a thousand. It’s difficult to communicate the rarity of that, but let’s put it this way — if you are a man, you have about a 3 in 1,000 chance of being as tall as Kobe Bryant. Talking about the problem of six figure undergraduate debt is about as informed as referring to Kobe as a man of average height. Three in 1,000 is also about your chance of dying through assault by firearm. Though, frankly, the average American is slightly *more* likely to be murdered with a gun than the average U.S. college senior is to graduate with $100,000 or more in debt.
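For the curious, the Kobe comparison checks out under standard assumptions. Treating adult U.S. male height as roughly normal with a mean near 69.5 inches and a standard deviation near 3 inches (approximate figures I’m supplying, not from the article), a 6-foot-6 man sits almost three standard deviations out:

```python
from math import erfc, sqrt

MEAN_IN = 69.5  # assumed mean adult U.S. male height, inches
SD_IN = 3.0     # assumed standard deviation, inches
KOBE_IN = 78.0  # 6 ft 6 in

z = (KOBE_IN - MEAN_IN) / SD_IN  # ~2.83 standard deviations above the mean
p = 0.5 * erfc(z / sqrt(2))      # upper-tail probability of a standard normal

print(round(z, 2), p)  # roughly 2 to 3 men per 1,000 are this tall or taller
```

Which is the same order of rarity as the 3-in-1,000 figure for six-figure undergraduate debt.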
In fact, 90% of graduates graduate with under $40,000 in debt, and those that accumulate more than that are far more likely to have gone to expensive private or for profit schools. A full twenty-five percent of those graduating with over $40,000 in debt took six years to finish.
These things matter. I haven’t bothered to read the rest of the article, and why should I? I can have no confidence that the author has even basic facts correct. The world he paints in the first paragraphs is one where graduating student debt is 50% higher than it actually is, and one where six-figure undergraduate debt is a problem (it isn’t).
The more important statistics, some of which show that a major part of the student debt problem is a four-year completion problem, go unremarked. The low rate of $40k+ debt among state schools gets no attention, even though increasing access to those schools represents a potentially attractive solution to the problem. Reforming graduate school loans for law school, a major contributor to six-figure debt, doesn’t even make the list. Neither does the bump in $100k loans among 30- and 40-year-olds, possibly indicating, among other things, that many people who don’t graduate with $100k loans get there over time through absurd rates of interest on loans guaranteed against discharge by the federal government.
I’m not saying we don’t have a student debt problem or a college cost problem. We do. But if we want to solve that problem, the details matter. And the people that don’t bother to get the details right (or at the very least to retract the details they get wrong) are not worth your time. Start demanding more careful treatment of student debt numbers, and stop reading people that refuse to provide that level of care.
Looking at yesterday’s reply to Dave’s post with today’s eyes, it occurs to me that coercion and enforcement are trigger words for me.
Why? Because coercion is portrayed as something to be avoided. It’s portrayed as a flaw of the educational system.
Coercion can be ugly, but consider this: In democratic capitalism, it’s democracy that is the coercive element, and it is capitalism that is the free-to-be-you-and-me networked decision making component. The fact that one of the most prominent distributed decision making networks is in fact the market economy ends up providing a basis for a more nuanced discussion of coercion and enforcement.
Market economies are good because they preserve information and individual choice. They stand in distinction to a bunch of other options that tend to force decisions one way or another and erase minority opinion. Democracies take away the option to do these things. A democracy might say, for instance, that all cars must be sold with seat belts. Then, through coercion and enforcement, the non-seatbelt option is eliminated and we lose that minority option. In doing so we lose some information about what consumers truly prefer and how much they will pay for it. We also save the lives of many people.
With seatbelts, resorting to coercion seems to make sense. With airbags and anti-lock brakes it’s less clear. At one time there were people who argued that leaving sub-prime loans to the market would increase options and access. I think now we wish the federal government would have used coercion to prevent people from engaging in such contracts.
This is not a new discussion. The United States was founded on principles of freedom and equality and self-government. It’s like most communities in that it turned out in very short order that its core values were more or less in constant conflict. Former Supreme Court Justice David Souter, in a speech worth a thousand TED talks nailed it when he said:
The “notion that all of constitutional law lies there in the Constitution waiting for a judge to read it fairly” is not only “simplistic,” he said; it “diminishes us” by failing to acknowledge that the Constitution is not just a set of aphorisms for the country to live by but a “pantheon of values” inevitably in tension with one another. The Supreme Court may serve no higher function than to help society resolve the “conflict between the good and the good,” he suggested:
A choice may have to be made, not because language is vague, but because the Constitution embodies the desire of the American people, like most people, to have things both ways. We want order and security, and we want liberty. And we want not only liberty but equality as well. These paired desires of ours can clash, and when they do a court is forced to choose between them, between one constitutional good and another one. The court has to decide which of our approved desires has the better claim, right here, right now, and a court has to do more than read fairly when it makes this kind of choice.
In other words, the “right” on the other side of coercion — which is roughly “freedom” — is not a first principle we reason from to a conclusion. It’s never the case that “the more freedom, the better,” or “as much freedom as possible” — because an increase in freedom leads to a decrease in other things the community holds dear.
So when we repeat that the key to our communities is they lack coercion or enforcement, we are, in the words of Souter “diminished”, because we fail to respect the organic complexities of the communities we actually serve. We fail the first test as community leaders, which is to realize that all communities are founded not on aphorisms or first principles, but on a pantheon of competing goods, competing goods that we, as community leaders, must adjudicate between from time to time.
That’s messy stuff, but it’s at the heart of what we do in politics. And I think it’s at the heart of what teaching is as well.
OK, I’m overstating it a little for effect. But I just read Dave’s post, and I have to take issue with this:
Assessing what someone ‘knows’ is an act of enforcement of a given point of view, not a(n apolitical) helpful guideline to learning
Education is a means of cultural transmission. And I think it can take many forms, everything from MOOCs to skill drills.
But there’s this weird resonance in that quote that somehow a cMOOC
a) Is apolitical, and
b) Lacks enforcement of norms
and that seems wrong to me. Part of the reason people get involved with MOOCs and online communities of inquiry is, in fact, to learn from people what it is they “need to know”. They want to be acculturated! Noobs, famously, don’t know what they need to know, and are promptly corrected by the community. And that correction is provided in the context of an internal power structure that the online community develops.
So you have hierarchical assessment, even in a MOOC. You don’t design it, you don’t map it out, you don’t credential it. But you might as well admit that in a Connectivism MOOC a comment by George on how he thinks you are getting Connectivism wrong is, at least from a functional perspective, formative assessment. And together the reactions to your contributions form the MOOC’s assessment. Why run from that?
My class is doing some projects on NH this fall, infographic things, like incidence of melanoma in NH. And one thing you have to do with such things of course is look at the state demographic profile — we’re #1 in melanoma in the country (per capita basis), but we’re also an elderly state in terms of demographics.
Or so I thought. It turns out that when we use the cutpoint of 65+ we’re actually the 37th most elderly state (2000 data):
So where’d I get the idea we were an elderly state? Because I keep hearing that in median age we’re in the top 10 “oldest” states in the country. And we are — we’re number 7:
So what’s going on here? It’s obvious once you think about it — the median age is far more affected by the fertility rate, which varies state to state, is highly impacted by culture, and skews the distribution to the right. A lot of times this will line up with your cutpoint measure of the elderly — Utah is the youngest state on a median basis because they have big families in Utah, and they also have a very small percentage of people over 65.
But in New Hampshire we have the lowest fertility rate in the country and a migration inflow that consists more of mid-career professionals than young adults. So that tends to reduce the expected population skew.
Cutpoint percentages are a really simple analytical tool, and like mean, median, and mode they can be expressed as single-number summaries. For example: you can say that 15% of the population is 65+, or that only 0.2% of undergraduates graduate with over $100,000 in debt. (By the way, you read that right: despite those student debt examples every newspaper article on the subject leads off with, the actual incidence of that sort of thing is about 2 in 1,000. That’s about as representative of the graduate population as a 6-foot-6-inch male would be of the male population. Perhaps the reporters should also interview Kobe Bryant to find out what it’s like to be average height?)
In short, cutpoint percentages are incredibly useful tools for quick-and-dirty analysis of a distribution, and they are used all the time in business and policy analysis. And given a cutpoint and a set of data, they aren’t much harder to compute than the median. So why aren’t we placing them next to mean, median, and mode in our students’ toolboxes?
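Here’s a classroom-ready sketch of the point: a cutpoint percentage computed next to the median, with two invented toy populations showing how a place can have a high median age but a modest 65+ share (the age lists are made up for illustration, not real state data):

```python
from statistics import median

def cutpoint_pct(ages, cut=65):
    """Percent of the population at or above the cutpoint age."""
    return 100 * sum(a >= cut for a in ages) / len(ages)

# Invented toy populations, not real state data:
# "NH-like": few children (low fertility), lots of mid-career adults
nh_like = [35, 40, 42, 45, 48, 50, 52, 55, 58, 66]
# A contrasting state: many children drag the median down,
# yet a much larger share of the population is 65+
other_state = [10, 15, 20, 25, 30, 40, 66, 68, 70, 75]

print(median(nh_like), cutpoint_pct(nh_like))          # high median, small 65+ share
print(median(other_state), cutpoint_pct(other_state))  # lower median, bigger 65+ share
```

The two single-number summaries answer genuinely different questions about the same distribution, which is exactly why they belong side by side in the toolbox.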