But something was wrong with the CAPTCHA system.
In any case, my comment on his recent post:
I think we have also lost the idea that part of what education is supposed to do is impart to the next generation a common body of knowledge and skills that allows society to, quite frankly, function. And for a lot of those things we have to learn, we may not have an intrinsic motivation for learning them. I think here of Jere Brophy (who probably knew more about, and believed more deeply in, student motivation than anyone) — he called intrinsic motivation “ideal but unattainable as an all-day everyday motivational state”. Any socially just system must recognize that relying purely on intrinsic motivation results in a society where education only benefits a few.
On a more personal note, I’ve been teaching a course on statistical literacy the past two semesters to incoming freshmen, and it’s an eye-opener. I would say nearly everything we teach is important to being an empowered citizen in the 21st century, and that some of what we teach is absolutely vital.
It’s been interesting to think about my class in the context of recent open learning experiments, because so much in my class does not apply to the sort of scenario they address.
It’s not really an option for my students to leave the class without understanding margin of error, or why randomized controlled trials are more highly valued than cross-sectional observational studies. And the core skill — to be able to quickly break down the strengths and weaknesses of a quantitative argument presented in the media — is so crucial and at the same time so complex a skill that I envy photo-a-day projects and paint-this-picture communities.
Here’s what I know from teaching this course though:
- Students don’t get this on their own (even just based on the motivational issue)
- There is a huge difference between a well-designed class and a poorly designed class in whether students do get it.
Now, if we agree that it’s perfectly fine that our public debate in this country continues to be horrifyingly statistically illiterate, except in conversations among people who care enough to bother to learn what a good quantitative argument looks like, then that’s fine, I suppose. Just rely on intrinsic motivation and an internet connection, and the people that want to learn about stats will learn about stats.
But if we actually think that education might not only be a consumer good, but part of a cultural transmission, then we are going to have to deal with extrinsic motivation, and outcomes, and all of that stuff. Because, to put it simply, we want to build more of the classes that work, and fewer of the ones that don’t.
I really want to make clear I am not knocking current MOOCs etc. I am just saying they are not relevant to the problems I am facing in the classroom. Yet.
That said, I just got a message that the great people down at UMW may have duct-taped up the sort of Ed Roulette system I describe here — *that* could be major for my purposes, so it’s early days yet…
From the new report: Coming in 2020: New Hampshire’s “Silver Tsunami”
By the year 2020, New Hampshire’s shift towards an increasingly older population will reach a peak. And by 2030, nearly half a million Granite Staters will be over the age of 65, representing almost one-third of the population. This trend will influence nearly every critical policy debate, perhaps none more so than health care. Released today, the Center’s latest report, “New Hampshire’s Silver Tsunami,” analyzes the potential impacts of this demographic shift on the state’s health care systems.
As we’ve shown here before, each dollar increase in Medicaid funding (which will be affected by this, according to the report) decreases state spending on education by between six and seven cents. Again, I am not rationalizing the cuts here. We spend way too little on higher education, and we have to push to spend more. But the economic dynamics are such that we will be lucky if we can hold institutions of higher education at level funding going forward.
What do we do? Well, politically, if you care about education, the single most important thing you can do is to support politicians and policies that attack the health care cost problem. All the other stuff is pretty useless without that.
But from a professional standpoint, imagining some alternate world where everything changes in the next decade and we don’t have to compete for student dollars or cap costs in one way or another because suddenly everyone gets how important education is and we get the funding we need — this is not a helpful professional stance, IMHO. The healthcare situation is increasingly going to pit competing goods against one another; we are going to lose in that battle repeatedly, because asked to choose between funding an ethics class and treating poor people in the ER, we will choose the ER.
In the meantime we have students we are going to need to educate — never mind the issues of access we have to address. There is not a magical world where we will not have to do this at, at best, a stable inflation-adjusted per-student cost over the next ten to twenty years. There’s just not. The only responsible action given the need and the economic reality is to look to drive down costs while maintaining or increasing access.
Brilliant essay by Simon Reynolds. And worth thinking about more broadly than music:
Cobain, arguably the last rebel-rocker-as-star, had owed his rise to the centralizing power of the old media; now in his death, he was entangled with the emerging new media disorder. The old media and entertainment channels (what I think of as the analog system) constructed the mainstream while simultaneously creating the possibility of that mainstream being breached and reinvigorated by forces “outside.” In grunge’s case, that meant the flannel-wearing, slacker-minded alt-rock underground that had developed during the ’80s, fostered by a network of independent labels. This curious process of inversion—the underground becoming the overground—was how the analog system had worked repeatedly in the past. (’50s rock’n’roll came initially from the regional independent labels.) And with Nirvana and their fellow travelers, that’s how it worked one last time.
But what is also true is that the media organs of the analog system generated what you might call the “Epochal Self-Image”: a sense of a particular stretch of years as constituting an era, a period with a distinct “feel” and spirit. That sense is always constructed, always entails the suppression of the countless disparate other things going on in any given stretch of time, through the focus on a select bunch of artists, styles, recordings, events, deemed to “define the times.” If we date the takeoff point of the Internet as a dominant force in music culture to the turn of the millennium (the point at which broadband enabled the explosive growth of filesharing, blogging, et al.), it is striking that the decade that followed is characterized by the absence of epochal character. It’s not that nothing happened … it’s that so many little things happened, a bustle of microtrends and niche scenes that all got documented and debated, with the result that nothing was ever able to dominate and define the era.
The failure is bound up with the erosion of the filtering function of the media and its increasing inability to marshal and synchronize popular taste around particular artists or phenomena. The Internet works against convergence and consensus: the profusion of narrowcast media (blogs, netradio, innumerable outlets of analysis and opinion) and the accelerated way that news and buzz get disseminated mean that it is harder and harder for a cultural phenomenon to achieve full-spectrum dominance of the attention economy. Now triumphant, the digital system has interfered with our very sense of culture-time.
Photo Credit: Marc Romer
I think this post of David’s is right, mostly (though I think the title is a bit misleading).
I think it also keys into a broader shift that is happening. OER projects are increasingly driven by very specific ends and defined needs. As David points out, when we move efficacy and access out of the core OER definition it allows creators and funding agencies to apply more specific and local criteria for evaluation.
And this is good. I think we’ve romanticized the “purpose-neutral” nature of much OER. There’s nothing wrong with OER that is released with no end in mind. This blog post is yours to take if you want it. Since I’m writing it anyway, it makes sense to release it without knowing what use it might be put to. Take it if you want. That’s good practice, and can result in a lot of unexpected benefits.
But given the pressing social and educational problems we face, it makes sense that focus would and should move away from “general openness” projects, and towards (at least partially) specific projects that present strong opportunities to improve our shared lot at a reasonable cost. Projects, dare I say it, that are designed (or expanded) with a purpose in mind. And this is what’s happening, thankfully, as openness becomes an attribute we desire in our social solutions rather than an end in itself.
Moving issues of general access and universal efficacy outside the definition of OER seems like a good step towards that.
Posted: September 23, 2011
It’s not so much a case of “Here Comes Everybody”, as of “Everybody Was Here All Along”. People aren’t late to this party, technology and business are.
Matt Edgar, http://matt.me63.com/2008/05/22/erm-excuse-me-but-i-think-everybody-was-here-all-along/
In the past few days I’ve had my first spamonym “followers” (presumably because I am posting on G+ publicly). And more prominent people like Howard Rheingold are already fighting them in the stream.
I’ve said it before, I’ll say it again — if G+ is to maintain a shred of legitimacy about real names, they have to reduce spamonyms to near zero. It’s a little hard for people to stomach that they can’t hide their name if Asif Ali is running all over the place talking about “technogies”.
One more note on George’s post. He calls the above the “Encarta price curve of death”, and I think what he is saying is that as content became cheaper to produce consumers were less willing to pay high prices for it.
I haven’t read the article he linked to thoroughly, but my guess is this is a partial misread of the graph. This does not represent a stable set of consumers willing to pay less and less for a product, but rather it represents an evolving market. The price drops not because the content becomes less valuable, but because the market for that content at lower price points becomes larger.
In 1985, the set of customers interviewed, I imagine, would be library purchasers, and maybe some forward-thinking home users (how many people even had a home PC in 1985?). By 1994, that market was everyone. That’s actually most of the collapse, right there, pre-web. The price collapsed not because information became worthless, but because suddenly a lot of people had PCs with CD drives. The price collapsed because demand expanded, and competitors began to compete using different math to make ends meet.
In other words, the curve of Encarta is a classic sort of disruptive innovation effect, where a previously niche product becomes commodified, and finer gradations of quality become far less meaningful than price.
I don’t think this undermines George’s argument per se, but it is an important distinction. To the extent our institutions are like Encarta, it is because there is more demand for education, not less, and the institutions that survive will be those that realize that higher education is ceasing to be a niche product and follow the economics of that shift to its logical conclusion.
I think a lot of this is happening, frankly. Univ. of Central Florida, a leader in online/hybrid courses, has a student body that for the most part takes a fluid blend of online, hybrid, and f2f classes. That model is being duplicated many places. The Innovative University’s fascinating final chapters talk about this as well — the fact that our STRENGTH is our physical campus and face-to-face interaction.
The key, though, is to see these things (F2F, physical campus, and to some extent domain expertise) not as the go-to solutions for all problems, but as scarce competitive advantages that we must start deploying to maximum effect. We burn away these resources like they are nothing — we have beautiful facilities and a captive audience and we lecture to them with slides we pulled from the textbook.
There are a lot of things that just work better in person. There is a value in going to a place apart from the rest of life to pursue those things. I find it wildly ironic that a group of people that spend a good part of their life at conferences can’t see that….
A post on Philipp Schmidt’s stream got me thinking about how we might tap into the 20 years of research on Peer Instruction to better inform peer learning initiatives.
I’m assuming here that readers are familiar with the Peer Instruction research (if you are not, you really should take a look at it — some of the more amazing results are coming out of that area).
The question is how you take Peer Instruction (which after 20 years of tinkering and research has become a robust method) and transfer it to the web in a way that reduces the need for instructor facilitation.
Here’s one idea:
Consider a piece of software that puts a question, in real time, to 1,000 students scattered around the web. The students answer, and then are automatically paired, via chat or webcam, with people who chose a different answer. Each person explains their reasoning to the other for a couple of minutes, and then everyone revotes. If the questions were well designed, you might be able to get some of the effects of PI.
The voting piece turns out to be a big factor, BTW, because it focuses the mind and the discussion on an objective, leading to a more productive exchange (at least for many applications). I find it actually works wonders in my F2F classes.
But I could certainly see how with a little coding you could have, if not instructorless sessions, then at least sessions which were mostly facilitated by software. And the nice thing here is you already have literally hundreds of articles that have researched what makes these sorts of experiences more effective and what makes them less effective — you’re not shooting in the dark quite as much.
I could also see a sort of rotation thing built in — if you (with your peer instruction help) are able to answer a certain percentage of the questions correctly (say 95% correct, rolling average over last 50 questions) you graduate. So people roll out, new people roll in, etc.
If I built this, by the way, I would call it some pun-like name based on Chat Roulette. Because I see it as that level of simplicity — you answer a question and suddenly the system sends you to videochat with a random person that disagrees with you, and the 3 minute timer for the discussion begins to count down. You and this person have to figure this out before the vote, right?
How fun is that?
Would this work? Maybe, maybe not. But at the very least it’s an attempt to bring something with demonstrated success into the web sphere.
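For what it’s worth, the core loop described above (poll, pair students who disagree, discuss, revote, graduate on a rolling average) is simple enough to sketch. Everything below is hypothetical illustration: the class and function names are mine, and the 95%-over-50-questions threshold is just the default suggested above, not a tested value.

```python
import random
from collections import deque, defaultdict

ROLLING_WINDOW = 50   # number of recent questions considered for graduation
GRADUATE_AT = 0.95    # rolling accuracy needed to "roll out"

class Student:
    """Tracks a student's rolling accuracy over the last N questions."""

    def __init__(self, sid):
        self.sid = sid
        self.recent = deque(maxlen=ROLLING_WINDOW)  # 1 = correct, 0 = incorrect

    def record(self, correct):
        self.recent.append(1 if correct else 0)

    def graduated(self):
        # Require a full window so early lucky streaks don't graduate anyone.
        return (len(self.recent) == ROLLING_WINDOW and
                sum(self.recent) / ROLLING_WINDOW >= GRADUATE_AT)

def pair_disagreements(votes):
    """Pair each student with someone who chose a different answer.

    votes: dict mapping student id -> chosen answer.
    Returns a list of (sid_a, sid_b) pairs; students with no available
    disagreeing partner are left unpaired for this round.
    """
    by_answer = defaultdict(list)
    for sid, ans in votes.items():
        by_answer[ans].append(sid)
    pools = [list(v) for v in by_answer.values()]
    for p in pools:
        random.shuffle(p)  # randomize partners within each answer group
    pairs = []
    # Greedily match students from the two largest remaining answer pools,
    # which guarantees each pair disagrees.
    while sum(len(p) > 0 for p in pools) >= 2:
        pools.sort(key=len, reverse=True)
        pairs.append((pools[0].pop(), pools[1].pop()))
    return pairs
```

A round would then be: collect `votes`, run `pair_disagreements`, start the three-minute timers, revote, and call `record` on each student once the correct answer is scored.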
From the NYT, today:
Another common misconception about how we learn holds that if information feels easy to absorb, we’ve learned it well. In fact, the opposite is true. When we work hard to understand information, we recall it better; the extra effort signals the brain that this knowledge is worth keeping. This phenomenon, known as cognitive disfluency, promotes learning so effectively that psychologists have devised all manner of “desirable difficulties” to introduce into the learning process: for example, sprinkling a passage with punctuation mistakes, deliberately leaving out letters, shrinking font size until it’s tiny or wiggling a document while it’s being copied so that words come out blurry.
As far as the larger article: in general, I hate this trend of dressing up educational research findings we’ve known for decades as new “Brain Science”.
“Spaced repetition”? Really, we’re just learning this now? The truth is Bloom, Vygotsky, Bruner and others got here years ago. And frankly, what is this “new” brain science all the papers keep talking about? Bruner didn’t study the brain? Was the cognitive linguistics I learned a decade and a half ago just applied art?
That said, I think much of what this media meme has called attention to has been good. It reminds me, at least, that we know a crapload about these things in isolation, but most people don’t realize how much we know because the smart integration of these findings into a usable framework has often eluded us. So if some people want to take the public’s fascination with MRIs and use that to get people to think of education as a legitimate research pursuit and rehash some old findings, I guess I’m fine with that, at least for the moment.