Steal the Package / Idea Mining

One thing I’ve learned from my deep dive into wiki is that wiki is most powerful when seen as a collection of *ideas*. Those ideas might be stories, examples, software patterns, chord progressions, whatever. But when treated as a repository of ideas instead of a collection of publications wiki gains a certain type of power.

Ward demonstrates this nicely this morning in his federated wiki running journal. He starts his day reading this article on net neutrality and net regulation:


It’s a multi-page treatment of the relationship of law to the internet which argues that in fact we already have many other legal tools at our disposal. But Ward doesn’t summarize it, exactly. He mines it for ideas he can name and connect. He finds one, and adds it to his journal:


David Reed says, “Not all laws come from governments. There is a whole body of “common law” that is generally accepted, transcending government. One such law is that you cannot steal a package that you’ve agreed to transport from point (a) to point (b). That is true whether or not there is a “contract”. It’s just not done, and courts in any jurisdiction, no matter what the government, will hold to that principle.”

He makes a compelling case that the “inter” part of the internet works pretty well without governance by ITU or FCC or anyone else for that matter.

And he gives the idea a name: Steal the Package.

I fork the page, not necessarily because I agree (although I do, in this case) but because this is a useful concept to think with. At some later point I’ll connect that thought to an idea of mine. The whole process functions in some ways like the creation of sub-disciplinary jargon to express ideas quickly and succinctly, but it does it in a way that makes these terms accessible to anyone. In a wiki, each idea gets a page. Complex thoughts are formed by connecting pages.

That idea can quickly flow through a network, maybe even change the debate. As it flows through the network it can be extended, qualified, annotated, connected.

This is a different activity than forwarding a link via Twitter, and it’s different than writing up a response in WordPress. It’s a form of analogical, metaphorical thinking that we have barely tapped into as educators. Yet it embraces the core of education — you collect and curate a collection of ideas that will serve you well later. By chunking those ideas into terms you develop the ability to construct and express complex thoughts quickly.

We’ve come out of a decade of using wiki as a glorified book report publishing engine. We have barely tapped its educational potential at all.

Blue Hampshire’s Death Spiral

Blue Hampshire, a political community I gave years of my life to, is in a death spiral. The front page is a ghost town.

It’s so depressing, I won’t even link to it. It’s so depressing, that I haven’t been able to talk about it until now. It actually hurts that much.

This is a site that at the point I left it had 5,000 members, 10,000 posts, and 100,000 comments. And at the point co-founders Laura Clawson and Dean Barker left it circa 2011(?), it had even more than that.

And what comments! I say that *I* put sweat into it, or that Laura and Dean did, but it was the community on that site that really shone. Someone would put up a simple post, and the comments would capture history, process, policy, backstory — whatever. Check out these comments on a randomly selected post from 2007.

The post concerns an event where the local paleoconservative paper endorsed John McCain as its pick for the Democratic nomination, as a way to slight a strong field of Democrats in 2008.

What happens next is amazing, but it was the sort of thing that happened all the time on Blue Hampshire. Sure, people gripe, but they do so while giving out hidden pieces of history and background that just didn’t exist anywhere else on the web. They relate personal conversations with previous candidates, document the history the paper has of name-calling and concern-trolling.

Honest to God, this is one article, selected at random from December 2007 (admittedly, one of our top months). In December 2007, our members produced 426 articles like this. Not comments, mind you. Articles. And on so many of those articles, the comments read just like this — or better.

That’s the power of the stream, the conversational, news-peg driven way to run a community. Reddit, Daily Kos, TreeHugger, what have you.

But it’s also the tragedy of the stream, not only because sites die, but because this information doesn’t exist in any form of much use to an outsider. We’re left with the 10,000 page transcript of dead conversations that contain incredible information ungrokable to most people not there.

And honestly, this is not just a problem that affects sites in the death spiral or sites that were run as communities rather than individual blogs. The group of bloggers formerly known as the edupunks have been carrying on conversations about online learning for a decade now. There’s amazing stuff in there, such as this recent how-to post from Alan Levine, or this post on Networked Study from Jim. But when I teach students this stuff or send links to faculty I’m struck by how surprisingly difficult it is for a new person to jump into that stream and make sense of it. You’re either in the stream or out of it, toe-dipping is not allowed.

And so I’m conflicted. One of the big lessons of the past 10 years is how powerful this stream mode of doing things is. It elicits facts, know-how, and insights that would otherwise remain unstated.

But the same community that produces those effects can often lock out outsiders and leave behind indecipherable artifacts.

Does anyone else feel this? That the conversational mode while powerful is also lossy over time?

I’m not saying that the stream is bad, mind you — heck, it’s been my way of thinking about every problem since 2006. I’m pushing this thought out to all you via the stream. But working in wiki lately, I’ve started to wonder if we’ve lost a certain balance, and if we pay for that in ways hidden to us. Pay for our lack of recursion through these articles, pay for not doing the work to make all entry points feel scaffolded. If that’s true, then — well, almost EVERYTHING is stream now. So that could be a problem.




Reclaim Hackathon

Kin and Audrey have already written up pretty extensive summaries about the Reclaim event in Los Angeles. I won’t add much.

Everything was wonderful, and I hope I don’t upset people by choosing one thing over another. But there were a few things for me that stood out.

Seeing the Domain of One’s Own development trajectory. I’ve seen this at different points, but the user experience they have for the students at this point is pretty impressive.

JSON API directories. So I really like JSON, as does Kin. But at dinner on Friday he was proposing that the future was that the same way that we query a company for its APIs we would be able to query a person. I’d honestly never thought of this before. This is not an idea like OAuth, where I delegate some power/data exchange between entities. This is me making a call to the authoritative Mike Caulfield API directory and saying, hey how do I set up a videochat? Or where does Mike post his music? And pulling back from that an API call directly to my stuff. This plugged into the work he demonstrated the next day, where he is painstakingly finding all the services he uses, straight down to Expedia, and logging their APIs. I like the idea of hosted lifebits best, but in the meantime this idea of at least owning a directory of your APIs to stuff in other places is intriguing.
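To make the idea concrete, here's a minimal sketch of what a personal API directory and lookup might look like. Everything here is hypothetical — the field names, the endpoints, and the directory structure are my own illustration, not anything Kin has actually specified:

```python
# A sketch of a personal API directory: a machine-readable index mapping a
# person's services to API endpoints. All names and URLs are hypothetical.

PERSON_API_DIRECTORY = {
    "name": "Mike Caulfield",  # hypothetical example entry
    "apis": {
        "videochat": {"endpoint": "https://example.com/api/videochat", "method": "POST"},
        "music":     {"endpoint": "https://example.com/api/music",     "method": "GET"},
    },
}

def lookup_api(directory, service):
    """Return the API entry for a named service, or None if unlisted."""
    return directory.get("apis", {}).get(service)

# A caller asks "where does Mike post his music?" and gets back an endpoint
# it can hit directly — no central service in between.
entry = lookup_api(PERSON_API_DIRECTORY, "music")
print(entry["endpoint"])
```

The point of the sketch is the indirection: you query the person's own authoritative directory, and what comes back is a call against *their* stuff, wherever it happens to live.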

Evangelism Know-how. I worked for a while at a Service-Oriented Architecture obsessed company as an interface programmer (dynamically building indexes to historical newspaper archives using JavaScript and Perl off of API-returned XML). I’m newer to GitHub, but have submitted a couple pull requests through it already. So I didn’t really need Kin’s presentation on APIs or GitHub. But I sat and watched it because I wanted to learn how he did presentations. And the thing I constantly forget? Keep it simple. People aren’t offended by getting a bit of education about what they already know, and the people for whom it’s new need you to take smaller steps. As an example, Kin took the time to show how JSON can be styled into most anything. On the other hand, I’ve been running around calling SFW a Universal JSON Canvas without realizing people don’t understand why delivering JSON is radically different (and more empowering) than delivering HTML (or worse, HTML + site chrome).
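The JSON-versus-HTML point is easy to show in a toy example. With HTML you get one fixed presentation; with JSON, any consumer can re-render the same data however it likes. (The data and functions below are purely illustrative.)

```python
# The same data delivered as JSON can be rendered any way the consumer
# chooses; delivered as HTML, the presentation is baked in.

data = {"title": "Reclaim Hackathon", "items": ["Known", "SFW", "APIs"]}

def as_html(d):
    """One fixed presentation: a heading and a bulleted list."""
    lis = "".join(f"<li>{item}</li>" for item in d["items"])
    return f"<h1>{d['title']}</h1><ul>{lis}</ul>"

def as_outline(d):
    """A different presentation of the exact same data: a plain-text outline."""
    return "\n".join([d["title"]] + [f"- {item}" for item in d["items"]])

print(as_html(data))
print(as_outline(data))
```

If the server ships only the first form, every consumer is stuck with it; if it ships the data itself, the second form (and any other) costs a few lines on the client.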

Known. I saw Known in Portland, so it wasn’t new to me. But it was neat to see the reaction to it here. As Audrey points out, much of day two was getting on Known.

Smallest Federated Wiki. Based on some feedback, I’ve made a decision about how I am going to present SFW from now on. I am astounded by the possibilities of SFW at scale, but you get into unresolvable disagreements about what a heavily federated future would look like. Why? Because we don’t have any idea. I believe that for the class of documents we use most days that stressing out about whether you have the best version of a document will seem as quaint as stressing out about the number of results Google returns on a search term (remember when we used to look at the number of results and freak out a bit?). But I could be absolutely and totally wrong. And I am certain to be wrong in a lot of *instances* — it may be for your use case that federation is a really really bad idea. Federation isn’t great for policy docs, tax forms, or anything that needs to be authoritative, for instance.

So my newer approach is to start from the document angle. Start with the idea that we need a general tool to store our data, our processes, our grocery lists, our iterated thoughts. Anything that is not part of the lifestream stuff that WordPress does well. The stuff we’re now dropping into Google Docs and emails we send to ourselves. The “lightly-structured data” that Jon Udell rightly claims makes up most of our day. What would that tool have to look like?

  • It’d have to be general purpose, not single purpose (more like Google Docs than Remember the Milk)
  • It’d have to support networked documents
  • It’d have to support pages as collections of sequenced data, not visual markup
  • It’d have to have an extensible data format and functionality via plugins
  • It’d have to have some way to move your data through a social network
  • It’d have to allow the cloning and refactoring of data across multiple sites
  • It’d have to have rich versioning and rollback capability
  • It’d have to be able to serve data to other applications (in SFW, done through JSON output)
  • It’d have to have a robust flexible core that established interoperability protocols while allowing substantial customization (e.g. you can change what it does without breaking its communication with other sites).
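A couple of those requirements — pages as sequenced data rather than visual markup, and serving that data to other applications — can be sketched concretely. The structure below is loosely modeled on SFW's JSON page format (a title, a "story" of typed items, and a "journal" of edits), but the exact field names here are illustrative, not a spec:

```python
# A simplified sketch of a page as a collection of sequenced data items
# rather than markup, loosely modeled on SFW's JSON page format.
# Field names are illustrative, not an exact specification.

page = {
    "title": "Steal the Package",
    "story": [  # an ordered list of typed items, not a blob of HTML
        {"type": "paragraph", "id": "a1", "text": "Not all laws come from governments."},
        {"type": "reference", "id": "b2", "site": "ward.example.com", "title": "Common Law"},
    ],
    "journal": [  # an edit history, which is what makes forking and rollback possible
        {"type": "create", "date": 20140101},
    ],
}

def plain_text(page):
    """One possible rendering: pull the text out of the paragraph items."""
    return "\n".join(item["text"] for item in page["story"] if item["type"] == "paragraph")

print(plain_text(page))
```

Because the page is data, any other application can consume it: render it, diff it, fork it, or extract just the items it cares about — which is the empowering difference from delivering finished markup.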

Of those, the idea of a document as a collection of JSON data is pretty important, and the idea of federation as a “document-centered network” is amazing in its implications. But I don’t need to race there. I can just start by talking about the need for a general use, personal tool like this, and let the networking needs emerge from that. At some point it will turn out that you can replace things like wikis with things like this or not, but ultimately there’s a lot of value you get before that.







Napster, “All the Rave” (Notes)

I’m on a staycation of sorts, taking a few days off to do nothing. One of the nothing things I’m doing is reading a couple of books. The first, Reinventing Discovery: The New Era of Networked Science, I’ll talk about later. The other, All the Rave (a history of Napster), I’ve barely started, but it has already held some surprises for me. I wanted to jot them down here, partly to process them.

Shawn Fanning was more radical and less radical than he gets credit for. The image of Shawn we were sold back in 1999 by the Valley was of a slacker college student who just wanted to solve the problem of accessing music from anywhere and wrote Napster in a couple of weeks. The image we were sold by the RIAA was of someone ripping off artists for profit. What I didn’t realize was that Shawn was what we’d consider a legitimate gray-hat hacker before Napster, and that Napster was built with Shawn using the famous/infamous w00w00 group on IRC as an extended learning community and sometime workforce. In fact, since Shawn was completely unfamiliar with Windows programming, he leaned heavily on the group to help him figure out how to build his prototype. It was a hacker IRC project from the start, both idealistic and radical. It wasn’t about the money, but it was also meant to shake things up from the start.

Shawn seems to have been aware that this was not about music, but about rethinking the web. Shawn mentions to the author that what struck him about IRC vs. the web was that IRC was “presence aware”. Here’s the author talking about that in a recent interview:

JOSEPH_MENN: Shawn’s great insight was that there was no reason that he could not combine the power of a search engine like Google with what is known as “presence-awareness” of instant messaging and other systems. In this way, only people whose MP3 files were available at any one moment would have those files listed for others to find. 

From very early on, Shawn seemed to have a good insight about how a presence-aware web made a peer-to-peer architecture possible. To get the progression it is useful to think through what existed at the time — people were indexing MP3 sites, Google-style. But by the time you went to most of these sites they were down or gone, or the published files had changed. I may be wrong about this, but my understanding from the book was that the way Napster worked was that when you logged on you published your index to the server, and while you were connected those results would be part of the searched database. That’s a fundamentally different way of thinking about the problem of search. The Berners-Lee web, the Google web (or, I guess back then, the AltaVista web) is based largely on the conception of permanently available URLs. Google just assumes your server is there because that’s how the server-client web works. And that assumption ends up making a particular type of web run by a particular type of people. Presence awareness subverts that.
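Here's a toy sketch of that logon-publishes-your-index model. This is purely illustrative — the class and method names are mine, and this is nothing like Napster's actual protocol — but it captures the key property: a file is searchable only while its owner is connected.

```python
# A minimal sketch of presence-aware search: a client's file index is part of
# the searchable database only while that client is connected.
# (Illustrative only; not Napster's actual protocol.)

class PresenceIndex:
    def __init__(self):
        self.online = {}  # user -> list of published filenames

    def login(self, user, files):
        self.online[user] = files       # publishing your index happens at connect time

    def logout(self, user):
        self.online.pop(user, None)     # your files vanish from search on disconnect

    def search(self, term):
        """Search only what is available right now, from whoever is online."""
        return [(user, f) for user, files in self.online.items()
                for f in files if term.lower() in f.lower()]

idx = PresenceIndex()
idx.login("alice", ["Song-One.mp3", "Other.mp3"])
print(idx.search("song"))   # alice's file is findable while she is online
idx.logout("alice")
print(idx.search("song"))   # and gone the moment she disconnects
```

Contrast this with a crawled web index, which keeps returning URLs long after the server behind them has gone dark — the dead-MP3-site problem described above.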

In today’s world that may seem old hat. Or maybe not. The more I think about it, the more I think the promise of that vision has never really been delivered on.

To download is to share. This was a design decision that probably owed a lot to the IRC culture that Shawn came out of. While you were using Napster to download you were also publishing your index and opening up to download (at least by default). It was just the way it worked. Use = sharing by design.

Now you could circumvent that, and some did. But the point was you had to hack your way to free-riding, it wasn’t the default. Whatever you believe about Napster and file-sharing it’s a powerful example of how software design is a driver of culture as well as a product of it.

It was not as simple as “The Music Industry Killed Napster”. I’d love it to be that simple a story. And I’m not to the Metallica portion of the story yet, but it’s already clear that Napster had issues before then. Most notably, Fanning’s uncle negotiated himself a 70% stake in the company early on, and became a huge corporate liability. His influence made investment problematic and governance a massive problem. So there you go — the music industry can screw you, but it takes family to *really* fuck you up.

Maybe more notes as I get further into the book; I find even when reading for fun I do better if I process the experience via blogging.

The Sieve Manufacture Continues at Udacity

From Udacity last week, regarding the phasing out of free certificates:

“We owe it to you, our hard working students, that we do whatever we can to ensure your certificate is as valuable as possible.”


“We have now heard from many students and employers alike that they would like to see more rigor in certifying actual accomplishments.”

Jonathan Rees has more on this odd phrasing about what is essentially a decision to charge students for what used to be a free education:

Perhaps I’m reading too much into this here, but I think this announcement raises profound questions about what education actually is, or perhaps simply what it’s supposed to accomplish. Is higher education a good thing because of the skills it represents or is it a good thing because you have it and others don’t?

To which I’d answer, no, you’re not reading too much into it. As I said back in November about Thrun’s sudden pivot:

There’ll likely be lots of analysis on this article and change in direction. Here’s my little contribution. Thrun can’t build a bucket that doesn’t leak, so he’s going to sell sieves…Udacity dithered for a bit on whether it would be accountable for student outcomes. Failures at San Jose State put an end to that. The move now is to return to the original idea: high failure rates and dropouts are features, not bugs, because they represent a way to thin pools of applicants for potential employers. Thrun is moving to an area where he is unaccountable, because accountability is hard.

If thinning pools through creating high-failure courses is good, then thinning pools through creating high-failure courses and costly identity verification is even better. It’s MOOCs as Meritocracies, and it’s just as dumb (and culturally illiterate) an idea as it was in 2012, when the Chronicle was drooling over it.

Meritocracy is not a system, but rather a myth power tells itself. It’s openness as a privilege multiplier. It’s the antithesis of open education, which must measure its success in outputs, not inputs. You can’t claim you’ve granted people access to the top shelf of goods if you don’t provide them a stepladder.

I would disagree with one small point in Jonathan’s post, however. He indicts all MOOCs for this view. There are, I think, significant differences in the approach of both edX and Coursera to these issues. Most notably, edX has Justin Reich on board, and Justin Reich’s research is in exactly this area — how do we make sure that openness closes gaps rather than widening them. But I’ve also seen Coursera publicly grapple with this question in a way Udacity has walked away from. It’s clear to me that Coursera at the very least is *committed* to gap-closing as a principle, even if they have not yet achieved that in practice.

And that’s a difference worth noting and encouraging. There will be fundamental changes coming to Coursera’s model soon, and the question is which way will they lean. Let’s hope it’s not towards the sieve market Udacity continues to pioneer.

Gruber: “It’s all the Web”

Tim Owens pointed me to this excellent piece by John Gruber. Gruber has been portrayed in the past as a bit too in the Apple camp; but I don’t think anyone denies he’s one of the sharper commentators out there on the direction of the Web. He’s also the inventor of Markdown, the world’s best microformat, so massive cred there as well.

In any case, Gruber gets at a piece of what I’ve been digging at the past few months, but from a different direction. Responding to a piece on the “death of the mobile web”, he says:

I think Dixon has it all wrong. We shouldn’t think of the “web” as only what renders inside a web browser. The web is HTTP, and the open Internet. What exactly are people doing with these mobile apps? Largely, using the same services, which, on the desktop, they use in a web browser. Plus, on mobile, the difference between “apps” and “the web” is easily conflated. When I’m using Tweetbot, for example, much of my time in the app is spent reading web pages rendered in a web browser. Surely that’s true of mobile Facebook users, as well. What should that count as, “app” or “web”?

I publish a website, but tens of thousands of my most loyal readers consume it using RSS apps. What should they count as, “app” or “web”?

I say: who cares? It’s all the web.

I firmly believe this is true. But why does it matter to us in edtech?

  • Edtech producers have to get out of browser-centrism. Right now, mobile apps are often dumbed-down versions of a more functional web interface. But the mobile revolution isn’t about mobile, it’s about hybrid apps and the push of identity/lifestream management up to the OS. As hybrid apps become the norm on more powerful machines we should expect to start seeing the web version becoming the fall-back version. This is already the case with desktop Twitter clients, for example — you can do much more with Tweetdeck than you can with the Twitter web client — because once you’re freed from the restrictions of running everything through the same HTML-based, cookie-stated, security-constrained client you can actually produce really functional interfaces and plug into the affordances of the local system. I expect people will still launch many products to the web, but hybrid on the desktop will become a first class citizen.
  • It’s not about DIY, it’s about hackable worldware. You do everything yourself to some extent. If you don’t build the engine, you still drive the car. If you don’t drive the car, you still choose the route. DIY is a never-ending rabbit-hole as a goal in itself. The question for me is not DIY, but the old question of educational software vs. worldware. Part of what we are doing is giving students strategies they can use to tackle problems they encounter (think Jon Udell’s “Strategies for Internet citizens”). What this means in practice is that they must learn to use common non-educational software to solve problems. In 1995, that worldware was desktop software. In 2006, that worldware was browser-based apps. In 2014, it’s increasingly hybrid apps. If we are committed to worldware as a vision, we have to engage with the new environment. Are some of these strategies durable across time and technologies? Absolutely. But if we believe that, then surely we can translate our ideals to the new paradigm.
  • Open is in danger of being left behind. Open education mastered the textbook just as the battle moved into the realm of interactive web-based practice. I see the same thing potentially happening here, as we build a complete and open replacement to an environment no one uses anymore.

OK, so what can we do? The first thing is to get over the religion of the browser. It’s the king of web apps, absolutely. But it’s no more pure or less pure an approach than anything else.

The second thing we can do is experiment with hackable hybrid processes. One of the fascinating things to me about file based publishing systems is how they can plug into an ecosystem that involves locally run software. I don’t know where experimentation with that will lead, but it seems to me a profitable way to look at hybrid approaches without necessarily writing code for Android or iOS.

Finally, we need to hack apps. Maybe that means chaining stuff up with IFTTT. Maybe it means actually coding them. But if we truly want to “interrogate the technologies” that guide our daily life, you can’t do that and exclude the technologies that people use most frequently in 2014. The bar for some educational technologists in 2008 was coding up templates and stringing together server-side extensions. That’s still important, but we need to be doing equivalent things with hybrid apps. This is the nature of technology — the target moves.




Teaching the Distributed Flip [Slides & Small Rant]

Due to a moving-related injury I was sadly unable to attend ET4Online this year. Luckily my two co-presenters for the “Teaching the Distributed Flip” presentation carried the torch forward, showing what recent research and experimentation has found regarding how MOOCs are used in blended scenarios.

Here are the slides, which actually capture some interesting stuff (as opposed to my often abstract slides — Jim Groom can insert “Scottish Twee Diagram” joke here):


One of the things I was thinking as we put together these slides is how little true discussion there has been on this subject over the past year and a half. Amy and I came into contact with the University System of Maryland flip project via the MOOC Research Initiative conference last December, and we quickly found we were encountering the same unreported opportunities and barriers they were in their work. In our work, you could possibly say the lack of coverage was due to the scattered nature of the projects (it’d be a lousy argument, but you could say it). But the Maryland project is huge. It’s much larger and better focused than the Udacity/SJSU experiment. Yet, as far as I can tell, it’s crickets from the industry press, and disinterest from much of the research community.

So what the heck is going on here? Why aren’t we seeing more coverage of these experiments, more sharing of these results? The findings are fascinating to me. Again and again we find that the use of these resources energizes the faculty. Certainly, there’s a self-selection bias here. But given how crushing experimenting with a flipped model can be without adequate resources, the ability of such resources to spur innovation is nontrivial. Again and again we also find that local modification is *crucial* to the success of these efforts, and that lack of access to flip-focused affordances works against potential impact and adoption.

Some folks in the industry get this — the fact that the MRI conference and the ET4Online conference invited presentations on this issue shows the commitment of certain folks to exploring this area. But the rest of the world seems to have lost interest when Thrun discovered you couldn’t teach students at a marginal cost of zero. And the remaining entities seem really reluctant to seriously engage with these known issues of local use and modification. The idea that there is some tension between the local and the global is seen as a temporary issue rather than an ongoing design concern.

In any case, despite my absence I’m super happy to have brought two leaders in this area — Amy Collier at Stanford Online and MJ Bishop at USMD — together. And I’m not going to despair over missing this session too much, because if there is any sense in this industry at all this will soon be one of many such events. Thrun walked off with the available oxygen in the room quite some time ago. It’s time to re-engage with the people who were here before, are here after, and have been uncovering some really useful stuff. Could we do that? Could we do that soon? Or do we need to make absurd statements about a ten university world to get a bit of attention?

Why I Don’t Edit Wikis (And Why You Don’t Either, and What We Can Do About That)

Back in the heady days of 2008, I was tempted to edit a Wikipedia article. Tempted. Jim Groom had just released EDUPUNK to the world, and someone had put up a stub on Wikipedia for the term. Given I was involved with the earlier discussions on the term, I thought I’d pitch in.

Of course, what happened instead was a talkpage war on whether there was sufficient notability to the term. Apparently the hundred or so blog posts on the term did not provide notability, since they did not exist in print form. Here’s the sort of maddening quote that followed after Jim got on the page and had granted CC-BY status to a photo so Wikipedia could use it. Speaking as a Wikipedia regular, one editor argues vociferously against the idea EDUPUNK deserves a page on the site:

This is clearly a meme. No one agrees what it means, its nice that a group of educators are so fond of wikipedia but it shouldnt be used for the purpose of promoting a new website and group. Even in this talk page this becomes clear, the poster boy says “Hey Enric, both of these images are already licensed under CC with a 2.0 nc-sa”Attribution-Share Alike 2.0 Generic.” It wouldn’t be very EDUPUNK if they weren’t ” then goes on to change the copyright of his own image to include it in this article, this is not ideology, this is a marketing campaign.

There’s a couple things to note here. First, the person whining above is not wrong, per se. This article is a public billboard of sorts, vulnerable to abuse by marketers, and vigilance makes sense. But ultimately his — and given Wikipedia’s gender bias it’s almost certainly a he — his protestations end up being ridiculous. EDUPUNK ends up a few months later being chosen as one of the words of the year by the New York Times, at the same time Wikipedia is unable to agree if it rises to the dizzying notability heights of fish finger sandwich.

But the most telling part of that comment is this:

No one agrees what it means, its nice that a group of educators are so fond of wikipedia but it shouldnt be used for the purpose of promoting a new website and group.

No one agrees what it means. Ward Cunningham, the guy who invented wikis, has been talking a while about the problem with this assumption — that we must agree immediately on these sorts of sites — and believes it to be the fundamental flaw of wikis. The idea that people should engage with one another and try to come to common understanding is a good thing, absolutely. The flaw, however, is that wiki format pushes you toward immediate consensus. The format doesn’t give people enough time to develop their own ideas individually or as a subgroup. So an article about fish finger sandwiches can get written (we’re all in agreement, good!) whereas an article on EDUPUNK can’t get written (too many different viewpoints, bad!).

It’s important to note Cunningham’s exact point here. Many people have gone after the culture of Wikipedia in recent years, a culture which is increasingly broken. Cunningham’s point is that the culture is a product of the tool itself, which doesn’t give folks enough alone time. We need to break off, develop our ideas, and come back and reconcile them. And we need a tool that encourages us to do that.

I’ve been thinking this through for a bit, trying to come up with a solution to this problem that has the spirit of Cunningham’s proposed federated wiki but is easier for people to wrap their heads around. Here’s the basic idea, mostly carried forward from Cunningham, but eliminating a couple more complex concepts, and simplifying concepts and implementation.

  1. I install a wiki on my server, but it’s not empty. It’s a copy of a reference on online learning (or some other reference of interest to me), with all wiki pages transcluded. For the uninitiated, what this means is my wiki “passes through” the existing wiki pages. For the purposes of imagining this, let’s pretend I just pull 2500 articles about learning and networks from Wikipedia, and transclude them on my wiki/server.
  2. I then join a federation. So let’s say I join a federation of a 100 instructional designers and technologists. This changes search for me, because search on my wiki is federated now. I can search across the federation for an article on EDUPUNK. Let’s say it’s 2008 and I’m looking for a quick explanatory link on EDUPUNK to send someone. I pump in that search and find there’s five or six somewhat crappy treatments, and one half decent one by Martin Weller.
  3. I don’t edit it. Or rather, I do, but the minute I edit it, this becomes a fork that only lives on my server. So I fix it up without having to get into long arguments with people about notability, etc. When done, I shoot a link to the person I wanted to send the article to. My selfish needs are met.
  4. Now, however, when anyone goes to their EDUPUNK article in the federation, they see that I’ve written a new version. Some people decide to adopt this as their version. Martin Weller sees my edits, and works about half of them into his version along with some other stuff. Jim comes by and adopts Martin’s new version with some changes. It’s better than my version, so I adopt that one.
  5. Tools start to show a coalescence around the Martin-Me-Martin-Jim version. A wiki gardener in charge of the “hub” version looks at the various versions and pulls them together, favoring the Martin-Me-Martin-Jim version, but incorporating other elements as well. This version will get distributed when new people join the federation, but as before, people can fork it, and existing forks remain intact.

The idea here is that forks preserve information by giving people the freedom to edit egocentrically, but that the system makes reconciliation easy by keeping track of the other versions, so that periodic gardening can bring these versions together back into a more generic whole.
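The fork-and-reconcile model above can be made concrete with a small sketch. This is purely illustrative; the names (`Page`, `Federation`, `fork`) are my own inventions, not the API of any real federated wiki software:

```python
# A minimal, hypothetical sketch of the fork-and-reconcile model.
from dataclasses import dataclass, field

@dataclass
class Page:
    title: str
    body: str
    owner: str             # whose server this version lives on
    parent: object = None  # the version this one was forked from

    def fork(self, new_owner, new_body):
        # Editing never mutates someone else's copy; it creates a new
        # version on the editor's own server, linked back to its parent.
        return Page(self.title, new_body, new_owner, parent=self)

@dataclass
class Federation:
    pages: list = field(default_factory=list)

    def versions_of(self, title):
        # Federated search: every member's version of a page is visible,
        # which is what makes periodic gardening/reconciliation possible.
        return [p for p in self.pages if p.title == title]

# Walking through steps 3-5: Martin's page, my fork, Martin's merge.
fed = Federation()
martins = Page("EDUPUNK", "A half-decent treatment.", owner="martin")
mine = martins.fork("mike", "The treatment, fixed up for my own needs.")
martins_merge = mine.fork("martin", "Martin folds half my edits back in.")
fed.pages.extend([martins, mine, martins_merge])

assert len(fed.versions_of("EDUPUNK")) == 3   # all forks remain intact
assert martins_merge.parent is mine and mine.parent is martins
```

The point the sketch makes is structural: because a fork records its parent and no version ever overwrites another, a gardener can later walk the version graph and pull a coalesced “hub” version together without destroying anyone’s egocentric edits.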

You can think about this from any number of angles — imagine an online textbook, for example, that allowed you to see all the modifications made to that textbook by other instructors — and not edits living on a corporate server owned by Harcourt-Brace, but edits that were truly distributed. Imagine a federated student wiki, where your students could build out their articles in peace during the semester, seeing how other students had forked and modified their articles, but keeping control of their subsite, and not being forced to accept outside edits. The student’s final work would reflect *their* set of decisions about the subject and the critiques of their treatment of it. Or imagine support documentation that kept track of localizations, making it easy to see what things various clients needed to clarify, and making those changes available to all.

Anyway, this is the idea. Encourage forking, but make reconciliation easy. It’s the way things are going, and the implications for both OER production and academic wikis are huge.

A Plan for a $10K Degree: A Response

A new proposal is out from Third Way, authored by Anya Kamenetz. It makes an argument for a radical restructuring of higher education in pursuit of a radically cheaper degree. I plan to write a few blog posts on its proposals. This is the first.

There are many things to like about the plan.

I like the scope of the plan. It’s ambitious, but it starts from the premise that we have a rich public educational infrastructure in the U.S. that needs to be reconfigured, not abandoned, dismantled, privatized, or routed around. For that reason alone I think the proposal is worth serious debate.

I like that it correctly diagnoses much of what ails education: it’s a system where competition has distorted our institutions’ priorities, channeling effort into the wrong areas and producing a structure that does not work to accomplish our stated mission.

And Anya’s six “steps” are, I think, roughly correct:

  • Reduce and restructure personnel
  • End the perk wars
  • Focus on college completion
  • Scale up blended learning
  • Streamline offerings
  • Rethink college (system) architecture

So it’s a good pass at the issue. At the same time there are some issues at the detail level that require elaboration. Today I want to talk about three pieces of the plan — the $10K premise, the personnel restructuring, and the perk wars.

The $10,000 Degree

I’m not sure how we got to this $10,000 degree number. I went to college in 1987; at that time my four year tuition was around ten to fifteen thousand dollars. If Wolfram Alpha is right, that would be $20,000-$30,000 in today’s dollars. And that isn’t counting the much more sizable state subsidy that we had at that time.

The $10,000 degree also doesn’t jibe with what we know about cost in other sectors. A half decent high school will spend $10,000 per student per year on instruction in a flattened no-frills model that looks much like Anya’s proposal. Even assuming a subsidy could halve the student side of that (a big assumption), we’re still left with $20,000 for four years.
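The back-of-the-envelope arithmetic in the last two paragraphs can be made explicit. (The roughly 2x multiplier for converting 1987 dollars is the Wolfram Alpha estimate cited above, not a precise CPI calculation.)

```python
# Sanity-checking the $10K-degree figure against the post's own numbers.

hs_per_year = 10_000                  # no-frills high school, per student
four_year_total = 4 * hs_per_year     # comparable four-year cost: $40,000
student_share = four_year_total // 2  # generously assume subsidy halves it
assert student_share == 20_000        # still twice the $10K target

tuition_1987 = (10_000, 15_000)       # rough four-year tuition range, 1987
in_todays_dollars = tuple(2 * t for t in tuition_1987)
assert in_todays_dollars == (20_000, 30_000)
```

Either way you run it, the floor lands around $20,000, which is why the $10,000 target reads as rhetoric rather than math.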

As a final check, we can look at cost per graduate numbers as they currently stand, and see that they range from about $28,000 to $500,000. The “disruptive” school that Clayton Christensen wrote an entire book about, BYU-Idaho, has gotten cost per completion down to about $30,000. A policy that shoots for a result that is likely a couple standard deviations out from the mean is a policy designed to fail.

I’d argue that if we are going to pick a number, it should be one grounded in data, not rhetoric. If you want to see what overly rhetorical stretch goals do to a social institution, you can look at what No Child Left Behind’s targets have done to K-12. A $20,000 or $25,000 degree is not as sexy as its Texas cousin, but represents a difficult target that may be achievable, would largely solve the student debt problem, and would not create the sort of unprofitable schism that talk of $10,000 degrees leads to.

Reduce and Restructure Personnel

Here Anya breaks the traditional roles in a university into three: Academic Advisors/Mentors, Instructor/Instructional Technologists, and Professor/Instructional Designers. I applaud the rethinking of roles, and think these role delineations are better than what we have currently, but wonder to what extent they are sustainable. People I know all over the country are trying to hire instructional designers and instructional technologists right now. They are incredibly rare, and much more expensive than your average college professor. They also have profitable options in private industry not always available to the average history professor.

And of course finding people highly qualified in their academic discipline who have instructional design experience as well only gets more difficult (and hence, more expensive).

The problem here is that the narrative that schools are expensive because they are administration/staff heavy is in conflict with the narrative that we need more expertise in delivery. In companies that compete for instructional design bids, positions are far more specialized and role-differentiated than one finds at colleges. This is because expertise is rare and expensive, and must be shared across multiple projects.

Ending the Perk Wars

We should end the perk wars, agreed. Campuses are going to have to increasingly organize around the assumption that their students don’t live on campus, and develop communities that are less focused on “campus life” and more focused on “college life”. The attempts to lure richer students to campus with country-club features have to stop.

Anya also suggests that extracurriculars should be defunded, however, and that is a social justice issue for me. Just as exiling art classes from grade school has resulted in art classes for rich kids, and nothing for the poor, so exiling extracurriculars from state schools will result in a subpar, incomplete education for lower-income students. I learned much from working at the radio station at my college and working on the literary journal — much more than I did in most classes. Many students will tell you the same about the clubs they belonged to, and many faculty will tell you they had more impact as advisers to these clubs than in their classes.

More next week, and a note on bloat

Next week I’ll go through the rest of the plan (or at least the next third of it). Looking at the points I’ve dealt with today, the theme that strikes me is that bloat doesn’t work the way people think it does. As companies become more efficient, roles differentiate, and there end up being somewhat fewer frontline staff.

The tendency is to call all non-frontline staff “bloat”, whether they are lab maintenance specialists, instructional technologists, or student financial aid experts. Programs are similar: extracurricular activities (the Geology club) are “bloat”, whereas Geology 101 is core, regardless of the relative impact of each of these on education.

This doesn’t happen in any other industry I’m aware of. We don’t look at a software company and declare that everyone who is not a programmer is “bloat”. Yet the truth is that many elements of interface design that are dealt with by programmers early on in a company’s history are moved to interface designers and human factors experts. Product features that were once the scope of the senior coder are moved into “management” areas, such as product leads. This is because while there are a select number of people who can be expert in many things in a five-person start up, you cannot build a company on them. To build a company you find experts in specific areas, and build the management structure that allows those experts to work together (we can debate what that structure should look like; it can certainly be agile in nature, but it must be put in place).

All of this allows companies to deliver a better product at a reduced cost. My guess is that if education is truly going to get cheaper we will need to see more role differentiation not less, and start considering extracurricular activities in light of how they provide sometimes invisible support for the curriculum. Most of all we have to get beyond simplistic definitions of “bloat” and move towards a more nuanced understanding of a decades-long shift of instructional and advising expertise from faculty to staff.

More to come…

The Myth of the All-in-one

Beware the All-in-one.
Occasionally (well, OK, more than occasionally) I’m asked why we can’t just get a single educational tech application that would have everything our students could need — blogging, wikis, messaging, link-curation, etc.

The simple answer to that is that such a tool does exist: it’s called SharePoint, and it’s where content goes to die.

The more complex answer is that we are always balancing the compatibility of tools with one another against the compatibility of tools with the task at hand.

The compatibility of tools with each other tends to be the most visible aspect of compatibility. You have to remember whether you typed something up in Word or Google Docs, and remember what your username was on which account. There’s also a lot of cognitive load associated with deciding what tool to use and with learning new processes, and that stresses you out and wastes time better spent on doing stuff that matters.

But the hidden compatibility issue is whether the tools are appropriate to the task we have at hand. Case in point — I am a Markdown fan. I find that using Markdown to write documents keeps me focused on the document’s verbal flow instead of its look. I write better when I write in Markdown than I do when I write in Google Docs, and better in Google Docs than when I write in Word. For me, clarity of prose is inversely proportional to the number of icons on the editing ribbon.
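For readers who haven’t used it, Markdown expresses structure with plain punctuation rather than toolbar buttons, so the text stays readable as text:

```markdown
# A heading is a hash mark

Emphasis is a pair of *asterisks*, and a list is just:

- one hyphen
- per item
```

There is no ribbon to fiddle with, which is exactly the point: the formatting vocabulary is small enough to disappear, leaving only the prose.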

Today, Alan Levine introduced me to the tool I am typing in right now — a lightweight piece of software called draftin. Draftin is a tool that is designed around the ways writers work and collaborate, rather than the way that coders think about office software. It uses Markdown, integrates with file sharing services, and sports a revise/merge feature that pulls the Microsoft Word “Merge Revisions” process into the age of cloud storage.

As I think about it, though, it’s also a great example of why the all-in-one dream is an empty one. If I were teaching a composition class, this tool would be a godsend, both in terms of the collaboration model (where students make suggested edits that are either accepted or rejected) and in the way Markdown refocuses student attention on the text. Part of the art of teaching (and part of the art of working) is in the calculus of how the benefits of the new tool stack up against the cognitive load the new tool imposes on the user.

We want more integration, absolutely. Better APIs, better protocols, more fluid sharing. Reduced lock-in, unbundled services, common authentication. These things will help. But ultimately cutting a liveable path between yet-another-tool syndrome and I-have-a-hammer-this-is-a-nail disease has been part of the human experience since the first human thought that chipped flint might outperform pointy stick. The search for the all-in-one is, at its heart, a longing for the end of history. And for most of us, that isn’t what we want at all.

Photo credit: flickr/clang boom steam