Critique by Redesign and Revision

David Wiley’s Remix Hypothesis[1] is that we won’t see the full impact of digital culture on education until we embrace the central affordance of digital media — its remixability, by which he means the ability of others to directly manipulate the media for reuse, revision, or adaptation to local circumstance. I think this is an important enough concept that it’s worth expanding on over a few posts.

To review, we’re used to conversation — that transient many-to-many chain of utterances. And with the advent of cheap books many centuries ago it became common for us to think in terms of publication, that permanent pass at a subject that rises above particular context. And we’ve played with these forms in profitable experiments, publishing conversations and building conversational publications.

But what we have done only sporadically is to use the fluidity of digital media to have the sort of “conversation through editing” that digital media makes possible.

There are, of course, precedents. My sister makes her living building models of complex phenomena in Excel spreadsheets. She’s considered a bit of a rock star at this. Analytics people laugh, I suppose, at the use of Excel (although frankly, she probably out-earns them). Why would a business pay her so much money for a *spreadsheet*? There are so many sexy tools out there!

The answer is that her spreadsheets are the start of a conversation. She works to capture the knowledge of the organization in Excel and model it, producing projections and the like. But that’s only the first step. When the spreadsheet is done it’s handed off to people who continue that conversation not by drive-by commenting that the model is all wrong, but by changing the assumptions and the model and showing the impact of those changes. Because Excel is a common currency among the people involved, these conversations can pull in a wide range of expertise, and ultimately improve the model or assumptions.
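To make that concrete, here’s a minimal sketch of conversation-by-editing, in Python rather than Excel — the model, the numbers, and the assumption names are all invented:

```python
# A toy projection model. The "conversation" happens by editing the
# assumptions and re-running, not by commenting that the model is wrong.
def project_revenue(assumptions, quarters=4):
    revenue = assumptions["starting_revenue"]
    projections = []
    for _ in range(quarters):
        revenue *= 1 + assumptions["growth_rate"] - assumptions["churn_rate"]
        projections.append(round(revenue))
    return projections

assumptions = {"starting_revenue": 100_000, "growth_rate": 0.08, "churn_rate": 0.02}
print(project_revenue(assumptions))   # the analyst's first pass

# A colleague who thinks growth is optimistic doesn't drive-by comment;
# they change the assumption and show everyone what it does downstream.
assumptions["growth_rate"] = 0.04
print(project_revenue(assumptions))
```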

Today I found another precursor — “critique by redesign” in the visualization arena. Even in the days before digital media, visualizations were built on few enough data points that the more effective way to converse about them was to redesign them. Here’s a famous example from Tufte:

[Image: Tufte’s redesign of the Challenger O-ring damage chart]

This redesign is meant to be an improvement over the original, showing how damage to the space shuttle O-rings increases as temperature dips. The critique of the original graphic isn’t (purely) conversation about its flaws — it’s a revision.

Design and Redesign in Data Visualization (from which I pulled this example) comments:

But the process of giving and even receiving visualization criticism does turn out to hold surprises. It’s not just that visualization is so new, or that criticism can stir up emotions in any medium. As we’ll discuss, the fact that visualizations are based on transforming raw data means that criticism can take forms that would be impossible for a movie or book.

The authors are absolutely correct, and yet as all work becomes digitized it’s likely both print and film will add critique by redesign or revision to their respective cultures.

On the edge of networked digital media we see that permissionless remix, revision, and adaptation accomplish what traditional conversation and publication could not. We’ll note that collaborative revision of documents has become normal in most businesses, finally unlocking the full potential of digital editing. As I’ve noted elsewhere, electronic music is undergoing a similar revolution. Wikipedia, the one impossible thing the web has produced, created the most massive work in the history of mankind using similar processes. Coders on large complex projects critique code by changing code, letting revision histories stand in for debate where appropriate.

We are educating our students in the art of online conversation and publication, and this is important. It represents the scaling of those cultures of talk and print, and maybe the democratization of them too (although that question is more fraught). And remix *needs* these skills: publication is the start of remix, and conversation around artifacts is what makes the process intelligible.


But there’s a third leg to this stool, and it’s as cultural as it is technological. Our students (and teachers!) need to learn how to supplement comment and publication with direct revision and repurposing. It’s only then that we’ll see the true possibilities of a world of connected digital media.[2]


1. I’m shamelessly borrowing and slightly expanding David Wiley’s excellent term here. For more context and a great read, go to The Remix Hypothesis.

2. I think we do this in education, a bit. But not nearly at the level it merits, and the tools we have still mostly treat this as an afterthought.


Paper Thoughts and the Remix Hypothesis

David Wiley has an excellent post out today on a subject dear to my heart — the failure to take advantage of the peculiar affordances of digital objects.

Yeah, I know. Jargon. But here’s a phrase from Bret Victor that gets at what I mean:

“We’re computer users thinking paper thoughts”  – Bret Victor

You can do a lot of things with digital media. You can chat in a forum, which is rather like conversation. You can put out a blog post, which is rather like print publication. You can tweet, which is rather like, um, conversation. You can watch a video, which is rather like publication. You can post to Instagram, which is rather like, um…publication. With conversation attached. You can put out a course framework, which is rather like publication of a space where people can have conversations.

Hmmmm….

There are really only two modes most people currently think in. One is conversation, where transient messages are passed in a many-to-many mode. The other is publication, where people communicate in a one-to-many way that has more permanence.

What we are seeing now in education, for the most part, is the automation and scaling of conversation and publication. And this is what always happens with new technology — initially the focus is on doing what you’ve been doing, but doing it more cheaply or more often.

But that’s not where the real benefits come from. The benefits come when you start thinking in the peculiar terms of the medium, and getting beyond the previous forms.

I would argue (along with Alan Kay and so many others) that for digital media the most radical affordance is the remixability of the form (what Kay would call its dynamism). We can represent ideas not as finished publications, but as editable models that can be shared, redefined, and recontextualized. Conversations are transient, publications are fixed. But digital media can be forever fluid, if we let it.

We see this in music. I’m a person who has benefitted from the crashing price of digital audio workstations and the distribution channels now available for music. These have allowed me to record things that would be impossible for a single person to record even ten years ago. Distribution channels have led to weird incidents, like having a multi-week number one song on Latvian college radio stations in 2011 (so broadly played, in fact, that I actually made the Latvian Airplay Top 40).

This is cool stuff, absolutely. But it’s not the real revolution.

To produce music, I use Propellerhead Reason, and I suppose you could say that tools like it have changed the industry at the margins. But nothing like what is about to happen to music with the new breed of tools.

The latest release of Reason, for example, doesn’t make music any cheaper than the last one. Its big advance is a tool called Discover, which allows artists to share material to a commons that other artists can mine for inspiration.

And here’s the key — the material is directly editable and resharable by anyone. It is music as something forever fluid.

This is a marketing video for the new feature, but it’s short, and you should watch it, because I think it shows the future of education as well. And because I really think you need to see it. I really, really do.

Now let me ask you — what would happen if our students could work across classes in this way? If our teachers could collaborate in this way?

This, and really nothing else, is the thing to watch. These people who are talking up the Uber-Netflix-Amazon of Education as the future? That is so tiny a vision that it depresses the hell out of me. I don’t worry that education can’t catch up to industry in these spaces. I worry that we’ll be pulled down by their conservatism and small-mindedness.

You should worry about that too. Because Uber is a taxi service co-op with a services center that skims money off the top. Amazon is a very effective mail-order company. Netflix supplies video-on-demand. All of these are done in ways that are made highly efficient by technology, but not one of them taps into the particular affordances of digital media (beyond reproducibility).

We need to think bigger. What David is concerned about in terms of teacher collaboration (how do we get teachers to tap into the affordances of fluidity) is what I am concerned about with students (how do we move past the forms of conversation and publication to something truly new). We can have a future as big as we like if we can get beyond these paper thoughts. We’re starting to see this sort of thinking in the music software industry and glimmers of it in education (see, for example, the new Creative Commons-focused approaches to LORs).

These glimmers happen in a world that has been distracted with other more trivial things (Videos with multiple choice questions! Learning styles!). They happen in a world that continues to think the primary benefit of the digital world is that it’s cheap.

What would happen if we moved remix to the center of the conversation? What would happen if we stressed remix for students as well as faculty? What could we accomplish? And if a little Swedish audio workstation company can see the future, why can’t we?


Age of the Incunable

After the western invention of movable type, not much changed for a very long time. It took many, many years for people to realize the peculiar possibilities of cheap, printed texts.

Gutenberg invents the Western version of movable type in the 1440s, and it’s in use by 1450. He thinks of it in terms of cost, really. Efficiency.

You can print cheap Bibles – still in Latin, mind you. Affordable chess manuals.

He dies broke, by the way.

For almost fifty years, change creeps along.

They have a name for books of this period, which I love: “Incunabula”. Or if we go singular, the incunable. So we could call this the “Age of the Incunable”.

[Image: Detail of a Gutenberg Bible. Source.]

This is what books look like at that time. Almost identical in form and function, style and content to medieval manuscripts.

Just to be really clear – this is a machine printed book here, later adorned by hand. In case you didn’t notice.

There were printed books, but there was no book culture. There were printed books but there was no shift in what those books did.

But then things change. First in the Italian presses. Bibles are printed in Italian, for example. Illustrations become more common.

Aldus Manutius creates the “pocket book” in an octavo format, somewhere around 1500. We get cheap mobility. In 1501, his shop ditches the Calligraphic font for early “Roman fonts” more like the unadorned fonts we know today.

Sentence structure starts to change. We start to develop written forms of argument that have no parallel in verbal rhetoric. Ways of talking that don’t exist in oral culture.

People learn to read silently, which is huge, at three to four times the speed of reading aloud.

And here’s the transition: We start to think the sort of thoughts that are impossible without books.

[Image: De Revolutionibus Orbium, by Copernicus, 2nd edition, 1566. Source.]

And it’s almost 70 years after Gutenberg that you see a real print culture emerge. Copernicus, Luther, etc. What we start to see is how fast new ideas can spread. We start to see what happens when every believer has their own Bible in which to look up things, in their own language.

We see what happens when an idea can be proposed and replied to across a continent in months rather than decades. We start to see the impact of the long tail of the past, what happens when esoteric works of the past, long hidden away, can be mass produced. What happens when you get Aristotle for everyone. What happens when every scientist can get his hands on a copy of Copernicus.

And Churches fell. And Science was born. And Governments toppled.

But 70 years later.

It’s something worth remembering for those of us excited about the educational affordances of digital material and networked learning. For a long time I thought — well, change is faster now, right? Technological change is, maybe. But it may be the case that certain types of social change are as slow as they ever were. There are days when I think they might even be slower.

We’ll see. For the moment, whether fact or fiction, the belief that this is just a lull will power me through. We’ll get there yet.


People Have the Star Trek Computer Backwards

I was watching Star Trek — the early episodes — with the family a couple weeks ago when it occurred to me: Silicon Valley has got the lesson of the Star Trek computer all wrong.

Here’s the Silicon Valley mythology of it, from Google, but it could be from any company there really:

So I went to Google to interview some of the people who are working on its search engine. And what I heard floored me. “The Star Trek computer is not just a metaphor that we use to explain to others what we’re building,” Singhal told me. “It is the ideal that we’re aiming to build—the ideal version done realistically.” He added that the search team does refer to Star Trek internally when they’re discussing how to improve the search engine. “It comes up often,” Singhal said. “For instance, we might say, ‘Captain Kirk never pulled out a keyboard to ask a question.’ So in that way it becomes one of the design principles—we see that because the Star Trek computer actively relies on speech, if we want to do that we need to work to push the barrier of speech recognition and machine understanding.”

This is what happens when you live in a town without history.

The Star Trek computer, at least in the 1960s, was not ahead of its time, but *of* its time. It lacked the vision to see even five years into the future.

It’s hard to get a good shot to demonstrate this, but here are a couple to give you an idea. These are from the Omega Sector fan site.

[Images: Star Trek computer consoles, from the Omega Sector fan site]

Now you can say as they do at Google:

“For instance, we might say, ‘Captain Kirk never pulled out a keyboard to ask a question.’ So in that way it becomes one of the design principles—we see that because the Star Trek computer actively relies on speech, if we want to do that we need to work to push the barrier of speech recognition and machine understanding.”

But this profoundly misses the point. Captain Kirk never pulled out a keyboard, because the idea was that computers were not meant to be messed with by users. They were instrumentation, for doing advanced sorts of mathematics and using it to decide which colored bulb to light. There’s no keyboard because there is no text, anywhere, on any computer on the Enterprise to edit.

And the reason for this was that in the 1960s people thought using computers for text processing was ridiculous. You see this in the history of hypertext. Andy Van Dam, who built pioneering text editing systems at Brown in the sixties, was reduced to begging for time on the Brown computers. Why? Because computers were for math, stupid! The scientists at Brown laughed at him.

This is the same set of people who would tell Jef Raskin at Apple (a decade later) that you didn’t need lowercase letters on the Apple II because all users would be doing is playing games and writing BASIC anyway. (Thanks for the example, Lisa!)

Star Trek is not a post-keyboard world, it’s a pre-keyboard one. You would think a company that makes its money processing the billions of lower-case non-BASIC words that have been typed into computers since then would get that.

The Meaning of “Personal” and “Dynamic” in Personal Dynamic Media

So what happened? What changed? Well, for one, we started typing text into computers.

But something bigger happened as well. Because text editing became a way of thinking about computers. You see this when Alan Kay starts talking about the DynaBook vision in the late sixties and early seventies. He starts by saying, look, you could have some text on this, and you could edit it. And you could swap out different fonts.

And then he thinks, well, music is really the same thing as text, isn’t it? Strings of characters produce documents the way that strings of notes produce songs. When you “display” a song, you play it. So you could edit sequences of notes and play them without being able to play an instrument, in a kind of text editor for music.

And he goes further. The same way you switch fonts, you could switch the sounds. You could try your composition as played by something trumpet-esque, and then switch it to organ, without redoing the composition. The same way you can edit fonts, you could edit the timbre of the different sounds.
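If you want the idea in runnable form, here’s a toy sketch — invented note names, with print statements standing in for actual synthesis:

```python
# A melody as an editable sequence of (pitch, beats) — the musical
# analogue of a string of characters.
melody = [("C4", 1), ("E4", 1), ("G4", 2), ("E4", 1), ("C4", 3)]

# "Instruments" as swappable renderers, the way fonts are swappable
# presentations of the same text. (Real synthesis would go here.)
def trumpet(pitch):
    return pitch + " [bright, brassy]"

def organ(pitch):
    return pitch + " [sustained, reedy]"

def play(melody, instrument):
    for pitch, beats in melody:
        print(instrument(pitch), "for", beats, "beat(s)")

play(melody, trumpet)
play(melody, organ)   # same composition, different timbre
```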

And pulling from ideas like Sutherland’s SketchPad, he moves to notions of editable models. He imagines a user-created model of hospital throughput. You set your assumptions about time per patient, and how patients move through different departments. Then you fool around with staffing by adding or subtracting staff from different departments and see where bottlenecks emerge.


And in his mind, this changes communication, and allows us to communicate in new ways.

Now when I want to send my manager this week’s staffing, I can send them this dynamic document. Do they disagree with the staffing? Well, the document is open. They can change the staffing and see what happens. They can look at the assumptions and edit them. We have a conversation back and forth through editing the model. And you can do this with everything — you send me a song you wrote, I like it — but wouldn’t it be nice to add some resonance to that viola?
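A minimal sketch of the kind of editable model Kay describes might look like this — the departments, numbers, and capacity logic here are all invented, deliberately crude stand-ins:

```python
# Toy throughput model: patients flow through departments in sequence.
# A department's hourly capacity is staff divided by hours-per-patient.
departments = [
    # (name, staff on shift, staff-hours needed per patient) — invented numbers
    ("intake",    2, 0.25),
    ("radiology", 1, 0.50),
    ("treatment", 3, 0.75),
]

def show_bottlenecks(departments, arrivals_per_hour=6):
    for name, staff, hours_per_patient in departments:
        capacity = staff / hours_per_patient   # patients per hour
        flag = "  <-- bottleneck" if capacity < arrivals_per_hour else ""
        print(f"{name:10s} capacity {capacity:4.1f}/hr{flag}")

show_bottlenecks(departments)

# The document is "open": a manager who disagrees edits the staffing
# and re-runs the model instead of arguing about the output.
departments[1] = ("radiology", 3, 0.50)
departments[2] = ("treatment", 5, 0.75)
show_bottlenecks(departments)
```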

Compare this vision to the Star Trek vision. Here is Kirk interacting with a computer:

Now, having just seen this episode, I can tell you that Kirk has discovered that this dude who is a travelling actor might just be an infamous war criminal.

This is pretty important, the sort of observation that Star Fleet Command will want to have in their files. So Kirk edits the file, noting….

Except that he can’t edit this file. In the Star Trek world information goes into the computer and comes out of it, but nothing can be edited.

He can tell the computer, I suppose. And then the computer can decide whether to splice that into the next presentation or not. But editing?

Other computers are similar. Here is an Omega Sector reconstruction of a command and control system.

[Image: Omega Sector reconstruction of a command and control console, from “Return to Tomorrow”]

Now I imagine the way this works is this. The lights show you various information and projections about the performance of the ship. Based on those you can alter the flow rate, jettison fuel, or do two other things I don’t quite get.

But what if I want to change the model? What if I want to know what those lights would look like if we reduced load by dropping half our cargo? Or if the computer’s assumptions about oxygen consumption by the crew turned out to be too optimistic?

What if, discovering an oversight in the assumptions, I wanted to distribute the new model to Star Fleet Command?

Again, I have no way to find that out, because I can’t edit, I can’t distribute.

These computers are centuries ahead, in some ways, but they are already behind the vision the pioneers of personal computing were imagining at the time. Vulcan intelligence may be unparalleled in the universe, but the equipment Spock uses reduces him to a switch flipper.

It’s this vision of a population of computer “operators” (a vision that was the most common at the time) that guides the portrayal of Enterprise technology, and renders it so quaintly 1960s, so non-textual, so I/O.

Stumbling Forward Into the Past

So the question we have to ask ourselves is how Silicon Valley came to see the Star Trek computer as a vision of the future, rather than an artifact of a pre-Kay, pre-Engelbart world.

I don’t have easy answers to that.

One possibility is they see the personal computing era as an anomaly. We edited our documents because computers weren’t smart enough to produce and edit documents for us. We edited assumptions in Excel spreadsheets because computers weren’t yet trustworthy enough to choose the right formulas. Soon computers will be smart enough, and Star Trek can commence.

Another is the scale of ubiquitous computing. Perhaps there is a belief that in a universe where everything is a computer, the prospect of having time to mess with parameters is just too overwhelming.

There’s some validity to these arguments, though it’s worth noting that these beliefs are identical to the beliefs of the average 1960s computer scientist. To them, computers already seemed smart enough and numerous enough to believe the future could be hard-wired. And they were dead wrong.

There’s a third possibility, though, and one that scares me quite a bit. And that’s that they are unfamiliar with how Star Trek’s technology vision was proved wrong.

In the end, perhaps it doesn’t matter. Either the personal computing revolution can be rolled back (as it has been in many ways in the past few years) or we can push forward and see what happens. It serves the interests of the Googles of the world to make their computers dynamic and your interface static, because dynamic means control (it’s not for nothing the term comes from the Greek for “power”).

For better or worse, Google, Apple, Facebook and others all are building the “ideal version of the Star Trek computer”. If we want to move past these quaint, archaic notions, it’s up to us to build something else.


A Portfolio of Connections

I’ve talked a bit about federated wiki in terms of the way it enables collaboration with others across institutional boundaries. But as we go into Happening #2, I’m gaining more appreciation of the way it allows for collaboration with ourselves across temporal boundaries.

That may sound really muddled. But consider the scenario I demonstrate below. I’m reading a piece by MC Morgan in the current happening about the Jacquard Loom. He’s discussing it in our happening on teaching machines because it was an influential example of a “programmable machine”.

And I start to get a bit of an itch reading that, because I feel like we talked about something like that in the FIRST happening (which was *not* on teaching machines, or even machines). And so I — well, I’ll show you what happened in this 4-minute video.

Incidentally, while I edited some “umms” and “ahhs” and silent readings out of that video, it’s not staged. It’s actually me realizing in near-real-time the connection running from Stravinsky’s idea that the player piano ensured “fidelity” to the score, to the idea that the Jacquard Loom ensured fidelity to the design, to the idea that the appeal of courseware to administrations is tied up with this notion of fidelity too. That we talk about efficiency, but the other concern has been there since day one.

I knew these things separately, but I didn’t see the connection, didn’t REALLY see the connection, until just then.

A quick aside: If you’ve done screencasts of educational technology before, let me ask you this: have you caught an intense, unscripted moment of learning on them? Probably not, right? The weird thing is with federated wiki this happens ALL THE TIME. 

You start to see the bigger vision when you realize that federated wiki can accommodate many types of data: formulas, equations, programming tools, CSV data. Here I pull in an idea and connect it. But maybe I’m a student in a stats class and I realize I can pull in some water readings I took in last semester’s bio class, and use that data to work through my understanding of standard deviation.
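To make that concrete, here’s roughly what that might look like — a hypothetical sketch in Python, with the file name, columns, and data all invented:

```python
import csv, math

# Expect a header row like: site,ph  and rows like: A,6.8
with open("water_readings.csv") as f:
    ph_values = [float(row["ph"]) for row in csv.DictReader(f)]

mean = sum(ph_values) / len(ph_values)
# Sample variance divides by n-1 (needs at least two readings).
variance = sum((x - mean) ** 2 for x in ph_values) / (len(ph_values) - 1)
std_dev = math.sqrt(variance)

print(f"n={len(ph_values)}  mean={mean:.2f}  standard deviation={std_dev:.2f}")
```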

Maybe I see another kid pull in his old bio data, and I remember I built a data visualization tool last semester, so I pull that in and link it to the data, which pushes out a tweakable representation.

The thing is we think we know what hypertext and reuse looks like. But I don’t think we have any idea, because we’ve been confined to the very minimal linking and reuse the web allows. And so the idea vendors are pushing for students on the web is the “ePortfolio”, a coffin of dead projects the student has worked on, indistinguishable from a printed binder or filled portfolio case.

On one side, we have this amazing, dynamic, living tool that could help us think thoughts impossible without it, and truly augment our intellect. You could graduate with a tool you had assembled, personally, to help you think through problems. Something quite close to Alan Kay’s vision of Personal Dynamic Media.

And on the other side we have a gaggle of vendors trying to sell us self-publishing tools.

Our thinking here is so, so small. As David Wiley has put it, we have built ourselves jets, and yet we’re driving them on the ground like cars. We have to do better.


Update for Alan (2/13): The full route

In the comments, Alan brings up the very real issue of what happens as more stuff pours into federated wiki. Will you be able to find the connections? Or will you be overwhelmed?

And I realized I had changed the meaning of the video a bit by cutting out the three to four boring minutes of digging around in the last happening. In the edited video it looks like I was looking for Stravinsky, but in fact I was not looking for Stravinsky at all. I had 100% forgotten about player pianos, and mechanical ballets.

Here’s an uncut (but partly sped up) video of the process. You can turn the sound down and run it while you read the rest of this post:

If you jump to 22 seconds in, you can see I come in and put a search in for music. What I’m actually thinking initially is there’s a relationship to artwork as recipe. The punch card is like a recipe.

But in music, it’s really not. And I realize this as I read it. We’ve had sheet music for a long time, but sheet music is a collaboration between the recipe and the cook. The loom doesn’t collaborate with anyone.

OK, so maybe it’s a different kind of sheet music. I’m reminded of the Varèse Score by the search results. Such scores were the representation of an electronic video and film show produced by Varèse. Is that a better connection?

I pull up some third-party materials, but scanning them, it’s not really the Jacquard Loom, is it? These are scores written on paper, and in fact it’s kind of the opposite of the loom — because even Varèse couldn’t know exactly how the music would turn out — there was an element of randomness to it.

But Varèse Score links me to a page called Art as Mechanical Reproduction. I’ve actually been on this page a couple times before, but I was so fixated on the Varèse possibility I didn’t really read it.

With the Varèse idea finally dead, I dig deeper. And as I scan it I see this Stravinsky’s Player Piano link. And the first thing I think is a player piano roll is very like a punch card.

I click it, and as I scan it I’m reminded of Stravinsky’s obsession that people play his music without interpretation. This notion of “fidelity” to an original abstract vision. And this is the connection that ties all three together — the loom, the player piano, and courseware. We talk efficiency, but the other attraction, for better or worse, is fidelity. And I say “Ah, this is what I was looking for!” as if I had known it the whole time. But of course I didn’t.

And in fact, it was the process of understanding why Varèse didn’t fit that primed me to see the Stravinsky connection.

This is a long answer to Alan’s question, but I think the answer is it may get harder to find the thing you want, but it should get easier to find the thing you need. More links means more serendipity, more routes to the idea that can help you. And since the neighborhood will dynamically expand as you wander, all your Happenings will link seamlessly together, giving you access to everything as you need it.


“Users”

[T]he problem is that bad writers tend to have the self-confidence, while the good ones tend to have self-doubt. So the bad writers tend to go on and on writing crap and giving as many readings as possible to sparse audiences. These sparse audiences consist mostly of other bad writers waiting their turn to go on, to get up there and let it out in the next hour, the next week, the next month, the next sometime. The feeling at these readings is murderous, airless, anti-life.

– Charles Bukowski on why he encouraged people to not write.

“There’s only one other industry that calls their customers ‘users’.”

– Old information technology proverb.

Poetry Slam

Somewhere in 2009 it hit me that I had been wrong about educational technology. Very wrong.

The year before, I had been working for an organization that dealt with OpenCourseWare, and the rhetoric was (as it still is) that reuse of OCW could lead to education sector efficiency. But as we looked for reuse we found that there wasn’t much evidence of it. Not institutional reuse anyway, or reuse by professors.

And as I pondered this, it became very obvious why this was the case. Every single decision in the OER community at that time seemed to be predicated on glorifying and funding creators, based on a trickle-down theory of impact. Courseware was shipped as PDFs, with school logos burnt into the slides. Hewlett was funding Ivies to the tune of tens of millions to create OCW, and yet none of those projects ever sat down with teachers from state universities and community colleges and asked what they might actually want.

Simple things precluded any reuse. Test questions were published with answer keys, in formats that were not importable into LMSs. Course videos contained references to resources unavailable to students not in the lecture, or housekeeping about advising periods that would mean nothing to someone watching the video for a class.

The Open Educational Resources community, at least the elite part of it, portrayed itself as a community-minded set of save the world do-gooders. But in reality, much of it was the Poetry Slam from Hell Bukowski talks about above, a bunch of elite schools sitting in Hewlett’s coffee shop, waiting for Yale to step down so they could show their OCW.

Makers, Builders, Producers

People say they want a world of “producers not consumers” or “makers not takers”.  Peel back the assumptions under those statements and you’ll find some disturbing stuff.

And so it was when I returned to instructional design in 2009, fresh off the OCW experience, that I found these phrases, which used to seem so normal, now strange.

There was a time, after all, that we used to call lurkers “readers”. Users were “doers”.  These things had respect.

Now anything short of “making” was devalued. “We’re going to turn our students from passive consumers to producers!” we yelled.

This was presented as revolutionary, but it wasn’t revolutionary at all. The forms might have been revolutionary – video, podcasts, CAD-based fabrication. But the idea that “producers” should be valued at a university is the least subversive idea one could have. It’s the entire basis of the academic enterprise. Everything, from tenure review to pay scale, is based on the notion that it’s not enough to be well-read, or to think good thoughts. One must make something. Academia is one of the few places where your entire career is based on how many important things have your name on them.

And so we showed students how to make things with their name on them.

Edtech-based making in the university was not an attempt to introduce a new value structure. It was (and is) an attempt to give students tools to achieve value in the existing power structure.

That’s valuable, certainly. I repeat, it is valuable. But it hardly makes you Illich.

And of course there’s a possibility that you’re just spreading the “Shut Up or Ship” culture of Silicon Valley that believes participation is only the domain of those-who-code, or the PDFing logo-stamping culture of the Ivies. It’s possible that you’re just reinforcing the same narrative that justifies the massive inequality in our country on the basis that the 1% “contribute”, and that being a “Job Creator” is better than doing a job for someone else well.  It’s possible that you’re enabling the people that went nuts after Obama’s “You didn’t build that” speech, where Obama made the rather mundane observation that success in America is enabled by our entire community.

[Image: business billboard responding to Obama’s “You didn’t build that” speech]

It’s possible it gets worse. As Audrey Watters, Debbie Chachra, and Bjork have noted recently, definitions of what constitutes “building” are gendered, and differently applied. A Taylor Swift album is seen (correctly) as co-produced and co-written. A Kanye West album is a Kanye West album, even if it contains a cast of hundreds. Kanye West is seen as brilliant, where Bjork is seen as making an excellent collaborative album.

And while we’re talking about the Brilliant Mind myth creatorism is based on, we might as well pull in this chart, which plots female PhDs in a field against the emphasis that field places on “brilliance” vs. “hard work”.

[Chart: female PhDs by field plotted against the field’s emphasis on “brilliance,” from Science]

The paper in Science the above graph comes from focuses just on the brilliance connection. A focus on hard, sustained work over brilliance appears to predict female representation better than any other general model I know of.

At the same time, I can’t help but see them as interlinked phenomena. Sociology graduates Bjorks. Philosophy graduates Kanyes. The hierarchy is Kanye > Bjork > Rock Critic > Listener. The fact that it is discerning listeners who produce artistic revolutions is lost on everyone.

To coders, people who don’t code are not makers. But we keep punching all the way down. Published writers kick bloggers. Bloggers see themselves as creating on a level that readers and commenters don’t. It’s not just about your job title, it’s about your internal taxonomy.

The Sources of Innovation

The things I have done since 2009 seem very scattered to people. I ranted on about OER reuse. I got deep into von Hippel. With Amy Collier and Helen Chen I looked at how classes interact with MOOC materials produced elsewhere. I experimented with the pedagogy of summary. And now I’m involved with a federated wiki project so complex I barely know where to link to explain it.

Underneath all of these projects (and many others: Water106, the Mixable MOOC, Design Patterns in ID) is really a single obsession. What would happen if we got over our love affair with creators? What would happen if we collapsed the distinction between maker and taker, consumer and producer, not by “moving people from consumption to production”, but by eliminating the distinction? What if we saw careful curation of material as better than unconsidered personal expression?

What if we stopped calling readers lurkers? What if we stopped caring about who got the credit? What if the OER community saw the creation of materials as a commodity, but the reuse as an art?

I’m not attacking digital storytelling here, personal blogs, or Makey-Makey Boards. I’ve used all of these in my work with faculty, and I’m going to continue to do so. These things get students engaged and excited, and in the process of making things they learn much more deeply than they could ever learn from a textbook. Sometimes a poetry slam is what you need. Sometimes it’s even good.

But the projects that interest me most nowadays are the ones where the thing made doesn’t fall into the traditional categories. Federated wiki, the pedagogy of summary, student curation. What interests me most in ds106, for example, is not the making, but the co-making. In these projects I see a chance for an engagement that is less ego-driven, less divisive, and ultimately more useful to society. In the years since Gutenberg I think we’ve managed to get the single brilliant author thing down pretty pat. It might be time to try something else.

I’ll end here with a story. Back in 2010 I was getting a coffee with Jon Udell, and he said something that has stuck with me. He had been trying to get people involved with community by encouraging production of various things, but it wasn’t working the way he planned.  He said that there was a point he realized that he was trying to make everyone a writer. And everyone’s not a writer.

His obsession with getting people to share calendar feeds seemed odd to some people, but for him it was (I think) about something bigger. Were people to simply share community calendar feeds with a hub, we could solve far more community issues than a roomful of bloggers ever would. A community getting this idea, that the work they don’t even think of as creation could be valuable; that would be much more powerful than telling people to podcast their town meetings, or asking them to blog their work.

That idea turned out to be difficult for a number of reasons, but I think the concept is right. I imagine classes where writing a good and useful summary of research is seen as being as “brilliant” as writing an original paper, where cleaning up data is seen as valuable as theorizing about it. Where a well curated and quoted set of material is as valuable as research. Where reuse is valued over reinvention. Where replicating experiments is as revered as creating new experimental designs. Where people who connect others and think about how to connect others get credit for the advances those connections bring.

I think students who came out of a program like that would be better suited to solving the sorts of problems the world has right now. It’s just a theory, but I’m hoping we get a chance to prove it.


Rethinking Wiki Lifecycle: Sites as Bounded Conversations

As we plan for our second fedwiki happening the differences between federated wiki and wiki become, well, stranger.

If you’ve been following the story thus far, you know that federated wiki is pretty radical already. As with wiki, people converse through making, linking, and editing documents. But because each person has a separate wiki, there is a fluidity to this “talking in documents” that is hard to describe.

You write a post referencing Derrida’s concept of “ultimate hospitality”. I get interested in that, do a bit of research. I save a copy of (fork) your post to my site, but link it to a page describing Derrida’s hospitality in more detail. Not because I’m an expert, but because it’s a good exercise to understand your writing. You see that post and fork mine back. The next visitor to your page finds your page plus my article annotating it. Maybe they edit it, creating their own fork. And so it goes.
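If it helps to see the shape of it, here’s a hypothetical sketch in Python — not federated wiki’s actual data model, just the fork-and-annotate flow reduced to a toy, with invented site names:

```python
# A stripped-down model of fork-and-annotate. Each page carries a
# journal, so provenance travels with the page as it moves between sites.
import copy

class Page:
    def __init__(self, title, text, site):
        self.title, self.text, self.site = title, text, site
        self.journal = [("created", site)]

def fork(page, to_site):
    """Copy someone else's page onto your own site; the journal records the fork."""
    mine = copy.deepcopy(page)
    mine.site = to_site
    mine.journal.append(("forked", page.site + " -> " + to_site))
    return mine

yours = Page("Ultimate Hospitality", "Derrida's concept of ...", "you.example.com")

mine = fork(yours, "me.example.com")
mine.text += "\n\nSee also: my page on Derrida's hospitality."
mine.journal.append(("edited", "me.example.com"))

yours_again = fork(mine, "you.example.com")  # you fork my annotated copy back
print(yours_again.journal)
```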

Carry It Forward or Clean Slate?

As we move into Happening #2 on Teaching Machines, this question comes up: do we pull these documents into the new happening? Use the same site? Do I create one big “Megasite of Mike” which I haul into each event like my persistent twitter feed?

And what we’re thinking is maybe I don’t. Maybe the Happening #1 site is done, and I create another site.

So for instance, my wiki farm is at hapgood.net. For Happening #1 on “journaling” I had journal.hapgood.net. Maybe for the Teaching Machines happening I make machines.hapgood.net. And to the extent I want to talk about something from a previous event in this new context, I fork it into the new context.

It’s pretty simple to do this in Federated Wiki after all. I just drag the page from one site and drop it on another, I edit it for the new audience, or maybe even take the opportunity to clean up a few typos. And that’s it, done!

While this may not sound extraordinary, it is in fact an inversion of how we usually think of wiki (and sites in general). And it has some neat ramifications.

The Dreaded Curve of Collaborative Sites

To understand why this is such a departure from traditional collaborative sites, we need to introduce you to the dreaded logistic curve. Here’s the curve of Wikipedia production:

[Graph: Wikipedia article growth on a log scale]

Via Wikimedia Commons.

Until about March 2007 many people thought that Wikipedia’s growth was exponential. Above, we see a log scale graph, where exponential growth would be represented by a straight line (explanation of log scale).

But things go a little wonky in 2007. In that year it begins to become apparent that a logistic model might better predict Wikipedia growth. Currently the site is growing faster than a logistic model would predict, but well under earlier exponential models.

[Graph: Wikipedia growth compared to exponential and logistic models]

“Enwikipediagrowthcomparison” by HenkvD

If the extended model above fits, Wikipedia will near heat death in about 10 years.

[Graph: extended logistic projection of Wikipedia growth through 2025]

Projection by HenkvD

People have attributed this to a lot of things, and certainly it’s a phenomenon with many inputs. But the logistic-like nature of it suggests one simple explanation — limited resources. Logistic curves are what you find when you map animal populations, for example, coming up against the resource limits of the environment.
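For the curious, here’s a quick sketch of the two models side by side (the parameters are arbitrary), showing why they look identical early on and diverge later:

```python
# Exponential vs. logistic growth with the same initial rate.
import math

r, K, P0 = 0.5, 1000.0, 1.0   # arbitrary rate, carrying capacity, starting size

def exponential(t):
    return P0 * math.exp(r * t)

def logistic(t):
    return K / (1 + ((K - P0) / P0) * math.exp(-r * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):>14.0f}  logistic={logistic(t):>6.0f}")
# Early on the two are nearly identical; then the logistic curve bends
# over as the population approaches the resource limit K.
```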

In wiki, as in many other collaborative projects, the limited resource is stuff left to be written. People get together to write articles, and initially each new article leads to two other articles, and you see that growth on the left side of the graph.

Eventually, though, the easier stuff is written. It may not be written the way you’d like it to be, but it’s written. Wikizens move from homesteading new territory to vicious fighting over already cultivated plots of land. The enterprise begins to feel less fresh, and the type of people who find this stage invigorating are often dreadful bureaucrats.

Now, you might think this was just a function of Wikipedia, as it has articles on everything. But you’d be wrong. Smaller wiki communities see this same pattern. Here’s a graph of the first wiki approaching its own heat death:

[Graph: page growth of the first wiki, WikiWikiWeb]

(Chart by Donald Noyes)

Now these are the tail-end days of that wiki — it started in 1995. And it’s this observed pattern that led Ward Cunningham to predict, much earlier than most, that Wikipedia would follow the same pattern.

Colony Collapse

We get why Wikipedia would hit a resource limit — there’s an article on fish fingers already, for crying out loud. But why would WikiWikiWeb run out of subjects? There are only 30,000 pages, after all.

The answer’s simple — people come together for a reason, a shared interest. In the case of WikiWikiWeb it was to share software design patterns and agile programming techniques. As those subjects fill out, the opportunities for new contribution diminish. The community has a choice — expand its charter (which fractures the cohesion of the group) or move into maintenance mode. (Likewise, Wikipedia could expand its charter by loosening notability requirements, for example, but this would radically change the nature of the project people thought they were contributing to).

Very few people want to be on a site that’s all about maintenance. As we mention above, the lack of new ground to cover leads to a claustrophobia manifested in endless arguments about what should and shouldn’t be on the wiki and non-stop edit wars.

This happens in other communities too, by the way. At some point in a political forum if there isn’t new blood people feel like they are just rehashing the same conversation over and over.

And so you get what I call Colony Collapse. One day you reach a tipping point, and suddenly the only people left on your site are the people you actually considered banning at the beginning. (This is what has happened to my old political community). For a while these people maintain the site, but eventually they too get bored and leave, and the site falls into disrepair. It starts to rot.

You can see this at my old site Blue Hampshire, which has reached the final phase of collapse, and now consists primarily of syndicated political press releases and occasional comments about how moronic other commenters are. There’s 8 years of beautiful posts in that site, almost 15,000 posts. Many contain wonderful explanations of how New Hampshire government works, personal reminiscences of New Hampshire political history. There are comments on that site with more political wisdom than you’ll find in a year of the Washington Post (there’s over 100,000 comments).

And it’s all rotting.

It’s heartbreaking, and after you’ve been through it once you get into a feed-the-beast mentality about all future communities. To paraphrase the line from Annie Hall: Online communities are like sharks; they have to keep moving forward or they die. And so, as community leader, you take on the exhausting role of the shark, pushing the site forward, always watching for the dreaded inflection point which presages the site’s collapse. Because once that happens, it’s all over.

A Different View: Wiki Sites as Bounded Conversations

That’s a long digression. But back to federated wiki.

Here’s a possible vision for federated wiki sites that you, the user, make.

You’ll make federated sites for conversations on things, the way we are having this happening on Teaching Machines. And during the event, you’ll build it out. And then, at a predetermined point, you’ll call your personal version of that site done and abandon it.

In other words, we bake heat death into the plan. We accept our mortality.

That’s great, but now comes the question — how does the material get maintained? How do we recurse over it?

The answer we’re coming up with is that it gets pulled into new sites belonging to new conversations.

We have an example of this right now. We tried an experiment a while back that was a bit of a flop — a wiki called the “Hidden History of Online Learning”.  It’s got about a hundred pages of this variety in it:

[Image: a sample page from the Hidden History of Online Learning wiki]

The site flared up into activity for a couple of weeks, then sputtered and died. By all normal metrics it’s a failure, doomed to bit-rot and link-rot, a slow descent into a GeoCities-like hell.

But in this case everyone involved has their copy of the site. As they participate in the Teaching Machines happening, they can pull stuff over into the new teaching machines wiki they have made. They’ll spruce it up, check the links, and maybe even improve it a little.

Ward and I were talking about this, and he said it reminded him of an earlier time when he was first working with Smalltalk. Objects would get better the more systems they were used in, because each time they were reused they would get refactored, optimized, simplified, extended. He and his fellow coders even had a name for this: “Reverse Bit-rot”.

It shouldn’t happen. It defies the entropy we see in all those pretty graphs up-page. But it happened.

Maybe this can happen here too. Sites can end, like conversations end. But we reach back into those previous conversations and say —  you know, we were talking about that a couple months ago, let’s pull this thing in. And that thing gets another spin.

Maybe this doesn’t sound radical to you. Maybe it sounds ordinary. But I am sure every former online community manager out there understands just how radical this idea is. It’s equivalent to the soft-forking solution adopted by LiveJournal that became *the* solution for the non-collaborative communities that followed. But here it is applied to collaborative tasks.

In this world, sites like Blue Hampshire die — and maybe even die quicker, more humane deaths. In fact, maybe we say, hey, here’s a site that is going to exist just for the 2008 election. Two years later you spin up a site on 2010. Four years later on 2012. Some material from 2008 flows in. Some doesn’t. If the material is good enough to live on, there is no failure, no Colony Collapse.

Each wiki site is the product of a bounded conversation, expected to die, but also expected to be raided for the next conversation.

Kind of nice, right?

