I was watching Star Trek — the early episodes — with the family a couple weeks ago when it occurred to me: Silicon Valley has got the lesson of the Star Trek computer all wrong.
Here’s the Silicon Valley mythology of it, from Google, but it could be from any company there really:
So I went to Google to interview some of the people who are working on its search engine. And what I heard floored me. “The Star Trek computer is not just a metaphor that we use to explain to others what we’re building,” Singhal told me. “It is the ideal that we’re aiming to build—the ideal version done realistically.” He added that the search team does refer to Star Trek internally when they’re discussing how to improve the search engine. “It comes up often,” Singhal said. “For instance, we might say, ‘Captain Kirk never pulled out a keyboard to ask a question.’ So in that way it becomes one of the design principles—we see that because the Star Trek computer actively relies on speech, if we want to do that we need to work to push the barrier of speech recognition and machine understanding.”
This is what happens when you live in a town without history.
The Star Trek computer, at least in the 1960s, was not ahead of its time, but *of* its time. It lacked the vision to see even five years into the future.
It’s hard to get a good shot to demonstrate this, but here are a couple to give you an idea. These are from the Omega Sector fan site.
Now you can say as they do at Google:
“For instance, we might say, ‘Captain Kirk never pulled out a keyboard to ask a question.’ So in that way it becomes one of the design principles—we see that because the Star Trek computer actively relies on speech, if we want to do that we need to work to push the barrier of speech recognition and machine understanding.”
But this profoundly misses the point. Captain Kirk never pulled out a keyboard, because the idea was that computers were not meant to be messed with by users. They were instrumentation, for doing advanced sorts of mathematics and using it to decide which colored bulb to light. There’s no keyboard because there is no text, anywhere, on any computer on the Enterprise to edit.
And the reason for this was that in the 1960s people thought using computers for text processing was ridiculous. You see this in the history of hypertext. Andy Van Dam, who built pioneering text editing systems at Brown in the sixties, was reduced to begging for time on the Brown computers. Why? Because computers were for math, stupid! The scientists at Brown laughed at him.
This is the same set of people who would tell Jef Raskin at Apple (a decade later) that you didn’t need lowercase letters on the Apple II because all users would be doing is playing games and writing BASIC anyway. (Thanks for the example, Lisa!)
Star Trek is not a post-keyboard world, it’s a pre-keyboard one. You would think a company that makes its money processing the billions of lower-case non-BASIC words that have been typed into computers since then would get that.
The Meaning of “Personal” and “Dynamic” in Personal Dynamic Media
So what happened? What changed? Well, for one, we started typing text into computers.
But something bigger happened as well. Because text editing became a way of thinking about computers. You see this when Alan Kay starts talking about the DynaBook vision in the late 60s and early seventies. He starts by saying, look, you could have some text on this, and you could edit it. And you could swap out different fonts.
And then he thinks, well, music is really the same thing as text, isn’t it? Strings of characters produce documents the way that strings of notes produce songs. When you “display” a song, you play it. So you could edit sequences of notes and play them without being able to play an instrument, in a kind of text editor for music.
And he goes further. The same way you switch fonts, you could switch the sounds. You could try your composition as played by something trumpet-esque, and then switch it to organ, without redoing the composition. Just as you can edit fonts, you could edit the timbre of the different sounds.
And pulling from ideas like Sutherland’s Sketchpad, he moves to notions of editable models: he imagines a user-created model of hospital throughput. You set your assumptions about time per patient, and how patients move through different departments. Then you fool around with staffing by adding or subtracting staff from different departments and see where bottlenecks emerge.
And in his mind, this changes communication, and allows us to communicate in new ways.
Now when I want to send my manager this week’s staffing, I can send them this dynamic document. Do they disagree with the staffing? Well, the document is open. They can change the staffing and see what happens. They can look at the assumptions and edit them. We have a conversation back and forth through editing the model. And you can do this with everything — you send me a song you wrote, I like it — but wouldn’t it be nice to add some resonance to that viola?
Compare this vision to the Star Trek vision. Here is Kirk interacting with a computer:
Now, having just seen this episode, I can tell you that Kirk has discovered that this dude who is a travelling actor might just be an infamous war criminal.
This is pretty important, the sort of observation that Star Fleet Command will want to have in their files. So Kirk edits the file, noting….
Except that he can’t edit this file. In the Star Trek world information goes into the computer and comes out of it, but nothing can be edited.
He can tell the computer, I suppose. And then the computer can decide whether to splice that into the next presentation or not. But editing?
Other computers are similar. Here is an Omega Sector reconstruction of a command and control system.
Now I imagine the way this works is this. The lights show you various information and projections about the performance of the ship. Based on those you can alter the flow rate, jettison fuel, or do two other things I don’t quite get.
But what if I want to change the model? What if I want to know what those lights would look like if we reduced load by dropping half our cargo? Or if the computer’s assumptions about oxygen consumption by the crew turned out to be too optimistic?
What if, discovering an oversight in the assumptions, I wanted to distribute the new model to Star Fleet Command?
Again, I have no way to find that out, because I can’t edit, I can’t distribute.
These computers are centuries ahead, in some ways, but they are already behind the vision the pioneers of personal computing were imagining at the time. Vulcan intelligence may be unparalleled in the universe, but the equipment Spock uses reduces him to a switch flipper.
It’s this vision of a population of computer “operators” (a vision that was the most common at the time) that guides the portrayal of Enterprise technology, and renders it so quaintly 1960s, so non-textual, so I/O.
Stumbling Forward Into the Past
So the question we have to ask ourselves is how Silicon Valley came to see the Star Trek computer as a vision of the future, rather than an artifact of a pre-Kay, pre-Engelbart world.
I don’t have easy answers to that.
One possibility is they see the personal computing era as an anomaly. We edited our documents because computers weren’t smart enough to produce and edit documents for us. We edited assumptions in Excel spreadsheets because computers weren’t yet trustworthy enough to choose the right formulas. Soon computers will be smart enough, and Star Trek can commence.
Another is the scale of ubiquitous computing. Perhaps there is a belief that in a universe where everything is a computer, the prospect of having time to mess with parameters is just too overwhelming.
There’s some validity to these arguments, though it’s worth noting that these beliefs are identical to those of the average 1960s computer scientist. Computers seemed smart enough and numerous enough for them to believe that the future could be hard-wired in the 1960s. And they were dead wrong.
There’s a third possibility, though, and one that scares me quite a bit. And that’s that they are unfamiliar with how Star Trek’s technology vision was proved wrong.
In the end, perhaps it doesn’t matter. Either the personal computing revolution can be rolled back (as it has been in many ways in the past few years) or we can push forward and see what happens. It serves the interests of the Googles of the world to make their computers dynamic and your interface static, because dynamic means control (it’s not for nothing the term comes from the Greek for “power”).
For better or worse, Google, Apple, Facebook and others all are building the “ideal version of the Star Trek computer”. If we want to move past these quaint, archaic notions, it’s up to us to build something else.
I’ve talked a bit about federated wiki in terms of the way it enables collaboration with others across institutional boundaries. But as we go into Happening #2, I’m gaining more appreciation for the way that it allows for collaboration with ourselves across temporal boundaries.
That may sound really muddled. But consider the scenario I demonstrate below. I’m reading a piece by MC Morgan in the current happening about the Jacquard Loom. He’s discussing it in our happening on teaching machines because it was an influential example of a “programmable machine”.
And I start to get a bit of an itch reading that, because I feel like we talked about something like that in the FIRST happening (which was *not* on teaching machines, or even machines). And so I — well, I’ll show you what happened in this 4 minute video.
Incidentally, while I edited some “umms” and “ahhs” and silent readings out of that video, it’s not staged. It’s actually me realizing in near-real-time the connection from Stravinsky’s idea that the player piano ensured “fidelity” to the score, to the idea that the Jacquard Loom ensured fidelity to the design, to the idea that the appeal of courseware to administrations is tied up with this notion of fidelity too. That we talk about efficiency, but the other concern has been there since day one.
I knew these things separately, but I didn’t see the connection, didn’t REALLY see the connection, until just then.
A quick aside: If you’ve done screencasts of educational technology before, let me ask you this: have you caught an intense, unscripted moment of learning on them? Probably not, right? The weird thing is with federated wiki this happens ALL THE TIME.
You start to see the bigger vision when you realize that federated wiki can accommodate many types of data: formulas, equations, programming tools, CSV data. Here I pull in an idea and connect it. But maybe I’m a student in a stats class and I realize I can pull in some water readings I took in last semester’s bio class, and use that data to work through my understanding of standard deviation.
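To make that scenario concrete, here’s a minimal sketch of the kind of thing that student might do with the pulled-in readings. The data values and units are invented for illustration; nothing here is part of federated wiki itself:

```python
# Hypothetical stats-class exercise: work through standard deviation
# using water readings pulled in from a previous bio class.
# All values below are invented for illustration.
from statistics import mean, stdev

readings = [8.2, 7.9, 8.5, 8.1, 7.6, 8.3]  # e.g. dissolved oxygen, mg/L

m = mean(readings)
# Sample standard deviation, step by step: square each deviation from
# the mean, average over n - 1, then take the square root.
spread = (sum((x - m) ** 2 for x in readings) / (len(readings) - 1)) ** 0.5

print(f"mean = {m:.2f}, standard deviation = {spread:.2f}")
assert abs(spread - stdev(readings)) < 1e-9  # matches the library's answer
```

The point isn’t the arithmetic, which any library can do. It’s that the data, the working, and the explanation all live together as a forkable page.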
Maybe I see another kid pull in his old bio data, and I remember I built a data visualization tool last semester, so I pull that in and link it to the data, which pushes out a tweakable representation.
The thing is we think we know what hypertext and reuse look like. But I don’t think we have any idea, because we’ve been confined to the very minimal linking and reuse the web allows. And so the idea vendors are pushing for students on the web is the “ePortfolio”, a coffin of dead projects the student has worked on, indistinguishable from a printed binder or filled portfolio case.
On one side, we have this amazing, dynamic, living tool that could help us think thoughts impossible without it, and truly augment our intellect. You could graduate with a tool you had assembled, personally, to help you think through problems. Something quite close to Alan Kay’s vision of Personal Dynamic Media.
And on the other side we have a gaggle of vendors trying to sell us self-publishing tools.
Our thinking here is so, so small. As David Wiley has put it, we have built ourselves jets, and yet we’re driving them on the ground like cars. We have to do better.
Update for Alan (2/13): The full route
In the comments, Alan brings up the very real issue of what happens as more stuff pours into federated wiki. Will you be able to find the connections? Or will you be overwhelmed?
And I realized I had changed the meaning of the video a bit by cutting out the three to four boring minutes of digging around the last happening. In the newer video it looks like I was looking for Stravinsky, but in fact I was not looking for Stravinsky at all. I had 100% forgotten about player pianos, and mechanical ballets.
Here’s an uncut (but partly sped up) video of the process. You can turn the sound down and run it while you read the rest of this post:
If you jump to 22 seconds in, you can see I come in and put a search in for music. What I’m actually thinking initially is there’s a relationship to artwork as recipe. The punch card is like a recipe.
But in music, it’s really not. And I realize this as I read it. We’ve had sheet music for a long time, but sheet music is a collaboration between the recipe and the cook. The loom doesn’t collaborate with anyone.
OK, so maybe it’s a different kind of sheet music. I’m reminded of the Varèse Score by the search results. Such scores were the representation of an electronic video and film show produced by Varèse. Is that a better connection?
I pull up some third party materials, but scanning them, it’s not really the Jacquard Loom, is it? These are scores written on paper, and in fact it’s kind of the opposite of the loom — because even Varèse couldn’t know exactly how the music would turn out — there was an element of randomness to it.
But Varèse Score links me to a page called Art as Mechanical Reproduction. I’ve actually been on this page a couple times before, but I was so fixated on the Varèse possibility I didn’t really read it.
With the Varèse idea finally dead, I dig deeper. And as I scan it I see this Stravinsky’s Player Piano link. And the first thing I think is a player piano roll is very like a punch card.
I click it, and as I scan it I’m reminded of Stravinsky’s obsession that people play his music without interpretation. This notion of “fidelity” to an original abstract vision. And this is the connection that ties all three together — the loom, the player piano, and courseware. We talk efficiency, but the other attraction, for better or worse, is fidelity. And I say “Ah, this is what I was looking for!” as if I had known it the whole time. But of course I didn’t.
And in fact, it was the process of understanding why Varèse didn’t fit that primed me to see the Stravinsky connection.
This is a long answer to Alan’s question, but I think the answer is it may get harder to find the thing you want, but it should get easier to find the thing you need. More links is more serendipity, more routes to the idea that can help you. And since the neighborhood will dynamically expand as you wander, all your Happenings will link seamlessly together giving you access to everything as you need it.
[T]he problem is that bad writers tend to have the self-confidence, while the good ones tend to have self-doubt. So the bad writers tend to go on and on writing crap and giving as many readings as possible to sparse audiences. These sparse audiences consist mostly of other bad writers waiting their turn to go on, to get up there and let it out in the next hour, the next week, the next month, the next sometime. The feeling at these readings is murderous, airless, anti-life.
– Charles Bukowski on why he encouraged people to not write.
“There’s only one other industry that calls their customers ‘users’.”
– Old information technology proverb.
Somewhere in 2009 it hit me that I had been wrong about educational technology. Very wrong.
The year before, I had been working for an organization that dealt with OpenCourseWare, and the rhetoric was (as it still is) that reuse of OCW could lead to education sector efficiency. But as we looked for reuse we found that there wasn’t much evidence of it. Not institutional reuse anyway, or reuse by professors.
And as I pondered this, it became very obvious why this was the case. Every single decision in the OER community at that time seemed to be predicated on glorifying and funding creators, based on a trickle-down theory of impact. Courseware was shipped as PDFs, with school logos burnt into the slides. Hewlett was funding Ivies to the tune of tens of millions to create OCW, and yet none of those projects ever sat down with teachers from state universities and community colleges and asked what they might actually want.
Simple things precluded any reuse. Test questions were published with answer keys, in formats that were not importable into LMS’s. Course videos contained references to resources unavailable to students not in the lecture, or housekeeping about advising periods that would mean nothing to someone watching this video for a class.
The Open Educational Resources community, at least the elite part of it, portrayed itself as a community-minded set of save the world do-gooders. But in reality, much of it was the Poetry Slam from Hell Bukowski talks about above, a bunch of elite schools sitting in Hewlett’s coffee shop, waiting for Yale to step down so they could show their OCW.
Makers, Builders, Producers
People say they want a world of “producers not consumers” or “makers not takers”. Peel back the assumptions under those statements and you’ll find some disturbing stuff.
And so it was when I returned to instructional design in 2009, fresh off the OCW experience, that I found these phrases, which used to seem so normal, now strange.
There was a time, after all, that we used to call lurkers “readers”. Users were “doers”. These things had respect.
Now anything short of “making” was devalued. “We’re going to turn our students from passive consumers to producers!” we yelled.
This was presented as revolutionary, but it wasn’t revolutionary at all. The forms might have been revolutionary – video, podcasts, CAD-based fabrication. But the idea that “producers” should be valued at a university is the least subversive idea one could have. It’s the entire basis of the academic enterprise. Everything, from tenure review to pay scale, is based on the notion that it’s not enough to be well-read, or to think good thoughts. One must make something. Academia is one of the few places where your entire career is based on how many important things have your name on them.
And so we showed students how to make things with their name on them.
Edtech-based making in the university was not an attempt to introduce a new value structure. It was (and is) an attempt to give students tools to achieve value in the existing power structure.
That’s valuable, certainly. I repeat, it is valuable. But it hardly makes you Illich.
And of course there’s a possibility that you’re just spreading the “Shut Up or Ship” culture of Silicon Valley that believes participation is only the domain of those-who-code, or the PDFing logo-stamping culture of the Ivies. It’s possible that you’re just reinforcing the same narrative that justifies the massive inequality in our country on the basis that the 1% “contribute”, and that being a “Job Creator” is better than doing a job for someone else well. It’s possible that you’re enabling the people that went nuts after Obama’s “You didn’t build that” speech, where Obama made the rather mundane observation that success in America is enabled by our entire community.
It’s possible it gets worse. As Audrey Watters, Debbie Chachra, and Bjork have noted recently, definitions of what constitutes “building” are gendered, and differently applied. A Taylor Swift album is seen (correctly) as co-produced and co-written. A Kanye West album is a Kanye West album, even if it contains a cast of hundreds. Kanye West is seen as brilliant, where Bjork is seen as making an excellent collaborative album.
And while we’re talking about the Brilliant Mind myth creatorism is based on, we might as well pull in this chart, which plots female PhDs in a field against the emphasis that field places on “brilliance” vs. “hard work”.
The paper in Science the above graph comes from focuses just on the brilliance connection. A focus on hard, sustained work over brilliance appears to predict female representation better than any other general model I know of.
At the same time, I can’t help but see them as interlinked phenomena. Sociology graduates Bjorks. Philosophy graduates Kanyes. The hierarchy is Kanye > Bjork > Rock Critic > Listener. The fact that it is discerning listeners who produce artistic revolutions is lost on everyone.
To coders, people who don’t code are not makers. But we keep punching all the way down. Published writers kick bloggers. Bloggers see themselves as creating on a level that readers and commenters don’t. It’s not just about your job title, it’s about your internal taxonomy.
The Sources of Innovation
The things I have done since 2009 seem very scattered to people. I ranted on about OER reuse. I got deep into von Hippel. With Amy Collier and Helen Chen I looked at how classes interact with MOOC materials produced elsewhere. I experimented with the pedagogy of summary. And now I’m involved with a federated wiki project so complex I barely know where to link to explain it.
Underneath all of these projects (and many others: Water106, the Mixable MOOC, Design Patterns in ID) is really a single obsession. What would happen if we got over our love affair with creators? What would happen if we collapsed the distinction between maker and taker, consumer and producer, not by “moving people from consumption to production”, but by eliminating the distinction? What if we saw careful curation of material as better than unconsidered personal expression?
What if we stopped calling readers lurkers? What if we stopped caring about who got the credit? What if the OER community saw the creation of materials as a commodity, but the reuse as an art?
I’m not attacking digital storytelling here, personal blogs, or Makey-Makey Boards. I’ve used all of these in my work with faculty, and I’m going to continue to do so. These things get students engaged and excited, and in the process of making things they learn much more deeply than they could ever learn from a textbook. Sometimes a poetry slam is what you need. Sometimes it’s even good.
But the projects that interest me most nowadays are the ones where the thing made doesn’t fall into the traditional categories. Federated wiki, the pedagogy of summary, student curation. What interests me most in ds106, for example, is not the making, but the co-making. In these projects I see a chance for an engagement that is less ego-driven, less divisive, and ultimately more useful to society. In the years since Gutenberg I think we’ve managed to get the single brilliant author thing down pretty pat. It might be time to try something else.
I’ll end here with a story. Back in 2010 I was getting a coffee with Jon Udell, and he said something that has stuck with me. He had been trying to get people involved with community by encouraging production of various things, but it wasn’t working the way he planned. He said that there was a point he realized that he was trying to make everyone a writer. And everyone’s not a writer.
His obsession with getting people to share calendar feeds seemed odd to some people, but for him it was (I think) about something bigger. Were people to simply share community calendar feeds with a hub, we could solve far more community issues than a roomful of bloggers ever would. A community getting this idea, that the work they don’t even think of as creation could be valuable; that would be much more powerful than telling people to podcast their town meetings, or asking them to blog their work.
That idea turned out to be difficult for a number of reasons, but I think the concept is right. I imagine classes where writing a good and useful summary of research is seen as being as “brilliant” as writing an original paper, where cleaning up data is seen as being as valuable as theorizing about it. Where a well curated and quoted set of material is as valuable as research. Where reuse is valued over reinvention. Where replicating experiments is as revered as creating new experimental designs. Where people who connect others and think about how to connect others get credit for the advances those connections bring.
I think students who came out of a program like that would be better suited to solving the sorts of problems the world has right now. It’s just a theory, but I’m hoping we get a chance to prove it.
As we plan for our second fedwiki happening the differences between federated wiki and wiki become, well, stranger.
If you’ve been following the story thus far, you know that federated wiki is pretty radical already. As with wiki, people converse through making, linking, and editing documents. But because each person has a separate wiki, there is a fluidity to this “talking in documents” that is hard to describe.
You write a post referencing Derrida’s concept of “ultimate hospitality”. I get interested in that, do a bit of research. I save a copy of (fork) your post to my site, but link it to a page describing Derrida’s hospitality in more detail. Not because I’m an expert, but because it’s a good exercise to understand your writing. You see that post and fork mine back. The next visitor to your page finds your page plus my article annotating it. Maybe they edit it, creating their own fork. And so it goes.
Carry It Forward or Clean Slate?
As we move into Happening #2 on Teaching Machines, this question comes up: do we pull these documents into the new happening? Use the same site? Do I create one big “Megasite of Mike” which I haul into each event like my persistent twitter feed?
And what we’re thinking is maybe I don’t. Maybe the Happening #1 site is done, and I create another site.
So for instance, my wiki farm is at hapgood.net. For Happening #1 on “journaling” I had journal.hapgood.net. Maybe for the Teaching Machines happening I make machines.hapgood.net. And to the extent I want to talk about something from a previous event in this new context, I fork it into the new context.
It’s pretty simple to do this in Federated Wiki after all. I just drag the page from one site and drop it on another, I edit it for the new audience, or maybe even take the opportunity to clean up a few typos. And that’s it, done!
While this may not sound extraordinary, it is in fact an inversion of how we usually think of wiki (and sites in general). And it has some neat ramifications.
The Dreaded Curve of Collaborative Sites
To understand why this is such a departure from traditional collaborative sites, we need to introduce you to the dreaded logistic curve. Here’s the curve of Wikipedia production:
Via Wikimedia Commons.
Until about March 2007 many people thought that Wikipedia’s growth was exponential. Above, we see a log scale graph, where exponential growth would be represented by a straight line (explanation of log scale).
But things go a little wonky in 2007. In that year it begins to become apparent that a logistic model might better predict Wikipedia growth. Currently the site is growing faster than a logistic model would predict, but well under earlier exponential models.
“Enwikipediagrowthcomparison” by HenkvD
If the extended model above fits, Wikipedia will near heat death in about 10 years.
Projection by HenkvD
People have attributed this to a lot of things, and certainly it’s a phenomenon with many inputs. But the logistic-like nature of it suggests one simple explanation — limited resources. Logistic curves are what you find when you map animal populations, for example, coming up against the resource limits of the environment.
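For the curious, here’s a toy comparison of the two models, showing why they are easy to confuse early on and wildly different later. The parameters below are invented for illustration, not fitted to any actual Wikipedia data:

```python
# Compare exponential vs. logistic growth with the same initial rate.
# Parameters are invented for illustration, not fitted to Wikipedia data.
import math

r, K, n0 = 0.5, 1000.0, 1.0  # growth rate, carrying capacity, starting size

def exponential(t):
    # Unlimited growth: N(t) = N0 * e^(rt)
    return n0 * math.exp(r * t)

def logistic(t):
    # Standard closed-form logistic solution, throttled by capacity K
    return K / (1 + ((K - n0) / n0) * math.exp(-r * t))

for t in (0, 5, 10, 20, 30):
    print(t, round(exponential(t)), round(logistic(t)))
# Early on the two curves track each other closely; later the logistic
# curve flattens out near K while the exponential one keeps compounding.
```

On a log-scale plot like the ones above, the exponential curve is a straight line, while the logistic curve starts out looking straight and then bends over as it approaches the carrying capacity.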
In wiki, as in many other collaborative projects, the limited resource is stuff left to be written. People get together to write articles, and initially each new article leads to two other articles, and you see that growth on the left side of the graph.
Eventually, though, the easier stuff is written. It may not be written the way you’d like it to be, but it’s written. Wikizens move from homesteading new territory to vicious fighting over already cultivated plots of land. The enterprise begins to feel less fresh, and the type of people who find this stage invigorating are often dreadful bureaucrats.
Now, you might think this was just a function of Wikipedia, as it has articles on everything. But you’d be wrong. Smaller wiki communities see this same pattern. Here’s a graph of the first wiki approaching its own heat death:
(Chart by Donald Noyes)
Now these are the tail-end days of that wiki — it started in 1995. And it’s this observed pattern that led Ward Cunningham to predict, much earlier than most, that Wikipedia would follow the same pattern.
We get why Wikipedia would hit a resource limit — there’s an article on fish fingers already, for crying out loud. But why would WikiWikiWeb run out of subjects? There’s only 30,000 pages, after all.
The answer’s simple — people come together for a reason, a shared interest. In the case of WikiWikiWeb it was to share software design patterns and agile programming techniques. As those subjects fill out, the opportunities for new contribution diminish. The community has a choice — expand its charter (which fractures the cohesion of the group) or move into maintenance mode. (Likewise, Wikipedia could expand its charter by loosening notability requirements, for example, but this would radically change the nature of the project people thought they were contributing to).
Very few people want to be on a site that’s all about maintenance. As we mention above, the lack of new ground to cover leads to a claustrophobia manifested in endless arguments about what should and shouldn’t be on the wiki and non-stop edit wars.
This happens in other communities too, by the way. At some point in a political forum if there isn’t new blood people feel like they are just rehashing the same conversation over and over.
And so you get what I call Colony Collapse. One day you reach a tipping point, and suddenly the only people left on your site are the people you actually considered banning at the beginning. (This is what has happened to my old political community). For a while these people maintain the site, but eventually they too get bored and leave, and the site falls into disrepair. It starts to rot.
You can see this at my old site Blue Hampshire, which has reached the final phase of collapse, and now consists primarily of syndicated political press releases and occasional comments about how moronic other commenters are. There’s 8 years of beautiful posts on that site, almost 15,000 posts. Many contain wonderful explanations of how New Hampshire government works, and personal reminiscences of New Hampshire political history. There are comments on that site with more political wisdom than you’ll find in a year of the Washington Post (there’s over 100,000 comments).
And it’s all rotting.
It’s heartbreaking, and after you’ve been through it once you get into a feed-the-beast mentality about all future communities. To paraphrase the line from Annie Hall: Online communities are like sharks; they have to keep moving forward or they die. And so, as community leader, you take on the exhausting role of the shark, pushing the site forward, always watching for the dreaded inflection point which presages the site’s collapse. Because once that happens, it’s all over.
A Different View: Wiki Sites as Bounded Conversations
That’s a long digression. But back to federated wiki.
Here’s a possible vision for the federated wiki sites that you, the user, make.
You’ll make federated sites for conversations on things, the way we are having this happening on Teaching Machines. And during the event, you’ll build it out. And then, at a predetermined point, you’ll call your personal version of that site done and abandon it.
In other words, we bake heat death into the plan. We accept our mortality.
That’s great, but now comes the question — how does the material get maintained? How do we recurse over it?
The answer we’re coming up with is that it gets pulled into new sites belonging to new conversations.
We have an example of this right now. We tried an experiment a while back that was a bit of a flop — a wiki called the “Hidden History of Online Learning”. It’s got about a hundred pages of this variety in it:
The site flared up into activity for a couple of weeks, then sputtered and died. By all normal metrics it’s a failure, doomed to bit-rot and link-rot, a slow descent into a GeoCities-like hell.
But in this case everyone involved has their copy of the site. As they participate in the Teaching Machines happening, they can pull stuff over into the new teaching machines wiki they have made. They’ll spruce it up, check the links, and maybe even improve it a little.
Ward and I were talking about this, and he said it reminded him of an earlier time when he was first working with Smalltalk. Objects would get better the more systems they were used in, because each time they were reused they would get refactored, optimized, simplified, extended. He and his fellow coders even had a name for this: “Reverse Bit-rot”.
It shouldn’t happen. It defies the entropy we see in all those pretty graphs up-page. But it happened.
Maybe this can happen here too. Sites can end, like conversations end. But we reach back into those previous conversations and say — you know, we were talking about that a couple months ago, let’s pull this thing in. And that thing gets another spin.
Maybe this doesn’t sound radical to you. Maybe it sounds ordinary. But I am sure every former online community manager out there understands just how radical this idea is. It’s equivalent to the soft-forking solution adopted by LiveJournal that became *the* solution for the non-collaborative communities that followed. But here it is applied to collaborative tasks.
In this world, sites like Blue Hampshire die — and maybe even die quicker, more humane deaths. In fact, maybe we say, hey, here’s a site that is going to exist just for the 2008 election. Two years later you spin up a site on 2010. Four years later on 2012. Some material from 2008 flows in. Some doesn’t. If the material is good enough to live on, there is no failure, no Colony Collapse.
Each wiki site is the product of a bounded conversation, expected to die, but also expected to be raided for the next conversation.
Kind of nice, right?
So I’m super excited to announce we’ll be doing a Fedwiki Happening on Teaching Machines with Audrey Watters helping to facilitate.
If you don’t know what federated wiki is yet, the short answer is that it’s a form of wiki where everyone has a personal wiki that magically links to other wikis to form a federation. People find the experience of using it difficult to explain, but I personally thought that Alan Levine’s description of it as communal index cards was a good pass at explaining it.
We ran a Happening in December that was structured around “collaborative journaling” but as people in the Happening pointed out, the exciting stuff was cooperative, not collaborative, and journaling didn’t really capture the weird hypertext mind meld that happens when you get deep into federated wiki. While the first Happening was a great success, we’re interested in exploring the form. So we’re offering this Course that’s not a Course with Audrey’s help in March.
A cMOOC Not Based On Stream
Here’s how it will work. You’ll notice a lot of parallels with Wiley’s proto-MOOCs here as well as with Siemens/Downes/Kop cMOOCs and Cormier Rhizo-experiences. The approach is far from revolutionary, but trying it on federated wiki will add some interesting dimensions.
First, we’ll get all of you set up on federated wiki through two waves of onboarding which demonstrate how to use the tool. The first will start Feb. 1 and the second will start March 1. I really recommend getting into the February cohort so you can play around with the tool a bit before the Audrey part of the Happening starts.
The way we think the topical part of The Happening is going to work is this. On Day One, Audrey will publish a concept sheet. It’ll look like this (although this is not from the course, it was just made up as an example).
Each one of those links to nothing initially. As participants come through, they pick topics, click the link and write summaries of these subjects. Or (and this is important) they write something else maybe only tangentially related. Maybe something not related. Or they go to other people’s articles and extend them by linking to new articles of their own creation.
Slowly, however, material gets covered. Over the course of three days as you click these links you’ll see articles that others have written. If you like those articles, you can fork them to your site. If you like them, but would like to extend them, you can do that too. You’re also encouraged to link out from the articles and explore other areas of interest not captured in the list. You’re slowly building a personal wiki site on this subject that represents what you believe to be the best possible view of the subject.
At the end of three days we’ll have a Google Hangout on Air, and we’ll use the articles as prompts for a conversation with Audrey. We’ll look at the articles people have written on her suggested subjects (“Audrey, this article says ‘Augmenting Human Intellect’ is sometimes seen as oppositional to AI — can you talk about that?”). But we’ll also talk about the ones that people have contributed on their own (“Is anyone involved with the article on ‘A/B Testing’ in the Hangout or chatroom? Do they want to say a bit about it?”).
Finally, we’ll look for places where there’s some variation in the versions people have written and forked. Is there an interesting disagreement here, or are things just out of sync?
Then it’s another list of articles for Day 4, and we go through this whole cycle again. We do it seven times for a total of 21 days.
What’s New About It
It sounds a lot like David Wiley’s wiki-based Open Education course, obviously. Or any of a number of open wiki courses. Again, it *is* a lot like that.
Where it gets different however, is that it is a cooperative, not a collaborative class. So each participant will be creating their own wiki out of the works of others, a wiki that captures their view of the subject, their connections, their set of interests. (In this way it’s the fusion of the blog-based approach people have been using with wiki culture).
It’s also meant to be highly associative. One thing I’ve learned with federated wiki is that the distributed ownership of the wiki space frees people up to make the sort of connections and extensions of ideas they would be timid about in a shared space. In a normal wiki, you ask forgiveness, not permission. In a federated wiki you ask for neither, and ultimately this benefits everybody, as people edit and supplement material at levels you just don’t see in a standard wiki based class.
Which brings us to this point — outside of learning about teaching machines, you’ll also get to use federated wiki, which is a bit of a mindbender. It’s sort of what the web might have been if it had been built by the people who produced KMS and the creators of GitHub.
Ward Cunningham calls it “A new kind of browser embedded in an old kind of browser.” And that’s what you’ll see if you have the patience to deal with the weirdness of it — not a new kind of website, but a different kind of web, one that looks far more like what Engelbart, Kay, and Nelson might have built.
I’m not going to lie. It’s not easy software to master, and it is more “developer release” than polished product. But any Happening participant who stuck with it more than a few days will tell you that it was well worth the effort to figure out. (In fact, a few of them told me the course and software changed their lives forever, but we’re being modest here).
Audrey Fucking Watters
Oh, and did I mention Audrey Watters will facilitate it?
Audrey and I talked about the structure of the event earlier this week, and what sort of skeleton she was going to build for it. And it ended up sounding even cooler than I thought it was going to.
Here’s the premise. There’s these teaching machines, and then there’s this idea of augmenting human intellect. There’s Artificial Intelligence. There’s machines that work on you, and then there’s machines that work *with* you. There’s Illich’s hammer, convivial tools, Memex, Pressey’s Machine, PLATO, Logo, Scantron sheets, Speak-and-Spell, and the weird question of how a construction company became the world’s largest educational publisher.
There’s this whole mess of things where you start by talking about Thorndike’s desire for adaptive release textbooks and end up talking about the nature of the Google Car. If you’ve seen Audrey talk recently, you realize how fascinating these paths are, like a James-Burke-in-Connections path from Lascaux Caves to the invention of refrigeration.
You could do this learning event as a standard read-reflect-make sort of thing. But I’m interested to see how it develops in the more associative soil of federated wiki. Maybe you just put up the article on Programmed Learning. But slowly that piece of yours is woven into a larger narrative — or actually woven into many competing narratives, with each person bringing their knowledge into the web.
I had you at Audrey Fucking Watters, right? I’ll just stop here.
So it starts March 1. It ends March 21. There’s maybe seven hours of minimum commitment, up to however much time you want to spend.
I have a sign up form. People who sign up will get an orientation to Federated Wiki in early February and be plugged into the event (via a hub-like thing called “Conversation Clubs”) for March.
We are providing servers for your fedwiki for this event, so please sign up soon so we can estimate server needs.
See you in the Happening!
A Note to Previous Happeners
I know the Happening over Christmas was a very special event for many people, and one or two people had reservations about using the Happening name for something like this that is a bit more structured. The Happening was a neat enough moment that we all feel we own it, and we don’t want to dilute it.
I completely get that. So I just want to say that 1) this is an experiment, and we can go back to the more self-directed version at any time, and 2) I think that while this sounds structured, federated wiki has a way of blowing a great big hole in anything hierarchical. So I think we’ll all find in the end that this is just as unpredictable, and just as special, as the December event.
And if it isn’t, we’ll blow a hole in it ourselves. ;)
Tim Klapdor has a good explanation of what the FedWiki Happening was and how it went on his site. For those that want to understand what all the fuss is about, that’s maybe a good place to start.
He also has one of the better lines of the week:
There are some idiosyncrasies to learn, some slightly odd concepts and practices but if you’ve ever driven a French car it’s nothing you can’t take in your stride.
Federated wiki, the Peugeot of social software! New tag line I guess.
But he says one thing I want to pick up on:
I’m kind of shocked at the flexibility of Fedwiki as a tool. It’s really only limited by your imagination and I’m only just starting to get a sense of how it can be used.
This is the thing. When you get your first generic Lego set and build the Millennium Falcon, it doesn’t really work as well as just buying the Millennium Falcon Lego set.
Setup is a pain in the butt. Things end up in weird places. It’s a bit funky looking.
But you start to realize after a while that, holy crap, I can build anything with this.
And that’s the case with Federated Wiki. It can be a hub for sensors. A film review application. A navigational database. Interactive fiction. A calorie counting application.
It’s not really a web site at all, or even web software. As Ward Cunningham puts it, it’s a new sort of browser embedded in your old sort of browser. It replaces HTML with JSON. It sees paragraphs/items as the atomic units of the web, not pages. It collapses the read/write distinction of the web, and replaces location-based networking with networking based on named objects. It introduces cross-page refactoring, which turns out to be a much bigger deal than you’d ever guess.
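To make the “JSON instead of HTML” point concrete, here is a rough sketch in Python of what a federated wiki page looks like as data. The field names loosely follow the Smallest Federated Wiki page format, but treat the specific values and identifiers here as illustrative, not as the definitive schema:

```python
import json

# A federated wiki page is a JSON document, not an HTML page.
# The "story" is a list of items (paragraphs, images, plugins) --
# these items, not pages, are the atomic units -- and the "journal"
# is an append-only log of edits, which is what makes forking,
# twins, and attribution possible.
page = {
    "title": "Reverse Bit-rot",
    "story": [
        {
            "type": "paragraph",
            "id": "a1b2c3d4e5f6a7b8",  # illustrative item id
            "text": "Objects got better the more systems used them.",
        },
    ],
    "journal": [
        {
            "type": "create",
            "date": 1420070400000,  # milliseconds since the epoch
            "item": {"title": "Reverse Bit-rot"},
        },
    ],
}

# Because the page is plain data, any client -- or any other page --
# can consume it, which is what "pages as data sources" builds on.
print(json.dumps(page, indent=2))
```

Because every item carries its own id, a paragraph can be tracked across forks and dragged between pages, which is what makes cross-page refactoring tractable.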
In many ways it resembles HyperCard, the missing link to the Web, a maker set for networked creation.
As the Ars Technica article linked above notes, the variety of uses of HyperCard in education were extraordinary:
- a stack of multiple choice test questions
- assembling, storing and delivering teaching materials that included graphs from Excel
- making class KeyNote-like presentations and handouts for students
- a calculator that included a variety of mathematical functions and graphing capabilities
- computer aided instruction in the sciences incorporating animation and sound
- oil-spill modelling
- a database front-end to an Oracle database
- a database in toxicology
- selecting and playing tracks on a videodisk
- an interactive educational presentation showing jobs in the wool industry
- educational interactive games ‘Flowers of Crystal’ and ‘Granny’s Garden’
HyperCard was even the original platform that the puzzle game Myst was programmed in. Myst remains one of the best-selling computer games of all time.
It’s hard to see right now, but underneath the hood of Federated Wiki is some very careful thinking in how a few concepts — JSON, plugin architectures, dynamic neighborhoods, forking, pages as data sources to other pages — can be put together so that you can build applications without programmers.
(In fact, one of the joys of working with Ward has been that, when presented with a needed capability, his question is always “How do we build a solution that gives users more creative power?” He’s iterative, but he rejects the incrementalism of the current age. If you want your users to do amazing things, they need the tools to get out in front of you.)
So yes, it’s a bit of a Peugeot at the beginning. It’s not the fanciest Millennium Falcon on the lot. But in return you get a user innovation toolkit like no other. We may not talk about that much for the time being (I’ve found people get overwhelmed when I show everything Federated Wiki can do). But I saw Tim’s comment and couldn’t resist saying — you don’t know the half of it. ;)
There’s a really excellent post over at Frances Bell’s blog talking about problems with forking in fedwiki — and really about the meanings associated with different types of revisions and decisions of people. And there’s a lot there to comment on, but the piece I take away is that we haven’t reduced the stress of revising work of others as much as we intended.
As Frances points out, sometimes this is because we don’t see the edits we should — and that’s a system issue. But sometimes it’s that we do have the edits we want, but we’re just feeling pressure to adopt “the latest version” and we’re very worried someone’s contribution might have gotten lost. Sometimes it’s that we’d love to incorporate edits, but we’re newbies, and we go about things in ways that create more confusion.
I started writing a long comment on Frances’s blog, but got routed to a weird WordPress login loop. And I thought rather than go back and reconstruct that comment, I might put out a way of thinking about fedwiki for your critique. This doesn’t quite answer the questions or issues in that post (or the many, many comments), but it deals with the *stress* that misfires might cause newbies in the system. Because the misfires are a problem, but it’s the stress that worries me most.
A Kindle Parable
Imagine a world where everybody has a Kindle, but these are not the sucky, closed systems you usually get stuck with. These Kindles allow you to treat your books like a word-processing document. You can add notes, change things, delete passages. And you can forward your annotated & edited version to all of your friends — the license allows you to do that.
So, for instance, I am reading Memory Machines right now, and the chapter I am on is the one on Engelbart’s NLS project.
Now I know a bit about Engelbart. And so as I’m going through it I’m writing in notes, and linking it up to other books I’ve read and other documents I’ve written. There’s also a bunch of stuff in there that’s not really relevant to what I am reading it for, and I delete it, knowing that when I come back to review it I want it to be short and scannable.
I also know I am going to share it with friends, and they won’t read it if it’s super long, so I edit it down to the pieces that really deal with federated wiki, since that’s what we’re talking about.
You (a friend of mine) get my annotated copy on this Kindle book and start making notes, adding some stuff, linking to the books that you have.
Now, here’s the thing — I see your edits. What does it mean if I don’t pull them back?
You can’t make it mean *nothing* — I’m not so naive as that. But I would love it to mean *less*. I’d like it to mean, primarily, that for my own reasons it wasn’t useful to me to pull those edits back.
Underlying that could be a number of things. Maybe I didn’t have the time. Maybe I missed them. Maybe I read them, and enjoyed them, but thought, hey — I can always read this version on your site, no need to fork back. Maybe someone had even better notes, and so I forked theirs and I didn’t want to take the time to integrate your notes in.
Basically I have me, and the people who read my annotated versions of books, and those are my responsibilities. My job is to write the best article I can given the time, but “best” is heavily influenced by what my personal needs are and who I feel I am writing for.
The one thing that best really shouldn’t consider too much is “What article will make my collaborators happy?” because that way madness lies. And that way self-editing lies too — and keep in mind if you’re afraid to say something that you think is useful, it’s likely we all lose.
The Fast Cooperation Problem
So this sort of system is properly called cooperative, not collaborative.
In collaborative systems, people share goals. In Wikipedia, that goal is to write the best and most representative neutral point of view article on a subject, in an encyclopedic style accessible to a general audience. You may have personal goals in addition to that (raising the profile of women in science, for example) — but they must remain aligned with the group goal — they can’t supersede it.
In cooperative systems people are allowed to pursue wildly different goals, but are encouraged to do so in ways that can benefit the work of others. Blogging is in some sense cooperative — you pursue your own goals, but you publish in a way that makes your stuff easily quotable by others. The annotation system that Kindle currently has, where I can see what Gardner Campbell highlighted, is cooperative.
Federated wiki is cooperation dialed up to eleven. Or it’s meant to be.
But looking at the interactions and the confusions and stress that sometimes results, I’m starting to see a failing point.
Here’s the world as I imagined it. I publish an article on NLS. A bunch of people read it and put in their notes. Or just save it. I’m an attention addict, so maybe I immediately read the notes. But I don’t want to pull them back.
Why? Well, I read them, and some were good. But I don’t want to edit this thing every day. I’ll wait for some more notes to accumulate.
Other people fork my article and write their notes, edit their version. Some fork forks of forks. Each one is different. This happens over the space of weeks or months.
At some point I’m writing another article and I link to the NLS article. When I test out that link I see that there are a half dozen twins. I think, I should look at integrating some of this stuff, and I browse the twins and write stuff up.
This is my dream world, but there’s a couple of problems with this. First, we have cross-page item dragging, but it doesn’t create references in the history. So I am pushed to fork a version to get the reference, but all the versions are different, and forking destroys my own history.
This is a big part of the “catch-up” problem. If things get too out of sync it’s hard to put them back together while maintaining attribution. This can be fixed.
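One way the attribution side of the catch-up problem could be handled — and this is a hypothetical sketch of mine, not how federated wiki actually resolves histories — is to treat each version’s journal as an append-only log and interleave the logs by date when you fork a twin back, so neither site’s history is destroyed:

```python
# Hypothetical sketch: merge two edit journals so that forking a twin
# back preserves both histories. Each journal action is assumed to
# carry a date and the site it originated from; these field names are
# illustrative, not the real fedwiki schema.

def merge_journals(mine, theirs):
    """Interleave two journals by date, dropping exact duplicates
    (the shared actions both histories inherited from a common fork)."""
    merged = sorted(mine + theirs, key=lambda action: action["date"])
    seen, result = set(), []
    for action in merged:
        key = (action["date"], action["site"], action["type"])
        if key not in seen:
            seen.add(key)
            result.append(action)
    return result

# My history and a twin's history share a common "create" action.
mine = [{"type": "create", "date": 1, "site": "mike.fed.wiki"},
        {"type": "edit", "date": 3, "site": "mike.fed.wiki"}]
theirs = [{"type": "create", "date": 1, "site": "mike.fed.wiki"},
          {"type": "add", "date": 2, "site": "frances.fed.wiki"}]

merged = merge_journals(mine, theirs)
# Both sites' contributions survive, in date order, with the shared
# ancestor action appearing only once.
```

The point of the sketch is just that attribution is a data problem: as long as actions are stamped with their origin, re-syncing out-of-sync twins doesn’t have to mean one side’s history wins.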
Harder to deal with is the different sort of environment our Happenings present. In the Happenings we’re on the time-scale of hours, not days. It’s a sort of a “wikiswarm”, and it has the benefits and drawbacks of swarm behavior. So people all pounce on the same article at the same time.
That changes the dynamic dramatically. And I’d argue that it dials up the meaning of whether you get forked or not forked, or have your edits included or excluded. It certainly creates an environment where I feel I must stay on top of new edits, for the good of the tribe.
Some upcoming software upgrades (particularly a “history for paragraphs” upgrade) will make this much better. But the social issues remain.
It may be we need to encourage certain behaviors in Happenings that we don’t need to encourage in slow-cooperation environments. I’ve noticed that linking small articles out tends to work better than doing larger pages. When you create a small article with your additional idea, example, or data that idea stays visible even if the main article gets slammed with edits.
It may be that we need to encourage “random article” behavior, to push people into editing the things that are not where the buzz is. We definitely need to get people to attribute less meaning to the inclusion and exclusion of edits. The key is the long view.
At the same time, the “buzz” is part of what is addictive about the Happening. I have talked to so many people who describe just *dying* to get back to their computer to see what the Happening had pushed to the top of Recent Changes. That excitement is a powerful tool, and we want to preserve it in some fashion.
Anyway, lots to think about. Thanks to Frances for starting the conversation.