PowerPoint Remix Rant

I’m just back from some time off, and I’m feeling too lazy to finish reading the McGraw-Hill/Microsoft Open Learning announcement. Maybe someone could read it for me?

I can tell you where I stopped reading though. It was where I saw that the software was implemented as a “PowerPoint Plugin”.

Now, I think that the Office Mix Project is a step in the right direction in a lot of ways. It engages with people as creators. It creates a largely symmetric reading/authoring environment. It corrects the harmful trend of shipping “open” materials without a rich, fork-friendly environment to edit them in. (Here’s how you spot the person who has learned nothing in the past ten years about OER: they are shipping materials in PDF because it’s an “open” format).

The PowerPoint problem is that everything in that environment encourages you to create something impossible to reuse. Telling people to build for reuse in PowerPoint is like putting someone on a diet and then sending them to Chuck E. Cheese for lunch every day. Just look at this toolbar:

[Screenshot: the PowerPoint editing toolbar]

That toolbar is really a list of ways to make this content unusable by someone else. Bold stuff, position it in pixel-exact ways. Layer stuff on top of other stuff. Set your text alignment for each object individually. Choose a specific font and font-size that makes the layout work just right (let’s hope that font is on the next person’s computer!). Choose a text color to match the background of your slides, because all people wanting to reuse this slide will have the same color background as you. Mark it up, lay it out, draw shapes that don’t dynamically resize, shuffle the z-index of elements. Get the text-size perfect so that you can’t add or subtract a bullet point without the layout breaking.

Once you’re done making sure the only people who can reuse your document must use your PPT template, with your background, your custom font, and roughly the same number of characters per slide, take it further! Make it even more unmixable by making sure that no slide is understandable outside the flow of the deck. Be sure to make the notes vague and minimal. In the end it doesn’t matter, because there is no way to link to individual slides anyway.

You get my point. Almost every tool on this interface is designed to “personalize” your slides. Create your brand. The idea is that this is a publication, and your or your university’s stamp should be on it, indelibly.

Most things work like this, unfortunately, encouraging us to think of our resources in almost physical terms, as pieces of paper or slides for which there is only upside to precisely controlling their presentation. But that desire to control presentation is also a desire to control and limit context, and it makes our products as fragile and non-remixable as the paper and celluloid materials they attempt to emulate. We take fluid, re-usable data and objects, and then we freeze them into brittle data-poor layout, and then wonder why nothing ever gets reused.

So I love the idea of desktop-based OER tools, of symmetric editing and authoring. But there’s part of me that can’t help but feel that the “personal” in “personal publishing tools” has a more pernicious influence than we realize. It’s “personal” like a toothbrush, and toothbrushes do not get reused by others.

End of rant. Maybe I need a bit more sleep…


Piketty, Remix, and the Most Important Student Blog Comment of the 21st Century

Maybe I’m just not connected to the edublogosphere the way I used to be, but the story of Matt Rognlie should be on every person’s front page right now, and it’s not. So let’s fix that, and talk a bit about remix along the way.

(Let me admit the title is a bit of hyperbole, but not by much. Plus, if you have other candidates, why aren’t you posting them?)

First, the story in brief.

  • A scholar named Piketty produces the most influential economic work of the 21st century, which pulls together years of historical data suggesting that inequality is baked into our current economic system at a pretty deep level. It’s the post-war years that were the anomaly, and if you look at the model going forward, inequality is going to get worse.
  • A lot of people try to take this argument down, but mostly misunderstand the argument.
  • An MIT grad student named Matt Rognlie enters a discussion on this topic by posting a comment on the blog Marginal Revolution. He proposes something that hasn’t been brought up before: Piketty has not properly accounted for the depreciation of investment. Account for that, he claims, and most of the capital increase comes from housing (a toy version of the arithmetic follows this list). And if that’s the case, we should see a slowing of inequality growth.
  • He gets encouragement from Tyler Cowen and others at the blog. So he goes and writes a paper on this and posts it in his student tilde space.
  • On March 20th he presented that paper at Brookings. He’s been back and forth with Piketty on this. To the extent that policy is influenced by prevailing economic models, the outcome of this debate will determine future approaches to the question of inequality.
  • As of today, it seems that, whatever the outcome of that debate may be, Rognlie has permanently altered the discussion around the work, and the discussion around inequality.
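
Since the depreciation point is easy to miss, here’s a toy version of the arithmetic in JavaScript. The numbers are invented for illustration, not drawn from Piketty’s or Rognlie’s actual data:

```javascript
// Toy illustration of why depreciation matters -- invented numbers,
// not Piketty's or Rognlie's actual figures.
const grossReturn = 0.08;    // gross return on capital (r)
const depreciation = 0.05;   // depreciation rate (delta)
const capitalToIncome = 4;   // capital-to-income ratio (K/Y)

// Gross capital share of income: r * (K/Y)
const grossShare = grossReturn * capitalToIncome;                // 0.32

// Net of depreciation, the share capital owners actually keep:
const netShare = (grossReturn - depreciation) * capitalToIncome; // 0.12

console.log(`gross share: ${grossShare.toFixed(2)}, net share: ${netShare.toFixed(2)}`);
```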

So first things first — this is a massive story that started as a student blog comment and germinated in tilde space. So why are we not spreading this as Exhibit A of why blogging and commenting and tilde space matters? Did we win this argument already and I just didn’t notice? Because I think we still need to promote stories like this.

Forward Into Remix

Of course, I need to add my own take on this, because this is a perfect example of why remix is important, and why Alan Kay’s dream of Personal Dynamic Media is still so relevant.

Here’s what the first comments on that post read like. You’ll recognize the current state of most blogs today:

[Screenshot: a blog comment dismissing Paul Krugman’s take because of his pay]

Hooray for petty ad hominem attacks and the Internet that gives us them! Paul Krugman is rich, and therefore his take on Piketty is wrong. Al Gore took a jet somewhere, so climate change does not exist. Call this a Type 1 Comment.

Matt comes in though, and does something different. Matt addresses the model under discussion.

[Screenshot: Matt Rognlie’s comment addressing the depreciation assumptions in Piketty’s model]

He goes on and shows the impact of this on the projections that Piketty makes. Call this a Type 2 Comment.

This is an amazing story, and an amazing comment. I don’t want to take anything away from this. No matter the outcome of this debate, we will end up with a better understanding of inequality, thanks to a student commenting on a blog.

But here’s my frustration, and why I’m so obsessed with alternative models of the web. The web, as it currently stands, makes Type 1 Comments trivially easy and very profitable for people to make. The web *loves* a pig pile. It thrives on confirmation bias, identity affirmation, shaming, grandstanding, the naming-of-enemies etc.

On the other hand, the web makes Type 2 Comments impossibly hard. Matt has to read Piketty’s book, go download the appropriate data, sift through its assumptions, change those assumptions to see the effect, and then come back to this blog post and explain his findings in a succinct way to an audience that then has to decide whether to take him at his word.

If they do decide he might be right, they have to go re-read Piketty, download the data themselves, change the assumptions in whatever offline program they are using and then come back on the blog and say, you know, you might be right.

And it’s that labor-intensive, disconnected process that explains why the most important economics blog comment of the 21st century (so far) received less discussion in the forum than debates about whether Paul Krugman’s pay makes him a hypocrite.

And before people object that maybe that’s just human nature — I don’t think that’s the whole story. The big issue is that the web simply doesn’t optimize for this sort of thing. One of the commenters hints at this, trying to carry the conversation forward but lost as to how to do so. “Is this the Solow model?” he asks. “I need a refresh, but I don’t know what to Google…”

There is a Better Way

What Matt is doing is actually remix. He’s taking Piketty’s data, tweaking assumptions, and presenting his own results. But it’s taken him a whole bunch of time to gather that data, put together the model, and run the assumptions.

Piketty does make his data sets available for the charts he presents, and that’s really helpful. But notice what happens here — Piketty builds a model, runs the data through it, and presents the results in a form that resolves to charts and narrative in a book. Matt takes those charts, narrative, and data, reinflates the model, works with the model, then does the same thing to the blog commenters: he produces an explanation with no dynamic model attached.

Commenters who want to respond have to guess at how Matt built his model, re-read Piketty, download the data, run their own results, and come back and talk about it.

I understand it’s always going to be easier to post throwaway conversation around a result than to actively move a model forward, clean up data, or cooperatively investigate assumptions. But the commenters on Marginal Revolution and blogs like it often would like to do more; it’s just that the web forces them to redo each other’s work at each stage of the conversation.

It’s hard to get people to see this. But given that Piketty’s book was primarily about the data and the model, let’s imagine a different world, shall we?

Let’s imagine a world where that model is published as a dynamic, tweakable model. A standard set of Javascript plugins is created which allows people to dynamically alter the model. In this alternate world Piketty publishes these representations along with page text on what the model shows and why the assumptions are set the way they are.
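
To make that concrete, here is a minimal sketch of what one of those tweakable blocks might look like. Every name here is hypothetical — no such plugin exists — but it shows the key property: the assumptions travel with the chart, and forking means copying and editing them:

```javascript
// A minimal sketch of a "tweakable model" block. Everything here is
// hypothetical -- no real plugin API is implied. The point is that the
// assumptions travel with the chart, and a reader can fork and edit them.
const pikettyChart = {
  assumptions: { netReturn: 0.05, growth: 0.015, years: 30 },
  project() {
    const { netReturn, growth, years } = this.assumptions;
    // relative growth of capital income vs. total income (the r > g story)
    return Array.from({ length: years }, (_, t) =>
      ((1 + netReturn) / (1 + growth)) ** t);
  },
};

// A reader disagrees with an assumption? They fork the block, edit it,
// and the forked version renders alongside the original.
const fork = {
  ...pikettyChart,
  assumptions: { ...pikettyChart.assumptions, netReturn: 0.03 },
};

console.log(pikettyChart.project().at(-1).toFixed(2),
            fork.project().at(-1).toFixed(2));
```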

When Matt sees it, he doesn’t read it in a book. He reads it online, in a framework like the data-aware federated wiki. He forks a graph he’s interested in, and can click in immediately and see its assumptions. He edits or modifies those in his forked version.

When a discussion happens about Krugman’s post, he writes a shorter explanation of what is wrong with Piketty and links to his forked model. People can immediately see what he did by looking at the assumptions. They can trace it back to Piketty’s page in the fork history, and see what he has changed and if he’s being intellectually honest. If he’s introduced minor issues, they can fork their own version and fix them.

At each stage, we keep the work done by previous contributors fluid and editable. You can’t put out an argument without giving others the tools to prove you wrong. You disagree by trying to improve a model rather than defeat your opponents.

The Future

I don’t know what the future of federated wiki is. I really don’t. I think it could be huge, but it could also be just too hard for people to wrap their heads around.

But the point is that it asks and answers the right questions, and it shows that what is very hard now could be very simple and fluid.

It’s great that this comment was able to move through tilde space to have impact on the real world. But when you look at the friction introduced every step of the way you realize this was the outlier.

What would happen if we increased the friction for making stupid throwaway comments and decreased the friction for advancing our collective knowledge through remix and cooperation? Not just with data, but in all realms and disciplines. What could we accomplish?

The Remix Hypothesis says we could accomplish an awful lot more than we are doing now. It says that for every Matt out there, there were hundreds of people who had an insight, but not the time to pull together the data. For every Marginal Revolution there were dozens of blogs that didn’t have the time or sense to see the value of the most important comment ever made on them, because it required too much research, fact-checking, or rework.

This is a triumphant story for the advocates of Connected Learning — publicize it! But it’s also a depressing insight into how far we have to go to make this sort of thing an ordinary occurrence.


Critique by Redesign and Revision

David Wiley’s Remix Hypothesis[1] is that we won’t see the full impact of digital culture on education until we embrace the central affordance of digital media — its remixability, by which he means the ability of others to directly manipulate the media for reuse, revision, or adaptation to local circumstance. I think this is an important enough concept that it’s worth expanding on over a few posts.

To review, we’re used to conversation — that transient, many-to-many chain of utterances. And with the advent of cheap books many centuries ago, it became common for us to think in terms of publication, that permanent pass at a subject which rises above particular context. And we’ve played with these forms in profitable experiments, publishing conversations and building conversational publications.

But what we have done only sporadically is to use the fluidity of digital media to have the sort of “conversation through editing” that digital media makes possible.

There are, of course, precedents. My sister makes her living building models of complex phenomena in Excel spreadsheets. She’s considered a bit of a rock star at this. Analytics people laugh, I suppose, at the use of Excel (although frankly, she probably out-earns them). Why would a business pay her so much money for a *spreadsheet*? There are so many sexy tools out there!

The answer is that her spreadsheets are the start of a conversation. She works to capture the knowledge of the organization in Excel and model it, producing projections and the like. But that’s only the first step. When the spreadsheet is done, it’s handed off to people who continue that conversation not by drive-by commenting that the model is all wrong, but by changing the assumptions and the model and showing the impact of those changes. Because Excel is a common currency among the people involved, these conversations can pull in a wide range of expertise, and ultimately improve the model or its assumptions.

Today I found another precursor — “critique by redesign” in the visualization arena. Even in the days before digital media, visualizations were built on few enough data points that the more effective way to converse about them was to redesign them. Here’s a famous example from Tufte:

[Image: Tufte’s redesigned chart of space shuttle O-ring damage versus temperature, shown against the original]

This redesign is meant to be an improvement over the original version, showing how damage to the space shuttle’s O-rings increases as temperature dips. The critique of the original graphic isn’t (purely) conversation about its flaws — it’s a revision.

Design and Redesign in Data Visualization (from which I pulled this example) comments:

But the process of giving and even receiving visualization criticism does turn out to hold surprises. It’s not just that visualization is so new, or that criticism can stir up emotions in any medium. As we’ll discuss, the fact that visualizations are based on transforming raw data means that criticism can take forms that would be impossible for a movie or book.

The authors are absolutely correct, and yet as all work becomes digitized it’s likely both print and film will add critique by redesign or revision to their respective cultures.

On the edge of networked digital media we see that permissionless remix, revision, and adaptation accomplish what traditional conversation and publication could not. We’ll note that collaborative revision of documents has become normal in most businesses, finally unlocking the full potential of digital editing. As I’ve noted elsewhere, electronic music is undergoing a similar revolution. Wikipedia, the one impossible thing the web has produced, created the most massive work in the history of mankind using similar processes. Coders on large, complex projects critique code by changing code, letting revision histories stand in for debate where appropriate.

We are educating our students in the art of online conversation and publication, and this is important. It represents the scaling of those cultures of talk and print, and maybe the democratization of them too (although that question is more fraught). And remix *needs* these skills: publication is the start of remix, and conversation around artifacts is what makes the process intelligible.


But there’s a third leg to this stool, and it’s as cultural as it is technological. Our students (and teachers!) need to learn how to supplement comment and publication with direct revision and repurposing. It’s only then that we’ll see the true possibilities of a world of connected digital media.[2]


1. I’m shamelessly borrowing and slightly expanding David Wiley’s excellent term here. For more context and a great read, go to The Remix Hypothesis.

2. I think we do this in education, a bit. But not nearly at the level it merits, and the tools we have still mostly treat this as an afterthought.


Paper Thoughts and the Remix Hypothesis

David Wiley has an excellent post out today on a subject dear to my heart — the failure to take advantage of the peculiar affordances of digital objects.

Yeah, I know. Jargon. But here’s a phrase from Bret Victor that gets at what I mean:

“We’re computer users thinking paper thoughts”  – Bret Victor

You can do a lot of things with digital media. You can chat in a forum, which is rather like conversation. You can put out a blog post, which is rather like print publication. You can tweet, which is rather like, um, conversation. You can watch a video, which is rather like publication. You can post to Instagram, which is rather like, um…publication. With conversation attached. You can put out a course framework, which is rather like publication of a space where people can have conversations.

Hmmmm….

There are really only two modes that most people think in currently. One is conversation, where transient messages are passed in a many-to-many mode. The other is publication, where people communicate in a one-to-many way that has more permanence.

What we are seeing now in education, for the most part, is the automation and scaling of conversation and publication. And this is what always happens with new technology — initially the focus is on doing what you’ve been doing, but doing it more cheaply or more often.

But that’s not where the real benefits come from. The benefits come when you start thinking in the peculiar terms of the medium, and getting beyond the previous forms.

I would argue (along with Alan Kay and so many others) that for digital media the most radical affordance is the remixability of the form (what Kay would call its dynamism). We can represent ideas not as finished publications, but as editable models that can be shared, redefined, and recontextualized. Conversations are transient, publications are fixed. But digital media can be forever fluid, if we let it.

We see this in music. I’m a person who has benefitted from the crashing price of digital audio workstations and the distribution channels now available for music. These have allowed me to record things that would have been impossible for a single person to record even ten years ago. Distribution channels have led to weird incidents, like having a multi-week number one song on Latvian college radio stations in 2011 (so broadly played, in fact, that I actually made the Latvian Airplay Top 40).

This is cool stuff, absolutely. But it’s not the real revolution.

To produce music, I use Propellerhead Reason, and I suppose you could say that tools like it have changed the industry at the margins. But that’s nothing like what is about to happen to music with the new breed of tools.

The latest release of Reason, for example, doesn’t make music any cheaper to produce than the last one did. Its big advance is a tool called Discover, which allows artists to share material to a commons that other artists can mine for inspiration.

And here’s the key — the material is directly editable and resharable by anyone. It is music as something forever fluid.

This is a marketing video for the new feature, but it’s short, and you should watch it, because I think it shows the future of education as well. And because I really think you need to see it. I really, really do.

Now let me ask you — what would happen if our students could work across classes in this way? If our teachers could collaborate in this way?

This, and really nothing else, is the thing to watch. These people who are touting the Uber-Netflix-Amazon of Education as the future? That is so tiny a vision that it depresses the hell out of me. I don’t worry that education can’t catch up to industry in these spaces. I worry that we’ll be pulled down by their conservatism and small-mindedness.

You should worry about that too. Because Uber is a taxi service co-op with a services center that skims money off the top. Amazon is a very effective mail-order company. Netflix supplies video-on-demand. All of these are done in ways that are made highly efficient by technology, but not one of them taps into the particular affordances of digital media (beyond reproducibility).

We need to think bigger. What David is concerned about in terms of teacher collaboration (how do we get teachers to tap into the affordances of fluidity) is what I am concerned about with students (how do we move past the forms of conversation and publication to something truly new). We can have a future as big as we like if we can get beyond these paper thoughts. We’re starting to see this sort of thinking in the music software industry and glimmers of it in education (see, for example, the new Creative Commons-focused approaches to LORs).

These glimmers happen in a world that has been distracted with other more trivial things (Videos with multiple choice questions! Learning styles!). They happen in a world that continues to think the primary benefit of the digital world is that it’s cheap.

What would happen if we moved remix to the center of the conversation? What would happen if we stressed remix for students as well as faculty? What could we accomplish? And if a little Swedish audio workstation company can see the future, why can’t we?


Age of the Incunable

After the Western invention of movable type, not much changed for a very long time. It took many, many years for people to realize the peculiar possibilities of cheap, printed texts.

Gutenberg invents the Western version of movable type in the 1440s, and it’s in use by 1450. He thinks of it in terms of cost, really. Efficiency.

You can print cheap bibles – still in Latin, mind you. Affordable chess manuals.

He dies broke, by the way.

For almost fifty years, change creeps along.

They have a name for books of this period, which I love: “Incunabula”. Or if we go singular, the incunable. So we could call this the “Age of the Incunable”.

Detail of a Gutenberg Bible. Source.

This is what books look like at that time. Almost identical in form and function, style and content to medieval manuscripts.

Just to be really clear – this is a machine-printed book, later adorned by hand. In case you didn’t notice.

There were printed books, but there was no book culture. There were printed books but there was no shift in what those books did.

But then things change. First in the Italian presses. Bibles are printed in Italian, for example. Illustrations become more common.

Aldus Manutius creates the “pocket book” in an octavo format, somewhere around 1500. We get cheap mobility. In 1501, his shop ditches calligraphic fonts for early “roman” fonts, more like the unadorned fonts we know today.

Sentence structure starts to change. We start to develop written forms of argument that have no parallel in verbal rhetoric. Ways of talking that don’t exist in oral culture.

People learn to read silently, which is huge, at three to four times the speed of reading aloud.

And here’s the transition: We start to think the sort of thoughts that are impossible without books.

De Revolutionibus Orbium, by Copernicus, 2nd edition, 1566. Source.

And it’s almost 70 years after Gutenberg that you see a real print culture emerge. Copernicus, Luther, etc. What we start to see is how fast new ideas can spread. We start to see what happens when every believer has their own Bible in which to look up things, in their own language.

We see what happens when an idea can be proposed and replied to across a continent in months rather than decades. We start to see the impact of the long tail of the past, what happens when esoteric works of the past, long hidden away, can be mass produced. What happens when you get Aristotle for everyone. What happens when every scientist can get his hands on a copy of Copernicus.

And Churches fell. And Science was born. And Governments toppled.

But 70 years later.

It’s something worth remembering for those of us excited about the educational affordances of digital material and networked learning. For a long time I thought — well, change is faster now, right? Technological change is, maybe. But it may be the case that certain types of social change are as slow as they ever were. There are days when I think they might even be slower.

We’ll see. For the moment, whether fact or fiction, the belief that this is just a lull will power me through. We’ll get there yet.


People Have the Star Trek Computer Backwards

I was watching Star Trek — the early episodes — with the family a couple weeks ago when it occurred to me: Silicon Valley has got the lesson of the Star Trek computer all wrong.

Here’s the Silicon Valley mythology of it, from Google, but it could be from any company there really:

So I went to Google to interview some of the people who are working on its search engine. And what I heard floored me. “The Star Trek computer is not just a metaphor that we use to explain to others what we’re building,” Singhal told me. “It is the ideal that we’re aiming to build—the ideal version done realistically.” He added that the search team does refer to Star Trek internally when they’re discussing how to improve the search engine. “It comes up often,” Singhal said. “For instance, we might say, ‘Captain Kirk never pulled out a keyboard to ask a question.’ So in that way it becomes one of the design principles—we see that because the Star Trek computer actively relies on speech, if we want to do that we need to work to push the barrier of speech recognition and machine understanding.”

This is what happens when you live in a town without history.

The Star Trek computer, at least in the 1960s, was not ahead of its time, but *of* its time. It lacked the vision to see even five years into the future.

It’s hard to get a good shot to demonstrate this, but here are a couple to give you an idea. These are from the Omega Sector fan site.

[Screenshots: computer consoles from the original Star Trek series, via the Omega Sector fan site]

Now you can say as they do at Google:

“For instance, we might say, ‘Captain Kirk never pulled out a keyboard to ask a question.’ So in that way it becomes one of the design principles—we see that because the Star Trek computer actively relies on speech, if we want to do that we need to work to push the barrier of speech recognition and machine understanding.”

But this profoundly misses the point. Captain Kirk never pulled out a keyboard, because the idea was that computers were not meant to be messed with by users. They were instrumentation, for doing advanced sorts of mathematics and using it to decide which colored bulb to light. There’s no keyboard because there is no text, anywhere, on any computer on the Enterprise to edit.

And the reason for this was that in the 1960s, people thought using computers for text processing was ridiculous. You see this in the history of hypertext. Andy van Dam, who built pioneering text-editing systems at Brown in the sixties, was reduced to begging for time on the Brown computers. Why? Because computers were for math, stupid! The scientists at Brown laughed at him.

This is the same set of people who would tell Jef Raskin at Apple (a decade later) that you didn’t need lowercase letters on the Apple II because all users would be doing is playing games and writing BASIC anyway. (Thanks for the example, Lisa!)

Star Trek is not a post-keyboard world, it’s a pre-keyboard one. You would think a company that makes its money processing the billions of lower-case non-BASIC words that have been typed into computers since then would get that.

The Meaning of “Personal” and “Dynamic” in Personal Dynamic Media

So what happened? What changed? Well, for one, we started typing text into computers.

But something bigger happened as well: text editing became a way of thinking about computers. You see this when Alan Kay starts talking about the Dynabook vision in the late sixties and early seventies. He starts by saying, look, you could have some text on this, and you could edit it. And you could swap out different fonts.

And then he thinks, well, music is really the same thing as text, isn’t it? Strings of characters produce documents the way that strings of notes produce songs. When you “display” a song, you play it. So you could edit sequences of notes and play them without being able to play an instrument, in a kind of text editor for music.

And he goes further. The same way you switch fonts, you could switch the sounds. You could try your composition as played by something trumpet-esque, then switch it to organ, without redoing the composition. Just as you can edit fonts, you could edit the timbre of the different sounds.
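
A tiny sketch of that idea in JavaScript — the note format and instrument names are just stand-ins, not any real audio API:

```javascript
// Kay's "text editor for music": a song is an editable sequence, and
// timbre is as swappable as a font. play() just logs; a real system
// would synthesize sound.
const melody = [
  { note: "C4", beats: 1 },
  { note: "E4", beats: 1 },
  { note: "G4", beats: 2 },
];

function play(sequence, instrument) {
  for (const { note, beats } of sequence) {
    console.log(`${instrument}: ${note} for ${beats} beat(s)`);
  }
}

play(melody, "trumpet"); // try the composition one way...
play(melody, "organ");   // ...then swap the "font" without rewriting a note
```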

And pulling from ideas like Sutherland’s SketchPad, he moves to notions of editable models. He imagines, for example, a user-created model of hospital throughput. You set your assumptions about time per patient and how patients move through different departments. Then you fool around with staffing, adding or subtracting staff from different departments, and see where bottlenecks emerge.

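For a sense of what Kay was describing, here is a toy version of that hospital model in JavaScript. The departments, rates, and staffing numbers are all invented:

```javascript
// A toy version of Kay's hospital-throughput model. All numbers invented.
const assumptions = {
  minutesPerPatient: { triage: 10, exam: 25, pharmacy: 5 },
  arrivalsPerHour: 30,
};
const staffing = { triage: 6, exam: 12, pharmacy: 2 };

// Each department handles (staff * 60 / minutesPerPatient) patients per
// hour; a bottleneck is any department whose capacity falls below arrivals.
function bottlenecks(assumptions, staffing) {
  return Object.entries(assumptions.minutesPerPatient)
    .map(([dept, minutes]) => ({ dept, capacity: (staffing[dept] * 60) / minutes }))
    .filter(({ capacity }) => capacity < assumptions.arrivalsPerHour);
}

console.log(bottlenecks(assumptions, staffing));
// Disagree with the staffing? Edit staffing.exam and run it again --
// that edit, not a comment thread, is the conversation.
```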

And in his mind, this changes communication, and allows us to communicate in new ways.

Now when I want to send my manager this week’s staffing, I can send them this dynamic document. Do they disagree with the staffing? Well, the document is open. They can change the staffing and see what happens. They can look at the assumptions and edit them. We have a conversation back and forth through editing the model. And you can do this with everything — you send me a song you wrote, I like it — but wouldn’t it be nice to add some resonance to that viola?

Compare this vision to the Star Trek vision. Here is Kirk interacting with a computer:

Now, having just seen this episode, I can tell you that Kirk has discovered that this dude who is a travelling actor might just be an infamous war criminal.

This is pretty important, the sort of observation that Star Fleet Command will want to have in their files. So Kirk edits the file, noting….

Except that he can’t edit this file. In the Star Trek world information goes into the computer and comes out of it, but nothing can be edited.

He can tell the computer, I suppose. And then the computer can decide whether to splice that into the next presentation or not. But editing?

Other computers are similar. Here is an Omega Sector reconstruction of a command and control system.

[Screenshot: a reconstructed command-and-control console from “Return to Tomorrow”]

Now I imagine the way this works is this. The lights show you various information and projections about the performance of the ship. Based on those you can alter the flow rate, jettison fuel, or do two other things I don’t quite get.

But what if I want to change the model? What if I want to know what those lights would look like if we reduced load by dropping half our cargo? Or if the computer’s assumptions about oxygen consumption by the crew turned out to be too optimistic?

What if, discovering an oversight in the assumptions, I wanted to distribute the new model to Star Fleet Command?

Again, I have no way to find that out, because I can’t edit, I can’t distribute.

These computers are centuries ahead, in some ways, but they are already behind the vision the pioneers of personal computing were imagining at the time. Vulcan intelligence may be unparalleled in the universe, but the equipment Spock uses reduces him to a switch flipper.

It’s this vision of a population of computer “operators” (a vision that was the most common at the time) that guides the portrayal of Enterprise technology, and renders it so quaintly 1960s, so non-textual, so I/O.

Stumbling Forward Into the Past

So the question we have to ask ourselves is how Silicon Valley came to see the Star Trek computer as a vision of the future, rather than an artifact of a pre-Kay, pre-Engelbart world.

I don’t have easy answers to that.

One possibility is they see the personal computing era as an anomaly. We edited our documents because computers weren’t smart enough to produce and edit documents for us. We edited assumptions in Excel spreadsheets because computers weren’t yet trustworthy enough to choose the right formulas. Soon computers will be smart enough, and Star Trek can commence.

Another is the scale of ubiquitous computing. Perhaps there is a belief that in a universe where everything is a computer, the prospect of having time to mess with parameters is just too overwhelming.

There’s some validity to these arguments, though it’s worth noting that these beliefs are identical to the beliefs of the average 1960s computer scientist. They too believed computers were smart enough and numerous enough that the future could be hard-wired. And they were dead wrong.

There’s a third possibility, though, and one that scares me quite a bit. And that’s that they are unfamiliar with how Star Trek’s technology vision was proved wrong.

In the end, perhaps it doesn’t matter. Either the personal computing revolution can be rolled back (as it has been in many ways in the past few years) or we can push forward and see what happens. It serves the interests of the Googles of the world to make their computers dynamic and your interface static, because dynamic means control (it’s not for nothing the term comes from the Greek for “power”).

For better or worse, Google, Apple, Facebook and others all are building the “ideal version of the Star Trek computer”. If we want to move past these quaint, archaic notions, it’s up to us to build something else.


A Portfolio of Connections

I’ve talked a bit about federated wiki in terms of the way it enables collaboration with others across institutional boundaries. But as we go into Happening #2, I’m gaining more appreciation for the way it allows for collaboration with ourselves across temporal boundaries.

That may sound really muddled. But consider the scenario I demonstrate below. I’m reading a piece by MC Morgan in the current happening about the Jacquard Loom. He’s discussing it in our happening on teaching machines because it was an influential example of a “programmable machine”.

And I start to get a bit of an itch reading that, because I feel like we talked about something like that in the FIRST happening (which was *not* on teaching machines, or even machines). And so I — well, I’ll show you what happened in this four-minute video.

Incidentally, while I edited some “umms” and “ahhs” and silent readings out of that video, it’s not staged. It’s actually me realizing in near-real-time the connection from Stravinsky’s idea that the player piano ensured “fidelity” to the score, to the idea that the Jacquard Loom ensured fidelity to the design, to the idea that the appeal of courseware to administrations is tied up with this notion of fidelity too. That we talk about efficiency, but the other concern has been there since day one.

I knew these things separately, but I didn’t see the connection, didn’t REALLY see the connection, until just then.

A quick aside: If you’ve done screencasts of educational technology before, let me ask you this: have you caught an intense, unscripted moment of learning on them? Probably not, right? The weird thing is with federated wiki this happens ALL THE TIME. 

You start to see the bigger vision when you realize that federated wiki can accommodate many types of data: formulas, equations, programming tools, CSV data. Here I pull in an idea and connect it. But maybe I’m a student in a stats class, and I realize I can pull in some water readings I took in last semester’s bio class, and use that data to work through my understanding of standard deviation.
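
As a sketch of that stats-class scenario (the readings here are invented), pulling CSV data in and working out a standard deviation is only a few lines of JavaScript:

```javascript
// Hypothetical water readings pulled in from last semester's bio class.
const csv = "site,reading\nA,7.2\nB,6.8\nC,7.9\nD,7.1";

const readings = csv
  .split("\n")
  .slice(1)                               // skip the header row
  .map((row) => Number(row.split(",")[1]));

const mean = readings.reduce((s, x) => s + x, 0) / readings.length;
const variance =
  readings.reduce((s, x) => s + (x - mean) ** 2, 0) / (readings.length - 1);

console.log(`mean ${mean.toFixed(2)}, sample std dev ${Math.sqrt(variance).toFixed(2)}`);
```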

Maybe I see another kid pull in his old bio data, and I remember I built a data visualization tool last semester, so I pull that in and link it to the data, which pushes out a tweakable representation.

The thing is we think we know what hypertext and reuse looks like. But I don’t think we have any idea, because we’ve been confined to the very minimal linking and reuse the web allows. And so the idea vendors are pushing for students on the web is the “ePortfolio”, a coffin of dead projects the student has worked on, indistinguishable from a printed binder or filled portfolio case.

On one side, we have this amazing, dynamic, living tool that could help us think thoughts impossible without it, and truly augment our intellect. You could graduate with a tool you had assembled, personally, to help you think through problems. Something quite close to Alan Kay’s vision of Personal Dynamic Media.

And on the other side we have a gaggle of vendors trying to sell us self-publishing tools.

Our thinking here is so, so small. As David Wiley has put it, we have built ourselves jets, and yet we’re driving them on the ground like cars. We have to do better.


Update for Alan (2/13): The full route

In the comments, Alan brings up the very real issue of what happens as more stuff pours into federated wiki. Will you be able to find the connections? Or will you be overwhelmed?

And I realized I had changed the meaning of the video a bit by cutting out the three to four boring minutes of digging around the last happening. In the newer video it looks like I was looking for Stravinsky, but in fact I was not looking for Stravinsky at all. I had 100% forgotten about player pianos, and mechanical ballets.

Here’s an uncut (but partly sped up) video of the process. You can put the sound down and run it while you read the rest of this post:

If you jump to 22 seconds in, you can see I come in and put a search in for music. What I’m actually thinking initially is there’s a relationship to artwork as recipe. The punch card is like a recipe.

But in music, it’s really not. And I realize this as I read it. We’ve had sheet music for a long time, but sheet music is a collaboration between the recipe and the cook. The loom doesn’t collaborate with anyone.

OK, so maybe it’s a different kind of sheet music. I’m reminded of the Varèse Score by the search results. Such scores were the representation of an electronic video and film show produced by Varèse. Is that a better connection?

I pull up some third-party materials, but scanning them, it’s not really the Jacquard Loom, is it? These are scores written on paper, and in fact it’s kind of the opposite of the loom — because even Varèse couldn’t know exactly how the music would turn out; there was an element of randomness to it.

But Varèse Score links me to a page called Art as Mechanical Reproduction. I’ve actually been on this page a couple times before, but I was so fixated on the Varèse possibility I didn’t really read it.

With the Varèse idea finally dead, I dig deeper. And as I scan it I see this Stravinsky’s Player Piano link. And the first thing I think is a player piano roll is very like a punch card.

I click it, and as I scan it I’m reminded of Stravinsky’s obsession that people play his music without interpretation — this notion of “fidelity” to an original abstract vision. And this is the connection that ties all three together — the loom, the player piano, and courseware. We talk efficiency, but the other attraction, for better or worse, is fidelity. And I say “Ah, this is what I was looking for!” as if I had known it the whole time. But of course I didn’t.

And in fact, it was the process of understanding why Varèse didn’t fit that primed me to see the Stravinsky connection.

This is a long answer to Alan’s question, but I think the answer is it may get harder to find the thing you want, but it should get easier to find the thing you need. More links is more serendipity, more routes to the idea that can help you. And since the neighborhood will dynamically expand as you wander, all your Happenings will link seamlessly together giving you access to everything as you need it.

