Simple Generative Ideas

I’ve been explaining federated wiki to people for over a year now, sometimes successfully and sometimes not. But the thing I find hardest to explain is the simple beauty of the system.

As exhibit A, this is something that happened today:

[Screenshot, 2015-04-25: building a new template from an existing template in federated wiki]

In case you can’t see that, this is what is going on. I’m creating a new template (look at my last post to understand the power of templates). But since the way you create a template is just to create a normal page whose name ends in the word “template,” something interesting happens: you can create templates based on other templates.

Now since the IDs of the elements follow each template, that means that your second generation template inherits the abilities you coded for the first generation template, including all the bug-checking, cross-browser compatibility fixes, etc. You get all of that without writing a single line of code. So I can hammer out a basic template, write some handlers on the client side, then others can come and build a template based on my template and extend it, sometimes without even programming. Or they can drag and drop items from other templates, and since the IDs follow those items they might be able to mix and match their way to a new template.
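
To make that concrete, here’s a minimal sketch of the mechanism. This is not fedwiki’s actual code (the data attribute, the handler registry, and the item ID below are all invented for illustration), but it shows why handlers written against stable item IDs keep working in templates built from templates:

```javascript
// Sketch only: assumes each rendered item carries its page-JSON id
// in a data attribute. Real federated wiki internals differ.

// Handlers are registered against stable item ids, not positions.
var handlers = {};

function onItem(id, fn) {
  handlers[id] = fn;
}

// Wire up whatever items happen to be on the page. A second-generation
// template forked from the first keeps the same item ids, so the same
// handlers attach with no new code written.
function wirePage() {
  document.querySelectorAll('[data-item-id]').forEach(function (el) {
    var fn = handlers[el.dataset.itemId];
    if (fn) fn(el);
  });
}

// Example: validation written for the first-generation template...
onItem('a3f2c9d1b4e5', function (el) {
  el.addEventListener('input', function () {
    el.classList.toggle('invalid', el.textContent.trim() === '');
  });
});

wirePage(); // ...fires unchanged on any template derived from it
```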

Did Ward, Paul, and Nick plan any of this as they designed the code? No, not a bit. But every time a feature is requested, the slow discussion begins — instead of a feature, is there a way to deliver this as a capability that extends the system as a whole? Is there a way to make this simpler? More generative? More in line with the way other things in federated wiki work?

So Ward is as surprised as I am that you can build templates out of templates. But neither of us is really surprised, because this is how things go with software that’s been relentlessly refactored. There’s even a term for it: “reverse bit-rot.”

A normal software process would long ago have given templates their own type and data model, souped them up with additional features, protections, and tracking. The agile process says things should be constantly refactored down to a few simple and powerful ideas. It’s not as flashy, but you end up with killer features you didn’t even intentionally write.


The Simplest Federated Database That Could Possibly Work

The first wiki was described by Ward Cunningham as the “simplest database that could possibly work.” Over the next couple of years, many different functions were built on top of that simple database. Categories (and to some extent, the first web notions of tagging) were built using the “What links here?” functionality. The recent changes stream (again, a predecessor to the social streams you see every day now) was constructed off of file write dates. Profile signatures were page links, and were even used to construct a rudimentary messaging system.
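
It’s worth pausing on how little machinery that took. The original wiki was Perl over flat files, so the Node.js sketch below is an illustration of the recent-changes trick rather than Ward’s code (the one-file-per-page layout is assumed): the “stream” is nothing more than a directory listing sorted by write date.

```javascript
// Illustration only: "recent changes" reconstructed from file write
// dates, the way the first wiki did it (this is not Ward's Perl).
var fs = require('fs');
var path = require('path');

var dir = './pages'; // assumed layout: one flat file per wiki page

var recentChanges = fs.readdirSync(dir)
  .map(function (name) {
    var mtime = fs.statSync(path.join(dir, name)).mtime;
    return { page: name, changed: mtime };
  })
  .sort(function (a, b) { return b.changed - a.changed; }); // newest first

recentChanges.forEach(function (entry) {
  console.log(entry.changed.toISOString(), entry.page);
});
```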

In other words, it was a simple database that was able to build a rough facsimile of what would later become Web 2.0.

While we’ve talked about federated wiki as a browser, it can also be used as a backend database that natively inherits the flexibility of JSON instead of the rigidity of relational databases. Here we show how a few simple concepts — JSON templates, pages as data, and stable IDs — allow a designer to create a custom content application free of the mess of traditional databases while still preserving data as data. We do it in 45 minutes, but we cut the video down to 12 minutes of viewing time.
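
To give a taste of what “pages as data” means here: a federated wiki page is served as plain JSON (conventionally at the page’s slug with .json appended), with a story array of typed items and a journal of edits. The sketch below reads one as if it were a database record; the site, slug, and item ID are made up for illustration.

```javascript
// Sketch: read a federated wiki page as a plain JSON record.
// The site, slug, and item id below are invented for illustration.
fetch('http://example.fed.wiki/chili-recipe.json')
  .then(function (res) { return res.json(); })
  .then(function (page) {
    // A page is roughly { title, story: [items], journal: [actions] }
    console.log(page.title);

    // Stable ids let you treat story items like fields in a record
    var servings = page.story.find(function (item) {
      return item.id === 'b7d41a2f9c3e';
    });
    console.log(servings && servings.text);

    // The journal is the page's built-in history of edits and forks,
    // which is what keeps the data federated and forkable.
    console.log(page.journal.length + ' actions in the journal');
  });
```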

Best of all, anything you build on top of federated wiki inherits its page-internal journaling and federation capabilities. So you’re not only putting up a site in record time, you’re putting up a federated, forkable site as well.

The first wiki showed how far one could go with form elements, CGI, and links towards creating a robust community unlike those before it. It’s possible to see federated wiki as a similar toolkit for building new ways of working. I hope this video hints at how that might be done.

Note: I’ve also constructed this page using the above methods here. I think it looks pretty nice.


That Time Berners-Lee Got Knocked Down to a Poster Session

I’ve known about the Berners-Lee poster session for a while, but in case you all don’t, here’s the skinny: as late as December 1991, belief in Tim Berners-Lee’s World Wide Web idea was low enough that a paper he submitted on the subject to the Hypertext ’91 conference in San Antonio, TX was bumped down to a poster session.

Today, though, things got a bit more awesome. While looking for an Internet Archive video to test Blackboard embedding on (this is my life now, folks) I came across this AMAZING video, which has only 47 views.

In it, Mark Frisse, the man who rejected Berners-Lee’s paper on the World Wide Web from the conference, explains why he rejected it and apologizes to Tim Berners-Lee for the snub. He just couldn’t see that in practice, people who hit a broken link would simply back up and find another. It just seemed broken to him, a “spaghetti bowl of gotos.”

The background music is mixed a bit loud. But it is worth sitting through every minute.

Where this might lead you, if you are so inclined, is to Amy Collier’s excellent posts on Not-Yetness, which talk about how we get so hung up on eliminating messiness that we miss the generative power of new approaches.

I will also probably link to this page when people ask me “But how will you know you have the best/latest/fullest version of a page in fedwiki?” Because the answer is the same answer that Frisse couldn’t see: ninety-nine percent of the time you won’t care. You really won’t. From the perspective of the system, it’s a mess. From the perspective of the user you just need an article that’s good enough, and a system that gives you seven of those to choose from is better than one that gives you none.



PowerPoint Remix Rant

I’m just back from some time off, and I’m feeling too lazy to finish reading the McGraw-Hill/Microsoft Open Learning announcement. Maybe someone could read it for me?

I can tell you where I stopped reading though. It was where I saw that the software was implemented as a “PowerPoint Plugin”.

Now, I think that the Office Mix Project is a step in the right direction in a lot of ways. It engages with people as creators. It creates a largely symmetric reading/authoring environment. It corrects the harmful trend of shipping “open” materials without a rich, fork-friendly environment to edit them in. (Here’s how you spot the person who has learned nothing in the past ten years about OER: they are shipping materials in PDF because it’s an “open” format).

The PowerPoint problem is that everything in that environment encourages you to create something impossible to reuse. Telling people to build for reuse in PowerPoint is like putting someone on a diet and then sending them to Chuck E. Cheese for lunch every day. Just look at this toolbar:

[Screenshot: the PowerPoint formatting toolbar]

That toolbar is really a list of ways to make this content unusable by someone else. Bold stuff, position it in pixel-exact ways. Layer stuff on top of other stuff. Set your text alignment for each object individually. Choose a specific font and font-size that makes the layout work just right (let’s hope that font is on the next person’s computer!). Choose a text color to match the background of your slides, because all people wanting to reuse this slide will have the same color background as you. Mark it up, lay it out, draw shapes that don’t dynamically resize, shuffle the z-index of elements. Get the text-size perfect so that you can’t add or subtract a bullet point without the layout breaking.

Once you’re done making sure that the only people who can reuse your document must use your PPT template, with your background, your custom font, and roughly the same number of characters per slide, take it further! Make it even more unmixable by making sure that each slide is not understandable outside of its flow. Be sure to make the notes vague and minimal. In the end it doesn’t matter, because there is no way to link to individual slides anyway.

You get my point. Almost every tool on this interface is designed to “personalize” your slides. Create your brand. The idea is that this is a publication, and your stamp or your university’s stamp should be on it, indelibly.

Most things work like this, unfortunately, encouraging us to think of our resources in almost physical terms, as pieces of paper or slides for which there is only upside to precisely controlling their presentation. But that desire to control presentation is also a desire to control and limit context, and it makes our products as fragile and non-remixable as the paper and celluloid materials they attempt to emulate. We take fluid, re-usable data and objects, and then we freeze them into brittle data-poor layout, and then wonder why nothing ever gets reused.

So I love the idea of desktop-based OER tools, of symmetric editing and authoring. But there’s part of me that can’t help but feel that the “personal” in “personal publishing tools” has a more pernicious influence than we realize. It’s “personal” like a toothbrush, and toothbrushes do not get reused by others.

End of rant. Maybe I need a bit more sleep…


Piketty, Remix, and the Most Important Student Blog Comment of the 21st Century

Maybe I’m just not connected to the edublogosphere the way I used to be, but the story of Matt Rognlie should be on every person’s front page right now, and it’s not. So let’s fix that, and talk a bit about remix along the way.

(Let me admit the title is a bit of hyperbole, but not by much. Plus, if you have other candidates, why aren’t you posting them?)

First, the story in brief.

  • A scholar named Piketty produces the most influential economic work of the 21st century, which pulls together years of historical data suggesting that inequality is baked into our current economic system at a pretty deep level. It’s the post-war years that were the anomaly, and if you look at the model going forward, inequality is going to get worse.
  • A lot of people try to take this argument down, but mostly misunderstand the argument.
  • An MIT grad student named Matt Rognlie enters a discussion on this topic by posting a comment on the blog Marginal Revolution. He proposes something that hasn’t been brought up before: Piketty has not properly accounted for the depreciation of investment. Account for that, he claims, and most of the capital increase comes from housing. And if that’s the case, we should see a slowing of inequality growth.
  • He gets encouragement from Tyler Cowen and others at the blog. So he goes and writes a paper on this and posts it in his student tilde space.
  • On March 20th he presented that paper at Brookings. He’s been back and forth with Piketty on this. To the extent that policy is influenced by prevailing economic models, the outcome of this debate will determine future approaches to the question of inequality.
  • As of today, it seems that whatever the outcome of that debate may be, Rognlie has permanently altered the discussion around the work, and the discussion around inequality.

So first things first — this is a massive story that started as a student blog comment and germinated in tilde space. So why are we not spreading this as Exhibit A of why blogging and commenting and tilde space matters? Did we win this argument already and I just didn’t notice? Because I think we still need to promote stories like this.

Forward Into Remix

Of course, I need to add my own take on this, because this is a perfect example of why remix is important, and why Alan Kay’s dream of Personal Dynamic Media is still so relevant.

Here’s what the first comments on that post read like. You’ll recognize the current state of most blogs today:

[Screenshot: a typical comment, dismissing Krugman’s take on Piketty as hypocrisy]

Hooray for petty ad hominem attacks and the Internet that gives us them! Paul Krugman is rich, and therefore his take on Piketty is wrong. Al Gore took a jet somewhere, so climate change does not exist. Call this a Type 1 Comment.

Matt comes in though, and does something different. Matt addresses the model under discussion.

[Screenshot: Matt Rognlie’s comment addressing Piketty’s model directly]

He goes on and shows the impact of this on the projections that Piketty makes. Call this a Type 2 Comment.

This is an amazing story, and an amazing comment. I don’t want to take anything away from this. No matter what the outcome of this debate, we will end up with a better understanding of inequality, thanks to a student commenting on a blog.

But here’s my frustration, and why I’m so obsessed with alternative models of the web. The web, as it currently stands, makes Type 1 Comments trivially easy and very profitable for people to make. The web *loves* a pig pile. It thrives on confirmation bias, identity affirmation, shaming, grandstanding, the naming-of-enemies, etc.

On the other hand, the web makes Type 2 Comments impossibly hard. Matt has to read Piketty’s book, go download the appropriate data, sift through the assumptions behind the data, change those assumptions to see the effect, and then come back to this blog post and explain his findings in a succinct way to an audience that then has to decide whether to take him at his word.

If they do decide he might be right, they have to go re-read Piketty, download the data themselves, change the assumptions in whatever offline program they are using and then come back on the blog and say, you know, you might be right.

And it’s that labor-intensive, disconnected process that explains why the most important economics blog comment of the 21st century (so far) received less discussion in the forum than debates about whether Paul Krugman’s pay makes him a hypocrite.

And before people object that maybe that’s just human nature — I don’t think that’s the whole story. The big issue is that the web simply doesn’t optimize for this sort of thing. One of the commenters hints at this, trying to carry the conversation forward but lost as to how to do so. “Is this the Solow model,” he asks. “I need a refresh, but I don’t know what to Google…”

There is a Better Way

What Matt is doing is actually remix. He’s taking Piketty’s data, tweaking assumptions, and presenting his own results. But it’s taken him a whole bunch of time to gather that data, put together the model, and run the assumptions.

Piketty does make his data sets available for the charts he presents, and that’s really helpful. But notice what happens here — Piketty builds a model, runs the data through it, and presents the results in a form that resolves to charts and narrative in a book. Matt takes those charts, narrative, and data, reinflates the model, works with the model, then does the same thing to blog commenters, producing an explanation with no dynamic model.

Blog commenters who want to comment have to make guesses at how Matt built his model, re-read Piketty, download the data and run their own results and come back and talk about it.

I understand it’s always going to be easier to post throwaway conversation around a result than to actively move a model forward, or clean up data, or cooperatively investigate assumptions. But the commenters on Marginal Revolution and blogs like it often would like to do more; it’s just that the web forces them to redo each other’s work at each stage of the conversation.

It’s hard to get people to see this. But given that Piketty’s book was primarily about the data and the model, let’s imagine a different world, shall we?

Let’s imagine a world where that model is published as a dynamic, tweakable model. A standard set of JavaScript plugins is created that allows people to dynamically alter the model. In this alternate world, Piketty publishes these representations along with page text on what the model shows and why the assumptions are set the way they are.

When Matt sees it, he doesn’t read it in a book. He reads it online, in a framework like the data-aware federated wiki. He forks a graph he’s interested in and can click in immediately and see those assumptions. He edits or modifies them in his forked version.

When a discussion happens about Krugman’s post, he writes a shorter explanation of what is wrong with Piketty and links to his forked model. People can immediately see what he did by looking at the assumptions. They can trace it back to Piketty’s page in the fork history, and see what he has changed and if he’s being intellectually honest. If he’s introduced minor issues, they can fork their own version and fix them.

At each stage, we keep the work done by the previous contributors fluid and editable. You can’t put out an argument without giving others the tools to prove you wrong. You disagree by trying to improve a model rather than defeat your opponents.
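
If it helps to picture what a dynamic, tweakable model could look like, here is a toy sketch. To be clear, this is not Piketty’s model, and every number in it is an invented placeholder; the point is only that the assumptions, including the depreciation Rognlie flagged, become explicit values that a fork can change and a reader can diff.

```javascript
// Toy sketch of a forkable economic model. All figures are
// illustrative placeholders, not Piketty's or Rognlie's numbers.
function capitalShare(assumptions) {
  // Net return is gross return minus depreciation: the adjustment
  // at the heart of Rognlie's critique.
  var netReturn = assumptions.grossReturn - assumptions.depreciation;
  return netReturn * assumptions.capitalToIncomeRatio;
}

// The published assumptions (values invented)
var original = {
  grossReturn: 0.05,
  depreciation: 0.00,      // depreciation not netted out
  capitalToIncomeRatio: 6
};

// A "fork": copy the assumptions, change one, show your work
var forked = Object.assign({}, original, { depreciation: 0.02 });

console.log('original capital share:', capitalShare(original)); // ≈ 0.30
console.log('forked capital share:  ', capitalShare(forked));   // ≈ 0.18

// Readers no longer guess how the model was built: the diff between
// the two assumption sets *is* the argument.
```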

The Future

I don’t know what the future of federated wiki is. I really don’t. I think it could be huge, but it could also be just too hard for people to wrap their heads around.

But the point is that it asks and answers the right questions, and it shows what is very hard now could be very simple and fluid.

It’s great that this comment was able to move through tilde space to have impact on the real world. But when you look at the friction introduced every step of the way you realize this was the outlier.

What would happen if we increased the friction for making stupid throwaway comments and decreased the friction for advancing our collective knowledge through remix and cooperation? Not just with data, but in all realms and disciplines. What could we accomplish?

The Remix Hypothesis says we could accomplish an awful lot more than we are doing now. It says that for every Matt out there, there were hundreds of people who had an insight, but not the time to pull together the data. For every Marginal Revolution, there were dozens of blogs that didn’t have the time or sense to see the value of the most important comment made on them, because it required too much research, fact-checking, or rework.

This is a triumphant story for the advocates of Connected Learning — publicize it! But it’s also a depressing insight into how far we have to go to make this sort of thing a more ordinary occurrence.


Critique by Redesign and Revision

David Wiley’s Remix Hypothesis[1] is that we won’t see the full impact of digital culture on education until we embrace the central affordance of digital media — its remixability, by which he means the ability of others to directly manipulate the media for reuse, revision, or adaptation to local circumstance. I think this is an important enough concept that it’s worth expanding on over a few posts.

To review, we’re used to conversation — that transient many-to-many chain of utterances. And with the advent of cheap books many centuries ago it became common for us to think in terms of publication, that permanent pass at a subject that rises above particular context. And we’ve played with these forms in profitable experiments, having published conversations and building conversational publications.

But what we have done only sporadically is to use the fluidity of digital media to have the sort of “conversation through editing” that digital media makes possible.

There are, of course, precedents. My sister makes her living building models of complex phenomena in Excel spreadsheets. She’s considered a bit of a rock star at this. Analytics people laugh, I suppose, at the use of Excel (although frankly, she probably out-earns them). Why would a business pay her so much money for a *spreadsheet*? There are so many sexy tools out there!

The answer is that her spreadsheets are the start of a conversation. She works to capture the knowledge of the organization in Excel and model it, producing projections and the like. But that’s only the first step. When the spreadsheet is done, it’s handed off to people who continue that conversation not by drive-by commenting that the model is all wrong, but by changing the assumptions and the model and showing the impact of those changes. Because Excel is a common currency among the people involved, these conversations can pull in a wide range of expertise, and ultimately improve the model or assumptions.

Today I found another precursor — “critique by redesign” in the visualization arena. Even in the days before digital media, visualizations were built on few enough data points that the more effective way to converse about them was to redesign them. Here’s a famous example from Tufte:

[Image: Tufte’s redesign of the space shuttle O-ring damage chart]

This redesign is meant to be an improvement over the original version, showing how damage to the space shuttle’s O-rings increases as temperature dips. The critique of the original graphic isn’t (purely) conversation about its flaws — it’s a revision.

Design and Redesign in Data Visualization (from which I pulled this example) comments:

But the process of giving and even receiving visualization criticism does turn out to hold surprises. It’s not just that visualization is so new, or that criticism can stir up emotions in any medium. As we’ll discuss, the fact that visualizations are based on transforming raw data means that criticism can take forms that would be impossible for a movie or book.

The authors are absolutely correct, and yet as all work becomes digitized it’s likely both print and film will add critique by redesign or revision to their respective cultures.

On the edge of networked digital media we see that permissionless remix, revision, and adaptation accomplish what traditional conversation and publication could not. We’ll note that collaborative revision of documents has become normal in most businesses, finally unlocking the full potential of digital editing. As I’ve noted elsewhere, electronic music is undergoing a similar revolution. Wikipedia, the one impossible thing the web has produced, created the most massive work in the history of mankind using similar processes. Coders on large complex projects critique code by changing code, letting revision histories stand in for debate where appropriate.

We are educating our students in the art of online conversation and publication, and this is important. It represents the scaling of those cultures of talk and print, and maybe the democratization of them too (although that question is more fraught). And remix *needs* these skills: publication is the start of remix, and conversation around artifacts is what makes the process intelligible.


But there’s a third leg to this stool, and it’s as cultural as it is technological. Our students (and teachers!) need to learn how to supplement comment and publication with direct revision and repurposing. It’s only then that we’ll see the true possibilities of a world of connected digital media.[2]


1. I’m shamelessly borrowing and slightly expanding David Wiley’s excellent term here. For more context and a great read, go to The Remix Hypothesis.

2. I think we do this in education, a bit. But not nearly at the level it merits, and the tools we have still mostly treat this as an afterthought.


Paper Thoughts and the Remix Hypothesis

David Wiley has an excellent post out today on a subject dear to my heart — the failure to take advantage of the peculiar affordances of digital objects.

Yeah, I know. Jargon. But here’s a phrase from Bret Victor that gets at what I mean:

“We’re computer users thinking paper thoughts”  – Bret Victor

You can do a lot of things with digital media. You can chat in a forum, which is rather like conversation. You can put out a blog post, which is rather like print publication. You can tweet, which is rather like, um, conversation. You can watch a video, which is rather like publication. You can post to Instagram, which is rather like, um…publication. With conversation attached. You can put out a course framework, which is rather like publication of a space where people can have conversations.

Hmmmm….

There are really only two modes that most people think in currently. One is conversation, where transient messages are passed in a many-to-many mode. The other is publication, where people communicate in a one-to-many way that has more permanence.

What we are seeing now in education, for the most part, is the automation and scaling of conversation and publication. And this is what always happens with new technology — initially the focus is on doing what you’ve been doing, but doing it more cheaply or more often.

But that’s not where the real benefits come from. The benefits come when you start thinking in the peculiar terms of the medium, and getting beyond the previous forms.

I would argue (along with Alan Kay and so many others) that for digital media the most radical affordance is the remixability of the form (what Kay would call its dynamism). We can represent ideas not as finished publications, but as editable models that can be shared, redefined, and recontextualized. Conversations are transient, publications are fixed. But digital media can be forever fluid, if we let it.

We see this in music. I’m a person who has benefitted from the crashing price of digital audio workstations and the distribution channels now available for music. These have allowed me to record things that would have been impossible for a single person to record even ten years ago. Distribution channels have led to weird incidents, like having a multi-week number one song on Latvian college radio stations in 2011 (so broadly played, in fact, that I actually made the Latvian Airplay Top 40).

This is cool stuff, absolutely. But it’s not the real revolution.

To produce music, I use Propellerhead Reason, and I suppose you could say that tools like it have changed the industry at the margins. But that’s nothing like what is about to happen to music with the new breed of tools.

The latest release of Reason, for example, doesn’t make music any cheaper than the last one. Its big advance is a tool called Discover, which allows artists to share material to a commons that other artists can mine for inspiration.

And here’s the key — the material is directly editable and resharable by anyone. It is music as something forever fluid.

This is a marketing video for the new feature, but it’s short, and you should watch it, because I think it shows the future of education as well. And because I really think you need to see it. I really, really do.

Now let me ask you — what would happen if our students could work across classes in this way? If our teachers could collaborate in this way?

This, and really nothing else, is the thing to watch. The people who are talking up the Uber-Netflix-Amazon of Education as the future? That is so tiny a vision that it depresses the hell out of me. I don’t worry that education can’t catch up to industry in these spaces. I worry that we’ll be pulled down by their conservatism and small-mindedness.

You should worry about that too. Because Uber is a taxi service co-op with a services center that skims money off the top. Amazon is a very effective mail-order company. Netflix supplies video-on-demand. All of these are done in ways that are made highly efficient by technology, but not one of them taps into the particular affordances of digital media (beyond reproducibility).

We need to think bigger. What David is concerned about in terms of teacher collaboration (how do we get teachers to tap into the affordances of fluidity) is what I am concerned about with students (how do we move past the forms of conversation and publication to something truly new). We can have a future as big as we like if we can get beyond these paper thoughts. We’re starting to see this sort of thinking in the music software industry and glimmers of it in education (see, for example, the new Creative Commons-focused approaches to LORs).

These glimmers happen in a world that has been distracted with other more trivial things (Videos with multiple choice questions! Learning styles!). They happen in a world that continues to think the primary benefit of the digital world is that it’s cheap.

What would happen if we moved remix to the center of the conversation? What would happen if we stressed remix for students as well as faculty? What could we accomplish? And if a little Swedish audio workstation company can see the future, why can’t we?

