A Thankful Wikipedia

A weird thing happened to me on Wikipedia the other day: I was thanked.

I wasn’t expecting it. Far from it. I resurrected my Wikipedia account a couple of months ago, with the idea that I’d walk the talk and start fixing inclusivity problems on Wikipedia: everything from women tech pioneers with underdeveloped articles, to black Americans in STEM with no articles, to foreign literatures with little to no coverage.

My goal has been to make a small but meaningful edit each week in this regard. Not to dive into the expensive never-ending arguments of Wikipedia, but to do the relatively uncontroversial stuff that doesn’t get done mostly because no one puts in the time to do it. Stuff like making sure that people like Liza Loop and Thomas Talley have pages about them, and that Adele Goldberg gets credit for her work on the Dynabook.

Most underrepresentation on Wikipedia is not the result of people blocking edits, but of no one doing the work. I don’t recommend wandering into the GamerGate article, or getting into the debate about whether Hedy Lamarr’s work on frequency-hopping can really be seen as a predecessor to Wi-Fi. But, on the other hand, the main reason the Karen Spärck Jones page is underdeveloped is that no one has expanded it substantially since her death. The reason the Kathleen Booth page has no photo is that no one has gone through the laborious process of finding an openly licensed photo.

That lack of effort in these areas is why, when you google Kathleen Booth (born 1922), you are greeted with an incongruous Google-supplied Twitter photo of someone who is actually the CEO of a small marketing firm. If Wikipedia had a photo in the article, Google would pull it. But it doesn’t, so Google guesses, and this is the result:

[Screenshot: Google search result for Kathleen Booth, showing the Twitter photo of the wrong person]

The simple solution to this is to carve out some of the time you spend decompressing on Twitter and replace it with the boring yet restful work of improving articles.

But I digress — I was talking about thanking. Normally when you do this sort of thing you get either negative feedback (“This source does not support this claim!”) or silence. And usually it’s silence.

Today, something different happened. I got thanked, via an experimental feature Wikipedia is trying out:

[Screenshot: the Wikipedia “thank” notification]

Here this person thanked me for finding an Adele Goldberg photo. They then went to my Liza Loop article and disagreed with a claim of mine, saying the source cited didn’t support such a strong claim. Without the thank, it would be easy to think of this person as some opponent, out to undo my work. The thank changes things. Consequently, when I review their edit on the Liza Loop article and it’s persuasive enough, I thank them back. I *want* more people working on these articles — people making sane edits and revisions is a *good* thing, because over time it will improve the quality.


Wikipedia gets a lot of flak for its bias, exclusivity, and toxic bureaucratic culture. And rightly so — the site is clearly working through an awkward phase in its history. It’s succeeded in becoming a much higher quality publication in the past ten years than anyone would have dreamed possible. But in the process it has also become a somewhat less inviting place.

Features like thanking (introduced a couple of years ago, but becoming more widely used) show that Wikipedia is still trying to get the right mix of hospitality and precision, and that it correctly sees the potential of the interface to help change the culture. The Visual Editor is another such effort.

I’ve often said that the amazing beauty and the potential ugliness of the future of the web is there to see in Wikipedia. It’s the canary in the coal mine, the Punxsutawney Phil of our networked ambitions. We have to make it work, because if we can’t we’re in for a lot more years of winter. It’s good to see the efforts going on there. And it’s good to be back!


Simple Generative Ideas

I’ve been explaining federated wiki to people for over a year now, sometimes successfully and sometimes not. But the thing I find the hardest to explain is the simple beauty of the system.

As exhibit A, this is something that happened today:

[Screenshot: creating a new federated wiki template from an existing template]

In case you can’t see that, this is what is going on. I’m creating a new template (look at my last post to understand the power of templates). But since the way you create a template is just to create a normal page whose name ends with the word “template,” something interesting happens: you can create templates based on other templates.

Now, since the IDs of the elements follow each template, your second-generation template inherits the abilities you coded for the first-generation template, including all the bug-checking, cross-browser compatibility fixes, etc. You get all of that without writing a single line of code. So I can hammer out a basic template, write some handlers on the client side, and then others can come and build a template based on my template and extend it, sometimes without even programming. Or they can drag and drop items from other templates, and since the IDs follow those items they might be able to mix and match their way to a new template.
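Here is a minimal sketch of why that works, in TypeScript rather than the actual federated wiki client code. The item IDs, handlers, and template names are invented; the point is just that items carry stable IDs, and a template is simply a page whose name ends in “template.”

```typescript
// A sketch of why "templates of templates" fall out of the design for free.
// The field names mirror the shape of federated wiki page JSON (story items
// that carry a stable id); the ids and handlers here are made up.

interface StoryItem {
  type: string;   // e.g. "paragraph", "html"
  id: string;     // stable id that travels with the item when it is copied
  text: string;
}

interface Page {
  title: string;  // a page whose title ends in "Template" is a template
  story: StoryItem[];
}

// Client-side handlers are keyed by item id, not by page.
const handlers: Record<string, (item: StoryItem) => void> = {
  a1b2c3: (item) => console.log("date validation for:", item.text),
  d4e5f6: (item) => console.log("cross-browser fix applied to:", item.id),
};

// Stamping a page out of a template copies the items, ids included.
function instantiate(template: Page, title: string): Page {
  return { title, story: template.story.map((item) => ({ ...item })) };
}

const firstGen: Page = {
  title: "Project Log Template",
  story: [
    { type: "paragraph", id: "a1b2c3", text: "Date: " },
    { type: "paragraph", id: "d4e5f6", text: "Summary: " },
  ],
};

// A second-generation template is just a page built from the first whose
// title also ends in "Template"; the ids (and so the handlers) come along.
const secondGen = instantiate(firstGen, "Weekly Project Log Template");
secondGen.story.push({ type: "paragraph", id: "g7h8i9", text: "Next steps: " });

// Pages stamped out of the second generation still trigger the handlers
// written for the first generation, with no new code.
const page = instantiate(secondGen, "Log for 2015-04-25");
page.story.forEach((item) => handlers[item.id]?.(item));
```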

Did Ward, Paul, and Nick plan any of this as they designed the code? No, not a bit. But every time a feature is requested the slow discussion begins — instead of a feature, is there a way to deliver this as a capability that extends the system as a whole?  Is there a way to make this simpler? More generative? More in line with the way other things in federated wiki work?

So Ward is as surprised as I am that you can build templates out of templates. But we are both also not surprised, because this is how things go with software that’s been relentlessly refactored. There’s even a term for it: “Reverse bit-rot”.

A normal software process would have long ago decided to give Templates their own type and data model, soup them up with additional features, protections, tracking. The agile process says things should be constantly refactored down to a few simple and powerful ideas. It’s not as flashy, but you find you have killer features that you didn’t even intentionally write.

The Simplest Federated Database That Could Possibly Work

The first wiki was described by Ward Cunningham as the “simplest database that could possibly work.” Over the next couple of years, many different functions were built on top of that simple database. Categories (and to some extent, the first web notions of tagging) were built using the “What links here?” functionality. The recent changes stream (again, a predecessor to the social streams you see every day now) was constructed off of file write dates. Profile signatures were page links, and were even used to construct a rudimentary messaging system.

In other words, it was a simple database on top of which one could build a rough facsimile of what would later become Web 2.0.

While we’ve talked about federated wiki as a browser, it can also be used as a backend database that natively inherits the flexibility of JSON instead of the rigidity of relational databases. Here we show how a few simple concepts (JSON templates, pages as data, and stable IDs) allow a designer to create a custom content application that is free of the mess of traditional databases yet still preserves data as data. We do it in 45 minutes, but we cut the video down to 12 minutes of viewing time.
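As a rough illustration of the “pages as data” idea, here is a sketch of reading a page back as a structured record. The site, slug, and ID-to-field mapping are made up; the JSON shape (a story array of items with stable IDs, fetched from a .json endpoint) follows how federated wiki stores and serves pages, as I understand it.

```typescript
// A sketch of "pages as data": read a federated wiki page back as a record.
// The site, slug, and id-to-field mapping are invented for illustration.

interface StoryItem { type: string; id: string; text: string; }
interface Page { title: string; story: StoryItem[]; journal?: unknown[]; }

// Stable item ids from our hypothetical record template, mapped to fields.
const FIELDS: Record<string, string> = {
  a1b2c3: "author",
  d4e5f6: "published",
  g7h8i9: "summary",
};

async function loadRecord(site: string, slug: string): Promise<Record<string, string>> {
  const res = await fetch(`http://${site}/${slug}.json`);
  const page: Page = await res.json();

  // Because the ids are stable, every page forked from the template can be
  // read back as the same structured record, no schema migration required.
  const record: Record<string, string> = { title: page.title };
  for (const item of page.story) {
    const field = FIELDS[item.id];
    if (field) record[field] = item.text;
  }
  return record;
}

// Usage (hypothetical site and slug):
// loadRecord("example.fedwiki.org", "log-2015-04-25").then(console.log);
```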

Best of all, anything you build on top of federated wiki inherits its page-internal journaling and federation capabilities. So you’re not only putting up a site in record time, you’re putting up a federated, forkable site as well.

The first wiki showed how far one could go with form elements, CGI, and links toward creating a robust community unlike those before it. It’s possible to see federated wiki as a similar toolkit to build new ways of working. I hope this video hints at how that might be done.

Note: I’ve also constructed this page using the above methods here. I think it looks pretty nice.

That Time Berners-Lee Got Knocked Down to a Poster Session

I’ve known about the Berners-Lee Poster Session for a while, but in case you all don’t, here’s the skinny: as late as December 1991, belief in Tim Berners-Lee’s World Wide Web idea was low enough that a paper he submitted on the subject to the Hypertext ’91 conference in San Antonio, TX was bumped down to a poster session.

Today, though, things got a bit more awesome. While looking for an Internet Archive video to test Blackboard embedding on (this is my life now, folks) I came across this AMAZING video, which has only 47 views.

In it Mark Frisse, the man who rejected Berners-Lee’s paper on the World Wide Web from the conference, explains why he rejected it, and apologizes to Tim Berners-Lee for the snub. He just couldn’t see that in practice people who hit a broken link would just back up and find another. It just seemed broken to him, a “spaghetti bowl of gotos”.

The background music is mixed a bit loud. But it is worth sitting through every minute.

Where this might lead you, if you are so inclined, is to Amy Collier’s excellent posts on Not-Yetness, which talk about how we get so hung up on eliminating messiness that we miss the generative power of new approaches.

I will also probably link to this page when people ask me “But how will you know you have the best/latest/fullest version of a page in fedwiki?” Because the answer is the same answer that Frisse couldn’t see: ninety-nine percent of the time you won’t care. You really won’t. From the perspective of the system, it’s a mess. From the perspective of the user you just need an article that’s good enough, and a system that gives you seven of those to choose from is better than one that gives you none.


Twitter’s Gasoline

So Twitter is going to offer opt-in direct messaging from anyone. It looks like you’ll be able to check a box and anybody will be able to DM you, even if you don’t follow them. Andy Baio gets it about right:

[Screenshot: Andy Baio’s tweet about the change]

Direct Messaging from Randos is not something anyone other than brands asked for, but it is a way for Twitter to make money and possibly compete with Facebook in the messaging arena. The fact that it takes a service which is well known for fostering online harassment and makes that harassment even easier gets a shrug from Twitter.

There’s the argument, of course, that it’s an opt-in feature, which would be a great argument if this were the first year we had had social media. But it’s not, and we all know the Opt-in Law of Social Media, which is that any opt-in feature sufficiently beneficial to the company’s bottom line will become an opt-out feature before long.

I’m reminded of a conversation I had with Ward Cunningham about trackbacks to forking in federated wiki. Basically, right now if someone forks your stuff in federated wiki and you don’t know them, you never learn about it. A notification that would alert you is one of the most requested features for federated wiki, because it could make wiki spread organically. Of course, the downside is that it would also be an easy way to harass someone, by continually forking their stuff and defacing it or writing ugly notes on it.

So we’re left with the problem — build something that spreads easily, but has this Achilles heel in it, or wait until we have a better idea of the best way to do this. When I first started working with Ward on this I asked why this wasn’t implemented yet — this was the key to going viral after all. His response was interesting. He said we’ve talked about it a lot, and somehow we’ll get something like it. But he said it’s “pouring gasoline on a campfire”, which I took to mean that there’s a downside to virality.

A year later we’re still talking about the best way to do it, and paying attention to what people do without it. We’re still patiently explaining to people why connecting with people in federated wiki is hard compared to other platforms, at least for the moment. We’ve focused on other community solutions, like shareable “rosters” and customizable activity feeds.

I think eventually Ward and others will throw the gasoline on, but only when they’re sure which way the wind is blowing and where the fire is likely to spread.

Looking at the press around this recent direct messaging decision it’s not clear to me that Twitter has done any of that. What does that say about Twitter?

PowerPoint Remix Rant

I’m just back from some time off, and I’m feeling too lazy to finish reading the McGraw-Hill/Microsoft Open Learning announcement. Maybe someone could read it for me?

I can tell you where I stopped reading though. It was where I saw that the software was implemented as a “PowerPoint Plugin”.

Now, I think that the Office Mix Project is a step in the right direction in a lot of ways. It engages with people as creators. It creates a largely symmetric reading/authoring environment. It corrects the harmful trend of shipping “open” materials without a rich, fork-friendly environment to edit them in. (Here’s how you spot the person who has learned nothing in the past ten years about OER: they are shipping materials in PDF because it’s an “open” format).

The PowerPoint problem is that everything in that environment encourages you to create something impossible to reuse. Telling people to build for reuse in PowerPoint is like putting someone on a diet and then sending them to Chuck E. Cheese for lunch every day. Just look at this toolbar:

[Screenshot: the PowerPoint formatting toolbar]

That toolbar is really a list of ways to make this content unusable by someone else. Bold stuff, position it in pixel-exact ways. Layer stuff on top of other stuff. Set your text alignment for each object individually. Choose a specific font and font-size that makes the layout work just right (let’s hope that font is on the next person’s computer!). Choose a text color to match the background of your slides, because all people wanting to reuse this slide will have the same color background as you. Mark it up, lay it out, draw shapes that don’t dynamically resize, shuffle the z-index of elements. Get the text-size perfect so that you can’t add or subtract a bullet point without the layout breaking.

Once you’re done making sure that the only people who can reuse your document must use your PPT template, with your background, your custom font, and with roughly the same number of characters per slide, take it further! Make it even more unmixable by making sure that each slide is not understandable outside the flow of the deck. Be sure to make the notes vague and minimal. In the end it doesn’t matter, because there is no way to link to individual slides anyway.

You get my point. Almost every tool on this interface is designed to “personalize” your slides. Create your brand. The idea is that this is a publication, and your stamp, or your university’s, should be on it, indelibly.

Most things work like this, unfortunately, encouraging us to think of our resources in almost physical terms, as pieces of paper or slides for which there is only upside to precisely controlling their presentation. But that desire to control presentation is also a desire to control and limit context, and it makes our products as fragile and non-remixable as the paper and celluloid materials they attempt to emulate. We take fluid, re-usable data and objects, and then we freeze them into brittle data-poor layout, and then wonder why nothing ever gets reused.

So I love the idea of desktop-based OER tools, of symmetric editing and authoring. But there’s part of me that can’t help but feel that the “personal” in “personal publishing tools” has a more pernicious influence than we realize. It’s “personal” like a toothbrush, and toothbrushes do not get reused by others.

End of rant. Maybe I need a bit more sleep…

Piketty, Remix, and the Most Important Student Blog Comment of the 21st Century

Maybe I’m just not connected to the edublogosphere the way I used to be, but the story of Matt Rognlie should be on every person’s front page right now, and it’s not. So let’s fix that, and talk a bit about remix along the way.

(Let me admit the title is a bit of hyperbole, but not by much. Plus, if you have other candidates, why aren’t you posting them?)

First, the story in brief.

  • A scholar named Piketty produces the most influential economic work of the 21st century, which pulls together years of historical data suggesting that inequality is baked into our current economic system at a pretty deep level. It’s the post-war years that were the anomaly, and if you look at the model going forward, inequality is going to get worse.
  • A lot of people try to take this argument down, but mostly misunderstand the argument.
  • An MIT grad student named Matt Rognlie enters a discussion on this topic by posting a comment on the blog Marginal Revolution. He proposes something that hasn’t been brought up before: Piketty has not properly accounted for depreciation of investment. If you account for that, he claims most of the capital increase comes from housing. And if that’s the case, we should see a slowing of inequality growth.
  • He gets encouragement from Tyler Cowen and others at the blog. So he goes and writes a paper on this and posts it in his student tilde space.
  • On March 20th he presented that paper at Brookings. He’s been back and forth with Piketty on this. To the extent that policy is influenced by prevailing economic models, the outcome of this debate will determine future approaches to the question of inequality.
  • As of today, it seems that, whatever the outcome of that debate may be, Rognlie has permanently altered the discussion around the work, and the discussion around inequality.

So first things first — this is a massive story that started as a student blog comment and germinated in tilde space. So why are we not spreading this as Exhibit A of why blogging and commenting and tilde space matters? Did we win this argument already and I just didn’t notice? Because I think we still need to promote stories like this.

Forward Into Remix

Of course, I need to add my own take on this, because this is a perfect example of why remix is important, and why Alan Kay’s dream of Personal Dynamic Media is still so relevant.

Here’s what the first comments on that post read like. You’ll recognize the current state of most blogs today:

[Screenshot: a typical comment, dismissing Paul Krugman’s take on Piketty because Krugman is rich]

Hooray for petty ad hominem attacks and the Internet that gives us them! Paul Krugman is rich, and therefore his take on Piketty is wrong. Al Gore took a jet somewhere, so climate change does not exist. Call this a Type 1 Comment.

Matt comes in though, and does something different. Matt addresses the model under discussion.

[Screenshot: Matt Rognlie’s comment addressing the model itself]

He goes on and shows the impact of this on the projections that Piketty makes. Call this a Type 2 Comment.

This is an amazing story, and an amazing comment. I don’t want to take anything away from this. No matter what the outcome of this debate, we will end up with a better understanding of inequality, thanks to a student commenting on a blog.

But here’s my frustration, and why I’m so obsessed with alternative models of the web. The web, as it currently stands, makes Type 1 Comments trivially easy and very profitable for people to make. The web *loves* a pig pile. It thrives on confirmation bias, identity affirmation, shaming, grandstanding, the naming of enemies, etc.

On the other hand, the web makes Type 2 Comments impossibly hard. Matt has to read Piketty’s book, go download the appropriate data, sift through the assumptions of the data, change those assumptions and see the effect and then come back to this blog post and explain his findings in a succinct way to an audience that then has to decide whether to take him at his word.

If they do decide he might be right, they have to go re-read Piketty, download the data themselves, change the assumptions in whatever offline program they are using and then come back on the blog and say, you know, you might be right.

And it’s that labor-intensive, disconnected element that makes it the case that the most important economics blog comment of the 21st century (so far) received less comment in the forum than debates about whether Paul Krugman’s pay makes him a hypocrite.

And before people object that maybe that’s just human nature — I don’t think that’s the whole story. The big issue is that the web simply doesn’t optimize for this sort of thing. One of the commenters hints at this, trying to carry forward the conversation but lost as to how to do so. “Is this the Solow model,” he asks, “I need a refresh, but I don’t know what to Google….”

There is a Better Way

What Matt is doing is actually remix. He’s taking Piketty’s data, tweaking assumptions, and presenting his own results. But it’s taken him a whole bunch of time to gather that data, put together the model, and run the assumptions.

Piketty does make his data sets available for the charts he presents, and that’s really helpful. But notice what happens here — Piketty builds a model, runs the data through it, and presents the results in a form that resolves to charts and narrative in a book. Matt takes those charts, the narrative, and the data, re-inflates the model, works with the model, and then does the same thing to the blog’s commenters: he produces an explanation with no dynamic model attached.

Blog commenters who want to comment have to make guesses at how Matt built his model, re-read Piketty, download the data and run their own results and come back and talk about it.

I understand it’s always going to be easier to post throwaway conversation around a result than to actively move a model forward, clean up data, or cooperatively investigate assumptions. But the commenters on Marginal Revolution and other blogs like it often would like to do more; it’s just that the web forces them to redo each other’s work at each stage of the conversation.

It’s hard to get people to see this. But given that Piketty’s book was primarily about the data and the model, let’s imagine a different world, shall we?

Let’s imagine a world where that model is published as a dynamic, tweakable model. A standard set of JavaScript plugins is created which allows people to dynamically alter the model. In this alternate world, Piketty publishes these representations along with page text on what the model shows and why the assumptions are set the way they are.

When Matt sees it, he doesn’t read it in a book. He reads it online, in a framework like the data aware federated wiki. He forks a graph he’s interested in, and can click in immediately and see those assumptions. He edits those or modifies those in his forked version.

When a discussion happens about Krugman’s post, he writes a shorter explanation of what is wrong with Piketty and links to his forked model. People can immediately see what he did by looking at the assumptions. They can trace it back to Piketty’s page in the fork history, and see what he has changed and if he’s being intellectually honest. If he’s introduced minor issues, they can fork their own version and fix them.
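To make the thought experiment concrete, here is a purely hypothetical sketch of what a forkable model page might look like. None of the parameters or numbers are Piketty’s or Rognlie’s actual figures; they are stand-ins for the idea of assumptions you can change while keeping a link back to exactly what you changed.

```typescript
// A hypothetical sketch of "fork the model, not just the prose."
// Parameter names and numbers are invented stand-ins, not real estimates.

interface Assumptions {
  grossReturnOnCapital: number;   // r, before depreciation
  depreciationRate: number;       // the assumption Rognlie questioned
  growthRate: number;             // g
  housingShareOfCapital: number;  // share of capital income from housing
}

interface ModelPage {
  author: string;
  forkedFrom?: ModelPage;         // provenance: walk this chain to audit changes
  assumptions: Assumptions;
}

// Toy projection: net return minus growth, as a crude "inequality pressure" index.
function project(a: Assumptions): number {
  return (a.grossReturnOnCapital - a.depreciationRate) - a.growthRate;
}

// Forking copies the assumptions, applies the changes, and keeps the link back.
function fork(page: ModelPage, author: string, changes: Partial<Assumptions>): ModelPage {
  return { author, forkedFrom: page, assumptions: { ...page.assumptions, ...changes } };
}

const original: ModelPage = {
  author: "piketty",
  assumptions: {
    grossReturnOnCapital: 0.05,
    depreciationRate: 0.0,
    growthRate: 0.015,
    housingShareOfCapital: 0.2,
  },
};

// A reader who disagrees with one assumption forks the model and links the
// fork, instead of writing prose everyone else has to re-derive from scratch.
const forked = fork(original, "student-commenter", { depreciationRate: 0.02 });

console.log("original projection:", project(original.assumptions));
console.log("forked projection:  ", project(forked.assumptions));
// Anyone can diff forked.assumptions against forked.forkedFrom.assumptions
// to see exactly what changed, and fork again to fix it.
```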

At each stage, we keep the work done by the previous contributors fluid and editable. You can’t put out an argument without giving others the tools to prove you wrong. You disagree by trying to improve a model rather than defeat your opponents.

The Future

I don’t know what the future of federated wiki is. I really don’t. I think it could be huge, but it could also be just too hard for people to wrap their heads around.

But the point is that it asks and answers the right questions, and it shows what is very hard now could be very simple and fluid.

It’s great that this comment was able to move through tilde space to have an impact on the real world. But when you look at the friction introduced at every step of the way, you realize this was the outlier.

What would happen if we increased the friction for making stupid throwaway comments and decreased the friction for advancing our collective knowledge through remix and cooperation? Not just with data, but in all realms and disciplines. What could we accomplish?

The Remix Hypothesis says we could accomplish an awful lot more than we are doing now. It says that for every Matt out there, there were hundreds of people who had an insight, but not the time to pull together the data. For every Marginal Revolution there were dozens of blogs that didn’t have the time or sense to see the value of the most important comment made on them, because it required too much research, fact-checking, or rework.

This is a triumphant story for the advocates of Connected Learning — publicize it! But it’s also a depressing insight into how far we have to go to make this sort of thing a more ordinary occurrence.