The OER Case for Federated Wiki

I talk a lot about the open pedagogy case for federated wiki, but not much about the OER/OCW case for it. That doesn’t mean it isn’t a good fit for the problems one hits in open materials reuse.

Here’s how you currently reuse something in WordPress, for example. It’s a pretty horrific process.

  1. Log into the WordPress source site
  2. Open the file, go to the text editor.
  3. Select all the text and Ctrl-C copy it.
  4. Go log into your target WordPress site
  5. Create a new page and name it.
  6. Go into the text editor, and paste the text in.
  7. Save Draft.
  8. Go back to the source site. Right click on the images in the post and download them.
  9. Go back to the target site. Open the Media Gallery and upload the images you just downloaded.
  10. Go through your new post on the target site, and replace the links pointing to the old images with links pointing to the images you just uploaded.
  11. Preview the site, do any final cleanup. Resize images if necessary. Check to make sure you didn’t pull in any weird styles that didn’t transfer (Damn you mso-!)
  12. Save and post. You’re done!
  13. Oh wait, you’re not done. Go to the source post and copy the URL. Try to find the author’s name on the page and remember it.
  14. Go to the bottom of your new “target” page and add attribution “Original Text by Jane Doe”. Select Jane Doe and paste in the hyperlink. Test the link.
  15. Now you’re REALLY done!

It’s about a five- to ten-minute process per page, depending on the number of images that have to be ported.

Of course, that’s assuming you have login rights to both sites. If you don’t, replace steps one and two with copying the text from the rendered post and pasting it into the *visual* editor to preserve formatting, then go through the same steps, except spend an extra five to ten minutes on cleanup at step eleven.

It’s weird to me how fish-can’t-see-the-water we are about this. It’s 2015, and we take this fifteen-step process to copy a page from one site to another as a given.

But once you see how absurd this process is, you can’t *unsee* it. All these philosophical questions about why people don’t reuse stuff more become a little ridiculous. There are many psychological, social, and institutional reasons why people don’t reuse stuff. But they are all academic questions until we solve the simpler problem: our software sucks at reuse. Like, if you had an evil plan to stop reuse and remix, you would build exactly the software we have now. If you wanted to really slow down remix, you would build the World Wide Web as we know it today.

Conversely, here are the steps in federated wiki to copy a page from one site to another:

  1. Open your federated wiki site.
  2. Drag the page from the source site to the target site and drop it.
  3. Press the fork button. You’re done!

And keep in mind you don’t need to have the front end of your site look like federated wiki. All that matters is that you have federated wiki on the back end. Here’s a short video showing how two sites with different web-facing appearances still allow the easy transfer of pages:

You’ll notice that most of the length of this video is explanation. The actual transfer of the three pages runs from 1:45 to 2:30 in the video. That’s about fifteen seconds a page to copy, complete with images. While the question of why people don’t remix and reuse more is interesting to me from a theoretical standpoint, I think it pales in comparison to this question: what would happen if we dropped reuse time from ten minutes to fifteen seconds?

How is this possible? Mostly, it’s the elegance of federated wiki’s data model.

  • Data Files not Databases. Traditional sites store the real representation of a page in a database somewhere, then render it into a display format on demand. The render takes that clean-ish copy and dirties it up with display formatting; you grab that formatting and try to clean it up to put it back in the database. Federated wiki, however, is based on data files, and when we pull a page from one federated-wiki-driven site to another, federated wiki grabs the JSON file, not the rendered HTML. (A sketch of that page data follows this list.)
  • JSON not HTML. HTML renders data in a display format. A YouTube video, for example, is specified as an IFRAME along with width, height, and other display data. This hurts remixability, because our two sites may handle YouTube videos in different ways (player width is a persistent problem). JSON feeds the new site the data (play a YouTube video with this ID, etc.) but lets the new site handle the render.
  • Images Embedded. This is a simple thing, and the scalability of it has a few problems, but for most cases it’s a brilliant solution. Federated Wiki’s JSON stores images not as a link to an external file, but as JSON data stored in the page itself. This means when you copy the page you bring the images with it too. If you’ve ever struggled with this problem on another platform you know how huge this is: there’s a reason half the pages from ten years ago display broken images now – they were never properly copied.
  • Plugin Architecture. Federated Wiki’s plugin architecture works much like Alan Kay’s vision of how the web should have worked. The display/interaction engine of federated wiki looks at each JSON “item” and tries to find the appropriate plugin to handle it. Right now these are mainly core plugins, which everyone has, but it’s trivial to build new plugins for things like multiple-choice questions, student feedback, and the like. If you copy a site using a new-fangled plugin you don’t have, the page on your site will let you know that and direct you to where you can download the plugin. Ultimately, this means we can go beyond copying WordPress-style pages and actually fork in tools and assessments with the same ease.
  • History follows the Page. As anyone who has reused content created and revised by multiple people knows, attribution is not trivial. It consumes a lot of time, and the process is extremely prone to error. Federated wiki stores the revision history of a page with the page. As such, your edit history is always with you and you don’t need to spend any time maintaining attribution. If the view of history in the federated wiki editor is not sufficient for your needs, you can hit the JSON “Journal” of the page and display contribution history any way you want.
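
To make the bullets above concrete, here is a rough sketch of what a page looks like as data. The field names below (story, journal, and so on) follow the general shape described above, but they are illustrative rather than a spec; the real federated wiki format may differ in detail.

```typescript
// Illustrative sketch of a federated wiki page as plain data, not a spec.
// Story items carry stable ids; an image travels as a data: URI; the journal
// records the page's edit history, including forks from other sites.

interface StoryItem {
  type: string;       // which plugin renders this item: "paragraph", "image", "video", ...
  id: string;         // stable id that follows the item when it is forked or dragged
  text?: string;      // plugin-specific data, e.g. markup or a video id
  url?: string;       // for images: a data: URI, so the image is copied with the page
  caption?: string;
}

interface JournalAction {
  type: "create" | "add" | "edit" | "remove" | "fork";
  id?: string;        // the story item the action touched
  item?: unknown;     // payload varies by action type
  site?: string;      // for forks: the site the page was copied from
  date: number;       // timestamp in milliseconds
}

interface WikiPage {
  title: string;
  story: StoryItem[];       // the content, as data rather than rendered HTML
  journal: JournalAction[]; // the history travels with the page
}

// Example: a paragraph, a video (data only -- the target site decides how to
// render it), and an embedded image.
const examplePage: WikiPage = {
  title: "Reusing Open Materials",
  story: [
    { type: "paragraph", id: "a1b2c3", text: "Reuse should take seconds, not minutes." },
    { type: "video", id: "d4e5f6", text: "YOUTUBE dQw4w9WgXcQ" },
    { type: "image", id: "g7h8i9", url: "data:image/png;base64,iVBORw0KGgo...", caption: "A diagram" },
  ],
  journal: [
    { type: "create", item: { title: "Reusing Open Materials" }, date: 1430240700000 },
    { type: "fork", site: "source.example.com", date: 1430241000000 },
  ],
};

console.log(`${examplePage.title}: ${examplePage.story.length} items, ${examplePage.journal.length} journal entries`);
```

Because everything in that file is data, including the image and the history, forking a page really is just copying one JSON file.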

We could probably say more on this, but this should do for now.


We Are Not In the Content Business, We Are In the Community Business

My daughter, who is amazing by the way, introduced me to John Green a couple years ago. Her face was always in her phone, and I thought geez, Katie, get off Facebook. And I think I actually said that. To which she replied “Why would I be on Facebook? I’m watching John and Hank Green videos.”

And I decided maybe I should check this out. And the further I got into it, the more amazed I was. There was a community around the videos called “nerdfighters” who stuck up for what they saw as nerdy values of thinking about things, caring about people, getting excited about ideas, and trying to be generally nice. Here’s John Green on what it is to be a nerd in one of his many YouTube videos:

“…because nerds like us are allowed to be unironically enthusiastic about stuff… Nerds are allowed to love stuff, like jump-up-and-down-in-the-chair-can’t-control-yourself love it. Hank, when people call people nerds, mostly what they’re saying is ‘you like stuff.’ Which is just not a good insult at all. Like, ‘you are too enthusiastic about the miracle of human consciousness’.”

The community made reaction videos, discussed deep ideas online, blogged, and organized fundraisers that raised millions of dollars for third-world (mostly female) entrepreneurs. If you’ve seen the film of Green’s book The Fault in Our Stars, you should know it was inspired by a real nerdfighter, a girl who died of cancer at sixteen. She knew she’d likely die, but she spent her short life engaged with this community, trying to make the world better.

Hank Green got up on stage at a Google event recently and, in the most amazing speech I’ve heard about education this year, proceeded to tear the potential YouTube advertisers in the audience to shreds at an event meant to woo them. Advertising, he says, is built on distraction. CSI: Miami is a great way to distract yourself from the intense and bittersweet pain that this world you know today will disappear, that you will someday die, and that everything you have done may amount to nothing. And advertising is a great model for that, because CSI: Miami wants a show that distracts you just enough, which is what advertisers want too. And in that world, how many eyeballs you got, or downloads, or seats, or whatever, is probably a good measure of impact.

But here’s the thing, says Green. “We’re not in the distraction business. We’re in the community business.”

And I think — that’s it in a nutshell.

This is the big misunderstanding we have with vendors, with Silicon Valley, with brogrammers trying to sell the next killer educational gadget. We want them to be John Green — to connect us together as educators and empower us to change the world — with or without them. We want to be amazing. Not by having a better app, but by being part of a technical and personal network that allows us to far exceed what we can do personally.

We want them to be John Green. They want to be Mark Zuckerberg. We want them to be in the community business. They want to be in the social software business.

We need empowerment. But they can’t give us that. They talk about community, but to them community is data that exists on a server somewhere. It’s “1 million registered” or “2 million sign-ups”. No time is spent trying to empower us outside their own narrow interest. We’re never viewed as partners.

You’ll never see the founder of Knewton tear up on stage about how much his community members inspired him and taught him. Sure, you’ll get the stories of how the software or the “community” cured autism, or saved someone from suicide or a dead-end job. But watch the video above. Honestly, WATCH THE VIDEO. John Green understands that he’s the match, not the fire. And once you see that, it’s hard to unsee how fake so much else is.

If you want to really change education, you can make software, content, social apps — and you should. You should be awesome at all of that. But every development decision has to have the community you are trying to create at its core. You have to be excited about the potential of that community, and work to unleash it. You have to be in awe of it.

I’m sure that’s what many edupreneurs think they are doing. But I don’t see it. I see vendor lock-in and head-patting condescension. An unspoken assumption that the existing community of teachers and students are something to be routed around like damage. The idea that the community is bounded and defined by the product. The idea that the product must be locked-down, black-boxed, and triple-copyrighted to “protect” it from the community.

Maybe I’m wrong. If you’re the edupreneur exception, then go ahead — WATCH THE VIDEO. If at the end of it you think that you are more John Green than Uber, then let’s talk. Otherwise…


A Thankful Wikipedia

A weird thing happened to me on Wikipedia the other day: I was thanked.

I wasn’t expecting it. Far from it. I resurrected my Wikipedia account a couple of months ago, with the idea that I’d walk the talk and start fixing inclusivity problems on Wikipedia: everything from women tech pioneers with underdeveloped articles, to black Americans in STEM with no articles, to foreign literatures with little to no coverage.

My goal has been to make a small but meaningful edit each week in this regard. Not to dive into the expensive never-ending arguments of Wikipedia, but to do the relatively uncontroversial stuff that doesn’t get done mostly because no one puts in the time to do it. Stuff like making sure that people like Liza Loop and Thomas Talley have pages about them, and that Adele Goldberg gets credit for her work on the Dynabook.

Most underrepresentation on Wikipedia is not the result of people blocking edits, but of no one doing the work. I don’t recommend wandering into the GamerGate article, or getting into the debate about whether Hedy Lamarr’s work on frequency-hopping can really be seen as a predecessor to Wi-Fi. But, on the other hand, the main reason the Karen Spärck Jones page is underdeveloped is that no one has expanded it substantially since her death. The reason the Kathleen Booth page has no photo is that no one has gone through the laborious process of finding an openly licensed photo.

That lack of effort in these areas is why, when you google Kathleen Booth (born 1922), you are greeted with an incongruous Google-supplied Twitter photo that actually belongs to the CEO of a small marketing firm. If Wikipedia had a photo on the page, Google would pull it. But it doesn’t, so Google guesses, and this is the result:

[Screenshot: Google result for Kathleen Booth showing a photo of the wrong person]


The simple solution to this is to cut out some of the time you spend decompressing on Twitter and replace it with some of the boring yet restful work of improving articles.

But I digress — I was talking about thanking. Normally when you do this sort of thing you either get negative feedback (“This source does not support this claim!”) or silence. And usually it’s silence.

Today, something different happened. I got thanked, via an experimental feature Wikipedia is trying out:

[Screenshot: the Wikipedia notification thanking me for an edit]

Here this person thanked me for finding an Adele Goldberg photo. They then went to my Liza Loop article and disagreed with a claim of mine, saying the source cited didn’t support so strong a claim. Without the thank, it would be easy to think of this person as some opponent, out to undo my work. The thank changes things. Consequently, when I review their edit on the Liza Loop article and find it persuasive enough, I thank them back. I *want* more people working on these articles — people making sane edits and revisions is a *good* thing, because over time it will improve the quality.


Wikipedia gets a lot of flak for its bias, exclusivity, and toxic bureaucratic culture. And rightly so — the site is clearly working through an awkward phase in its history. It has succeeded in becoming a far higher quality publication over the past ten years than anyone would have dreamed possible. But in the process it has also become a somewhat less inviting place.

Features like thanking (introduced a couple years ago, but becoming more widely used) show that they are still trying to get the right mix of hospitality and precision, and that they correctly see the potential of the interface to help them change the culture. The Visual Editor is another such effort.

I’ve often said that the amazing beauty and the potential ugliness of the future of the web is there to see in Wikipedia. It’s the canary in the coal mine, the Punxsutawney Phil of our networked ambitions. We have to make it work, because if we can’t we’re in for a lot more years of winter. It’s good to see the efforts going on there. And it’s good to be back!



Simple Generative Ideas

I’ve been explaining federated wiki to people for over a year now, sometimes successfully and sometimes not. But the thing I find the hardest to explain is the simple beauty of the system.

As exhibit A, this is something that happened today:

[Screenshot: creating a new template from an existing template in federated wiki]

In case you can’t see that, here is what is going on. I’m creating a new template (look at my last post to understand the power of templates). But since the way you create a template is just to create a normal page whose name ends with the word “template”, something interesting happens: you can create templates based on other templates.

Now, since the IDs of the elements follow each template, your second-generation template inherits the abilities you coded for the first-generation template, including all the bug-checking, cross-browser compatibility fixes, and so on. You get all of that without writing a single line of code. So I can hammer out a basic template and write some handlers on the client side; then others can come along, build a template based on mine, and extend it, sometimes without even programming. Or they can drag and drop items from other templates, and since the IDs follow those items they might be able to mix and match their way to a new template.
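
To show why the stable IDs do so much work, here is a hypothetical sketch of the client-side handler idea. None of these names come from the federated wiki codebase; it just assumes behaviors are keyed by item ID, so any template forked from mine carries the IDs and inherits the behaviors for free.

```typescript
// Hypothetical illustration only: behaviours keyed by item id, not by page.
// A second-generation template forked from mine keeps the same item ids,
// so it inherits these handlers without a single new line of code.

type Handler = (el: HTMLElement) => void;

// Client-side behaviours written for items in a first-generation template.
const handlersById = new Map<string, Handler>([
  ["a1b2c3", (el) => el.classList.add("callout")],                // style a summary box
  ["d4e5f6", (el) => el.setAttribute("contenteditable", "true")], // allow inline editing
]);

// When rendering any page, look up each item's handler by its id.
// A template built on top of mine hits the same ids and gets the same behaviour.
function applyHandler(item: { id: string }, el: HTMLElement): void {
  const handler = handlersById.get(item.id);
  if (handler) handler(el);
}
```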

Did Ward, Paul, and Nick plan any of this as they designed the code? No, not a bit. But every time a feature is requested the slow discussion begins — instead of a feature, is there a way to deliver this as a capability that extends the system as a whole? Is there a way to make this simpler? More generative? More in line with the way other things in federated wiki work?

So Ward is as surprised as I am that you can build templates out of templates. But we are both also not surprised, because this is how things go with software that’s been relentlessly refactored. There’s even a term for it: “reverse bit-rot”.

A normal software process would have long ago decided to give templates their own type and data model, then soup them up with additional features, protections, and tracking. The agile process says things should be constantly refactored down to a few simple and powerful ideas. It’s not as flashy, but you find you have killer features that you didn’t even intentionally write.


The Simplest Federated Database That Could Possibly Work

The first wiki was described by Ward Cunningham as the “simplest database that could possibly work.” Over the next couple of years, many different functions were built on top of that simple database. Categories (and to some extent, the first web notions of tagging) were built using the “What links here?” functionality. The recent changes stream (again, a predecessor to the social streams you see every day now) was constructed off of file write dates. Profile signatures were page links, and were even used to construct a rudimentary messaging system.

In other words, it was a simple database on top of which one could build a rough facsimile of what would later become Web 2.0.

While we’ve talked about federated wiki as a browser, it can also be used as a backend database that natively inherits the flexibility of JSON instead of the rigidity of relational databases. Here we show how a few simple concepts — JSON templates, pages as data, and stable IDs — let a designer create a custom content application, free of the mess of traditional databases, that still preserves data as data. We do it in 45 minutes, but we cut the video down to 12 minutes of viewing time.
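
As a rough sketch of the idea, here is what “federated wiki as a backend database” might look like from a custom front-end. It assumes page JSON is served at /<slug>.json, as federated wiki sites typically do, and reuses the illustrative item shape sketched earlier in this post; adjust both for your actual install.

```typescript
// Sketch of using a federated wiki site as a backend database.
// Assumes page JSON is available at http://<site>/<slug>.json; the item and
// page shapes below are illustrative, matching the earlier sketch.

type Item = { type: string; id: string; text?: string; url?: string; caption?: string };
type Page = { title: string; story: Item[] };

async function fetchPage(site: string, slug: string): Promise<Page> {
  const res = await fetch(`http://${site}/${slug}.json`);
  if (!res.ok) throw new Error(`Could not fetch ${slug} from ${site}`);
  return (await res.json()) as Page;
}

// Render the story items with a front-end that looks nothing like federated
// wiki, while still reusing the underlying page data.
function renderPage(page: Page): string {
  const body = page.story
    .map((item) => {
      switch (item.type) {
        case "paragraph":
          return `<p>${item.text ?? ""}</p>`;
        case "image":
          return `<img src="${item.url ?? ""}" alt="${item.caption ?? ""}">`;
        default:
          return `<!-- no renderer yet for "${item.type}" items -->`;
      }
    })
    .join("\n");
  return `<article><h1>${page.title}</h1>\n${body}</article>`;
}

// Usage (the site name here is hypothetical):
// fetchPage("example-content.wiki.org", "welcome-visitors").then((p) => console.log(renderPage(p)));
```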

Best of all, anything you build on top of federated wiki inherits its page-internal journaling and federation capabilities. So you’re not only putting up a site in record time, you’re putting up a federated, forkable site as well.

The first wiki showed how far one could go with form elements, CGI, and links toward creating a robust community unlike those that came before it. It’s possible to see federated wiki as a similar toolkit for building new ways of working. I hope this video hints at how that might be done.

Note: I’ve also constructed this page using the above methods here. I think it looks pretty nice.


That Time Berners-Lee Got Knocked Down to a Poster Session

I’ve known about the Berners-Lee poster session for a while, but in case you don’t know the story, here’s the skinny: as late as December 1991, belief in Tim Berners-Lee’s World Wide Web idea was low enough that a paper he submitted on the subject to the Hypertext ’91 conference in San Antonio, TX was bumped down to a poster session.

Today, though, things got a bit more awesome. While looking for an Internet Archive video to test Blackboard embedding on (this is my life now, folks) I came across this AMAZING video, which has only 47 views.

In it Mark Frisse, the man who rejected Berners-Lee’s paper on the World Wide Web from the conference, explains why he rejected it, and apologizes to Tim Berners-Lee for the snub. He just couldn’t see that in practice people who hit a broken link would just back up and find another. It just seemed broken to him, a “spaghetti bowl of gotos”.

The background music is mixed a bit loud. But it is worth sitting through every minute.

Where this might lead you, if you are so inclined, is to Amy Collier’s excellent posts on Not-Yetness, which talk about how we get so hung up on eliminating messiness that we miss the generative power of new approaches.

I will also probably link to this page when people ask me, “But how will you know you have the best/latest/fullest version of a page in fedwiki?” Because the answer is the same answer that Frisse couldn’t see: ninety-nine percent of the time you won’t care. You really won’t. From the perspective of the system, it’s a mess. From the perspective of the user, you just need an article that’s good enough, and a system that gives you seven of those to choose from is better than one that gives you none.



PowerPoint Remix Rant

I’m just back from some time off, and I’m feeling too lazy to finish reading the McGraw-Hill/Microsoft Open Learning announcement. Maybe someone could read it for me?

I can tell you where I stopped reading though. It was where I saw that the software was implemented as a “PowerPoint Plugin”.

Now, I think that the Office Mix Project is a step in the right direction in a lot of ways. It engages with people as creators. It creates a largely symmetric reading/authoring environment. It corrects the harmful trend of shipping “open” materials without a rich, fork-friendly environment to edit them in. (Here’s how you spot the person who has learned nothing in the past ten years about OER: they are shipping materials in PDF because it’s an “open” format).

The PowerPoint problem is that everything in that environment encourages you to create something impossible to reuse. Telling people to build for reuse in PowerPoint is like putting someone on a diet and then sending them to Chuck E. Cheese for lunch every day. Just look at this toolbar:

[Screenshot: the PowerPoint formatting toolbar]

That toolbar is really a list of ways to make this content unusable by someone else. Bold stuff, position it in pixel-exact ways. Layer stuff on top of other stuff. Set your text alignment for each object individually. Choose a specific font and font size that makes the layout work just right (let’s hope that font is on the next person’s computer!). Choose a text color to match the background of your slides, because all people wanting to reuse this slide will have the same color background as you. Mark it up, lay it out, draw shapes that don’t dynamically resize, shuffle the z-index of elements. Get the text size perfect so that you can’t add or subtract a bullet point without the layout breaking.

Once you’re done making sure that the only people who can reuse your document must use your PPT template, with your background, your custom font, and roughly the same number of characters per slide, take it further! Make it even more unmixable by making sure that no slide is understandable outside the flow of the deck. Be sure to make the notes vague and minimal. In the end it doesn’t matter, because there is no way to link to individual slides anyway.

You get my point. Almost every tool on this interface is designed to “personalize” your slides. Create your brand. The idea is that this is a publication, and your or your university’s stamp should be on it, indelibly.

Most things work like this, unfortunately, encouraging us to think of our resources in almost physical terms, as pieces of paper or slides for which there is only upside to precisely controlling their presentation. But that desire to control presentation is also a desire to control and limit context, and it makes our products as fragile and non-remixable as the paper and celluloid materials they attempt to emulate. We take fluid, reusable data and objects, freeze them into brittle, data-poor layouts, and then wonder why nothing ever gets reused.

So I love the idea of desktop-based OER tools, of symmetric editing and authoring. But there’s part of me that can’t help but feel that the “personal” in “personal publishing tools” has a more pernicious influence than we realize. It’s “personal” like a toothbrush, and toothbrushes do not get reused by others.

End of rant. Maybe I need a bit more sleep…

