Copying a Whole Site Is Remarkably Easy In Smallest Federated Wiki

Operations in Smallest Federated Wiki tend to be page-level — dashboard-style site managers have been avoided for the moment. Still, the speed at which operations execute makes site-wide work surprisingly easy. This video shows how to copy a small fifteen-page site in about a minute.

If you think about how long it would take you to log into a dashboard interface, export a site, log into another dashboard interface, and upload the file to the import process — Smallest Federated Wiki compares favorably.

How is this speed achieved? First of all, moving the integration to the browser allows us to pull two sites together into a single interface. Importantly, neither site has to have any knowledge of the other before the drag, because to the browser a site is just another data source. It’s the difference between the two models below, with the federated model on the right.

[Diagram: the server-centric integration model on the left, the federated model on the right]

Client-based integration is more amenable to fluid reuse because it can present a single integrated view of multiple sites in a way that server-based systems cannot.
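To make that concrete, here is a minimal sketch of client-side integration, assuming SFW's convention of serving each page as JSON at <site>/<slug>.json (hedged; the exact path and fields may differ in the current codebase). Neither server knows about the other; the browser simply pulls from both:

```typescript
// Hypothetical sketch: to the browser, each wiki site is just a JSON data source.
// The /<slug>.json path follows SFW convention but is an assumption here.
async function fetchPage(site: string, slug: string): Promise<unknown> {
  const res = await fetch(`${site}/${slug}.json`);
  if (!res.ok) throw new Error(`${slug} not found on ${site}`);
  return res.json();
}

// One integrated view of two sites that know nothing about each other.
async function sideBySide(siteA: string, siteB: string, slug: string) {
  return Promise.all([fetchPage(siteA, slug), fetchPage(siteB, slug)]);
}
```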

The second reason it’s so quick is the parallel pages structure. The multiple pages on the screen are less impressive looking than your average web page. But you pay a massive tax for that look in the form of the “click-forward, act, click-backward” actions you perform every single day. Here you see how much eliminating that speeds up interaction, as you click on a list that stays in place and then fork the pages without playing the “forward-back” game.

As a side note, having used SFW for a while, I now get frustrated in “normal” web interfaces that use the single-page model. It feels ridiculously kludgy. Forward and back in 2014? Are you kidding?

The third reason the operation flows well is the data-based nature of it. We’re not shipping layout to the new site, we are essentially copying the database record for that page. No formatting surprises to greet you after the copy operation.
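In data terms, the copy might look something like the sketch below. The write endpoint is invented for illustration; SFW's real fork flow differs in its details:

```typescript
// Hedged sketch of a fork: copy the page's JSON record, not its rendered layout.
// The PUT endpoint is hypothetical, not SFW's actual API.
async function forkPage(fromSite: string, toSite: string, slug: string): Promise<void> {
  const res = await fetch(`${fromSite}/${slug}.json`);
  if (!res.ok) throw new Error(`${slug} not found on ${fromSite}`);
  const page = await res.json(); // the whole record: title, story, journal
  await fetch(`${toSite}/page/${slug}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(page), // no layout ships; the destination renders it its own way
  });
}
```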

So fine — this is fifteen pages. What if you wanted to fork a site of a hundred pages? Well, it’d probably take seven times this long, so maybe 10 minutes?

That’s ten minutes to fork a picture-perfect copy of any SFW site in the world. I’m not even sure you can do that on GitHub in ten minutes.

(Are people beginning to get the power of these few small interface changes yet?)

The Web is Broken and We Should Fix It

Via @roundtrip, this conversation from July:

[Embedded conversation]

There’s actually a pretty simple alternative to the current web. In federated wiki, when you find a page you like, you curate it to your own server (which may even be running on your laptop). That forms part of a named-content system, and if later that page disappears at the source, the system can find dozens of curated copies across the web. Your curation of a page guarantees the survival of the page. The named-content scheme guarantees it will be findable.

It also addresses scalability problems. Instead of linking you to someone’s page (and helping bring down their server) I curate it. You see me curate it and read my copy of that page. The page ripples through the system and the load is automagically dispersed throughout the system.
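A sketch of what that named-content lookup could look like, assuming each site in your neighborhood serves pages by slug (an assumption carried over from the sketches above):

```typescript
// Hypothetical resolver: if the origin is gone, any curated copy will do.
async function resolveNamedPage(slug: string, neighborhood: string[]): Promise<unknown | null> {
  for (const site of neighborhood) {
    try {
      const res = await fetch(`${site}/${slug}.json`);
      if (res.ok) return res.json(); // first surviving curated copy wins
    } catch {
      // site unreachable; keep walking the neighborhood
    }
  }
  return null; // no copy survives anywhere we know of
}
```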

It’s interesting that Andreessen can’t see the solution, but perhaps that’s to be expected. Towards the end of a presentation I gave Tuesday with Ward Cunningham about federated content, Ward got into a righteous rant about the “Tyranny of Paper”. The idea he was digging at was that the model of a web page as a printed publication has caused us to ignore the unique affordances of digital content. We can iteratively publish, for example, and publish very unfinished sorts of things. We can treat content like data, and mash it up in new and exciting ways. We can break documents into smaller bits, and allow multiple paths through them. We can rethink what authorship looks like.

Or we can take the Andreessen path, which, as Ted Nelson said in his moving but horribly misunderstood tribute to Doug Engelbart, is “the costume party of fonts that swept aside [Engelbart’s] ideas of structure and collaboration.”

The two visions are not compatible, and interestingly it’s Andreessen’s work that locked us into the latter vision. Your web browser requests one page at a time, and the layout features of Mosaic and then Netscape guarantee that you will see that page as the server has determined. The model is not one of data — eternally fluid, to be manipulated like Engelbart’s grocery list — but of the printed page, permanently fixed.

And ultimately this gives us the server-centric version of the web that we take for granted, like fish in water. The server containing the data — Facebook or Blogger, but also WordPress — controls the presentation of the data, controls what you can do with it. It’s also the One True Place the page shall live — until it disappears. We’re left with RSS hacks and a bewildering array of API calls to accomplish the simplest mashups. And that’s because we know that the author gets to control the printed page — its fonts, its layout, its delivery, its location, its future uses.

The Tyranny of Print led to pages delivered as Dead Data, which led to the server-centric vision we now have of the web. The server-centric vision led to a world that looks less like BitTorrent and more like Facebook. There’s an easy way out, but I doubt anyone in Silicon Valley wants to take it.


Ward Cunningham’s explanation of federation (scheme on right) — one client can mash together products of many servers. Federation puts the client, not the server, in control. 

UPDATE:

It seems we got front-paged at Hacker News. So for those who don’t follow the blog, I thought I’d add a one-minute video showing how Smallest Federated Wiki uses a combination of JSON, NodeJS, and HTML5 to accomplish the above model. This vid is just about forking content between two different servers — really basic. Even neater stuff starts to happen when you play with connecting pages via names and people via edit journals, but we’ll leave that for another day.
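For those who want to peek under the hood, an SFW page is roughly a JSON document shaped like the following (a from-memory sketch; check the SFW repo for the authoritative schema):

```typescript
// Approximate shape of an SFW page record (hedged; field names may differ).
interface StoryItem {
  type: string;   // "paragraph", "image", or a plugin type like "fivestar"
  id: string;
  text?: string;
}
interface JournalEntry {
  type: string;   // "create", "add", "edit", "fork", ...
  date?: number;
}
interface WikiPage {
  title: string;
  story: StoryItem[];      // what you see on the page
  journal: JournalEntry[]; // how the page came to be
}
```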

This and more videos and explanations are available at the SFW tag.

The Part of Wiki Culture the Classroom Forgot

If you look at most treatments of wiki in the classroom, people talk about collaboration, group projects, easy publishing, revision control. All of these are important. But one important element of what makes a wiki a wiki has been underutilized.

Wikis not only introduced the editable page to users, but also the idea of page-creating links. (In fact, this invention pre-dates wiki and even the web, having first been pioneered in the HyperCard implementation Ward Cunningham wrote for documenting software patterns.)

Page-creating links are every bit as radical as the user-edited page — perhaps even more so. What page-creating links allow you to do, according to Cunningham, is map out the edges of your knowledge — the places you need to connect or fill in. You write a page (or a card), look at it, and ask: what on this page needs explanation? What connections can we make? Then you link to resources that don’t exist yet. Clicking one of those links gives you not an error but an invitation to create that page. The new page contains links back to concepts you’ve already documented, but also generates new links to uncreated resources. In this way the document “pushes out from the center,” with each step both linking back to old knowledge and identifying new gaps.
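As a small illustration of the mechanic, here is how a renderer might treat bracketed wiki links. The slug rule and the markup are invented for the sketch:

```typescript
// Sketch: render [[Wiki Links]], turning links to missing pages into
// invitations to create them. Slug rules and markup here are assumptions.
function renderWikiLinks(text: string, pageExists: (slug: string) => boolean): string {
  return text.replace(/\[\[([^\]]+)\]\]/g, (_match, title: string) => {
    const slug = title.trim().toLowerCase().replace(/\s+/g, '-');
    return pageExists(slug)
      ? `<a href="/${slug}.html">${title}</a>`
      : `<a class="create-page" href="/${slug}.html">${title}</a>`; // styled as a gap to fill
  });
}
```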

In the video below I show this “pushing out from the center” process on a wiki of my own and talk about how this architecture and process relate to integrative learning. For best viewing, hit the HD button and make it full screen.

Using Wiki for Connected, Integrative Learning from Mike Caulfield on Vimeo.

Amelia Bedelia’s Hats Are Not the Problem

So there’s been an Amelia Bedelia Wikipedia hoax. We learn that Amelia was not, in fact, inspired by a maid from Cameroon who wore sensational hats — a “fact” inserted by a vandal that survived on the site since 2009.

Yawn.

First, let me say that in a world where Elsevier was recently discovered to have published half a dozen fake journals for drug companies, a world where 60 articles were more recently retracted from the Journal of Vibration and Control due to a “citation ring”, and a world where med-research postdocs are faking anti-cancer trials and still working, I’m not sure what we’re comparing this “hoax” to.

But, more importantly, the main problem with Wikipedia is not about error.  The problem is more subtle than that, and it’s not something that the Daily Dot is going to cover anytime soon.

Here, for example, is a passage introduced on the Amelia Bedelia Wikipedia page in the spring of 2007, which survived through the end of that year:

The name “Bedelia” is a derivation of the common Irish name Bridget. Irish maids were portrayed as being comically inept in the vaudeville theaters of New England in the late 19th century. A popular joke of the period has a maid instructed to “serve the tomatoes undressed”; she brings the dish to the dining room, wearing only her underclothes, saying, “I won’t take off another stitch — not if I lose my place, Ma’am.”

Interesting, right?


In January 2008, that edit disappears.


As far as I can tell, the redaction is never discussed, and the vaudeville connection never returns.

Is the beloved Amelia Bedelia a protracted Irish maid joke? A sanitized relic of a previous age, when the Irish weren’t yet considered “white”? That seems a rather important question for a scholarly work. And after reading about the potential vaudeville connection, it’s hard to un-see.


At the same time, that’s a pretty hefty accusation to make on something that is supposed to be the reference page on Amelia Bedelia.

It’s important to note that, unlike uncaught hoaxes, this sort of thing happens on Wikipedia all the time.

And the problem here is not that the Wikipedia community allowed such a paragraph initially, or that it eventually deleted it. People can have honest disagreements about such things.

The problem here is the centralization. Someone removed the edit, no one in the community noticed or defended it, and the information disappeared for good. For all intents and purposes, it vanished from the face of the earth.

So while EJ Dickson runs down the number of places the Cameroon hoax showed up, somewhat harmlessly, in news copy and book reports, it’s more interesting to me to think how many of those people doing web research on Amelia Bedelia may have, in fact, been spared considering some of the more difficult and interesting questions Amelia Bedelia raises.


We need to have authoritative networks to turn to when trying to answer questions. There’s a place for things like Wikipedia, which pushes groups toward consensus. But when those networks are too centralized, or hold a monopoly on truth, minority concerns get lost, deleted, and overruled. Controversy gets papered over. Difficult questions get sanded down smooth. Life gets easier, but at a substantial cost.

This is the issue that federation as an alternative model of networked community is meant to solve, and one of the issues Smallest Federated Wiki is meant to address.  I’ve talked about that before here, but the point I want to make in this post is even more basic. In short, we are not living in a 2008 Jay Leno monologue. It’s time to stop pretending that error and overload are the web’s biggest issues, and time to start looking at the vast variety of voices and bodies of knowledge that still have no home on the web, and no easy entry into the discussions that define our culture.

That’s a less amusing story than stoned kids vandalizing Wikipedia pages, but it’s the one that actually matters.

Federated Wiki for Distributed Notetaking (and the surprising pedagogical implications of that)

I mentioned earlier that I’d decided to change my explanation of federated wiki from a “top-down” explanation to a “bottom-up” one.

It makes a heck of a difference. I made the video below for one of our faculty, to show how even something as simple as note-taking becomes an integrative exercise in federated wiki. I don’t know about you, but watching the flow of work in SFW is just *enjoyable* to me. I mean it when I say that once you get fluent with it, it feels like a direct extension of your own mind.

So what about the federated stuff? The JSON stuff? The Universal Canvas stuff?

It’s still there. Once you start to think of wikis as personal, it raises all the sorts of questions these things solve. But it’s better to start bottom-up than top-down.

Doug Engelbart’s Grocery List

If you’ve watched the Mother of All Demos, you know that one of its aha! moments comes when Engelbart pulls out his grocery list. The idea is pretty simple: if you put your grocery list into a computer instead of on a notepad, you can sort it, edit it, clone it, categorize it, and drag-and-drop reorder it.
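Engelbart's point is easy to restate in modern terms: once the list is data rather than ink, every one of those operations is a one-liner. A throwaway sketch:

```typescript
// Once a grocery list is data, Engelbart's operations become trivial.
type Item = { name: string; category: string };

const list: Item[] = [
  { name: 'milk', category: 'dairy' },
  { name: 'apples', category: 'produce' },
  { name: 'cheddar', category: 'dairy' },
];

const sorted = [...list].sort((a, b) => a.name.localeCompare(b.name)); // sort it
const cloned = list.map(item => ({ ...item }));                        // clone it
const byCategory = list.reduce<Record<string, Item[]>>((groups, item) => {
  (groups[item.category] ??= []).push(item);                           // categorize it
  return groups;
}, {});
```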


That was 1968. So how are we all doing?

If you’re like my family, there are probably multiple answers to that, none particularly good. When Nicole shops, she writes the list out on a sheet of paper and spends a good amount of time trying to remember all the things she has to get. I sometimes write it out in an email I send to myself, and then spend time searching for past emails I can raid for reminders.

Sorting? Cloning? Drag and drop refactoring?

Ha! What do you think this is, the Jetsons?

How the hell did we get here? Your average car’s fuel injector has more computing power than the machine Engelbart demonstrated this on. And it’s not as if this problem has been solved elsewhere.

I’d argue that what happened is that we moved towards database-driven, single-use apps. The truth is that there are DOZENS of shopping-list apps, from Don’t Forget the Milk, to the creatively titled GroceryList, to whatever LiveStrong is doing this week.

But they each read like GroceryIQ:

Grocery IQ (Free) includes all the features you would expect in a powerful grocery app, including a barcode scanner, list sharing, and integrated coupons. If you are like me and buy the same things every week, the favorites list will help you save time. You can also edit your list online and it will update automatically on the app.

On one level, GroceryIQ works better than anything we saw from Engelbart. Barcode scanner! Favorites list! Wow, powerful!

But chances are it’s a worthless piece of junk to you compared to the email method. Why? Well, you need your specific phone to use it. You can’t print it out. The people you share with need accounts too. Your data is being compiled and sent somewhere for god knows what reason. You can choose favorites, but there’s no easy way to save this year’s Thanksgiving shopping list for next year without the programmers building that for you. If you’d like to separate your list by the store you buy it at, you’re also out of luck. You’ve got to learn a whole new interface. If you ever move off of it, you lose all the stuff you put into it — no export/import functionality. And of course the idea that your grocery list app could somehow exchange data with a recipe app or a food log — not possible in the least.

On the other hand, managing my list in email gets me 90% of these things. So why go to another app I’ll use for two weeks and give up on?

The reason we keep using email is that for the set of tasks requiring more than plain text but less than an app, we have nothing. MS Word, maybe. Excel. But somewhere along the line we handed the keys to the current model, and so instead of getting general-use tools that radically expand our ability to solve problems, we get GroceryIQ, followed by whatever piece of crapware we download next week.

The solution is to recapture some older principles of software design. We need to look at that gap between the tools we use for unstructured data (email, Word) and the over-specified, locked-in products that comprise single-use computing — or worse yet, enterprise software bloat. As Jon Udell has noted, most of our day consists of dealing with semi-structured data, yet the tools we have access to are either too rigid or too loose to be of use. It’s time we fill the gap with networked, generative tools that can take us to the next level.

The Universal JSON Canvas and Ben’s Five Star Plugin

I’ve borrowed Jon Udell’s term (“universal canvas”) for talking about SFW. In this video I talk about a plugin my brother Ben wrote for SFW earlier this week, and try to show what that means in semi-mechanical terms.

One of the things I think it starts to show is how much of a construction kit SFW is. As Jon Udell noted when discussing the concept of the universal canvas (in 2006!!), most of our days are spent in a world of semi-structured data, yet most of the products we have access to either have no affordances for structured data at all, or are engineered to a level of precision appropriate to accounting systems.

Over-engineering data collection is not just a waste of resources. It’s a dangerous practice where data is concerned. Most times we start collecting or sharing data, we’re not even really certain what we want to collect — yet modern practice forces us to design tables and subroutines before we collect *anything*. How the heck can lead users move things forward in such an environment?

In practice, of course, no one moves forward at all. Most information that could help us is never logged anywhere, or it is logged in inaccessible, unparseable formats such as Word XML. When it comes to data, we have access to pea-shooters and intercontinental missiles, and little in between.

A better approach is to create semi-structured data environments that rely more on conventions and culturally adopted techniques to add meaning to data. The data won’t be perfect, but because convention is more fluid than backend schemas, practice can evolve. Despite what a database admin will tell you, the biggest problem we face is not lack of data consistency. The biggest problem we face is the amount of information captured in no way at all. Using flexible JSON documents with front-end plugins starts to address that issue — and we know from history that it’s a lot easier to retroactively clean up data we already have than to capture data we never collected.

In the example I show here, how would we know that the fivestar plugin is showing your overall movie rating and not, for instance, the rough quality of the cinematography? Convention. We agree that the first rating is always your overall rating. We agree that unseen films should be rated “0”.

As that convention solidifies, it gets encoded in a template. Now the template generates these objects with template-determined IDs. So now we don’t need to look for the first “fivestar” to get the rating — we look for object ‘bd3b3ea18244c038’, which is what the template called that top fivestar interface. We walk through the sitemaps in the neighborhood and find all pages named “Pulp Fiction” and average their values, exclusive of the zeros we decided would mean “not rated”.
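Here is a hedged sketch of that walk. It compresses the sitemap step into a direct page fetch per site; the template id comes from this post, and the page shape and field names are assumptions:

```typescript
// Sketch: average one film's "fivestar" rating across a neighborhood of sites.
// A fuller version would consult each site's sitemap to find pages by title;
// here we fetch the slug directly.
const TEMPLATE_ID = 'bd3b3ea18244c038'; // the id the template gave the top fivestar item

async function averageRating(neighborhood: string[], slug: string): Promise<number | null> {
  const ratings: number[] = [];
  for (const site of neighborhood) {
    const res = await fetch(`${site}/${slug}.json`).catch(() => null);
    if (!res || !res.ok) continue; // no copy of the page on this site
    const page = (await res.json()) as {
      story: Array<{ id?: string; type?: string; rating?: number }>;
    };
    // Template-era pages: match the id the template assigned.
    // Pre-template pages: fall back to the first fivestar item, per convention.
    const star = page.story.find(i => i.id === TEMPLATE_ID)
              ?? page.story.find(i => i.type === 'fivestar');
    if (star?.rating && star.rating > 0) ratings.push(star.rating); // zero means "not rated"
  }
  return ratings.length ? ratings.reduce((a, b) => a + b, 0) / ratings.length : null;
}
```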

Is this more laborious than a join? More error-prone than a SQL Stored Procedure? Well, yeah. You’re never going to get a scientific level of precision from this.

But you don’t need it. If your process for finding the best movies misses a film because you filled it out pre-template and it didn’t have the right object ID, you will still get more films in your search results than if you had entered no films at all. Those are the kinds of problems we tend to have in life and work, and it’s time for a process and software approach that addresses them.