This post assumes that you’ve read some other posts on federated wiki. There are a few dozen on this site if you have not. Click the federated wiki tag and then scroll down to see them all.
If you know what federated wiki is, the following description should get you started with federated wiki use in your classroom.
Make Page of Site Creation Links
Set up a page in your class federated wiki (owned and managed by you) that links to not-yet-existing sites for each one of your students. You’re running federated wiki in farm mode, so going to these sites will create them for the students who go there. The page will look like this:
Have Students Set up Their Sites and Bio Pages
When a student clicks on their link, it will give them a new site with the name you specified. I chose a convention of “first two letters of first name + first two letters of last name” which allows me to quickly identify a student while still giving them internet anonymity if they want it. Here’s what it looks like when the student clicks it:
Under “Pages about Us” have the student put in a name as a link. It could be their full name, their first name, a nickname. Just as long as it is recognizable to you. After adding the name as a link, they click on the link. This new page will be their bio page. At this point I show them an example bio of myself — something relatively lighthearted but substantial.
Students will draft their bio pages. A lot of students will make boring bio pages at first, but here’s part of the genius of wiki — have the students look at other student bios after they are done making theirs, and generally this will help some of the students conceptualize theirs. At the end of this you’ll end up with a lot of very cool bios.
IMPORTANT: After students set up their bios, it’s a good time to have them “claim” their sites with the big “Claim” button at the bottom. This uses a Mozilla Persona login that sets the student up as the sole editor of the site. If you forget to do this early, students will end up unintentionally editing other students’ sites, which isn’t the end of the world, but is a bit of a headache. Have them claim the site early.
Create a “Class Circle”
Now it’s your turn — you have to create what we’ll call the “Class Circle”. This will be a page that students can load to see the work of all the other students in the class — not just in the recent changes feed, but in search results, “twin” notifications, and the like. To make a circle create a bunch of factory drop areas on a page named “Our Class Sites” or something similar:
Now go to that page of links of all the student sites, and for each link:
- Click the link to go to the student page.
- Click the link to the student bio.
- Drag the student bio onto an empty “factory” drop area
This will pull a “reference” to the student site and the first paragraph of their bio into your page. (Note: I did this with the “link launch page” described above to streamline the process and standardize site names, but you could also have students self-select site names and email you the links).
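Under the hood, each drag drops a “reference” item into the story of your circle page. A sketch of what one stored item looks like, assuming the JSON shape SFW pages use; the id and text here are invented for illustration:

```python
# One item in the "story" array of the circle page, as SFW stores it.
# A reference points at a page on another site and carries that page's
# first paragraph along as a synopsis.
reference_item = {
    "type": "reference",
    "id": "b9f0c9a2a5e8d7c1",           # random item id (invented)
    "site": "krde.mits.wsuv.wiki",      # the student's site
    "slug": "kristin",                  # slug of the student's bio page
    "title": "Kristin",
    "text": "Hi, I'm Kristin. I'm an elementary ed major...",  # invented
}
```

Because the reference carries the site name, anyone who loads the circle page pulls that site into their neighborhood automatically.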
I had 20 students — the process took about 10 minutes. It’s the most time-consuming part of the setup. But when you are done you should have a page that looks like this.
Tracking Student Work Using the Neighborhood
The circle page is pretty cool, because anyone can load it and see all the class activity (to be technical: it pulls class sites into their “neighborhood”). Students can (and will) fork it back into their own sites. Unlike FeedWordPress and other “hub” designs, however, the power to make circles is given to the students as well — the students can easily create their own circle page entitled “English majors” if they want, and pull in all the references to sites by English majors in the class. They can set up circles for their group, or for the three people who always do exemplary work.
Once you have your class circle in place, you’ll be able to track the work of the class through your recent changes page. Here’s a snapshot of it the day after class:
Here I’ve loaded my class circle, clicked recent changes, and am looking at a recent submission by a student on the “redefinition” aspect of SAMR. One thing to note here is how well the form supports a “notes” aesthetic — the student here writes very well, but is allowed to put up half-formed thoughts and questions to which they can later return. If the metaphor for the student blog is the personal journal, the metaphor for federated wiki is the researcher’s notebook.
We also see the usefulness of the colored icons here. Scanning this changes feed, we can see that:
- The student we are looking at right now, with the teal gradient, has been very busy, and has in fact gotten all their work for next week already done.
- Four other students have done a page on the SAMR model of educational technology impact.
- Another student (purple gradient) has done the SAMR assignment, although maybe not the “note-taking strategies” assignment.
Since I used a naming scheme (first two letters of first name and first two letters of last), I can hover over these icons and know immediately which student they represent. The teal icon here has a hover text of “krde.mits.wsuv.wiki”, which tells me this is Kristin D’s work. If we click on the teal icon at the top of Redefinition, we can get her Welcome Page. Another shift-click opens up her bio page as well (click replaces the page to the right of the page clicked, shift-click adds a page in the first empty spot, giving you the page in an added column — it sounds odd, but feels awesome when you get the hang of it).
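Since the scheme is mechanical, it is easy to script. A throwaway sketch in Python: the surname here is hypothetical (I only identify this student as Kristin D in the post), and the farm domain is the one from my setup, so substitute your own:

```python
def site_name(first: str, last: str) -> str:
    """First two letters of the first name plus first two of the last, lowercased."""
    return (first[:2] + last[:2]).lower()

def launch_link(first: str, last: str, farm: str = "mits.wsuv.wiki") -> str:
    """In farm mode, visiting this URL creates the student's site on first use."""
    return "http://" + site_name(first, last) + "." + farm

# "Deal" is an invented surname for illustration.
print(launch_link("Kristin", "Deal"))  # http://krde.mits.wsuv.wiki
```

Running this over a class roster gives you the full set of links for the launch page in one go.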
We can also look at just Kristin’s feed now that we’ve collapsed our “neighborhood” to just her.
Using “Twins” as a Student to See Other Approaches to an Assignment
Reloading our class circle and going to the page on the SAMR model, we can start to see how the federated aspect works in the classroom. Any student or teacher can easily use the “twins” notification up top (the part that shows links to older and newer versions) to pull up different student work on the same subject.
The assignment was to find some articles on SAMR and to summarize them. In this case, a day after class, a couple students have found the article they want to use, but not done anything yet. One of the neat things here is I can check on work in progress — see what articles they’ve selected and the like. For the students, one of the neat things is that by seeing other student work in progress, they have some idea of what the target they are trying to hit might be.
That’s enough to get you started. We did more in class than this, but I’ll write up the next part later.
I found the process to be pretty smooth by edtech standards. Certainly orchestrating mass registration in a class always has a bit of a herding cats element to it, but this process actually compared favorably with something like signing up for Google Sites or setting up a blog. That said, there were a few issues I’d make more effort to plan around.
As I mentioned, you should be very insistent that students claim their sites early on. We did have one issue where a student looking at other student bios ended up claiming someone else’s site inadvertently, which was a bit of a mess to sort out. Before the students start to wander off their newly created site, have them claim it.
Creating the Class Circle
I found it a tad difficult to create the Class Circle while simultaneously assisting students in setting up their bio pages. I think what I would do in retrospect is have them set up bio pages, claim them, surf other bio pages, edit their own pages again — then I’d call a break. I could probably get the circle page made in about five minutes while the students go get a soda. When they came back, we’d continue.
Logouts and Yellow Borders
I’m not sure how this happened, but a couple students logged themselves out and started getting “yellow-border” pages, indicating their changes were not being saved to the server. Additionally, in the flurry of 18 people hitting the AWS micro instance at once it may be that one or two of the edits did not post because of that (note: this is only speculation). In any case, I think I would have started off explaining blue and yellow borders to students, and showing them what to do if they got a yellow (check to make sure you’re logged in, then fork the page to the server to save your offline edits).
The biggest surprise is that no one really had trouble wrapping their head around the tool. It was no harder for students to understand than blogging or social bookmarking. We even did an activity where students forked a page with a George Siemens video on it, took notes on the video, checked the notes other students had written through using the “twins” links, collaborated with students in their group on a page, then did a cross-tab drag and drop to fork the resulting video summary to their site. One or two students out of the class didn’t quite make it, but the vast majority of the class did this easily.
(If Warhol did George, it’d have looked like this).
This might all fall apart as we get deeper into the tool — here they are just executing actions without really understanding the underlying interaction model. So I don’t want to celebrate too much yet. But it may be that federated wiki is easier for people who have no extant understanding of feed-based blogging communities or standard wikis, since we don’t have to unseat any existing ideas of how the web is supposed to work.
Then again, it could just be I got lucky — this was a heavily guided activity, and the question is whether they can do it without the guidance. We’ll find out next week.
Operations in Smallest Federated Wiki tend to be page-level — dashboard-style site managers have been avoided for the moment. Still, the speed at which operations can be executed makes site-wide stuff pretty easy. This video shows how to copy a small fifteen-page site in about a minute.
If you think about how long it would take you to log into a dashboard interface, export a site, log into another dashboard interface, and upload the file to the import process — Smallest Federated Wiki compares favorably.
How is this speed achieved? First of all, moving the integration to the browser allows us to pull two sites together into a single interface. Importantly, neither site has to have any knowledge of the other before the drag, because to the browser a site is just another data source. It’s the difference between the two models below, with the federated model on the right.
Client-based integration is more amenable to fluid reuse because it can have a single integrated view of multiple sites in a way that server-based systems can not.
The second reason it’s so quick is the parallel pages structure. The multiple pages on the screen are less impressive looking than your average web page. But you pay a massive tax for that look in the form of the “click-forward, act, click-backward” actions you perform every single day. Here you see how much eliminating that speeds up interaction, as you click on a list that stays in place and then fork the pages without playing the “forward-back” game.
As a side note, having used SFW for a while, I now get frustrated in “normal” web interfaces that use the single-page model. It feels ridiculously kludgy. Forward and back in 2014? Are you kidding?
The third reason the operation flows well is the data-based nature of it. We’re not shipping layout to the new site, we are essentially copying the database record for that page. No formatting surprises to greet you after the copy operation.
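A minimal sketch of what that record-level copy amounts to, assuming the page shape SFW uses (a title, a story of items, and a journal of actions); the exact field names of the fork action may differ from the real implementation:

```python
import copy
import time

def fork_page(page: dict, from_site: str) -> dict:
    """Fork = copy the page record verbatim, then note the fork in the journal.
    No HTML or layout ships with the copy; the items render the same anywhere."""
    forked = copy.deepcopy(page)
    forked.setdefault("journal", []).append({
        "type": "fork",
        "site": from_site,                # where the copy came from
        "date": int(time.time() * 1000),  # SFW journals use millisecond timestamps
    })
    return forked

original = {"title": "Redefinition",
            "story": [{"type": "paragraph", "id": "1a2b", "text": "Notes on SAMR..."}],
            "journal": []}
mine = fork_page(original, "krde.mits.wsuv.wiki")
```

The journal entry is what powers the “twins” links later: each copy remembers where it came from.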
So fine — this is fifteen pages. What if you wanted to fork a site of a hundred pages? Well, it’d probably take seven times this long, so maybe 10 minutes?
That’s ten minutes to fork a picture perfect copy of any SFW site in the world. I’m not even sure you can do that in GitHub in ten minutes.
(Are people beginning to get the power of these few small interface changes yet?)
First there was Buzzfeed, which admittedly plagiarized material:
Take that “Faith in Humanity” write-up. Last September, NedHardy.com—“the self-anointed curator of the Internet,” a kind of poor man’s BuzzFeed—posted an item called, “7 Pictures That Will Restore Your Faith in Humanity.” Then, last month, NedHardy posted another piece, “13 Pictures To Help You Restore Your Faith in Humanity.” Half of the photos in BuzzFeed’s post appear in NedHardy’s two compilations. NedHardy isn’t mentioned anywhere in BuzzFeed’s “21 Pictures” post.
Then the derp began to grow. Rick Perlstein, author of a new Reagan biography, has been accused of plagiarism in what seems to be a political tactic:
In the letters, Shirley [a longtime political operative] claims that Perlstein lifted “without attribution” passages from “Reagan’s Revolution,” and substantially ripped off his work even when attributing. He demands that all copies of “The Invisible Bridge” be destroyed, with an additional request of a public apology and $25 million in damages.
Rick Perlstein would have to be the worst plagiarist in history, by citing his victim 125 times in source notes and thanking him in the acknowledgments.
And then there’s Newsweek editor Fareed Zakaria, who has been accused by bloggers of passage rip-offs like this:
This is insane. Let’s start with the Buzzfeed example. Certainly Buzzfeed did build off the work of Ned Hardy without attribution. Just as Ned Hardy posted photos he had seen elsewhere without hat-tipping those who had found them. Just as he took Buzzfeed’s famous formula of “X pictures that Y” and put it to use on his site.
The Reagan example is a bit of an odd case, but speaks to the dangers of this road. The Zakaria example borders on parody.
What is it that we’re arguing here? That Zakaria should spend time rewriting a sentence like “In 2009, Senate Republicans filibustered a stunning 80% of major legislation”? For what purpose? What if that is the most obvious way to say it, and other formulations just subtract from the impact?
What do we expect would happen if Zakaria cited Beinart for this sentence? What damage has occurred to Beinart as a result of Zakaria not citing it? Was there a legion of Zakaria fans who would have said — “Wow, that sentence from Beinart is brilliant — I need to read more Beinart!”
These things seem small, but they are not. Much (if not most) of our daily work involves written descriptions, curation of resources, and other recomposition of texts. Developing a culture that allowed for fluid reuse of the work of others would free up our capacity to solve problems instead of wasting time rearranging clauses. We are held back from fluid reuse by cultural conventions which force us to see wholesale copying of unique insights and pedestrian descriptions of Senate procedure as the same thing. We are held back by technologies that have not moved past cut-and-paste models of reuse. We are held back by the plagiarism police who demand that our attributions be placed in ways that break the flow of reading, or send users to source websites only to find the source was linked for trivial reasons.
Some people need to make a living off of words, and the reputation generated by their words. We need to preserve that. But we also need to radically rethink plagiarism if we are going to take advantage of the ability the web gives us to build off of the work of others. And we seem to be going in the opposite direction.
David Wiley with a great comment on yesterday’s post:
The answer, more or less, is yes. And initially that seems like a dealbreaker.
But here’s the history of the web, from me, condensed.
A long time ago very smart people decided that web pages had to all look different, that your stuff would only exist on your site and people had to link to your page as their way of reusing/quoting your stuff, rather than copying it to their own site. And we built a whole web around this idea that everybody would have different looking sites that contained only their content, everything would exist in exactly one place, and copyright would all keep us nice and safe. And every single one of these decisions made reusing and remixing a huge pain in the butt. But it was what we wanted, right?
Today most web activity happens on Facebook, Twitter, Tumblr, and Pinterest, and the way it works is that other people repost your stuff on *their* page, and everybody’s pages look the same, and people more or less like that because it makes resharing and reblogging and giving credit easy. So the web is more or less like Smallest Federated Wiki now, with the exception that instead of you having an open license, Facebook, Tumblr, Twitter, and Pinterest own your stuff, and none of them talk to one another.
So yes, it requires open licensing. But it’s honestly the system we have today, just refactored to account for what people actually ended up wanting. It builds the idea of “reuse, revise, reply, and reshare in your own space” into the core of the system so that you don’t need a third-party site to make that happen.
Via @roundtrip, this conversation from July:
There’s actually a pretty simple alternative to the current web. In federated wiki, when you find a page you like, you curate it to your own server (which may even be running on your laptop). That forms part of a named-content system, and if later that page disappears at the source, the system can find dozens of curated copies across the web. Your curation of a page guarantees the survival of the page. The named-content scheme guarantees it will be findable.
It also addresses scalability problems. Instead of linking you to someone’s page (and helping bring down their server) I curate it. You see me curate it and read my copy of that page. The page ripples through the system and the load is automagically dispersed throughout the system.
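The resolution step can be sketched in a few lines. Assuming each site serves its pages as JSON under a predictable slug (as federated wiki does), a client just walks its neighborhood; `fetch_page` below is a stand-in for the actual HTTP call, and the site names are invented:

```python
def find_page(slug, neighborhood, fetch_page):
    """Resolve a page by name across curated copies.
    Returns (site, page) for the first neighborhood site that still holds
    a copy; the origin going down doesn't matter as long as someone curated it."""
    for site in neighborhood:
        page = fetch_page(site, slug)   # e.g. GET http://<site>/<slug>.json
        if page is not None:
            return site, page
    return None

# Toy stand-in for the network: the origin died, but a curated copy survives.
copies = {
    ("origin.example.com", "federated-education"): None,  # 404: source is gone
    ("krde.mits.wsuv.wiki", "federated-education"): {"title": "Federated Education"},
}
hit = find_page("federated-education",
                ["origin.example.com", "krde.mits.wsuv.wiki"],
                lambda s, p: copies.get((s, p)))
```

The more people who curate a page, the more places this lookup can succeed, which is the load-dispersal effect described above.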
It’s interesting that Andreessen can’t see the solution, but perhaps to be expected. Towards the end of a presentation I gave Tuesday with Ward Cunningham about federated content, Ward got into a righteous rant about the “Tyranny of Paper”. And the idea he was digging at was that this model of a web page as a printed publication had caused us to ignore the unique affordances of digital content. We can iteratively publish, for example, and publish very unfinished sorts of things. We can treat content like data, and mash it up in new and exciting ways. We can break documents into smaller bits, and allow multiple paths through them. We can rethink what authorship looks like.
Or we can take the Andreessen path, which as Ted Nelson said in his moving but horribly misunderstood tribute to Doug Engelbart, is “the costume party of fonts that swept aside [Engelbart's] ideas of structure and collaboration.”
The two visions are not compatible, and interestingly it’s Andreessen’s work which locked us into the latter vision. Your web browser requests one page at a time, and the layout features of Mosaic (and later Netscape) guarantee that you will see that page as the server has determined. The model is not one of data — eternally fluid, to be manipulated like Engelbart’s grocery list — but of the printed page, permanently fixed.
And ultimately this gives us the server-centric version of the web that we take for granted, like fish in water. The server containing the data — Facebook or Blogger, but also WordPress — controls the presentation of the data, controls what you can do with it. It’s also the One True Place the page shall live — until it disappears. We’re left with RSS hacks and a bewildering array of API calls to accomplish the simplest mashups. And that’s because we know that the author gets to control the printed page — its fonts, its layout, its delivery, its location, its future uses.
The Tyranny of Print led to us getting pages delivered as Dead Data, which led to the server-centric vision we now have of the web. The server-centric vision led to a world that looked less like BitTorrent and more like Facebook. There’s an easy way out, but I doubt anyone in Silicon Valley wants to take it.
Ward Cunningham’s explanation of federation (scheme on right) — one client can mash together products of many servers. Federation puts the client, not the server, in control.
It seems we got front-paged at Hacker News. So for those that don’t follow the blog I thought I’d add a one-minute video to show how Smallest Federated Wiki uses a combination of JSON, NodeJS, and HTML5 to accomplish the above model. This vid is just about forking content between two different servers, really basic. Even neater stuff starts to happen when you play with connecting pages via names and people via edit journals, but I’ll leave that for another day.
This and more videos and explanations are available at the SFW tag.
If you look at most treatments of wiki in the classroom, people talk about collaboration, group projects, easy publishing, revision control. All of these are important. But one important element of what makes a wiki a wiki has been underutilized.
Wikis not only introduced the editable page to users, but the idea of page-creating links. (In fact, this invention pre-dates wiki and even the web, having been first pioneered in the HyperCard implementation Ward Cunningham wrote for documenting software patterns).
Page-creating links are every bit as radical as the user-edited page — perhaps even more so. What page-creating links allow you to do, according to Cunningham, is map out the edges of your knowledge — the places you need to connect or fill in. You write a page (or a card) and you look at it and ask — what on this page needs explanation? What connections can we make? Then you link to resources that don’t exist yet. Clicking on those links gives you not an error, but an invitation to create that page. The new page contains both links back to concepts you’ve already documented, but also generates new links to uncreated resources. In this way the document “pushes out from the center” with each step both linking back to old knowledge and identifying new gaps.
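Mechanically, a page-creating link is just bracketed text: writing `[[Integrative Learning]]` makes a link whether or not the target page exists yet. A sketch of the two pieces, using the slug convention federated wiki follows (spaces to hyphens, punctuation stripped, lowercased); the example paragraph is invented:

```python
import re

def as_slug(title: str) -> str:
    """Title of a [[Wiki Link]] -> the page slug it resolves to."""
    return re.sub(r"[^a-z0-9-]", "", title.replace(" ", "-").lower())

def find_links(paragraph: str) -> list:
    """All page links in a paragraph, whether their targets exist yet or not.
    Clicking a link whose target is missing invites you to create the page."""
    return re.findall(r"\[\[([^\]]+)\]\]", paragraph)

text = "This relates to [[Integrative Learning]] and [[Transfer of Learning]]."
links = [as_slug(t) for t in find_links(text)]
```

Every link that resolves to a missing slug is one of those edges of your knowledge: an invitation rather than an error.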
In the video below I show this “pushing out from the center” process on a wiki of my own and talk about how this architecture and process relates to integrative learning. For best viewing, hit the HD button and make it full screen.
Blue Hampshire, a political community I gave years of my life to, is in a death spiral. The front page is a ghost town.
It’s so depressing, I won’t even link to it. It’s so depressing, that I haven’t been able to talk about it until now. It actually hurts that much.
This is a site that at the point I left it had 5,000 members, 10,000 posts, and 100,000 comments. And at the point co-founders Laura Clawson and Dean Barker left it circa 2011(?), it had even more than that.
And what comments! Because I say that *I* put sweat into it, or Laura and Dean did, but it was the community on that site that really shone. Someone would put up a simple post, and the comments would capture history, process, policy, backstory — whatever. Check out these comments on a randomly selected post from 2007.
The post concerns an event where the local paleoconservative paper endorsed John McCain for their Democratic candidate, as a way to slight a strong field of Democrats in 2008.
What happens next is amazing, but it was the sort of thing that happened all the time on Blue Hampshire. Sure, people gripe, but they do so while giving out hidden pieces of history and background that just didn’t exist anywhere else on the web. They relate personal conversations with previous candidates, document the history the paper has of name-calling and concern-trolling.
Honest to God, this is one article, selected at random from December 2007 (admittedly, one of our top months). In December 2007, our members produced 426 articles like this. Not comments, mind you. Articles. And on so many of those articles, the comments read just like this — or better.
That’s the power of the stream, the conversational, news-peg driven way to run a community. Reddit, Daily Kos, TreeHugger, what have you.
But it’s also the tragedy of the stream, not only because sites die, but because this information doesn’t exist in any form of much use to an outsider. We’re left with the 10,000 page transcript of dead conversations that contain incredible information ungrokable to most people not there.
And honestly, this is not just a problem that affects sites in the death spiral or sites that were run as communities rather than individual blogs. The group of bloggers formerly known as the edupunks have been carrying on conversations about online learning for a decade now. There’s amazing stuff in there, such as this recent how-to post from Alan Levine, or this post on Networked Study from Jim. But when I teach students this stuff or send links to faculty I’m struck by how surprisingly difficult it is for a new person to jump into that stream and make sense of it. You’re either in the stream or out of it, toe-dipping is not allowed.
And so I’m conflicted. One of the big lessons of the past 10 years is how powerful this stream mode of doing things is. It elicits facts, know-how, and insights that would otherwise remain unstated.
But the same community that produces those effects can often lock out outsiders, and leaves behind indecipherable artifacts.
Does anyone else feel this? That the conversational mode while powerful is also lossy over time?
I’m not saying that the stream is bad, mind you — heck, it’s been my way of thinking about every problem since 2006. I’m pushing this thought out to all you via the stream. But working in wiki lately, I’ve started to wonder if we’ve lost a certain balance, and if we pay for that in ways hidden to us. Pay for our lack of recursion through these articles, pay for not doing the work to make all entry points feel scaffolded. If that’s true, then — well, almost EVERYTHING is stream now. So that could be a problem.