Name-Based Approaches to Networks, and Why They Are Crucial to the Personal Web

Alan Levine made a great comment on a previous post — the frustration in Smallest Federated Wiki is that “you never know where you are.”

This got me thinking about some foundational issues that need to be explained — issues bigger than any directed how-to. One of the reasons people feel lost on SFW is that 20 years of web browsing has taught us to think about the web in terms of “location” instead of “name”.

If we ask someone to get a book for us, there are a couple of ways we can do that. The first way is to specify by location: “Get me the book on the third floor of the library, first bookcase, fifth shelf down, three books from the right.”

This is roughly how the internet works. An address is in fact a location.

And that makes sense in a standard scenario:

  • Our library is always open
  • Books don’t change places
  • There is really only one library in which you can find each book you want

And if you look at that, that’s ARPANET circa 1970: always-on, immobile server machines, and a situation where most content was stored in only a small number of places.

If you think about libraries, though, they don’t define books by location. They define books by ID. You know that books are defined by ID and not location because the Library of Congress call number you get when you look up a book in your library is the same number stamped on copies of that book all around the world. You don’t look up a book in a catalog and find an ID that says “second-shelf-two-books-in”. You get something like “G2207.P4C76 G7 1995”.

And so your path is different. You hand that number to the librarian. She says, nope, that book’s not in, but let’s check Interlibrary Loan.

This is useful, because the book exists in various locations, and the book at each location is intermittently available. If we were to build a system where books were identified by their location, the system would break every time a book was unavailable or the library was reorganized.
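The difference between the two schemes can be sketched in a few lines of Python. The catalog entries, holdings, and availability flags here are all hypothetical, just to make the contrast concrete:

```python
# Location-based lookup: the address *is* the shelf position.
# If the book moves or the library reorganizes, the link breaks.
location_catalog = {
    "third-floor/case-1/shelf-5/book-3": "Greenland: An Atlas",
}

# Name-based lookup: one ID, many holdings. Each holding is a
# (location, currently-available) pair; we return the first copy
# that is actually available right now.
holdings = {
    "G2207.P4C76 G7 1995": [
        ("local-library", False),
        ("interlibrary-loan", True),
    ],
}

def resolve(call_number):
    """Return the first location holding an available copy, else None."""
    for location, available in holdings.get(call_number, []):
        if available:
            return location
    return None

print(resolve("G2207.P4C76 G7 1995"))  # interlibrary-loan
```

The point of the sketch: the caller never needs to know *where* the book lives, only *what* it is called, so individual copies can come and go without breaking anything.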

This situation — a series of intermittently available resources widely duplicated across a network — is much closer to the reality of the web in 2014 than the first model. Yet we hold on to the location-based model. And what that does is ensure that the parts of the web that don’t look like 1970 — us, our laptops, our phones, our personal servers — can’t be full members of the web, because in a location-based system, transience is the cardinal sin.

Smallest Federated Wiki works the second way. Your link is to named content. Your system searches your trusted network for that named content — first looking in your starting context (the site you started from), then looking in other places as needed. This means that very un-1970-like things can happen, such as the following:

  • I run a server on my laptop, post an article to an SFW site on my laptop.
  • Tim Owens comes by on his laptop, running his personal server, and writes an article that links to mine (on my laptop)
  • Amy Collier sees my article and forks it to her site on her laptop
  • I shut down my laptop, and my SFW site is no longer available for the day.

OK, so far, so good. But here’s where the magic happens.

  • Jim Groom comes to Tim’s page and clicks the link to mine.
  • I’m not online, so, assuming Amy is in his neighborhood, the link transparently pulls up Amy’s version of the page.

Notice that none of this required coordination. When Tim linked to my page, Amy’s copy didn’t even exist. Amy might not even be in Tim’s neighborhood. He might not even know Amy exists.  Yet here Jim is arriving at Amy’s page via a link that Tim wrote.
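The scenario above can be simulated in a few lines of Python. To be clear, this is not SFW’s actual lookup code — just a sketch of the resolution rule (try the starting context, then fall back through the neighborhood), with all site names and page names made up for illustration:

```python
# Each "site" maps page names to content; a site whose owner has shut
# their laptop is simply offline and can't serve its pages.
sites = {
    "mike.laptop": {"online": False, "pages": {"named-content": "Mike's original"}},
    "amy.laptop":  {"online": True,  "pages": {"named-content": "Amy's fork of Mike's original"}},
    "tim.laptop":  {"online": True,  "pages": {"tims-article": "Links to [[named-content]]"}},
}

def resolve(page_name, start_site, neighborhood):
    """Look for a named page on the starting site, then across the neighborhood."""
    for site_name in [start_site] + neighborhood:
        site = sites[site_name]
        if site["online"] and page_name in site["pages"]:
            return site_name, site["pages"][page_name]
    return None, None

# Jim follows Tim's link while Mike's laptop is off. Amy is in the
# neighborhood, so her fork is served transparently.
found_on, content = resolve("named-content", "tim.laptop",
                            ["mike.laptop", "amy.laptop"])
print(found_on)  # amy.laptop
```

Note that nothing here required Tim and Amy to coordinate: the link names the content, and whichever reachable copy turns up first is the one the reader gets.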

This is only one example of the benefits of named-content systems, but hopefully it is sufficient to show that they open up an array of possibilities that simply weren’t available before.

Does that reduce the compelling need to know “who the heck’s server is this?” on SFW? Not completely. After all, the content IDs used by SFW can point to radically different content. Authorship matters. And I’ll demonstrate some ways to deal with such things later. But it seemed impossible to do the “how-tos” without dealing with at least some of the “why-fors” up front.


Minimally Invasive Assessment and the New Canvas Suite

Instructure has a new announcement about Canvas, and it’s in an area close to my heart. They are rolling out a suite of tools that allow instructors to capture learning data from in-class activities.

But Mike, you say, the LMS is evil, and more LMS is eviler. Why you gotta be Satan’s Cheerleader?

Well, here’s my take on that. The LMS is not evil. What is evil is making the learning environment of your class serve the needs of the learning management system rather than serve the needs of the students.

One area where the LMS has traditionally distorted practice is in the devaluing of in-class work. The LMS treats the homework you turn in nowadays as an artifact demonstrating competency. It’s matched to outcomes. It shows up in your outcomes mapping, institutional assessment, the whole bit.

That great comment you made in class, though, the one that demonstrated passion, engagement, and understanding of a core concept? The LMS couldn’t care less. And as the use of analytics becomes more and more prominent, the risk of distortion of practice becomes more acute.

That’s what we’re fighting against. Or at least what I’m fighting against.

Let me give you an example. My wife is an art teacher in a K-3 setting. You think you have a lot of students? You know how many my wife has?


Six hundred and thirty students. And here’s the kicker — if she wants the district to support offering these kids art class, she’s going to have to explain how her instruction is helping kids progress, both in the state standards and in areas that overlap with the Common Core.

Luckily, there’s a lot of overlap. When you bisect a page with a horizon line, that’s understanding ratios. When you make your own “in the style of Van Gogh” picture of your classroom in fingerpaint and marker, that meshes well with the Common Core standards on understanding authorial style. When you talk about what The Scream conveys, that’s getting at elements of authorial intent.

Nicole would love to capture the incredible amount of learning that goes on in the classroom — both to demonstrate its value and to make sure that in juggling 630 kids she is not letting some slip through the cracks. She’d love to have a discussion with second graders about American Gothic and be able to know which students she has never seen tackle authorial intent, and see how they do when asked. She’d love to wander around a room of kindergarteners drawing horizon lines and record whether or not they got the concept of “equal halves”.

So when I told her about the new Canvas product suite, which includes a tool which gives you slide-left/slide-right style assessment of students for in-class use, she was ecstatic. She could have her mobile phone out, walk around the room talking to the students, and at the same time do quick, unobtrusive assessments of the students. And what she likes about that vision is she doesn’t have to change a thing about how her classroom works to implement the reporting and analysis.

We can argue whether we should be using behavioral objectives or conceptual ones, whether we should be using Bloom or Dee Fink — but at the end of the day no matter what we choose to track, we need the tracking to stay out of the way of everything else. We’ve seen what happens when we say the only assessment that counts is a test, and it’s not pretty.

I’ve never bought into Mitra’s concept of “Minimally Invasive Education”. While it has some value, at its core it’s a TED-talk vision of the world where we are one technology-drop away from curing hunger. Meh.

But Minimally-Invasive Assessment? Assessment that flows with the activities that help students learn rather than against them? That’s something I can get behind. Kudos to Canvas for getting behind it as well.


One Minute Federated Wiki: Pulling Something From Twitter

I’m going to start documenting how to do various things in Smallest Federated Wiki. These little tidbits will be helpful to people who have already started using SFW but may not know some of its less obvious features.

In this video, I deal with the problem of getting SFW pages you found through Twitter and other means onto your site. Often when you find a site in SFW your “left-most” context is your site, because that’s where you came from. When coming from Twitter, or a link in an email, or while browsing a site you “collapsed” (more on that later) you need a way to get another site’s page into your site’s context so you can fork it. It sounds complex; it’s easy in practice. Check out the video.

Making Class Wikis vs. Thinking in Wiki

In general I describe myself as a blogger, partially because my work title (Director of Blended and Networked Learning) just leads to too many questions, and partially because it ties together some experiences I’ve had over the past decade or so. Blogger is not quite accurate even there — the work I did with Blue Hampshire was technically more about running a pretty volatile online community than blogging, but it’s a good enough description most days, even if blogging consumes only 20 or so minutes a day.

The other reason I describe myself as a blogger, though, is that after you blog for a while, you start “thinking in blog”. Your mind is writing blog posts everywhere, constantly trying to synthesize new experience into a meaningful blend of narrative and exposition. It changes you, mostly for the better.

I can’t find the reference now, but I read something recently that argued that the difference between the way a botanist looks at a flower and the way a layperson does is that the botanist looks at the flower with a question. And the point the person was making is that when you write every day you start to look at everything with a question. In a way, daily writing defamiliarizes the world and makes it more difficult, because thoughts must be reconstructed for others who do not have reference to your experience or share your dispositions.

That said, however, different forms accomplish this in different ways, and are suited to different sorts of things. Blogging is a great tool in that it pushes you to see posts as steps in a journey to a current (and future) understanding. You link to past posts. You watch your thinking evolve. It also places you into a conversation with other bloggers, so that you understand how your conceptions map on to a larger communal consensus or disagreement. I could go on, but you get the point: the reverse-chronology structure of blogging combined with trackbacks, comments, blogrolls, and RSS pushes us to see knowledge in a certain way.

It’s been interesting playing with wiki the past few months, because what I’ve realized is that while I’ve used wikis (and taught with wikis) I’ve seldom *thought* in wikis.

As a simple example, I’ve done stuff with TV Tropes before. In TV Tropes you give certain tropes (repeated conventions) names, for instance Incredibly Obvious Bomb. Over time this library builds up to where many scenes in movies can be quickly analyzed with these ideas. You remember, for instance, the scene in Casablanca where the Jerk With a Heart of Gold makes an Iconic Song Request of his Black Best Friend?

If that seems silly, it’s not. Not in the least. By “chunking” large, complex observations and histories into terms and pages you make it possible for people to see patterns that would otherwise be invisible. The fact is, you’ll find that Jerks With a Heart of Gold tend to make a *lot* of Iconic Song Requests. That’s kind of interesting, right? It may be even more interesting that Jerks With a Heart of Gold often have Black Best Friends in a bizarre (and racist) form of Pet the Dog.

What’s Pet the Dog? Pet the Dog is another nexus of ideas. Here’s a snippet of that page:

This term was coined by cynical screenwriters, basically meaning: show the nasty old crank petting a dog, and you show the audience, aw shucks, he’s all right after all. Often used to demonstrate that a Jerkass is really a Jerk with a Heart of Gold, or, if more limited, that the character is goal oriented rather than sadistic and/or thoroughly evil. If used as an Establishing Character Moment then you skip right past the jerkass phase. Of course, this doesn’t mean specifically petting a cute animal, but any sign of nobility within a morally ambiguous character.

Sub Tropes include Photo Op With The Dog, Even Bad Men Love Their Mamas, Morality Pet (a character’s entire relationship with a villain is one long Pet the Dog moment), and Androcles Lion (where the dog would later reward the one who petted him/her).

Now you can ask a meaningful question — to what extent is having a Black Best Friend in a film an instance (or non-instance) of Pet the Dog? Does that change over time? That’s a question in a simple sentence as complex as any cultural studies article abstract, but rather than being established through a specialized jargon that implicitly references certain touchstone works, it accomplishes the same density of meaning through simple parts well connected.

So here’s the thing — I’ve known this is the point of much wiki when reading things like TV Tropes. But I’ve never made use of it when running a class wiki (or co-designing a wiki with an instructor for a class). We’ve sat down and written encyclopedias, collaborative class notes, community resource mapping sites — all of which are excellent uses of wiki.

But we’ve never asked the class to develop a new language in the study of a subject, or to extend an old one. Instead, we gravitate to more traditional modes of academic production, but wikified.

Does anyone have examples of a class producing their *own* analytical language through wiki, TV Tropes style? If you do, can you share links in the comments?

Flipped Classroom, 1972-style (and early visions of connected home computing)

Today I did two articles for the HHOL project (reminder: you should join the project!). The first article I wrote was on Ancient Roman Assessment. The second was on the late 60s/early 70s system called TICCIT, which used a combination of videotapes, servers, computers, and color terminals to deliver instruction into homes and dormitories over cable-TV infrastructure.

As I was looking through a 1972 write-up on the project, I was struck by how much more literate these technologists were in instructional design than the current crop of disrupters. Sure, it’s a mash-up of Bloom’s Mastery Learning and Skinner’s Programmed Instruction, but that’s pretty enlightened for 1972. And the development and review of project materials were overseen by an instructional psychologist (pg. 19), which happens today approximately never.

Technically, too, these people stand head and shoulders above today’s crowd. A main argument of the report is that the coaxial infrastructure of cable-TV holds remarkable potential when it is used to connect home computers to servers delivering interactive content (pg. 45). It then goes on to detail some of the services that could be delivered through the coaxial cable infrastructure, including email, meter-reading, online shopping, electronic newspaper delivery, travel route-planning, “cashless society” transactions, and computer dating (pg. 49-55). And they top it off by predicting that online education will make the biggest dent in the adult education market, with traditional students seeing a smaller impact. Again, not bad for 1972.

But the gem, for me, was this simple, unassuming paragraph:

[Image: “Flipped Classroom, 1972”]

Of course, it’s a restatement of the dream since Pressey — remove the burden of the repeatable so teachers can focus on the sort of personalized tutoring they do best. But the difference here is that these people know that. No one is wandering around claiming to have invented “flipped classroom”. It’s been invented. The question, as always, is how to make it work.