Explaining Federation Through Family Movie Night, Part I

I’ve been struggling to explain to people why federation is necessary. In practice, federation doesn’t get you much until there are people around to federate with.

Worse, it doesn’t get you anywhere until there is valuable material in your federation. Valuable material takes time to produce, and people aren’t going to spend that time making federated content until they see the value. So we have a bit of a Catch-22 here.

Luckily, I’ve come up with an example of solving a simple, pressing problem using federation that does not take much time investment. It’s about family movie night.

This video explains the problem and why a non-federated solution will not work. The next video will show how federation can solve it.

NeoVictorian Computing, and the Cult of the Lowest Common Denominator

M. C. Morgan (my first friend met through federated wiki) pointed me to this series on NeoVictorian Computing by the guy who wrote Tinderbox, a Mac-only hypertext computing tool. The primary point he makes throughout the series is how our fetish for “transparent computing” is making both users and programmers miserable.

What do I mean by “our fetish for transparent computing”? You see it everywhere — that every program should look the same, that every bug must be eliminated, no matter how small, that a user interface must be immediately understandable to the novice.

This results in a sort of fast food computing that pleases the senses but ultimately leaves us unsatisfied, unhealthy, and unproductive. We expect our software to demand about as much of us as watching YouTube #FAIL videos, and we end up getting about as much out of it as we should expect in such circumstances. Problems that software could solve (and could have solved ages ago) remain unsolved because if it doesn’t fit in a bugless File > Edit > Tools menuing system, or worse, the intuitive touchiness of Tablet Computing, then no one is going to use it.

Our response to this trend is interesting. How many times have we heard the story about the toddler who “just starts working the iPad naturally?” And what amazing progress this is!

Step back from that and analyze that statement. The device we are using for our jobs can be used by a toddler. And we’re proud of that!

Would you feel the same way about books? “This book on utility computing is so simple that my third-grader gets it. You have to buy it!”

The problem is that the whole point of your computer is that it is NOT an intuitive physical object, but rather an instrument relatively unconstrained by the physical world, and unconstrained by the program author’s intention. It’s supposed to push the boundaries of what’s possible.

A car simple enough for a third-grader to drive is an accomplishment, because we are not counting on the car to do anything radically new.

On the other hand, an interface a four-year-old can use on top of an information technology product is probably a failure, because it means your product encourages a four-year-old’s vision of the world.

I’m not saying things that are hard now should stay hard forever. I’m not saying it’s a virtue to force your user to compile their own code and run it on Node.js on an EC2 instance (after installing pm2, of course, because we all know screen crashes!). Eventually, such things need to be made easier. (Though obviously, in beta states this is how things may have to be.)

No one is asking for installation and setup to become less transparent. That’s just making your user do work you couldn’t automate.

But interface elements that are essential to advancing the way we do things? New gestures that pay off after a week of use? New models of thinking about media elements? We under-use these. And we give ourselves too many excuses for not engaging with them. Sure, Google Wave was corporate Google-ware, but the tech press gave it, what, a week?

And that tablet computing is making our applications even simpler is not an achievement, but rather a threat to our ability to solve new and complex problems.

What’s the alternative? More software. More specialized software. Small pieces loosely joined. Long term relationships with software instead of acquaintances. NeoVictorian Computing. Read it.

UPDATE: In response to Scott’s comment, I wanted to clarify things. This is not a defense of lazy, crappy software, or software that forces you to understand your system’s file-structure to make it work. I don’t buy into the whole “editing your config file will set you free” line of thought any more than I bought the “to truly drive a car you have to rebuild an engine” line of thought. That’s laziness posing as edification.

The claim is actually meant to be the opposite. Think of Google Wave, which was actually a pretty slick piece of software that was far more refined and far less machine-like than email. I don’t know if Wave should have succeeded or failed. But the critique of Wave was not that it was hard to get running, or difficult to use, or forced you to know the internals of it to really use it right. The critique of it was it forced people to reconceptualize their mail, and they couldn’t do that after ten minutes of playing with it, and therefore it was doomed.

I understand why the general public felt that way. But why do we support that? Nelson’s OpenXanadu is yet another example — “It requires too much reformulation to make XanaDocs” is probably an OK response, but the response that will kill it is that it is “too confusing”. Never mind that he is trying to create a whole new paradigm.

So yes, any system that makes me generate reports by saving a csv somewhere and uploading to another place that produces a PDF to download from a third location — please stop making crap-ware like this. It wastes my time in exchange for yours.

But we also have to be careful we don’t fall down this rabbit hole of making only software that does not take any time to master, or software that is only general in nature. What I want from a developer is some very careful thought about what the experience of using this system will be like 30 hours into it, not a relentless focus on my first 30 seconds.

Personalized Learning, 1700s Style

I’ll throw this into the discussion. This customized level of challenge idea has been around a long time. The sociological implications are far from neutral. See below for a circa 1800 example from Gregor Girard’s school:

[image: girard]

(from The Mother Tongue, English translation published 1848)

“Providence does not give to all alike” — call me cynical, but this is where this discussion STILL ends up far too often today.


Why Personalized Learning Fails

There’s a great discussion going on about the myth of personalized learning, both at Dan Meyer’s blog and at Benjamin Riley’s. Michael Feldstein has also stepped into the conversation, pointing out the two (or more) definitions that seem to be in play here.

I’ve covered this area more fully before (see last year’s Are Conversation and Customization Orthogonal?). But I’ll just add this.

If you look at the methodologies that have tended to produce great results, structured discussion ranks very highly. That could be peer instruction for physics. It could be one of Dan Meyer’s puzzlers. It could be your Socratic dialogue on the strengths and weaknesses of democratic systems.

I often warn about overgeneralizing across disciplines but let me overgeneralize across disciplines here: if there is one thing that almost all disciplines benefit from, it’s structured discussion. It gets us out of our own head, pushes us to understand ideas better. It teaches us to talk like geologists, or mathematicians, or philosophers; over time that leads to us *thinking* like geologists, mathematicians, and philosophers. Structured discussion is how we externalize thought so that we can tinker with it, refactor it, and re-absorb it better than it was before.

Is personalization orthogonal to structured discussion? That’s debatable, I suppose.

In practice, do the current forms of personalization in vogue (see, for instance, Rocketship) undermine the ability of a skilled teacher to run productive structured discussions?

Absolutely. Not a doubt in my mind.

Sure, you can have a book club where everyone is on a different chapter. You can have a meeting where people have read the pre-meeting documents at some random time over the past three months. All these things are possible. They just don’t work that well. If the meat of your instruction is discussion, you have to make sure the personalization approach supports that, and that’s harder to do than it looks.

We’ve gotten so used to running around saying education is broken that we forget what an amazing feat it is that what are essentially biological cavepeople go through twelve to twenty years of talking with other cavepeople and at the end of it can land a probe on Mars or dissect the sociological implications of street art. That’s a lot of success to put on the line on a hunch we could do a bit better if we let everyone go at different paces on their iPads. I wonder how many people realize that?


Smallest Federated Wiki as a Universal JSON Canvas

Watching Alan Kay talk today about early Xerox PARC days was enjoyable, but also reminded me how much good ideas need advocating. As Kay pointed out repeatedly, explaining truly new ways of doing things is hard.

People looked at the PARC stuff, and many saw a solution to this problem or that problem. But that wasn’t the point. If you walked away from PARC saying — wow, fonts! or if you walked out of the Mother of All Demos saying “I’ve been *looking* for a way to organize my grocery list” you didn’t really get it.

These technologies were not solutions to problems as much as a whole new way to think about solving problems.

Anyway, the reason that I keep hammering on about federated wiki is that I think it is one of those rarer meta-technologies. In the way it upends certain ways of thinking about the web it offers us the ability to get beyond our assumptions about how people and computers co-operate.

And yet I find myself cornered into explaining it as a new way to organize a shopping list. (And doing it).

However, I’m lucky to have a good co-analyst in this endeavor. Jon Udell recently looked at Smallest Federated Wiki and, I think, saw something like what I see. SFW borrows ideas from GitHub and re-blogging platforms (like Tumblr, for instance). But it does so NOT in the context of a specialized use, but as a universal canvas. What’s a universal canvas? The Jon Udell of today points to the Jon Udell of 2006 to explain:

The most common workflows, by far, are mundane collaborations involving chunks of semi-structured data. Despite its warts, we continue to rely on e-mail with attachments as the standard enabler of these collaborations because it is a universal solvent. Our HR folks, for example, work for a different organizational unit than I do. Implementing a common collaboration system would require effort. Exploiting the e-mail common denominator requires none.

But while e-mail dissolves barriers to the exchange of data, we need another solvent to dissolve the barriers to collaborative use of that data. Applied in the right ways, that solvent creates what I like to call the “universal canvas”: an environment in which data and applications flow freely on the Web.

Jon, you’ll remember, literally wrote the book on Internet groupware. And he highlights a piece of this I don’t highlight nearly enough. SFW is a platform that allows you to apply a federated, mashup-friendly workflow to ANYTHING. Right now it’s the *Smallest* Federated Wiki, and it only has a few plugins. But the plugin architecture is JSON-based and extensible.
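To make that concrete, here’s roughly what a federated wiki page looks like under the hood. The top-level shape (a title, a “story” of typed items, a journal of edits) follows SFW’s page format; the “customer-card” item type and all the particulars are invented for illustration, sketched here in Python just to show the shape:

```python
import json

# A simplified sketch of a federated wiki page: a title, a "story"
# (an ordered list of typed JSON items), and a "journal" of edits.
# The "customer-card" type is hypothetical, standing in for any
# custom plugin; a paragraph is just another typed item beside it.
page = {
    "title": "Professor Jones",
    "story": [
        {"type": "paragraph", "id": "a1b2",
         "text": "Notes on the LMS migration project."},
        {"type": "customer-card", "id": "c3d4",
         "department": "Geology", "phone": "555-0142"},
    ],
    "journal": [
        {"type": "create", "item": {"title": "Professor Jones"}},
    ],
}

# Because every item is plain JSON, a client that has never heard of
# the "customer-card" plugin can still store and forward it losslessly.
serialized = json.dumps(page)
restored = json.loads(serialized)
print(restored["story"][1]["type"])  # customer-card
```

The point of the shape is that plugins extend the story item vocabulary without touching the page format itself.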

A quick example. When I worked at Keene State we looked at using a Customer Relationship Management system to help us make sure we weren’t stepping on each other’s toes in dealing with faculty, and also to keep detailed notes on projects we were doing, so that if someone had to step in while a colleague was away on vacation, they could.

But like so many collaborations, it fell apart. I did quite a bit of committee work and wanted my documents on faculty to keep track of nearly every email regarding decisions made, because having those emails at the ready is politically important. Jenny, in Academic IT, on the other hand, wanted to track IT time more centrally and keep the “customer” record clean of intermittent communication. Becca, who worked in service learning, had about two hundred people off-campus she was working with that no one else wanted to see in the directory.

You’ll recognize this by now as a problem that a federated approach might be able to solve. But if SFW was just a wiki (and not a JSON canvas) you’d have to give up your structured data to use it, and that would hardly be worth the trade.

Here we don’t have to trade. You create a couple JSON plugins:

  • An email plugin that allows you to drag an email onto the Factory drop-area, and store the email in the page
  • A “customer card” plugin that allows you to enter structured data about your contacts, again as an item in the page’s story
  • A “meeting record” plugin that allows you to log future appointments and past meetings as structured, dated data. Maybe it can even accept a drag and drop from Outlook Calendar.
  • A “tickle-file” plugin that allows you to drop a “call next week” reminder on a customer’s page.

You build these, construct the proper view permissions and set it free. Everyone has their own site, but the records are connected across sites through naming conventions. Everyone has the power to organize their site the way they find most effective.

Better yet, since it’s a universal canvas, if we set up another federated wiki as a site for a project we are doing, we can just fork the pages on the people who are involved with that project over to our project wiki. And if we make notes on those pages in our project wiki, those notes can flow back to the CRM. Because it’s universal, see?

And all this flowing back and forth does not result in one bit of data loss. A page with that “customer card” plugin can get forked a dozen times through half a dozen projects and at the end of the line the JSON data is just as parseable as it was on day one.
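Here’s a toy sketch of why that holds, in Python. The fork function is my simplification (a wholesale copy plus a journal entry), and the site names are invented, but it shows the key property: structured story items ride through any number of forks untouched.

```python
import copy

def fork(page, from_site):
    """A toy model of forking: copy the whole page and record
    provenance in the journal. Story items are never transformed,
    so structured data survives every hop intact."""
    forked = copy.deepcopy(page)
    forked["journal"].append({"type": "fork", "site": from_site})
    return forked

original = {
    "title": "Professor Jones",
    "story": [{"type": "customer-card", "id": "c3d4",
               "department": "Geology"}],
    "journal": [],
}

# Fork the page across three (hypothetical) sites in a row.
page = original
for site in ["mike.example", "jenny.example", "becca.example"]:
    page = fork(page, site)

# After three forks, the structured item is still intact JSON.
card = page["story"][0]
print(card["department"], len(page["journal"]))  # Geology 3
```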

“So now it’s a CRM?”

No, I know none of you are thinking that. You all realize the point is that this gives us a new way of thinking about how to think about problems. Not *just* CRM issues. *Even* CRM issues.

SFW isn’t really even a wiki to some extent. It’s more like a networked document.

I’m happy Jon sees this. I absolutely love my edtech friends and colleagues who, as Alan Levine put it, are committed to understanding SFW if only because of my excitement about it. But at some point you begin to doubt your own judgment… 😉

(Incidentally, Alan Kay also used to talk about a universal canvas, and claimed the most idiotic thing about the Internet was that your page of mixed-mode data didn’t arrive with the code you needed to interpret it. So the web became text with some hope-you-have-this-extension content. I’ll try and find that video.)


Name-Based Approaches to Networks, and Why They are Crucial to the Personal Web

Alan Levine made a great comment on a previous post — the frustration in Smallest Federated Wiki is that “you never know where you are.”

This got me thinking about some foundational issues that need to be explained — issues bigger than any directed how-to. One of the reasons people feel lost on SFW is that 20 years of web browsing has taught us to think about the web in terms of “location” instead of “name”.

If we ask someone to get a book for us, there are a couple of ways we can do that. The first way is to specify by location: “Get me the book on the third floor of the library, first bookcase, 5th shelf down, three books from the right.”

This is roughly how the internet works. An address is in fact a location.

And that makes sense in a standard scenario:

  • Our library is always open
  • Books don’t change places
  • There is really only one library in which you can find each book you want

And if you look at that, that’s ARPANET circa 1970: always-on, immobile server machines, a situation where most content was stored in only a small number of places.

If you think about libraries though, they don’t define books by location. They define books by ID. You know that books are defined by ID and not location because the Library of Congress ID number you get when you look up a book in your library is the same number that is stamped on copies of that book all around the world. You don’t look up a book in a catalog and find an ID that says “second-shelf-two-books-in”. You get something like “G2207.P4C76 G7 1995”.

And so your path is different. You hand that number to the librarian. She says, nope, that book’s not in, but let’s check Interlibrary Loan.

This is useful, because the book exists in various locations, and the book at each location is intermittently available. If we were to build a system where books were identified by their location, the system would break every time a book was unavailable or the library was reorganized.

This situation — a series of intermittently available resources widely duplicated across a network — is much closer to the reality of the web in 2014 than the first model. Yet we hold on to the location-based model. And what that does is ensure that the parts of the web that don’t look like 1970 — us, our laptops, our phones, our personal servers — can’t be full members of the web, because in a location-based system, transience is the cardinal sin.

Smallest Federated Wiki works the second way. Your link is to named content. Your system searches your trusted network for that named content — first looking on your starting context (site you started from), but then looking other places as needed. This means that very un-1970-like things can happen, such as the following:

  • I run a server on my laptop and post an article to an SFW site on my laptop.
  • Tim Owens comes by on his laptop, running his personal server, and writes an article that links to mine (on my laptop).
  • Amy Collier sees my article and forks it to her site on her laptop.
  • I shut down my laptop, and my SFW site is no longer available for the day.

OK, so far, so good. But here’s where the magic happens.

  • Jim Groom comes to Tim’s page and clicks the link to mine.
  • I’m not online, so, assuming Amy is in his neighborhood, the link transparently pulls up Amy’s version of the page.

Notice that none of this required coordination. When Tim linked to my page, Amy’s copy didn’t even exist. Amy might not even be in Tim’s neighborhood. He might not even know Amy exists.  Yet here Jim is arriving at Amy’s page via a link that Tim wrote.
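If you want the lookup spelled out, here’s a sketch of the idea in Python. The site and page names are invented, and real SFW fetches pages as JSON over HTTP rather than reading from dicts, but the fallback order is the point: slug the name, try the origin site, then walk the neighborhood until someone answers.

```python
def as_slug(title):
    # SFW-style slugs: lowercase, spaces become hyphens.
    return title.lower().replace(" ", "-")

def resolve(title, origin, neighborhood):
    """Name-based lookup: check the site the link came from first,
    then fall back to other sites in the reader's neighborhood."""
    slug = as_slug(title)
    for site in [origin] + [s for s in neighborhood if s != origin]:
        found = site["pages"].get(slug)
        if found is not None:
            return site["name"], found
    return None, None

# Hypothetical sites: mike is offline (serving nothing), but Amy
# forked his page earlier, so her copy can answer for the name.
mike = {"name": "mike.example", "pages": {}}
amy = {"name": "amy.example",
       "pages": {"federation-notes": {"title": "Federation Notes"}}}

site, page = resolve("Federation Notes", origin=mike,
                     neighborhood=[mike, amy])
print(site)  # amy.example (the link resolves from Amy's copy)
```

Tim’s link never named a server, only a page; that is why Jim can land on Amy’s copy without anyone coordinating anything.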

This is only one example of the benefits of named content systems, but hopefully it is sufficient to show that they open up an array of possibilities that simply weren’t possible before.

Does that reduce the compelling need to know “who the heck’s server is this?” on SFW? Not completely. After all, the content IDs used by SFW can point to radically different content. Authorship matters. And I’ll demonstrate some ways to deal with such things later. But it seemed impossible to do the “how-tos” without dealing with at least some of the “why-fors” up front.


Minimally Invasive Assessment and the New Canvas Suite

Instructure has a new announcement about Canvas, and it’s in an area close to my heart. They are rolling out a suite of tools that allow instructors to capture learning data from in-class activities.

But Mike, you say, the LMS is evil, and more LMS is eviler. Why you gotta be Satan’s Cheerleader?

Well, here’s my take on that. The LMS is not evil. What is evil is making the learning environment of your class serve the needs of the learning management system rather than serve the needs of the students.

One area where the LMS has traditionally distorted practice is in the devaluing of in-class work. The LMS treats the homework you turn in nowadays as an artifact demonstrating competency. It’s matched to outcomes. It shows up in your outcomes mapping, institutional assessment, the whole bit.

That great comment you made in class, though, the one that demonstrated passion, engagement, and understanding of a core concept? The LMS couldn’t care less. And as the use of analytics becomes more and more prominent, the risk of distortion of practice becomes more acute.

That’s what we’re fighting against. Or at least what I’m fighting against.

Let me give you an example. My wife is an art teacher in a K-3 setting. You think you have a lot of students? You know how many my wife has?

630.

Six hundred and thirty students. And here’s the kicker — if she wants the district to support offering these kids art class, she’s going to have to explain how her instruction is helping kids progress, both in the state standards and in areas that overlap with the Common Core.

Luckily, there’s a lot of overlap. When you bisect a page with a horizon line, that’s understanding ratios. When you make your own “in the style of Van Gogh” picture of your classroom in fingerpaint and marker, that meshes well with the Common Core standards on understanding authorial style. When you talk about what The Scream conveys, that’s getting at elements of authorial intent.

Nicole would love to capture the incredible amount of learning that goes on in the classroom — both to demonstrate its value, as well as to make sure that in juggling 630 kids she is not letting some slip through the cracks. She’d love to have a discussion with second graders about American Gothic and be able to know which students she has never seen tackle authorial intent, and see how they do when asked. She’d love to wander around a room of kindergarteners drawing horizon lines and record whether or not they got the concept of “equal halves”.

So when I told her about the new Canvas product suite, which includes a tool which gives you slide-left/slide-right style assessment of students for in-class use, she was ecstatic. She could have her mobile phone out, walk around the room talking to the students, and at the same time do quick, unobtrusive assessments of the students. And what she likes about that vision is she doesn’t have to change a thing about how her classroom works to implement the reporting and analysis.

We can argue whether we should be using behavioral objectives or conceptual ones, whether we should be using Bloom or Dee Fink — but at the end of the day no matter what we choose to track, we need the tracking to stay out of the way of everything else. We’ve seen what happens when we say the only assessment that counts is a test, and it’s not pretty.

I’ve never bought into Mitra’s concept of “Minimally Invasive Education”. While it has some value, at its core it’s a TED-talk vision of the world where we are one technology-drop away from curing hunger. Meh.

But Minimally-Invasive Assessment? Assessment that flows with the activities that help students learn rather than against them? That’s something I can get behind. Kudos to Canvas for getting behind it as well.