It’s become trivial to find these examples, I suppose, but here’s some snapshots from today, around 8 a.m. Pacific Time.
Facebook (snapshot via @eliparser, I use Facebook maybe once a month myself).
I’m curious why this happens (and maybe I should read Eli’s book?). In this case it’s not a Friendly Web issue — there are plenty of people to “like” the SCOTUS ruling. And while the population of Twitter is surely more socially involved (for good and for ill) it’s hard to see this repeated pattern as merely a demographic difference.
Yet one of these looks like a passable future, and the other looks like Neil Postman’s worst nightmare.
We’ve talked a lot about the fallacy of technodeterminism in the past here, and I’m not going to defend the reductionist version of that. But this looks like two very different futures to me, and it’s worth thinking about how the technology we promote in our classrooms shapes the future we’re launching our students into.
Hoisted from the journal:
David Graeber has a far too long essay in The Baffler, which is not worth reading in full. In the end, though, it comes to a common but worthwhile point: the structure of research today can’t be open-ended in any real way, due to creeping managerialism, and this kills any possibility of revolutionary technology:
That pretty much answers the question of why we don’t have teleportation devices or antigravity shoes. Common sense suggests that if you want to maximize scientific creativity, you find some bright people, give them the resources they need to pursue whatever idea comes into their heads, and then leave them alone. Most will turn up nothing, but one or two may well discover something. But if you want to minimize the possibility of unexpected breakthroughs, tell those same people they will receive no resources at all unless they spend the bulk of their time competing against each other to convince you they know in advance what they are going to discover.
This is a major problem in technology, though maybe not for reasons Graeber would identify. The main problem with our current setup, where companies make tools for broad edtech markets, is you lose the synergy between technology and practice. As Engelbart noted, the Tool System is only one half of the equation. True progress uses the Tool System to leverage change in the Human System, and in turn uses changes in the Human System to identify necessary tool modifications.
Engelbart’s solution to this, still underappreciated, was to have a team of developer-users that could alternate quickly between designing tools and constructing the culture and practice around them. That takes time, but as the Mother of All Demos showed, it can have fantastic results, because sometimes the future is only comprehensible when delivered as a package.
Current models of development don’t allow that sort of development to occur, and while that is not the reason that flying cars never came about, it is the reason that computer technology has advanced so slowly since the 1960s.
If you wanted to really revolutionize educational technology, for example, here is what I think you could do. Get together a representative group of developers to pair with a small laboratory school, and work so closely with it that the developers could walk in each day and observe ways in which the latest build had succeeded or failed. Talk with teachers about what works and what doesn’t. Organize technology around a new curriculum, then organize the new curriculum around the new affordances of technology.
Do this with ten, twenty, fifty schools, each school no larger than 500-1000 students. Leave these experiments alone for seven years.
I guarantee you at the end of seven years, one of those schools will have truly revolutionized education, and produced more innovation and “progress” than we’ve seen in the past 50 years. And the reason would be that the practice and the technology and the culture and the curriculum all grew together, reacting to the possibilities each exposed, rather than being developed separately.
We can’t do that sort of thing because we get too concerned with “waste” and “metrics” and “accountability” (as Graeber notes), but more importantly, we can’t do it because market-driven design *has* to design for *existing* culture. Without the “bootstrapping” framework of Engelbart we plod along at a snail’s pace.
For a related view see Phil Hill’s post on the LMS as a barrier to innovation.
Michael Feldstein has a must-read post on interoperability and learning management systems, the sort of writing we used to call nuanced and detailed but are now contractually obligated to call a “long-read”. It’s probably an “explainer” too, for that matter, from one of the best explainers of what-the-real-roadblocks-are around. This post is primarily a nudge to get you to read that post so that we can move to a deeper level of conversation on the problems engendered by the LMS.
I will add one (multi-paragraph) comment to what it presents, however. A testimonial of sorts.
It’s been eye-opening working on federated wiki because you simultaneously get amazed by the possibilities of stuff-done-at-the-right-level-of-abstraction and frustrated with people’s inability to comprehend things done at that level. People say they want a classic LEGO set, but in practice most conversations with actual people push you towards providing the Millennium Falcon set Michael mentions (via Amy Collier’s not-yetness presentation).
This is why in the consumer-driven space we get 22 “track your pet’s eating habits” apps next to 63 “track your water consumption” apps next to 98 “what did you eat today” apps, each with a different database, login, API, interface, and small company that will be out of business in a year anyway.
The cycle reinforces itself. In a world where you have gosh-darn so many apps, each app must be dirt simple to learn since you get a new app every week (and as quickly forget them). When presented with a classic LEGO set app people ask “How could I ever learn this in five minutes?”, unaware that the reason you have to learn things in five minutes is that you are dealing with problems at the wrong level of abstraction.
As Michael notes, the stuff that happens at the operating system level can support many things, but is useful primarily to developers, not users. The Millennium Falcon LEGO sets, on the other hand, are user-focused but over-specific. They lead one into a never-ending infancy, where one can quickly become competent with a tool, but never adept or creative with it.
What’s missing is tools in the middle — general purpose end-user tools. We get these every once in a while. Word processors, Excel, Hypercard, the web browser. Each a tool you enter to find a blinking prompt and a couple powerful, generative ideas waiting for you to tap into them. Each a tool that unleashes new capabilities and creativity.
But until users can see the relationship between their app-adopting behavior and their larger situation I’m not sure I see solutions like this in the near future. I’ll continue to promote and work on such solutions, because that’s where the potential is. But it’s the cultural issue that needs solving, and I’m still working out how we overcome that.
It’s quite possible that 2015 is to annotations what 2004 was to self-publishing. As annotations move mainstream, wiki can make them better.
Take Pinboard, which can be seen as a rudimentary annotation system. In Pinboard you read a page and write a summary, or disagreement, or whatever. It looks like this:
Pinboard as it stands. You summarize an article and tag it.
That’s interesting, and I’m glad I can find it later. Also, the process of summarizing is good for my comprehension.
Pinboard even goes further, allowing you to create “notes” which are free-standing, not tied to pages. So, for instance, we could tag a bunch of articles gdp-education, and write a couple of notes on the subject too. When we want to see everything we’ve read and written on the subject we hit the tag and voilà! An instant library.
But why can’t I link? Why does each of these annotations have to be an island?
Consider this small change: annotations and notes can be linked by name, the same way wiki pages can. Now I can not only annotate this article, but connect it to other articles and ideas I’ve been working on.
In this next image, we imagine a world where we link this article annotation to another article annotation and to a note in Pinboard that captures our evolving understanding of these issues. We do this based on annotation/note title:
Here we’ve wikified the text. (Again, this is just a thought experiment. You can’t do this in Pinboard.)
Cross-national data shows no association between increases in human capital attributable to the educational attainment of the population and growth in output per worker.
This is a confusing finding given [[Barro’s Determinants]] which found education a primary factor. See [[Education and GDP]] for full discussion.
In our thought experiment, “Barro’s Determinants” links to an annotation on Barro’s Determinants of Economic Growth, whereas “Education and GDP” links to a note we’ve been writing and updating every time we read something like this, summarizing our understanding of the relationship and linking to some other annotations and notes.
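Continuing the thought experiment: extracting those double-bracket links and building a graph of annotations takes only a few lines of code. The titles and text below are hypothetical, and nothing like this exists in Pinboard today.

```python
import re
from collections import defaultdict

# Hypothetical annotations using the [[Name]] wiki-link convention
# described above (not a real Pinboard feature).
annotations = {
    "Educational Attainment Study": (
        "This is a confusing finding given [[Barro's Determinants]] "
        "which found education a primary factor. "
        "See [[Education and GDP]] for full discussion."
    ),
    "Barro's Determinants": "Summary of Barro's growth regressions.",
    "Education and GDP": "Running note on the education/growth relationship.",
}

LINK = re.compile(r"\[\[([^\]]+)\]\]")

def link_graph(notes):
    """Map each annotation/note title to the titles it links to."""
    graph = defaultdict(list)
    for title, text in notes.items():
        graph[title] = LINK.findall(text)
    return dict(graph)

graph = link_graph(annotations)
print(graph["Educational Attainment Study"])
# ["Barro's Determinants", 'Education and GDP']
```

Once the links are explicit, the annotations stop being islands: you can follow them forward, or invert the graph to see everything that cites a note.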
I haven’t thought through enough the way this might work with newer annotation tools like Hypothes.is and Genius. In those tools, there are many annotations for a page, most unnamed. But I think there are possibilities there as well.
Making annotations act like wiki could move annotations from being webpage utility to being a network of their own. Annotation space may soon be the one general purpose open standard read/write space most people have access to. Let’s make it a first class citizen of the web.
We have developed this new feature in federated wiki called Rosters. I think the implications of it are pretty big for how fedwiki develops. I want to tell you about it.
So let’s start at the beginning. People have had trouble connecting with one another on federated wiki in the happenings. The architecture of federated wiki made it very easy to grow a large organic network of collaborators over time, but somewhat difficult to spin up a group quickly, the way most groups need to form.
So Ward and Paul built this thing called the Roster.
Rosters are ways to organize all the sites you read. Up at the top you can see eight or so sites I write or have written that I might want to bring into my search neighborhood, in a “I know I wrote something about that somewhere” sort of way. I’m also watching Ward Cunningham’s writings on the four sites of his I care about (but not, for example, watching his sites on programming issues).
When I want to “load” these sites into my neighborhood (which is what we call the search, activity, and link resolution context) I click that little sideways chevron arrow at the end of the squares that represent the sites and my neighborhood loads information about those sites that allows neat things to happen.
The rosters are configured with text. Here’s the text that produces the Roster above:
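As a rough, hypothetical sketch (these site names are made up, and the exact syntax may differ from the current plugin), roster text is just site names grouped under headings:

```
My Sites
mycopy.hapgood.net
journal.hapgood.net

Ward
ward.asia.wiki.org
found.ward.bay.wiki.org
```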
You just click in and edit it yourself.
Now this is cool stuff in and of itself, because it makes getting connected to others in the federation much easier than before, and allows you to organize those connections in ways that support your workflow.
However, there’s a neat and potentially game-changing feature we’ve added in — rosters can consume other rosters.
So for instance, I can remotely pull in a roster of Ward’s like so:
And it comes through like this:
You see how I can pull in the people that Ward follows, and supplement my page with them?
It actually gets even crazier, because you can use these rosters to customize your activity feeds. So, for example, I can go into my Recent Activity feed and tell my feed to look at only the sites that are in Ward’s Developer category. So here I edit my activity feed to show just the stuff from Developers, and to get really fancy, I say I only want to see the stuff that has been forked at least once by some other site.
And when I do that I see the set of pages that are presumably important or interesting or useful to multiple developers of Ward’s choosing:
And because we pull this roster by reference, not value, if Ward updates his roster we pull updates from the new set of people he’s defined. And before you ask, yes, you can even have rosters that pull in rosters of other rosters. I don’t know if that’s advisable, but it’s there for you (Ward has built in a protection against infinite recursion, though, so experiment all you want).
You see the point though, right? I can subscribe to a Roster that is maintained by someone else — a teacher, an expert, or a stranger that seems to have good taste. As they maintain the list what I see changes.
This makes possible all sorts of new use scenarios.
But wait a minute, I thought you were all about the decentralization!
We are. But part of aiding individuals is giving them the power to self-organize, or to divide and assign community maintenance tasks. Creating an open network with no tools for self-organization is disempowering. The question is what kind of tools you create.
I like this approach to centralization — you give people the ability to build and maintain their own communities with a set of LEGO blocks, but like LEGOs they can be pulled apart and reassembled as something else in the case that your current community becomes tyrannical, harassing, or just boring over time. You can take your pages with you and reassemble into newer, hopefully better communities.
Enough, when is the next Happening, and what’s it about?
I want to start the next happening as soon as possible. Here’s what it is going to be about.
We’re going to test this roster functionality by setting up “pods”. Pods will form around a task or research question — for instance, you could structure a pod around the task of improving the Wikipedia coverage of Mario Bava, or increasing the quality of articles on the world’s oceans, or something non-wikipedia like compiling every piece of information about where the “Images are processed 60,000 times as fast as text” myth comes from.
You put together your set of people you want working on this, and build a roster you share out to the others. And then you start to build out your knowledge in that weird federated wiki way, where things start to link together in ways you had not imagined. When the time is up, you consolidate your work — moving things into wikipedia, sharing as a more “normal” looking wiki, publishing your results or whatever.
All we need are Pod Leaders who are willing to facilitate the investigation of a subject of general interest. We could keep the pod sizes small — 3 to 12 people so that there is not that much community maintenance involved. And the experience could be two weeks or a slow burn at a few months. If you’re Pod Leader you’d get to choose the timeline and the topic.
So email me at email@example.com if you would like to be a Pod Leader or learn more about being a Pod Leader, and we’ll get this show on the road.
I used to think the main problem with Blackboard was that it applied an enterprise solution to a consumer software problem. I increasingly think the main problem is that it’s just lousy enterprise software.
Case in point: today we learned that all of the YouTube videos that all of our professors had embedded in Blackboard using their embedded video function (bizarrely named the “mashup” function) don’t work anymore. Every student in every Bb class is clicking on the YouTube videos their professor has embedded using “Mashups” and seeing this message as they frantically try to watch them before class:
“We are unable to display the mashup content. This happens if the system detects an invalid URL. Remove the mashup item and try again to resolve the issue.”
Right off the bat, there are some problems with even the error message. After the student stops wondering what the Intro to Genetics class video has to do with “mashups” (a musical form popularized by the hit show Glee) they have two other messages to decipher. First they are told that the system has detected an invalid URL, which is useless information to them. Then they are told to remove the mashup item, which they actually can’t do, and even if they could, it wouldn’t get them what they need, which is to see the video.
I want to repeat — this is happening right now in every class across the world with every video that any instructor embedded using the Blackboard YouTube tool. Professors are spending thousands of person-hours fielding emails from students unable to play videos or to make sense of the error message Blackboard produces.
Now, you might ask next, why is this happening at all? Blackboard will likely tell you “It’s YouTube! They pulled the API we used! We’re scrambling to react!”
So is that true? Yes, but not in the way you think. The API that was pulled has been deprecated for a little over a year.
At that time Google notified developers that support for the YouTube v2.0 API would be pulled on April 20, 2015, and that they should migrate to the 3.0 spec.
So what happened on April 20, 2015? Well, YouTube published a deprecation plan, saying hey — we were serious. We’re going to start shutting down the API piece by piece, and your stuff will stop working around the end of May:
So what did Blackboard do in the 14 months since they learned that the API would break? Apparently nothing. Nada. Zilch.
They didn’t even fix the ERROR message. Let that sink in. They couldn’t be bothered to update the error message. That message could include a clickable link to the video, saving the student a crisis and thousands of professors a flood of emails about broken videos. But they don’t care enough to do this.
This is really par for the course with Blackboard. It really is.
Is there any other industry where this would be tolerated?
Anyway, if you hear that Mashup-ageddon was “caused by a YouTube change”, politely decline to accept that answer. And if you’re considering purchasing Blackboard, you might ask them for an explanation of how this sort of thing continues to happen.
For my encore, I would love to detail their inability to keep our hosted Blackboard server up, even during finals week, with a server failure issue that they have not been able to fix despite working on it for over four months. But I have to go manually re-embed several hundred YouTube videos that are not working for our students. So I’ll have to write that post next week. Stay tuned!
I talk a lot about the open pedagogy case for federated wiki, but not much about the OER/OCW case for it. That doesn’t mean it isn’t a good fit for the problems one hits in open materials reuse.
Here’s how you currently reuse something in WordPress, for example. It’s a pretty horrific process.
- Log into the WordPress source site
- Open the file, go to the text editor.
- Select all the text and Ctrl-C copy it.
- Go log into your target WordPress site
- Create a new page and name it.
- Go into the text editor, and paste the text in.
- Save Draft.
- Go back to the source site. Right click on the images in the post and download them.
- Go back to the target site. Open the Media Gallery and upload the images you just downloaded.
- Go through your new post on the target site, and replace the links pointing to the old images with links pointing to the images you just uploaded.
- Preview the site, do any final cleanup. Resize images if necessary. Check to make sure you didn’t pull in any weird styles that didn’t transfer (Damn you mso-!)
- Save and post. You’re done!
- Oh wait, you’re not done. Go to the source post and copy the URL. Try to find the author’s name on the page and remember it.
- Go to the bottom of your new “target” page and add attribution “Original Text by Jane Doe”. Select Jane Doe and paste in the hyperlink. Test the link.
- Now you’re REALLY done!
It’s about a five- to ten-minute process per page, depending on the number of images that have to be ported.
Of course, that’s assuming you have login rights to both sites. If you don’t, replace steps one and two with copying the text from the rendered post and pasting it into the *visual* editor to preserve formatting, then go through the same steps, except spend an extra five to ten minutes on cleanup at step eleven.
It’s weird to me how fish-can’t-see-the-water we are about this. It’s 2015, and we take this 15-step process to copy a page from one site to another as a given.
And once you see how absurd this process is, you can’t *unsee* it. All these philosophical questions about why people don’t reuse stuff more become a little ridiculous. There are many psychological, social, and institutional reasons why people don’t reuse stuff. But they are all academic questions until we solve the simpler problem: our software sucks at reuse. If you had an evil plan to stop reuse and remix, you would build exactly the software we have now. If you wanted to really slow down remix, you would build the World Wide Web as we know it today.
Conversely, here’s what the steps are in federated wiki to copy a page from one site to another:
- Open your federated wiki site.
- Drag the page from the source site to the target site and drop it.
- Press the fork button. You’re done!
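Under the hood, the steps above amount to very little work, because a page is a single JSON document. A minimal sketch of what a fork does, with a deliberately simplified page shape and hypothetical site names:

```python
import copy
import time

# A simplified federated wiki page: title, story items, and a
# journal recording the page's history. Real pages carry more fields.
source_page = {
    "title": "Welcome Visitors",
    "story": [{"type": "paragraph", "id": "a1b2", "text": "Hello."}],
    "journal": [{"type": "create", "date": 1430000000000}],
}

def fork(page, from_site):
    """Copy a page and record where it was forked from."""
    forked = copy.deepcopy(page)
    forked["journal"].append({
        "type": "fork",
        "site": from_site,
        "date": int(time.time() * 1000),
    })
    return forked

my_copy = fork(source_page, "source.example.com")
print(my_copy["journal"][-1]["type"])
# fork
```

Copying one object and appending one journal entry is the whole transfer, which is why it takes seconds rather than minutes.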
And keep in mind you don’t need to have the front-end of your site look like federated wiki. All that matters is you have federated wiki on the backend. Here’s a short video showing how two sites with different web-facing appearance still allow the easy transfer of pages:
You’ll notice that most of the length of this video is explanation. The actual transfer of the three pages runs from 1:45 in the video to 2:30. That’s about 15 seconds a page to copy, complete with images. While the question of why people don’t remix and reuse more is interesting to me from a theoretical standpoint, I think it pales in comparison to this question: what would happen if we dropped reuse time from ten minutes to fifteen seconds?
How is this possible? Mostly, it’s the elegance of federated wiki’s data model.
- Data Files not Databases. Traditional sites store the real representation of a page in a database somewhere, then render it into a display format on demand. The database takes a clean-ish copy and dirties it up with display formatting; you grab that formatting and try to clean it up to put it back in the database. Federated wiki, however, is based on data files, and when you pull a page from one federated-wiki-driven site to another, federated wiki grabs the JSON file, not the rendered HTML.
- JSON not HTML. HTML renders data in a display format. A YouTube video, for example, specifies an IFRAME as a device along with width, height, and other display data. This hurts remixability, because our two sites may handle YouTube videos in different ways (width of player is a persistent problem). JSON feeds the new site the data (play a YouTube video with this ID, etc.) but lets the new site handle the render.
- Images Embedded. This is a simple thing, and the scalability of it has a few problems, but for most cases it’s a brilliant solution. Federated Wiki’s JSON stores images not as a link to an external file, but as JSON data stored in the page. This means when you copy the page you bring the images with it too. If you’ve ever struggled with this problem in another platform you know how huge this is: there’s a reason half the pages from 10 years ago display broken images now – they were never properly copied.
- Plugin Architecture. Federated Wiki’s plugin architecture works much like Alan Kay’s vision of how the web should have worked. The display/interaction engine of federated wiki looks at each JSON “item” and tries to find the appropriate plugin to handle it. Right now these are mainly core plugins, which everyone has, but it’s trivial to build new plugins for things like multiple choice questions, student feedback, and the like. If you copy a site using a new-fangled plugin you don’t have, the page on your site will let you know that, and direct you to where you can download the plugin. Ultimately, this means we can go beyond copying WordPress-style pages and actually fork in tools and assessments with the same ease.
- History follows the Page. As anyone who has reused content created and revised by multiple people knows, attribution is not trivial. It consumes a lot of time, and the process is extremely prone to error. Federated wiki stores the revision history of a page with the page. As such, your edit history is always with you and you don’t need to spend any time maintaining attribution. If the view of history in the federated wiki editor is not sufficient to your needs, you can hit the JSON “Journal” of the page and display contribution history any way you want.
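Putting the pieces above together, here is a sketch of what a page document might look like: typed story items handled by plugins, an image embedded as data rather than a link, and a journal that travels with the page. This is a simplified, hypothetical example, not the exact production schema.

```python
# A simplified sketch of a federated wiki page as one JSON document.
page = {
    "title": "Example Page",
    "story": [
        # Each item carries a type; a plugin renders it on display.
        {"type": "paragraph", "id": "b7e1",
         "text": "Plain text, rendered by the paragraph plugin."},
        # Video stored as data (an ID), not as a pre-rendered IFRAME.
        {"type": "video", "id": "c8f2",
         "text": "YOUTUBE dQw4w9WgXcQ"},
        # Image embedded in the page, so it travels with every fork.
        {"type": "image", "id": "d9a3",
         "url": "data:image/png;base64,iVBORw0KG...",
         "caption": "Travels with the page when forked."},
    ],
    # The journal keeps the page's history, so attribution is free.
    "journal": [
        {"type": "create", "item": {"title": "Example Page"}},
        {"type": "fork", "site": "origin.example.com"},
    ],
}

print([item["type"] for item in page["story"]])
# ['paragraph', 'video', 'image']
```

Because everything a page needs lives in this one document, copying, rendering, and attributing it are all operations on the same object.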
We could probably say more on this, but this should do for now.