Here’s My Problem With

Here’s my problem with Hypothesis, the annotation tool. I think it’s also an opportunity.

A friend shares a list of “100 biggest Clinton Wikileaks Revelations” on Facebook. As expected, it’s just a mess of fuzzy thinking and misinformation, tied weakly to links to emails.

I think — hey, here’s a good opportunity for Hypothesis. Maybe I can put a damper on some of this stuff. Or at least find some solace in the quiet mental work of annotation so that I don’t drink myself to death before November 8th.

So here’s the site’s issue number five:


Ok, so I know ALL this is false, but what catches my eye here is the claim “Bird-dogging is a term coined by high-level Clinton staffers who openly talk about it in the video. They boast about inciting violence at Trump rallies, paying for every protest…”

Wait, what? Bird-dogging is about violence?

I was a bird-dogger for some events in 2008 and as a blogger got to know a bunch of bird-doggers. Clinton didn’t invent the term and it has nothing to do with violence. Bird-dogging is well-known in blogging and activist circles. What the term means has shifted over time, but in the modern age a bird-dogger goes to an event and tries to get the candidate on the record about issues they care about. You go to an event with a camera (or better, with a friend with a camera) and try to get called on. If your issue is Common Core, you ask a question about Common Core. If your issue is loose nukes, you ask a question about loose nukes. You get it on camera, and share it.

Here’s an example of how it works from 2007. I went as a Blue Hampshire blogger to a Huckabee house party, and found a bunch of activists from the AIDS relief group ONE were at the event as well. They were bird-dogging all the candidates on the issue of whether they would support AIDS funding. I caught Huckabee’s answer to that on video, and pushed it up to Huffington Post, creating a minor stir. (It actually made HuffPo’s top news story for a bit, and was their first OffTheBus story to do that — the first time independent blogger-produced news made the front page of the site.)


So when the person in the video is talking about bird-dogging, that’s the context. It has nothing to do with Clinton, or violence, or even protest. Like a dog on a hunt, the job of the bird-dogger is to flush the hidden into the open. If you watch the video thinking otherwise, you’re going to be very misinformed.

OK, so annotation — I go to annotate bird-dogging and link it to resources. But what I find is that there is no good summary of bird-dogging in politics. There are guides on how to bird-dog, mentions of bird-dogging, and definitions that are too vague to be useful. If I jump people into this mess of links, they are just going to get lost trying to piece everything together, which is what makes conspiracy so resilient on the internet in the first place.

So I’m going to have to write significant text here. For this one word. And that’s good, because I’ve discovered a hole in the web that needs filling. But it’s bad because why the heck am I going to write a comment that is only visible from this one page? There are hundreds (maybe thousands) of pages on the internet making use of the fact that there is no clear explanation of this on the web.

My point (I guess) is we have a solution for this. It’s called hypertext. It allows small pieces of information and explanation to be reused in multiple ways through a system of links, so that we can maximize the utility of the research we produce and practice the principle of keystroke preservation, by passing by reference (and letting others use that reference). And if we do that, we’ll start to produce annotations that are more significant than comments in scope and scale and usefulness, ones that start to plug some of the holes in our information environment. If we don’t, all work of note will be stranded as responses to individual documents.


Here’s my note on bird-dogging in Hypothesis. You’ll notice that there are Annotations (tied to words) and Page Notes (tied to documents) at the top. Why not add one more category — Personal Wiki — tied to nothing specific, that I can reuse wherever I see fit? Otherwise it’s going to take a lot of typing to fight this Wikileaks nonsense…

Neoliberalism and Textbooks (I promise this is better than it sounds)

There is a lesser known argument about neoliberalism which sees neoliberalism not as a power grab by the elite, but as a form of statecraft which allowed politicians to distance themselves from hard decisions.

From Matt Stoller’s summary of Greta Krippner’s Capitalizing on Crisis:

The argument popularized by Inside Job filmmaker Charles Ferguson and Roosevelt Institute fellow Jeff Madrick in his book Age of Greed is that financialization occurred because of a nefarious set of players who sought to reorganize society on a social darwinian model. This is the Gordon Gekko narrative, that greed is good. Another argument, broached by Tom Ferguson and Sidney Blumenthal in the 1980s, and then popularized in the 2000s, is that the conservative movement was the result of a group of far-sighted and well-funded businessmen who saw in the 1970s rise of Ralph Nader politics an implacable set of enemies who needed to be defeated in the realm of ideas. But this isn’t entirely fair – one thing I learned from Krippner’s book is that Ralph Nader, among other consumer advocates, supported financial deregulation, specifically the end of Regulation Q, which capped interest on savings accounts.

A simple way to state Krippner’s thesis is as follows. In the 1970s, politicians got tired of fighting over who would get what, and just turned those decisions over to the depoliticized market. This is known as ‘financialization’. Then political leaders didn’t have to say “no” anymore to any constituency group, they could just say “blame the market”. It’s very much akin to the rationalization for inequality one hears from elites these days, that it’s globalization and technology, as if those are just natural trends with no human agency or decision-making involved.

This argument was relatively new to me, but the more I think about it, the more it matches my experience.

Take textbooks. I thought, before I got deep into promoting open textbooks, that most faculty hated publishers. After all, a larger class at a research university is taking $100,000 or so of student money and giving it to the publisher, whereas the average professor is making less than a tenth of that for teaching the class.

Come to find out, however, that a lot of faculty, especially in the larger classes, love publishers. I’ve heard story after story about how Publisher X flew down an expert to personally help the faculty member integrate a product with their class or set up a private videoconference to talk about course restructuring around active learning or the addition of analytics. Very often, faculty are pulled into projects with publishers as partners or consultants and get a small fee, but the big thing publishers have going for them is they act as course consultants, providing experts and support for faculty building the course. The phrase I hear time and time again is “There is simply no way I could teach this course without the help of Publisher X.”

Now compare this to Teaching, Learning and Technology centers (TLTs). Teaching, Learning, and Technology centers do exactly the sort of work that publishers do above, but more generally, and more efficiently. An average TLT might have a budget equal to the amount of student money spent on the top five or ten textbooks used at your university (this is a guesstimate, but only the order of magnitude here matters). The average TLT’s budget, in fact, looks like a rounding error compared to the amount spent on publisher resources. Yet TLTs are famously under continual attack for wasted money, even though they serve far more people per dollar spent than publishers do. The general rule at a TLT is that if you’ve had an eight-year run without getting disbanded or dissolved due to faculty pressure you’ve had a good run, and it might be time to brush up that CV because it can’t possibly last.

So why are TLTs under constant attack, whereas publishers are much loved? Because TLTs are 1970s-style liberal statecraft, based on the idea that we collectively decide what is important for us as an institution, and deliver it as effectively and efficiently as we can. The problem with this is evident: it presents choices as zero-sum, so the money that supports one initiative is being “stolen” from another. The TLT budget could be used for faculty raises or new lines or whatever. So the TLT has to go.

Neoliberalism solves that problem by masking it. The net effect is still the same — we shift some money out of tuition to textbook costs. But now that that money doesn’t run through the institution, it gives the appearance of not being zero-sum. It seems like trade-offs are not being made. No one looks at publisher efforts and says “If it wasn’t for publishers we could afford more history professors or better salaries for adjuncts.” They say, “Wow, that publisher is amazing! So helpful, always there when we need them, so well resourced!” When that consultant is flown in to help you with your class, you don’t think “Wow, how can the institution afford this with the state budget cuts?” Instead, you think “This publisher is indispensable. They care.”

But of course trade-offs are being made — students don’t react to the cost pressures of tuition alone. They react to total cost of attendance, which includes textbook cost. So in one case you take student money (via tuition) and spend it on TLT support, and it’s a never-ending existential battle about whether the TLT has a right to exist; in the other you take student money (via textbook purchase) and spend it on publisher support, and everyone feels relatively fine about that, because it’s just the “market at work”. Occasionally people complain about the cost of those textbooks, but after all what can you do — it’s just the market at work.

I write this not to let the publishers off the hook, but to point out something I didn’t fully realize until now — publishers exist because they solve an internal political conflict. They allow administrators to make sure that students and faculty get some of the support they need while creating the impression that such support happens as a consequence of a market, not political decisions. But of course, political decisions create the carve-out that allows the market to function.

In other words, the textbook market can be seen as a form of institutional politics which allows institutions to distance themselves from the results of their own policies, solving the problem of warring academic constituencies.

Anyway, just a thought I had coming into work today. Curious what people think.

Internet of Broken Things


A Trash Can in kernel panic. By @BenNunney

A couple years back, in 2014, Ward Cunningham wrote a piece on wiki called “Internet of Broken Things”. After dealing with the failure of a home sensor network he wrote:

This is how the internet of things will work. All the things will be interesting. We will think we own them because we will have bought them. But we won’t own all the pieces that give them utility.

The pieces will include some service that promised to provide value unless you read the fine print. Companies will be bought and sold. Databases will accumulate mistakes. Things will stop working. The compounding of complexity will make it in no one’s interest to go fix the thing, even if it is just one line missing.

I’ve been asked why I run wires throughout my house to connect together sensors. Wouldn’t radio be better? Yes, but those sensors (and radios) still need power. I’d rather do without the weak link of anything that needs routine attention, even if just once a year. I need to replace ruby with something that will last.

It’s worth thinking about this on the day the internet ground to a halt due to what appears to be an IoT-based DDoS botnet.

Today, the company that sold you an IoT security cam may still be around, may still put out a patch (maybe).

But what happens two years down the road after buyouts and mergers? What happens when the free-for-life service that connects or manages your scale or mood lighting or runs your boiler is sold to a company that wants to re-monetize the service? Or shut it down without notice?

When it comes to security, where will this sea of abandoned devices get security patches from? Who will write them, and how will they get paid?

Like Ward, I worry that it’s not just an internet of things, but a proprietary mess of interdependent services built on the shifting sands of unstable business models. Unless we develop standards and protocols that reduce that proprietary interdependency we’re eventually going to have a lot bigger problem on our hands than Twitter outages.

Opening Up the National Academies Could Radically Expand OER Impact

The National Academies were incorporated by the government at various times — I think the Sciences (NAS) was incorporated by President Lincoln, and the other academies sometime after that. (I know, a deep history here).

They serve the public interest in a number of ways, but one of the more prominent is that they gather experts to produce reports on various scientific and social issues. These reports used to be sold to libraries, but now can be read on the web for free. In many ways the academies are the best representation of what this country could be — gathering experts to explain serious challenges and intriguing scientific and technological possibilities to the public.

The fascinating thing about these works is their accessibility to general audiences. They are most often written for policy makers, administrators, and people in the field whose background may not be research. Here’s a bit from their report on microbial forensics, for example:


If you look at this carefully, you see it is written like a textbook for a niche area. This is because what the academies are doing is explaining research and disciplinary concerns to people outside the field, which is what textbooks do as well.

Why am I fascinated with this? Because I’m currently looking at how to resource a potential “Z-degree” for a Human Development program. And it is hard because there are not any open textbooks on niche subjects such as Child Maltreatment. In fact, there’s hardly enough open material out there to repurpose.

But when you go to the National Academies site, you find this:


And the text in these books is wonderful. It’s clear, it’s full of examples, and it makes clear connections between research and pressing problems. And there it is, free and online.

It’s 90% of the way to a solution for resourcing a Human Development class on this topic. But unless I can go in there and cut and slice it and throw it into my class framework, it’s difficult to get that last 10% of the way. The 1993 text has great definitions of issues for example, but they need to be separated from some of the older research. The 2014 text is phenomenal, but could benefit from questions for reflection and self-tests of comprehension.

This is pressing partially because opening up these works could help serve these upper-level classes that are being left out of the OER revolution. Open up these reports to modification and redistribution via a Creative Commons license, and you could have a workable Child Maltreatment course text within a couple of weeks, which in turn is going to have a greater impact on public understanding of these issues than anything the academies are currently doing. And of course this holds with issues from genetic engineering to climate change to quantum mechanics. There are over 8,000 books and articles currently available on their website.

Why not do it? Why not open these works — already free to view — up to reuse and modification? I can see very little harm, and a huge potential benefit. And surely we could set this up so that both higher education and the National Academies benefit. I think Old Abe would have liked that, don’t you?

To check out the works available, go to the NAP website.

Slow-Writing with Wikity

A short note about something that occurred to me today, one that will only make sense to people who have been following my Wikity project.

When I first started to play around with Wikity as a PLE (Personal Learning Environment), I would usually follow this pattern:

  • I’d set aside time for a writing session.
  • I’d find material to blockquote and blockquote it.
  • I’d give it a title.
  • I’d write a summary of it at the top.
  • I’d go down and think about what connections I could make to other Wikity cards.

I describe this process in a summer blog post.

I’ve just noticed that as I’ve gotten more comfortable with Wikity I do this less and less in a single sitting.

What I do more and more is this:

  • As the day goes by, I see interesting stuff, can’t be bothered to stop and process it, but I capture it with the Wikity bookmark and title it.
  • In the morning before work, when I’m actually at my sharpest (see First Hours, Best Hours), I get a cup of coffee, find blockquotes in Wikity without abstracts, write the abstracts, and find links.
  • Occasionally, when I want to share my previous work with someone else (as I did with Jim, when he said he was interested in Project Cybersyn), I’ll do a quick tidy and make sure that a cluster of articles is well connected to things discovered since the initial writing. Usually I’ll find a connection or two I missed, or some writing that needs to be cleaned up.


In other words, the foraging, summarizing, and linking parts of the process are often separated by weeks. The surprising thing is how right this flow feels, and how much better it works. By leaving a lot of the blockquoted observations unsummarized and unlinked, I always have interesting work to do in the morning which (most mornings) helps keep me off Twitter. Even more importantly, this spaced exposure helps me remember finds better — it’s a sort of naturally occurring spaced retrieval.

This isn’t earth-shattering, of course, but it was interesting to me. It’s the essence of the Garden approach to media, I think.

New Directions in Open Education

Keynote given at Metropolitan State’s TLTS conference in Denver, CO. 

A Sense of Audience

I’m going to start by telling a story about how I got here. I’ve mentioned this on my blog once or twice, but this is the first time I’ve told this end to end in this way.

I got here because of a student.

In 1995, I was teaching English Composition at Northern Illinois University. I was a grad student. So you’re basically a 25-year-old teaching 18-year-olds.

And the way you taught composition back then was all argumentative essays. So students would read a bunch of pro/con stuff on gun control, abortion, etc., and then they would do some research and write an argumentative essay. And you’d read – in between your graduate classes – 60 or so essays on gun control. So fun stuff. But the program was really prescribed; all the graduate teachers had to follow it.

I can’t remember the student’s name, and his face has gotten a bit fuzzy over the years. He wasn’t that great of a student or a writer, and goofed off a lot during class. At the same time, he’d always try to squeeze concessions out of me. Could he get extra credit for this? If he did this could he skip another class? What if his grandmother died in the final week? Would he still have to do the work?

Protip to students. Don’t ask at the beginning of the term what happens if your grandmother dies in the last week of classes. You’re setting yourself up for a very big coincidence.

One day, he was pestering me for my position on gun control. He wanted to write his essay, but he wanted to know what I thought before taking a position.

“Ha ha,” I said. “Nice try.”

“Just choose a position and defend it,” I said.

He kept asking what I thought. Was I pro or con? Could I give him a hint?

“I really can’t reveal that,” I told him. “It doesn’t really matter what I think anyway.”

He stopped a minute.

“But,” he said, “aren’t you always telling us that to write an argumentative essay you have to know your audience?”

“Yep,” I said. “Absolutely.”

And he said this next thing completely earnestly. He said:

“But aren’t you my audience?”

I had no answer to that. I really didn’t.

I had no answer because he was right. If I was the audience, then asking me what I thought was audience research. It was what you were supposed to do – in fact, what I’d been telling them to do. But I wasn’t *really* the audience, was I?

I tried to shake the conversation but I couldn’t. In graduate school I was writing papers for an audience – the idea was that the paper would be a rough draft of something you might submit to a journal. My students, not so much. Who was this person we were pretending they were writing to?

“Aren’t you my audience?”


The next year my assistantship ran out and I had to hustle to get a new one. I had done a lot of programming before I got into the Literature and Linguistics program, so when I saw one about making the college’s first website I thought, sure, I could do that. It was 1996, so that was still a unique skill.

And so I did it. I built them a site, design, programming, copy-writing, everything.  And they were really happy about it. They really liked the site, and we finished it ahead of time.

And so I capitalized on my success and I said, look, you’ve made a marketing site, but what the real power of the internet is going to be about is teaching and learning. So how about we add something called The Gallery and it will be a collection of teaching projects by faculty and students?

And by then it was 1997, but still almost no one knew anything about the web. So they just assumed, OK, I suppose this is what you do. And we made it.


I miss the 1990s sometimes. People had no expectations, and so they thought bigger.

So, the “viewer” here was actually an animation viewer that allowed faculty and students to create educational animations that you could view over modems as slow as 14.4 kbps, because it transmitted vector graphics. It was called “FutureSplash” but was later bought by Macromedia and rebranded as Flash.

The first project we did, which never got much lift at all, was an attempt to get faculty to work with us to use FutureSplash to make explanatory animations. Back before Flash was a way to market the new season of The Good Wife to you, it was actually a really cool technology. But for various reasons we just couldn’t get folks to wrap their heads around the idea. It required a different way of thinking about presenting, and really was just too technically difficult to pull off.

The second project we did was something called the Persona Project.

The Persona Project came out of the question that that student had asked me “Aren’t you my audience?” As I worked on the college website I realized that this was potentially the answer to that question: Students could write publishable work, put it up on the Internet. And the world would be their audience.

Wiki was being invented at this time by Ward Cunningham in Portland, but most people wouldn’t know about the technology for another three or four years. We certainly didn’t. So we put together something relatively traditional compared to wiki.


The way it worked was this. If you were a student in an English Composition class, you could write (initially for extra credit, which is the duct tape of pedagogy) a short biography of a person you were interested in, and submit it to your teacher. If it was good, they “sponsored” it, and it went up on the web. And if it wasn’t good, they told the student what they needed to do to make it good. And the key here was they would be able to tell the student that in a real context of an audience.

In my mind it was finally a solution to the coach/referee problem of teaching. In teaching, you’re both the coach and the referee when it comes to judging student work. This mitigated that, right?

And it worked! Here’s a snapshot of a student article:


I left NIU shortly after this, and for various predictable reasons the project fell apart. My next side project was something called Transcript Media. This was a project where my wife and I would go find public domain encyclopedias, stuff from the early 1920s and earlier. Because the thing I realized when we were doing the Persona Project was that students had no access to materials that they could use for such things. That Marie Antoinette article? At the time there was no copyright-free picture of Marie Antoinette. So what could you do?

And my wife – she was an art teacher. How could she and her students create open projects with real audiences without some photographs?


I love the marketing language here. I was passionate! I was going to change the system from the outside! (Note: I hacked the logo and preview photo here into the Wayback capture to reconstruct the page. Back then Wayback did not capture larger images.)

So we’d get these encyclopedias on these relatively fun Sunday jaunts through Seattle, and we’d slice them down the spine (sorry book lovers) and then we’d run any interesting media – charts, maps, images that we found through the sheetfed scanner, and put them up on the site “for educational use only” because there was no Creative Commons back then.

But all around, people were coming to the same realization, that these barriers were preventing a better sort of education from happening. The same time Nicole and I were dismembering books for educational use, David Wiley was realizing that what education needed was a license along the lines of the Free Software movement. In 1998, he published the Open Content License followed by the Open Publication License, which looked somewhat like the Creative Commons licenses that would replace it in 2002, right down to the no derivatives option.

In 1999, Rice put together Connexions, a radical website that allowed users to build and “fork” educational content – to make derivatives of other people’s work where attribution and versioning could be tracked. If you want to read about how mindblowing that was at the time, check out Chris Kelty’s book “Two Bits” which details how that project came together.

Elsewhere on the web people were building stuff. Ward Cunningham was off in Portland, having launched wiki, which when combined with open licensing would produce, among many other things, Wikipedia.

I apologize for hopping around here, this isn’t meant to be a history. There were so many people back then doing so many crazy cool things. If you were around back then, you have stories like this too.

The point is from the very beginning we had these two related things:

  • Open Pedagogy, like the Persona Project, thins the walls of the classroom, gives students control over their own learning environment, and uses the internet to put students into real, authentic contexts.
  • Open Educational Resources, like the Transcript Media project, reduce the cost of education, but more importantly, they make Open Pedagogy possible.

The two things are unalterably intertwined. But strangely, for a while, many forgot that.

This presentation is named New Directions in Open Education. If there’s a theme to it, it is this: the future of open education is in getting to a holistic vision which finds the right synergies between these two elements: open pedagogy and open resources.  And ultimately that balance has to be in the service of the whole student.

The Human Core of Open

When we talk about OER as “freely modifiable materials”, one question we might ask is “Why do materials need to be modified?”

This is a serious question. I don’t have a different physics from you. When we learn the fundamentals of physics, they are the same for both of us. If we both jump out of a plane, we both accelerate at the same rate.

And our competencies are more alike than different. Our brains are wired, in broad terms, about the same. If I can hold seven random digits in my head, then you can hold six or eight, but not twenty. If, as a newbie to physics, envisioning a 4D space is hard for me, it’s probably hard for you as a newbie too.

So, if you take this very clinical view of what a human is, there’s no reason that there shouldn’t be One Best Physics Class, that could serve All the Humans.

And honestly, people have been trying to build that one best class, or textbook, or program for a hundred years. And they keep failing, because it doesn’t exist.

Why? Because we have non-academic needs wrapped up with the academic ones.  And those needs turn out to vary.


Take belonging. A sense of belonging is quickly becoming one of the predictors folks in retention and success programs are using to spot students who will have more trouble than most. More than mindset, more, in some cases, than socioeconomic status, a sense of belonging is going to predict how well you do in college.

Don’t believe me? Here’s a chart showing the difference in the performance of students who took part in a series of exercises designed to increase the sense of belonging at college against those who didn’t:


Data from article in Science. Chart  reformatted by Mindset Scholars Network.

Take a close look here — the axis on this graph starts at zero. For this particular belonging exercise you are watching a racial gap shrink by two-thirds. That’s serious stuff.

Now – this stuff is preliminary. And maybe we won’t see effect sizes like that after we account for a number of other things. There are some issues of reverse causality and common cause that come into play. But the truth is you don’t see an impact that big without something going on. And outside of the data, this matches with our common experience.

Why is this? When students feel they belong in a class, students see challenges as surmountable. When they feel that the class or discipline isn’t “for people like them” they tend to believe that challenges are a result of a bad fit between them and the class, or institution, or whatever.

Good teachers know this. My wife, as I mentioned, is an art teacher. And when she selects art for the kids to look at, she tries to make it representative of that class. About 25% of the kids in that class are minorities – she makes sure that she isn’t showing all white, western painters. Half the students are female – she makes sure that she’s not showing just men.

There’s a large Mexican population in her school, and so she makes sure that she pulls Mexican artists and artistic traditions into her class.

Why do this? Because the first thing that students have to get is not the Ten Greatest Artists of all time on some canonical list. No one is going to be harmed for life if she doesn’t cover Rubens. But if that kid walks out of the class thinking “people like me aren’t artists”, they cannot and will not succeed. You’ve done real harm.

This holds for the sciences as well, frankly. I’m not a fan of the “let’s throw in some pop culture references and it’ll make elasticity fun!” school of personalization. But if you’re from a given culture and every example of elasticity is drawn from polo, you might start thinking “I’m not really quite sure this book was written for me.”

I’ve just dealt with issues of gender, race, and ethnicity here. But it doesn’t stop there. You’re an adult learner among college kids, reading examples about dorm life. This really happens, by the way. You’ll have this hip textbook that uses frisbee on the quad examples to explain calculus, and you’re a 42 year-old with three kids taking night classes. I want you to try and feel how those examples feel to a person who is already not sure of their place.

It doesn’t have to be about social status either. Maybe you’re a biology major in a physics class, and the examples all come out of engineering.

The key here is that your materials and your course have to think carefully about this question of belonging. And how you create that belonging, and what sort of belonging you have to create, will vary from school to school, class to class, maybe even year to year. The idea that there can be One Best Class to Rule Them All is just not supported by what we know about this.


Relevance is also huge. Students want to do work that *means* something. Scratch that: we ALL want to do work that means something. When things are meaningful to us, we work harder at it, we persevere. We find thinking easier, and more enjoyable.

That question of the initial student — “aren’t you my audience?” That was a question about relevance.

The thing about relevance is that, almost by definition, it has to vary. I taught a quantitative reasoning class for a few years. Statistical literacy, right? And what we found is that you can’t really *just* teach a class on statistical literacy. You need topics, and they have to be topics the students find relevant. And when you set up projects, the students work hardest and best if they believe their work will have impact – even small impact.

What that opportunity for impact is going to be will shift a lot. Maybe it's local – trying to change the policy of your university through statistical argument. Maybe it's more distant – compiling and publishing stats on gender bias in Wikipedia, or just showing off the animated GIFs you made to capture the essence of a Chinese-language film. But again, almost by definition, it's something that you – the faculty or instructional designer – and the students have to provide. You can't weld relevance into a class and print a million copies. That's not how relevance works. Materials promote relevance, on the whole, by making space for it.

Diversity of Experience

Finally there’s this diversity of experience issue. Some people call this “relevance” as well, but I think it’s worth treating separately.

John Dewey knew this and talked about it. No class can be the same, he realized, because different children come in with different experiences and knowledge and skills. And since learning is really the process of connecting new experience and knowledge with old, your students' unique perspectives and experiences are part of the raw material from which you make the class.

I'll take another example from my wife's art class. She has to teach things like line, shape, composition, right? Well, in some classes there are a bunch of kids who are obsessed with anime and manga. So any teacher realizes that a good explanation will connect these particular students' knowledge of (and interest in) anime and manga to the explanation of these concepts. But of course the same explanation that works there will fall flat somewhere else.

One of the fascinating things, by the way, is that so many people believe in learning styles ("My brain is different! I need kinesthetic learning, stat!"), a belief largely unsupported by evidence. Yet these fundamental facts – that what we already know is different, that our life experience is different, that what we find relevant is different, that our personal identification may be fragile – these things are undoubtedly true, and yet edtech ignores them, concentrating instead on sci-fi garbage like software that's going to say "I find you study best between 2:39 and 3:14 using auditory methods."

That’s not where the differences lie. The differences are in what we bring to the class.

I guess by now you’re wondering where the “open” is in this talk. I’ve spent some time getting to it. But here it is:

  • Addressing belonging requires the ability to modify and customize materials.
  • Addressing relevance requires space for students to contribute, to publish, to remix.
  • Addressing diversity of student strengths and knowledge requires an ecosystem of many explanations, not just the “textbook” explanation.

These are all, broadly, issues of openness. If you look carefully, you can see the 5Rs in there: the rights to retain, reuse, revise, remix, and redistribute. You can see the impetus for projects like the Persona Project, and the way the internet can make these things possible.

And if we really want to talk about the human core of open, this is where you’ll find it. In issues of relevance, and belonging, and diversity.

Classroom Exhaust

Open hasn’t always done well on these issues, however, at least in an institutional setting.

Take one of the more amazing moments in Open Education history. In 2001 MIT announced that it would be making almost all the materials from all its courses free on its site.  Here’s a list of some lecture slides from the Age of Pericles.


It was an incredible moment for open, accessible learning. I'm sure you all know the story, so I won't bore you with it. But with that move, MIT almost single-handedly legitimized for the public the idea that materials for learning could and should be free on the Internet. UNESCO followed with its initial Paris report of 2002. Philanthropic organizations began to see a way to achieve an outsized impact on education by seeding MIT-like projects. The history of open education begins long before MIT, UNESCO, and Hewlett funding, but nobody contests the impact these moves had on the acceptability of what we would come to call (following Hewlett's lead) OER.

Now I worked for MIT for a bit, as the director of community outreach, back when it ran the OpenCourseWare Consortium. I was and am a big supporter of OpenCourseWare. I promoted it, worldwide, in concert with the hundreds of institutions that adopted an OpenCourseWare model. So please understand what I say next as a gentle criticism looking backward: OpenCourseWare proved very difficult to reuse institutionally, and a lot of it came down to misunderstanding this human core of open.

Why? Because there was a lot of focus on what David Wiley has called “Classroom Exhaust”. A class exists somewhere – at MIT or Yale or Stanford – and the materials produced by that class are released to the public as a byproduct of the existence of the class.

This isn't a bad model. It is in fact the model that built a lot of open software projects. And for certain purposes it worked quite well. There are thousands of autodidacts in the world who have taught themselves physics or calculus or economics from MIT OCW or Open Yale Courses. Outside the U.S. there were a lot of applications too.

The courses were also hugely influential in guiding the design of other courses. I could go and look at the syllabus, the slides, the videos, and get a sense of things I might want to cover in my own class, or swipe a handy explanation.

But it didn’t really work for institutional reuse.

I found this out while I was working for the OCWC, but it really came home when I went back to a State College and tried to push these materials to faculty to use in their own class.

There was a course I had watched on Open Yale Courses that I loved – a set of lectures by Kelly Brownell, talking about the psychology, biology, and politics of food. And there was a quantitative skills class that had the politics and biology of food as a theme. So the instructor and I thought why not take these videos and use them to flip some of the class?

How could you go wrong, right?

Except it did go wrong. The students were almost resentful about the videos.

Why? We tried to drill down on it, and we found a couple of things:

  • For one, the videos were at Yale. Here they are at a third-tier state college, and they're watching videos from a class that's clearly, in their eyes, above their heads. How are they expected to get this stuff that's meant for Yale students?
  • Another problem: there were all these little asides in the videos about advising periods, course pickups, study sessions, extra help, and office hours. Announcements about research opportunities. Now they knew of course that these were not for them – no one rushed out to go find Yale’s quantitative skills center or anything, or tried to apply for the research opportunity announced. But each one of these announcements was another chip away at their sense of belonging.
  • There was also this feeling that these were the warmed-over leftovers of another class. We thought about this in comparison to the Khan Academy videos on various subjects. People might not like those videos, but they don't feel alienated by them. And yet here were students feeling somewhat alienated. We thought about just the frame of this class: there are some students in the class – the quote-unquote real class – and they get to ask questions. You, on the other hand, get to watch a carbon copy of that experience.

The students kept asking the teacher – why can’t you just do the videos?

So what we have here is a failure to promote a sense of belonging to those outside the actual class.

Now it’s worth noting that when I watched the videos myself, I felt none of these things. I thought the videos were a blast, and thought it was kind of fun to get a slice of life at Yale.

But this matches what we’re learning from the recent research on belonging. People with a sense that they do belong and high confidence they can meet course expectations don’t even notice this stuff half the time.

People without that sense, people nervous about whether they measure up, whether “people like them” truly belong? Every little thing is a sign to them.

So with the Open Yale Courses materials, for example, we had the right to modify the course videos, but in practical terms, the way they were produced made it almost impossible to strip out all those belonging issues.

It’s so funny, because if these materials had been released as (for example) a series of radio interviews with “Kelly Brownell, author”, I doubt we would have had any of these issues. People thought that the Yale brand was the big draw, and I’m sure it was for some. But for people like us it was more a barrier than a benefit.

From Classroom Exhaust to Customizable Copies

People began to realize these problems in the mid-aughts, and a lot of money began to flow into materials that were less alienating to current students. Some people think textbooks are the most boring thing on earth, but in many ways the shift in focus from Classroom Exhaust to Open Textbooks was a huge leap forward. These materials addressed the student directly and could be more easily modified.

Where it gets more interesting is the shift we've seen recently to Open Course Frameworks. These frameworks are sequenced content and assessment, but importantly the content is more modular and more easily editable.

So for example, the Open Textbook Network is now giving faculty a PressBooks sandbox onto which they can load their course materials and easily edit and customize them. In fact, they are working with Hugh McGuire, the founder of PressBooks, on a $500,000 grant to create a workflow for OER called the Rebus Project.

Lumen Learning has a platform called Candela, also WordPress-based, which embeds in Blackboard or Canvas or another LMS via a standard called LTI, and lets faculty directly edit a hosted web copy of their textbook.

So every page has an edit button on it, and as you go through and read it from the perspective of your students and realize those small tweaks that really matter, you are always one click away from editing it and updating it.
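For the curious, that "something called LTI" is worth a peek under the hood. In its widely deployed 1.1 version, an LTI launch is essentially a form POST from the LMS to the tool, signed with OAuth 1.0a HMAC-SHA1 so the tool can trust who is arriving. Here is a minimal sketch of the signing step; the URL, keys, and resource ids are all hypothetical, and a real integration would use a maintained LTI library rather than hand-rolled signing:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def _enc(s):
    # RFC 3986 percent-encoding as OAuth 1.0 requires (only A-Za-z0-9-._~ unescaped)
    return quote(str(s), safe="~")


def sign_lti_launch(url, params, consumer_secret):
    """Return the launch params plus an oauth_signature for an LTI 1.1 POST."""
    # Sort the percent-encoded parameters and build the signature base string
    pairs = sorted((_enc(k), _enc(v)) for k, v in params.items())
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    base_string = "&".join(["POST", _enc(url), _enc(param_string)])
    # An LTI launch has no token secret, so the key ends with a bare "&"
    key = _enc(consumer_secret) + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    signed = dict(params)
    signed["oauth_signature"] = base64.b64encode(digest).decode()
    return signed


# Hypothetical launch from an LMS to a hosted textbook page
launch = sign_lti_launch(
    "https://textbook.example.edu/lti/launch",
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "econ-101-ch3",   # hypothetical id
        "oauth_consumer_key": "demo-key",
        "oauth_nonce": "abc123",
        "oauth_timestamp": "1700000000",
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_version": "1.0",
    },
    "demo-secret",
)
```

The LMS renders those signed parameters as hidden form fields and auto-submits them to the tool, which recomputes the signature with the shared secret to verify the request before showing the page.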

This idea seems simple, but it was surprisingly counterintuitive for a long while. People wanted to use websites instead of books for textbooks, and the common practice with websites is that you have a single website that you link to from your LMS or wherever. Are you really going to have your own copy of this website?


Well yeah, you are, because it matters. And it matters for the reasons we’ve been talking about. As an example of this the slide you’re looking at is from Lumen’s Student Success offering. Consider what a printed, centralized student success handbook might look like.

Now think of what a student success textbook would be like if it could reference things like the specific resources for your students on campus. Think of the difference in impact if you replace the photos in it with photos of your campus. If you go through it and make sure the pictures reflect the diversity of your student base. If you make it welcoming.

This is part of the future, that we’re going to get increasingly comfortable with what we call in the world of programming “forking” — breaking off from the standard version of things to customize something for ourselves, or add needed improvements that the rest of the users may not agree with.

One of the tensions in that is we want the freedom to change things but also the benefits of central updates. The good thing is that we have a decade of experience of managing this balance in domains like software engineering, and there are ways to do that – ways to let you customize materials while making sure the material you haven’t touched continues to get updated.
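That balance, customizing what you want while still receiving updates for what you didn't touch, is essentially the three-way merge that version control systems have used for decades. Here is a toy sketch of the idea at the granularity of textbook sections; all the names are hypothetical, and it assumes sections are added or edited rather than deleted:

```python
def merge_sections(base, local, upstream):
    """Three-way merge of a textbook, keyed by section id.

    Sections you customized (local differs from base) keep your version;
    sections you left alone follow the upstream update; sections edited
    on both sides are flagged for a human to resolve.
    """
    merged, conflicts = {}, []
    for sec in base.keys() | local.keys() | upstream.keys():
        b = base.get(sec)
        l = local.get(sec, b)      # absent locally: treat as unchanged
        u = upstream.get(sec, b)   # absent upstream: treat as unchanged
        if l == b:                 # untouched locally: take upstream
            merged[sec] = u
        elif u == b:               # unchanged upstream: keep customization
            merged[sec] = l
        elif l == u:               # both sides made the same change
            merged[sec] = l
        else:                      # edited on both sides: conflict
            merged[sec] = l
            conflicts.append(sec)
    return merged, conflicts
```

The point of the sketch is just that "keep my edits, but keep updating the rest" is a solved problem once materials are modular enough to merge.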

As we see more and better platforms to do this, my sense is we’re finally going to unlock the potential of customizing these works to the specific needs of your class.

From MOOCS to Loosely-Coupled Classrooms

So remember MOOCs? Good times, right?

I’ll just let that sit there for a moment.

I’m not disrespecting the earlier connectivist MOOCs, of course, and I don’t mean to impugn all of what have been termed “xMOOCs” — the second wave of MOOCs that grabbed the spotlight around 2012.

But the vast majority of the high publicity xMOOCs were, as many people pointed out at the time, glorified discussion boards with some multiple choice slides attached.

They were going to revolutionize education because they were open to everyone. But they didn’t.

They didn’t partially because the learning design of the top ones was often abysmal, indistinguishable from early 1990s computer-based training.

And they didn't because their dream of finding heretofore unknown patterns in the big data of the sections turned out to be a fool's errand. If you have a multiple-choice infrastructure, how much difference is big data going to make to learning? Your options for reacting to the data are locked down before you start, making the data largely irrelevant.

The thing about xMOOCs that I found frustrating, though, was there was so much potential there. It just wasn’t tapped because of some unfortunate assumptions.

xMOOCs and Institutional Reuse

For a while, Amy Collier – then at Stanford, now at Middlebury – and I were looking at the phenomenon of blended courses at other institutions that used MOOCs as a core digital resource for the course. We called this the "Distributed Flip" and thought it had potential – imagine the power of a course that could tap into the collaborative power of classrooms across the country, right?

But when we looked at how those students in other blended classrooms engaged with the MOOC we found something depressing. Here’s a visualization of what your average student who was *not* in a class would do.


It looks a bit overwhelming, but I'm going to ask you to zoom in on just one thing here. This represents the actions of one user who took the MOOC but was not in a blended course. Probably an autodidact, or self-learner, taking this on their own time. The width of those black lines represents actions: those blue squares, for example, are about five actions wide.

The blue sections are them working through the modules, and the yellow is them participating in forums. And for the self learners it looks pretty good. They do the exercises, hang out in the forums, and they do alright.

This self-teaching student is really typical. She had 56 sessions in the forums and visited them about 70% of the times she came to the site.

Then we looked at the students who were taking a course and using the MOOC. So, for example, there was a course in Puerto Rico, taught at a university, that used this same MOOC. All the students followed along with the MOOC and used the teacher in a flipped classroom model.

And you know what the median number of forum visits for those students was?

One. One visit.

Now maybe they just didn’t need the forums. But it occurred to us that maybe it was also because these forums didn’t make use of what these different classes could bring to the process.

And when we looked at other, non-MOOC models of doing massive courses, we saw that these models made better use of what the students brought to the table, and hence engaged them more.

For example, what we called “loosely-coupled courses” — courses that were connected not in this lockstep we-read-everything-on-the-same-day way, but through mutual meaningful activities – these loosely-coupled courses did a lot better at engaging connected classes.

The Assignment Bank as a Loosely-Coupled Connection

The classic loosely-coupled course, for me, is one called ds106, out of the University of Mary Washington.

There’s a lot of innovations in the course, but for me the core innovation – the one that has broad potential for application outside of the digital storytelling environment — was the assignment bank.


Screenshot of Ravelry, an inspiration for the ds106 Assignment Bank.

There’s an interesting story about where some of the social features of the assignment bank come from. I’m probably going to get this story a bit wrong, but here goes. There’s a site called Ravelry. It’s a knitting/crocheting community and you can go look at patterns for knitting. You get the pattern, and all the stuff you need to complete it.

Fine. A lot of sites do that. But here’s the fascinating bit. If you hit that third tab in, the projects tab, you get to see pictures people have uploaded of their finished projects based on it. So you get to see other people’s work.


Projects tab of Ravelry, which shows a variety of executions of a common pattern.

And if you look at these hats, what do you notice? Everyone brings their own take, their own style, their own “me” to the project. And so the pattern is only a piece of the process, right? Because you go look at the pattern, but you also get inspired by other people in the community as well.

And so the story I’ve heard from Jim Groom, Alan Levine, and others goes that Tom Woodward proposed the idea of an assignment bank for ds106, and UMWer Martha Burtis volunteered to implement it, doing so in the crazy turnaround time of two weeks.

Martha is apparently a crocheter, and in implementing it, she was inspired by the Ravelry model and wondered why the assignment bank couldn’t be like Ravelry – you execute the pattern and post the results for others to see. And the crucial bit is that the pieces posted become part of the learning and inspiration of others.


And so they made the assignment bank – interestingly, as a WordPress theme. And so you can go here, and choose an assignment on digital storytelling.


This is the bottom of the page of an assignment called Splash of Color. The idea here is to add color to a black and white photo in a way that makes an impact.

Great. But the experience is enhanced, because when you go in there you see the other examples and other tutorials on it contributed by the community. These are people from all around the country, but they are also from other classes that use the assignment bank as part of their own open courses taught elsewhere.

And they all bring a different element, a different take to the assignment. And it makes it stronger.

Here’s one of the completions of this assignment:


Photo Credit: Brianna McClain

Here’s another:


Photo Credit: Annie Melzer

I’ve chosen a simpler ds106 assignment here because I don’t want to get distracted by something too complex. We’re just thinking about a person who is learning to use Photoshop to create a certain mood using color.

But even here, in the simplest of examples, you can see how the community brings new perspectives, which broaden the understanding of the class as a whole. The general thrust of the assignment is that a splash of color draws attention to a detail, but as students post their work, you see that play out in different ways. In the lighthouse one, a student sees warmth and safety. In this rainbow, it's a sort of hope. In another, it's more of a way to focus on a detail of the photo.

I saw this and I have to say it blew me away. And the key, again, is that this isn't only open work; it's built around this human core of open. The students' work is relevant. They have a real audience. And when you do instructional design like this, the diversity of your students becomes a strength. Each new perspective deepens the understanding of the students.

This isn’t your typical xMOOC. Here, the students are truly co-creating the course environment. And it’s done in a way that you don’t have to buy into the whole experience. You can teach your class in the way that you want, but have the students do these assignments with a global community. That’s the “loosely-coupled” bit – if your class opts into it then you’re connected at these points of activity, but you’re able to teach around that in the way that you like.

Anth 101 and the New Open Pedagogy

Now Amy and I said that we thought this could be a model going forward for classes that went beyond making things. I actually proposed a class on water policy and water science where students would do different assignments around documenting their communities' particular water issues in the same way as ds106 (I know, riveting). It was called Water106, but it never got off the ground. The people who were interested locally just didn't have a class that was a good fit, and the people who had a class that fit weren't really interested.


Screenshot of Water106 entry screen. Although it was mostly a mockup to explain to faculty what could be done, the Assignment Bank and Waterfeed were actually functional elements.

But I think we’re about to see this model on an amazing scale. Because the 2017 class I’m almost deliriously excited about is Michael Wesch and Ryan Klataske’s Anth101 course. I honestly just learned about this, and I’m so excited to share it with you.

There are two big innovations here. First, the course has been completely structured around big ideas. Each week is a big idea of anthropology. People are different. We hate others for a variety of reasons. Or here's Week 4:


And then if you scroll down there are publicly accessible readings, and videos, and other media for the students to work through.

Then, after you've done the reading, you do the assignment. And the assignment is in some ways a learning experience and in some ways it's anthropological fieldwork. So here's what you do after you read about how our daily routines and rituals encode basic assumptions. You go and do the challenge.


So here’s the assignment.

  1. Do at least one hour of participant observation in a place, event, activity, or situation that makes you uncomfortable.
  2. Find the assumption, bias, or shortcoming in yourself that makes it difficult to enjoy or get comfortable in this situation.
  3. Try to overcome the assumption, bias, or shortcoming by participating, and then write about it using "thick description" – an exquisitely detailed description.

And if you scroll down, you see other students’ “thick description”.


One student goes to a party, when they never do. Another student returns to a Catholic church after leaving the faith. And this is my favorite: in a third, a male student goes to a generally female wine-tasting weekend. They write these up in a legitimate anthropological style called "thick description." In each case they try to analyze what underlying assumptions make them uncomfortable.

Importantly, as more students use the site, the diversity of all the classes and individuals connected to it becomes a strength. It's inspiring. A number of other students saw the Church assignment and decided to do that one – and now you have a cluster of people analyzing their own underlying assumptions about organized religion, each in a different way, each building off the others' work. One student, a devout evangelical Christian, even attends an LDS service and finds it's not nearly as different as he thought.

Wesch’s vision is that students all around the world from many different backgrounds will post their experiences and challenge results here, and we’ll all learn from one another a bit more about what it is to be human.

Again, this is what I think about, when I think of this human core of open:

  • We are encouraged to modify materials to create a sense of local belonging
  • We use the power of the open internet to create work that is relevant and impactful, with a real audience
  • We see the diversity of our students not as a challenge to be solved, but as potential to be tapped

And maybe we learn a bit about others along the way.

From One Best Book to Choral Explanations

Now something like Mike and Ryan's work may seem massive. But I think it's less hard to put this sort of thing together than we assume, and I'm actually exploring with some of my faculty right now whether we could do something like this in Neurobiology. Maybe it will come together, maybe it won't.

The thing is, as big as Anth101 seems, my guess is everyone in this room knows at least one faculty member who could pull that off. So show them the site and ask if they’d like to do something like this. What’s the worst they could say?

But I’m also interested in more incremental solutions as well, things that could be integrated into an existing class in less than a day, but which get to these same concerns.

One of the things I’ve been working on for a while now is an idea of Choral Explanations.

Choral Explanations have their roots in a project I worked on for a couple years called Federated Wiki, but the easiest way to understand them is to go to one of the newer question and answer sites, such as Quora.

The way these newer sites work is this. They are sort of like a wiki and sort of like a forum. A user asks a question, but they are not really starting a discussion: the question asked is like the title of a wiki page, and everybody puts their version of the answer below it, trying to make the best explanation.

So if you ask a question like “How is a rainbow formed?” You’ll get 23 different answers. I’ve pulled some of those answers together for you here:


So normally you don’t look at it like this – you scroll down the page, like a forum. But I collected some and pasted them side by side for the purposes of this slide.

Now here’s what I want to ask:


“Why are there so many posts about rainbows?”

That was a long setup. I hope some of you appreciated it.

No, but seriously: from the traditional "open" view, this is all wasted effort. What we'd say from a traditional wiki perspective, for instance, is that we need to get all these people to work together and come up with the single best, for-all-time explanation of why rainbows happen. Twenty-two explanations – that's chaos!

Except it isn't. Yes, posts are upvoted, and I read one of the better explanations first, from a neuroscience student, who draws in part on his background in understanding optical phenomena. You'll notice the neuroscience one on the far left is one of the few that really tries to place the position of the observer in the diagram, just as you'd expect a neuroscientist would.

And if you wanted you could stop there, kinda-sorta understanding the explanation. But if you scroll down and read additional ones, two interesting things happen. First, you're testing your understanding. If anything in a new one doesn't make sense, you've misunderstood something.

But you're also getting new information. Our neuroscientist explained a lot about the prismatic effects of raindrops, but didn't actually say much about how that prismatic effect works, whereas Mr. Physics down-page goes into great detail about that. Further down the page, someone reminds me that as you drive towards a rainbow it will disappear, because it's all tied to your position, which connects to what the neuroscience guy was saying.

So now I have an example of a common driving experience that ties to the first explanation and reinforces it. All this time when a rainbow disappeared on a drive my natural assumption was to think — well the rainbow has disappeared. But of course it hadn’t. I’d just moved out of the position from which the optical effect was being viewed.

We had another rainbow up here last week, as I was driving home. And as it disappeared for me, I realized the people just a bit behind me would still be seeing it. And that reinforced for me that I had really been misunderstanding rainbows all my life. I wouldn't have gotten that from just the first explanation, and I wouldn't have understood the significance with just the driving example further down the page. Understanding comes out of trying to mesh together, in your own mind, the multiple explanations.

In other words, the multiple explanations help you triangulate your way to a deeper understanding as well as identify any misunderstandings. And they do that better than any single explanation can.

Choral Explanations

So I'm obsessed with this approach to things, which I call "choral explanations," based on the idea of a chorus: somehow all these multiple passes at a subject form a greater harmony than any one explanation can alone. They resonate with one another.

And where I think this could revolutionize education is in student-created OER. Imagine an online open textbook, but embedded in the textbook is a little box that says "Insights and Perspectives: community-contributed explanations and examples."


And what the students would do is read something on solute potential or mitosis, and if they are confused they can click in and get the chorus of other students, and maybe teachers, providing dozens of explanations and examples of these things.


And when they understand it, they can contribute back to it, write their own explanation.

For one student, maybe mitosis is important because it explains why chemotherapy (which targets mitotic cells) affects the hair so much. For another it explains why stem cell research is needed for Parkinson’s disease (nerve cells don’t divide, and so can’t regenerate). For a third it explains why Human Growth Hormone is necessarily going to have a downside.
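One way to picture the mechanics of such a box: each concept accumulates a list of community-contributed explanations that can be upvoted, with the best-supported ones surfacing first. A minimal sketch follows; all the names are hypothetical and this is not any publisher's actual API:

```python
from dataclasses import dataclass, field


@dataclass
class Explanation:
    author: str
    text: str
    votes: int = 0


@dataclass
class ChoralBank:
    """Community-contributed explanations, keyed to a textbook concept."""
    explanations: dict = field(default_factory=dict)  # concept -> [Explanation]

    def contribute(self, concept, author, text):
        # A student who finally "gets it" writes their own explanation back
        self.explanations.setdefault(concept, []).append(Explanation(author, text))

    def upvote(self, concept, index):
        self.explanations[concept][index].votes += 1

    def chorus(self, concept):
        """All explanations for a concept, best-voted first."""
        return sorted(self.explanations.get(concept, []),
                      key=lambda e: e.votes, reverse=True)
```

The design point is that the container is tiny; the value is in the chorus of perspectives the students pour into it.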

This isn't actually hard to do, building these spaces for the chorus into textbooks. And I don't want to give too many details, but I'm working with an OER publisher to see if we can get this feature into their works. We're shooting to have a small live demo of this by Open Ed.

Keeping the Human Core of Open

We looked at a few trends here. We’re moving

  • From Classroom Exhaust to Customizable Copies
  • From MOOCS to Loosely-Coupled Classrooms
  • From One Best Book to Choral Explanations

These are some of the big opportunities.

And there are so many other things I could talk about (and we can talk about if you want). Annotation, I think, is coming into its own in education, and has some great implications for Open Pedagogy. The Wiki Education Foundation has been working tirelessly to make Wikipedia a more friendly place for students and teachers, and has rolled out some interesting tools for Wikipedia-based courses. Reclaim Hosting's Domain of One's Own project is growing at an astounding rate. Lumen Learning's Waymaker platform has brought student-centered personalization to OER. I could go on and on.

But these things are exciting to me and I hope exciting to you because they treat our students as real people. They allow us to personalize to build a sense of local belonging. They treat our student diversity of talent as a strength, not an obstacle. They allow us to give students work that will have real audiences and real-world impact.

We have to be able to say back to that student who asks “Are you my audience?” that no, we’re not. But we can help them find their audience, and it will change their life.

I was lucky to be invited to speak here today. And I was lucky to have you all as an audience. It may not always be evident, but talking about these things to you pushed me to clarify, refine, and expand my ideas. It pushed me, honestly, to spend far more hours writing this than I would have ever dreamed of doing, making this as concise and clear as possible. I came out of that, I think, a little bit smarter and a bit more aware.

And so having an audience, a real audience, is a gift. So is having a sense of belonging, and a sense that your uniqueness matters. I happen to think these are gifts that Open Education is in a unique position to bestow on our students. We just have to hold on to that human core of open, and not get distracted by the shiny things of edtech. We have to latch on to that vision and carry it forward. And that’s what I hope we will do.

Thank you.

Storage-Neutral Apps and Web Applications Are Not That Hard

Bit of discussion on Twitter today about whether the decentralized web is a pipe dream or a near-term possibility. My response to that is longer than a tweet, so I put it here.

Many things about the decentralized web are hard. IPFS, the torrent-like file system that makes servers irrelevant, is pretty geeky right now. Federated Wiki requires some initial guidance.

What can happen right now, however, is that the data layer and the application layer of apps and web sites can be separated. This is the idea behind the Berners-Lee project SOLID, and it’s the idea I outlined a couple years ago.

This is a simple idea. When I used to buy MS Word, I paid money to Microsoft for it, but then I pointed it at my own hard drive. Corporations still did well, but the difference was that I maintained control and portability of my data. Microsoft did not own my documents. I didn’t have to store them on a Microsoft drive.

I liked that world.

A few days ago, Jim Groom was prodding me as to why I still host this blog on WordPress.com. And it’s true: I hate that it serves people ads. I’d like to get off it.

But what I hate more is that half my old posts are completely messed up because of a hack back in 2011. Back then I was hosting it on BlueHost, I got behind on updates, I got hacked, and had to recover posts from an export I had made in 2009. For posts between 2009 and 2011 I actually had to reconstruct them from captures.

I know, I know — there’s auto-updates now, yadda yadda yadda, it will never happen again.

But the question I have to ask, again, is why I can’t have WordPress or Reclaim or whoever maintain my web application layer while I pay for commodity storage somewhere that provides permissions-based access to that data on a per-application basis.

Such a scheme, among other things, would allow multiple applications to plug into the same data, unlocking the power the web used to have. Third-party apps could use standard storage APIs instead of relying on the charity of Twitter or Google. It would also give me direct control of my data. And because the data would not have to be ported, it would eliminate much of the lock-in we currently find ourselves in. Tool providers could go back to selling us products instead of monetizing our data. I think we’d all be happier, especially Jim Groom, who wouldn’t have to look at ads like the one below.



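The separation described above (an application layer talking to user-owned storage through a permissioned API) can be illustrated with a toy sketch. None of the names here come from SOLID or any real service; UserStore, BlogApp, and grant are all hypothetical, meant only to show the shape of the idea: the user picks the store, grants each app a scope, and any app with a grant can plug into the same data.

```python
class UserStore:
    """The user's own storage: the user picks the provider;
    applications only get scoped, permissioned access."""

    def __init__(self):
        self._data = {}    # key -> value, owned by the user
        self._grants = {}  # app name -> set of key prefixes it may touch

    def grant(self, app_name, prefix):
        """The user authorizes an app for a slice of their data."""
        self._grants.setdefault(app_name, set()).add(prefix)

    def _check(self, app_name, key):
        if not any(key.startswith(p) for p in self._grants.get(app_name, ())):
            raise PermissionError(f"{app_name} has no access to {key}")

    def put(self, app_name, key, value):
        self._check(app_name, key)
        self._data[key] = value

    def get(self, app_name, key):
        self._check(app_name, key)
        return self._data[key]


class BlogApp:
    """The application layer: it never owns the posts,
    it just reads and writes them through the user's store."""

    def __init__(self, store):
        self.store = store

    def publish(self, slug, body):
        self.store.put("blog", f"posts/{slug}", body)

    def read(self, slug):
        return self.store.get("blog", f"posts/{slug}")


store = UserStore()
store.grant("blog", "posts/")   # the user authorizes this app for this data
app = BlogApp(store)
app.publish("hello", "First post")
print(app.read("hello"))        # prints: First post

# A second app can plug into the same data with its own grant:
store.grant("backup", "posts/")
print(store.get("backup", "posts/hello"))  # prints: First post
```

A real implementation would of course involve network protocols, authentication tokens, and standard data vocabularies, but the division of responsibilities would be the same: the provider sells the application, and the user owns the data.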
The Web Stream Was Designed for Information Underload

Readers here will have been following my discussion of the use of the Stream as a guiding metaphor for the web. The Stream has its roots in conversation. It organizes communication as a string of sequential events. This is opposed to the Garden, which has its roots in literary culture, and organizes knowledge spatially, as a sort of knowledge map.

Some technologies have a very clear bias. Email, with its inbox and explicit replies, is very stream-driven. HyperCard, with its multiple paths and web of references, was very garden. The web is an unusual beast, having been conceived as a garden technology (a hypertext network of documents) onto which a stream model has been grafted and is now the dominant paradigm (Twitter, Facebook, Instagram, everything).

I’ve mentioned that the earliest streams appeared on the web very soon after the web’s inception. The first example, in fact, is probably the NCSA’s What’s New page.


The page embodied a simple but powerful idea: when new content was found on the web (generally by virtue of a new server coming online), the new info was added to the top of the page. This model was doubly powerful because so many people set the What’s New page as their home page. Later, link bloggers and others would follow this pattern. In the mid-aughts Twitter, Facebook, and others would adopt this as their dominant paradigm for presentation of information, and that’s how we got here.

What occurred to me this morning was that this technique was originally designed not for information overload, but for information underload. The problem that NCSA was dealing with, in part (besides the lack of a search engine at that time), was that if you went out on the web each day it would seem relatively unchanged and static. There really wasn’t much to do. Channeling activity through the “What’s New” page helped, because if this information had been in a directory structure instead, you might never have seen the new stuff at all.

If you look at a number of other prominent streams, from early link blogging to the first Facebook feeds, you see this pattern repeat itself. Streams are born not because people are overloaded with good choices of things to read but because people perceive a paucity of information. Without the stream there is nothing to read.

Over time, that information grows, and the stream becomes less about discovery and more about curation, whether that curation is human or algorithmic.

I don’t really know what to do with this insight, but just thought I’d note it.