Choral Explanations on the (Not-So) Cheap

Once you start to see this “choral explanations” pattern, you start seeing it everywhere. I’ve mentioned before how you see it on sites such as Stack Exchange, and in the dialogue of accomplished tutors. In all these situations, people are not given the “one best explanation”, but rather, they are provided with an array of explanations for a concept or a task, and they use them to triangulate a deeper truth or understanding.

My question has been why we can’t better support this pattern with our educational materials. I think we can do a lot better. But even now we do this to some extent.

Currently I’m in the middle of a large institutional OER transition, going through different gateway classes to see what’s currently assigned to students and whether there are suitable free and open replacements. And as I look at what’s assigned, I see a lot of this: a major text assigned alongside a “for dummies” book or a study guide.

[Image: a snapshot of required books for a class from the bookstore.]

Think about this for a second. Pearson has spent probably a million dollars on its general chemistry text here to produce the best possible explanation of how chemistry works. Every word has been pored over, edited and re-edited for perfect clarity, diagrams have been commissioned and recommissioned.

And yet it’s paired with Barron’s “E-Z Chemistry”, which was probably written in a month, back to front, by a single individual.

Why? If you believe we must find the “one best explanation” and present it to the students, this makes no sense at all. Clearly, Pearson must already come close to that.

But if you believe, as I do, that students do best when presented with an array of explanations, of different difficulty, with different examples, and the like, then this makes perfect sense. In fact, in a perfect world, the student would have three or four textbooks to hop between for whenever something didn’t make sense.

Anyway, if you haven’t read the piece on Choral Explanations yet, give it a go. I swear it’s worth it, because once you start thinking in this way, you won’t be able to unsee it. It’s honestly everywhere, except in the textbooks themselves.

Choral Explanations and OER: A Summary of Thinking to Date

For the past few months I have been talking about “choral explanations” and how they might transform our approach to OER. This is an outgrowth of my work on federated wiki and Wikity, but is a much more specific and immediately applicable idea. In fact, as I will show below, choral explanations are already in use elsewhere on the web: it is just that education has not yet made use of this pattern.

This post is my attempt to pull together in one place my thinking about choral explanations as a way of approaching educational materials: why it’s important, and what opportunities it provides. We’ll start by talking about how textbooks are currently produced, move on to new collaboration models we’re seeing on the web, walk through a specific example, and finish by talking about the long-term vision.

The Chum and the Ore

A little over a decade ago, award-winning author Tamim Ansary wrote an article on the way commercial textbooks get made. He had been tasked with helping an army of writers write a book “from scratch”. The process turned out to be less than inspiring:

Sounds like a mandate for innovation, right? It wasn’t. We got all the language arts textbooks in use and went through them carefully, jotting down every topic, subtopic, skill, and subskill we could find at each grade level. We compiled these into a master list, eliminated the redundancies, and came up with the core content of our new textbook. Or, as I like to call it, the “chum.”

After several more steps (including adding in whatever hot new pedagogical theory was in vogue) the writers got down to writing. Sort of.

Finally, they divide the outline into theoretically manageable parts and assign these to writers to flesh into sentences.

What comes back isn’t even close to being the book. The first project I worked on was at this stage when I arrived. My assignment was to reduce a stack of pages 17 inches high, supplied by 40 writers, to a 3-inch stack that would sound as if it had all come from one source. The original text was just ore. A few of the original words survived, I suppose, but no whole sentences.

Though academic publishing in higher education is a bit different from what is described in parts of his article, contacts I have in the industry tell me it doesn’t differ by much.

From Voice into Pulp

Key to the process that Ansary describes is an obsession with eliminating individual voice, originality, and viewpoint.

To avoid the unwelcome appearance of originality at this stage, editors send their writers voluminous guidelines. I am one of these writers, and this summer I wrote a ten-page story for a reading program. The guideline for the assignment, delivered to me in a three-ring binder, was 300 pages long.

As Ansary describes it, it’s an expensive and difficult process to coordinate. Getting a coherent, single voice out of multiple contributors is difficult. Getting it “objective” is even harder.

Oddly, when I read Ansary’s piece I was reminded most of Wikipedia. Oddly, because Wikipedia is often held up by the publishing community as a cautionary tale when it comes to open materials. But this process – trying to get dozens or hundreds of voices and past snippets of writing pounded into a coherent whole – is what Wikipedia does as well. It’s both the key to its success and the cause of its current malaise. Tens of thousands of hours are spent editing Wikipedia’s top articles, but for the most part they aren’t spent coming up with new ways to explain things, or updating articles with new research. They are spent on the never-ending pulping out of voice, perspective, bias, and differing opinion about what belongs in the article. Only a sliver of writing – even good writing – makes it through.

And the writing that does make it through? It’s often uninspiring, dry, and voiceless.

As I’ll make clear later in this article, I believe Wikipedia does some things very well, and remains one of the great achievements of networked culture, just as the modern, bureaucratic publication process followed by the textbook companies was a great 20th century achievement. (I’m not kidding here: it was).

But two questions arise when we look at these processes. First, are there more efficient ways to put together articles than this endless process of pulping things into a single narrative? And second, is it possible that those ways might make for a better experience for the reader as well?

I believe the answer to both questions is yes. I believe that by moving past our romance with the textbook as a single authoritative voice, we can ultimately produce more effective works that are also more amenable to collaborative production by both faculty and students. And I think (or perhaps hope) that producers of Open Educational Resources can lead the way on this, and renew their commitment to open pedagogy in the process.

The Problems of Textbook Publishing Are the Problems of Wiki

I spent many years, from the early aughts onward, promoting wiki as the prime example of networked collaboration, with varied success. Then several years ago I came across a video of Ward Cunningham, the inventor of wiki, saying he may have gotten key elements of wiki wrong.

The problem, said Ward, was that wiki was a relentless consensus engine. And for certain things (e.g. encyclopedias) that might not be a bad thing, but as a way of working it had its drawbacks.

First, let’s acknowledge the benefits. Works that aim for what we might call the “encyclopedic voice” can be more accessible to newcomers. They rely less on reader background knowledge, and can be much more helpful to a novice than being dropped into a cacophony of different, competing voices. Works produced by either Wikipedia-like consensus or the editorial pulping that Ansary describes can be particularly useful to students for some aspects of their course, providing the student a map of the territory before pushing them to set out on a particular journey.

The drawbacks, however, should be familiar to anyone who has worked in a consensus publishing environment, whether on a textbook, a Wikipedia article, or a policy document. Consensus can be off-putting to contributors. It often suppresses important minority viewpoints. The process of document-by-consensus tends to value the bureaucratic over the human, and push people into spending  more time on edit wars and turf-defending than on production of knowledge or new insights. And in the end the work that is produced is often acceptable to everyone but exciting to none.

Above all, encyclopedic voice is expensive: expensive in terms of time involved, coordination cost, and toll on the production community. No matter whether one finds themselves embedded in Wikipedia’s headless bureaucracy or a publisher’s 300-page process, when multiple voices must be merged, the cost of production jumps sharply. Well-known and much-feared behaviors emerge: bike-shedding, edit wars, and the like. Once consensus is achieved, even small edits threaten to undo the hard-won agreements, making fluid evolution difficult. Over time, the result is often an unexpected rigidity that actively suppresses rather than spreads community knowledge.

Ward looked at these problems in 2011 and proposed a new direction for wiki, one he termed the “chorus of voices.” The idea of the chorus was that a wiki page’s title could form a hub for a number of individual, personal takes on a single idea, the way a hashtag can form the hub of a conversation on the web. In the chorus, wiki editors don’t edit a single page: each editor creates their own version of the page, often out of the materials of previous pages (through a process called “forking”). Clicking on a link to a page on “The Causes of the Cambrian Explosion”, for example, would deliver to you one version of that wiki page, but make you aware of multiple other attempts to explain the same thing, often built from other versions of the page.

Part of the idea was to do here what processes like git had done for software: focus people on creating new work and fixing existing work rather than arguing about small points or winning edit wars by exhausting the opposition. But the more radical piece of the vision was making peace with “the chorus”; understanding that the “meaning” of a given page in wiki was the intersection of all the work individual authors had written against that title.

This would allow a lot of the personal perspective that gets pulped out in a traditional wiki process to stay in. The marine biologist’s take on the Cambrian Explosion could focus on what they know best without having to be reconciled with the eye experts’ take on it, in which the evolutionary period is seen primarily as being about the emergence of vision. And sure, the anti-evolution set could write their own page, denying it ever happened, but in writing it as a separate work they would not exhaust the editors trying to make a scientifically accurate page with endless flame wars. In turn, this freedom from having to incessantly defend obvious revisions would attract new editors.

Over time, thought Ward, people would adopt or adapt the most useful pages, in effect creating an evolutionary process of their own, where the strongest pages survive not based on who has admin rights or the most time to babysit pages, but on which versions of pages were most useful to a given community.
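
To make the mechanics concrete, here is a minimal sketch of how a “chorus of voices” might be modeled: one title, many independently owned versions, with forking preserving provenance. The class and field names are my own illustration, not federated wiki’s actual data model.

```python
from dataclasses import dataclass

@dataclass
class PageVersion:
    title: str
    owner: str                      # the site/author that owns this version
    text: str
    forked_from: str | None = None  # owner of the version this was forked from, if any

class Federation:
    """A toy registry of every version written against a shared title."""

    def __init__(self):
        self.versions = {}  # title -> list of PageVersion

    def write(self, title, owner, text, forked_from=None):
        version = PageVersion(title, owner, text, forked_from)
        self.versions.setdefault(title, []).append(version)
        return version

    def fork(self, title, from_owner, to_owner):
        """Copy someone else's version into your own site, keeping a pointer back."""
        source = next(v for v in self.versions[title] if v.owner == from_owner)
        return self.write(title, to_owner, source.text, forked_from=from_owner)

    def chorus(self, title):
        """All versions written against a title -- the 'meaning' of the page."""
        return self.versions.get(title, [])

fed = Federation()
fed.write("Causes of the Cambrian Explosion", "marine.biologist.example", "Oxygenation mattered most...")
fed.fork("Causes of the Cambrian Explosion", "marine.biologist.example", "vision.researcher.example")
print(len(fed.chorus("Causes of the Cambrian Explosion")))  # 2 -- one original, one fork
```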

I ended up working directly with Ward for over a year and a half on educational applications of his new approach to wiki, commonly known as “federated wiki”.  The experience ranks as one of the most influential of my life.

Together, we tried a series of experiments with writing this form, both in communities and in personal work, and in the process discovered a number of interesting patterns. And working in this way – seeing collaboration in this choral, almost upside-down way – made me attentive to things I might have otherwise missed. Subsequent work on a Shuttleworth-sponsored project called Wikity honed that radar further.

And so it was when I watched Joel Spolsky’s presentation to the Wikimedia Foundation last fall — I saw more in it than I might have three years ago. Spolsky, in case you don’t know, is co-founder of Stack Exchange, a wildly popular “question and answer” site. And in his Wikipedia talk he mentions that when “active editors” were defined in comparable ways, Stack Exchange (founded 2008) had actually surpassed the number of active editors found on English Wikipedia (founded 2001). And unlike English Wikipedia, Stack Exchange was still growing.

Wait, what?

As I thought about the similarities between Stack Exchange (formed 2008) and federated wiki (formed 2011), it occurred to me that Stack Exchange and federated wiki were in fact part of a broader movement in how collaboration was now happening in communities, and that it was time to bring this more general process into the OER movement.

Choral Explanations

Getting at why the Stack Exchange model is so interesting requires a bit of a detour into how question and answer sites used to work.

Older sites (e.g. Yahoo Answers) were essentially transactional. A person would pose a question, and the answers below the question would respond to it. Eventually, the original poster of the question would select an answer as sufficient or best. The question would be closed and people would move on.

These older Q&A sites were simple variants of general forum architecture. They get good results occasionally, but they also have the sorts of problems a forum runs into — they tend to produce answers that look more like replies than generalized explanations.

For example, here are some of the bottom answers to the question “How can a person increase their chances in a lottery?”

[Image: Yahoo Answers responses to the lottery question.]

Again, what’s key here is the transactional nature of it. These answers do not read like wiki entries. They read like responses, and the reader must read them as they would any forum. Additionally, there is no real pooling of effort here – this question has been asked on Yahoo dozens of times, but those answers don’t form a chorus so much as a cacophony of scattered events:

[Image: search results showing the same lottery question asked many times on Yahoo Answers.]

Sites like Quora and Stack Exchange turned this process on its head. Instead of envisioning the Q&A site as a single-purpose forum, the new breed of Q&A site sees the model as half-wiki/half-forum. The question asked is analogous to the title of a wiki page; it’s not transactional but communal. A question like “How can a person increase their chances in the lottery?” is the place where the community will store their collective knowledge on that point, and it is not owned by the person who asked the question, but by the community itself.

On Quora, in fact, the question can be (and often is) edited by the community for clarity, and on Stack Exchange posters who pose badly formulated questions are pushed by moderators to reformulate their question in ways more beneficial to the site. Duplicate questions are shut down, just as duplicate wiki pages would be, so that effort can be pooled. The original poster of the question has no more power than any other user to rate specific answers more useful than others or to close the question. And as with wiki, answers posted are meant to be complete answers, not lazy responses to or discussions with the original poster. Each answer is also self-sufficient (a pattern I have elsewhere termed “hospitable”).

Posting a question on these sites is really not about starting a conversation at all. It’s saying “Let’s gather our community knowledge on this particular issue,” just as one might do with wiki.

Unlike wiki, however, individual control of writing is preserved, and multiple unique passes at a subject are appreciated. And big questions get a lot of passes. Here’s a snapshot of a few of the sixty-eight responses to Quora’s question of why many physicists believe in a multiverse.

[Image: a sampling of the Quora answers on why many physicists believe in a multiverse.]

The most fascinating thing? Unlike earlier sites, it’s not about the best or first adequate answer. People looking to learn a theory or a skill find seeing the multiple explanations a benefit. Since each response takes a different approach to providing an answer, the reader can read multiple explanations that get at a subject in different ways, at different levels of complexity. Some are nuanced, and some are ridiculously simplified. Some exercise metaphorical thinking, others dive into math, others illustrate with diagrams.

And this approach – multiple routes into the same concept for the learner – is supported by the research. There are no “learning styles”, as we know – no “kinesthetic learners”. But lost in the discussion about learning styles is the research base that shows that most students benefit from multiple approaches into a subject using a variety of styles. Segregating students into different style groups has no effect, but teaching in a variety of ways is quite effective.

The multiple explanations help in other ways too. Like most other users of Quora or Stack Exchange, I find that the tenuous understanding gathered from reading an initial explanation is slowly solidified and clarified as I read subsequent explanations. In fact, this process – reading multiple treatments of the same issue to add nuance to understanding – is a best practice for learners, accommodated and encouraged by this format.

There are some important caveats here. The writers do follow community norms in their answers, and importantly, these explanations are presented as parallel attempts to answer the same tightly defined question. This allows a reader to quickly scan and process them in a way that is not possible when skipping through the results of a web search, for example.

Additionally, both StackExchange and Quora pay a lot of attention to formulating questions at the right level of specificity for their particular audiences. On the original Stack Overflow, Joel defined the ideal question as “I got this far with the code I wrote, but I can’t work out the next step.” Questions that ask things too far from a specific problem (“Is Python a better language to program in than Perl?”) get quickly deleted by moderators. Quora norms are different but just as particular about the types of questions that can be productively asked.

But largely what both these models represent is something like the chorus of voices Ward has been advocating – people who control their own answer, but place it in relation to other explanations on the site. For Ward, what ties it together is a page title. For Stack Exchange, what ties it together is a question. But these seem to be part and parcel of a larger trend, one that splits the difference between individual voice and community needs in a new and elegant way.

And the result? Stack Exchange has quickly become one of the most popular and useful sites on the web.

Imagining Choral Explanations for OER

Why is all this important? Because for years we have imagined (or at least most people I know have imagined) an approach to OER production that looks like wiki or open source code production. In this model many people work on and gradually improve a small set of crowd-sourced textbooks. Maybe they create variations for their own purposes, and maybe some of their changes make it back to the mainline content. Maybe you end up with a very different version of the textbook you are using, but you ultimately end up handing the student one version of that textbook. We’ve seen models like this in WikiEducator, Connexions, etc.

But even though these methods are open and stigmergic, in these models we risk replicating the pulping process that Ansary describes. Even as we revise, remix, and redistribute, we still come to a work that is mono-tonal, bent towards a single voice, and often as soulless a voice as the ones Ansary laments.

I understand why that is, and why we spend time on this effort. If you’ve ever read a textbook where the voice was not hammered into something relatively uniform, you’ll know that it can be a painful and frustrating experience. The hard work that legions of former English majors do to turn these works (both commercial and open) into a unified whole should not be underestimated or underappreciated.

But is it possible, just possible, that by adopting a different textbook model we could reduce the effort needed to produce such works, while making a product that is more effective for our students? And maybe even a product that allows them to easily contribute to its production?

First Step: Textbook Core as Operating System, not Application

To do this we’d have to start by reimagining what the core textbook looks like.

It’s interesting that Ansary, in the latter part of his article, doesn’t want to throw away the encyclopedic style of the textbook completely. The “view from nowhere”, as Thomas Nagel has called it, has its applications. People don’t buy a textbook on economics expecting to get solely Paul Krugman’s thoughts on economics or buy a physics textbook to get “Things Some Random Prof Thinks Are Important in Physics” – they buy a textbook specifically because they are interested in getting a more global view of a discipline than any one person can provide.

The problem is that we try to cover all the material in a course in this manner, when in reality most subjects only need a skeleton of ideas.

In his conclusion, Ansary gets the metaphor right:

In content areas like history and science, the core texts would be like mini-encyclopedias, fact-checked by experts in the field and then reviewed by master teachers for scope and sequence.

Dull? No, because these cores would not be the actual instructional material students would use. They would be analogous to operating systems in the world of software. If there are only a few of these and they’re pretty similar, it’s OK.

People have, of course, proposed this “textbook as OS” idea before, and I can hear some of the groans of the old-timers even as I propose this (for the record, I’m an old-timer too). It’s pretty hard to do “textbook as OS” when most faculty review textbooks by checking the table of contents to see if their three niche subjects are covered.

But in this case, I’m not proposing that material be removed from the textbook, but rather that it be separated into two tracks: an encyclopedic, carefully sequenced core and a marketplace of choral explanations.

Here’s a glimpse of how that might work, and why it might be more effective for both students and textbook producers.

How Does Water Get Up That Tree and What Does It Carry With It?

For people who say that we don’t need educational materials or textbooks, I like to remind them of what it is like to be a student. (I have a sneaking suspicion that many people who complain that formal educational materials are not necessary have not tried to learn things outside their domain in a long while).

Here’s a paragraph expressing some of what you’re expected to understand to pass your biology class, taken from a model answer in a workbook. If you’ve stayed with this post this far, make an honest attempt to read the whole thing; we’ll be using this subject as our example for the rest of the post:

Water potential (Ψ) is a measure of the difference in potential energy between a water sample and pure water. The water potential in plant solutions is influenced by solute concentration, pressure, gravity, and matric potential. Water potential and transpiration influence how water is transported through the xylem in plants. These processes are regulated by stomatal opening and closing. Photosynthates (mainly sucrose) move from sources to sinks through the plant’s phloem. Sucrose is actively loaded into the sieve-tube elements of the phloem. The increased solute concentration causes water to move by osmosis from the xylem into the phloem. The positive pressure that is produced pushes water and solutes down the pressure gradient. The sucrose is unloaded into the sink, and the water returns to the xylem vessels. [Source: College Biology Learning Exercises & Answers, by Textbook Equity]

(Short tangent: When I write articles like this, readers often remark that I should use simpler examples that are more accessible to the general public. They point out it’s not a great idea to make your reader feel dumb by pushing them to slog through something like the above.  But what I’ve found is this – if we are not honest about the difficulties of comprehending dense technical prose we continue to end up with facile solutions and rhetoric around the issue of educational materials. So if reading that hurt your head or bored you out of your mind, well, I’m afraid that was the point. Welcome to being a student!)

As you’ve noticed, the sample paragraph is a bit dense. It was pulled from a set of model answers to workbook questions, and represents what students should know about water potential by the end of the chapter. But it requires that students master quite a bit of material even to effectively read it, never mind produce it.

To deal with this issue, what we generally do in textbooks is attempt to expand that compressed treatment, introducing the conceptual dependencies in sequence, referencing what the student should have learned before, and tying it to the new knowledge. So, for instance, we’ll take the brief mention in this paragraph of solute potential, and recognize that we’ll need to explain that in the text as part of our water potential explanation. And what we try to do there is come up with the “best possible” explanation of solute potential, often hammered out after reading how ten or twenty other textbooks and three contributing authors have explained the idea.

Here is the OpenStax Introductory Biology explanation of solute potential. Again, for maximum understanding of why choral explanations are necessary, please read the text below:

Solute potential (Ψs), also called osmotic potential, is negative in a plant cell and zero in distilled water. Typical values for cell cytoplasm are –0.5 to –1.0 MPa. Solutes reduce water potential (resulting in a negative Ψw) by consuming some of the potential energy available in the water. Solute molecules can dissolve in water because water molecules can bind to them via hydrogen bonds; a hydrophobic molecule like oil, which cannot bind to water, cannot go into solution. The energy in the hydrogen bonds between solute molecules and water is no longer available to do work in the system because it is tied up in the bond. In other words, the amount of available potential energy is reduced when solutes are added to an aqueous system. Thus, Ψs decreases with increasing solute concentration. Because Ψs is one of the four components of Ψsystem or Ψtotal, a decrease in Ψs will cause a decrease in Ψtotal. The internal water potential of a plant cell is more negative than pure water because of the cytoplasm’s high solute content (Figure 30.32). Because of this difference in water potential water will move from the soil into a plant’s root cells via the process of osmosis. This is why solute potential is sometimes called osmotic potential.

Plant cells can metabolically manipulate Ψs (and by extension, Ψtotal) by adding or removing solute molecules. Therefore, plants have control over Ψtotal via their ability to exert metabolic control over Ψs. [Source: OpenStax]
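
(For reference — and this is my paraphrase of the standard decomposition the OpenStax passage draws on, not a quotation from it — the four components it mentions combine additively: Ψsystem (or Ψtotal) = Ψs + Ψp + Ψg + Ψm, where Ψs is solute/osmotic potential, Ψp pressure potential, Ψg gravitational potential, and Ψm matric potential. Adding solute makes Ψs more negative, which drags Ψtotal down and pulls water toward the saltier side.)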

These are hard concepts, and the OpenStax copy here does its best to explain the concept in a concise but approachable way. It even adds a helpful diagram to demonstrate the relationship between solute and water flow:

[Image: solute affecting the flow of water in a curved tube. Source: OpenStax]

But that’s it. That’s all you get. It may be the best explanation, or the most concise, but it’s also your only one. So what do you do as a student if it doesn’t work for you?

Well, you probably read that explanation again and again, and hope you understand it. If you’re an adept autodidact, maybe you’ll seek out some help on the internet, sorting through a maze of questionable and often confusing Google results. But if you’re not that autodidact, you’re stuck with this “one best explanation” the textbook has decided to provide you.

But why? How does that make any sense at all?

The idea of choral explanations in OER is that the textbook becomes an operating system on which multiple parallel community-provided explanations run. From the student perspective, the text branches off into multiple available explanations of the same concept, explanations authored individually by a wide range of instructors, researchers, and students. You can keep reading until you find the explanation that makes sense, or you can start with simpler explanations and work your way to nuance. (In the humanities and social sciences there are other more complex configurations that could be used, but we’ll leave those aside for the moment)

Here’s a mockup of what textbook as operating system might look like in practice:

[Image: mockup of the “textbook as operating system” view.]

When clicking through those links, you would come to a Quora- or Stack Exchange-type page where multiple people would take their crack at explaining the concept. Unlike wiki-style production, each person explaining would get recognition for their work in the form of an avatar, and perhaps even a Quora-like bio which explains the source of the person’s expertise in a blurb (which could be anything from “Nobel Prize winner” to “A student who likes to explain things.”).

Here’s a quick mockup I made last month on mitosis which borrows directly from Quora’s design. What I want to give here is the sense of the experience: bylined items with different approaches, and just as with Stack Exchange or Quora, you keep scrolling down until you start to understand.

[Image: mockup of a choral explanations page on mitosis, modeled on Quora’s design.]

You can imagine the same approach with a question such as “What is solute potential?” Or “Why is solute potential important?” (We’ll show an example of this in a minute).

Stigmergic Production

What about the production side of the equation? How do choral explanations fit into a new model of producing texts? Part of the answer can be found in the concept of stigmergy.

David Wiley has argued that stigmergy is the future of OER production (2004, 2016). He makes the case that coordinated, central work in OER publishing is too expensive to form a long term solution to most of our educational material needs. In the stigmergic pattern, what we hope to see is people creating the materials we need with no centralized governance. Collaboration comes about as a result of people pursuing individual aims in an environment that provides feedback to individuals on where profitable opportunities for work are.

What does this mean in practice? As an example, take Twitter. No one at Twitter says “these will be the hashtags for today”. Rather, someone tweets a message using a hashtag; others retweet it or adopt the hashtag, which exposes it to still more people, and so on. Eventually the hashtag trends. In stigmergic systems the environment is altered so that the work that one person does leaves “traces” — in this example, retweets, trending hashtags, visible omissions — that can direct the work of future workers and float the best or most interesting work to the top.

The attraction of stigmergic production is that by making participation lightweight and providing hints about work that needs to be done rather than control structures, people can produce coherent works without explicit coordination, usually at a lower cost and with greater variety than traditional systems.

Wiki is often cited as a stigmergic medium: since anyone can edit and errors and omissions are publicly visible, to use a resource like Wikipedia is also to discover the work that needs to be done. One person looks for a page or clicks a “red link” and doesn’t find it, and writes a stub article. A second person finds the stub and expands it, with errors. A third person finds it useful, but sees it needs a grammar cleanup and an error check and fixes it. A fourth person notices some “red links” in it that point to non-existent pages, goes and creates new stubs for them, and the process starts again.

For a long while I thought that this more traditional wiki use would be the route that stigmergic OER production would take.  In wiki, work tends to form around a set of articles that are slowly improved as they are used.

There may still be a central place for wiki-like production in OER, but newer choral styles of explanation provide an alternate means of production that has less overhead around consensus and synthesis. At the same time the process is still stigmergic.

Stigmergic Production Using Choral Explanations

Our process begins with the writers of the core textbooks, who identify initial productive questions and link to them in the text. As shown above, these questions are chosen based on their relevance to the lesson at hand and on known student misconceptions:

[Image: textbook passage with linked questions on solute potential.]

The questions read: What is solute potential? Why is solute potential important? Where can you see solute potential at work in daily life?

From the links, faculty can see which questions have many explanations and which are relatively uncovered. From there, if they want to contribute an additional explanation, it’s as easy as clicking one of the questions while logged in.

When looking at a set of explanations in answer to a particular question, a faculty member could choose which explanations to show to students in a default view – over time the most-used explanations could float to the top. Explanations could be ranked by other criteria as well, based on student and faculty feedback:

[Image: mockup of the faculty view for selecting and ranking explanations.]

We show an “add” button here, but could explore a “remix” button as well. Ideally you’d balance simplicity with power in the interface.

Faculty who do not wish to choose which explanations show up in the default view could simply choose to show explanations with a given rating or above, or a set of explanations approved by an individual they trust.
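
As a rough sketch of what that default-view rule might look like in code — the field names and thresholds below are hypothetical illustrations, not the spec of any existing platform:

```python
# A rough sketch of default-view selection for one question's explanations.
# Field names and thresholds are hypothetical illustrations, not a real system's API.

def default_view(explanations, min_rating=4.0, trusted_curators=()):
    """Show explanations a faculty member hasn't hand-picked: either those above a
    rating threshold, or those approved by someone the faculty member trusts."""
    visible = [
        e for e in explanations
        if e["avg_rating"] >= min_rating
        or any(c in e["approved_by"] for c in trusted_curators)
    ]
    # Most-used explanations float to the top of the default view.
    return sorted(visible, key=lambda e: e["times_adopted"], reverse=True)

explanations = [
    {"author": "A. Instructor", "avg_rating": 4.6, "times_adopted": 212, "approved_by": []},
    {"author": "B. Student",    "avg_rating": 3.2, "times_adopted": 40,  "approved_by": ["dept_chair"]},
    {"author": "C. Researcher", "avg_rating": 2.9, "times_adopted": 5,   "approved_by": []},
]
print([e["author"] for e in default_view(explanations, trusted_curators=["dept_chair"])])
# ['A. Instructor', 'B. Student']
```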

If faculty find that the explanations are inadequate, or believe they have a unique way of explaining that is not covered, they can write their own explanation, which can then be used by other faculty. And if the textbook-provided prompts aren’t sufficient, they can always create a new prompt.

What sorts of explanations might they produce? Maybe they film a video of themselves showing osmosis through a membrane in a beaker. Maybe they write up an example of how leeches and slugs react to salt to bring home a point about osmotic pressure that helps explain solute potential. Maybe that’s the same story they’ve used time and time again in their lectures, and they know that for some students it just works:

What is solute potential?

Maybe as a kid you were horrible (many kids are) and you tortured slugs in the back yard by putting salt on them and watching them shrivel up. Or maybe as a kid at the lake you were latched onto by leeches that were horrible (most leeches are) and you watched with amazement as your mom slayed them effortlessly with the salt shaker.

If you think about what’s happening here, you’ll have a bit more of a grip on solute potential.

Consider this. You put salt on the slug’s back, which mixes with the water on it. Now you have a membrane (slug skin) with very salty water on the outside and very non-salty water on the inside. The nature of things in this case is that osmosis wants to equalize the saltiness. So a bunch of the water inside the slug rushes past the membrane (skin) to the outside of the slug. Unfortunately, more water on the outside just dissolves more of the salt, which makes the salt imbalance worse, which in turn creates more osmotic pressure to push more water out of the slug. This cycle continues until the slug is a shriveled mess lying in a pool of salty water.

So what’s the solute potential in this example? In this case salt is the solute and water is the solvent. So the solute potential here, at the point the salt is applied, is high on the inside of the slug and low on the outside of it. Putting the salt on the outside of the slug lowers the solute potential of the water on the outside of the slug.

A way of remembering that is that the water on the inside of the slug has a “high potential” of moving to the outside of the slug. The osmotic pressure is a result of the difference in the potential on the two sides of that slug-skin membrane, the salty low-potential outside and the less salty high potential inside. The difference between the solute potential of the outside and the inside is one of a few factors that determine overall water potential.

If you think about this, this is not just a property of slugs. People dry flowers with salt, and before refrigeration we would sometimes dehydrate foods in this way. That salty french fry on the floor of your car that lasts for years without molding? It’s been dehydrated too, and the way the water got from the inside to the outside of the fry is through a difference in solute potential.

You’ll note a couple of things about this explanation. The first is that it’s not going to work for every student. For students who have seen a slug shrivel, this may be the perfect hook to hang this concept on. For ones who haven’t it’s meaningless.

It’s also at a relatively simple level. It doesn’t go into the chemistry of high and low potential, or explain how hydrogen bonds are behind the difference in potential – when a solute binds with water molecules via hydrogen bonds, it ties up some of the potential energy of that water, taking it from a higher-energy state to a lower one. And it doesn’t get into the math at all.

But over time a library of explanations is created. Some use slugs as examples, some use dried flowers. Some build 3-D simulations, some show video of experiments. Some have mnemonics, some have diagrams. Some go deep into math and some stay out of the math.

As teachers decide to include or not include various explanations for their students, the highest teacher-rated materials float to the top. As students read the explanations and rate them as helpful or not helpful, the system builds a profile of the explanation authors the student seems to prefer and suggests the student “subscribe” to those authors, putting their material at the top of the student’s view.
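
A toy sketch of that per-student profile, purely to illustrate the mechanism — the author names and the subscription threshold are made up:

```python
# A toy sketch of the per-student author profile described above; purely illustrative.
from collections import Counter

def update_profile(profile: Counter, author: str, helpful: bool):
    """Each helpful/unhelpful rating nudges the student's preference for that author."""
    profile[author] += 1 if helpful else -1
    return profile

def suggest_subscriptions(profile: Counter, threshold=3):
    """Authors a student consistently finds helpful get suggested as 'subscriptions',
    which would float their explanations to the top of that student's view."""
    return [author for author, score in profile.items() if score >= threshold]

profile = Counter()
ratings = [("slug_prof", True), ("slug_prof", True), ("slug_prof", True), ("math_heavy", False)]
for author, helpful in ratings:
    update_profile(profile, author, helpful)
print(suggest_subscriptions(profile))  # ['slug_prof']
```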

Over time, teachers find the best explanations and students find the explanations most targeted to their background, needs, and sensibilities. And if what floats to the top doesn’t work students can keep scrolling down until they find something that does.

If you’ve been following along and reading the pieces on solute potential, you’ll notice one final thing. By coming at the question a different way, the core explanation probably seems a bit more accessible at this point. Does this sentence start to make some sense?

Water potential (Ψ) is a measure of the difference in potential energy between a water sample and pure water. The water potential in plant solutions is influenced by solute concentration, pressure, gravity, and matric potential.

It does, right? Now that you have the example of the slug, what seemed gibberish before starts to become more readable. My guess is that if you read a couple more explanations taking different approaches it would make even more sense. As each explanation takes a different angle on the concept, novices are able to bootstrap themselves up into an understanding of the denser prose.

(This relates to the way in which choral explanations support true personalization in a way most modern systems don’t, which I’ve written about here.)

A Chorus of Student Voices

As we look at the production process it becomes clear that this is also a way to accomplish a long-standing goal of the OER community: to bring students more fully into the production of OER. (The attempt to bring students into OER production at scale has a long history. For some of the history and current directions of this effort, see David Wiley’s recent post on “renewable assignments“).

If having students produce educational materials sounds like a distraction from the work the students should be doing in class, it shouldn’t. Consider the learning objectives for the chapter we’ve been looking at, for example:

By the end of this section, you will be able to:

  • Define water potential and explain how it is influenced by solutes, pressure, gravity, and the matric potential
  • Describe how water potential, evapotranspiration, and stomatal regulation influence how water is transported in plants
  • Explain how photosynthates are transported in plants

When we look at these objectives, it becomes clear that the most fitting artifact to demonstrate competency is in fact an explanation. This is why the dream of a larger system of student produced OER has been so  attractive over the years – oftentimes the tasks that are the most authentic have a poor match with learning objectives, and the tasks that match with learning objectives often are inauthentic. Student-produced OER has the unique advantage that real, impactful work will have a close match to course objectives, almost by definition.

Using the Internet to publish student produced OER is not new: students have been involved with production of web-published educational materials many times before. The usual mode has been wiki or similar technology.

Most of these web projects were accomplished through running special class wikis that aim to cover a subject thoroughly in a separate site. While this allowed students a lot of freedom in the writing process it came with drawbacks. Class sites, when not promoted, often had little to no audience. Also, from the moment they were completed they began to decay. The story we told students – that they were contributing to the sum of human knowledge by publishing on the internet – was often belied by the state of these sites a year or two after the class – stagnant, often hacked and unseen by others. Additionally, finished sites often occupied an awkward middle space – without previous work to extend, these sites spun up from nothing often couldn’t cover enough material to be a resource of note, at least in the space of a semester (one thing I and many others have tried in order to mitigate this is running sites over several semesters, with each subsequent set of students extending the work of previous students, which has its own set of difficulties).

More recently a movement has grown up around having students contribute to Wikipedia pages. This solves many of the problems mentioned above: the use of Wikipedia guarantees student work will be seen – Wikipedia is, after all, still one of the top 20 destination sites on the internet, and still the source for most introductory subject information. It also puts the maintenance in the hands of someone else, and allows students to integrate their knowledge into a larger work.

Yet two major problems occur repeatedly with this model. The first is the problem of other people. Editing on Wikipedia, students spend a disproportionate amount of time defending their changes to other Wikipedia editors. Sometimes this is pedagogically helpful – a student being pressed to find a source to support a claim made in the text. More often, though, the time goes to the pulping and blanding process, the removal of any individual style, insight, or personal experience the student might bring to the piece.

The other problem is even more difficult to overcome. If we look at those outcomes again we’ll see objectives like “Describe how water potential, evapotranspiration, and stomatal regulation influence how water is transported in plants.” But those explanations are made in Wikipedia on a few pages at most, which means that only a few students (if any) will have the chance to explain these things on Wikipedia from scratch. Other students will wrestle not with explaining core concepts in a unique way, but with the more mundane work of polishing old articles, a process far more about the use of language than about biology or physics.

And it’s not just a Wikipedia problem — any scheme that encourages students to edit and revise a few central articles is going to have the same problem.

Choral explanations provide a third way, one that promises some of the benefits of wiki without the drawbacks. As students use externally provided explanations in their class, they’ll be asked to rate them for helpfulness, depth, and completeness. But they’ll also be looking for concepts they believe they can explain better or more fully.

Student teams can then work together to produce an explanation. Maybe one team figures out how to start an explanation on solute potential by showing a couple of clips of people stranded at sea drinking saltwater in the movies, and moves on to show why you can’t grow a fresh-water plant in salt water. Maybe another team wants to do a piece on how solute potential explains why salinization of soil is such a big problem in arid climates, and link it under the “Where can you see solute potential at work in daily life?” question.

Students create these pieces alongside the other explanations in the system in a private space. Because the model is a “chorus” and not a single central work, there is always room for students to bring something new and meaningful to the process (unlike Wikipedia, where mature articles offer little opportunity for additional contribution).

At the same time the work can be meaningful. With student permission, faculty can submit student work they deem sufficiently accurate into the central pool of explanations. These explanations, in turn, move up or down in prominence based on their usefulness to students. But in the context of student produced OER, they also serve as models to students of what sorts of things they could do on their projects, truly creating a system of renewable assignments.

Conclusion

As Open Educational Resources move into the mainstream, producers of these works must decide whether they wish to replicate the standard textbook process and format or explore new options in presentation and production. The choices we make now determine whether we will merely reproduce traditional textbook publishing at less cost or use this opportunity to leverage greater pedagogical change.

Yet even when publishers are committed to change, the nature of the model matters. Over the past twenty years we’ve learned a lot about collaboration in traditional wiki environments. Traditional wiki approaches address some of the long-term needs of the open education community, but can also replicate unhelpful aspects of traditional textbook publishing.

Choral Explanations is presented here as an alternative mode of working together on educational resources, one that allows collaboration and promotes coherence while preserving unique perspectives and facilitating easier community contribution. Additionally, in the way the model mimics differentiated instruction, it may provide a better and more personalized experience for the student as well.

This post is not meant to be a final blueprint of how such a scheme would work, but rather is an attempt to open up a conversation about how we might proceed in building an environment that balances individual control and voice with the pooling of effort communities require. A lot has happened since we began this fight to open up educational resources. As we suddenly find ourselves on the winning side of history, it’s important that we address these issues in serious, critical ways that take stock of what has and hasn’t worked in the past, and design these systems based on what we know in 2016. We have a massive opportunity in front of us: let’s not squander it.


The Problem With Zuckerberg Telepathy

There’s a lot wrong in this statement from Mark Zuckerberg on machine telepathy:

While some of these ideas might seem more like the stuff of sci-fi, Zuckerberg says there is scientific research going on in these fields right now.

Telepathy is one such area. “You’re going to be able to capture a thought in its ideal and perfect form in your head,” he says. “You’ll be able to share that with the world, in a format where they can get it.” [source]

One of the more obvious problems is that the “thought in your head” is really a network of ideas and experiences. More specifically of your ideas and experiences.

For this reason, ideas can’t be transferred, only recreated in a different but related form in someone else’s mind (i.e. networked with _their_ experiences and ideas in an analogous way). Language is a means by which we try to trigger that recreation process in a directed way.

This is to say that the thought, as it exists in your head, is _not the ideal form for transfer_, not by a long shot, even if the science catches up in the ways Zuckerberg predicts. The analogy is loose, but the thought in your head is as useless to another person as is machine code from a machine which exists nowhere else in the world. Given his background, Zuckerberg of all people should get this.

This is not to say we can’t do better. Interfaces that can track subvocalization of language may appear sometime in the distant or not-too-distant future, and could revolutionize the way we interact with machines. But ultimately it’s the language stream you’d want to tap into, not the thought stream, because it’s the language stream that is the recipe, and recreation is about the recipe, not the cake.


This article is from Wikity, my personal wiki, which is where I do my thinking nowadays. You can look at some other social bookmarks and writings on this subject by going there. To get you started, here’s some stuff on memory.

How My “Disarm Hate” Slogan Went Viral (A Lesson in Network Communities and Networks, Part I)

Recently, a slogan I created went viral. Since my area of (day job) expertise is how we use networks to learn and collaborate, I thought I might talk about how that happened, and what its path to fame can tell us about information flow in networks.

Today, I want to just set the story up. Tomorrow I will discuss how what happened undermines most people’s “common sense” understanding of networks, but is supported by current and older research on networked behavior.

The Avatar

When I woke up to the news of the Orlando shooting a little over a week ago, I was horrified like most people at what had happened. But I also have been in online activism long enough to know that the “meaning” of an event often gets hashed out in places like Twitter in the hours after it. I thought I could make a difference there.

I decided I would make an avatar to express that the main story here was not about ISIS. It seemed to me that there was one set of interpretations floating out there that would end up at “ban muslims” and another set that wouldn’t. And I desperately didn’t want Islamic terrorism to become the primary frame through which we viewed Orlando.

Thinking Things Through: The Two Community Problem

I started out by taking a transparent screen of a rainbow flag and putting it over my current avatar, a picture of my face, expressing solidarity with our LGBTQ folks (including my oldest daughter). Except that didn’t work, because it neglected the gun violence message, casting this as solely an event of hatred, and hiding the fact that this was also yet another in a chain of high-capacity-magazine assault weapon tragedies.

I thought back to Charleston, the Dylann Roof shooting a year ago. Back then, people casting it as a result of white supremacy (which it was) claimed that the gun control folks were minimizing their message by portraying it as “just” another gun tragedy. People in the gun control movement were upset that they were supposed to stay silent on this issue so that the racism issue could be highlighted, even though the gun control movement relies on making the most of small windows of public outrage after an event like this.

Every image and phrase I came up with had tripwires. I could put “Not One More” or “Stop the Violence” on top of a rainbow flag, but that seemed to equate one of the worst hate crimes in modern American history with school shootings and San Bernardino. I made one that said “Stop the Hate”, but now I was ignoring the gun control issue. It started to feel like I was just going to have to pick a side. It was Charleston all over again.

Then about 90 minutes into brainstorming avatars, it hit me. I had just made a “Stop the Hate” one when I thought of a small tweak that pulled it all together. I showed it to my wife, Nicole.

[Image: the draft “Disarm Hate” avatar.]

“Disarm Hate,” I said. “Is it too cutesy?”

Nicole thought a second. “No, I like it. It says a bunch of things in two short words. It’s clever, but it’s not cute.”

I uploaded it to my Twitter profile, which borked the readability of it a bit. I spent another hour fiddling with font-sizes, drop-shadows, and letter positioning, uploaded the finished version and tweeted out that other people should steal it and make it their avatar.

[Image: the tweet sharing the finished avatar, with early adopters’ avatars visible at the bottom.]

As you can see from the avatars at the bottom, some people came by relatively immediately and grabbed it for their avatar. (Special thanks to Amy Collier in particular, who picked it up and changed it to her avatar two minutes after I tweeted this).

People started grabbing the avatar from other people, and tweeting the hashtag #DisarmHate. It felt small, but it still felt good.

Seven Days Later

My daughter was at the Portland gay pride parade yesterday, and pictures she was tagged in started coming back to Nicole’s Facebook account (I’m not on Facebook). Suddenly Nicole pauses.

“Hey, Mike,” she says, “What was your avatar slogan again?”

“Disarm Hate?”

“I think someone has it on the back of their shirt.”

She showed me the photograph:

[Image: a photo from the Portland parade showing “Disarm Hate” on the back of a shirt. One of the pictures Katie’s ex-girlfriend took in Portland.]

Wait, what? It’s not exactly my avatar, but boy is that familiar…

I went onto Twitter and typed #DisarmHate.

[Image: Twitter search results for #DisarmHate.]

And from today….

[Image: a tweet using #DisarmHate from today.]

I looked at the news:

[Image: news coverage using the phrase “Disarm Hate”.]

By the way, here’s the news up to the day I coined it — the phrase (as a phrase and not a random collision of words) simply doesn’t exist in the media:

[Image: a news search for “Disarm Hate” before that Sunday, showing no use of the phrase.]

So what the heck happened? How did my tagline go viral? How’d a slogan invented on Sunday become the term of choice for a movement by Wednesday?

It’s probably not what you think.

Tomorrow I’ll talk about how the “organic” model that most people assume creates virality is a load of bunk. (Short summary: the phrase got a huge boost from people whose job it is to find these sorts of things and promote them through the network; organic is just another name for “well-oiled and well-funded advocacy machine”).

Is “The Web As a Tool For Thought” a Gating Item?

In instructional design “gating items” are items on tests which, if not answered or performed correctly, cause failure of the test as a whole.

As a simple example, imagine a driving test that starts in a parking lot with the car parked. The driving test has a lot of elements — stopping at stop signs, adjusting mirrors, smooth braking, highway merging, etc. These are all important, and rated by weighted points.

But none of these can be tested unless the student driver can release the emergency brake, place the car in reverse, and back out of the initial parking spot. That part of the test may be worth 10% of it, but it forms the gateway to the majority of the test, and if you don’t make it through it, you’re toast.
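
A toy scoring function makes the asymmetry clear — the item names, weights, and pass mark below are invented for the example, not drawn from any real driving test:

```python
# A toy illustration of a "gating item": weighted points only count if the gate is passed.

def score_test(items, results):
    """items: list of (name, weight, is_gate); results: dict of name -> bool (passed?)."""
    if any(is_gate and not results[name] for name, _, is_gate in items):
        return 0.0, "FAIL"  # failing a gating item fails the whole test
    total = sum(weight for name, weight, _ in items if results[name])
    return total, "PASS" if total >= 70 else "FAIL"  # 70 is a made-up pass mark

items = [
    ("back out of parking spot", 10, True),   # the gating item
    ("stop at stop signs",       30, False),
    ("smooth braking",           30, False),
    ("highway merging",          30, False),
]
print(score_test(items, {"back out of parking spot": False,
                         "stop at stop signs": True,
                         "smooth braking": True,
                         "highway merging": True}))
# (0.0, 'FAIL') -- 90% of the skills demonstrated, but the gate was never cleared
```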

I’ve been thinking about gating items in relation to my work on Wikity. There are a lot of ideas in Wikity that people don’t get, and they don’t seem to me to be hierarchical for the most part. This isn’t a sort of “you have to learn about averages before you learn about standard deviation” sort of problem. But I’m starting to think that there may be a gating item that is keeping us stuck in the parking lot.

What Wikity Is, in My Mind at Least

Let’s start by talking about what Wikity is, at least in my view.

Wikity encompasses, currently, a lot of ideas counter to our current web narrative. In all cases, it’s not meant to supplant current sorts of activity, but to maybe pull the pendulum back into a better balance. Here are some of those ideas:

  • Federation, not centralization. Wikity allows, through the magic of forking and portable revision histories, a way for people to work on texts and hypertexts across a network of individually owned sites.
  • Tool for thinking, not expression. Wikity is meant as a way to make you smarter, more empathetic, more aware of complexity and connection. You put stuff on your site not to express something, but because it’s “useful to think with”. By getting away from expression you also get away from the blinders (and occasional ugliness) that being in persuasive mode comes with.
  • Garden, not Stream. The web today is full of disposable speech acts that are not maintained, enriched, or returned to: tweets, Facebook posts, contextually dependent blog posts. Consequently, entering new conversations feels like sifting through the Nixon tapes. Wikity aims to follow the Wiki Way in promoting the act of gardening — of maintaining a database of our current state of knowledge rather than a record of past conversations.
  • Linked ideas and data, not documents. Things like social bookmarking tools, Evernote, Refme, and Hypothes.is act as annotation layers for documents. But the biggest gains in insight come when we deconstruct documents into their component pieces and allow their reassembly into new understandings. Our fetish for documents (and summaries, replies, and annotations of documents) welds important insights and data into a single context. Wikity doesn’t encourage you to annotate documents — it encourages you to mine them and remix them.
  • Connected Copies, not Copies or Links by Reference. We have generally had two ways of passing value (text, media, algorithms, calendar data, whatever): we can pass by value (make a copy) or by reference (point to a copy maintained elsewhere). We have often defaulted to links by reference because of their strengths, but as web URLs deteriorate at ever faster rates, a hybrid mode can solve some of our problems. Connected copies learn from GitHub and federated wiki: they are locally owned copies that know about and point to other copies, allowing for a combination of local control and network effects. (A small sketch of this idea follows the list.)
  • A Chorus, not a Collaborative Solo. We tend to think of collaboration, at its best, as many people tending toward one output. Collaborative software generally follows this model, allowing deviations, forks, track changes and the like, but keeping the root assumption that most deviations will either die or be adopted into the whole. For some things this makes sense, but for others an endless proliferation of variations and different takes is a net positive. Wikity tries to harness this power of proliferation over consolidation.
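Here is a minimal sketch of what a “connected copy” might look like as a data structure. This is my own illustration, not Wikity’s or federated wiki’s actual schema: a locally owned copy that carries a portable revision history and a set of pointers to the other copies it knows about.

    from dataclasses import dataclass, field

    @dataclass
    class ConnectedCopy:
        site: str                    # the individually owned site holding this copy
        title: str
        content: str
        history: list = field(default_factory=list)     # portable revision history
        known_copies: set = field(default_factory=set)  # sites holding other copies

        def edit(self, new_content):
            """Revise locally, keeping the old version in the portable history."""
            self.history.append(self.content)
            self.content = new_content

        def fork(self, to_site):
            """Copy this idea onto another site; both copies point at each other."""
            child = ConnectedCopy(to_site, self.title, self.content,
                                  history=list(self.history),
                                  known_copies=self.known_copies | {self.site})
            self.known_copies.add(to_site)
            return child

    # Forking keeps local control (each site edits its own copy) while the copies
    # stay connected enough to find one another later.
    original = ConnectedCopy("alice.example", "Choral Explanations", "First draft...")
    mine = original.fork("bob.example")
    mine.edit("The same idea, reworked for my own context.")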

These ideas aren’t mine. They are pulled from giants of our field, people like Ward Cunningham, Jon Udell, Ted Nelson, Vannevar Bush, and Doug Engelbart.

But while they are my entry points into this, most don’t seem to be a great entry point for others. They form, for most people, a confusing collection of unrelated and undesired (or only faintly desired) things.

This is sad, because using Wikity and Federated Wiki has been life-changing for me, giving me a vision of a web that could more effectively deliver on its goal of augmenting human intellect and understanding by rethinking what web writing looks and acts like.

The Web As a Tool for Thought, Not (Just) Conversation

What I’ve come to realize is that while “Web as a tool for thinking, not expression” is not foundational to the other concepts in a normal sense, it acts as a bit of a gate to seeing their relevance. If the web is (just) conversation and collaboration, then

  • Why would you want copies of other people’s stuff on your site?
  • Why would you care about the chorus? (If it happens great, but your job is your solo, right?)
  • Why would you post ideas and data that are not embedded in (and welded to) the argument you wish to make and presumably win?
  • Why would you manage and update past speech acts to be less context-driven (Garden) when you could just make new speech acts for a new context (Stream)?

I think you can probably talk about federation and copies and linked data separately, but it’s difficult to get to those parts of the conversation if the vision of the web is “how do we talk and share things with one another” instead of  “how can this machine and network make me smarter and more empathetic?”

Conversation is one way that can happen. But there are so many other important ways to use networked knowledge to think and feel that aren’t “I’ll say this and then you say that”. In fact, I’d argue that the web at full scale is not particularly *good* at conversation, and our over-reliance on “My site/feed/comment is my voice” as a metaphor is behind a lot of the nastiness we get into.

And as I think about it, it’s not just Wikity/Federated Wiki that struggles with this. Hypothes.is is an annotation platform that could alter the conversational paradigm, but what I mostly see people using it for is a form of targeted commenting. In this case, understanding the web as a tool for expression is not gating the adoption of the tool, but may be gating people using it to its full potential.

Jon Udell has recently started to push users towards a new understanding of annotation as something other than comments. And what he says, I think, is interesting:

Annotation looks like a new way to comment on web pages. “It’s like Medium,” I sometimes explain, “you highlight the passage you’re talking about, you write a comment about it, the comment anchors to the passage and displays to its right.” I need to stop saying that, though, because it’s wrong in two ways.

First, annotation isn’t new. In 1968 Doug Engelbart showed a hypertext system that could link to regions within documents. In 1993, NCSA Mosaic implemented the first in a long lineage of modern annotation tools. We pretend that tech innovation races along at breakneck speed. But sometimes it sputters until conditions are right.

Second, annotation isn’t only a form of online discussion. Yes, we can converse more effectively when we refer to selected passages. Yes, such conversation is easier to discover and join when we can link directly to a context that includes the passage and its anchored conversation. But I want to draw attention to a very different use of annotation.

Jon’s absolutely right — it’s really tempting to try to approach annotation as commenting, because that’s a behavior users understand. But the problem is that it’s a gating item — you can’t get to what the tool really is unless you can overcome that initial conception of the web as a self-expression engine. Otherwise you’re just a low-rent Medium.

The first, biggest, and most important step is to get people to think of the web as something bigger than just conversation or expression. Once we do that, the reasons why things like annotation layers, linked data, and federated wiki make sense will become clear.

Until then, we’ll stay stuck in the DMV parking lot.

 

 

Stereotype Threat and Police Recruitment

From an interview on the World Economic Forum site (which is surprisingly good), a description of how a small change to an invitation email increased pass rates on a police recruitment exam:

Small, contextual factors can have impacts on people’s performance. In this particular case, there is literature to suggest that exams for particular groups might be seen as a situation where they are less likely to perform at their best. We ran a trial where there was a control group that got the usual email, which was sort of, “Dear Cade, you’ve passed through to the next stage of the recruitment process. We would like you to take this test please. Click here.” Then for another randomly selected group of people, we sent them the same email but changed two things. We made the email slightly friendlier in its tone and added a line that said, “Take two minutes before you do the test to have a think about what becoming a police officer might mean to you and your community.” This was about bringing in this concept of you are part of this process, you are part of the community and becoming a police officer is part of that — trying to break down the barrier that they are not of the police force because it doesn’t look like them.

….

Interestingly, if you were a white candidate, the email had no impact on your pass rate. Sixty percent of white candidates were likely to pass in either condition. But interestingly, it basically closed the attainment gap between white and nonwhite candidates. It increased by 50% their chance of passing that test, just by adding that line and changing the email format. That was an early piece of work that reminded us of the thousands of ways that we could be re-thinking recruitment practices to influence the kind of social outcomes we care about.

There’s a lot to take away from this. The finding they have applied here originally comes from educational research, and the obvious and most important parallel is in how we approach our students in higher education. How often do we provide the sort of positive and supportive environment our at-risk students need?

The larger pattern I see here with design is just how much small things matter. There’s a reason why no major extant community uses out-of-the-box software. If you’re Reddit, Facebook, Instagram, Twitter, etc., and you want to encourage participation, or minimize trolling, or reduce hate speech, you have to have control of the end-to-end experience. Labeling something a “like” will produce one sort of community, and labeling it a “favorite” will produce another.

We get hung up on “ease-of-use” in software, as if that were the only dimension on which to judge it. But social software architectures must be judged not on ease of use, but on the communities and behaviors they create, from the invite email to the labels on the buttons. If one sentence can make this much difference, imagine what damage your UI choices might be doing to your community.

BTW — I write a lot of notes like this over the course of the day as I process things on Wikity (though they’re usually shorter). It’s all there, and you might find something interesting. I post this here because it is just too important to leave on my unread wiki, but it’s only on the wiki that you’ll see the connection to the Analytics of Empathy or Reducing Abuse on Twitter.

Predicting the Future

I’m a person who generally doesn’t spend much time predicting the future. I’m more comfortable trying to imagine the possible futures I find desirable, and that’s mostly what I do on this blog: talk about the futures we should strive for.

But two and a half years ago, at the encouragement of the folks at e-Literate, and with the world just coming out of its xMOOC binge, I made some predictions about the future of edtech for e-Literate. I decided to put aside my 10 year visions of the desirable, and just straight up predict what would actually happen.

I spent about a week thinking through all the stuff I talk about and trying to be brutally honest with myself about the future of each item. I literally had a pad where I crossed out most of my most beloved futures. Most things I loved were revealed to be untenable in the short term, due to the structure of the market, the required skills, cultural mismatches, or the lack of a business model.

It was immensely painful. Still, when I was done, a few things survived. They weren’t like most people’s predictions of the time, and in fact ran against most of the major narratives in play as of December 2013.

Here were the predictions. I made three firm predictions under the title “Good Opportunities That Will Be Taken Seriously by the Powers That Be”. I’ll put aside one of these, “Local Online”, as I noted even at the time that it was a bit of a cheat: local online was a transition that had already happened; it was just that no one had noticed.

I’ll deal first with my two other major predictions, which ran counter to the narratives of the time.

  • At a time when asynchronous learning was king, I predicted the rebirth of synchronous learning.
  • At a time when Big Data was the rage, I predicted the rise of Small Data.

How’d I do?

Synchronous Online

In a time when the focus was on asynchronous and self-paced learning, I predicted a renaissance of synchronous learning:

Synchronous online is largely dismissed — the sexy stuff is all in programmed, individuated learning these days, and individuated is culturally identified with asynchronous. That’s a mistake.

I went on to describe how the emergence of new tools based on APIs like WebRTC would make possible the merging of traditional synchronous learning sessions with active learning pedagogies, and how this would result in a fast-growing market, as it would address the needs of a huge existing population of currently underserved students. I compared the market for videoconferencing products to where the LMS market was on the eve of Canvas entering it: people believed the LMS wars were over, but in fact they had just begun, because Blackboard had treated the LMS as a corporate tool rather than an educational one:

Adobe Connect and Blackboard Collaborate are, I think, in a similar place. They are perfect tools for sales presentations, but they remain education-illiterate products. They don’t help structure interaction in helpful ways. I sincerely doubt that either product has ever asked for the input of experts on classroom discussion on how net affordances might be used to produce better educational interaction, and I doubt there’s all that much more teacher input into the products either. The first people to bother to talk to experts like Stephen Brookfield on what makes good discussion work *pedagogically* and implement net-based features based on that input are going to have a different pitch, a redefined market, and the potential to make a lot of money. For this reason, I suspect we’ll see increasing entrants into this space and increasing prominence of their offerings.

Suggested tag line: “We built research-driven video conferencing built for educators, and that is sadly revolutionary.”

I don’t know if you can remember how unpopular synchronous was in January 2014, but contemporary takes ranked it somewhere between Nickelback and Creed as far as coolness goes.

So where are we today? Well, WebRTC is propelling a billion dollar industry. Blackboard Collaborate got its first refresh in a decade in 2015 (based on a WebRTC purchase they made in November 2014). Minerva, the alt-education darling, released its platform later that year, which was based on synchronous video learning.

And today, we find an extended article in the Chronicle about the surprising new development in online education: the rebirth of synchronous education, the hottest trend in learning right now. The reasons for it?

What’s giving rise to the renewed interest in more-formalized synchronous courses is that the technology for “high-touch experiences” in real time is getting more sophisticated, says Karen L. Pedersen, chief knowledge officer at the Online Learning Consortium, a nonprofit training and education group. Institutions are catching up to their professors, and tools are now widely available that let professors share whiteboards simultaneously or collect comments and on-the-spot poll results in real time.

The article goes on to explain that what makes the difference is that newer tools have made it possible to pair traditional synchronous classes with active learning.

I have some ambivalence about where this will go; as mentioned in the intro to this post, these were predictions, not my top desired futures. Opportunities. And opportunities can be perverted. But this was surprisingly on target.

Small Data

At the height of Big Data madness, I predicted the rise of small data products:

Big Data is data so big you can’t fit it in a commercial database. Small Data is the amount of data so small you can fit it in a spreadsheet. Big Data may indeed be the future. But Small Data is the revolution in progress.

Why? Because the two people most able to affect education in any given scenario are the student and the teacher. And the information they need to make decisions has to be grokable to them, and fit with their understanding of the universe.

Small Data was a relatively new term at the time the prediction was made. The Wikipedia page for the term was actually birthed on January 2, 2014, about the same time I was writing the post, and looking back now I only see a smattering of uses of the term in 2013. I was at the time reading the wonderful critiques of Big Data by writers like Michael Feldstein and Audrey Watters and thinking through the question, if not “Big Data” then what?

Then in Spring of 2013 I saw a presentation by the local school district on their use of data. The head of their operation said the most useful data for them had been the “One-F” test. They would just compile the grades of the students in all their classes and look for students that had an F in one subject but A’s and B’s in others. Then they’d go to the student and say — look, you obviously can do the work in other classes, what’s happening here? And they’d go to the teacher and say hey, did you know this student is an A student in their other classes — what is going wrong in this class?

And the reason why it worked, they said, was that you could talk about standard deviations or point-biserial correlations all day, but it would never make sense to the people whose actions had to change. People could understand the “One-F” metric, though. It wasn’t a p-valued research finding: it was a clue, understandable by both teacher and student, that something needed investigating, along with a bit of guidance on where the problem might be and how to address it. And that — not research-level precision on average behavior — was where the value was.
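As a sketch, the “One-F” rule is simple enough to express in a few lines of code. The gradebook format here is a hypothetical illustration (the district’s actual process wasn’t described in this detail); the point is that the metric fits in a spreadsheet and in a teacher’s head.

    # Flag students with exactly one F who earn A's and B's everywhere else.
    # The gradebook structure below is a hypothetical illustration.
    gradebook = {
        "student_1": {"English": "A", "Algebra": "F", "Biology": "B", "History": "A"},
        "student_2": {"English": "C", "Algebra": "D", "Biology": "C", "History": "B"},
    }

    def one_f_students(gradebook):
        flagged = []
        for student, grades in gradebook.items():
            fs = [course for course, grade in grades.items() if grade == "F"]
            others = [grade for grade in grades.values() if grade != "F"]
            if len(fs) == 1 and all(grade in ("A", "B") for grade in others):
                flagged.append((student, fs[0]))
        return flagged

    print(one_f_students(gradebook))  # [('student_1', 'Algebra')]

Nothing here requires a statistician; the output is a short list a counselor or teacher can act on.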

And so it was really Lisa Greseth, the IT head of Vancouver Public Schools at the time, who showed me the way on this. “Small Data” seemed to encompass this idea — it was theory-informed data collection. It was data as a clue for action. And most importantly, it was data meant to be understood, in its raw form, by the students and teachers involved.

How’d this prediction go? Pretty well. In the two and a half years since, there’s been an explosion of interest in small data. Here are the first eight results from a Google search on “small data education”:

  • The Washington Post, May 9, 2016: ‘Big data’ was supposed to fix education. It didn’t. It’s time for ‘small data’
  • EdWeek, May 16, 2016: Can Small Data Improve K-12 Education?
  • InformationWeek, Nov 24, 2015: McGraw-Hill Education’s chief digital officer has driven the company’s effort to leverage small data to improve student outcomes.
  • Helix Education, Oct 22, 2015: Big and Small Data are Key to Retention
  • Portland Community College, Mar 9, 2015: Distance Education: Using small data
  • Pando Daily, March 9, 2014: The power of small data in education
  • Center for Digital Education, Jul 1, 2015: 7 Benefits of Using Small Data In K-12 Schools
  • Times Higher Education Journal, Jul 1, 2015: The Power of Small Data

The prediction, of course, was about the introduction of “small data products”, and there has been growth there too. McGraw-Hill, for example, is pushing a small-data focus in its Connect Insight series. In many ways, this is a return to a data focus that existed before the Big Data madness: small, teacher-grokable data points collected for a specific purpose. And though McGraw-Hill is the one calling it “Small Data” explicitly, it is the direction most products seem to be re-exploring after the crash of Big Data hype.

By the way, I still believe Big Data has a place, applied to the right problems. It just wasn’t the place people were predicting two and a half years ago. Maybe I’ll save thoughts about that for a future prediction post.

Other Predictions

I had a category for things that I thought would develop but mostly remain under the radar, without broad institutional adoption. I put the return of Education 2.0 (blogs, wikis, etc.) in there, as well as “privacy products”. I think I was more or less right on those issues. In Education 2.0 we’ve seen real growth, particularly with Reclaim Hosting’s efforts, but it’s still off the institutional mainstream for the moment. On privacy products there has been even less movement than I expected, though the recent arrival of the Brave browser and the increasing use of personal VPNs provide some useful data points.

I did make the brave, and completely wrong, prediction that Facebook had peaked, thinking that many of its features could be supplanted by OS-level notification systems. Looking back on this prediction I learned something about making predictions: don’t make predictions about things you don’t use, at least not without observing closely how other people use them. My use of Facebook at that time was limited to a quasi-monthly visit.

So what was the lesson learned there? In the time since, as I’ve worked on Wikity and Federated Wiki, I’ve come to a greater understanding of what Facebook provides people, almost invisibly. And I have to say, paired with my prediction from 2014, it has really demonstrated to me that a lot of what people build to “replace Facebook” (including things I build) doesn’t really replace what Facebook gives people. If you look at Facebook and the rise of Slack, you start to realize that maybe centralized control of these platforms is key to the sorts of experiences people crave. It may be that you can’t make a federated Facebook any more than you can make an alcohol-free bar.

I’m not saying that many things can’t be federated. But I have a new appreciation for why they aren’t. (And, as expected, it’s probably this failure of prediction that is most useful to me at this point).

Anti-Predictions

Finally I made some anti-predictions about hyped trends of the time that I believed would go nowhere. Here I predicted that Gamification and Mobile Learning would crash and burn.

I turned out to be largely correct. Gamification seems to be entering its death throes, as it is really just rehashed behaviorism, with the dubious distinction of being even less self-reflective than behaviorism. (The “good” parts of “gamification” are really just learning theory — scaffolding, feedback, and spiral designs come from Vygotsky, Bruner, and others, not Atari).

More interestingly, my prediction about mobile came out more correct than I imagined. As predicted, we’ve gone through the iPad optimism of 2013 and 2014 to find that, unsurprisingly, learning and creating are not really mobile endeavors. Deep learning, it turns out, tends to be an activity that benefits from focused time and rich input systems. (We tried to tell you). So as we watch the iPad one-to-one programs crash and burn, let me revise my previous claim that Education Analysts Have Predicted Seven of the Last Zero Mobile Revolutions.

They’ve now predicted eight of them.

Conclusion

I don’t know. I feel like this is a pretty good record. The Facebook prediction was arrogant and misplaced. I am seriously contemplating that error at this point, hoping for some insight there.

Most of the rest of the predictions were arrogant as well, but came true anyway.

What was behind the right and wrong predictions? There’s no overall trend, but the Facebook failure is instructive when put next to the other predictions.

The key in all these things is to try to truly understand where the value in the current system is, as well as what the true pain points are, and then to imagine technological solutions that address those pain points without taking away the existing value of the system.

  • Synchronous Online manages to preserve valuable elements of synchronous learning while addressing its main problem: feelings of isolation and disengagement.
  • Small Data builds on the strengths of a system grounded in the intuitions of the teacher rather than the data analyst, and works backwards from the teacher’s needs as a consumer of data.

Things that don’t take off tend to misunderstand central features as flaws. The iPad misunderstood the rich input systems of the laptop as a hindrance rather than a benefit. And its “benefit” of being a “personal” device didn’t map to a classroom where devices weren’t personal, but constantly swapped between students and classes.

Likewise, the centralization of Facebook turns out to be one of its great features: people are actually craving more filters, not fewer, for the information they consume, and they’d prefer to stay in a standard environment rather than venture out onto the web for most things. Plus, in the two and a half years since I wrote this, we’ve seen what has happened to the notifications panel on phones: it’s a Tragedy of the Commons. With every app now pushing messages into the notifications panel, I can’t go to it without finding it littered with thirty or forty ridiculously mundane “updates” from 18 different apps, all clamoring for my attention. Facebook’s centralized, privatized ownership of its newsfeed allows it to reduce noise in a way that federated systems have trouble doing.

The biggest blind spot tends to be our own experience. I was able to see the mobile mismatch because it matched my own experience as a learner. I couldn’t see the strength of Facebook because I don’t *want* the world to like the things about Facebook that it so obviously likes, and I never should have predicted anything about it until I understood its present value to people.

On a personal note, going back through this reminds me that I should probably try to predict more. My tendency is towards futurism, unfettered by reality, and I remember how painful the process of trying to truly predict things was. But the truth is, if you can dredge up some ruthless honesty, you can see what the likely routes forward are. That’s not quite as fun as advocating for what should be, but it’s probably a useful skill to develop.