Simon’s Watchmakers and the Future of Courseware

Herbert Simon, a Nobel Laureate known for his work in too many areas to count, used to tell a story of two watchmakers, Tempus and Hora. In the story, Tempus and Hora make watches of similar complexity, and both watches become popular. But as one watch becomes popular, its maker expands and becomes rich; as the other becomes popular, its maker is driven slowly out of business.

What accounts for the difference? A closer look at their designs reveals the cause. The unsuccessful watchmaker (Tempus) builds his watch as a single assembly of a thousand parts; for the watch to hold together, all of these must be assembled in one continuous effort, and interruptions force him to start over again from scratch. For a watch to be finished, the watchmaker needs a large stretch of uninterrupted time.

The other watchmaker (Hora) has chosen a different model for her watch: she uses subassemblies. So while there are a thousand pieces, she can complete a subassembly of 10 or so pieces and put it down without it falling apart.

Simon actually gets mathematical at this point (italics mine):

Suppose the probability that an interruption will occur while a part is being added to an incomplete assembly is p. Then the probability that Tempus can complete a watch he has started without interruption is (1 − p)^1000 – a very small number unless p is .001 or less.

Each interruption will cost, on the average, the time to assemble 1/p parts (the expected number assembled before interruption). On the other hand, Hora has to complete one hundred and eleven subassemblies of ten parts each. The probability that he will not be interrupted while completing any one of these is (1 − p)^10, and each interruption will cost only about the time required to assemble five parts.

Now if p is about .01 – that is, there is one chance in a hundred that either watchmaker will be interrupted while adding any one part to an assembly – then a straightforward calculation shows that it will take Tempus, on the average, about four thousand times as long to assemble a watch as Hora.

From The Sciences of the Artificial, pp. 188–189
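Simon's arithmetic is easy to check with a quick sketch. One simple way to model it (an assumption of this sketch, not Simon's exact bookkeeping): each part-addition costs one time unit, each addition is interrupted with probability p, and an interruption scraps the whole in-progress assembly. Under that model the expected cost to finish an n-part assembly has a closed form, and the Tempus-to-Hora ratio lands in the low thousands for p = .01, the same order of magnitude as Simon's rougher four-thousand estimate:

```python
def expected_cost(n, p):
    """Expected number of part-additions to complete an n-part assembly
    when each addition is interrupted with probability p and an
    interruption scraps the assembly (restart from zero parts)."""
    q = 1.0 - p
    return (q ** -n - 1.0) / p

p = 0.01
tempus = expected_cost(1000, p)    # one monolithic thousand-part assembly
hora = 111 * expected_cost(10, p)  # 111 stable subassemblies of ten parts
ratio = tempus / hora              # on the order of a few thousand
```

The exact multiplier depends on how you cost an interrupted addition, which is why the sketch and Simon's figure differ; the qualitative point, that subassemblies change the economics by orders of magnitude, is the same either way.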

The point here is not about multitasking, but about evolution. We tend to see emergent complexity as the accumulation of small deviations leading to new forms of life, thought, or practice. But that’s not enough. If we must get 1,000 changes or features in place to create something worthwhile, we’re lost from the start. Changes that require these sorts of vast sequences are unlikely to occur.

Instead, evolution must proceed through a series of stable, intermediate forms, via a pattern Simon calls “aggregation of subassemblies.” You can’t get from nothing to the human eye unless there are stable and useful states between nothing and the eye as we know it, and the structure of your eye is as determined by those intermediate states as by anything else. We’ll be stuck forever with a blind spot in our field of vision because there’s no desirable set of intermediate states that would result in a new, blind-spot-free system.

Writing in 1961, Simon saw this issue as one of the more underappreciated elements of how change happens, any sort of change. To some extent we’ve caught up to Simon these days.

For example, programmers and designers have embraced emergent design:

[Image: Henrik Kniberg’s sketch of iterative, emergent product design, moving from skateboard to car]

Today, policy analysts talk about “path-dependence” — the reason why it’s so hard to get to a European health care system in the United States, for example, is precisely this problem of stable, intermediate states.

And just the other day I was flipping through the Andreessen Horowitz slide deck on network effects, and what did I see in there but this:

[Slide: “single player mode,” from the network effects deck]

A non-network mode for a network product can be seen as another example of a stable intermediate state. In other words, single player mode is a form of sub-assembly.

I should caution people: getting the right intermediate forms is of course crucial, as is understanding the previous forms on which you are building. If we take the Henrik Kniberg picture up-page as an example, I’ve seen too many cases where Silicon Valley has offered educators a skateboard in exchange for the aging car they are driving. Emergence is more than incrementalism, and it’s one of the reasons why a deep understanding of the history of problems is necessary to solve them. Hora with the wrong sub-assemblies doesn’t do much better than Tempus.

Still, Simon’s thinking on these issues was very much ahead of his time but very much of our own time. The move from the industrial age to the information age has been a transition in part from Tempus to Hora, from structured design to aggregation by sub-assembly. We think like this now because the complexity of our current systems demands it.

OER and the Drake Equation

Of course, as Gibson once said, the future is here, it’s just not evenly distributed. We’re an information age economy that still has many bubbles of factory thought.

It’s been one of the great ironies that the information age’s biggest industrial age bubble has been in the sale of information. As an example of this, the textbook company Pearson Education recently penned an op-ed attacking the idea that open educational resources (which are produced on an emergent, open-source model) could ever compete with the factory precision of a company like Pearson.

My response to such a column would have been that this was like arguing in 1972 that we would never make it to the moon. The tipping point for OER in the classroom was a year or two ago; much of what happens now is inexorable, just the results of the past decade’s actions playing out.

Still, others were more patient and less snarky than I. David Wiley wrote a fascinating rebuttal that summarized decades of careful thinking in the OER space about how open materials could compete with closed materials, and, just as importantly, what “compete with” means and how it might be judged.

In that essay (and it really does rise from the level of “post” to “essay”) Wiley attempts to explain how scale influences and informs the possibilities of open production. To do that he uses the Drake equation, an equation astronomers use to estimate the likelihood that advanced civilizations exist on other planets.

I won’t go into a detailed discussion of the equation here, but the point of Wiley’s example is this. Most of the equation seems to indicate that life is unbelievably improbable: only a fraction of stars have planets, only a fraction of those planets can support life, only a fraction of those planets would have developed life, and so on.

It seems pretty hopeless until you plug in what all these fractions of fractions are fractions of. In the Drake Equation, everything starts with the rate of star formation. And when you plug that in, things change. There are about 20 sextillion stars in the universe, which written out with zeroes looks like this:

20,000,000,000,000,000,000,000

When you start with that in your numerator you can do a lot of slicing and dicing, and you still end up with a lot of life in a lot of places. So while the likelihood of any given star producing an advanced civilization is nearly indistinguishable from nothing, the current estimate is that advanced species with the capability to transmit radio waves or other communication across space have existed in about 10 billion other instances.
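The mechanics of the argument are simple enough to sketch. The filter values below are invented for illustration (they are not Drake’s, Wiley’s, or anyone’s real estimates); the point is only that fractions of fractions of a 2 × 10²² numerator still leave an enormous count:

```python
stars = 2e22  # roughly 20 sextillion stars, per the estimate above

# Illustrative filter values, made up for this sketch.
filters = {
    "star has planets":          0.2,
    "a planet could host life":  0.001,
    "life actually arises":      0.01,
    "intelligence evolves":      0.001,
    "radio technology develops": 0.01,
}

survivors = stars
for description, fraction in filters.items():
    survivors *= fraction  # each filter keeps only a tiny fraction

# Even after five brutal cuts, the count remains in the billions.
```

Slice and dice however you like; with a numerator that size, pessimistic filters still leave billions standing.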

Ten billion. (That startles me out of edtech thinking for a minute, the beauty of that. Think about that. 10 billion civilizations rising and falling throughout the history of the universe, and our experience here just the tiniest fraction of a fraction of this larger story. It’s one of the most beautiful things the mind can contemplate).

But back to OER. David’s point is that the chance of any one faculty member producing their own end-to-end course out of free materials that they stitch together is admittedly quite small, but when you look at the numerator (the vast amount of course creation that happens anyway each semester in a discipline) it’s not only possible that these open works will emerge — it’s highly probable, even when you plug in fairly pessimistic filters.

Hacking the Drake Equation

Once these works are pulled together initially, David argues, the filters become far less pessimistic:

Perhaps most importantly, the major efforts made by OER workhorses like Khan or Sousa catalyze additional, incremental work by others over time. As Benkler has explained, the smaller the contribution to be made, the more people there are who will have the time and inclination to contribute (c.f. Shirky’s idea of cognitive surplus).

This creates the opportunity for asynchronous, uncoordinated, incremental, continuous improvement that harkens back to Eric Raymond’s notion that “Every good work of software starts by scratching a developer’s personal itch.” Individual instructors make no improvements, small improvements, or large improvements to existing OER based on their own needs and available resources. Some subset of the group that makes changes share those changes back with the community. This kind of “snowball development” is a key characteristic of the most interesting and effective OER.

The future of the sustainable development of effective OER will be characterized by stigmergy. Stigmergy is the watchword for the next decade of OER.

I do think stigmergy, the self-organizing principle we find at work in so much of the social web, is the watchword for the next decade of OER, even if we don’t present it under that name. And it has to be that way, because as we get beyond those efforts with a large numerator (Introductory Psychology, Anatomy and Physiology, U.S. History) we have to start to optimize those denominator filters to make the whole thing work.

As an example, consider the issue I deal with on a daily basis. Many of our faculty who want to get off of paid textbooks teach advanced or niche courses. I had one faculty member who teaches State and Local Government who wanted to ditch their textbook. But State and Local Government is a course only taken in a particular political science sequence, which means there’s a small number of people teaching it. (In the terms of the Drake equation, the subject has a low “R”.) The chances of an end-to-end State and Local Government textbook emerging any time soon are fairly low.

But we can fix this. The current educational materials ecosystem doesn’t exist in orbit around Alpha Centauri. It’s right here, right now, and we can tweak the environment to make conditions more favorable to intelligent life.

Still, while stigmergy is a powerful model, I’d propose that a simpler lens on how to make change is provided by Herbert Simon’s watchmakers, which provides some powerful insights into how to move forward in OER (and into how we are already moving forward).

Simply put, the textbook as produced by Pearson and others (and even as produced by many OER publishers) is Tempus’s Watch. Rather than being an aggregation of sub-assemblies, it’s an end-to-end treatment of a subject meant to be tightly coupled to a course sequence. Because of this, the emergence of new soup-to-nuts textbooks tends to cluster around subjects with a high numerator. Like Tempus, if you are working end-to-end you need a lot of tries (and failures) before you get a complete, usable product.

But the future does not belong to Tempus, it belongs to Hora. Rather than large, complete works that then accrue stigmergic change, we should look forward to a shift towards evolution of sub-assemblies which are composed and recomposed into larger works.

It’s this shift away from the industrial textbook model that will radically change (and is already radically changing) what is possible with OER.

Building Hora’s Watch

I’ve spent eight years of my life looking at why the bottom-up reuse environment we envision for OER never quite emerges. And honestly, the reasons are complex, and, somewhat ironically, amount to a path-dependence problem of their own.

In short, everything from our technical architecture to our file formats to our institutional policies assumes a Tempus-like product. If you think the problem is simple or already solved, you probably haven’t done your homework.

The subassemblies of Hora’s watch must be both stable (that is, presently useful) and intermediate (that is, providing clear proximate paths for both evolution and assembly into different projects). Our architectures tend to emphasize stability (present usefulness) over the intermediate nature of such things.

As an example, we often choose formats which privilege sharing (present usefulness) at the expense of remix (intermediacy). I first encountered this when looking at OpenCourseWare (OCW) in the mid-aughts. In the quest to share things in “open” formats, providers had shared PowerPoints as PDFs, a widely readable format that increased shareability at the expense of remix. In one fell swoop, this decision transformed what could have been an emergent pattern into yet another form of publication.

Once tuned into this, however, you see this clash between present stability and intermediacy all over the web. Old timers will remember that early internet video was often circulated via email or downloadable MPEG files (Remember “All your base”?). That solution favored intermediacy (the ability to edit and remix the file) at the expense of present usefulness (you might not have the proper player to play it). YouTube changed this, making playing videos trivial, and radically increasing shareability, but at the same time killing remix.

Here are other examples:

Intermediacy | Stability
Usenet | Webpage
Wiki | Content Management Systems
MP3s | Spotify

One of the things you’ll notice is that we have often moved away from intermediacy for a variety of good and less than good reasons. But that’s for another novel.

Thinking Like a Photographer

People sometimes say that I’m wrong about reuse, that they reuse images *all the time*.  To which I say: Exactly!

You see, for a variety of historical reasons images have remained intermediate (and these reasons date all the way back to HTML’s special include syntax for them). The use of images on the web is actually a pretty good guide on how to make intermediacy happen:

  • Images can take on many meanings in different contexts.
  • People take images with this in mind. E.g. not every picture I take of a building has me in it.
  • I can, with a couple clicks, get the original image directly from your server and have a copy of it up on my server (whereas if I want the raw pre-templated and processed text from your database, good luck).
  • I can transclude your image into my own work with a simple dialog box.

When we think about intermediacy, it’s helpful to think about the way people take pictures which then get included in a wide variety of contexts. My friend Alan Levine, for example, has his pictures reused all the time. He’s a good technical photographer to be sure, but check this out. Here’s a snippet of Alan’s photostream from Flickr, the tiniest sample of the thousands of photos he has.

[Screenshot: a snippet of Alan Levine’s Flickr photostream]

As I said, he’s got great technical talent. But more than that, he’s got an eye for the reusable, remixable image. Each image is complete in itself, but at the same time is framed for remix and reuse in a variety of contexts. I don’t know where that recycling bin photo will end up, but my sense is it captures a network of ideas and metaphors and associations that is going to allow it to drop into a slidedeck or blog post like it was created just for that use.

Since so much of what photography does is to feed into other media, this is a big part of what we mean when we talk about having an eye for the perfect image. Alan is in these images, surely, but he’s being quiet here, and leaving space for you, the reader or remixer to use them in ways that he has not imagined.

When it comes to writing, the primary artform for remixable writing is wiki, of which Wikipedia is an example, but certainly not the most instructive one. If you want to get a sense of what wiki can be, I usually recommend a trip to TV Tropes. Here’s an example of an article from that site, which discusses the tendency of TV, comic book, and film characters to have battles in the middle of rainstorms:

[Screenshot: the “Battle in the Rain” article from TV Tropes]

Note that the skill here in identifying and explaining the idea is similar to Alan’s photographic skill. Here, an idea has been identified which is linky and dense with potential connections. It’s big enough to be meaningful on its own, but sized so that it could support other writing as well.

The links here are where the action is. The trope is an example of the Empathic Environment trope, where nature mimics the moods of characters in the story. It runs counter to, but is related to, the Battle Amongst the Flames.

Wiki pages are some of the few truly hypertext pages on the web, designed to be part of a nonlinear reading experience rather than either a linear read or the hub-and-spoke model of blogs. Wiki writers, like Alan behind the camera, create work to scratch their own itch, but always with an eye for leaving space for future extension and remix. A good wiki article is like a well-conceived subroutine — it sees a general gap that needs filling, and fills it in a way that can solve more than the present problem.

(It’s partially because of this that I believe that anyone looking at the future of OER must start with a deep understanding of hypertext, from Vannevar Bush through Ward Cunningham; there is a deep and rich literature here that has already dealt with the problems we claim to be encountering anew. As just one example, Wikipedia is the largest digital education project in existence, and yet there are many in the OER movement whose understanding of wiki culture doesn’t go beyond a five minute summary. That’s nuts.)

(In)conclusion

I didn’t sit down to write about how we solve our OER problems, though having just passed 3,000 words in this post maybe I should have. There’s a host of things we need to do, both technically and culturally, to move OER into its next, emergent stage. Perhaps in some other venue I can expand on those.

For now, however, let’s just summarize the current situation:

  1. Early OER texts had a high “Drake Equation R”, and as such could emerge as nearly whole works, to then be iterated by others.
  2. As we move forward with OER, we will move more and more into “Low R” courses, where we are less likely to emerge whole.
  3. This necessitates a different approach (Hora’s watch) to OER than some of the approaches that are currently working. But neither technology nor culture is currently well-aligned with the approach we need to move to.
  4. At the same time, other models, such as Flickr libraries, Pinterest, and wiki writing show that the transition can be made.
  5. Maybe we should look at these, amirite?

I really just meant to write a short post here, and I’m surprised by the current word count. But let me know what you think in the comments.

 

Wikity 0.31 Released (Bug-fix for Quote Problems)

Wikity 0.31 is released. Download 0.31a here. New installers and curious tourists will also want the Wikity Guide, and the 0.30 release notes. Usual disclaimers about free code people give you on the internet apply.

Wikity 0.31 is a bug fix release to 0.30. There is no new functionality, and it fixes only one small but incredibly annoying bug: Titles with quotes in them were not forking properly. Thanks to George Veletsianos, Dan Blickensderfer and others who pointed me to the solution, and to Mike Goudzwaard who identified the initial problem. You’re all amazing.

There is also a bit of overdue code cleanup. Through sheer force of will, the clarity of the code has been moved from “crime against humanity” level to “national tragedy” level. We continue to work towards the day when we can proudly hit “small town kerfuffle” level, but appreciate your patience as we strive towards that lofty goal.

People have asked when I’m putting this on GitHub, and the answer is June 1st. Until then, watch this blog.

 

To Make Content Findable, Put It Everywhere

I’ve mentioned before that the impulse many people have about OER — that we need a central high visibility location where we can put ALL THE OER and everyone will know to go there — is flawed. We know it’s flawed because it’s failed for 15 years or so (more if you count early learning object attempts).

If you want someone to find something, don’t put it in one place — put it everywhere.

This graphic of the Buzzfeed network reminds me of that fact. Buzzfeed is one of the most recognizable destination sites on the web. If anyone could survive making people come to them, it would be Buzzfeed. And what does Buzzfeed do?

[Graphic: the network of platforms Buzzfeed publishes to]

They put it *everywhere*. They publish on something like 30 platforms, and 80% of their views come from places other than Buzzfeed.

Now let me ask you — if Buzzfeed can’t rely on a central distribution site, what chance does OER have?

Make copies of good stuff, lots of copies. Put them everywhere. Duplicate incessantly, host redundantly, fork recklessly. Then, and only then, will we have solved the discoverability problem.

Deep and Lovely

It feels a bit silly sharing reflections on Prince when so many people have done it better. I’m particularly moved by the glimpses we have gotten over the last day of Prince, the person, a guy who loved to laugh and saw his mission in life as helping others. I’ve loved the meditations on both his visual style and his musical style, the thoughts on his approach to technology, the sharing his epic guitar solos, and the multiple re-watchings of what will forever be the greatest halftime show in the history of football.

What I’d like to add to all of this, as a bit of a music geek, is something about his lyrical style. Why it grabbed me so much as a teenager, and why it still pulls me in today. I’ll be quick about it.

This is the beginning of “Manic Monday,” a song that, like so many others, he gave away:

Six o’clock already
I was just in the middle of a dream
I was kissin’ Valentino
By a crystal blue Italian stream
But I can’t be late
‘Cause then I guess I just won’t get paid
These are the days
When you wish your bed was already made

In some ways, it’s the quintessential Prince lyric: his obsession with dreamlife and movies (“I was dreaming when I wrote this, forgive me if it goes astray…”), the workaday world (“I was working part time in a five and dime, my boss was Mr. McGee”), and the perpetual feeling that we’re late for something, be it our day job or Judgment Day.

But it’s the intersection of these things that matters here, and it’s that last line that has always stood out to me:

These are the days
When you wish your bed was already made

In that line the experience — lateness, messiness, never having quite enough time — becomes universalized as something surprisingly deep & lovely. It’s not trivialized, it’s not a gimmicky Bruno Mars “Lazy Song”. It’s permission to take ourselves seriously, to see and embrace the romantic under the mundane, to bring just a smidgen of that dreamlife into our waking life.

You see this all over his lyrics. In “Raspberry Beret”, after setting up a cinematic scene, he invites us in:

The rain sounds so cool when it hits the barn roof
And the horses wonder who U are
Thunder drowns out what the lightning sees
U feel like a movie star

I could go on — “When Doves Cry” begins by asking us, as the addressed lover, to imagine a dreamlike scene (“Dig if you will, a picture / of you and I engaged in a kiss”). In “Let’s Go Crazy” we’re told if the elevator tries to bring us down, “Go crazy, punch a higher floor.”

I know that for Prince these lyrics and this distinction were (or became) ultimately religious. The higher floor is heaven, of course, and the basement is hell.

That piece of it never spoke to me, but it didn’t need to. Because in the Prince mythos the devil was always the one telling you to ignore the beauty in the mundane, to get lost in the overwhelming details of day-to-day life. And heaven was seeing that if you could transcend that, if you could take your experience seriously, if you could get out of your head and see the bigger picture and the larger arc, your life was as deep and lovely as any film or dream.

It’s a message that I needed as a teenager, and maybe one we all need even more as we approach the day-to-day of middle age. I thank Prince for delivering that message to me when I needed it as a kid, and I’ll try to keep reminding myself of it as I travel through this “thing called life.”

Rest in peace, Prince.

 

 

 

Wikity Version 0.3 Released

I’m releasing Wikity 0.3 today. There’s some neat updates to this version. Current users can replace your old theme directory with this one. New users should read the Wikity Guide and follow the install directions there, but use this newer folder (we’ll eventually get the documentation updated).

The biggest change is the “pathways” feature. The pathways feature is intended to eventually be an implementation of Vannevar Bush’s “trails” functionality in the Memex, and we get closer to the inspiration here. When users are in “catalog view” they can construct their own trails (we call them “paths”) using the card checkboxes:

[Screenshot: the card checkboxes in catalog view]

These paths create a special card that will lead you through articles in sequence:

[Screenshot: a path card listing articles in sequence]

Order can be arranged and edited in both catalog view and card view.

What’s particularly exciting is that we’ve allowed you to more easily create paths out of other people’s content as well. Simply go to someone else’s catalog and select the cards you want, and then point the dialog box to your own site. Not only will the path be copied to your site, but — here’s the really neat thing — all the associated cards will be copied to your site as well. So the pathways feature is actually also a mass-forking mechanism.

We’re going to go further on Pathways pretty soon. For example, I want simple copying of other people’s paths, just like Bush would have wanted.

Accessibility

We changed the catalog view to be more accessible. Click-to-edit functionality was cool, but not “tabbable” the way that web accessibility guidelines demand. So we have an edit button now that is in the tab order. The click-to-edit functionality is preserved as an alternative with a twist — clicking to edit will open the card at the top of the card, using the button will drop you at the bottom of the card. This meshes well with the two operations that we find most people engage in — writing the abstracts at the top of the cards and the associations at the bottom.

Bug Fixes

We did a batch of bug fixes. The people that would notice them are the people who asked for them, so I won’t deal with them all here. To my knowledge we’ve fixed all the bugs that people have asked about — if using Wikity you find we’ve missed something let us know.

UPDATE: Here’s a video on how to use Pathways —

 

Retweeting and Comprehension

More fascinating research out of China on social media, this time directly related to my obsession with the Garden and Stream models of social media.

Roughly, the finding of the study is this: when readers have the option to retweet a message their comprehension of the message falls significantly. The researchers found that:

…“repost” did not promote but hindered participants’ online information comprehension. Messages that were reposted were more likely to be understood incorrectly than correctly. This finding has overarching implications given that the majority users of micro-blogging sites only read and repost others’ messages (Fu and Chau, 2013 and Kaplan and Haenlein, 2011). …

How much more incorrectly? Students in the repost condition got *twice* as many comprehension questions wrong on the messages they read as the control group, which was presented the exact same messages, with no option to repost.

There are some caveats here — the participants were reading tweet-sized messages, but only had 300 ms apiece to read them. That’s pretty tight, and it is meant to test their model, which assumes that this is a resource contention issue — if you have to be asking yourself “Should I retweet this?” at the same time you are reading the tweet, your cognitive resources are split, and comprehension suffers.

At the same time, this matches the experience that many of us have on Twitter, where one half of our brain is on a loop asking “Is this retweetable?” while the other half deals with the mundane task of, you know, understanding what we are looking at.

The study presents an even more stunning finding (and one I am still not sure I am reading correctly). People in the reposting condition, when presented an offline document after reading and reposting, still really suck at comprehension:

For the offline reading comprehension test, participants were first asked to read an article, “More than a feline: The true nature of cats,” from New Scientist. The article was translated into Chinese with a total of 2176 characters. A comprehension test was compiled based on this article, including 11 multiple-choice questions that all had excellent discrimination values in a pretest. Participants’ scores on the test (0–11) were used as the index of offline information comprehension.

The results? People in the no-reposting group did 50% better on the comprehension test, even though the test was on an offline document with no reposting option.

Participants in the no-feedback group (M = 5.95, SD = 1.23) outperformed those in the feedback group (M = 4.05, SD = 1.99) on offline reading comprehension, t(39) = 3.63, p = .001, d = 1.15.
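For what it’s worth, the reported effect size checks out against the means and standard deviations. A quick pooled-SD calculation (this simple pooling assumes roughly equal group sizes, which the study’s design suggests) reproduces the paper’s d of 1.15:

```python
import math

# Group statistics as reported in the study
m_no_repost, sd_no_repost = 5.95, 1.23  # no-feedback (no-repost) group
m_repost, sd_repost = 4.05, 1.99        # feedback (repost) group

# Pooled standard deviation, assuming roughly equal group sizes
pooled_sd = math.sqrt((sd_no_repost**2 + sd_repost**2) / 2)

# Cohen's d: standardized difference between the group means
d = (m_no_repost - m_repost) / pooled_sd  # comes out to about 1.15
```

A d above 1 is a very large effect by the usual conventions, which is part of what makes the finding so striking.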

The authors hypothesize that this as well is due to cognitive depletion of resources — the mind, exhausted from dual-tasking through the repost activity, has less to give the final task.

I find these experiments interesting, even if they are only the beginnings of real research on these issues. From my perspective, I wonder if the cognitive resources issue is only part of it — as I’ve said before in my presentation on the Garden and the Stream, the nature of the stream is it pushes you away from comprehension and into rhetoric. Rather than seeking to understand, the denizen of the modern Twitter or Weibo feed seeks to sort incoming information as right or wrong, helpful or unhelpful, worth retweeting or not retweeting, worth getting into a righteous rage about or not.

Once the information is sorted as foe or ally, witty or dull, etc., we are done. At its most extreme, the stream replaces comprehension with classification, with each decision forming an irreversible ruling on the item, never to be revisited, recombined, reorganized, or rethought. In this race we retweet articles two paragraphs in, and vilify links we haven’t even clicked through. It doesn’t just compete with existing cognitive resources — it perverts the questions we ask of what we read.

That’s not in this study’s data, of course, but I think it is consistent with its findings. I look forward to more work in this area.

 

Answer to Leigh Blackall

Leigh Blackall, who was an early advocate of using wiki in education and proponent of projects like WikiEducator, asks the million dollar question:

You started out describing a project where you are setting up a WordPress wiki template, to host what I’m presuming to be student-generated-content activity. Without knowing any of the details that led to that decision, but basing it on years of being part of similar discussions, my first question is why you didn’t try and incorporate existing online project spaces and work out how a mutually beneficial arrangement could be found between the student learning objectives and the goals of the existing project being adopted. Wikipedia and Wikibooks springs to mind, but there are many others, they are just my preferred projects.

On this basis, I wondered if your Wikity project resembled the 90s projects being referred to, at least in the conceptual approach, where the mindset being encouraged might be the same as the mindset that lead the 90s developments.

On the other hand, the reverse could be just as true!

Either way, I do think your touching on an excellent principle of development. More detail on the purpose of the Wikity project would probably help me answer my questions.

This is a point Leigh has been making for a long time — the way we tend to do wiki in classes is not very wiki-like, because

  • it does not contribute to future work by others
  • it does not build on previous work

Instead, our wiki projects live and die in the glass terrarium of our classrooms. Others may see what we do, but no one can add, extend, correct, or argue, and when the class is over the contribution begins to decay until the plug is finally pulled on the wiki and it is gone.

However, the other option — to do all work in the common space of existing communities — has proven to be just as problematic. Consider the set of articles I’ve written on the opioid crisis on my personal Wikity site:

Opioid Articles

This, to me, is what initial thinking on an issue looks like as a student starts to explore and connect. We follow the wiki convention (pre-Wikipedia) of giving ideas, facts, and data a name so we can connect them from many different angles. We provide for organic discovery, connection, and extension.


Can one do this in Wikipedia? No. You’d last about 40 minutes after posting the relationship of the JCAHO and the epidemic before you’d end up in an edit war with someone or other about whether it was noteworthy or germane to the article you were posting to. Wikipedia is a mature project, and is not really a great place for personal emergent knowledge. I say this as a person who has posted many an article to Wikipedia, and consciously worked to counterbalance some of the demographic biases of Wikipedia — it’s a great place to post refined, well-organized and defended knowledge, but it’s a lousy place to explore a subject.

Add to that the fact that for your class you’re going to want to customize your treatment to the issue at hand, or to a local perspective or concern. You can’t do that in a general resource, because by its nature it has to be general.

I’ve struggled with this tension — we want to build on the work of others and have others build on our work, but at the same time, local concerns make it too hard to release control of our own work. For a while I thought the answer might be large cross-institutional wikis, but even those suffer from the same problems — at some point, someone must control the wiki, own the wiki, maintain the wiki.

Wikity, inspired by recent work of Ward Cunningham, sees the answer to the problem as federation. Individual groups create, maintain, and extend wiki pages (or “cards”), but these cards are, through the magic of an API, forkable to other sites. If people fork your stuff and improve it in ways that support your local aim, you can fork it back. If they fork it and take it in an unrelated direction, well, that’s OK too.

This allows people to build on and contribute to the work of others while still preserving their local aims and unique insights.
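To make the forking idea concrete, here is a minimal sketch that models each site as an in-memory dictionary of cards rather than a live API. The field names, card contents, and site names here are illustrative assumptions, not Wikity’s actual data model:

```python
import copy

def fork_card(card, source_site_name, dest_site):
    """Copy a card from one site to another, recording where it came
    from so later improvements can be forked back upstream."""
    forked = copy.deepcopy(card)  # the original card is never mutated
    forked.setdefault("history", []).append({"forked_from": source_site_name})
    dest_site[forked["title"]] = forked
    return forked

# Two sites, each just a dict mapping card titles to cards.
alice_site = {
    "Opioid Increase": {
        "title": "Opioid Increase",
        "body": "Notes on the rise in opioid prescribing, 1997-2002.",
    }
}
bob_site = {}

# Bob forks Alice's card, then adapts it to his local concerns.
fork_card(alice_site["Opioid Increase"], "alice.example.com", bob_site)
bob_site["Opioid Increase"]["body"] += " Local note: see county health data."
```

Because the fork is a copy with provenance, Bob’s local edits never disturb Alice’s original, and the same `fork_card` call can carry his improved version back to her site if she finds it useful.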

The “Federated Information Lifecycle” video is from my work with Ward on his federated wiki technology, but it remains the best overview for those who get wiki:

There’s a wealth of other stuff (again, mostly under the term “federated wiki”) that explains the thinking that eventually led to Wikity. Thumb through it, and let me know what you think.

The Missing Jury

I disabled my Facebook account yesterday. Don’t worry, this isn’t quitlit: I’m sure I will return to Facebook sometime in the future, and I’m not going to go off on some long self-righteous rant that I will have to walk back in a month.

But the reason I disabled Facebook, roughly, was because in this Democratic primary season it was:

  1. Making me dumber, and
  2. Making my friends dumber

It sucks to become dumber, but it’s also relatively painless, as the condition is in fact its own analgesic. It’s more painful to watch people you respect acting like idiots.

(And no, I’m not referring to the commentators on my “free college” Facebook post, since some of them read me here and may be wondering – you were all great. I’m talking about the never-ending streams of others).

Ultimately, Facebook began to remind me of that moment when a Thanksgiving dinner starts to go wrong, and if you’re smart you just disengage before you do any permanent damage.

There are plenty of people in the Democratic Party who have good reasons for supporting Hillary Clinton as the nominee. I’d say, broadly, that people who support Clinton have a “coalition” view of politics, and believe that (for the moment) our best chance at doing good will come from incremental actions by a broad body of people with differing opinions and aims.

On the Sanders side, the theory of change is best described as “populist”. Here the idea is that there already exists a silent majority that has been ill-served by the current regime, and that — if properly educated — would all desire and fight for the same set of solutions. You don’t need a coalition in this model, you just need to break through the media bubble and have everyone see that our different problems and concerns stem from a few fundamental factors, normally imposed by an elite.

The populist/coalition dichotomy is a fascinating one with a rich history, and we could all learn a lot by talking to one another about it. We could become better people, more effective problem-solvers, with increased self-awareness about our own motivations and beliefs.

But of course that is not what happens on Facebook. On that platform it is currently just an endless parade of outrage — people sharing articles that prove the point that their opponents are the most-awfulest-people-in-the-world, did you see this new outrage or article that proves we’re the victims, you’re the oppressors, and we’re actually even 10% more right than we *initially* thought (for a grand total of 163% right!).

To what extent do the current tools we use promote this sort of thing? It’s a thorny question. If you’re familiar with the research, you know we suck at this stuff even without these tools. We crave certainty to a fault. Confirmation bias is the rule, not the exception. Once we’ve made a decision and acted on it, explaining it to others makes us more cognitively rigid, not less. In fact, a major cognitive theory of the moment proposes that most of what we call reasoning was not developed for problem solving at all, but for persuasion.

Honestly, I could fill this page up with links on this. It’s depressing. My daughter even recently told me about a paper she read where birds (birds!) outperformed humans on a pattern-recognition task that involved confirmation bias. Ultimately your brain is designed to convince others to get the seed for you, and so is configured to present certainty to the outside world. The bird’s brain just wants to figure out the best method to get the damn seed.

At the same time I keep coming back to the social media we have and thinking there is something very unique here. On Facebook, people take positions, multiple times a day, on events that happened only minutes before. Most information that comes in (outside of the kitten-stuck-in-boxes videos) is immediately routed through the personal spin machine, pumped out, and committed to. Reactions by others sink their own stakes in the ground. The entire day is spent boundary-making.

In an ideal world, maybe this produces a good result — a prosecutor puts forward the best possible case for their side, the defender for theirs, the jury contemplates and decides, benefiting from the work of both. But just as the traditional reader has disappeared in the new media ecosystem (replaced by a reader/writer), so has the jury. There is no jury anymore — we’re all committed, doubled-down on whatever we wrote five minutes after we read something.

When I look at the structure of social media now, this is the big thing I see — it’s a future where every story and incoming piece of information is immediately tagged according to its usefulness to our ongoing argument and narrative and is then pumped out to a jury of our peers as evidence of our correctness. But the jury box is empty; the jurors all became lawyers long ago. They are off arguing their own cases, to other people on their legal team, a never-ending stream of self-righteousness streaming out to empty courtrooms for eternity.

Things will hopefully get better as the primary resolves — we’ll still be hopelessly fractured between right and left, but at least there might be some productive discussion on the left. Maybe. I’ll check back into Facebook in a month or two to find out. As I said, this is not quitlit, but it maybe is an argument for a pause.

——-

You might be interested in some related thoughts from Jon Udell back in January on David Gray’s Liminal Thinking work.


The Opioid Epidemic

I write a lot of things in wiki nowadays, and it’s good. I think better in there, and get away from the web’s echo chamber of endless reactions to reactions.

But occasionally wiki is a bit too under the radar. About a year ago I started following a thread of stories on opioid addiction in America. One thing led to another, one statistic to an even more shocking one, and it seemed amazing the story was not being covered more broadly.

I think it’s out in the open now, a bigger story, but for various reasons I want to link to some of that work here today, the place where people actually read.

The story, in short, is this. Pharmaceutical companies (in particular a company named Purdue Pharma) began to market new classes of opioid-based painkillers to people in the mid-1990s. Despite the fact that these were known to be highly addictive and prone to tampering, they continued to market them. Purdue Pharma mounted a massive effort pushing these drugs, and suddenly “pain management” was all the rage — you couldn’t go into the doctor’s office for a booster shot without being asked to rate your pain on a scale of red sad face to yellow happy face, and drugs were the follow-up should you answer wrong.

See Opioid Increase 1997-2002

Weird things happened in small and medium-sized towns all over the U.S. Suddenly there was a heroin epidemic everywhere at once, particularly in the Northeast, but other places as well. The assumption was that it was like previous epidemics, a cyclical wave of addiction, that occasionally pops up when younger folks are far enough away from previous waves to have forgotten what a heroin addict looks like when they hit bottom.

Except it wasn’t like previous waves. It was bigger, and weirdly ubiquitous. There seemed, apart from whiteness, to be very few predictors of who was becoming addicted. What was happening behind the scenes? People were becoming addicted through an entirely new route: their doctors. For the first time in history, the vast majority of heroin addicts were becoming addicted initially through pills prescribed to them by a doctor. When the prescriptions got cut off, they went to heroin, the cheaper alternative.

“There aren’t a lot of people saying, ‘Hm, heroin sounds like a fun drug to try!’ The people who are using heroin are people who have opioid addiction,” said Kolodny, who is also a senior scientist at Brandeis University’s Heller School for Social Policy & Management. “Some develop [a painkiller addiction] for taking drugs exactly as they were prescribed.” See Route to Heroin Abuse and 80% of Heroin Users Started with Painkillers

Even when addicts stayed with painkillers, they put themselves at huge risks, with opioid pills surpassing all other drugs in terms of death count. While this toll was increasing, Purdue Pharma was using big data to find and target doctors writing the majority of prescriptions — not to turn them in, but to market to them more heavily.


See Big Data and Oxycontin

And overdoses were not the only impact. In fact, far more people may be dying from the associated forms of liver disease and depression caused by addiction. The combined effects are so large that it has played a large part in reversing decreases in all-cause mortality in a number of demographics, a trend not seen in modern times. Though just one of the causes of increases in white mortality, the combined effect (along with alcohol and suicide) is so large that researchers say the closest thing they can compare it to is the AIDS epidemic in the 1980s and 1990s. Drug overdoses are now surpassing traffic accidents as a cause of death in the U.S. See Opioids, Alcohol, Suicide

In short, it’s the biggest health crisis in the U.S., created not by a bunch of curious kids, but by the pharmaceutical industry. It is touching all ages, all incomes. And yet I still find people who do not know this crisis exists, people who think they are alone in their struggle, people who blame themselves or others for a crisis that is as corporate-created as Deepwater Horizon. And as our friends in the activist community said so succinctly in the 1980s, Silence = Death.

So this is me trying not to be silent, and letting people know that if you or someone you know is struggling with this, you’re not alone. This is part of a bigger story, bigger than any one person and bigger than any series of individual choices. And to get out of this we need more than individual action and willpower — we need group awareness and collective action, we need to talk about this a lot, and we need to call out the series of actions that got us here. We need to recognize this as an epidemic, and remove the shame and the silence and the personal guilt and get things done.

At some later point I’ll try to pull some of the older pieces I wrote on this subject off of federated wiki and onto Wikity, and maybe even put them into a centralized standalone resource. Until then this will have to do.