Some Preliminary Results On Cynicism and Online Information Literacy

We (AASCU’s Digital Polarization Initiative) have a large information literacy pilot going on at a dozen institutions right now using our materials. The point is to gain insight into how to improve our instruction, but also to make sure it is working in the way we think it is. Part of that involves formal assessment which I am working on with Tracy Tachiera, my co-PI.

A few weeks ago we finished the Washington State classes in the pilot, in what we’re calling our high-fidelity implementation. For those unfamiliar with educational pilots, I should note that this doesn’t mean the other implementations were worse or lower-value. It just means that since the materials were delivered by someone intimately familiar with how to deliver them (me), we have high confidence that the intervention we are testing is what we think we are testing.

In any case, we have some of our first pre/post data in, on a decent implementation.

One important caveat to start: I am reporting here *only* on the four classes I taught directly. We have well over a thousand students in the multi-institutional pilot, with something like 1,300 assessments already logged; it’s a big, knotty, messy assessment problem that will take some time and money to finish. But what we’re seeing in the interim is important enough that reporting on the smaller group more immediately seemed warranted.

Rating Trustworthiness

So here are the assessment directions the students got. They are pretty bland:

You will have 20 minutes total to answer the following four questions. You are allowed to use any online or offline method you would normally use outside of class to evaluate evidence, with the exception of asking other students directly for help.

You are allowed to go back and forth between pages, and can revisit these instructions at any time.

The students then took an A or B version of the test before the four-hour intervention and the opposite version afterwards. The instruction was delivered in the classes over three weeks, with a three-week gap before the post-assessment in order to capture skills decay. (Due to a scheduling conflict, one of the four classes received only 2 hours and 40 minutes of instruction; they are not included in the post-test data here, but their results generally fell somewhere between the pre- and post-test results of the other classes.)

Key to the assessment was that we had a mixture of what we called “dubious” prompts, where a competent student should choose a very low or low level of trust (depending on the prompt), and trustworthy prompts, where competent students would rate them moderate or higher.

So, for example, this is a dubious prompt: a conspiracy story that has been debunked by just about everyone:

Our target for this is that students rate trust in it “Very Low,” based on information you can find quite easily about it (using our “check other coverage” move).

And here is a paired trustworthy prompt in the news story category (our other prompts are in photographic evidence, policy information, and medical information):

In the above case, of course, the story is true, having been reported by multiple local and national outlets, and supported by multiple quotes from school officials. We set the target on this as meriting high or very high trust. The story as presented happened, and apart from minor quibbling about the portrayal, a fact’s a fact.

As you can see, this is all rough, which is why we are ultimately more interested in the free text replies. People might mean different things by “high” or “very high”. Arguments could be made that a prompt we considered very low should be rated “low”. Students might get the answer right for wrong reasons. Scoring the free text will show us if the students truly increased in skill and helpful dispositions.

But even with this very rough data we’re seeing some important patterns.

Finding One: Initial Trust of Everything Is Low

First, students rate dubious prompts low before the intervention:

Great, right?

Yeah, except not so much. Here are the trust ratings on the trustworthy prompts right next to them in blue:

On average, students in our four WSU classes rated everything, dubious and trustworthy alike, as worthy of low to moderate trust.

This actually doesn’t surprise me, as it’s what I’ve seen in class activities over the past couple of years, a phenomenon I call “trust compression.” We’re looking to make sure that this phenomenon is not an artifact of the subject headings around the prompts or of student expectations around the material, but we expect it to hold.
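To make the idea concrete, here’s a minimal sketch of how the compression could be quantified; the ratings below are invented for illustration on a 0–4 scale (very low to very high), not the study’s actual data:

```python
# Illustrative sketch of "trust compression" (hypothetical ratings,
# not actual study data). Scale: 0 = very low trust, 4 = very high.
from statistics import mean

# Hypothetical pre-test ratings for the two prompt types
dubious_ratings = [1, 2, 1, 2, 2, 1]      # targets cluster near 0-1
trustworthy_ratings = [2, 2, 3, 2, 2, 3]  # targets cluster near 3-4

# "Compression" shows up as a small gap between the two means:
# students rate everything as roughly equally (un)trustworthy.
gap = mean(trustworthy_ratings) - mean(dubious_ratings)
print(f"dubious mean: {mean(dubious_ratings):.2f}")
print(f"trustworthy mean: {mean(trustworthy_ratings):.2f}")
print(f"gap (decompression signal): {gap:.2f}")
```

A larger gap after instruction is what the next section calls decompression.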

Finding Two: After Instruction the Students Differentiate by Becoming Less Cynical

I was going to do a big dramatic run-up here, but let’s skip it. After the pre-test we did (in three of the classes) four hours of “four moves” style instruction. And here’s what trust ratings look like on the assessment after that 4 hours of instruction (these are raw results, so caveat emptor, etc):

That’s the same y-axis there. You can see what is happening — the students are “decompressing” trust: rating trustworthy items more trustworthy and (with the exception of the baking soda prompt) dubious prompts more untrustworthy. The graph is a bit hard to read without understanding what an appropriate response is on each. On gun poll trust, for example, 2 is an acceptable answer — it’s a survey done by a trustworthy firm and in line with many other findings, but it is sponsored by Brookings and pushed by the Center for American Progress, neither of which can be seen as a neutral entity. The chemo prompt is deserving of at least a three, and the rocks prompt should be between three and four. But the pattern seems clear: most of the gap opening up is from the students trusting trustworthy prompts *more*.

How the students do this is not rocket science, of course. They become more trusting because, rather than relying on the surface features and innate plausibility of the prompts, they check what others say — Snopes, Wikipedia, Google News. If they find overwhelming consensus there, or reams of linked evidence on the reliability of the source, they make the call.

(Potential) Finding Three: Student answers may be less tribal after intervention

Emphasis on may, but this looks promising. We have not gone deep into the free answers, but an initial scan seems to indicate that students are less tribal in their answers. To be fair, tribalism doesn’t figure much into either the pre- or post-responses. Fatalism about the ideological filters of older adults may be warranted, but at least on the issues we tested with our first-years (including coal industry benefits, nuclear power risks, alternative medicine, gun control, and a child sex-trafficking conspiracy) there was far less tribalism in evidence than current discussion would have you think.

Where there was tribalism, it tended to disappear in the post-test, for an obvious reason. The students in the pre-test were reacting to the page in front of them, trying to recognize misinformation. In doing so, they fell back on their assumptions of what was likely true and what was not, usually informed by tribal understandings. If you stare at a picture of mutated flowers and ask whether it seems plausible, your answer is more likely to touch on whether you believe nuclear power is safe or not. This is the peril of teaching students to try to “recognize” fake news — doing so places undue weight on preconceptions.

If, on the other hand, you look not to your own assumptions but to the verification and investigative work of others for an answer, you’re far less likely to fall back on your belief system as a guide. You move from “This is likely because I believe stuff like this happens” to “There might be an issue here, but in this case this is false.”

(Much) more to come

We have a lot of work to do with our data. We need to get the WSU free responses to the prompts scored, and as other institutions in our twelve-institution pilot finish their interventions we need to get the free text scored there as well. If the variance and difficulty of the tests match, we’d like to get it all paired up into a true pre/post, and maybe even compare the movement of high performers to low performers. (Update 2/8/2019: The student free-text responses were too spotty in terms of length and descriptiveness to reliably quantify changes in strategy, but they have provided unique insights into where students went right and wrong, which we are applying to curriculum and future assessment design.)

But as I look at the data I can’t help but think a lot of the fatalism about tribalism and cynicism is misplaced. I’ve talked repeatedly about fact-checking as “tools for trust,” a guard against the cynicism that cognitive overload often produces. I think that’s what we’re seeing here. It makes students more capable of trust.

I’m also just not seeing the knotty education problem people keep talking about. True, much of what we have done in the past — CRAAP, RADCAB, critical-thinking-as-pixie-dust and the like — has not prepared students for web misinformation. But a history of bad curricular design doesn’t mean that education is pointless. It often just means you need better curriculum. 

I’ll keep you all updated as we hit this with a bit more data and mathematical rigor.

Stop Reacting and Start Doing the Process

Today’s error comes to you from a Tulsa NBC affiliate:



Of course, this was all the rage on Twitter as well, with many smart people tweeting the USA Today story directly:


It’s a good demonstration of why representativeness heuristics fail. Here’s the story everyone fell for:


So let’s go through this — good presentation, solid source. Headline actually not curiosity gap or directly emotional. Other news stories look legit. Named author. Recognizable source with a news mission.

Now the supporters of recognition approaches will point out that in the body of the article there is some weird capitalization and a punctuation mistake. That’s the clue, right!


When we look back, we can be really smart of course, saying things like “The capitalization of Kerosene and the lack of punctuation are typical mistakes of non-native speakers.” But in the moment as your mind balances these oddities against what is right on the page, what are your chances of giving that proper weight? And what would “proper weight” even mean? How much does solid page design balance out anachronistic spelling choices? Does the lack of clickbaity ads and chumbuckets forgive a missing comma? Does solid punctuation balance out clickbait stories in the sidebar?

Your chances of weighting these things correctly are pretty lousy. Your students’ chances are absolutely dismal. When actual journalists can’t keep these things straight, what chance do they have?

Take the Tulsa news site. Assuming that USA Today was probably a better authority on whether we still capitalize “kerosene” (which was once a brand name like Kleenex), the Tulsa writer rewrites the story and transcribes the misspelling faithfully while risking their entire career:


We know looking at surface features doesn’t work. Recognition for this stuff is just too prone to bias and first impressions in everyone but an extremely small number of experts. And even most *experts* don’t trust recognition approaches alone — so, again, what chance do your students have?

How do our processes work, on the other hand? Really well. Here’s Check for Other Coverage, which has some debunks now but importantly shows that there is actually no USA Today article with this title (and has shown this since this was published).

And here’s Just Add Wikipedia which confirms there is no such “usatoday-go” URL associated with USA Today.

Both of these take significantly less time than judging the article’s surface features, and, importantly, result in relatively binary findings less prone to bias concerns. The story is not being covered in anything indexed by Google News. The URL is not a known USA Today URL. Game, set, match. Done.
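As a sketch of what these two moves amount to mechanically, here’s how the underlying searches could be constructed. The URL formats are ordinary Google search endpoints, and the example strings are hypothetical placeholders, not taken from the actual story:

```python
# A minimal sketch of the two moves as search-URL construction.
# The real moves are done by hand in a browser; this just shows
# what query each move issues.
from urllib.parse import quote_plus

def check_other_coverage(headline: str) -> str:
    """Search Google News for the claim itself, to see who else covers it."""
    return "https://news.google.com/search?q=" + quote_plus(headline)

def just_add_wikipedia(source_name: str) -> str:
    """Search the source's name plus 'wikipedia', to see what the
    broader web says about the publication."""
    return "https://www.google.com/search?q=" + quote_plus(source_name + " wikipedia")

# Hypothetical inputs, for illustration only
print(check_other_coverage("kerosene alarm clock recall"))
print(just_add_wikipedia("usatoday-go.com"))
```

The point of both is the same: route the judgment through what the rest of the web already knows, rather than through the page’s surface features.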

Can they fail? Sure. But here’s the thing — they’ll actually fail less than more complex approaches, and when they do fail (for instance if the paper is not found in Wikipedia or does not have a URL) they still put you in good position for deeper study if you want it. Or, just maybe, if they don’t work in the first 30 seconds you’ll realize the retweet or news write up can wait a bit. The web is abundant with viral material, passing on one story that is not quickly verifiable won’t kill you.

Civix Releases New Online Media Literacy Videos

I worked with Civix, a Canadian non-profit, to do a series of videos showing students basic web techniques for source verification and contextualization. I had boiled it down to four scripts running six minutes apiece; Civix and their production partner managed to cut them down to about three minutes each after filming.

Here’s the introduction, which features a bit of narrative around Sam Wineburg and Sarah McGrew’s work and how it informs what we do:

This study came out after Web Literacy for Student Fact-Checkers, but it’s been one of the biggest influences on the continued development of our curriculum. It’s hard to summarize a study in six minutes — and my six minutes were cut down further by editing to three — but I think the presentation of the study survives here.

The second video encourages students to investigate the source before they invest time in reading it (with a heavy lean on Wikipedia as a first stop).

This is one example of how we’ve honed what we teach over the past 18 months.  Initially, we gave students a method for searching for information on a site by doing a search like [[]]. This finds coverage of a site that is not from the site itself.

It’s a great search strategy! And people loved it in workshops — the secret language of search!

But when I’d talk to faculty who were in the workshops a few months later they would say — hey, how’d that trick go again? I want to show it to my students.

Maybe I worry too much, but my guess is if the faculty member has to ask me for the trick two months later (“It’s site something something, or negative site, right?”) then I doubt their students are going to hang on to it either. So we went from the researcher-like [[]] to “just add Wikipedia to the omnibar.”

It’s the same with a lot of our techniques. We started with a book of more than two dozen verification techniques. We’ve got the core down to five starter techniques associated with three moves.


It reminds me of the old joke about the student who goes up to the expert and asks them “How long did it take you to  write that speech?”

“Ten years.”

It’s not a funny joke, but it’s applicable. Teaching this to faculty constantly over the past 18 months has boiled this down, down, down.

Video three reminds students to find the original source of reporting.

There’s so much more to say about this — of course in some cases intermediate reporting sources add additional verification or analysis, etc, etc. But the social web often pushes us low-quality re-reporting of higher quality originals, and the propaganda techniques of leveling and sharpening distort original stories along the way. Finding the source is an essential skill.

Finally a video on finding trusted sources:

For some, the “rely on established sources” piece of this is going to be the most controversial bit of advice. It became a bit more highlighted in the editing here than in my original script. When we drop this into a longer sequence of classes we have discussions about the necessity of non-established sources to a dynamic ecosystem, and about the worry that algorithms can filter out significant minority points of view.

But I actually like the video as it came out here.  When it comes to news reporting (which is the subject of the Civix/NewsWise videos here) we want publications with a history we can evaluate, and reporters who have either learned their craft from journalistic culture or long experience.  That can come from excellent non-profits like ProPublica, or for-profits like the LA Times. Even advocacy journalism like David Corn at Mother Jones. But that culture takes time to develop, and the truth is that older publications often have a level of rigor on this that newer online initiatives can’t touch. If I have to choose between sourcing a fact to hip new online mag  Babel or my hometown newspaper, I’m going to choose my hometown newspaper.

The bigger point in this video is the “broadening” technique that gets students out of the habit of just reading the version of the story that comes to them. The “Search Google News” habit reminds people that when they learn of an interesting story through their feed they are not required to read that version of the story. They can go back and fish in Google News for a higher-quality story on the same topic before they invest their time.

This is a simple realization that most people still haven’t had. As I say — it’s the internet — you’re not stuck with the one story that comes to you. By going out and actively choosing a better story you will not only filter out false stories but also see the variety of ways an event is being covered.

Anyway, thanks to the Civix people for some great work — we had a deal that I’d help out on the videos as long as I could sculpt them so they would be useful for our Digipo Initiative classes as well. It worked out great and we intend to use these in our own work. We hope you will too!


Geeking out as a conversational paradigm


After I graduated college I couldn’t find a job straight off, and I didn’t know what I wanted to do. I ended up staying home with my parents for a bit, in suburbia, and nearly losing my mind.  The one thing that saved me was weekly four-hour coffeeshop sessions with two friends.

The conversations gave me something I had in high school and college, but now was suddenly in short supply. It was a sort of conversational style that wasn’t really expressive or rhetorical, but on a good night it could feel effortless. I just thought of it as “good conversation”, but it was clearly more of a style.


A picture of a Denny’s for our non-American readers. It’s a chain, the coffee is horrible, but it has free refills and they tend to not kick you out.

I said this to Milo, one of those two friends, one night at the Denny’s.

“Oh, you mean geeking out?” Milo asked.

“Geeking out?” I said. It was 1993, and the first time I’d heard the term.

Milo outlined the nature of geeking out. To him, a “geek out” was a wide-ranging conversation that obeyed different sorts of rules than other conversations. It was emotional, but not primarily expressive. It encompassed disagreement, but it was not debate.

The major rule of the geek out session was each conversational move should build off previous moves, but extend them and supply new information as well.  I tell you something, you find an interesting connection to something you know and you make that connection.

It had disagreement, but it didn’t work like a debate. The goal of a geek out when it came to disagreement was to map out the disagreement more fully. If you dropped a stunning proposition like “Mad About You is the most underrated show on TV” on the table, that’s exciting in a geek out, even if it’s painfully wrong, because it hints that we may share profoundly different information contexts, and this disagreement has surfaced them. Now we get to dig in, which is sure to bring in some novel information or connections.

In an expressive conversation I want you to know exactly how I feel. In a debate, I want you to understand and respect my point of view.

In a geek out I want to know the most valuable and interesting things you have in your head and I want to get them in my head. The people that understand the form may look like they are debating or expressing, but they are doing something much much different.


I don’t know if all this was so succinctly expressed at the table that night. I do know that when I went back to school I became fascinated with discourse analysis. I entered the Literature and Linguistics program at Northern Illinois University. I initially went to work on stylistics, but a course with Neal Norrick turned me on to the possibilities of conversational analysis.

Over the next few years I’d record dozens of conversations of this sort and play them back, listening for the conversational moves. My friends just got used to me having the tape recorder around. My wife, Nicole, looked at the tape recorder a bit weird when I brought it on our second date back in 1995, but when others told her — “Oh, that’s just Holden with his project” she rolled with it, and didn’t run screaming, for which I am forever grateful.


A selection of my mid-1990s recordings of conversations. I made too many to buy decent tapes. The tape names reflect either subjects or participants.

Because I was a grad student at the time, and grad students need to find a niche, I was particularly obsessed with a type of geeking out involving what I called “possible world creation.” But the broad insight that fascinated me was that people co-construct many “geek out” conversations the way that improv artists construct a scene. A conversation is something you have, but it’s also something you build.

It’s 20 years later, and the term “geeking out” has been claimed by others now, I suppose. But looking at it now after soaking in Connectivism and theories of social learning for a decade, I see something else that fascinates me. It’s true that the conversation of the “geeking out” session (as defined by Milo) is co-built. But it looks like something else too. It looks like network mapping.

In fact, if alien robots were to observe geeking out, I think this is what they’d see. We’re little creatures that roam around, experiencing things while disconnected from the network, learning things while disconnected from the network.

Occasionally we meet up, and there’s this problem — I want your insights, your point of view, the theories, trivia, and know-how you have. And as importantly, I want to know how you’ve connected it and indexed it.  So we traverse the nodes. I say I have a data record about John Dewey. You say, I’ve got one of those too, it’s connected to this fact over here about James Liberty Tadd’s weird drawing pedagogy. I’ve never heard of that, but as you talk about it I realize it connects with this 1890s obsession with repeated designs and Japanese notan, and how that led to the book that would lay much of the foundation for art education, the Elements of Composition.

When you start thinking of geeking out as a sort of database synchronization protocol, it makes a lot of sense. Consider the following geek out session, and note the way the moves try to reconcile multiple conflicting networks of knowledge during our sync-up session. I’ve compressed the moves from the stop and start they’d normally be to make it more apparent what’s going on:

  1. You tell me about your disappointment with the last Joss Whedon film.
  2. I say that relates to a piece I read on Whedon and the death of auteur theory and describe it. Others ask about the article.
  3. A third person says, how come music didn’t go through auteur theory? Kind of interesting, right?
  4. Person #4 says well, it sort of did. Dylan was auteur theory in music.
  5. How’s that, other people at the table say?
  6. Person # 4: Because he wrote his own music, he introduced the “singer/songwriter” vs. the Tin Pan Alley model.
  7. But wait, you say – Leadbelly was a singer songwriter. The blues guys were singer/songwriters. So how exactly did Dylan invent it?
  8. Hmm, that’s interesting person #4 says. But of course they were altering traditional songs.
  9. So was Dylan, you say, so I don’t quite buy it. His first album was all covers, right?
  10. Wait, I say, I don’t so much care if Dylan *was* the reboot of the auteur — he was seen that way, and that’s what’s interesting.
  11. We talk about the early 60s a bit. Person #3 brings up Lou Reed because he always brings up Lou Reed.
  12. We groan. You know — some things don’t relate to Lou Reed, we say.
  13. Person #3 resumes. You got a lot of things going on in 1960 — in film there was industrialization, at least from the perspective of the Cahiers crowd. But I think there was a sort of media as a lifestyle thing. Media subcultures.
  14. That’s bullcrap, says person #4. Media subcultures are as old as civilization.
  15. Give me an example of that, I say.
  16. Oh, there must be hundreds, says person #4. You know how Aristophanes was “low humor”?
  17. Wait, who was Aristophanes, says person #2.
  18. Person #4: “Ancient Greek playwright. Made biting political satire but also the occasional fart joke. So anyways, some greeks thought he was the best thing ever, others thought it was the end of civilization. That’s a media subculture, right?”
  19. But isn’t modern media different, you say? It’s more than what you consume. You remember reading a Tom Wolfe piece from the early 60s on how teens use the radio. And the thing he said was — and you’re interpreting here — is they didn’t so much listen to music as use it as a personal soundtrack.
  20. Is that in that “Kandy-Colored” whatever collection about custom cars and stuff? I ask.
  21. Yeah, you say. And we continue…

If you have a minute, go through those moves. There’s not a lot of debate or expression. It’s an intense session where you’re networking information together, and where there are clashes it’s almost like a data inconsistency error. Look, I want to take in your Dylan connection, but it conflicts with my Leadbelly knowledge map — how do I resolve that, show me….
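If you want the sync metaphor made literal, here’s a toy sketch. The topics and claims are invented for illustration, loosely echoing the Dylan/Leadbelly exchange above:

```python
# Toy model of a "geek out" as knowledge-map synchronization.
# Each map links a topic to a claim; merging takes in new nodes
# and surfaces clashes (the "data inconsistency errors") for
# further conversation rather than resolving them by fiat.
def sync(mine: dict, yours: dict):
    merged, clashes = dict(mine), []
    for topic, claim in yours.items():
        if topic not in merged:
            merged[topic] = claim  # new node: just take it in
        elif merged[topic] != claim:
            clashes.append((topic, merged[topic], claim))  # dig in here
    return merged, clashes

# Invented example maps
mine = {"singer-songwriter": "invented by Dylan"}
yours = {"singer-songwriter": "Leadbelly predates Dylan",
         "auteur theory": "Cahiers du Cinéma, 1950s film criticism"}

merged, clashes = sync(mine, yours)
print(clashes)  # the clashes are the interesting part of the conversation
```

In conversation terms, the clash list is where the table leans in and the best exchanges happen.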

Of course, I’m sure what I call geeking out goes back to the beginning of humanity. The structure of storytelling, for example, is very similar. You tell a story, and I say that reminds me of this other story — have you heard it? Night after night cultural information propagates, but so do the connections between those stories. We don’t just get the content, we get the map.


Federated wiki tends to operate in this way, at least in the happenings we’ve had (and we’ll have another soon — get in touch if you’re interested). Federated wiki is asynchronous, but it seems to follow in the same grooves. I thought initially that people would re-edit other people’s pages a lot, and they do edit them. But the main thing they do in those edits is supplement the information, by adding examples and connections to the page or by linking to other pages where they share a related fact.

What’s weird, when you think about it, is not really that federated wiki falls into this “geeking out” structure.  What’s weird is so little on the web does. The primary modes of the internet are self-expression and rhetoric. I’m doing it here in a blog post. This isn’t geeking out – it’s some exposition, mostly persuasion, outside a link here or there, nothing that couldn’t have been published in print a couple thousand years ago. Twitter is debate and real-time thought stream. Blog comments are usually debate. Some forums have little flashes of this, but they don’t traverse as much ground.

That said, maybe I’m missing something. Are there other forms on the web where the primary form of communication is this free flowing topical trapeze? Did the geeks really build a web that doesn’t support geeking out? And if so, how did that happen?

My thought is that we’re increasingly frustrated with conversational forms that are not a great fit for the web. But this one conversational form, which is built on something that feels like the hyperlinking of small documents – we don’t seem to have technologies around that. Why?

Twitter’s Gasoline

So Twitter is going to offer opt-in direct messaging from anyone. It looks like you’ll be able to check a box and anybody will be able to DM you, even if you don’t follow them. Andy Baio gets it about right:


Direct Messaging from Randos is not something anyone other than brands asked for, but it is a way for Twitter to make money and possibly compete with Facebook in the messaging arena. The fact that it takes a service well known for fostering online harassment and makes that harassment even easier gets a shrug from Twitter.

There’s the argument, of course, that it’s an opt-in feature, which would be a great argument if this was the first year we had had social media. But it’s not, and we all know the Opt-in Law of Social Media which is that any opt-in feature sufficiently beneficial to the company’s bottom line will become an opt-out feature before long.

I’m reminded of a conversation I had with Ward Cunningham about trackbacks to forking in federated wiki. Basically, right now if someone forks your stuff in federated wiki and you don’t know them, you never learn about it. A notification that would alert you is one of the most requested features for federated wiki, because it could make wiki spread organically. Of course, the downside is it would also be an easy way to harass someone, continually forking their stuff and defacing it or writing ugly notes on it.

So we’re left with the problem — build something that spreads easily, but has this Achilles heel in it, or wait until we have a better idea of the best way to do this. When I first started working with Ward on this I asked why this wasn’t implemented yet — this was the key to going viral after all. His response was interesting. He said we’ve talked about it a lot, and somehow we’ll get something like it. But he said it’s “pouring gasoline on a campfire”, which I took to mean that there’s a downside to virality.

A year later we’re still talking about the best way to do it, and paying attention to what people do without it. We’re still patiently explaining to people why connecting with people in federated wiki is hard compared to other platforms, at least for the moment. We’ve focused on other community solutions, like shareable “rosters” and customizable activity feeds.

I think eventually Ward and others will throw the gasoline on, but only when they’re sure which way the wind is blowing and where the fire is likely to spread.

Looking at the press around this recent direct messaging decision it’s not clear to me that Twitter has done any of that. What does that say about Twitter?

Convivial Tools and Connected Courses

Excellent, must-read post from Terry Elliot in the Connected Courses conversation, which pulls in Christopher Alexander’s ideas of System A (the organic, generative) and System B (the industrial, dead). Key grafs (for me at least):

I have a lot of questions about whether any of the web-based tools we are using actually fit the mold of System A. I don’t often feel those spaces as convivial and natural. Behind the artifice of interface lay the reality of code. Is that structure humane? Is it open, sustainable, and regenerative? Does it feel good? Does the whole idea behind code generate System A or System B? I really don’t know.

What I do know is that I get the very distinct feeling that certain systems I use are not convivial. Google+, Facebook, WordPress, Twitter while full of humans, feel closed, feel like templates to be filled in not spaces to be lived in. Hence, the need for outsiders more than ever to raise the question especially in this week of connected courses where we are talking about the why of why.

As readers know, I’ve been on an Alexander kick lately. And it’s less that Alexander led me to these sorts of questions than questions that have been disturbing me have led me to Alexander. So I probably have a less useful perspective than someone that comes to this with a wealth of Alexandrian insight.

“Templates to be filled in, not spaces to be lived in.” Hmmm.

Maybe some of this is unavoidable. But I wonder in particular if some of it is the perils of StreamMode, that tendency to conceptualize all of our digital life as a stream of events and statements reacting to other events and statements in a never-ending crawl. The problem with StreamMode is that the structures that make StreamMode coherent are past conversations and concepts newbies don’t have access to. StreamMode also relies heavily on personalities, and hence on popularity.

Look at this blog post, for instance. You want to know what StreamMode is? Do I link to a definition? No, not hardly. I link you to an older piece that kinda-sorta defines the term in a context that involves a bunch of people and posts you don’t know about. How humane is that?

StateMode is a little different. StateMode is like a wiki — at any given point in time the wiki represents the total documented understanding of the community. The voice that develops is generic or semi-generic, and aims to be architecture, not utterance. If you want the feeling of StateMode, go to a place like TV Tropes. Look past the ads and you’ll find the site invites you into the community as living architecture instead of stream. New articles form as ways to make older articles more meaningful, or understandable. The process is recursive, not episodic.

The problem is that StreamMode builds community at the expense of coherence, and StateMode builds coherence at the expense of community.

I think this may be one of those irreducible conundrums, but I also think over the past 10 years we have veered too much into StreamMode, which gives us not that timeless sense but an overwhelming wave of personality pinging off of personality.

Ages ago on the Internet you used to stumble onto weird and wonderful mini-sites, like secret gardens found in the middle of the woods. Now we find streams of conversation, endlessly repeating, pushing us to live in a narrative that is not ours. The expressive nature of the web is to be treasured, but I think we’ve lost something.

Blue Hampshire’s Death Spiral

Blue Hampshire, a political community I gave years of my life to, is in a death spiral. The front page is a ghost town.

It’s so depressing, I won’t even link to it. It’s so depressing, that I haven’t been able to talk about it until now. It actually hurts that much.

This is a site that at the point I left it had 5,000 members, 10,000 posts, and 100,000 comments. And at the point co-founders Laura Clawson and Dean Barker left it circa 2011(?), it had even more than that.

And what comments! Because I say that *I* put sweat into it, or Laura and Dean did, but it was the community on that site that really shone.  Someone would put up a simple post, and the comments would capture history, process, policy, backstory — whatever. Check out these comments on a randomly selected post from 2007.

The post concerns an event where the local paleoconservative paper endorsed John McCain as its pick for the Democratic candidate, a way to slight a strong field of Democrats in 2008.

What happens next is amazing, but it was the sort of thing that happened all the time on Blue Hampshire. Sure, people gripe, but they do so while giving out hidden pieces of history and background that just didn’t exist anywhere else on the web. They relate personal conversations with previous candidates, document the history the paper has of name-calling and concern-trolling.

Honest to God, this is one article, selected at random from December 2007 (admittedly, one of our top months). In December 2007, our members produced 426 articles like this. Not comments, mind you. Articles. And on so many of those articles, the comments read just like this — or better.

That’s the power of the stream, the conversational, news-peg driven way to run a community. Reddit, Daily Kos, TreeHugger, what have you.

But it’s also the tragedy of the stream, not only because sites die, but because this information doesn’t exist in any form of much use to an outsider. We’re left with the 10,000-page transcript of dead conversations that contains incredible information, ungrokable to most people who weren’t there.

And honestly, this is not just a problem that affects sites in the death spiral or sites that were run as communities rather than individual blogs. The group of bloggers formerly known as the edupunks have been carrying on conversations about online learning for a decade now. There’s amazing stuff in there, such as this recent how-to post from Alan Levine, or this post on Networked Study from Jim. But when I teach students this stuff or send links to faculty, I’m struck by how surprisingly difficult it is for a new person to jump into that stream and make sense of it. You’re either in the stream or out of it; toe-dipping is not allowed.

And so I’m conflicted. One of the big lessons of the past 10 years is how powerful this stream mode of doing things is. It elicits facts, know-how, and insights that would otherwise remain unstated.

But the same community that produces those effects can often lock out outsiders, and leaves behind indecipherable artifacts.

Does anyone else feel this? That the conversational mode while powerful is also lossy over time?

I’m not saying that the stream is bad, mind you — heck, it’s been my way of thinking about every problem since 2006, and I’m pushing this thought out to all of you via the stream. But working in wiki lately, I’ve started to wonder if we’ve lost a certain balance, and if we pay for that in ways hidden to us. We pay for our lack of recursion through these articles, and for not doing the work to make all entry points feel scaffolded. If that’s true, then — well, almost EVERYTHING is stream now. So that could be a problem.




Reclaim Hackathon

Kin and Audrey have already written up pretty extensive summaries about the Reclaim event in Los Angeles. I won’t add much.

Everything was wonderful, and I hope I don’t upset people by choosing one thing over another. But a few things stood out for me.

Seeing the Domain of One’s Own development trajectory. I’ve seen this at different points, but the user experience they have for the students at this point is pretty impressive.

JSON API directories. So I really like JSON, as does Kin. But at dinner on Friday he was proposing that the future was that the same way we query a company for its APIs, we would be able to query a person. I’d honestly never thought of this before. This is not an idea like OAuth, where I delegate some power/data exchange between entities. This is me making a call to the authoritative Mike Caulfield API directory and saying, hey, how do I set up a videochat? Or where does Mike post his music? And pulling back from that an API call directly to my stuff. This plugged into the work he demonstrated the next day, where he is painstakingly finding all the services he uses, straight down to Expedia, and logging their APIs. I like the idea of hosted lifebits best, but in the meantime this idea of at least owning a directory of your APIs to stuff in other places is intriguing.
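To make the proposal concrete, here’s a minimal sketch of what querying a personal API directory might look like. Everything here (the directory shape, the field names, the example.com endpoints) is invented for illustration; it’s not Kin’s actual format.

```python
import json

# A hypothetical personal API directory: one JSON document that answers
# "where does this person's stuff live, and how do I query it?"
# All endpoints and field names below are made up for illustration.
directory_json = """
{
  "person": "Mike Caulfield",
  "apis": {
    "videochat": {"url": "https://example.com/mike/videochat", "method": "POST"},
    "music":     {"url": "https://example.com/mike/music",     "method": "GET"},
    "posts":     {"url": "https://example.com/mike/posts",     "method": "GET"}
  }
}
"""

def lookup(directory: dict, capability: str) -> str:
    """Return the endpoint for a named capability, e.g. 'music'."""
    return directory["apis"][capability]["url"]

directory = json.loads(directory_json)
print(lookup(directory, "music"))  # the endpoint where Mike posts his music
```

The point of the sketch: instead of scraping a person’s site, a client asks the authoritative directory for a capability and gets back a callable endpoint.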

Evangelism Know-how. I worked for a while at a Service-Oriented-Architecture-obsessed company as an interface programmer (dynamically building indexes to historical newspaper archives using JavaScript and Perl off of API-returned XML). I’m newer to GitHub, but have submitted a couple of pull requests through it already. So I didn’t really need Kin’s presentation on APIs or GitHub. But I sat and watched it because I wanted to learn how he does presentations. And the thing I constantly forget? Keep it simple. People aren’t offended by getting a bit of education about what they already know, and the people for whom it’s new need you to take smaller steps. As an example, Kin took the time to show how JSON can be styled into most anything. I, on the other hand, have been running around calling SFW a Universal JSON Canvas without realizing people don’t understand why delivering JSON is radically different (and more empowering) than delivering HTML (or worse, HTML + site chrome).
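The difference is easy to show with a toy example (the post data and rendering below are made up). Delivering HTML fuses data and presentation, so reuse means scraping; delivering JSON hands the consumer data it can restyle or remix however it likes:

```python
import json

# The same blog post, delivered two ways.
post = {"title": "Convivial Tools", "author": "Terry Elliot", "body": "..."}

# Delivering HTML: one baked-in presentation; reuse requires scraping.
html = f"<article><h1>{post['title']}</h1><p>{post['body']}</p></article>"

# Delivering JSON: the raw data travels; presentation is the client's choice.
payload = json.dumps(post)

# A JSON consumer can build whatever view it wants, e.g. a loud headline:
data = json.loads(payload)
headline = data["title"].upper()
print(headline)
```

That gap, between a page someone else styled and data you can do anything with, is what “Universal JSON Canvas” was trying to gesture at.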

Known. I saw Known in Portland, so it wasn’t new to me. But it was neat to see the reaction to it here. As Audrey points out, much of day two was spent getting on Known.

Smallest Federated Wiki. Based on some feedback, I’ve made a decision about how I am going to present SFW from now on. I am astounded by the possibilities of SFW at scale, but you get into unresolvable disagreements about what a heavily federated future would look like. Why? Because we don’t have any idea. I believe that for the class of documents we use most days, stressing out about whether you have the best version of a document will seem as quaint as stressing out about the number of results Google returns on a search term (remember when we used to look at the number of results and freak out a bit?). But I could be absolutely and totally wrong. And I am certain to be wrong in a lot of *instances* — it may be for your use case that federation is a really, really bad idea. Federation isn’t great for policy docs, tax forms, or anything that needs to be authoritative, for instance.

So my newer approach is to start from the document angle. Start with the idea that we need a general tool to store our data, our processes, our grocery lists, our iterated thoughts.  Anything that is not part of the lifestream stuff that WordPress does well. The stuff we’re now dropping into Google Docs and emails we send to ourselves. The “lightly-structured data” that Jon Udell rightly claims makes up most of our day. What would that tool have to look like?

  • It’d have to be general purpose, not single purpose (more like Google Docs than Remember the Milk)
  • It’d have to support networked documents
  • It’d have to support pages as collections of sequenced data, not visual markup
  • It’d have to have an extensible data format and functionality via plugins
  • It’d have to have some way to move your data through a social network
  • It’d have to allow the cloning and refactoring of data across multiple sites
  • It’d have to have rich versioning and rollback capability
  • It’d have to be able to serve data to other applications (in SFW, done through JSON output)
  • It’d have to have a robust flexible core that established interoperability protocols while allowing substantial customization (e.g. you can change what it does without breaking its communication with other sites).

Of those, the idea of a document as a collection of JSON data is pretty important, and the idea of federation as a “document-centered network” is amazing in its implications. But I don’t need to race there. I can just start by talking about the need for a general-use, personal tool like this, and let the networking needs emerge from that. At some point it will turn out that you can replace things like wikis with things like this, or not; but ultimately there’s a lot of value you get before that.
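As a rough illustration of that document-as-data idea, here’s a sketch of a page stored as a sequence of typed items rather than visual markup, loosely in the spirit of SFW’s JSON page format. The field names are simplified and shouldn’t be taken as the exact SFW schema (real SFW pages also carry a journal of edit actions for versioning and forking).

```python
import json

# A page as a collection of sequenced data items, not markup.
# Each item is typed, identified, and independently addressable.
page = {
    "title": "Grocery List",
    "story": [
        {"type": "paragraph", "id": "a1", "text": "Things to buy this week."},
        {"type": "paragraph", "id": "b2", "text": "Milk, eggs, coffee."},
    ],
}

def add_item(page: dict, item_id: str, text: str) -> None:
    """Append a new item; the page stays pure data throughout."""
    page["story"].append({"type": "paragraph", "id": item_id, "text": text})

add_item(page, "c3", "Bread.")

# Because the page is just JSON, serving it to other applications is trivial:
print(len(json.dumps(page)) > 0)
```

Because every item has an id and a type, plugins can add new item types, other sites can clone and refactor individual items, and the page can be served to other applications as plain JSON, which is roughly the requirements list above.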







Gruber: “It’s all the Web”

Tim Owens pointed me to this excellent piece by John Gruber. Gruber has been portrayed in the past as a bit too in the Apple camp, but I don’t think anyone denies he’s one of the sharper commentators out there on the direction of the Web. He’s also the inventor of Markdown, the world’s best microformat, so massive cred there as well.

In any case, Gruber gets at a piece of what I’ve been digging at the past few months, but from a different direction. Responding to a piece on the “death of the mobile web”, he says:

I think Dixon has it all wrong. We shouldn’t think of the “web” as only what renders inside a web browser. The web is HTTP, and the open Internet. What exactly are people doing with these mobile apps? Largely, using the same services, which, on the desktop, they use in a web browser. Plus, on mobile, the difference between “apps” and “the web” is easily conflated. When I’m using Tweetbot, for example, much of my time in the app is spent reading web pages rendered in a web browser. Surely that’s true of mobile Facebook users, as well. What should that count as, “app” or “web”?

I publish a website, but tens of thousands of my most loyal readers consume it using RSS apps. What should they count as, “app” or “web”?

I say: who cares? It’s all the web.

I firmly believe this is true. But why does it matter to us in edtech?

  • Edtech producers have to get out of browser-centrism. Right now, mobile apps are often dumbed-down versions of a more functional web interface. But the mobile revolution isn’t about mobile, it’s about hybrid apps and the push of identity/lifestream management up to the OS. As hybrid apps become the norm on more powerful machines we should expect to start seeing the web version becoming the fall-back version. This is already the case with desktop Twitter clients, for example — you can do much more with Tweetdeck than you can with the Twitter web client — because once you’re freed from the restrictions of running everything through the same HTML-based, cookie-stated, security-constrained client you can actually produce really functional interfaces and plug into the affordances of the local system. I expect people will still launch many products to the web, but hybrid on the desktop will become a first-class citizen.
  • It’s not about DIY, it’s about hackable worldware. You do everything yourself to some extent. If you don’t build the engine, you still drive the car. If you don’t drive the car, you still choose the route. DIY is a never-ending rabbit-hole as a goal in itself. The question for me is not DIY, but the old question of educational software vs. worldware. Part of what we are doing is giving students strategies they can use to tackle problems they encounter (think Jon Udell’s “Strategies for Internet citizens“). What this means in practice is that they must learn to use common non-educational software to solve problems. In 1995, that worldware was desktop software. In 2006, that worldware was browser-based apps. In 2014, it’s increasingly hybrid apps. If we are committed to worldware as a vision, we have to engage with the new environment. Are some of these strategies durable across time and technologies? Absolutely. But if we believe that, then surely we can translate our ideals to the new paradigm.
  • Open is in danger of being left behind. Open education mastered the textbook just as the battle moved into the realm of interactive web-based practice. I see the same thing potentially happening here, as we build a complete and open replacement to an environment no one uses anymore.

OK, so what can we do? The first thing is to get over the religion of the browser. It’s the king of web apps, absolutely. But it’s no more pure or less pure an approach than anything else.

The second thing we can do is experiment with hackable hybrid processes. One of the fascinating things to me about file based publishing systems is how they can plug into an ecosystem that involves locally run software. I don’t know where experimentation with that will lead, but it seems to me a profitable way to look at hybrid approaches without necessarily writing code for Android or iOS.

Finally, we need to hack apps. Maybe that means chaining stuff up with IFTTT. Maybe it means actually coding them. But if we truly want to “interrogate the technologies” that guide our daily life, you can’t do that and exclude the technologies that people use most frequently in 2014. The bar for some educational technologists in 2008 was coding up templates and stringing together server-side extensions. That’s still important, but we need to be doing equivalent things with hybrid apps. This is the nature of technology — the target moves.




Teaching the Distributed Flip [Slides & Small Rant]

Due to a moving-related injury I was sadly unable to attend ET4Online this year. Luckily my two co-presenters for the “Teaching the Distributed Flip” presentation carried the torch forward, showing what recent research and experimentation have found regarding how MOOCs are used in blended scenarios.

Here are the slides, which actually capture some interesting stuff (as opposed to my often abstract slides — Jim Groom can insert “Scottish Twee Diagram” joke here):


One of the things I was thinking as we put together these slides is how little true discussion there has been on this subject over the past year and a half. Amy and I came into contact with the University System of Maryland flip project via the MOOC Research Initiative conference last December, and we quickly found that we were finding the same unreported opportunities and barriers they were in their work. In our work, you could possibly say the lack of coverage was due to the scattered nature of the projects (it’d be a lousy argument, but you could say it). But the Maryland project is huge. It’s much larger and better focused than the Udacity/SJSU experiment. Yet, as far as I can tell, it’s crickets from the industry press, and disinterest from much of the research community.

So what the heck is going on here? Why aren’t we seeing more coverage of these experiments, more sharing of these results? The findings are fascinating to me. Again and again we find that the use of these resources energizes the faculty. Certainly, there’s a self-selection bias here. But given how crushing experimenting with a flipped model can be without adequate resources, the ability of such resources to spur innovation is nontrivial. Again and again we also find that local modification is *crucial* to the success of these efforts, and that lack of access to flip-focused affordances works against potential impact and adoption.

Some folks in the industry get this — the fact that the MRI conference and the ET4Online conference invited presentations on this issue shows the commitment of certain folks to exploring this area. But the rest of the world seems to have lost interest when Thrun discovered you couldn’t teach students at a marginal cost of zero. And the remaining entities seem really reluctant to seriously engage with these known issues of local use and modification. The idea that there is some tension between the local and the global is seen as a temporary issue rather than an ongoing design concern.

In any case, despite my absence I’m super happy to have brought two leaders in this area — Amy Collier at Stanford Online and MJ Bishop at USMD — together. And I’m not going to despair over missing this session too much, because if there is any sense in this industry at all, this will soon be one of many such events. Thrun walked off with the available oxygen in the room quite some time ago. It’s time to re-engage with the people who were here before, are here after, and have been uncovering some really useful stuff. Could we do that? Could we do that soon? Or do we need to make absurd statements about a ten-university world to get a bit of attention?