Geeking out as a conversational paradigm

1993

After I graduated college I couldn’t find a job straight off, and I didn’t know what I wanted to do. I ended up staying home with my parents for a bit, in suburbia, and nearly losing my mind.  The one thing that saved me was weekly four-hour coffeeshop sessions with two friends.

The conversations gave me something I had in high school and college, but now was suddenly in short supply. It was a sort of conversational style that wasn’t really expressive or rhetorical, but on a good night it could feel effortless. I just thought of it as “good conversation”, but it was clearly more of a style.

[Image: a Denny’s restaurant in San Clemente]

A picture of a Denny’s for our non-American readers. It’s a chain, the coffee is horrible, but it has free refills and they tend to not kick you out.

I said this to Milo, one of those two friends, one night at the Denny’s.

“Oh, you mean geeking out?” Milo asked.

“Geeking out?” I said. It was 1993, and the first time I’d heard the term.

Milo outlined the nature of geeking out. To him, a “geek out” was a wide-ranging conversation that obeyed different sorts of rules than other conversations. It was emotional, but not primarily expressive. It encompassed disagreement, but it was not debate.

The major rule of the geek out session was that each conversational move should build off previous moves, but extend them and supply new information as well. I tell you something, you find an interesting connection to something you know, and you make that connection.

It had disagreement, but it didn’t work like a debate. The goal of a geek out when it came to disagreement was to map out the disagreement more fully. If you dropped a stunning proposition like “Mad About You is the most underrated show on TV” on the table, that’s exciting in a geek out, even if it’s painfully wrong, because it hints that we may share profoundly different information contexts, and this disagreement has surfaced them. Now we get to dig in, which is sure to bring in some novel information or connections.

In an expressive conversation I want you to know exactly how I feel. In a debate, I want you to understand and respect my point of view.

In a geek out I want to know the most valuable and interesting things you have in your head and I want to get them in my head. The people that understand the form may look like they are debating or expressing, but they are doing something much much different.

1995

I don’t know if all this was so succinctly expressed at the table that night. I do know that when I went back to school I became fascinated with discourse analysis. I entered the Literature and Linguistics program at Northern Illinois University. I initially went to work on stylistics, but a course with Neal Norrick turned me on to the possibilities of conversational analysis.

Over the next few years I’d record dozens of conversations of this sort and play them back, listening for the conversational moves. My friends just got used to me having the tape recorder around. My wife, Nicole, looked at the tape recorder a bit weird when I brought it on our second date back in 1995, but when others told her — “Oh, that’s just Holden with his project” she rolled with it, and didn’t run screaming, for which I am forever grateful.

[Image: a pile of labeled cassette tapes]

A selection of my mid-1990s recordings of conversations. I made too many to buy decent tapes. The tape names reflect either subjects or participants.

Because I was a grad student at the time, and grad students need to find a niche, I was particularly obsessed with a type of geeking out involving what I called “possible world creation.” But the broad insight that fascinated me was that people co-construct many “geek out” conversations the way that improv artists construct a scene. A conversation is something you have, but it’s also something you build.

It’s 20 years later, and the term “geeking out” has been claimed by others now, I suppose. But looking at it now after soaking in Connectivism and theories of social learning for a decade, I see something else that fascinates me. It’s true that the conversation of the “geeking out” session (as defined by Milo) is co-built. But it looks like something else too. It looks like network mapping.

In fact, if alien robots were to observe geeking out, I think this is what they’d see. We’re little creatures that roam around, experiencing things while disconnected from the network, learning things while disconnected from the network.

Occasionally we meet up, and there’s this problem — I want your insights, your point of view, the theories, trivia, and know-how you have. And as importantly, I want to know how you’ve connected it and indexed it.  So we traverse the nodes. I say I have a data record about John Dewey. You say, I’ve got one of those too, it’s connected to this fact over here about James Liberty Tadd’s weird drawing pedagogy. I’ve never heard of that, but as you talk about it I realize it connects with this 1890s obsession with repeated designs and Japanese notan, and how that led to the book that would lay much of the foundation for art education, the Elements of Composition.

When you start thinking of geeking out as a sort of database synchronization protocol, it makes a lot of sense. Consider the following geek out session, and note the way the moves try to reconcile multiple conflicting networks of knowledge during our sync-up session. I’ve compressed the moves from the stop and start they’d normally be to make it more apparent what’s going on:

  1. You tell me about your disappointment with the last Joss Whedon film.
  2. I say that relates to a piece I read on Whedon and the death of auteur theory and describe it. Others ask about the article.
  3. A third person says, how come music didn’t go through auteur theory? Kind of interesting, right?
  4. Person #4 says well, it sort of did. Dylan was auteur theory in music.
  5. How’s that, other people at the table say?
  6. Person #4: Because he wrote his own music, he introduced the “singer/songwriter” vs. the Tin Pan Alley model.
  7. But wait, you say – Leadbelly was a singer songwriter. The blues guys were singer/songwriters. So how exactly did Dylan invent it?
  8. Hmm, that’s interesting, person #4 says. But of course they were altering traditional songs.
  9. So was Dylan, you say, so I don’t quite buy it. His first album was all covers, right?
  10. Wait, I say, I don’t so much care if Dylan *was* the reboot of the auteur — he was seen that way, and that’s what’s interesting.
  11. We talk about the early 60s a bit. Person #3 brings up Lou Reed because he always brings up Lou Reed.
  12. We groan. You know — some things don’t relate to Lou Reed, we say.
  13. Person #3 resumes. You got a lot of things going on in 1960 — in film there was industrialization, at least from the perspective of the Cahiers crowd. But I think there was a sort of media as a lifestyle thing. Media subcultures.
  14. That’s bullcrap, says person #4. Media subcultures are as old as civilization.
  15. Give me an example of that, I say.
  16. Oh, there must be hundreds, says person #4. You know how Aristophanes was “low humor”?
  17. Wait, who was Aristophanes, says person #2.
  18. Person #4: “Ancient Greek playwright. Made biting political satire but also the occasional fart joke. So anyways, some Greeks thought he was the best thing ever, others thought it was the end of civilization. That’s a media subculture, right?”
  19. But isn’t modern media different, you say? It’s more than what you consume. You remember reading a Tom Wolfe piece from the early 60s on how teens use the radio. And the thing he said was — and you’re interpreting here — is they didn’t so much listen to music as use it as a personal soundtrack.
  20. Is that in that “Kandy-Colored” whatever collection about custom cars and stuff, I ask.
  21. Yeah, you say. And we continue…

If you have a minute, go through those moves. There’s not a lot of debate or expression. It’s an intense session where you’re networking information together, and where there are clashes it’s almost like a data inconsistency error. Look, I want to take in your Dylan connection, but it conflicts with my Leadbelly knowledge map — how do I resolve that? Show me….
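To make the analogy concrete, here’s a playful sketch (the names, connections, and merge rules are all invented for illustration) of a geek out as a merge of two personal knowledge graphs:

```python
# A toy model of the "database synchronization" analogy: each person is a
# tiny knowledge graph (node -> set of connected ideas), and a geek out is
# a merge that imports missing nodes and surfaces conflicts for discussion.

def geek_out(mine, yours):
    """Merge two knowledge graphs; return the merged graph plus conflicts."""
    merged = {node: set(edges) for node, edges in mine.items()}
    for node, edges in yours.items():
        if node not in merged:
            merged[node] = set(edges)   # novel info: take it wholesale
        else:
            merged[node] |= edges       # new connections to a shared node
    # A "conflict" is a node we each connect in entirely different ways --
    # exactly the disagreement worth digging into at the table.
    conflicts = [node for node in mine.keys() & yours.keys()
                 if mine[node].isdisjoint(yours[node])]
    return merged, conflicts

mine  = {"Dylan": {"auteur theory"}, "Leadbelly": {"singer/songwriter"}}
yours = {"Dylan": {"Tin Pan Alley"}, "Lou Reed": {"everything, apparently"}}

merged, conflicts = geek_out(mine, yours)
```

The point of the sketch is the last step: a clash isn’t an error to be suppressed, it’s flagged as the most interesting place to dig next.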

Of course, I’m sure what I call geeking out goes back to the beginning of humanity. The structure of storytelling, for example, is very similar. You tell a story, and I say that reminds me of this other story — have you heard it? Night after night cultural information propagates, but so do the connections between those stories. We don’t just get the content, we get the map.

2015

Federated wiki tends to operate in this way, at least in the happenings we’ve had (and we’ll have another soon, get in touch if you’re interested). Federated wiki is asynchronous, but it seems to follow in the same grooves. I thought initially that people would re-edit other people’s pages a lot, and they do edit them. But the main thing they do in those edits is supplement the information, by adding examples and connections to the page or by linking to other pages where they share a related fact.

What’s weird, when you think about it, is not really that federated wiki falls into this “geeking out” structure.  What’s weird is so little on the web does. The primary modes of the internet are self-expression and rhetoric. I’m doing it here in a blog post. This isn’t geeking out – it’s some exposition, mostly persuasion, outside a link here or there, nothing that couldn’t have been published in print a couple thousand years ago. Twitter is debate and real-time thought stream. Blog comments are usually debate. Some forums have little flashes of this, but they don’t traverse as much ground.

That said, maybe I’m missing something. Are there other forms on the web where the primary form of communication is this free flowing topical trapeze? Did the geeks really build a web that doesn’t support geeking out? And if so, how did that happen?

My thought is that we’re increasingly frustrated with conversational forms that are not a great fit for the web. But this one conversational form, which is built on something that feels like the hyperlinking of small documents – we don’t seem to have technologies around that. Why?


Twitter’s Gasoline

So Twitter is going to offer opt-in direct messaging from anyone. It looks like you’ll be able to check a box and anybody will be able to DM you, even if you don’t follow them. Andy Baio gets it about right:

[Image: Andy Baio’s tweet]

Direct Messaging from Randos is not something anyone  other than brands asked for, but it is a way for Twitter to make money and  possibly compete with Facebook in the messaging arena. The fact that it takes a service which is well known for fostering online harassment and makes that harassment even easier gets a shrug from Twitter.

There’s the argument, of course, that it’s an opt-in feature, which would be a great argument if this was the first year we had had social media. But it’s not, and we all know the Opt-in Law of Social Media which is that any opt-in feature sufficiently beneficial to the company’s bottom line will become an opt-out feature before long.

I’m reminded of a conversation I had with Ward Cunningham about trackbacks to forking in federated wiki. Basically, right now if someone forks your stuff in federated wiki and you don’t know them, you never learn about it. A notification that would alert you is one of the most requested features for federated wiki, because it could make wiki spread organically. Of course, the downside is it would also be an easy way to harass someone, continually forking their stuff and defacing it or writing ugly notes on it.

So we’re left with the problem — build something that spreads easily, but has this Achilles heel in it, or wait until we have a better idea of the best way to do this. When I first started working with Ward on this I asked why this wasn’t implemented yet — this was the key to going viral after all. His response was interesting. He said we’ve talked about it a lot, and somehow we’ll get something like it. But he said it’s “pouring gasoline on a campfire”, which I took to mean that there’s a downside to virality.

A year later we’re still talking about the best way to do it, and paying attention to what people do without it. We’re still patiently explaining to people why connecting with people in federated wiki is hard compared to other platforms, at least for the moment. We’ve focused on other community solutions, like shareable “rosters” and customizable activity feeds.

I think eventually Ward and others will throw the gasoline on, but only when they’re sure which way the wind is blowing and where the fire is likely to spread.

Looking at the press around this recent direct messaging decision it’s not clear to me that Twitter has done any of that. What does that say about Twitter?

Convivial Tools and Connected Courses

Excellent, must-read post from Terry Elliot in the Connected Courses conversation, which pulls in ideas of Christopher Alexander’s System A (the organic, generative) and System B (the industrial, dead). Key grafs (for me at least):

I have a lot of questions about whether any of the web-based tools we are using actually fit the mold of System A. I don’t often feel those spaces as convivial and natural. Behind the artifice of interface lay the reality of code. Is that structure humane? Is it open, sustainable, and regenerative? Does it feel good? Does the whole idea behind code generate System A or System B? I really don’t know.

What I do know is that I get the very distinct feeling that certain systems I use are not convivial. Google+, Facebook, WordPress, Twitter while full of humans, feel closed, feel like templates to be filled in not spaces to be lived in. Hence, the need for outsiders more than ever to raise the question especially in this week of connected courses where we are talking about the why of why.

As readers know, I’ve been on an Alexander kick lately. And it’s less that Alexander led me to these sorts of questions than questions that have been disturbing me have led me to Alexander. So I probably have a less useful perspective than someone that comes to this with a wealth of Alexandrian insight.

“Templates to be filled, not places to be lived in.” Hmmm.

Maybe some of this is unavoidable. But I wonder in particular if some of it is the perils of StreamMode, that tendency to conceptualize all of our digital life as a stream of events and statements reacting to other events and statements in a never-ending crawl. The problem with StreamMode is that the structures that make StreamMode coherent are past conversations and concepts newbies don’t have access to. StreamMode also relies heavily on personalities, and hence, popularity.

Look at this blog post, for instance. You want to know what StreamMode is? Do I link to a definition? No, not hardly. I link you to an older piece that kinda-sorta defines the term in a context that involves a bunch of people and posts you don’t know about. How humane is that?

StateMode is a little different. StateMode is like a wiki — at any given point in time the wiki represents the total documented understanding of the community. The voice that develops is generic or semi-generic, and aims to be architecture, not utterance. If you want the feeling of StateMode, go to a place like TV Tropes. Look past the ads and you’ll find the site invites you into the community as living architecture instead of stream. New articles form as ways to make older articles more meaningful, or understandable. The process is recursive, not episodic.

The problem is that StreamMode builds community at the expense of coherence, and StateMode builds coherence at the expense of community.

I think this may be one of those irreducible conundrums, but I also think over the past 10 years we have veered too much into StreamMode, which gives us not that timeless sense but an overwhelming wave of personality pinging off of personality.

Ages ago on the Internet you used to stumble onto weird and wonderful mini-sites, like secret gardens found in the middle of the woods. Now we find streams of conversation, endlessly repeating, pushing us to live in a narrative that is not ours. The expressive nature of the web is to be treasured, but I think we’ve lost something.

Blue Hampshire’s Death Spiral

Blue Hampshire, a political community I gave years of my life to, is in a death spiral. The front page is a ghost town.

It’s so depressing, I won’t even link to it. It’s so depressing, that I haven’t been able to talk about it until now. It actually hurts that much.

This is a site that at the point I left it had 5,000 members, 10,000 posts, and 100,000 comments. And at the point co-founders Laura Clawson and Dean Barker left it circa 2011(?), it had even more than that.

And what comments! Because I say that *I* put sweat into it, or Laura and Dean did, but it was the community on that site that really shone.  Someone would put up a simple post, and the comments would capture history, process, policy, backstory — whatever. Check out these comments on a randomly selected post from 2007.

The post concerns an event where the local paleoconservative paper endorsed John McCain for their Democratic candidate, as a way to slight a strong field of Democrats in 2008.

What happens next is amazing, but it was the sort of thing that happened all the time on Blue Hampshire. Sure, people gripe, but they do so while giving out hidden pieces of history and background that just didn’t exist anywhere else on the web. They relate personal conversations with previous candidates, document the history the paper has of name-calling and concern-trolling.

Honest to God, this is one article, selected at random from December 2007 (admittedly, one of our top months). In December 2007, our members produced 426 articles like this. Not comments, mind you. Articles. And on so many of those articles, the comments read just like this — or better.

That’s the power of the stream, the conversational, news-peg driven way to run a community. Reddit, Daily Kos, TreeHugger, what have you.

But it’s also the tragedy of the stream, not only because sites die, but because this information doesn’t exist in any form of much use to an outsider. We’re left with a 10,000-page transcript of dead conversations that contains incredible information ungrokable to most people who weren’t there.

And honestly, this is not just a problem that affects sites in the death spiral or sites that were run as communities rather than individual blogs. The group of bloggers formerly known as the edupunks have been carrying on conversations about online learning for a decade now. There’s amazing stuff in there, such as this recent how-to post from Alan Levine, or this post on Networked Study from Jim. But when I teach students this stuff or send links to faculty I’m struck by how surprisingly difficult it is for a new person to jump into that stream and make sense of it. You’re either in the stream or out of it, toe-dipping is not allowed.

And so I’m conflicted. One of the big lessons of the past 10 years is how powerful this stream mode of doing things is. It elicits facts, know-how, and insights that would otherwise remain unstated.

But the same community that produces those effects can often lock out outsiders, and leaves behind indecipherable artifacts.

Does anyone else feel this? That the conversational mode while powerful is also lossy over time?

I’m not saying that the stream is bad, mind you — heck, it’s been my way of thinking about every problem since 2006. I’m pushing this thought out to all of you via the stream. But working in wiki lately, I’ve started to wonder if we’ve lost a certain balance, and if we pay for that in ways hidden to us. Pay for our lack of recursion through these articles, pay for not doing the work to make all entry points feel scaffolded. If that’s true, then — well, almost EVERYTHING is stream now. So that could be a problem.

Thoughts?

Reclaim Hackathon

Kin and Audrey have already written up pretty extensive summaries about the Reclaim event in Los Angeles. I won’t add much.

Everything was wonderful, and I hope I don’t upset people by choosing one thing over another. But there were a few things for me that stood out.

Seeing the Domain of One’s Own development trajectory. I’ve seen this at different points, but the user experience they have for the students at this point is pretty impressive.

JSON API directories. So I really like JSON, as does Kin. But at dinner on Friday he was proposing that the future was that, the same way we query a company for its APIs, we would be able to query a person. I’d honestly never thought of this before. This is not an idea like OAuth, where I delegate some power/data exchange between entities. This is me making a call to the authoritative Mike Caulfield API directory and saying, hey, how do I set up a videochat? Or where does Mike post his music? And pulling back from that an API call directly to my stuff. This plugged into the work he demonstrated the next day, where he is painstakingly finding all the services he uses, straight down to Expedia, and logging their APIs. I like the idea of hosted lifebits best, but in the meantime this idea of at least owning a directory of your APIs to stuff in other places is intriguing.
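A hedged sketch of what a personal API directory might look like. Every URL, field name, and service below is invented for illustration; Kin’s actual work may structure this very differently:

```python
import json

# A made-up personal API directory: a single authoritative document that
# answers "how do I reach this person for X?" with a direct API call spec.
mike_api_directory = {
    "person": "Mike Caulfield",
    "apis": {
        "videochat": {"url": "https://api.example.com/mike/videochat", "method": "POST"},
        "music":     {"url": "https://api.example.com/mike/music",     "method": "GET"},
    },
}

def lookup(directory, service):
    """Return the call spec for a service, or None if the person doesn't list it."""
    return directory["apis"].get(service)

# Querying the person the way we'd query a company for its APIs:
print(json.dumps(lookup(mike_api_directory, "music")))
```

The interesting design choice is that the directory itself is the only thing you have to host; the endpoints it points to can live anywhere.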

Evangelism Know-how. I worked for a while at a Service-Oriented Architecture-obsessed company as an interface programmer (dynamically building indexes to historical newspaper archives using JavaScript and Perl off of API-returned XML). I’m newer to GitHub, but have submitted a couple of pull requests through it already. So I didn’t really need Kin’s presentation on APIs or GitHub. But I sat and watched it because I wanted to learn how he did presentations. And the thing I constantly forget? Keep it simple. People aren’t offended by getting a bit of education about what they already know, and the people for whom it’s new need you to take smaller steps. As an example, Kin took the time to show how JSON can be styled into most anything. On the other hand, I’ve been running around calling SFW a Universal JSON Canvas without realizing people don’t understand why delivering JSON is radically different (and more empowering) than delivering HTML (or worse, HTML + site chrome).

Known. I saw known in Portland, so it wasn’t new to me. But it was neat to see the reaction to it here. As Audrey points out, much of day two was getting on Known.

Smallest Federated Wiki. Based on some feedback, I’ve made a decision about how I am going to present SFW from now on. I am astounded by the possibilities of SFW at scale, but you get into unresolvable disagreements about what a heavily federated future would look like. Why? Because we don’t have any idea. I believe that for the class of documents we use most days, stressing out about whether you have the best version of a document will seem as quaint as stressing out about the number of results Google returns on a search term (remember when we used to look at the number of results and freak out a bit?). But I could be absolutely and totally wrong. And I am certain to be wrong in a lot of *instances* — it may be for your use case that federation is a really really bad idea. Federation isn’t great for policy docs, tax forms, or anything that needs to be authoritative, for instance.

So my newer approach is to start from the document angle. Start with the idea that we need a general tool to store our data, our processes, our grocery lists, our iterated thoughts.  Anything that is not part of the lifestream stuff that WordPress does well. The stuff we’re now dropping into Google Docs and emails we send to ourselves. The “lightly-structured data” that Jon Udell rightly claims makes up most of our day. What would that tool have to look like?

  • It’d have to be general purpose, not single purpose (more like Google Docs than Remember the Milk)
  • It’d have to support networked documents
  • It’d have to support pages as collections of sequenced data, not visual markup
  • It’d have to have an extensible data format and functionality via plugins
  • It’d have to have some way to move your data through a social network
  • It’d have to allow the cloning and refactoring of data across multiple sites
  • It’d have to have rich versioning and rollback capability
  • It’d have to be able to serve data to other applications (in SFW, done through JSON output)
  • It’d have to have a robust flexible core that established interoperability protocols while allowing substantial customization (e.g. you can change what it does without breaking its communication with other sites).
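As a rough illustration of several of those requirements at once (pages as sequenced data rather than markup, serving data to other applications, cloning across sites), here’s a sketch in the spirit of SFW’s JSON page format. The field names and values are illustrative, not the actual spec:

```python
import json

# A page as a collection of sequenced data items, not visual markup.
# "story" holds the items; "journal" holds the edit history that makes
# versioning and rollback possible.
page = {
    "title": "Geeking Out",
    "story": [
        {"type": "paragraph", "id": "a1", "text": "A geek out is not a debate."},
        {"type": "reference", "id": "b2", "site": "example.fed.wiki", "slug": "geeking-out"},
    ],
    "journal": [
        {"type": "create", "id": "a1"},
    ],
}

# Because the page is data, any application can consume it, and a fork is
# just a copy of the JSON plus a new journal entry recording the fork.
fork = json.loads(json.dumps(page))   # clone across sites, original untouched
fork["journal"].append({"type": "fork", "site": "example.fed.wiki"})
```

Plugins extending the data format would just define new `"type"` values; the surrounding structure stays interoperable across sites.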

Of those, the idea of a document as a collection of JSON data is pretty important, and the idea of federation as a “document-centered network” is amazing in its implications. But I don’t need to race there. I can just start by talking about the need for a general use, personal tool like this, and let the networking needs emerge from that. At some point it will turn out that you can replace things like wikis with things like this or not, but ultimately there’s a lot of value you get before that.

Gruber: “It’s all the Web”

Tim Owens pointed me to this excellent piece by John Gruber. Gruber has been portrayed in the past as a bit too in the Apple camp; but I don’t think anyone denies he’s one of the sharper commentators out there on the direction of the Web. He’s also the inventor of Markdown, the world’s best microformat, so massive cred there as well.

In any case, Gruber gets at a piece of what I’ve been digging at the past few months, but from a different direction. Responding to a piece on the “death of the mobile web”, he says:

I think Dixon has it all wrong. We shouldn’t think of the “web” as only what renders inside a web browser. The web is HTTP, and the open Internet. What exactly are people doing with these mobile apps? Largely, using the same services, which, on the desktop, they use in a web browser. Plus, on mobile, the difference between “apps” and “the web” is easily conflated. When I’m using Tweetbot, for example, much of my time in the app is spent reading web pages rendered in a web browser. Surely that’s true of mobile Facebook users, as well. What should that count as, “app” or “web”?

I publish a website, but tens of thousands of my most loyal readers consume it using RSS apps. What should they count as, “app” or “web”?

I say: who cares? It’s all the web.

I firmly believe this is true. But why does it matter to us in edtech?

  • Edtech producers have to get out of browser-centrism. Right now, mobile apps are often dumbed-down versions of a more functional web interface. But the mobile revolution isn’t about mobile, it’s about hybrid apps and the push of identity/lifestream management up to the OS. As hybrid apps become the norm on more powerful machines we should expect to start seeing the web version becoming the fall-back version. This is already the case with desktop Twitter clients, for example — you can do much more with TweetDeck than you can with the Twitter web client — because once you’re freed from the restrictions of running everything through the same HTML-based, cookie-stated, security-constrained client you can actually produce really functional interfaces and plug into the affordances of the local system. I expect people will still launch many products to the web, but hybrid on the desktop will become a first-class citizen.
  • It’s not about DIY, it’s about hackable worldware. You do everything yourself to some extent. If you don’t build the engine, you still drive the car. If you don’t drive the car, you still choose the route. DIY is a never-ending rabbit-hole as a goal in itself. The question for me is not DIY, but the old question of educational software vs. worldware. Part of what we are doing is giving students strategies they can use to tackle problems they encounter (think Jon Udell’s “Strategies for Internet citizens“). What this means in practice is that they must learn to use common non-educational software to solve problems. In 1995, that worldware was desktop software. In 2006, that worldware was browser-based apps. In 2014, it’s increasingly hybrid apps. If we are committed to worldware as a vision, we have to engage with the new environment. Are some of these strategies durable across time and technologies? Absolutely. But if we believe that, then surely we can translate our ideals to the new paradigm.
  • Open is in danger of being left behind. Open education mastered the textbook just as the battle moved into the realm of interactive web-based practice. I see the same thing potentially happening here, as we build a complete and open replacement to an environment no one uses anymore.

OK, so what can we do? The first thing is to get over the religion of the browser. It’s the king of web apps, absolutely. But it’s no more pure or less pure an approach than anything else.

The second thing we can do is experiment with hackable hybrid processes. One of the fascinating things to me about file based publishing systems is how they can plug into an ecosystem that involves locally run software. I don’t know where experimentation with that will lead, but it seems to me a profitable way to look at hybrid approaches without necessarily writing code for Android or iOS.

Finally, we need to hack apps. Maybe that means chaining stuff up with IFTTT. Maybe it means actually coding them. But if we truly want to “interrogate the technologies” that guide our daily life, you can’t do that and exclude the technologies that people use most frequently in 2014. The bar for some educational technologists in 2008 was coding up templates and stringing together server-side extensions. That’s still important, but we need to be doing equivalent things with hybrid apps. This is the nature of technology — the target moves.

Teaching the Distributed Flip [Slides & Small Rant]

Due to a moving-related injury I was sadly unable to attend ET4Online this year. Luckily my two co-presenters for the “Teaching the Distributed Flip” presentation carried the torch forward, showing what recent research and experimentation has found regarding how MOOCs are used in blended scenarios.

Here are the slides, which actually capture some interesting stuff (as opposed to my often abstract slides — Jim Groom can insert “Scottish Twee Diagram” joke here):

[Embedded slide deck]

One of the things I was thinking as we put together these slides is how little true discussion there has been on this subject over the past year and a half. Amy and I came into contact with the University System of Maryland flip project via the MOOC Research Initiative conference last December, and we quickly realized we were finding the same unreported opportunities and barriers they were in their work. In our work, you could possibly say the lack of coverage was due to the scattered nature of the projects (it’d be a lousy argument, but you could say it). But the Maryland project is huge. It’s much larger and better focused than the Udacity/SJSU experiment. Yet, as far as I can tell, it’s crickets from the industry press, and disinterest from much of the research community.

So what the heck is going on here? Why aren’t we seeing more coverage of these experiments, more sharing of these results? The findings are fascinating to me. Again and again we find that the use of these resources energizes the faculty. Certainly, there’s a self-selection bias here. But given how crushing experimenting with a flipped model can be without adequate resources, the ability of such resources to spur innovation is nontrivial. Again and again we also find that local modification is *crucial* to the success of these efforts, and that lack of access to flip-focused affordances works against potential impact and adoption.

Some folks in the industry get this — the fact that the MRI conference and the ET4Online conference invited presentations on this issue shows the commitment of certain folks to exploring this area. But the rest of the world seems to have lost interest when Thrun discovered you couldn’t teach students at a marginal cost of zero. And the remaining entities seem really reluctant to seriously engage with these known issues of local use and modification. The idea that there is some tension between the local and the global is seen as a temporary issue rather than an ongoing design concern.

In any case, despite my absence I’m super happy to have brought two leaders in this area — Amy Collier at Stanford Online and MJ Bishop at USMD — together. And I’m not going to despair over missing this session too much, because if there is any sense in this industry at all this will soon be one of many such events. Thrun walked off with the available oxygen in the room quite some time ago. It’s time to re-engage with the people who were here before, are here after, and have been uncovering some really useful stuff. Could we do that? Could we do that soon? Or do we need to make absurd statements about a ten university world to get a bit of attention?