Geeking out as a conversational paradigm


After I graduated college I couldn’t find a job straight off, and I didn’t know what I wanted to do. I ended up staying home with my parents for a bit, in suburbia, and nearly losing my mind.  The one thing that saved me was weekly four-hour coffeeshop sessions with two friends.

The conversations gave me something I'd had in high school and college but that was now suddenly in short supply. It was a sort of conversational style that wasn't really expressive or rhetorical, but on a good night it could feel effortless. I just thought of it as "good conversation", but it was clearly more of a distinct style.


A picture of a Denny's for our non-American readers. It's a chain, the coffee is horrible, but it has free refills and they tend not to kick you out.

I said this to Milo, one of those two friends, one night at the Denny’s.

“Oh, you mean geeking out?” Milo asked.

“Geeking out?” I said. It was 1993, and the first time I’d heard the term.

Milo outlined the nature of geeking out. To him, a “geek out” was a wide-ranging conversation that obeyed different sorts of rules than other conversations. It was emotional, but not primarily expressive. It encompassed disagreement, but it was not debate.

The major rule of the geek out session was that each conversational move should build off previous moves, but extend them and supply new information as well. I tell you something, you find an interesting connection to something you know, and you make that connection.

It had disagreement, but it didn’t work like a debate. The goal of a geek out when it came to disagreement was to map out the disagreement more fully. If you dropped a stunning proposition like “Mad About You is the most underrated show on TV” on the table, that’s exciting in a geek out, even if it’s painfully wrong, because it hints that we may share profoundly different information contexts, and this disagreement has surfaced them. Now we get to dig in, which is sure to bring in some novel information or connections.

In an expressive conversation I want you to know exactly how I feel. In a debate, I want you to understand and respect my point of view.

In a geek out I want to know the most valuable and interesting things you have in your head and I want to get them in my head. The people that understand the form may look like they are debating or expressing, but they are doing something much much different.


I don’t know if all this was so succinctly expressed at the table that night. I do know that when I went back to school I became fascinated with discourse analysis. I entered the Literature and Linguistics program at Northern Illinois University. I initially went to work on stylistics, but a course with Neal Norrick turned me on to the possibilities of conversational analysis.

Over the next few years I’d record dozens of conversations of this sort and play them back, listening for the conversational moves. My friends just got used to me having the tape recorder around. My wife, Nicole, looked at the tape recorder a bit weird when I brought it on our second date back in 1995, but when others told her — “Oh, that’s just Holden with his project” she rolled with it, and didn’t run screaming, for which I am forever grateful.


A selection of my mid-1990s recordings of conversations. I made too many to buy decent tapes. The tape names reflect either subjects or participants.

Because I was a grad student at the time, and grad students need to find a niche, I was particularly obsessed with a type of geeking out involving what I called “possible world creation.” But the broad insight that fascinated me was that people co-construct many “geek out” conversations the way that improv artists construct a scene. A conversation is something you have, but it’s also something you build.

It’s 20 years later, and the term “geeking out” has been claimed by others now, I suppose. But looking at it now after soaking in Connectivism and theories of social learning for a decade, I see something else that fascinates me. It’s true that the conversation of the “geeking out” session (as defined by Milo) is co-built. But it looks like something else too. It looks like network mapping.

In fact, if alien robots were to observe geeking out, I think this is what they’d see. We’re little creatures that roam around, experiencing things while disconnected from the network, learning things while disconnected from the network.

Occasionally we meet up, and there’s this problem — I want your insights, your point of view, the theories, trivia, and know-how you have. And as importantly, I want to know how you’ve connected it and indexed it.  So we traverse the nodes. I say I have a data record about John Dewey. You say, I’ve got one of those too, it’s connected to this fact over here about James Liberty Tadd’s weird drawing pedagogy. I’ve never heard of that, but as you talk about it I realize it connects with this 1890s obsession with repeated designs and Japanese notan, and how that led to the book that would lay much of the foundation for art education, the Elements of Composition.

When you start thinking of geeking out as a sort of database synchronization protocol, it makes a lot of sense. Consider the following geek out session, and note the way the moves try to reconcile multiple conflicting networks of knowledge during our sync-up session. I’ve compressed the moves from the stop and start they’d normally be to make it more apparent what’s going on:

  1. You tell me about your disappointment with the last Joss Whedon film.
  2. I say that relates to a piece I read on Whedon and the death of auteur theory and describe it. Others ask about the article.
  3. A third person says, how come music didn’t go through auteur theory? Kind of interesting, right?
  4. Person #4 says well, it sort of did. Dylan was auteur theory in music.
  5. How’s that, other people at the table say?
  6. Person #4: Because he wrote his own music, he introduced the “singer/songwriter” vs. the Tin Pan Alley model.
  7. But wait, you say – Leadbelly was a singer songwriter. The blues guys were singer/songwriters. So how exactly did Dylan invent it?
  8. Hmm, that’s interesting, person #4 says. But of course they were altering traditional songs.
  9. So was Dylan, you say, so I don’t quite buy it. His first album was all covers, right?
  10. Wait, I say, I don’t so much care if Dylan *was* the reboot of the auteur — he was seen that way, and that’s what’s interesting.
  11. We talk about the early 60s a bit. Person #3 brings up Lou Reed because he always brings up Lou Reed.
  12. We groan. You know — some things don’t relate to Lou Reed, we say.
  13. Person #3 resumes. You got a lot of things going on in 1960 — in film there was industrialization, at least from the perspective of the Cahiers crowd. But I think there was a sort of media as a lifestyle thing. Media subcultures.
  14. That’s bullcrap, says person #4. Media subcultures are as old as civilization.
  15. Give me an example of that, I say.
  16. Oh, there must be hundreds, says person #4. You know how Aristophanes was “low humor”?
  17. Wait, who was Aristophanes, says person #2.
  18. Person #4: “Ancient Greek playwright. Made biting political satire but also the occasional fart joke. So anyways, some Greeks thought he was the best thing ever, others thought it was the end of civilization. That’s a media subculture, right?”
  19. But isn’t modern media different, you say? It’s more than what you consume. You remember reading a Tom Wolfe piece from the early 60s on how teens use the radio. And the thing he said was — and you’re interpreting here — is they didn’t so much listen to music as use it as a personal soundtrack.
  20. Is that in that “Kandy-Kolored” whatever collection about custom cars and stuff, I ask.
  21. Yeah, you say. And we continue…

If you have a minute, go through those moves. There’s not a lot of debate or expression. It’s an intense session where you’re networking information together, and where there are clashes it’s almost like a data inconsistency error. Look, I want to take in your Dylan connection, but it conflicts with my Leadbelly knowledge map — how do I resolve that, show me….
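To push the metaphor one step further, here is a playful toy sketch of my own (nothing formal, and not any real protocol): treat each person's knowledge as a map of topics to associations, and treat the geek out as a merge that unions both, surfacing the mismatches as the interesting places to dig in rather than as errors to win.

```python
# A playful, purely illustrative sketch of the geek-out-as-sync metaphor.
# Each person's "knowledge map" is a dict of topic -> set of associations.
mine = {"Dylan": {"auteur theory"}, "Leadbelly": set()}
yours = {"Dylan": {"singer/songwriter"}, "Leadbelly": {"singer/songwriter"}}

# The sync: union the topics, then union the associations for each topic.
merged = {topic: mine.get(topic, set()) | yours.get(topic, set())
          for topic in mine.keys() | yours.keys()}

# The "inconsistencies" aren't errors to resolve by winning a debate;
# they're the nodes the conversation digs into next.
clashes = sorted(t for t in mine.keys() & yours.keys() if mine[t] != yours[t])

print(clashes)  # ['Dylan', 'Leadbelly']
```

In a debate the clash list would be the scoreboard; in a geek out it is the agenda.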

Of course, I’m sure what I call geeking out goes back to the beginning of humanity. The structure of storytelling, for example, is very similar. You tell a story, and I say that reminds me of this other story — have you heard it? Night after night cultural information propagates, but so do the connections between those stories. We don’t just get the content, we get the map.


Federated wiki tends to operate in this way, at least in the happenings we’ve had (and we’ll have another soon — get in touch if you’re interested). Federated wiki is asynchronous, but it seems to follow in the same grooves. I thought initially that people would re-edit people’s pages a lot, and they do edit them. But the main thing they do in those edits is supplement the information, adding examples and connections to the page or linking to other pages where they share a related fact.

What’s weird, when you think about it, is not really that federated wiki falls into this “geeking out” structure.  What’s weird is so little on the web does. The primary modes of the internet are self-expression and rhetoric. I’m doing it here in a blog post. This isn’t geeking out – it’s some exposition, mostly persuasion, outside a link here or there, nothing that couldn’t have been published in print a couple thousand years ago. Twitter is debate and real-time thought stream. Blog comments are usually debate. Some forums have little flashes of this, but they don’t traverse as much ground.

That said, maybe I’m missing something. Are there other forms on the web where the primary form of communication is this free flowing topical trapeze? Did the geeks really build a web that doesn’t support geeking out? And if so, how did that happen?

My thought is that we’re increasingly frustrated with conversational forms that are not a great fit for the web. But this one conversational form, which is built on something that feels like the hyperlinking of small documents – we don’t seem to have technologies around that. Why?

The OER Case for Federated Wiki

I talk a lot about the open pedagogy case for federated wiki, but not much about the OER/OCW case for it. That doesn’t mean it isn’t a good fit for the problems one hits in open materials reuse.

Here’s how you currently reuse something in WordPress, for example. It’s a pretty horrific process.

  1. Log into the WordPress source site
  2. Open the file, go to the text editor.
  3. Select all the text, ctrl-c to copy it.
  4. Go log into your target WordPress site
  5. Create a new page and name it.
  6. Go into the text editor, and paste the text in.
  7. Save Draft.
  8. Go back to the source site. Right click on the images in the post and download them.
  9. Go back to the target site. Open the Media Gallery and upload the images you just downloaded.
  10. Go through your new post on the target site, and replace the links pointing to the old images with links pointing to the images you just uploaded.
  11. Preview the site, do any final cleanup. Resize images if necessary. Check to make sure you didn’t pull in any weird styles that didn’t transfer (Damn you mso-!)
  12. Save and post. You’re done!
  13. Oh wait, you’re not done. Go to the source post and copy the URL. Try to find the author’s name on the page and remember it.
  14. Go to the bottom of your new “target” page and add attribution “Original Text by Jane Doe”. Select Jane Doe and paste in the hyperlink. Test the link.
  15. Now you’re REALLY done!

It’s about a five- to ten-minute process per page, depending on the number of images that have to be ported.

Of course, that’s assuming you have login rights to both sites. If you don’t, replace steps one and two with copying the text from the actual post and pasting it into the *visual* editor to preserve formatting, then go through the same steps, except spend an extra five to ten minutes on cleanup in step eleven.

It’s weird to me how fish-can’t-see-the-water we are about this. It’s 2015, and we take this 15-step process to copy a page from one site to another as a given.

And once you see how absurd this process is, you can’t *unsee* it. All those philosophical questions about why people don’t reuse stuff more become a little ridiculous. There are many psychological, social, and institutional reasons why people don’t reuse stuff, but they are all academic questions until we solve the simpler problem: our software sucks at reuse. If you had an evil plan to stop reuse and remix, you would build exactly the software we have now. If you wanted to really slow down remix, you would build the World Wide Web as we know it today.

Conversely, here’s what the steps are in federated wiki to copy a page from one site to another:

  1. Open your federated wiki site.
  2. Drag the page from the source site to the target site and drop it.
  3. Press the fork button. You’re done!

And keep in mind you don’t need to have the front-end of your site look like federated wiki. All that matters is you have federated wiki on the backend. Here’s a short video showing how two sites with different web-facing appearance still allow the easy transfer of pages:

You’ll notice that most of the length of this video is explanation. The actual transfer of the three pages runs from 1:45 to 2:30 in the video. That’s about fifteen seconds a page to copy, complete with images. While the question of why people don’t remix and reuse more is interesting to me from a theoretical standpoint, I think it pales in comparison to this question: what would happen if we dropped reuse time from ten minutes to fifteen seconds?

How is this possible? Mostly, it’s the elegance of federated wiki’s data model.

  • Data Files, not Databases. Traditional sites store the real representation of a page in a database somewhere, then render it into a display format on demand. The database takes a clean-ish copy and dirties it up with display formatting; you grab that formatting and try to clean it up to put it back in the database. Federated wiki, however, is based on data files, and when we pull a page from one federated-wiki-driven site to another, federated wiki grabs the JSON file, not the rendered HTML.
  • JSON, not HTML. HTML renders data in a display format. A YouTube video, for example, is specified as an IFRAME along with width, height, and other display data. This hurts remixability, because our two sites may handle YouTube videos in different ways (width of player is a persistent problem). JSON feeds the new site the data (play the YouTube video with this ID, etc.) but lets the new site handle the render.
  • Images Embedded. This is a simple thing, and the scalability of it has a few problems, but for most cases it’s a brilliant solution. Federated wiki’s JSON stores images not as links to external files, but as data stored in the page itself. This means when you copy the page you bring the images with it too. If you’ve ever struggled with this problem on another platform, you know how huge this is: there’s a reason half the pages from ten years ago display broken images now — they were never properly copied.
  • Plugin Architecture. Federated wiki’s plugin architecture works much like Alan Kay’s vision of how the web should have worked. The display/interaction engine of federated wiki looks at each JSON “item” and tries to find the appropriate plugin to handle it. Right now these are mainly core plugins, which everyone has, but it’s trivial to build new plugins for things like multiple choice questions, student feedback, and the like. If you copy a page using a new-fangled plugin you don’t have, the page on your site will let you know that, and direct you to where you can download the plugin. Ultimately, this means we can go beyond copying WordPress-style pages and actually fork in tools and assessments with the same ease.
  • History Follows the Page. As anyone who has reused content created and revised by multiple people knows, attribution is not trivial. It consumes a lot of time, and the process is extremely prone to error. Federated wiki stores the revision history of a page with the page. As such, your edit history is always with you, and you don’t need to spend any time maintaining attribution. If the view of history in the federated wiki editor is not sufficient for your needs, you can hit the JSON “Journal” of the page and display contribution history any way you want.
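To see why forking is so cheap, it helps to look at the shape of the data. The sketch below is a simplified, hypothetical page in the spirit of federated wiki's JSON format (real pages have more fields, and this is not the actual server code); the point is that the story items, the embedded image, and the journal all live in one document, so a fork is essentially a deep copy plus a journal entry.

```python
import copy

# A simplified, hypothetical page modeled on federated wiki's
# title / story / journal JSON shape. Illustrative only.
source_page = {
    "title": "James Liberty Tadd",
    "story": [
        {"id": "a1b2c3", "type": "paragraph", "text": "Tadd taught drawing..."},
        # Images live in the page as data, not as links to external files.
        {"id": "d4e5f6", "type": "image", "url": "data:image/png;base64,iVBORw0KGgo..."},
    ],
    "journal": [
        {"type": "create", "id": "a1b2c3", "date": 1428000000000},
    ],
}

def fork(page, from_site):
    """Copy the whole page: content, embedded images, and history travel together."""
    forked = copy.deepcopy(page)
    # The fork itself becomes part of the page's history, so attribution
    # never has to be maintained by hand.
    forked["journal"].append({"type": "fork", "site": from_site})
    return forked

my_copy = fork(source_page, "source.example.com")
```

Nothing here has to chase external image files or reconstruct an edit trail; both were already part of the page.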

We could probably say more on this, but this should do for now.

We Are Not In the Content Business, We Are In the Community Business

My daughter, who is amazing by the way, introduced me to John Green a couple years ago. Her face was always in her phone, and I thought geez, Katie, get off Facebook. And I think I actually said that. To which she replied “Why would I be on Facebook? I’m watching John and Hank Green videos.”

And I decided maybe I should check this out. And the further I got into it, the more amazed I was. There was a community around the videos called “nerdfighters” who stuck up for what they saw as nerdy values of thinking about things, caring about people, getting excited about ideas, and trying to be generally nice. Here’s John Green on what it is to be a nerd in one of his many YouTube videos:

“…because nerds like us are allowed to be unironically enthusiastic about stuff… Nerds are allowed to love stuff, like jump-up-and-down-in-the-chair-can’t-control-yourself love it. Hank, when people call people nerds, mostly what they’re saying is ‘you like stuff.’ Which is just not a good insult at all. Like, ‘you are too enthusiastic about the miracle of human consciousness’.”

The community made reaction videos, discussed deep ideas online, blogged, organized fundraisers that raised millions of dollars for third world (mostly female) entrepreneurs. If you’ve seen the film of Green’s book The Fault in Our Stars, know that it was inspired by a real girl, a nerdfighter, who died at sixteen of cancer, and knew she’d likely die, but spent her short life engaged with this community, trying to make the world better.

Hank Green got up on stage at a Google event recently and, in the most amazing speech I’ve heard about education this year, proceeded to tear the potential YouTube advertisers in the audience to shreds at an event meant to woo them. Advertising, he says, is built on distraction. CSI: Miami is a great way to distract yourself from the intense and bittersweet pain of knowing that this world you know today will disappear, that you will someday die, and that everything you have done may amount to nothing. And advertising is a great model for that, because CSI: Miami wants a show that distracts you just enough, which is what advertisers want too. And in that world, how many eyeballs you got, or downloads, or seats, or whatever is probably a good measure of impact.

But here’s the thing, says Green. “We’re not in the distraction business. We’re in the community business.”

And I think — that’s it in a nutshell.

This is the big misunderstanding we have with vendors, with Silicon Valley, with brogrammers trying to sell the next killer educational gadget. We want them to be John Green — to connect us together as educators and empower us to change the world — with or without them. We want to be amazing. Not by having a better app, but by being part of a technical and personal network that allows us to far exceed what we can do personally.

We want them to be John Green. They want to be Mark Zuckerberg. We want them to be in the community business. They want to be in the social software business.

We need empowerment, but they can’t provide that. They talk about community, but to them community is data that exists on a server somewhere. It’s “1 million registered” or “2 million sign-ups”. No time is spent trying to empower us outside their own narrow interests. We’re never viewed as partners.

You’ll never see the founder of Knewton tear up on stage about how much his community members inspired him and taught him. Sure, you’ll get the stories of how the software or the “community” cured autism, or saved someone from suicide or a dead-end job. But watch the video above. Honestly, WATCH THE VIDEO. John Green understands that he’s the match, not the fire. And once you see that, it’s hard to unsee how fake so much else is.

If you want to really change education, you can make software, content, social apps — and you should. You should be awesome at all of that. But every development decision has to have the community you are trying to create at its core. You have to be excited about the potential of that community, and work to unleash it. You have to be in awe of it.

I’m sure that’s what many edupreneurs think they are doing. But I don’t see it. I see vendor lock-in and head-patting condescension. An unspoken assumption that the existing community of teachers and students are something to be routed around like damage. The idea that the community is bounded and defined by the product. The idea that the product must be locked-down, black-boxed, and triple-copyrighted to “protect” it from the community.

Maybe I’m wrong. If you’re the edupreneur exception, then go ahead — WATCH THE VIDEO. If at the end of it you think that you are more John Green than Uber, then let’s talk. Otherwise…

A Thankful Wikipedia

A weird thing happened to me on Wikipedia the other day: I was thanked.

I wasn’t expecting it. Far from it. I resurrected my Wikipedia account a couple months ago, with the idea I’d walk the talk and start fixing inclusivity problems on Wikipedia: everything from women tech pioneers with underdeveloped articles, to black Americans in STEM with no articles, to foreign literatures with little to no coverage.

My goal has been to make a small but meaningful edit each week in this regard. Not to dive into the expensive never-ending arguments of Wikipedia, but to do the relatively uncontroversial stuff that doesn’t get done mostly because no one puts in the time to do it. Stuff like making sure that people like Liza Loop and Thomas Talley have pages about them, and that Adele Goldberg gets credit for her work on the Dynabook.

Most underrepresentation on Wikipedia is not the result of people blocking edits, but of no one doing the work. I don’t recommend people wander into the GamerGate article, or get into the debate about whether Hedy Lamarr’s work on frequency-hopping can really be seen as a predecessor to Wi-Fi. But, on the other hand, the main reason the Karen Spärck Jones page is underdeveloped is that no one has expanded it substantially since her death. The reason the Kathleen Booth page has no photo is that no one has gone through the laborious process of finding an openly licensed photo.

That lack of effort in these areas is why, when you google Kathleen Booth (born 1922), you are greeted with an incongruous Google-supplied Twitter photo that is actually of the CEO of a small marketing firm. If Wikipedia had a photo in there, Google would pull it. But it doesn’t, so Google guesses, and this is the result:



The simple solution to this is to cut out some of the time you spend decompressing on Twitter and replace it with some of the boring yet restful work of improving articles.

But I digress — I was talking about thanking. Normally when you do this sort of thing you either get negative feedback (“This source does not support this claim!”) or silence. And usually it’s silence.

Today, something different happened. I got thanked, via an experimental feature Wikipedia is trying out:


Here this person thanked me for finding an Adele Goldberg photo. They then went to my Liza Loop article and disagreed with a claim of mine, saying the source cited didn’t support so strong a claim. Without the thank, it would be easy to think of this person as some opponent, out to undo my work. The thank changes things. Consequently, when I review their edit on the Liza Loop article and find it persuasive enough, I thank them back. I *want* more people working on these articles — people making sane edits and revisions is a *good* thing, because over time it will improve the quality.


Wikipedia gets a lot of flak for its bias, exclusivity, and toxic bureaucratic culture. And rightly so — the site is clearly working through an awkward phase in its history. It has succeeded in becoming a much higher quality publication in the past ten years than anyone would have dreamed possible. But in the process it has also become a somewhat less inviting place.

Features like thanking (introduced a couple years ago, but becoming more widely used) show that they are still trying to get the right mix of hospitality and precision, and that they correctly see the potential of the interface to help them change the culture. The Visual Editor is another such effort.

I’ve often said that the amazing beauty and the potential ugliness of the future of the web is there to see in Wikipedia. It’s the canary in the coal mine, the Punxsutawney Phil of our networked ambitions. We have to make it work, because if we can’t we’re in for a lot more years of winter. It’s good to see the efforts going on there. And it’s good to be back!


Simple Generative Ideas

I’ve been explaining federated wiki to people for over a year now, sometimes successfully and sometimes not. But the thing I find the hardest to explain is the simple beauty of the system.

As exhibit A, this is something that happened today:

[Screenshot: creating a new federated wiki template from an existing template]

In case you can’t see that, this is what is going on. I’m creating a new template (look at my last post to understand the power of templates). But since the way you create a template is just to create a normal page whose name ends in the word “template,” something interesting happens: you can create templates based on other templates.

Now since the IDs of the elements follow each template, that means that your second generation template inherits the abilities you coded for the first generation template, including all the bug-checking, cross-browser compatibility fixes, etc. You get all of that without writing a single line of code. So I can hammer out a basic template, write some handlers on the client side, then others can come and build a template based on my template and extend it, sometimes without even programming. Or they can drag and drop items from other templates, and since the IDs follow those items they might be able to mix and match their way to a new template.
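A toy sketch of why this works: because each story item carries a stable ID, a template derived from another template keeps the IDs that the original's client-side handlers were written against. The structure below is illustrative only (my own hypothetical field values, in the spirit of fedwiki's JSON), not the actual template machinery:

```python
import copy

# Toy illustration: item IDs travel with items, so second-generation
# templates inherit whatever behavior was keyed to those IDs.
base_template = {
    "title": "question template",
    "story": [
        {"id": "prompt-1", "type": "paragraph", "text": "Question goes here"},
        {"id": "answer-1", "type": "paragraph", "text": "Answer goes here"},
    ],
}

def derive_template(parent, new_title, extra_items=()):
    """Build a new template from an existing one, adding items but keeping IDs."""
    child = copy.deepcopy(parent)
    child["title"] = new_title
    child["story"].extend(extra_items)
    return child

timed = derive_template(base_template, "timed question template",
                        [{"id": "timer-1", "type": "paragraph", "text": "60 seconds"}])
```

Any handler watching for `prompt-1` or `answer-1` works on the derived template without a line of new code, which is the effect described above.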

Did Ward, Paul, and Nick plan any of this as they designed the code? No, not a bit. But every time a feature is requested the slow discussion begins — instead of a feature, is there a way to deliver this as a capability that extends the system as a whole?  Is there a way to make this simpler? More generative? More in line with the way other things in federated wiki work?

So Ward is as surprised as me that you can build templates out of templates. But we are both also not surprised, because this is how things go with software that’s been relentlessly refactored. There’s even a term for it: “reverse bit-rot”.

A normal software process would have long ago decided to give Templates their own type and data model, soup them up with additional features, protections, tracking. The agile process says things should be constantly refactored down to a few simple and powerful ideas. It’s not as flashy, but you find you have killer features that you didn’t even intentionally write.

The Simplest Federated Database That Could Possibly Work

The first wiki was described by Ward Cunningham as the “simplest database that could possibly work.” Over the next couple of years, many different functions were built on top of that simple database. Categories (and to some extent, the first web notions of tagging) were built using the “What links here?” functionality. The recent changes stream (again, a predecessor to the social streams you see every day now) was constructed off of file write dates. Profile signatures were page links, and were even used to construct a rudimentary messaging system.

In other words, it was a simple database that was able to build a rough facsimile of what would later become Web 2.0.

While we’ve talked about federated wiki as a browser, it can also be used as a backend database that natively inherits the flexibility of JSON instead of the rigidity of relational databases. Here we show how a few simple concepts — JSON templates, pages as data, and stable IDs — allow a designer to create a custom content application free of the mess of traditional databases while still preserving data as data. We do it in 45 minutes, but we cut the video down to 12 minutes of viewing time.
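Part of what makes such custom applications tractable is the plugin-style dispatch described earlier: each JSON item names its type, and the front-end routes each item to whatever renderer it has for that type, so two sites can render the same data differently. The renderers below are toy stand-ins of my own invention, not federated wiki's real plugin API:

```python
# Toy plugin dispatch in the spirit of federated wiki's item/plugin model.
# Renderer names and HTML output are illustrative, not the real plugin API.
def render_paragraph(item):
    return f"<p>{item['text']}</p>"

def render_video(item):
    # The rendering site decides player details; the data only names the video.
    return f"<iframe src='https://www.youtube.com/embed/{item['key']}'></iframe>"

PLUGINS = {"paragraph": render_paragraph, "video": render_video}

def render_page(page):
    parts = [f"<h1>{page['title']}</h1>"]
    for item in page["story"]:
        renderer = PLUGINS.get(item["type"])
        if renderer is None:
            # A missing plugin degrades gracefully instead of breaking the page.
            parts.append(f"<p>(install a plugin to view this {item['type']} item)</p>")
        else:
            parts.append(renderer(item))
    return "\n".join(parts)

page = {"title": "Demo", "story": [
    {"type": "paragraph", "text": "Hello"},
    {"type": "video", "key": "dQw4w9WgXcQ"},
]}
html = render_page(page)
```

Because the stored page is pure data, swapping the `PLUGINS` table swaps the front-end without touching the content, which is how two sites with different web-facing appearances can still trade pages freely.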

Best of all, anything you build on top of federated wiki inherits its page-internal journaling and federation capabilities. So you’re not only putting up a site in record time, you’re putting up a federated, forkable site as well.

The first wiki showed how far one could go with form elements, CGI, and links towards creating a robust community unlike those before it. It’s possible to see federated wiki as a similar toolkit for building new ways of working. I hope this video hints at how that might be done.

Note: I’ve also constructed this page using the above methods here. I think it looks pretty nice.

That Time Berners-Lee Got Knocked Down to a Poster Session

I’ve known about the Berners-Lee poster session for a while, but in case you all don’t, here’s the skinny: as late as December 1991, belief in Tim Berners-Lee’s World Wide Web idea was low enough that a paper he submitted on the subject to the Hypermedia ’91 conference in San Antonio, TX was bumped down to a poster session.

Today, though, things got a bit more awesome. While looking for an Internet Archive video to test Blackboard embedding on (this is my life now, folks) I came across this AMAZING video, which has only 47 views.

In it Mark Frisse, the man who rejected Berners-Lee’s paper on the World Wide Web from the conference, explains why he rejected it, and apologizes to Tim Berners-Lee for the snub. He just couldn’t see that in practice people who hit a broken link would just back up and find another. It just seemed broken to him, a “spaghetti bowl of gotos”.

The background music is mixed a bit loud. But it is worth sitting through every minute.

Where this might lead you, if you are so inclined, is to Amy Collier’s excellent posts on Not-Yetness, which talk about how we get so hung up on eliminating messiness that we miss the generative power of new approaches.

I will also probably link to this page when people ask me “But how will you know you have the best/latest/fullest version of a page in fedwiki?” Because the answer is the same answer that Frisse couldn’t see: ninety-nine percent of the time you won’t care. You really won’t. From the perspective of the system, it’s a mess. From the perspective of the user you just need an article that’s good enough, and a system that gives you seven of those to choose from is better than one that gives you none.

