Cultural Resistance

Fuzzy Notepad posted Twitter’s Missing Manual today, noting that obscure UI interactions in Twitter often drive people away.

Reading through the list they have compiled, however, I don’t think this stuff has much to do with lack of Twitter uptake. If the worst thing users have to deal with is the difference between “@” and “.@” you’re doing pretty well.

Twitter’s real learning curve is cultural, and it’s interesting to consider why Twitter’s cultural rules are so developed. Here are some things you might encounter on Twitter when you first open up your feed, for instance:

  1. Tweetstorms
  2. Subtweeting vs. mentioning
  3. Weird Twitter
  4. Live-tweeting
  5. Tweet-ups
  6. ASCII art
  7. Bots
  8. Hashtag activism
  9. Tweet-stealing
  10. Reputation stealing (e.g. using “RT” vs. retweeting)
  11. Sea-lioning
  12. Hashtag meta-jokes (e.g. #sorrynotsorry)
  13. Screenshotting text to share it

All of these things are culturally complex. When you live-tweet a TV show or debate, for instance, you have to strike a complicated balance, engaging your fan subgroup without overloading your uninterested followers. Subtweets appear bizarre to people who are not familiar with the practice. So does screenshotted text. These things are handled by cultural norms and related user innovations.

It reminds me that Twitter, despite its problems, is truly a *community* whereas Facebook is a piece of software. Twitter has a cultural learning curve and Facebook doesn’t, but that’s mostly because Facebook has little culture to speak of.

And here, it’s Facebook that’s the oddball, not Twitter: from the early PLATO online communities to Usenet to LiveJournal to Friendster to MySpace, these online spaces developed community identities, conventions, and norms that grew increasingly complex and rich over time. Online communities look exactly like Twitter after four or five years of growth. It’s practically a law of physics.

But Facebook seems, more or less, to have avoided that. There’s little to no user innovation in the space, and about as much culture as an Applebee’s. You don’t log into Facebook one week and find everyone is experimenting with animated gif avatars, or that people have found a workaround that allows them to do ASCII art. There’s no deciphering Shruggie (¯\_(ツ)_/¯), there’s no Horse_ebooks, no bots or pseudobots.

And so the answer to the question “Why is Twitter so culturally complex?” is that it’s the wrong question. It’s Facebook that is the weird thing here, a community that doesn’t develop an overall culture over time.

I wonder what’s going on? Why is Facebook so culture-resistant? And what does it say if it’s community culture that is keeping Reddit, Twitter, and Tumblr from getting the valuations they want?


The Tragedy of the Stream

I think on my most popular day on this blog I got about 14,000 hits on a post. Most posts get less than that, but getting 600-800 visitors over the first week is pretty usual, and the visitors are generally pretty knowledgeable people.

Yesterday I got a lot of hits on my post asking for examples of the best blog-based classes in higher education that people could look at, with a caveat that I’d love to get beyond the usual examples we use — I’m looking for variety over innovation in some ways. The result was crickets.

I just need a list that describes each class and the methods used, and links to it, something I can share with faculty. I’d like the list to be up to date. I’d like someone to have checked the links and made sure they are not pointing to spam sites at this point. Maybe someone could also find the best example of a student post from each class and link to that. Maybe it could be ordered by discipline.

Does such a thing exist? I don’t know. Maybe. I sure as hell can’t find it, and I’ve been a part of this movement a decade now.

Do individual pages on these sorts of experiences exist? Absolutely. I’ve read blog posts for the past ten years on this or that cool thing someone was doing. But as far as I can tell, no one has chosen to aggregate these things into a maintained or even semi-maintained list. We love to talk. Curate, share, and maintain? Eh.

This is the Tragedy of the Stream, folks. The conversations of yesterday, which contain so much useful information, are locked into those conversations, frozen in time. To extract the useful information from them becomes an unrewarding and at times impossible endeavor. Few people, if any, stop to refactor or rearrange the resources, gloss them, or introduce them to outsiders. We don’t go back to old pieces to add links to the things we have learned since, or rewrite them for clarity or timelessness.

And so it becomes little more than a record of a conversation, a resource to be mined by historians but not consulted by newbies. You want an answer to your question? Here’s eighteen hours of audio tape. If you play it from the beginning it makes sense. Have fun!

There are some things which survive better than others: Quora answers, Stack Exchange replies and the like.

But in our community at least I see a whole body of knowledge slowly rotting and sinking back into the sea. Perhaps it might be time to focus less on convincing and more on documenting our knowledge?

What Are the Close-to-Best Examples of Blog and Wiki-Based Classes in Each Discipline?

We’re making a push here on both blog and wiki use in classes, but we’re finding that while there are many posts on this or that blog/wiki project in higher education:

  • There are not many compiled lists that show a variety of examples across many disciplines and institutions.
  • Many of the examples we continue to use are quite old, giving the appearance of a wave that broke around 2011.

I’d like to compile a list for my faculty of the best examples from each discipline of blogs and wikis utilized as a core part of traditional for-credit classes. I know the big ones that we all talk about all the time; I want the ones a level below that.

Can you help me out in the comments? Just drop a link to your favorite project that needs more recognition, and write a line or two about what you like about it.

All projects welcome, but bonus points for: projects in the hard sciences, projects involving data gathering, projects that engage with a local community, projects involving first-year students, cross-course projects, anything that wasn’t at UMW (we love UMW, but too many examples from UMW raise concerns that the model is not generally transferable).

Can Blogs and Wiki Be Merged?

I’ve been thinking lately about the architecture underlying blogs and wiki, how different these architectural choices are (RSS, revision histories, title-as-slug, etc), and whether it’s worthwhile to imagine a world where data flows seamlessly across them. It might not be. They are very different things, with different needs.

Wiki and blogs have two different cultures, two different idioms, two different sets of values.

Blogs are, in many ways, the child of BBS culture and mailing lists. They are a unique innovation on that model, allowing each person to control their part of the conversation on their own machine and software while still being tied to a larger conversation through linking, backlinks, tags, and RSS feeds.

Blogs value a separation of voices, the development of personalities, new posts over revision of old posts. They are read serially, and the larger meaning is created out of a narrative that expands and elaborates themes over time, becoming much more than the sum of its parts to the daily reader.

Through reading a good blogger on a regular basis, one is able to watch someone of talent think through issues, to the point that one is able to reconstruct the mental model of a blogger as a mini Turing machine in one’s head. I have been reading people like Jim Groom, Josh Marshall, Digby, Atrios, and Stephen Downes for years, watching how they process new information, events, and results. And when thinking through an issue, I can, at this point, conjure up a rough facsimile of how they would go about analyzing a thing.

Wiki is perhaps the only web idiom that is not a child of BBS culture. It derives historically from pre-web models of hypertext, with an emphasis on the pre. The immediate ancestor of wiki was a HyperCard stack maintained by Ward Cunningham that attempted to capture community knowledge among programmers. Its philosophical godfather was the dead-tree hypertext A Pattern Language, written by Christopher Alexander in the 1970s.


Alexander’s A Pattern Language used bolded, numbered in-text links to connect “patterns” that were related ideas. The sections were formed around these patterns and written to be approached from many different directions.

What wiki brought to these models, which were personal to start with, was collaboration. Wiki values are often polar opposites of blogging values. Personal voice is meant to be minimized. Voices are meant to be merged. Rather than serial presentation, wiki values treating pages as nodes that stand outside of any particular narrative, and attempt to be timeless rather than timebound reactions.

Wiki iterates not through the creation of new posts, but through the refactoring of old posts. It shows not a mind in motion, but the clearest and fairest description of what that mind has (or more usually, what those minds have) arrived at. It values reuse over reply, and links are not pointers to related conversations but to related ideas.

These are, in many ways, as different as two technologies can be.

Yet, the recent work of Ward Cunningham to create federated wiki communities moves wiki a bit more towards blogging. Voices are still minimized in his new conception, but control is not shared or even negotiated. I write something, you make a copy, and from that point on I control my copy and you control yours. In Federated Wiki (the current coding project) what you fork can be anything: data, code, calculations, and yes, text too.

As I’ve been working on Wikity (my own federated wiki inspired project) I’ve been struggling with this question: to what extent is there value in breaking down the wall between blogging and wiki, and to what extent are these two technologies best left to do what they do best?

The questions aren’t purely theoretical. Ward has designed a data model perfectly suited for wiki use, which represents the nature and necessities of multiple, iterative authorship. Should Wikity adopt that as the model it consumes from other sites?

Or should Wikity tap into the existing community of bloggers and consume a souped up version of RSS, even though blog posts make lousy wiki pages?

Should Wikity follow the wiki tradition of supplying editable source to collaborators? Or the web syndication model of supplying encoded content? (Here, actually, I come down rather firmly on the source side of the equation — encoded content is a model suited for readers, not co-authors.)
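To make the contrast concrete, here is a rough sketch of the two shapes of payload Wikity might consume. The federated-wiki-style page is only loosely modeled on the Smallest Federated Wiki JSON format (a story of editable items plus a journal of edits), and the RSS item is the familiar title/link/description shape; neither is taken from Wikity’s actual code, and the field values are made up.

```python
# Illustrative only: approximate shapes of the two payloads discussed above.
# Field names for the federated-wiki page are assumptions, not Wikity's code.

fedwiki_style_page = {
    "title": "Connected Copies",
    "story": [
        # ordered, individually addressable items: this is editable *source*
        {"type": "paragraph", "id": "a1b2c3", "text": "Copies that know about other copies."},
        {"type": "markdown", "id": "d4e5f6", "text": "* provenance\n* federation"},
    ],
    "journal": [
        # an edit history, so a forked copy knows where it came from
        {"type": "create", "date": 1456000000000},
        {"type": "edit", "id": "a1b2c3", "date": 1456000100000},
    ],
}

rss_style_item = {
    "title": "Connected Copies",
    "link": "https://example.com/2016/02/connected-copies/",
    "pubDate": "Thu, 25 Feb 2016 00:00:00 +0000",
    # rendered, encoded output: fine for readers, lossy for co-authors
    "description": "<p>Copies that know about other copies.</p>",
}
```

The per-item ids and the journal are what make forking and refactoring tractable; the RSS description flattens all of that into rendered HTML, which is roughly the source-versus-encoded-content distinction above.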

These are just two of many things that come up, and I don’t really have a great answer to these questions. In most cases I’d say it makes sense for these to remain two conceptually distinct projects, except for one big looming issue: with the open web shrinking, it might be helpful for these communities to make common cause and solve some of the problems that have plagued both blogging and wiki in their attempts to compete with proprietary platforms.

Again, no firm answers here. Just wanted to share the interesting view I’ve found at the intersection of these two forms.

Why Learning Can’t Be “Like a Video Game”

One of the projects I’m working on with French colonial history scholar Susan Peabody this semester at WSU is building a virtual, wiki-based museum with her students in a history course. We’re using a Wikity-based WordPress template to do it, and while we’re not utilizing the forking elements in it, we’re actually finding the Markdown + card-based layout combo to be super easy for the students to master. It’s honestly been encouraging to see that while I still don’t have the forking and subscription in Wikity quite where I want it, it actually makes a kick-ass WordPress wiki. I should probably write about that at some point.

But what I wanted to talk about today was an excellent article Sue shared with me. It’s one she had her students read, and it helped crystallize some of my ambivalence around virtual reality.


The 2004 article, Forum: history and the Web: From the illustrated newspaper to cyberspace, by Joshua Brown, describes a series of virtual history projects he co-designed in the 1990s. An early project he worked on for Voyager, Who Built America?, was an enhanced textbook HyperCard stack that included more than twenty times as much supplementary written material as the main narrative text, and, more importantly from the author’s point of view, a host of audio-visual artifacts:

That said, the contrast between the informational capacity of a book compared to a CD-ROM was startling. The original four chapters of the book comprised 226 pages of text, 85 half-tone illustrations and 41 brief documents. The Who Built America? CD-ROM, in addition to chapter pages, contained 5000 pages of documents; 700 images; 75 charts, graphs, maps and games; four-and-a-half hours of voices, sounds and music from the period; and 45 minutes of film.

This led to another project, the George Mason University site History Matters, which did away with the narrative thread altogether in favor of what the author calls a “database” approach.

Having obtained the “immersiveness” of the encyclopedic approach, the author and his co-developers reached for another type of immersion: 3-D environments.

Working in Softimage, a wire-frame 3-D modelling program, and the flexible animation and navigation features offered by Flash, a prototype website called The Lost Museum: Exploring Antebellum Life and Culture (http://www.lostmuseum.cuny.edu) finally went public in 2000. Entering the site (www.lostmuseum.cuny.edu/barnum.html), users encounter the Museum’s main room after the building has closed for the day, where they can engage with its various exhibits and attractions, experiencing its mixture of entertainment and education that would influence popular institutions up to the present (Figure 10).


At the same time, they also can look for evidence pointing to possible causes of the fire that destroyed the building in July 1865. Moving through the American Museum’s different environments and attractions, users search for one of a number of possible ‘arsonists’ who might have set the fire. These suspects represent some of the political organizations and social groups that contended for power, representation and rights in antebellum and Civil War America—for example, abolitionists (anti-slavery activists) and Copperheads (northern supporters of the Civil War Confederacy). In the process of searching for clues that point to these and other suspects, users also learn information about how P. T. Barnum and his museum expressed and exploited the compromises and conflicts of the mid-nineteenth century.

Ultimately, however, Brown felt the work failed in its pedagogical aims. Why? Because in the desire to make the game “immersive” and “seamless” they had also welded the pieces together, not allowing students to make new, unexpected uses of them:

As University of Chicago art historian Barbara Maria Stafford has pointed out, there are actually two types of digital information, or approaches to organizing collections of information: ‘one type lends itself to integration, the other to linkage.’ The distinction, Stafford argues, is crucial. The difference between systematically blending or collapsing individual characteristics (analogous to a seamless, immersive interactive virtual environment like The Lost Museum exploration) and maintaining separate entities that may be connected or rearranged (such as the fragmented multimedia scrapbook) has far-reaching repercussions. In the former case, the immersive, its ‘operations have become amalgamated and thus covert’, preventing users ‘from perceiving how combinations have been artificially contrived’, while the latter is ‘an assemblage whose man-made gatherings remain overt, and so available for public scrutiny’. In Stafford’s estimation, the immersive fosters passive spectatorship while the assemblage promotes active looking (Stafford 1997, p. 73).

And ultimately this is a problem that is acutely felt in 3-D environments: the pieces of the environment and the way they react to the participant gain their immersive quality only by being parts of a coherent whole that can’t be broken down into its constituent parts.

I don’t mean to imply that these issues couldn’t be overcome, to some extent, by brilliant design or user persistence. The many uses that Minecraft has been put to amaze me, for instance.

But Minecraft is not, I think, what the people promoting VR to teachers have in mind. The pitch most often involves exactly the type of locked-down virtual environments that invite inspection but resist deconstruction and rearrangement.

Something to think about as we hand out all those Google Cardboard glasses for virtual field trips, no?


Connected Copies, Part Two

This is a series of posts I’ve finally decided to write on the subject of what I call “connected copies”, an old pattern in software that is solving a lot of current problems. Part one of the series is here.

It’s really a bit of a brain dump. But it’s my first attempt to explain these concepts starting at a point most people can grasp (e.g. people who don’t have git or Named Data Networking or blockchain or federated wiki as a starting point). Hopefully publishing these somewhat disorganized presentations will help me eventually put together something more organized and suitable for non-technical audiences. Or maybe it will just convince me that this is too vast a subject to explain.

Anyways….

A Gym Full of People

Let’s think about connection a bit. I want to talk about how connection on the web happens. Because I don’t want to get too into the weeds I’m not going to talk about packets, or too much about routing either. I’m sure many people will think my story here is inadequate for understanding connection on the web, but I think it will work for our purposes.

Imagine you are in a gym full of people, and you’re only allowed to talk to the people next to you. The web sort of works like this.

First, you have to know the domain name or IP of the server that has the thing you want. As we noted in the last post, this is really crucial. The web, as designed, starts with “where,” not “what.”

For a simple example, let’s say what you really want in that gym is a physical book. Let’s say we’re in that gym, and everyone has a stack of books with them. You want a copy of the Connie Willis classic To Say Nothing of the Dog. The first thing you have to know is where that book is.

There are huge implications to this fundamental fact of net architecture, but we’ll skip over them for the time being.

So the first thing you do is some research to find out who has this book. Your guess is that Jeff Bezos probably has it because he seems to have copies of *all* the books, as part of this little side business he runs called Amazon.

So you look to see whether any of the people around you are Jeff. But they’re not, so you say to a neighbor hey, here’s an envelope with a message in it — can you get this to Jeff? And in the envelope you put a message that says “Send me To Say Nothing of the Dog” and on the outside you write your name and address as the return address and Jeff’s name in the “To:” field.

In any case, the person you give the envelope to looks around and sees if any of the people standing next to them are Jeff, and if they’re not, they figure out who they can pass it to that has the best chance of getting it to Jeff.

After five or six people pass it, it ends up at Jeff, and Jeff opens it and reads your “Send TSNOTD” message. So he makes a copy of that book and puts the copy in an envelope (or, for sticklers, puts pieces of the book in a series of separate envelopes) and then passes it back across the gym to you using the same method: “Are you Mike Caulfield’s computer? No? Hmmm. Can you get a message to it then?”

At the risk of boring you, I want to reiterate some points here.

First, before we ask for something on the web — Jennifer Jones, The Gutenberg Bible, the most recent census data — we have to know *where* it is. When what we really wanted was “access to Stanford’s SRI mainframe” or “a videocall connection to Utah” this was an easy problem. What you wanted and the server you wanted it from were inextricably tied together.

But as the Internet (and eventually the web) grew, this became a major problem. Most things we wanted were things where we didn’t know the location.

So the first major giants of the web sprouted up: search engines and directories. They could translate your real question (what you wanted) into the form the web could understand (where to get it from). Essentially, they’d figure out the address to put on the outside of the envelope, so we can mail our request.
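As a concrete, if simplified, illustration of “where, not what,” here is what a location-first request looks like in a few lines of Python. The host and path below are placeholders, not a real endpoint:

```python
# A minimal sketch of location-first retrieval: resolve a *place*, then ask
# that place for the thing. Host and path are placeholders for illustration.
import socket
import urllib.request

host = "example.com"
path = "/to-say-nothing-of-the-dog.txt"

address = socket.gethostbyname(host)                       # step 1: name to IP ("where")
print(f"{host} lives at {address}")

with urllib.request.urlopen(f"http://{host}{path}") as r:  # step 2: ask that place ("what")
    book = r.read()

# If this particular host doesn't have the file, the request simply fails;
# nothing in the protocol helps you locate another copy of the same content.
```

Notice that the content itself never enters into the addressing: everything hangs on knowing the right host first.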

You’ll also notice this scheme privileges big sites, because the hardest thing is knowing where things are, and big sites containing everything solve your “from what to where” problem.

What Would a Content-Centric System Look Like?

There are alternate ways of thinking about networking that are based around content instead of location. These ideas are not just theoretical: they are the basis of things like torrenting, Named Data Networking, and Content-Centric Networking. The principles behind the idea were outlined, as many brilliant ideas were, by Ted Nelson many years ago.

To get a content-centric implementation of networking, we ask: “What if instead of asking people around us if they could get a message to Jeff, we instead asked them ‘Do you or anyone you know of have a copy of To Say Nothing of the Dog‘?”

And then each person turned to other people and asked that until either Jeff or someone else said “Yes, I have it right here, let me send a copy to you!”

On the positive side, you’d probably get the book from someone closer to you. Books would flow from people to people instead of always from Amazon to people (and payment systems could be worked out — this doesn’t assume that these would be free).

On the negative side, this sort of protocol would be pretty time-intensive. For every request we’d have to ask everyone through this telephone game, they’d have to check for it, and so on. Even in the gym it’d be a disaster, never mind on the scale of the Internet.
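A toy version of that telephone game makes the cost visible. The peers and topology below are made up; the point is only that a naive content query fans out hop by hop until somebody happens to have the thing:

```python
# Toy flood search: ask your neighbors, who ask their neighbors, until someone
# has the content. Names and topology are invented for illustration.
from collections import deque

neighbors = {
    "You": ["Alice", "Bob"],
    "Alice": ["Carol"],
    "Bob": ["Dev"],
    "Carol": ["Jeff"],
    "Dev": [],
}
has_copy = {"Jeff"}  # only Jeff has the book

def flood(start, max_hops=10):
    queue, seen, asked = deque([(start, 0)]), {start}, 0
    while queue:
        person, hops = queue.popleft()
        asked += 1
        if person in has_copy:
            print(f"found it after bothering {asked} people, {hops} hops out")
            return person
        if hops < max_hops:
            for nxt in neighbors.get(person, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
    return None

flood("You")  # every single request touches a large share of the gym
```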

But this is where connection comes in. Imagine that you had the Connie Willis book The Domesday Book, part of the same series, in your own library. And let’s imagine that you open the cover of that book and inside the cover is the entire copy history:

“Antje H. copied this from Marcus Y. who copied it from Kin L. who copied it from Martha B. who copied it from Mike C. who copied it from Jeff B.”

Well, if these people have one book in the series, they might have another, right? So you start with a location based request. But you still ask the content question, because you don’t care where it comes from:

“Do you, or anyone you know, have a copy of To Say Nothing of the Dog?”

You notice that Martha B. is standing right next to you, so you ask her. It turns out that she does not have a copy of this anymore. But she used to have a copy, and she made a copy for Pedro P., so she asks him if he still has a copy. He does, so he makes a copy, and passes it to Martha who passes it to you.

You just got a copy of something without knowing where it was. Congratulations!

More importantly, you just saw the power of connected copies. Connected copies are copies that know things about other copies. The connected piece is the “list of previous owners” inside the cover of that book you got, the knowledge that both TSNOTD and Domesday Book are by the same author, and the system that allows you to act on that knowledge.
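Here is a toy sketch of that idea in code. The data shapes and the lookup strategy are invented for illustration (they are not from any real CCN, NDN, or federated wiki implementation): each copy carries its copy history, and that history doubles as a list of peers worth asking about related works.

```python
# Toy model of connected copies: a copy knows where it was copied from, and a
# peer can use that provenance to find other works without knowing their location.
# All names and the lookup strategy are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Copy:
    title: str
    text: str
    copied_from: list = field(default_factory=list)  # provenance chain, newest first

@dataclass
class Peer:
    name: str
    shelf: dict = field(default_factory=dict)  # title -> Copy

    def request(self, title, network):
        # Ask by content, not location: do I have it, or does anyone in the
        # provenance chain of a *related* copy have it?
        if title in self.shelf:
            return self.shelf[title]
        for held in self.shelf.values():
            for previous_owner in held.copied_from:
                peer = network.get(previous_owner)
                if peer and title in peer.shelf:
                    found = peer.shelf[title]
                    # taking a copy extends the provenance chain
                    return Copy(found.title, found.text, [previous_owner] + found.copied_from)
        return None

network = {
    "Pedro P.": Peer("Pedro P.", {"To Say Nothing of the Dog":
        Copy("To Say Nothing of the Dog", "...", ["Martha B.", "Jeff B."])}),
    "You": Peer("You", {"Domesday Book":
        Copy("Domesday Book", "...", ["Pedro P.", "Jeff B."])}),
}
print(network["You"].request("To Say Nothing of the Dog", network))
```

The blog’s version of the story routes through Martha; the sketch compresses that chain, but the mechanism is the same: the “connected” part is the provenance written inside each copy.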

The Big Lesson

I think Content-Centric Networking (CCN) and its variants are very cool, and I hope they get traction. The Named Data Networking project, for example, was named as one of the NSF’s Future of the Internet projects, and feels to me like the early net, with a bunch of research universities running nodes on it. The CCNx work at PARC is fascinating. Maelstrom, a BitTorrent-based browser, is an interesting experiment as well. (h/t Bill Seitz for pointing me there)

But CCNx or NDN as an architecture for the entire web has an uphill climb, because it would destroy almost every current Silicon Valley business out there. Who needs Google when you can just broadcast what you want and the network finds it for you? Who needs Dropbox, when every node on the web can act like a BitTorrent node? Who needs server space or a domain, when locations no longer matter? Who needs Amazon’s distribution network when you can just ask for a film from your neighbors and pay for a decryption key?

So while these schemes will happen (see Part One for why CCN is inevitable), I don’t think they are coming in the next few years. But, importantly, you don’t have to rejigger the whole Internet to get better with content. You just have to think about the ways in which our location-centrism is contributing to the problems we are hitting, from the rise of Facebook, to the lack of findability of OER, to the Wikipedia Edit Wars.

In other words, the reason I spend time talking about the networking element above is that our location-centrism is so baked into our thinking about the web that it’s invisible to us. We think it’s very normal to have to know whose servers something is on in order to read it. We assume it’s good for things to be in one place and only one place, because we’ve structured the web in a way which doesn’t make use of multiple locations very well.

And crucially, we tend to think of documents as having definitive locations and versions (like the whiteboards) rather than being a set of things in various revisions with various annotations (like when I talk about a book like “The Great Gatsby” or play like “Hamlet”, which covers a wide range of slightly different objects). It’s that piece I want to talk about next.
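One way to make that less abstract: if documents are named by what they are (say, a hash of their content) rather than where they live, a “work” naturally becomes a family of revisions that any number of hosts can serve. This is a purely illustrative sketch, not any particular system’s scheme:

```python
# Illustrative only: content-addressed naming, where the identifier is derived
# from the text itself and a work is the set of its revisions.
import hashlib

def content_id(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]

store = {}      # content_id -> text; could live on any number of hosts
revisions = {}  # work title -> list of content_ids, oldest first

def publish(work, text):
    cid = content_id(text)
    store[cid] = text
    revisions.setdefault(work, []).append(cid)
    return cid

publish("Hamlet", "To be, or not to be...")
publish("Hamlet", "To be, or not to be, that is the question...")

# Any host holding a blob with the right hash can answer for it, and "Hamlet"
# is the whole set of slightly different objects, not one canonical location.
print(revisions["Hamlet"])
```

That, roughly, is the shift from thinking in definitive locations and versions to thinking in sets of connected copies.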

Next Post: Ward Cunningham’s Idea of a ‘Chorus of Voices’ and Other Stuff I Mostly Got From Ward


Amazon, OER, and SoundCloud

So Amazon is getting into the Open Educational Resources market. What do we think about that? If you read these pages regularly, you can probably predict what I’ll say. It’s the wrong model.

For over a decade and a half we’ve focused on getting OER into central sites that everyone can find. Or developing “registries” to index all of them. The idea is that if “everyone just knows the one place to go,” OER will be findable.

This has been a disaster.

It’s been a disaster for two reasons. The first is that it assumes that learning objects are immutable single things, and that the evolution of the object once it leaves the repository is not interesting to us. And so Amazon thinks that what OER needs is a marketplace (albeit with the money removed). But OER are *living* documents, and what they need is an environment less like Amazon.com and more like GitHub. (And that’s what we’re building at Wikity, a personal GitHub for OER).

So that’s the first lesson: products need a marketplace but living things need an ecosystem. Amazon gives us yet another market.

The second mistake is that it centralizes resources, and therefore makes the existing ecosystem more fragile.

I talked about this yesterday in my mammoth post on Connected Copies. While writing that post I found the most amazing example demonstrating this, which I’ll repeat here.

Here’s a bookseller list from a particular bookshop from 1813:

[Image: the 1813 bookseller’s list]

How many of these works do you think are around today?

Answer: all of them. They all survived, hundreds of years on. In fact, you can read almost all of them online today.


Now consider this. SoundCloud, a platform for music composers that hosts tens of millions of original works, is in trouble. If it goes down, how many of those works will survive? If history is a guide, very few. And the same will be true of Amazon’s new effort. People will put much effort into it, upload things and maintain them there, and then one day Amazon will pull the plug.

Now you might be thinking that what I’m proposing then is that we put the OER we create on our own servers. Power to the People! But that sucks as a strategy too, because we’ve tried that as well, and hugely important works disappear because someone misses a server payment, gets hacked, or just gets sick of paying ten bucks a month so that other people can use their stuff. We trade large cataclysmic events for a thousand tiny disasters, but the result is the same.

So I’m actually proposing something much more radical, that OER should be a system of connected copies. And because I finally got tired of people asking if they needed to drop acid to understand what I’m talking about, I’ve started to explain the problem and the solutions from the beginning over here. It’s readable and understandable, I promise. And it’s key to getting us out of this infinite “let’s make a repository” loop we seem to be in.

Honestly, it’s the product of spending a number of weekends thinking how best to explain this, and the results have been, well, above average.


I haven’t gotten to how copies evolve yet, but I actually managed to write something that starts at the beginning. I’ll try and continue with the middle, and maybe even proceed to an end, although that part has always eluded me.

See: Connected Copies, Part One