I’ve referred in the past to the phenomenon of “affinity posting”. The idea of affinity posting is you post not to spread information or start a discussion, but to demonstrate your membership in certain affinity groups. You’re a fan of Doctor Who or Twilight. You’re a beer drinker. You’re a teacher who believes the lecture is dead. You’re a pacifist. A libertarian. A skeptic. An ally.
I do a lot of affinity posting. So do you. If I retweet a John Oliver rant about the minimum wage, it’s probably not because there’s something particularly useful in the rant. It’s largely to show hey, I’m in this group too. You like this, I like this, and that’s a communion of sorts.
There’s nothing wrong with affinity posting, and lots of times it serves a good purpose. People have some pretty advanced ideas about what I believe and like and support before they meet me, and that’s nice. And it can be pleasant in a world where one feels like they struggle alone for a cause to see an internet full of others supporting this or that.
Where it gets complicated is where we begin to confuse affinity for other things. Take the folks that retweeted Dale’s Cone here:
Why did 223 people retweet this? It’s a bogus finding, and a short Google search would reveal that. More interestingly, it’s not really useful in any way. It’s hard to imagine anybody who retweeted the cone using it as a concrete tool to design instruction. Nor is it an argument capable of persuading anyone.
The reason people retweet things like this is to say, more or less, “Hey, this is who I am.” And before you get too haughty about this, it might be wise to think of your own postings on Ferguson or the Clinton emails or the Pluto fly-by. Did you choose to retweet the stuff that expanded and challenged your understanding the most? Or did you retweet the things that most closely expressed who you are?
If you’re like me, it’s probably 80% affinity and 20% challenge. And you kind of have to do that on the web, because we read the web weirdly, as if everything a person posted is a T-shirt they are wearing. We say “Retweets do not equal endorsements”, but the very fact we have to say that proves the point. You can stray a bit, but not so far that people lose sight of who you are or the groups you belong to.
I won’t go into it too deeply, but I think affinity posting as an interaction model is lousy and keeps the web in an infantile state.
I’ve been thinking about an alternate way of thinking about re-posting the things of others: the metaphor of a personal library. In the library metaphor, B. F. Skinner’s works go into the library but Dale’s Cone does not, even though Dale’s cone expresses the truth-as-you-see-it better than Skinner does. In a library if we found books by nobody but post-structuralists, we’d think “This is a narrow thinker,” not “Thank God there is nothing I disagree with on these shelves!”
When I look at someone’s library, I don’t ask “Is this book correct?” or “Do you really agree with this person?!?” Instead, I ask “Is this worth reading? Why?”
In a digital world where storage cost is minimal, and pointing a link is free, the standards for inclusion are bound to be lower. Perhaps it’s more like a newspaper clippings archive, or a library’s vertical file.
In any case, this is the model we are starting to look at for fedwiki, to answer the question of “What does it mean when I fork something?” It doesn’t really mean “like” and it doesn’t map on to the current cultural semantics of reblogging and retweeting. To others it should signal that you think the thing forked is useful. If it is useful and known to be erroneous, you might want to add a note to that effect so it doesn’t spread unchallenged, but if it is true and not useful you’re encouraged to leave it where it is and not pull it into your library at all. If you want to share it to show who you are, put it on a t-shirt and take a selfie instead.
If you want another example of how thinking about distributed personal libraries is helpful to conceptualizing the web of the future, see Bret Victor’s Web of Alexandria.
Yesterday I published one of those unholy been-in-the-drafts-forever posts on issues of linking. Here I see if I can make the point a bit more cleanly.
When we started getting people to use federated wiki in December, I thought the default sort of editing would be of the main article. You write a piece on Dylan’s 1966 Royal Albert Hall show, I come by and add more facts to it.
When we initially put people into federated wiki, they didn’t do that. Instead, they commented, adding endless signed thoughts to the bottom of the page. This is bad for reasons I won’t go into here. So I told people to stop commenting, and some people complained, but most complied.
The next behavior that emerged was more interesting. People started extending articles not by editing them, but by linking to older articles or writing new ones. They’d do this at the bottom of the page. For instance:
This seems related to [[Walking the Lines]]
Where Walking the Lines was a page detailing a concept, theory, anecdote, or example of something related to the main page. Over time I started to formalize this pattern in my own writing on wiki. Here’s the bottom of my page on the concept of the Moral Cascade:
[[Normal Accident Theory]] posits that error should be seen as a normal occurrence, and systems should be designed to avoid cascading behavior.
[[Moral Cascade in the Classroom]] details how similar patterns play out in classroom management.
Broken Windows Theory described a moral cascade where small offenses led to large offenses, but it may be overhyped. See [[Broken Windows Theory Broken]] for a rejection of Moral Cascade patterns.
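The `[[Double Bracket]]` pattern above is easy to work with programmatically. Here’s a minimal sketch (not the actual federated wiki parser, and the slug rules are a simplification) of extracting those links from page text and turning titles into URL slugs:

```python
import re

# Matches [[Page Title]] style wiki links and captures the title.
WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

def extract_links(text):
    """Return the page titles referenced by [[...]] links in `text`."""
    return WIKI_LINK.findall(text)

def slugify(title):
    """Simplified slug: lowercase, runs of whitespace become hyphens."""
    return re.sub(r"\s+", "-", title.strip().lower())

notes = "[[Normal Accident Theory]] posits that error is normal."
print(extract_links(notes))               # ['Normal Accident Theory']
print(slugify("Normal Accident Theory"))  # 'normal-accident-theory'
```

This is the whole trick that makes the “link first, write the page later” pattern cheap: a link is just a title in brackets, whether or not the target page exists yet.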
This started to become the most enjoyable part of the process for me, and the most profitable.
As I started to think about this I realized how natural the behavior we were seeing was. Say a person writes a page on — well, Dylan at Royal Albert Hall in 1966. The page is short and focused. Other people come by and read it.
The chances that any given reader knows more about that subject than the person who just wrote it are slim. It happens, but it’s not the median experience. On the other hand, the chances that any given reader knows something related to that subject are very high. So (in our fictional example here) the links accrue:
Stress from the tour would later be cited as a cause of [[Dylan’s Motorcycle Accident]].
For explanations of why people would go to a concert just to heckle, see [[Psychology of Heckling]]
Royal Albert Hall’s acoustics were not particularly suited for rock music. See [[Acoustics of Royal Albert Hall]]
And what starts to occur to me is that this is actually the Vannevar Bush vision, where the median user is neither a full-fledged writer nor a simple reader but a linker. And part of the reason the reader can be a linker is that:
- they have their own copies of the document,
- they can add supplementary documents very easily and link to them, and
- they don’t have to rewrite the main document to add links.
Now you do get these capabilities in some annotation systems, but talking about problems with annotation systems is perhaps for another post.
In any case, this is the core idea of the last post: we can recapture, perhaps, this vision of the reader as the primary link author, but it requires us to think of links in a different way.
There’s a follow-up to this article now, which explains the federated wiki angle to this more clearly.
“Everyone here will of course say they are carrying on his work, by whatever twisted interpretation. I for one carry on his work by keeping the links outside the file, as he did.” – Ted Nelson, eulogizing Doug Engelbart.
When people talk about Vannevar Bush’s 1945 article As We May Think, they are usually talking about the portion that starts around section six, which seems so prescient:
The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English long bow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex. First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.
But the problem is that it is not prescient. Not at all. The web works very little like this.
Let’s look at some of the attributes of the memex.
You have a library of items. You own them, they are in your memex. You don’t link documents you have to documents you don’t, because that would be silly. Bret Victor has talked about this eloquently elsewhere.
Each memex library contains your original materials and the materials of others. There’s no read-only version of the memex, because that would be silly. Earlier in the article Bush takes great pains to show that the “dry photography” necessary for users to add their own writings is possible. And your writings are first-class citizens of the library, browsed and linked by the same interface responsible for showing you the works of Shakespeare, Einstein, or Claude Levi-Strauss.
Links are associative. This is a huge deal. Links are there not only as a quick way to get to source material. They remind you of the questions you need to ask, of the connections that aren’t immediately evident.
Links are made by readers as well as writers. A stunning thing that we forget, but the link here is not part of the author’s intent, but of the reader’s analysis. The majority of links in the memex are made by readers, not writers.
Links are outside the text. A corollary perhaps of the above, but since links are a personal take by readers on the relationships of two items, the links cannot be encoded in the document, because that enforces a single interpretation of the document. Links inside the document say that there can only be one set of associations for the document, which would be silly.
There are both linear trails and side trails. This may be weird to our modern sensibilities, but clearly Bush has scenarios in mind where you’d follow a more or less linear reading path, and scenarios where you’d have a lot of side paths.
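The memex attributes above, especially “links are outside the text” and “links are made by readers”, imply a very different data model than the web’s. A hypothetical sketch: instead of anchors embedded in documents, each reader keeps their own association list keyed by document, so one document can carry many sets of links. (The class and method names here are mine, purely for illustration.)

```python
from collections import defaultdict

class LinkLayer:
    """Reader-owned associations kept outside the documents themselves."""

    def __init__(self):
        # reader -> doc_id -> list of (target_doc_id, note)
        self.links = defaultdict(lambda: defaultdict(list))

    def associate(self, reader, doc, target, note=""):
        """A reader links two documents, optionally with a comment."""
        self.links[reader][doc].append((target, note))

    def trail(self, reader, doc):
        """The associations *this* reader has made for a document."""
        return self.links[reader][doc]

layer = LinkLayer()
layer.associate("alice", "turkish-bow", "elasticity", "materials matter")
layer.associate("bob", "turkish-bow", "crusades-logistics")
# Same document, two different sets of associations:
print(layer.trail("alice", "turkish-bow"))
print(layer.trail("bob", "turkish-bow"))
```

Note what embedded `<a href>` links make impossible: in this model Alice’s reading of the bow-and-arrow document and Bob’s can coexist, and each trail can be copied to a friend’s memex independently of the document.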
Going further into the document:
And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outraged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.
Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities…
The historian, with a vast chronological account of a people, parallels it with a skip trail which stops only on the salient items, and can follow at any time contemporary trails which lead him all over civilization at a particular epoch. There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world’s record, but for his disciples the entire scaffolding by which they were erected.
So publications do sometimes come with pre-made trails, but these are just one set of trails amongst those of you and your friends. And in this lovely utopian flourish, there develops a class of people (“trail blazers”) who go through the records and add new links, new trails, new annotations. I don’t just get to read Alan Kay’s work — I get to see what occurs to Kay as he reads the work of others. Or maybe the best trail blazers are not the Alan Kays of the world at all. Maybe there’s a class of people who are great readers but lousy writers, the same way there are great DJs who are lousy musicians. We could benefit from their brilliance and textual insights.
What was so exciting about these sections of As We May Think when I first read them was this idea of a new way of transferring knowledge — not by exposition or commentary, but by *linking*. By extending. Trail-blazing.
But this memex thing isn’t how the Internet works. There is no class of trail blazers. You don’t share a set of linked documents with a friend. You don’t own the documents you read. Links are boring references to supporting material, not prompts for thought or a model of expert thinking.
Why? I increasingly think that Ted Nelson gets it right. It’s partly about where the links live. To people who have only known hypertext on the web, this may be a difficult thing to wrap one’s head around, but let’s do a bit of history.
Early hypertext did not have links as we know them now, the text-embedded “hot-linked” words that cause your mind to pause and ask, “Do I need to click that to understand this?” In fact, links as imagined by the heirs of Bush — Nelson, Van Dam, etc — formed a layer of annotation on documents that were by and large a separate entity.
The “hot spot” link we know today first appeared in the HyperTIES system in the 1980s, almost 15 years after the early hypertext systems of Andy van Dam and Douglas Engelbart. To demonstrate the difference that made in pre-web systems, here’s a mockup of the KMS interface, one of the more advanced hypertext platforms of the 1980s:
You’ll notice that the tree item links and the special item links are distinct portions of the document.
HyperTIES simplified that design and flattened it, mixing links with text.
The brilliance of this HyperTIES design is immediately evident. By mixing links with text you can have your cake and link to it too. The text reads like regular text, prints like regular text, and can be written by copy-editors as more or less regular text. You can take old manuals, and link them up, more or less as is.
The brilliance was not lost on Berners-Lee as he designed the web. In his 1989 proposal he specifically mentions the power of highlighted phrases:
“…several programs have been made exploring these ideas, both commercially and academically. Most of them use “hot spots” in documents, like icons, or highlighted phrases, as sensitive areas. Touching a hot spot with a mouse brings up the relevant information, or expands the text on the screen to include it. Imagine, then, the references in this document, all being associated with the network address of the thing to which they referred, so that while reading this document you could skip to them with a click of the mouse.”
That phrasing is really telling: “Imagine, then, the references in this document,” he says, imagine them being hyperlinked so you could go directly to them. Imagine hyperlinking existing documents to references already in this document…
It’s genius. But it’s also a very author-centric version of linking, and one which is not going to reveal to you anything the author didn’t already know.
More importantly it does something unintentionally evil — for any given document there can be only one valid set of relationships, inscribed by the author. So you can link your history of the Polaroid ID-2 camera up to suit the engineering people, or to suit the history-of-corporate-boycotts people, but you can’t set it up so the links serve both without overlinking the crap out of it.
Federated wiki deals with this issue by keeping links within the document but letting every person have as many copies of that document as they like, with whatever links they want on each. It’s a simple solution but in practice it works quite well.
What I’ve been interested in however (and something that MC Morgan has been looking at as well) is the way in which writing in federated wiki pushes people to a new way of thinking about links and content.
I’m going to describe my own evolving behavior here, but I’ve seen similar progressions with most people who have used federated wiki.
In the newer style, content is kept fairly short, and fairly link-less. But at the bottom of the articles we annotate by linking to other content with short explanations of each link. Here’s the bottom of a page on how Kandinsky came up with the idea of abstract art after seeing an unrecognizable painting on its side:
Here’s another bottom of a page on the concept of “Gradually, then Suddenly” — the idea that things tend to decline slowly over a long period of time and then one day, just as people think the decline is livable, the bottom falls out all at once:
What is interesting about this method is it plays into something else we saw in the federated wiki happenings. We expected people to edit each other’s documents a lot, and they did some of that. But what people liked to do most was add links and notes at the bottom to related pages, or, in many cases, create a new page specifically to be linked from the old. The pattern was something along the lines of “Oh, this article reminds me of something I could bring to the table, let me add a link and write that page.”
It reminded me of storytelling sessions where you tell a story in response to my story, and somehow those two stories juxtaposed tell a bigger, fuzzier truth. But it gets better — because of the federation mechanisms, when you add links you add them on your own copy of a page. People seeing your links can choose to accept or reject them. Good and useful connections can propagate along with the page. I mentioned ages ago (was it really only November?) that as federated wiki pages move through a system they are improved, and that’s true. But the more common scenario is that as they move through a system they are connected.
And this makes a certain sense after all. If I write a page on early community antennas as the origin of cable the chances that you’re a cable history expert who can improve it are rather low. But the chance that you know something related, perhaps in your own area of expertise, are rather high. And the connections I’ve found through others have often been amazing. (One of those connections, Hospitable Hypertext, has become a core insight for me. And if you go to that page you’ll see an early attempt at conversation by linking at the bottom, before we found our stride).
I’m at kind of a loss at how to end this, but it’s been in my queue long enough. So I’m putting it out. Apologies if it’s a bit muddled, these are very much thoughts in progress…
- It’s read-only. It solves the easier technical problem.
- I’m not really a programmer. I was a programmer almost 15 years ago. But even then it was MUMPS, Python, XSLT, ColdFusion, other things.
- Asynchrony confuses the heck out of me. If you look at it and believe I’m misunderstanding how to code asynchronous JSON let’s have a hangout. I could learn.
- I had a version of this I wrote about a month or two back that I demo’d to some people, but there was an important difference — that version read the federated wiki format but couldn’t resolve links in the federation. This one resolves links by looking through the fork history of the page and querying those sites about whether they might have a copy of the linked page.
The weirdest thing about this code is that it’s so simple in a way. Once you carry the fork history in the page, the page can travel around wherever it wants, and links can be resolved without hard-coding brittle and rigid URLs in the wiki markup. Once you use JSON as the basis of pages instead of HTML, multiple sites can work as one giant site. Taken together these two simple ideas (along with the legal technology of Creative Commons) create a radically different vision of the web. I guess what I’m saying (and what I’m hoping you see in the code) is it’s a lot simpler than you might think to get to the vision of a World Wide Wiki. If a hack like me can get this done in a couple weekends, what could you do? Video on how the code works forthcoming. But download it now and start playing with it.
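The fork-history resolution described above can be sketched roughly like this. It’s a simplification, and mine rather than the canonical implementation: real federated wiki pages are JSON with a “journal” of actions, and fork actions record the site a copy came from; the `/{slug}.json` endpoint shape matches fedwiki, but the real client does this asynchronously in the browser, and the error handling here is minimal.

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def sites_from_journal(page):
    """Collect candidate sites from the page's fork history, newest first."""
    sites = []
    for action in reversed(page.get("journal", [])):
        site = action.get("site")
        if site and site not in sites:
            sites.append(site)
    return sites

def resolve(page, slug):
    """Ask each site in the fork history whether it has a copy of `slug`."""
    for site in sites_from_journal(page):
        try:
            url = "http://%s/%s.json" % (site, slug)
            with urlopen(url, timeout=5) as response:
                return site, json.load(response)
        except (URLError, ValueError):
            continue  # site is down or has no such page; try the next one
    return None, None
```

The point of the design is visible right in `sites_from_journal`: because the page carries its own fork history with it, a link can be resolved from wherever the page has traveled, with no brittle absolute URLs baked into the wiki markup.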
Update #1: Coding the item handler loop
Update #2: Resolving links from fork history
It’s become trivial to find these examples, I suppose, but here’s some snapshots from today, around 8 a.m. Pacific Time.
Facebook (snapshot via @elipariser, I use Facebook maybe once a month myself).
I’m curious why this happens (and maybe I should read Eli’s book?). In this case it’s not a Friendly Web issue — there are plenty of people to “like” the SCOTUS ruling. And while the population of Twitter is surely more socially involved (for good and for ill) it’s hard to see this repeated pattern as merely a demographic difference.
Yet one of these looks like a passable future, and the other looks like Neil Postman’s worst nightmare.
We’ve talked a lot about the fallacy of technodeterminism in the past here, and I’m not going to defend the reductionist version of that. But this looks like two very different futures to me, and it’s worth thinking about how the technology we promote in our classrooms shapes the future we’re launching our students into.
Hoisted from the journal:
David Graeber has a far too long essay in The Baffler, which is not worth reading in full. In the end, though, it comes to a common but worthwhile point: the structure of research today can’t be open-ended in any real way, due to creeping managerialism, and this kills any possibility of revolutionary technology:
That pretty much answers the question of why we don’t have teleportation devices or antigravity shoes. Common sense suggests that if you want to maximize scientific creativity, you find some bright people, give them the resources they need to pursue whatever idea comes into their heads, and then leave them alone. Most will turn up nothing, but one or two may well discover something. But if you want to minimize the possibility of unexpected breakthroughs, tell those same people they will receive no resources at all unless they spend the bulk of their time competing against each other to convince you they know in advance what they are going to discover.
This is a major problem in technology, though maybe not for reasons Graeber would identify. The main problem with our current setup, where companies make tools for broad edtech markets, is you lose the synergy between technology and practice. As Engelbart noted, the Tool System is only one half of the equation. True progress uses the Tool System to leverage change in the Human System, and in turn uses changes in the Human System to identify necessary tool modifications.
Engelbart’s solution to this, still underappreciated, was to have a team of developer-users that could alternate quickly between designing tools and constructing the culture and practice around them. That takes time, but as the Mother of All Demos showed, it can have fantastic results, because sometimes the future is only comprehensible when delivered as a package.
Current models of development don’t allow that sort of development to occur, and while that is not the reason that flying cars never came about, it is the reason that computer technology has advanced so slowly since the 1960s.
If you wanted to really revolutionize educational technology, for example, here is what I think you could do. Get together a representative group of developers to pair with a small laboratory school, and work so closely with it that the developers could walk in each day and observe ways in which the latest build had succeeded or failed. Talk with teachers about what works and what doesn’t. Organize technology around a new curriculum, then organize the new curriculum around the new affordances of technology.
Do this with ten, twenty, fifty schools, each school no larger than 500-1000 students. Leave these experiments alone for seven years.
I guarantee you at the end of seven years, one of those schools will have truly revolutionized education, and produced more innovation and “progress” than we’ve seen in the past 50 years. And the reason would be that the practice and the technology and the culture and the curriculum all grew together, reacting to the possibilities each exposed, rather than being developed separately.
We can’t do that sort of thing because we get too concerned with “waste” and “metrics” and “accountability” (as Graeber notes), but more importantly, we can’t do that because market-driven design *has* to design for *existing* culture. Without the “bootstrapping” framework of Engelbart we plod along at a snail’s pace.
For a related view see Phil Hill’s post on the LMS as a barrier to innovation.
Michael Feldstein has a must-read post on interoperability and learning management systems, the sort of writing we used to call nuanced and detailed but are now contractually obligated to call a “long-read”. It’s probably an “explainer” too, for that matter, from one of the best explainers of what-the-real-roadblocks-are around. This post is primarily a nudge to get you to read that post so that we can move to a deeper level of conversation on the problems engendered by the LMS.
I will add one (multi-paragraph) comment to what it presents, however. A testimonial of sorts.
It’s been eye-opening working on federated wiki because you simultaneously get amazed by the possibilities of stuff-done-at-the-right-level-of-abstraction and frustrated with people’s inability to comprehend things done at that level. People say they want a classic LEGO set, but in practice most conversations with actual people push you towards providing the Millennium Falcon set Michael mentions (via Amy Collier’s not-yetness presentation).
This is why in the consumer-driven space we get 22 “track your pet’s eating habits” apps next to 63 “track your water consumption” apps next to 98 “what did you eat today” apps, each with a different database, login, API, interface, and small company that will be out of business in a year anyway.
The cycle reinforces itself. In a world where you have gosh-darn so many apps, each app must be dirt simple to learn since you get a new app every week (and as quickly forget them). When presented with a classic LEGO set app people ask “How could I ever learn this in five minutes?”, unaware that the reason you have to learn things in five minutes is that you are dealing with problems at the wrong level of abstraction.
As Michael notes, the stuff that happens at the operating system level can support many things, but is useful primarily to developers, not users. The Millennium Falcon LEGO sets, on the other hand, are user-focused but over-specific. They lead one into a never-ending infancy, where one can quickly become competent with a tool, but never adept or creative with it.
What’s missing is tools in the middle — general purpose end-user tools. We get these every once in a while. Word processors, Excel, Hypercard, the web browser. Each a tool you enter to find a blinking prompt and a couple powerful, generative ideas waiting for you to tap into them. Each a tool that unleashes new capabilities and creativity.
But until users can see the relationship between their app-adopting behavior and their larger situation I’m not sure I see solutions like this in the near future. I’ll continue to promote and work on such solutions, because that’s where the potential is. But it’s the cultural issue that needs solving, and I’m still working out how we overcome that.