M. C. Morgan (my first friend met through federated wiki) pointed me to this series on NeoVictorian Computing by the guy who wrote Tinderbox, a Mac-only hypertext computing tool. The primary point he makes throughout the series is how our fetish for “transparent computing” is making both users and programmers miserable.
What do I mean by “our fetish for transparent computing”? You see it everywhere — the insistence that every program should look the same, that every bug must be eliminated no matter how small, that a user interface must be immediately understandable to the novice.
This results in a sort of fast-food computing that pleases the senses but ultimately leaves us unsatisfied, unhealthy, and unproductive. We expect our software to demand about as much of us as watching YouTube #FAIL videos, and we end up getting about as much out of it as we should expect in such circumstances. Problems that software could solve (and could have solved ages ago) remain unsolved, because if a solution doesn’t fit in a bugless File > Edit > Tools menuing system (or worse, the intuitive touchiness of Tablet Computing), then no one is going to use it.
Our response to this trend is interesting. How many times have we heard the story about the toddler who “just starts working the iPad naturally?” And what amazing progress this is!
Step back from that and analyze that statement. The device we are using for our jobs can be used by a toddler. And we’re proud of that!
Would you feel the same way about books? “This book on utility computing is so simple that my third-grader gets it. You have to buy it!”
The problem is that the whole point of your computer is that it is NOT an intuitive physical object, but rather an instrument relatively unconstrained by the physical world, and unconstrained by the program author’s intention. It’s supposed to push the boundaries of what’s possible.
A car simple enough for a third-grader to drive is an accomplishment, because we are not counting on the car to do anything radically new.
On the other hand, an interface a four-year-old can use on top of an information technology product is probably a failure, because it means your product encourages a four-year-old’s vision of the world.
I’m not saying you should keep all hard things forever hard. I’m not saying it’s a virtue to force your user to compile their own code and run it on Node.js on an EC2 instance (after installing pm2, of course, because we all know screen crashes!). Eventually, such things need to be made easier. (Though obviously, in beta states this is how things may have to be.)
No one is asking for installation and setup to become less transparent. Opaque setup is just making your user do work you couldn’t automate.
But interface elements that are essential to advancing the way we do things? New gestures that pay off after a week of use? New models of thinking about media elements? We under-use these. And we give ourselves too many excuses for not engaging with them. Sure, Google Wave was corporate Google-ware, but the tech press gave it, what, a week?
And the fact that tablet computing is making our applications even simpler is not an achievement, but rather a threat to our ability to solve new and complex problems.
What’s the alternative? More software. More specialized software. Small pieces loosely joined. Long term relationships with software instead of acquaintances. NeoVictorian Computing. Read it.
UPDATE: In response to Scott’s comment, I wanted to clarify things. This is not a defense of lazy, crappy software, or software that forces you to understand your system’s file structure to make it work. I don’t buy into the whole “editing your config file will set you free” line of thought any more than I bought the “to truly drive a car you have to rebuild an engine” line of thought. That’s laziness posing as edification.
The claim is actually meant to be the opposite. Think of Google Wave, which was actually a pretty slick piece of software that was far more refined and far less machine-like than email. I don’t know if Wave should have succeeded or failed. But the critique of Wave was not that it was hard to get running, or difficult to use, or forced you to know the internals of it to really use it right. The critique of it was it forced people to reconceptualize their mail, and they couldn’t do that after ten minutes of playing with it, and therefore it was doomed.
I understand why the general public felt that way. But why do we support that? Nelson’s OpenXanadu is yet another example — “It requires too much reformulation to make XanaDocs” is probably an OK response, but the response that will kill it is that it is “too confusing”. Never mind that he is trying to create a whole new paradigm.
So yes, any system that makes me generate reports by saving a CSV somewhere and uploading it to another place that produces a PDF to download from a third location — please stop making crap-ware like this. It wastes my time to save yours.
But we also have to be careful we don’t fall down this rabbit hole of making only software that does not take any time to master, or software that is only general in nature. What I want from a developer is some very careful thought about what the experience of using this system will be like 30 hours into it, not a relentless focus on my first 30 seconds.
7 thoughts on “NeoVictorian Computing, and the Cult of the Lowest Common Denominator”
Ha, and I got some slaps for railing against “________” in A Box? Thanks for the link, had not seen Bernstein’s stuff for a decade maybe.
Reminds me some (not sure why) of a book Scott Leslie recommended that lingers in my unread Kindle pile, “Shop Class as Soulcraft” http://www.thenewatlantis.com/publications/shop-class-as-soulcraft
Mike, while on the one hand I fully empathize with this post and have had my own jeremiads against “ease,” I wonder if we inhabit the same computing universe. For every simplified interface you can point to, I can likely find two dozen others in which developers are asking people to behave like machines, not people.
I do think of this as a pendulum or spectrum that we go back and forth on. Some disciplines and domains are so well modelled and understood that we have some incredible purpose-built tools that are amazingly powerful but take time to master, but also simple ones that let people without previous access or ability go far quickly (think of music software). In other areas, meh.
It is one reason, though, that when people ask me about interesting innovations in “learning technology” I generally tell them there haven’t been many, but if you dig into specific disciplines you’ll find a ton of interesting stuff (for instance medical ed, engineering, biochem, etc.).
Anyways, will follow up on that link, but I do think your post speaks to some paradoxes at the heart of “general computing.”
There’s a big difference here — I don’t think we should ask people to think like computers. That’s just lazy.
But the issue is whether we are forever going to be mired in a generalized interface and approach to things that serves the novice at the expense of the professional, and serves the general case at the expense of the particular.
I don’t know whether it is a paradox. But it seems to me that there’s a balance between ease of use for a novice and productivity for a master. And I wonder if we have that balance right. The test should not be whether a feature is easy to use, but rather whether it is worth it to learn. Or something like that.
I knew you’d be here tout de suite when I wrote that!
I’d argue that there is a distinction to be made. Bernstein is largely arguing that tools need to support mastery, and that we undermine that by idiot-proofing them.
So the question with each difficulty is whether it is necessary because it leaves room for mastery (where simplification would dumb things down), or whether it exists just because it’s hard to automate away.
So I’d argue that in the case of SFW, for example, the fact that you have to set up your own server, SSH to it, install Node, npm, pm2, and wiki, etc., is something that needs to be put “in a box” for users.
On the other hand, I’d argue the journal chiclets at the bottom, which are powerful once you learn to use them, should stay, no matter how confusing they initially look.
With WordPress themes, quicker installation with less tweaking is a plus. On the other hand, stuff like RSS calls should remain exposed to the user, because once the user reaches mastery, directly manipulating RSS has substantial benefits.
Or maybe I’m just rationalizing 😉
In reading Bernstein and Mike’s post, I think about software that intellectually challenges users – not because of its design, but because it embodies a new way of thinking about what can be done. Mike mentions OpenXanadu. I’d add Bernstein’s Tinderbox, and before that Storyspace. But also the first wikis, and now SFW, which is complex because it presents a new model of making and writing. Perhaps blogs when they first came out. RSS, maybe. But definitely SFW, where the designers are working with interface elements and gestures to let users do things that they didn’t know they wanted to do until it was designed that way.
Mike, really interesting perspective.
I’m a fairly recent convert to Apple products; for a long time I rejected them because they felt simplistic, and I was proud of customizing my Windows or Ubuntu environment. But in an era of increasing complexity, I’ve learned to appreciate how feature limitations challenge me to decide what’s most important. I’ve been thinking about this in terms of what I see as an additive approach to learning design — how much of what we think is necessary for students to achieve learning outcomes is, in fact, disposable? How much can we take away? If we put everything on the table, and the table is a chopping block, what benefits might we gain in trade-off?
There’s also the point I’ve made before: Simplicity of design may enable/encourage people to begin doing things they’ve never done before — things that are prerequisite to the kind of user/community-driven innovation that you allude to.
At the same time, I get that designs that address only the lowest common denominator will rarely result in paradigm-changing edge cases that cause us to fundamentally rethink our practice or do greater things. When design enables more than simplistic use cases, we can see our practice transform. I think this happens for at least a couple of reasons: 1. The software provides extensive capabilities that allow for myriad new combinations and workflows that benefit from the user’s inventiveness; 2. The software is intentionally biased, providing an almost irresistible path to usage that is different (e.g. more complex) but also more rewarding than what the user may have had in mind.
PS I don’t like the book analogy because even if the content of a book is not simple enough for a third-grader, the interface is 🙂