Practical Art and Stallman, revisited

I started to type this as a response to the gracious comment Ismael left me on the Stallman post, but it quickly got big, so I am putting it here:

Ismael writes:

The rationale behind my quote of his about art (not a literal quote, but faithful to what he said) was that:
– if we’re talking about content/works/software that are needed, as tools, to reach other goals, they should be free
– art did not fall in the previous category
– art, as a subjective expression of one’s ideas/feelings, should not be changed by any means (e.g. Richard M. Stallman would not allow derivative works of his writings, lest they go out of context, or he find he’s being attributed things he did not actually say 😉)

First I want to thank Ismael for taking both the initial time to transcribe this lecture of Stallman’s, and to clarify it. (And I agree with him that from the point of view of most people, the medical patents statement is the most interesting — it’s just not my area.)

So to the point —

I think the “practical=tool” clarification helps, but ultimately does not rescue Stallman’s argument. To me, at least, it embraces a Romantic and Early Modern view of art. And it’s a view I’ve found quite interesting — I have always thought, for example, that Jakobson’s “Poetic Function”, which defines art essentially as a message that turns in on itself — that is, as a message that does not direct itself toward externalities — is one of the genius moments in 20th century intellectual history. I read the lines “the projection of the principle of equivalence from the axis of selection to the axis of combination”, his definition of the Poetic, and I’m still stunned at how many threads of modern thought come together in that beautifully simple but stunningly creative insight.

So I’m more than interested in attempts to define Art and aesthetic thinking as something in a space apart from the practical and directed. And tellingly, the other name for Jakobson’s Poetic function is the “autotelic” — that which is an end in itself — and this jibes nicely with Stallman’s distinction. That’s not coincidental, since Stallman and Jakobson are pulling from the same Art for Art’s Sake influences, but it’s significant.

Yet even in 1961, Jakobson saw this as a *function* — that is, there is no such thing as poetry, in a sense — there’s a poetic element in everything. The things we call poetry and art are traditionally things constructed to highlight the relation of the message to itself. But while the function has clear abstract boundaries, the artifacts that the function illuminates do not. And we now have about 40 years of post-structuralist theory showing us that this is indeed the case.

So back to the point — to the average person, I suppose, art is not a tool, because they enjoy it as readers. They revel in the autotelic. But to the artist, new art is always demonstrating ways to solve their own artistic problems. It’s no different in some ways from physical invention. The camera obscura, a tool, had a profound effect on Renaissance art — but so did Giotto’s realism. To the artist, and even to the astute viewer, art is always a set of tools, characters, plot devices and the like that they can rip out and use.

And of course it does not stop there. Fan fiction is a good example, but we don’t have to go twentieth century on this… here’s Da Vinci’s Last Supper:

And here’s Giampietrino’s from some years later:

What was an output of Da Vinci’s artistic process becomes an input into Giampietrino’s own. It’s not the world’s most original work, but as long as correct attribution is made, why shouldn’t Giampietrino use Da Vinci’s work to develop his own style?

Similarly, many of my wife’s friends use photos taken by someone else as the basis for paintings. To the photographer, the photograph may be meant to be autotelic, but to the painter who uses it, it is another tool in completing their own ends. Likewise, the painting one creates from the photograph could end up as a piece of website layout, or the background of a WordPress theme.

There’s a solution to this, but Stallman can’t use it. The solution is to say that the photographer gets to decide whether his photographs are meant to be tools for graphic designers and artists (in which case they must be free) or art (in which case he preserves his rights).

But that rests the division on the intentionality of the producer, not on any attribute of the object. And if we vest that distinction in intentionality, we might as well all go home — to say that the producer should determine how his own work may be used is to say that the concept of Free Software is dead. I choose to see my code as my personal self-expression; therefore you can’t copy it.

That’s where we were *before* Stallman’s innovative movement, and I have no intention of going back.

I don’t mean to minimize the massive problems in Art here, with everything from compensation to attribution. It’s not an easy subject — it’s far more difficult than coming to terms with whether printer drivers should be free and open. And I’m guessing that’s why Stallman wants to wall it off from his more core concerns.

Goal-based scenario/simulation vs. learning 2.0

The most invigorating job I ever had was working for CognitiveArts programming learning “simulations”. Founded by Roger Schank, CogArts was truly a company with a mission — to revolutionize education through technology rather than simply extend the current system. And we pushed the envelope in every way we could. I worked with a large team of programmers whose goal was to make the ultimate Choose-your-own-adventure multimedia learning experiences.

The core idea was simple: people learn by doing, so learning should simulate doing in a low-risk environment. Schank’s favorite talking point was this: “Which would you rather your airplane pilot have — 90 hours in a flight simulator, or 90 hours of book study?”

Simulations would generally lead a person through a “goal-based scenario”: perhaps as a Governor’s economic advisor they had to make decisions for a hurricane-torn state on things like price controls and rationing, and observe the effects of the action. Perhaps they had to negotiate a house price as part of Harvard Business School Publishing’s Negotiation class.

The key to the system was failure-based learning paired with just-in-time instruction. Students would be encouraged to develop expectations about what would happen as a result of their actions. When they failed, they would be provided with context-sensitive instruction and encouraged to try again. A number of studies had shown that providing the bulk of the instruction after failure yields significantly higher retention of information.

The system was later copied (often poorly) by other corporate training companies, and is now a pretty standard offering of most custom elearning vendors (although I would argue that the desire of many vendors to push such modules into a one-size-fits-all assessment harness profoundly degraded the experience — at CogArts we built an LMS that was precisely tailored to the needs of our scenarios).

This autodidactic gaming approach to elearning seems miles away from the PLE and the Inverted LMS (I still haven’t quite resolved whether those are the same thing — please excuse my transitional use of both terms). The Inverted LMS is inherently social and collaborative; the CogArts model was solitary and self-taught. Indeed, if there was one flaw with what we did at Cognitive Arts, it was probably that in the move from CD-based non-networked learning to web-based instruction we were not radical enough in our rethinking of the social element of education.

Despite that, I’d argue that simulations are very close to the PLE/Inverted LMS in theory. Why?

Because both focus on learning by doing. Where real-life failure carries high risk, simulations make a lot of sense. And where the definition of success in a field or task is very narrowly defined, simulations shine. The flight simulator, one of the first computer applications ever built, still remains the model here.

But the web has introduced us to plenty of low-risk ways to engage in disciplines. And that’s where the new approach comes in.

An example? At CogArts, one of the apps I admired most was the “Is it a Rembrandt?” simulation, which provided students with detailed pictures that could be faked paintings or undiscovered Rembrandts. The students, through learning about Rembrandt’s style, had to make the call. Experts were there to give them just-in-time instruction should they fail — explaining this or that about brush strokes or subject matter.

I’d still pay good money to use that sim — I think it remains a wonderful way to learn, and one that appeals to our gaming culture. Put software like that in a current high school, and you’re going to blow the doors off education. In a good way.

But what is striking nowadays about the web is how it supplies plenty of real low-risk problems for students to engage in. The Rembrandt simulation was built during a mid-90s rash of discoveries that certain Rembrandts were fakes. Ten years later, if such a thing happened, there’d be a good chance you could get hi-res photos of detail from the fakes, if you asked nicely.

So what happens then? You gather your students, you put up a wiki and a series of student blogs, you roll your sleeves up, and you get your class analyzing the paintings. Google becomes your just-in-time learning application, which is cool, because that’s what your JIT solution will end up being in real life. Success or failure is determined, as in life, somewhat fuzzily, by the reaction of the experts: if you can get them to engage with your work at all, that’s a high level of success; if they actually start agreeing with you or noting things as valuable insight, even better.

I miss both producing and playing with the Schank software, just because of how much fun it was, and if I could buy those titles shrink-wrapped from the local Staples today, I’d spend my own money to buy a title a week. Heck, I may go home tonight and play the Cable & Wireless simulation, which I still have a disc of somewhere. In a perfect world the government would fund more of these sorts of simulations.

But the brilliance of the internet is how much it matches, for a certain subset of problems, the perfect learning environment CogArts was simulating in its courseware. As with the simulations, on the internet you can try out ideas without much risk, you can get information from Google on a just-in-time basis, and you can talk to experts about the validity of your decisions. And, yes, it’s a lot fuzzier, and I certainly don’t want my pilot to have put in 90 hours of BLOGGING, but for certain types of learning (and possibly for most learning), it’s a preferred method of engagement.

Marc Andreessen Supports the Inverted LMS (sort of)

This is fascinating, to me at least. Marc (are we allowed to call him Marca?) came late to blogging, but he’s clearly making up for lost time and talking to the right people.

But what I noted in his recent post was how much his view of the larger web (via Sifry) matches exactly what we’ve been talking about over here vis-a-vis the Inverted LMS (or really the Inverted CMS idea applied to education). Marc writes:

The first time I met Dave Sifry, over three years ago, he told me that conversations on the Internet would eventually all revolve around every individual having a blog, each individual posting her own thoughts on her own blog, and blogs cross-linking through mechanisms like trackbacks and blog search engines (such as Dave’s Technorati).

The advantage of this new world, said Dave, is that each individual (anonymous or not) would be publicly responsible for their own content and in charge of their own space — substantially reducing the risk of spam and trolls — and the communication would flow through the links. There would still be the risk of link spam, but at least this new world would make people more responsible for their own content, and that would tend to uplevel the discourse.

I think Dave is exactly right, and the implications of this new world are very interesting.

The rest of the post is worth reading too — it’s more of a head-nodder, mostly reiterating stuff that ALL bloggers learn very quickly, but it’s great to have it all in one place. And it has the neat advantage that you can send it to the non-believers with a note that says “From the guy that co-founded Netscape.”

I’m saying, it doesn’t hurt.

Offline thinking

I get a wave of nostalgia when I read a John le Carré novel. Not for the simplicity of Cold War politics or for spy novels written with a real sense of literary style, but for the physicality of the world George Smiley inhabits. Trying to figure out a particularly thorny problem, he grabs a notebook, brings the rotary telephone over to the table, and between making a couple of phone calls, thinking a lot and writing a bit, he comes to some conclusion.

I miss the quiet of years gone by, the unconnectedness, and when I read small passages like that a strange bit of longing for that world sweeps over me.

And while there is a certain nostalgia here, I can’t help but think there is something bigger too — that we have lost something important to society, something beyond the aesthetic of a clean table, a scratchpad, and a rotary phone.

I remember one particular month in 1992, for example, when I was struggling with some difficult articles on linguistic style. I’d pound my head against some of the text, armed only with a few reference works on the table at Dunkin’ Donuts, and get as far as I could by positing possible interpretations and checking them against the text. And then I’d mark out what things I didn’t understand, pick out relevant articles in the endnotes, and make a note to photocopy them next time I was at the library.

Here’s the dated bit: by the time I got articles commenting on the original, I’d often find I disagreed with their analysis. I had had time to solidify my opinion before joining the conversation.

Business has had its related losses, some very early on. My father, an old DEC guy, once noted to me the difference that Excel had brought to the enterprise in the late 1980s. Before spreadsheets, he said, you’d spend a lot of time hashing out assumptions. You’d get them nailed down, and then you’d do the math. After Excel, he said, the temptation to play with assumptions until you got the result you wanted was too great.

I mean, if we bump this figure up by 0.12, and this one down by half a percent, we’re golden, right?

What I’m trying to say, I guess, is that all these things are connected, and we’re still trying to deal with them. No George Smiley, the CIA collects all internet traffic in America, and tries to data mine it, without so much as positing an assumption first. A kid reads Roman Jakobson, and is immediately exposed to other people’s summaries of the article he has just read, before he has fully parsed it himself — before he has a chance to disagree. An accountant fudges Excel inputs just enough that a projection becomes a positive indicator.

They are all tied together, and together they represent one of the problems of our age. When conversation or computing power is readily available we tend to jump to it very fast. But for those conversations and computations to be meaningful, we have to enter into them with a contribution of our own.

And that requires us to wait a bit.

What worries me about the modern world is not that amateurs are taking over. It’s that the amateurs might be so soaked in the conventional wisdom of a discipline from a very early point that they won’t bring to the table those needed misreadings that have always fueled progress in the past. That without the silence in between, the conversation will become less varied and meaningful.

Which turns, oddly, into an ode on blogs. For today I sit on my porch, unwired, typing on an AlphaSmart Neo and reading some documents I downloaded onto my Sony Reader. And while I’m sure I haven’t pulled together the most cogent argument (or linked as much as I might), it feels damn good.


So I wonder if it’s possible to move back after all, to think in wider and longer swaths again, but still keep the connectivity. And I can’t help but think that the lowly blog, with its talent for doing conversation as a series of longer cross-referenced articles, is the best channel we currently have for such discourse.

Regardless, I think it’s worth it to continue talking about what a healthy community of discourse looks like, rather than to assume that future professional communities must borrow their idiom from current teen or gadget-geek culture.

That is, perhaps we should have the discussion that Andrew Keen and Michael Gorman would start if they were not so interested in being inflammatory. One that notes that Marc Andreessen is trying to get offline more, and that Lessig declared email bankruptcy over three years ago.

There’s a real hunger right now for a work that pulls together these New Primitive impulses developing among the older techies and reconciles them with the beauty of the data-finds-data world Jon Udell recently discussed on his blog.

In short, how do we structure our lives so that we get both the benefits of mass conversation and the restorative power of the silences in between?

Update: I just discovered that Martha Burtis asks perhaps the first important question, one which I skipped over here: How are we blogging now? What are our techniques, and what have we found works and doesn’t work? Much better starting point than my Smiley-induced ramble.

From there it becomes a question of a variety of best practices…but the first step is really to make visible our experience, like those old books on writing that would just be collections of reflections by writers on schedule, technique, process, etc. “I usually start typing at four in the morning on my Remington from notes made the previous afternoon,” said Writer X., etc….

John Willinsky and the Ten Years War

If you’re interested in education and technology, go (now!) and listen to Jon Udell’s recent interview with John Willinsky. Then go listen to Willinsky’s fascinating one-hour lecture, which deals with everything from Isaac Newton as proto-blogger to Wikipedia error rates to why our exam-book culture is selfish and anti-intellectual.

You might want to listen at home. I made the mistake of listening just now at lunch, and I don’t know how I’m going to work for the rest of the day.

I want to march. I want to start a revolution.

But since I left my musket and pitchfork at home, I got off the Willinsky lecture hyped up, and put the energy into browsing the web instead. And I bumped into an old friend in a surprising place.

Back in 1996/97 I worked at Northern Illinois University. And, long story short, I sold them on an idea I called visible education. We did a site called The Persona Project, which was supposed to be a student-produced encyclopedia of biographies.

Then I left grad school, and the site died a slow lonely death.

Here’s the weird bit. The site still exists. I just found out it’s still on NIU’s servers, here:

http://www.clas.niu.edu/persona/index.htm

Apparently no one had the heart to delete it.

What a time machine that site is. And what a trip to see that I’m saying exactly the same things today, and calling them “Web 2.0”.

I’m not showing this to prove how smart and visionary I was in 1997 (although, come on, it *is* kind of cool).

But rather, reading through the site and seeing how much it matches with the Willinsky pieces, it just really brings something home for me.

We’ve been fighting this battle, off and on, for 10 years now. Some of us more than that. But when I listen to John Willinsky the ideas don’t sound old, or tired. I don’t roll my eyes and say “We’ve pushed for this for 10 years, it has no legs.”

When I hear Willinsky, I think, we’re almost there. One more push.

To some people that might sound like I’m in denial.

So be it. I’ve waited (and pushed) 10 years to get to this point. I can do another 10 years if I need to.

When you believe in something passionately, time just scales differently.