So after all the compromises made on the stimulus bill to get Republican support, the roll call is in. The result?
Post-partisanship is a dream, and it always has been. It’s a fiction created by a bunch of Washington journalists who would like the parties they go to to be friendlier, to be party-integrated in a way they haven’t been since Newt Gingrich changed how politics was waged.
It will be interesting to see if Obama keeps playing what Atrios so rightly calls “Football with Lucy”, or if there will be a major shift.
Leigh Blackall replies to my previous post using the example of working with MediaWiki (which is a boon to reuse, but requires training and formatting time — which, in turn, sucks some of the uptake out of transparency efforts). It’s a great example of something right at the edges — like Leigh, I think my instinct would be to push ahead with MediaWiki use — but also to realize that we’re getting considerably less material out than if we let faculty simply post the native forms they already use into an unorganized repository of PDF, Word, and Excel files.
It’s always a struggle to find the right balance, but on the scale of designing for reuse, MediaWiki production seems a small reuse tax to pay.
His comments, though, made me think a little more about the issue, particularly about its relation to the objects vs. source code debates of the 90s and early 00s. And about what this division means when it moves away from that middle area Leigh is working in.
Transparency (show your code) does promote a certain type of reuse — but it is generally, I think, reuse by professionals of the same caliber. And this is where the OO vs. scripting language comparison comes in useful — the idea of scripting languages is sort of a single tier: scripters reuse what they learn looking at other scripters’ work.
The whole OO idea, when expressed as a business model, was that there are different tiers of user/creators — that the way-smart people make the objects and the less smart (and less paid) people script them together, and this maximizes efficiency.
The everybody-is-a-scripter model (which I see as a sort of craft model) and the specialized-production OO model (which I see as a manufacturing model) come from two really fundamentally different world views — they intersect in this small place, but at the edges they start to tug at each other.
Once again, I think we need both — the Python Library is a thing of beauty, and allows me to do crazy things with code that I could never do on my own. On the other hand, so much of what I’ve produced of use has come from hacking at spaghetti code copied and pasted from somewhere.
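To make the contrast concrete, here’s a hypothetical Python sketch of the two modes of reuse. The example and all the names in it are mine, not anything from the original debates. The first snippet leans on a polished library object built by specialists; the second is the kind of hand-rolled loop you copy from somewhere and hack at.

```python
from collections import Counter

words = "the quick brown fox the lazy dog the".split()

# "Manufacturing" reuse: a polished object built by specialists,
# scripted together by the rest of us. We never read its source;
# we trust its API.
word_counts = Counter(words)
print(word_counts.most_common(1))  # [('the', 3)]

# "Craft" reuse: spaghetti copied from somewhere and hacked at.
# Transparent, editable, a learning opportunity -- and we own
# every bug we paste in.
counts = {}
for word in words:
    counts[word] = counts.get(word, 0) + 1
top = max(counts.items(), key=lambda kv: kv[1])
print(top)  # ('the', 3)
```

Both get the same answer; the difference is who is expected to understand the internals, and who merely wires the pieces together.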
I think there are analogues in open education, even in a single implementation. I might grab the best lecture on Aeration from TU Delft and drop it unedited into my curriculum. I might follow that by reviewing the reading list for that course, and pulling one or two readings I have missed into my own curriculum. But I think even in this case, they are two slightly different activities — in one instance I am essentially a consumer, and in the other I’m a co-producer.
I’m flitting around a bit on what this idea means, and how it maps onto things, so either bear with me or speak up in the comments and help me nail it down.
I’ve been looking at a number of comparisons of Carnegie Mellon’s OLI resources to MIT’s OCW, most stimulated by David Wiley’s course on Open Education. The comparisons are interesting, and it’s great to see the different angles people have found on this.
However, I haven’t seen what I consider to be the core difference articulated — at least not in the precise way that I would articulate it. So at the risk of kibitzing the class here, I offer my unsolicited analysis.
Openness is an umbrella term for a number of things. In the mashup era we tend to think of it in terms of reuse. And in the OLI model that is paramount. OLI is about reuse.
MIT OCW (in my completely unofficial opinion) is about the value of transparency. And transparency, not reuse, is the core concern.
It would be tempting to say that, while separable concepts, reuse and transparency are so synergistic as never to be at odds. But this isn’t the case. Engineering for reuse takes a certain type of investment that constitutes a drag on transparency efforts. Transparency is most effective when as much is made transparent as possible. The principle behind transparency is that you never know what bit of internal information may be valuable to outsiders. And you shouldn’t really spend too much time worrying about it — get as much open as you can.
Reuse, when materials have to be reformatted, has different goals. Find what is useful, and put effort into the objectification of it.
Programmers will recognize this division immediately as the OO vs. scripting religious war, transposed in a different key. Does the world belong to elegant APIs and object models, or does it belong to those who open up the spaghetti code of their script to the world?
The answer ends up being that the world needs both — but if you apply a Perl or Python aesthetic to a COM object the COM object comes up lacking, and vice versa.
Worth keeping in mind.
On a slightly related note, I saw a documentary about Tom Dowd yesterday, the guy who did sound engineering on everything from John Coltrane to Aretha Franklin to Eric Clapton. And one of the weirdest stories in the film was this — he went over to England in 1967, and was visiting with the Beatles, and they thought, hey, since you’re here, maybe you could engineer something for us. Dowd went looking for an eight-track board — the kind by Ampex that Atlantic Records had been using since 1958. And all he could find in 1967 England were three-track machines. They had no idea that the soul and R&B recordings in America were being done on 1-inch eight-track machines.
In fact, George Martin, the Beatles’ producer, had the only four-track machine in England. He thought he had the most advanced machine available. And so Sgt. Pepper’s, one of the most sonically ambitious records of its time, was recorded with a pair of four-tracks, bouncing tracks back and forth (with each bounce costing fidelity). Let me repeat this for emphasis: in Britain they had no idea that eight-track machines, in use in America for almost a decade, even existed.
Nowadays, I imagine Dowd — who was never proprietary about his techniques when asked — would run a blog, talking about the experience of mixing the eight tracks he used on 1961’s Stand By Me and other Atlantic classics. And George Martin would read that blog and have Sgt. Pepper’s — heck, have Revolver — mixed on the Ampex eight-track, just as they ended up using an Ampex 8-track for portions of the White Album, after they learned of its existence.
That would be the power of transparency — a decade leap in sound technology for Britain. A long way of making the point, I suppose, that we shouldn’t underestimate the power of a professor looking at someone else’s class materials and saying — hey, I wonder how they do it over there… if anything, the classroom is a far more closed space right now than the studios of 1967, and we can only imagine what innovations we might find if we opened the whole thing up…
Cryptic note today at Federal Computer Week:
General Services Administration officials are negotiating with Google’s YouTube about the rules governing posting government videos on YouTube, a GSA official said today.
The negotiations focus on YouTube’s terms of service, said Tobi Edler, a GSA spokeswoman.
A coalition of federal agencies led by GSA’s Office of Citizen Services has been negotiating with YouTube for six months, Edler said.
When an agreement is reached, it will be offered to all federal agencies, Edler said.
“The discussions have been fruitful but are still continuing, and the final agreement has not yet been reached,” Edler said.
I don’t quite know what this means. But if it means what I think it means, that YouTube will become a preferred outlet for government publication of video, I don’t like it.
Don’t get me wrong. I think it would be great to have government video on YouTube. But wouldn’t it be better to support something like archive.org, or better yet to build a government infrastructure, available to all levels of government for publication of government produced video? And then let a Creative Commons license do the rest?
Ok, ***Deep Breath***. It’s really impossible to tell what the story is here. Let’s wait till it comes out — but this is definitely something to watch. Be prepared to fight the Googlization of government in the very near future.
I’m a Krugman/DeLong sort of guy when it comes to economics, but I do try to read as widely as my limited free time permits. Even more so since the crash. And I do read the critiques of infrastructure spending. That’s a small reading list lately. But it’s there.
Reading the Becker-Posner blog today, I am reminded of the traditional conservative critique of infrastructure investment, that it “crowds out” private investment:
If the government increased its spending on infrastructure when the economy has full employment, its main impact would likely be to draw labor, capital, and raw materials away from various other activities. In effect, increased government spending under these employment conditions would “crowd out” private spending. Measured GDP would not be much affected, if at all.
The broad answer to this challenge is (I think) that risky assets are currently undervalued by the marketplace, and while that is the case private spending is likely to be timid, and several steps and a couple death spirals down the flow chart later, this results in excess unemployment. And by definition, where government projects cut into excess unemployment (rather than pull from a workforce near optimum employment), they can’t be crowding out private initiatives. That is, after all, what the “excess” in “excess unemployment” means.
Becker concedes that these are not normal times, and different rules apply. But then he asks what I feel is a legitimate question — how are we sure that people are pulled from the right part of the workforce?
For another thing, with unemployment at 7% to 8% of the labor force, it is impossible to target effective spending programs that primarily utilize unemployed workers, or underemployed capital. Spending on infrastructure, and especially on health, energy, and education, will mainly attract employed persons from other activities to the activities stimulated by the government spending. The net job creation from these and related spending is likely to be rather small. In addition, if the private activities crowded out are more valuable than the activities hastily stimulated by this plan, the value of the increase in employment and GDP could be very small, even negative.
I disagree, of course, with the main idea, that people in private industry with good jobs will flee to government stimulus jobs. Jobs are the new black, and in this economy if you know you are not replaceable at your current job, that’s probably enough incentive to not go looking for greener pastures.
However, there is a particular reading of that concern that makes sense. If 100,000 auto assembly line people lose their jobs, and the stimulus creates 100,000 jobs laying fiber optic lines, there’s a bit of a disconnect there. To be effective, stimulus jobs have to require the skills possessed by a large set of unemployed people.
Which brings me back to why OCW production is such a perfect candidate for infrastructure development. It is a generalist endeavor.
If a plan were implemented to put two $35,000/yr OCW production positions at every school that wanted to put 40 courses online over the next year, it would not crowd out any significant private initiatives (I’m in a position to say that with some certainty, given my job). And even more importantly, the generalist nature of the position (record audio, coordinate document publication, set up a simple web site) ensures that people doing common office work in other industries could easily cross over into OCW production. We’re not talking about a specialized pool of labor where there may be some constraints.
I’m not saying, of course, that there aren’t specialized skills and talents that an OCW team develops. There are. But there isn’t a specialized OCW workforce at this point — if major infrastructure money was to be spent jump-starting OCW production the pool of government-funded workers would so dwarf the current pool of OCW workers that any talk of the demand for new OCW workers being met by reapportionment of people in the profession would be ridiculous.
The only negative effect I can see is that since it might increase demand for the small pool of experienced OCW workers that currently exists, it could result in slight wage inflation for current OCW workers, a result that I might like, but which would be harmful to the bottom line of many nascent initiatives. The solution to that, though, is simple: just adjust the wage paid to the new jobs to 80% or so of the current pool’s — a good rate for entry-level workers, but a raw deal for the more experienced.
Just when you thought Andrew Keen had faded away he brings on the crazy:
On December 6, Barack Obama announced his intention to fund a massive public works program of somewhere between $400 and $700 billion which will create enough jobs to avert the economic catastrophe of the 1930s. But I fear that one element in Obama’s well-intentioned infrastructure plan—his goal of providing all Americans with broadband Internet access—might one day be seen as inadvertently laying the foundations for a return to fascism, the political catastrophe of the 1930’s.
Is it Godwin’s Law if you get to the Hitler comparison in the first paragraph?
I really can’t do this piece justice. You have to read it yourself for the full humorous effect. It is paragraph after paragraph of fearmongering about, well, how the internet will lead to fearmongering. Apparently the traditional media, with all their “fact-checking”, is all that stands between the unemployed masses with all their undirected rage and a new holocaust.
I’m not exaggerating. This isn’t some humorous restatement of what he says. This is his core argument — that broadband in every home combined with double digit unemployment will not only lead to nasty societal effects, but to the rise of “digital fascism”.
I hate to break it to him, but the lesson of history is that in times of unemployment and famine you really don’t need much technology to spread hate or instability. That’s one of the many reasons that keeping unemployment down is crucial to the survival of liberal democracies. You want to avoid the nastiness — do everything you can to keep employment within historical norms (an effort, BTW, that our media is currently trying to sabotage with their uncritically repeated Republican talking points about ‘belt-tightening’). Deal with the cause of the unrest and the rage.
The move from print to web has nothing to do with it. In fact, Keen seems blissfully unaware that this country’s most recent lurch to authoritarianism was ably aided and abetted by his fact-checking press corps, while those loons on the net, the bloggers, were left to do the fact-checking on everything from wiretapping to weapons of mass destruction.
History can be a difficult thing that way. But the larger question, reading Keen’s article, is whether anyone in the academic community can continue to take him seriously at this point. He’s drifting from pompous kook into Ann Coulter territory here, and those that have tied themselves to his line of reasoning best take heed now and cut the ropes.
So I read my Friday post, and it’s a mess.
So here’s my point, simply stated:
We are an event-driven culture.
The reasons for that in the past have often been due to technical limitation.
Those limitations are gone, and now we find ourselves in a world where we no longer have to pull our learning or entertainment out of the event stream. In video, that world really started with the video store. But still people gravitate towards the new releases section. Similarly, radio broadcast of music is ridiculously redundant, yet we still live in a world of the latest hit.
Why? Because despite the long tail of the past, and the availability of customized media, there is still a social, emotional, and possibly intellectual need for certain things that event-driven culture provides. One of those desires is to talk about things on the same day, that is to talk about Episode 7 of Battlestar Galactica with people that have just seen Episode 7, but not Episode 8. There’s something to that. There’s a democracy to having that conversation where neither person knows what is going to happen next week. There’s a joy in listening to a hit song, and forming an opinion on it which can be understood by others listening to the same hit song.
So where does that leave us, the people enabled by technology to transcend the narrow broadcast stream of the immediate, but still feeling some loneliness on that journey? Well, I like this idea of cohorts.
A cohort is like a book group. They decide to read a series of books in the same order, one at a time. Perhaps they even have a mid-book break, where they read up to an agreed page and discuss “the book so far”. Similar groups are likely to grow up around instant media and stuff in the Long Tail of the Past. You want to go through Kojak or The Prisoner on Netflix “Watch Instantly”? Well, you’ll have the option soon, I think, to either watch it straight through or set up a cohort schedule. You set up a cohort and invite friends to it. Friends who accept will get a Kojak episode added to the top of their queue on a specified schedule, and by joining the cohort will be locked out from “reading ahead”.
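For what it’s worth, the mechanics of a cohort are simple enough to sketch in code: episodes unlock on a fixed schedule, and nobody can watch ahead of the group. This is purely a hypothetical illustration — the class and its parameters are my invention, not any real Netflix feature.

```python
from datetime import date

class Cohort:
    """A hypothetical sketch of the cohort idea: episodes unlock on a
    fixed schedule, and members can't watch ahead of the group."""

    def __init__(self, episodes, start, days_between=7):
        self.episodes = episodes          # ordered list of episode titles
        self.start = start                # date the first episode unlocks
        self.days_between = days_between  # release cadence in days

    def unlocked(self, today):
        """Episodes the whole cohort may watch as of `today`."""
        if today < self.start:
            return []
        n = (today - self.start).days // self.days_between + 1
        return self.episodes[:min(n, len(self.episodes))]

kojak = Cohort(["Ep 1", "Ep 2", "Ep 3"], start=date(2009, 2, 2))
print(kojak.unlocked(date(2009, 2, 2)))   # ['Ep 1']
print(kojak.unlocked(date(2009, 2, 16)))  # ['Ep 1', 'Ep 2', 'Ep 3']
```

The interesting design choice is that “locked out from reading ahead” falls out of the schedule itself — the service never has to police individual viewers, it just refuses to serve an episode before its cohort-wide unlock date.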
I can see similar things happening in other media. And I think with Tony Hirst’s work on scheduled RSS feeds, and the work of Philipp Schmidt and others on P2P study groups around OCW, we are starting to see this emerge in education as well — static content transformed into a dynamic serialized stream to improve engagement.
So there you go: Cohorts. Or perhaps another term I haven’t heard of. But whatever it is, it’s the future.