So after all the compromises made on the stimulus bill to get Republican support, the roll call is in. The result?
Post-partisanism is a dream, and it always has been. It’s a fiction created by a bunch of Washington journalists who would like the parties they go to to be friendlier, to be party-integrated in a way they haven’t been since Newt Gingrich changed how politics was waged.
It will be interesting to see if Obama keeps playing what Atrios so rightly calls “Football with Lucy”, or if there will be a major shift.
Leigh Blackall replies to my previous post using the example of working with MediaWiki (which is a boon to reuse, but requires training and formatting time — which, in turn, sucks some of the uptake out of transparency efforts). It’s a great example of something just at the edges — like Leigh, I think my instinct would be to push ahead with MediaWiki use — but also to realize that we’re getting considerably less info out than if we let faculty simply post the native forms they already use into an unorganized repository of PDF, Word, and Excel sheets.
It’s always a struggle to find the right balance, but on the scale of designing for reuse, MediaWiki production seems a small reuse tax to pay.
His comments, though, made me think a little more about the issue, particularly about how it relates to the objects vs. source code debates of the 90s and early 00s. And about what this division means when it gets away from that middle area Leigh is working in.
Transparency (show your code) does promote a certain type of reuse — but it is generally, I think, reuse by professionals of the same caliber. And this is where the OO vs. scripting language comparison comes in useful — the idea of scripting languages is sort of a single tier — scripters reuse what they learn looking at other scripters.
The whole OO idea, when expressed as a business model, was that there are different tiers of user/creators — that the way-smart people make the objects and the less smart (and less paid) people script them together, and this maximizes efficiency.
The “everybody is a scripter” model (which I see as a sort of craft model) and the specialized-production OO model (which I see as a manufacturing model) come from two really fundamentally different world views — they intersect in this small place, but at the edges they start to tug at each other.
Once again, I think we need both — the Python Library is a thing of beauty, and allows me to do crazy things with code that I could never do on my own. On the other hand, so much of what I’ve produced of use has come from hacking at spaghetti code copied and pasted from somewhere.
I think there are analogues in open education, even in a single implementation. I might grab the best lecture on Aeration from TU Delft and drop it unedited into my curriculum. I might follow that by reviewing the reading list for that course, and pulling one or two readings I have missed into my own curriculum. But I think even in this case, these are two slightly different activities — in one instance I am essentially a consumer, and in the other I’m a co-producer.
I’m flitting around a bit on what this idea means, and how it maps onto things, so either bear with me or speak up in the comments and help me nail it down.
I’ve been looking at a number of comparisons of Carnegie Mellon’s OLI resources to MIT’s OCW, most stimulated by David Wiley’s course on Open Education. The comparisons are interesting, and it’s great to see the different angles people have found on this.
However, I haven’t seen what I consider to be the core difference articulated — at least in the precise way that I would articulate it. So at the risk of kibitzing the class here, I offer my unsolicited analysis.
Openness is an umbrella term for a number of things. In the mashup era we tend to think of it in terms of reuse. And in the OLI model that is paramount. OLI is about reuse.
MIT OCW (in my completely unofficial opinion) is about the value of transparency. And transparency, not reuse, is the core concern.
It would be tempting to say that, while separable concepts, the aims of reuse and transparency are so synergistic as to never be at odds. But this isn’t the case. Engineering for reuse takes a certain type of investment that constitutes a drag on transparency efforts. Transparency is most effective when as much is made transparent as possible. The principle behind transparency is that you never know what bit of internal information may be valuable to outsiders. And you shouldn’t really spend too much time worrying about it — get as much open as you can.
Reuse, when materials have to be reformatted, has different goals. Find what is useful, and put effort into the objectification of it.
Programmers will recognize this division immediately as the OO vs. scripting religious war, transposed in a different key. Does the world belong to elegant APIs and object models, or does it belong to those who open up the spaghetti code of their script to the world?
The answer ends up being that the world needs both — but if you apply a Perl or Python aesthetic to a COM object the COM object comes up lacking, and vice versa.
Worth keeping in mind.
On a slightly related note, I saw a documentary about Tom Dowd yesterday, the guy who did sound engineering on everything from John Coltrane to Aretha Franklin to Eric Clapton. And one of the weirdest stories in the film was this — he went over to England in 1967, and was visiting with the Beatles, and they thought, hey, since you’re here, maybe you could engineer something for us. Dowd went looking for an eight-track board — the kind by Ampex that Atlantic Records had been using since 1958. And all he could find in 1967 England were three-track machines. They had no idea that the soul and R&B recordings in America were being done on 1-inch eight-track devices.
In fact, George Martin, the Beatles’ producer, had the only four-track in England. He thought he had the most advanced machine available. And so Sgt. Pepper’s, one of the most sonically ambitious records of its time, was recorded with a pair of four-tracks, bouncing tracks back and forth (with each bounce costing fidelity). Let me repeat this for emphasis: in Britain they had no idea that the eight-track machines that had existed in America for almost a decade existed at all.
Nowadays, I imagine Dowd — who was never proprietary about his techniques when asked — would run a blog, talking about the experience of mixing the eight tracks he used on 1961’s Stand By Me and other Atlantic classics. And George Martin would read that blog and have Sgt. Pepper’s — heck, have Revolver — mixed on the Ampex eight-track, just as they ended up using an Ampex 8-track for portions of the White Album, after they learned of its existence.
That would be the power of transparency — a decade leap in sound technology for Britain. A long way of making the point, I suppose, that we shouldn’t underestimate the power of a professor looking at someone else’s class materials and saying — hey, I wonder how they do it over there… if anything, the classroom is a far more closed space right now than the studios of 1967, and we can only imagine what innovations we might find if we opened the whole thing up…
Cryptic note today at Federal Computer Week:
General Services Administration officials are negotiating with Google’s YouTube about the rules governing posting government videos on YouTube, a GSA official said today.
The negotiations focus on YouTube’s terms of service, said Tobi Edler, a GSA spokeswoman.
A coalition of federal agencies led by GSA’s Office of Citizen Services has been negotiating with YouTube for six months, Edler said.
When an agreement is reached, it will be offered to all federal agencies, Edler said.
“The discussions have been fruitful but are still continuing, and the final agreement has not yet been reached,” Edler said.
I don’t quite know what this means. But if it means what I think it means, that YouTube will become a preferred outlet for government publication of video, I don’t like it.
Don’t get me wrong. I think it would be great to have government video on YouTube. But wouldn’t it be better to support something like archive.org, or better yet to build a government infrastructure, available to all levels of government for publication of government produced video? And then let a Creative Commons license do the rest?
Ok, ***Deep Breath***. It’s really impossible to tell what the story is here. Let’s wait till it comes out — but this is definitely something to watch. Be prepared to fight the Googlization of government in the very near future.
I’m a Krugman/DeLong sort of guy when it comes to economics, but I do try to read as widely as my limited free time permits. Even more so since the crash. And I do read the critiques of infrastructure spending. That’s a small reading list lately. But it’s there.
Reading the Becker-Posner blog today, I am reminded of the traditional conservative critique of infrastructure investment, that it “crowds out” private investment:
If the government increased its spending on infrastructure when the economy has full employment, its main impact would likely be to draw labor, capital, and raw materials away from various other activities. In effect, increased government spending under these employment conditions would “crowd out” private spending. Measured GDP would not be much affected, if at all.
The broad answer to this challenge is (I think) that risky assets are currently undervalued by the marketplace, and while that is the case private spending is likely to be timid, and several steps and a couple death spirals down the flow chart later, this results in excess unemployment. And by definition, where government projects cut into excess unemployment (rather than pull from a workforce near optimum employment), they can’t be crowding out private initiatives. That is, after all, what the “excess” in “excess unemployment” means.
Becker concedes that these are not normal times, and different rules apply. But then he asks what I feel is a legitimate question — how are we sure that people are pulled from the right part of the workforce?
For another thing, with unemployment at 7% to 8% of the labor force, it is impossible to target effective spending programs that primarily utilize unemployed workers, or underemployed capital. Spending on infrastructure, and especially on health, energy, and education, will mainly attract employed persons from other activities to the activities stimulated by the government spending. The net job creation from these and related spending is likely to be rather small. In addition, if the private activities crowded out are more valuable than the activities hastily stimulated by this plan, the value of the increase in employment and GDP could be very small, even negative.
I disagree, of course, with the main idea, that people in private industry with good jobs will flee to government stimulus jobs. Jobs are the new black, and in this economy if you know you are not replaceable at your current job, that’s probably enough incentive to not go looking for greener pastures.
However, there is a particular reading of that concern that makes sense. If 100,000 auto assembly line people lose their jobs, and the stimulus creates 100,000 jobs laying fiber optic lines, there’s a bit of a disconnect there. To be effective, stimulus jobs have to require the skills possessed by a large set of unemployed people.
Which brings me back to the why OCW production is such a perfect candidate for infrastructure development. It is a generalist endeavor.
If a plan was implemented to put two $35,000/yr OCW production positions at every school which wanted to put 40 courses online over the next year, it would not crowd out any significant private initiatives (I’m in a position to say that with some certainty, given my job). And even more importantly, the generalist nature of the position (record audio, coordinate document publication, set up a simple web site) ensures that people doing common office work in other industries could easily cross over into OCW production. We’re not talking about a specialized pool of labor where there may be some constraints.
I’m not saying, of course, that there aren’t specialized skills and talents that an OCW team develops. There are. But there isn’t a specialized OCW workforce at this point — if major infrastructure money was to be spent jump-starting OCW production the pool of government-funded workers would so dwarf the current pool of OCW workers that any talk of the demand for new OCW workers being met by reapportionment of people in the profession would be ridiculous.
The only negative effect I can see is that since it might increase demand for the small pool of experienced OCW workers that currently exists, it could result in slight wage inflation for current OCW workers — a result that I might like, but which would be harmful to the bottom line of many nascent initiatives. The solution to that, though, is simple: just set the wage for the new jobs at 80% or so of the current pool’s — a good rate for entry-level workers, but a raw deal for the more experienced.
Just when you thought Andrew Keen had faded away he brings on the crazy:
On December 6, Barack Obama announced his intention to fund a massive public works program of somewhere between $400 and $700 billion which will create enough jobs to avert the economic catastrophe of the 1930s. But I fear that one element in Obama’s well-intentioned infrastructure plan—his goal of providing all Americans with broadband Internet access—might one day be seen as inadvertently laying the foundations for a return to fascism, the political catastrophe of the 1930’s.
Is it Godwin’s Law if you get to the Hitler comparison in the first paragraph?
I really can’t do this piece justice. You have to read it yourself for the full humorous effect. It is paragraph after paragraph of fearmongering about, well, how the internet will lead to fearmongering. Apparently the traditional media, with all their “fact-checking”, are all that stand between the unemployed masses with all their undirected rage and a new holocaust.
I’m not exaggerating. This isn’t some humorous restatement of what he says. This is his core argument — that broadband in every home combined with double digit unemployment will not only lead to nasty societal effects, but to the rise of “digital fascism”.
I hate to break it to him, but the lesson of history is that in times of unemployment and famine you really don’t need much technology to spread hate or instability. That’s one of the many reasons that keeping unemployment down is crucial to the survival of liberal democracies. You want to avoid the nastiness — do everything you can to keep employment within historical norms (an effort, BTW, that our media is currently trying to sabotage with their uncritically repeated Republican talking points about ‘belt-tightening’). Deal with the cause of the unrest and the rage.
The move from print to web has nothing to do with it. In fact, Keen seems blissfully unaware that this country’s most recent lurch to authoritarianism was ably aided and abetted by his fact-checking press corps, while those loons on the net, the bloggers, were left to do the fact-checking on everything from wiretapping to weapons of mass destruction.
History can be a difficult thing that way. But the larger question, reading Keen’s article, is whether anyone in the academic community can continue to take him seriously at this point. He’s drifting from pompous kook into Ann Coulter territory here, and those that have tied themselves to his line of reasoning best take heed now and cut the ropes.
So I read my Friday post, and it’s a mess.
So here’s my point, simply stated:
We are an event-driven culture.
The reasons for that in the past have often been due to technical limitation.
Those limitations are gone, and now we find ourselves in a world where we no longer have to pull our learning or entertainment out of the event stream. In video, that world really started with the video store. But still people gravitate towards the new releases section. Similarly, radio broadcast of music is ridiculously redundant, yet we still live in a world of the latest hit.
Why? Because despite the long tail of the past, and the availability of customized media, there is still a social, emotional, and possibly intellectual need for certain things that event-driven culture provides. One of those desires is to talk about things on the same day, that is to talk about Episode 7 of Battlestar Galactica with people that have just seen Episode 7, but not Episode 8. There’s something to that. There’s a democracy to having that conversation where neither person knows what is going to happen next week. There’s a joy in listening to a hit song, and forming an opinion on it which can be understood by others listening to the same hit song.
So where does that leave us, the people enabled by technology to transcend the narrow broadcast stream of the immediate, but still feeling some loneliness on that journey? Well, I like this idea of cohorts.
A cohort is like a book group. They decide to read a series of books in the same order, one at a time. Perhaps they even have a mid-book break, where they read up to a page and discuss “The book so far”. Similar groups are likely to grow up around instant media and stuff in the Long Tail of the Past. You want to go through Kojak or The Prisoner on Netflix “Watch Instantly”? Well, you’ll have the option soon, I think, to either watch it, or set up a cohort schedule. You set up a cohort, and invite friends to it. Friends that accept will get a Kojak episode added to the top of their queue on a specified schedule, and by joining the cohort will be locked out from “reading ahead”.
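Mechanically, the lockout is simple. Here is a toy sketch of the idea in Python — the `Cohort` class, the episode names, and the scheduling rule are all my own illustration, not any real Netflix feature:

```python
# A toy model of the cohort lockout: everyone in the cohort shares one
# release schedule, and no one can watch ahead of it.
from datetime import date, timedelta

class Cohort:
    def __init__(self, episodes, start, interval_days=7):
        self.episodes = episodes
        self.start = start
        self.interval = timedelta(days=interval_days)

    def unlocked(self, today):
        """Episodes released so far; members are locked out of the rest."""
        released = (today - self.start) // self.interval + 1
        return self.episodes[:max(0, released)]

kojak = Cohort(["Ep 1", "Ep 2", "Ep 3"], start=date(2009, 2, 1))
print(kojak.unlocked(date(2009, 2, 10)))  # two weekly episodes out by then
```

The point of the hard lockout is social rather than technical: because nobody in the cohort can see past the schedule, everyone shares the same interpretive position when they talk.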
I can see similar things happening in other media. And I think with Tony Hirst’s work on scheduled RSS feeds, and the work of Philipp Schmidt and others on P2P study groups around OCW, we are starting to see this emerge in education as well — static content transformed into a dynamic serialized stream to improve engagement.
So there you go: Cohorts. Or perhaps another term I haven’t heard of. But whatever it is, it’s the future.
“Cohort” is a term used in sociology and education that refers to a group of people that experience a certain set of events simultaneously as they move through time. Cohort isn’t a perfect term, but I wonder if we are coming to a point where we need a term that gets rid of the meddlesome baggage associated with a class, but preserves the idea that there’s a particular type of peer instruction that benefits from everybody being on the same lesson at the same time.
Or failing a consensus on that point — at least a term that allows us to discuss the issue, which lately I see popping up all over the place, from Philipp’s quoting John Seely Brown to talk about founding principles for P2PU type efforts:
Together, members construct and negotiate a shared meaning, bringing the group along collectively rather than individually. In the process, they became what the literary critic Stanley Fish calls a “community of interpretation” working toward a shared understanding of the matter under discussion.
To Tony Hirst looking for ways to get OCW content delivered serially:
In contrast to syndication feeds from continually or regularly updated sources, a serialised feed is an RSS feed derived from an unchanging (or “static”) body of content, such as a book, or OpenLearn course unit, for example.
The original work is partitioned (serialised) into a set of separate component parts or chunks – in the case of a book, this might correspond to separate chapters, for example. Each chunk is then published as a separate RSS item. By scheduling the release of each feed item, a book or course can be released as a part-work over a period of time, with each part delivered as a separate feed item.
To Shirky’s recent observation that struck me as so absolutely true in a known and completely mundane way: “…what you see with these user groups, whether it’s for reality TV or science fiction, is that people love the conversation around the shows.” Not that that was his main point here. But it’s true, right? We negotiate experience differently when we feel like we are all going through it for the first time. There’s less of a caste system of amateurs and old-timers. We’re bolder about our pronouncements, more democratic. The possibility for reinterpretation is more dramatic with 99 people going through a course at once than with 99 people being absorbed into a profession or discipline one at a time.
It may be good, it may be bad, but it’s there.
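The serialised feed Hirst describes above is easy to picture in code: chunk a static work, stamp each chunk with a scheduled release date, and have the feed emit only the chunks whose date has passed. A minimal sketch in Python — the function names and chapter data are mine, not Tony Hirst’s actual implementation:

```python
# Sketch of a "serialised feed": a static body of content released as
# dated feed items over time, rather than all at once.
from datetime import datetime, timedelta

def serialise(chunks, start, interval_days=7):
    """Assign each chunk of a static work a scheduled release date."""
    return [{"title": title, "body": body,
             "released": start + timedelta(days=i * interval_days)}
            for i, (title, body) in enumerate(chunks)]

def feed_items(schedule, now):
    """What the feed endpoint would emit at a given moment."""
    return [item for item in schedule if item["released"] <= now]

chapters = [("Chapter 1", "..."), ("Chapter 2", "..."), ("Chapter 3", "...")]
schedule = serialise(chapters, start=datetime(2009, 2, 1))
# by Feb 9, two weekly chunks (Feb 1 and Feb 8) have been released
print([item["title"] for item in feed_items(schedule, datetime(2009, 2, 9))])
```

A real implementation would render each dict as an RSS item with a pubDate, but the essential move is just that scheduling step: static content gains an event stream.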
I’ve been thinking about how this is such a pervasive problem in all aspects of culture. TV should be dead, by rights. Ages ago. But the one thing it provides is a serialization mechanism for art, where there’s at least a chance that you could talk to somebody that has seen Episode 8 of Lost, but not Episode 9.
Netflix could solve this of course, and reinvigorate a lot of series in the process. What you would need, a la Hirst, is a serialization mechanism (and here, again, talking in terms of the original meaning of serialization, not its specialized computer science meaning). You and your friends sign up to watch the mid-90s series Earth 2, and it delivers you an episode a week. Or every three days. Or each night. Whatever — as long as it allows for shared reflection in between the events.
In other words, you become a cohort, moving through the series in sync so that everyone shares a similar interpretative environment. If Netflix added that, and just that, to its Watch Instantly offerings, it would change the digital delivery of old TV shows into something entirely different. The same way P2PU would transform the face of OCW use, and the same way Tony’s experiments are pushing the delivery bar.
The “class” is dead, as is the “audience”. Long live the cohort.
We’re facing the most dire economic situation in our nation’s history, but that doesn’t prevent Inside HigherEd from printing the most uninformed analyses of the current situation. Here’s a sample. Commenting on a recent letter from 51 “presidents, chancellors, regents, and heads of university associations” asking that portions of the stimulus be spent on shovel-ready higher-ed infrastructure projects, educational policy analyst Jane Shaw comments:
Why did these educators choose capital funding — that is, constructing “essential classroom and research buildings and equipping them with the latest technologies”? Wouldn’t tuition discounts, tax credits, more scholarships, or even faculty salaries be more directly related to the problems that they decry?
Let’s lay out the basics here. Other things equal, public investment is a much better way to provide economic stimulus than tax cuts, for two reasons. First, if the government spends money, that money is spent, helping support demand, whereas tax cuts may be largely saved. So public investment offers more bang for the buck. Second, public investment leaves something of value behind when the stimulus is over.
That said, there’s a problem with a public-investment-only stimulus plan, namely timing. We need stimulus fast, and there’s a limited supply of “shovel-ready” projects that can be started soon enough to deliver an economic boost any time soon. You can bulk up stimulus through other forms of spending, mainly aid to Americans in distress — unemployment benefits, food stamps, etc. And you can also provide aid to state and local governments so that they don’t have to cut spending — avoiding anti-stimulus is a fast way to achieve net stimulus. But everything I’ve heard says that even with all these things it’s hard to come up with enough spending to provide all the aid the economy needs in 2009.
In other words, it’s pretty simple — the signatories proposed shovel-ready infrastructure projects because that’s what’s needed to save the economy. Shaw’s suggestions may be good ideas, but they are poor stimulus. Higher faculty salaries don’t necessarily translate into jobs, tax credits don’t tap into unutilized productivity, and as much as tuition discounts may be needed, it’s not clear that they put a single person to work or increase demand in the slightest. Does a person with a tuition discount buy more education?
But the errors in the Inside HigherEd article don’t end there. Shaw also says later:
By asking the taxpayers to rev up those projects, the administrators are essentially saying that if state taxpayers can’t afford a project, some mythical “federal taxpayer” can.
But state and federal taxpayers are, by and large, the same people. If Arizona is seeing its tax revenues dip, chances are that the federal government will see its taxes go down, too. If the people of Arizona are hurting, probably taxpayers countrywide are hurting, too.
This is just a ridiculous level of analysis. The difference between states and the federal government is that the federal government can print money and states can’t. You can argue about the dangers of essentially borrowing against the future, but the point of stimulus is to avoid the larger government cost of economic collapse by injecting enough capital into the economy to prevent it. States have some capacity to borrow, but in a skittish, frozen economy that capacity totals a couple drops in several buckets. To equate the economics of state budgets with the economics of the federal budget is dangerously uninformed.
Let me be clear — certainly there are economists that have different opinions about the relative worth of stimulus in preventing (or softening) depressions. In the face of overwhelming evidence that monetary policy alone can’t save us, they’re getting fewer, but there are still people that have legitimate disagreements with the Keynesian principles behind government stimulus.
But that’s not what the article in Inside HigherEd is disputing. It is, in fact, taking the stimulus as a given. It is arguing the relative worth of the contents of a stimulus package. And yet the author does not seem to understand what a stimulus package is supposed to do, or how it is expected to do it.
We’re on the brink of the biggest economic disaster in our nation’s history, and right now the biggest risk to our economy is that 28 years of public sphere claptrap about how national economic policy is essentially no different than a family budget is preventing us from implementing the dramatic remedies that are really our only hope of avoiding complete collapse. Publications like Inside HigherEd should be doing their best to raise the level of dialogue on these issues, not dragging it back down into ignorance.
I’d thought I was abandoning blogs and projects in a linear pattern, but history here has taken a Viconian turn: I’ve revived my Endless Lunch blog, with the purpose of using it to connect with other writers in pursuing my personal new year’s resolution — to write 52 songs and 12 short stories this year.
Yeah, I know I could use this blog, with tags and the like. Not sure why I’m doing it over there. Just feels right.
It will be interesting, though, to get back to using a blog as a novice in an area where I want to do something new. We talk a lot about the benefits of blogging to experts, but those benefits still don’t compare to what a novice can get out of the blogosphere — that zero-to-sixty experience that is such a rush.
In any case, if you are interested in my analyses of 60s pop hits, the prose of Douglas Adams, and various recording projects and short stories of mine, head over to 52/12. And if you are interested in my thoughts on the Theory of the Golden Ticket and the Endless Lunch, head over to this post in particular.
Hope to see you all there!