Hapgood

Mike Caulfield's latest web incarnation. Networked Learning, Open Education, and Online Digital Literacy


Piketty, Remix, and the Most Important Student Blog Comment of the 21st Century

Maybe I’m just not connected to the edublogosphere the way I used to be, but the story of Matt Rognlie should be on every person’s front page right now, and it’s not. So let’s fix that, and talk a bit about remix along the way.

(Let me admit the title is a bit of hyperbole, but not by much. Plus, if you have other candidates, why aren’t you posting them?)

First, the story in brief.

  • A scholar named Piketty produces the most influential economic work of the 21st century, which pulls together years of historical data suggesting that inequality is baked into our current economic system at a pretty deep level. It’s the post-war years that were the anomaly, and if you look at the model going forward, inequality is going to get worse.
  • A lot of people try to take this argument down, but mostly misunderstand the argument.
  • An MIT grad student named Matt Rognlie enters a discussion on this topic by posting a comment on the blog Marginal Revolution. He proposes something that hasn’t been brought up before: Piketty has not properly accounted for depreciation of capital. If you account for that, he claims, most of the capital increase comes from housing. And if that’s the case, we should see a slowing of inequality growth.
  • He gets encouragement from Tyler Cowen and others at the blog. So he goes and writes a paper on this and posts it in his student tilde space.
  • On March 20th he presented that paper at Brookings. He’s been back and forth with Piketty on this. To the extent that policy is influenced by prevailing economic models, the outcome of this debate will determine future approaches to the question of inequality.
  • As of today, it seems that, whatever the outcome of that debate may be, Rognlie has permanently altered the discussion around the work, and the discussion around inequality.
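The crux of the depreciation point can be seen in a toy calculation. Everything below is invented for illustration — the function name, the numbers, and the deliberately crude accounting are mine, not Piketty’s or Rognlie’s actual figures:

```javascript
// Toy illustration of why depreciation matters (invented numbers).
// A gross measure counts the part of capital's return that merely
// replaces worn-out capital; a net measure subtracts depreciation first.

function capitalSharePct(grossReturnPct, depreciationPct, capitalToIncome) {
  return {
    gross: grossReturnPct * capitalToIncome,
    net: (grossReturnPct - depreciationPct) * capitalToIncome,
  };
}

// Suppose a 10% gross return, 4% depreciation, and capital worth
// 4x national income:
const share = capitalSharePct(10, 4, 4);
console.log(share.gross); // 40 — capital appears to claim 40% of income
console.log(share.net);   // 24 — only 24% once depreciation is netted out
```

On numbers like these, the choice between gross and net changes capital’s apparent share of income dramatically, which is why an accounting assumption buried deep in the model can reshape the projections built on top of it.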

So first things first — this is a massive story that started as a student blog comment and germinated in tilde space. So why are we not spreading this as Exhibit A of why blogging and commenting and tilde space matters? Did we win this argument already and I just didn’t notice? Because I think we still need to promote stories like this.

Forward Into Remix

Of course, I need to add my own take on this, because this is a perfect example of why remix is important, and why Alan Kay’s dream of Personal Dynamic Media is still so relevant.

Here’s what the first comments on that post read like. You’ll recognize the current state of most blogs today:

[screenshot of an early comment attacking Krugman]

Hooray for petty ad hominem attacks and the Internet that delivers them! Paul Krugman is rich, and therefore his take on Piketty is wrong. Al Gore took a jet somewhere, so climate change does not exist. Call this a Type 1 Comment.

Matt comes in, though, and does something different: he addresses the model under discussion.

[screenshot of Matt Rognlie’s comment addressing the model]

He goes on and shows the impact of this on the projections that Piketty makes. Call this a Type 2 Comment.

This is an amazing story, and an amazing comment. I don’t want to take anything away from this. No matter what the outcome of this debate, we will end up with a better understanding of inequality, thanks to a student commenting on a blog.

But here’s my frustration, and why I’m so obsessed with alternative models of the web. The web, as it currently stands, makes Type 1 Comments trivially easy and very profitable for people to make. The web *loves* a pig pile. It thrives on confirmation bias, identity affirmation, shaming, grandstanding, the naming of enemies, and so on.

On the other hand, the web makes Type 2 Comments impossibly hard. Matt has to read Piketty’s book, download the appropriate data, sift through the data’s assumptions, change those assumptions to see the effect, and then come back to this blog post and explain his findings succinctly to an audience that then has to decide whether to take him at his word.

If they do decide he might be right, they have to re-read Piketty, download the data themselves, change the assumptions in whatever offline program they are using, and then come back to the blog and say, you know, you might be right.

And it’s that labor-intensive, disconnected process that explains why the most important economics blog comment of the 21st century (so far) drew less discussion in the forum than debates about whether Paul Krugman’s pay makes him a hypocrite.

And before people object that maybe that’s just human nature — I don’t think that’s the whole story. The big issue is that the web simply doesn’t optimize for this sort of thing. One of the commenters hints at this, trying to carry the conversation forward but lost as to how to do so. “Is this the Solow model?” he asks. “I need a refresh, but I don’t know what to Google….”

There is a Better Way

What Matt is doing is actually remix. He’s taking Piketty’s data, tweaking assumptions, and presenting his own results. But it’s taken him a whole bunch of time to gather that data, put together the model, and run the assumptions.

Piketty does make his data sets available for the charts he presents, and that’s really helpful. But notice what happens here — Piketty builds a model, runs the data through it, and presents the results in a form that resolves to charts and narrative in a book. Matt takes those charts, narrative, and data, reinflates the model, works with it, then does the same thing to blog commenters: he produces an explanation with no dynamic model behind it.

Blog commenters who want to respond have to guess at how Matt built his model, re-read Piketty, download the data, run their own results, and come back and talk about it.

I understand it’s always going to be easier to post throwaway conversation around a result than to actively move a model forward, clean up data, or cooperatively investigate assumptions. But the commenters on Marginal Revolution and other blogs like it often would like to do more; it’s just that the web forces them to redo each other’s work at each stage of the conversation.

It’s hard to get people to see this. But given that Piketty’s book was primarily about the data and the model, let’s imagine a different world, shall we?

Let’s imagine a world where that model is published as a dynamic, tweakable model. A standard set of JavaScript plugins is created that allows people to dynamically alter the model. In this alternate world, Piketty publishes these representations along with page text on what the model shows and why the assumptions are set the way they are.

When Matt sees it, he doesn’t read it in a book. He reads it online, in a framework like the data-aware federated wiki. He forks a graph he’s interested in, and can click in immediately and see those assumptions. He edits those assumptions in his forked version.

When a discussion happens about Krugman’s post, he writes a shorter explanation of what is wrong with Piketty and links to his forked model. People can immediately see what he did by looking at the assumptions. They can trace it back to Piketty’s page in the fork history, and see what he has changed and if he’s being intellectually honest. If he’s introduced minor issues, they can fork their own version and fix them.

At each stage, we keep the work done by the previous contributors fluid and editable. You can’t put out an argument without giving others the tools to prove you wrong. You disagree by trying to improve a model rather than defeat your opponents.
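To make the thought experiment concrete, here is one minimal sketch of what a forkable model might look like. Everything here is invented — the names, the numbers, and the idea that a tiny object could stand in for Piketty’s full model — but it shows the shape of the thing: assumptions published next to the projection code, and forking as an explicit, diffable operation.

```javascript
// A sketch of a publishable, forkable model (all names and numbers
// invented). The author ships the assumptions alongside the projection
// code, so a reader forks the model by overriding an assumption instead
// of rebuilding everything from the book's charts.

const publishedModel = {
  assumptions: { returnOnCapital: 0.05, depreciation: 0 },

  // Capital's share of income for a given capital-to-income ratio,
  // net of whatever depreciation the assumptions specify.
  capitalShare(capitalToIncome) {
    const { returnOnCapital, depreciation } = this.assumptions;
    return (returnOnCapital - depreciation) * capitalToIncome;
  },
};

// "Forking": copy the model and change one assumption, in the open.
function fork(model, changedAssumptions) {
  return {
    ...model,
    assumptions: { ...model.assumptions, ...changedAssumptions },
  };
}

const depreciationFork = fork(publishedModel, { depreciation: 0.03 });

// Anyone reading the fork can diff the two assumptions objects and see
// exactly what changed, rather than reverse-engineering it from prose.
console.log(publishedModel.capitalShare(4));   // 0.2
console.log(depreciationFork.capitalShare(4)); // ~0.08
```

The point is not the economics, which is grossly oversimplified here, but that the changed assumption is visible and machine-comparable at every step of the conversation.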

The Future

I don’t know what the future of federated wiki is. I really don’t. I think it could be huge, but it could also be just too hard for people to wrap their heads around.

But the point is that it asks and answers the right questions, and it shows that what is very hard now could be very simple and fluid.

It’s great that this comment was able to move through tilde space to have impact on the real world. But when you look at the friction introduced at every step of the way, you realize this story is the outlier.

What would happen if we increased the friction for making stupid throwaway comments and decreased the friction for advancing our collective knowledge through remix and cooperation? Not just with data, but in all realms and disciplines. What could we accomplish?

The Remix Hypothesis says we could accomplish an awful lot more than we are doing now. It says that for every Matt out there, there were hundreds of people who had an insight but not the time to pull together the data. For every Marginal Revolution, there were dozens of blogs that didn’t have the time or the sense to see the value of the most important comment made on them, because it required too much research, fact-checking, or rework.

This is a triumphant story for the advocates of Connected Learning — publicize it! But it’s also a depressing insight into how far we have to go to make this sort of thing a more ordinary occurrence.



12 responses to “Piketty, Remix, and the Most Important Student Blog Comment of the 21st Century”

  1. Lisa Chamberlin

    Reblogged this on Frogs in Hot Water and commented:
    Mike Caulfield sums it up so nicely, I’ll just let his words do the talking.

  2. We’ve got the tools now. Technically. Tools to re-craft incentives. Tools to separate and remix in Javascript and Linked Data. It will take time to put the pieces together. Thanks for the inspiring post.

  3. I don’t get fedwiki. What questions does it ask and how does it answer them? What am I missing? The last happening, for me, was a bunch of you copying and pasting from Wikipedia. As far as I am concerned, you persuaded me that the happening was going to be magic, got me to spend a long time learning to use your unintuitive software, then … failed to show me why you think it is exciting.

    1. mikecaulfield

      I think I’ve provided more than enough background on why it’s important here. Is there something here you don’t understand? Do you disagree with the Remix Hypothesis or not, and why or why not?

  4. […] intellectual property, and the economic effects of digital disruptions — not to mention coming to grips with the nature of digital communication itself. And finally, while Martin is justifiably cautious about making extravagant claims of reduced […]

  5. Thanks for pointing to this post from the Google Groups. I am struggling to understand everything you mean here, so be sure to correct any misperceptions. The example from a fedwiki happening that seems relevant to me (OK, a very simple one) was the recipe one where calculations could be performed, so I can see that a collaboration environment that enables calculation/interactivity with data could be powerful.
    But .. there is always a but .. I am wondering what level of standardisation would be needed for that to happen, and standardisation can enable and impede innovation. Anyway, I will ponder that some more.
    The other examples that your words brought to mind were
    1. The Guardian datasets http://www.theguardian.com/news/datablog/interactive/2013/jan/14/all-our-datasets-index and the innovative use that have been made of them eg http://blog.ouseful.info/2009/04/02/visualising-mps-expenses-using-scatter-plots-charts-and-maps/
    2. The very wonderful Hans Rosling and his site at http://www.gapminder.org/ – if you have 4 minutes, watch this https://www.youtube.com/watch?v=jbkSRLYSojo

    The dreaded Excel spreadsheet has proved itself quite versatile 🙂

    1. You’re 100% correct — the problem is standardization. There’s always a tension between these two things — the web standardized reading, for example, but killed re-usability. Von Hippel and Kay both see the Excel spreadsheet as something approaching what they mean, but not everyone has Excel. There aren’t perfect answers to this, but there’s a lot of good thinking around it that a lot of people aren’t aware of; maybe that’s what I need to talk about next.

      1. I just had a weird experience. I thought I had replied but I must have shut down the browser without sending. Anyway, I am now re-thinking my reply. When I reflected on my own experience of exporting and importing data the most persistent element was not the software but the .CSV format. I would hazard a guess that anyone with a computing device could import .csv data to at least one app/programme. So maybe bricoleurs can scoot about with the aid of a soft data standard. Now I am wondering if that also works for image and video.
        Maybe we could be thinking about how federated wiki gets ‘outside itself’ instead of wanting to draw everything into a standardised infrastructure. Maybe we could have ‘press this’ function on a page.
        Now I am going to dig out my copy of http://books.google.co.uk/books/about/From_Control_to_Drift.html?id=GFg6nwEACAAJ&redir_esc=y

  6. […] as not to “kick up [more] dust” as Jeremy suggests, and to give a feeble nod to Caufield’s “Type 2 comments,” I suggest using “engaging” to supplant “effective” when referring to what our highest […]

  7. […] [toread] Picketty, Remix, and the Most Important Student Blog Comment of the 21st Century | Hapgood – – (none) […]

  8. The discussion of being able to fork a mathematical model makes me think of the way you can share a graph on Desmos, which other people can then modify. For example, here’s a Desmos graph that’s embedded inside a choose-your-own-adventure game that’s part of my student’s homework:

    https://www.desmos.com/calculator/zemjv4pz4m

    When they find it, they can start messing with it, and in theory, they could even share any modifications they make on their blogs. It doesn’t have the immediacy of forking a graph though, which would really help in asynchronous discussions about mathematics. I’d love to be able to trace back through the fork tree.

    Perhaps, to avoid the limitations of standardization that francesbell is talking about, what we really need is an abstracted standard for forking things online, something like RSS that is platform- and format-agnostic, but that allows you to keep track of the various changes made to information over time. You could pull up the fork tree in FedWiki and it would look one way; you could pull it up in other software and it might look different, in the same way that RSS does not know anything about the end software that will read it — a sort of forking syndication system (FSS).

    1. Oh! thanks Forest – you are leading me to the perimeter of my understanding but I think you are suggesting that we agree on data exchange standards (but can encapsulate process details) without necessarily needing to inhabit a common infrastructure.
