SoundCloud and Connected Copies

SoundCloud, a music publishing site which holds millions of original works not held elsewhere (and over a hundred million works total), may be in trouble. And if it is in trouble, we’ll lose much of that music, forever.

The situation is, of course, ridiculous.

I know I sound like a broken, um, MP3 file on this, but there’s a simple solution to this problem: connected copies.

In such a scheme, I might share the file to SoundCloud (perhaps from my own server, perhaps not). As other people share it, copies are made to their servers. These copies are connected by an ID and a protocol that allows fail-over: if the copy cannot be found on the SoundCloud server, a client tracks it down at one of the other locations (the “connected” in “connected copies” means that each copy points to the existence of other known copies).
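
To make the fail-over part concrete, here is a rough sketch of how a client might resolve a connected copy. Everything in it — the record format, the field names, the ID scheme, the URLs — is invented for illustration; the point is only that the identifier travels with a list of known copies:

```typescript
// Hypothetical record: a stable ID for the work plus the locations of known copies.
interface CopyRecord {
  id: string;          // identifies the work itself, independent of where it lives
  locations: string[]; // URLs of known copies; each copy can list still more
}

// Try each known location in order and return the first copy that answers.
async function resolveCopy(record: CopyRecord): Promise<string> {
  for (const url of record.locations) {
    try {
      const res = await fetch(url);
      if (res.ok) return await res.text();
    } catch {
      // this copy is gone or unreachable; fall through to the next location
    }
  }
  throw new Error(`No reachable copy of ${record.id}`);
}

// If the SoundCloud copy disappears, the same ID still resolves through
// whichever peer copies remain (all of these URLs are made up).
const track: CopyRecord = {
  id: "work:9a2f",
  locations: [
    "https://soundcloud.example/works/9a2f",
    "https://my-server.example/works/9a2f",
    "https://a-friend.example/works/9a2f",
  ],
};
resolveCopy(track).then(console.log).catch(console.error);
```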

Again, this is the underlying principle of lots of cool things not yet on people’s radar, like the Interplanetary File System, Named Data Networking, federated wiki, and Amber. It’s the future of the web, the next major evolution of it.

It’s worth thinking about.

—-

P.S. While SoundCloud is still around, you should check out my awesome playlist of new darkwave and neo-psychedelia tunes from little-known artists.


An End to the Den Wars?

As you doubtless know by now, in 2010 I went and gave a plenary at UMW on the Liberal Arts in an era of Just-In-Time Learning, drank more than any human really should at the various parade of after-events, predicted the coming onslaught of xMOOCs in a drunken vision, and ended up going back to Jim Groom’s place at like 3 a.m. to drink some more.

It was at that point, when my liver was already in the process of packing its bags and moving to find a home in a saner individual, that Jim Groom asked me a fateful question.

“So,” he said, “What do you think of my den?”

I looked up and noticed we were sitting in a den.

“It’s nice,” I said.

I don’t remember much more about that night, but I did wake up back in the hotel, so that’s good.

It would be the last time I would drink to that level, because, no joke, I was hungover for three days. Welcome to age 40, time to stop acting like a teenager.

I forget the next time I saw Jim in person, but it was at one conference or another. Someone tried to introduce us, not knowing we go way back (2007?), and Jim said something to the effect of “Oh, I know Mike, we go way back. Last time he was over my house he insulted my den.”

I tried to piece together fragments of that night.

“I said it was nice,” I said.

“Yes,” he said, “Nice. You said it was ‘nice’. I worked hard on that den.”

This was the beginning of #denwars, which would come up anew at every conference.

But this week, heading to ELI, I realized there is another side to the story.

On the way back from that 2010 UMW event I took Amtrak, and I started writing an album called Double Phantasm. (The title is a pun involving John Lennon’s Double Fantasy album and the Derridean notion of hauntology. This is why I’m currently outselling Taylor Swift).

The first song on it, Queen of America, was loosely based on the events of that week. And until yesterday I thought that was the only song that bore any relation to that trip. The rest of the album is a science fiction concept album that details the dying romance of a man and a woman in a post-apocalyptic future. (Again, these hot song ideas are the secrets to my stunning success).

But giving it my first relisten in several years on the plane, I was suddenly struck by the last song, “Like a Great Big Meteor”. It’s the final scene of the four-song apocalyptic romance.

And there it was, in the setting of the final scene of that album. It was Jim’s den:

If all we have is now,
then what did we have then
as she sat across me nervously
in what used to be a den?

Here, in this verse, looking for a scene that would contrast former suburban opulence with post-apocalyptic decay, I could find no better setting than Jim’s den. The world is over in this song, but what we miss is the den.

The song is rough, as all my songs are (I spend hours writing and tinkering with synth settings and textures, and then basically record a demo-level version of the song in thirty minutes, because I like writing and synths but hate production and singing). Sometimes this style of production works, and sometimes it doesn’t. It sort of half-works here.

But if you listen to my rough vocal on that track you can hear the raw and powerful emotions that Jim’s den stirred in me.

So there you go, Jim. I did say your den was “nice”. But that was only because the depths of my feelings for your den could only be expressed through art. Perhaps we can now lay the den wars to rest.



Connected Copies

In case you don’t know, I believe the future of the web involves moving away from the idea of centralized, authoritative locations and into something I call “connected copies”.

The idea is that the current model of the web, which is based on the places where things live instead of the names of things, creates natural choke points and power inequities. Further, it undermines the true peer-to-peer potential of the web.

A newer model would look like email, torrenting, or git, where multiple copies of things are stored across the web but connected and authenticated by protocols, data models, or other conventions. Federated wiki is one way to do that. Named Data Networking is another. The Interplanetary File System is a third. This new project by the Berkman Center is a fourth.
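
One sketch of the “authenticated” half, in the spirit of git and IPFS rather than any particular spec: if the identifier includes a content hash, a copy fetched from any server can be checked against it, so it no longer matters which location served it. (The function names and the choice of SHA-256 below are just for illustration.)

```typescript
// Hash some text with SHA-256 using the standard Web Crypto API.
async function sha256Hex(text: string): Promise<string> {
  const bytes = new TextEncoder().encode(text);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Fetch a copy from any location and verify it really is the work the
// identifier names, rather than trusting whichever server happens to hold it.
async function fetchVerifiedCopy(url: string, expectedHash: string): Promise<string> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`No copy at ${url}`);
  const body = await res.text();
  if ((await sha256Hex(body)) !== expectedHash) {
    throw new Error(`Copy at ${url} does not match its identifier`);
  }
  return body;
}
```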

I’m posting this because I’ve been writing an explanatory post on Connected Copies for five months now (and I’ve been writing on the subject of copies as an approach to OER for over half a decade), and it’s clear it’s just a hard idea to explain to people who aren’t poised to get it. The draft of the explanatory post is now about 10,000 words, which is ridiculous.

But maybe the better way to get the idea is to just keep this term in your head — connected copies — and go about your daily life. My experience is that once you do that, you can’t unsee how many people are working in this space.


You Should Be Able to Browse the Web Through Your Own Website

Making a quicker pass at the reply to Dave Winer below, I want to call out one radical idea that people don’t get: You should be able to browse the web through your own website.

As an example of this, consider my Wikity interface when I’m logged in (if you’re not logged in, the interface will be missing the edit box):

[Screenshot: the Wikity interface when logged in, with the Markdown edit box at the top and card-style excerpts of posts below.]

I use Wikity as a combination social-bookmarking tool and wiki. And I’ve got my site set up in a way that’s efficient for me — I have a Markdown-based editor at the top, and then around it I have little Pinterest-like excerpts of my posts. When I want to write something new, or when I read something I want to summarize, I usually execute a search to remind me of what I’ve written on it before and then plug stuff into the Markdown box. I scan over these search results and link to them or quote from them as I write.

If I want to alter older posts to link to this, I can quick-edit them on the spot to cross-link my new stuff.

I haven’t quite got this part working yet, but the idea is a multi-document editing environment that mimics some of the affordances of federated wiki. Here’s a screenshot of writing an article while updating two other articles to link to the new information (note the scroll bars of the pages where editing is going on).

[Screenshot: multi-document editing, with a new article being written while two other articles are open for editing.]

But the thing is it’s really lonely in here — the only things I’m working with are the ones I’ve created.

And what I learned from federated wiki is it doesn’t have to be like that. If I had a common data format and a set of protocols, I could pull all the articles from my friends into this space, and I could fork them in and work on them, link to them, etc.

In the web as it is, we move, and the data stays put. In a federated web, the data moves and we stay put. Does that make sense?

To me at least, that’s the core dream of federated wiki. But what’s interesting is it’s also the dream of Dave Winer to reboot the blogosphere.

You can do some of the above with feeds, of course. But for something like the search-my-network-and-write habits I’ve developed, you really need API calls, and if you are going to port things like categories and data and media that are going to be processed by the UI, you might as well put it in JSON.
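
For what it’s worth, the search half of that habit already maps pretty cleanly onto the WordPress REST API: posts come back as JSON with titles, excerpts, links, and category IDs attached. A minimal sketch (the site URL is made up; the endpoint and fields are the standard ones, assuming the site has the API enabled):

```typescript
// Shape of the fields we care about in a standard WP REST API post object.
interface WpPost {
  id: number;
  link: string;
  title: { rendered: string };
  excerpt: { rendered: string };
  categories: number[];
}

// Search a site's posts and get them back as structured data, not a page.
async function searchSite(site: string, term: string): Promise<WpPost[]> {
  const url = `${site}/wp-json/wp/v2/posts?search=${encodeURIComponent(term)}&per_page=5`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  return (await res.json()) as WpPost[];
}

// The same call works against my site or a friend's site, which is what
// would make the editing environment less lonely.
searchSite("https://a-friend.example", "connected copies")
  .then((posts) => posts.forEach((p) => console.log(p.title.rendered, p.link)))
  .catch(console.error);
```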



JSON-Based Transclusion, and WordPress as the Universal Reader

Dave Winer wrote a recent post on, roughly, how to reboot the Blogosphere with JSON. I read it last night and thought I understood it, then read it again this morning and realized I’d missed the core idea of what he was saying. Here’s the relevant graf:

But there is another approach, to have WordPress accept as input, the URL of a JSON file containing the source code for a post, and then do its rendering exactly as if that post had been written using WordPress’s editor. That would give us exactly what we need to have the best of all worlds. Widespread syndication and control over our own writing and archive. All at a very low cost in terms of storage and CPU.

Maybe I was just a bit tired last night, but it’s worth pausing on how this is different from other models. The idea here is that your data doesn’t have to be stored in WordPress at all. Dave Winer can make his editor, Ward Cunningham can make federated wiki — but should they want to publish to a WordPress site —

See, that’s not quite it either. Because the idea here as I read it is pull, not push. (Dave, correct me if I misunderstand here). The idea is, given certain permissions and compliance with a WordPress API, a WordPress site I have can go out and fetch Dave or Ward’s content and render it up for me in my own blog dynamically.

I’m not sure Dave is going this far, but imagine this as a scenario — I link to Dave on my WordPress blog, but the link makes a double pass.

First, it sees if it can just grab the JSON and render Dave’s post in my WordPress blog. If it can, great. It renders Dave’s page with the links. Links I click on there also attempt to render in my default WordPress environment.

Sometimes a link won’t return anything nice in response to a request for JSON. Those links ask me if I want to go outside my reading environment to someone else’s site. If history is any guide, these sites don’t get much traffic, because the answer to that question is often no.
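
A rough sketch of that double pass, with everything about the JSON post format invented for illustration; the only real point is “try to render it in place, and only then ask me if I want to leave”:

```typescript
// Hypothetical shape of a post's JSON source; the real format would be
// whatever convention authors and readers agree on.
interface JsonPost {
  title: string;
  source: string; // Markdown, HTML, whatever the author actually writes in
}

// Placeholder for whatever the local theme does with a post it can render.
function renderInMyEnvironment(post: JsonPost): void {
  console.log(`Rendering "${post.title}" inside my own reading environment`);
}

async function followLink(href: string): Promise<void> {
  // First pass: ask the link for a JSON representation of the post.
  try {
    const res = await fetch(href, { headers: { Accept: "application/json" } });
    if (res.ok && (res.headers.get("content-type") ?? "").includes("json")) {
      renderInMyEnvironment((await res.json()) as JsonPost);
      return;
    }
  } catch {
    // fall through to the second pass
  }
  // Second pass: no JSON to be had, so ask before leaving my environment.
  if (confirm(`Leave this site to read ${href}?`)) {
    window.location.href = href;
  }
}
```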

Links that render into your environment could be acted on by your theme functions. Maybe you take a snapshot of something, repost it, index it, annotate it. Over time, there is a market for themes that play nice with other people’s content, or allow people to make the best sense of it.

And of course if you add in feeds….

What this does is move from a situation where we have a couple of online RSS readers to a world where every WordPress theme (and there are tens of thousands of WordPress themes) is potentially an RSS reader of sorts. It moves to a world where every theme is potentially a new Facebook or Twitter as well.

It does this because it solves part of the problem Facebook solved for people — it lets us read other people’s stuff in a clean, stable environment that we control. (There are other things as well, but you have to start somewhere).

So why not try this? Turn themes — the killer feature of WordPress — into a way to read other people’s content, and see what happens. WordPress has already made a stab at being the universal publisher, but it could be the Universal Reader as well, not through providing a single UI, but by supplying an endlessly customizable one.



Capable Clients and the WordPress API

Update: If you read the comments below, you’ll see one of the API developers has responded; there are some issues with private information in shortcodes being exposed.

I still wish all the smart-quotification, m-dashing, and paragraphing could be more easily disabled, but I’m very grateful for the quick and thoughtful response.

——-

I wasted another afternoon looking for a way to avoid writing my own custom WordPress JSON API. After all, the WordPress REST API folks have spent many, many hours producing a core API, and writing my own just seems silly.

But it looks like I will have to write my own, and I thought I’d explain why. Maybe the WordPress API folks will read this and understand a whole set of use cases they have missed.

Here’s the deal. I store Markdown in my WordPress database. Not the processed stuff. Not Markdown that turns to HTML on save. Markdown.

I do this because I believe in what I call “capable clients”. I want anyone else to be able to make their own client and represent my data in the best way they see fit.  I want people to be able to fork my source text. I want people to be able to remix my stuff with stuff from other servers.

The same is true about the shortcodes that go in for things like images. I want my reader’s client to get the source and render it, or move it to their server to render.
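
To make “capable clients” concrete, here is a sketch of the kind of client call I have in mind. The /raw endpoint is hypothetical; it is the thing this post is asking for, not something the core API currently offers to ordinary readers:

```typescript
// Hypothetical response from a "give me exactly what the author typed" endpoint.
interface RawPost {
  id: number;
  format: "markdown" | "html" | "latex"; // the client finds out what it is dealing with
  source: string;                        // unprocessed source, shortcodes and all
}

// Fetch the raw source of a post from a (hypothetical) read-only endpoint.
async function fetchRawPost(site: string, id: number): Promise<RawPost> {
  const res = await fetch(`${site}/wp-json/example/v1/raw/${id}`);
  if (!res.ok) throw new Error(`No raw source available for post ${id}`);
  return (await res.json()) as RawPost;
}

// The client, not the server, decides what happens next: render the Markdown
// its own way, fork the source to another server, diff it, remix it.
async function forkToMyServer(site: string, id: number): Promise<void> {
  const post = await fetchRawPost(site, id);
  console.log(`Forked post ${id}: ${post.source.length} characters of ${post.format}`);
  // ...here my own site would store post.source, unmodified, as a new draft
}
```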

I don’t think this is such a weird thing. This is, after all, how the web worked in 1991, before graphic designers and relational database designers mucked the whole thing up with their magazine designs and fifth normal forms.  Back in 1991, if you saved a file from Tim Berners-Lee’s NeXT server to your laptop, you had an exact copy of what Tim’s server had. You had the source. It was a level playing field.

APIs + JSON gives us a chance to return to that world, a world of capable clients. A world where clients of my platform can potentially build a better view of my data than I had imagined.  A world where I let you fork raw data from my server and reuse it on yours, git-style. A world where people can remix data from multiple servers in new and illuminating ways, the way they did in the early days of RSS.

That’s the real promise of JSON development — the permissionless innovation that characterized the early web, but brought up to date.

So why would you only allow raw content — the unprocessed HTML or Markdown source — to be accessed by people who are logged in as editors?

Is what I typed originally into the text field of the WordPress editor such a secret? I suppose it could be, but why not give people the option to see exactly what I entered into the editor? Why fix it with paragraph tags and smart quotes, when you don’t even know if it’s HTML, or Markdown, or LaTeX stored there? Why always run the shortcode filters, even when the client wants the data — not the useless processed output?

There’s a huge opportunity here to unleash a level of innovation we haven’t seen for years. But it starts with allowing capable clients to consume clean data, even if they only have read permissions.

In a system that truly values client-side development, clean data is a right, not a privilege. Why not give us that right?



The Future of Empowerment Is On the Client

I’m excited about Brave, the new browser coming out with privacy and content payment features built into the core browser code. I won’t detail it here, but you need to check it out.

The piece that people miss about all these debates about Facebook-ization and evil tracking and big data is that the Web we got is largely a function of the structure of browsers and protocols we inherited. As a simple example, a browser has an idea of what your IP is, but no concept of you as a user, which means you need to rely on big central servers like Facebook to supply an identity to you. (Compare email, where identity is federated, and central to the system protocols).

As another example, more pertinent to Brave, the third-party cookie hack available in browsers sprouted a culture of Surveillance as a Business Model.

I think two approaches to this mess have emerged. The first is that, since browsers cede all your power to servers (at least when you want to do something interesting), you should own a server, because that’s where the power is.

I think the less publicized idea is to move more power back to the client. Let the browser (or the JavaScript running in the browser) make more choices about how to interact with the web, supplying the layers that never got built: the identity, commerce, and syndication pieces that companies like Google and Facebook have made a fortune filling in.

The “Own your own domain” approach and the “Power to the client” approach to a better web are complementary, but I actually believe that it is this second path — exemplified by projects such as Brave and Calypso — that has the best chance of broad adoption.

See also: Calypso is the Future of Personal Cyberinfrastructure.


