A Pedagogy of the Edges (or, the Wrong Robots)

The theme for #FutureEd this week was expressed in a Toffler quote (which turns out to not quite be a Toffler quote):

The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.

I find this quote a bit frustrating. For one, I agree with Tom that “unlearning” is just learning. For another, I’m troubled by the notion that previous generations didn’t engage in lifelong learning. Surely a cobbler, physicist, or reporter continued to learn throughout their career? I prefer Harold Jarche’s formulation of this phenomenon:

In the near future, the edges will be where almost all high-value work will be done in organizations. Change and complexity will be the norm in this work. Most people will work the edges, or not at all. Core activities will be increasingly automated or outsourced. This core will be managed by very few internal staff.

This is a sea change in organizational design. Some companies are already playing with new designs, tweaking their existing models. A few, mostly start-ups, are trying completely new models. Any work where complexity is not the norm will be of diminishing value. Freelancers and contractors, already increasing in number, will be needed to address continuously evolving markets. The future of work will be in understanding complexity and dealing with chaos.

It’s not just that we continue learning, it’s that automation of known processes and replication of digital content keeps pushing us to the edges. A decent newspaper reporter of ages past continued to learn her core craft throughout her career. But there’s something decidedly different about a reporter who moves from reporting to blogging to blogging plus photography and basic number crunching along with online research and analysis. As history proceeds, it’s the knowledge and process integrator that is becoming more valued, and the nature of what needs to be integrated is a moving target. We can certainly teach our students that intersection of geology, engineering, and statistics that is petroleum engineering today (a great example of demand for integrated skills) — but there is simply no way that much of what we teach today won’t be automated in five to ten years. And what then? The predictable does not have much of a future.

A lot of the educational debate breaks down into a fight between the robo-believers and the robo-doubters. The robo-believers believe the emergence of technological solutions, AI, and networked knowledge can obviate the need for traditional teacher-powered education. The robo-doubters believe no app will ever capture the feel of an afternoon on the quad reading Keats with your class. Robo-believers accuse robo-doubters of living in Brideshead Revisited. Robo-doubters accuse the robo-believers of constructing Minority-Report-As-a-Service.

Left out of the public conversation quite often is a third point of view — that the robo-believers are right, but they are focusing on the wrong robots.

Because, if you believe, like I do, that technology will be doing amazing things in fifteen years, that AI *will* finally start living up to its promise, that vast advances in the machine processing of distributed knowledge will radically change what is possible, then the education “cost crisis” will solve itself. (And if you buy into Moore’s Law, the fact that that educational revolution is 15 years away predicts that our educational solutions tapping into these AI technologies will suck for the next 10 years, and then suddenly become viable. Getting a head start here isn’t going to help much.)

But what about the next fifteen years?  The automation that is here now, today, the nature of distributed knowledge, here, today, makes our pedagogy of the center increasingly obsolete. The nature of the robo-future (and my partner Nicole tells me to stop using robot when I mean automation because I sound insane, but there you go) — the nature of the robo-future is that the predictable, the central, the places that exhibit best practice and utilize core domain knowledge will be hollowed out. *Are* being hollowed out.

Calls for efficiency in education are fine, and talk about affordability and social justice is critical. But by the time that teaching — one of the hardest jobs to automate — is significantly automated, we will be at the end of the robo-revolution, not the beginning. And if we really think that’s the case, what we’re teaching students now seems a much more pressing issue than how we’ll teach them years from now. What we need above all else is not an education that is powered by automation, but an education that is a response to it.

A Better Way to Build an EdTech Support Wiki (or, Doctor, Heal Thyself)

This post is going to be a bit geeky, and a tad technical. So there’s your warning.

I’ve been thinking a lot about the forkable Domain of One’s Own wiki, with its GitHub underpinning. And I’ve been watching a lot of Ward Cunningham’s videos on the concept of federated wikis.

And the thing I’ve come to realize is that the institutional educational technology unit’s support site is a perfect microcosm of our more general OER challenges. What do I mean? Here’s my support wiki:

[screenshot: my support wiki]

And here’s Keene State’s site:

[screenshot: Keene State’s support site]

And here’s Thompson Rivers:

[screenshot: Thompson Rivers’ support site]

Here’s some UMW documentation on using digital audio:

[screenshot: UMW documentation on digital audio players]

Just as with class materials, the reusability paradox rears its ugly head. There are bits mixed into these sites that are very institution specific. The players site above links to some UMW-specific stuff. The Keene site mixes in announcements about local presentations with more generic pieces on using screencasting as a tool. These are small things, but the feeling of local integration is important to the faculty hitting these resources. You really do want the faculty reading the material on screencasting to finish the article with a link to someone on campus they can contact, and as silly as it is, you’d like the faculty to feel locally supported, so the prospect of everything being a link to other universities’ websites is unappealing.

But the way we accomplish this is a bit ridiculous, really. We reinvent the wheel a dozen times a month so that we can get that 10% difference that preserves the impact we want.

Is there another way to get that balance? It’s possible that a federated approach could provide the impact of local integration with the efficiency (and culture) of syndication. Playing around with Cunningham’s radical version of federation has convinced me it’s a bit much for people to swallow at this point. But the basic principle — that instead of every page on a site having an edit button, it has a “clone” or “fork” button — is compelling. In such a world, if I saw a useful article on screencasting from Judy Brophy at Keene State, I’d hit the “Clone This” button on it (or perhaps hit a scriptlet button in my browser), and pull it over to my own support site. There I’d edit it to make it fit my faculty needs better.

Here’s the important thing — cloned articles would retain their history and relation. So the pieces of this that were Judy’s would still be attributed on the back end to Judy automagically, and the edits that were mine would be attributed to me. Better yet, since my page is seen by the version control repository as a fork, when Judy updates her article to cover new versions of Jing (complete with updated screenshots) those edits are pushed to me to accept or reject. And when I link to things on my site from the cloned page, Judy can check if she would like to clone some of those pages into her own site.
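I don’t know what the eventual plumbing looks like, but here’s a minimal sketch in Python of the kind of record-keeping the back end would need. Every name here (WikiPage, clone, upstream_changes) is my invention for illustration, not any existing wiki’s API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    author: str    # e.g. "judy@keene" — hypothetical identifiers
    summary: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class WikiPage:
    title: str
    body: str
    # Full history travels with the page, so Judy's edits stay
    # attributed to Judy and mine stay attributed to me.
    history: list[Revision] = field(default_factory=list)
    forked_from: "WikiPage | None" = None
    fork_point: int = 0  # upstream revision count at clone time

    def clone(self, new_owner: str) -> "WikiPage":
        """What the 'Clone This' button would do: copy the page,
        carry the history along, and record the fork relation."""
        copy = WikiPage(self.title, self.body, list(self.history),
                        forked_from=self, fork_point=len(self.history))
        copy.history.append(Revision(new_owner, f"cloned {self.title}"))
        return copy

    def upstream_changes(self) -> list[Revision]:
        """Upstream revisions made since the fork — the ones the
        downstream owner would be offered to accept or reject."""
        if self.forked_from is None:
            return []
        return self.forked_from.history[self.fork_point:]

# Judy writes a page; I clone it; she later updates it.
judy = WikiPage("screencasting", "Use Jing to record…",
                [Revision("judy@keene", "created")])
mine = judy.clone("mike@mysite")
judy.history.append(Revision("judy@keene", "updated for new Jing"))
print([r.summary for r in mine.upstream_changes()])
# -> ['updated for new Jing']
```

None of that is hard; the hard part is the user experience around reviewing and accepting those upstream changes.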

I don’t quite know how to build this, but it seems to me that the work that UMW is doing with Dokuwiki could be a step towards this sort of world. And it could form the basis of a solution to some persistent problems in ed-tech around balancing distributed production and local autonomy. Best of all, unlike the classroom, how we build our support wikis is completely within our control.

Does anyone want to look at this idea with me, head down the path with it a quarter-mile or so, and see if it’s worthwhile or completely cracked? In short, what we’re looking at is collaboration in a world where open-licensing and version-control software make forks much, much less painful to manage…

(Note: In addition to Ryan Brazzel and Tim Owens and Jim Groom, I am indebted to both Alan Levine and Brian Lamb, who have talked before about embedded content, which is the *syndicated* way of doing this; this is really just taking that a step further towards *federation*.)

Forking an Academic Wiki Should Be a Basic Student Right. Discuss.

UPDATE: Cathy Davidson replies in the comments. It looks like moving the timeline out of the Coursera platform has been planned but not yet implemented; they will be taking the same Rap Genius-style approach they are taking with the Constitution. Thanks for the reply, Cathy!

I still encourage you to read the post, because the issue is much larger than the FutureEd example. We now have the technical possibility of making student wikis forkable, but haven’t yet dealt with the ramifications of that. 

FutureEd, in case you don’t know, is a course being offered by HASTAC and others through the Coursera platform. It’s offered by Cathy Davidson, a person who is well-versed in the issue of student rights to their intellectual property (and, I think, genuinely concerned about this issue). Having become recently (re-)interested in wikis in education, I decided to look at how she was using the wiki. There are a couple of neat activities there already, like the education timeline:

[screenshot: the crowdsourced education timeline]

So far so good — this is a good example of why wikis can matter in education. Certainly you could have the students do this on Wikipedia instead of a Coursera wiki, but here the local needs of the class are somewhat orthogonal to the culture of Wikipedia, and this sort of crowdsourced timeline is a great use of student time, having the potential to foster some really interesting discussions while engaging students in meaningful work.

Unfortunately, as I read it, students don’t own any of that work — it all becomes the property of Coursera. It seems like you need to be logged in to even read it. And even if you did have the right to take your stuff out of it, it’s hard to see how you would. Your work is going to be contextualized and made understandable by the things to which it links, and Coursera controls the accessibility of those links.

The only way students can truly have the right to their intellectual property is through that most basic of Open Culture rights: the right to fork. The right to fork is one of the most important rights true communities have to protect themselves from co-option and malevolent dictatorship:

The right to fork guards the project against single points of failure. For example, the right to fork is a powerful check upon the influence of the benevolent dictator on the project’s work, and through the project’s work, on the community itself. The presence of this right provides strong assurance for any participant in the community to contribute his/her efforts to the community, and lack of it calls into question the open nature of a BenevolentDictator‘s leadership of that community.

That’s why the post you should read this week is Tim Owens’s post on making the Reclaim Wiki forkable:

In addition to making this documentation available, we’re also syncing the documentation to a GitHub repository. We use Dokuwiki for our documentation and one of the biggest benefits is that it uses static text files for the various pages, which made syncing all documentation an easy process and also makes it easy for another institution to grab all of our documentation as a starting point for their own.
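Because Dokuwiki keeps pages as plain text files, the sync Tim describes can be (I’m guessing at the actual mechanics) as simple as a cron job that copies the page directory into a git working tree, commits, and pushes. A rough sketch, with hypothetical paths — I have no idea how Reclaim actually wires theirs up:

```python
import subprocess
from pathlib import Path

# Hypothetical locations — adjust for your own install.
PAGES = Path("/var/www/dokuwiki/data/pages")  # Dokuwiki's plain-text pages
REPO = Path("/srv/wiki-mirror")               # a clone of your GitHub repo

def sync():
    """Copy wiki pages into the git working tree, commit whatever
    changed, and push. Meant to run periodically from cron."""
    for src in PAGES.rglob("*.txt"):
        dest = REPO / src.relative_to(PAGES)
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_bytes(src.read_bytes())
    subprocess.run(["git", "-C", str(REPO), "add", "-A"], check=True)
    # `git diff --cached --quiet` exits 1 when there are staged changes.
    staged = subprocess.run(
        ["git", "-C", str(REPO), "diff", "--cached", "--quiet"])
    if staged.returncode != 0:
        subprocess.run(["git", "-C", str(REPO), "commit", "-m",
                        "sync wiki pages"], check=True)
        subprocess.run(["git", "-C", str(REPO), "push"], check=True)

if __name__ == "__main__":
    sync()
```

The payoff of the static-file design is exactly this: no database dump, no export API, just files that any version-control tool already understands.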

There are still a lot of questions about how to do this elegantly, preserving attribution in a readable way across wikis with different user bases. But we can do the basic stuff now:

  • Remove over-reaching corporate TOS’s from sign-ups.
  • Make it clear all material is contributed on a CC-BY basis so cloning is legally possible.
  • Within reason, provide a simple way for people to export material from the wiki (obviously if bandwidth is an issue, you may need to add protections against abuse). At the very least, provide a page that explains how material can be extracted, and reaffirms the rights of users to do that. (A minimal sketch of such an export follows this list.)
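That export could be as simple as bundling the page files with a manifest asserting the license and the source, so a cloner gets the content and the attribution information in one artifact. A sketch, with made-up paths and URL:

```python
import json
import zipfile
from pathlib import Path

PAGES = Path("/var/www/dokuwiki/data/pages")  # hypothetical page store

def export_wiki(out: str = "wiki-export.zip") -> None:
    """Zip every page together with a manifest stating the CC-BY
    terms and where the material came from."""
    manifest = {
        "license": "CC-BY 4.0",
        "source": "https://support.example.edu",  # made-up source URL
        "pages": [],
    }
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for page in sorted(PAGES.rglob("*.txt")):
            name = str(page.relative_to(PAGES))
            zf.write(page, arcname=f"pages/{name}")
            manifest["pages"].append(name)
        zf.writestr("MANIFEST.json", json.dumps(manifest, indent=2))

if __name__ == "__main__":
    export_wiki()
```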

The federated, forkable wiki may or may not be the wiki of the future, but it seems to me to be the wiki of the future we actually want. Let’s work towards that. If the course-runners are interested in a lean, mean alternative to Coursera’s wiki platform, I hear Reclaim Hosting might be able to help…or maybe interested students could fork the wiki work to a more equitable location?

It is our right, after all.

A Federated Approach Could Make OER More Numerous, Findable, and Attributable

For as long as I have been involved in the Open Educational Resources community there’s always been that moment in a conversation where someone comes up with the “brilliant” idea of building a central OER repository to solve the OER “findability problem”. I usually bite my tongue until it bleeds at that point and do math problems in my head to avoid saying something rude.

The repository approach to OER gets findability exactly backwards. Imagine you wanted to make a certain genre of novels more findable; say, science fiction novels dealing with interstellar trade. You have a meeting with fellow authors, who tell you that they always bump into potential readers who say they would love to read more sci-fi novels about interstellar trade, but they can never find any good ones at the online bookstores they frequent. There’s one author available at this store, another at this one. None of the 28 stores seems to have everything. You have at this point two strategies available to you:

  • “We should create one *master* store which will have all Sci-Fi on Interstellar Trade in it (called SFIT!). It will have both the physical books and an ordering mechanism to order books from any of the other stores that have SFIT titles. When authors publish, they’ll know to contact us and have us update the index, letting us know all the places where their books are being sold, and maybe giving us a physical book for our own store! Problem solved!”, or
  • “We should try to get our novels in more stores.”

There might be some reason that the first approach makes sense, though frankly, I’ve never seen it argued for persuasively. In general, the simpler way to go is the second way — if you want to make something more findable, put it all over the damn place.

When I’ve brought this up before, the reasonable objections to this have generally been:

  • Fork avoidance: Copies in separate spaces develop separate lives, with editors feeding back into two separate instances, and editing that could be used to make one awesome copy makes two mediocre ones.
  • Attribution: So I copy something to my archive, host it in a new space, then change it a bit. Then someone copies it from me, changes it a bit and hosts their copy. Attribution starts to get complicated.
  • Psychological Issues: Isn’t copying to my site stealing? There’s a way in which we still think of the “payoff” for resources as some sort of site traffic.

And these are real issues. But they are exactly the sort of issues that plenty of people in the past few years have been tackling. GitHub makes forking less painful, and deals elegantly with attribution. Recent editing tools such as Draftin turn the community revisions process on its head. Here’s the author of that tool talking about how forking is good for the soul:

When I share a Google Doc, collaborators overwrite my master copy. It’s insanely difficult to accept individual changes they’ve made. However, when you share your document using Draft, any changes your collaborator makes are on their own copy of the document, and you get to accept or ignore each individual change they make.

This seems to me an evolution of how we are solving knowledge management problems. I’ve heard this shift described before as a shift from collaboration to federation — there’s a governing body at work in both the GitHub and Draftin examples that allows changes to propagate through the federated instances, but at any given time individual users are in complete control of their copies (in the language of federation, “self-governing”). You can compare this to your normal experience with MS Word where there is one copy of a document, and revisions must be resolved, or Wikipedia where there is one definitive article on which people must reach consensus. There are certain places where that approach makes sense. There’s something to be said for the convention that Wikipedia articles can’t be forked — people have to hash it out and come to agreement with people with whom they disagree. But for an awful lot of scenarios, the consensus piece is a drawback.

This is why, in my quest to build a cross-institutional wiki, I’ve found Ward Cunningham’s proposal for a federated wiki so interesting. Cunningham, the inventor of the first wiki as well as of the methodology called “extreme programming”, is, at the age of 64, still one of the most insightful people around when it comes to issues of what hinders individual and group productivity. And looking at his mid-90s creation — the wiki — he’s become convinced the problem is too much emphasis on consensus — the preference that there be a single copy of the wiki. In his federated wiki, to make a change is *always* to fork a page. Your change happens in your own space.  If people find your change useful, they pull it back into their copy. It’s a GitHub approach, it’s a Draftin approach. It’s not collaborative; it’s cooperative, it’s federated.

What the first wiki did was replace the 404 page with a “create this page” interface, and replace the “Email the Webmaster” link with an edit option.

What the federated wiki does is replace the edit button with a “clone” button, trusting that the version-tracking software on the back end can easily resolve the forks that people care to resolve. You come to a page, click edit, and now that page fork lives on your site. Maybe that gets merged back to the other site. Maybe it stays your own edit. There’s still plenty of reasons why writers would want to resolve versions and propagate changes. But there’s no imperative to do that.
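To make that concrete, here’s a toy sketch. Cunningham’s federated wiki represents a page as JSON — roughly, a story (the content items) and a journal (the actions that produced it). I’m only loosely modeling that format here, and the function names are mine:

```python
import copy
from datetime import datetime, timezone

def fork_page(page: dict, from_site: str) -> dict:
    """Fork-on-edit: before any change, copy the remote page into my
    own site, appending a fork action so provenance survives."""
    mine = copy.deepcopy(page)
    mine["journal"].append({
        "type": "fork",
        "site": from_site,  # where this copy came from
        "date": datetime.now(timezone.utc).isoformat(),
    })
    return mine

def edit_item(page: dict, item_id: str, new_text: str) -> None:
    """Edits touch only my copy; the upstream page is never written."""
    for item in page["story"]:
        if item["id"] == item_id:
            item["text"] = new_text
    page["journal"].append({"type": "edit", "id": item_id})

# A page on someone else's site...
theirs = {"title": "Federated OER",
          "story": [{"id": "a1", "text": "Put it everywhere."}],
          "journal": [{"type": "create"}]}
# ...becomes mine the moment I hit edit.
mine = fork_page(theirs, "ward.example.org")
edit_item(mine, "a1", "Put it all over the damn place.")
```

The journal is what keeps attribution and merging tractable: every copy carries the record of where it came from and what has happened to it since.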

That’s a huge shift in how we think about digital artifacts and networks, and it’s got me thinking about OER and copies again. I’m early in my thinking about it, but I would really encourage you to watch the video below and think about how such a model could inform our edtech efforts — whether they be efforts around OER or cross-institutional collaboration. Is this a productive direction? Is it a possible one?

Ward’s not the world’s best presenter, and you can see him struggling to explain the concept below. But it’s worth the effort to try to understand this, I think. Plus, it’s Ward Freaking Cunningham, and this video only has 2,000 views whereas Sugata Mitra’s has tens of millions of views for reinventing kiosk computing. Watch it based on principle alone! Ideas matter!


Open Collaborative Software is a Lot Cheaper at Scale. Why Don’t We Harness That More?

So I’m sure you know this — supporting 50 individual WordPress blog sites is draining and expensive. But if you have that sort of scale, you can get WordPress Multiuser instead, and put ’em all on that, and it’s rather cheap to maintain. Likewise, as efforts around a technology expand, support per person goes down because the community can start to help with support, and, more importantly, people start to learn by looking at other people’s models.

The same is true of wikis. Fifty wikis will kill you stone cold dead. But one big wiki where different classes can edit their corner of it — that’s pretty cheap, comparatively at least.

Niche stuff like the Assignment Bank ds106 built and Alan is refactoring is really expensive too — if you do it one class at a time. But once that configuration and integration cost is spread over multiple classes it won’t cost you much per class at all.

The term for this sort of thing is scalability, and the key idea is that as scale increases, the marginal cost of production and support approaches zero. Your 500th customer costs you a fraction of your first.
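The arithmetic is simple and brutal. With made-up numbers — say a mostly fixed cost to run one shared install, plus a small variable cost per class:

```python
# Illustrative only: every figure here is invented.
FIXED = 20_000   # yearly cost to run and maintain one shared install
PER_CLASS = 50   # small marginal cost for each additional class

for n in (1, 10, 100, 500):
    print(f"{n:>4} classes: ${FIXED / n + PER_CLASS:,.0f} per class")

#    1 classes: $20,050 per class
#   10 classes: $2,050 per class
#  100 classes: $250 per class
#  500 classes: $90 per class
```

The fixed cost dominates at small scale, which is exactly why fifty one-class wikis will kill you and one shared wiki won’t.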

We heard a lot about scalability in the MOOC craze — scalability of classes, primarily. And thinking about that is a worthwhile task I think, even if some of those experiments haven’t exactly panned out.

But for people looking for an *easy* win, it’s staring us right in the face. Because the easiest way to really unleash the potential of these technologies is to kill the unprofitable bits of fragmentation and build at scale. What if a *state* or *province* just said — “You know what — we’re going to support one wiki installation across all our two-year and four-year institutions for student use in the classroom. We’ll support it centrally, and let you all work out the details of how to share it.”

Not one wiki contract, mind you. One wiki.

What would that do?

What if a state said, hey, instead of running UMWBlogs and UVaBlogs and So and So Community College Blogs we want to hire the UMW team and we want to support a set of multi-site installs available to all schools in the state. Or if an organization such as AAC&U or EDUCAUSE said look, as part of your benefits, all your students get to run blogs on this thing we’re going to build. Not special pricing on a contract, but something we actually own, maintain, and develop based on your feedback.

What would that do?

Suddenly your wiki isn’t a ghost-town — you’ve got the scale to have a real community around it. Suddenly your WP install is getting upgraded without anyone at your institution having to touch it. Alan could build out an install just for various assignment banks, with specialized themes. That models problem goes away, because you’re not relying on ten people to jumpstart this at your institution, but hundreds of people across many institutions.

But we don’t do this. We buy institutional repository software that no one ever uses. We support our own WordPress install and have the painful year or two where five people are on it, and we have to decide whether we push through, or ditch it. We build a new wiki for every class.

This is really a simple calculus. This isn’t scaling as a metaphor. This is straightforward scaling. Yet we’ve rushed right past it. We buy state contracts with vendors, or support open services locally. We don’t do open services at the state or consortium level.

Why?

The OS-based Lifestream Will Kill the Web-based Mega-Service, Part the Third

New data this week about Facebook being abandoned by the younger set:

[chart: survey data on declining Facebook use among teens]

Now, there could be an error with the way this was computed — I’m fighting a number of edtech fires right now and don’t have time to dig into the methodology. But it matches the anecdotal evidence we’re seeing.

One interpretation of this is now that parents are all on Facebook, it’s uncool. You can’t really say the stuff you need as a teen in front of parents.

And I think that’s true. But what’s more interesting to me is not the motive, but the opportunity. In other words, how has it become so easy for teens to move from the “platform” that is Facebook to the world of a multitude of single-purpose apps? The answer: what’s enabling that is the notifications panel of the smartphone. And what’s happening is the OS’s are the only entities around with enough clout to insist on app integration, so web-based harnesses are becoming also-rans.

In other words, if I build the world’s newest videochat service or best net-enabled slow-cooker, maybe I build a Facebook app, maybe I don’t. Maybe I integrate with Google+, maybe I don’t. Maybe I open up my API to IFTTT, maybe I don’t. But what’s crucial to my survival is I integrate with the major app-based OS’s providing the sort of sharing and notifications hooks that promote use. And this ends up having a reinforcing effect — because the only place I can check ALL my stuff is my phone, that’s where I’m going to check it.  The fact that Microsoft has also gone to app-based OS’s on the desktop and Xbox just seals the deal.

Call me crazy, but I think that has implications beyond the 13-24 year-old demographic. Facebook has always been a relatively decent photo and link sharing site, but its attractiveness as a *platform* was based on the idea that it would become the “lifestream” that lent coherence to all your other interactions.

Your OS does that now, so Facebook is just another service whose individual components can be replaced as necessary. Teens have realized that, I think — who’s next?

(Incidentally, there are both upsides and downsides to this. But I think ultimately it’s an unstoppable shift.)

EdXx

Short thought I had last night. TED, as we know, is an elitist event with a problematic epistemology. I think this take from Education Rethink captures some of the larger problems with the format and culture:

TED Talks are the megaphones in the midst of a conversation… When I tweet about vulnerability, someone will be quick to send a link to a TED Talk. If I question whether students can truly be entirely self-directed (especially in the realm of reading), someone tweets me Mitra’s TED Talk on minimally invasive learning. When I question the nature of creativity and the role of limitations in fostering it, the first response is nearly always Sir Ken Robinson’s famous TED Talk.

Put another way, TED elevates a chosen elite, but at the cost of shutting down local conversation and culture.

If this sounds all-too-familiar to the MOOC set, it’s probably not a coincidence. After all, Sebastian Thrun didn’t decide to found Udacity after talking to a professor at San Jose State, or touring Pakistan. He took steps towards creating Udacity after watching a TED talk by Salman Khan. The conversation began with TED, and continues to be mediated through TED. The DNA of TED and xMOOCs is so intertwined as to be indistinguishable at times.

But it’s precisely this similarity that suggests a way forward for organizations like edX. Because there’s another aspect to TED that builds local community rather than erodes it: TEDx. TEDx, for those who don’t know, works like TED, but is run by local communities:

Created in the spirit of TED’s mission, “ideas worth spreading,” the TEDx program is designed to give communities, organizations and individuals the opportunity to stimulate dialogue through TED-like experiences at the local level. TEDx events are fully planned and coordinated independently, on a community-by-community basis.

I haven’t done a scientific survey, but what I’ve heard back from people participating in these smaller events is that they’ve been transformative — not because of any wisdom raining down on people in 18-minute segments, but because they’ve brought together local communities, fostered new connections, and jump-started important local conversations.

It occurs to me that edX has a similar issue to TED. For the core of what they do to remain prestigious, it must remain elite. And you see that in the schools they’ve recruited.

That’s not a bad thing. Prestige opens doors and gets attention in a culture where attention is the scarcest of resources. I don’t know that change is possible in higher education without leveraging some sort of prestige. Prestige greases the wheels of higher education.

But if we want to start conversations, build connections, and strengthen higher education, an additional approach is required. Why not adopt the TED/TEDx model? Start a co-branded segment of edX that allows community colleges, public four-years, and second-tier research universities to share and exchange courses among one another. Provide some architecture for compensation, sustainability, publicity, licensing, and publication. Maybe cloud-host it on the OpenEdX platform. Cross-list offerings through edXx (yes, the name is a joke, but you get the point) on the edX site as a “Would you like to also search” option.

Maybe it would work; maybe it wouldn’t. But non-elite institutions have been talking for years about forming such collectives; maybe the prestige of edX could be used to oil the cogs of cooperation. Or if not edX, something similar.

TED was about “ideas worth spreading.” TEDx, from what I hear, has become about conversations worth starting. Maybe it’s time we took a similar approach with MOOCs?

Issue Hubs / Water106

There was so much good thinking by others on the web during my winter vacation. And I want to comment on it all — just as soon as I go through the purgatory of semester startup. Faculty need blogs, administrators need schedules and work plans, and I may even need to have a syllabus together for a small class I am teaching this semester. These things need to happen by, say, Friday.

At the same time, I notice that I haven’t updated people on the evolution of Water106 on this blog recently. So here’s the update.

Water106 is still going forward, but in a slightly altered form. It’s simpler, I think. My current mental formulation is this: folks have been producing UMW-like “course hubs” for a while now. And we’re starting to build some demand on campus for exactly this sort of thing. A course hub is just a set of web spaces and services that forms the public presence of a class on the web. Here’s one I worked on with Clare Weber last semester for ANTH301 (Arts and Media in a Global Perspective) — it’s got a public blog and a semi-private wiki. Easy-peasy. If I wasn’t so bogged down, I’d drop a dozen links here to show you examples from all over the country, but use Google and type in “UMW blogs” or “course hubs” and you’ll see the sort of thing I mean.

Here’s my utterly reduced “issue hubs” pitch. What if instead of setting hubs up under the “course” umbrella, we set them up by “topics” or “issues”? What if instead of encouraging faculty to set up ANTH-301.xxxx.xxx, I encouraged them to set up global-issues-in-media.issuehubs.com? And then what if we made minor alterations to the structure of the hub that made it really easy for another class working on a similar or related subject to live in the same place? Without ever explicitly coordinating with the initial class?

This is not new by any means. Looking for Whitman modeled much of this approach something like four years ago, and remains for me one of the great untapped experiments of the pre-xMOOC age. Ds106 has done similar things. So has FemTechNet. Many others have done experiments with this as well. So I’m not sure how much new I’m adding to the store of human knowledge here. But what I am asking is whether we can apply the lessons we have learned in the past 4 years about how to run “blurred-boundary” courses and work it into what we do not as the exception, but as the default. And ideally, build it in such a way that less coordination is needed between classes and individuals participating. I know that Jim Groom and Alan Levine and Brian Lamb have in the past rightly critiqued the “scalability” push on these efforts (was it Brian who asked “Does poetry scale?”). But structuring classes in this way is one way to start to get beneficial network effects in the production of these sites while simultaneously improving the pedagogy.

I absolutely know there are some issues with this approach, and it is not one-size-fits-all. More soon.

Revenge of the OS

[screenshot: a phone lock screen filled with notifications]

I’ve been trying to write a longer piece on an issue and failing, so I thought I’d put down the five paragraph version here, and see what Twitter thinks.

Roughly, circa 2006 there was a corporate/institutional integration problem and a personal integration problem. The corporate/institutional problem went something like “We need to be THE one-stop solution, so we can maximize clicks and get a more global view of the customer.” The personal problem went something like “I’m sick of having to log in to 20 different services in the course of the day.”

These somewhat related concerns pushed on us the age of the mega-service, and the discussion was around which provider would become that mega-service. Would it be Google+, Facebook, Twitter? The idea of the mega-service was total identity management — convenience for us, unified data for them.

Except something strange has happened over the past couple of years. Identity is now maintained on our phones. It’s a device issue. Our portal is our app screen. Our network isn’t Facebook or Google or Twitter. It’s the phone address book that is the union of those three imports. And on the phone we stop dreaming about “If only there was a service that integrated functions of Twitter, Gmail, and Snapchat!” Because there is a service that integrates that — your phone’s notifications screen. The notifications screen is the new Facebook feed. The mega-service — a bizarre artifact of web-based logins, crippled APIs and an embarrassingly outdated cookie-based persistence scheme — is at its height right now, but it no longer solves a consumer problem. It’s about to collapse.

[screenshot: the Xbox One tile interface]

Xbox 360 has looked like this for years now, having skipped the portal phase altogether…

So where are we? Well, the consumer-corporate pact is unwinding. Mega-services and institutions still have their data needs and monopolistic dreams, but we no longer have a need for their solution. My daughter, who needed Facebook to hold her life together 4 years ago, now moves fluidly between Tumblr, Snapchat, Instagram, and Vine accounts, with the notifications panel her point of integration. It doesn’t occur to her that this is a hassle — it feels like little more than switching pages in Facebook.

Identity has been moved upstream. It’s the revenge of the OS, and it’s already spreading beyond phones as app-based design becomes the dominant computing model. Today it’s Facebook that needs to worry. But it seems to me there are interesting implications for edtech as well. Thoughts?