Your Diploma Just Got Downgraded. But You Can Upgrade It At a 20% Discount!


From the comments on my last post – friend of the blog (and Sloan-C Karaoke instigator) Michael Berman lets us know he got some bad news about the Udacity certificate he earned in 2012.

Please note this is a real email, from “Amanda Sparr, Coach @Udacity”. This is not a parody. (Really!)

Dear Michael,

Nice work on earning a certificate for the Intro to Computer Science courseware. It speaks volumes for what you learned while completing this rigorous course. Congrats again!

Today we upgraded this course. This brings you new opportunities, for little extra work:

  • You can now practice and show off your skills, with a new hands-on project where you’ll build a social network.
  • You can also earn a Verified Certificate, since this course now has a subscription option.

Udacious Coaches like me work with you and award these Verified Certificates. We review your project code, share feedback and tips, and conduct a final video interview where we verify your identity. With these steps, employers see our Verified Certificates as an even stronger endorsement of your skills.

To get you started here’s a 20% discount*: Udacity.com/IntroCS?coupon=UMxYBAx22T1JKx6.

Oh my. I don’t know where to start. The fact that the “magic formula” involves personal tutors and graders? The line “Udacious coaches like me….”?

But of course the kicker is the business model: hey, we know you earned this certificate, but it’s kind of worthless now that we only do Verified Certificates. But, look, for a low, low fee you can earn it again. For real this time!

I never realized the answer to the college cost crisis was right in front of us all the time. We just need to downgrade everyone’s degrees, then let them know for a low, low price and a short trip to a testing center we will continue to verify their diploma. Instant cash flow! And accountability!

Oh, man. You really can’t make this crap up.


Are Blogs the Vinyl Records of the Internet?

From Blogs are the Vinyl Records of the Internet

The quote comes from a full article in the Washington Post about the decline of blogging in Iran. A few years ago, Iran emerged as a culture filled with high traffic, powerful blogs. It was called Blogestan. But, these days, as in many other cultures around the world, personal blogging is retreating in favour of corporate social media sites such as Facebook, twitter, and tumblr.

Two things.

First, the premise of the article is that vinyl records (like blogs) are better, but that they no longer have any impact on the record industry.

I don't quite know how to judge that. What would it mean to have impact as a *format* rather than as an *artist*? Maybe I'm being stupid, but I can't really grok it. It makes sense as a statement only if you equate impact with money. In which case, yes. On the other hand, the sonic textures that you will be enjoying in two years on your iPod are spinning right now on a short-run vinyl release in some Athens, GA apartment. You haven't heard the band, and perhaps you never will. But just like so much in pop music today can be traced to the aesthetics of mid-to-late 00s releases, the same thing will happen again. In fact, I would not doubt that the Future Sound of the Format Formerly Known As Rock is floating around currently on cassette. So format, schmormat. In philosophy, I believe Gilbert Ryle called such comparisons "category mistakes".

Secondly, I’m still on my tunnel-vision streak of seeing the re-share issue everywhere. Blogs have been replaced by….Tumblr? Um, Tumblr is at heart a blog with a reblog button. That’s the innovation there. The behavior (and integration of the non-writer into the community) is a result of that. And in fact, as you look at migration to online communities and away from free-standing social media, one of the big sells is the re-share (or retweet, re-pin, whatever). Worth thinking about why that is, and whether we in educational technology could learn a thing or two from that. A nascent thought. But still a thought. Maybe people who re-share but never create are more important than we think, right?


228 Summaries

I don’t talk enough about the classes I work with. I’m trying to change that, starting with my own class, T&L 521: Educational Technology.

I ended up teaching this class because of a last-minute scheduling conflict with the person who normally does it. It's a one-credit class with pre-service teachers who are working in classrooms full-time Monday-Thursday. They are also prepping for the state certification exam. They are a very over-extended bunch.

I decided that since they had already had some training on classroom tech, I'd focus on the use of net-enabled tools for professional development. Not "How can I teach with this tool?" necessarily, but "How can these tools help me to be a better teacher?", which is the real question. One of the central pieces of that was that the students were supposed to use their Twitter feed and other tools to find three articles a week they thought were particularly helpful, bookmark them on Pinboard.in, and write a brief summary. I've become a big fan of this approach, which sits somewhere in between blogging and micro-blogging as a format, and builds personal habits that can be of use.

In any case, I looked at the class Pinboard account today — and over the semester the eight students in the class have summarized 228 articles. These are informal, and in some cases make an awkward (teacher-required) pivot to class topics, but they are mostly solid summaries of good articles.

[Screenshot: the class Pinboard account]

And there’s 228 of them. That really blows my mind.

Pinboard's a pay-once, stay-up-forever sort of account, so it will stay up for their use afterwards, and any article we talked about is just a Pinboard search away. We may have future classes contribute to it as well — if you'd like to use the account for your class, just let me know and I'll give you the login and password.
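A nice side effect: Pinboard has a simple API, so those 228 summaries aren't locked into the site. Here's a minimal sketch (assuming the account's API token from the Pinboard settings page, and Python with the requests library) that pulls every bookmark and its summary text back out for archiving or remixing:

    # Minimal sketch: export all bookmarks (and the student summaries) from the
    # shared Pinboard account via the API. The token below is a placeholder.
    import requests

    AUTH_TOKEN = "username:XXXXXXXXXXXXXXXX"  # the account's API token

    resp = requests.get(
        "https://api.pinboard.in/v1/posts/all",
        params={"auth_token": AUTH_TOKEN, "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()

    for post in resp.json():
        print(post["time"], post["href"])
        print("    " + post["extended"][:200])  # "extended" holds the summary text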


Using ProProfs With Dokuwiki

I've wanted for a long time to embed questions in things like course wikis and blogs, questions that feed into a centrally managed backend system. Finally, a number of people are working on this. Instructure's Canvas mentioned this as something under development (or maybe it's here at this point; we're an Angel campus, unfortunately). Bill Fitzgerald is working with Lumen Learning on LMS/WordPress integration, and I think this may be a piece of that as well.

However, if you want a simple solution available *today*, without needing an LMS, it exists. ProProfs gives you an account (free accounts are available too) that lets you easily embed quizzes, questions, and upload fields into your wiki pages.

Here’s an example. I wrote up this page on the TL 521 wiki with some final project instructions. It’s pretty typical — watch a video, think about it, write a response.

[Screenshot: the final project instructions page on the TL 521 wiki]

At this point what we would usually do is either have the students write something publicly (which works great) on the wiki, or have them submit into the LMS if it needs to be private. The problem is being out on the wiki is like being out on the quad on a sunny spring day. And sending the students back into the LMS feels like sending them into an SAT center from said quad. It’s just so institutional.

Besides that, there's a *flow* at work here. They've read the text and watched the video; they are ready to write. Adding the friction of the LMS at that point is ill-advised. We want them to stay on this page to submit the project for the same reason we wanted them to stay on this page to watch the video. The way to do that is embedding. And what ProProfs does is allow you to embed your assessments the same way you embed the video:

[Screenshot: the ProProfs upload question embedded in the wiki page]

 

So the student uploads….

[Screenshot: the student's completed upload]

 

And then it appears in  your ProProfs panel. Easy-peasy.

[Screenshot: the submission in the ProProfs reporting panel]

As I said, other options will be available soon — this isn’t an ad for ProProfs. It *is* however an example of the loosely-coupled assessment we’ve been begging for for over seven years now. It makes a ton more sense to assess in your teaching space than to teach in your assessing space. But I’d gotten so used to disappointment on this front I was unaware it had arrived. Give it a shot, the world’s your oyster.
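One wiki-side wrinkle worth mentioning: DokuWiki doesn't render raw HTML in pages by default, so an embed snippet won't show up until you allow it. One way to do that (a sketch, and only sensible on a wiki where you trust the people doing the editing) is to turn on the htmlok setting and then wrap whatever embed code ProProfs hands you in an html block on the page:

    // conf/local.php: allow raw HTML blocks in wiki pages (trusted editors only)
    $conf['htmlok'] = 1;

Then, on the wiki page itself:

    <html>
    <!-- paste the embed snippet ProProfs generates for your quiz or upload field here -->
    </html>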

 

Additional Note: I'm sure some people will say this has been available from various embeddable survey tools. Not really. ProProfs allows you to assign points, make autograded multiple-choice and fill-in-the-blank questions, optionally authenticate a roster of students, provide feedback on wrong answers, and set the number of retries. That level of functionality changes everything, because there really isn't much you *can't* do with it.

 

 

 


Experience Without Humility Is Not Very Helpful At All

Phil Hill has a great analysis of the NYT interview with Richard Levin, the new CEO of Coursera. And core to that analysis is a point I’ve made before — that Ivy League institutions *do* have experience in online education, but they are so committed to covering up their failure in those efforts that they can’t learn from those mistakes. This is in contradistinction to public, tuition-supported efforts where rewriting narratives can only take you so far. After all, most institutions only have so much money you can throw down a hole and light on fire.

Here’s a piece of the NYT conversation cited by Phil:

Q. Yale has not exactly been a mass institution.

A. No, but we were early in the on-line arena, with a venture back in 2000 called All-Learn.

Q. How much did you lose, and why didn’t that spoil this for you?

A. It was too early. Bandwidth wasn’t adequate to support the video. But we gained a lot of experience of how to create courses, and then we used it starting in 2007 to create very high quality videos, now supported by adequate bandwidth in many parts of the world, with the Open Yale courses. We’ve released over 40 of them, and they gained a wide audience.

As Phil points out, bandwidth really wasn’t an issue for the demographic they were looking at with All-Learn.


All-Learn folded in 2006, when broadband was at a meager 20% adoption. Today, it’s different, supposedly. It’s at 28%. Are we to really believe that somewhere in that 8% of the population is the difference between success and failure?

Levin goes on to say they gathered a lot of experience in how to create courses, and cites Open Yale Courses as an example of that. Now the courses at OYC are interesting, and I've used portions of the Introductory Psychology course in my own work, as well as the Kelly Brownell course on obesity. But the price tag for those forty courses, as far as I know, was $4 million of Hewlett money. And the videos are basically recordings of class lectures. Four million dollars for forty filmed courses, or, if you prefer, $100,000 a course for video lectures.

Hewlett, of course, didn't grant Yale that money for *just* 40 courses. As anyone who has ever applied for an OER grant knows, the big question one has to answer is "How will you make this effort sustainable after the money is gone?" Levin and others apparently had an answer for that, and that answer was apparently wrong. And what the reporter is asking now is how Coursera's sustainability path (which looks at this point to be somewhat similar to both OYC and All-Learn) is different. And the answer Levin gives is "bandwidth." In other words, the plan was great; it was the world that was imperfect. But this time it will work for sure.

If I were an investor in Coursera and I heard that answer, I'd panic. And if I were a grant manager at Hewlett, I'd cry. It's not Groundhog Day, it's worse. It's Memento, where the lead character is doomed to repeat his past because he cannot come to terms with what he has done.

It's worth noting that there *are* newer models out there for Open Education which are learning from the past instead of repeating it. At WSU Vancouver, for example, we're working with Lumen Learning on a math initiative. Lumen has an interesting model, which they refer to as "Red Hat for OER" (or, alternately, "filling the gap between DIY and WTF"). In this model Lumen iteratively improves and maintains a set of OER for free, and makes money off of consulting with colleges on adoption and integration of that OER into the curriculum.

If you ask its founders David Wiley and Kim Thanos why this time Open Education will be different, they’ll certainly mention that the world has changed since the first open textbooks. We have higher quality books, more printing options, broader adoption of devices to run those books on. And the growth pattern is different. There’s a more or less continuing growth of OER use from the late 90s forward, not the boom and bust of Ivy League Online initiatives.

But I think they’d also be quite happy to tell you how their views of what the OER movement needs have changed over the past years. In fact, here’s David doing just that in a recent blog post:

 But, in their own way, each of these efforts [were] underpinned by an “if we build it they will come” philosophy. If we just make the content sufficiently high quality, if we just make it easy enough to find, if we just make it easy enough to remix, faculty will adopt OER in their classrooms. Don’t get me wrong – there are some faculty who have the necessary time, prerequisite skills, and hacker ethic to do it themselves (I would like to believe that I’m one of them). But people with this particular configuration of opportunity, means, and motive are the overwhelming minority of higher education faculty. By the end of 2012 it had become clear that if OER adoption was ever going to happen at any scale, someone needed to get on a plane, go to campus, and train people. So that’s what the Lumen team did in 2012.

If you ask David and Kim what they have learned in the past decade, they are not going to say "bandwidth." They are going to say something along the lines of "We radically underestimated the amount of time and expertise required to integrate OER into curriculum," and explain how their recent efforts address that issue.

That’s what’s supposed to happen. That’s how you move forward. That’s what you pay people for – not for the experience they have, but for the knowledge they’ve brought away from that experience. It’s true that Levin brings a wealth of experience to the table. But for the life of me I can’t see what he’s gained from it.

 

 


Some Notes on DokuWiki Setup for Academic Settings: Spam

Still working with DokuWiki as an educational platform for faculty here at WSU Vancouver. I've found a couple things that are worth mentioning, so I thought I'd jot them down here. This post deals with spam prevention.

The idea that Dokuwiki wikis don’t get spammed as much as MediaWiki installs is true, but trivially so. You’ll get more than enough spam to clog up the series of tubes that is your website. You’re going to have to lock down the installation.

I've experimented with a couple approaches to this. Here are some things you don't want to do:

  • The common "must confirm email" approach is not a long-term winner. Plenty of spambots now happily confirm email, get user accounts, and live out simulated lives on your wiki discussing the latest medical devices and weight-loss drugs.
  • Corralling freshly registered users into a “non-editing” user type is also not a great idea. I registered 8 students in my class during class for a wiki project. They then waited while I fiddled around and bumped up their privileges. It’s hard to imagine that process scaling in an academic setting.
  • Similarly, deactivating registration and doing admin panel sign-ups manually is not a pleasant activity either.
  • LDAP then? Ugh. An EXCELLENT feature of DokuWiki. But not really a great option in academia for a pilot project. You’d have to coordinate with IT (which will lead to who knows what). Might be something to explore down the road, but not as you’re getting this off the ground.
  • Visual post CAPTCHAs? Yes, this is a great way to spark a multi-million dollar ADA/Section 508 lawsuit. Avoid.

So what do you do?

  • Set read permissions to "all". Anyone can read.
  • Set edit to whatever your default confirmed registered user is.
  • The result: everyone can read, but only registered users can edit.
  • Keep the registration link/functionality up.
  • Install the CAPTCHA plugin. Under type, choose "question".
  • Make sure that registered users *don't* have to do the CAPTCHA. In this configuration, since all non-registered visitors can do is read, the only place the CAPTCHA will appear is on the registration form.

This option will ask the student a plain text question of your choice when they register. If they get it right, registration proceeds. If not, it bumps them back.
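For the curious, here's roughly what that setup looks like in the config files. This is a sketch, not gospel: you can do the same thing entirely through the admin ACL manager and the Configuration Manager, and plugin option names shift between versions.

    # conf/acl.auth.php: everyone can read, only registered users can edit
    # format: <page or namespace>   <user or @group>   <permission level>
    # levels: 0 = none, 1 = read, 2 = edit, 4 = create, 8 = upload, 16 = delete
    *     @ALL     1
    *     @user    8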

Here’s where a bit of discretion comes into play. You can take one of two approaches:

  • Make the question a piece of cultural knowledge that students should know — e.g. the name of the dining commons.
  • Make the question “Access Code?” and have them supply an access code furnished by you or the prof.

As I went through possible "cultural knowledge" questions, I started to realize how fraught that process was. I can maybe talk more about that later. I also realized what I really wanted was a semi-automated process available to WSU staff and faculty but not to outsiders. I decided on the access code approach, with a twist.

Here's how it works. If you mail vc.wiki.access@gmail.com from a WSU email account, an autoresponder will send you back the code. If you mail it from a non-WSU account, you get nothing. I do this by setting up an autoresponder on that Gmail address with the code in it, and routing everything not from @xxxx.wsu.edu straight to deletion.

So there you go, that’s my setup. Maybe in a few days I’ll talk about my depressing struggle with various markdown plugins. Or requests… I’ll take requests too.

 


If Your Product Is So Data-Centric, Maybe It Should Have Data Export?

Yesterday-ish, from Justin Reich:

I was also somewhat surprised to learn that in many systems, it is actually quite difficult to get a raw dump of all of the data from a student or class. Many systems don’t have an easy “export to .csv file” option that would let teachers or administrators play around on their own. That’s a terrible omission that most systems could fix quickly.

A couple years ago, working on an LMS evaluation, I kept getting asked what reporting features each potential platform had. Can this platform generate type-of-report-X? About 8 years ago, working on an ePortfolio evaluation, the same question came up — where are the reports? Does this have report Y?

I'd always point out that we didn't want reports; we wanted data exports and data APIs that allowed us to generate our own reports, reports that we could change as we developed new questions and theories, or launched new initiatives in need of tracking. The data solutions likely to have real impact (with no offense to Reich's Law of Doing Stuff) are the ones that come from grassroots tinkering. Data that is exportable in common formats can be processed with common tools, and solutions built in those common tools can be broadly shared. CSV-based reports developed and adopted by Framingham State can be adopted by Keene State or WSU overnight. A solution one of your physics faculty develops can be quickly applied across all entry-level courses.
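To make that concrete, here's the kind of five-minute report a raw export makes possible. A minimal sketch in Python, assuming a hypothetical activity export with columns like course, week, and logins (the column names don't matter; the point is you get to ask your own questions of the data):

    # Minimal sketch: build a custom report from a hypothetical raw CSV export.
    import csv
    from collections import defaultdict

    logins = defaultdict(int)

    with open("lms_activity_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            logins[(row["course"], row["week"])] += int(row["logins"])

    for (course, week), count in sorted(logins.items()):
        print(f"{course}  week {week}  {count} logins")

Swap in whatever question you actually care about this semester; that's the whole point of having the raw data rather than the canned report.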

What you want is not "reports" but sensible, easy, and relatively unfettered access to data. And if you don't have someone on your campus who can make sense of such data, then you need to either hire that person or give up on the idea that a canned set of reports is going to help you. When fields are mature, canned and polished reigns. But when they are nascent (as the field of analytics is), hackability is a necessity.

 


Hacking and Reuse: A Regrouping


Via Clay Fenlason: “Feeling like the time spent to understand WTF is talking about would be well spent, but who has that kind of time?”

Fair enough. I blog mostly for myself, to try and push on my own ideas in front of a relatively small group of people I know who push back. And part of that process is a bit manic and expansive. At some point that’s followed by a more contractive process that tries to organize and summarize. Maybe it’s time to get to that phase.

So I’ll do that soon. What I’ll say in the meantime is that all of this stuff — hybrid apps, storage-neutral apps, federated wikis, etc — is interesting to me because of my obsession with hacking and reuse. Why is reuse so darn hard? Why don’t we reuse more things? What systems would support a higher degree of reuse and sharing, of hacking and recombination? What are the cultural barriers?

There are implications to this stuff far bigger than that, but reuse (and hacking, which is a type of reuse) has been a core obsession of mine for a decade now, so that ends up being the lens.

You go to an event and there are 50 people taking pictures of it individually on their cell phones, none of whom will share those photos with one another, yet all of whom would benefit from sharing the load of picture-taking. There are psychological and social reasons why that's the case, but there are also technological reasons. Likewise, there are brilliant economics teachers who have built exercises and case studies that would set your class on *fire* if you used them — but you'll never see them.

I've been over the several hundred reasons why reuse doesn't happen, over a period of ten years, and it's absolutely not just about the technology. But occasionally I see places where reuse explodes, and the technology turns out to be a pretty big piece of that. My wife is a K-3 art teacher. And Pinterest just exploded reuse in that community. Sharing went from minimal to amazing in the space of 12 months. And suddenly she was putting together a much better art curriculum than she could have ever dreamed of, in half the time, in ways that had a huge impact on her students.

So — reuse, sharing, networked learning, hacking. I'm interested in the two sides of this: first, we must teach students how to work this way. We have to. And second, we have to get our colleagues to work this way.

What does that have to do with the shift to hybrid apps? With moving from a world of reference to a world of copies and forks? With storage-neutral designs? With the pull request culture of GitHub vs. the single copy culture of OER?  With the move back to file-based publication systems? I’m still trying to work that out. But I think the answer is “a lot”, and a post is coming soon.


This is what I mean by new modes of sharing (Fedwiki meets Dropbox Carousel)

File-based sharing based around pushing copies of good stuff to others. That’s what the federated wiki is about.

For that reason I find newer efforts like this, which push files around instead of references, fascinating. Out today from Dropbox is a new product called Carousel:

Photos of events such as graduations and weddings, Houston points out, are spread over the devices and hard drives of multiple guests. It creates pervasive photo anxiety: People are no longer sure they own the best images of the most important moments in their lives. The app, which becomes available this week for iPhones and Android phones—with a version coming soon for desktops—taps into photos stored on Dropbox and allows users to cycle through them quickly and send images to friends and family, so they can add them to their collections as well.

Think about how this changes notions of sharing, and you’ll see it as part of a move towards file-based copy systems, and the pull request approach of a GitHub.

Also, read that paragraph again, and tell me if that doesn’t look similar to the educational materials situation we face everyday.

OK, now imagine your wiki exists in a Dropbox account, and you do the same thing — you flip through all your articles and forward the ones that you think are useful to your various federations. Those get dropped into other people's own Dropbox wikis, and the virtuous cycle continues.

It's a different way of thinking about things. It's file-based, and it sees copies of things as a feature, not a bug. The storage for your project is not separate from the sharing features of your project. We let the copies happen and we sort out the mess afterwards.
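To see how little machinery that actually takes, here's a minimal sketch (hypothetical folder names, and assuming the wiki is just a directory of Markdown files inside something like a Dropbox folder) of push-by-copy sharing in Python:

    # Minimal sketch: "sharing" as pushing copies of files, not references.
    import shutil
    from pathlib import Path

    my_wiki = Path.home() / "Dropbox" / "wiki"              # my pages
    federation = [
        Path.home() / "Dropbox" / "shared-with-alice",      # hypothetical peers
        Path.home() / "Dropbox" / "shared-with-bob",
    ]

    def push(page_name: str) -> None:
        """Forward a copy of one page to everyone in my federation."""
        page = my_wiki / f"{page_name}.md"
        for folder in federation:
            folder.mkdir(parents=True, exist_ok=True)
            shutil.copy2(page, folder / page.name)           # a copy, not a link

    push("loosely-coupled-assessment")

Each recipient ends up with their own copy to keep, edit, or ignore, and the mess gets sorted out afterwards.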

My argument is not that Dropbox rules, but that this is part of a larger trend that rethinks how sharing and forking work on the new web. It's also potentially a powerful rethinking of how OER could propagate through a system.


The First Web Browser Was a Storage-Neutral App

ONE IMPORTANT NOTE: I’m just toying with this idea, not asserting it at this point. But part of me is very interested in what happens when we view the rise of the app as not a betrayal of the original vision of the web, but as a potential return to it. I don’t see many people pushing that idea, so it seems worth pushing. That’s how I roll. ;)

——-

Apropos of both an earlier post of mine and Jim's Internet Course, here is a screenshot of the first web browser (red annotations added by me):

[Screenshot: the Berners-Lee WorldWideWeb browser/editor, with red annotations]

 

The first web browser was a storage-neutral editing app. If you pointed it at files you had permission to edit, you could edit them. If you pointed it at files you had permission to read, you could read them. But the server in those days was a Big Dumb Object which passed your files to a client-side application without any role in interpreting them.

I never used the Berners-Lee browser, but even in the mid-90s when I was hacking my first sites together, Netscape had a rudimentary editor (I was using something called HoTMeTaL at the time, but still):

[Screenshot: the Netscape editor window]

This is still the case with many HTML files a browser handles, but what's notable here is that in those days a browser worked much the way a storage-neutral app would today. When I talk about having the editing functions of a markdown wiki client-side, in an app, we're essentially returning to this model.

And think about that for a minute. Imagine what that wiki would be like — you tool around your wiki in your browser editing these Markdown files directly. When someone hits your site in their browser, it lets them know that they should install the Markdown extension, or download the Markdown app to view these things. Grabbing a file is just grabbing a file.
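To put a point on what a Big Dumb Object server means in practice, here's a minimal sketch in Python: a server that stores and returns raw files verbatim and leaves every bit of interpretation (Markdown rendering, editing UI, whatever) to the client app. It's an illustration, not production code; there's no auth and no path checking:

    # Minimal sketch: a server that just hands files back and forth, verbatim.
    from http.server import HTTPServer, SimpleHTTPRequestHandler
    from pathlib import Path

    class DumbFileHandler(SimpleHTTPRequestHandler):
        # GET is inherited: it serves whatever file is asked for, uninterpreted.
        def do_PUT(self):
            # Accept an edited file back from the client and store it as-is.
            length = int(self.headers["Content-Length"])
            Path(self.path.lstrip("/")).write_bytes(self.rfile.read(length))
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), DumbFileHandler).serve_forever()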

So what happened to this original vision? So many things, and I only saw my little corner of the world, so I’m biased.

  • Publishers: The first issue hit when the publishers moved in. They wanted sites to look like magazines. This accelerated a browser extension war and pushed website design to people slicing up sites in Adobe and Macromedia tools.
  • Databases + Template-based Design: As layouts got more complex, you wanted to be able to swap out designs and have the content just drop in; so we started putting pages in database tables that required server interpretation (this is how WordPress, Drupal, or almost any CMS works, for example).
  • Browser incompatibility, platform differences: People didn't update browsers for years, which meant we had to serve version- and platform-specific HTML to browsers. This pushed us further into storing page contents in databases.
  • E-commerce. You were going to have a database anyway to take orders, so why not generate pages?
  • Viruses and Spyware. Early on, you used to download a number of viewer extensions. But lack of a real store to vet these items led to lots of super nasty browser helper objects and extensions, and the fact that you used your browser for e-commerce as well as looking at Pixies fan sites made hijacking your browser a profitable business.

In addition, there was this whole vision of the web as middleware that would pave the way to a thin-client future free from platform incompatibilities. Companies like Sun were particularly hot to trot on this, since it would make the PC/Mac market less of an issue to them. Scott McNealy of Sun started talking about “Sun Rays” and saying McLuhanesque things like “The Network is the Computer“.

In the corporate environment, thin clients are wired to company servers.

In your home, McNealy envisions Sun Rays replacing PCs.

“There’s no more client software required in the world,” he said. “There’s no need for [Microsoft] Windows.”

Sun Rays fizzled, but the general dynamic accelerated. And part of me wonders if it accelerated for the same reasons that Sun embraced it. In a thin-client world, the people who own the servers make the rules. That's good — for the people who own the servers.

This is really just a stream-of-consciousness post, but really consider that for a moment. In the first version of the web you downloaded a standard message format with your email client, and web pages were pages that could live anywhere (storage-neutral) and be interpreted by a multitude of software (app-neutral). In version two, your mail becomes Gmail, and your pages get locked into whatever code is pulling them from your 10-table database. And yes — your blogging engine becomes WordPress.

OF COURSE there were other reasons, good reasons, why this happened. But it's amazing to me how much of the software I use on a daily basis (email, wikis, blogs, Twitter) would lose almost nothing if it went storage-neutral — besides lock-in. And such formats might actually be *more* hackable, not less.

It's also interesting to see how many of the problems that led us to abandon the initial vision have since been solved by other parts of the ecosystem. Apps auto-update now. The HTML spec has stabilized somewhat, and browsers are more capable. The presence of stores for extensions gets rid of the "should I install a random extension from an unknown site" problem — people install and uninstall apps constantly. Server power is now such that most database-like features can be accomplished in a file-based system — DokuWiki is file-based, but can generate RSS when needed and respond to API calls. And, interestingly, we are finally returning to a design minimalism that reduces the need for pixel-based tweaking.

In any case, this post is a bit of a thought experiment, and I retain the right to walk away from anything I say in it. But what if we imagined the rise of apps as a POTENTIAL RETURN to the roots of the web, a slightly thicker, more directly purposed client that did interpretation on the client-side of the equation? Whether that interpretation is data API calls or loading text files?

I know that's not where we are being driven, but it seems to me it's a place that we could go. And it's a narrative that is more invigorating to me than the "Loss of Eden" narrative we often hear about such things. Just a thought.

