The quote comes from a full article in the Washington Post about the decline of blogging in Iran. A few years ago, Iran emerged as a culture filled with high-traffic, powerful blogs. It was called Blogestan. But these days, as in many other cultures around the world, personal blogging is retreating in favour of corporate social media sites such as Facebook, Twitter, and Tumblr.
First, the premise of the article is that vinyl records (like blogs) may be better, but they no longer have any impact on the record industry.
I don’t quite know how to judge that. What would it mean to have impact as a *format* rather than as an *artist*? Maybe I’m being stupid, but I can’t really grok it. It makes sense as a statement only if you equate impact with money. In which case, yes. On the other hand, the sonic textures you will be enjoying in two years on your iPod are spinning right now on a short-run vinyl release in some Athens, GA apartment. You haven’t heard the band, and perhaps you never will. But just as so much in pop music today can be traced to the aesthetics of mid-to-late 00s releases, the same thing will happen again. In fact, I would not doubt that the Future Sound of the Format Formerly Known As Rock is floating around right now on cassette. So format, schmormat. In philosophy, I believe, Gilbert Ryle called such comparisons “category mistakes.”
Secondly, I’m still on my tunnel-vision streak of seeing the re-share issue everywhere. Blogs have been replaced by… Tumblr? Um, Tumblr is at heart a blog with a reblog button. That’s the innovation there. The behavior (and the integration of the non-writer into the community) is a result of that. And in fact, as you look at migration to online communities and away from free-standing social media, one of the big sells is the re-share (or retweet, re-pin, whatever). It’s worth thinking about why that is, and whether we in educational technology could learn a thing or two from it. A nascent thought. But still a thought. Maybe people who re-share but never create are more important than we think, right?
I don’t talk enough about the classes I work with. I’m trying to change that, starting with my own class, T&L 521: Educational Technology.
I ended up teaching this class because of a last-minute schedule conflict for the person who normally does it. It was a one-credit class with pre-service teachers who are working in classrooms full-time Monday through Thursday. They are also prepping for the state certification exam. They are a very over-extended bunch.
I decided that since they had already had some training on classroom tech, I’d focus on the use of net-enabled tools for professional development. Not “How can I teach with this tool?” necessarily, but “How can these tools help me be a better teacher?”, which is the real question. One of the central pieces of that was that the students were supposed to use their Twitter feed and other tools to find three articles a week they thought were particularly helpful, bookmark them on Pinboard.in, and write a brief summary. I’ve become a big fan of this approach, which sits somewhere between blogging and micro-blogging as a format, and builds personal habits that can be of use.
In any case, I looked at the class Pinboard account today — and over the semester the eight students in the class have summarized 228 articles. These are informal, and make an awkward (teacher required) pivot to class topics in some cases, but they are mostly solid summaries of good articles.
And there are 228 of them. That really blows my mind.
Pinboard’s a pay once, stay up forever sort of account, so the account will stay up for their use afterwards, and any article we talked about is just a Pinboard search away. We may have future classes contribute to it as well — if you’d like to use the account for your class, just let me know, I’ll give you the password and login.
I’ve wanted for a long time to embed questions in things like course wikis and blogs, questions that fed to a centrally managed backend system. Finally a number of people are working on this — Instructure’s Canvas mentioned this as something under development (or maybe here at this point, we’re an Angel campus, unfortunately). Bill Fitzgerald is working with Lumen Learning on LMS/WordPress integration, and I think this may be a piece of that as well.
However, if you want a simple solution available *today*, without needing an LMS, it’s available. ProProfs gives you an account (free accounts are available too) that lets you easily embed quizzes, questions, and upload fields into your wiki pages.
Here’s an example. I wrote up this page on the T&L 521 wiki with some final project instructions. It’s pretty typical — watch a video, think about it, write a response.
At this point what we would usually do is either have the students write something publicly (which works great) on the wiki, or have them submit into the LMS if it needs to be private. The problem is being out on the wiki is like being out on the quad on a sunny spring day. And sending the students back into the LMS feels like sending them into an SAT center from said quad. It’s just so institutional.
Besides that, there’s a *flow* at work here. They’ve read the text and watched the video; they are ready to write. Adding the friction of the LMS at that point is ill-advised. We want them to stay on this page to submit the project for the same reason we wanted them to stay on this page to watch the video. The way to do that is embedding. And what ProProfs does is allow you to embed your assessments the same way you embed the video:
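For a sense of what “the same way you embed the video” means in practice: both boil down to an iframe dropped into the wiki page. The snippet below is a generic illustration only — the URL and dimensions are invented stand-ins, not ProProfs’ actual embed code, which the service generates for you:

```html
<!-- Generic embed sketch: the src URL is hypothetical.
     The service's "embed" button produces the real snippet. -->
<iframe src="https://www.example.com/quiz/12345/embed"
        width="600" height="480" frameborder="0">
</iframe>
```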
So the student uploads….
And then it appears in your ProProfs panel. Easy-peasy.
As I said, other options will be available soon — this isn’t an ad for ProProfs. It *is* however an example of the loosely-coupled assessment we’ve been begging for, for over seven years now. It makes a ton more sense to assess in your teaching space than to teach in your assessing space. But I’d gotten so used to disappointment on this front that I was unaware it had arrived. Give it a shot — the world’s your oyster.
Additional Note: I’m sure some people will say this has been available from various embeddable survey tools. Not really. ProProfs allows you to assign points, make autograded MC and fill in the blank questions, optionally authenticate a roster of students, provide feedback on wrong answers, and set number of retries. The level of functionality changes everything, because there really isn’t much you *can’t* do with it.
Phil Hill has a great analysis of the NYT interview with Richard Levin, the new CEO of Coursera. And core to that analysis is a point I’ve made before — that Ivy League institutions *do* have experience in online education, but they are so committed to covering up their failure in those efforts that they can’t learn from those mistakes. This is in contradistinction to public, tuition-supported efforts where rewriting narratives can only take you so far. After all, most institutions only have so much money you can throw down a hole and light on fire.
Here’s a piece of the NYT conversation cited by Phil:
Q. Yale has not exactly been a mass institution.
A. No, but we were early in the on-line arena, with a venture back in 2000 called All-Learn.
Q. How much did you lose, and why didn’t that spoil this for you?
A. It was too early. Bandwidth wasn’t adequate to support the video. But we gained a lot of experience of how to create courses, and then we used it starting in 2007 to create very high quality videos, now supported by adequate bandwidth in many parts of the world, with the Open Yale courses. We’ve released over 40 of them, and they gained a wide audience.
As Phil points out, bandwidth really wasn’t an issue for the demographic they were looking at with All-Learn.
All-Learn folded in 2006, when broadband was at a meager 20% adoption. Today, it’s different, supposedly. It’s at 28%. Are we really to believe that somewhere in that 8% of the population lies the difference between success and failure?
Levin goes on to say they gathered a lot of experience on how to create courses, and cites Open Yale Courses as an example of that. Now the courses at OYC are interesting, and I’ve used portions of the Introductory Psychology course in my own work, as well as the Kelly Brownell course on obesity. But the price tag for those forty courses, as far as I know, was $4 million of Hewlett money. And the videos are basically recordings of class lectures. Four million dollars for forty filmed courses, or, if you prefer, $100,000 a course for video lectures.
Hewlett, of course, didn’t grant Yale that money for *just* 40 courses. As anyone who has ever applied for an OER grant knows, the big question one has to answer is “How will you make this effort sustainable after the money is gone?” Levin and others apparently had an answer for that, and that answer was apparently wrong. And what the reporter is asking now is how Coursera’s sustainability path (which looks at this point to be somewhat similar to both OYC and All-Learn) is different. And the answer Levin gives is “bandwidth.” In other words, the plan was great; it was the world that was imperfect. But this time it will work for sure.
If I were an investor in Coursera and I heard that answer, I’d panic. And if I were a grant manager at Hewlett, I’d cry. It’s not Groundhog Day; it’s worse. It’s Memento, where the lead character is doomed to repeat his past because he cannot come to terms with what he has done.
It’s worth noting that there *are* newer models out there for Open Education which are learning from the past instead of repeating it. At WSU Vancouver, for example, we’re working with Lumen Learning on a math initiative. Lumen has an interesting model, which they refer to as Red Hat for OER (or, alternately, “filling the gap between DIY and WTF”). In this model Lumen iteratively improves and maintains a set of OER for free, and makes money off of consulting with colleges on adoption and integration of that OER into the curriculum.
If you ask its founders David Wiley and Kim Thanos why this time Open Education will be different, they’ll certainly mention that the world has changed since the first open textbooks. We have higher quality books, more printing options, broader adoption of devices to run those books on. And the growth pattern is different. There’s a more or less continuing growth of OER use from the late 90s forward, not the boom and bust of Ivy League Online initiatives.
But I think they’d also be quite happy to tell you how their views of what the OER movement needs have changed over the past years. In fact, here’s David doing just that in a recent blog post:
But, in their own way, each of these efforts [were] underpinned by an “if we build it they will come” philosophy. If we just make the content sufficiently high quality, if we just make it easy enough to find, if we just make it easy enough to remix, faculty will adopt OER in their classrooms. Don’t get me wrong – there are some faculty who have the necessary time, prerequisite skills, and hacker ethic to do it themselves (I would like to believe that I’m one of them). But people with this particular configuration of opportunity, means, and motive are the overwhelming minority of higher education faculty. By the end of 2012 it had become clear that if OER adoption was ever going to happen at any scale, someone needed to get on a plane, go to campus, and train people. So that’s what the Lumen team did in 2012.
If you ask David and Kim what they have learned in the past decade, they are not going to say “bandwidth”. They are going to say something along the lines of “We radically underestimated the amount of time and expertise required to integrate OER into curriculum.” and explain how their recent efforts address that issue.
That’s what’s supposed to happen. That’s how you move forward. That’s what you pay people for – not for the experience they have, but for the knowledge they’ve brought away from that experience. It’s true that Levin brings a wealth of experience to the table. But for the life of me I can’t see what he’s gained from it.
Still working with DokuWiki as an educational platform for faculty here at WSU Vancouver. I’ve found a couple of things that are worth mentioning, so I thought I’d jot them down here. This post deals with spam prevention.
The idea that DokuWiki wikis don’t get spammed as much as MediaWiki installs is true, but trivially so. You’ll still get more than enough spam to clog up the series of tubes that is your website. You’re going to have to lock down the installation.
I’ve experimented with a couple of approaches to this. Here are some things you don’t want to do:
- The common “must confirm email” approach is not a long term winner. Plenty of spambots now happily confirm email, get user accounts, and live happily simulated lives on your wiki discussing the latest medical devices and weight-loss drugs available.
- Corralling freshly registered users into a “non-editing” user type is also not a great idea. I registered 8 students in my class during class for a wiki project. They then waited while I fiddled around and bumped up their privileges. It’s hard to imagine that process scaling in an academic setting.
- Similarly, deactivating registration and doing admin panel sign-ups manually is not a pleasant activity either.
- LDAP then? Ugh. An EXCELLENT feature of DokuWiki. But not really a great option in academia for a pilot project. You’d have to coordinate with IT (which will lead to who knows what). Might be something to explore down the road, but not as you’re getting this off the ground.
- Visual post CAPTCHAs? Yes, this is a great way to spark a multi-million dollar ADA/Section 508 lawsuit. Avoid.
So what do you do?
- Set read permissions to “all”. Anyone can read.
- Set edit to whatever your default confirmed registered user is.
- This configuration means that everyone can read, but only registered users can edit.
- Keep the registration link/functionality up.
- Install the Captcha plugin. Under type, choose “question.”
- Make sure that registered users *don’t* have to do the CAPTCHA. In this configuration, since all non-registered can do is read, the only place the CAPTCHA will be is on the registration form.
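In DokuWiki terms, the read/edit split above ends up in `conf/acl.auth.php`, which the admin panel’s ACL screen manages for you. The lines below are a sketch of what the resulting file looks like, not something you’d normally hand-edit (the exact group names depend on your auth setup):

```
# conf/acl.auth.php (sketch)
# Permission levels: 0 = none, 1 = read, 2 = edit, 4 = create,
# 8 = upload, 16 = delete. Higher levels include the lower ones.
*	@ALL	1
*	@user	8
```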
This option will ask the student a plain text question of your choice when they register. If they get it right, registration proceeds. If not, it bumps them back.
Here’s where a bit of discretion comes into play. You can take one of two approaches:
- Make the question a piece of cultural knowledge that students should know — e.g. the name of the dining commons.
- Make the question “Access Code?” and have them supply an access code furnished by you or the prof.
As I worked through possible “cultural knowledge” questions, I started to realize how fraught that process was. I can maybe talk more about that later. I also realized that what I really wanted was a semi-automated process available to WSU staff and faculty but not to outsiders. I decided on the access code, with a twist.
Here’s how it works. If you mail firstname.lastname@example.org from a WSU email account, an autoresponder will send you back the code. If you mail it from a non-WSU account, you get nothing. I do this through setting up an autoresponder on that Gmail address with the code in it, but routing everything not from @xxxx.wsu.edu directly to deletion.
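The moving parts are just a domain check plus a canned reply. Here’s a minimal sketch of that logic in Python — the function name, domain, and code are all hypothetical stand-ins, since the real version is nothing more than a Gmail autoresponder plus a filter:

```python
def access_code_reply(sender, code="sparta-2014",
                      campus_suffix="@campus.wsu-example.edu"):
    """Return the autoreply text for campus senders, None for everyone else.

    Mirrors the Gmail setup: the autoresponder carries the code, and a
    filter routes every non-campus sender straight to deletion, so they
    never see a reply at all.
    """
    if sender.strip().lower().endswith(campus_suffix):
        return "The wiki access code is: " + code
    return None  # filtered to deletion; no reply goes out
```

So `access_code_reply("jane.doe@campus.wsu-example.edu")` hands back the code, while a Gmail or Yahoo sender gets nothing.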
So there you go, that’s my setup. Maybe in a few days I’ll talk about my depressing struggle with various markdown plugins. Or requests… I’ll take requests too.
Yesterday-ish, from Justin Reich:
I was also somewhat surprised to learn that in many systems, it is actually quite difficult to get a raw dump of all of the data from a student or class. Many systems don’t have an easy “export to .csv file” option that would let teachers or administrators play around on their own. That’s a terrible omission that most systems could fix quickly.
A couple years ago, working on an LMS evaluation, I kept getting asked what reporting features each potential platform had. Can this platform generate type-of-report-X? About eight years ago, working on an ePortfolio evaluation, the same question came up: where are the reports? Does this have report Y?
I’d always point out that we didn’t want reports, we wanted data exports and data APIs that allowed us to generate our own reports, reports that we could change as we developed new questions and theories, or launched new initiatives in need of tracking. The data solutions we’re likely to see have real impact (with no offense to Reich’s Law of Doing Stuff) are likely to come from grassroots tinkering. Data that is exportable in common formats can be processed with common tools, and solutions built in those common tools can be broadly shared. CSV-based reports developed and adopted by Framingham State can be adopted by Keene State or WSU overnight. A solution one of your physics faculty develops can be quickly applied across all entry level courses.
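That’s the whole argument for exports in a nutshell: once the data is in CSV, a “report” is a few lines of throwaway code that anyone can share and adapt. A toy illustration in Python — the export columns here are invented for the example, not taken from any particular LMS:

```python
import csv
import io
from collections import Counter

# A hypothetical LMS activity export. Real exports would come from a
# file, but the processing is identical.
raw = """\
student,activity,week
alice,forum_post,1
bob,quiz_attempt,1
alice,quiz_attempt,2
carol,forum_post,2
alice,forum_post,2
"""

# A custom "report": events per student, something a canned reporting
# screen may or may not offer, but which takes three lines here.
rows = list(csv.DictReader(io.StringIO(raw)))
per_student = Counter(row["student"] for row in rows)
print(per_student.most_common())  # [('alice', 3), ('bob', 1), ('carol', 1)]
```

Swap the `Counter` key for `row["week"]` or `row["activity"]` and you have a different report, which is exactly the flexibility canned reports can’t give you.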
What you want is not “reports” but sensible, easy, and relatively unfettered access to data. And if you don’t have someone on your campus that can make sense of such data, then you need to either hire that person, or give up on the idea that a canned set of reports are going to help you. When fields are mature, canned and polished reigns. But when they are nascent (as is the field of analytics) hackability is a necessity.
Via Clay Fenlason: “Feeling like the time spent to understand WTF @holden is talking about would be well spent, but who has that kind of time?”
Fair enough. I blog mostly for myself, to try and push on my own ideas in front of a relatively small group of people I know who push back. And part of that process is a bit manic and expansive. At some point that’s followed by a more contractive process that tries to organize and summarize. Maybe it’s time to get to that phase.
So I’ll do that soon. What I’ll say in the meantime is that all of this stuff — hybrid apps, storage-neutral apps, federated wikis, etc — is interesting to me because of my obsession with hacking and reuse. Why is reuse so darn hard? Why don’t we reuse more things? What systems would support a higher degree of reuse and sharing, of hacking and recombination? What are the cultural barriers?
There are implications to this stuff far bigger than that, but reuse (and hacking, which is a type of reuse) has been a core obsession of mine for a decade now, so that ends up being the lens.
You go to an event and there’s 50 people taking pictures of it individually on their cell phones, none of whom will share those photos with one another, yet all of whom would benefit from sharing the load of picture taking. There are psychological and social reasons why that’s the case, but there’s also technological reasons for that. Likewise there are brilliant economics teachers who have built exercises and case studies that would set your class on *fire* if you used them — but you’ll never see them.
I’ve been over the several hundred reasons why reuse doesn’t happen, over a period of ten years. It’s absolutely not just about the technology. But occasionally I see places where reuse explodes, and the technology turns out to be a pretty big piece of that. My wife is a K-3 art teacher, and Pinterest just exploded reuse in that community. Sharing went from minimal to amazing in the space of 12 months. And suddenly she was putting together a much better art curriculum than she could have ever dreamed of, in half the time, in ways that had a huge impact on her students.
So — reuse, sharing, networked learning, hacking. I’m interested in the two sides of this: first, we must teach students how to work this way. We have to. And second, we have to get our colleagues to work this way.
What does that have to do with the shift to hybrid apps? With moving from a world of reference to a world of copies and forks? With storage-neutral designs? With the pull request culture of GitHub vs. the single copy culture of OER? With the move back to file-based publication systems? I’m still trying to work that out. But I think the answer is “a lot”, and a post is coming soon.