# Second (Third?) Tumblr Linklog Attempt

I’m trying that thing again where I post a few Downes-style¹ takes on random education articles I come across to Tumblr. This will likely be followed by that thing where I stop posting Downes-style takes on random articles. But the sad Twitter-centric state of the artist formerly known as the edublogosphere has compelled me to try once again. I really shudder to think what would happen to the space if Downes stopped the OLDaily project, especially now that people have fallen out of Google Readerdom.

For those unfamiliar with linklogs: click on the title to read the original article, and realize I write each summary in the space of about three to five minutes, so don’t expect the Critique of Pure Reason.

Anyway, go check it out. Subscribe to the RSS. And if you find it useful, let me know; it might help me figure out whether to slog through the dip.

---

¹Yes, Downes-style. Quality and consistency, I don’t kid myself.

# A Simple, Less Mathematical Way to Understand the Course Signals Issue

I’m re-reading my Course Signals post, and realizing how difficult it is to follow the numbers. So here’s an example that might make it clearer.

From this desk here, without a stitch of research, I can show that people who have had more car accidents live, on average, longer than people who have had very few car accidents.

Why? Because each year you live, you have a chance of racking up another car accident. In general, the longer you live, the more car accidents you are likely to have had.

If you want to know whether people who have more accidents are more likely to live longer because of the car accidents, you have to do something like take 40-year-olds and compare the number of 40-year-olds that make it to 41 in your high- and low-accident groups (simple check), or use any one of a number of more sophisticated methods to filter out the age–car accident relation.
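You can see the effect in a quick simulation. This is a minimal sketch with made-up numbers (lifespans of 40–90 years and a hypothetical 2% chance of an accident in any given year), not real actuarial data; accident counts here depend *only* on years lived:

```python
import random

random.seed(0)

# Made-up lifespans; accident counts depend ONLY on years lived
# (a hypothetical 2% chance of an accident in any given year).
lifespans = [random.randint(40, 90) for _ in range(20_000)]
accidents = [sum(random.random() < 0.02 for _ in range(years))
             for years in lifespans]

def mean_lifespan(min_acc, max_acc):
    """Average lifespan of people whose accident count falls in [min_acc, max_acc]."""
    group = [life for life, acc in zip(lifespans, accidents)
             if min_acc <= acc <= max_acc]
    return sum(group) / len(group)

# The high-accident group lives longer on average, even though accidents
# do nothing to extend life: age drives both numbers.
print(mean_lifespan(0, 1))   # low-accident group
print(mean_lifespan(3, 99))  # high-accident group
```

Nothing in the model makes accidents protective; the gap between the two averages is pure confounding by age.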

The Purdue example is somewhat more contained, because the event of taking a Course Signals class or set of classes happens once per semester. But what I am asking is whether

1. the number of classes a student took is controlled for, and more importantly,
2. whether first-to-second-year retention is calculated as
   1. the number of students who started year two / the number of students who started year one (our car accident problem), or
   2. the number of students who started year two / the number of students who finished year one (the better measure in this case).

I think it is the “car accident problem” statistic that is being quoted in the press. If it is, then it’s possible the causality is reversed: students are taking more Course Signals courses because they persist, rather than persisting because they are taking more Signals courses.
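To make the two definitions concrete, here is a toy cohort with invented numbers:

```python
# Invented cohort numbers, purely for illustration.
started_year_one = 1000
finished_year_one = 700   # 300 students left during year one
started_year_two = 630

# The "car accident problem" statistic:
retention_a = started_year_two / started_year_one   # 0.63
# The better measure for this question:
retention_b = started_year_two / finished_year_one  # 0.90
print(retention_a, retention_b)
```

Any group that tends to take (and finish) more classes will look better on the first measure, whether or not the intervention did anything.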

# Why the Course Signals Math Does Not Add Up

I see that there’s a new Course Signals press release out that claims Course Signals boosts graduation rate 21%. Or, more exactly, it claims taking *two* classes using Course Signals boosts graduation rates by 21%.

Why does it claim that? Well, I haven’t looked at the new data, but I did look at it last year, and assuming the trends didn’t change, it’s because taking *one* CS course correlates with a *lower* retention rate than that of the non-CS population.

(Note that Purdue has since moved this study off their site.) UPDATE: Found a version of this study, moved here.

The press release then contains this maddening quote:

“For some reason, two courses is the magic number. Taking just one course with Course Signals is beneficial for students, but we’ve seen significant improvement when students are enrolled in at least two courses using the system,” Pistilli says.

“We need to continue to do more research on why this is significant,” Pistilli continues. “We think it is because if a student gets feedback from two different instructors, they can make better links between their performance and their behaviors. For example, a student may be taking a course in biology and another in communications, which are very different courses, but if they get the same feedback in both courses about study habits and needing to spend more time on task – and they hear suggestions on how to improve from two different professors – they appear to make the change. What’s notable is that this improvement stays with them in their other courses and for the rest of their academic careers.”

OK, that’s a great theory about the “magic number”. But I actually have another one.

Only a portion of Purdue’s classes are Course Signals classes, so the chance that any given course a freshman takes is a Course Signals course can be expressed as a percentage, say 25%. In an overly dramatic simplification of this model, a freshman who takes four classes the first semester and drops out has about a 26% chance of having taken two or more Course Signals courses (as always, beware my math here, but I think I’m right). Meanwhile they have a 74% chance of having taken one or fewer, and a 42% chance of having taken exactly one.

What about a student who does *not* drop out first semester, and takes a full load of five courses each semester? Well, the chance of that student having two or more Course Signals courses is about 76%. That’s right: just by taking a full load of classes and not dropping out first semester, you’re likely to be tagged as a CS 2+ student.
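You can check the coin-flip arithmetic with the binomial distribution, assuming (as this model does) that each class is independently a Course Signals class with probability 0.25:

```python
from math import comb

def prob_at_least(k, n, p):
    """P(at least k of n courses are CS courses, at adoption rate p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.25
# A dropout who took 4 courses:
print(round(prob_at_least(2, 4, p), 2))      # 0.26 -> two or more CS courses
print(round(1 - prob_at_least(2, 4, p), 2))  # 0.74 -> one or fewer
# A persisting student with 10 courses over two semesters:
print(round(prob_at_least(2, 10, p), 2))     # 0.76 -> two or more CS courses
```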

In other words, each class you take is like an additional coin flip. A lot of what Course Signals “analysis” is measuring is how many classes students are taking.

Are there predictions this model makes that we can test? Absolutely. As we saw in the above example, at a 25% CS adoption rate, the median dropout has a 42% chance of having taken exactly one CS course. So it’s quite normal for a dropout to have had a CS course. But early on in the program, the adoption rate would have been much lower. What are the odds of a first-semester dropout having a CS course in those early pilots? For the sake of argument, let’s say adoption at that point was 5%. In that case, the chance our four-course-semester dropout would have exactly one CS course drops from 42% to 17%. In other words, as adoption grows, having had one CS course will cease to be a useful predictor of first-to-second-year persistence.
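Under the same independence assumption, here is how the chance of a four-course dropout holding exactly one CS course moves with adoption:

```python
from math import comb

def prob_exactly_one(n, p):
    """P(exactly one of n courses is a CS course, at adoption rate p)."""
    return comb(n, 1) * p * (1 - p)**(n - 1)

for adoption in (0.05, 0.25):
    print(adoption, round(prob_exactly_one(4, adoption), 2))
# 0.05 -> 0.17 (rare: a one-CS dropout stands out in the pilot years)
# 0.25 -> 0.42 (common: one CS course is the median dropout experience)
```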

Is that what we see? Assuming adoption grew between 2007 and 2009, that’s *exactly* what we see. Here’s the early pilot days of Course Signals:

As you can see, in what were likely the lower-adoption days, taking one Course Signals course is a huge predictor of persistence. The one-year retention rate is 97% for the one-CS-course student, compared to just 83% for the no-CS student. As adoption expands, changing the median experience of the dropout, that difference disappears, just like the model predicts:

Two years later, that retention effect has disappeared entirely, with the same percentage of one-CS students dropping out as non-CS students. Why? Without access to the dropout data, I can’t say for sure. But I submit that a large part of the shift of one-CS from predictor of retention to non-predictor is that having one CS class is now the average experience of both low-course-load students and first-semester dropouts. The effect disappears because the predictor is no longer correlated with the confounding variable.

So, do I believe that Course Signals works? Maybe, it’s possible. There are other measures with grades and such that look a bit more promising. But the problem is that until they control for this issue, the percentage increases they cite might as well be random numbers. It would be trivially easy to account for these things — for example, by removing first semester dropouts from the equation, and only looking at students under full course load. When I looked at the (now removed) 2012 report, I saw nothing to indicate they had done these things. Now, in 2013, I still see nothing to indicate they have taken these factors into account.

I could be wrong. I’m a Literature and Linguistics M.A. who works in educational technology, not a statistician. But if I am right, this is sloppy, sloppy work.

# Commitment

New research out on the use of student response systems in the classroom, and really no surprises to be found in it. Students respond favorably to SRS use in the classroom when it’s used consistently with a clear purpose by an instructor who is excited about using it, and committed to the method.

It does remind me, though, of how often we fail at the commitment piece. We go into a class believing in a method, mostly, but worried about failure. And our first instinct can be to distance ourselves from the method or technique or technology, to somehow immunize ourselves against potential future disaster: we say, “Hey, so we’re trying something new in this class, maybe it will work, maybe it won’t, frankly it will probably blow up around midterms (ha ha), but just soldier through it…”

And the students rightly go: um, who’s this “we”? You’re the teacher. You have the power, you’ve got them in the desks, you’ve designed the semester they are pouring their money and time into; have you thought this through or not? There are some sorts of weakness and doubt you can show to students. In my experience, this sort of weakness is not one of them. You’re not doing your students any favors by fostering worries that all their effort may be for naught. And ultimately you’re not insulating yourself from failure either.

If you want to talk about failure, explain to your students the conditions under which the method seems to work (type of effort and participation required, potential fail points and solutions, etc.), and the places where your students can have an impact on the design or success of the project. These are places where you can empower your students, which is quite different from punting on your own responsibility as a course creator and facilitator.

It’s a simple point, but so many educational technology disasters I’ve been involved with have come down to the commitment aspect. If you can’t commit to it, don’t do it. But if you’re going to do it, commit.

Photo Credit: flickr/hectorir

# The Myth of the All-in-one

Occasionally (well, OK, more than occasionally) I’m asked why we can’t just get a single educational tech application that would have everything our students could need: blogging, wikis, messaging, link curation, etc.

The simple answer to that is that such a tool does exist, it’s called Sharepoint, and it’s where content goes to die.

The more complex answer is that we are always balancing the compatibility of tools with one another against the compatibility of tools with the task at hand.

The compatibility of tools with one another tends to be the most visible aspect. You have to remember whether you typed something up in Word or Google Docs, and what your username was on which account. There’s also a lot of cognitive load associated with deciding which tool to use and with learning new processes, and that stresses you out and wastes time better spent doing stuff that matters.

But the hidden compatibility issue is whether the tools are appropriate to the task we have at hand. Case in point — I am a Markdown fan. I find that using Markdown to write documents keeps me focused on the document’s verbal flow instead of its look. I write better when I write in Markdown than I do when I write in Google Docs, and better in Google Docs than when I write in Word. For me, clarity of prose is inversely proportional to the number of icons on the editing ribbon.

Today, Alan Levine introduced me to the tool I am typing in right now — a lightweight piece of software called draftin. Draftin is a tool that is designed around the ways writers work and collaborate, rather than the way that coders think about office software. It uses Markdown, integrates with file sharing services, and sports a revise/merge feature that pulls the Microsoft Word “Merge Revisions” process into the age of cloud storage.

As I think about it, though, it’s also a great example of why the all-in-one dream is an empty one. If I was teaching a composition class, this tool would be a godsend, both in terms of the collaboration model (where students make suggested edits that are either accepted or rejected) and in the way Markdown refocuses student attention on the text. Part of the art of teaching (and part of the art of working) is in the calculus of how the benefits of the new tool stack up against the cognitive load the new tool imposes on the user.

We want more integration, absolutely. Better APIs, better protocols, more fluid sharing. Reduced lock-in, unbundled services, common authentication. These things will help. But ultimately cutting a liveable path between yet-another-tool syndrome and I-have-a-hammer-this-is-a-nail disease has been part of the human experience since the first human thought that chipped flint might outperform pointy stick. The search for the all-in-one is, at its heart, a longing for the end of history. And for most of us, that isn’t what we want at all.

Photo credit: flickr/clang boom steam

# Peak Higher Ed and the Age of Diminished Expectations

Bryan Alexander has a good post up on the idea of peak higher ed, a trend/theory that encompasses peak demographics, declining public support, ballooning debt, and the increasingly conventional public opinion that college is no longer as sure a route to a good job.

I’d point out that a lot of what Bryan looks at can be seen as “Peak United States” as much as Peak H.E. The phrase that always pops to mind when looking at such things is Paul Krugman’s 1990s book title “The Age of Diminished Expectations.” For all of the light and heat generated in the current edtech buzz, what lies beneath it all is more a resignation than a positive vision of the future. Just as we’ve accepted that we can do nothing to reduce unemployment, or avert global warming, the grand rhetorical play of current edtech is that we can’t really do more or better, but we can make doing it inexpensively suck less.  That’s just the price of entry into public discourse in 2013; you have to start by embracing the myth that there’s no free lunch, that anything done to improve one sector comes at a cost to another. Even the idea that MOOCs could lead to better learning outcomes is only made plausible to the public when paired with the Greek drama of higher education’s implosion. To make progress believable you must invoke suffering.

So we suffer more from a limiting perspective than real constraints. And reflecting that, when I look at the figures that Bryan cites, I see in them more malaise than fundamentals.

- Those skyrocketing tuitions? Due in part to the fact that the economics of going to college have never been stronger. Colleges charge a lot because they can charge a lot, because investment in a college degree outperforms the stock market 3-to-1.
- Declining demographics? They could (in a rational world) be seen as an opportunity to spend more resources per capita on students.
- The lack of jobs for recent graduates? It actually decreases the opportunity cost of education for them, making further post-secondary study more palatable. (Heck, that’s why I went to grad school in 1994.)
- Adjunctification? If recent research is to be believed, adjunctification might not be wholly bad, as professionals who primarily teach for a living become better teachers (yes, insert 1,000 caveats on ed research here). Given that teachers cost less than researchers, this is hardly a bad thing.
- The increasing “irrelevance” of a degree in the marketplace is also the result of huge gains in degree attainment. As we educate more people, we’re going to find that a degree makes you stand out from the crowd less. Assuming the degree makes you a better worker (and everything seems to indicate it does), who cares? Are we supposed to be sad about the decline of white male privilege? Or can we celebrate that more students than ever before are graduating college? Does anyone despair that a high school degree doesn’t give them a leg up anymore? (Besides Richard Vedder, of course…)

As a thought experiment, imagine that we followed the advice of many and made community college free for all, apart from some small fees to build commitment. Imagine that we saw it as every bit as much an educational right as high school. What do these indicators of decline look like then?

- As mentioned above, declining demographics become good news: educating fewer students costs less, which means we can spend more per capita on the students we have.
- Debt disappears, and in a surprising “free lunch” scenario, it costs less to run the community colleges than we spent on our current byzantine system of federal higher education funding.
- Questions of whether “college is worth it” become, at the community college level, almost as senseless as asking whether high school is worth it: if you aren’t doing anything else, and it gets you closer to your career goals, yes, it’s worth it. (And if it’s not, it’s not: do something else!)
- Adjunctification becomes just the messy lead-in to the professionalization of the discipline of teaching in higher education. Adjuncts become full-time employees, and education costs less.

So maybe we are seeing peak higher ed. But maybe we’re also witnessing a transitional state. Maybe the problem is that we see “higher ed” as higher. Maybe the end of higher ed is the expansion of lower ed or the evolution of lifelong ed.

Or maybe something else is happening entirely. Matt Reed, who I respect quite a bit, sees the free community college idea as a trap. Sherman Dorn embraces a middle ground. (And frankly, I believe in the long run the real question will be about how to divvy up the GDP produced by our robot overlords, but that’s another post).

The point is not that this solution is *the* solution. Rather, it’s how different the fundamentals look if you assume universal education is an expectation we shouldn’t diminish. The same markers that spell decline point a path to ubiquity, if we want it. Do we?

# Password Mental Algorithms

Tanya Joosten was asking today about password managers, and it being 4:30 p.m., it seemed as good a time as any to talk about password mental algorithms, and how they can make your life easier. Here’s how they work (keep in mind this is not *my* algorithm, just an example).

First pick a good root password. You’re just going to have to remember one root, so you can afford to make it good.

4tYPoG!U

Now come up with your algorithm. It should be based on some system you know, with enough of a twist to obfuscate it. What the system is will depend on what you know: the periodic table, release dates of Beatles records, the stops on the Boston Red Line.

Here’s an example: take the first three letters of the domain name, express them in the NATO phonetic alphabet, and intersperse the second letter of each of those words into slots 1, 3, and 5 of the password. E.g.

Google = Golf Oscar Oscar = oss = o4stsYPoG!U

Outlook = Oscar Uniform Tango = sna = s4ntaYPoG!U

This sounds incredibly hard, but in reality, since you type in passwords a lot, you practice the system a lot and it becomes second nature.
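As a sketch, here is the NATO version of the algorithm in Python. The splice points follow the slots 1, 3, and 5 rule described above; the `site_password` name and the sequential-insert reading of that rule are my own assumptions about the mechanics:

```python
# Sketch of the NATO-phonetic site-password derivation described above.
NATO = {
    'a': 'alfa', 'b': 'bravo', 'c': 'charlie', 'd': 'delta', 'e': 'echo',
    'f': 'foxtrot', 'g': 'golf', 'h': 'hotel', 'i': 'india', 'j': 'juliett',
    'k': 'kilo', 'l': 'lima', 'm': 'mike', 'n': 'november', 'o': 'oscar',
    'p': 'papa', 'q': 'quebec', 'r': 'romeo', 's': 'sierra', 't': 'tango',
    'u': 'uniform', 'v': 'victor', 'w': 'whiskey', 'x': 'xray',
    'y': 'yankee', 'z': 'zulu',
}

def site_password(root, domain):
    """Splice domain-derived letters into slots 1, 3, and 5 of the root."""
    # Second letter of the NATO word for each of the first three letters.
    extras = [NATO[ch][1] for ch in domain.lower()[:3]]
    pw = root
    for slot, ch in zip((0, 2, 4), extras):  # 0-indexed slots 1, 3, 5
        pw = pw[:slot] + ch + pw[slot:]
    return pw

print(site_password("4tYPoG!U", "google"))   # o4stsYPoG!U
print(site_password("4tYPoG!U", "outlook"))  # s4ntaYPoG!U
```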

You want to see the Beatles example? Sure. Google has six letters, the sixth Beatles album was Rubber Soul, so maybe:

R4utbYPoG!U

That system will give you the same password for Outlook and Twitter, but you’ll live.

Once you know the system, it’s easy to see which letters have been inserted. But a person who learns one of your passwords can’t possibly intuit the other passwords without the system. If you want to change your passwords after six or twelve months, alter the system slightly: now it will be the third letter of the word, or slots 3, 4, and 6. Whatever.

Again, it sounds crazy complex, but it is so much simpler than remembering 40 separate passwords, and much less nerve-wracking than putting all your passwords into a piece of software that can become a single point of failure.