“Experimented On”

This “experimented on” phrase bothers me a bit.

The SJSU affair falls somewhere between educational research and a social experiment, and we are very much in need of better experiments in these areas. Most educational research is pretty abysmal. Most social policy goes untested. The lack of decently designed experiments in these areas generally allows the people with the most money and policy clout to determine what constitutes truth in this space. And people suffer because of that, every day.

So we need more experimentation. And we probably need better experiments than SJSU, where Udacity was negligent in offering students an experience it should have known was inferior. I am not arguing that we should shrug our shoulders at a company that takes student failure too lightly, or that directs policy interventions disproportionately at the powerless.

But “experimented on” somehow implies to me that the rest of us are not making choices every day about how we educate students. In the third iteration of a statistical literacy class, I used a team-based model and tracked its effectiveness. Results were mixed. Was I “experimenting on” my students? Another semester, I introduced peer instruction. I was certainly experimenting with my delivery — but was I experimenting “on” the class?

I’d say no. I was altering instruction to find out what worked best, and paying attention to the results. This, broadly, is what it means to be a professional. And I don’t think that changes at the institutional level. I was not using my power as a teacher to collect data on stress reactions to various forms of supersonic pitches, or on heart rate reactions to violent imagery. I was trying to do the best I could at what both society and the students were paying me to do.

I’m eliding a lot of concerns here for the sake of brevity. I’m happy to argue this more deeply in the comments. But “experimented on” sounds to me like the ickiness is coming from the use of a formal design. That’s wrong, in so many ways.

In reality, the ethical considerations in situations like SJSU are both broader and narrower. Such activities are unethical if the treatment received by either of the test groups is unethical, end of story. If the treatment of the Udacity students inside the experiment is unethical, it would likely be unethical had no experiment framed it — the fact that we’re tracking outcomes has little to do with it. Likewise, if the use of Udacity for this purpose is an acceptable policy option outside of an experiment, then using random assignment to allocate it is ethically neutral.

To my ears, the phrase “experimented on” confuses that issue by imposing a particular set of ethical concerns that only exist once we decide to track outcomes, or use random assignment to allocate limited resources. So please — argue whether offering such courses as educational alternatives is ethical, and debate whether experimentation that tends to target those alternatives at poorer schools is socially just. But let’s not create the impression that it’s the presence of the experiment that makes these solutions ethically dubious.


Thrun Enters Burgeoning Sieve Market

I can’t read much of the recent piece of Thrun hagiography without wanting to do bodily harm to myself, so the following analysis might miss some of the article’s subtlety. I’ve tried to push myself to read it fully, and I really just can’t. From the photo up top of Thrun in what looks to be a 1973 Swedish cycling film, to the URL (“uphill climb”, get it?), to the vast research incompetence of the unbelievably compromised reporter who wrote it, every paragraph reminds us that Fast Company and other such publications exist as a sort of Pravda for the Valley set. With apologies, of course, to Pravda.

But if I read the article right, Sebastian Thrun, a man who slaved a full hour over a lesson he had to correct in between displays of physical prowess, is done with the traditional higher education market.

But for Thrun, who had been wrestling over who Udacity’s ideal students should be, the results were not a failure; they were clarifying. “We were initially torn between collaborating with universities and working outside the world of college,” Thrun tells me. The San Jose State pilot offered the answer. “These were students from difficult neighborhoods, without good access to computers, and with all kinds of challenges in their lives,” he says. “It’s a group for which this medium is not a good fit.”

What is the answer? Move to a market segment where innovator-preneurs are free to innovator-preneuriate. Here’s one of the new classes, taught by educator-preneur Chris Wilson:

If Wilson seems slightly unprofessional as an educator, that’s because his only formal teaching credential is as an assistant scuba-diving instructor. Wilson works at Google as a developer advocate in the company’s Chrome division. His class was conceived, and paid for, by Google as a way to attract developers to its platforms. Over the past year, Udacity has recruited a dozen or so companies, including Autodesk, Intuit, Cloudera, Nvidia, 23andMe, and Salesforce.com, which had sent a couple of reps to discuss a forthcoming course on how to best use its application programming interface, or API. The companies pay to produce the classes and pledge to accept the certificates awarded by Udacity for purposes of employment.

There’ll likely be lots of analysis of this article and the change in direction. Here’s my little contribution. Thrun can’t build a bucket that doesn’t leak, so he’s going to sell sieves. I discussed this a bit a year ago in Why We Shouldn’t Talk MOOCs as Meritocracies (graph at top seems broken, sorry):

It’s that central point that I want to deal with, though – that as a society we need only be interested in equality of opportunity, and that the wide disparities of results on display are in fact OK, because they represent the system working its sorting magic. The people that have merit, who put in the work, are succeeding. The people that don’t are not.

I hear this tossed around as an answer to MOOC failure rates, and it scares me a bit. It has taken decades for us to get to a point in higher education and K-12 where we are held accountable for social outcomes. And while there are flaws in the way those outcomes are measured, I know my own institution has undergone a sea change since I attended. We still struggle, occasionally, with faculty who think their job is to thin the class on its way up, but on the whole most faculty are committed to increasing the student success rate. Similarly, my child’s grade school has moved heaven and earth to successfully teach skills to children who would have been abandoned years ago.

Udacity dithered for a bit on whether it would be accountable for student outcomes. Failures at San Jose State put an end to that. The move now is to return to the original idea: high failure rates and dropouts are features, not bugs, because they represent a way to thin pools of applicants for potential employers. Thrun is moving to an area where he is unaccountable, because accountability is hard.

It’s tempting to say good riddance, but I would add just one more thing. Despite giving up on equality of outcomes, Thrun still believes he is in the education business. Fast Company still believes he’s in education. So do a lot of policy makers.

And it’s quite possible to go to a model of education that sees its primary goal as thinning the herd. Such systems have existed in many places throughout history. There is no reason that this version of education can’t come back, and every day we allow Thrun to pretend he is not running from accountability is a day we move closer to such a model. That future involves an “uphill climb” for the people who need our help the most, and I’m hoping we can avoid it.