Ages ago in MOOCtime there was this media think-nugget going around about the glories of Big Data in MOOCs. It reached its apex in the modestly titled BBC piece “We Can Build the Perfect Teacher”:
One day, Sebastian Thrun ran a simple and surprising experiment on a class of students that changed his ideas about how they were learning.
The students were doing an online course provided by Udacity, an educational organisation that Thrun co-founded in 2011. Thrun and his colleagues split the online students into two groups. One group saw the lesson’s presentation slides in colour, and another got the same material in black and white. Thrun and Udacity then monitored their performance. The outcome? “Test results were much better for the black-and-white version,” Thrun told Technology Review. “That surprised me.”
Why was a black-and-white lesson better than colour? It’s not clear. But what matters is that the data was unequivocal – and crucially it challenged conventional assumptions about teaching, providing the possibility that lessons can be tweaked and improved for students.
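What Thrun describes is a standard two-arm split test. As a sketch of how such a result gets called “unequivocal” (the numbers below are made up for illustration, not Udacity’s data), a two-proportion z-test is one common way to score the split:

```python
import math

def ab_z_test(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: how far apart are the two arms' rates?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled rate under the null hypothesis that the arms are identical
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 1,000 students per arm,
# colour slides 62% pass rate, black-and-white 70%
z = ab_z_test(620, 1000, 700, 1000)
print(round(z, 2))  # a |z| above ~1.96 is "significant" at the 5% level
```

Note what the test does and doesn’t say: it tells you the arms differed for *these* students at *this* moment. Nothing in the arithmetic promises the difference will still be there next semester.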
The data was unequivocal. But was the truth it found durable? I’ve argued before that the Big Data truth of A/B testing is different from the truth of theoretically grounded models. And one of the differences is durability. We saw this with the A/B testing during the Obama campaign, when they thought they had found the Holy Grail of campaign email marketing:
It quickly became clear that a casual tone was usually most effective. “The subject lines that worked best were things you might see in your in-box from other people,” Fallsgraff says. “ ‘Hey’ was probably the best one we had over the duration.” Another blockbuster in June simply read, “I will be outspent.” According to testing data shared with Bloomberg Businessweek, that outperformed 17 other variants and raised more than $2.6 million.
The “magic formula”, right? Well, no:
But these triumphs were fleeting. There was no such thing as the perfect e-mail; every breakthrough had a shelf life. “Eventually the novelty wore off, and we had to go back and retest,” says Showalter.
And today there is news that the “Upworthy effect” — that A/B tested impulse to click on those “This man was assaulted for his beliefs. You won’t believe what he did next.” sort of headlines — is fading:
[Mordecai] lets everyone in on his newest data discovery, which is that descriptive headlines—ones that tell you exactly what the content is—are starting to win out over Upworthy’s signature “curiosity gap” headlines, which tease you by withholding details. (“She Has a Horrifying Story to Tell. Except It Isn’t Actually True. Except It Actually Is True.”) How then, someone asks, have they been getting away with teasing headlines for so long? “Because people weren’t used to it,” says Mordecai.
Now, Upworthy is an amazing organization, and I’m pretty sure they’ll stay ahead of the curve. But they are ahead of the curve precisely because they understand something that many Big Data in Education folks don’t — the truths of A/B testing are not the truths of theory. Thrun either believed or pretended to believe he had discovered something eternal about black-and-white slides and cognition. Which is ridiculous. Because the likelihood is he discovered something about how students largely fed color slides reacted to a slideset strangely reduced to black and white.
Had he scaled that truth up and delivered all slides in black and white, he would have found that color slides were suddenly more effective.
There’s nothing wrong with this. Chasing the opportunities of the moment with materials keyed to the specific set of students in front of you is worthwhile. In fact, it’s more than worthwhile; it’s much of what teaching is *about*. Big Data can help us do that better. But it can only do that if we realize the difference between discovering a process that gets at eternal truths vs. discovering a process that gets at the truth of the moment.