Mixed-Course Students Are the Norm, Not the Exception

One of the reactions people are having to the new Blackboard report on design findings is that it "doesn't represent online," because many of the students in the findings are taking a mix of online and face-to-face courses, and this, people claim, is not your average online experience. Why, the argument goes, are we pretending that students taking an occasional online course in the midst of a face-to-face experience are representative of online students?

Blackboard is right here. People are wrong.

This idea people have of the "online student" might have been true ten years ago in the U.S. I don't know, because I can't find older data that asks the right questions. But as of 2016, the idea that your median online student only takes online courses is wrong. Most students who take online courses do so as part of a portfolio of online and face-to-face experiences, and at the undergraduate level it's not even that close. In 2013, about two million undergraduates took purely online courses, whereas 2.6 million took an online course together with courses from other modalities (blended, face-to-face, etc.).

The head of that project at Blackboard tells me (via Twitter) that they actually had a variety of students, including some pure online, but the report itself bears out my prediction from 2014: that people would finally take the experience of these mixed-course students seriously.

In fact, the most stunning thing about the Blackboard report might not be its implied admission of current design failure, but the fact that it wrestles with the local online student's experience as a whole, something I don't see nearly enough. Here are the findings:

  • When students take a class online, they make a tacit agreement to a poorer
    experience which undermines their educational self worth.
  • Students perceive online classes as a loophole they can exploit that also
    shortcuts the “real” college experience.
  • Online classes don’t have the familiar reference points of in-person classes
    which can make the courses feel like a minefield of unexpected difficulties.
  • Online students don’t experience social recognition or mutual accountability, so
    online classes end up low priority by default.
  • Students take more pride in the skills they develop to cope with an online class
    than what they learn from it.
  • Online classes neglect the aspects of college that create a lasting perception of
    value.

What shocks some people reading this, I think, is that online students would have face-to-face courses as a comparison or option, or that they would be consciously choosing between online and face-to-face in the course of a semester. But this is the norm now at state universities and community colleges; it’s only a secret to people not in those sorts of environments.

In any case, everyone update your prototypes, please: the mixed-course student (hmmm… should I have gone with "mixed-course" many years ago?) is here to stay. Time to better understand what she is looking for. In the U.S., at least, this is the median experience: time to stop treating it as an anomaly.

“Grit” and Personalization

Jeff Selingo has a helpful summary of some recent research that attempts to add some nuance to our “grit”-crazy times:

“Things like grit and 10,000 hours are mindsets that are very misleading because they are consequences not causes — they are lagging indicators of performance,” said Todd Rose, who is the author of The End of Average, a book that illustrates how averages are flawed in understanding human achievement.

Since September, Ogas and Rose have interviewed three dozen people who have achieved success in their fields, from sommeliers to poker players. “When we spent the time to understand how they got better, each master had his or her own unique path,” Ogas said.

Now, these researchers are interviewing masters, and relying on self-constructed narratives for their research. That seems prone to some distortion.

But it reminds me that grit studies suffer from the same distortion as well. The typical grit-speriment involves giving people a task to complete, and seeing how well they complete it. By definition, the biggest predictor of who successfully completes the task is “grit”, which is largely a measure of whether people stick with tasks. It’s tautological, really.

Don't believe me? Here are questions #2 and #3 on Duckworth's Grit Scale. Both are taken as negative indicators of grit (i.e., the more you agree with them, the less grit you have).

2. New ideas and projects sometimes distract me from previous ones.*

  • Very much like me
  • Mostly like me
  • Somewhat like me
  • Not much like me
  • Not like me at all

3. My interests change from year to year.*

  • Very much like me
  • Mostly like me
  • Somewhat like me
  • Not much like me
  • Not like me at all

If new projects distract you, or you evolve in your interests over time, then you have no grit. And scarily, that’s something we’re supposed to fix.

In contrast, here's a question that identifies the existence of grit; the more you identify with it, the more grit you have:

9. I finish whatever I begin.

  • Very much like me
  • Mostly like me
  • Somewhat like me
  • Not much like me
  • Not like me at all

Now, I know that Duckworth’s research is more nuanced than this. But here’s what I see in a lot of research around grit these days:

  1. Questionnaire: Do you finish things you start, even if they suck?
  2. [Put student in sucky experience]
  3. Result: Students that say they stick with sucky things stick with sucky things!

Slightly better than this (but not by much) is the intervention approach:

  1. Questionnaire: Do you finish things you start, even if they suck?
  2. Intervention: Did you know that only bad and unsuccessful people quit things?
  3. [Put student in sucky experience]
  4. Result: Students told quitting things is a sign of a weak personality quit less!

I'm surprised people don't comment on this more. Traditional psychological experiments tend to be about the completion of contrived and inflexible tasks, because you want to measure the same dependent variable for everyone. So, for example, we can look at the fact that people with more grit stuck with boot camp at higher rates. But given that military life might not be for everyone, surely some of the people who dropped out made better decisions than some of the people who stayed in. Since the dependent variable isn't "success", but rather "not changing goals based on new experience", we find that people in our goal-adherent group don't change goals much. (I'm shocked!)

The whole thing really does seem like one big tautology, an effect of how experiments and institutions are designed instead of any profound insight into human nature. (I wonder if grit is coming up on its own Verbal Behavior moment.)

But let’s move on.

Finding the Unique Path

What the newer researchers found is that most people who mastered something did not get there by a straight path. In effect, a very normal path to success involved quitting an approach that was failing for them.

The example that spoke to me the most was this:

Take the wine connoisseur who spent hours studying for the test to become a master sommelier without success. “Then he realized he was able to recognize wines through his facial reactions when he tasted them. When using this method, he aced the test and spent a fraction of the time studying,” Ogas said.

What we're talking about here is learning, of course. And again, this is not learning styles we are discussing. What this person realized was that the explanations of how to discern wine he had been banging his head against weren't working for him. He needed a different route, a different way into the understanding.

And so instead of persevering, he switched tactics. Is everyone going to be able to use his path to understanding? Probably not. People are different. But some might.

I can’t help but see this through the lens of Choral Explanations. In that article we noticed that the way proficient programmers came to understand things on help sites was *not* by banging their head against the best explanation, but by scanning a number of alternate explanations until something “clicked”.

This differs from the current “gritty textbooks” approach to teaching. In this approach we give the students what we feel is the ONE BEST EXPLANATION of something in a textbook, and have them bang their head against it. If they don’t get it, well, they just have to try harder.

And at the end of forcing students to learn the ONE RIGHT THING the ONE RIGHT WAY, we discover an amazing statistical fact: students who don't have the "grit" to do things this way fail.

But who does this pattern really indict?

 

Communities Need Tools to Protect Themselves From Scale

Rolin Moe points me to the pointlessness that is another study that finds massive, temporary forums are not as engaging as smaller online groups.

This is why I spend zero time publishing academic articles, frankly, besides the obvious Reviewer #2 junk. The state of knowledge among people who have actually run large online communities is so far advanced beyond the research community that most research in this area is more amusing than helpful. Education researchers are the worst offenders, and you’ll notice that paper Rolin points to has barely a citation before 2011, because isn’t that when we discovered massive forums?

Sigh. I’m trying to stay positive, I really am.

That said, I think a lot of this comes down to the fact that people who have run large online communities largely assume everyone is aware of this stuff. We don't have the arrogance to post old knowledge as new insight, and so, Groundhog Day-style, we snooze the "I Got You Babe" alarm clock and loop through the same morning time and time again.

I'm running to a meeting, but I figured I had 10 minutes to drop one of these long-known truths here, one that could have saved people from writing that paper. It was (to my mind) most succinctly expressed by Clay Shirky:

You have to find some way to protect your own users from scale. This doesn’t mean the scale of the whole system can’t grow. But you can’t try to make the system large by taking individual conversations and blowing them up like a balloon; human interaction, many to many interaction, doesn’t blow up like a balloon. It either dissipates, or turns into broadcast, or collapses. So plan for dealing with scale in advance, because it’s going to happen anyway.

This is a really simple observation, and it’s not Shirky’s. Shirky is actually expressing community admin knowledge from 1980s BBS’s that was forgotten in the 1990s.

This has been rediscovered repeatedly throughout history, and you could write a taxonomy of the different techniques we use to protect users from scale. Maybe I will at some point. But I'd beg you to stop reading recent research until you read what actual practitioners have known for years, since the BBS days really. Please?

 

 

New WYSIWII Editor Added to Wikity

One of the ways we kill reusability is through layout and markup. In fact, this was one of the realizations that started me on this path eight years ago. Looking at the practical barriers to remix, it became clear that highly formatted web pages, PowerPoints, and Word documents were not really remixable, because of the work required to strip formatting and rebuild the text in a form and style that suited your course.

What you really wanted was something that had nothing but the bare minimum of semantic tags. And that's when I got infatuated with the idea that Markdown and Textile and other simple markup formats might point the way to greater reusability. These formats allow you to do a few semantic things in human-readable markup:

You can **bold**, _italicize_, and [link](http://example.com). You can

## Make Headings

and

* make a
* list
* of items

and it comes out as clean, semantic HTML.

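To make that concrete, here is a minimal sketch using the Python-Markdown package (one converter among many; Wikity itself runs on WordPress, so this is just an illustration of the idea, not a peek at its internals):

import markdown

source = """You can **bold**, _italicize_, and [link](http://example.com).

## Make Headings

* make a
* list
* of items
"""

# The converter emits nothing but bare semantic tags -- roughly:
# <p>You can <strong>bold</strong>, <em>italicize</em>, and
# <a href="http://example.com">link</a>.</p>
# <h2>Make Headings</h2>
# <ul><li>make a</li><li>list</li><li>of items</li></ul>
print(markdown.markdown(source))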

The things Markdown stops you from doing, however, are specifying font type or size, doing table layouts, or making complicated float-left or z-index rules for your content.

You can add some rules to your platform to deal with these things, but they don't live in your document source code. This makes your document portable in a way very little else is. It also makes it accessible, since you can use these codes without having to use a ribbon bar, or select text, or use complicated key combinations (all of which reduce accessibility). For these same reasons, a Markdown-based system is quick and easy to use on a phone.

So when I started Wikity, I decided to go with Markdown source as the format. And that's worked out for me. The speed with which I can put together a Wikity card makes the WordPress dashboard editor feel like concrete shoes. I've written 924 cards since November, because the always-there Markdown system we have in place makes writing a card as easy as tweeting (and forking one nearly as easy as retweeting). About 20% of those have been posted from my phone, which I would not have thought a possible percentage.

That said, people get intimidated by the blank box. They need the training wheels of the ribbon bar. So after a few botched attempts, I finally found a WYSIWII (What You See Is What It Is) editor for Markdown that could hook up easily to some Wikity events. I think we're going to get the best of both worlds here, but check out this video, and let me know what you think:

 

De-Legitimization


People with broadly similar goals often operate under different theories of change. Some people believe change comes from moving fast and building things, and a lot of it does. Some people think it comes from laying the intellectual and policy foundations that nurture desired results, and that’s true too.

Most massive changes require approaches at a variety of levels. I’m not a fan of the conservative revolution of the 1980s and 1990s in the U.S. but I am a student of it. And what you see if you look back to the 1970s is an unlikely alliance of rabble-rousers, charming demagogues, and eggheaded think-tanks all working in ways that the others don’t necessarily appreciate, but all ultimately pulling in the same direction until the perfect storm is launched with the election of Ronald Reagan.

What we are watching now on the Republican side of this American election is largely the fraying of that alliance. The policy wonks are disgusted by the activists, the activists are disgusted by the policy wonks, the politicians are loathed by everyone. And while this has been true for a while, the new twist is de-legitimization. You saw this in the de-legitimization of Obama (he's not really our president, there is no birth certificate, the election was rigged). But once Frankenstein's monster is put together it doesn't do your bidding for long.

Over time the differences in the theory of how progress gets made began to be seen through the lens of de-legitimization. Republican legislators who believed that a government shutdown was not the best way to overturn Obamacare — well, it wasn't that they had a different theory of change or sense of what was possible. It was that they were corrupted, indistinguishable from the Obama establishment. They'd been bought out, co-opted, charmed by Satan's silver tongue.

The circle of who was considered a legitimate actor began to narrow to only those who compromised nothing, and the pressures of this process intensified the expectations of the base while cutting off the opportunities for compromise that might make true progress possible. The lack of progress then only confirmed the base's feeling that they were being sold out by those above them.

The odd alliances of the past 40 years were held together by winning, and without wins they began to deteriorate. The circles of the legitimate activists shrank as they positioned themselves not only against Obama, but against the establishment, purging first the RINOs (Republicans in Name Only) and then those who refused to adopt obstructionism. The "elites" began to lash out at the base, confused that they were suddenly outside the circle that they themselves had helped draw.

In short, they had created the perfect Perpetual Disillusionment Machine: unrealistic expectations fueled rigidity which sabotaged results which convinced the different actors that there was even more corruption than they had thought, leading to further purges, which led to expectations being even higher, and so on.

I suppose I should get to my point.

The Open Education Movement (if it is a movement) is not the Republican Party of 2016. On that timescale I’d actually place us somewhere around 1978. We’re young. We’re starting to win. It feels like something big may be coming down the road. This is an incredibly good place to be, especially when your mission is making education more responsive to the needs of students and educators, more liberating, and more suited to our networked age.

We all have different theories of change. Unsurprisingly, most people's theories of change support the contention that their unique gifts (whether those are policy analysis and engagement, building things, designing good UX, or promoting the heck out of existing technologies) are the lynchpin of the movement. It's funny how it works out like that!

And so we argue over where we should spend our effort. Why? Because there are limited resources. You can put a builder or a thinker in a keynote. You can fund open textbooks or open pedagogy. We notice that some parts of the movement get more press and some get more money. We notice that open textbooks are taking off while open pedagogy is still mostly below the radar. We get upset, maybe, at these divisions. And all this is good and right. You can operate for decades like this and most movements worth anything do.

Where it falls apart is at that line of legitimacy. You have to assume your allies have good intentions and are being thoughtful and reflective about their practice. You have to treat differences in approach as differences in personal theories of change, not as tests of moral fiber. That means engaging respectfully with the person's underlying assumptions instead of challenging their motives or character. And not only engaging with those assumptions, but doing it with the understanding that they have thought as long and hard about their chosen path as you have about yours. I know a lot of people in Open Education. Navel-gazing is our national sport; I don't think we're in danger of being too un-reflective anytime soon.

At the risk of offending nearly everyone in the community and ending up horribly alone let me stop sub-blogging for a moment and name names. Things that Jim Groom has tweeted in the past couple weeks have implied that George Siemens has engaged in ethically dubious action by speaking to people in China about his very human and anti-authoritarian take on analytics (Release the Transcripts, George!). George responded by writing some lines in an otherwise lovely tribute to Gardner Campbell that were taken to imply that Jim Groom’s particular knack at generating enthusiasm around difficult to understand concepts was really a form of self-aggrandizing limelight stealing. (UPDATE: George says in comments that he had no particular individual in mind)

I can't say this was what either intended, but that's how most people read each of them. And it's indicative of a tension that's been brewing for a few years in the community, and seems to be boiling over more and more frequently.

It’s fun to believe that the activity we engage in is the center of the universe or the lynchpin of the movement. I “know”, for example, that Wikity is going to change EVERYTHING because all you peons are thinking TOO SMALL. I have to “know” that, because that precious delusion helps me rationalize the Saturdays I spend at Starbucks trying to get my spaghetti PHP code to work when I’d rather be home making music with the new Reason 9 beta. I have to delude myself, at least a little bit, to get it done. We all do.

And that’s fine.

But we don’t have to bring that delusion into our work together, and we certainly don’t have to see people working problems from a different angle as ethically dubious or narcissistic agents. We can’t call allies sell-outs and then wonder why our movement is so ineffectual or fractured. And we can’t treat people racing ahead as bomb-throwing proles. Because the truth is that none of us is the center of this thing. We need all the talents, we need the multiple approaches. We need people who make change possible, and we need people who provide the intellectual and policy substrate to make that change last.

In closing: Gardner Campbell is awesome, and I love to see him getting his due; everyone else is a genius too; if you don't like George's list of people, promote your own list, because a lot of people don't get their due in this industry; but for the love of God, stop trying to attribute different tactics among allies to questionable ethics or self-interested motives.

Also, if you need a boost of idealism, read this extended Washington Post article on how Aerosmith and Run-D.M.C. recorded “Walk This Way” and changed what got played on the radio forever. I’ll let you draw your own analogies.

———

UPDATE: As noted above, George says he didn't intend to single out Jim. I take him at his word there. I do think, however, that the underlying story here is still the same. These flare-ups happen against a background of tension that seems to have been growing in the past few years (just look at the tenor of Open Ed year over year).

The solution is not that fucking hard. Assume good faith. Attack people’s theory of how things change, and even their actions, but stay away from motives and character unless you are sure you know the motives (you don’t) or truly believe their character is the root issue (it probably isn’t).

And yes, implied counts as much as said, so cut a broad swath.

 

 

A Quora for Open Educational Resources?

Quick note: after writing the last post on choral explanations I've gotten more deeply into Quora. And while there are a number of things that would have to change to make a Quora-style site a viable OER collaboration tool, the base interaction is amazingly on target for how collaborative OER repositories should work.

Again, I hasten to say that it is not an out-of-the-box solution for OER at the moment. But it comes closer than any product I've seen. And a lot of the genius is in the little UX details.

If you're involved in open educational resources, I'd highly encourage you to explore the site. Take a day: set up a profile, fill out some areas of expertise, follow a few questions, and pose a question of your own to get a sense of the flow of the site. And then if you want to talk implications for faculty- and student-produced OER, let me know, and we'll chat.

I'll also post some of my thoughts on this site in the coming weeks. But use the site; you won't regret it.

 

Choral Explanations

I mentioned in a recent post that the collaborative web is moving away from the “one best resource” model, but didn’t go into much detail on that point. I’d like to talk a bit more about that, and hopefully relate it to newer models of Open Educational Resource (OER) use and courseware design.

When people think about “non-academic” collaborative educational resources on the web, they tend to think about traditional wiki sites such as Wikipedia, WikiHow, etc. In places like these people work together to produce the one best explanation of a topic.

As readers here know, I’m a big fan of wiki, and believe that understanding the culture of wiki is crucial to understanding the potential of OER. But here’s something that many readers may not know: Wikipedia is no longer the undisputed champion of collaborative sites, at least in the English-language world. Stack Exchange, the question and answer (Q&A) site launched just six years ago, has surpassed Wikipedia by some measures. In September 2015, there were 32,025 users who posted five times or more on the Stack Exchange network. Wikipedia, on the other hand, had 29,434 five-time editors (editing five times a month is seen as a “frequent editor” in this formulation).

We also could look at up-and-comer Quora, a site that recently claimed to be a monthly destination for 10% of the people on the web.

And it’s possible that these sites, more than traditional wiki, might point to the future of OER use on the web, and maybe the future of courseware more generally.

Choral Explanations

This new breed of Q&A site is replacing more traditional wiki as a source of information, through a modality we'll call "choral explanations" (a term based on Ward Cunningham's claim that federated wiki is a "chorus of voices").

These newer sites do not work like older Q&A sites. Older sites (e.g., Yahoo Answers) are essentially transactional: a person poses a question, and the answers below respond to it. When the original poster selects an answer as sufficient or best, the question is closed, and people move on.

These older Q&A sites are simple variants of general forum architecture. They get good results occasionally, but they also have the sort of problems a forum runs into: they tend to produce answers that look more like replies than generalized explanations.

For example, here are some of the bottom answers to the question "How can a person increase their chances in a lottery?"

[screenshot: reply-style answers to the lottery question]

Quora and Stack Exchange turned this process on its head. Instead of envisioning the Q&A site as a single-purpose forum, the new breed of Q&A site sees the model as half-wiki/half-forum. The question asked is like the title of a wiki page; it's not transactional but communal. A question like "How can a person increase their chances in the lottery?" is the place where the community will store their collective knowledge on that point, and it is not owned by the person who asked the question.

On Quora, in fact, the question can be (and often is) edited by the community for clarity, and on Stack Exchange posters who pose badly formulated questions are pushed by moderators to reformulate their question in ways more beneficial to the site. Duplicate questions are shut down, just as duplicate wiki pages would be. The original poster of the question has no more power than any other user to rate specific answers more useful than others or to close the question. And as with wiki, answers posted are meant to be complete answers, not lazy responses to the original poster. Each answer is self-sufficient (a pattern I have termed elsewhere as "hospitable").

Posting a question on these sites is really not about starting a conversation at all. It’s saying “Let’s gather our community knowledge on this particular issue,” just as one might do with wiki.
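If it helps, here is a rough data-model sketch of the difference (the class names are mine, not any site's actual schema): in the communal model the question behaves like a wiki page name, and no special accepting or closing power belongs to the asker.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Answer:
    author: str
    body: str      # meant to be a self-sufficient explanation, not a reply
    votes: int = 0

@dataclass
class CommunalQuestion:
    title: str                                    # acts like a wiki page name
    answers: List[Answer] = field(default_factory=list)
    # note: no "owner" field -- the asker cannot accept or close anything

    def add_answer(self, author: str, body: str) -> None:
        self.answers.append(Answer(author, body))

    def ranked(self) -> List[Answer]:
        # the community, not the original poster, surfaces the best passes
        return sorted(self.answers, key=lambda a: a.votes, reverse=True)

Nothing fancy there; the point is only that the question is a container the community fills, not a message the asker owns.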

Unlike wiki, however, individual control of writing is preserved, and multiple unique passes at a subject are appreciated. And big questions get a lot of passes. Here’s a snapshot of a few of the sixty-eight responses to Quora’s question of why many physicists believe in a multiverse.

[screenshot: a few of the sixty-eight answers to the multiverse question]

As you can see, people get into it at a level that often exceeds what one can find on wiki. Yet each response takes a different approach to providing an answer. As you read multiple responses some click with you, and some don’t. Some are above your head, and some ridiculously simplified. Some exercise metaphorical thinking, others dive into math.

Here’s the beginning of an answer to the multiverse question that I particularly love, because it starts with the big question of why physicists would imagine a more complex world than necessary:

It may seem counterintuitive, but subsets are often more complex than the entire set. If you don’t believe me, here’s some everyday analogies. Which would be more surprising to find: a half-kitchen, without the rest of the house; or an entire house?

This is very similar to some problems in linguistics I remember from grad school, and the general point resonates with me. The rest of that explanation, not so much. But combined with other explanations I can start to grasp towards the idea. Here’s a more detailed explanation that is graspable to a beginner:

There is this other situation involving inflation.  It turns out that assuming that the universe expanded rapidly solves a lot of cosmological problem.  The hard part is not to find reasons for the universe to expand, but to find ways of causing it to stop.  Stopping inflation turns out to be a little hard, so one idea is that in our patch of the universe inflation stopped, but there are other parts of the universe where it keeps going.

And from there I can maybe scaffold up to another like this:

Quantum fluctuations in the false vacuum in the first bar results in a region of FV to roll down the hill of the potential energy diagram  giving rise to a local big bang which evolves into a universe.

[diagram from the answer: false vacuum regions across successive time intervals]

The size of each of the FV’s in the second time interval is thus actually the size of the FV in the first time interval (due to inflation). In the next interval each of these regions go on to give birth to another universe when the FV breaks up. Then there are 4 remaining regions, each as big as the first region and thus the process repeats ad infinitum.

These local universes are referred to as “bubble” or “pocket” universes. Note that there is nothing round or regular about these universes, they are in fact highly irregular.

OK, yeah, I'm not there yet. To me it seems to be saying something different than the others, but that's a discrepancy I can work with. The key for me, the reader, is that these "choral explanations"

  1. combine to push me to a deep understanding no single explanation can, and
  2. give me multiple routes into the content

From the production side, choral explanations have benefits as well. Unlike in pure wiki, I can write with a coherent voice, and highlight unique examples, metaphors, or connections to which I have access. More importantly, I don't have to deal with the edit flame wars that are the natural result of trying to push humanity's diverse understanding of a topic into one page. The chorus not only results in a more complete understanding but, properly conceived and executed, encourages more participation as well.

An Enclosed Dancing Floor

Long-time readers of my work will realize that these insights are fundamental to the work of Ward Cunningham on federated wiki and my own work on Wikity. One of the moments I truly got what federated wiki was came when Ward explained how the system used the names of documents to match up different versions of a document.

“Well, hold on,” I said. “What if you and your friends have a document you’re passing around on the Zone of Proximal Development and I have my own independently developed document of the Zone of Proximal Development? The system is going to pull those together and present them as versions of the same thing!”

“Ah,” said Ward, “but I love a good namespace collision!”

What he meant was that the pulling together of these two separate things, regardless of their lack of a shared history, might invite new insights. In these cases we are talking about, the modularity of the topic allows multiple explanations to be tied together in the same space.

It reminds me that the origin of "chorus" is thought by some to derive from the Ancient Greek for "enclosed dancing floor", and although that's just an accident of etymology, I can't help thinking of a chorus as individual agents we push into a bounded space; it's really the bounding of that space, whether through harmony, melody, implied chord progressions, or whatever, that allows us to see both the connectedness and the difference at the same time.

What places like Quora and Stack Exchange and the hundreds of clones that have emerged in the last few years do is work on that balance, to combine the boundedness of wiki with the individuality of personal voice. The result, when it works, is the sort of personalization that matters most.

The Future

I think OER has a lot to learn from this pattern. In this case the "contract" the answer fulfills is defined by the question posed, but I can foresee a future where that contract might be a learning goal.

Additionally, in these examples you see the stack of all the answers right in front of you, but it doesn’t have to be that way. You can imagine either an instructor or recommendation software (or some combination) defining an initial path through the chorus:

[diagram: an initial curated path through a subset of the available explanations]

The key would be that the student at any time could decide they wanted to look at the parallel content, either because they weren’t quite sure they were getting it, or because they were interested in seeing it from another angle:

[diagram: the curated path with fly-outs to the parallel, alternate resources]

I think you could also have some basic types of things. In Wikity, I've found there are a few basic relationships cards can have. There are ideas/theories (e.g., the "problem of redundant security"). There are examples of things which apply the theories (goof-ups with redundant security: nuclear weapons incidents, TSA ID checks). There are activities/assessments that demonstrate the theory in action. And so on. So maybe your fly-out buttons would say something like Show me more:

  • Explanations
  • Examples
  • Activities

And so on. Now, if they didn't want that, they could move on to the next curated step. But the idea of the choral system would be that the student could break out of the curated path to the alternate resources when the garden-path resources were not sufficient.

Ideally, as the student went through their path they would fork, modify, and annotate the materials they reviewed, and if we were very lucky they might link the new stuff they learned to old stuff they had put in their personal learning repository. And in a perfect world, students might produce their own explanations, which would feed back into the system.

In any case, I don't think this is that complex a system, and to some extent this is already what a lot of adaptive systems do with assessment (garden path with deep exercises if the student doesn't pass an assessment). The key here is to move beyond adaptive assessment, where a deep pool of assessments is drawn on, to adaptive explanation, where a deep pool of examples and explanations is available and can be put at the disposal of the student in their quest to better understand a topic.
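To sketch what that might look like in code (all names here are hypothetical, not a spec for any existing system): a learning goal acts as the contract, a curated garden path covers the default route, and the student can pull alternate cards of a given type out of the larger pool at any point.

from dataclasses import dataclass
from typing import List

@dataclass
class Card:
    title: str
    kind: str     # "explanation", "example", or "activity"
    body: str

class ChoralTopic:
    def __init__(self, learning_goal: str):
        self.learning_goal = learning_goal   # the contract each card fulfills
        self.pool: List[Card] = []           # the full chorus for this goal
        self.garden_path: List[Card] = []    # instructor- or software-curated route

    def add(self, card: Card, curated: bool = False) -> None:
        self.pool.append(card)
        if curated:
            self.garden_path.append(card)

    def show_me_more(self, kind: str) -> List[Card]:
        # the fly-out request: alternate cards of one type, beyond the curated path
        return [c for c in self.pool
                if c.kind == kind and c not in self.garden_path]

A student stuck on the default explanation would ask for show_me_more("explanation") or show_me_more("example") and get the rest of the chorus for that same learning goal, instead of just being told to try harder.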

UPDATE: I’ve pulled some of these ideas into a proposal for a simple Choral OER service, here.


Special thanks to Tony Hirst, who reminded me that Stack Overflow is a much simpler way into explaining this concept than Federated Wiki. (Though, for purists, Federated Wiki is still the BEST way).

Speaking of Federated Wiki — honestly, it’s federated wiki that is pointing to the future of the web. But the new Q&A sites are also leading indicators.

Finally, after re-reading this I realize that a lot of this idea is influenced by my time at Cognitive Arts and Roger Schank’s vision for what just-in-time instruction should look like. In particular, ILS and Cognitive Arts systems often used a chorus of experts telling different “war stories” around a central point, and students were allowed to drill down into them or continue on the garden path.

We Have Personalization Backwards

I drive my oldest daughter to high school every day. She goes to a magnet STEM school in the district that's on the campus where I work. I've been brainwashing her into liking indie rock one car ride at a time using carefully planned mix CDs.

Last week she tells me I need to get more Magnetic Fields songs. Why? I ask.

“Physics homework.” she says.

It turns out that there’s a number of principles of physics that she remembers through a complex set of associations she’s developed referencing indie rock songs. I don’t pretend to get them all, but the 69 Love Songs hit “Meaningless” plays an apparently crucial role.

Later on my youngest daughter is asking me about the book Persepolis. The author of that book spends the preface talking about the reasons she wrote it, and how she felt the understanding of her native country of Iran was too narrow, and in a way, too exotic. She tells me that she doesn’t quite get what the author is talking about. After all, there’s a lot of fundamentalism in the early parts of the book — people are really in a revolution in 1978, so what are we getting wrong?

I know that this daughter, a middle schooler, has had some stress about Donald Trump. She has people in her class who like him, and she can’t understand why when he’s so mean. It worries her.

I ask her: if Trump gets elected, how would she feel if everyone assumed all Americans were like Donald Trump? Well, we wouldn't be, she says.

Oh, she says.

When we talk personalization, we tend to talk about targeting. You learn a certain set of things, you get tested, the personalization software finds knowledge gaps and runs you through the explanation that you need. (There are other, related meanings, of which there is a partial taxonomy here).

The idea seems to be that there is a wide variety in what concepts students struggle with, but there is one perfect explanation per concept. Personalization gets the explanation of that concept to the student.

That’s a part of the story, but it’s not even the most important half.

When tutors work with students they do alter what they work on based on student need. They personalize what skills they target, sure. But the big advantage of a tutor is not that they personalize the task, it’s that they personalize the explanation. They look into the eyes of the other person and try to understand what material the student has locked in their head that could be leveraged into new understandings.

If you find yourself teaching people something — anything — you'll see this at work. How many times do you begin with the phrase "So have you heard of X?" There you are, looking for the way into the explanation. It could be from point X, a Magnetic Fields song. Or from point Y, a Trump analogy. For a Trump-supporting indie-rock-hater it's going to be a completely different entry point, and a different explanation.

This gets to my obsession with thinking about Open Educational Resources as explanations and data organized as variants around a namespace. Instead of curating and publishing the supposed "best" explanation of a subject, why not take a git-like approach and let different explanations proliferate? Instead of "read this chapter on Broken Windows Theory," say "Here are five variant explanations of Broken Windows Theory and fifteen related examples: find the ones that work for you, and copy the ones that work best to your own space."
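Here is a toy sketch of that git-like pattern (purely illustrative; it is not how Wikity or federated wiki actually store things): variant explanations accumulate under a shared name, and a learner copies the ones that click into a personal space.

from collections import defaultdict
from typing import Dict, List

commons: Dict[str, List[str]] = defaultdict(list)   # name -> variant explanations
my_space: Dict[str, List[str]] = defaultdict(list)  # one learner's own copies

def publish(name: str, explanation: str) -> None:
    commons[name].append(explanation)

def fork(name: str, index: int) -> None:
    # copy the variant that worked for you; the original stays in the commons
    my_space[name].append(commons[name][index])

publish("Broken Windows Theory", "An explanation built on policing policy examples...")
publish("Broken Windows Theory", "The same idea explained through a music-scene analogy...")
fork("Broken Windows Theory", 1)   # keep the one that clicked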

Over time, what happens? People with backgrounds, needs, abilities, and talents similar to mine curate their favorite explanations and examples to their own spaces, and as I discover these people I discover a hand-crafted curriculum for me, one that makes use of the fact that I have an encyclopedic knowledge of indie music and a background in political theory to teach me new things. The resources you tap into teach you the same things they teach me, but from different starting points.

Accessibility is baked in, as some resources are particularly good for students with visual impairment. Some might even use a student's unique experience of the world as a strength. Can you imagine an explanation for a deaf student that doesn't just work around the disability, but occasionally says something like "You know how, when you're deaf…"

Adult learners might get examples and writings that don't treat them as a 19-year-old. Business majors might get explanations of psychological concepts that apply to business. These modules would be recombinable and remixable into unlimited combinations, and each explanation would be linked to its variants.

This is really the power of OER that has not yet been unleashed.  The problem, I think, has been getting people who haven’t used federated wiki or Wikity to understand what a system of connected copies like this might look like. Thoughts?