How My “Disarm Hate” Slogan Went Viral

Recently, a slogan I created went viral. Since my area of (day job) expertise is how we use networks to learn and collaborate, I thought I might talk about how that happened, and what its path to fame can tell us about information flow in networks.

Today, I want to just set the story up. Tomorrow I will discuss how what happened undermines most people’s “common sense” understanding of networks, but is supported by current and older research on networked behavior.

The Avatar

When I woke up to the news of the Orlando shooting a little over a week ago, I was horrified like most people at what had happened. But I also have been in online activism long enough to know that the “meaning” of an event often gets hashed out in places like Twitter in the hours after it. I thought I could make a difference there.

I decided I would make an avatar to express that the main story here was not about ISIS. It seemed to me that there was one set of interpretations floating out there that would end up at “ban Muslims” and another set that wouldn’t. And I desperately didn’t want Islamic terrorism to become the primary frame through which we viewed Orlando.

Thinking Things Through: The Two Community Problem

I started out by taking a transparent rainbow flag overlay and putting it over my current avatar (a photo of my face), expressing solidarity with our LGBTQ folks (including my oldest daughter). Except that didn’t work, because it neglected the gun violence message, casting this as solely an event of hatred and hiding the fact that this was also yet another in a chain of high-capacity-magazine assault weapon tragedies.

I thought back to Charleston, the Dylann Roof shooting a year ago. Back then, people casting it as a result of white supremacy (which it was) claimed that the gun control folks were minimizing their message by portraying it as “just” another gun tragedy. People in the gun control movement were upset that they were supposed to stay silent on this issue so that the racism issue could be highlighted, even though the gun control movement relies on making the most of small windows of public outrage after an event like this.

Every image and phrase I came up with had tripwires. I could put “Not One More” or “Stop the Violence” on top of a rainbow flag, but that seemed to equate one of the worst hate crimes in modern American history with school shootings and San Bernardino. I made one that said “Stop the Hate”, but now I was ignoring the gun control issue. It started to feel like I was just going to have to pick a side. It was Charleston all over again.

Then about 90 minutes into brainstorming avatars, it hit me. I had just made a “Stop the Hate” one when I thought of a small tweak that pulled it all together. I showed it to my wife, Nicole.

[Image: the “Disarm Hate” avatar]

“Disarm Hate,” I said. “Is it too cutesy?”

Nicole thought a second. “No, I like it. It says a bunch of things in two short words. It’s clever, but it’s not cute.”

I uploaded it to my Twitter profile, which borked the readability of it a bit. I spent another hour fiddling with font-sizes, drop-shadows, and letter positioning, uploaded the finished version and tweeted out that other people should steal it and make it their avatar.

[Image: the tweet inviting people to steal the avatar, with early adopters’ avatars visible at the bottom]

As you can see from the avatars at the bottom, some people came by almost immediately and grabbed it for their avatar. (Special thanks to Amy Collier in particular, who picked it up and made it her avatar two minutes after I tweeted this.)

People started grabbing the avatar from other people, and tweeting the hashtag #DisarmHate. It felt small, but it still felt good.

Seven Days Later

My daughter was at the Portland gay pride parade yesterday and pictures that had her tagged in them started coming back to Nicole’s Facebook account (I’m not on Facebook). Suddenly she pauses.

“Hey, Mike,” she says, “What was your avatar slogan again?”

“Disarm Hate?”

“I think someone has it on the back of their shirt.”

She showed me the photograph:

[Image: a parade photo with “Disarm Hate” on the back of a shirt]

One of the pictures Katie’s ex-girlfriend took in Portland.


Wait, what? It’s not exactly my avatar, but boy is that familiar…

I went onto Twitter and typed #DisarmHate.

[Image: Twitter search results for #DisarmHate]

And from today…

[Image: another #DisarmHate tweet]

I looked at the news:

[Image: news search results for “Disarm Hate”]

By the way, here’s the news up to the day I coined it — the phrase (as a phrase, and not a random collision of words) simply doesn’t exist in the media:

[Image: news search for “Disarm Hate” before that Sunday, showing no results]

So what the heck happened? How did my tagline go viral? How’d a slogan invented on Sunday become the term of choice for a movement by Wednesday?

It’s probably not what you think.

Tomorrow I’ll talk about how the “organic” model that most people assume creates virality is a load of bunk. (Short summary: the phrase got a huge boost from people whose job it is to find these sorts of things and promote them through the network; organic is just another name for “well-oiled and well-funded advocacy machine”.)

Is “The Web As a Tool For Thought” a Gating Item?

In instructional design “gating items” are items on tests which, if not answered or performed correctly, cause failure of the test as a whole.

As a simple example, imagine a driving test that starts in a parking lot with the car parked. The driving test has a lot of elements — stopping at stop signs, adjusting mirrors, smooth braking, highway merging, etc. These are all important, and rated by weighted points.

But none of these can be tested unless the student driver can release the emergency brake, place the car in reverse, and back out of the initial parking spot. That part of the test may be worth 10% of it, but it forms the gateway to the majority of the test, and if you don’t make it through it, you’re toast.
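
If it helps to see the mechanics, here’s a minimal sketch of a gated scoring rule in code. The item names, weights, and pass mark are all invented for illustration, not taken from any real rubric:

```python
# A minimal sketch of gated scoring: weighted points, except that failing
# any gate item fails the whole test. All names and numbers are made up.
def test_result(scores, weights, gate_items, pass_mark=0.8):
    # If you can't back out of the parking spot, nothing else gets scored.
    if any(scores[item] == 0.0 for item in gate_items):
        return "fail"
    total = sum(scores[i] * weights[i] for i in scores) / sum(weights.values())
    return "pass" if total >= pass_mark else "fail"

scores = {"back_out": 0.0, "stop_signs": 1.0, "merging": 0.9}   # 0.0-1.0 per item
weights = {"back_out": 0.1, "stop_signs": 0.5, "merging": 0.4}  # the 10% gate
print(test_result(scores, weights, gate_items={"back_out"}))    # "fail"
```

The gate item is worth a tenth of the points, but it determines the whole outcome.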

I’ve been thinking about gating items in relation to my work on Wikity. There’s a lot of ideas in Wikity that people don’t get, and they don’t seem to me to be hierarchical for the most part. This isn’t a sort of “you have to learn about averages before you learn about standard deviation” sort of problem. But I’m starting to think that there may be a gating item that is keeping us stuck in the parking lot.

What Wikity Is, in My Mind at Least

Let’s start by talking about what Wikity is, at least in my view.

Wikity encompasses, currently, a lot of ideas counter to our current web narrative. In all cases, it’s not meant to supplant current sorts of activity, but to maybe pull the pendulum back into a better balance. Here’s some of those ideas:

  • Federation, not centralization. Wikity allows, through the magic of forking and portable revision histories, a way for people to work on texts and hypertexts across a network of individually owned sites.
  • Tool for thinking, not expression. Wikity is meant as a way to make you smarter, more empathetic, more aware of complexity and connection. You put stuff on your site not to express something, but because it’s “useful to think with”. By getting away from expression you also get away from the blinders (and occasional ugliness) that being in persuasive mode comes with.
  • Garden, not Stream. The web today is full of disposable speech acts that are not maintained, enriched, or returned to: tweets, Facebook posts, contextually dependent blog posts. Consequently, entering new conversations feels like sifting through the Nixon tapes. Wikity aims to follow the Wiki Way in promoting the act of gardening — of maintaining a database of our current state of knowledge rather than a record of past conversations.
  • Linked ideas and data, not documents. Things like social bookmarking tools, Evernote, Refme, and Hypothes.is act as annotation layers for documents. But the biggest gains in insight come when we deconstruct documents into their component pieces and allow their reassembly into new understandings. Our fetish for documents (and summaries, replies, and annotations of documents) welds important insights and data into a single context. Wikity doesn’t encourage you to annotate documents — it encourages you to mine them and remix them.
  • Connected Copies, not Copies or Links by Reference. We have generally had two ways of thinking about passing value (e.g. text, media, algorithms, calendar data, whatever): we can pass by value (make a copy) or pass by reference (point to a copy maintained elsewhere). We’ve often defaulted to links by reference because of their strengths, but as web URLs deteriorate at ever faster rates, a hybrid mode can solve some of our problems. Connected copies learn from GitHub and federated wiki: they are locally owned copies that know about and point to other copies, allowing for a combination of local control and network effects (see the sketch after this list).
  • A Chorus, not a Collaborative Solo. We tend to think of collaborations as being, at their best, many people tending towards one output. Collaborative software generally follows this model, allowing deviations, forks, track changes and the like, but keeping the root assumption that most deviations will either die or be adopted into the whole. For some things this makes sense, but for others an endless proliferation of variations and different takes is a net positive. Wikity tries to harness this power of proliferation over consolidation.
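
To make the connected copies idea concrete, here’s a minimal sketch of what one might look like as a data structure. The field names and URLs are hypothetical; this is not Wikity’s or federated wiki’s actual format:

```python
# A minimal sketch of a "connected copy": locally owned content that
# remembers where it came from and which sibling copies it knows about.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class ConnectedCopy:
    url: str                                  # where this copy lives
    content: str                              # the locally owned text itself
    copy_of: Optional[str] = None             # the copy this was forked from
    known_copies: Set[str] = field(default_factory=set)  # sibling copies

def fork(source: ConnectedCopy, new_url: str) -> ConnectedCopy:
    """Make a locally owned copy that still knows about and points to others."""
    child = ConnectedCopy(url=new_url, content=source.content,
                          copy_of=source.url,
                          known_copies={source.url} | source.known_copies)
    source.known_copies.add(new_url)  # the source learns of its new sibling
    return child

original = ConnectedCopy("https://example.edu/some-idea", "A page worth forking.")
mine = fork(original, "https://example.org/some-idea")
print(mine.copy_of)  # https://example.edu/some-idea
```

The point is just that the copy is a first-class local object, and the network relationships ride along with it.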

These ideas aren’t mine. They are pulled from giants of our field, people like Ward Cunningham, Jon Udell, Ted Nelson, Vannevar Bush, and Doug Engelbart.

But while they are my entry points into this, most don’t seem to be a great entry point for others. They form, for most people, a confusing collection of unrelated and undesired (or only faintly desired) things.

This is sad, because using Wikity and Federated Wiki has been life-changing for me, giving me a vision of a web that could more effectively deliver on its goal of augmenting human intellect and understanding by rethinking what web writing looks and acts like.

The Web As a Tool for Thought, Not (Just) Conversation

What I’ve come to realize is that while “Web as a tool for thinking, not expression” is not foundational to the other concepts in a normal sense, it acts as a bit of a gate to grasping their relevance. If the web is (just) conversation and collaboration, then

  • Why would you want copies of other people’s stuff on your site?
  • Why would you care about the chorus? (If it happens great, but your job is your solo, right?)
  • Why would you post ideas and data that are not embedded in (and welded to) the argument you wish to make and presumably win?
  • Why would you manage and update past speech acts to be less context-driven (Garden) when you could just make new speech acts for a new context (Stream)?

I think you can probably talk about federation and copies and linked data separately, but it’s difficult to get to those parts of the conversation if the vision of the web is “how do we talk and share things with one another” instead of  “how can this machine and network make me smarter and more empathetic?”

Conversation is one way that can happen. But there are so many other important ways to use networked knowledge to think and feel that aren’t “I’ll say this and then you say that”. In fact, I’d argue that the web at full scale is not particularly *good* at conversation, and our over-reliance on “My site/feed/comment is my voice” as a metaphor is behind a lot of the nastiness we get into.

And as I think about it, it’s not just Wikity/Federated Wiki that struggles with this. Hypothes.is is an annotation platform that could alter the conversational paradigm, but what I see people using it as (mostly) is a form of targeted commenting. In this case, understanding the web as a tool for expression is not gating the adoption of the tool, but may be gating people using it to its full potential.

Jon Udell has recently started to push users towards a new understanding of annotation as something other than comments. And what he says, I think, is interesting:

Annotation looks like a new way to comment on web pages. “It’s like Medium,” I sometimes explain, “you highlight the passage you’re talking about, you write a comment about it, the comment anchors to the passage and displays to its right.” I need to stop saying that, though, because it’s wrong in two ways.

First, annotation isn’t new. In 1968 Doug Engelbart showed a hypertext system that could link to regions within documents. In 1993, NCSA Mosaic implemented the first in a long lineage of modern annotation tools. We pretend that tech innovation races along at breakneck speed. But sometimes it sputters until conditions are right.

Second, annotation isn’t only a form of online discussion. Yes, we can converse more effectively when we refer to selected passages. Yes, such conversation is easier to discover and join when we can link directly to a context that includes the passage and its anchored conversation. But I want to draw attention to a very different use of annotation.

Jon’s absolutely right — it’s really tempting to try to approach annotation as commenting, because that’s a behavior users understand. But the problem is that it’s a gating item — you can’t get to what the tool really is unless you can overcome that initial conception of the web as a self-expression engine. Otherwise you’re just a low-rent Medium.

The first, biggest, and most important step is to get people to think of the web as something bigger than just conversation or expression. Once we do that, the reasons why things like annotation layers, linked data, and federated wiki make sense will become clear.

Until then, we’ll stay stuck in the DMV parking lot.


Stereotype Threat and Police Recruitment

From an interview on the World Economic Forum site (which is surprisingly good), a description of how a small change to an invitation email increased pass rates on a police recruitment exam:

Small, contextual factors can have impacts on people’s performance. In this particular case, there is literature to suggest that exams for particular groups might be seen as a situation where they are less likely to perform at their best. We ran a trial where there was a control group that got the usual email, which was sort of, “Dear Cade, you’ve passed through to the next stage of the recruitment process. We would like you to take this test please. Click here.” Then for another randomly selected group of people, we sent them the same email but changed two things. We made the email slightly friendlier in its tone and added a line that said, “Take two minutes before you do the test to have a think about what becoming a police officer might mean to you and your community.” This was about bringing in this concept of you are part of this process, you are part of the community and becoming a police officer is part of that — trying to break down the barrier that they are not of the police force because it doesn’t look like them.

….

Interestingly, if you were a white candidate, the email had no impact on your pass rate. Sixty percent of white candidates were likely to pass in either condition. But interestingly, it basically closed the attainment gap between white and nonwhite candidates. It increased by 50% their chance of passing that test, just by adding that line and changing the email format. That was an early piece of work that reminded us of the thousands of ways that we could be re-thinking recruitment practices to influence the kind of social outcomes we care about.

There’s a lot to take away from this. The finding they have applied here originally comes from educational research, and the obvious and most important parallel is in how we approach our students in higher education. How often do we provide the sort of positive and supportive environment our at-risk students need?

The larger pattern I see here with design is just how much small things matter. There’s a reason why no major extant community uses out-of-the-box software. If you’re Reddit, Facebook, Instagram, Twitter, etc. and you want to encourage participation, or minimize trolling, or reduce hate speech, you have to have control of the end-to-end experience. Labeling something a “like” will produce one sort of community, and labeling it a “favorite” will produce another.

We get hung up on “ease of use” in software, as if that were the only dimension on which to judge it. But social software architectures must be judged not on ease of use, but on the communities and behaviors they create, from the invite email to the labels on the buttons. If one sentence can make this much difference, imagine what damage your UI choices might be doing to your community.

BTW — I write a lot of stuff like this over the course of the day as I process things on Wikity (though it’s usually shorter). It’s all there, and you might find something interesting. I post this here because it is just too important to leave on my unread wiki, but it’s only on the wiki that you’ll see the connection to the Analytics of Empathy or Reducing Abuse on Twitter.

Predicting the Future

I’m a person that generally doesn’t spend much time predicting the future. I’m more comfortable trying to imagine the possible futures I find desirable, and that’s mostly what I do on this blog, talk about the futures we should strive for.

But two and a half years ago, at the encouragement of the folks at e-Literate, and with the world just coming out of its xMOOC binge, I made some predictions about the future of edtech for e-Literate. I decided to put aside my 10 year visions of the desirable, and just straight up predict what would actually happen.

I spent about a week thinking through all the stuff I talk about and trying to be brutally honest with myself about the future of each item. I literally had a pad where I crossed out most of my most beloved futures. Most things I loved were revealed to be untenable in the short term, due to the structure of the market, the required skills, cultural mismatches, or the lack of a business model.

It was immensely painful. Still, when I was done, a few things survived. They weren’t like most people’s predictions of the time, and in fact ran against most of the major narratives in play as of December 2013.

Here were the predictions. I made three firm predictions under the title “Good Opportunities That Will Be Taken Seriously by the Powers That Be”. I’ll put aside one of these, “Local Online”, as I noted even at the time it was a bit of a cheat: local online was a transition that had already happened; it was just that no one had noticed.

I’ll deal first with my two other major predictions, which ran counter to the narratives of the time.

  • At a time when asynchronous learning was king, I predicted the rebirth of synchronous learning.
  • At a time when Big Data was the rage, I predicted the rise of Small Data.

How’d I do?

Synchronous Online

In a time when the focus was on asynchronous and self-paced learning, I predicted a renaissance of synchronous learning:

Synchronous online is largely dismissed — the sexy stuff is all in programmed, individuated learning these days, and individuated is culturally identified with asynchronous. That’s a mistake.

I went on to describe how the emergence of new tools based on APIs like WebRTC would make possible the merging of traditional synchronous learning sessions with active learning pedagogies, and how this would result in a fast-growing market, as it would address the needs of a huge existing population of students currently underserved. I compared the market for videoconferencing products to where the market was for the LMS on the eve of Canvas entering it: people believed the LMS wars were over, but in fact they had just begun, because Blackboard had treated the LMS as a corporate tool rather than an educational one:

Adobe Connect and Blackboard Collaborate are, I think, in a similar place. They are perfect tools for sales presentations, but they remain education-illiterate products. They don’t help structure interaction in helpful ways. I sincerely doubt that either product has ever asked for the input of experts on classroom discussion on how net affordances might be used to produce better educational interaction, and I doubt there’s all that much more teacher input into the products either. The first people to bother to talk to experts like Stephen Brookfield on what makes good discussion work *pedagogically* and implement net-based features based on that input are going to have a different pitch, a redefined market, and the potential to make a lot of money. For this reason, I suspect we’ll see increasing entrants into this space and increasing prominence of their offerings.

Suggested tag line: “We built research-driven video conferencing built for educators, and that is sadly revolutionary.”

I don’t know if you can remember how unpopular synchronous was in January 2014, but contemporary takes on it ranked it somewhere between Nickelback and Creed as far as coolness.

So where are we today? Well, WebRTC is propelling a billion dollar industry. Blackboard Collaborate got its first refresh in a decade in 2015 (based on a WebRTC purchase they made in November 2014). Minerva, the alt-education darling, released its platform later that year, which was based on synchronous video learning.

And today, we find an extended article in the Chronicle about the surprising new trend in online education: the rebirth of synchronous education, the hottest trend in learning right now. The reasons for it?

What’s giving rise to the renewed interest in more-formalized synchronous courses is that the technology for “high-touch experiences” in real time is getting more sophisticated, says Karen L. Pedersen, chief knowledge officer at the Online Learning Consortium, a nonprofit training and education group. Institutions are catching up to their professors, and tools are now widely available that let professors share whiteboards simultaneously or collect comments and on-the-spot poll results in real time.

The article goes on to explain that what makes the difference is the recent ability of tools to pair traditional synchronous classes with active learning.

I have some ambivalence about where this will go. As mentioned in the intro to this post, these were predictions, not my top desired futures. Opportunities. And opportunities can be perverted. But this was surprisingly on target.

Small Data

At the height of Big Data madness, I predicted the rise of small data products:

Big Data is data so big you can’t fit it in a commercial database. Small Data is the amount of data so small you can fit it in a spreadsheet. Big Data may indeed be the future. But Small Data is the revolution in progress.

Why? Because the two people most able to affect education in any given scenario are the student and the teacher. And the information they need to make decisions has to be grokable to them, and fit with their understanding of the universe.

Small Data was a relatively new term at the time the prediction was made. The Wikipedia page for the term was actually birthed on January 2, 2014, about the same time I was writing the post, and looking back now I only see a smattering of uses of the term in 2013. I was at the time reading the wonderful critiques of Big Data by writers like Michael Feldstein and Audrey Watters and thinking through the question, if not “Big Data” then what?

Then in Spring of 2013 I saw a presentation by the local school district on their use of data. The head of their operation said the most useful data for them had been the “One-F” test. They would just compile the grades of the students in all their classes and look for students that had an F in one subject but A’s and B’s in others. Then they’d go to the student and say — look, you obviously can do the work in other classes, what’s happening here? And they’d go to the teacher and say hey, did you know this student is an A student in their other classes — what is going wrong in this class?

And the reason why it worked, they said, was you could talk about standard deviations or point-biserial correlations all day, but it would never make sense to the people whose actions had to change. But people could understand the “One-F” metric. It wasn’t a p-valued research finding: instead it was a clue, understandable by both teacher and student, that something needed investigating and a bit of guidance on where the problem might be, and how to address it. And that — not research level precision on average behavior — was where the value was.
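
Part of the appeal of a metric like this is that it’s almost trivially simple to compute. Here’s a minimal sketch, assuming a hypothetical table of (student, course, letter grade) rows:

```python
# A minimal sketch of the "One-F" check over a hypothetical grade table.
from collections import defaultdict

def one_f_students(rows):
    """Flag students with exactly one F whose other grades are all A or B."""
    by_student = defaultdict(list)
    for student, _course, grade in rows:
        by_student[student].append(grade)
    return [s for s, grades in by_student.items()
            if grades.count("F") == 1
            and all(g in ("A", "B") for g in grades if g != "F")]

rows = [("Jo", "Math", "A"), ("Jo", "English", "B"), ("Jo", "Biology", "F"),
        ("Sam", "Math", "D"), ("Sam", "English", "F"), ("Sam", "Biology", "C")]
print(one_f_students(rows))  # ['Jo']: clearly can do the work, so what's up?
```

No regression, no p-values; just a clue that both the teacher and the student can read.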

And so it was really Lisa Greseth, the IT head of Vancouver Public Schools at the time, who showed me the way on this. “Small Data” seemed to encompass this idea — it was theory-informed data collection. It was data as a clue for action. And most importantly, it was data that is meant to be understood, in its raw form, by the students and teachers involved.

How’d this prediction go? Pretty well. In the two and a half years since, there’s been an explosion of interest in small data. Here are the first eight results from a Google search on “small data education”:

The Washington Post. May 9, 2016: ‘Big data’ was supposed to fix education. It didn’t. It’s time for ‘small data’

EdWeek. May 16, 2016: Can Small Data Improve K-12 Education? –

InformationWeek. Nov 24, 2015 – McGraw-Hill Education’s chief digital officer has driven the company’s effort to leverage small data to improve student outcomes.

Helix Education. Oct 22, 2015: Big and Small Data are Key to Retention

Portland Community College. Mar 9, 2015: Distance Education: Using small data

Pando Daily. March 9, 2014: The power of small data in education

Center for Digital Education. Jul 1, 2015: 7 Benefits of Using Small Data In K-12 Schools

Times Higher Education Journal. Jul 1, 2015: The Power of Small Data

The prediction, of course, was about the introduction of “small data products”, and there’s been growth there too. McGraw-Hill, for example, is pushing a small-data focus in its Connect Insight series. In many ways, this is a return to a data focus that existed before Big Data madness, a focus on small, teacher-grokable data points collected for a specific purpose. And though McGraw-Hill calls it “Small Data” explicitly, it is the direction that most products seem to be re-exploring after the crash of Big Data hype.

By the way, I still believe Big Data has a place, applied to the right problems. It just wasn’t the place people were predicting two and a half years ago. Maybe I’ll save thoughts about that for a future prediction post.

Other Predictions

I had a category for things that I thought would develop but mostly remain under the radar, without seeing broad institutional adoption. I put the return of Education 2.0 (blogs, wikis, etc.) in there, as well as “privacy products”. I think I was more or less right on those issues. In Education 2.0 we’ve seen real growth, particularly with Reclaim Hosting’s efforts, but it’s still off the institutional mainstream for the moment. On privacy products there has been less development than even I thought there would be, though the recent development of the Brave browser and the increasing use of personal VPNs provide some useful data points.

I did make the brave, and completely wrong, prediction that Facebook had peaked, thinking that many of its features could be supplanted by OS-level notification systems. Looking back on this prediction I learned something about making predictions: don’t make predictions about things you don’t use, at least not without observing closely how other people use them. My use of Facebook at that time was limited to a quasi-monthly visit.

So lesson learned there? In the time since, as I’ve worked on Wikity and Federated Wiki, I’ve come to a greater understanding of what Facebook provides people, almost invisibly. And I have to say, paired with my prediction from 2014, it has really demonstrated to me that a lot of what people build to “replace Facebook” (including things I build) doesn’t really replace what Facebook provides. If you look at Facebook and the rise of Slack, you start to realize that maybe centralized control of these platforms is key to the sorts of experiences people crave. It may be that you can’t make a federated Facebook any more than you can make an alcohol-free bar.

I’m not saying that many things can’t be federated. But I have a new appreciation for why they aren’t. (And, as expected, it’s probably this failure of prediction that is most useful to me at this point).

Anti-Predictions

Finally I made some anti-predictions about hyped trends of the time that I believed would go nowhere. Here I predicted that Gamification and Mobile Learning would crash and burn.

I turned out to be largely correct. Gamification seems to be entering its death throes, as it is really just rehashed behaviorism, with the dubious distinction of being even less self-reflective than behaviorism. (The “good” parts of “gamification” are really just learning theory — scaffolding, feedback, and spiral designs come from Vygotsky, Bruner, and others, not Atari).

More interestingly, my prediction about mobile came out more correct than I imagined. As predicted, we’ve gone through the iPad optimism of 2013 and 2014 to find that, unsurprisingly, learning and creating are not really mobile endeavors. Deep learning, it turns out, tends to be an activity that benefits from focused time and rich input systems. (We tried to tell you). So as we watch the iPad one-to-one programs crash and burn, let me revise my previous claim that Education Analysts Have Predicted Seven of the Last Zero Mobile Revolutions.

They’ve now predicted eight of them.

Conclusion

I don’t know. I feel like this is a pretty good record. The Facebook prediction was arrogant and misplaced. I am seriously contemplating that error at this point, hoping for some insight there.

Most of the rest of the predictions were arrogant as well, but came true anyway.

What was behind the right and wrong predictions? There’s no overall trend, but the Facebook failure is instructive when put next to the other predictions.

The key in all these things is to try to truly understand where the value in the current system is, as well as what the true pain points are, and then to imagine technological solutions that address the true pain points without taking away the existing value of the system.

  • Synchronous Online manages to preserve valuable elements of synchronous learning while addressing its main problem: feelings of isolation and disengagement.
  • Small Data builds on the strengths of a system built on the intuitions of the teacher, instead of the data analyst, and works backwards from their needs as a consumer of data.

Things that don’t take off tend to misunderstand central features as flaws. The iPad misunderstood the rich input systems of the laptop as a hindrance rather than a benefit. And its “benefit” of being a “personal” device didn’t map to a classroom where devices weren’t personal, but constantly swapped between students and classes.

Likewise, the centralization of Facebook turns out to be one of its great features: people are actually craving more filters, not fewer, for the information they consume, and they’d prefer to stay in a standard environment rather than venture out onto the web for most things. Plus, in the two and a half years since I wrote this, we’ve seen what has happened to the notifications panel on phones: it’s a Tragedy of the Commons. With every app now pushing messages into the notifications panel, I can’t go to it without finding it littered with thirty or forty ridiculously mundane “updates” from 18 different apps, all clamoring for my attention. Facebook’s centralized, privatized ownership of its newsfeed allows it to reduce noise in a way that federated systems have trouble doing.

The biggest blindspot tends to be our own experience. I was able to see the mobile mismatch because it matched my own experience as a learner. I couldn’t see the strength of Facebook because I don’t *want* the world to like the things about Facebook that it so obviously likes, and I never should have predicted anything about it until I understood its present value to people.

On a personal note, going back through this reminds me that I should probably try to predict more. My tendency is towards futurism, unfettered by reality, and I remember how painful the process of trying to truly predict things was. But truth is, if you can dredge up some ruthless honesty, you can see what the likely routes forward are. That’s not quite as fun as advocating what should be, but it’s probably a useful skill to develop.

Plans vs. Planning


Dan Meyer quotes Eisenhower: “In preparing for battle I have always found that plans are useless, but planning is indispensable.”

It’s likely that Eisenhower said the above lines, but it’s actually Richard Nixon who reported them. Nixon, in “Crisis 4” of his “Six Crises”, writes about his Kitchen Debate with Soviet leader Nikita Khrushchev, and it’s from there alone that Ike’s saying enters the written record.

Nixon’s prose in that article is a bit self-congratulatory, but the point of quoting Ike is clear. Nixon did a lot of preparation before going to Russia in 1959, learning about Soviet aims, the psychology of Khrushchev himself, and the larger cultural context. He used that to put together a plan for how he would approach the leader, how he would debate him, and the points he would try to make.


Within 30 seconds of meeting Khrushchev, the plan was out the window. Khrushchev made a practice of getting opponents off-balance by being unpredictable, and when Nixon stopped by his office for a simple meet-and-greet before the debate Khrushchev viciously dug into him about Captive Nations Week, a week recently signed into law that called for prayer for countries held captive behind the Iron Curtain. How could he defend such a thing?

Nixon makes the point that, while the plan itself was knocked off-balance by this unexpected offensive, the process of planning allowed him to fluidly adjust his approach. He had learned much about Khrushchev’s character, why the Soviets were meeting them, and what they were attempting to achieve. Knowing, for example, that Khrushchev’s working assumption was not that the U.S. wanted war, but that the U.S. was soft and would fold under aggression allowed him to calibrate his responses correctly. Understanding the materialist underpinnings of  Soviet philosophy allowed him to see the ideological frame to which he had to bend his arguments.

As Nixon elaborates:

It was obvious that no plans could have possibly been devised to cope with such unpredictable conduct. Yet without months of planning … I might have been completely dismayed and routed by his unexpected assaults.

By making a plan, you get an understanding. And it’s the understanding, not the plan, that is the prime asset, as it allows you to respond fluidly to rapidly evolving situations.

While Nixon’s parallel to battle planning is a bit overblown, it’s instructive. Imagine if Eisenhower had been given a plan by his best strategist and implemented it without having been involved in its development, and without having developed all the background understanding and knowledge that one gets in making a plan. How well would that go?

Lesson Plans and Other OER

And here’s where we get to lesson plans and to issues around OER in general. The question Dan and others have been grappling with is why lesson plan reuse is low. Why, in a world full of digital resources, do teachers still construct their own material from scratch?

My guess is the answer to this depends on the teacher, the subject, and the type of material being used. There are materials, for example, where the main barrier to reuse is technological. Materials where the main barriers are legal. Or where the problem is the ever over-hyped findability gap.

But for at least a certain class of materials — lesson plans for secondary teachers — Dan seems to be coming to the conclusion that we have a bit of an Ike problem. If we get the plans without carefully thinking about what we want to achieve, what our students already know, and what problems we’re likely to encounter, we’ve lost the most important part of the process.

This is not to say that there is no place for sharing of lesson plans. Or for the sharing of other OER. But it is to say we have to approach the construction and sharing of OER understanding that there may be certain processes in course construction that we just can’t short-circuit. To the extent we develop materials and sharing architectures for faculty they need to make that planning process more effective, not simply bypass it.


The Textbook Duet

Our current process for provisioning courses with OER looks like this:

  1. Identify course content needs
  2. Find materials that support those needs
  3. Choose the best material for each need
  4. Pull those materials into a coherent whole

In practice, items two and four take an awful lot of time, so many people punt and get an open textbook or get a course pre-assembled.

That’s a bit of a shame, because textbooks do not provide Choral Explanations. They provide the explanation of concepts that works for the average reader. And is that what we really want?

In reality, however, the slack is picked up by the teacher. The course becomes a duet between the instructor and the textbook. When we wonder what students think they are getting out of lecture, this is maybe one of the things. They are getting the textbook concept explained to them in a slightly different way.

I’m not saying lecture is good, mind you. There’s a lot of evidence that it may be a lousy way to do this. But getting two explanations of the same concept over time has been shown to be an efficient way to increase understanding. (Robert Bjork did some work on this, though I can’t find the cite at the moment.)

What if what some students are seeking in lecture is just a different version of this? How might we think about lecture differently (and find alternatives for lecture) if this is true?


Blogging as Multi-Track

From @brackenm:

[Images: two tweets from @brackenm]

The core idea of Choral Explanations is that we benefit more from multiple parallel explanations than the “one best explanation”, and that educational materials should utilize this pattern more fully. As I’ve argued, choral explanations are how people tend to reach mastery of difficult areas, whether they are a programmer on Stack Exchange or a sommelier trying to find another route into recognizing wines.

Bracken reminds me that the chorus need not be composed of different voices, necessarily. One of the patterns of blogging is to repeatedly explain the same concepts in different ways through different examples. And one of the joys of reading blogs is that suddenly one day a post just clicks, and you get the idea someone has been trying to explain forever, and you get it in a deep and profound way.

It’s tempting to think, after you read that blog post that helps you get it, “Well, if only you had explained it this way before! It’s so much simpler than you’ve been making it!”

But that’s a wrong reaction, for two reasons. First, while that explanation worked for you, other explanations have worked for other people. This is my point about personalization: since we all come into learning contexts with vastly different backgrounds and interests, the most important personalization provides different routes into the same concepts.

But the other reason it’s wrong is your understanding of that article that “clicked” is likely path-dependent: had you read that article without reading the others, you probably wouldn’t have gotten it. To overextend the metaphor, this is because we teach our students notes, but expect them to understand chords, and it’s often only by the interaction of multiple examples and explanations that the underlying structure of the idea becomes evident.


A Reminder: What Your Students Do Is Hard

The most important thing I do as an edtecher is try to teach myself things outside my comfort zone. When you get into your thirties and forties (assuming you’re out of your PhD program) you get pretty ensconced in a discipline, and are able to leverage previous knowledge to acquire new related knowledge. This is a very different process from what novices go through, and it can warp your vision of what education should look like.

So this weekend I decided I wanted to take a few hours to understand cell division. So I got me a textbook, and here is a relatively random paragraph from the cell division chapter:

The key to progress past the restriction point is a protein called RB (retinoblastoma protein, named for a childhood cancer in which it was first discovered). RB normally inhibits the cell cycle. But when RB is phosphorylated by a protein kinase, it becomes inactive and no longer blocks the restriction point, and the cell progresses past G1 into S phase (Note the double negative here—a cell function happens because an inhibitor is inhibited! This phenomenon is rather common in the control of cellular metabolism.) The enzymes that catalyze RB phosphorylation are Cdk4 and Cdk2. So what is needed for a cell to pass the restriction point is the synthesis of cyclins D and E, which activate Cdk 4 and 2, which phosphorylate RB, which becomes inactivated.

This is an introductory textbook for a college class. If you are a biologist, that paragraph probably looks like this one does to edtech folks:

According to Anderson and Dron (2011), during this time, theories of learning have shifted from ‘cognitive-behaviourist’ to ‘social constructivist’ to ‘connectivist’ pedagogical models. A cognitive-behaviourist model sees learning as something that is ‘acquired’ through a sequence of linear stages leading to a predefined goal, with periodic reinforcement of learned constructs, knowledge and behaviour. Social constructivist models view learning that is directly affected by the student’s social environment, context and relationships (Greenhow et al, 2009). Students do not merely passively consume knowledge in an isolated manner; instead, students actively create and integrate this new knowledge with their existing knowledge.

And vice-versa for the biologist I suppose (though honestly the biologist would probably have an easier time).  We start to think that learning feels like it does in our discipline, when really it feels like reading a sentence like:

The enzymes that catalyze RB phosphorylation are Cdk4 and Cdk2. So what is needed for a cell to pass the restriction point is the synthesis of cyclins D and E, which activate Cdk 4 and 2, which phosphorylate RB, which becomes inactivated.

What are some mistakes we make based on this? The biggest one is that we underestimate how important it is to know facts. You can’t read the sentence above because you didn’t read (or pay attention while reading) the earlier chapters, which talked about cyclins and phosphorylation. And this issue — bootstrapping knowledge — is one of the pressing reasons why we need textbooks, references, and sequenced material.

When people say ludicrous things like “we don’t need to remember things any more because we have Google!” you can assume they haven’t tried to learn anything outside their domain for a long time.

The other thing this reminds me of is what I discussed yesterday — you need a frame of reference for this to be anything but gobbledygook. But the problem with students (as opposed to experts) is they don’t really have a library of framing contexts to make this meaningful. This is one of the reasons why projects or thematic threads can really help comprehension. Again, using my interest in cancer, we look at a line like

The enzymes that catalyze RB phosphorylation are Cdk4 and Cdk2. So what is needed for a cell to pass the restriction point is the synthesis of cyclins D and E, which activate Cdk 4 and 2, which phosphorylate RB, which becomes inactivated.

And think: huh, so does cancer involve increased Cdk4 and Cdk2? The thing about that question is maybe it does, and maybe it doesn’t, but the process of formulating that question and answering it gives me a frame in which the knowledge becomes sticky, and in which I’m pushed towards a deeper understanding.

In any case, this is really meant to encourage people who study learning to keep pushing themselves out of their comfort zones. It’s only by remembering what it is like to struggle that you are going to be able to build environments that address that struggle.

Prism: A Proposal for a Choral Approach to OER

If you’ve read Choral Explanations, you know that I’ve proposed a new (well, as much as anything is ever new) approach to OER use and production that is based on trends in both wiki and question and answer sites. (If  you haven’t read Choral Explanations, you can read it here).

In the time since I wrote that piece, I’ve kept coming back to it. And the more I look at it, the more I see a simple yet powerful idea at the center of it. I could be wrong, but I’m starting to feel there’s a huge opportunity here that is ripe for the taking. And unlike other stuff I’ve worked on (Wikity, federated wiki) it’s not that hard to build software around it. This post is going to attempt to outline how such software might work, and how it could dramatically change how we approach OER.

Grandiose enough? 😉

Choral Explanations

Those who have not read Choral Explanations should go read it; this article assumes you have. But for those who have already read it (and those who will ignore this warning anyway) here’s the quick review of what I proposed in that piece.

In general, the way we often think of resources in a class is that we want to select the perfect piece to explain a concept to our students. So an OER provisioning process looks something like:

  • Identify course goals, objectives, assessments
  • Use course goals etc., to determine necessary content
  • Find best piece of open content for each content need
  • Sew these OER together in an LMS or other software, sand down rough edges, publish.

What this misses is that people are often helped by having multiple explanations of concepts and issues available to them. The trend, for example, in question and answer sites over the last six years has been towards something I call (with a nod to Ward Cunningham) “the chorus”. These modern Q&A sites are based on the idea that there may be better and worse answers for individuals, but we benefit when we have access to a wide range of explanations and examples, because the explanation that works for one person may not work for another. (I cover this issue explicitly in my e-Literate piece We Have Personalization Backwards.)

Again, this is meant only as a recap. Please read Choral Explanations for more detail on this. (In particular, you’ll need to understand what separates choral approaches from traditional forum approaches).

OER and the Chorus

I have an expansive, mind-bending idea of where these efforts could go. But I want to tone it down to show you how simple Choral OER could be. Because really, it wouldn’t be hard at all. (Thanks to Lumen’s Bracken Mosbacker, who talked with me over Twitter DM last night, for talking me down in scale).

So imagine we create a Q&A style website that is designed around some simple question types:

  • What are some reasons why understanding Thing X is important?
  • What are some examples of Thing X in action?
  • What problems does Thing X solve?
  • What are some Y’s associated with Thing X?
  • What do we still not know about Thing X?

So given something like “unconscious bias” in a sociology of race module:

  • What are some of the reasons understanding unconscious bias is important?
  • What are some prominent or common examples of unconscious bias having negative effects?
  • What do we still not know about unconscious bias?

Multiple teachers take a stab at these, and they use the “choral” pattern we discussed in Choral Explanations. So “what are some examples” might have five short posts under it, where a variety of teachers describe examples:

  • One discusses recent findings in hiring patterns.
  • Another describes a situation in Iowa schools where 98% of teachers are white, and white teachers are found to push black students less forcefully to excel.
  • A third recalls a story in the Gladwell book “Blink” where a cop makes a split second decision and shoots an unarmed man. It’s later shown that cops think a fraction of a second less before pulling the trigger on black suspects than they do on whites — even if the officer is black herself.

You can link directly to these questions from your course, or spend a fun day answering them (I’m not being sarcastic — these sorts of things are fun if you like the subject; much more fun than writing a textbook!).

In my dream world, sites like this start becoming the protoplasm out of which OER gets made, by both students and teachers. But here’s where Bracken thoughtfully slows me down — how could we leverage existing OER to build this community in a smaller way?

Choral Sidebars

Here’s a simpler case.

Take a textbook on Biology. Let’s take a chapter on mitosis, since I realized yesterday I know stunningly little about mitosis, and tried to learn a bit more. Here’s a textbook page from Lumen’s online version of OpenStax’s Introductory Biology text:

[Image: textbook page on mitosis from Lumen’s online version of the OpenStax Introductory Biology text]

OK, let’s mock this up a bit. Let’s replace the media link with something we’ll call “Insights and Perspectives”, which links to our imaginary software product “Prism”:

[Image: mockup of the textbook page with “Insights and Perspectives” links]

When you click one of these links as a student, you get the choral explanation. Each explanation is a different route to understanding, complete in itself, but along with the other explanations forms a sort of 360 degree view of the question.

For instance, here are some entries on Why Mitosis Matters, which I’ve mocked up based on a simplified version of Quora’s design, shamelessly rebranded (we’re just going for general effect here).

[Image: mockup of “Why Mitosis Matters” answers in Prism]

This is a quick mockup, but you see how it works. Instructors (and perhaps students eventually as well) answer these questions, but not in wiki form. Instead, the array of answers provides multiple ways into the material for the student.

These explanations are rated up by other answer authors (for accuracy) and by students (for usefulness). The ideal answer is accurate and useful to students, but the rating only changes the position of the answer in the scroll.
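
Under the hood, this could be very simple. Here’s a minimal sketch of the rating model; the field names are invented, and exactly how the two vote types combine into an ordering is my assumption:

```python
# A minimal sketch of Prism's two-axis rating. Ratings reorder answers;
# they never hide them.
from dataclasses import dataclass
from typing import List

@dataclass
class Answer:
    author: str
    text: str
    accuracy_votes: int = 0     # upvotes from other answer authors
    usefulness_votes: int = 0   # upvotes from students

def scroll_order(answers: List[Answer]) -> List[Answer]:
    """Sort for display: most accurate first, usefulness breaking ties."""
    return sorted(answers,
                  key=lambda a: (a.accuracy_votes, a.usefulness_votes),
                  reverse=True)
```

Every answer stays visible somewhere in the scroll, which is the point: the chorus is the feature.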

I think this mitosis example, although contrived, starts to show the sort of areas where this approach could excel. While I was trying to learn mitosis last night, I was just zoning out as I read (much like a student). I didn’t have a route in, a way to connect previous knowledge to new knowledge to make the slog easier or more engaging.

I found these two examples gave me a way in. Chemotherapy, because I lost my father to cancer a number of years back now, and I remember what those drugs did to him, and I couldn’t stop wondering why they couldn’t make a drug that didn’t make you nauseous, or make you lose your hair, or make you such a shell of your former self in your last days. And it turns out the answer is not quite “Oh, well, the drug is poisoning you but poisoning the cancer faster,” which is what everybody tells you. It turns out a much more precise answer deals with mitosis.

So one reason why we want to understand mitosis better is we could make drugs that work better or make people suffer less.

Another thing I found engaging was the issue of Alzheimer’s. I actually don’t have personal experience with Alzheimer’s, apart from a great aunt who died when I was young. But I’m a bit of a political junkie, and I remember the whole stem cell debate and its relation to abortion. And I remember that Nancy Reagan, based on Ronald Reagan’s Alzheimer’s disease, came out in support of stem cell research.

Well, why is that so important for Alzheimer’s? It turns out that nerve cells are “amitotic”: they don’t undergo mitosis. And they don’t even have the machinery to do so — they are missing the centrioles that would begin the process of pulling a single cell into two cells. So when your neurons get damaged, whether as part of a degenerative disease such as Alzheimer’s or an injury that takes away the feeling in your hand, the nerves don’t grow back. This is different from when you cut your finger and get new skin, for example.

Stem cells are important because they can divide, multiply and turn into new nerve cells. That fact connects into my previous political interest and knowledge.

So these two examples are perfect for me, but the idea would be that there would be a dozen other examples that might not be, and that might be perfect for somebody else. I just look through them as a student until I find the one that really sticks.

There’s little things here we could borrow from Quora and other Q&A sites. Real names. Micro-biographies that attempt to answer the question “Why should you trust me?” View counts to remind contributors of how much their answer has helped students or been reused by professors in their courses.

Instructor/Author View

Instructor/Author view would be a bit different. Here’s how it might work.

The instructor gets the book with these links in it. They click through to the link and look at the examples. But because they have an account, if they don’t like the examples or have a better one, they are asked to contribute it:

[Image: mockup of the prompt asking the instructor to add an answer]

If you want to add your answer, you click there and add it.

[Image: mockup of the answer-writing form]

This starts to get instructors and students contributing to open textbooks without having to edit the main narrative of the text. In the main narrative of a text, many voices can sound incoherent; with Prism, the many voices and perspectives become a strength. And with the process of supplementing the text made this easy, we might finally begin to up the rate at which profs extend and remix OER.

Sustainability Model?

Sustainability with projects that are about fostering a commons is always hard. They don’t call it “The Tragedy of the Commons” for nothing. So what I propose here is meant to be a pass at how this might be sustained, and I’m hoping people might have better ideas as well.

To my way of thinking, the instructor piece gives us a possible business model. All text on Prism would be CC-BY. Use of Prism would be free.

But instructors might want to write answers that they don’t share with the general public, but just with their class. Or they might want to customize the answers their students see. Or perhaps they’d want to see the answers that their particular students found most helpful. A small per class charge for these features, even if it was paid for by only a fraction of those using it, could finance the operation of the central site, just as those on GitHub wishing for private repositories subsidize the activities of those using open repositories.

You’d probably also have to put some poison pills in the architecture, functionality, or governance of the site so that you’d be sure the company or foundation running it would not pull a Flat World Knowledge on everybody. But if you could make sure that the content on it would always be free and forkable, it could eventually also become the OER protoplasm site I dream of, where you say “What are the questions we need to answer with content in this course?”, upload them to the site, and watch as hundreds of teachers and students from across the country slowly build your course for you.

Failing that?

You could try to do this through a consortium/association, such as AASCU or AAU. I haven’t seen this  sort of thing work yet, but one could try.

Another approach might be for a university to build and maintain this, with enough grant funding or consortial support that people would believe that the project would be safe from the vicissitudes of university budgeting.

Finally, why not a state government in the U.S. (or a country or province elsewhere)? There are actually a number of funded OER initiatives that a service like this could be attached to. There’s the freeloader problem, but it’s probably far cheaper to open your doors and let the world write your textbooks than to fund them yourself. In a recent study at WSU we found that eliminating textbooks from our top 7 freshman-enrolled classes would save our students something like $1 million a semester. Across all of Washington, of course, the savings would be much higher. Surely even a sliver of this money could support such a site?

But for now, we don’t even need a huge plan or a huge site. We need a relatively simple site, plugged into existing OER, and managed by a government, non-profit or ethical corporation, just to see if this can work. Can we all build this please?


Superpowers Take Time


So I’ve been doing this Wikity thing for a while now. I use it as a personal learning environment.

When I learn something new, I try to capture it and connect it. This usually comes in stages. First, I’ll just capture some text, usually with the Wik-it bookmark (but sometimes with “Share to WordPress” when I’m on my phone).

Here’s something I was reading at lunch about salt, arguing that low salt diets were as bad for you as high salt diets.

[Image: the captured excerpt from the salt article]

So I think of a name. The name is just a handle, like a variable name for an idea. Importantly, it’s not the name of the article I pull it from, but the name of the idea I’m pulling. (Multiple ideas might come from a single article).

The idea here, or the pattern, is this response curve to salt. If you eat barely any salt, you have an increased risk of coronary issues. But if you eat a lot of salt, your risk increases too. Statistical patterns like this are often called “J-curves” because when they are plotted on line graphs, they often make a “leaning J”, like so.

[Image: a “leaning J” curve plot]

I call this “Salt J-Curve” and post it to Wikity.

There’s a lot to just that process, in terms of learning. I’ve found the right paragraphs (I chose ones here that are less opinionated than the conclusion), and I’ve given the idea a name. It takes maybe 30 seconds, but it’s an engaged thirty seconds.

I decide to improve the article. I add the graph and an introductory line. “Both low-salt and high-salt diets are correlated with increased mortality.”

Again, a small one-minute thing, but the process of summarizing what the excerpt says in a sentence further solidifies my understanding. I hit backspace a number of times before I get it right.

[Image: the updated “Salt J-Curve” card]

Finally, it’s connection time. As people familiar with the process know, rather than writing a judgment on the card, you try to find connections you can make to other cards.

Here’s a mockup I made of the card format a while ago: at the bottom you show connections to other cards (and explain the connection). I used to call these “references” but the point is the same. You must connect your new knowledge to your existing knowledge web.

[Image: mockup of the card format]


Anatomy of a Wikity Card: Title, Abstract, Treatment, Related Cards and Pages

So back to salt. I need some references, and I know that J-curves are also associated with alcohol consumption. A drink a night is correlated with benefit, whereas both many drinks and no drinks tend to co-occur with bad health effects. I search on alcohol, with a vague memory of a card I wrote on those curves. (UPDATE: Responding to Kate’s comment, a lot of the time I have no memory of any cards, in which case I just plug in related terms and see what comes up.)

So I search “alcohol”.

[Image: Wikity search results for “alcohol”]

Hmmm. So here’s an issue.

I’ll see if I can explain it to you here. My “Abstainer Bias” alcohol card reminds me that the J-curve in alcohol is thought (by some) to be a result of the fact that abstainers are a very different population than infrequent drinkers. If you take a population that has a drink a week and one that has a drink a night, they are going to be roughly comparable populations. But _zero_ drinks, now that’s a special number. People doing zero drinks in our culture are generally doing zero drinks for a reason. It might be religious reasons. It might be health reasons. It might be they’re a former alcoholic. It might be that at an advanced age, they don’t handle it well any more.

So that curve in alcohol is likely not a cause-effect curve telling you to drink “just the right amount of alcohol” to get in the alcohol Goldilocks zone, it’s probably a normal dose-response curve where each bit of more alcohol = just slightly more death, no matter how much you drink. We know this, because if we take out people that abstain from alcohol entirely, the J-curve goes away.

This happens in my head in about three seconds, by the way. “Oh, right, abstainer bias!”

So if I want to make a link from Salt J-Curve to Abstainer Bias, what would it say? Can salt have an abstainer bias too? Let’s look at the chart again.

[Image: the salt chart again]

This is just a guess, or the fragment of an idea, but where would you expect people with high blood pressure to be? Well, I’d expect them to be in two places on this graph. I’d expect the people with very high sodium intake to include a lot of cases, based on cause and effect.


But I’d also expect a lot of people with heart conditions and high blood pressure to be abstaining from salt, and would assume they cluster at the bottom.

So I write up a link at the bottom. “The J-curve here may be just another example of [[Abstainer Bias]]” and link to the card.
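
Put together, the finished card has all the parts from the anatomy mockup above: title, one-sentence abstract, the excerpted treatment, and related-card links at the bottom. Roughly like this (a sketch of the shape, not Wikity’s literal storage format):

```
Salt J-Curve

Both low-salt and high-salt diets are correlated with increased mortality.

[excerpted paragraphs from the salt article, plus the J-curve graph]

Related: The J-curve here may be just another example of [[Abstainer Bias]].
```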

This process, beginning to end, takes about 3-5 minutes. I’ve done it hundreds of times since November, and now have a library of stuff which produces neat connections about half the time I use it. It took a long time to get here, a lot of work, but I am not kidding when I say it’s a superpower. Or as I said to David Wiley a while back, “My main pitch for this thing is this — it’s made me smarter. A *lot* smarter.”

It does that by forcing me to suspend my reaction to things until I’ve summarized them and connected them to previous knowledge. It forces me to confront contradictions between new knowledge and previous knowledge, and see unexpected parallels across multiple domains. It forces me to constantly review, rehearse, revise, and update old knowledge.

What do other social media solutions do? They allow you to comment on it, to share it. They ask you to react immediately, preferably with a quick opinion. They push you to always look at the new — never connect or revisit the old. They treat your reaction — your feelings about the thing — as the center of your media universe.

Can any of this be good for learning? For empathy? For innovation?

Of course, doing it this Wikity way takes time. The more you put in your library, the more useful it gets, and the more it feels — honestly — like a superpower. But I don’t know how to market that to a culture used to gratification on day one, I really don’t. And I don’t know how to explain the benefits of a product that generates insights that are complex, not simple. It’s a puzzle.

What I do know is that it continually teaches me surprising things, and forces me to question my judgment. As long as it’s doing that, I guess I have to keep trying to explain it.