How My “Disarm Hate” Slogan Went Viral (A Lesson in Network Communities and Networks, Part I)

Recently, a slogan I created went viral. Since my area of (day job) expertise is how we use networks to learn and collaborate, I thought I might talk about how that happened, and what its path to fame can tell us about information flow in networks.

Today, I want to just set the story up. Tomorrow I will discuss how what happened undermines most people’s “common sense” understanding of networks, but is supported by current and older research on networked behavior.

The Avatar

When I woke up to the news of the Orlando shooting a little over a week ago, I was horrified, like most people, at what had happened. But I have also been involved in online activism long enough to know that the “meaning” of an event often gets hashed out in places like Twitter in the hours after it. I thought I could make a difference there.

I decided I would make an avatar to express that the main story here was not about ISIS. It seemed to me that there was one set of interpretations floating out there that would end up at “ban Muslims” and another set that wouldn’t. And I desperately didn’t want Islamic terrorism to become the primary frame through which we viewed Orlando.

Thinking Things Through: The Two Community Problem

I started out by taking a transparent rainbow-flag overlay and putting it over my current avatar, a picture of my face, to express solidarity with our LGBTQ folks (including my oldest daughter). Except that didn’t work, because it neglected the gun violence message, casting this solely as an event of hatred and hiding the fact that this was also yet another in a chain of high-capacity-magazine assault weapon tragedies.

I thought back to Charleston, the Dylann Roof shooting a year ago. Back then, people casting it as a result of white supremacy (which it was) claimed that the gun control folks were minimizing their message by portraying it as “just” another gun tragedy. People in the gun control movement were upset that they were supposed to stay silent on this issue so that the racism issue could be highlighted, even though the gun control movement relies on making the most of small windows of public outrage after an event like this.

Every image and phrase I came up with had tripwires. I could put “Not One More” or “Stop the Violence” on top of a rainbow flag, but that seemed to equate one of the worst hate crimes in modern American history with school shootings and San Bernardino. I made one that said “Stop the Hate”, but now I was ignoring the gun control issue. It started to feel like I was just going to have to pick a side. It was Charleston all over again.

Then about 90 minutes into brainstorming avatars, it hit me. I had just made a “Stop the Hate” one when I thought of a small tweak that pulled it all together. I showed it to my wife, Nicole.

[Image: the draft “Disarm Hate” avatar]

“Disarm Hate,” I said. “Is it too cutesy?”

Nicole thought a second. “No, I like it. It says a bunch of things in two short words. It’s clever, but it’s not cute.”

I uploaded it to my Twitter profile, which borked its readability a bit. I spent another hour fiddling with font sizes, drop shadows, and letter positioning, uploaded the finished version, and tweeted that other people should steal it and make it their avatar.

[Image: the tweet inviting people to take the avatar, with early adopters’ avatars shown below it]

As you can see from the avatars at the bottom, some people came by almost immediately and grabbed it for their own. (Special thanks to Amy Collier in particular, who picked it up and made it her avatar two minutes after I tweeted this.)

People started grabbing the avatar from other people, and tweeting the hashtag #DisarmHate. It felt small, but it still felt good.

Seven Days Later

My daughter was at the Portland gay pride parade yesterday, and pictures she was tagged in started coming back to Nicole’s Facebook account (I’m not on Facebook). Suddenly Nicole pauses.

“Hey, Mike,” she says, “What was your avatar slogan again?”

“Disarm Hate?”

“I think someone has it on the back of their shirt.”

She showed me the photograph:

[Image: a photo from the parade, with “Disarm Hate” visible on the back of a shirt]

One of the pictures Katie’s ex-girlfriend took in Portland.


Wait, what? It’s not exactly my avatar, but boy is that familiar…

I went onto Twitter and typed #DisarmHate.

[Image: Twitter search results for #DisarmHate]

And from today…

[Image: a tweet from that day using #DisarmHate]

I looked at the news:

[Image: news search results for “Disarm Hate”]

By the way, here’s the news coverage up to the day I coined it. The phrase (as a phrase, not a random collision of words) simply doesn’t exist in the media:

[Image: news search results for “Disarm Hate” before that date, showing no matches]

So what the heck happened? How did my tagline go viral? How’d a slogan invented on Sunday become the term of choice for a movement by Wednesday?

It’s probably not what you think.

Tomorrow I’ll talk about how the “organic” model that most people assume creates virality is a load of bunk. (Short summary: the phrase got a huge boost from people whose job it is to find these sorts of things and promote them through the network; “organic” is just another name for a well-oiled and well-funded advocacy machine.)


Is “The Web As a Tool For Thought” a Gating Item?

In instructional design “gating items” are items on tests which, if not answered or performed correctly, cause failure of the test as a whole.

As a simple example, imagine a driving test that starts in a parking lot with the car parked. The driving test has a lot of elements — stopping at stop signs, adjusting mirrors, smooth braking, highway merging, etc. These are all important, and rated by weighted points.

But none of these can be tested unless the student driver can release the emergency brake, place the car in reverse, and back out of the initial parking spot. That part of the test may be worth 10% of it, but it forms the gateway to the majority of the test, and if you don’t make it through it, you’re toast.
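In scoring terms, a gating item looks something like this. A minimal sketch, with hypothetical item names and weights:

```python
# A gating item inside a weighted rubric: failing the gate fails the test,
# no matter how many weighted points the other items would earn.
# Item names and weights are hypothetical, for illustration only.

ITEMS = {
    "back_out_of_spot": 0.10,  # the gating item
    "stop_signs": 0.30,
    "mirrors": 0.10,
    "smooth_braking": 0.20,
    "highway_merge": 0.30,
}
GATES = {"back_out_of_spot"}

def score(results):
    """results maps each item to True (pass) or False (fail)."""
    if any(not results[gate] for gate in GATES):
        return 0.0  # stuck in the parking lot
    return sum(weight for item, weight in ITEMS.items() if results[item])

# A driver who aces everything but can't back out of the spot scores zero:
print(score({"back_out_of_spot": False, "stop_signs": True, "mirrors": True,
             "smooth_braking": True, "highway_merge": True}))  # 0.0
```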

I’ve been thinking about gating items in relation to my work on Wikity. There are a lot of ideas in Wikity that people don’t get, and they don’t seem to me to be hierarchical for the most part. This isn’t a “you have to learn about averages before you learn about standard deviation” sort of problem. But I’m starting to think that there may be a gating item that is keeping us stuck in the parking lot.

What Wikity Is, in My Mind at Least

Let’s start by talking about what Wikity is, at least in my view.

Wikity currently encompasses a lot of ideas that run counter to our current web narrative. In all cases, it’s not meant to supplant current sorts of activity, but maybe to pull the pendulum back into a better balance. Here are some of those ideas:

  • Federation, not centralization. Wikity allows, through the magic of forking and portable revision histories, a way for people to work on texts and hypertexts across a network of individually owned sites.
  • Tool for thinking, not expression. Wikity is meant as a way to make you smarter, more empathetic, more aware of complexity and connection. You put stuff on your site not to express something, but because it’s “useful to think with”. By getting away from expression you also get away from the blinders (and occasional ugliness) that come with being in persuasive mode.
  • Garden, not Stream. The web today is full of disposable speech acts that are not maintained, enriched, or returned to: tweets, Facebook posts, contextually dependent blog posts. Consequently, entering new conversations feels like sifting through the Nixon tapes. Wikity aims to follow the Wiki Way in promoting the act of gardening — of maintaining a database of our current state of knowledge rather than a record of past conversations.
  • Linked ideas and data, not documents. Things like social bookmarking tools, Evernote, Refme, and Hypothes.is act as annotation layers for documents. But the biggest gains in insight come when we deconstruct documents into their component pieces and allow their reassembly into new understandings. Our fetish for documents (and summaries, replies, and annotations of documents) welds important insights and data into a single context. Wikity doesn’t encourage you to annotate documents — it encourages you to mine them and remix them.
  • Connected Copies, not Copies or Links by Reference. We generally have had two ways of thinking about passing value (e.g. text, media, algorithms, calendar data, whatever): we can pass by value (make a copy) or by reference (point to a copy maintained elsewhere). We’ve often defaulted to links by reference because of their strengths, but as web URLs deteriorate at ever faster rates, a hybrid mode can solve some of our problems. Connected copies learn from GitHub and federated wiki: they are locally owned copies that know about and point to other copies, allowing for a combination of local control and network effects (see the sketch after this list).
  • A Chorus, not a Collaborative Solo. We tend to think of collaboration as, at its best, many people tending toward one output. Collaborative software generally follows this model, allowing deviations, forks, track changes, and the like, but keeping the root assumption that most deviations will either die or be adopted into the whole. For some things this makes sense, but for others an endless proliferation of variations and different takes is a net positive. Wikity tries to harness this power of proliferation over consolidation.
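To make the connected copies idea concrete, here is a minimal sketch. The field and function names are hypothetical, not Wikity’s actual implementation:

```python
# A "connected copy": locally owned, but aware of its provenance and peers.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConnectedCopy:
    content: str
    url: str                    # where this copy lives
    forked_from: str = ""       # URL it was forked from, if any
    known_copies: List[str] = field(default_factory=list)  # other copies of the idea
    history: List[str] = field(default_factory=list)       # portable revision trail

def fork(source: ConnectedCopy, new_url: str) -> ConnectedCopy:
    """Make a locally owned copy that still points back at its source."""
    return ConnectedCopy(
        content=source.content,
        url=new_url,
        forked_from=source.url,
        known_copies=[source.url] + source.known_copies,
        history=source.history + [source.url],
    )

original = ConnectedCopy("Garden, not Stream.", "https://alice.example/garden")
mine = fork(original, "https://bob.example/garden")
print(mine.forked_from)  # https://alice.example/garden
```

Because each copy carries its history with it, a copy can survive the death of its source URL while still pointing readers at every other copy it knows about.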

These ideas aren’t mine. They are pulled from giants of our field, people like Ward Cunningham, Jon Udell, Ted Nelson, Vannevar Bush, and Doug Engelbart.

But while they are my entry points into this, most don’t seem to be great entry points for others. They form, for most people, a confusing collection of unrelated and undesired (or only faintly desired) things.

This is sad, because using Wikity and Federated Wiki has been life-changing for me, giving me a vision of a web that could more effectively deliver on its goal of augmenting human intellect and understanding by rethinking what web writing looks and acts like.

The Web As a Tool for Thought, Not (Just) Conversation

What I’ve come to realize is that while “the web as a tool for thinking, not expression” is not foundational to the other concepts in the usual sense, it acts as a bit of a gate to getting their relevance. If the web is (just) conversation and collaboration, then

  • Why would you want copies of other people’s stuff on your site?
  • Why would you care about the chorus? (If it happens great, but your job is your solo, right?)
  • Why would you post ideas and data that are not embedded in (and welded to) the argument you wish to make and presumably win?
  • Why would you manage and update past speech acts to be less context-driven (Garden) when you could just make new speech acts for a new context (Stream)?

I think you can probably talk about federation and copies and linked data separately, but it’s difficult to get to those parts of the conversation if the vision of the web is “how do we talk and share things with one another” instead of “how can this machine and network make me smarter and more empathetic?”

Conversation is one way that can happen. But there are so many other important ways to use networked knowledge to think and feel that aren’t “I’ll say this and then you say that”. In fact, I’d argue that the web at full scale is not particularly *good* at conversation, and our over-reliance on “My site/feed/comment is my voice” as a metaphor is behind a lot of the nastiness we get into.

And as I think about it, it’s not just Wikity/Federated Wiki that struggles with this. Hypothes.is is an annotation platform that could alter the conversational paradigm, but what I see people using it as (mostly) is a form of targeted commenting. In this case, understanding the web as a tool for expression is not gating the adoption of the tool, but may be gating people using it to its full potential.

Jon Udell has recently started to push users towards a new understanding of annotation as something other than comments. And what he says, I think, is interesting:

Annotation looks like a new way to comment on web pages. “It’s like Medium,” I sometimes explain, “you highlight the passage you’re talking about, you write a comment about it, the comment anchors to the passage and displays to its right.” I need to stop saying that, though, because it’s wrong in two ways.

First, annotation isn’t new. In 1968 Doug Engelbart showed a hypertext system that could link to regions within documents. In 1993, NCSA Mosaic implemented the first in a long lineage of modern annotation tools. We pretend that tech innovation races along at breakneck speed. But sometimes it sputters until conditions are right.

Second, annotation isn’t only a form of online discussion. Yes, we can converse more effectively when we refer to selected passages. Yes, such conversation is easier to discover and join when we can link directly to a context that includes the passage and its anchored conversation. But I want to draw attention to a very different use of annotation.

Jon’s absolutely right — it’s really tempting to try to approach annotation as commenting, because that’s a behavior users understand. But the problem is that it’s a gating item — you can’t get to what the tool really is unless you can overcome that initial conception of the web as a self-expression engine. Otherwise you’re just a low-rent Medium.

The first, biggest, and most important step is to get people to think of the web as something bigger than just conversation or expression. Once we do that, the reasons why things like annotation layers, linked data, and federated wiki make sense will become clear.

Until then, we’ll stay stuck in the DMV parking lot.


Stereotype Threat and Police Recruitment

From an interview on the World Economic Forum site (which is surprisingly good), a description of how a small change to an invitation email increased pass rates on a police recruitment exam:

Small, contextual factors can have impacts on people’s performance. In this particular case, there is literature to suggest that exams for particular groups might be seen as a situation where they are less likely to perform at their best. We ran a trial where there was a control group that got the usual email, which was sort of, “Dear Cade, you’ve passed through to the next stage of the recruitment process. We would like you to take this test please. Click here.” Then for another randomly selected group of people, we sent them the same email but changed two things. We made the email slightly friendlier in its tone and added a line that said, “Take two minutes before you do the test to have a think about what becoming a police officer might mean to you and your community.” This was about bringing in this concept of you are part of this process, you are part of the community and becoming a police officer is part of that — trying to break down the barrier that they are not of the police force because it doesn’t look like them.

….

Interestingly, if you were a white candidate, the email had no impact on your pass rate. Sixty percent of white candidates were likely to pass in either condition. But interestingly, it basically closed the attainment gap between white and nonwhite candidates. It increased by 50% their chance of passing that test, just by adding that line and changing the email format. That was an early piece of work that reminded us of the thousands of ways that we could be re-thinking recruitment practices to influence the kind of social outcomes we care about.
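A quick back-of-the-envelope check on those figures, since the interview doesn’t state the nonwhite baseline directly: if white candidates pass at 60% in both conditions, and a 50% relative increase closed the attainment gap entirely, the implied nonwhite baseline is about 40%, since 40% × 1.5 = 60%.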

There’s a lot to take away from this. The finding they have applied here originally comes from educational research, and the obvious and most important parallel is in how we approach our students in higher education. How often do we provide the sort of positive and supportive environment our at-risk students need?

The larger pattern I see here with design is just how much small things matter. There’s a reason why no major extant community uses out-of-the-box software. If you’re Reddit, Facebook, Instagram, Twitter, etc. and you want to encourage participation, or minimize trolling, or reduce hate speech you have to have control of the end-to-end experience. Labeling something a “like” will produce one sort of community, and labeling it “favorite” will produce another.

We get hung up on “ease of use” in software, as if that were the only dimension on which to judge it. But social software architectures must be judged not on ease of use, but on the communities and behaviors they create, from the invite email to the labels on the buttons. If one sentence can make this much difference, imagine what damage your UI choices might be doing to your community.

BTW — I write a lot of stuff like this over the course of the day as I process things on Wikity (though it’s usually shorter). It’s all there, and you might find something interesting. I post this here because it is just too important to leave on my unread wiki, but it’s only on the wiki that you’ll see the connection to the Analytics of Empathy or Reducing Abuse on Twitter.

Predicting the Future

I’m a person who generally doesn’t spend much time predicting the future. I’m more comfortable trying to imagine the possible futures I find desirable, and that’s mostly what I do on this blog: talk about the futures we should strive for.

But two and a half years ago, at the encouragement of the folks at e-Literate, and with the world just coming out of its xMOOC binge, I made some predictions for that site about the future of edtech. I decided to put aside my ten-year visions of the desirable and just straight up predict what would actually happen.

I spent about a week thinking through all the stuff I talk about and trying to be brutally honest with myself about the future of each item. I literally had a pad where I crossed out most of my most beloved futures. Most things I loved were revealed to be untenable in the short term, due to the structure of the market, the required skills, cultural mismatches, or the lack of a business model.

It was immensely painful. Still, when I was done, a few things survived. They weren’t like most people’s predictions of the time, and in fact ran against most of the major narratives in play as of December 2013.

Here were the predictions. I made three firm predictions under the title “Good Opportunities That Will Be Taken Seriously by the Powers That Be”. I’ll put aside one of them, “Local Online”, since as I noted even at the time it was a bit of a cheat: local online was a transition that had already happened; it was just that no one had noticed.

I’ll deal first with my two other major predictions, which ran counter to the narratives of the time.

  • At a time when asynchronous learning was king, I predicted the rebirth of synchronous learning.
  • At a time when Big Data was the rage, I predicted the rise of Small Data.

How’d I do?

Synchronous Online

In a time when the focus was on asynchronous and self-paced learning, I predicted a renaissance of synchronous learning:

Synchronous online is largely dismissed — the sexy stuff is all in programmed, individuated learning these days, and individuated is culturally identified with asynchronous. That’s a mistake.

I went on to describe how the emergence of new tools based on APIs like WebRTC would make possible the merging of traditional synchronous learning sessions with active learning pedagogies, and how this would result in a fast-growing market, as it would address the needs of a huge existing population of students currently underserved. I compared the market for videoconferencing products to where the market was for the LMS on the eve of Canvas entering it: people believed the LMS wars were over, but in fact they had just begun, because Blackboard had treated the LMS as a corporate tool rather than an educational one:

Adobe Connect and Blackboard Collaborate are, I think, in a similar place. They are perfect tools for sales presentations, but they remain education-illiterate products. They don’t help structure interaction in helpful ways. I sincerely doubt that either product has ever asked for the input of experts on classroom discussion on how net affordances might be used to produce better educational interaction, and I doubt there’s all that much more teacher input into the products either. The first people to bother to talk to experts like Stephen Brookfield on what makes good discussion work *pedagogically* and implement net-based features based on that input are going to have a different pitch, a redefined market, and the potential to make a lot of money. For this reason, I suspect we’ll see increasing entrants into this space and increasing prominence of their offerings.

Suggested tag line: “We built research-driven video conferencing built for educators, and that is sadly revolutionary.”

I don’t know if you can remember how unpopular synchronous was in January 2014, but contemporary takes on it ranked it somewhere between Nickelback and Creed as far as coolness.

So where are we today? Well, WebRTC is propelling a billion-dollar industry. Blackboard Collaborate got its first refresh in a decade in 2015 (based on a WebRTC purchase they made in November 2014). Minerva, the alt-education darling, released its platform later that year, which was based on synchronous video learning.

And today, we find an extended article in the Chronicle about the surprising new trend in online education: the rebirth of synchronous learning. The reasons for it?

What’s giving rise to the renewed interest in more-formalized synchronous courses is that the technology for “high-touch experiences” in real time is getting more sophisticated, says Karen L. Pedersen, chief knowledge officer at the Online Learning Consortium, a nonprofit training and education group. Institutions are catching up to their professors, and tools are now widely available that let professors share whiteboards simultaneously or collect comments and on-the-spot poll results in real time.

The article goes on to explain that it is the recent pairing of traditional synchronous classes with active learning, made possible by newer tools, that makes the difference.

I have some ambivalence about where this will go. As mentioned in the intro to this post, these were predictions, not my top desired futures. Opportunities. And opportunities can be perverted. But this was surprisingly on target.

Small Data

At the height of Big Data madness, I predicted the rise of small data products:

Big Data is data so big you can’t fit it in a commercial database. Small Data is the amount of data so small you can fit it in a spreadsheet. Big Data may indeed be the future. But Small Data is the revolution in progress.

Why? Because the two people most able to affect education in any given scenario are the student and the teacher. And the information they need to make decisions has to be grokable to them, and fit with their understanding of the universe.

Small Data was a relatively new term at the time the prediction was made. The Wikipedia page for the term was actually birthed on January 2, 2014, about the same time I was writing the post, and looking back now I see only a smattering of uses of the term in 2013. I was at the time reading the wonderful critiques of Big Data by writers like Michael Feldstein and Audrey Watters and thinking through the question: if not “Big Data”, then what?

Then in the spring of 2013 I saw a presentation by the local school district on their use of data. The head of their data operation said the most useful data for them had been the “One-F” test. They would compile the grades of students across all their classes and look for students who had an F in one subject but A’s and B’s in the others. Then they’d go to the student and say: look, you obviously can do the work in other classes, what’s happening here? And they’d go to the teacher and say: hey, did you know this student is an A student in their other classes? What is going wrong in this class?

And the reason why it worked, they said, was you could talk about standard deviations or point-biserial correlations all day, but it would never make sense to the people whose actions had to change. But people could understand the “One-F” metric. It wasn’t a p-valued research finding: instead it was a clue, understandable by both teacher and student, that something needed investigating and a bit of guidance on where the problem might be, and how to address it. And that — not research level precision on average behavior — was where the value was.

And so it was really Lisa Greseth, the IT head of Vancouver Public Schools at the time, who showed me the way on this. “Small Data” seemed to encompass this idea — it was theory-informed data collection. It was data as a clue for action. And most importantly, it was data that is meant to be understood, in its raw form, by the students and teachers involved.
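The One-F check is simple enough to sketch in a few lines. A minimal version, with hypothetical data shapes (a real district would work from its own grade exports):

```python
# The "One-F" check: flag students with an F in one class but A's and B's
# in all their others. Data shapes here are hypothetical.

def one_f_students(grades):
    """grades maps student -> {class name: letter grade}.
    Returns {student: the one class with an F} for students who qualify."""
    flagged = {}
    for student, classes in grades.items():
        fs = [c for c, g in classes.items() if g == "F"]
        others_ok = all(g in ("A", "B") for c, g in classes.items() if c not in fs)
        if len(fs) == 1 and others_ok:
            flagged[student] = fs[0]
    return flagged

grades = {
    "Jordan": {"Math": "A", "English": "B", "History": "F"},
    "Riley":  {"Math": "C", "English": "D", "History": "F"},
}
print(one_f_students(grades))  # {'Jordan': 'History'}; Riley struggles across the board
```

No standard deviations, no p-values: just a query a teacher or student can read back in plain language.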

How’d this prediction go? Pretty well. In the two and a half years since, there’s been an explosion of interest in small data. Here are the first eight results from a Google search on “small data education”:

The Washington Post. May 9, 2016: ‘Big data’ was supposed to fix education. It didn’t. It’s time for ‘small data’

EdWeek. May 16, 2016: Can Small Data Improve K-12 Education? –

InformationWeek. Nov 24, 2015 – McGraw-Hill Education’s chief digital officer has driven the company’s effort to leverage small data to improve student outcomes.

Helix Education. Oct 22, 2015: Big and Small Data are Key to Retention

Portland Community College. Mar 9, 2015: Distance Education: Using small data

Pando Daily. March 9, 2014: The power of small data in education

Center for Digital Education. Jul 1, 2015: 7 Benefits of Using Small Data In K-12 Schools

Times Higher Education Journal. Jul 1, 2015: The Power of Small Data

The prediction, of course, was about the introduction of “small data products”, and there’s been growth there too. McGraw-Hill, for example, is pushing a small-data focus in its Connect Insight series. In many ways, this is a return to a data focus that existed before Big Data madness, a focus on small, teacher-grokable data points collected for a specific purpose. And though McGraw-Hill calls it “Small Data” explicitly, it is the direction that most products seem to be re-exploring after the crash of Big Data hype.

By the way, I still believe Big Data has a place, applied to the right problems. It just wasn’t the place people were predicting two and a half years ago. Maybe I’ll save thoughts about that for a future prediction post.

Other Predictions

I had a category for things that I thought would develop but mostly remain under the radar, not seeing broad institutional adoption. I put the return of Education 2.0 (blogs, wikis, etc.) in there, as well as “privacy products”. I think I was more or less right on those issues. In Education 2.0 we’ve seen real growth, particularly with Reclaim Hosting’s efforts, but it’s still off the institutional mainstream for the moment. On privacy products there has been less development than even I thought there would be, though the recent development of the Brave browser and the increasing use of personal VPNs provide some useful data points.

I did make the brave, and completely wrong, prediction that Facebook had peaked, thinking that many of its features could be supplanted by OS-level notification systems. Looking back on this prediction I learned something about making predictions: don’t make predictions about things you don’t use, at least not without observing closely how other people use them. My use of Facebook at that time was limited to a quasi-monthly visit.

So, lesson learned there? In the time since, as I’ve worked on Wikity and Federated Wiki, I’ve come to a greater understanding of what Facebook provides people, almost invisibly. And I have to say, paired with my prediction from 2014, it has really demonstrated to me that a lot of what people build to “replace Facebook” (including things I build) doesn’t really replace what Facebook provides. If you look at Facebook and the rise of Slack, you start to realize that maybe centralized control of these platforms is key to the sorts of experiences people crave. It may be that you can’t make a federated Facebook any more than you can make an alcohol-free bar.

I’m not saying that many things can’t be federated. But I have a new appreciation for why they aren’t. (And, as expected, it’s probably this failure of prediction that is most useful to me at this point).

Anti-Predictions

Finally I made some anti-predictions about hyped trends of the time that I believed would go nowhere. Here I predicted that Gamification and Mobile Learning would crash and burn.

I turned out to be largely correct. Gamification seems to be entering its death throes, as it is really just rehashed behaviorism, with the dubious distinction of being even less self-reflective than behaviorism. (The “good” parts of “gamification” are really just learning theory — scaffolding, feedback, and spiral designs come from Vygotsky, Bruner, and others, not Atari).

More interestingly, my prediction about mobile came out more correct than I imagined. As predicted, we’ve gone through the iPad optimism of 2013 and 2014 to find that, unsurprisingly, learning and creating are not really mobile endeavors. Deep learning, it turns out, tends to be an activity that benefits from focused time and rich input systems. (We tried to tell you). So as we watch the iPad one-to-one programs crash and burn, let me revise my previous claim that Education Analysts Have Predicted Seven of the Last Zero Mobile Revolutions.

They’ve now predicted eight of them.

Conclusion

I don’t know. I feel like this is a pretty good record. The Facebook prediction was arrogant and misplaced. I am seriously contemplating that error at this point, hoping for some insight there.

Most of the rest of the predictions were arrogant as well, but came true anyway.

What was behind the right and wrong predictions? There’s no overall trend, but the Facebook failure is instructive when put next to the other predictions.

The key in all these things is to try to truly understand where the value in the current system is, as well as what the true pain points are. And then the trick is to imagine technological solutions that address the true pain points without taking away the existing value of the system.

  • Synchronous Online manages to preserve valuable elements of synchronous learning while addressing its main problem: feelings of isolation and disengagement.
  • Small Data builds on the strengths of a system built on the intuitions of the teacher, instead of the data analyst, and works backwards from their needs as a consumer of data.

Things that don’t take off tend to misunderstand central features as flaws. The iPad misunderstood the rich input systems of the laptop as a hindrance rather than a benefit. And its “benefit” of being a “personal” device didn’t map to a classroom where devices weren’t personal, but constantly swapped between students and classes.

Likewise, the centralization of Facebook turns out to be one of its great features: people are actually craving more filters, not fewer, for the information they consume, and they’d prefer to stay in a standard environment rather than venture out onto the web for most things. Plus, in the two and a half years since I wrote this, we’ve seen what has happened to the notifications panel on phones: it’s a tragedy of the commons. With every app now pushing messages into the notifications panel, I can’t go to it without finding it littered with thirty or forty ridiculously mundane “updates” from eighteen different apps, all clamoring for my attention. Facebook’s centralized, privatized ownership of its newsfeed allows it to reduce noise in a way that federated systems have trouble doing.

The biggest blindspot tends to be our own experience. I was able to see the mobile mismatch, because it matches my own experience as a learner. I couldn’t see the strength of Facebook because I don’t *want* the world to like the things about Facebook that it so obviously likes, and I never should have predicted anything about it until I understood its present value to people.

On a personal note, going back through this reminds me that I should probably try to predict more. My tendency is towards futurism, unfettered by reality, and I remember how painful the process of trying to truly predict things was. But the truth is, if you can dredge up some ruthless honesty, you can see what the likely routes forward are. That’s not quite as fun as advocating for what should be, but it’s probably a useful skill to develop.

Plans vs. Planning


Dan Meyer quotes Eisenhower: “In preparing for battle I have always found that plans are useless, but planning is indispensable.”

It’s likely that Eisenhower said the lines above, but it was actually Richard Nixon who reported them. Nixon, in “Crisis 4” of his Six Crises, writes about his Kitchen Debate with Soviet leader Nikita Khrushchev, and it’s from there alone that Ike’s saying enters the written record.

Nixon’s prose in that chapter is a bit self-congratulatory, but the point of quoting Ike is clear. Nixon did a lot of preparation before going to Russia in 1959, learning about Soviet aims, the psychology of Khrushchev himself, and the larger cultural context. He used that to put together a plan: how he would approach the leader, how he would debate him, and the points he would try to make.


Within 30 seconds of meeting Khrushchev, the plan was out the window. Khrushchev made a practice of throwing opponents off balance by being unpredictable, and when Nixon stopped by his office for a simple meet-and-greet before the debate, Khrushchev viciously dug into him about Captive Nations Week, a week recently signed into law that called for prayer for countries held captive behind the Iron Curtain. How could Nixon defend such a thing?

Nixon makes the point that, while the plan itself was knocked off balance by this unexpected offensive, the process of planning allowed him to adjust his approach fluidly. He had learned much about Khrushchev’s character, why the Soviets were meeting with them, and what they were attempting to achieve. Knowing, for example, that Khrushchev’s working assumption was not that the U.S. wanted war, but that the U.S. was soft and would fold under aggression, allowed him to calibrate his responses correctly. Understanding the materialist underpinnings of Soviet philosophy allowed him to see the ideological frame to which he had to bend his arguments.

As Nixon elaborates:

It was obvious that no plans could have possibly been devised to cope with such unpredictable conduct. Yet without months of planning … I might have been completely dismayed and routed by his unexpected assaults.

By making a plan, you get an understanding. And it’s the understanding, not the plan, that is the prime asset, as it allows you to respond fluidly to rapidly evolving situations.

While Nixon’s parallel to battle planning is a bit overblown, it’s instructive. Imagine if Eisenhower had been handed a plan by his best strategist and then implemented it, without having been involved in its development, and without having developed all the background understanding and knowledge that one gets in making a plan. How well would that go?

Lesson Plans and Other OER

And here’s where we get to lesson plans and to issues around OER in general. The question Dan and others have been grappling with is why lesson plan reuse is low. Why, in a world full of digital resources, do teachers still construct their own material from scratch?

My guess is that the answer depends on the teacher, the subject, and the type of material being used. There are materials, for example, where the main barrier to reuse is technological. Materials where the main barriers are legal. Or where the problem is the perennially over-hyped findability gap.

But for at least a certain class of materials — lesson plans for secondary teachers — Dan seems to be coming to the conclusion that we have a bit of an Ike problem. If we get the plans without carefully thinking about what we want to achieve, what our students already know, and what problems we’re likely to encounter, we’ve lost the most important part of the process.

This is not to say that there is no place for the sharing of lesson plans, or of other OER. But it is to say that we have to approach the construction and sharing of OER understanding that there may be certain processes in course construction that we just can’t short-circuit. To the extent that we develop materials and sharing architectures for faculty, they need to make that planning process more effective, not simply bypass it.


The Textbook Duet

Our current process for provisioning courses with OER looks like this:

  1. Identify course content needs
  2. Find materials that support those needs
  3. Choose the best material for each need
  4. Pull those materials into a coherent whole

In practice, steps two and four take an awful lot of time, so many people punt and get an open textbook or a pre-assembled course.

That’s a bit of a shame, because textbooks do not provide Choral Explanations. They provide the explanation of concepts that works for the average reader. And is that what we really want?

In reality, however, the slack is picked up by the teacher. The course becomes a duet between the instructor and the textbook. When we wonder what students think they are getting out of lecture, this is maybe one of the things. They are getting the textbook concept explained to them in a slightly different way.

I’m not saying lecture is good, mind you. There’s a lot of evidence that it may be a lousy way to do this. But getting two explanations of the same concept over time has been shown to be an efficient way to increase understanding. (Robert Bjork did some work on this, though I can’t find the cite at the moment.)

What if what some students are seeking in lecture is just a different version of this? How might we think about lecture differently (and find alternatives for lecture) if this is true?


Blogging as Multi-Track

From @brackenm:

[Images: two tweets from @brackenm]

The core idea of Choral Explanations is that we benefit more from multiple parallel explanations than the “one best explanation”, and that educational materials should utilize this pattern more fully. As I’ve argued, choral explanations are how people tend to reach mastery of difficult areas, whether they are a programmer on Stack Exchange or a sommelier trying to find another route into recognizing wines.

Bracken reminds me that the chorus need not be composed of different voices, necessarily. One of the patterns of blogging is to repeatedly explain the same concepts in different ways through different examples. And one of the joys of reading blogs is that one day a post just clicks, and you get the idea someone has been trying to explain forever, and you get it in a deep and profound way.

It’s tempting to think, after you read that blog post that helps you get it, “Well, if only you had explained it this way before! It’s so much simpler than you’ve been making it!”

But that’s the wrong reaction, for two reasons. First, that explanation worked for you, but others have worked for other people. This is my point about personalization: since we all come into learning contexts with vastly different backgrounds and interests, the most important personalization provides different routes into the same concepts.

But the other reason it’s wrong is your understanding of that article that “clicked” is likely path-dependent: had you read that article without reading the others, you probably wouldn’t have gotten it. To overextend the metaphor, this is because we teach our students notes, but expect them to understand chords, and it’s often only by the interaction of multiple examples and explanations that the underlying structure of the idea becomes evident.