The Homeostatic Fallacy and Misinformation Literacy

I wrote a thing for Nieman’s year-end journalism predictions yesterday that I’m quite excited about. Hopefully it will be out soon. (Look at me, a typical writer — submitting something 48 hours late and then hoping it gets published early.)

In the article I finally publish a term I’ve been throwing around in some private conversations — the “homeostatic fallacy”.

Homeostasis is a fundamental concept of biology. The typical example is human body temperature. For humans, 98.6°F tends to be a very desirable temperature, and so what you find is that no matter where you put a human body — in the Arctic or the tropics — it finds ways to keep that temperature constant. In hot environments you sweat, and the evaporation helps cool you. Your blood vessels dilate, bringing more heat to the skin where it can be shed. In cold environments your blood vessels constrict.

There are a lot of people out there who think that misinformation is to the mind roughly what hot and cold environments are to the body. In other words, both misinformation (let’s think of it as cold) and information literacy (let’s think of it as heat) have relatively little effect, because the mind has certain homeostatic mechanisms that protect against identity threat and provide resistance to new ideas. And there’s a certain truth to this. Knowing the facts about nuclear power won’t suddenly make you a supporter, and learning the history of Reconstruction won’t turn you woke.

And as such, we hear a lot of critiques of media literacy of the sort that says you can’t change people’s minds by giving them better information. Homeostatic arguments sometimes go even further, claiming that bad information probably isn’t leading to bad decisions anyway.

Often this is posed as a counter to a naive Cartesian view of belief, one that treats our decision-making processes as scientific. But weirdly, both the homeostatic view of misinformation and the Cartesian one suffer from the same transactional blind spot. On a case-by-case basis — show a person a fact, check if it changes them — homeostatic mechanisms often prevail. Your mind has certain psychological set-points and will often fight to keep them constant, even in the face of disinformation or massive scientific evidence.

But the fallacy is assuming that the set-point itself will not change.

People often use temperature as an example of a homeostatic set-point, but I think another example is far more educational. Consider weight.

Your body has a natural weight it gravitates towards, and the power of that set-point can’t be overstated. Think about this astonishing fact: one pound of weight is about 3,500 excess calories. To hold your weight to within a pound over a year, your average intake would have to stay within about 10 calories a day of maintenance levels. Few people account for calories at this level of precision, and yet many retain a stable weight year after year. That’s the power of homeostasis.
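To make that arithmetic concrete, here’s a minimal back-of-the-envelope sketch in Python (the 3,500-calories-per-pound figure is the rough rule of thumb used above, not a precise physiological constant):

```python
# Rough check on the set-point arithmetic above (rule-of-thumb numbers only).
CALORIES_PER_POUND = 3500   # approximate excess calories per pound of weight
DAYS_PER_YEAR = 365

# Average daily surplus that would add (or subtract) about one pound over a year
daily_margin = CALORIES_PER_POUND / DAYS_PER_YEAR
print(f"Staying within ~{daily_margin:.1f} calories/day of maintenance "
      f"keeps yearly weight change under a pound")
```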

How do we gain weight, then? There’s quite a lot of debate on this, actually. But, more or less, temporary behavior and adverse events, in sufficient quantities, not only increase our weight, but change our set-points. And now the homeostatic mechanisms that kept us trim work against us, to keep the pounds on.

In short time frames, a sort of psychological homeostasis is protective. We see bad info or good info and nothing changes. We share a corrosive meme and we’re still the same person. Just as a single cookie at the Christmas party will have a net zero effect on your weight through the magic of homeostasis, a single bit of disinfo isn’t going to change you.

But the goal of disinformation isn’t really these individual transactions. The goal of disinformation is, over time, to change our psychological set-points. To the researcher looking at individuals at specific points in time, the homeostasis looks protective — fire up Mechanical Turk, see what people believe, give them information or disinformation, see what changes. What you’ll find is that nothing changes — set-points are remarkably resilient.

But underneath that, from year to year, is drift. And it’s the drift that matters.
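To see how transaction-level resilience and long-run drift can coexist, here is a toy simulation. All the numbers are invented purely for illustration; it’s a cartoon of the argument, not a model of real cognition.

```python
import random

# Toy illustration: after any single exposure, belief snaps almost all the way
# back to a psychological set-point (homeostasis), but each exposure also
# nudges the set-point itself by a tiny amount (drift). All numbers invented.
set_point = 0.0
PULL_BACK = 0.9             # fraction of any single shove that gets undone
DRIFT_PER_EXPOSURE = 0.01   # residual change to the set-point per exposure

random.seed(0)
single_day_shift = 0.0
for day in range(365):
    shove = random.uniform(0.5, 1.0)              # one day's worth of disinfo
    single_day_shift = (1 - PULL_BACK) * shove    # what a one-off experiment would measure
    set_point += DRIFT_PER_EXPOSURE * shove       # what accumulates underneath

print(f"Shift visible on any single day: ~{single_day_shift:.2f}")
print(f"Set-point drift after a year:    ~{set_point:.2f}")
```

Measured one exposure at a time the effect looks negligible; measured a year later the set-point has moved.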

Recognition Is Futile (and also dangerous)

I often talk about the dangers of teaching students to “recognize” fake news. Here’s a good example from today of why recognition is a lousy strategy that can lead to bad results: a tweet proposing that the President of Nigeria has been replaced with a clone.

Here is how Peter Adams’s excellent newsletter The Sift  describes the conspiracy theory:

Speaking on Sunday to Nigerians living in Poland, where he is attending a U.N. climate conference, Nigeria’s President Muhammadu Buhari denied viral rumors — amplified by his political opponents — that he had died and that a look-alike from Sudan had taken his place.

The rumors first emerged in 2017 amid Buhari’s lengthy, unexplained disappearances from public life. They have been strengthened by misinformation and conspiracy theories shared on social media — including by Mazi Nnamdi Kanu, a political activist and leader of a separatist group.

But wait, are those the same person? Look closely…

What do you think? 

See, here’s the thing, and I really wish we’d put this at the center of what we do. If I tell you these are the same person, you’ll say, “But of course, it’s so obvious!” and point to some details. On the other hand, if I say actually I played a trick on you and swapped a picture out for another one, you’ll say… “But of course, it’s so obvious!” and point to other details.

Why is that? Two reasons. First, the informational field here is dense. A photograph has literally hundreds of things I can drill down on, and given an initial orientation towards the photo I will often find whatever I need to bolster that orientation. Second, there’s no stopping rule for declaring whether these are the same person or not. There’s no point, for example, at which you can have looked at three precise things and declare, with high certainty, that this is confirmed to be him. So instead of bringing you to more certainty, the longer you look the more likely you are to find some strange differences, and the more likely you are to become confused.

Most things aren’t quite as informationally dense as photographs, but the same problems come into play. The more features you have to look at, the more easily you can be manipulated. Don’t believe me? Here’s a cruel trick you can play on your faculty. Show them this page:

Tell them — hey, this is a suspicious medical site, how do we know? Unless they know it’s the site of one of the most prestigious journals in medicine, they’ll tell you all the reasons why it’s obviously a junk site:

  • It’s a .com, not a .org. Probably an imposter.
  • There’s literally an “Our Company” link at the top. Like, hello!?! This is not a journal!
  • “Beating sugar taxes” in that headline doesn’t sound very scholarly
  • And look at these pictures, they look like something from a clickbait site, stock photos, etc.
  • There’s a popup at the bottom that looks like an ad for some sort of pharmaceutical promotion. What a scam!

Or start by telling them the opposite — that it’s one of the top five medical journals published in English — and again they’ll come up with all the reasons why it was so obvious from the start that this was quality:

  • Good clean style
  • Lots of focus on medical practice
  • Scholarly titles on some articles
  • Mentions of the CDC
  • The Research tab and Authors tab look legit.

In a dense informational field you think you’re being Sherlock, and feel pretty smart. But the more data you look at the more you’ll get confused.

This is well known by people outside online media literacy, of course. Ever go to buy a car? The psychology of dealerships is to put complex configurations of car options in front of you — this one is $1,600 more but it has the power windows and the satellite radio is standard, it has seat warmers and leather interiors, this one is less but doesn’t have ABS and doesn’t come in white, at least on the lot. All of this stuff is fed to you deliberately to overload your cognition so that by the time you sit down and hear about their special warranties you’re putty in their hands.

If you want to survive on the internet or in a car dealership, you have to radically reduce what you look at. If seat warmers and satellite radio were not things you came in looking for, ignore them entirely. Their value to you is zero. Reduce the information you look at dramatically, and prefer things that resolve to comparable criteria (is this journal well cited compared to other journals?) over things that don’t (which page looks more professional?). Choose a couple of crucial things, and stop there if those things suggest a clear answer.
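As a loose illustration of that “few comparable criteria plus a stopping rule” idea (this is not the Four Moves method itself, and the criteria, names, and thresholds here are all invented), a sketch might look like this:

```python
# Illustrative sketch only: decide on a couple of comparable criteria up front,
# then stop as soon as they give a clear answer, rather than drowning in
# surface features. Criteria and thresholds are invented for the example.

def quick_source_check(citation_percentile: float, in_known_index: bool) -> str:
    """Return 'trust', 'distrust', or 'keep checking' from two comparable signals."""
    signals = [
        citation_percentile >= 90,   # well cited compared to peer journals?
        in_known_index,              # listed in a standard library index?
    ]

    if all(signals):
        return "trust"
    if not any(signals):
        return "distrust"
    return "keep checking"           # mixed signals: escalate, don't keep staring

# Example: a journal in the 97th citation percentile, found in a known index
print(quick_source_check(citation_percentile=97, in_known_index=True))  # -> trust
```

The point isn’t the code, of course; it’s that the decision rests on two comparable signals and an explicit point at which you stop looking.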

Our four moves approach, of course, deals with just this issue, and you can read about it here.

Empower Teachers First

Someone asked me today whether I could share any insights about OER creation. I have a few thoughts about that, but the one I always come back to is that you have to empower teachers first.

You know that thing on planes where it’s like “In case of sudden decompression, put on your own oxygen mask first. Once it’s securely fastened, help those around you put on theirs?”

That’s OER. If the teacher gets their mask on they are going to save the damn plane, and if they don’t you’re all screwed. 

There are other models, of course — every mega-MOOC wanted to do a direct-to-student play back in 2011, and the OpenCourseWare movement largely ignored teacher-facing resources for most of its history. In both cases, the lack of focus on reuse by teachers resulted in impact patterns that followed the “Matthew Effect”, with most gains going to the students who came in with privilege, knowledge, and access. Those who already had knowledge and opportunity gained more opportunity, but those who didn’t never got a foot on that first rung.

The way to help at-risk students and the way to create more diversity in professions lacking it is not to create more and better self-study materials. It is to find teachers that are already teaching the populations you want to attract or help and relentlessly focus on helping them be better and more effective teachers. 

——

I’ve written on this, alone and with Amy Collier, a bunch of times over the years. Here are some posts on it. Some of this is dated but the larger points still hold:

Why I Am Concentrating On Open Teaching Resources (2010)

Openness as a Privilege Multiplier (2011)

Why We Shouldn’t Talk MOOCs as Meritocracies (2012)

Rethinking Online Community in MOOCs Used for Blended Learning (2013, with Amy Collier)

The Tensions of Open Pedagogy

New article out in EDUCAUSE Review that outlines a possible open pedagogy framework. Here’s the key graphic:

[Figure: “Open Pedagogy” at the center of a ring of four ovals connected by four diamonds. Oval 1: Access & Equity; Diamond 1: Participation; Oval 2: Community & Connection; Diamond 2: Curiosity; Oval 3: Opportunity & Risk; Diamond 3: Responsibility; Oval 4: Agency & Ownership; Diamond 4: Empathy.]

As long-time readers of this blog know, I may be the misinformation literacy person right now, but I came here by way of thinking about open pedagogy and its intersection with digital literacy and democracy. It’s literally my 20 year passion in this space, what excited me about edtech back in 1996 and got me on this road.

The one thing I like about the language above is that it’s possible to start to tease out the tensions of open pedagogy from the model. And it’s the tensions that you have to start with, otherwise you just get religion.

As an example, Maha Bali has noted that access and equity often are in tension with agency and ownership. Self-hosted publishing systems, for example, which provide strong levels of ownership and agency, often throw up technical and economic barriers for students. There is a tension there that never resolves, but has to be seen as a set of competing ideals, reconcilable only in the scope of a given local context. 

You find similar tension between agency and community. Long ago, when we did Keene State’s media fluency outcomes, we separated out participatory modes (about personal engagement and empowerment) and collaborative modes (about working together). And there is a tension there — to act collectively is to give up some power and ownership for the sake of connection and communal action. Things like federation provide new (and exciting!) ways of dealing with these concerns, but do not eliminate the need for trade-offs.

Even Community and Equity can be at loggerheads. When we have students engage in real-world environments like Wikipedia, we are giving them ability to have real impact, but it can come at the cost of putting them into environments that are not equitable.

In terms of exploring tensions, risk and responsibility — defined as a commitment to interrogate tools and practices — is the one that doesn’t quite fit for me. To me this is a broader element that actually touches on the other realms (agency, community, equity), because these tensions are exactly what an interrogation of tools should look at. So I see it as more of a meta-orientation, connecting the others. If I were to choose a fourth element to replace it, I might look at impact, the ways in which students are given opportunities to make non-pedagogical impacts on things that matter to them, and then have interrogation cut across all the dimensions.

Overall, though, very happy to see this and hope we keep talking about it.

Cynicism, Not Gullibility, Will Kill Our Humanity

I’ve mentioned before that students come into the misinfo classes we teach more or less not trusting things. Here’s student trust in four stories they should have low trust in:

Level one there is low trust. And that’s where the students are.

Those are the dubious prompts, so that would be good if that were the end of the story. But of course they don’t trust anything fully (or distrust anything fully). Here are some stories they should trust, in blue, next to the dubious ones:

That’s right — nationally reported, well-sourced stories get equal trust levels to child sex-trafficking conspiracies broadly debunked by all reputable press and all levels of government. A level that equates to “There might be something there but I don’t really trust it.”

Why does this matter? It matters because of this:


Photo by Reuters photographer Kim Kyung-Hoon

This is a photograph of what’s going on at the border in America’s name. Maybe you agree with it, but actually most Americans don’t. That is, assuming they think the photo is trustworthy. 

But of course, a lot of people are out there saying the photo is not trustworthy:

The vast majority of students don’t engage in this level of conspiracy theorizing. But exposure to this stuff over time, combined with no instruction on how to sort trustworthy information from untrustworthy information in a networked environment, leads to the undifferentiated suspicion we see towards everything.

Gullibility can do real damage, as we see with some people who consume a daily diet of toxic fakery. But for the median citizen — and especially our youth — I am far more worried about the corrosive effects of cynicism.

Major advances in the U.S. — from civil rights to the social safety net — have been driven by the public being confronted with the consequences of their inaction or action and having to reckon with that. But if everything is worthy of low trust at best, you never need to confront the impacts of policy or politics or personal action. Uncertainty — hey, I did hear about that, but who knows what’s true anyway — acts as a cultural novocaine, allowing one to persist in inaction, even as evidence mounts of effects that that same individual might find repugnant — if, you know, it turns out to be really true. Like, really, really true. And who knows?

The depressing thing is that there are methods that can help with this undifferentiated cynicism but we aren’t rolling them out to students. Remember the graph above showing the undifferentiated cynicism? Here it is again, the dubious prompts in red, the trustworthy in blue, for a variety of misinformation and disinformation across the political spectrum:

Not good. But here’s how students do after our “four moves” style instruction:

For reference, a three for most of these trustworthy prompts is a good answer, so this is an incredible amount of differentiation. That’s after just four hours. More details here.

Most of what we give students today doesn’t get them anywhere near this, of course. Some of it probably makes them worse at this stuff — so many of the free-text comments from students who got this stuff wrong used language straight out of an information literacy session they had had at some point (more on that later).

But we could give students this, if we wanted to. And if we want to retain our capacity to be unsettled — the driver of so much of the politics that has improved this country and others — giving them this is essential.

In the Web’s Hyperreality, Information Is Experience

A neighbor was sweeping his sidewalk, pushing tiny white rocks back into his rock garden. The sky was an uninterrupted blue. A mailman worked his way up the empty street. There were no signs of “Sharia Law.” The migrant caravan was still hundreds of miles away in Mexico. Antifa protesters had yet to descend on Pahrump. Chapian squinted against the sun, closed the shades and went back to her screen.

 — Description of Shirley Chapian, consumer and believer of near-apocalyptic right-wing disinformation, from ‘Nothing on this page is real’: How lies become truth in online America (WaPo)

I have a few thoughts on the recent Washington Post piece on misinformation, which follows both a purveyor of it and a consumer of it. I’m breaking those thoughts into a few posts. This is number one.

It’s a doozy.

One note — I should be clear to people reading this who don’t know my deep hatred of conspiracy theory — 9/11 happened. Charlottesville happened. The Access Hollywood tape is real. If you think otherwise you’re a bit of a dope. That’s the whole point of using the examples I use below.

It’s amazing I have to say this stuff, but I choose these events as examples because they are real and yet our experience of them is surprisingly ungrounded, and that level of ungroundedness presents cultural vulnerabilities that are exploited by bad actors. Stick with me, folks. Read it to the end.

Facts and Downstream Beliefs

In discussions of misinformation, you’ll often hear it said that “facts” don’t really change downstream actions much.

This is true, in a very contained sort of way. If you believe nuclear power is safe, and I show you evidence it is not and then ask you what you think we should do about nuclear power, the chances are your answer will not change much. In fact, your knowledge may increase, but your larger beliefs about nuclear power will likely not move, at least in the short term. This is one of the more established facts of political science.

People are really resistant to changing their minds (unless, of course, you tell them a new idea is what they have always thought, but more on that in later posts). We have a status quo bias of sorts when it comes to our identity, and we’re not going to start ripping up the floorboards of our self-conception because someone forwarded us some new press clippings. Identity and experience are what really shift our thinking, and as such people are (rightly) skeptical that headlines reading “California Wildfires due to Global Warming Say Experts” are going to turn us all into cap-and-trade evangelists.

But reading the Washington Post story, I think it’s clear that this model of disinformation — as primarily changing beliefs — is wrong for much of what’s going on.

Experience Alters Identity, and Identity Action

Did 9/11 change you? It changed me. Not immediately, but over time. Like nearly everybody, I remember the day well. My wife drove me to work, our two year old in the back seat. For some reason the car radio had flipped to a Spanish station and they were talking excitedly. Nicole went to change it and I joked no, leave it there, give our daughter some exposure to Spanish.

“What do you think they are so excited about?” Nicole asked.

I walked into the office, ready to crank out some educational software and saw people huddled around some of the TVs we used for previewing educational video clips.

“What’s going on?” I asked, seeing the now famous footage on TV.

“Basically, we’re under attack.” said one of my co-workers.

I was a libertarian anti-war kid who believed most geo-political threats were blown out of proportion, who had opposed everything from the Gulf War to Kosovo, and openly scoffed at Clinton’s attempted Bin Laden strike as a wag-the-dog response to the Lewinsky scandal. I didn’t become a pro-war booster, of course, but the experience shattered my ideological simplicity. Temporarily, at least, it made me less vocally anti-war. Again, not pro-war — but far less confident about my own opinions for a short period of time.

That day is the big one for a lot of people, but think of how many other events shaped your life. The death of Heather Heyer in Charlottesville. The shooting at Tree of Life. Dylann Roof and that Confederate flag. The images of Katrina. The pepper-spraying of students at UC Davis. The shooting of Philando Castile. The Access Hollywood bus tape.

It would be folly, I think, in a world where the gender gap in politics has reached historic levels, to think that that Access Hollywood tape hasn’t had a part in framing what has come since. That Heather Heyer’s death didn’t push the ACLU to rethink its mission. That Castile’s death didn’t alter, at least incrementally, the sense of urgency around police violence. That even smaller things — like the President’s comment that there were fine people on both sides — haven’t altered the way we can frame and not frame events.

Political scientists are a cynical bunch about change and causality, but you won’t find political scientists that say that events don’t matter at all. Matter less than we think, matter only in certain circumstances. But from Watergate to the sinking of the Lusitania to Pearl Harbor to the death of Emmett Till, events have changed the political landscape in both subtle and profound ways.

These cultural experiences shape us and change what is politically possible. Not overnight, but, like the bankruptcy in The Sun Also Rises, gradually, then suddenly.  Even 9/11, one remembers, found itself amplified by the DC sniper, the ricin letters, the slow confluence of events that built a narrative of a world out of control. Together they made it politically possible to advance a war that wreaked untold suffering on both Iraqis and our soldiers, that made leaders unembarrassed to share new waterboarding authoritarian fantasies with pride.

I didn’t support the Iraq War, of course. For me, it was a matter of disagreeing, but too noncommittally. Too much deference to writers I should have ignored. Too much of the then Slate-inflected, on-the-other-hand distance, from about 2001 to early 2003. For others, it moved them into far darker places from which we have yet to escape. (Others, of course, were not knocked off balance at all, and I salute them.)

But added together, the shifts were enough to change history, not just of the U.S., but of the world.

And yet, for most people, their primary experience of these events was through screens, websites, and print. One of the defining emotional experiences of any person my age’s life, and it was essentially information.

Information Is Experience

Years ago, Baudrillard made a compelling argument that the nature of reality had shifted over the course of human history, due to a shift in the way simulation relates to experience. According to him, first order simulation — the more common simulation of the past — represents things but maintains a boundary. I draw a picture of you the best I can, and hang it up.  I make a map of the road to my house and hand it to you. You look at these as copies of reality and judge the fidelity of them to the source.

We’ll skip over second order simulation, to jump to this: by the time you get to third order simulation, the relationship of mediation to what is “real” has changed. The simulation doesn’t mediate reality as much as create it. For the vast majority, the reality of most of what you know about 9/11 or Charlottesville or Castile is never verified against any non-mediated reality. In turn, that created reality informs your physical experience, rightly or wrongly. My perspective of a recent “patriot” group that came onto campus, for example, was more influenced by the death of Heather Heyer than anything that happened on campus, and the digital reality I had been exposed to shaped that experience.

In the world of Baudrillard’s hyperreal, information is experience. And as such, the standard old-school experimental psychology tests — “Here’s some information about global warming, how do you feel about regulation now?” — and the standard negative campaigning studies don’t really apply to what we’re seeing right now. They don’t even come close.

Digital Experience Frames Non-Digital Experience

Imagine your parent dies, or, if you’ve been so unlucky, think back on the death of a parent. This is an event that seems relatively unmediated, very real, very raw.

But what if you had found out, directly after their death, that the death was not inevitable? That as a matter of fact your parent had been misdiagnosed, was not in fact sick at all, and had been prescribed an accidentally lethal dose of medication by a doctor too drunk to even stand up?

Would your life change? Would it be different? Would your perspective on health care change? Would your narrative about your parent’s life change? Would your identity change?

Some varieties of experimental psychology applied to disinformation would say no. It’s just information, it doesn’t change your opinions or beliefs. But of course it’s not just information. It’s experience, and the experience of having a parent die due to the negligence of a drunk doctor is fundamentally different than the experience of believing they died of lung cancer caused by earlier smoking.

If this seems over the top, consider that this is the experience of parents who have kids with autism, who daily must convince themselves that their child’s lifetime of struggle is not due to their decision to vaccinate them, or the greed of Big Pharma. It’s the experience of a person who doesn’t get a needed job and believes it is due to affirmative action policies they’ve read about, meme-ified across the web.

And it’s the experience of Chapian, where the stuff that she is reading online connects with nothing in her life, and yet it is the digital world that is more real. 

This is not just about Chapian, of course. It is an unavoidable consequence of the world we inhabit. I care deeply about children separated at the border, have broken down in heaving sobs over the Tree of Life slaughter. Yet, the realness of these things for me is not as much mediated as it is created by the media I consume, and my experience of that reality is not fundamentally different than local events I hear about.

This is not necessarily a bad thing — as a person with a great deal of privilege, to trust only what is outside my window as reality would be ethically dubious at best. But the stream of digital information that reaches people is experience, both on its own — without any connection to daily life whatsoever — and as a powerful frame for what they experience daily. Mucking with that stream of information doesn’t just change what I know; it changes my life history. It changes what happens to me.

Not What Would It Mean To Believe, But What Would It Mean To Experience

There are benefits to separating belief and experience in analysis, of course. But when thinking about disinformation I wouldn’t start with belief. I’d start with experience.

Not “what does it mean to believe crime is skyrocketing”. But “what would be the effects of skyrocketing crime on a population?” Not “what would it change if people believed migrants were threatening people in the street”, but instead “how would politics change if migrants were threatening people in the street?” Not “what would it change if people believed Monsanto was poisoning our cereal?” but instead “what would be the impact on your trust of corporations if you were personally poisoned by Monsanto?”

It seems a small change, but it will help you better understand how much of the most effective disinformation works, not by providing us with information, but by hacking the simulation that we must necessarily inhabit, and mucking with our experience, and thereby changing the realities that we are willing to accept.

Take for example this:

Chapian looked at the photo and nothing about it surprised her. Of course Trump had invited Clinton and Obama to the White House in a generous act of patriotism. Of course the Democrats — or “Demonrats,” as Chapian sometimes called them — had acted badly and disrespected America. It was the exact same narrative she saw playing out on her screen hundreds of times each day, and this time she decided to click ‘like’ and leave a comment.

“Well, they never did have any class,” she wrote.

 — from ‘Nothing on this page is real’: How lies become truth in online America (WaPo)

It’s a guess of course, but I’m going to guess that Chapian knows at least some Democrats, but that given her level of interaction with Facebook and her lack of interaction with others it is more likely that what she reads on the web alters her experience of those people more than her relationship with those people informs her experience of the web. The same would be true of immigrants, communities of color, and so on.

As she reads, she is not simply learning new information — she is repeatedly having enraging experiences with Democrats, Jews, migrants, Hollywood celebrities and others, and each bad experience does not ground to some mediated but ultimately singular reality. Instead, it grounds to a rich texture of other digital events which have formed her set of impressions. It’s believable the Democrats flipped off Trump because — well, all of your life experience supports it. I mean, for starters, Jay-Z rapped “middle finger to the lord” at a Clinton rally:

Michelle herself said “White folks are what’s wrong with America”


And Obama was caught on a mic saying that 

“These crackers got no idea what it’s like to feel the struggle. Speakin’ of, how you gonna keep all your n*ggas in line while whitey Trump rules the roost?”

– Last Line of Defense, 18 January 2017

This is all false, of course, but Chapian experiences it at the level the average American experiences child separation policy at the border, the Access Hollywood tape, or far-right violence.  The information is false, but each experience is real. And because of the phenomenon of hyperreality, she is more likely to trust her digital experience than her non-digital experience. She simply has accumulated far more influential experiences online, and to the extent that her online experience differs from other mediated realities, it’s the non-digital that is seen as not fitting:

For years she had watched network TV news, but increasingly Chapian wondered about the widening gap between what she read online and what she heard on the networks. “What else aren’t they telling us?” she wrote once, on Facebook, and if she believed the mainstream media was becoming insufficient or biased, it was her responsibility to seek out alternatives.

 — from ‘Nothing on this page is real’: How lies become truth in online America (WaPo)

So What?

The story above is about a trivial thing, of course — the rudeness of Democrats. But it’s the same process for pulling people into conspiracy theories about immigrants or Jews or the Deep State. It’s the same process by which many Democrats came to believe that Clinton had hacked the results of voting machines in the primary to hand her wins in Arizona, or that the White Helmets are really CIA agents.

See, even there I slipped — I let language minimize and constrain. “[T]he same process by which many Democrats came to believe that Clinton had hacked…” No. It’s the same process by which many Democrats came to experience the primaries as hacked by Clinton. See the difference?

Maybe this all seems like a trivial distinction to you. Maybe you already know this. I’m deep enough into this field that it is hard for me to tell what is common knowledge out there and what is not. Or maybe you think this conception is far too out there — though if that’s the case, I beg you to read the Washington Post article after this and reconsider. 

But whatever your take, I encourage you to think of disinformation in this way, at least for a bit — not as the spread of false information, but as the hacking of the simulated reality which we all must necessarily inhabit. As something that does not just change knowledge, but which produces new life experiences as real as the Iraq War, your neighbor’s fight with cancer, or your child’s illness. To see it in this way is perhaps more terrifying, but ultimately necessary as we attempt to address the problem.