
Reasonableness: An Introduction

There are two primary accounts of the relation between evidence and belief in misinformation research, and neither is adequate.

The first model is simple and direct: you see misinformation and it shifts your belief. It is not identical to the old hypodermic model of media impact, but it bears some relation to it. The idea is that I see that Hillary Clinton is supposedly implicated in a murder on lizardpeople.com and, well, now I’m definitely not going to vote for Hillary Clinton. Or I see a video of a person suddenly collapsing or dying, supposedly because of a vaccine, and I say, well, I’m not getting that vaccine then.

The second account is a bit more cynical. In this model, misinformation doesn’t really have much impact at all. Those who advance this model often talk about narratives, deep stories, incentives, and structural inequalities, and see these as barriers to any significant change in belief. The idea is that if I have a deep narrative that the government is corrupt and untrustworthy, I’ll believe things that support that and disbelieve things that don’t. The same goes if I have a deep resentment of increasing demographic diversity. In this view, if you want change, you have to address the larger narrative: fix inequalities, address self-identity, and so on. Or so the story goes.

For as long as I’ve been in this field we’ve had a set of people running studies in the first model, and people from the second model rightfully critiquing them. At the same time, the second model seems insufficient. To a misled reader, wrong information is processed exactly like right information, so to say that misinformation has no effect you would have to argue that no information has any effect on anything, which is clearly not the case.

It’s possible (probable, really) that I’ll be accused of not presenting one of the two models above fairly. But I mean only to sketch them briefly, because I wish to come at this from a different angle altogether.

The Pursuit of Reasonableness

Fundamental to the “narrative” account is the claim that we have the story backwards: we don’t collect evidence and then decide what to believe. Rather, we believe something and then collect evidence. I actually agree with this, and have for a long time. Here’s a little snippet of something I wrote back in 2016, where I talked about rebuttal shopping:

… a lot of stuff that goes viral on Facebook is posted as an implicit rebuttal to arguments that the poster feels are being levied against their position. This stuff tends to go viral on Facebook because the minute the Facebook user sees the headline they know this is something they need, an answer to a question or criticism [of their position] that irks them.

I still believe this. My issue with the “narrative” account is not its noting of this pattern of biased selection of evidence. Rather, it’s the unidirectional nature of the account. The “naive” account ran causality from information to belief. The “narrative” account runs it from belief to information. But here’s a question: if you already have a belief, why spend all this time collecting evidence and, in many cases, sharing it?

After all, people spend a lot of time doing this, and people generally invest their time in things that have value to them. When we ask those advancing the account of insurmountable narratives about this, they’ll reference things outside the logical: people share to self-express, people read things for reasons of self-identity. But this too is a bit odd. There are many different ways to express yourself or connect with your self-identity, and a lot of them are quite low effort. If all this sharing of facts has nothing to do with logic, then why are you collecting facts?

What I’d propose (and I have borrowed from a mishmash of sources here, from Leon Festinger to Matthew McKeon) is that people spend all this time because they want their beliefs to seem reasonable. And while that is connected to identity, it is connected in a way that straddles the worlds of logic and self-conception.

An Example

I’ll give you an example from my own family. A family member of mine did not want to get the COVID vaccine. When I’d call her, I wouldn’t talk too much about it; I’d just ask what she was currently thinking about the vaccine. And she would reply with a long list of reasons why she did not trust it, as well as ask me a variety of questions. For the most part I did not argue, though I did occasionally get frustrated with some of the logic. Eventually this family member did get the vaccine, and the reason she gave was that she “just was tired of talking about it.”

The interesting thing is that neither I nor anyone else was forcing her to defend her position. Rather, we knew her position, and she knew we knew her position and that we thought it was an unreasonable one. The talk on those phone calls was not meant to convince me not to get the vaccine. Rather, it was meant to convince me that her position was reasonable, and that she had come to it by reasonable means. She wanted two things at once: to not get the vaccine, and to be perceived as reasonable. Those things were in conflict, which meant that she had to spend quite a bit of time on phone calls introducing new evidence, new concerns, new stories. Eventually the maintenance of perceptions of reasonableness became too big a cost relative to just getting the vaccine.

This is not to “other” this position at all. We all do this all the time. We have beliefs, we would like to be thought reasonable, we supply reasons. Sometimes we do this simply so that we will be thought reasonable; other times we engage in persuasion, attempting to enhance the reasonableness of a position so that others will adopt it, as I was doing on those calls. We’re all doing this, quite a lot of the time.

Conclusion

Admittedly, this is just a proposal. It’s not wholly ungrounded: it aligns with a lot of thinking in epistemology and argumentation theory, mirrors work in the study of cognitive dissonance, and is informed in part by Mercier’s argumentative theory (which itself has many antecedents). But I am not presenting empirical research here. Rather, I am trying to outline a different view of the problem that might inform future research and interventions. In future posts I hope to show how this lens helps us explain some interesting findings in information literacy, for example.

But to review what I am proposing: yes, we come to beliefs before evidence, but we wish not only to express our beliefs but to be thought reasonable. In some cases, we adopt beliefs considered reasonable by those around us; sometimes we adopt those beliefs just to seem reasonable. When we adopt beliefs thought by some to be unreasonable, we supply reasons, often in the form of evidence. Far from being an afterthought, the evidence we supply is a necessary price we pay for the maintenance of our beliefs. If the reasonableness of a belief becomes too expensive or difficult to maintain, we lose the belief, or sometimes, sadly, the social circle we defend it to.

This model hopefully cuts a middle road between model one (facts form beliefs) and model two (facts are merely window-dressing on beliefs). People do quite often select facts based on pre-existing beliefs, but that does not mean the facts are irrelevant. On the contrary, since a sense of reasonableness is required for belief maintenance, the facts and evidence matter quite a bit, and people confronted with counter-evidence or a lack of supporting facts may find their beliefs difficult to maintain socially, and ultimately personally.

In the next couple of posts, I’ll show how the concept of reasonableness significantly shifts models of misinformation and information environments and, crucially, how it can inform educational approaches.

Notes

I tried to keep the sources out of the way in this piece. It’s a high-level mishmash of sorts, because my ultimate aim here is not to enter the multiple intersecting domains I have encroached on, but to get to more practical implications: an educational model that can inform educational interventions and be tested directly.

Still, here are some of the inspirations and bases for this.

Reasonableness is used by a lot of different scholars in a lot of ways. Rawlsian reasonableness is at the center of a certain view of political morality. Reasonableness also plays a role in law. Epistemologists have used the term in different ways, including as a standard for relevance. I mean something more constrained and at the same time more general. To argue is to enhance the reasonableness of a belief (or other position, such as a fear) relative to a proposition. To be thought reasonable as a person may require a range of things, but one requirement is that one has, and can supply, reasons for the beliefs one holds. Following Toulmin (1958), what constitutes “reasons” varies by culture, profession, domain, and era.

Outside of Toulmin’s work on argument, probably one of the main influences here is A Theory of Cognitive Dissonance, by Leon Festinger (1957). Most people don’t realize that Festinger’s work on dissonance began as an attempt to make sense of misinformation after an earthquake in India. What Festinger found was that people in the regions that experienced the least threat spread the most misinformation about imagined threats. He postulated that people who were safe needed to rationalize their fear, and hence had to create reasons for the fear. In our terms here, they had to make the fear reasonable.

Festinger is fascinating to me, because he is the author of a foundational, data-informed theory about misinformation, including the relation between identity and patterns of information-seeking and information-avoidance. And yet, for all the work on identity and misinformation, relatively few of the major works in that area cite it in any meaningful way. It’s not unknown in the field, but it’s on the edges, and I’d argue it shouldn’t be. At the core of the work is a vision in which individual facts don’t matter much on their own, but exert a cumulative effect over time that can trigger significant shifts when belief coherence becomes too stressed.

A lot of what I say here is guided by newer work in argumentation theory (2000s–present). Crucially, I am influenced by observations in a 2013 work by McKeon, who makes the provocative (and I think correct) claim that there is no hard line between argument and explanation, which are both rationale-giving activities. I also regret to inform you all that I have a scrawled note in my notebook from an argumentation theory reading binge half a year ago that says “argumentation enhances the reasonableness of a position,” and for the life of me I don’t know if that was a quote, a summary, or a thought of my own. It’s a shame, because that formulation has been really useful to me. But wherever it came from, it’s not far from the work of either McKeon or Robert Pinto.

In terms of linguistics, this presentation is highly influenced by Richard Stalnaker’s work from the 1970s on, which looked in part at how conversational participants negotiate the introduction of new evidence into the “common ground”. This work is extended by relevance-theoretic work in formal pragmatics in the 1990s, including Craige Roberts’s model of the question under discussion (QUD) in conversational discourse.


