The Information Intervention Chain: Interface Layer Example

A couple of days ago I wrote up my description of the Information Intervention Chain. One of the points there was that work on each layer decreases the load on the layers below and helps cover some of the errors not caught in the layers above.

Here’s a simple example, where a user has responded to someone talking about ivermectin. Their impression — if we take this user at their word, and in this case I do — is that the NIH has started recommending ivermectin.

Now this is false. The NIH is not recommending the use of ivermectin for COVID-19; that is a fact. And I suppose some might say to just ban such things altogether. But I doubt we want to be in the business of policing every small, non-viral, good-faith but confused reply people make on social media. Moderation is important, but it needs to be well targeted.

So next we get to the interface layer. And here we find something pretty interesting. The user believes at least two wrong things:

  1. That this is a recent (late summer/fall) article on the use of ivermectin that negates previous guidance
  2. That this article represents a statement by the NIH

Take a moment to look at the post. Where would they get such ideas?

The fact is that their assumptions are quite reasonable given the failures of the interface to provide context. The article linked here is not from the NIH but rather from the American Journal of Therapeutics. It looks like it comes from the NIH, and that’s largely because the Twitter card (as well as the Facebook card, etc.) highlights the NIH.gov address as the source, a side effect of the article being available through a database run by the NIH. The card, in this case, actively creates the wrong impression.

The second point — is it new? Note that when the link is shared there is no indication of the article’s publication date. This article was actually published in the spring and is, at best, out of date at this point. But Twitter chooses not to make the date of the article visible in the shared card. And that’s not a dig on just Twitter here — at least as far as I can tell, the PubMed page doesn’t expose the publication date or the journal name at the meta level. Somewhat shockingly, there seems to be no Facebook- or Twitter-specific meta information at all. Even if Twitter wanted to make the publication venue and publication date more visible, it’s not clear the site gives them the information they would need to do it.
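To see how little a card builder may have to work with here, you can inspect a page’s meta tags directly. Below is a minimal sketch of such a check; it is my illustration, not anything Twitter or PubMed actually runs, and the particular tag names are assumptions about what card builders and scholarly indexes commonly read.

```python
# A minimal sketch: fetch a page and report which share-card meta tags it
# exposes. Card builders read tags like og:site_name and
# article:published_time; when those are absent, a card typically falls
# back to the bare domain (here, nih.gov) and shows no date.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

# Tags a share card could use for source and date attribution. The
# citation_* names follow the conventions many journal pages use;
# whether any given page emits these is an assumption to verify per site.
TAGS_OF_INTEREST = [
    ("property", "og:site_name"),            # human-readable source name
    ("property", "article:published_time"),  # machine-readable date
    ("name", "twitter:card"),
    ("name", "citation_journal_title"),
    ("name", "citation_publication_date"),
]

def audit_share_metadata(url: str) -> None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for attr, value in TAGS_OF_INTEREST:
        tag = soup.find("meta", attrs={attr: value})
        content = tag.get("content") if tag else None
        print(f"{value:28} -> {content or 'MISSING'}")

# Hypothetical usage; substitute a real article page to test:
# audit_share_metadata("https://pubmed.ncbi.nlm.nih.gov/<some-article-id>/")
```

If a run like this comes back mostly MISSING, the card builder has little to go on beyond the URL’s domain, which is exactly how an American Journal of Therapeutics article ends up wearing an NIH.gov badge.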

Now once you click through, you should be good. Should be good, but I’ll get to that in a moment.

Here’s the good news. If you click the link to the page, you see some of the information this person was unaware of: the journal name, the fact that it’s on PubMed, the date at the top. But even here we are undone by confusing interface choices.

That banner at the top? From the NIH, supposedly? What does it say?

It says that this is the National Library of Medicine at the National Institutes of Health, and you’re in luck, because there’s some important COVID-19 guidance below!

Wait, that’s not what a big banner with an exclamation point saying “COVID-19 Information” means? Then tell me: what is an average person supposed to think an exclamation-marked heading on an NIH site saying “COVID-19 Information” indicates?

It’s supposed to mean that what is below it is not official?

Well, good luck with that.

People keep wanting to talk about how people are hopelessly biased, or cynical, or post-truth, or whatever. And sure, maybe. But how would we know? How would we possibly know, when someone engaging in a plain reading of what both Twitter and the NIH are providing them here would come to this exact conclusion: that the NIH is now recommending they take ivermectin?

Now, can the layer below the interface intervention — in this case, the individual layer of educational interventions — clean this up? Well, educators have been trying to. Understanding things like the difference between an aggregation site like PubMed and a publisher like the American Journal of Therapeutics is something we teach students. But coming back to the “load” metaphor, it would make a lot more sense to clean this mess up at the interface layer, at least for a major site like PubMed. I mean, I can try to teach several billion people what PubMed is, or, alternatively, Twitter, Facebook, and PubMed itself could choose to make it clear what PubMed is at the interface layer, which would allow education to focus limited time on more difficult problems.

Nothing — not in any of the layers — is determinative in itself. But improving the information environment means chipping away at what can be done in each of the layers, until the problems left are truly only the hard ones. We still aren’t anywhere near that point.

The Information Intervention Chain

Some notes I just wanted to get down. There are four places where information interventions can be applied.

Moderation/Promotion. A platform always makes decisions on what to put in front of a user. It can decide to privilege information that is more reliable on one or another dimension, or to reduce the dissemination of unreliable or harmful information, either through putting limits on its virality or findability, or through removal. There are clearly things which need to be dealt with at this level, though it is notable that most arguments happen here.

Interface. Once a platform decides to provide a user information, it can choose to supply additional context. This is a place where there has been a mixed bag of interventions. Labeling is an example of one that has often been used in relatively ineffective ways. Other, more specific interventions have better effects — for example, letting people know a story deceptively presented as new is actually old (a toy version of that check is sketched just after this list of layers).

Individual. This is (usually) the realm of educational interventions. We can choose to build in users the capability to better assess the information they are provided. This might be specific understandings about information-seeking behavior, or more general understandings about the subjects in question or the social environment in which they are making judgments (including examining biases they may hold).

Social. Consuming information via social software is not an individual endeavor, but a process of community sense-making. Social interventions seek to empower communities of users to get better at collective sense-making and at promoting helpful information. Some of these interventions may involve platform actions — whatever one thinks of a recent Facebook proposal to identify knowledgeable members in Facebook groups, it is clearly a social intervention meant to aid in collective sense-making. Other interventions may not involve platforms at all — teaching members of communities to correct error effectively, for example, does not require any additional platform tools, but may have substantial impact. Teaching experts to communicate more effectively on social media may bring specific expertise into communities that desire it, and teaching community leaders the basics of a given subject can provide expertise to those with influence.
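As a concrete (and deliberately toy) example of the kind of interface context mentioned above, here is a sketch of an “old story” check. The threshold, the function name, and the dates are all invented for illustration; no platform’s actual logic is implied.

```python
from datetime import datetime, timezone

# Arbitrary illustrative threshold: how old a story can be before the
# share flow attaches an "old news" notice.
STALENESS_THRESHOLD_DAYS = 90

def old_story_notice(published: datetime, shared: datetime) -> str | None:
    """Return a warning string if the article predates the share by too much."""
    age_days = (shared - published).days
    if age_days > STALENESS_THRESHOLD_DAYS:
        return f"Heads up: this article was published {age_days} days ago."
    return None  # recent enough; no notice

# Hypothetical dates echoing the example above: a spring paper shared in fall.
print(old_story_notice(
    datetime(2021, 4, 22, tzinfo=timezone.utc),
    datetime(2021, 9, 15, tzinfo=timezone.utc),
))
```

Of course, as the PubMed example shows, a check like this only works if the page exposes a machine-readable publication date in the first place.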

The Intervention Chain

People sometimes argue about where interventions should be applied. For instance, there is a valid argument that deplatforming figures who are repeatedly deceptive may do more net good than interface interventions or media literacy. Likewise, many scholars point out that without strengthening impacted communities or addressing the underlying social drivers of bad information, little progress can be made.

I think it’s more helpful to see the various layers as a chain. Ultimately, the final level is always social — that’s where messages get turned (or not turned) into meaning and action. And it’s still a place where interventions are relatively unexplored.

But that doesn’t diminish the value of the other layers. Because each layer involves intensive and sometimes contentious work, it relies on the layers of intervention above to reduce the “load” it has to deal with. For instance, the choice between media literacy and moderation is a false choice. Media literacy can be made less cognitively costly for individuals to apply, but there still is a cost. If obvious bullshit is allowed to flow through the system unchecked — if, say, every third post is absolute fantasy — media literacy doesn’t stand a chance.

Likewise, proponents of social interventions often point out the flaws of the individual layer — people are social animals, of course, and a technocratic “check before you share” doesn’t reckon with the immense influence of social determinants on sharing and belief. And it’s true that solutions at the individual layer are a bit of a sieve, just as are solutions in the layers above. But we need to stop seeing that as a weakness. Yes, taken individually, interventions at each layer are leaky. But by filtering out the most egregious stuff they make the layers below viable. By the time we hit social interventions, for example, we are hopefully dealing with a smaller class of problems unsolvable by the other layers. Likewise, moderation interventions can afford to be a bit leaky (as they have to be to preserve certain social values around expression) if subsequent layers are healthy and well-resourced.
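The arithmetic behind that claim is worth making explicit. Here is a toy model of the chain as a series of leaky filters; the catch rates are invented for illustration, not measurements.

```python
# A toy model of the intervention chain as a series of leaky filters.
# Catch rates are invented for illustration only.
LAYERS = {
    "moderation/promotion": 0.50,  # fraction of bad items this layer stops
    "interface":            0.40,
    "individual":           0.30,
    "social":               0.30,
}

bad_items = 1000.0  # hypothetical bad items entering the chain
for layer, catch_rate in LAYERS.items():
    bad_items *= (1.0 - catch_rate)
    print(f"after {layer:<22}: {bad_items:6.1f} bad items remain")
```

With these made-up rates, no single layer stops more than half of what reaches it, yet the chain as a whole filters roughly 85% of the load (1000 items drop to 147). That is the sense in which individually leaky layers make the layers below them viable.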

Anyway, this is my attempt to get us past the endless cycle where people involved with theorizing one level explain to people theorizing the other levels why their work is not the Real Work (TM). In reality, no link in the intervention chain can function meaningfully without the others. (And yes, I am mixing a “chain” metaphor with a “layers” metaphor, but it’s a first go here.)

In a future post I hope to talk about promising interventions at each level here.