Some notes I just wanted to get down. There are four places where information interventions can be applied.
Moderation/Promotion. A platform always makes decisions on what to put in front of a user. It can decide to privilege information that is more reliable on one or another dimension, or to reduce the dissemination of unreliable or harmful information, whether by putting limits on its virality or findability or through outright removal. There are clearly problems that need to be dealt with at this level, though it is notable that most arguments happen here. (A toy sketch of what "demote rather than remove" might look like follows these four layers.)
Interface. Once a platform decides to provide a user with information, it can choose to supply additional context. Interventions here have been a mixed bag. Labeling is an example of one that has often been used in relatively ineffective ways. Other, more specific interventions have better effects: letting people know that a story deceptively presented as new is actually old, for example.
Individual. This is (usually) the realm of educational interventions. We can choose to build users' capabilities to better assess the information they are provided. This might mean specific understandings about information-seeking behavior, or more general understandings about the subjects in question or the social environment in which users are making judgments (including examining biases they may hold).
Social. Consuming information via social software is not an individual endeavor but a process of community sense-making. Social interventions seek to empower communities of users to get better at collective sense-making and at promoting helpful information. Some of these interventions may involve platform actions: whatever one thinks of a recent Facebook proposal to identify knowledgeable members in Facebook groups, it is clearly a social intervention meant to aid collective sense-making. Other interventions may not involve platforms at all: teaching members of communities to correct error effectively, for example, requires no additional platform tools but may have substantial impact. Teaching experts to communicate more effectively on social media may bring specific expertise into communities that desire it, and teaching community leaders the basics of a given subject can provide expertise to those with influence.
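As a rough illustration of the moderation/promotion layer, here is a minimal sketch of what blending a reliability estimate into feed ranking might look like. Everything here is hypothetical and invented for illustration (the Post fields, the weights, the scoring function); real ranking systems are vastly more complex, and this is a sketch of the idea, not anyone's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical, simplified feed-ranking sketch. All names and weights
# are invented for illustration, not drawn from any real platform.

@dataclass
class Post:
    text: str
    engagement: float   # predicted engagement, 0..1 (assumed)
    reliability: float  # estimated source reliability, 0..1 (assumed)

def rank_score(post: Post, reliability_weight: float = 0.5) -> float:
    """Blend predicted engagement with a reliability estimate.

    reliability_weight = 0 ranks purely on engagement; higher values
    increasingly privilege reliable information and demote unreliable
    posts (limiting virality/findability) without removing them.
    """
    return ((1 - reliability_weight) * post.engagement
            + reliability_weight * post.reliability)

posts = [
    Post("viral but dubious", engagement=0.9, reliability=0.1),
    Post("solid reporting", engagement=0.5, reliability=0.9),
]

# Demotion, not deletion: the dubious post still appears, just lower
# in the feed, and is therefore less findable and less viral.
for p in sorted(posts, key=rank_score, reverse=True):
    print(f"{rank_score(p):.2f}  {p.text}")
```

The point of the sketch is just that "moderation" covers a spectrum of levers, from a small downweight all the way to removal, and most of the interesting decisions happen between those poles.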
The Intervention Chain
People sometimes argue about where interventions should be applied. For instance, there is a valid argument that deplatforming figures who are repeatedly deceptive may do more net good than interface interventions or media literacy. Likewise, many scholars point out that, without strengthening impacted communities or addressing the underlying social drivers of bad information, little progress can be made.
I think it’s more helpful to see the various layers as a chain. Ultimately, the final level is always social — that’s where messages get turned (or not turned) into meaning and action. And it’s still a place where interventions are relatively unexplored.
But that doesn’t diminish the value of the other layers. Because each layer involves intensive and sometimes contentious work, it relies on the layers of intervention above to reduce the “load” it has to deal with. For instance, the choice between media literacy and moderation is a false choice. Media literacy can be made less cognitively costly for individuals to apply, but there still is a cost. If obvious bullshit is allowed to flow through the system unchecked — if, say, every third post is absolute fantasy — media literacy doesn’t stand a chance.
Likewise, proponents of social interventions often point out the flaws of the individual layer: people are social animals, of course, and a technocratic "check before you share" doesn't reckon with the immense influence of social determinants on sharing and belief. And it's true that solutions at the individual layer are a bit of a sieve, just as are solutions in the layers above. But we need to stop seeing that as a weakness. Yes, taken individually, interventions at each layer are leaky. But by filtering out the most egregious stuff they make the layers below viable. By the time we hit social interventions, for example, we are hopefully dealing with a smaller class of problems unsolvable by the other layers. Likewise, moderation interventions can afford to be a bit leaky (as they have to be to preserve certain social values around expression) if subsequent layers are healthy and well-resourced.
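To put rough numbers on the "leaky layers" intuition: if each layer catches only some fraction of the bad stuff, the residue reaching the final social layer still shrinks multiplicatively. The catch rates below are made up purely for illustration, not empirical estimates; the compounding is the point.

```python
# Back-of-the-envelope sketch of how leaky layers compound. The catch
# rates below are invented for illustration, not empirical estimates.

layers = {
    "moderation/promotion": 0.60,  # fraction of bad content addressed
    "interface":            0.30,
    "individual":           0.30,
}

bad_share = 1 / 3  # "every third post is absolute fantasy"
print(f"before any layer: {bad_share:.1%} of posts bad and unaddressed")
for name, catch_rate in layers.items():
    bad_share *= (1 - catch_rate)
    print(f"after {name} layer: {bad_share:.1%} remain unaddressed")

# Three sieves leave the social layer roughly 6.5% of posts to deal
# with, rather than 33%: leaky individually, viable together.
```

None of these layers would survive scrutiny on its own numbers; stacked, they change what the last layer has to carry.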
Anyway, this is my attempt at a contribution to get us past the endless cycle in which people theorizing one level explain to people theorizing the other levels why their work is not the Real Work (TM). In reality, no link in the intervention chain can function meaningfully without the others. (And yes, I am mixing a "chain" metaphor with a "layers" metaphor, but it's a first go here.)
In a future post I hope to talk about promising interventions at each of these levels.