Twitter Should Cancel the Appeals Process or Make It Work (also: I’m in Twitter jail!)

Welp, I was going to write a much more nuanced post about problems with the Twitter appeals process, but I’ll just put this here instead for now.

I got wrongly banned last week for a tweet where I was talking about the history of conspiracy theory and its relationship to current COVID-19 misinformation. Someone had posted that conspiracy pyramid that shows the relative harms of different conspiracy theories and asked where fluoride might fit. I replied saying I thought that fluoride definitely belonged in the 5G layer — not anti-Semitic, but definitely part of that dangerous John Birch Society politics/medical-conspiracy stew. A few minutes later I was hit by this.

Now, I want to be clear. This stuff happens to other people quite a bit, particularly women academics and activists, due to trolls gaming the reporting features. And it happens to lots of regular folks as well due to the algorithmic nature of enforcement — I saw someone go to Twitter jail once for tweeting “I hope Trump chokes on his own uvula” (incitement to violence!). So none of this is particularly noteworthy. This has been broken for a long time, and there are a lot of people I respect who say it may not even be fixable. I’m more optimistic and think it could be made workable, but even then it’s always going to be imperfect: there’s some collateral damage with even the best moderation regimes.

But in any case, I decided to opt for the appeal. After all, I’m a well-known expert on media literacy and COVID-19. My pinned tweet is actually a OneZero article on my efforts to fight misinformation on COVID-19, an effort I got involved in mid-February 2020, before Twitter was even thinking about this stuff. Etc, etc. I expected the appeal might take three days, maybe. So I appealed.

Now, appealing isn’t cost-free. In fact, one of the primary ways reporters contact me for information on how best to fight COVID-19 misinfo is through Twitter DMs, and when you decide to appeal you lose all access to your DMs, all ability to browse, everything. (And perversely, all those DMs just go into a black hole: they’ll be there when you get privileges back, but no one DMing you knows that in the meantime you can’t see them.) Getting banned for alleged COVID misinfo significantly affects my ability to work on real COVID misinfo. On the other hand, I don’t want to start accruing a bunch of black marks that might get me banned sometime down the road.

Anyway — it’s been a week now. I’ve hesitated to write this because I actually support stronger moderation on Twitter, and for the love of God, this isn’t an “I’ve been censored” story. But as always with policy, stronger isn’t enough; smarter matters much more. And an appeals process that is in effect a week’s ban isn’t really an appeals process at all. It would make more sense to me, and probably to everyone else, to simply give up the pretense of an appeals process on individual tweets altogether until Twitter can actually run one effectively. Had they not offered one, I’d have treated this as an algorithmic goof I had to live with; instead I lost a week on Twitter which I would have been using to actually advance anti-misinformation practices.

So that would be my recommendation to Twitter: cancel the appeals process, apply it narrowly to suspensions, or speed it up. At the very least, tell people entering it what the average time to resolution is. And while my suspension probably won’t derail national or international efforts against COVID-19, I can’t help but think of all the medical researchers and public policy people out there using Twitter to communicate and collaborate. So as much as Twitter seems to think any deference to academic culture is a thumb on the scale, I really hope they can have someone write up a list of experts more important than me and take a bit more care before they ban them. I assume what I was hit with was based on a programmatic scan, not trolls gaming reporting. But the anti-vaccine trolls are out there, and I know they are reporting the heck out of anyone who gets in their way. If Twitter doesn’t make even a nominal effort to protect those researchers, there will be more high-profile (and damaging) bannings to come.

(Incidentally, the fact that the notice does not actually tell me whether I was banned by a programmatic scan — having 5G and vaccines in the same tweet — or via a report is very bad in terms of both transparency and utility. I actually need to know whether it was a troll report or an algorithm. If it’s an algorithm, it’s a lightning strike, and I go on the way I have. If the trolls have found me, that’s a different problem, and one I need to be alerted to.)

If the appeal doesn’t come through soon, I’ll remove the tweet, which I guess means I’ll see you all in about 12-24 hours. (UPDATE: I have removed the tweet and am back)

One final note — I also hesitated putting this up because I don’t want to field questions from reporters about it. So many women and people of color deal with this sort of issue constantly, due to targeting by trolls. Talk to them, not me. Maybe actually phone up a sex worker and learn about the crazy path they have to thread on various platforms to avoid being shadowbanned, or social justice activists whose every sarcastic tweet is pored over and brigaded by trolls looking to get them kicked off the platform. Also, as I said, I’m broadly supportive of Twitter’s efforts to keep COVID misinfo off the platform. To paraphrase the famous Obama quote, I’m not against moderation, I’m against dumb moderation. But if you are a reporter looking to talk about moderation challenges, I highly suggest talking to people besides me. You can start with Sarah T. Roberts on what really goes on behind the scenes, and Safiya Noble on the issues of algorithmic enforcement (which, again, are felt less by people like me than others). I find Siva Vaidhyanathan’s thesis that the system cannot actually be made to work more pessimistic than my take, but it deserves more airtime. And of course for general policy perspective on platforms, my colleague at the Center for an Informed Public, Ryan Calo, is always a good call.

If on the other hand you want to talk about my work on COVID misinfo and the new and effective model of digital literacy I promote, feel free to email me at michael.caulfield@wsu.edu. Direct messages at @holden probably won’t be up for a while.

Normie Infiltration

Still banned from Twitter (over a dumb mistake their algorithm made), so I’ll just put this here — I am finding it really hard to figure out whether some of these QAnon groups are genuinely fracturing at the moment over things not playing out as they were told, or whether the groups have been infiltrated by normies posing as disaffected QAnon supporters. Honestly, I think it’s a bit of both; I just don’t know the ratio.

Screenshot of folks losing faith on the Great Awakening site.

If normie infiltration plays even a small role, that’s honestly fascinating. Infiltration not by radicals but by centrists. Strange times.

Microclout

I have a couple of people in my online social circle who spent the past month telling followers to “just watch” what would happen on the 6th, when everybody but them and their followers would be surprised that Joe Biden didn’t become president. At first, Mike Pence was going to heroically pull some imagined maneuver. Then it was another theory. But the idea from the posters was the same: remember who was right and who was wrong, they’d say, when this all happens.

I don’t think they were expecting what happened to happen. But I think they were doing something that feels very much like clout-building: taking a gamble on being the one person who seemed in the know, because the rewards would be significant if the prediction panned out.

There’s talk right now about the number of social media influencers at the Capitol Insurrection. A lot of the people leading it were media stars, and it’s difficult to know how much of it they did for their brand, and how much was for the desired result.

But I’m not sure those dynamics stop at a certain floor of users. It seems to me that everyone has at least a few people in their online circles who are approaching issues around these events and conspiracies related to them as a brand-building process. In that case, can we really say the motivation is as simple as “confirmation bias”? Or would we be better off thinking of these dynamics around issues of personal brand-building, its incentives and disincentives?

When it comes to disinformation, the public is a vector, not a target.

Disinformation has always been about getting elites to do things. That’s the point that so many who have looked at what percentage of people saw what on Facebook have missed. The public isn’t a target — it’s a vector (and it’s not the only vector).

Hopefully, as we watch what’s going on today, people can see that now? We track spread, but the real measure is penetration into groups that either make decisions or exert broad public influence. Or exert influence over those with influence.

Whether it’s our President who is talking about “shredded votes” in Fulton County, the politicians frightened of a small but heavily deluded set of future primary voters, or health care workers starting to plug into antivax networks due to COVID, that’s what to watch.

And by that measure, I’m sorry to say, we’re looking increasingly fucked.

Control-F and Building Resilient Information Networks

In the misinformation field there’s often a weird dynamic between the short-term and long-term gains folks. Maybe I don’t go to the right meetings, but my guess is that if you went to a conference on structural racism and talked about redesigning the mortgage interest deduction specifically to build Black wealth rather than intensify racial wealth gaps, most of the folks there would be fine with yes-anding it. Let’s get that done short-term, and other stuff long-term. Put it on the road map.

In misinformation, however, the short term and long term people are perpetually at war. It’s as if you went to the structural racism conference and presented on revised mortgage policy and someone asked you how that freed children from cages on the border. And when you said it didn’t, they threw up their hands and said, “See?”

Control-F as an Example of a Targeted Approach

Here’s an example: control-f. In my classes, I teach our students to use control-f to find stuff on web pages. And I beg other teachers to teach control-f as well. Some folks look at that and say, that’s ridiculous. Mike, you’re not going to de-radicalize Nazis by teaching them control-f. It’s not going to address cognitive bias. It doesn’t give them deep critical thinking powers, or undo the resentment that fuels disinformation’s spread.

But consider the tactics used by propagandists, conspiracy theorists, bad actors, and the garden-variety misinformed. Here’s a guy yesterday implying that the current coronavirus outbreak is potentially a bioweapon, developed with the help of Chinese spies (that’s how I read the implication, at least).

Screenshotted tweet links to a CBC article and claims it describes a husband and wife who were Chinese “spies” removed from a facility for sending pathogens back to China.

Now is that true? It’s linked to the CBC, after all. That’s a reputable outlet.

The first thing you have to do to verify it is click the link. And right there, most students don’t know they should do that. They really don’t. It’s actually where most students fail: they don’t click the link. But the second thing you have to do is see whether the article actually supports that summary.

How do you do that? Well, you could advise people to read the article in full, but zero people are going to do that, because it takes too long to do for every tweet or email or post. And if it takes too long, the most careless people in the network will tweet unverified claims (because they are comfortable with not verifying) and the most careful people will tweet nothing (because they don’t have time to verify to their level of certainty). Multiply that out over a few hundred million nodes and you get the web as we have it today, victim of the Yeats Effect (“The best lack all conviction, while the worst / Are full of passionate intensity”). The reckless post left and right and the careful barely post at all.

The Yeats Effect Is Partially a Product of Time Disparities

One reason the best lack conviction, though, is time. They don’t have the time to get to the level of conviction they need, and it’s a knotty problem, because that level of care is precisely what makes their participation in the network beneficial. (In fact, when I ask people who have unintentionally spread misinformation why they did so, the most common answer I hear is that they were either pressed for time, or had a scarcity of attention to give to that moment).
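If you want to see how lopsided that gets, here’s a toy model. It’s purely my illustration, with made-up numbers, not anything from the research: everyone gets the same couple of minutes per item, and the care someone takes sets how much verification time they need before they’ll post.

    # Toy model of the Yeats Effect (an illustration with made-up numbers):
    # everyone has the same small time budget per item, and more careful
    # users need more verification time before they are willing to post.
    import random

    random.seed(0)
    posts = {"careless": 0, "careful": 0}
    for _ in range(100_000):
        care = random.random()        # 0 = reckless, 1 = maximally careful
        needed = care * 10            # minutes of checking they require
        if needed <= 2:               # but only ~2 minutes are available
            posts["careless" if care < 0.5 else "careful"] += 1

    print(posts)  # by construction, every post here comes from low-care users

Crude as it is, it shows the shape of the problem: raise the cost of verification and you don’t just lose posts, you lose the careful posts first.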

But what if — and hear me out here — what if there was a way for people to quickly check whether linked articles actually supported the points they claimed to? Actually quoted things correctly? Actually provided the context of the original from which they quoted?

And what if, by some miracle, that function was shipped with every laptop and tablet, and available in different versions for mobile devices?

This super-feature actually exists already, and it’s called control-f. Roll the animated GIF!

In the GIF above we show a person checking whether key terms in the tweet about the virus researchers are found in the article. Here we check “spy”, but we can quickly follow up with other terms: coronavirus, threat, steal, send.

I just did this for the tweeted article, and those terms repeatedly appear either not at all or only in links to other, unrelated stories. Except for threat, which turned up this paragraph that says the opposite of what the tweet alleges:

Paragraph indicating no threat to the public was perceived. Which would be odd if they were shipping deadly viruses around.

The idea here is not that the contextualization is wrong whenever those specific words are missing. Rather, instead of reading every cited article to determine whether it has been correctly contextualized, a person can quickly identify the cases with a high probability of miscontextualization, which are therefore worth the effort to check. And for every case like this one, a reckless summary, there are maybe ten other cases where the first term helps the user verify it’s good to share. Again, in less than a few seconds.
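For the programmatically minded, the same triage can be sketched in a few lines of code. This is just my illustration of the idea (the URL is a placeholder and the term list is hypothetical); in the browser, control-f alone does the job.

    # Sketch of control-f-style triage done programmatically (an
    # illustration, not a real tool): fetch the linked article and count
    # how often each key term from the claim actually appears in it.
    import re
    import urllib.request

    def term_triage(url, terms):
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        text = re.sub(r"<[^>]+>", " ", html).lower()  # crude tag stripping
        return {t: len(re.findall(r"\b" + re.escape(t.lower()) + r"\b", text))
                for t in terms}

    # Zero hits on most of a claim's key terms is a signal the tweet
    # deserves a closer look before sharing.
    print(term_triage("https://example.com/article", ["spy", "steal", "send", "threat"]))

A match doesn’t prove the summary right, and zero hits doesn’t prove it wrong; like control-f itself, this is a fast filter for what deserves a second look.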

But People Know This, Right?

Now, here’s the kicker. You might think that since this form of verification triage is so easy, we’d be in a better situation. One theory is that people know about control-f, but they just don’t care. They like their disinfo; they can’t be bothered. (I know there’s a mobile issue too, but that’s another post.) If everybody knows this and doesn’t do it, isn’t that just more evidence that we are not looking at a skills issue?

Except, if you were going to make that argument, you’d have to show that everybody actually does know about control-f. It wouldn’t be the end of the argument — I could reply that knowing and having a habit are different — but that’s where we’d start.

So think for a minute. How many people know that you can use control-f and other functions to search a page? What percentage of internet users? How close to 100% is it? What do we have to work with —

Eh, I can’t drag out the suspense any longer. This is an older finding, internal to Google: only 10% of internet users know how to use control-f.

I have looked for more recent studies and I can’t find them. But I know that in my classes many-to-most students have never heard of control-f, and another portion is aware it can be used in things like Microsoft Word but unaware it’s a cross-application feature available on the web. When I look over students’ shoulders as they execute web search tasks, I repeatedly find them reading every word of a document to answer a specific question about the document. In a class of 25 or so, there’s maybe one student who uses control-f naturally coming into the class.

Can We Do This Already Please

If we assume that people have a limited amount of effort they’ll expend on verification, the lack of knowledge here may be as big a barrier as other cognitive biases. Why we aren’t vigorously addressing issues like this in order to build a more resilient information network (and even to just help students efficiently do things like study!) is something I continue to not understand. Yes, we have big issues. But can we take five minutes and show people how to search?

Memorizing Lists of Cognitive Biases Won’t Help

From the Twitters, by me.

What’s the cognitive bias that explains why someone would think having a list of 200 cognitive biases bookmarked would make them any better at thinking?

Screenshot of tweet encouraging people to read a list of 200 biases to be a better thinker.

(It literally says it’s “to help you remember” 200+ biases. Two hundred! LOL, critical thinking boosters are hilarious.)

I should be clear — biases are a great way to look at certain issues *after* the fact, and it’s good to know that you’re biased. Our own work looks pretty deeply at certain types of bias and tries to design methods that route around them, or use them to advantage.

But if you want to change your own behavior, memorizing long lists of biases isn’t going to help you. If anything it’s likely to just become another weapon in your motivated reasoning arsenal. You can literally read the list of biases to see why reading the list won’t work. 

The alternative approach, à la Simon/Gigerenzer, is to see “biases” not as failings but as useful rules of thumb that are inapplicable in certain circumstances, and to push people towards rules of thumb that better suit the environment.

As an example, salience bias — paying more attention to things that are prominent or emotionally striking — is a pretty useful behavior in most circumstances, particularly in personal life or local events. 

It falls apart partly because in larger domains — city, state, country — there are more emotional and striking events than you can count, which means you can be easily manipulated through selection, and partly because larger problems often are not tied to the most emotional events.

Does that mean we should throw away our emotional reaction as a guide altogether? Ignore things that are more prominent? Not use emotion as any indication of what to pay attention to?

Not at all. Instead we need to think carefully about how to make sure the emotion and our methods/environment work *together*. 

Reading that list of biases may start with “I will not be fooled,” but it probably ends with some dude telling you family separation at the border isn’t a problem because “it’s really the salience effect at work”. 

TL;DR: biases aren’t wholly bad, and the flip side of a bias is a useful heuristic. Instead of thinking about biases and eliminating them, think about applying the right heuristics to the right sorts of problems, and organizing your environment in such a way that the heuristics don’t get hacked.

The Stigmergic Myth of Social Media, or Why Thinking About Radicalization Without Thinking About Radicalizers Doesn’t Work.

One of the founding myths of internet culture, and particularly web culture, is the principle of stigmergy.

This will sound weird, but stigmergy is about ant behavior. Basically, ants do various things to try to accomplish objectives (e.g. get food to the nest), but rather than using a command-and-control structure to coordinate, they use pheromones, or something like pheromones. (My new goal is to write shorter, quicker blog posts this year, and that means not spiraling into my usual obsession with precision. So let’s just say something like pheromones. Maybe actual pheromones. You get the point.)

So, for example, ants wander all over, and they are leaving maybe one scent, but they go and find the Pringle crumbs and as they come back with the food they leave another scent. A little scent trail. And then other ants looking for Pringles stumble over that scent trail and they follow it to the Pringle crumbs. And then all those ants leave a scent as they come back with their Pringle crumbs, and what happens over time is the most productive paths have the best and strongest smell.
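You can watch the dynamic in a toy simulation, if that helps. This is my sketch, with made-up path costs and deposit/evaporation rates, not anything from the ant literature.

    # Toy stigmergy: ants choose among paths in proportion to pheromone,
    # cheaper (shorter) trips deposit more scent per trip, and all scent
    # slowly evaporates. All costs and rates are invented for illustration.
    import random

    paths = {"short": 2, "medium": 4, "long": 8}   # hypothetical trip costs
    pheromone = {p: 1.0 for p in paths}

    for _ in range(1000):                          # 1000 ant trips
        choice = random.choices(list(paths),
                                weights=[pheromone[p] for p in paths])[0]
        pheromone[choice] += 1.0 / paths[choice]   # cheap trip, strong trail
        for p in pheromone:
            pheromone[p] *= 0.99                   # evaporation

    print(pheromone)  # the short path usually ends up smelling strongest

No ant sees the map, but the map emerges anyway.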

If you think this smells very E. O. Wilson, it is. But it’s not just E. O. Wilson. This stuff was everywhere in the 1990s. Take “desire paths”, a metaphor I first heard when I landed in the middle of the dot-com explosion. The story goes that some university somewhere doesn’t build paths when they first put up the buildings. Instead, they wait for the first snow and see where the tracks between the buildings come out. And where the tracks fall, they put the paths. Another version talked about the wornness of objects as an indicator. And in my first meeting with a MediaLab grad in 1999 (who’d been hired as a consultant for the educational software company I worked for), he described to me his major project: Patina, a web site whose pages showed visible signs of wear the more they were read.

When Web 2.0 came around, this cluster of metaphors became its founding mythology. I swear, unless you were around then, you have no idea how deeply it shaped the thinking of Silicon Valley. You really don’t.

And like a lot of mythologies, there’s a lot of truth to it. When I say myth, I don’t mean it’s wrong. It’s a good way to think about a lot of things. I have built (and will continue to build) a lot of projects around these principles.

But it’s also a real hindrance when we talk about disinfo and bad actors. Because the general idea in the Stigmergic Myth is that uncoordinated individual action is capable of expressing a representative collective consciousness. And in that case all we have to do is set up a system of signals that truly capture that collective or consensus intent.

But none of the founding myths — ants and Pringles, Swedish college desire paths, or even Galton’s ox weighing — deal with opposing, foundational interests. And opposing interests change everything. There isn’t a collective will or consciousness to express.

Faced with this issue, Web 2.0 doubled down. The real issue was the signals were getting hacked. And that’s absolutely true. There was a lot of counterfeit pheromone about, and getting rid of that was crucial. Don’t discount that.

But the underlying reality was never addressed. In areas where cooperation and equality prevail, the Stigmergic Myth is useful. But in areas of conflict and inequality, it can be a real hindrance to understanding what is going on. It can be far less an expression of collective will or desire than other, less distributed approaches, and while fixing the signals and the system is crucial, it’s worth asking if the underlying myth is holding our understanding back.

A New Year’s Eve Activity (and a New Year’s Day Wish)

I made a short video showing a New Year’s Eve activity around SIFT, and then getting serious for a minute with a New Year’s Day wish.

I don’t know how many people know this about me, but I actually study misinfo/disinfo pretty deeply, outside of my short videos on how to do quick checks. If anything, I probably spend too much time keeping up with the latest social science, cognitive theory, network analysis, etc. on the issue.

But scholarship and civic action are different. Action to me is like Weber’s politics, the slow drilling of hard boards, taking passion and perspective. You figure out where you can make a meaningful difference. You find where the cold hard reality of where we are intersects with a public’s desire to make things better. And then you drill.

It’s been three long exhausting years since I put out Web Literacy for Student Fact-Checkers, and over a decade since I got into civic digital literacies. I’m still learning, still adapting. And still drilling.

Happy New Year, everyone. And thanks to everyone who has helped me on this weird, weird journey.

Chatham House Sharing for OER

I’ve noted a new need in my open education work that isn’t supported by many tools and isn’t reflected in any licenses. I’m going to call it “Chatham House Sharing.”

For those that don’t know, the Chatham House Rules are a set of rules traditionally used to govern what reporters covering an event may publish, but more recently used to govern the tweetability of different gatherings. There are probably more rules than two, but the most notable are these:

  • You can report out anything said, but
  • You can’t identify who said it

The reason for the rules is that people need to speak freely as they hash things out at a conference, and to do that they sometimes have to speak loosely in ways that don’t translate outside the conference. Politicians or practitioners may want to express concerns without triggering follow-up questions or teapot tempests over out-of-context utterances. Academics might like to share some preliminary data or explore nascent thoughts without confronting the level of precision a formal publication or public comment might require. And people who work for various companies may want to comment on various things without the inevitable tempest that accompanies “someone from Microsoft said X” or “someone from Harvard said Y.”

In open education there is a need for a form of sharing that works like this, especially in collaborative projects, though for slightly different reasons. If we imagine people working together on an evolving open resource on, say, the evolution of dark money in politics, it stands to reason that many authors might not want it shared under their name. Why?

  • Most of the time it’s a work in progress; it’s not ready yet.
  • It may have undergone revisions from others that they do not want their name attached to.
  • They may never want their name attached to it, because they cannot give it the level of precision their other work in the field demands.
  • They may be part of a group that is explicitly targeted for their gender, race, or sexual orientation online and fear they will become a lightning rod for bad actors.
  • They may work for an institution or company and worry that no matter how often their input comes with the caveat that it does not represent the views of their employer, it may be read that way, and that is a risk.
  • In cases where there is a revision history, they might be OK with attaching a name to the final product, but not like the fact that the history logs their activity for public consumption. (One can imagine other people to whom they owe projects complaining about the amount of time spent on the resource. Even worse, as data gets combined and recombined with other tracking data, it’s impossible to predict the ways in which people will use anything time-stamped — but there are almost surely malicious uses to come.)

What Chatham House Sharing would be is sharing that follows these rules:

  • Within the smaller group of collaborators, contributions may or may not be tracked by name, and
  • Anyone may share any document publicly, or remix/revise for their own use, but
  • They may not attribute the document to any author or expose any editing history

If they want, of course, they can use their own authority to say, hey this document I found is pretty good. If they want to make some edits and slap their name on it, noting that portions of the document were developed collaboratively by unnamed folks, they could do that as well.

Maybe there’s already a license that covers this — perhaps one that makes it legally binding. People will have to let me know. The Creative Commons licenses tend to run the other way, with attribution encouraged even on CC0 licenses, though not required. But I’ve worked with academics long enough to know that the promise not to be quoted on something can facilitate their cooperation on more informal documents, and I’ve seen enough ugliness to know that there are risks in taking credit that are not felt equally. OER and open educational practice should be able to accommodate these issues in tools, licensing, and norms.
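To make the mechanics concrete, here’s a minimal sketch of how a collaboration tool might implement those rules. Everything in it is hypothetical (no such tool exists, as far as I know): names and revision history are tracked inside the group, and the public export strips both.

    # Hypothetical sketch of Chatham House Sharing in a tool: authorship
    # and revision history are visible inside the group, but any public
    # export carries only the current text, with no names and no history.
    from dataclasses import dataclass, field

    @dataclass
    class Revision:
        author: str   # visible only to the collaborating group
        text: str

    @dataclass
    class SharedDoc:
        title: str
        revisions: list = field(default_factory=list)

        def edit(self, author, text):
            self.revisions.append(Revision(author, text))

        def export_public(self):
            """Public copy: current text only, no authors, no history."""
            latest = self.revisions[-1].text if self.revisions else ""
            return {"title": self.title, "text": latest}

    doc = SharedDoc("Dark Money in Politics")
    doc.edit("alice", "First draft.")
    doc.edit("bob", "Second draft, heavily revised.")
    print(doc.export_public())   # no trace of alice or bob

Anyone downstream can still put their own name on a remix, as described above; the tool just never does it for them.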

Walkthrough for Windows App

Back in January I started working on a web-based application to help teachers and others make fact-checking infographics, as part of a Misinformation Solutions Forum prize from RTI International and the Rita Allen Foundation. I got it to work, but as we tried to scale it out we found it had:

  • Security concerns (too much potential for hacking it)
  • Scalability concerns (too resource intensive on the server)
  • Flexibility concerns (too rigid to accommodate a range of tasks, and not enough flexibility on tone for different audiences)

Maybe someone can solve those issues as part of a server app. But after a small bout of depression I realized that you could solve all of these issues by making it a desktop, OS-native app.

What I’ve ended up with, however, does more than simply build a set of fact-checking GIFs. It’s a flexible tool to present any web process, or even non-web issues. It’s going to make it easy for people to educate others on how to check things, but potentially it’s a way to make our private work and processes visible in many other ways as well.

Here’s an example of output, which also shows the implementation of blockquotes and linking.

I’ve given it to a couple people so far to try, and the response I’ve gotten is — weirdly — how *fun* it is to explain things like this. And it is. It’s really odd.

In any case, if you have access to a Windows laptop or desktop, download, unzip wherever you want, read the license (it’s free software with the usual caveats), and fire it up. If you make something cool let me know.

Windows application.

Oh, and Mac users — I’m not able to build a version for Mac (I’m surprised I was able to build this one, tbh), but given that someone with my hacky abilities can make this for Windows, I’m sure that if there is demand, someone of talent can make a Mac version in less than a week.

Also, I’m thinking through the legal implications of hosting the produced walkthroughs on a central site — or whether it’s better to keep them distributed (everyone hosts their own, but shares links). More on that later.