Some Preliminary Results On Cynicism and Online Information Literacy

We (AASCU’s Digital Polarization Initiative) have a large information literacy pilot going on at a dozen institutions right now using our materials. The point is to gain insight into how to improve our instruction, but also to make sure it is working in the way we think it is. Part of that involves formal assessment.

A few weeks ago we finished the Washington State classes in the pilot, in what we're calling our high-fidelity implementation. For those unfamiliar with educational pilots, I should note that this doesn't mean the other implementations were worse or lower-value. It just means that since the materials were delivered by someone intimately familiar with how to deliver them (me), we have high confidence that the intervention we tested is the intervention we think we tested.

In any case, we have some of our first pre/post data in, on a decent implementation.

Now, the key to our particular assessment is getting a series of free-text answers scored: the rationales students provide for why they trust or don't trust something they see in an authentic prompt. We're working on that. That's going to be our core metric.

In the meantime, however, we do have simple multiple-choice scores on student trust levels for both dubious and trustworthy prompts. And it's worth sharing what we're seeing there in a very general way, because it doesn't match a lot of what I hear people speculating about.

One important caveat to start: I am reporting here *only* on the four classes I taught directly. We have well over a thousand students in the multi-institutional pilot, with something like 1,300 assessments already logged; it's a big, knotty, messy assessment problem that will take some time and money to finish. But what we're seeing in the interim is important enough that reporting on the smaller group more immediately seemed warranted.

Rating Trustworthiness

So here are the assessment directions the students got. They are pretty bland:

You will have 20 minutes total to answer the following four questions. You are allowed to use any online or offline method you would normally use outside of class to evaluate evidence, with the exception of asking other students directly for help.

You are allowed to go back and forth between pages, and can revisit these instructions at any time.

The students then took an A or B version of the test before the four-hour intervention and the opposite test afterwards. The instruction was delivered in the classes over three weeks, with a three-week gap before the post-assessment in order to capture skills decay. (Due to a scheduling conflict, one of the four classes received only 2 hours and 40 minutes of instruction; they are not included in the post-test data here, but their results were generally somewhere between the pre- and post-test results of the other classes.)

Key to the assessment was that we had a mixture of what we called "dubious" prompts, where a competent student should choose a very low or low level of trust (depending on the prompt), and trustworthy prompts, where competent students would rate it moderate or higher.

So, for example, this is a dubious prompt: a conspiracy story that has been debunked by just about everyone:

Our target for this is that students rate trust in it "Very Low," based on information you can find quite easily about it (using our "check other coverage" move).

And here is a paired trustworthy prompt in the news story category (our other prompts are in photographic evidence, policy information, and medical information):

In the above case, of course, the story is true, having been reported by multiple local and national outlets, and supported by multiple quotes from school officials. We set the target on this as meriting high or very high trust. The story as presented happened, and apart from minor quibbling about the portrayal, a fact’s a fact.

As you can see, this is all rough, which is why we are ultimately more interested in the free text replies. People might mean different things by “high” or “very high”. Arguments could be made that a prompt we considered very low should be rated “low”. Students might get the answer right for wrong reasons. Scoring the free text will show us if the students truly increased in skill and helpful dispositions.

But even with this very rough data we’re seeing some important patterns.

Finding One: Initial Trust of Everything Is Low

First, students rate dubious prompts low before the intervention:

Great, right?

Yeah, except not so much. Here are the trust ratings on the trustworthy prompts right next to them in blue:

On average, students in our four WSU classes rated everything, dubious and trustworthy alike, as worthy of low to moderate trust.

This actually doesn't surprise me, as it's what I've seen in class activities over the past couple of years: a phenomenon I call "trust compression." We're looking to make sure that this phenomenon is not an artifact of the subject headings around the prompts or of student expectations about the material, but we expect it to hold.

Finding Two: After Instruction the Students Differentiate by Becoming Less Cynical

I was going to do a big dramatic run-up here, but let's skip it. After the pre-test we did (in three of the classes) four hours of "four moves"-style instruction. And here's what trust ratings look like on the assessment after those four hours of instruction (these are raw results, so caveat emptor, etc.):

That's the same y-axis there. You can see what is happening: the students are "decompressing" trust, rating trustworthy items more trustworthy and (with the exception of the baking soda prompt) dubious prompts more untrustworthy. The graph is a bit hard to read without understanding what an appropriate response is on each. On the gun poll, for example, 2 is an acceptable answer: it's a survey done by a trustworthy firm and in line with many other findings, but it is sponsored by Brookings and pushed by the Center for American Progress, neither of which can be seen as a neutral entity. The chemo prompt is deserving of at least a three, and the rocks prompt should be between three and four. But the pattern seems clear: most of the gap opening up comes from the students trusting trustworthy prompts *more*.
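If it helps to see the arithmetic, here is a rough sketch of the kind of pre/post tabulation behind those charts. To be clear, the ratings, prompt names, and 0-4 scale below are invented stand-ins for illustration, not our actual dataset or pipeline:

```python
# A rough sketch of the pre/post tabulation behind the charts above.
# The ratings, prompt names, and 0-4 trust scale are invented stand-ins;
# this is not our actual dataset or analysis pipeline.
from statistics import mean

# (prompt, prompt_type, phase) -> student trust ratings, 0 = very low, 4 = very high
ratings = {
    ("conspiracy story", "dubious",     "pre"):  [1, 2, 1, 2],
    ("conspiracy story", "dubious",     "post"): [0, 1, 0, 1],
    ("school story",     "trustworthy", "pre"):  [2, 1, 2, 2],
    ("school story",     "trustworthy", "post"): [3, 4, 3, 4],
}

def mean_trust(prompt_type, phase):
    """Average rating across all prompts of a given type in a given phase."""
    return mean(r for (_, t, p), rs in ratings.items()
                if t == prompt_type and p == phase for r in rs)

for phase in ("pre", "post"):
    gap = mean_trust("trustworthy", phase) - mean_trust("dubious", phase)
    print(f"{phase:4}: trustworthy {mean_trust('trustworthy', phase):.2f}, "
          f"dubious {mean_trust('dubious', phase):.2f}, gap {gap:.2f}")
# A small pre-test gap is "trust compression"; instruction should widen it,
# mostly by raising trust in the trustworthy prompts.
```

The thing to watch is the gap: a small gap between trustworthy and dubious ratings is trust compression, and what we want instruction to do is widen it, ideally by raising trust in the trustworthy prompts.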

How the students do this is not rocket science, of course. They become more trusting because rather than relying on the surface features and innate plausibility of the prompts, they check what others say: Snopes, Wikipedia, Google News. If they find overwhelming consensus there, or reams of linked evidence on the reliability of the source, they make the call.
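The move is mechanical enough that you can sketch the gist of it in code. Here is a minimal, illustrative version that pulls "other coverage" of a claim from Google News's public RSS search feed; the function name and the parsing details are my own assumptions for the sketch, not part of our curriculum materials:

```python
# A minimal sketch of the "check other coverage" move, using Google News's
# public RSS search endpoint. The function name and parsing are illustrative
# assumptions, not part of any curriculum software.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def other_coverage(claim, max_items=10):
    """Return (outlet, headline) pairs for stories matching a claim."""
    url = ("https://news.google.com/rss/search?q="
           + urllib.parse.quote(claim))
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    results = []
    for item in tree.iter("item"):
        title = item.findtext("title") or ""
        # Google News RSS titles typically end with " - Outlet Name"
        headline, _, outlet = title.rpartition(" - ")
        results.append((outlet.strip(), headline.strip()))
    return results[:max_items]

# Overwhelming consensus (or its absence) shows up fast:
for outlet, headline in other_coverage("baking soda cancer cure"):
    print(f"{outlet}: {headline}")
```

A student does the same thing manually, of course; the point of the sketch is just how little work the move actually requires.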

(Potential) Finding Three: Student Answers May Be Less Tribal After Intervention

Emphasis on may, but this looks promising. We have not gone deep into the free answers, but an initial scan seems to indicate that students are less tribal in their answers. To be fair, tribalism doesn't figure much into either pre- or post-responses. Fatalism about the ideological filters of older adults may be warranted, but at least on the issues we tested with our first-years (including coal industry benefits, nuclear power risks, alternative medicine, gun control, and a child sex-trafficking conspiracy) there was far less tribalism in evidence than current discussion would have you think.

Where there was tribalism, it tended to disappear in the post-test, for an obvious reason. The students in the pre-test were reacting to the page in front of them, trying to recognize misinformation. In doing so, they fell back on their assumptions of what was likely true and what was not, usually informed by tribal understandings. If you stare at a picture of mutated flowers and ask whether it seems plausible, then your answer is more likely to touch on whether you believe nuclear power is safe or not. This is the peril of teaching students to try to "recognize" fake news: doing so places undue weight on preconceptions.

If, on the other hand, you look not to your own assumptions but to the verification and investigative work of others for an answer, you’re far less likely to fall back on your belief system as a guide. You move from “This is likely because I believe stuff like this happens” to “There might be an issue here, but in this case this is false.”

(Much) more to come

We have a lot of work to do with our data. We need to get the WSU free responses to the prompts scored, and as other institutions in our twelve-institution pilot finish their interventions we need to get the free text scored there as well. If the variance and difficulty of the tests match, we'd like to get it all paired up into a true pre/post, and maybe even compare the movement of high performers to that of low performers.

But as I look at the data I can't help but think a lot of the what-if fatalism about tribalism and cynicism is misplaced. I've talked repeatedly about fact-checking as "tools for trust," a guard against the cynicism that cognitive overload often produces. I think that's what we're seeing here. It makes students more capable of trust.

I’m also just not seeing the knotty education problem people keep talking about. True, much of what we have done in the past — CRAAP, RADCAB, critical-thinking-as-pixie-dust and the like — has not prepared students for web misinformation. But a history of bad curricular design doesn’t mean that education is pointless. It often just means you need better curriculum. 

I’ll keep you all updated as we hit this with a bit more data and mathematical rigor.

How to Teach Older People Online Infolit

People often ask me what we can do about older people and online information literacy. Older people are not necessarily more confused than young people, but for various reasons they are positioned to do much more harm when they get things wrong. They also tend to be embedded in more ideological tribes, whereas young people form tribes around other interests.

My answer is this: teach the young people how to fact-check and then have them teach their parents. Young folks are already embarrassed by their parents' cluelessness on the web, and my experience with young folks (in a middle-class American context at least) is that they have no trouble speaking up when a parent's actions are embarrassing. So give young people the skills, and show them how to teach others.

A short example: I’m not a personal fan of the post-consumer recycling approach we’ve adopted to packaging in the U.S. But in the 1980s and 1990s we decided to teach a nation to recycle their trash. Did we go out and have massive education initiatives for adults on the recycling process and the importance of it? Nope. We educated the kids so that every parent who threw a yogurt cup into the wrong container had to endure the “why do you hate baby seals” stare of their fifth grader.  And some folks got resentful, but for most it was easier just to learn how to do it.

There are many other examples. My Dad quit smoking partially because his recently educated grade schoolers guilted him into it. Children of the 1970s were often the ones teaching their parents to not throw trash out the car window. College students of the 1990s were often the ones showing their parents how to work the new computer, or get on the web.

People of all ages are already there in terms of wanting to curb misinformation's spread, but young people need to be able to teach the skills to their parents in a systematic way. I talked to a person in D.C. a month ago whose mother always shares those fake "missing kid" memes on Facebook. And she would always comment "Mom, it's fake" (or old, or whatever). But it never occurred to her that she could show her Mom how to check it herself. When we get these checks down to easily demonstrable 10-second checks, that changes.

Teach the children and give them the skills and tools to teach their parents, stopping them from sliding into conspiracy subcultures and alternate realities. Teach the interns to teach their Senators and policy makers how to check this stuff. Teach the college students to navigate health information for their aunts and uncles. Graduate wave after wave of people who know how to navigate the web and are committed to helping others do better with it too. That's how you get this done.


It’s Not About the “Heat” of the Rhetoric, It’s About Its Toxicity

Lots of media people today are talking about whether "heated" rhetoric resulted in what we have seen in the past few weeks. But this is the wrong frame.

The "heated rhetoric" approach to thinking about public discourse imagines political violence as a barroom brawl. Someone spills a drink. Someone calls that person an asshole. That person makes a comment about the other's girlfriend, which results in a push, which leads to a shove, which leads to a punch, and pretty soon people are fighting.

These dynamics do occur, both in personal situations and in larger political ones. Gang warfare and neighbor disputes have them, and they exist within deliberative bodies as well. I've seen such things happen in local political settings too.

But that’s not what the last week has been about.

The problem with the rhetoric we are seeing is not how angry it is or how insulting. It’s not even the viciousness of the attacks.

Our problem is not heated rhetoric, but toxic rhetoric.

Toxic Rhetoric Doesn’t Just Inflame, It Warps Reality

Toxic rhetoric is used not just to attack, but to warp people's reality. It often works as a system, with bottom-up web-based networks providing dangerous meta-narratives that map onto mainstream news stories and are reinforced by elite dog-whistling. When the different parts of the system are assembled, the effect is different from that of any one part acting individually.

The prime example of this is the “caravan” meta-narrative that seems to have been the partial motivation of the apparent pipe-bomber and a central motivation of the synagogue shooter.

Consider the last social media message of the shooter.

[Screenshot: Bowers's final social media post]

Bowers states that HIAS, a Jewish charity that helps refugees, is "bringing in invaders," and that he can't stand by and let his people "get slaughtered." He knows that this close to the election a mass shooting will not look good, but he has no choice; time is of the essence.

This is not the voice of someone who has gotten so angry that they are going to hurt somebody. It's not about the heat, or even the hate. There is a belief here that if something is not done now, it will literally lead to the slaughter of white people. That is the motivation. Hate plays a part, but it is the delusion that triggers the action.

Why does he believe that the white race is literally weeks or days from annihilation from foreign invaders? And what does it have to do with the Jewish community?

His posts provide an answer. There is some speculation required here, but not much. Bowers seems to have believed that the migrant "caravan" had been organized by George Soros as part of a vast Jewish conspiracy to subjugate white people. This theory first emerged in March, attached to a different caravan, and circulated in Facebook groups and on other platforms. When the more recent caravan appeared, the theory was attached to it.

For people deep in the "anti-globalist" social media distortion field, there were added overtones. A popular book among white nationalists, the alt-right, and a number of mainstream conservatives is the 1975 novel The Camp of the Saints, which tells the story of an "armada" of 800,000 impoverished Indians that winds its way slowly to the shores of France seeking asylum. Politicians dither about what should be done as it moves toward France's shores, not knowing that the landing of the refugee ships is a signal for migrants everywhere to rise up and begin the white genocide.


This is not "heated rhetoric." This is toxic rhetoric. It doesn't just inflame the passions; it rots the mind and poisons the intellect.

I want to be very careful about how I say this, but if you believed this was true, if you really believed your entire race was about to be extinguished by an invading migrant army controlled by all-powerful Jewish interests, then violence would not be an insane act, or even an act of anger. It would, in a demented way, be rational. And that, not angry words, is what makes our discourse so dangerous.

These Beliefs Are Insane, But the Perpetrators Are Probably Not and That Should Scare You

There is a rush whenever this happens to say that the people committing these acts were insane. This is never based on any real analysis, but usually on the fact that they committed unspeakable violence and the fact that they believed delusional things.

The “they committed violence” approach is easy to dismiss as it is tautological:

  • Q: “Why did they commit these unspeakable acts?”
  • A: “They were insane.”
  • Q: “How do we know they are insane?”
  • A: “They committed unspeakable acts.”

The second thing the media does to paint these folks as insane is show their beliefs. So Bowers apparently believed that George Soros was organizing the “caravan”, funding it and assisting it. Obviously delusional, right?

Except these views are quite common. Here’s a VP of Campbell’s Soup (now a former VP). Open Society is a charity that Soros runs:

[Screenshot: the Campbell's Soup VP's tweet about Soros and Open Society]

Johnston served as Secretary of the US Senate under Bob Dole. And these beliefs are quite common, propagated through a massive amplification and dissemination network on social media. Here, for example, is a "Soros" mention network that researcher Jonathan Albright pulled off of Instagram just before the synagogue attack.

[Image: Jonathan Albright's Instagram "Soros" mention network]

The dense connections you see up there in red are a sign of a coordinated effort to get Soros disinformation in front of people. Exposure to these narratives provides the framework through which those exposed will view mainstream coverage and the statements of their chosen elites.
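As an aside, that "dense connections" signal is something you can quantify. Here is a minimal sketch, using the networkx library and an invented toy edge list, of how one might surface suspiciously dense communities in a mention network; the data and the 0.8 density threshold are purely illustrative, not Albright's method:

```python
# An illustrative sketch: flag unusually dense communities in a mention
# network. The edge list and the 0.8 threshold are invented for the example;
# real analyses like Albright's involve far more data and far more care.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical mention edges: (account_mentioning, account_mentioned)
edges = [
    ("acct_a", "acct_b"), ("acct_b", "acct_c"), ("acct_c", "acct_a"),
    ("acct_d", "acct_e"), ("acct_e", "acct_f"),
]
G = nx.Graph(edges)

for community in greedy_modularity_communities(G):
    sub = G.subgraph(community)
    # Density near 1.0 means everyone mentions everyone, which looks more
    # like coordination than organic conversation.
    if len(sub) >= 3 and nx.density(sub) > 0.8:
        print("Suspiciously dense cluster:", sorted(sub.nodes))
```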

As a result, millions of people are consuming these narratives, along with false flag and crisis actor memes. And it is not hysteria to see those who read and spread these memes as sitting at different points along a radicalization spectrum. The former VP of Campbell's Soup was not striking a pose. He had bought into the idea that not only was George Soros funding and coordinating an invasion, but that there was a conspiracy by the media to hide it, so much so that the camera operators and reporters were under orders from the networks to hide the "troop carriers and rail cars" that were there to take the migrants north.

Going the next step and believing this massive conspiracy of a thousand moving parts is a Jewish conspiracy wouldn't even be the hardest radicalization step here. To a person who has been consuming toxic garbage for years and has bought into plainly impossible vast conspiracies, this is the smallest step, not the largest.

This Was Predictable

Almost two years ago I wrote a post about the wrongheaded idea that when people consumed disinformation it was mainly a case of confirmation bias, with people becoming more confident (and perhaps more emotional) about ideas they already held. The better way to look at it, I said, was as a process of radicalization into conspiracy-driven networks and other alternate realities. Seeing it this way was crucial, I claimed, because bit by bit social media was pulling us toward a disaster much bigger than the 2016 election, one where the net-enabled mainstreaming of conspiracy theory and the mythology of white supremacy would mix with disastrous results:

People exposed themselves to Facebook multiple times a day, every single day, seeing headlines making all sorts of crazy claims, and filed them in their famil-o-meter for future reference. We’ve never seen something on this scale. Not even close.

If Facebook was a tool for confirmation bias, that would kind of suck. It would. But that is not the claim. The claim is that Facebook is quite literally training us to be conspiracy theorists. And given the history of what happens when conspiracy theory and white supremacy mix, that should scare the hell out of you.

Many people provided welcome feedback to it, some supportive, some critical. Was this a moral panic perhaps? Was it really historically unique? Was the 2016 election just a weird blip? Did people’s minds really change or did they post these things without believing them? Silly Mike, you need to take these things seriously but not literally.

Years later the prediction has proven true, time and time again, with horrific regularity here and abroad. Yet we’re still talking about confirmation bias and heated conversations and civility as the core issues. We’re still stuck in tiny pilot programs around web-based literacy to address this stuff on stunningly small scales.

Meanwhile, post by post, click by click, people of all ages are being slowly groomed into conspiracy cultures that turn fear into violence and authoritarian rule. Once people's reality is warped in this way, bringing them back is difficult, and yet we are moving at a snail's pace on the educational and technological fronts. The media is still talking about the problem as if the core issue were people being impolite. The world slowly slides toward a dark future. We have educational solutions (just read the rest of this blog) but they remain un-deployed or under-deployed.

The events of the past week were completely predictable, in the most obvious way: they were predicted. Multiple times. They have happened before, in multiple countries, in multiple ways, in multiple incidents. This all should matter. I am crushed and broken by the events of the past week. But much of my anger and frustration is at the too-small educational solutions we continue to roll out as the entire edifice slowly sinks.


The Persistent Myth of Insurmountable Tribalism Will Kill Us All

A new Knight Foundation-supported study about college students is out, and it very much confirms what we see in classrooms. Students:

  • feel overwhelmed by the “firehose of news”
  • feel unequipped to sort through that news
  • want to read and share truthful accounts
  • believe in journalistic principles of accuracy and verification
  • but fall back on cynicism as a strategy, believing far more news to be fake or spun than really is

All this is exactly what we see in classrooms. Every class might have one or two hardcore partisans, but the vast majority of students feel OVERWHELMED. They want to do the right thing, but it seems impossibly time-intensive and complex.

What we find in this environment is that some students initially talk like hardcore partisans when looking at prompts, before they have the skills to navigate the overload. But that's because the alternative to reacting tribally is an investigation they imagine is going to take them an afternoon. Once they have the skills, that tendency slips away. Here are the sorts of answers we're getting after one week of skills training when a student looks at a story of declining arctic ice from the National Snow & Ice Data Center (an excellent source, BTW):

“This site appears trustworthy. I searched Google news and found it is cited by multiple credible sources including bigger weather news sites. I also used the “Just add Wikipedia” trick as a way to investigate funding and found that it is partially funded by the US through NASA, which shows there are probably experts there.”

“It’s good, they’ve been around forever and are affiliated with other research facilities like University of Colorado.”

(An amalgamation of several student responses, to protect student privacy.)

We do enough of these in class that we can see they get to this point in less than 60 seconds for many tasks. If the habits hold, when someone tries to pull them slowly into post-truth land, they’ll have a natural resistance. Maybe enough to avoid the first steps down that slippery slope.

You know what I don’t see in my classes — in a Republican district, where a nontrivial number of students don’t believe in climate change? Any reaction of the sort that you “can’t trust the site because declining sea ice and climate change is a myth.” Not one.

It's not just a Republican thing. We find the same thing with prompts on liberal hot-button issues such as GMOs. Students, many of whom are very committed to "natural" products and lifestyles, make accurate assessments of the lack of credibility of sites supporting their opinions. They believe this stuff, maybe, but admit the given site is not a good source.

Now you might think that all the students (all of them!) are somehow secretly hiding their tribalism and saying what they know they need to say to get approval — even though none of our assignments are graded on anything but participation. If you haven’t taught a class before, maybe that sounds plausible to you — that all the students have simultaneously decided to hide their secret opinions and somehow mimic expert competence instead while not believing in it.

If you have taught a class before, I know you are doubled over laughing at that idea. Good teachers know when students are faking it, or going through the motions with secret resentment. That's not what we see in our classes. We see excitement with the new skills, and above all RELIEF. You can see the great weight being lifted as the students learn 60-second fact-checks. I once came into a section I taught having forgotten to go over the homework, and the students were crushed. When I realized I had skipped it and went back to it, they lit up. They wanted to show off their new skills.

Not just the Republicans. Not just the Democrats. The students, all of them.

And yet everywhere I present to ADULTS, there are people who tell me these methods won't work, because tribalism-yadda-yadda-yadda. They've usually never taught this stuff, certainly not this way. But they are convinced of tribalism as a fundamental truth, an intractable problem. They've taken it on faith that, unlike almost anything else in human life, tribalism is not one of many factors governing human behavior, but a sort of absolute veto that obliterates anything you throw against it, a dispositional antimatter.

Yep. Academics and bureaucrats can be fiercely tribalistic about the insurmountability of tribalism.

To be honest, believing our students just don’t want to know the truth is the professional corollary to the cynicism we see students come in with about news media. Cynicism may not provide comfort, but it provides absolution. In this world, at this particular moment, that may feel like the best possible deal you’re going to be able to cut. I get it.

But it’s also a waste of the frustrated idealism of our students. I don’t know where our students will be in ten years or what they will believe. I know that views do harden, even over the course of college. But they come to us, right now, wanting to do better at this, feeling guilty that they’re not, overwhelmed by the effort to close that gap. The fact that we don’t take advantage of that desire, that class by class we are letting this urge of students to do better wither on the vine so that they can later be groomed and radicalized by God knows who — that is something we will all come to regret, no matter which tribe we belong to. A massive, massive waste of human desire and potential in the face of looming catastrophe.

The report tells us the students would like to do better. The students in our classes learn to do better, and enjoy doing better. I’m really not sure what we’re waiting for.



Added 10/17: Someone below asked whether I could link to these skills we teach. Sure. The two best starting points are this set of four three-minute videos from Civix, and the textbook. Teachers can find other teaching resources — including importable online activities — here.

Useful thoughts on attention and information overload from 1971 (via Simon, Deutsch, Shubik)

Back in 2015, I was blogging less and using a homegrown personal wiki more. And I was thinking about this problem of collaboration and attention.

Going through my notes on the wiki from that time, I realized a bunch of my thinking had been formed by a book chapter from 1971 that I read in 2015, a transcription of a presentation and panel discussion by Herbert Simon, Karl Deutsch, and Martin Shubik. Re-reading it, I'm struck that, for all its faults, it provides insights that are even more relevant in 2018 than in 2015. Here are some ported notes and highlights:

Simon: a wealth of information = a scarcity of attention

Simon’s key contribution in the talk is to push the conversation from the idea of information overload (supply) to the problem of attention. And his key point is that as information increases, attention decreases:

[I]n an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention.

Simon: the cost of information is borne more by the consumer (in time) than the producer

Simon uses a newspaper example to make his point: the cost of information is mostly in the time to process it (when aggregated over many people) rather than in the cost to produce it:

In an information-rich world, most of the cost of information is the cost incurred by the recipient. It is not enough to know how much it costs to produce and transmit information; we must also know how much it costs, in terms of scarce attention, to receive it. I have tried bringing this argument home to my friends by suggesting that they recalculate how much the New York Times (or Washington Post) costs them, including the cost of reading it. Making the calculation usually causes them some alarm, but not enough for them to cancel their subscriptions. Perhaps the benefits still outweigh the costs.
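To make Simon's recalculation concrete, here is a back-of-the-envelope version of it; every number below is an assumption chosen only for illustration:

```python
# Back-of-the-envelope version of Simon's newspaper recalculation.
# Every number here is an assumption, chosen only for illustration.
subscription_per_year = 300.00   # dollars: assumed sticker price
minutes_read_per_day = 40        # assumed daily reading time
days_read_per_year = 300
value_of_an_hour = 25.00         # dollars: assumed value of the reader's time

attention_cost = (minutes_read_per_day / 60) * days_read_per_year * value_of_an_hour
total_cost = subscription_per_year + attention_cost

print(f"Sticker price:  ${subscription_per_year:,.2f}")
print(f"Attention cost: ${attention_cost:,.2f}")
print(f"True cost:      ${total_cost:,.2f}")
# Under these assumptions the attention cost ($5,000) dwarfs the
# subscription ($300), which is exactly Simon's point.
```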

Simon: scarcity of attention must be a design principle for organizations and technology, but it is usually overlooked

The design principle that attention is scarce and must be preserved is very different from a principle of “the more information the better.” The aforementioned Foreign Office thought it had a communications crisis a few years ago. When events in the world were lively, the teletypes carrying incoming dispatches frequently fell behind. The solution: replace the teletypes with line printers of much greater capacity. No one apparently asked whether the IPS’s (including the Foreign Minister) that received and processed messages from the teletypes would be ready, willing, and able to process the much larger volume of messages from the line printers.

We overlook these things because we have a mythology of information poverty:

Our attitudes toward information reflect the culture of poverty. We were brought up on Abe Lincoln walking miles to borrow (and return!) a book and reading it by firelight.

Deutsch on the operations of attention

Deutsch makes some welcome corrections to Simon, who in many remarks not detailed above is far too trusting of technology. (The DDT example Simon uses is particularly painful).

Part of his point is that attention is really a series of operations much bigger than just a spotlight of focus. A person who gives their attention has to (according to Deutsch):

  • recognize loosely what it is one should pay attention to (the target), such as things unfamiliar, strangers, or things that do not fit
  • track the object of attention, and keep attention on it
  • interpret the object and ask what it resembles
  • decide which response to the object is most appropriate, and what should be done about it
  • carry out the response
  • accept feedback, and learn from the results of the response whether it was the right one and how future responses should be corrected.

Deutsch argues that when you look at the whole cycle it involves not just attention but memory, and further, that the problem of filters is going to be solved partially by accepting some amount of redundancy. The reasoning is a bit complex, but familiar to people nowadays, I think. Because institutional memory in organizations is expensive and a bit zero-sum, you need redundancy in organizations and a networked, less hierarchical approach to information. This in turn prevents relevant information from being eliminated by single bottlenecks.

Shubik on optimum information systems

I have become a fan of Simon over the past few years, so the insights in his observations are not surprising to me (I'm more surprised by some of his oversights, actually). Shubik, on the other hand, hadn't even been on my radar. But he's good! Here he is on optimum information systems:

An optimum information system is not necessarily the one which processes the most data. An optimum system for protecting the average stockholder does not supply him with full, detailed financial accounts. In fact, one can easily swindle the unwary by supplying them with financial details and footnotes they do not understand. It is now possible to bombard a generally uncomprehending public with myriad details on pollution, the pros and cons of insecticides, the value and dangers of irrigation schemes, on-the-spot reports of rioting and looting, televised moon landings, suicides, murders, and historical prices of thousands of stocks and commodities.

Shubik on the coming of computer network-based mobs, and, maybe, Gamergate

So for people not hip to what 1971 was about: you had "time-sharing" computers (which were mostly multi-user mainframes) and monitors, and a lot of thought was put into what would happen when TV got hooked into systems that allowed instant two-way communication, feedback, and interactivity. Shubik wonders in particular what mobs look like when the virtual is felt as real and demagogic leaders pair the instant feedback of communications systems with the viscerality of a TV-based medium. It's of course a weird version, based on the tech of the time, but still an amazing quote:

Consider some of the possible dangers. What is the first great TV, time-sharing demagogue going to look like? How will he put to use such extra features of modern communications as virtually instantaneous feedback? When will a TV screen with the appropriate sensory feelings be able to portray the boss behind his mahogany desk (two thousand miles away) who fires or chastises his employee, and makes him feel just as small, and his palms just as clammy with sweat, as if he were in the room with him? When will the first time-shared riot occur? Orson Welles came close in the thirties with a fairly good radio panic. Current techniques for mob control require physical proximity. In the Brave New World, will we still regard a mob as a great number of closely packed people, or will isolated mobs interacting via TV consoles and operating over large areas be more efficient?

Oettinger summarizes Simon’s contribution

Anthony Oettinger summarizes Simon’s contribution nicely:

Simon has offered three very deep, important, fundamental principles that shed light on things I had not perceived clearly:

  1. attention is a scarce commodity
  2. information technology allows effort to be displaced from possession, storage, and accumulation of information to its processing, even if the information is located in the world itself rather than in the file
  3. filtering and organizing the environment for persons whose attention is scarce are critical.

It remains for others to apply these general principles to particular organizations and explore their political and economic implications.

Deutsch's criticism of Simon

I have reservations about Simon’s enthusiasm, in the name of simplification and economy of thought, for throwing out vast amounts of what universities now teach. Much of what we learn in social science used to be interpreted against our knowledge of history. If we throw out too much historical data, many of our abstractions may lose meaning. A critical design problem for education is to determine the amount of memories from the past needed for producing and interpreting new information.

In general, Simon makes a very good case for the design aims of technology and education, but is not particularly good on technological prediction, whereas Shubik, even in asides, is incredibly prescient about technology and its risks. Deutsch, in turn, serves as a good corrective on Simon's penchant for an absolute leanness of process and storage, believing that memory plays more of a role in effective processing than Simon will admit, and pushing the idea that a more conservative approach to change in the face of human systems may be warranted: slow down on taking action when information is inconclusive. (Even here, the results are fascinating, with Deutsch using the example of how population was a more pressing issue than climate change, since the effects of overpopulation were well established but the climate science murky.)

The three parts, taken together, make interesting reading even today, or perhaps especially today. You can check out the whole thing here.

We’re Thinking About This Backwards

One of my great loves is Dewey. I share his belief that an educated, engaged populace is crucial to democracy, and that democracy is crucial to the profession of those teaching in democracies. I think part of what we need to do is make sure all citizens have the tools they need to sort news from noise and speak truth to power.

At the same time, often when I speak to people, the questions come up: sure, media literacy is good, but how do we reach the people who drop out of high school, the people who don't go to college? And so on.

Have I said enough that this is important? Well, here’s the other shoe dropping: our most pressing problems are not caused by Joe or Jane the Mechanic not getting a GED. They are caused by people in power — mid-level gatekeepers and up — who are allowing institutions to be corrupted by misinformation.

You can leave all the complaints about that formulation in the comments. I'll admit that elites have always been prone to elite misinformation (see Iraq, Vietnam, climate change). But I would assert that such history shows the disasters that result when institutions become corrupted. The current configuration feels unique to me and to the other misinfo experts I talk to. The speed and frequency with which lies are created God knows where and then pushed up the chain to people with professional and political power is what's frightening. High school dropouts are not your problem. Trump-supporting gas station owners and conspiracy-minded baristas are not your problem. Your problem is the FBI agent consuming Twitter nonsense, the politician who not only uses disinfo but comes to believe it, and the blue-checkmarked mid-level elites who are unwitting vehicles pushing that stuff relentlessly into the view of those who act on it.

I started off with Dewey and Jane the Mechanic. Here's the relation. While mass education is good and should be pursued as a long-term solution, if I were going to target our online literacy efforts immediately and had a limited number of seats, I would target them at everyone who will find their way to positions of influence. Politicians. Policy leads. Product managers at tech startups. Future FBI agents and social workers and department heads. I would look at the gears of democratic institutions (political, civic, administrative) and see who has their hands on the levers, from the mid-level bureaucrats to the top.

I'm committed to implementing our program broadly, but if you think the misinformation problem is Jane the Mechanic and her one vote every four years, you've got it backwards. For immediate impact, target those who make and enforce decisions and those who influence them, and stop scapegoating those without power or influence.

I need to think about this too in terms of how we grow the Digital Polarization Initiative. We’ve had good success in first-year programs, and we need to continue that. But the nature of our moment (and our limited resources) may require us to think about how to target this more efficiently as we expand. If people have some ideas of the sort of college programs we should be trying to get this training into, let me know in the comments. We probably need to also tap into the fact that the core of the American Democracy Project is still students who plan to go into those positions of influence and power and the faculty who teach them. More later I think.

A Suggested Improvement to Facebook’s Information Panel: Stories “About” Not “From”

Facebook has a news information panel, and I like it lots. But it could be better. First, let’s look at where it works. Here it is with a decent news source:

[Screenshot: Facebook's information panel for an established news source]

That's good. The info button throws the Wikipedia page up there, which is the best first stop. We get the blue checkmark there, and some categorization. I can see its location and its founding date, both of which are quick signals to orient me to what sort of site this is.

There's also the "More from this website" section. Here's where I differ with Facebook's take. I don't think this is a good use of space. Students always think they can tell what sort of site a site is from the stories it publishes. I'm skeptical. I know that once you know a site is fake, then suddenly of course you feel like you could have told from the sidebar. But I run classes on this stuff and watch faculty and students try to puzzle this out in real time, and frankly they aren't so good at it. If I lie and hint to them that it's a good site, bad headlines look fine. If I lie and hint it's a bogus site, real headlines look fake. It's just really prone to reinforcing initial conceptions.

I get that what's supposed to happen is that users see the stories and then click that "follow" button if they like what they see. But that's actually the whole "pages" model that burned 2016 down to the ground. You should not be encouraging people to follow a site before they see what other sites say about it.

Take this one — this says RealFarmacy is a news site, and there’s no Wikipedia page to really rebut that. So we’re left with the headlines from RealFarmacy:

[Screenshot: Facebook's information panel for RealFarmacy, showing only headlines from the site itself]

OK, so of course if you think this site is bogus beforehand, it is clear what these stories say about the site. But look: if you clicked it, it's because a site named RealFarmacy seemed legit to you. These headlines are not going to help you. And if the site plays its cards right, it's really easy to hack credibility into this box by altering its feed and interspersing boring stories with clickbaity ones. It's a well-known model, and Facebook is opening itself up to it here.

A better approach is to use the space for news about the site. Here’s some news about RealFarmacy.com:

[Screenshot: Google News coverage about RealFarmacy.com]

Which one of these is more useful to you? I’m going to guess the bottom one. The top one is a set of signals that RealFarmacy is sending out about itself. It’s what it wants its reputation to be. The bottom one? That’s what its reputation is. And as far as I can tell it’s night and day.

This is why the technique we show students when investigating sites is to check Wikipedia first and, if that doesn't give a result, check for Google News coverage of the organization second. Step one, step two. It works like a charm, and Facebook should emulate it.
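For what it's worth, that two-step check is simple enough to sketch. Here is a minimal, hypothetical version of the "step one, step two" logic using Wikipedia's public search API and Google News's RSS feed; the function name and the fallback structure are my own, not Facebook's code or a finished design:

```python
# A minimal, hypothetical sketch of the "step one, step two" source check:
# Wikipedia first, then news coverage ABOUT the site as a fallback.
# Uses Wikipedia's public search API and Google News's RSS endpoint;
# the function name and structure are illustrative, not a real product.
import json
import urllib.parse
import urllib.request

def reputation_check(site_name):
    """Return a (label, url) pair pointing at what others say about a site."""
    # Step one: is there a Wikipedia article on the organization?
    api = ("https://en.wikipedia.org/w/api.php?action=query&list=search"
           "&format=json&srsearch=" + urllib.parse.quote(site_name))
    with urllib.request.urlopen(api) as resp:
        hits = json.load(resp)["query"]["search"]
    if hits:
        title = hits[0]["title"]
        return ("Wikipedia", "https://en.wikipedia.org/wiki/"
                + urllib.parse.quote(title.replace(" ", "_")))
    # Step two: no article, so fall back to news coverage about the site.
    return ("News coverage", "https://news.google.com/rss/search?q="
            + urllib.parse.quote(site_name))

print(reputation_check("RealFarmacy"))
```

In a real panel you'd obviously render the results rather than return URLs, but the ordering (encyclopedia first, coverage about the source second) is the whole trick.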

Can you do this for everything? Sure. In fact, for most legit sites the “News about this source” panel would be boring, but boring is good!

[Screenshot: Google News coverage about a local Tribune newspaper]

Just scanning this I can see the Tribune has a long history and is probably a sizable paper. That's not perfect, but probably more useful than the feed that tells me they have a story about a fraternity incident at Northwestern.

This won't always work perfectly: occasionally you'll get the odd editorial calling CNN "fake news" at the top, and people might overly fixate on Bezos's appearance in coverage about the WaPo. But those misfires are probably a price worth paying to fill in the data void that surrounds a lot of these clickbait sites with news results that give some insight into the source. Pulling the site's feed is an incomplete solution, and one that bad actors will inevitably hack.

Anyway, the web is a network of sites, and reputation is on the network, not the node. It’s a useful insight for programmers, but a good one for readers too. Guide users towards that if you want to have real impact.