The “They Had Their Minds Made Up Anyway” Excuse

BGR graciously linked my post from the weekend, which showed that the scale at which fake news stories trend on Facebook can dwarf traditional news in terms of shares. (Thanks, BGR!)

However, they end with this paragraph, which I’d like to reply to:

On a related note, it stands to reason that most individuals prone to believing a hyperbolic news story that skews to an extreme partisan position likely already have their minds made up. Arguably, Facebook in this instance isn’t so much influencing the voting patterns of Americans as it is bringing a prime manifestation of confirmation bias to the surface.

As regular readers know, my core expertise is not in data analysis of Facebook but in how learning environments (and particularly online learning environments) affect the way users think, act, and learn. A long time ago I was an online political organizer, but my day job for many, many years has been the investigation and design of net-enabled learning experiences.

The BGR conclusion is a common one, and it intuitively meshes with our naive understanding of how the mind perceives truth. I see something I agree with and I reshare it — it doesn’t change my mind, because of course I already believed it when I reshared it. However, from a psychological perspective there are two major problems with this.

Saying Is Believing

The first problem is that saying is believing. This is an old and well-studied phenomenon, though perhaps understudied in social media. So when you see a post that says “Clintons suspected in murder-suicide” and you retweet or repost it, it’s not a neutral transaction. You, the reposter, don’t end that transaction unchanged. You may initially post it because, after all, “Whoa if true.” But the reposting shifts your orientation to the facts. You move from being a person reading information to someone arguing a side of an issue, and once you are on a side of the issue, no amount of facts or argument is going to budge you. This may have been built into the evolution of our reason itself. In this way, going through the process of stream curation is at heart a radicalizing process.

I’ve said many times that all social software trains you to be something. Early Facebook trained you to remember birthdays and share photos, and to some extent this trained you to be a better person, or in any case the sort of person you desired to be.

The process that Facebook currently encourages, on the other hand, of looking at short cards of news stories and deciding immediately whether to support or reject them, trains people to be extremists. It takes a moment of ambivalence or nuance and, by design, pushes the reader to go deeper into their support for whatever theory or argument they are staring at. When you consider that people are being trained in this way by Facebook for hours each day, that should scare the living daylights out of you.

It’s worthwhile to note as well that the nature of social media is that we’re more likely to share inflammatory posts than non-inflammatory ones, which means that each Facebook session is a process by which we double down on the most radical beliefs in our feed.

In general, social media developers use design to foster behaviors that are useful to the community. But what is being fostered by this strange process that we put ourselves through each day?

Think about this from the perspective of a Martian come to Earth, watching people reach for their phones in the morning and scroll past shared headlines, deciding in seconds for each one whether to re-share it, comment on it, or like it. The question the Martian would ask is “What sort of training software are they using, and what does it train people to do?”

And the problem is that — unlike previous social sites — Facebook doesn’t know, because from Facebook’s perspective they have two goals, and neither is about the quality of the community or the well-being of its members. The first goal is to keep you creating Facebook content in the form of shares, likes, and comments. Any value you get out of it as a person is not a direct Facebook concern, except as it impacts those goals. And so Facebook is designed to make you share without reading, and like without thinking, because that is how Facebook makes its money and builds lock-in: by having you create social content (and personal marketing data) it can use.

The second Facebook goal is to keep you on the site at all costs, since this is where they can serve you ads. And this leads to another problem we can talk about more fully in another post. Your average news story — something from the New York Times on a history of the Alt-Right, for example — won’t get clicked, because Facebook has built their environment to resist people clicking external links. Marketers figured this out and realized that to get you to click they had to up the ante. So they produced conspiracy sites that have carefully designed, fictional stories that are inflammatory enough that you *will* click.

In other words, the conspiracy clickbait sites appeared as a reaction to a Facebook interface that resisted external linking. And this is why fake news does better on Facebook than real news.

To be as clear as I possibly can — by setting up this dynamic, Facebook simultaneously set up the perfect conspiracy replication machine and incentivized the creation of a new breed of conspiracy clickbait sites.

There will be a lot of talk in the coming days about this or that change Facebook is looking at. But look at these two issues to get the real story:

  • Do they promote deep reading over interaction?
  • Do they encourage you to leave the site, even when the link is not inflammatory?

Next time you’re on Facebook you’ll notice there are three buttons at the bottom of each piece of content, outlining the actions you can take on that content. None of them says “read”.

Facebook Makes Conspiracies Familiar, and Familiarity Equals Truth

It would be terrifying enough if this existential problem — that Facebook was training people to be extremists and conspiracists through a process of re-sharing — were the worst of it. But it’s not. The bigger problem is the far larger number of people who see the headlines and do not reshare them.

Why is this a problem? Because for the most part, our brains equate “truth” with “things we’ve seen said a lot by people we trust”. The literature in this area is vast — it’s one of the reasons, for example, that so many people believe that global warming is a hoax.

If you think about it, it’s not a bad heuristic for some things. If someone tells you that there’s better farmland over the mountain, and then another person tells you that, and then another person, then your mind starts seeing this as more likely to be true than not, especially if the folks telling you this are folks you trust. If someone tells you that there’s gold mines under every house in the town, but they can’t be touched because of federal laws — well that’s not something you hear a lot, so that’s obviously false.

You say — no, that’s not it at all! The gold mines are ridiculous, that’s why I don’t believe in them! The good farmland is logical! I’m a logical being, dammit!

I’m sorry, you’re not, at least on most days. If enough people told you about the gold mines, and every paper in the town made passing reference to the gold mines, you would consider people who didn’t believe in the hidden gold mines to be ridiculous. Want examples of this? Look at the entire sweep of history.

Or look at Lisa Fazio’s work on the illusory truth effect. The effect of familiarity is so strong that giving you a multiple-choice question about something you know is wrong (“The Atlantic Ocean is the largest in the world, true or false?”) can actually increase your doubt that you are right, and even convince you of the false fact. Why? Because it counts as exposure to the idea that other people believe the lie. This is why Betteridge Headlines are so dangerously unethical. If I publish headlines once a week asking “Is There Gold Buried Under Sunnydale?” with an article that says probably not, eventually when you are asked that question by someone else your mind will go — you know, I think I’ve heard something like that.

All it takes is repetition. That’s it.

Don’t believe me? Go check out how many people believe Obama is a Muslim. Or ask yourself a question like “Does Mike Pence support gay conversion therapy?” without looking up anything and then ask yourself how it is that you “know” that one way or the other. If you know it, you know it because it seems familiar. A lot of people you trust have said it. That’s it. That’s all you got.

If Betteridge Headlines are borderline unethical, then Facebook is well over that line. The headlines that float by you on Facebook for one to two hours a day, dozens at a time, create a sense of familiarity with ideas that are not only wrong, but hateful and dangerous. This is further compounded by the fact that what is highlighted on the cards Facebook shows you is not the source of the article (which is so small and gray as to be effectively invisible) but the friendly smiling face of someone you trust.

[Image: a Facebook card on which the sharer’s name and photo are prominent while the article’s source appears in tiny gray text]

The article that intimated Clinton may have murdered an FBI agent and his wife and burned them in their home to conceal the evidence was shared by over half a million people. If we assume a share rate of 20-to-1 that means that over 10 million people were exposed to the idea that this actually happened. If we assume that there is a lot of overlap in social networks, we can assume that they were exposed to it repeatedly, and it came from many people they trusted.
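For the skeptical, here is that arithmetic spelled out in a minimal sketch. The 20-to-1 views-per-share ratio is the assumption stated above, not a measured Facebook statistic, and the share count is rounded down to the half-million floor:

```python
# Back-of-envelope exposure estimate for the murder-suicide story.
# The 20 views-per-share ratio is an assumption, not a Facebook statistic.
shares = 500_000          # "over half a million" shares, rounded down
views_per_share = 20      # assumed views for every share
print(f"Estimated exposures: {shares * views_per_share:,}")  # 10,000,000
```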

And that’s just one headline. People expose themselves to Facebook multiple times a day, every single day, see headlines making all sorts of crazy claims, and file them in their famil-o-meter for future reference. We’ve never seen anything on this scale. Not even close.

If Facebook was a tool for confirmation bias, that would kind of suck. It would. But that is not the claim. The claim is that Facebook is quite literally training us to be conspiracy theorists. And given the history of what happens when conspiracy theory and white supremacy mix, that should scare the hell out of you.

I’m petrified. Mark Zuckerberg should be too.

Facebook and Twitter Are Probably Making Google a Liar as Well

Kevin Drum points out that if you search on Google to find out whether Hillary won the popular vote (she did) that one of the top results lies, right in the headline.

[Image: screenshot of Google results for a popular-vote search, with the 70news story ranked at the top]

How does 70news, a right-wing site fond of claiming Trump protests are being funded by prominent Jewish banker George Soros, get to the top of this Google result?

While I don’t know how this manifests in the algorithm, I am going to guess that the ranking this story gets is a result of the attention and audience Facebook gave the story. Let’s go to the Facebook API and take a look at the shares of these three top stories (a sketch of the query follows the list):

  • Heavy.com (“Are There Still Uncounted Ballots?”): 202 shares.
  • New York Times (“Clinton’s Substantial Popular Vote Win”): 65,398 shares.
  • 70news (“Final Election 2016 Numbers: Trump Won Both Popular and Electoral College Vote”): 252,158 shares.
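Here is a minimal sketch of that query, assuming the public Graph API lookup as it worked at the time (current API versions require an access token and report engagement differently); the example.com URLs are placeholders for the three article links:

```python
import requests

def fb_share_count(url):
    """Return a URL's Facebook share count via the Graph API lookup."""
    resp = requests.get("https://graph.facebook.com/", params={"id": url})
    resp.raise_for_status()
    # 2016-era responses nested the count under a "share" object.
    return resp.json().get("share", {}).get("share_count", 0)

# Placeholders for the Heavy.com, New York Times, and 70news stories above.
for url in ["https://example.com/heavy-story",
            "https://example.com/nyt-story",
            "https://example.com/70news-story"]:
    print(url, fb_share_count(url))
```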

Yes — you’re reading that right. A story from a site that specializes in various forms of alt-right conspiracies outperformed the New York Times on shares on this issue; in fact, it got roughly 300% more shares than the New York Times story.

And Twitter? If you go to the 70news article now you’ll find that the writer got these numbers “off of Twitter.” The mind reels. It’s Facebook at the center of a whole conspiracy clickbait ecosystem.

One other thing to note here: most people get their news from headlines, not articles. So the minute you see that headline in a search result, the fake news is validated and becomes part of what people know. You can check out Lisa Fazio’s work on exposure to wrong information if you want. We don’t really know what’s true — we only know what “sounds more familiar”. Facebook, Google, and Twitter have made the false many degrees more familiar than the true.

You might also check out this related story on the relative virality of fake versus real stories.

Despite Zuckerberg’s Protests, Fake News Does Better on Facebook Than Real News. Here’s Data to Prove It.

(An investigation in which we decide to use Facebook’s social graph API to see whether fake news or real news is more viral).

UPDATE: Since posting, there has been some discussion about this post’s use of the phrase “top stories from local newspapers”. A clarification on how that phrase is used has been appended at the end of the post with some methodology, and some small clarifying edits have been made. The title and core claim of the post remain accurate and stand. What we present here is not the best possible measure of fake vs. real virality, but it is a meaningful one, and deserves to be addressed.

Mark Zuckerberg told us recently that fake stories on Facebook were quite rare, less than 1% of total content. I’m not sure how he computes “content”, exactly. Is my status update content? Each photo I upload?

Mark Zuckerberg says the notion that fake news influenced the U.S. presidential election is “a pretty crazy idea.”

He also says his company has studied fake news and found it’s a “very small volume” of the content on Facebook. He did not specify if that content is more or less viral or impactful than other information. [Source: NPR]

Well, if Zuckerberg can’t specify if fake news is more viral on Facebook, maybe we can, using the publicly available Facebook APIs. Let’s help him out!

The question I want to ask is this: how do popular fake Facebook stories from fake local newspapers compare to top stories from real local newspapers? Not “how many stories are there of each” but rather “Is fake news or real news more likely to be shared, and what’s the size of the difference?”

If Facebook is truly a functioning news ecosystem we should expect large local newspapers like the Boston Globe and LA Times to compete favorably with fake “hoax” newspapers like the Baltimore Gazette or Denver Guardian — fake “papers” created purely to harvest ad views from people looking for invented Clinton conspiracies.

For our fake story, we’re choosing the most popular story from the Denver Guardian, a fake newspaper created in the final days of the election. Its story “FBI Agent Suspected In Hillary Leaks Found Dead In Apparent Murder-Suicide” has now been shared on Facebook well over half a million times, as you can see with this call to Facebook’s API. The story exists on a site made to look like a real local newspaper and includes quotes from people both real and fake about the murder-suicide of an FBI agent and his wife supposedly implicated in leaking Clinton’s emails. According to the story he shot himself and his wife and then set his house on fire. Pretty fishy, eh? Add it to the Clinton Body Count.
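For anyone who wants to verify, this is the shape of that call, sketched with a placeholder URL standing in for the actual Denver Guardian link (the response shape shown is the 2016-era one; current API versions require an access token):

```python
import requests

# Share-count lookup for a single story; the URL below is a placeholder.
resp = requests.get("https://graph.facebook.com/",
                    params={"id": "https://example.com/denver-guardian-story"})
print(resp.json())
# e.g. {"id": "...", "share": {"share_count": 568000, "comment_count": ...}}
```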

[Image: screenshot of the Denver Guardian murder-suicide story]

The story is, of course, completely fake. But at 568,000 shares (shares, mind you, not views) it is several orders of magnitude more popular a story than anything any major city paper publishes on a daily basis.

“Oh, hold on, Mike,” you say. “You say ‘orders of magnitude’, but a lot of people misunderstand that phrase. Something shared several orders of magnitude more frequently would have to be shared literally around a thousand times more.”

Yep. That’s what I am saying: this article from a fake local paper was shared one thousand times more than material from real local papers.

Don’t believe me?

For our “real” stories, we are choosing the stories the papers have identified as their most popular of the day, via their “most popular stories” section on their site. (We are not choosing the most popular story they have based on Facebook data — I don’t currently have a way to know that).

The Boston Globe’s most popular article today (according to their site) is an article from its famous Spotlight team (yes, that Spotlight team) on the tragedy of shutting down psychiatric facilities in Massachusetts and elsewhere and replacing them with nothing. Number of shares? 181.

The LA Times has an editorial piece that is today’s most popular on their site, titled “‘We’re called redneck, ignorant, racist. That’s not true’: Trump supporters explain why they voted for him.” That ought to travel well beyond Los Angeles, right? Number of shares: 342.

The Chicago Tribune has a truly national story currently trending, “Trump and advisers back off major campaign pledges, including Obamacare and the wall”. The story is originally from the Washington Post, but it is about as major a story as you get. Number of shares: 1,774.

Now you could go national as well — that story from the Tribune was from the Washington Post, and is a top trending story there as well. So what does a truly national, click-ready story get you? Number of shares: 38,162.

Let’s plot that on a graph, shall we? Again, remember that these are not random selections — these are stats for the trending stories from each publication:

[Image: bar chart comparing Facebook share counts for the four stories]

To put this in perspective: if you combined the top stories from the Boston Globe, Washington Post, Chicago Tribune, and LA Times, they still had only about 7% of the share count of an article from a fake news site that intimated strongly that the Democratic presidential candidate had had a husband and wife murdered and then burned to cover up her crimes.
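The arithmetic, if you want to check it yourself (share counts as reported above):

```python
# Combined top-story shares of the four real papers vs. the hoax story.
real_shares = {"Boston Globe": 181, "LA Times": 342,
               "Chicago Tribune": 1_774, "Washington Post": 38_162}
hoax_shares = 568_000
combined = sum(real_shares.values())  # 40,459
print(f"{combined:,} shares combined, or {combined / hoax_shares:.0%} of the hoax")
```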

The fact that Mark Zuckerberg can shrug his shoulders with his best “Who, me?” face — I’m trying to stay logical here, but I feel very sick to my stomach. There is nothing trivial, rare, or occasional about fake news on Facebook. Fake news outperforms real news on Facebook by several orders of magnitude. The financial rewards for pushing fake news to Facebook are also several orders of magnitude higher, so expect this to continue until Zuckerberg can come to terms with the conspiracy ecosystem he created, and the effect it has had on U.S. politics.

UPDATE: Dan Barker notes that there are some stories on the LA Times site (and other sites) that have outperformed what those sites self-identify as their “top stories” or “most popular stories”. As one example, a story in the LA Times yesterday about a KKK march got north of 250,000 shares on Facebook, but for reasons that aren’t clear it is not listed as a “most popular” story on the LA Times site. That might be because people shared it without clicking through, or because the algorithm they use rolls day-old stories off the trending list.

Even setting aside the fact that this still puts the LA Times at a disadvantage to the fake Denver Guardian, I think the analysis is still valuable here. What we are looking at is how likely a popular story on a news site is to go viral compared to what can be achieved on a fake news site. This is interesting because the popular stories on a news site exist (to some extent) outside Facebook’s algorithm, and provide a sense of what we might think of as traditional top news stories.

Is it the best comparison that can be made? No — so let’s make better comparisons. Please. Show me up. Build the tools to do it, and get Facebook to open access to its data so it can be done systematically. But let’s not take Facebook at its word here.

Facebook Broke Democracy, but the Fix Is Harder Than People Realize.

You know, I’ve always complained about the use of “broke” when applied to things like democracy. How simple, right? But over the past few days I’ve not been in my normal nuanced mood. I’ve said, in fifteen different ways over the past year, that our stream-based model of social media was making us dumber. But maybe I’ve said it too subtly?

In any case, this is a note that dozens of thinkpieces on how Facebook broke democracy have come out over the past few days. And they’re good, and it’s refreshing to see people finally looking at the systemic bias of Facebook toward conspiracy sites and inflammatory political comment, and calling for Facebook to admit that they are, after all, a media company, and that it is time they started taking responsibility for little things like ending life on this planet as we know it. In retrospect the world would have been a lot better off if Zuckerberg had stuck to his original idea of a “Hot or Not” knock-off for campus coeds. It really would have been.

But no, Facebook instead decided to move into news distribution with the same algorithms and structure it used to share the “Charlie Bit My Finger” video with a billion people. Google provided the ad revenue model for conspiracy clickbait sites, and now your hairdresser cousin is 99% sure that Hillary Clinton may have personally murdered up to five FBI agents. They saw it on Facebook, after all.

I have a book I am working on which will combine a history of the web with some cognitive science and UX criticism to explain how all this came to be, but I want to flag one thing I’ve noticed in recent treatments of the issue. People are over-obsessed with the news feed algorithm.

The algorithm matters, because the algorithm amplifies a lot of bad things. But my sense, looking at the structure of Facebook, is that the computer portion of the algorithm is a bit player here. Rather, it’s how the whole system functions together.

Let me just give you one example. Here’s a card from Facebook:

[Image: a Facebook news feed card]

Now forget about the algorithm that brought this here and focus instead on the card. Every decision on this card is maximized to keep you on Facebook. So, for example, my name is bold and blue and prominent. The headline is also prominent, and Facebook pulls the description from the page so that the Facebook reader can read a summary of an article without going to the article.

The question of where this is from? The source of the news? It’s there in the least-looked-at part of the card, in a gray so thin and light that I don’t know how you could get it any lighter without having an ADA case on your hands.

[Image: close-up of the card’s faint gray source line]

Nothing on the card encourages you to click it or go elsewhere. Your options are Facebook-centric options. You can like it. You can comment. You can share it! Facebook has deliberately not called attention to the way you click through to read the story, because Facebook’s goal is that you read this headline and this summary and then either move on or spend time creating Facebook content — likes, comments, emojis, or shares. They have engineered a card, using the smartest data scientists in the world, that encourages you to read a headline and a description and never, ever click through to check the source or see the full story.

In the 1960s these folks would have worked for NASA. In the 1970s, maybe the NIH. Today they work for Facebook making sure you never leave the site to actually read the things that you share and react to. So instead of getting to the moon, we get to wherever the hell it is that we are now.

Because they succeed. Data science combined with design works. People on Facebook share material they don’t read all the time. And that’s the point. That’s how you produce revenue off of third party content without undermining your own dominant position.

And honestly, that’s just step one.  If I had time to go into it now, I’d explain how the whole sharing pattern — the stream-based model rather than what I call the garden-based model — prevents an iterative process of knowledge construction, instead reducing all knowledge to a stream of short headlines and summaries, of which your mind must form an intuitive sense. But I digress into nuance…

The point is there will be a lot of talk about algorithms over the next few weeks, and that’s good. But it is not that Facebook is an enlightenment engine loaded with a wrong scrap of code. Rather, Facebook’s entire model ends up being designed to produce stream-based reading behaviors, which produce money for Facebook and New World Order conspiracies for your cousin.  Sometimes in equal amounts. Happy days!

I’ll add one last thing here — maybe as a standard reminder of why I am talking about Facebook on this education blog. I talk about it because as educational technologists and instructional designers the great challenge of our age is to graduate students who can either thrive in existing information environments or design better ones. We give our students four years of practice doing library research and yet do not educate them about the environment in which they will gain much of their civic and personal knowledge. We must critique these environments at a level deeper than “Facebook is a corporation and therefore bad.” We must explain them at a level deeper than “Watch me crowdsource an answer here.” We need a comprehensive approach here, or our fact-tethered existence is going to continue to float away, unmoored from data, facts, or comprehension of consequences.

Notes on How Social Media Broke Our Democracy

I could not sleep last night at all. So I organized the notes I’ve been taking over the last year on the problem of doing politics in advertising-funded, stream-based systems.

I know this election was about so much more than that (so much more), and our problems are so much deeper. But I remain convinced that even if social media is not the fire or the fuel of Breitbartian racism, it is in fact the oxygen that helps it thrive and spread.

There are 537 pages of notes in this PDF, and it may not be immediately clear what each has to do with the book, but in my head at least they all relate. They are worth a read.

[PDF: notes-for-lost-in-the-stream-how-social-media-destroyed-our-democracy]

Stronger Together

A while back a prominent blogger wrote a post called “Why didn’t you blog about Trump, Daddy?” The basic argument of the post was this — his blog was a professional space, and he liked to keep it about the subject he was an expert in, and let’s face it, how he came down on the vote wasn’t really going to matter to people anyway.

But against that there was this: this is not a normal election. This is the sort of election your kids and grandkids (or grandnephews and grandnieces)  will ask you about many years from now. They are not going to want to hear about how your blog was really about information environments and UX and open content. They are going to ask one thing: “What did you do to stop Trump?”

I’ve put in my canvassing time, and made my donations. I’ve written pieces on DailyKos that attempted to patch up the Sanders/Clinton divide.

But I want to put this here, in this non-political space, that this is the most important election I’ve lived through. And it has driven me nuts watching commentators say “What is the big vision of Clinton? What is she about?”

For me, it’s right behind her on signs at every event. It’s in the speeches and the ads. It’s expressed positively and negatively. It has inspired the speeches of Michelle Obama as a surrogate, speeches which could stand on their own with any of the great speeches of the last 240 years. It has brought Glenn Beck — Glenn fricking Beck — to re-evaluate his life choices.

The vision is this: we’re stronger together. When we see our differences as our strength, we all win.

If you wanted me to, I could dive down and show you how that is reflected in two dozen different policy details of Clinton’s plans. The difference is in every detail, from the approach to police killings of black citizens, to free college tuition, to the acceptance of the LGBTQ community, to the proposals to fix Obamacare so it works for everyone. On every issue, in every policy, is what has become the core idea of the Democratic Party: when we help people live up to their potential, we grow the pie for everyone.

The Romney and McCain campaigns argued that we, as a nation, had gone overboard in some places with these things. That too progressive a tax decreased our productivity, or in Romney’s famous formulation, increased the laziness of the 47%. That tensions between religious and civil liberties needed to lean more into religion, and less into civil rights. These were not things I agreed with. I fought hard against them. I think there is some ugliness under these ideas, when you strip away the veneer.

But Trumpism is different. It is not talking about growing the pie, or even dividing the pie a bit differently. Trumpism means one thing: you win by punishing others or bringing others low.

It’s not enough that Trump wins: Hillary must go to jail. It’s not enough we build a wall: Mexico must pay. It’s not enough that we restrict abortion: women who receive abortions must be prosecuted. Mexicans, Muslims, African-Americans, the disabled: through the power of Trump they will learn their place. They will get in line behind us, where they belong. They will not dare to speak back, or look us in the eye.

I find this terrifying.  It is not the politics of Reagan or even George W. Bush. It is the politics of a lynch mob. It is a politics of subjugation. Of punishment. Of mob justice.

There are some people out there who say that Hillary doesn’t have a vision. I think, in this election, she has the only vision that matters. We are stronger together. If you believe that, please go out and vote for Clinton tomorrow, and give her a House and Senate that allow her to govern.

Thank you, and we now go back to our scheduled programming.

I’m Writing a Book On the Disinfotopia of Current Social Media

I’ve decided to write a book before the year is out. I’ve decided this because I think the most pressing current issue for Open Pedagogy practitioners in the U.S. right now is how we address a social media environment that seems to be bringing out many of our worst demons, and I think several years of research on this issue has given me some insights here. In fact, I would go so far as to say that the events of the past year have only served to validate the presentations I have given on The Stream and its problems over the last couple of years. Add to that the 532-plus pages of notes and pull quotes I have on the subject, burning a hole in my pocket.

I simply can’t imagine a more important task for us as instructional designers, teachers, and technologists than to deal with this issue, and I think the best way to deal with the subject is to do a deep dive.

I plan to start tonight, right after I cook the kids dinner. I hope to finish it over holiday break. I hope that if I post this intention here and people respond well to the idea it will get me to pump out pages rather than endlessly edit or delay. (My curse is I *love* research, and so the research phase never ends).

Here’s a description:

Lost in the Stream will be the first book to tell the story of how social media destroyed American political culture as seen through the 2016 primary and election.

It lays blame for that destruction on the omnipresence of the “Stream” in our lives:  the never-ending pull-to-refresh parade of news and outrage we have come to take for granted in our social media environments. It argues that the Stream, as exemplified by the personal feeds of Facebook, Twitter, and other sites, is creating a post-truth society, and that the 2016 election — the first “Twitter election” — is a harbinger of worse things to come if we do not address major flaws in the structure of our current social media environments.

There are a number of books that have tackled similar issues, pointing to insidious effects of the commercialization of the web, erosion of face-to-face culture, or the impact of the web on memory and concentration. This book acknowledges these impacts, but looks very specifically at how the “event-ification” of the web contributes to our current malaise. It presents the world of the web as it has become for many in the U.S. — polarizing, conspiracy-obsessed, brutally and unapologetically mob-like — and shows how The Stream is at the root of these problems. It suggests that older, pre-stream models of the web, from pioneers such as Vannevar Bush, Doug Engelbart, and Alan Kay, might hold the key to returning the web to its place as an augment of human intellect rather than a detriment to it.

Chapter List

Prologue: From Techno-utopianism to the Current Dystopia

Section One / Welcome to Disinfotopia

Chapter One: Islands in the Stream (That Is What We Are)
Chapter Two: The Facebook Conspiracy Factory
Chapter Three: Twitter’s Mob Justice
Chapter Four: Who Moved My Cognitive Surplus?
Chapter Five: Weaponized Transparency and the End of Organization(s)

Section Two / Old Models and New Hope

Chapter Six: As We May Think Again
Chapter Seven: Talking with Models
Chapter Eight: Reclaiming Calm
Chapter Nine: Can Education Save the Web?

Conclusion: The Web We Need