Wikity, One Year Later

Consider this my one year report. 😉

The core idea of Wikity was simple: what if we bent the world of social media a bit away from the frothy outrage factory of Twitter and Facebook towards something more iterative, exploratory, and constructive? I took as my model Ward Cunningham’s excellent work on wiki and combined it with some insights on how to make social bookmarking a more creative, generative endeavor. The shortest explanation of Wikity I can provide is this: Wikity is social bookmarks, wikified.

It took me four months just to get to that explanation.

What does “wikified social bookmarks” mean? Well, like most social bookmarking tools, we let people host private sites, but encourage them to share their bookmarks and short notes with the world. And though the mechanisms are federated rather than centralized, people can copy each other’s bookmarks and notes, just as they can on Delicious or Pinboard.

That’s what’s the same. But we also do three things current social bookmarking sites do not.

  • We don’t bookmark pages. We bookmark and name ideas, data, and evidence. A single page may have multiple bookmarks for the different ideas, theories, and data referenced in the page.
  • We provide a simple way of linking these “idea” bookmarks, so that finding one idea leads naturally to other ideas. Over time you create an associative map of your understanding of an issue.
  • As we revisit pages over time, we expand and update them, building them out, adding notes, sources, counterarguments, summaries, and new connections.

And the end result, after a year and about 300(!) hours of work, is something I love and I use every day. It’s a self-hosted bookmark manager for linked ideas and data that has (for me) revolutionized my ability to think through issues and find connections between ideas I would have otherwise missed. If you want to see me construct an argument about something, you can read my blog. But if you want some insight into how I conceptualize the space, you can visit my Wikity site, and follow a card like Anger Spreads Fastest.

I use it every day, and have accumulated over 2,000 “cards” in it, ranging from short clippings on subjects of interest to lengthier, hyperlinked reflections. As outlined a couple of years ago in my Federated Education keynote, the cards usually start out as simple blockquotes or observations, but often build over time into more complex productions, with links, sources, bibliographies, videos, and additional reflections.

(I’m tempted to recount every development decision here, to explain all the expansion and tweaking the product has undergone to get to its current state. I know some people may be looking at it and thinking “300 hours? Really?” But the path to product was never a straight one.)

Wikity Today

Wikity is open. It’s constructed as a WordPress theme, and you can download it from GitHub. It does a lot more than your average theme, but installation is as simple as uploading it to your WordPress theme directory and applying the theme. I learned PHP specifically for this project, so it’s not the most beautiful code you’ve ever seen. But it does work, and shows another way of thinking about our web interfaces and daily habits.

Now that Wikity is easily installable as a theme on self-hosted WordPress, I’ll be phasing out signups on the wikity.cc site, which I was running as a central space for new users to try out Wikity. In its place I hope to put an aggregation site that makes it easier for different people’s Wikity installs to see what other people are writing about. That will reduce the cost and effort of running and maintaining an enterprise server for other people’s content. I’ll be reaching out to the owners of the 127 sites on there.

I should also mention to some early users that the scope of what Wikity does has actually been reduced in many ways. In some ways that’s sad, but the biggest advance in Wikity over the past year has been realizing that its core could be expressed as “social bookmarks, wikified.” If you’ve ever built a product, you know what I mean. If you haven’t — trust me, it’s a painful but necessary process.

You can still use Wikity for a variety of things other than wikified bookmarks and notes — I worked with a professor in the spring, for example, to use it to build a virtual museum, and as far as I can tell, Wikity is the simplest way to run a personal wiki on top of WordPress. But the focus is a hyperlinked bookmarking and notetaking system, because after a year of use and 2,000 cards logged, I can tell you that is where the unique value is. The beautiful thing is that if you think the value is somewhere else, the code is up there and forkable — sculpt it to your own wishes!

Finally, a promise: Wikity core is safe from the demons of decay, at least for now. It will continue to be maintained and improved, mainly because I am addicted to using it personally. On top of that, we’re currently organizing a Wikity event for Christmas break, to introduce educators to the platform as a learning and research tool for students.

Now let’s talk about some of the struggles we’ve been through here, and where we’re going in the future.

The Long and Winding Road

I have to admit, I thought early on that there would be a larger appetite for Wikity. There may still be. But adoption has proved harder than I expected.

Part of the reason, I think, is that the social bookmarking world that I expected Wikity to expand on is smaller than I thought, and has at least one good solid provider that people can count on (Pinboard, written and maintained by the excellent Maciej Cegłowski). More importantly, people have largely built a set of habits today that revolve around Twitter, Facebook, and Slack. The habits of personal bookmarking have been eroded by these platforms, which give people instant social gratification. In today’s world, bookmarking, organizing, and summarizing information feels a bit like broccoli compared to re-tweeting something with a “WTF?” tag and watching the likes roll in.

I had a bunch of people try Wikity, and even paid many people to test it. The conclusion was usually that it was easy to use, valuable, cool — and completely non-addictive. One hour into Wikity people were in love with the tool. But the next day they felt no compulsion to go back.

We could structure Wikity around social rewards in the future, and that might happen. But ultimately, for me, the struggle to understand why Wikity was not addictive in the way Twitter and Facebook are ended up being the most important part of the project.

I began, very early on, compiling notes in Wikity on issues surrounding the culture of Twitter, Facebook, social media, trolling, and the like. Blurbs about whether empathy was the problem or solution. Notes on issues like Abortion Geofencing, Alarm Fatigue, and the remarkable consistency of ad revenue to GDP over the last century. Was this the battle we needed to have first? Helping people understand the profound negative impact our current closed social media tools are having on our politics and culture?

I exported just my notes and clippings on these issues the other day, from Wikity, as a PDF. It was over 500 pages long. I was in deep.

As the United States primary season ramped up, I became more alarmed at the way that platforms like Facebook and Twitter were polarizing opinions, encouraging shallow thought, and promoting the creation and dissemination of conspiracy theories and fake news. I began to understand that the goals of Wikity — and of any social software meant to promote deeper thought — began with increasing awareness of the ways in which our current closed, commercial environments are distorting our reality.

Recently, I have begun working with others on tools and projects that will help hold commercial social media accountable for their effect on civic discourse, and demonstrate and mitigate some of their more pernicious effects. Tools and curriculum that will help people to understand and advocate for the changes we need in these areas: algorithmic transparency, the right to modify our social media environments, the ability to see what the feed is hiding from us, places to collectively fact-check and review the sources of information we are fed.

Wikity will continue to be developed, but the journey that began with a tool ended at a social issue, and I think it’s that social issue — getting people to realize how these commercial systems have impacted political discourse and how open tools might solve the problem — that most demands addressing right now. I don’t think I’ve been this passionate about something in a very long time.

I’ve had some success in getting coverage of this issue in the past few weeks, from Vox, to TechCrunch, to a brief interview on the U.S.’s Today Show this morning.


I think we need broader collaborations, and I think open tools and software will be key to this effort. This is a developing story.

So it’s an interesting end to this project — starting with a tool, and getting sucked into a movement. Wikity is complete and useful, but the main story (for me) has turned out to lead beyond that, and I’m hurtling towards the next chapter.

Was this a successful project? I don’t know what other people might think, but I think so. Freed from the constrictions of bullet-pointed reports and waterfall charts, I just followed it where it led. It led somewhere important, where I’m making a positive difference. Is there more to success than that?

 

Fooled by Recency: Hoaxers Increment Dates on Fake Stories

Here’s a fake story that appeared in a number of places on the web during the campaign, claiming that protesters of Donald Trump were being paid. The claim has been repeated so many times by so many fake and satirical sites that it is now an article of faith among Republicans, due to exposure effects.

Here’s a major source of that hoax:

[Screenshot: the hoax story as it appears on the site today]

You’ll note the publish date: November 11.

That’s what the site looks like today. But we can see what it looked like previously, courtesy of archive.org’s Wayback Machine.
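For anyone who wants to replicate this kind of capture-history check, the Wayback Machine has a public CDX API that lists every snapshot of a page. Here’s a minimal sketch in Python; the story URL below is a stand-in, not the actual hoax address:

```python
import requests

# Query the Wayback Machine's CDX API for every capture of a page.
# The URL below is a placeholder -- substitute the hoax page's address.
resp = requests.get(
    "http://web.archive.org/cdx/search/cdx",
    params={
        "url": "example.com/trump-protesters-paid",  # hypothetical
        "output": "json",
        "fl": "timestamp,original",
        "filter": "statuscode:200",
    },
)
rows = resp.json() if resp.text.strip() else []

# The first row is a header; the rest are [timestamp, original] pairs.
for timestamp, original in rows[1:]:
    # Each capture can be replayed at web.archive.org/web/<timestamp>/<url>
    print(f"https://web.archive.org/web/{timestamp}/{original}")
```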

Here’s what it looked like in March, sporting a publish date of March 24:

[Screenshot: the same story captured in March, dated March 24]

Here it is in June, sporting a date of June 16:

[Screenshot: the same story captured in June, dated June 16]

And in September it sported a date of September 11:

[Screenshot: the same story captured in September, dated September 11]

So it’s safe to conclude that one of the tricks in the fake news toolbelt is creating a feeling of recency through altering dates.

Another note — given the date futzing, I’m not sure we can trust the view counter, but captures from the Wayback Machine do show it ticking up in a reasonable way. If the view counter is accurate (big if), we can also compute a ratio of shares to reads.

The page was shared 423,000 times. It was viewed (by people coming through all sources, including but not limited to Facebook) 70,000 times. If (and again, a big if) we can trust the counter, the maximum click-through rate from Facebook would be 70/423, or about 17%. In reality, it would likely be lower than that, as a significant number of people would come through other sources.
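To make the arithmetic explicit, here’s the back-of-envelope version (a sketch; it assumes, again, that the page’s own counter is honest):

```python
shares = 423_000  # Facebook share count for the page
views = 70_000    # the page's own view counter; trust with caution

# Upper bound: pretend every single view arrived via a Facebook share.
max_click_through = views / shares
print(f"maximum click-through rate: {max_click_through:.0%}")         # ~17%
print(f"minimum share-without-reading: {1 - max_click_through:.0%}")  # ~83%
```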

To put it another way, at least 83% of people who shared this never looked at it. Note that this is actually a higher share-without-reading rate than a recent study found on Twitter, where about 60% of people shared links without reading them.

So it’s a big if as to whether we can trust this counter, but if we can, there are a couple of interesting possibilities:

  • People share without reading on Facebook more than on Twitter
  • More highly viral content has a worse share-to-clickthru ratio
  • Explicitly political content has worse share-to-clickthru ratios

None of these are firm conclusions, incidentally — just ideas I’ll be keeping in mind for the future.

 

Latest In Rebuttal Shopping: Trump’s IQ

I’m playing with this idea of Facebook posting as “rebuttal shopping.” The idea is that a lot of stuff that goes viral on Facebook is posted as an implicit rebuttal to arguments that the poster feels are being leveled against their position. This stuff tends to go viral on Facebook because the minute the Facebook user sees the headline, they know this is something they need: an answer to a question or criticism that irks them.

Maybe the idea holds together and maybe it doesn’t. But I’ll occasionally be looking at new things that are trending on Facebook and seeing how they do or don’t fit that pattern.

Today in rebuttal shopping we have this, on Trump’s IQ:

[Screenshot: trending Facebook post about Trump’s IQ]

I don’t think I have to prove this fake: there is no official record of presidential IQ scores throughout history, and of course intelligence tests didn’t become common until well into the 20th century. Beyond that, we even have the card description here, which references a “think thank” that proposed this, and the weird phrase “Intelligence professors.” But if you want to click through, it is a blog post on Prntly.com that references a “report” that turns out to be someone talking on a forum.

The number of shares on this is not groundbreaking, but it is clearly in viral territory for something posted two days ago: 24,836 shares.

One of the comments on it, liked 59 times, seems to support the rebuttal shopping idea:

“Democrats love to talk about IQ’s. Well they won’t be talking about Trumps. They thought Obama was the smartest president ever.”

However, a lot of the other comments rant about unrelated things, so maybe comments aren’t the best evidence here.

This IQ issue seems to be an ongoing thing. A satirical article on Empire News last year claiming that Obama scored the lowest on an IQ test of any president in history garnered over 30,000 shares, and a chain letter hoax in the early 2000s had George W. Bush as the lowest in history.

Where does this leave “rebuttal shopping”? I’m not sure. The new Trump fake news seems to follow the rebuttal pattern, as does the Obama story. The Bush story confirms something that wasn’t much disputed at the time, however, and looks more like shopping for confirmation than rebuttal. It certainly is not resolving any cognitive dissonance.

We’ll play with the idea a few more days and see what it brings to the fore.

 

 

Scanning the Facebook Feed as a Rebuttal Shopping Experience

The Stream is a weird place. Your Facebook feed, for example, is a series of posts by various people that in some ways resembles a forum, but in other ways it’s not at all like a forum.  When you post something to Facebook, there’s not an explicit prompt you are responding to, which seems non-problematic when you are posting a cool new video you like, but a bit weirder when you are posting random articles.

A recent Jay Rosen tweet thread got me thinking about this a bit more deeply. Rosen suggests that the reason fake news spiked before the election was demand-driven: many Republican voters were feeling uneasy about voting for Trump, and articles where Hillary was knocking off FBI agents and funding ISIS helped them feel better about that.

This is interesting, because the place where I became obsessed with the fake news phenomenon was in the primary, when a lot of Sanders supporters I respected and admired for their intelligence suddenly were posting bizarre vote-rigging stories.

As one example, the Inquisitr story Election Fraud Study Points To Rigged Democratic Primary Against Bernie Sanders [Video] was repeatedly in my feed, with angry references to how this proved they had been right all along — Bernie was actually winning. The primary had been stolen!

The “study” the page linked was referred to as a “Stanford Study,” a claim which took 60 seconds to debunk. It was the work of a current Stanford psychology student. No background in politics or polling. No Stanford appointment. And the study itself was just a paper, written and shared via Google Drive — it hadn’t been peer-reviewed or even designed above and beyond what one might do for a blog post. When you dug into the paper, there was nothing there — the computation of some effect sizes between states with paper trails and those without, and language which indicated that the author might not in fact have understood how exit polls work, or have been aware of the shift in demographic support for Clinton between 2008 and 2016. (In fact, in this respect there was an error that would disqualify it from ever being taken seriously.)

I didn’t expect most of my friends would get the math, but even so, once the appeal to authority fell apart, and considering the major practical barriers to rigging an election, one would assume people wouldn’t reshare it. But reshare it they did. And quite a lot.

And this is where I think Rosen’s point is interesting. If you think about the Stream, with its lack of explicit prompts, how does one know what to share? One thought is that as you go through your stream you are doing something like shopping — you’re explicitly in the market for something. And very often that something is a rebuttal to an implied argument that is giving you some cognitive dissonance.

For the Trump supporters, the dissonance was that Trump was unqualified and racist, while Hillary was just (in their minds) corrupt. But was she corrupt enough? And if not, how could they vote for him? Changing Clinton into a murdering ISIS follower allowed them to follow their gut, which really wanted to vote for Trump. It gave their gut the evidence it needed to rule the day.

For the more militant Sanders conspiracists, the dissonance was between the results they felt Sanders should have gotten and the ones he got. People’s gut had told them Sanders would be broadly popular, but the reality was that he was not quite popular enough. On the verge of having to accept that, Facebook threw out a lifeline for the gut: the election was rigged. Stanford scientists had proved it.

If you go through a few of the public (i.e., shared to all) posts on this, which you can do with this search, you’ll find something really interesting — so many of the comments people write when sharing the piece are of the type “I knew it! I knew it!” It’s the sort of reaction someone has when they are struggling to maintain belief in the face of cognitive dissonance and suddenly stumble on a lifeline.

[Screenshot: a public Facebook post sharing the story, with the poster’s name blacked out]
(Note: the post above is a public post, i.e., meant to be shared with the world. That search can’t surface private/friends-only posts, and you shouldn’t share those anyway. And even though the post is public, I’m blacking out the person’s name out of consideration.)

This isn’t to say that any of this is innocent. The “Stanford Study” that wasn’t actually a study or from Stanford was shared at least 100,000 times via the different sites that ran it, including HNN (share count of their version: 82,000), which changed the headline to the zippier “Odds Hillary Won Without Widespread Fraud: 1 in 77 Billion Says Berkeley, Stanford Studies.” (Spoiler alert: the Berkeley study wasn’t from Berkeley either.) And it got a big assist from state-sponsored entities like the Russian-owned RT News in this episode of Redacted Tonight, which was viewed approximately 125,000 times on YouTube.

(That’s YouTube views, BTW, which are serious stuff — you have to sit down and watch a significant percentage of it before the view will register).

The RT segment ups the ante, really highlighting the Stanford name, to much laughter. It’s a study out of a little community college called Stanford, the host jokes (again, it’s not). It’s getting to the point where it’s really embarrassing, the host says, how people won’t admit it was rigged. How much evidence do you need?

There’s a similar story from the past couple of days, with a Breitbart story being shared that passed around a ludicrous map under a misleading headline about Trump winning the popular vote (in the heartland).

[Image: the Breitbart map of the election results]

Now any person with half a brain can see how ridiculous this map is — if they stop to look at it. And any person who can stop to parse a sentence can see the gymnastics required here to claim this victory.

The people reposting this are not stupid. But crucially, they don’t stop to think about it. They see and they click, I think, because they know the moment they see this that this is a rebuttal they have been in the market for. They don’t need to evaluate it, because this is precisely what they have been looking for.

This is a bit rambling, but what I mean to say is I think Rosen is onto something here about the demand side of fake news. I’m starting to think of feed skimming as a sort of shopping experience, where you know the ten sorts of things that you are looking for this week. Some paper towels, a new sponge to replace the ratty old one, and a rebuttal to your snobby cousin who posted that article that made you feel for twenty seconds that you might be wrong about something. Just what I was looking for!

As I’ve said before, this doesn’t mean that the news only confirms what you already thought. In fact, quite the opposite: this process, over time, can pull you and your friends deeper and deeper into alternate realities, based on well-known cognitive mechanisms. But thinking of this process as not so much one of discovery as rebuttal shopping — often brought on by cognitive dissonance — is useful at the moment, and I thank Jay Rosen for that.

Facebook’s Wall Metaphor and Political Polarization

Let’s continue to look at how Facebook has harmed public discourse, because I’m getting increasingly worried that once Facebook blocks fake news people will say problem solved. And really, the problem is not even close to solved.

One of Facebook’s oldest features is the “wall,” although we seldom call it that anymore. The wall in Facebook is your personal space, a place to express yourself, and occasionally a place for people to post messages to you. If you click on a person’s name in Facebook you see their wall. It’s the sum total of things they have posted to Facebook, along with the things that have been posted to them.

The wall (and Facebook itself) used to be a place for you to express non-newsy parts of yourself. You’d share status updates, various pictures of yourself with red solo cups, and links to funny videos or cool music. The wall was really an expression of you, the front page of your digital identity.  It was the collection of things that made you who you are, the place where you showed various photos of your feet at the beach and expressed your undying love of Yacht Rock.

Compare that to something like Delicious bookmarks, which is a crime against the eyes, but uses another metaphor:

[Screenshot: Brian Lamb’s Delicious bookmarks page]

These are not things that Brian Lamb “likes” or supports, or thinks are cool. The Delicious page is not meant to be his digital identity. It’s simply a list of things that Brian thinks might be worth reading.

Take the article shown above on the subject of innovation, for example: I have literally no idea whether Brian agrees or disagrees with it. In fact, knowing Brian’s personal opinion of talk about innovation, my guess is that this paper probably doesn’t align all that well with his views, but is interesting and informative enough to be worth a read, so he’s bookmarked it.

Now let’s take a look at my “wall” in Facebook:

[Screenshot: my Facebook wall]

The thing about my wall is you can tell at a glance who I am. It’s my digital identity, and I — like most others — curate it as such. These aren’t just useful articles and videos. They are expressions of what I believe.

And this, I think, is a too-little-remarked turn in social media, because it has a dramatic impact on what gets shared. If I can only share stuff I more or less agree with, that has a profound impact on the flow of information through the network. Likewise, I’m unlikely to share things that aren’t core to my identity — for instance, meeting minutes from the last town meeting. I’m more likely to share things I’m passionate about.

I know some people will read this and say — ha, ha, no, I share all sorts of things. I am not bubbled!

You might think that is true, but in practice I bet it is usually not. Look at the top three posts on your Facebook feed. Are the viewpoints they express generally in line with yours? (And I don’t mean exactly, I mean generally.) More importantly, are they core to your passions? Have you shared any boring but useful information in the last 24 hours?

My guess is you haven’t. You might very well do that occasionally, but it is not your daily practice. It is swimming upstream, against the bias of the interface and the over-arching metaphor of Facebook’s wall.

And this is a big problem. If you grew up in the 1980s you can think of Facebook as a jean jacket or goth trenchcoat on which you can put a number of pins and patches:

[Image: a denim jacket covered in pins and patches]

From the blog We Are Reckless. (Couldn’t find a CC-licensed one; can someone make one?)

Together these pins and patches represent who you are. And that’s great. People like to express themselves. But when you do that with politics it looks like this:

[Image: a lot of 135 vintage pinback buttons, mostly liberal political]

From Terapeak. Still available for sale!

Which is, again, great as expression. Peace is good. We should bomb less. And who likes fat cats anyways? So, awesome, great, wonderful.

But are a series of jacket buttons and patches really our best source for news?

This mixing of core identity with an information system has direct effects on what we share and see. We’ve talked about how Facebook’s interface encourages sharing without reading, but just as important is this piece of the problem — that we subconsciously want all news we share to show unambiguously who we are.  And while it’s OK that we don’t put buttons that we’re ambivalent about on our jackets, the world depends on us getting news and information that we are ambivalent about. In fact, that’s often the most important news.

I jokingly say that Facebook built a platform for distributing party selfies, turned it into a news source, and destroyed the world. That’s hyperbole, of course. But the first two points of that — that Facebook turned an identity platform into a news source — that’s absolutely true. It’s only the ultimate impact of that that is in doubt, and we’re increasingly seeing that impact has been more negative than we ever imagined.

 

Maybe Rethink the Cult of Virality?


TechCrunch has a story seemingly sympathetic to Facebook’s plight, which has this graf in it:

Because Facebook and some other platforms reward engagement, news outlets are incentivized to frame stories as sensationally as possible. While long-running partisan outlets may be held accountable for exaggeration, newer outlets built specifically to take advantage of virality on networks like Facebook don’t face the same repercussions. They can focus on short-term traffic and ad revenue, and if people get fed up with their content, they can simply reboot with a different brand.

It then says that, given this, Facebook is really between a rock and a hard place. They don’t want to become the truth police, right? But on the other hand they don’t want lies, either. What’s a billion-dollar company to do?

Again, I’d say think a little bigger. We have prayed at the altar of virality a long time, and I’m not sure it’s working out for us as a society. If reliance on virality is creating the incentives for a culture of disinformation, then consider dialing down virality.

We know how to do this. Slow people down. Incentivize them to read. Increase friction, instead of relentlessly removing it.

Facebook is a viral sharing platform, and has spent hundreds of millions getting you to share virally. And here we are.

What if Facebook saw itself as a deep reading platform? What if it spent hundreds of millions of dollars getting you to read articles carefully rather than sharing them thoughtlessly?

What if Facebook saw itself as a deep research platform? What if it spent its hundreds of millions of dollars of R & D building tools to help you research what you read?

This idea that Facebook is between a rock and a hard place is bizarre. Facebook built both the rock and the hard place. If it doesn’t like them, it can build something different.

Banning Ads Is Nice, but the Problem Is Facebook’s Underlying Model

So Facebook will ban fake news sites from their ads network, as will Google, which shows you exactly how hard it would have been to do this six months ago. But in any case, problem solved, right?

Unfortunately, no. Fake sites that get traffic can get on other ad networks, or go the malware route, or find one of the million other ways to turn eyeballs into cash. It will be a bit harder, I suppose, but it’s not as if running a fake news factory is a high-overhead business.

But even if all ad incentives were eliminated, we are still left with the real problem, which is that Facebook’s model for news distribution is not suited for news distribution or anything like it.

Let’s review. The way you get your stories is this:

  • You read a small card with a headline and a description of the story on it.
  • You are then prompted to rate the card, by liking it, sharing it, or commenting on it.
  • This is then pushed out to your friends, who can in turn complete the same process.

This might be a decent scheme for a headline rating system. It’s pretty lousy for news though.

If you think about the difference between this and blogging, it’s instructive. While I know for a fact many people jump to the comments section without fully reading my blog posts, you’ll notice that the way we structure things here on the blog is not:

  • Blog title
  • Comments box
  • An invisible but implied link to the blog post body, opening in a new window

but rather

  • Blog title
  • Blog body
  • Comments box

which is because our assumption is that you read the post first, top to bottom, then you comment and maybe share. We also think that reading — not commenting — is the main dish here.  This is a good model, and the interface is structured around it. It led to a much higher level of discourse (again, speaking generally) in the mid-aughts than the things that replaced it.

Facebook, on the other hand, doesn’t think the content is the main dish. Instead, it monetizes other people’s content. The model of Facebook is to try to use other people’s external content to build engagement on its site. So Facebook has a couple of problems.

First, Facebook could include whole articles, except for the most part they can’t, because they don’t own the content they monetize. (Yes, there are some efforts around full story embedding, but again, this is not evident in the stream as you see it today.) So we get this weird (and think about it a minute, because it is weird) model where you get the headline and a comment box, and if you want to read the story you click it and it opens up in another tab. Except you won’t click it, because Facebook has designed the interface to encourage you to avoid going off-site altogether and skip straight to the comments on the thing you haven’t read.

Second, Facebook wants to keep you on site anyway, so they can serve you ads. Any time you spend somewhere else reading is time someone else is serving you ads instead of them, and that is not acceptable.

And so again, I want you to try to look at this Facebook interface with new eyes. It is literally a headline, a description, and a series of prompts asking you to react to the headline and description, and unless you think of it as a headline rating system it is really quite odd:

[Screenshot: a Facebook news card with headline, description, and comment prompts]

And so when Facebook says things like — it’s not our software, it’s just the way users act, well I’m confused. This card is designed to teach users how to act, and with the help of their data scientists, it does that effectively. Because of issues around content licensing and their ad revenue model, they designed it to have users comment and react to stories they hadn’t read. I don’t think you get to turn around then and say, well it’s the users that aren’t doing due diligence on the stories.

Facebook is one of the biggest companies in the world with the smartest UX designers in the world. If they wanted their users to be critical readers, they would have made very different choices. They didn’t, and bad things happened. And now maybe we can all do better.

 

The “They Had Their Minds Made Up Anyway” Excuse

BGR graciously linked my post from the weekend, which showed that the scale at which fake news stories trend on Facebook can dwarf traditional news in terms of shares. (Thanks, BGR!)

However, they end with this paragraph, which I’d like to reply to:

On a related note, it stands to reason that most individuals prone to believing a hyperbolic news story that skews to an extreme partisan position likely already have their minds made up. Arguably, Facebook in this instance isn’t so much influencing the voting patterns of Americans as it is bringing a prime manifestation of confirmation bias to the surface.

As regular readers know, my core expertise is not data analysis of Facebook, but in how learning environments (and particularly online learning environments) affect the way users think, act, and learn. A long time ago I was an online political organizer, but my day job for many, many years has been the investigation and design of net-enabled learning experiences.

The BGR conclusion is a common one, and it intuitively meshes with our naive understanding of how the mind perceives truth. I see something I agree with and I reshare it — it doesn’t change my mind, because of course I already believed it when I reshared it. However, from a psychological perspective there are two major problems with this.

Saying Is Believing

The first problem is that saying is believing. This is an old and well-studied phenomenon, though perhaps understudied in social media. So when you see a post that says “Clintons suspected in murder-suicide” and you retweet or repost it, it’s not a neutral transaction. You, the reposter, don’t end that transaction unchanged. You may initially post it because, after all, “Whoa if true.” But the reposting shifts your orientation to the facts. You move from being a person reading information to someone arguing a side of an issue, and once you are on a side of the issue, no amount of facts or argument is going to budge you. This may have been built into the evolution of our reason itself. In this way, going through the process of stream curation is at heart a radicalizing process.

I’ve said many times that all social software trains you to be something. Early Facebook trained you to remember birthdays and share photos, and to some extent this trained you to be a better person, or in any case the sort of person you desired to be.

The process that Facebook currently encourages, on the other hand, of looking at these short cards of news stories and immediately deciding whether to support or not support them, trains people to be extremists. It takes a moment of ambivalence or nuance, and by design pushes the reader to go deeper into their support for whatever theory or argument they are staring at. When you consider that people are being trained in this way by Facebook for hours each day, that should scare the living daylights out of you.

It’s worthwhile to note as well that the nature of social media is we’re more likely to share inflammatory posts than non-inflammatory ones, which means that each Facebook session is a process by which we double down on the most radical beliefs in our feed.

In general, social media developers use design to foster behaviors that are useful to the community. But what is being fostered by this strange process that we put ourselves through each day?

Think about this from the perspective of a Martian come to Earth, watching people reach for their phones in the morning and scroll past shared headlines, deciding in seconds for each one whether to re-share it, comment on it, or like it. The question the Martian would ask is “What sort of training software are they using, and what does it train people to do?”

And the problem is that — unlike previous social sites — Facebook doesn’t know, because from Facebook’s perspective they have two goals, and neither is about the quality of the community or the well-being of its members. The first goal is to keep you creating Facebook content in the form of shares, likes, and comments. Any value you get out of it as a person is not a direct Facebook concern, except as it impacts those goals. And so Facebook is designed to make you share without reading, and like without thinking, because that is how Facebook makes its money and builds lock-in: by having you create social content (and personal marketing data) it can use.

The second Facebook goal is to keep you on the site at all costs, since this is where they can serve you ads. And this leads to another problem we can talk about more fully in another post. Your average news story — something from the New York Times on a history of the Alt-Right, for example — won’t get clicked, because Facebook has built their environment to resist people clicking external links. Marketers figured this out and realized that to get you to click they had to up the ante. So they produced conspiracy sites that have carefully designed, fictional stories that are inflammatory enough that you *will* click.

In other words, the conspiracy clickbait sites appeared as a reaction to a Facebook interface that resisted external linking. And this is why fake news does better on Facebook than real news.

To be as clear as I possibly can — by setting up this dynamic, Facebook simultaneously set up the perfect conspiracy replication machine and incentivized the creation of a new breed of conspiracy clickbait sites.

There will be a lot of talk in the coming days about this or that change Facebook is looking at. But look at these two issues to get the real story:

  • Do they promote deep reading over interaction?
  • Do they encourage you to leave the site, even when the link is not inflammatory?

Next time you’re on Facebook you’ll notice there are three buttons at the bottom of each piece of content, outlining the actions you can take on that content. None of them says “read”.

Facebook Makes Conspiracies Familiar, and Familiarity Equals Truth

It would be terrifying enough if this existential problem — that Facebook is training people to be extremists and conspiracists through a process of re-sharing — were the worst bit. But it’s not. The larger problem is the far larger number of people who see the headlines and do not reshare them.

Why is this a problem? Because for the most part, our brains equate “truth” with “things we’ve seen said a lot by people we trust”. The literature in this area is vast — it’s one of the reasons, for example, that so many people believe that global warming is a hoax.

If you think about it, it’s not a bad heuristic for some things. If someone tells you that there’s better farmland over the mountain, and then another person tells you that, and then another person, then your mind starts seeing this as more likely to be true than not, especially if the folks telling you this are folks you trust. If someone tells you that there’s gold mines under every house in the town, but they can’t be touched because of federal laws — well that’s not something you hear a lot, so that’s obviously false.

You say — no, that’s not it at all! The gold mines are ridiculous, that’s why I don’t believe in them! The good farmland is logical! I’m a logical being, dammit!

I’m sorry, you’re not, at least on most days. If enough people told you about the gold mines, and every paper in the town made passing reference to the gold mines you would consider people who didn’t believe in the hidden gold mines to be ridiculous. Want examples of this? Look at the entire sweep of history.

Or look at Lisa Fazio’s work on the illusory truth effect. The effect of familiarity is so strong that giving you a multiple choice question about something you know is wrong — “The Atlantic Ocean is the largest in the world, True or False?” can actually increase your doubt that you are right, and even convince you of the false fact. Why? Because it counts as exposure to the idea that other people believe the lie. This is why Betteridge Headlines are so dangerously unethical. If I publish headlines once a week asking “Is There Gold Buried Under Sunnydale?” with an article that says probably not, eventually when you are asked that question by someone else your mind will go — you know, I think I’ve heard something like that.

All it takes is repetition. That’s it.

Don’t believe me? Go check out how many people believe Obama is a Muslim. Or ask yourself a question like “Does Mike Pence support gay conversion therapy?” without looking up anything and then ask yourself how it is that you “know” that one way or the other. If you know it, you know it because it seems familiar. A lot of people you trust have said it. That’s it. That’s all you got.

If Betteridge Headlines are borderline unethical, then Facebook is well over that line. The headlines that float by you on Facebook for one to two hours a day, dozens at a time, create a sense of familiarity with ideas that are not only wrong, but hateful and dangerous. This is further compounded by the fact that what is highlighted on Facebook’s cards is not the source of the article, which is so small and gray as to be effectively invisible, but the friendly smiling face of someone you trust.

[Screenshot: a Facebook card where the article’s source is tiny and gray next to the sharer’s photo]

The article that intimated Clinton may have murdered an FBI agent and his wife and burned them in their home to conceal the evidence was shared by over half a million people. If we assume 20 views for every share, that means that over 10 million people were exposed to the idea that this actually happened. If we assume that there is a lot of overlap in social networks, we can assume that they were exposed to it repeatedly, and that it came from many people they trusted.

And that’s just one headline. People checked Facebook multiple times a day, every single day, seeing headlines making all sorts of crazy claims, and filed them in their famil-o-meter for future reference. We’ve never seen something on this scale. Not even close.

If Facebook was a tool for confirmation bias, that would kind of suck. It would. But that is not the claim. The claim is that Facebook is quite literally training us to be conspiracy theorists. And given the history of what happens when conspiracy theory and white supremacy mix, that should scare the hell out of you.

I’m petrified. Mark Zuckerberg should be too.

Facebook and Twitter Are Probably Making Google a Liar as Well

Kevin Drum points out that if you search on Google to find out whether Hillary won the popular vote (she did) that one of the top results lies, right in the headline.

[Screenshot: Google results for the popular-vote query, with the 70news story at the top]

How does 70news, a right wing site fond of claiming Trump protests are being funded by prominent Jewish banker George Soros, get to the top of this Google result?

While I don’t know how this manifests in the algorithm, I am going to guess that the ranking this story gets is a result of the attention and audience given to the story by Facebook. Let’s go to the Facebook API and take a look at the shares of these three top stories (a sketch of the lookup follows the list):

  • Heavy.com (“Are There Still Uncounted Ballots?”): 202 shares.
  • New York Times (“Clinton’s Substantial Popular Vote Win”): 65,398 shares.
  • 70news (“Final Election 2016 Numbers: Trump Won Both Popular and Electoral College Vote”): 252,158 shares.
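For the curious, here is roughly what that lookup looks like. This is a sketch against the Graph API as it worked in late 2016, when querying a URL returned a public share object; the story URLs and token are placeholders, and newer API versions require a `fields=engagement` parameter instead:

```python
import requests

GRAPH = "https://graph.facebook.com/v2.8/"  # API version current in late 2016

def share_count(url, access_token):
    """Return Facebook's public share count for a URL.

    In the v2.8-era Graph API, querying a URL object returned a `share`
    object with `share_count` and `comment_count` fields.
    """
    resp = requests.get(GRAPH, params={"id": url, "access_token": access_token})
    resp.raise_for_status()
    return resp.json().get("share", {}).get("share_count", 0)

# Placeholder URLs -- substitute the actual story addresses and an app token.
stories = {
    "Heavy.com": "https://heavy.com/<story-path>",
    "New York Times": "https://www.nytimes.com/<story-path>",
    "70news": "https://70news.wordpress.com/<story-path>",
}

for name, url in stories.items():
    print(name, share_count(url, access_token="<APP_TOKEN>"))
```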

Yes — you’re reading that right. A story from a site that specializes in various forms of alt-right conspiracies outperformed the New York Times on shares on this issue; in fact, it got nearly four times the shares of the New York Times story.

And Twitter? If you go to the 70news article now you’ll find that the writer got these numbers “off of Twitter.” The mind reels. It’s Facebook at the center of a whole conspiracy clickbait ecosystem.

One other thing to note here: most people get their news from headlines, not articles. So the minute you see that headline in a search result, the fake news is validated and becomes part of what people know. You can check out Lisa Fazio’s work on exposure to wrong information if you want. We don’t really know what’s true — we only know what “sounds more familiar.” Facebook, Google, and Twitter have made the false many degrees more familiar than the true.

You might also check out this related story on the relative virality of fake versus real stories.

Despite Zuckerberg’s Protests, Fake News Does Better on Facebook Than Real News. Here’s Data to Prove It.

(An investigation in which we decide to use Facebook’s social graph API to see whether fake news or real news is more viral).

UPDATE: Since posting, there has been some discussion about this post’s use of the phrase “top stories from local newspapers”.  A clarification on how that phrase is used has been appended at the end of the post with some methodology, and some small clarifying edits have been made. The title and core claim of the post remains accurate and stands. What we present here is not the best possible measure of fake vs. real virality, but it is a meaningful one, and deserves to be addressed.

Mark Zuckerberg told us recently that fake stories on Facebook were quite rare, less than 1% of total content. I’m not sure how he computes “content”, exactly. Is my status update content? Each photo I upload?

Mark Zuckerberg says the notion that fake news influenced the U.S. presidential election is “a pretty crazy idea.”

He also says his company has studied fake news and found it’s a “very small volume” of the content on Facebook. He did not specify if that content is more or less viral or impactful than other information. [Source: NPR]

Well, if Zuckerberg can’t specify if fake news is more viral on Facebook, maybe we can, using the publicly available Facebook APIs. Let’s help him out!

The question I want to ask is this: how do popular fake Facebook stories from fake local newspapers compare to top stories from real local newspapers? Not “how many stories are there of each” but rather “Is fake news or real news more likely to be shared, and what’s the size of the difference?”

If Facebook is truly a functioning news ecosystem we should expect large local newspapers like the Boston Globe and LA Times to compete favorably with fake “hoax” newspapers like the Baltimore Gazette or Denver Guardian — fake “papers” that were created purely to derive ad views from people looking for invented Clinton conspiracies.

For our fake story, we’re choosing the most popular story from the Denver Guardian, a fake newspaper created in the final days of the election. Its story “FBI Agent Suspected In Hillary Leaks Found Dead In Apparent Murder-Suicide” has now been shared on Facebook well over half a million times, as you can see with this call to Facebook’s API. This story exists on a site made to look like a real local newspaper and features quotes from people both real and fake about the murder-suicide of an FBI agent and his wife supposedly implicated in leaking Clinton’s emails. According to the story, he shot himself and his wife and then set his house on fire. Pretty fishy, eh? Add it to the Clinton Body Count.

[Screenshot: the Denver Guardian murder-suicide story]

The story is, of course, completely fake. But at 568,000 shares (shares, mind you, not views) it is several orders of magnitude more popular a story than anything any major city paper publishes on a daily basis.

“Oh, hold on Mike, you say, ‘orders of magnitude’, but a lot of people misunderstand that phrase. Something shared several orders of magnitude more frequently would have to be shared literally around a thousand times more.”

Yep. That’s what I am saying: this article from a fake local paper was shared one thousand times more than material from real local papers.

Don’t believe me?

For our “real” stories, we are choosing the stories the papers have identified as their most popular of the day, via their “most popular stories” section on their site. (We are not choosing the most popular story they have based on Facebook data — I don’t currently have a way to know that).

The Boston Globe’s most popular article today (according to their site) is an article from its famous Spotlight team (yes, that Spotlight team) on the tragedy of shutting down psychiatric facilities in Massachusetts and elsewhere and replacing them with nothing. Number of shares? 181.

The LA Times has an editorial piece that is today’s most popular on their site, titled “‘We’re called redneck, ignorant, racist. That’s not true’: Trump supporters explain why they voted for him.” That ought to travel more broadly than LA readers, right? Number of shares: 342.

The Chicago Tribune has a truly national story currently trending, “Trump and advisers back off major campaign pledges, including Obamacare and the wall”. The story is originally from the Washington Post, but is about as major a story as you get. Number of shares: 1,774.

Now you could go national as well — that story from the Tribune was from the Washington Post, and is a top trending story there as well. So what does a truly national, click-ready story get you? Number of shares: 38,162.

Let’s plot that on a graph, shall we? Again, remember that these are not random selections — these are stats for the trending stories from each publication:

[Chart: Facebook shares of each paper’s top story versus the Denver Guardian story]
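(If you want to redraw the chart from the numbers above, here’s a minimal matplotlib sketch; the log scale is my addition, to keep the smaller bars visible:)

```python
import matplotlib.pyplot as plt

# Facebook share counts for each outlet's top story, from the text above.
shares = {
    "Boston Globe": 181,
    "LA Times": 342,
    "Chicago Tribune": 1_774,
    "Washington Post": 38_162,
    "Denver Guardian (fake)": 568_000,
}

plt.barh(list(shares.keys()), list(shares.values()))
plt.xscale("log")  # the gap spans orders of magnitude
plt.xlabel("Facebook shares (log scale)")
plt.title("Trending-story shares: real papers vs. one fake story")
plt.tight_layout()
plt.show()
```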

To put this in perspective, if you combined the top stories from the Boston Globe, Washington Post, Chicago Tribune, and LA Times, they still had only about 7% of the shares of an article from a fake news site that intimated strongly that the Democratic presidential candidate had had a husband and wife murdered then burned to cover up her crimes.

The fact that Mark Zuckerberg can shrug his shoulders with his best “Who, me?” face — I’m trying to stay logical here, but I feel very sick to my stomach. There is nothing trivial, rare, or occasional about fake news on Facebook. Fake news outperforms real news on Facebook by several orders of magnitude. The financial rewards for pushing fake news to Facebook are also several orders of magnitude higher, so expect this to continue until Zuckerberg can come to terms with the conspiracy ecosystem he created, and the effect it has had on U.S. politics.

UPDATE: Dan Barker notes that there are some stories on the LA Times site (and other sites) that have outperformed what those sites self-identify as their “top stories” or “most popular stories”. As one example, a story about a KKK march in the LA Times got north of 250,000 shares on Facebook yesterday, but for reasons that aren’t clear is not listed as a “most popular” story on the LA Times site. That might be because people shared it without clicking through, or it could be because the algorithm they use rolls day-old stories off the trending list.

Beyond the fact that this still puts the LA Times at a disadvantage to the fake Denver Guardian, I think the analysis is still valuable here. What we are looking at is how likely a popular story on a news site is to go viral compared to what can be achieved on a fake news site. This is interesting because the popular stories on a news site exist (to some extent) outside Facebook’s algorithm, and provide a sense of what we might think of as traditional top news stories.

Is it the best comparison that can be made? No — so let’s make better comparisons. Please. Show me up. Build the tools to do it, and get Facebook to open access to its data so it can be done systematically. But let’s not take Facebook at its word here.