Wikity, One Year Later

Consider this my one year report. 😉

When I got a Shuttleworth Flash Grant one year ago, I knew just what I wanted to do. I wanted to make Wikity.

The idea of Wikity would evolve much over the next year, but the core idea of Wikity was simple: what if we bent the world of social media a bit away from the frothy outrage factory of Twitter and Facebook towards something more iterative, exploratory, and constructive? I took as my model Ward Cunningham’s excellent work on wiki and combined it with some insights on how to make social bookmarking a more creative, generative endeavor. The shortest explanation of Wikity I can provide is this: Wikity is social bookmarks, wikified.

It took me four months just to get to that explanation.

What does “wikified social bookmarks” mean? Well, like most social bookmarking tools, we allow people to host private sites, but encourage people to share their bookmarks and short notes with the world. And while the mechanisms are federated, not centralized, we allow people to copy each other’s bookmarks and notes, just like Delicious or Pinboard.

That’s what’s the same. But we also do three things current social bookmarking sites do not.

  • We don’t bookmark pages. We bookmark and name ideas, data, and evidence. A single page may have multiple bookmarks for the different ideas, theories, and data referenced in the page.
  • We provide a simple way of linking these “idea” bookmarks, so that finding one idea leads naturally to other ideas. Over time you create an associative map of your understanding of an issue.
  • As we revisit pages over time, we expand and update them, building them out, adding notes, sources, counterarguments, summaries, and new connections.

And the end result, after a year and about 300(!) hours of work, is something I love and use every day. It’s a self-hosted bookmark manager for linked ideas and data that has (for me) revolutionized my ability to think through issues and find connections between ideas I would have otherwise missed. If you want to see me construct an argument about something, you can read my blog. But if you want some insight into how I conceptualize the space, you can visit my Wikity site, and follow a card like Anger Spreads Fastest.

I use it every day, and have accumulated over 2,000 “cards” in it, varying from interesting clippings on subjects of interest to more lengthy, hyperlinked reflections. As outlined a couple years ago in my Federated Education keynote, the cards often start out as simple blockquotes or observations, but often build over time into more complex productions, with links, sources, bibliographies, videos, and additional reflections.

(I’m tempted to recount every development decision here, to explain all the expansion and tweaking the product has undergone to get to its current state. I know some people may be looking at it and thinking “300 hours? Really?” But the path to product was never a straight one.)

Wikity Today

Wikity, like all Shuttleworth projects, is open. It’s constructed as a WordPress theme, and you can download it from GitHub. It does a lot more than your average theme, but installation is as simple as uploading it to your WordPress theme directory and applying the theme. I learned PHP specifically for this project, so it’s not the most beautiful code you’ve ever seen. But it does work, and shows another way of thinking about our web interfaces and daily habits.
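For the command-line inclined, the install can be sketched roughly like this (the repo URL and site paths here are illustrative placeholders, not the actual ones, and the activation step assumes you have WP-CLI installed; the wp-admin Appearance > Themes screen works just as well):

```shell
# Clone the theme into your WordPress install's themes directory.
# (Repo URL and install path are placeholders; substitute your own.)
cd /var/www/example-site/wp-content/themes
git clone https://github.com/EXAMPLE/wikity.git wikity

# Activate it with WP-CLI, or via Appearance > Themes in wp-admin.
wp theme activate wikity --path=/var/www/example-site
```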

Now that Wikity is easily installable as a theme on self-hosted WordPress, I’ll be phasing out signups on the wikity.cc site, which I was running as a central space for new users to try out Wikity. In its place I hope to put an aggregation site that makes it easier for different people’s Wikity installs to see what other people are writing about. That will reduce the cost and effort of running and maintaining an enterprise server for other people’s content. I’ll be reaching out to the owners of the 127 sites on there.

I should also mention to some early users that the scope of what Wikity does has actually been reduced in many ways. There’s a way in which this is sad, but in other ways the biggest advance over the past year in Wikity has been realizing that the core of Wikity could be expressed as “social bookmarks, wikified.” If you’ve ever built a product, you know what I mean. If you haven’t — trust me, it’s a painful but necessary process.

You can still use Wikity for a variety of things other than wikified bookmarks and notes — I worked with a professor in the Spring, for example, to use it to build a virtual museum, and as far as I can tell, Wikity is the simplest way to run a personal wiki on top of WordPress. But the focus is a hyperlinked bookmarking and notetaking system, because after a year of use and 2,000 cards logged, I can tell you that is where the unique value is. The beautiful thing is if you think the value is somewhere else the code is up there and forkable — sculpt it to your own wishes!

Finally a promise: Wikity core is safe from the demons of decay, at least for now. It will continue to be maintained and improved, mainly because I am addicted to using it personally. On top of that, we’re currently organizing a Wikity event for Christmas break, to introduce educators to the platform as a learning and research tool for students.

Now let’s talk about some of the struggles we’ve been through here, and where we’re going in the future.

The Long and Winding Road

I have to admit, I thought early on that there would be a larger appetite for Wikity. There may still be. But it has proved harder than I thought.

Part of the reason, I think, is that the social bookmarking world that I expected Wikity to expand on is smaller than I thought, and has at least one good solid provider that people can count on (Pinboard, written and maintained by the excellent Maciej Cegłowski). More importantly, people have largely built a set of habits today that revolve around Twitter and Facebook and Slack. The habits of personal bookmarking have been eroded by these platforms, which give people instant social gratification. In today’s world, bookmarking, organizing, and summarizing information feels a bit like broccoli compared to re-tweeting something with a “WTF?” tag and watching the likes roll in.

I had a bunch of people try Wikity, and even paid many people to test it. The conclusion was usually that it was easy to use, valuable, cool — and completely non-addictive. One hour into Wikity people were in love with the tool. But the next day they felt no compulsion to go back.

We could structure Wikity around social rewards in the future, and that might happen. But ultimately, for me, that struggle to understand why Wikity was not addictive in the ways that Twitter and Facebook were ended up being the most important part of the project.

I began, very early on, compiling notes in Wikity on issues surrounding the culture of Twitter, Facebook, social media, trolling, and the like. Blurbs about whether empathy was the problem or solution. Notes on issues like Abortion Geofencing, Alarm Fatigue, and the remarkable consistency of ad revenue to GDP over the last century. Was this the battle we needed to have first? Helping people understand the profound negative impact our current closed social media tools are having on our politics and culture?

I exported just my notes and clippings on these issues the other day, from Wikity, as a pdf. It was over 500 pages long. I was in deep.

As the United States primary ramped up, I became more alarmed at the way that platforms like Facebook and Twitter were polarizing opinions, encouraging shallow thought, and promoting the creation and dissemination of conspiracy theories and fake news. I began to understand that the goals of Wikity — and of any social software meant to promote deeper thought — began with increasing awareness of the ways in which our current closed, commercial environments are distorting our reality.

Recently, I have begun working with others on tools and projects that will help hold commercial social media accountable for their effect on civic discourse, and demonstrate and mitigate some of their more pernicious effects. Tools and curriculum that will help people to understand and advocate for the changes we need in these areas: algorithmic transparency, the right to modify our social media environments, the ability to see what the feed is hiding from us, places to collectively fact-check and review the sources of information we are fed.

Wikity will continue to be developed, but the journey that began with a tool ended at a social issue, and I think it’s that social issue — getting people to realize how these commercial systems have impacted political discourse and how open tools might solve the problem — that most demands addressing right now. I don’t think I’ve been this passionate about something in a very long time.

I’ve had some success in getting coverage of this issue in the past few weeks, from Vox, to TechCrunch, to a brief interview on the U.S.’s Today Show this morning.


I think we need broader collaborations, and I think open tools and software will be key to this effort. This is a developing story.

So it’s an interesting end to this project — starting with a tool, and getting sucked into a movement. Wikity is complete and useful, but the main story (for me) has turned out to lead beyond that, and I’m hurtling towards the next chapter.

Was this a successful grant? I don’t know what other people might think, but I think so. Freed from the constrictions of bullet pointed reports and waterfall charts, I just followed it where it led. It led somewhere important, where I’m making a positive difference. Is there more to success than that?

Thanks again to the Shuttleworth Foundation which kicked me off on this ride. I’ll let you all know where it takes me in the future.

(And to my Wikity fans and users — don’t worry: Wikity is not going away. As long as I can’t live without it, it’s going to continue to be developed, just a bit more slowly).

Fooled by Recency: Hoaxers Increment Dates on Fake Stories

Here’s a fake story that was shown in a number of places on the web during the campaign, claiming that protesters of Donald Trump were being paid. This has been covered so many times by so many fake and satirical sites that it is now an article of faith among Republicans, due to exposure effects.

Here’s a major source of that hoax:

[Screenshot: the hoax article as it appears today]

You’ll note the publish date: November 11.

That’s what the site looks like today. But we can see what it looked like previously, courtesy of archive.org’s Wayback Machine.
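As an aside, for anyone who wants to pull a page’s capture history programmatically rather than browsing, the Wayback Machine exposes a CDX query API. Here is a minimal sketch (the helper function names are mine, and error handling is omitted):

```python
import json
import urllib.parse
import urllib.request

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def build_cdx_url(page_url, year):
    """Build a CDX API query listing captures of page_url within a year."""
    params = {
        "url": page_url,
        "from": f"{year}0101",   # captures from Jan 1 ...
        "to": f"{year}1231",     # ... through Dec 31 of that year
        "output": "json",
        "fl": "timestamp,original",
    }
    return CDX_ENDPOINT + "?" + urllib.parse.urlencode(params)

def parse_captures(rows):
    """CDX JSON output is a list of rows; the first row is the header."""
    header, *data = rows
    return [dict(zip(header, row)) for row in data]

def fetch_captures(page_url, year):
    """Fetch and parse the capture list for a page (network call)."""
    with urllib.request.urlopen(build_cdx_url(page_url, year)) as resp:
        return parse_captures(json.load(resp))
```

Each capture’s timestamp (YYYYMMDDhhmmss) can then be compared against whatever publish date the page was displaying at that moment, which is exactly the check performed by hand below.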

Here’s what it looked like in March, sporting a publish date of March 24:

[Screenshot: Wayback Machine capture from March]

Here it is in June, sporting a date of June 16:

[Screenshot: Wayback Machine capture from June]

And in September it sported a date of September 11:

[Screenshot: Wayback Machine capture from September]

So it’s safe to conclude that one of the tricks in the fake news toolbelt is creating a feeling of recency through altering dates.

Another note — given the date futzing, I’m not sure we can trust the view counter, but captures from the Wayback do show it ticking up in a reasonable way. If the view counter is accurate (big if) we may also have a ratio of shares to reads.

The page was shared 423,000 times. It was viewed 70,000 times (by people coming through all sources, including but not limited to Facebook). If (and again, a big if) we can trust the counter, the maximum click-through rate from Facebook would be 70/423, or about 17%. In reality, it would likely be lower than that, as a significant number of people would come through other sources.

To put it another way, at least 83% of the people who shared this never looked at it. Note that this is actually higher than a recent study’s finding that about 60% of people share without reading on Twitter.
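Taking the counters at face value, the back-of-the-envelope arithmetic above can be sketched like this:

```python
shares = 423_000  # Facebook share count shown on the page
views = 70_000    # the page's own view counter (all traffic sources, if trusted)

# Even if every single view came via Facebook, at most views/shares
# of the sharers could have clicked through to the page.
max_click_through = views / shares  # about 0.17

# So, at minimum, 1 - max_click_through of sharers never saw the page.
min_share_without_reading = 1 - max_click_through  # about 0.83

print(f"max click-through: {max_click_through:.0%}")
print(f"shared without reading: at least {min_share_without_reading:.0%}")
```

This is an upper bound, not a measurement: the real click-through rate would be lower (and the share-without-reading fraction higher) to the extent views came from outside Facebook.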

So it’s a big if as to whether we can trust this counter, but if we can, there’s a couple interesting possibilities:

  • People share without reading on Facebook more than on Twitter
  • More highly viral content has a worse share-to-clickthru ratio
  • Explicitly political content has worse share-to-clickthru ratios

None of these are firm conclusions, incidentally — just ideas I’ll be keeping in mind for the future.


Latest In Rebuttal Shopping: Trump’s IQ

I’m playing with this idea of Facebook posting as “rebuttal shopping”. The idea is that a lot of stuff that goes viral on Facebook is posted as an implicit rebuttal to arguments that the poster feels are being levied against their position. This stuff tends to go viral on Facebook because the minute the Facebook user sees the headline they know this is something they need, an answer to a question or criticism that irks them.

Maybe the idea holds together and maybe it doesn’t. But I’ll occasionally be looking at new things that are trending on Facebook and seeing how they do or don’t fit that pattern.

Today in rebuttal shopping we have this, on Trump’s IQ:

[Screenshot: the trending Trump IQ story]

I don’t think I have to prove this fake: there is no official record of Presidential IQ scores throughout history, and of course intelligence tests didn’t become common until well into the 20th century. Outside that, we even have the card description here, which references a “think thank” that proposed this, and the weird phrase “Intelligence professors”. But if you want to click through, it is a blog post on Prntly.com that references a “report” that turns out to be someone talking on a forum.

The number of shares on this is not groundbreaking, but it is clearly in viral territory for something posted two days ago: 24,836 shares.

One of the comments on it, liked 59 times, seems to support the rebuttal shopping idea:

“Democrats love to talk about IQ’s. Well they won’t be talking about Trumps. They thought Obama was the smartest president ever.”

However, a lot of the other comments rant about unrelated things, so maybe comments aren’t the best result here.

This IQ issue seems to be an ongoing thing. A satirical article on Empire News last year claiming that Obama scored the lowest on an IQ test of any president in history garnered over 30,000 shares, and a chain letter hoax in the early 2000s had George W. Bush as the lowest in history.

Where does this leave “rebuttal shopping”? I’m not sure. The new Trump fake news seems to follow the rebuttal pattern, as does the Obama story. The Bush story confirms something that wasn’t much disputed at the time, however, and looks more like shopping for confirmation than rebuttal. It certainly is not resolving any cognitive dissonance.

We’ll play with the idea a few more days and see what it brings to the fore.


Scanning the Facebook Feed as a Rebuttal Shopping Experience

The Stream is a weird place. Your Facebook feed, for example, is a series of posts by various people that in some ways resembles a forum, but in other ways it’s not at all like a forum. When you post something to Facebook, there’s not an explicit prompt you are responding to, which seems non-problematic when you are posting a cool new video you like, but a bit weirder when you are posting random articles.

A recent Jay Rosen tweet thread got me thinking about this a bit more deeply. Rosen suggests that the reason that fake news spiked before the election was demand-driven: many Republican voters were feeling uneasy about voting for Trump, and articles where Hillary was knocking off FBI agents and funding ISIS helped them feel better about that.

This is interesting, because the place where I became obsessed with the fake news phenomenon was in the primary, when a lot of Sanders supporters I respected and admired for their intelligence suddenly were posting bizarre vote-rigging stories.

As one example, the Inquisitr story Election Fraud Study Points To Rigged Democratic Primary Against Bernie Sanders [Video] was repeatedly in my feed, with angry references to how this proved they had been right all along — Bernie was actually winning. The primary had been stolen!

The “study” the page linked was referred to as a “Stanford Study”, a claim which took 60 seconds to debunk. It was the work of a current Stanford student of psychology. No background in politics or polling. No Stanford appointment. And the study itself was just a paper, written and shared via Google Drive — it hadn’t been peer-reviewed or even designed above and beyond what one might do for a blog post. When you dug into the paper, there was nothing there — the computation of some effect sizes between states with paper trails and those without, and language which indicated that the author might not in fact have understood how exit polls work, or have been aware of the shift in demographic support for Clinton between 2008 and 2016. (In fact, in this respect there was an error that would disqualify it from ever being taken seriously.)

I didn’t expect most of my friends would get the math, but even so, without the call to authority, and considering the major barriers to rigging an election, one would assume people wouldn’t reshare it. But reshare it they did. And quite a lot.

And this is where I think Rosen’s point is interesting. If you think about the Stream, with its lack of explicit prompts, how does one know what to share? One thought is that as you go through your stream you are doing something like shopping — you’re explicitly in the market for something. And very often that something is a rebuttal to an implied argument that is giving you some cognitive dissonance.

For the Trump supporters, the dissonance was that Trump was unqualified and racist and Hillary was just (in their minds) corrupt. But was she corrupt enough? And if not, how could they vote for him? Changing Clinton to a murdering ISIS follower allowed them to follow their gut, which really wanted to vote for Trump. It gave their gut the evidence it needed to rule the day.

For the more militant Sanders conspiracists, the dissonance was between the results they felt Sanders should have and the ones that he got. People’s gut had told them Sanders would be broadly popular, but the reality was that he was not quite popular enough. On the verge of having to accept that, Facebook threw out a lifeline for the gut: the election was rigged. Stanford scientists had proved it.

If you go through a few of the public (i.e. shared to all) posts on this, which you can do with this search, you’ll find something really interesting — so many of the comments people write when sharing the piece are of the type “I knew it! I knew it!”. It’s the sort of reaction someone has when they are struggling to maintain belief in the face of cognitive dissonance and suddenly stumble on a lifeline.

[Screenshot: a public Facebook post sharing the story]
(Note: the post above is a public post, i.e. meant to be shared with the world. You can’t see, and shouldn’t share, private/friends-only posts with that search. And even though the post is public, I’m blacking out the person’s name out of consideration.)

This isn’t to say that all this is innocent in the least. The “Stanford Study” that wasn’t actually a study or from Stanford was shared at least 100,000 times via different sites on Facebook, including by HNN (share count of their version: 82,000), which changed the headline to the more zippy “Odds Hillary Won Without Widespread Fraud: 1 in 77 Billion Says Berkeley, Stanford Studies”. (Spoiler alert: the Berkeley study wasn’t from Berkeley either). And it got a big assist from state-sponsored entities like the Russian-owned RT News in this episode of Redacted Tonight, which was viewed approximately 125,000 times on YouTube:

(That’s YouTube views, BTW, which are serious stuff — you have to sit down and watch a significant percentage of it before the view will register).

The RT segment ups the ante, really highlighting the Stanford name, to much laughter. It’s a study out of a little community college called Stanford, the host jokes (again, it’s not). It’s getting to the point it’s really embarrassing, the host says, how people won’t admit it was rigged. How much evidence do you need?

There’s a similar story in the past couple days with a Breitbart story being shared that passed around a ludicrous map with a misleading headline about Trump winning the popular vote (in the heartland).

[Screenshot: the Breitbart map]

Now any person with half a brain can see how ridiculous this map is — if they stop to look at it. And any person that can stop to parse a sentence can see the gymnastics required here to claim this victory.

The people reposting this are not stupid. But crucially, they don’t stop to think about it. They see and they click, I think, because they know the moment they see this that this is a rebuttal they have been in the market for. They don’t need to evaluate it, because this is precisely what they have been looking for.

This is a bit rambling, but what I mean to say is I think Rosen is onto something here about the nature of the demand side. I’m starting to think of feed skimming as a sort of shopping experience, where you know the ten sorts of things that you are looking for this week. Some paper towels, a new sponge to replace the ratty old one, and a rebuttal to your snobby cousin who posted that article that made you feel for twenty seconds that you might be wrong about something. Just what I was looking for!

As I’ve said before, this doesn’t mean that the news only confirmed what you thought already. In fact, quite the opposite: this process, over time, can pull you and your friends deeper and deeper into alternate realities, based on well-known cognitive mechanisms. But thinking of this process as not so much one of discovery as rebuttal shopping — often brought on by cognitive dissonance — is useful at the moment, and I thank Jay Rosen for that.

Facebook’s Wall Metaphor and Political Polarization

Let’s continue to look at how Facebook has harmed public discourse, because I’m getting increasingly worried that once Facebook blocks fake news people will say problem solved. And really, the problem is not even close to solved.

One of Facebook’s oldest features is the “wall” although we seldom call it that anymore. The wall in Facebook is your personal space, a place to express yourself, and occasionally a place for people to post messages to you. If you click on a person’s name in Facebook you see their wall. It’s the sum total of things they have posted to Facebook, along with the things that have been posted to them.

The wall (and Facebook itself) used to be a place for you to express non-newsy parts of yourself. You’d share status updates, various pictures of yourself with red solo cups, and links to funny videos or cool music. The wall was really an expression of you, the front page of your digital identity. It was the collection of things that made you who you are, the place where you showed various photos of your feet at the beach and expressed your undying love of Yacht Rock.

Compare that to something like Delicious bookmarks, which is a crime against the eyes, but uses another metaphor:

[Screenshot: a Delicious bookmarks page]

These are not things that Brian Lamb “likes” or supports, or thinks are cool. The Delicious page is not meant to be his digital identity. It’s simply a list of things that Brian thinks might be worth reading.

If you take this article that is shown above, for example, on the subject of innovation, I have literally no idea whether Brian agrees or disagrees with it. In fact, knowing Brian’s personal opinion of talk about innovation, my guess is that this paper probably doesn’t align all that well with his views, but is interesting and informative enough that it is worth a read, so he’s bookmarked it.

Now let’s take a look at my “wall” in Facebook:

[Screenshot: my Facebook wall]

The thing about my wall is you can tell at a glance who I am. It’s my digital identity, and I — like most others — curate it as such. These aren’t just useful articles and videos. They are expressions of what I believe.

And this, I think, is a largely unremarked turn in social media, because it has a dramatic impact on what gets shared. If I can only share stuff I more or less agree with, that has a profound impact on the flow of information through the network. Likewise, I’m unlikely to share things that aren’t core to my identity — for instance, meeting minutes from the last town meeting. I’m more likely to share things I’m passionate about.

I know some people will read this and say — ha, ha, no, I share all sorts of things. I am not bubbled!

You might think that is true, but in practice I bet it is usually not. Look at the top three posts on your Facebook feed. Are the viewpoints they express generally in line with yours? (And I don’t mean exactly, I mean generally) More importantly, are they core to your passions? Have you shared any boring but useful information in the last 24 hours?

My guess is you haven’t. You might very well do that occasionally, but it is not your daily practice. It is swimming upstream, against the bias of the interface and the over-arching metaphor of Facebook’s wall.

And this is a big problem. If you grew up in the 1980s you can think of Facebook as a jean jacket or goth trenchcoat on which you can put a number of pins and patches:

[Image: a jacket covered in pins and patches]

From blog We Are Reckless. Couldn’t find one CC-licensed, can someone make one?

Together these pins and patches represent who you are. And that’s great. People like to express themselves. But when you do that with politics it looks like this:

[Image: a lot of 135 vintage political pinback buttons]

From Terapeak. Still available for sale!

Which is, again, great as expression. Peace is good. We should bomb less. And who likes fat cats anyways? So, awesome, great, wonderful.

But are a series of jacket buttons and patches really our best source for news?

This mixing of core identity with an information system has direct effects on what we share and see. We’ve talked about how Facebook’s interface encourages sharing without reading, but just as important is this piece of the problem — that we subconsciously want all news we share to show unambiguously who we are. And while it’s OK that we don’t put buttons that we’re ambivalent about on our jackets, the world depends on us getting news and information that we are ambivalent about. In fact, that’s often the most important news.

I jokingly say that Facebook built a platform for distributing party selfies, turned it into a news source, and destroyed the world. That’s hyperbole, of course. But the first two points of that — that Facebook turned an identity platform into a news source — that’s absolutely true. It’s only the ultimate impact of that that is in doubt, and we’re increasingly seeing that impact has been more negative than we ever imagined.


Maybe Rethink the Cult of Virality?


TechCrunch has a story seemingly sympathetic to Facebook’s plight, which has this graf in it:

Because Facebook and some other platforms reward engagement, news outlets are incentivized to frame stories as sensationally as possible. While long-running partisan outlets may be held accountable for exaggeration, newer outlets built specifically to take advantage of virality on networks like Facebook don’t face the same repercussions. They can focus on short-term traffic and ad revenue, and if people get fed up with their content, they can simply reboot with a different brand.

It then says, given this, Facebook is really between a rock and a hard place. They don’t want to become the truth police, right? But on the other hand they don’t want lies, either. What’s a billion dollar company to do?

Again, I’d say think a little bigger. We have prayed at the altar of virality a long time, and I’m not sure it’s working out for us as a society. If reliance on virality is creating the incentives to create a culture of disinformation, then consider dialing down virality.

We know how to do this. Slow people down. Incentivize them to read. Increase friction, instead of relentlessly removing it.

Facebook is a viral sharing platform, and has spent hundreds of millions getting you to share virally. And here we are.

What if Facebook saw itself as a deep reading platform? What if it spent hundreds of millions of dollars getting you to read articles carefully rather than sharing them thoughtlessly?

What if Facebook saw itself as a deep research platform? What if it spent its hundreds of millions of dollars of R & D building tools to help you research what you read?

This idea that Facebook is between a rock and a hard place is bizarre. Facebook built both the rock and the hard place. If it doesn’t like them, it can build something different.

Banning Ads Is Nice, but the Problem Is Facebook’s Underlying Model

So Facebook will ban fake news sites from their ads network, as will Google, which shows you exactly how hard it would have been to do this six months ago. But in any case, problem solved, right?

Unfortunately, no. Fake sites that get traffic can get on other ad networks, or go the malware route, or find one of the million other ways to turn eyeballs into cash. It will be a bit harder, I suppose, but it’s not as if having a fake news factory is a high-overhead business.

But even if all ad incentives were eliminated, we are still left with the real problem, which is that Facebook’s model for news distribution is not suited for news distribution or anything like it.

Let’s review. The way you get your stories is this:

  • You read a small card with a headline and a description of the story on it.
  • You are then prompted to rate the card, by liking it, sharing it, or commenting on it.
  • This then is pushed out to your friends, who can in turn complete the same process.

This might be a decent scheme for a headline rating system. It’s pretty lousy for news though.

If you think about the difference between this and blogging, it’s instructive. While I know for a fact many people jump to the comments section without fully reading my blog posts, you’ll notice that the way we structure things here on the blog is not:

  • Blog title
  • Comments box
  • An invisible but implied link to the blog post body, opening in a new window

but rather

  • Blog title
  • Blog body
  • Comments box

which is because our assumption is that you read the post first, top to bottom, then you comment and maybe share. We also think that reading — not commenting — is the main dish here. This is a good model, and the interface is structured around it. It led to a much higher level of discourse (again, speaking generally) in the mid-aughts than the things that replaced it.

Facebook, on the other hand, doesn’t think the content is the main dish. Instead, it monetizes other people’s content. The model of Facebook is to try to use other people’s external content to build engagement on its site. So Facebook has a couple of problems.

First, Facebook could include whole articles, except for the most part they can’t, because they don’t own the content they monetize. (Yes, there are some efforts around full story embedding, but again, this is not evident on the stream as you see it today.) So we get this weird (and think about it a minute, because it is weird) model: you get the headline and a comment box, and if you want to read the story you click through and it opens in another tab. Except you won’t click it, because Facebook has designed the interface to encourage you to skip going off-site altogether and jump straight to commenting on the thing you haven’t read.

Second, Facebook wants to keep you on site anyway, so they can serve you ads. Any time you spend somewhere else reading is time someone else is serving you ads instead of them and that is not acceptable.

And so again, I want you to try and look at this Facebook interface with new eyes. It is literally a headline, a description, and a series of prompts asking you to react to the headline and description, and unless you think of it as a headline rating system it is really quite odd:

[Screenshot: a Facebook story card with headline, description, and reaction prompts]

And so when Facebook says things like “it’s not our software, it’s just the way users act,” well, I’m confused. This card is designed to teach users how to act, and with the help of their data scientists, it does that effectively. Because of issues around content licensing and their ad revenue model, they designed it to have users comment and react to stories they hadn’t read. I don’t think you get to turn around then and say, well, it’s the users that aren’t doing due diligence on the stories.

Facebook is one of the biggest companies in the world with the smartest UX designers in the world. If they wanted their users to be critical readers, they would have made very different choices. They didn’t, and bad things happened. And now maybe we can all do better.