Cleanup Time

Today’s photo investigation.


The big “story” now is that the Women’s March left a big mess and that’s awful, and they should have cleaned it up. Here’s the image — it’s shocking!

Well, this is almost too easy. There are two ways to do this. If you search for the term “Women’s March signs snopes” you’ll find a Snopes article that debunks the right-hand photo, at least partially:


What the Snopes article says about that right-hand photo is that these are indeed signs left by the Women’s March, but these particular signs were left at the Trump International Hotel in D.C. as part of the protest. That’s why they are clustered together like that. Someone did have to clean them up, but it wasn’t routine littering. Additionally, the Park Service remarked that the protest was tidier than previous events. While the Snopes article gives no single-word ruling, their presentation is close to what they usually call “Mixture”: partially true, but misleading in presentation.

Speaking of cleaning up, what about the photo on the left-hand side:


Well, you see that “alamy” watermark by the guy’s waist? I’m guessing this is stock photography. And stock photography, in general, isn’t released the same day as an event, so I’m thinking this was taken long ago.

We’d like to right-click the photo and search by image, but my guess is that the two images pushed together won’t match anything. So let’s use the snipping tool.

Windows: Call up the “Snipping Tool”:


Use it to capture the piece of the photo you want to search for. Save it to somewhere you’ll remember:


Mac: Hold down Command, Shift, and 4 together and then select what you want to take a screenshot of. Save it to somewhere you’ll remember.

Now go to Google search and upload it, the way we’ve done with past photos.
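If you’re curious what that upload step looks like under the hood, the search can also be expressed as a URL. A minimal sketch, with one big caveat: the `searchbyimage` endpoint and its `image_url` parameter are a long-standing but undocumented convenience for images already hosted on the web, not an official API, and Google may change or redirect it at any time. For a local file like our snipped screenshot you still need the upload form.

```python
from urllib.parse import quote_plus

def reverse_image_search_url(image_url: str) -> str:
    """Build a Google 'search by image' URL for an image already hosted online.

    This mirrors what the right-click 'Search by Image' menu item does
    for a web image; the hostname and parameter are assumptions based on
    observed behavior, not a documented interface.
    """
    return "https://www.google.com/searchbyimage?image_url=" + quote_plus(image_url)

# Hypothetical example image URL:
print(reverse_image_search_url("http://example.com/march-photo.jpg"))
```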


Any of these results would probably be good to click on, but I pick door number three, partly because it is so specific.

And when we do that, we have good luck. We get to the stock photo purchase page, where there is a full description:


We even get the date and location. It was shot seven years ago. So no, this was not from the march.


And… we’re done. Fake-a-rooni.

Road Trip

I like showing people how to debunk viral photos for a couple reasons. First, it requires small enough action that it can easily become a habit. You don’t need to do a lot of research or have a lot of knowledge.

Second, it shows how technological affordance (in this case Google Chrome’s right-click “Search by Image” function) works to create culture. We need to make you curious about the photos you see. But that’s a whole lot easier if the technology makes checking things two steps instead of eighteen.

Finally, it’s fun.

In any case, the photo of the day:


So this is part of the whole “Bikers for Trump” meme. Bikers are supposedly coming by the hundreds of thousands to provide “security” for the inauguration.

I’ll leave the larger issues of this fascination with biker-based security aside and ask a simpler question. Is this a picture of bikers headed to the 2017 inauguration?

The answer? No. And it takes about 30 seconds to find out.

First, right-click (or Control-click on a Mac) the image and select “Search Google for image”:


The Google search — for reasons known only to Google — will assume that the best name for this image is “Jesus”. Change it to “bikers”.


Change the date (using the “tools” button) to end in 2016. If we find that this picture existed in 2016 it’s pretty clear it isn’t people headed to the inauguration in 2017. Let’s look at what we get:
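That “Tools” custom date range also shows up as a parameter in the search URL itself. The sketch below builds such a URL directly; note the `tbs=cdr:...` parameter is what the Tools UI sets behind the scenes, and Google doesn’t document it officially, so treat its format as an assumption.

```python
from urllib.parse import urlencode

def dated_search_url(query: str, end_date: str) -> str:
    """Google web search restricted to pages dated on or before end_date.

    end_date uses the M/D/YYYY form the custom-range picker uses,
    e.g. '12/31/2016'. Remember the dates reflect when pages were
    published or indexed, not when the photo itself was taken.
    """
    params = {"q": query, "tbs": f"cdr:1,cd_max:{end_date}"}
    return "https://www.google.com/search?" + urlencode(params)

print(dated_search_url("bikers", "12/31/2016"))
```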



While these are technically the dates that the pages containing the photo were published (not the publication date of the photo itself), the results are probably good enough for us to doubt the photo. We could be done here, in 30 seconds.

If you take about 30 seconds more you can do even better. On the second page of results we find a page from 2009:


We have Google Translate render that page, and find the image posted there on a Czech biker forum, the earliest appearance we can find. In the process we see that this is a photo that has been used by a number of biker groups, but is still relatively rare.

So no, this is not a picture of Bikers for Trump.






The Impulse to Dive Deeper

This comes up in my feed today:


I go to retweet it, but stop. How do I know this is true? It’s a little alarm bell that goes off now when something seems just a little too perfect.

I right click on the image, search by image.



I look at the URLs, and I see “”.  I also note the “/news/ann-arbor/” file structure, which makes me think this is local news. That’s promising, if this is an account of a local story. I click through.


This is gold. A local account from a local paper of a story that happened locally years ago. The photo has a credit, and we have more information.

And I’m more informed now. I first looked at this photo and my mind naively assumed it was the South. Not consciously, but subconsciously. As I read the story I learn more about early efforts to celebrate Martin Luther King, and gain new perspectives on how dangerous it was in some parts of the country. Actually, I do a bit more than learn, because the story makes me tear up a bit. Read it yourself, and you’ll see what I mean.

The whole process here takes a few minutes, and that’s only because reading the article takes a bit of time. The process of finding the article took ten seconds. In the end, I moved from senseless retweeting to actually learning something about our history.

I think some people think this stuff — Google Reverse Image, doing a Google Scholar search, looking up whois information on sites — is all just so *small* compared to Big Questions and Critical Thinking etc etc etc. And maybe it is.

But if you can imagine a life of these little habits, each one of which pushes you to dig a little deeper, explore a bit more, dive in a little further — I believe this is the way we start to build a better sort of society, a better sort of digital practice. We start with these habits, we move outward to questions, and deeper into reading. But without the habits, you won’t even start.








Monopolistic Digital Capitalism and Its Discontents

There is an excellent article in the Guardian by Evgeny Morozov, who gets at the heart of what we have come to call “the fake news problem”. According to Morozov, there are two “denials” that drive not only fake news (and a host of other corrosive clickbait), but our entire information environment:

The big threat facing western societies today is not so much the emergence of illiberal democracy abroad as the persistence of immature democracy at home. This immaturity, exhibited almost daily by the elites, manifests itself in two types of denial: the denial of the economic origins of most of today’s problems; and the denial of the profound corruption of professional expertise.

I disagree with some of Morozov’s points, but overall find it a compelling argument. The problem we are looking at concerns our entire information ecosystem; fake news and clickbait conspiracy are only the latest infestations of an increasingly out of balance environment.

Counterfeit experts

Calling the “liberal media” a form of “fake news” is a false equivalence. But much of our current expertise is compromised in some way, and the media is often a willing accomplice to its distribution. We see this, for example, in educational technology, where the totality of an “expert’s” expertise involves floating a startup, or working at an organization that traces its money to outfits with larger (and more dubious) agendas.

Universities that used to fund research and development are increasingly reliant on corporate money as federal research funds shrink. Think tanks with political advocacy as a core mission fund an increasing amount of the studies bandied about by the press.

I believe in expertise. I believe we need a return to valuing expertise. But the trend over the past 40 years has been to mint expertise wedded to particular political results, whether it’s the junk science of Big Tobacco or the merchants of doubt of climate change. In economics, it’s even worse, with the best economists revolving through positions at banks, think tanks, and government. Can we trust an answer from that group on whether regulation works?

What happens when powerful interests learn to print expertise and push it into circulation? The same thing that happens when you do such things with currency. With no clear dividing line between the counterfeit and the real, the value of all currency suffers. (And if you read Merchants of Doubt, you’ll see that for some entities this is exactly the point: to sow a broad distrust in the idea of any expert consensus, or in the idea of expertise at all.)

As we can see, the alternative to believing in expertise — the sort of knee-jerk nihilism we are seeing in some political quarters — turns out to be far more frightening than even our corrupted version of scholarship. But you can’t address the nihilism without addressing the environment that fostered it.

Monopolistic Digital Capitalism

Morozov notes:

The problem is not fake news but the speed and ease of its dissemination, and it exists primarily because today’s digital capitalism makes it extremely profitable – look at Google and Facebook – to produce and circulate false but click-worthy narratives.

To recast the fake news crisis this way, however, would require the establishment to transcend one of their denials and dabble in the political economy of communications. And who wants to acknowledge that, for the past 30 years, it has been the political parties of the centre-left and centre-right that touted the genius of Silicon Valley, privatised telecommunications and adopted a rather lax attitude to antitrust enforcement?

Again, this is correct. The reason these varieties of misinformation have propagated so quickly and fully is that we have developed an economy which is focused on rewarding distribution, not creation or value.

Facebook doesn’t produce content. It figures out ways to monetize the content of others. The content providers on Facebook (you and me) make nothing, and Facebook pays the providers of the content we share nothing. Facebook doesn’t benefit if you read a thought-provoking piece on the platform that you think about on your morning drive. It makes money when you scroll, skim, comment, like, share. Like a food scientist looking for the flavor profile that makes people eat 23% more tortilla chips, Facebook’s focus is not on satiety, or even curiosity, but compulsion.

It’s worth noting that this is not the only model out there. Podcasters, for example, don’t benefit from clickbait sensationalism, but from content that can maintain sustained interest for twenty to thirty minutes at a time. Longform journalism relies on your feeling of having read something satisfying to build a brand identity.

Facebook relies on you having a compulsive relationship to Facebook that devalues direct relationships to other professional content providers. And so you get exactly what you’d predict you’d get.

Morozov talks about devolution of power to the individual in a way that is unclear to me, but his comment about the click-and-share drive of social media is on target:

The only solution to the problem of fake news that neither misdiagnoses the problem nor overpowers the elites is to completely rethink the fundamentals of digital capitalism. We need to make online advertising – and its destructive click-and-share drive – less central to how we live, work and communicate. At the same time, we need to delegate more decision-making power to citizens – rather than the easily corruptible experts and venal corporations.

This means building a world where Facebook and Google neither wield much clout nor monopolise problem-solving. A formidable task worthy of mature democracies. Alas, the existing democracies, stuck in their denials of various kinds, prefer to blame everyone but themselves while offloading more and more problems to Silicon Valley.

Would breaking up Facebook and Google solve this? I’m not sure. It probably wouldn’t hurt. What I am sure of is that solutions to our current malaise are as unlikely to come out of Silicon Valley as solutions to global warming are to come out of Exxon.

Amazon Might Be Your Next News Environment

My ideal news environment would be an international mix of both small and large papers and individual reporters doing paid work in ways that rewarded those with a dedication to facts and deep analysis over spin, clickbait, and press release stenography. We’d probably get part of the way there if we could figure out a reliable financial model to replace ad-based revenue on the net.

I think a more likely result of the past year’s Hindenburg excursion, however, is that a major corporation tries to go head-to-head with Facebook in the fight for the 30-minute Morning Scroll. Others have pointed to Google or some new startup as the likely challenger, but it’s clear to me that the corporation that will have the best shot at that is Amazon.

Why? Partially because I’m not sure that people are looking for a social network. Many are looking for what a social network does, which is to fill x minutes of a weekday or weekend morning with news at least partially fed to them from peers, which they can then comment on and socialize around.

And for something like that, Amazon already has all the pieces of a news environment in place. It has hardware (Alexa, the Kindle Fire). It has publisher relationships. It has Audible. It even has a national/international newspaper of record, sort of — Jeff Bezos owns The Washington Post. Most of all, Amazon knows how to do something Facebook does not: Amazon knows how to get people to pay for things. A case in point is Amazon Prime’s new “Channels” platform, where people can manage month-by-month subscriptions to anything from HBO to the small film-geek site Fandor. It’s advancing similar efforts in spoken-word content with its new Audible Channels.

Keep in mind I am not promoting the vision below, merely thinking through where it might lead.

With an infrastructure like Amazon’s, you can imagine a world where sharing and infrastructure mix — where something shared from a subscription service is truly *shared* with others, i.e., made available free of charge. The Washington Post article or Audible show episode my friends rate up becomes available to me free of charge. Maybe it works its way into Alexa’s morning “Flash Updates” — three- to five-minute radio-like news updates that Alexa gives you on command. The Kindle Fire could show recently shared articles in a tab.

Sharing could become a bit more meaningful as well — if you’re the one in your group of friends who bought the New York Times subscription, sharing a recent article might be something only you can provide. This in turn might make subscription more valuable to you, as it has the social benefit of building relationships with friends. Sharing becomes less “You must respect my opinion” and more “I’m excited to offer this to you.” This becomes even more useful with niche subscriptions — if you have a subscription to New Scientist and I don’t, I get a benefit from your support of them, assuming you share the most interesting articles with me. That’s the sort of generosity (“I buy this because my friends depend on it”) that can be very addictive, in good ways.

Social reading could be promoted, in some of the ways Amazon has done with the Kindle: sharing annotations and comments on text, for example. Amazon could provide what Facebook has never desired to provide: a deep reading environment that balances social features with the focus reading requires. Text-to-speech could be used to provide a podcast for the morning drive of all the articles your friends had found interesting, spliced in with real audio selections from Audible.

Reading circles would be smaller than Facebook friend groups, and chosen not by who you knew or most liked to socialize with, but by who you most liked to read with. (And maybe by who had the New Scientist subscription you wanted to leech off of.)

The downsides are different from those of Facebook, but numerous. Ad-based surveillance and clickbait would be replaced by oppressive DRM and Amazon’s internal tracking. Information would not flow as freely across the network, and would be invisible to outsiders. Amazon would have incentives to promote its own brands over other publishers. And even if Amazon tried to make sharing fluid, publishers have a way of making sharing clauses bureaucratic in their negotiations: “You get three shares a week to a circle of no more than thirteen friends and they disappear after fourteen and a half hours” would send people scurrying back to Facebook.

And ultimately our taste and inclinations bring everything down — what we imagine as a neatly interwoven experience of science updates, NPR, and independent media quickly becomes a distribution system for OW, My Balls when done at scale.

The implications for openness in such a world are worth a whole other article.

On the positive side you see a world not driven by ad revenue and clicks — one that feels more restful and productive. One that gets creators the money (and health insurance and security) they need to continue to do high quality work, to check things for accuracy and investigate issues rather than republishing corporate press releases.

Again, this is just a thought experiment, but it’s interesting, right? It’s a recognition that the battle is over the hour people spend in the morning reading Facebook-provided stories and clickbait, and maybe the hour they spend at lunch or in the evening. Amazon has a well-developed infrastructure to go head to head with Facebook there. No one else — not even Google — comes close.

Finding an Eagle Attack

So nobody took me up on my trace-a-viral-photo challenge. I’m disappointed in you all. It’s like you have jobs or something.

In any case, I’ve walked through the solution to one of the images in a video. For what it’s worth, I recorded the video without sound so that I could concentrate on what I was doing and then went back and narrated, which means the actions and words are not precisely synced. In the original video I also went much further and nailed down the precise time of the event, the photographers involved, and so on, but here I wanted to stop at a simple solution most people could do before retweeting.

So here is the photo.


The questions were:

  • Was it staged? Photoshopped?
  • Was it a National Geographic videographer who was attacked?

We find that it wasn’t staged or photoshopped, but that there is no evidence that the videographer was from National Geographic. Here’s how we do that in 90 seconds.

As I said, the original video went longer and gathered more information about the event and the people involved. But it’s this first 90 seconds that is crucial.

I’ll also say that this turned out to be an easy one, since the event was covered by Reuters, a reliable news agency that publishes stories worldwide. But the techniques on more difficult material look the same.

Today’s Challenge: Trace Viral Photos Upstream

This tweet appeared in my stream yesterday.


I used the first photo here (guy with feet on fire) as an example in my evolving course materials on how to trace things to a source on the internet.

I tracked down the other photos as well. It took barely any time at all. Maybe ten minutes for all four combined? (And only because I got stuck on one of them for a bit.)

So challenge: can you track down the source of these other photos and tell me if they are real or staged, where they were taken, and if they involved National Geographic shoots? (Two hints: confirming at least one of the photos will involve using Google Translate to translate a page — so look for the translate option when you hit those pages. Also if you get overwhelmed by results, use Google’s date filter to show you only the oldest photos).

If you want to know how to do this in less than a few minutes for each photo, go ahead and read an early draft of the first chapter of the DigiPo course materials.

Photos follow:




Checking Internet-Based Claims

I’ve been working over the break to boil down how to check Internet claims into something short and active. Short, because longer prescriptions don’t work. Active, because we are trying to build habits.

Here’s what I’ve got so far.

  1. See if someone has already done the work. Some people call this the “Check Snopes First” rule, but there’s actually a broader array of sites you can check as well. My guess is if people checked Snopes and other sites first then 80% of the most pernicious stuff would disappear from our feeds overnight.
  2. Go upstream. Here’s a maddening thing with claim-checking: over my long career I have watched student after student looking at an article on some third-tier clickbait aggregation site and trying to determine its validity by evaluating that particular site. That site doesn’t matter. Go upstream and get as close to the original source as possible before starting your analysis.
  3. Look at what others say about the source. Once you find your site upstream, it’s time to get off it again. See what other people say about your source. Use tools such as whois, Google Scholar, and SourceWatch to find out who is behind the information on the site, what their agenda is, and what expertise they bring to the table.
  4. Get a second opinion. But not from the same doctor! So your final step is to look for corroborating (and disproving!) evidence. But here’s what happens a lot of times: people see a claim (“Ford Motor Company supports Black Lives Matter group”). They trace it to the source (rare, but sometimes happens). Then to verify it they search on the claim and find there are dozens of stories out there talking about this. The thing is, if all these stories stem from the same original source, they can’t be used to verify that source. So as you scan search results, be looking for a source that is going to bring in additional information, or approach the question from a different angle.
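The first step, “see if someone has already done the work,” is simple enough that you can sketch it in a few lines of code. The search-URL patterns below are assumptions based on common site conventions (the `?s=` form is the standard WordPress search pattern Snopes has used), not stable, documented APIs:

```python
from urllib.parse import quote_plus

# Sites that may have already done the work. The URL shapes here are
# assumptions based on observed site conventions, not documented APIs.
FACT_CHECK_SEARCHES = {
    "Snopes": "https://www.snopes.com/?s={}",
    "FactCheck.org (via Google)": "https://www.google.com/search?q=site%3Afactcheck.org+{}",
}

def previous_work_urls(claim: str) -> list:
    """Step 1 of the checklist: places to look before sharing a claim."""
    q = quote_plus(claim)
    return [pattern.format(q) for pattern in FACT_CHECK_SEARCHES.values()]

for url in previous_work_urls("Women's March signs left behind"):
    print(url)
```

The remaining steps — going upstream, reading laterally, finding independent corroboration — resist automation precisely because they require judgment about sources, which is rather the point.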

There are some bigger understandings that inform these actions. One thing I’m thinking about a lot nowadays is how the level of syndication and rampant “reporting on reporting” creates the appearance of broad consensus within hours of an original claim. I mean, you’re seeing the same claim on literally hundreds of sites. It must be true, or at least valid, right?

Jon Udell is working on a Chrome extension that encodes some of the process we’re finding works most consistently; you can see that work here. As I said, we’re still trying to get this down to something that almost becomes muscle memory. We don’t believe you’ll be able to fully investigate a site off of a recipe, but to borrow a term from Jon, I think we can partially encode some “strategies for internet citizens” as habits.