I still don’t know quite what I’m doing with my newsletter, twenty weeks in. I’ve been writing quite a bit there. But should I also put that stuff on the web?
Usually I do a series of long pieces and quick hits for it. But yesterday I did a quick round-up of news from the past few days and sent it out. I figured I’d put it here and see whether this sort of thing is worth publishing on the site as well.
The headlines below aren’t “fake”; the title is a reference to a New Pornographers song from 2000. Andrew Bird covers it here. It ends: “I filled the whole front page / With the catchiest words I could find / Fake headlines, believe me come back / Fake headlines, believe them come back.”
Now our stories.
Fake polls are a real problem. We’re on the road to fake everything, apparently.
Corporate-funded medical journals are leading innovators in fake research.
Bot armies may (stress on “may”) be targeting journalists, trying to knock them off Twitter by engaging in openly bot-like behavior in support of them. That behavior triggers Twitter’s spam protections, locking the journalists out of their accounts. This is unproven and highly speculative, but it’s something to watch.
Citing a need to build a team with a “digital first” mindset, the L.A. Times ousted its only person of color on the masthead, as well as a number of other veterans. Here’s hoping that new digital-first team will also be a diversity-first team.
WhatsApp has rolled out verified badges. We need to start thinking about media literacy on WhatsApp, which will ultimately touch more people than Twitter, at least directly. WhatsApp is already a source of misinformation and hoaxes in India, Malaysia, Haiti, Kenya, Spain, and Indonesia. In the U.S., WhatsApp could become a vector of disinformation for the segment of teens and emerging adults who have started to abandon daily reading of Facebook. Maybe in 2018?
Speaking of fake news in other countries, AltNews tracks and debunks fake news in India. Follow them to get a more global perspective.
Why doesn’t Instagram get flooded with fake news? Well, there are no groups, and reposting is not part of the basic interface. Some people think that was intentional. There are lessons here.
Headline from 2010: Facebook Introduces Community Pages, Hopes To Make Them “Best Collections Of Shared Knowledge.” What happened to this vision? Anyone know?
You need to read the Unleashed article by Cass Sunstein. More on this later, but read it now.
Researcher Kate Starbird notes that FEMA is addressing Harvey rumors on its front page and asking people to help correct them online. This is important because rumors in crisis events quite literally get people killed.
Robert Fanney is a great Twitter follow on climate change. He recently expressed frustration with the idea that we can’t discuss the causes of disasters in their wake: the “now is not the time to say I-told-you-so about climate change” Harvey argument. A holistic treatment of disinformation needs to look not only at how misinformation spreads, but also at how good information is suppressed. Tragedy policing is one way that suppression is accomplished, and it deserves serious study and attention.
Kris Shaffer runs the Disinformer newsletter, which combines a focus on misinformation and propaganda with digital humanities. Read it.
I wrote this post on teaching students to read Google searches. Its big contribution is probably my 300+ item Google question bank: questions you can plug into Google and then have students evaluate the results.
Also, I should mention: we’re running 10 sections using Digipo at WSU Vancouver alone. This has spread like wildfire, with info-environmentalism now the controlling idea.
So does this stuff belong in the newsletter or on the site or both? Let me know.