I’ve been investigating Google snippets lately, based on some work that other people have done. These are the “cards” that pop up on top sometimes, giving the user what appears to be the “one true answer”.
What’s shocking to me is not that Google malfunctions in producing these, but how often it malfunctions, and how easy it is to find malfunctions. It’s like there is little to no quality control on the algorithm at all.
Other people have found dozens of these over the past couple of days, but here are a few I found goofing off yesterday while half watching Incorporated on Syfy.
Prodded with the right terms, Google will tell you that:
- Sasha Obama was adopted
- Lee Harvey Oswald didn’t shoot JFK
- GMOs make you sick
Want some screenshots? Today’s your lucky day!
Now I’m sure that Google will reply that the results are the results. And I’m sure that other people will ask why I’m being such a special snowflake and stamping my iron boot on the neck of results I don’t like. (Their mixed metaphor, not mine!)
(By the way, trivia fact: one technique of populist dictatorships is to portray the opposition as simultaneously weak and effete while being all-powerful and brutal. Just some facts for your next pub trivia night…)
The truth is, however, that I have a fairly simple definition of a fact, and I would hope that a company whose stated mission is “to organize the world’s information” would as well. For me a fact is:
- something that is generally not disputed
- by people in a position to know
- who can be relied on to accurately tell the truth
And so, not to be too Enlightenment era about this, but all these snippets fail that test. And not just fail: they fail spectacularly.
The person writing about the GMO health risks has no science background and is considered such a sham by the scientific community that when he appeared on Dr. Oz scientists refused to share the stage with him, fearing even that would be too much normalization of him.
The site writing about Sasha and Malia being adopted, “America’s Freedom Fighters”, is a site specializing in fake news to such an extent that Google autosuggests “fake news” if you type its name into the search box.
And the JFK conspiracy theory is — well, a conspiracy theory. It’s literally the prototypical modern conspiracy theory. It’s the picture in the dictionary next to the word “conspiracy theory”.
The truth is that in cases like these Google often fails on all three counts:
- They foreground information that is either disputed or for which the expert consensus is the exact opposite of what is claimed.
- They choose sites and authors who are in no position to know more about a subject than the average person.
- They choose people who often have real reasons to be untruthful — for example, right-wing blogs supported by fracking billionaires, white supremacist coverage of “black-on-white” crime, or critics of traditional medicine who sell naturopathic remedies on their own sites.
Google Should Not Be Family Feud
I never really got the show Family Feud when I was a kid. That’s partially because my parents mostly put me on a diet of PBS, which made anything higher on the dial look weird. But it’s also because it just didn’t jibe with my sense of why we ask questions in the first place.
For those who haven’t seen Family Feud, here’s how it works. The host asks you a question, like “What builds your appetite?” You try to guess what the average American would answer.
You win if your guess lands in the top five responses. A lot of people say “smelling food,” so that ranks in the list. No one says “not eating,” so that doesn’t.
Watching this as a kid I’d always wonder, “Yes, but what actually builds your appetite the most?” Like, what’s the real answer? Don’t we care about that?
But Family Feud doesn’t care about that. It was never about what is true, it was about what people say.
I don’t think Google’s purpose is to aspire to be a Family Feud game show team, but it’s sometimes hard to tell. For example, a principle of “organizing the world’s information” has to be separating reliable sources from unreliable ones, and trying to provide answers that are true. But it’s clear that in many cases that’s not happening — otherwise quality control would be flagging these misfires and fixing them. The snippets, which create the impression of a definitive answer while feeding people bad science, conspiracy, and hate speech, make matters worse.
It should not be that hard to select good sources of information. For example, there is an excellent National Academies report on genetically engineered crops that was written by a mix of corporate and anti-corporate scientists and policy analysts. Here’s the conclusion of that study on health effects:
On the basis of its detailed examination of comparisons between currently commercialized GE and non-GE foods in compositional analysis, acute and chronic animal-toxicity tests, long-term data on health of livestock fed GE foods, and epidemiological data, the committee concluded that no differences have been found that implicate a higher risk to human health safety from these GE foods than from their non-GE counterparts. The committee states this finding very carefully, acknowledging that any new food—GE or non-GE—may have some subtle favorable or adverse health effects that are not detected even with careful scrutiny and that health effects can develop over time.
That’s actually what science looks and sounds like: having reviewed the available data, we find no evidence of harm, but we are aware that, since impacts may take time to develop, adverse effects may yet appear.
If you went to a competent health sciences librarian and asked for material on this, this is what you’d get back. This report is one of the definitive statements to date on GMO safety. Because the librarian’s job is not to play Family Feud, but to get you the best information.
Google instead gives you the blog of a man with no medical or scientific training who claims GMOs cause infertility, accelerated aging, and organ damage. But “survey says!” that’s true, so it’s all good.
The world right now is in a post-truth crisis that threatens to have earth-shattering impacts. What Google returns on a search can change the fate of the entire world; it can even lead to the end of humanity as we know it, through climate change, nuclear war, or disease. Not immediately, but as it shapes public perception one result at a time.
I’m not asking Google to choose sides. I’m not asking them to put a finger on the scale for the answers I’d like to see. I’m asking them to emulate science in designing a process that privileges returning good information over bad. I’m asking that they take their place as a librarian of knowledge, rather than a Family Feud game show contestant. It seems a reasonable request.
“For example, a principle of “organizing the world’s information” has to be separating reliable sources from unreliable ones, and trying to provide answers that are true.”
I wouldn’t say it has to be. Should be, but not has to be. If your aim is to organize the world’s information well, it should be a principle. If your aim is just to organize info in a good enough way that no one else rivals you as a system, then you’d not be as bothered. As long as the money keeps rolling in.
Yet another thought provoking post. An obvious answer would be for Google to make greater use of their Knowledge Graph.
If you simply Google “Sasha Obama” the first two lines in the Knowledge Graph sidebox are
“Sasha Obama
Barack Obama’s daughter”
So Google does know proper nouns. It also knows some common nouns. The query “who are Sasha Obama’s parents” triggers the knowledge graph as well. The results are
Sasha Obama > Parents
Barack Obama (Father)
Michelle Obama (Mother)
But “adopted” is a trickier common noun. You need to know the difference between “parents” and “biological parents.” The knowledge graph does not make this distinction. The query “who are Sasha Obama’s biological parents” loses the knowledge graph.
But it seems possible that the more sophisticated information needed to process terms like “adopted” may be on its way.
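The gap described above can be sketched as a toy triple store — hypothetical data and relation names for illustration only, not Google’s actual Knowledge Graph — where a generic “parent” edge exists but a finer-grained “biological parent” edge does not:

```python
# Toy triple store illustrating the comment's point: a graph that only
# stores a generic "parent" relation cannot answer queries that require
# a finer-grained "biological_parent" relation.
# (Hypothetical data and relation names; not Google's Knowledge Graph.)

GRAPH = {
    ("Sasha Obama", "parent"): ["Barack Obama", "Michelle Obama"],
}

def query(entity, relation):
    """Return the objects for (entity, relation), or None when the graph
    has no edge of that type -- the case where the sidebox disappears."""
    return GRAPH.get((entity, relation))

# The generic query succeeds...
print(query("Sasha Obama", "parent"))
# ...but the finer-grained one falls through to None, and the search
# engine falls back to snippets scraped from arbitrary pages instead.
print(query("Sasha Obama", "biological_parent"))
```

When the structured lookup returns nothing, the system has to decide between showing no answer box at all or promoting an unvetted page to answer status — and the examples in the post suggest it too often does the latter.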
“I’m asking them to emulate science in designing a process that privileges returning good information over bad. I’m asking that they take their place as a librarian of knowledge, rather than a Family Feud game show contestant. It seems a reasonable request.”
I really love this. What if a search engine (say, Google) was based around not “organize the world’s information” but instead science, where “science” follows Alan Kay’s definition of “a set of heuristics for getting around what’s wrong with our brains.”
In this light, it’d be unacceptable (from a company values standpoint) to return the kinds of results shown in your post. Of course, I’d say they’re currently unacceptable anyway (as would probably any reader of this website!) but that their current mission of “organize the world’s information” is too spineless a value (because it permits all this incorrect bile and the company doesn’t seem to care — “hey, we’re just organizing it!” they seem to be saying).
Anyway, I get that society’s view of science is about atoms and whatever the Big Bang Theory dudes are up to (those zany scientists!), but I think technology companies need to stop cowering behind the (false) veil of “neutrality” and actually help us find a little truth.
But Google is still the king, right?