Instead of letting people vote on news, Facebook should adopt Google’s rater system

A message I sent to a newsgroup about Facebook’s recent proposal to have users rate sites as a solution to its news quality problem. To my surprise, I find myself suggesting they should follow Google’s model, which, while often faulty, is infinitely better than what they are proposing.

==============

(Regarding the announcement.) I think there’s a better, time-tested way of doing this, one that doesn’t rely on users rating individual sites but still benefits from expert analysis and insight: use a modified version of the Google system.

Most people misunderstand what the Google system looks like (misreporting on it is rife), but the way it works is this: Google produces guidance documents for paid search raters, who use them to rate search results (not individual sites). These documents are public, and because they are public, people can argue about whether Google’s take on what constitutes an authoritative source is right.

The raters rate search quality against the documents, and engineers tune the ranking code to get the score up, but the two pieces stay separate.
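
To make that separation concrete, here is a toy sketch in Python. Everything in it is a hypothetical stand-in of mine, not anything Google actually runs: the raters become a scoring function applied to whole result sets, and tuning the code to get the score up becomes a parameter search that never consults a list of named sites.

    def rater_score(result_set):
        """Stand-in for human raters applying the published guidance docs.
        Toy rule: reward result sets where sourcing is marked as expert."""
        return sum(r.endswith("[expert]") for r in result_set) / len(result_set)

    def rank(query, boost_expert):
        """Stand-in ranker; engineers may only adjust its parameters."""
        results = ["advice-a [expert]", "advice-b", "advice-c [expert]"]
        ordered = sorted(results, key=lambda r: r.endswith("[expert]"),
                         reverse=boost_expert)
        return ordered[:2]

    # Engineers try variants and keep whichever one the raters score higher.
    scores = {flag: rater_score(rank("medical query", flag))
              for flag in (False, True)}
    print(scores)  # the variant surfacing expert-sourced results scores 1.0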

It’s not a perfect system, but it provides so many things this Facebook proposal doesn’t even touch:

  • It defines a common set of standards for what a “good” result looks like, without naming specific sources
  • It provides a degree of public transparency that Facebook doesn’t come close to matching
  • It creates incentives for publishers to act in ethical ways; e.g. high-quality medical or financial advice (“your money or your life” categories) must be sourced to professionals
  • It separates the target from the method of assessing whether the target is being hit
I’m not saying it doesn’t have problems; it does. It has taken Google some time to understand the implications of some of its decisions, and I’ve been critical of them in the past. But I am able to be critical partly because we can reference a common understanding of what Google is trying to accomplish and see how it falls short, or see how guidance in the rater docs may be having unintended consequences.

In such a system, Facebook would hire raters to rate feed quality (not individual sites) against criteria that experts have decided are characteristic of quality news feeds (and that readers by and large agree with). That would probably mean ascertaining whether sites included in the feed had the following desirable attributes:

  • Separation of opinion, analysis, and news content, with opinion in particular clearly marked
  • Sponsored content clearly marked, and comprising a small portion of the overall site
  • Syndicated content clearly identified
  • Satire pieces marked unmistakably as satire
  • News stories clear about the process and methods by which the news in an article was verified (e.g. “Kawczynski declined to be interviewed Sunday, but in posts on his website and on Gab…”)
  • A retraction policy, and an email address to send noted errors to
  • Headlines that match the content of the attached article in tone and meaning
  • Descriptive blurbs in Facebook that accurately describe the content of the article
  • Pictures that are either related to the event or marked as stock or file footage, with descriptive and accurate captions
  • Links, where appropriate, to supporting coverage from other news outlets
  • A clear and accurate about page that says who runs the paper
  • A lack of plagiarism, e.g. content not marked as syndicated passes the “Google paste test”
Raters would rate the quality of the sources showing up in their feed, and Facebook engineers would work on improving feed quality by getting the ratings up. No one gets banned or demoted by name. Or promoted by name.
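
Here is a rough sketch, again in Python, of how such rubric-based feed rating might work. The field names, equal weights, and 0–1 scale are illustrative assumptions of mine, not anything Facebook has specified; the key property is that scores attach to the feed as a whole, never to a named site.

    from dataclasses import dataclass, astuple

    @dataclass
    class ItemRating:
        """One rater's checklist for one item in a sampled feed."""
        opinion_clearly_marked: bool
        sponsored_content_marked: bool
        satire_marked: bool
        headline_matches_content: bool
        has_retraction_policy: bool
        passes_paste_test: bool  # no unattributed copied content

    def item_score(rating):
        """Fraction of rubric criteria the item satisfies (equal weights)."""
        checks = astuple(rating)
        return sum(checks) / len(checks)

    def feed_score(ratings):
        """The feed is scored as a whole; no site is banned or boosted by
        name. Engineers tune ranking to raise this aggregate number."""
        return sum(item_score(r) for r in ratings) / len(ratings)

    sample = [ItemRating(True, True, True, True, True, True),
              ItemRating(True, False, True, False, True, True)]
    print(round(feed_score(sample), 2))  # 0.83
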
The role of experts and the public would be to clarify what they trust in news. In fact, the Trust Project has already done much of the work that would go into feed quality rating docs. I summarize their work and my simplification of it here:

Again, the rater guidance documents get published. We continue to argue over whether the guidance is correct and whether the implementation is meeting the guidance or being gamed. We still raise holy hell about misfires and get them to rethink guidance and code.

The approach Facebook is currently proposing, on the other hand, is essentially nihilistic, and like many nihilistic things it may have current utility (and may even work temporarily), but it provides a lousy foundation for dealing with the problems to come.
Mike.

P.S. By and large, I think you will find both that the public would rather trust expert opinion on what constitutes quality than trust their neighbor, and that the public, both right and left, more or less agrees with the practices in the bulleted list above.


