Establishing the Significant History of a Newspaper on Wikipedia

Ultimately one of the prime goals of the Newspapers on Wikipedia project (#NOW) is to make sure that significant local publications have an infocard, and thereby are more likely to generate a Google panel in the search results.

But that’s not the first, or hardest step.

The first, and more difficult, step is to establish the significant history of the given newspaper so that the article meets notability requirements and will not be deleted. Once that is accomplished it is relatively simple to go back and add infocards.

So how do we do that? And what does it look like? It varies from paper to paper — but here are some resources you can use and examples you can mimic.

Start With the Library of Congress Record

Here’s the main thing to understand about Wikipedia – a primary source cannot establish its own notability. So that long history in the paper’s about section? You can cite that, but only after you’ve established much of the history and significance through other sources.

The Library of Congress has a project about historical newspapers called Chronicling America and one of the nice results of that is that they have bibliographic records on many papers. This is a good starting place because it not only provides you a nice authoritative Library of Congress cite for your newspaper page, but it also alerts you to different names the paper published under, who the owners were, and whether it was preceded by a related publication.

It’s worth taking note of all these names and people, as they are going to be search terms for you. You can also start to build the chronology of the paper. If the paper is the result of a merger, you may want to cover the history of the previous papers it grew from in the article.

You can do a fancy LOC search using “site:” syntax or use their own internal search (which I found a bit lacking). But for most papers, a simple search on the paper’s name gets you where you want to go.


When you get to the LOC page, note the first date (or year) of publication, the frequency, the publisher, and any preceding titles as you’ll work this all into your article.

Because different papers sometimes have similar names, you’ll also want to check the town of publication and the publication years. Occasionally the LOC will have multiple records for the same paper name in the same town, and you have to find the right one.
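
As a sketch of what a scripted lookup might look like, here is a minimal Python helper that builds a title-search URL against the Chronicling America directory. The endpoint and the `terms`/`format` parameters reflect the public API as I understand it; treat any assumption about the response’s field names as something to verify against the LOC documentation.

```python
from urllib.parse import urlencode

# Chronicling America newspaper-title search (public LOC API, as I
# understand it; verify the endpoint and parameters before relying on it).
BASE = "https://chroniclingamerica.loc.gov/search/titles/results/"

def title_search_url(terms):
    """Return a JSON search URL for newspaper titles matching `terms`."""
    return BASE + "?" + urlencode({"terms": terms, "format": "json"})

url = title_search_url("sand mountain reporter")
# From each matching record, note the title, place of publication,
# start year, publisher, and any preceding titles for your article.
```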

Search Google Books

A useful way to establish notability is to search Google Books. For instance, through Google Books I learned the Wellesley Townsman published one of Sylvia Plath’s early poems, as well as an obituary that blamed her death on viral pneumonia. That’s interesting, and also adds to notability. I also learned that the Griffin Daily News played a significant role in stoking racial resentment — locally and nationally — in the 1890s.

What you’re looking for in these accounts is not a book sourcing a fact to these papers, but the papers either playing a role in events or being covered due to their importance. So if a book just cites the paper as a reference — well, that’s not really notable. But if it talks about the paper directly — maybe about the sale of it, or how it was the only paper to support a certain candidate for Governor, or when it went to a daily publication schedule — that’s something to throw in the article. You might also check whether notable people worked for the paper at one time, and go to their articles and link them to the paper.

Google Books also does auto-citing pretty well — throw the link into the cite box and it builds the citation for you. Don’t pull the URL from the location bar, however, pull it from the link up top after hitting “clear search” — this should provide a link directly to the cited page.


Some older Wikipedians get a bit grumpy about autocites — they don’t look as nice, and when multiple cites are used they don’t compress into nice “Ibid’s” etc. I’m sympathetic, but it’s not something you should worry much about. Using autocite maximizes your research time and provides direct links to evidence, so on the whole it’s a good thing.

Historical Newspaper Archives Will Save Your Life

The most useful resource for finding out the history of a paper is other contemporary papers. Start by checking if your university’s library has subscriptions to newspaper archive search engines.

  • Nineteenth Century Newspapers
  • ProQuest Newspapers
  • Nexis Uni

If you don’t have the access you need from your institution or local library you might want to pay for a personal account somewhere. The “Publisher Extra” subscription level of Newspapers.com costs $75 for six months, and a NewspaperArchive account is $50 for six months. Both are excellent sources, especially for small local papers.

Even for these accounts, you may not have to pay any money at all — Wikipedia provides a number of free NewspaperArchive accounts to Wikipedians who have a significant edit history and no institutional access, as do some local libraries.

The amount of hidden history that you can find in news archives is extensive. Here are my recent clippings on J. J. Benford, editor and initial publisher of the Albertville Herald in Alabama:

In there we have the entire early biography of this editor. We’ve also got various articles on the merger of the Albertville Herald with the Sand Mountain Reporter.

Clippings are also shareable with the general public which makes them very useful on Wikipedia.

Here’s a shot of NewspaperArchive with an article on Jesse Culp, the editor of the Sand Mountain Reporter in 1961 when the article was published:


Now, it’s about Culp speaking to the PTA, but we learn that he had been editor of the Sand Mountain Reporter since it spun up in 1955, and that — like J. J. Benford (who ran the other town’s paper) — his background was in agricultural radio reporting. We also get a nice connection (and therefore link out) to the WAVU Wikipedia article.

And here’s a Nexis Uni page on a purchase of the paper in 1999:


Pulling It Together

When you write your article, these bits of research are used for small parentheticals, but they get cited as well. For instance, here is a page for the Sand Mountain Reporter I drafted this morning out of these references:


If using historical newspaper archives, links should go to “clippings”, not pages, per both Wikipedia and the archives. Clippings in these systems are a way to share specific articles publicly, and linking to the clipping — which is not behind a paywall — allows others to check your work and the accuracy of your citation without needing an account.

In this case we weren’t able to find anything worthwhile in Google Books about this paper, but by getting down the history — even of this rather small paper — we’re able to show its long and important history in the community. And we’re able to do this without citing the paper itself, instead relying on the Library of Congress and four other local papers to tell the story here.

It would be nice if we had a Google Books story or two — a Sylvia Plath-style story, or even a mention in a book on local Alabama history. But I think you can make the argument to those who ask that the long and continued coverage of this paper by other papers shows its importance to the region. While the paper may seem a little slight, it is not simply a weekly shopper of pay-to-play features, but a true area newspaper with a significant history.

After you’ve established notability, you can go ahead and write up an infobox on the page (or let someone else do it) giving the most current stats of the paper, and think about what needs to be in those important first couple sentences of the page. Or fix citations — I notice I didn’t note the page of the paper here, which I should do. But starting with the history and significance will get you off on the right foot.








Announcing the Newspapers On Wikipedia Project (#NOW)

TL;DR: I am announcing a project to get students and faculty to produce 1,000 new Wikipedia articles on significant English-language local newspapers by October 12, 2018. This will represent a substantial increase in Wikipedia coverage of these papers (An increase of 1,000 U.S. papers would be almost a 40% increase in U.S. coverage, for example). Join by doing it and telling me.


I’ve just read a stunningly good paper from Emma Lurie and Eni Mustafaraj. The paper is chock full of all sorts of insights for both the media literacy teacher (which of course I am) and the search UI/UX designer (which I was) and it feels like it was written just for me, to help me get better at what I do.

The core of the paper is this — Lurie and Mustafaraj nudged students with prompts into using lateral reading on sources, and then watched how they performed. In doing so, they were able to identify the ways in which untutored lateral reading succeeds and how it fails. This close examination yields a variety of insights in what search platforms, media literacy teachers, researchers, and others can do to better support readers in this process, as well as noting some pitfalls of current online literacy advice (including some of mine).

More on the whole paper later: here’s the part that matters right now. One of the things that hinders students is the lack of decent Wikipedia documentation of local news sources. This, in turn, affects the information that comes up when students do lateral reading on a source, particularly in the Google panels, which readers notice but often find missing or less than helpful for smaller sources.

The researchers even quantify the issue: the USNPL lists 7,269 news sources in the U.S. Only 2,702 of those produce “knowledge panels” in Google, with the likely reason for the lack of a panel being the lack of a well-developed Wikipedia page. Even aside from the knowledge panel problem, the lack of decent pages for local news means that students will not always be able to find any objective information, even on a deeper search.
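
If you want to sanity-check those figures (and the “almost a 40% increase” claim in the TL;DR above), the arithmetic is simple. The inputs are the numbers cited from the paper; the back-of-envelope math is mine:

```python
# Back-of-envelope check of the coverage numbers cited above.
listed = 7269    # U.S. news sources listed in the USNPL
panels = 2702    # of those, how many produce a Google knowledge panel

coverage = panels / listed       # share with a panel (and likely a page)
print(round(coverage * 100))     # -> 37, roughly the "38%" cited elsewhere

# 1,000 new articles measured against the ~2,702 existing pages:
print(round(1000 / panels * 100))    # -> 37, i.e. "almost a 40% increase"
```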

What struck me though was that this is a solvable problem. And it’s one our students can help solve.

Students Can Learn About News While Learning About Wikipedia

Many faculty want to have their students work in Wikipedia to better understand how Wikipedia works, and to provide their students with authentic digital research projects. But finding articles that their students can add and work on is not always easy — notability requirements often lead to the deletion of student created pages.

But new newspaper and radio articles, provided they are on entities with a significant history, have a bit of an advantage in Wikipedia. Here are the notability guidelines for these sorts of articles:

Notability is presumed for newspapers, magazines and journals that verifiably meet, through reliable sources, one or more of the following criteria:

  • have produced award winning work
  • have served some sort of historic purpose or have a significant history
  • are considered by reliable sources to be authoritative in their subject area
  • are frequently cited by other reliable sources
  • are significant publications in ethnic and other non-trivial niche markets

Publications that primarily carry advertising, and only have trivial content, may have relevant details merged to an article on their publisher (if notable).

These guidelines prevent you from adding your neighborhood shopper to Wikipedia, or advertising your new blog of local news. But that’s not the gap we’re looking to fill. Most papers on the USNPL that do not have Wikipedia pages have existed for decades or centuries; many are logged in the Library of Congress as significant historical publications. If students learn how to demonstrate that significant and extended history of these publications through the use of secondary sources, they should be able to easily meet notability guidelines for the papers we care to log.

In the process students will learn a number of things that will help them better evaluate news. They’ll understand the nature of local reporting, the history of it, and its importance even in an increasingly digital world. They’ll learn the names of various journalism awards and their reputation, to better help them to evaluate quality.

And they’ll also see some ugliness as well, and understand the ways that local media has been used for ill. In a brief spate of edits over the weekend, the random papers I pulled included one that advocated and joked about lynching at the turn of the century, and one that acted as the unofficial mouthpiece of the early Ku Klux Klan. I also noted a number of newspapers that served persons of color that were not represented in Wikipedia.

And of course, follow any paper’s history into the 1990s and early 2000s and you’ll find the same story again and again: local papers being bought out by often distant corporations with no connections to the community.

Why Historical? Why Newspapers?

To be quite honest, this is strategic on our part. Wikipedian deletionists worry — quite rightly — that small and trivial local publications may use Wikipedia as an advertising space, both to drop their promotional copy into and to juice their Google results. Because papers with no significant history must demonstrate notability in other ways, the battle about notability can become quite contentious.

We’re choosing an easier task to start — documenting existing newspapers with a significant history. We aim with this first project to pick papers 25 years old or older that have been noted repeatedly in other media due to their historical or community significance. We put “Historical” in the title to clearly signal to Wikipedia admins and others our good intentions and our prime argument for notability.

Why October 12?

It’s a safeguard. If we are not at our goal by October 10, I plan to go to the Open Education conference in New York and shame everyone into helping us cross the finish line.

You Join By Doing It and Telling Me

Here’s what you do.

Set Up Your Profile Page

First, if you don’t have a Wikipedia account get one.

Second, make a profile page on Wikipedia for yourself. Write a few paragraphs about yourself. You can be pseudonymous if you want, or let it all out there.

Add some text like this somewhere on the page (edit it to reflect you) and link to my profile page so I know about you:

Newspapers On Wikipedia Project (#NOW)

One of my current interests is improving the coverage of historic local newspapers in Wikipedia. One of the best ways for readers to sort out whether a newspaper is real or fake is to check Wikipedia to see if it has an article (and what that article says). Having newspapers documented is also crucial to Wikipedia internally, since many historical claims are sourced to local papers, and editors require context on the nature of the publication the material appears in.

Yet only 38% of local papers have a Wikipedia page. The problem is particularly bad with weekly papers in small towns, even though many of these papers have publication histories going back to the 1800s. I am participating in a project initiated by Michaelacaulfield in this area to improve the quality and reliability of local news sources, particularly historic newspapers.

Again, link to my userpage as above so I can find people doing this. Eventually we’ll get a WikiProject page set up, but this will work for now. Tweet any edited or created pages to Twitter’s #NOW hashtag.

Putting this information on your page will help people reviewing your edits and creations to understand what you are trying to achieve.

Gnome Before You Create

Don’t jump right into creating pages. Spend a week or two visiting existing pages on local newspapers, seeing how they are set up, and making minor improvements in language, citation, or formatting.

This is called gnoming, and a history of gnoming demonstrates you are interested not in self-glorification or grinding a specific axe, but making Wikipedia better. WikiGnome actions are listed here, and doing them will make you a better writer and editor of articles. You can find some articles to gnome here.

More importantly, however, a person who has made useful edits and additions to a wide variety of other people’s pages builds social credit, and is less likely to have their pages deleted without a conversation first. You build the credit gnoming, you spend it creating new pages.

After gnoming, you might want to expand further out into making significant additions to pages.

Don’t Create Your New Page Until You Know How to Establish Notability

Once you’ve gnomed for a week or two, you’re ready to create a new page. You can go to the USNPL list or another list of local newspapers and start looking for ones that aren’t covered. But you should be careful before creating them, since it’s important to get new pages right on the first go.

If you’re a seasoned Wikipedian, you know how it works:

  • Secondary sources (multiple if possible) to demonstrate notability.
  • Use Google Books to find strong supporting links.
  • Reference information on collection records at the Library of Congress.
  • If you have an archive account, search for coverage of your paper by other papers.
  • See if the paper is linked to any otherwise notable figures.
  • Try to get down the complete history of ownership — to the extent that changes in ownership were noted publicly by significant secondary sources, this establishes notability, and it will also get a broad array of citations into the article from the get-go.

Do all this before you launch, because in my opinion pages fare better when they’re right from the start.

If you’re not a seasoned Wikipedian, don’t fear. I’ll do up a video guide on how to write a notable stub on a local historical newspaper soon.

More soon.

Civix Releases New Online Media Literacy Videos

I worked with Civix, a Canadian non-profit, to do a series of videos showing students basic web techniques for source verification and contextualization. I had boiled it down to four scripts running six minutes apiece; Civix and their production partner managed to cut them down to about three minutes each after filming.

Here’s the introduction, which features a bit of narrative around Sam Wineburg and Sarah McGrew’s work and how it informs what we do:

This study came out after Web Literacy for Student Fact-Checkers, but it’s been one of the biggest influences on the continued development of our curriculum. It’s hard to summarize a study in six minutes — and my six minutes were cut down further by editing to three — but I think the presentation of the study survives here.

The second video encourages students to investigate the source before they invest time in reading it (with a heavy lean on Wikipedia as a first stop).

This is one example of how we’ve honed what we teach over the past 18 months. Initially, we gave students a method for searching for information on a site by doing a search like [domain.com -site:domain.com]. This finds coverage of a site that is not from the site itself.

It’s a great search strategy! And people loved it in workshops — the secret language of search!

But when I’d talk to faculty who were in the workshops a few months later they would say — hey, how’d that trick go again? I want to show it to my students.

Maybe I worry too much, but my guess is if the faculty member has to ask me for the trick two months later (“It’s site something something, or negative site, right?”) then I doubt their students are going to hang on to it either. So we went from the researcher-like [domain.com -site:domain.com] to “just add Wikipedia to the omnibar.”
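
Mechanically, the researcher-style search is a tiny transformation on the URL. Here is a sketch; `lateral_query` is a hypothetical helper name, and the crude `www.` stripping is my simplification (real-world domain handling needs more care):

```python
from urllib.parse import urlparse

def lateral_query(url):
    """Build the 'coverage from everywhere but the site itself' search
    string: the domain, minus results from the domain itself."""
    host = urlparse(url).netloc
    domain = host[4:] if host.startswith("www.") else host
    return f"{domain} -site:{domain}"

print(lateral_query("https://www.example.com/some/story.html"))
# -> example.com -site:example.com
```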

It’s the same with a lot of our techniques. We started with a book of more than two dozen verification techniques. We’ve got the core down to five starter techniques associated with three moves.


It reminds me of the old joke about the student who goes up to the expert and asks, “How long did it take you to write that speech?”

“Ten years.”

It’s not a funny joke, but it’s applicable. Teaching this to faculty constantly over the past 18 months has boiled this down, down, down.

Video three reminds students to find the original source of reporting.

There’s so much more to say about this — of course in some cases intermediate reporting sources add additional verification or analysis, etc, etc. But the social web often pushes us low-quality re-reporting of higher quality originals, and the propaganda techniques of leveling and sharpening distort original stories along the way. Finding the source is an essential skill.

Finally a video on finding trusted sources:

For some, the “rely on established sources” piece of this is going to be the most controversial bit of advice. It became a bit more highlighted in the editing here than in my original script. When we drop this into a longer sequence of classes we have discussions about the necessity of non-established sources to a dynamic ecosystem, and about the worry that algorithms can filter out significant minority points of view.

But I actually like the video as it came out here. When it comes to news reporting (which is the subject of the Civix/NewsWise videos here) we want publications with a history we can evaluate, and reporters who have either learned their craft from journalistic culture or long experience. That can come from excellent non-profits like ProPublica, or for-profits like the LA Times. Even advocacy journalism like David Corn at Mother Jones. But that culture takes time to develop, and the truth is that older publications often have a level of rigor on this that newer online initiatives can’t touch. If I have to choose between sourcing a fact to hip new online mag Babel or my hometown newspaper, I’m going to choose my hometown newspaper.

The bigger point in this video is the “broadening” technique that gets students out of the habit of just reading the version of the story that comes to them. The “Search Google News” habit reminds people that when they learn of an interesting story through their feed, they are not required to read that version of the story. They can go back and fish in Google News for a higher quality story on the same topic before they invest their time.

This is a simple realization that most people still haven’t had. As I say — it’s the internet — you’re not stuck with that one story that comes to you. By going out and actively choosing a better story you will not only filter out false stories but also see the variety of ways an event is being covered.

Anyway, thanks to the Civix people for some great work — we had a deal that I’d help out on the videos as long as I could sculpt them so they would be useful for our Digipo Initiative classes as well. It worked out great and we intend to use these in our own work. We hope you will too!



Everything is depressing and messed up so let’s take a lunch break to talk about neartopias.

If you look up the phrase “neartopia” on the web you’ll find a couple of solitary pages of someone proposing an anarcho-libertarian island government, but that’s not what I mean in my use of the term. Instead, I mean a particular brand of sci-fi — and speculative fiction more generally — that presents a world considerably more socially just and personally fulfilling than the one we currently inhabit, in a way that seems at least partially achievable.

Think Kim Stanley Robinson’s Pacific Edge. Le Guin’s Dispossessed. In film, examples are less common, but a recent neartopia would be Black Panther’s Wakanda.

Neartopias are not utopias. They have problems. They have to have problems because problems are what drive plots. And on another level problems are just interesting in a way that non-problems are not. They also aren’t post-scarcity Star Treks, or visions of a perfect 6030 A.D. They are “near”-utopias both in the sense that they lack perfection and in that they seem near-enough to be achievable.

Neartopias also have blindspots. Each neartopia pulls from cultural assumptions that will eventually — like all things — be revealed as problematic. The Golden Age of sci-fi produced some neartopias, for instance, but had a relationship with technological progress and industry that was — well, let’s say underdeveloped.

But these visions are fundamentally different than dystopias, which serve as a warning, which map a world we need to try to route around.

I was very into dystopias for a while. But like a lot of others — see, for example, the solarpunks — I’ve worried over the last few years about their efficacy as a tool for social justice and change.

Take Minority Report, a dystopia that imagines a world of constant surveillance and personalization, one where people are judged to be guilty before they commit a crime. A warning, right? Except, somehow when run through capitalism it becomes a blueprint for an IPO.

Worse yet are the “Utopia is secretly a Dystopia” plots, from The Giver, to Gattaca, to… well, just about any film that starts out with a utopian vision. These films often take as their target inequality or other current issues. More common formats recently are “the utopia built on the backs of the poor or non-elite” and “the government that provides the good life in order to control you.” (Both of these have spawned a thousand YA dystopian series.)

Those are important messages, but I wonder if they get garbled a bit in translation. The message of the Secret Dystopia seems to be that social and technical progress is always bought at the expense of someone else and that government-provisioned services are always bought at the expense of freedom. But while these are biting critiques of our current moment, it’s important to remember that these zero-sum patterns are not laws of physics, but rather products of a system designed to produce unequal outcomes and quell dissent. In using the future to critique our current reality, dystopias often serve to reinforce fundamentally conservative viewpoints, treating constructed elements of our current system as eternal truths that will replicate infinitely into the future.

As I’ve gotten deeper and deeper into the disinformation environment I’ve thought more and more about the role that art needs to play in moving forward a society that is overwhelmed by the sludge of our current politics and culture. And I keep coming back to this idea of Solarpunk, and, more broadly, neartopias:

To many, solarpunk represents an ignition for activism. “The great programs of the 20th century often began as fictional proposals, from moon landings to Social Security,” says Flynn. “It’s time we returned to higher ambitions for what we can do as a society.” When Ulibarri picks up a book, she’s looking for an escape that isn’t as familiar as dystopia is. “Maybe it is escapism, but it gives me a sense that things can get better,” she says.


Paris Smart City, by Vincent Callebaut

There’s not a big finale here — just lunchtime musings. But I’m curious how many other people have a hunger for this new vision of science fiction, from Wakanda to Solarpunk. I can’t be the only one. It’s time to show a future where technological progress is not bought at the cost of the oppressed. Where government can be a tool for good — not House of Cards with more computing power. Where we move beyond this current turd of a result and into something better.

I’m in the market for neartopias — if you have some favorites I should read, throw them in the comments. Are there significant other strains outside of Solarpunk I should know about?


Google’s Big AI Advance Is… Script Theory?

Like many people I watched Google’s demo of their new Android system AI calling up a hair stylist and making an appointment with trepidation — was this ethical, to not disclose that it was an AI?

But now that the smoke has cleared, I’m realizing something a bit more disturbing. After years of Big Data and personal analytics hype, the advance that Google demonstrated is an application of 1970s AI work that requires none of that.

Setting up a haircut appointment is a social script. It has a sequence of things that happen, usually in a predictable order. The discovery of the importance of social scripts in computational understanding of communication was a big part of what Schank and Abelson brought to the field of AI in the 1970s.

Scripts were important both in terms of computers navigating standard social situations, but also in understanding stories about those situations. When I studied linguistics, one of my favorite little facts was you could often discover socially legible scripts by noticing how stories were elided. For instance, if I say “So I go to a restaurant, and the server gives me the bill…” no one stops me and says “Wait, you got a bill before you ate anything? And who is this server person?” The understanding in storytelling is I can evoke a script and then start at the part of the story that deviates from the script. That’s how core they are to our thinking and discourse, and Schank and Abelson made the case in the 1970s that mapping out these scripts would be core to computer understanding as well.

While less physical than dining, booking a haircut over the phone is a script too. It follows a particular sequence and has slots where the unique bits go.  In general we find out if I am in need of a particular stylist, and then drill down on a date and time. Importantly, it works because I’ve learned the script and I know the things the hair stylist will ask and I have the answers the stylist requires. I know I need to provide date, time, and stylist, and I might need to supply a rough time of day preference — mornings, afternoons, end of day, before work. On the other hand, I know the stylist is not going to ask me if I’d rather have a chair nearer to the window or the bathroom or what type of music I prefer in the salon.
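
The slot structure just described can be sketched in a few lines. This is a toy illustration of a Schank/Abelson-style script — an ordered sequence of slots the conversation must fill — and the slot names are hypothetical, not anything Google has published about its system:

```python
# Toy sketch of a social script: an ordered list of slots to fill.
# Slot names are hypothetical, for illustration only.
HAIRCUT_SCRIPT = ["stylist", "date", "time"]

def next_slot(filled):
    """Return the next unfilled slot in the script, or None when complete."""
    for slot in HAIRCUT_SCRIPT:
        if slot not in filled:
            return slot
    return None

booking = {}
booking[next_slot(booking)] = "any stylist"   # fills "stylist"
booking[next_slot(booking)] = "Tuesday"       # fills "date"
booking[next_slot(booking)] = "10:30"         # fills "time"
# next_slot(booking) is now None: the script is complete; confirm and hang up.
```

The point of the sketch is how little it needs: the script itself encodes what to ask for, so no deep model of the other party is required.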

Here’s the thing: The precise nature of social scripts is that they often allow people with no knowledge of one another to negotiate transactions successfully. Preferences figure into that but are usually easily enumerated by each party — because that’s part of the script.

Because of this, I don’t really need personal analytics to discover that I like my cappuccinos extra dry. I have years of experience walking through scripts where I’ve learned to specify that, and the script has a very specific spot where that goes. The script has taught me how to concisely enumerate my preferences in ways useful to baristas.

In fact, analytics in these situations end up being a lesser reflection of the explicit inputs into the script. For example, Google might search my flight booking data and find I like window seats towards the front, that I prefer Alaska and layovers with a bit of buffer in them. But the patterns I produce in what I get for flights aren’t a mysterious secret sauce discovered by analytics, they’re the product of me specifically asking for nine things when I book flights. Nine things I can easily rattle off, because I’ve been doing the “booking a flight” script for years.

So here’s the question about the “haircut” demo: if the nature of the social script is that you *don’t* need deep knowledge or background for the script to work, then what’s all the talk about personal data being Google’s prime AI asset? What’s all the machine-learning hype?

After years of sucking up all our data Google’s big AI advance is… Script Theory. Which requires none of this. Maybe we should be talking about that.

Taking Bearings on The Star

One thing people may not realize is I use the exact same techniques we teach to students in my daily work. The skills we are giving students aren’t some dumbed-down protocol. They are great habits for reporters, researchers, and other professionals as well.

As an example, this article came up in my news alerts this morning.


I’m interested in fake news in Southeast Asia, so I’m glad to read analysis and opinion from a place like Malaysia, but I want to source-check, even if I think I know this source. So we strip off everything from that URL and add Wikipedia.



This pulls up a relevant Wikipedia page:



And clicking through we are reminded that The Star is effectively owned by the Malaysian government.


And then we’re back to the article after a 30 second detour.

For the record, I still read the column, but I didn’t share it, and if I had shared it I would have noted that it was a legitimate news source to some extent, but possibly compromised by its ownership. Sam Wineburg has talked about this process as taking bearings, and I like that term a lot. Before trudging blindly into an article, pull out the compass and the map and figure out where you landed. It’s so simple to do, there’s really no excuse for not doing it.

(I should note that I’ve elided a number of things I do know about Malaysia and government propaganda there for the sake of clarity in this post — but the truth is if I have any doubt about the source at all I use the process, just the same as a novice. I had a vague memory about this precise ownership issue, but the process is always likely to give me a better result than my unaided memory. And it’s actually less cognitively demanding as well.)

(EDIT: changed “heavily compromised” to “possibly compromised” since the initial wording expressed more certainty than I had wished to portray. Legitimate news organizations with ownership issues are often fine on many issues, whether a particular news item might be influenced is contextual.)

The “Just Add Wikipedia In the Omnibar” Trick

One thing we do in the Digital Polarization Initiative is to hone the actions we encourage students to take down to their most efficient form. Efficient meaning:

  • easy to memorize
  • quick to execute
  • with a high likelihood of providing a direct answer to the question you have

Our student fact-checkers rely heavily on Wikipedia, and usually the best first pass at getting a read on a site is to read the Wikipedia article on it. But what’s the fastest way to get the relevant article?

As an example, consider the organization Nuclear Matters which describes itself this way:


Nuclear Matters is a national coalition with a diverse roster of allies and members. Our Advocacy Council is made up of leaders from various areas, including labor organizations, environmental supporters, young professionals and women in the nuclear industry, venture capitalists, innovators in advanced nuclear technology and former policymakers and regulators.

This site is not quite claiming to be grass roots, but we notice the one word not here is “industry-funded”. And we’re curious — you have some varied members, but where does the money come from?

As mentioned, the best first stop on this is Wikipedia. I used to show students how to do the site search for Wikipedia using the “site:” syntax — but I found even faculty I taught this to were forgetting the syntax, or mistyping it in ways that gave weird search results.

So I now just do this omnibar hack, using the URL to match against Wikipedia pages:

It works for a couple of reasons I can discuss at a later time — but it’s a useful enough habit that I wanted to share it in a post.
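
Under the hood, the omnibar trick amounts to a very small transformation: strip the URL down to its bare host and search for that plus “wikipedia.” This sketch shows the query it effectively runs; the function name and the crude `www.` stripping are my own simplifications:

```python
from urllib.parse import urlparse

def wikipedia_check_query(url):
    """Reduce a URL to its bare host and append 'wikipedia' -- the
    search the omnibar trick effectively runs."""
    host = urlparse(url).netloc
    host = host[4:] if host.startswith("www.") else host
    return f"{host} wikipedia"

print(wikipedia_check_query("https://www.thestar.com.my/opinion/columnists/story"))
# -> thestar.com.my wikipedia
```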

BTW — In case people coming here don’t know, I currently run a national, cross-institutional project that aims to radically rethink how we teach college students online information literacy, where we teach them tricks and techniques like this. Ask me about it — my DMs are open. Or read the textbook: Web Literacy for Student Fact-Checkers and apply it to your own class — it’s free!