Neartopias

Everything is depressing and messed up so let’s take a lunch break to talk about neartopias.

If you look up the phrase “neartopia” on the web you’ll find a couple of solitary pages of someone proposing an anarcho-libertarian island government, but that’s not what I mean by the term. Instead, I mean a particular brand of sci-fi — and speculative fiction more generally — that presents a world considerably more socially just and personally fulfilling than the one we currently inhabit, in a way that seems at least partially achievable.

Think Kim Stanley Robinson’s Pacific Edge. Le Guin’s Dispossessed. In film, examples are less common, but a recent neartopia would be Black Panther’s Wakanda.

Neartopias are not utopias. They have problems. They have to have problems because problems are what drive plots. And on another level problems are just interesting in a way that non-problems are not. They also aren’t post-scarcity Star Treks, or visions of a perfect 6030 A.D. They are “near”-utopias both in the sense that they lack perfection and in that they seem near-enough to be achievable.

Neartopias also have blindspots. Each neartopia pulls from cultural assumptions that will eventually — like all things — be revealed as problematic. The Golden Age of sci-fi produced some neartopias, for instance, but its relationship with technological progress and industry was — well, let’s say underdeveloped.

But these visions are fundamentally different from dystopias, which serve as a warning, mapping a world we need to try to route around.

I was very into dystopias for a while. But like a lot of others — see, for example, the solarpunks — I’ve worried over the last few years about their efficacy as a tool for social justice and change.

Take Minority Report, a dystopia that imagines a world of constant surveillance and personalization, one where people are judged to be guilty before they commit a crime. A warning, right? Except, somehow when run through capitalism it becomes a blueprint for an IPO.

Worse yet are the “Utopia is secretly a Dystopia” plots, from The Giver, to Gattaca, to… well, just about any film that starts out with a utopian vision. These films often take inequality or other current issues as their target. More common formats recently are “the utopia built on the backs of the poor or non-elite” and “the government that provides the good life in order to control you.” (Both of these have spawned a thousand YA dystopian series.)

Those are important messages, but I wonder if they get garbled a bit in translation. The message of the Secret Dystopia seems to be that social and technical progress is always bought at the expense of someone else and that government-provisioned services are always bought at the expense of freedom. But while these are biting critiques of our current moment, it’s important to remember that these zero-sum patterns are not laws of physics, but rather products of a system designed to produce unequal outcomes and quell dissent. In using the future to critique our current reality, dystopias often serve to reinforce fundamentally conservative viewpoints, treating constructed elements of our current system as eternal truths that will replicate infinitely into the future.

As I’ve gotten deeper and deeper into the disinformation environment I’ve thought more and more about the role that art needs to play in moving forward a society that is overwhelmed by the sludge of our current politics and culture. And I keep coming back to this idea of Solarpunk, and, more broadly, neartopias:

To many, solarpunk represents an ignition for activism. “The great programs of the 20th century often began as fictional proposals, from moon landings to Social Security,” says Flynn. “It’s time we returned to higher ambitions for what we can do as a society.” When Ulibarri picks up a book, she’s looking for an escape that isn’t as familiar as dystopia is. “Maybe it is escapism, but it gives me a sense that things can get better,” she says.


Paris Smart City, by Vincent Callebaut

There’s not a big finale here — just lunchtime musings. But I’m curious how many other people have a hunger for this new vision of science fiction, from Wakanda to Solarpunk? I can’t be the only one. It’s time to show a future where technological progress is not bought at the cost of the oppressed. Where government can be a tool for good — not House of Cards with more computing power. Where we move beyond this current turd of a result and into something better.

I’m in the market for neartopias — if you have some favorites I should read, throw them in the comments. Are there significant other strains outside of Solarpunk I should know about?

 

Google’s Big AI Advance Is… Script Theory?

Like many people I watched Google’s demo of their new Android system AI calling up a hair stylist and making an appointment with trepidation — was this ethical, to not disclose that it was an AI?

But now that the smoke has cleared, I’m realizing something a bit more disturbing. After years of Big Data and personal analytics hype, the advance that Google demonstrated is an application of 1970s AI work that requires none of that.

Setting up a haircut appointment is a social script. It has a sequence of things that happen, usually in a predictable order. The discovery of the importance of social scripts in computational understanding of communication was a big part of what Schank and Abelson brought to the field of AI in the 1970s.

Scripts were important both for computers navigating standard social situations and for understanding stories about those situations. When I studied linguistics, one of my favorite little facts was that you could often discover socially legible scripts by noticing how stories were elided. For instance, if I say “So I go to a restaurant, and the server gives me the bill…” no one stops me and says “Wait, you got a bill before you ate anything? And who is this server person?” The understanding in storytelling is that I can evoke a script and then start at the part of the story that deviates from the script. That’s how core they are to our thinking and discourse, and Schank and Abelson made the case in the 1970s that mapping out these scripts would be core to computer understanding as well.

While less physical than dining, booking a haircut over the phone is a script too. It follows a particular sequence and has slots where the unique bits go. In general we establish whether I need a particular stylist, and then drill down on a date and time. Importantly, it works because I’ve learned the script: I know the things the hair stylist will ask, and I have the answers the stylist requires. I know I need to provide date, time, and stylist, and I might need to supply a rough time-of-day preference — mornings, afternoons, end of day, before work. On the other hand, I know the stylist is not going to ask me whether I’d rather have a chair nearer to the window or the bathroom, or what type of music I prefer in the salon.
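To make that concrete, here’s a minimal sketch of the haircut script as a slot-filling structure. The slot names and prompts are my own, purely illustrative, and not a claim about how Google’s system actually represents any of this:

```python
# A minimal sketch of the "book a haircut" social script as a slot-filling
# structure. Slot names and prompts are invented for illustration; this is the
# Schank/Abelson idea in miniature, not Google's implementation.

HAIRCUT_SCRIPT = [
    # (slot, the question the salon predictably asks)
    ("service", "What are you booking?"),
    ("stylist", "Any particular stylist?"),
    ("date",    "What day works for you?"),
    ("time",    "Morning, afternoon, or end of day?"),
]

def fill_script(script, caller_answers):
    """Walk the script in order, filling each slot from the caller's answers."""
    booking = {}
    for slot, question in script:
        booking[slot] = caller_answers.get(slot, f"(ask: {question})")
    return booking

print(fill_script(HAIRCUT_SCRIPT, {"service": "cut", "date": "Tuesday", "time": "morning"}))
# {'service': 'cut', 'stylist': '(ask: Any particular stylist?)', 'date': 'Tuesday', 'time': 'morning'}
```

Everything needed to get through the call is enumerated in the script itself, which is the point: no mining of personal history required.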

Here’s the thing: The precise nature of social scripts is that they often allow people with no knowledge of one another to negotiate transactions successfully. Preferences figure into that but are usually easily enumerated by each party — because that’s part of the script.

Because of this, I don’t really need personal analytics to discover that I like my cappuccinos extra dry. I have years of experience walking through scripts where I’ve learned to specify that, and the script has a very specific spot where that goes. The script has taught me how to concisely enumerate my preferences in ways useful to baristas.

In fact, analytics in these situations end up being a lesser reflection of the explicit inputs into the script. For example, Google might search my flight booking data and find that I like window seats towards the front, that I prefer Alaska Airlines, and that I like layovers with a bit of buffer in them. But the patterns in the flights I end up with aren’t a mysterious secret sauce discovered by analytics; they’re the product of me specifically asking for nine things when I book flights. Nine things I can easily rattle off, because I’ve been doing the “booking a flight” script for years.

So here’s the question about the “haircut” demo: if the nature of the social script is you *don’t* need deep knowledge or background for the script to work, then what’s all the talk about personal data being Google’s prime AI asset about? What’s all the machine-learning hype?

After years of sucking up all our data Google’s big AI advance is… Script Theory. Which requires none of this. Maybe we should be talking about that.

Taking Bearings on The Star

One thing people may not realize is I use the exact same techniques we teach to students in my daily work. The skills we are giving students aren’t some dumbed-down protocol. They are great habits for reporters, researchers, and other professionals as well.

As an example, this article came up in my news alerts this morning.


I’m interested in fake news in Southeast Asia, so I’m glad to read analysis and opinion from a place like Malaysia, but I want to source-check, even if I think I know this source. So we strip everything after the domain off that URL and add Wikipedia.


This pulls up a relevant Wikipedia page.


And clicking through we are reminded that The Star is effectively owned by the Malaysian government.


And then we’re back to the article after a 30 second detour.

For the record, I still read the column, but I didn’t share it, and if I had shared it I would have noted that it was a legitimate news source to some extent, but possibly compromised by its ownership. Sam Wineburg has talked about this process as taking bearings, and I like that term a lot. Before trudging blindly into an article, pull out the compass and the map and figure out where you landed. It’s so simple to do, there’s really no excuse for not doing it.

(I should note that I’ve elided a number of things I do know about Malaysia and government propaganda there for the sake of clarity in this post — but the truth is if I have any doubt about the source at all I use the process, just the same as a novice. I had a vague memory about this precise ownership issue, but the process is always likely to give me a better result than my unaided memory. And it’s actually less cognitively demanding as well.)

(EDIT: changed “heavily compromised” to “possibly compromised” since the initial wording expressed more certainty than I had wished to portray. Legitimate news organizations with ownership issues are often fine on many issues, whether a particular news item might be influenced is contextual.)

The “Just Add Wikipedia In the Omnibar” Trick

One thing we do in the Digital Polarization Initiative is to hone the actions we encourage students to take down to their most efficient form. Efficient meaning:

  • easy to memorize
  • quick to execute
  • with a high likelihood of providing a direct answer to the question you have

Our student fact-checkers rely heavily on Wikipedia, and usually the best first pass at getting a read on a site is to read the Wikipedia article on it. But what’s the fastest way to get the relevant article?

As an example, consider the organization Nuclear Matters which describes itself this way:

nuclear.PNG

Nuclear Matters is a national coalition with a diverse roster of allies and members. Our Advocacy Council is made up of leaders from various areas, including labor organizations, environmental supporters, young professionals and women in the nuclear industry, venture capitalists, innovators in advanced nuclear technology and former policymakers and regulators.

This site is not quite claiming to be grassroots, but we notice the one phrase missing here is “industry-funded”. And we’re curious — you have some varied members, but where does the money come from?

As mentioned, the best first stop on this is Wikipedia. I used to show students how to do the site search for Wikipedia using the “site:wikipedia.org” syntax — but I found even faculty I taught this to were forgetting the syntax — or searching for “wikipedia.com” which gives weird search results.

So I now just do this omnibar hack, using the URL to match against Wikipedia pages.

It works for a couple of reasons I can discuss at a later time — but it’s a useful enough habit that I wanted to share it in a post.
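If it helps to see the habit spelled out, here’s a rough sketch in Python of what I’m effectively typing into the omnibar: keep only the domain, then add “wikipedia”. The URL below is made up for illustration.

```python
from urllib.parse import urlparse

def omnibar_wikipedia_query(article_url: str) -> str:
    """Reproduce the manual habit: keep only the domain, then add 'wikipedia'."""
    domain = urlparse(article_url).netloc
    if domain.startswith("www."):
        domain = domain[len("www."):]
    return f"{domain} wikipedia"

# A made-up URL, purely for illustration:
print(omnibar_wikipedia_query("https://www.example-coalition.org/about-us"))
# -> "example-coalition.org wikipedia"
```

Typed into the omnibar, a query like that usually surfaces the relevant Wikipedia article near the top of the results.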


BTW — In case people coming here don’t know, I currently run a national, cross-institutional project that aims to radically rethink how we teach college students online information literacy, where we teach them tricks and techniques like this. Ask me about it — my DMs are open. Or read the textbook: Web Literacy for Student Fact-Checkers and apply it to your own class — it’s free!

 

OLC Innovate Privacy Concerns

Today, OLC Innovate leadership requested feedback from attendees on the issues of data collection and privacy raised by (among other things) the attendee tracking badges and session check-in procedure. I replied in email but am republishing it here, lightly edited:

I’m really glad to see you considering privacy issues, and mostly wanted to just thank you for that. I think OLC could lead the way here.

I felt the badges and the system of checking people into rooms was invasive and took away from the otherwise friendly feel of the conference. I don’t know if I want vendors, or OLC, or my boss knowing which events I attended and which I didn’t – and I certainly don’t want that data on a bunch of USB-based unsecured devices. What we have learned from the past decade is that you can’t really know how data will be misused in the future, and consent on data isn’t really meaningful because when data gets combined with other data it becomes toxic in ways even engineers can’t predict.

It seems to me that you have a few small pressing questions that you could answer without the tech. What sessions do people attend? Are there subgroups of attendees (managers, faculty, librarians) which seem to have less desirable session options?

Even if you still want to use the tech, if you scoped out the specific questions you wanted to answer you could do much better. You could not only capture that info in a less potentially toxic way, but you’d be more likely to use it in useful and directed ways. As just one example, if you replaced unique ids on the badges with a few basic subtypes – one code for managers, one for faculty, etc. – you would not be collecting personally identifiable information about people, but you would meet your goals. If you centralized the collection of information by job type you could also provide that information to speakers at the beginning of their session in ways that would be far more useful and safe than any undirected analytics analysis.

In short, do what we tell faculty to do in assessment of instruction:

  • Think about a *small* set of questions you want to answer
  • Collect only the information you need to answer those questions
  • Answer those questions by creating aggregate findings
  • Delete the raw data as soon as retention policy allows

You think you want to answer a lot of questions you haven’t formulated yet by rooting around in the data. Most of what we know about analysis tells us you’re far better off deciding what questions are important to you before you collect the data. I would go so far as to share with attendees the five (and no more than five) questions that you are looking to answer each year with the data you collect, and to explain all the data collected and its relation to those questions. After you answer a question a couple years in a row, swap in a new one.

 

(I’ll add that for all these issues there needs to be a meaningful opt-in/out. I would suggest that the de-individualized code be a removable part of the badge).
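To make the badge suggestion above concrete, here’s a minimal sketch, with made-up data, of what collecting only a job-type code and reporting aggregates might look like:

```python
from collections import Counter

# Made-up check-in records: (session, job-type code) pairs. No unique attendee
# identifier is collected, just the coarse subtype suggested above.
checkins = [
    ("keynote", "faculty"), ("keynote", "manager"), ("keynote", "faculty"),
    ("session-12", "librarian"), ("session-12", "faculty"),
]

# Aggregate per session and role; once reported, the raw pairs can be deleted.
for (session, role), count in sorted(Counter(checkins).items()):
    print(f"{session}: {count} {role}")
```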

Web Literacy For Student Fact-Checkers Wins MERLOT 2018 Classics Award

Just a short note to say thank you to MERLOT’s review committee on ICT Literacy which awarded Web Literacy for Student Fact-Checkers the 2018 MERLOT Classics award in that category this past Thursday.


It’s one of eight MERLOT Classics awards given out this year, with other awards in the areas of Biology, Teacher Education, Psychology, Sociology — and three other subjects I forget. (I’ll update this when the awards are published to the MERLOT web site). Works are reviewed by a panel of experts in the subject, who determine which new OER resource in the subject deserves recognition.

It’s been a wild journey with this book. As I was telling faculty at SUNY Oneonta a few weeks ago, the book started out as a Google doc I worked on over Christmas 2016 (with Jon Udell and Catherine Yang providing some editing help). It was originally meant to be a simple handout for the courses I was building but it kept growing, and by the end of Christmas break it was clear it had become a short textbook, and I shifted the name on January 1st to a broader target: Web Literacy for Student Fact-Checkers.


I put it up on Hugh McGuire’s excellent Pressbooks site, which allows the generation of PDFs and ePubs from the book, as well as providing a book-like WordPress theme.


The LibGuides community picked it up, and started listing it as a top resource on their information literacy pages.


Which led to weird moments, like finding out it was one of the suggested resources of Oxford’s Bodleian Library (as well as Princeton’s, Tufts’, etc.)


A host of other people promoted it as well, making up their own infographics, and even applying it across other domains.

I still get emails every week from people who just want to express gratitude for the text. Saying that it’s been a life saver, that it’s changed their teaching, or that it finally said what they had been feeling all these years but just couldn’t verbalize. High school teachers, librarians, college professors, parents. Taking the time to write a thank you note and asking for nothing.

It’s weird, because I’ve spent so much of my life building software, writing blog posts, and being a generally digitally minded person, swimming in overtly digital forms. Yet my biggest impact on the world so far may end up being this little course-guide-turned-book. There’s probably some deeper thinking to be done on that point later, but for the moment I’m going to push hard against the “No one’s life was ever changed by a textbook” rhetoric I sometimes hear, because I get emails from people every week who say just the opposite. And it’s probably time to start listening to that. 🙂

We Should Put Fact-Checking Tools In the Core Browser

Years ago when the web was young, Netscape (Google it, noobs!) decided on its metaphor for the browser: it was a “navigator”.


The logo and imagery borrowed heavily from the metaphor of navigation, really coming to the fore with the release of Navigator 2.0, but continuing — with some brief interruptions — late into its product life.

I’m not a sailor, but I always took the lines in the various logos to be a reference to the craft of navigating by star charts. Images of maps and lighthouses also made appearances. I know the name and the brand were likely the work of some ad exec, but I’ve always liked this idea — the browser was supposed to make uncharted waters navigable. It wasn’t just a viewer, but an instrumented craft, guiding you through a sometimes confusing seascape.

So what happened? It’s a serious question. Early in the history of the browser various features were introduced that helped with navigation: bookmarks, bookmark organization, browsable history, omnibar search, URL autocomplete (which ended up eroding bookmark use). Icons showing when a connection was secure. Malicious site blocking. But as the web developed, the main focus of the browser wars ended up being less the browser as a navigation device and more the browser as an application platform. The interaction designs and renderings browsers support still advance year over year, but the browser as a piece of user-focused software stalled decades ago. Mobile use, with its thin, crippled UI, just compounded that trend. Extensions were proposed as a solution for extensibility, but the nature of them just served to further impoverish core development. (Hat tip to TS Waterman, who has been exploring extension-based solutions to this stuff, but it needs to be in core.)

I think it’s time for the browser to put navigation of the information environment back at the center of its mission. Here’s some simple things that could be offered through the interface:

Hover for the original photo source: One of the most useful tricks in Chrome and Firefox is the right-click image search, which allows users to find original versions of photos fairly quickly and see if they have been modified. It’s a clunky process but it works. A hover function over a photo that tuned search to find the original (and not just “related photos”) could bring this practice into broader use.

Site info: Browsers expose some site info, but it’s ridiculously limited. Here’s some site info you could easily provide users: the date the domain was first registered, the first crawl of the URL by Google or archive.org, the related Wikipedia article on the organization (and please financially support Wikipedia if doing this), any IFCN or press certification. Journal impact factor. Date last updated. Even better: provide some subset of this info when hovering over links.
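As a rough sketch of how little machinery some of this needs: the archive.org first-crawl date, for instance, can be looked up through the Internet Archive’s public CDX API. This is just an illustration, and it assumes the API’s default oldest-first ordering.

```python
from typing import Optional
import requests

def first_archive_capture(domain: str) -> Optional[str]:
    """Return the timestamp (YYYYMMDDhhmmss) of the earliest Wayback Machine
    capture of a domain, or None if it has never been crawled."""
    resp = requests.get(
        "https://web.archive.org/cdx/search/cdx",
        params={"url": domain, "output": "json", "fl": "timestamp", "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    if not resp.text.strip():
        return None                      # no captures at all
    rows = resp.json()                   # first row is the header, next is data
    return rows[1][0] if len(rows) > 1 else None

print(first_archive_capture("example.com"))   # a YYYYMMDDhhmmss string, or None
```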

Likely original reporting source: For a news story that is being re-re-re-reported by a thousand clickbait artists, use network and content analysis to find what the likely original reporting source is and suggest people take a look at that.

Likely original research source: For a scientific finding that is being shipped all over the internet with a thousand explosive and absolutely wrong hot takes, surface what looks like the original journal source. If an article talks about many findings, produce the best possible list of sources referred to in the article by looking at links, quotes, and names of quoted experts.

Likely original data source: When you see a statistic, where’s it from? What’s the original context? What public stores of data are available?

OCR images, and do all this for images too: A lot of disinfo is now in the form of images.


OCR the text on such images, and make the same features available. What is the “Possible research source” of the claim? And if it tells me “Research source not found,” is that a problem?
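The OCR step, at least, is close to off-the-shelf. Here’s a rough sketch using pytesseract (one common open-source OCR library; my choice, not anything named above), with the actual source-finding left as the hard part:

```python
# Requires the Pillow and pytesseract packages, plus a Tesseract install.
from PIL import Image
import pytesseract

def extract_claim_text(image_path: str) -> str:
    """Pull the text out of a shared meme-style image so the same
    source-finding features can be run against it."""
    return pytesseract.image_to_string(Image.open(image_path)).strip()

claim = extract_claim_text("shared_meme.png")   # hypothetical local file
print(claim)   # this text could then feed the "likely research source" lookup
```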

Related Sites: In what universe does this site reside? Alternet, Breitbart, or National Geographic? What links into this page, and do those inbound links confer authority or suggest bias?

There’s actually a lot more they could do as well. If you want, you can read the newly award-winning book Web Literacy for Student Fact-Checkers, look at each verification process described, and ask “How could the browser make that easier?” Most of the things in there can be relatively easily automated.

Boost the Antibodies

I get that such features will often break, and sometimes expose wrong information about a source. Basically, the Google snippets problem. And I get that most people won’t use these tools. But my model of how this impacts society is not that everyone makes use of these tools, but that the five percent of people who do create a herd immunity that helps protect others from the worst nonsense. We can’t make every cell invulnerable, but we can boost the antibodies that are already in the system.

It’s also true that this should be done at the social media platform level as well. And in apps. I’ll take it anywhere. But it seems to me that browser providers are in a unique position to set user expectations around capabilities, and provide an interface that can deal with misinformation across its life cycle. It could also push these tools into social platforms that have been reluctant to provide this sort of functionality, for fear of dampening “virality” and “engagement”. Plus, the sort of users likely to fight disinfo already hover over links, look for SSL indicators, and use omnibar search. Giving them more tools to make their community better could have an outsized impact on the information environments we all inhabit. My work has shown me there are plenty of people out there who want to improve the information environment of the web. Isn’t it time we built a browser to help them do that?