OLC Innovate Privacy Concerns

Today, OLC Innovate leadership requested feedback from attendees on the issues of data collection and privacy raised by (among other things) the attendee tracking badges and session check-in procedure. I replied in email but am republishing it here, lightly edited:

I’m really glad to see you considering privacy issues, and mostly wanted to just thank you for that. I think OLC could lead the way here.

I felt the badges and the system of checking people into rooms were invasive and took away from the otherwise friendly feel of the conference. I don’t know if I want vendors, or OLC, or my boss knowing which events I attended and which I didn’t – and I certainly don’t want that data on a bunch of unsecured USB-based devices. What we have learned from the past decade is that you can’t really know how data will be misused in the future, and consent on data isn’t really meaningful, because when data gets combined with other data it becomes toxic in ways even engineers can’t predict.

It seems to me that you have a few small pressing questions that you could answer without the tech. What sessions do people attend? Are there subgroups of attendees (managers, faculty, librarians) which seem to have less desirable session options?

Even if you still want to use the tech, if you scoped out the specific questions you wanted to answer you could do much better. You could not only capture that info in a less potentially toxic way, but you’d be more likely to use it in useful and directed ways. As just one example, if you replaced unique ids on the badges with a few basic subtypes – one code for managers, one for faculty, etc. – you would not be collecting personally identifiable information about people, but you would meet your goals. If you centralized the collection of information by job type you could also provide that information to speakers at the beginning of their session in ways that would be far more useful and safe than any undirected analytics analysis.
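
To make that concrete, here is a minimal sketch of what role-coded, de-identified check-in data could look like once aggregated. The names, codes, and structures here are hypothetical illustrations, not anything OLC actually uses:

```typescript
// Hypothetical sketch: check-in records carry only a role code,
// never a unique attendee id, and are rolled up into per-session counts.
type RoleCode = "manager" | "faculty" | "librarian" | "other";

interface CheckIn {
  sessionId: string;
  role: RoleCode;
}

// Aggregate check-ins into per-session counts by role. Once these aggregates
// exist, the raw rows can be deleted per retention policy.
function aggregateBySession(checkIns: CheckIn[]): Map<string, Record<RoleCode, number>> {
  const totals = new Map<string, Record<RoleCode, number>>();
  for (const { sessionId, role } of checkIns) {
    const counts = totals.get(sessionId) ?? { manager: 0, faculty: 0, librarian: 0, other: 0 };
    counts[role] += 1;
    totals.set(sessionId, counts);
  }
  return totals;
}
```

Counts like these would answer the "which subgroups attend which sessions" question without a single personally identifiable record being kept.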

In short, do what we tell faculty to do in assessment of instruction:

  • Think about a *small* set of questions you want to answer
  • Collect only the information you need to answer those questions
  • Answer those questions by creating aggregate findings
  • Delete the raw data as soon as retention policy allows

You think you want to answer a lot of questions you haven’t formulated yet by rooting around in the data. Most of what we know about analysis tells us you’re far better off deciding what questions are important to you before you collect the data. I would go so far as to share with attendees the five (and no more than five) questions that you are looking to answer each year with the data you collect, and to explain all data collection and its relation to those questions. After you answer a question a couple years in a row, swap in a new one.

 

(I’ll add that for all these issues there needs to be a meaningful opt-in/out. I would suggest that the de-individualized code be a removable part of the badge).

Web Literacy For Student Fact-Checkers Wins MERLOT 2018 Classics Award

Just a short note to say thank you to MERLOT’s review committee on ICT Literacy, which awarded Web Literacy for Student Fact-Checkers the 2018 MERLOT Classics award in that category this past Thursday.

It’s one of eight MERLOT Classics awards given out this year, with other awards in the areas of Biology, Teacher Education, Psychology, Sociology — and three other subjects I forget. (I’ll update this when the awards are published to the MERLOT web site.) Works are reviewed by a panel of experts in the subject, who determine what new OER resource in the subject deserves recognition.

It’s been a wild journey with this book. As I was telling faculty at SUNY Oneonta a few weeks ago, the book started out as a Google doc I worked on over Christmas 2016 (with Jon Udell and Catherine Yang providing some editing help). It was originally meant to be a simple handout for the courses I was building but it kept growing, and by the end of Christmas break it was clear it had become a short textbook, and I shifted the name on January 1st to a broader target:

[Screenshot: the document’s new, broader title]

I put it up on Hugh McGuire’s excellent Pressbooks site, which allows the generation of PDFs and ePubs from the book, as well as providing a book-like WordPress theme.

[Screenshot: the book on Pressbooks]

The LibGuides community picked it up, and started listing it as a top resource on their information literacy pages:

[Screenshot: a LibGuides information literacy page listing the book]

Which led to weird moments, like finding out it was one of the suggested resources of Oxford’s Bodleian library (as well as Princeton’s, Tufts’, etc.):

[Screenshot: the Bodleian library guide recommending the book]

A host of other people promoted it as well, making up their own infographics, and even applying it across other domains.

I still get emails every week from people who just want to express gratitude for the text. Saying that it’s been a life saver, that it’s changed their teaching, or that it finally said what they had been feeling all these years but just couldn’t verbalize. High school teachers, librarians, college professors, parents. Taking the time to write a thank you note and asking for nothing.

It’s weird, because I’ve spent so much of my life building software, writing blog posts, and being a generally digitally minded person, swimming in overtly digital forms. Yet my biggest impact on the world so far may end up being this little course-guide-turned-book. There’s probably some deeper thinking to be done on that point later, but for the moment I’m going to push hard against the “No one’s life was ever changed by a textbook” rhetoric I sometimes hear, because I get emails from people every week who say just the opposite. And it’s probably time to start listening to that. 🙂

We Should Put Fact-Checking Tools In the Core Browser

Years ago when the web was young, Netscape (Google it, noobs!) decided on its metaphor for the browser: it was a “navigator”.

[Image: Netscape Navigator loading screen]

The logo and imagery borrowed heavily from the metaphor of navigation, really coming to the fore with the release of Navigator 2.0, but continuing — with some brief interruptions — late into its product life.

I’m not a sailor, but I always took the lines in the various logos to be a reference to the craft of navigating by star charts. Images of maps and lighthouses also made appearances. I know the name and the brand were likely the work of some ad exec, but I’ve always liked this idea — the browser was supposed to make uncharted waters navigable. It wasn’t just a viewer, but an instrumented craft, guiding you through a sometimes confusing seascape.

So what happened? It’s a serious question. Early in the history of the browser, various features were introduced that helped with navigation: bookmarks, bookmark organization, browsable history, omnibar search, URL autocomplete (which ended up eroding bookmark use). Icons showing when a connection was secure. Malicious site blocking. But as the web developed, the main focus of the browser wars ended up being less the browser as a navigation device and more the browser as an application platform. The interaction designs and renderings browsers support still advance year over year, but the browser as a piece of user-focused software stalled years ago. Mobile use, with its thin, crippled UI, just compounded that trend. Extensions were proposed as a solution for extensibility, but the nature of them just served to further impoverish core development. (Hat tip to TS Waterman, who has been exploring extension-based solutions to this stuff, but it needs to be in core.)

I think it’s time for the browser to put navigation of the information environment back at the center of its mission. Here are some simple things that could be offered through the interface:

Hover for the original photo source: One of the most useful tricks in Chrome and Firefox is the right-click image search, which allows users to find original versions of photos fairly quickly and see if they have been modified. It’s a clunky process but it works. A hover function over a photo that tuned search to find the original (and not just “related photos”) could bring this practice into broader use.
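
As a rough sketch of what that hover affordance might look like in a content script (the Alt+hover trigger and the reverse-image-search URL are my assumptions, not a description of any shipping browser feature):

```typescript
// Hypothetical sketch: Alt+hover over an image opens a reverse image search
// aimed at finding earlier copies of that image.
function reverseImageSearchUrl(imageUrl: string): string {
  // Assumes a search endpoint that accepts an image URL as a query parameter.
  return "https://www.google.com/searchbyimage?image_url=" + encodeURIComponent(imageUrl);
}

document.addEventListener("mouseover", (event: MouseEvent) => {
  const target = event.target;
  if (target instanceof HTMLImageElement && event.altKey) {
    window.open(reverseImageSearchUrl(target.src), "_blank");
  }
});
```

Built into the core browser rather than an extension, the same gesture could tune the search toward earliest known copies instead of generic “related images.”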

Site info: Browsers expose some site info, but it’s ridiculously limited. Here’s some site info you could easily provide users: the date the domain was first registered, the first crawl of the URL by Google or archive.org, the related Wikipedia article on the organization (and please financially support Wikipedia if doing this), any IFCN or press certification, journal impact factor, date last updated. Even better: provide some subset of this info when hovering over links.
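
Some of this is already queryable today. As one example, here is a sketch against the Internet Archive’s Wayback Machine availability API; the API is real, but surfacing the result as a “site info” panel is my assumption:

```typescript
// Sketch: find an early Wayback Machine capture of a domain as one rough
// signal of how long the site has existed.
interface WaybackAvailability {
  archived_snapshots?: {
    closest?: { url: string; timestamp: string };
  };
}

async function earliestKnownCapture(domain: string): Promise<string | undefined> {
  // Asking for the capture closest to a very early timestamp approximates
  // the first crawl of the site.
  const response = await fetch(
    `https://archive.org/wayback/available?url=${encodeURIComponent(domain)}&timestamp=19960101`
  );
  const data = (await response.json()) as WaybackAvailability;
  return data.archived_snapshots?.closest?.timestamp; // e.g. "20050317093828"
}
```

A site that claims decades of authority but was first captured three months ago is exactly the kind of signal a user should see without leaving the page.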

Likely original reporting source: For a news story that is being re-re-re-reported by a thousand clickbait artists, use network and content analysis to find what the likely original reporting source is and suggest people take a look at that.
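
That analysis is genuinely hard, but even a crude heuristic shows the shape of it. A sketch, with a scoring rule invented purely for illustration: among a cluster of articles covering the same story, prefer the one the others link to, breaking ties by earliest publication date.

```typescript
// Hypothetical heuristic: the article the rest of the cluster links to,
// and which was published earliest, is the likeliest original report.
interface Article {
  url: string;
  publishedAt: Date;
  outboundLinks: string[]; // URLs this article links out to
}

function likelyOriginalReport(cluster: Article[]): Article | undefined {
  // Count how often each article is linked to from within the cluster.
  const inbound = new Map<string, number>();
  for (const article of cluster) {
    for (const link of article.outboundLinks) {
      inbound.set(link, (inbound.get(link) ?? 0) + 1);
    }
  }
  // Sort by in-cluster inbound links (descending), then by publication date.
  return [...cluster].sort((a, b) => {
    const linkDiff = (inbound.get(b.url) ?? 0) - (inbound.get(a.url) ?? 0);
    return linkDiff !== 0 ? linkDiff : a.publishedAt.getTime() - b.publishedAt.getTime();
  })[0];
}
```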

Likely original research source: For a scientific finding that is being shipped all over the internet with a thousand explosive and absolutely wrong hot takes, surface what looks like the original journal source. If an article talks about many findings, produce the best possible list of sources referred to in the article by looking at links, quotes, and names of quoted experts.
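
One small piece of that is easy to sketch: scanning an article’s links for DOIs, which usually point straight at the underlying paper. The function and its wiring into a browser panel are assumptions; the DOI pattern itself is standard.

```typescript
// Hypothetical sketch: pull candidate journal sources out of a page by
// looking for DOI references in its outbound links.
function findLikelyResearchSources(doc: Document): string[] {
  const dois = new Set<string>();
  const doiPattern = /10\.\d{4,9}\/[^\s"'<>]+/g;
  for (const anchor of Array.from(doc.querySelectorAll<HTMLAnchorElement>("a[href]"))) {
    const matches = anchor.href.match(doiPattern);
    if (matches) {
      matches.forEach((doi) => dois.add(doi));
    }
  }
  return Array.from(dois).map((doi) => `https://doi.org/${doi}`);
}
```

Quotes and named experts would need fuzzier matching, but links and DOIs alone already cover a surprising share of science clickbait.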

Likely original data source: When you see a statistic, where’s it from? What’s the original context? What public stores of data are available?

OCR images, and do all this for images too: A lot of disinfo is now in the form of images.

[Image: an example of a claim circulated as text embedded in an image]

OCR that text on the image, and make the same features available. What is the “Possible research source” of this? And if it tells me “Research source not found,” is that a problem?
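
The OCR step, at least, is already practical. A sketch using the open-source Tesseract.js library; the OCR call is real, while feeding the result into the source-finding checks above is the assumption:

```typescript
// Sketch: recover the text embedded in a shared image so the same
// source-finding checks can run against it.
import Tesseract from "tesseract.js";

async function extractImageText(imageUrl: string): Promise<string> {
  const result = await Tesseract.recognize(imageUrl, "eng");
  return result.data.text;
}
```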

Related Sites: In what universe does this site reside? Alternet, Breitbart, or National Geographic? What links into this page, and do those inbound links confer authority or suggest bias?

There’s actually a lot more they could do as well. If you want, you can read the newly award-winning book Web Literacy for Student Fact-Checkers, look at each verification process described, and ask “How could the browser make that easier?” Most of the things in there can be relatively easily automated.

Boost the Antibodies

I get that such features will often break, and sometimes expose wrong information about sources. Basically, the Google snippets problem. And I get that most people won’t use these tools. But my model of how this impacts society is not that everyone makes use of these tools, but that the five percent of people who do create a herd immunity that helps protect others from the worst nonsense. We can’t make every cell invulnerable, but we can boost the antibodies that are already in the system.

It’s also true that this should be done at the social media platform level as well. And in apps. I’ll take it anywhere. But it seems to me that browser providers are in a unique position to set user expectations around capabilities, and provide an interface that can deal with misinformation across its life cycle. It could also push these tools into social platforms that have been reluctant to provide this sort of functionality, for fear of dampening “virality” and “engagement”. Plus, the sort of users likely to fight disinfo already hover over links, look for SSL indicators, and use omnibar search. Giving them more tools to make their community better could have an outsized impact on the information environments we all inhabit. My work has shown me there are plenty of people out there that want to improve the information environment of the web. Isn’t it time we built a browser to help them do that?