More fascinating research out of China on social media, this time directly related to my obsession with the Garden and Stream models of social media.
Roughly, the finding of the study is this: when readers have the option to retweet a message, their comprehension of that message falls significantly. The researchers found that:
…“repost” did not promote but hindered participants’ online information comprehension. Messages that were reposted were more likely to be understood incorrectly than correctly. This finding has overarching implications given that the majority users of micro-blogging sites only read and repost others’ messages (Fu and Chau, 2013 and Kaplan and Haenlein, 2011). …
How much more incorrectly? Students in the repost condition got *twice* as many comprehension questions wrong on the messages they read as the control group, which was presented with the exact same messages but with no option to repost.
There are some caveats here — the participants were reading tweet-sized messages, but only had 300 ms apiece to read them. That’s pretty tight, but it is meant to test the researchers’ model, which assumes this is a resource-contention issue — if you have to ask yourself “Should I retweet this?” at the same time you are reading the tweet, your cognitive resources are split and comprehension suffers.
At the same time, this matches the experience that many of us have on Twitter, where one half of our brain is on a loop asking “Is this retweetable?” while the other half deals with the mundane task of, you know, understanding what we are looking at.
The study presents an even more stunning finding (and one I am still not sure I am reading correctly). People in the reposting condition, when presented with an offline document after reading and reposting, still really suck at comprehension:
For the offline reading comprehension test, participants were first asked to read an article, “More than a feline: The true nature of cats,” from New Scientist. The article was translated into Chinese with a total of 2176 characters. A comprehension test was compiled based on this article, including 11 multiple-choice questions that all had excellent discrimination values in a pretest. Participants’ scores on the test (0–11) were used as the index of offline information comprehension.
The results? People in the no-reposting group did 50% better on the comprehension test, even though the test was on an offline document with no reposting option.
Participants in the no-feedback group (M = 5.95, SD = 1.23) outperformed those in the feedback group (M = 4.05, SD = 1.99) on offline reading comprehension, t(39) = 3.63, p = .001, d = 1.15.
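For the curious, the reported effect size and t statistic can be re-derived from those summary statistics. Here is a minimal Python sketch, assuming an even 20/20 split of the participants (the paper reports t(39), so the exact split is an assumption on my part):

```python
import math

# Summary statistics as reported (group sizes of 20 each are assumed)
m1, sd1, n1 = 5.95, 1.23, 20   # no-feedback (no-repost) group
m2, sd2, n2 = 4.05, 1.99, 20   # feedback (repost) group

# Pooled standard deviation for two independent groups
sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

# Cohen's d: the mean difference in pooled-SD units
d = (m1 - m2) / sp

# Student's t statistic for independent samples
t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))

print(f"Cohen's d = {d:.2f}")   # -> 1.15
print(f"t = {t:.2f}")           # -> 3.63
```

Both values land on the reported d = 1.15 and t = 3.63, which is a nice consistency check on the write-up.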
The authors hypothesize that this, too, is due to depletion of cognitive resources — the mind, exhausted from dual-tasking through the repost activity, has less to give to the final task.
I find these experiments interesting, even if they are only the beginnings of real research on these issues. From my perspective, I wonder if the cognitive resources issue is only part of it — as I’ve said before in my presentation on the Garden and the Stream, the nature of the stream is that it pushes you away from comprehension and into rhetoric. Rather than seeking to understand, the denizen of the modern Twitter or Weibo feed seeks to sort incoming information as right or wrong, helpful or unhelpful, worth retweeting or not retweeting, worth getting into a righteous rage about or not.
Once the information is sorted as foe or ally, witty or dull, etc., we are done. At its most extreme, the stream replaces comprehension with classification, with each decision forming an irreversible ruling on the item, never to be revisited, recombined, reorganized, or rethought. In this race we retweet articles two paragraphs in, and vilify links we haven’t even clicked through. The stream doesn’t just compete for existing cognitive resources — it perverts the questions we ask of what we read.
That’s not in this study’s data, of course, but I think it is consistent with its findings. I look forward to more work in this area.
8 thoughts on “Retweeting and Comprehension”
hey Mike – I’m too lazy to get to the full text of this, so I’m just gonna ask real quick:
a. How many participants in the study?
b. And wouldn’t a quasi-experimental design have been better? So take the group A and group B participants and SOMETIMES give group A repost/retweet options and sometimes give group B? And just make sure that it’s the “repost” that’s making the difference, not the particular people? You know what I mean?
Because the second result about depletion has me a little suspicious. I can understand how after years of social media we get it, but I don’t understand how we get it after playing around a little during a lab experiment. Maybe I’m not understanding the full thing… and maybe I should just go get the full article…but it’s 1 am 🙂
Sample size is 40, which is low, but the design was experimental, with a control condition and an intervention condition. Since the decision to retweet or not retweet exhausts cognitive resources during the actual reading, you can’t really do it on a tweet-by-tweet basis. Also, the things you retweet might differ in comprehension difficulty from the things you don’t, and at 300 ms just *seeing* whether there’s a repost button takes resources, so I think the design is right.
Though the study has a small n, the size of the effect is big enough that you still get pretty decent p values.
Depletion is a real thing in other contexts (e.g. if you read something complex and then read an unrelated thing that’s also complex, you do worse than if you read something simple and then something complex), so the depletion itself doesn’t surprise me, but the size of the difference is surprising. They back this up with a survey the students took, which seems to bolster the idea that the students in the retweet condition felt substantially more drained than students in the no-retweet condition.
It’s a study that should be replicated multiple times and at larger scales. But I think talking about its importance and hidden implications is the way to get more studies done.
Yeah, but where is the pre-test? How do we know that it’s not a coincidence that students in A were like that to begin with? (I am not a fan of experimental designs in any case, but I know there are things they can do to avoid that problem.) Is the effect size a correlation or causation? That might not be the exact right terminology, but I think you know what I am asking here?
Your p-value handles some of that. They also used regression to separate out some knowledge effects. You can’t handle it with a pretest anyway, because comprehension advantages are likely to be domain-specific, which means that different readers will have different advantages on different tweets.
Let me know if I’m explaining something you already know, but broadly, it’s unlikely that all or most of the slower readers were assigned to treatment while all or most of the faster readers were assigned to control. How unlikely? Well, for *all* of the people to sort that way randomly, you have about the same chance as flipping a coin and getting 20 heads in a row.
It gets murkier when we look for slighter advantages (a slightly smarter control group, for example), but the idea that it was only a slight advantage is undermined by the size of the advantage — the control group didn’t do *slightly* better, it demolished the intervention group. When you throw all this together you get (according to the study) a p of .001, which is actually pretty bulletproof.
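To put a rough number on that coin-flip analogy: with 40 participants randomly split into two groups of 20 (the even split is my assumption), the probability that one particular set of 20 people — say, the 20 slowest readers — all lands in the same group can be computed exactly. A quick sketch:

```python
from math import comb

# Number of ways to choose which 20 of the 40 participants form one group
total_splits = comb(40, 20)

# Probability that a particular set of 20 people (e.g. the 20 slowest
# readers) ends up entirely in either one of the two groups
p_all_sorted = 2 / total_splits

# The coin-flip analogy from the comment above: 20 heads in a row
p_twenty_heads = 0.5 ** 20

print(f"P(perfect sorting) = {p_all_sorted:.2e}")   # -> 1.45e-11
print(f"P(20 heads)        = {p_twenty_heads:.2e}") # -> 9.54e-07
```

So if anything the analogy understates the case — a perfectly lopsided random assignment is orders of magnitude less likely than 20 straight heads.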
There's a whole literature on how p values are overrated (which I agree with), but they do address the randomness problem you raise pretty well. (Left unaddressed: systematic bias, common causes, confounds, effect size, real-world impact, etc.)
This is to say I think the study could have many problems, but I think the likelihood that the result (that control outperforms intervention) is an artifact of chance is low. The other problems can be ironed out by replication and complementary designs.
Yes, super interesting study and findings. A larger sample size/replication would go a long way toward supporting or refuting the findings, so I’d hedge my bets on what they found. And as Mike said, it pushes cognitive resources away from comprehension and toward rhetoric. Whether you actually get to the rhetoric or not probably doesn’t matter (it would be interesting to measure how much energy people put into the thought of retweeting), but the cognitive load of that switch is what pans out to be the comprehension blocker.
Ok but then my next question is… In a real life situation we don’t have that time limit but obviously there are other limitations on our time. Does this study transfer well outside the lab?
This is coming from me, a speed reader who just made a huge mistake because I speed-read something…
But I am also thinking of my own repost behavior. There are instances where I intentionally share before reading something anyway… It’s a long story maybe?
I really should probably go read this entire paper… And since it’s closed access and not easy to share maybe I will understand it well haha 😉
Wait: People actually take the time to understand a post before they retweet it? Wow. I should try that.