This is the new story out — it’s a mancovery! From Bloomberg:
Men, who lost more than twice as many jobs as women during the worst economic slump since the Great Depression, have landed 88 percent of the non-farm jobs created since the recession ended in June 2009. The share of men saying the economy was improving jumped to 41 percent in March, compared with 26 percent of women, according to the Bloomberg Consumer Comfort Index’s monthly expectations gauge.
“The recovery is a mancovery,” said Heather Boushey, a senior economist at the Washington-based Center for American Progress. “I don’t see improvement for women in the past year, whereas for men this is the best year in years.”
So here’s the question — if men lost 100% more of the jobs in the recession than women, what percentage of new job openings would we expect them to get back if we were looking for an equitable recovery? Is it above 88%? Below 88%? What is the exact percentage?
You might know the answer to this already, but if you don't, you can do a mental experiment. Plug in some fake numbers and find out!
Here’s what I found out:
[Video: "Untitled" from Mike Caulfield on Vimeo]
So the 88% does, to some extent, represent a "mancovery", though maybe not by the amount it initially seems (about 67% of jobs going to men would be equitable, so this is about a third more jobs than we might expect).
Could you have figured this out without an experiment? Absolutely. An easy way to look at this is that if men lost double the jobs, they must have lost two-thirds of them to women's one-third. And if two-thirds of the jobs lost were lost by men, then two-thirds of the jobs returning should be men's jobs. But that's easy in retrospect. Things like that are not always clear when you first come to a novel problem.
So, the important point is, as always, if you don’t understand something, plug some fake numbers in and play around a bit. For most problems like this it’s easy and inexpensive to do a thought experiment.
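If you'd rather run the thought experiment than watch it, here's a minimal sketch in Python. The 2-million and 1-million job figures are fake numbers I've plugged in purely for illustration; only the 88% comes from the Bloomberg story.

```python
# Fake numbers for the thought experiment: men lost twice as many jobs.
jobs_lost_by_men = 2_000_000
jobs_lost_by_women = 1_000_000
total_lost = jobs_lost_by_men + jobs_lost_by_women

# In an equitable recovery, each group regains jobs in proportion
# to the jobs it lost.
equitable_male_share = jobs_lost_by_men / total_lost  # 2/3, about 67%

actual_male_share = 0.88  # the figure from the Bloomberg story

# How far above the equitable share is the actual recovery?
excess = actual_male_share / equitable_male_share - 1  # about a third more

print(f"Equitable share for men: {equitable_male_share:.1%}")
print(f"Men's actual share:      {actual_male_share:.1%}")
print(f"Excess over equitable:   {excess:.0%}")
```

Try swapping in different fake numbers for the losses; the equitable share always comes out as the male fraction of total jobs lost, which is why the starting values don't really matter.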
A poor man said to a rich one: “All my money goes for food.”
“Now that’s your trouble,” said the rich man. “I only spend five percent of my money on food.”
(From a Sufi tale, recounted here.)
Percentages are a really helpful tool, obviously. But raw numbers can matter too.
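A toy version of the tale's arithmetic, with incomes invented just to make the point concrete:

```python
# Invented incomes, purely for illustration.
poor_income = 1_000    # per month
rich_income = 100_000  # per month

poor_food_spend = 1.00 * poor_income  # "all my money goes for food"
rich_food_spend = 0.05 * rich_income  # "only five percent"

# The rich man's 5% is a far larger raw amount than the poor man's 100%.
print(poor_food_spend, rich_food_spend)
```

The percentage makes the rich man sound frugal; the raw numbers show he spends five times as much on food as the poor man earns.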
From Kevin Drum’s Chart o’ the Day:
Lots of interesting stuff going on there. Notice, in particular, how trust in science falls off a cliff for moderates in the 70s. It's also fascinating that conservative trust in science used to be as high as (if not higher than) liberal trust 40 years ago. This is still the case in Europe, where for the most part there is no liberal/conservative divide in trust in science.
It gets even more interesting when you look at the subpopulations. You might think, for instance, that the decline in conservative belief in science has been driven by shifts in the attitudes of the least educated conservatives. Nope:
Less-educated conservatives didn’t change their attitudes about science in recent decades. It is better-educated conservatives who have done so, the paper says.
In the paper, Gauchat calls this a “key finding,” in part because it challenges “the deficit model, which predicts that individuals with higher levels of education will possess greater trust in science, by showing that educated conservatives uniquely experienced the decline in trust.” This finding also could make it difficult to change attitudes. Gauchat writes that the educational attainment data suggest “that scientific literacy and education are unlikely to have uniform effects on various publics, especially when ideology and identity intervene to create social ontologies in opposition to established cultures of knowledge (e.g., the scientific community, intelligentsia, and mainstream media).”
Lifecycle impact analysis is an invaluable tool for making fair comparisons. It's easy, for example, to get hung up on the small amount of mercury in a CFL bulb, a percentage of which can escape into the environment if the bulb is crushed in a landfill.
But the biggest contributor to mercury pollution is coal-fired plants, which push gigantic amounts of mercury into the environment as part of their normal operations.
So how do we compare the mercury impact of the two different bulbs? We calculate how much mercury is produced via electricity over the lifetime of the bulb (here standardized to 8000 hrs. of use, since CFLs last longer). Then we add the mercury in the bulb itself to the lifetime use figure. It seems obvious, and it’s certainly a common way to do it — but it’s an incredibly powerful way to look at things compared to the alternatives.
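Here's that lifecycle calculation as a sketch. The wattages, the 4 mg of mercury per CFL, and the mercury-per-kWh figure are stand-in values I've assumed for illustration, not figures from the source; the method, not the numbers, is the point.

```python
HOURS = 8000  # standardized lifetime from the comparison above

# Illustrative assumptions -- not authoritative figures.
MG_HG_PER_KWH = 0.012   # mercury emitted per kWh of coal-heavy electricity
INCANDESCENT_WATTS = 60
CFL_WATTS = 13
CFL_BULB_HG_MG = 4.0    # mercury sealed inside the CFL itself

def lifetime_mercury_mg(watts, bulb_hg_mg=0.0):
    """Mercury from generating the bulb's electricity, plus the bulb's own."""
    kwh = watts * HOURS / 1000
    return kwh * MG_HG_PER_KWH + bulb_hg_mg

incandescent_hg = lifetime_mercury_mg(INCANDESCENT_WATTS)
cfl_hg = lifetime_mercury_mg(CFL_WATTS, bulb_hg_mg=CFL_BULB_HG_MG)

print(f"Incandescent: {incandescent_hg:.2f} mg over {HOURS} hrs")
print(f"CFL:          {cfl_hg:.2f} mg over {HOURS} hrs")
```

With these made-up inputs the CFL comes out slightly ahead even counting the mercury inside the bulb, because its lower wattage avoids more mercury at the power plant than the bulb contains.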
My new favorite term from epidemiology: J-Curve.
There are a lot of things that increase your mortality in a more-or-less linear way. The more you smoke, the greater your all-cause mortality risk, for example. This isn't to say you increase your chance of death by 100% moving from one pack a day to two. But on average, your mortality goes up for each additional cigarette you smoke a day. Ten cigarettes is not going to be better for you than five, ever.
Some things, though, don’t work like that. It’s harmful to be overweight, but it’s harmful to be underweight too. Some studies claim alcohol is like this — having no alcohol correlates with a higher mortality than having a drink or two a day, but once you get past a drink or two a day mortality climbs again. The curve is shaped like a “J”, hence the name.
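A toy model of the difference, with made-up coefficients and a made-up "optimum" chosen only to show the two shapes:

```python
def linear_risk(cigarettes_per_day):
    """Roughly linear: every additional cigarette adds risk."""
    return 1.0 + 0.05 * cigarettes_per_day

def j_curve_risk(drinks_per_day, optimum=1.5):
    """J-shaped: risk is lowest near some moderate level, higher at zero,
    and higher still as consumption climbs past the optimum."""
    return 1.0 + 0.1 * (drinks_per_day - optimum) ** 2

# Linear: more is always worse.
assert linear_risk(10) > linear_risk(5)

# J-curve: zero is worse than moderate, and heavy is worse than both.
assert j_curve_risk(0) > j_curve_risk(1.5)
assert j_curve_risk(6) > j_curve_risk(0)
```

The practical upshot of the J shape is that "less is better" fails in both directions: pushing intake to zero from the optimum raises risk, just as piling it on does.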
Understanding that things can work this way is important. Vitamin E deficiencies have been correlated with increased cancer mortality, so a lot of people take vitamin E supplements, assuming it’s a linear relationship. But vitamin E supplements have been correlated with increased cancer risk.
Likewise, a lot of health gurus today will point to the harmful effects of over-consumption of sugar, gluten, or dairy (or heck, even fat/oils), and act as though this proves elimination of this thing will dramatically increase your health. It might — if it is a linear relationship. But if it’s a J-curve, you could end up doing as much harm as good.
The Silicon Valley conception of privacy isn't working for anyone except Silicon Valley. We know that. Charlie Stross, who is one smart dude, points out that if you follow the corporate-driven push to overshare to its logical conclusion, your phone becomes a handy-dandy genocide machine, or, in the near term, the perfect device for this year's roofie-carrying stalker. Moreover, this is not some bizarre side effect of social software, but a flaw built into how the software thinks about you, the product it is serving up to others.
That seems shrill and alarmist, but lately I don't think it is. There are a lot of benefits to sharing, but also a lot of drawbacks, as any college grad who has missed out on a job due to a red Solo cup picture can tell you. And because we get our media from the entities that came up with this system, we tend to see the benefits as systemic and the downsides as localized. But think about that for a minute or two and you realize that can't possibly be right.
Anyway, I’ve been thinking how it all ends lately. I don’t think it ends with us all running our own open source servers, going off the corporate surveillance grid. I don’t think we’ll be switching to Diaspora. We’re locked into these services.
So what's the next vector? I think what we'll be seeing soon are pro-privacy viruses. Imagine a "benevolent virus" that, instead of keylogging your credit card number, resets all your Facebook settings to the most private settings and sets your homepage to instructions for reopening permissions (if that's what you want to do). Or a virus that sits resident in memory and corrupts cross-site tracking cookies in real time. Or one that shows you every bit of information that is retrievable about you on the internet, and asks if you are good with that.
I don't think these should be created — there'd be a lot of unforeseen side effects. But I think they are coming, and I think they are more likely to have a broad impact on privacy than scattered DIY projects.
In the end, I imagine they will fail — but it will be an interesting phase of this drama…