By Colby Cosh - Wednesday, December 12, 2012
The CBC provided us with an interesting case study in science reporting on Monday as its “community team” blog trumpeted “UN climate change projections made in 1990 ‘coming true.’”
Climate change projections made over two decades ago have stood the test of time, according to a new report published Monday in the journal Nature.
The world is warming at a rate that is consistent with forecasts made by the UN’s Intergovernmental Panel on Climate Change 22 years ago.
Climate scientists from around the world forecasted the global mean temperature trend for a 40-year period, from 1990 to 2030—and at this halfway point the report authors have found the projections “seem accurate” after accounting for natural fluctuations.
These are absolutely all the numbers you are going to get out of this news item. And if you peruse the new assessment of the 1990 IPCC predictions, which was actually published on the Nature Climate Change website, what you find is a more nuanced picture than the CBC’s “They nailed it, no worries” interpretation implies.
David Frame and Dáithí Stone write that the 1990 IPCC report predicted a rise in global mean temperatures of between 0.7 degrees C and 1.5 degrees C by the year 2030; on a linear interpolation, we might have expected half the increase to have occurred by now. The actual observed warming during the past 20 years (almost all of it taking place in the first ten) has been in the vicinity of 0.35 degrees C to 0.39 degrees C, “on the borderline” of the range given in 1990. In other words, the IPCC’s point estimate was high, and the overall warming has been consistent with the outer confidence bounds of their stated prediction, but barely.
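The interpolation is simple enough to check; a quick sketch using the figures Frame and Stone report (and treating the warming as linear, which is the simplification the comparison itself relies on):

```python
# 1990 IPCC business-as-usual forecast: 0.7 to 1.5 C of warming over 1990-2030
low_2030, high_2030 = 0.7, 1.5
elapsed, horizon = 20, 40  # years since 1990; length of the forecast period

# Linear interpolation: the share of the warming we would expect by the halfway mark
frac = elapsed / horizon
expected = (low_2030 * frac, high_2030 * frac)
print(expected)  # (0.35, 0.75): the range implied for roughly 2010

observed = (0.35, 0.39)  # warming actually seen, per Frame and Stone
# The observed figures hug the very bottom edge of the expected range
```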
Frame and Stone think, with some justification, that this is a pretty good performance given the simplicity of the climate models available at the time. It’s especially good, they think, because the models could not predict what would happen in the economy, or below the planet’s crust. Their story is that the Earth caught a series of lucky breaks despite the substantive failure of greenhouse gas reduction efforts.
The highlighted [IPCC] prediction assumed a business-as-usual scenario of GHG emissions; three other scenarios were considered and in fact Scenario B (which assumed a shift to natural gas, a decrease in the deforestation rate, and implementation of the Montreal Protocol, all independent of global climate negotiations) was closer to the mark as of 2010, especially with respect to methane emissions… Of course, [even these Scenario B] predictions were based on idealized future scenarios that did not foresee the eruption of Mount Pinatubo, the collapse of the Soviet Bloc industry, or the growth of some Asian economies, so one could argue that the prediction is right for the wrong reasons.
The authors conclude by noting that predicting the future is a lot harder than predicting the past—and, unfortunately, the resolving power of crystal balls has not improved much since 1990.
…the 1990 prediction following [the IPCC's] business-as-usual scenario covered a full 0.4°C range due solely to uncertainty in the climate sensitivity that has not narrowed substantially so far, whereas a larger range was implied by the examination of further scenarios of emissions and a larger range still should have been considered owing to uncertainty in the evolution of natural forcings and internally generated variability.
Believers in and skeptics of the threat from anthropogenic climate change will both find promising fodder in this paper for conversion into mountains of delicious hay. (Mind the carbon emissions, though.) I’ll resist the temptation to join in that exercise, but it is very clear that the authors’ “Well done” message to the IPCC carries a sizable asterisk. If the CBC is going to report on a scientific paper, why not show some indication somebody has read it?
By Colby Cosh - Thursday, July 22, 2010 at 10:32 AM
A detection kit for the most common date rape drugs is going on sale throughout Canada shortly, according to the Montreal Gazette. The Gazette did not have to look far to find someone to denounce the ethical premise of such apparatuses: a spokesman for a Vancouver women’s shelter said “This is a cynical attempt to make some money and shame on the company for feeding off the fear that women, reasonably, have of being raped.”
I suppose most of us would respond with something very like Adam Smith’s classic formulation: we are not to look to a “lack of cynicism” for the answers to our social problems, any more than we look to the fellow-feeling of the butcher and the baker to provide us with sustenance. If something like the Drink Detective—which consists of a pipette and three pieces of treated paper—enabled us to end drug-facilitated rape tomorrow, that would be a very good thing indeed.
Unfortunately, almost 100% of barroom beverages contain a highly effective substance that diminishes inhibitions and impairs memory. More to the point, it is odd that a test for “date rape drugs” other than ethanol should be criticized on the premise of its effectiveness without any attempt at an inquiry into that effectiveness. The Drink Detective website, by itself, doesn’t encourage confidence. It features a supposedly independent, but thinly sourced, “technical report” into the accuracy of the kit. One press release on the site, perhaps in a ham-handed attempt to double the market for the product, recycles the urban legend that “In some countries, it is even possible to be drugged and incapacitated so that organs, such as kidneys, can be surgically removed and sold.”
You are probably wondering whether there have been any peer-reviewed studies of the Drink Detective, and why, if there are, they aren’t mentioned on the “Science” page of the product’s website. The answer to your first question is “Yes”. And you probably already have a potential answer to the second if you’ve studied statistics.
A team of public health researchers in Liverpool published a study of the Drink Detective in the journal Addiction in 2006. They found that the Drink Detective was significantly superior to a rival product, and as a technical feat of fast, cheap detection of complex molecules, the kit deserves not just praise but wonder. But is it really of much use? The authors found that the overall sensitivity of the kit was 69.0% and its specificity 87.9%. In plainer English, this means that for every 100 samples of adulterated booze, the test will, on average, miss (100-69), or 31; and for 100 non-drugged drinks, the test will give (100-87.9), call it 12, false positives.
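For readers unfamiliar with the jargon, sensitivity and specificity translate directly into miss and false-alarm counts; a minimal sketch with the study's figures:

```python
sensitivity = 0.690  # P(kit flags the drink | drink is spiked)
specificity = 0.879  # P(kit stays quiet | drink is clean)

spiked, clean = 100, 100  # the hypothetical samples in the text

missed = spiked * (1 - sensitivity)       # spiked drinks waved through: about 31
false_alarms = clean * (1 - specificity)  # clean drinks flagged anyway: about 12

print(round(missed), round(false_alarms))  # 31 12
```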
Women who are hyper-conscious of the possibility of drug-assisted rape will not be happy to hear that the Drink Detective gives a clean bill of health to almost one-third of drink-tampering sociopaths. But the false positives are a concern too: it would be easy to design a test that “caught” every single spiked drink if you didn’t care about specificity as well as sensitivity. (A heuristic of “Run straight home if a napkin becomes moistened when you dip it in your glass” would have 100% sensitivity.) In situations where the real odds of getting a spiked drink were as high as 1 in 100, a test with 88% specificity would still finger about 12 innocents as toxic creeps for every guilty man it identified, and that assumes it caught the guilty man at all. Even at a reasonable-sounding price per kit of $5.99, test fatigue seems likely under realistic circumstances.
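The same two numbers drive the base-rate problem; a sketch assuming, generously, that one drink in a hundred really is spiked (a prevalence almost certainly too high):

```python
sensitivity, specificity = 0.690, 0.879
spike_rate = 0.01  # assumed prevalence: 1 spiked drink per 100

drinks = 10_000
spiked = drinks * spike_rate         # 100 spiked drinks
clean = drinks - spiked              # 9,900 clean ones

true_positives = spiked * sensitivity        # about 69 genuine catches
false_positives = clean * (1 - specificity)  # about 1,198 false alarms

# Of all the drinks the kit flags, the share actually spiked:
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 3))  # about 0.054: well over nine in ten alarms are false
```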
The Drink Detective’s manufacturers had some specific gripes about the Liverpool test—complaining, for instance, that the testers’ use of pharmaceutical-grade GHB was inappropriate—but they had received the benefit of the doubt in at least one large, obvious way: the kit was put through its paces, not in a dimly-lit pub toilet by experimenters half-wrecked on Cosmos, but by sober scientists working in a laboratory. It is hard to disagree with the conclusion that “Use of drug detector kits by the public in the night-time environment…may create a false sense of security (false negatives) and undue concern (false positives) among kit users.” And the same could be said—to her credit, Daisy Kler of the Vancouver Rape Relief and Women’s Shelter does say it—about the overall focus on drug-facilitated sexual assault by strangers. No one is certain how often this really happens, and the best guess is “not very”.
By Colby Cosh - Tuesday, May 4, 2010 at 2:26 PM
Here’s the lede of a science story from Saturday’s Winnipeg Free Press:
WINNIPEG — Depression and substance abuse plague about half of American women who reported having an abortion, according to a new University of Manitoba study.
The study, published in the current issue of the Canadian Journal of Psychology, suggests there’s an association between mental disorders and abortion…
Eager to investigate this shocking headline claim—the Edmonton Journal, picking up the story, literally gave it the headline “Depression or drug abuse found in half of women who aborted”—I set out to find the study. This presented something of a problem, since there has not been a “Canadian Journal of Psychology” since 1993. I spent a little while rifling through Canadian Psychology and the Canadian Journal of Experimental Psychology until a helpful reader on Twitter clued me in. Yes, you guessed it: it can be found in the Canadian Journal of Psychiatry. First place I should have looked, really.
That’s an understandable mistake. It’s a bit more of a problem that the first sentence of the article—an article that includes a warning from the lead author to the effect that it is “important the study is not misinterpreted”—is totally false. Because of, y’know, misinterpretation.
The paper, entitled “Associations Between Abortion, Mental Disorders, and Suicidal Behaviour in a Nationally Representative Sample”, does what it says on the tin: the data are taken from interviews with a demographically representative subset of the U.S.’s National Comorbidity Survey Replication project. It is hard to know what numbers the reporter added or multiplied or pulled out of a hat to reach the conclusion that “Depression and substance abuse plague about half of American women who reported having an abortion.” (I spoke to the lead author of the study, and she can’t figure it out either.) But a good guess would be that she looked at this section from the article’s main chart—
—and simply added together the estimated lifetime incidence of depression among women who had had an abortion (29.3%) and the lifetime incidence of substance-use disorders (24.6%). It will probably have occurred to you that there might be some overlap there between depression and substance abuse, which go together like poached eggs and hollandaise. You don’t need a Ph.D. to know that the depression group is likely to contain almost all of the women in the substance-abuse group.
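The blunder is the textbook one of adding the probabilities of events that overlap; a toy illustration (the overlap figure below is invented for the sake of the arithmetic, since the paper does not report one):

```python
p_depression = 0.293  # lifetime incidence of depression, from the paper's chart
p_substance = 0.246   # lifetime incidence of substance-use disorders

# The reporter's implicit arithmetic: just add the two columns
naive = p_depression + p_substance
print(round(naive, 3))  # 0.539, hence "about half"

# Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
# Suppose, hypothetically, that 0.20 of the sample had both conditions:
p_both = 0.20
p_either = p_depression + p_substance - p_both
print(round(p_either, 3))  # 0.339: nearer a third than a half
```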
And this naïve math (which is hardly attributable to a failure to grasp hyper-advanced statistics) is compounded by the wording of the offending sentence, which doesn’t say that “some percentage of abortion recipients have, at some point before or after getting an abortion, experienced depression or substance abuse or both.” It uses present tense, unjustifiably implying that all the women in question are plagued by both problems now.
This mess is already being picked up, “carelessly” garbled even further, and circulated around the globe by pro-lifers, despite the personal entreaties of the scientist who helped the newspaper with its reporting and the many, many methodological and interpretive caveats in the original study. This kind of thing is exactly why a lot of scientists hate talking to reporters. Nor does it make sincere research into therapeutic abortion any easier. The UM study can’t be used to attribute psychiatric morbidity to abortion, but it could be used by fair-minded pro-lifers (let’s assume for the sake of argument that there were some) to raise questions about abortion’s place in our society and argue for a research program.
Oh, I know: we’re a hundred years away from that kind of discussion being possible. But the inadvertent propagation of urban legends only pushes that day further into the future.