By Colby Cosh - Thursday, April 11, 2013 - 0 Comments
The Globe reported late Tuesday, on the basis of six home games, that “Rogers Centre has become a home run haven”, claiming that extra-long gophers are being hit under the don’t-call-it-Skydome at “a record-setting rate”.
Through the first homestand last week, in which Toronto won two of six games, the home side and their opponents combined for 23 homers, the most in the majors; at 3.83 home runs per game, the rate ranked second in the 30-team league, behind only Rangers Ballpark in Arlington, which saw four home runs per game.
The home run average across the majors the first week was 2.14 per game, so what has transpired at Rogers Centre so far represents a huge increase. Should the trend continue–a big if, considering 75 regular-season home games remain–upwards of 310 home runs stand to be clubbed at the Rogers Centre, which would set a major-league record.
That last part, of course, is sheer foolishness: almost every ballpark will have some six-game stretches in which a “record-setting rate” of home runs is achieved. This particular streak drew attention because it came at the start of the season, when a reporter happened to be watching, and because the phenomenon can be tenuously pegged to some hypothetical or downright imaginary changes in airflow and other conditions at Rogers. Fans love home runs, though, so I’d like to express the Rogers empire’s appreciation to the Globe for whipping up interest.
What I wondered, as a journalist wary of the Sin of Small Sample Size, was how much of this home-run phenomenon we can attribute to the park, as opposed to the team. The Jays, after all, went bananas with trades and free-agent signings in the offseason, and they were already a power-heavy team. The road teams visiting Rogers hit 12 home runs over the six games in 254 plate appearances. That’s a rate of 5.3%. Overall this season, non-Jays teams have hit 250 in 8,771 plate appearances outside Rogers Centre: a rate of 2.9%. The difference looks impressive, but how significant is it statistically?
By the conventional standard, the answer is “just barely”. If you run what’s called a “chi-squared test for equality of proportions”, you find that a difference of that magnitude would arise by chance only 4.875% of the time; in the sciences the usual habit is to set the significance cutoff at 5% or less. And this goes to illustrate one of the big problems with classical hypothesis testing in the sciences. If we checked all thirty big-league teams through early-season samples of similar size, and the teams all actually hit home runs at the same rate in the long run, we would still expect 5% of the 30, or one-and-a-half, to have a “significantly” unusual apparent home-run rate.
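For readers who want to try the arithmetic themselves, here is a minimal sketch of the kind of test described above, using the counts quoted in this post. Whether the result lands exactly on 4.875% depends on details such as the continuity correction, so treat it as an illustration rather than a reconstruction of the original calculation.

```python
# Chi-squared test for equality of two home-run rates (an illustrative sketch,
# not the original calculation; the p-value shifts with the continuity correction).
from scipy.stats import chi2_contingency

rogers_hr, rogers_pa = 12, 254     # road teams at Rogers Centre
other_hr, other_pa = 250, 8771     # non-Jays teams everywhere else

table = [
    [rogers_hr, rogers_pa - rogers_hr],
    [other_hr, other_pa - other_hr],
]

chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
```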
The issue is illustrated well by a famous web cartoon about green jellybeans. And that’s exactly what we have here: a big pile of green jellybeans. Unsurprisingly, it is too soon to start telling just-so stories about how balls are travelling further because of the mojito fumes from the centre-field bar at Rogers. How pleasant to think so, though.
By Patricia Treble - Thursday, March 21, 2013 at 5:24 PM - 0 Comments
Recently the health of the older generation of royals has been under a microscope. First the Queen all but disappeared from view after she was admitted to hospital with gastroenteritis on March 4. Big public engagements were cancelled, including a trip to Italy, though she did continue with those that were in the safe confines of royal residences.
Finally, on Wednesday, she moved back into the limelight, going to the Baker Street Station of the London Underground for the 150th anniversary of the oldest subway system in the world.
Then today, Buckingham Palace confirmed that her cousin, HRH Prince Edward, duke of Kent, was admitted to hospital. He’d suffered a minor stroke, sources said. All of his engagements have been cancelled.
And that brings up a demographic time bomb placed at the heart of the Windsor team. For, according to Tim O’Donovan’s meticulous accounting of annual royal duties, members of the family undertook 4,470 engagements in 2012. And of those, 25 per cent were done by Windsors over the age of 76, including the Queen, Prince Philip, the duke of Kent and his sister, Princess Alexandra. Extend the group of royals to those age 60 and older and the number jumps to 3,019 or 67 per cent.
By Bookmarked and Chris Sorensen - Friday, March 1, 2013 at 9:00 AM - 0 Comments
Wheelan’s earlier book, Naked Economics: Undressing the Dismal Science, is regarded as one of the best—and most readable—introductory texts on the subject. A former U.S. correspondent for The Economist who now teaches at Dartmouth College, Wheelan skilfully cut through all the jargon and graphs to demonstrate that economics is really about people and their behaviour. Now Wheelan is attempting to demystify another important, but equally tedious-sounding field: statistics. As Wheelan points out, the inferences made from statistical data underpin much of modern life, from the movie suggestions delivered by Netflix to your chances of developing heart disease. They are also easily misunderstood, manipulated or, in rare cases, plain wrong. But how many of us know enough about stats to tell?
In a bid to explain both the power and pitfalls of statistical analyses, Wheelan draws on engaging examples that range from sports to game shows. They include: why marketers of Schlitz beer were willing to subject their brew to a blind taste test among 100 fans of a rival brand in front of a Super Bowl audience (most of the beers Schlitz competes with taste much the same, so with a large enough sample roughly half the tasters could be expected to pick Schlitz regardless of their stated preference); and a discussion of what’s come to be known as the Monty Hall problem: should Let’s Make a Deal contestants, faced with three doors, one hiding a car and two hiding goats, opt to change their selection after the host reveals a goat behind one of the two doors they didn’t pick? (Yes, Wheelan argues, because the chances of winning jump from one-in-three to two-in-three.)
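The two-in-three answer is easy to check by brute force. Here is a minimal simulation sketch (mine, not Wheelan’s) of the Monty Hall game:

```python
# Monte Carlo check of the Monty Hall answer: switching wins about 2/3 of the time.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a goat door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)   # ~0.33
print("switch:", sum(play(True) for _ in range(trials)) / trials)    # ~0.67
```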
Still, the book feels far more dense and textbook-like than Wheelan’s previous work. It may well be that, unlike economics, which is sometimes described as a “pseudoscience,” statistics is necessarily more math-like. But given the increasing importance of stats to our everyday lives, odds are it’s worth the extra effort.
Visit the Maclean’s Bookmarked blog for news and reviews on all things literary
By Colby Cosh - Sunday, February 10, 2013 at 3:46 AM - 0 Comments
Sometime last year I found myself wondering about the effects of residential schools on the younger generations of aboriginal Canadians. The schools have more supporters than you might think, more than almost anyone likes to admit, amongst former attendees; the resentment felt toward them by those who had terrible experiences is matched by the ferocity with which Indian families agitated to keep the better ones alive late in their existence. We have chosen to take a monolithic view of the residential schools as a bad idea, full stop—to the point at which any educational intervention into Indian welfare that smacks of paternalism will now be run from as if it were a rabid grizzly. (Just for starters, the scale of the residential schools was obviously one of the problems; if there had been four, instead of 80 or more, they could perhaps have been run with some professionalism and accountability.)
It is hard to be sure that this is fortunate. And it is hard to be sure that it is helpful, for if there are other systematic explanations for Indian poverty and social issues, the “it’s all because of those hellish residential schools” explanation might cause us to overlook them. The schools have been shut down for a long time now; they can’t be blamed for the remainder of eternity, any more than I can attribute my incompetence with money to the Highland Clearances. Though maybe I should give it some thought.
Anyway, it turns out that there are surprisingly detailed data concerning Indian social welfare. The federal Aboriginal Affairs department collects and calculates a “community well-being index” for all Canadian communities, and has used the numbers to identify top-performing Indian bands, in order that policy lessons might be extracted from them. The latest index data are old, dating to the 2006 census, but visualizing them still teaches useful things about Indian societal health.
The tool I used is called a “box-and-whisker plot”, or, for short, a “boxplot”. The Great Tukey (peace be upon him) gave the boxplot to us, describing it as a “microscope” for data analysis. But presenters of statistical information for public consumption don’t show boxplots very often, because their features are not too intuitive. A boxplot lets you put series of numbers side by side and eyeball them for differences in their distributions. The parts of a boxplot are thus: (1) a box around the “interquartile range”, or the middle half of the data; (2) a line through the box at the median; (3) a “whisker” usually extending outward from each end of the box as far as the furthest actual data point lying within 1.5 times the interquartile range of that end; (4) individual dots for outlying data points beyond the whisker. The length of the whisker was chosen by Tukey so that data matching a normal, symmetrical bell curve would have few outlying points, no more than 1% of the sample; a large number of dots is thus a convenient quick indication that a data set is non-normal. (That’s important for statisticians because it rules out further analysis techniques that assume normality.)
I’m not going to quiz you on all that: a boxplot is not too intuitive, but it’s intuitive enough that you can just look and feel. So here’s a picture of First Nations well-being (as of 2006) broken down by province, with tiny P.E.I., largely FN-free Newfoundland, and Inuit communities set aside:
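For anyone who wants to reproduce that sort of picture, here is a minimal sketch of how a province-by-province boxplot can be drawn. The file name and column names are my own assumptions, not the actual layout of the Aboriginal Affairs data.

```python
# Sketch of a province-by-province boxplot of the community well-being index.
# "cwb_2006.csv", "province", "community_type" and "cwb_index" are assumed names.
import matplotlib.pyplot as plt
import pandas as pd

cwb = pd.read_csv("cwb_2006.csv")
fn = cwb[cwb["community_type"] == "First Nation"]

groups = {prov: g["cwb_index"].values for prov, g in fn.groupby("province")}

plt.boxplot(list(groups.values()), labels=list(groups.keys()), whis=1.5)  # Tukey's 1.5 x IQR whiskers
plt.ylabel("Community well-being index (2006)")
plt.title("First Nations community well-being by province")
plt.show()
```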
Why did I want to look at this information this way? Because Canada actually performed an inadvertent natural experiment with residential schools: in New Brunswick (and in Prince Edward Island) they did not exist. If the schools had major negative effects on social welfare flowing forward into the future we now inhabit, New Brunswick’s Indians would be expected to do better than those in other provinces. And that does turn out to be the case. You can see that the top three-quarters of New Brunswick Indian communities would all be above the median even in neighbouring Nova Scotia, whose FN communities might otherwise be expected to be quite comparable. (Remember that each community, however large, is just one point in these data. Toronto’s one point, with an index value of 84. So is Kasabonika Lake, estimated 2006 population 680, index value 47.)
On the other hand, and this is exactly the kind of thing boxplots are meant to help one notice, the big between-provinces difference between First Nations communities isn’t the difference between New Brunswick and everybody else. It’s the difference between the Prairie Provinces and everybody else including New Brunswick—to such a degree, in fact, that Canada probably should not be conceptually broken down into “settler” and “aboriginal” tiers, but into three tiers, with prairie Indians enjoying a distinct species of misery. (This shows up in other, less obvious ways in the boxplot diagram. You notice how many lower-side outliers there are in Saskatchewan? That dangling trail of dots turns out to consist of Indian and Métis towns in the province’s north—communities that are significantly or even mostly aboriginal, but that aren’t coded as “FN” in the dataset.)
I fear that the First Nations data for Alberta are of particular note here: on the right half of the diagram we can see that Alberta’s resource wealth (in 2006, remember) helped nudge the province ahead of Saskatchewan and Manitoba in overall social-development measures, but it doesn’t seem to have paid off very well for Indians. This isn’t a surprising outcome, mind you, if you live in Alberta; we have rich Indian bands and plenty of highly visible band-owned businesses, but the universities are not yet full of high-achieving members of those bands, and the downtown shelters in Edmonton, sad to say, still are.
These little boxes go some way toward explaining why the Harper government’s focus on Indian-band accountability may make less sense to Ontarians than it does to Albertans—or why Harper’s prairie base might have had a different reaction to the conditions and the controversy in Attawapiskat than Eastern voters did. These are data of which everyone should be aware, and I wish there were an easier, more natural way to depict them. I’m also curious about how the same data will look once they’re compiled from the 2011 census, heaven knows when.
By Colby Cosh - Saturday, February 2, 2013 at 9:28 AM - 0 Comments
Phil Birnbaum, who along with “Tom Tango” is probably one of Canada’s two great gifts to quantitative analysis in sports, has been studying the NHL over the past few weeks. It was only after a second or third reading of his series breaking down luck versus skill in the NHL standings that I was able to really grasp what he was saying. I’m a fluent speaker of basic stats-ese, but not a native. Phil is a pretty approachable explainer of things (including some of the things devised by Tango), so usually I don’t have to bash myself over the head too hard with his findings. But I didn’t see how interesting the message was until now.
Probably all hockey fans know instinctively that the introduction of the shootout has injected a fair amount of randomness into the year-end NHL standings. Birnbaum, looking at the shootout-era data, has now shown just how much. In the old NHL that still had ties, it took an average of 36 NHL games for a team’s actual talent to become as important to its standings position as sheer randomness. “Talent” is defined here as repeatable ability, ability relevant to prediction: after 36 games, your team’s distance in the standings from .500 would be about half luck and half “talent”, and that would be reflected in your guess as to how they would do in the next 36 games (assuming nothing else about the team had changed). Over a full season, we could be confident that there was little randomness left in the ordering of the teams in the league table.
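The back-of-envelope version of that “talent equals luck after N games” calculation is simple, and worth seeing. The sketch below treats each game as a plain win/loss trial with an assumed spread of true team talent; Birnbaum’s actual analysis of standings points is more involved, so take this as the flavour of the arithmetic rather than his method.

```python
# How many games until talent matters as much as luck, in a simple win/loss model.
# The talent spread below is an assumed figure, not one taken from Birnbaum.
talent_sd = 0.083                 # assumed SD of true winning percentage across teams
talent_var = talent_sd ** 2

# A .500 team's observed winning percentage after G games has luck variance
# of roughly 0.5 * 0.5 / G, so luck and talent contribute equally when:
games_to_parity = 0.25 / talent_var
print(f"talent == luck after about {games_to_parity:.0f} games")   # ~36
```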
But in the new post-ties NHL, Birnbaum notes, the standard deviation of standings points has shrunk from about .2 per game to .15.
By Colby Cosh - Sunday, November 4, 2012 at 4:19 AM - 0 Comments
The whole world is suddenly talking about election pundit Nate Silver, and as a longtime heckler of Silver I find myself at a bit of a loss. These days, Silver is saying all the right things about statistical methodology and epistemological humility; he has written what looks like a very solid popular book about statistical forecasting; he has copped to being somewhat uncomfortable with his status as an all-seeing political guru, which tends to defuse efforts to make a nickname like “Mr. Overrated” stick; and he has, by challenging a blowhard to a cash bet, also damaged one of my major criticisms of his probabilistic presidential-election forecasts. That last move even earned Silver some prissy, ill-founded criticism from the public editor of the New York Times, which could hardly be better calculated to make me appreciate the man more.
The situation is that many of Nate Silver’s attackers don’t really know what the hell they are talking about. Unfortunately, this gives them something in common with many of Nate Silver’s defenders, who greet any objection to his standing or methods with cries of “Are you against SCIENCE? Are you against MAAATH?” If science and math are things you do appreciate and favour, I would ask you to resist the temptation to embody them in some particular person. Silver has had enough embarrassing faceplants in his life as an analyst to make this obvious.
By Peter Nowak - Friday, September 7, 2012 at 10:00 AM - 0 Comments
Last week’s post about how the budgets for television shows may need to go down in order to adapt to the internet sparked some interesting discussion over on Twitter. The discussion involved films, of course, with one commenter suggesting that A-list actors such as Tom Cruise command huge salaries because they’re proven draws.
That got me thinking: do movie executives really cast their movies based on the drawing power of the actors? Of course they used to; the better question is whether they still do. And if so, is it possible to play games with such a system, the way Oakland Athletics general manager Billy Beane played “Moneyball”?
Surely I’m not the first person to have thought of this – it would actually only surprise me if this sort of thing wasn’t widespread in Hollywood.
Beane’s Moneyball strategy, for the uninitiated, was a system of picking players based on non-traditional statistics. For much of its history, Major League Baseball has valued its players according to traditional stats, like batting average, home runs, stolen bases, earned run average and so on. If one guy consistently hits .300 with 40 home runs, then he’s an all-star who should make big bucks, or so the system has gone.
Beane, however, didn’t have those big bucks to spend with the A’s, so he instead focused on what he felt were more important statistics, such as on-base percentage and slugging percentage. After all, it doesn’t really matter how a player gets on base – whether it’s through a hit, a walk or even being hit by a pitch – because once he’s there, he has the same chance to score a run however he arrived, and scoring runs is the only thing that matters in a game decided by one team outscoring the other.
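For concreteness, here is how those two statistics are computed from ordinary counting stats; the sample numbers are invented for illustration.

```python
# On-base percentage and slugging percentage from raw counting stats (toy numbers).
def on_base_pct(h, bb, hbp, ab, sf):
    return (h + bb + hbp) / (ab + bb + hbp + sf)

def slugging_pct(singles, doubles, triples, hr, ab):
    total_bases = singles + 2 * doubles + 3 * triples + 4 * hr
    return total_bases / ab

obp = on_base_pct(h=150, bb=90, hbp=5, ab=520, sf=5)
slg = slugging_pct(singles=95, doubles=35, triples=2, hr=18, ab=520)
print(f"OBP = {obp:.3f}, SLG = {slg:.3f}")
```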
As dramatized in the Brad Pitt film, Beane used his stats to put together a successful team that had no bona fide all-stars, just players who put up solid numbers but were paid modestly. The “Moneyball” strategy has of course had a big effect on baseball since, with many teams now employing statisticians who study such numbers.
The logic seems to apply to movies as well. Over the past year, Tom Cruise was again the highest-paid actor, according to Forbes. The illuminating part, however, comes from looking at the magazine’s most-overpaid-actors list, which weighs the revenue from each actor’s last three films against his or her pay. Right there at ninth most overpaid is Cruise, whose movies earn $6.35 for every dollar he’s paid.
Contrast that with the most profitable actor, Kristen Stewart, whose movies (which have basically been Twilight films, so far) earn $55.83 for every dollar she’s paid.
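The Forbes metric itself is nothing fancier than a ratio. A toy version, with made-up salary and gross figures rather than the magazine’s data, looks like this:

```python
# Toy "return per salary dollar" ranking; the figures are invented for illustration.
actors = {
    # actor: (pay for last three films, combined gross of those films), in $M
    "Actor A": (75.0, 476.0),
    "Actor B": (12.5, 698.0),
    "Actor C": (40.0, 310.0),
}

ranked = sorted(actors.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
for actor, (pay, gross) in ranked:
    print(f"{actor}: ${gross / pay:.2f} earned per $1 of salary")
```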
The contrast between the two lists is obvious. The overpaid list includes established A-listers such as Cruise’s ex-wife Nicole Kidman and comedians such as Adam Sandler, Will Ferrell and Eddie Murphy. The most profitable list, meanwhile, is made up mostly of young actors such as Stewart’s co-star Robert Pattinson, Daniel Radcliffe and Shia LaBeouf.
The major flaw in Forbes’ method is equally obvious once the types of movies the actors star in are considered. People go to see comedies for the actor or comedian, while not many go to big event movies like Transformers to see LaBeouf. Comedy stars thus arguably merit higher pay even though their movies earn less than blockbusters, which can pay their leads relatively little. That skew explains much of the difference between the two lists.
Still, the inclusion of dramatic actors such as Cruise and Kidman on the overpaid list does lend credence to the idea that paying an actor large amounts of money to star in a movie is pretty risky, if not foolish. From a financial perspective, it would seem to make more sense to play Moneyball with actors. As long as a movie isn’t completely dependent on its star’s personality, young actors consistently deliver a better bang for the buck.
By Mika Rekai - Tuesday, August 7, 2012 at 9:07 AM - 0 Comments
Top gerontology researchers in the United States are looking into the case of a mysterious woman from Regina, the Canadian Press reports.
Records from the Saskatchewan government show that the woman, born in 1899, is the oldest person on record in the province.
If the records are up to date and the woman is still alive, she would be one of the oldest people in the world and part of a small group of “supercentenarians,” people over the age of 110.
Researchers in Los Angeles are hoping to contact family members so they can interview them about the woman’s lifestyle and genetic history. The gerontologists are trying to determine trends and commonalities among supercentenarians to shed light on what determines a long life.
In Canada, Saskatchewan has an uncommonly high number of centenarians, people over the age of 100. According to Statistics Canada, Saskatchewan has twice the national average of centenarians, with a rate much closer to Japan’s than to the rest of Canada’s.
By Colby Cosh - Monday, July 30, 2012 at 12:17 AM - 0 Comments
The New York Times ran a deeply contrarian op-ed Saturday about math education in the United States. In it, political scientist Andrew Hacker argues that the youth of America is being crucified on a cross of higher math.
A typical American school day finds some six million high school students and two million college freshmen struggling with algebra. In both high school and college, all too many students are expected to fail. Why do we subject American students to this ordeal? I’ve found myself moving toward the strong view that we shouldn’t.
By Aaron Wherry - Tuesday, July 24, 2012 at 12:52 PM - 0 Comments
Vic Toews takes credit for the latest decline in the crime rate.
Crime rate down 6% – shows #CPC tough on crime is working. Rate is still 208% above 1962 levels, more work for our gov’t to do
Questions for further discussion: If the overall crime rate’s decline demonstrates the success of the Harper government’s approach, does the rise in the homicide rate demonstrate a failing on the part of the government? And does the general decline in the crime rate since 1991 validate the policies of previous governments?
By Aaron Wherry - Friday, June 29, 2012 at 8:38 AM - 0 Comments
Stephen Harper, June 2008. It’s one thing that they, the criminals do not get it, but if you don’t mind me saying, another part of the problem for the past generation has been those, also a small part of our society, who are not criminals themselves, but who are always making excuses for them, and when they aren’t making excuses, they are denying that crime is even a problem: the ivory tower experts, the tut-tutting commentators, the out-of-touch politicians. “Your personal experiences and impressions are wrong,” they say. “Crime is not really a problem.” I don’t know how you say that.
Rob Nicholson, July 2008. “We don’t govern by statistics in our government.”
Rob Nicholson, July 2009. “We don’t govern on the latest statistics.”
Rob Nicholson, September 2011. “We’re not governing on the basis of the latest statistics.”
Vic Toews’ spokeswoman, yesterday. But a spokeswoman for Federal Public Safety Minister Vic Toews disputed their claims, saying since the Conservatives took office, firearms-related homicides have decreased by 28%. “These statistics show that our government’s tough-on-crime approach is working,” Julie Carmichael said in an email.
The national homicide rate peaked in 1975 at 3.03 homicides per 100,000 people. It has gradually declined since then, first falling under 2.0 in 1997.
By Julia Belluz - Friday, February 10, 2012 at 8:29 AM - 0 Comments
“One in five Canadians experiences a clinical mental illness and many more struggle with stress or grief.”—Globe and Mail, 02/07/2012
One in five of us has or will suffer from a mental illness: for years, we’ve peppered our news stories, health pamphlets, and advocacy campaigns with this statistic about the goings on in our heads. There are even entire mental health websites dedicated to it, such as OneInFive.ca courtesy of Dalhousie University.
It’s a number that knows no boundaries. In the U.S., a new national report found that one-fifth of American adults experienced mental illness in the past year.
By Aaron Wherry - Friday, November 18, 2011 at 2:08 PM - 24 Comments
Rob Nicholson, July 2008. “We don’t govern by statistics in our government.”
Rob Nicholson, July 2009. “We don’t govern on the latest statistics.”
Stockwell Day, August 2010. “We’re very concerned . . . about the increase in the amount of unreported crimes that surveys clearly show are happening. People simply aren’t reporting the same way they used to.”
Rob Nicholson, September 2011. “We’re not governing on the basis of the latest statistics.”
Jeff Watson, this morning in the House. “Madam Speaker, with our tackling violent crime act, measures to strengthen parole, pardons and sentences for violent criminals, funds for more frontline police and to prevent at-risk youth from a life of crime, only this Conservative government is making our communities and streets safer. According to StatsCan’s just released 2010 crime severity index, Windsor–Essex is the safest region in Canada. Among the safest Canadian communities over 10,000 people, the town of LaSalle ranks 2nd, Tecumseh 4th, Kingsville 7th, Lakeshore 8th, Essex 12th. Windsor is the 7th safest big city of 32, and topping the list of 238 safest towns and cities is my hometown, Amherstburg. Thanks to our dedicated police, strong community involvement, our government’s investments to prevent crime and tough laws to crack down on criminals, Windsor–Essex is the safest region in Canada.”
Local officials in Windsor and Essex County have cited a number of possible explanations for the recent success there, including shifting demographics, community assistance, police involvement in schools and “luck.”
By Aaron Wherry - Friday, September 30, 2011 at 5:32 PM - 23 Comments
New figures show a 69% response rate for the National Household Survey—higher than Statistics Canada expected, but still not sufficient to replace the long-form census.
“You simply can’t get reliable data from a voluntary survey,” said John Brewster, who teaches statistics at the University of Manitoba and is president of the Statistical Society of Canada. “I mean, this is something we teach in every course in statistics, for example. The data will almost certainly be biased. And we don’t really know at this time the magnitude or direction of that bias.”
By Colby Cosh - Thursday, August 25, 2011 at 12:20 PM - 101 Comments
A group of Calgary neurologists has published a report on foreseeable complications faced by locals who have returned from receiving trendy “liberation therapy” for multiple sclerosis abroad. It is not clear whether the case files include the woman who was inadvertently liberated from the world by the treatment, but their contents sound troubling enough. “These five cases,” the authors note in their abstract, “represent the beginning of a wave of complications for which standardized care guidelines do not exist.”
They sound somewhat nervous, don’t they? It is almost as if they had not heeded the repeated reassurances of journalists and “liberation” enthusiasts that venous angioplasty and stent installation in major neck veins are routine procedures, of about as much clinical concern as having one’s shoe size measured. That tricky little distinction between veins and arteries turns out to be fairly important to the discussion: as an April letter in Clinical Neuroradiology pointed out, “Balloon dilatation and stent implantation have not primarily been developed for the venous system and are associated with a substantial risk for complications…with possible fatal outcomes.” [Emphasis mine]
Since the butcher’s bill is beginning to be drawn up, and not just in Calgary, it may be worth examining how well the “chronic cerebrospinal venous insufficiency” theory has fared over a full year of research. In April, SUNY Buffalo researcher Robert Zivadinov, a close colleague of CCSVI theorist Paolo Zamboni, delivered a controlled study of 500 patients that offered, at best, feeble confirmation of Zamboni’s original results. Zivadinov’s findings, as Colleague Anne Kingston pointed out at the time, could conceivably provide some comfort to both sides of the debate. But the one thing one could not possibly do with Zivadinov’s figures was to reconcile them with Zamboni’s original study, which claimed a perfectly sensitive, perfectly specific link between indicia of CCSVI and the presence of MS.
In the meantime, other results from preliminary studies of CCSVI and MS have been trickling out, to less fanfare. There is a cruel unrelentingness to them—a lamentable finality even to the titles of the articles. From Italy alone we have “No evidence of chronic cerebrospinal venous insufficiency at multiple sclerosis onset” (January); “Proposed chronic cerebrospinal venous insufficiency criteria do not predict multiple sclerosis risk or severity” (July); “Progressive multiple sclerosis is not associated with chronic cerebrospinal venous insufficiency” (last week).
A German team attracted some attention in January with a finding that “Intracranial venous pressure is normal in patients with multiple sclerosis”. A similar study from a VA hospital in Texas, using Zamboni’s own detection criteria to define the presence of CCSVI, was published earlier this month. The title: “No Cerebral or Cervical Venous Insufficiency in US Veterans With Multiple Sclerosis”. Meanwhile, the journal Neurology has a preprint from Greece which confirms the objectivity of the proposed CCSVI criteria—but also confirms the absence of any apparent link with MS. And for what it’s worth, a June study of animal models provides a smidgen of evidence against Zamboni’s speculation that vascular problems create autoimmune difficulties by causing localized deposits of iron to be left in the brain.
There is also the new study you might have read about which establishes that most of the gene markers statistically linked with MS are known to influence the immune system. For my money, that is actually an overhyped blow to the Zamboni hypothesis, in comparison with the lengthening train of papers finding no simple empirical connection between veins and MS at all. Most researchers agree that the CCSVI hypothesis is still worth following up with randomized controlled trials of larger size and longer duration. But they advocate this, not because there is any doubt that MS is fundamentally immunological, but because some far less radical variant of Zamboni’s idea might conceivably be, well, sort of true-ish. (See, for example, this note from neurologists in Erlangen: “…it certainly seems awkward to think of the complex disease MS solely as result of a simple venous outflow obstruction. Yet, the investigation of new vascular concepts as one variable in the pathophysiology of the autoimmune attack seems very worthwhile…”.)
Other researchers are frankly not so open to keeping up a chase that was, after all, set off by a study (Zamboni’s 100%-specific, 100%-sensitive investigation) that almost certainly has to have been junk. The frustrations of a few scientists are discernible in the literature: one German group basically thumbed their noses at CCSVI by calling it the “perfect crime”—a supposed primary cause of MS that seems to leave no trace when sought in MS patients, using any means, by anyone but Zamboni or his very early supporters. Another comment in a senior journal asks whether CCSVI is “science fiction”. Either way, unfortunately, the premature enthusiasm for “liberation therapy” is a cold, inescapable fact.
By Claire Ward - Friday, July 15, 2011 at 9:00 AM - 16 Comments
The Toronto event was widely reported to have been attended by a million people—an impossible statistic
As far as estimates go, one million has a nice ring to it. Last week, a number of Toronto media outlets, including Macleans.ca, reported a dramatically inflated statistic: that one million people attended Toronto’s annual Pride parade on July 3. Maclean’s has since determined that this is both physically impossible, given the dimensions of the space, and highly improbable, given previous estimates of attendance. The number of attendees remains unconfirmed by Toronto police, the City of Toronto, and Pride Toronto organizers. So how did the media get it wrong? The erroneous news reports were a case of broken telephone that can be traced back to a 2009 estimate, which states that Pride week drew 1,120,000 visits. But visits, Maclean’s has learned, have little to do with attendance as we understand it.
“Attendance is a tricky word,” says Michael Harker, senior partner at Toronto-based Enigma Research Corporation, the research company behind the 2009 report. “There’s a big difference between visits and unique attendees. Visits is, a guy comes three times, we count him three times. Uniques is, he comes three times, we count him once.” That one million figure, then, accounts for total visits to the 2009 festival—multiple returns over the span of four days—and not for boots on the ground at the festival’s flagship parade. The total number of uniques was actually 411,450, which, again, does not represent just parade attendees but all visitors over the course of four days. Enigma did not provide an estimate for how many people were at the parade itself. In fact, no one did.
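The distinction is worth spelling out, because it is exactly the kind of thing a spreadsheet hides. In the toy example below (invented names, not Enigma’s data), the same person showing up on several days inflates visits but not unique attendees:

```python
# Visits count every entry; uniques count each person once.
attendance_log = ["alice", "bob", "alice", "carol", "alice", "bob"]

visits = len(attendance_log)         # 6
uniques = len(set(attendance_log))   # 3

print(f"visits = {visits}, unique attendees = {uniques}")
```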
Const. Victor Kwong, media relations officer at the Toronto Police Service, explains that the police don’t give estimates anymore. “We used to do estimates, but we got a lot of complaints. People would say, ‘Oh, you’re lowballing so that the event gets less press,’ or, ‘You’re highballing so the event gets more support.’ ”
By Colby Cosh - Tuesday, December 28, 2010 at 11:57 AM - 68 Comments
When the U of T Cities Centre announced a couple weeks ago that middle-class neighbourhoods are disappearing in Toronto, the Globe and Mail latched onto the study and squeezed it for all it was worth. Or, rather, what little it is worth and then some. The Globe used the study to craft a news article with a horror-movie lede, to order up a nostalgic Margaret Wente column, and to conduct a live online chat debating the issues raised. In the chat a user named “Paul” brought up a technical question for the study’s lead author, David Hulchanski:
I note that the maps drive off AVERAGE income. Do we know what they would look like if they drove off MEDIAN income? The published maps tell us that there is a growing class of people with super-high incomes. I think maps based on the median would be more informative about the middle class.
Let’s raise a glass to Paul. Even if you don’t understand why his point is important, you can see in the chat that Hulchanski’s answer is unsatisfactory: he says both that his team didn’t have median-income numbers going back far enough to make them the focus of the study and that he’s confident it wouldn’t make any difference. I think a criminal lawyer would call this “presenting an alibi and a justification at the same time.”
Hulchanski’s study found that the proportion of middle-income neighbourhoods in Toronto was 66% in 1970; it is now just 29%. Low-income neighbourhoods made up 19% of the city in 1970; that figure’s now 53%. Paul’s problem is that these types of neighbourhoods are defined relative to the mean individual income for the whole city ($88,400 in 2005). A middle-income neighbourhood is one whose residents are within 20% of the mean either way, while a low-income neighbourhood is 20%-40% below it. But a mean or average, unlike a median (i.e., the income that half the city makes more than and half makes less than), is sensitive to scale changes in individual outliers at the top of the distribution.
We can see the problem if we perform a thought experiment and imagine another city; we’ll call it Otnorot. In 1970, Otnorot had an unusual economic structure: it was divided into 100 equal-sized neighbourhoods numbered 1 to 100, each with an average real income corresponding (by total coincidence) to its number. In miserable Neighbourhood 1, the residents scrape by on 1 credit per year per person. In Neighbourhood 47, they make 47 credits on average. In Neighbourhood 100, they make 100 credits apiece, the filthy plutocratic bastards.
What would the Prof. Hulchanski of imaginary Otnorot report back to us about the economic structure of his city? The average income of the neighbourhoods (and the people in them) is, as the young Carl Friedrich Gauss could tell us instantly, the sum of the numbers 1 to 100 divided by 100: 50½ credits. Neighbourhoods 41 to 60, or 20 in all, are “middle-income” neighbourhoods within 20% of that mean. The “low-income” neighbourhoods are numbers 31 to 40; there are 10 low-income neighbourhoods.
By 2005, the vast majority of Otnorotians are living just as they and their forefathers always did. In Neighbourhoods 1-99, real incomes have not changed at all, nor have the relative population sizes changed. Neighbourhood 1 still earns 1 real credit per person, which buys exactly what it did in 1970. Neighbourhood 99 still earns 99. Only in Neighbourhood 100 has there been a change. Perhaps the residents held shares in the wildly successful Otnorotian version of Trivial Pursuit; perhaps they put their heads together and invented smell-o-vision. For whatever reason, they have gone from wealthy to superwealthy (at nobody else’s particular expense, or at least nobody’s in Otnorot), and they now earn a fantastic 8,000 credits per citizen every year.
For most Otnorotians, life hasn’t changed. The presence of the one new hyperrich neighbourhood would certainly have social effects, probably a mix of good and bad; you could, for example, almost certainly expect the Royal Otnorot Museum to acquire a hideous new glass mega-extrusion. But you wouldn’t say that the Otnorotian middle class had disappeared.
And yet—Shock! Concern!—that is exactly what Otnorot’s version of Prof. Hulchanski finds, unwisely using average incomes as his baseline. The overall average income for Otnorot is now a whopping 129½ credits a year, so no group at all outside lucky Neighbourhood 100 reaches the lower middle-income cutoff (103.6). The lower bound for a “low-income” neighbourhood, however, is now 77.7 credits. Where we once had just 10 low-income neighbourhoods out of 100, now everybody from 78 to 99 is defined as low-income, so we have 22.
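The Otnorot arithmetic is easy to verify. Here is a minimal sketch that classifies the hundred neighbourhoods against the city-wide mean before and after Neighbourhood 100’s windfall:

```python
# Classify Otnorot's neighbourhoods as middle-income (within 20% of the mean)
# or low-income (20-40% below the mean), before and after the windfall.
def classify(incomes):
    mean = sum(incomes) / len(incomes)
    middle = sum(1 for x in incomes if 0.8 * mean <= x <= 1.2 * mean)
    low = sum(1 for x in incomes if 0.6 * mean <= x < 0.8 * mean)
    return mean, middle, low

otnorot_1970 = list(range(1, 101))            # neighbourhood i earns i credits
otnorot_2005 = list(range(1, 100)) + [8000]   # only Neighbourhood 100 changes

for year, incomes in (("1970", otnorot_1970), ("2005", otnorot_2005)):
    mean, middle, low = classify(incomes)
    print(f"{year}: mean = {mean:.1f} credits, middle-income = {middle}, low-income = {low}")
```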
It so happens that in Otnorot, lukewarm social science performed at public expense and promoted by newspaper editors is punished by means too horrendous to translate into English. Things are done differently in the real Toronto, a mercifully liberal-minded place. But the processes that so confused our alterna-Hulchanski are surely, in an oversimplified way, the same processes that have confused the real scholar. Observers of inequality have observed a genuine, dramatic numerical increase in it over the past two or three decades; one only need have been looking at business-magazine “rich lists” for a while to see that billionaires, all but unknown in the early 1980s, are now as common as seagulls.
There are real social and political dangers from this, to the degree that we allow economic power to translate into social and political power. But it does not mean that the “middle class” has really disappeared or dwindled. It only means that the logarithmic scale of possible incomes has stretched out at the top in a new Gilded Age, a realm of pervasively low marginal taxes and new deregulated industries.
Toronto might really, in some sense, have become bifurcated more arrestingly between rich and poor. But the Cities Centre’s measurement procedure cannot prove that this has really happened. Would it be a good thing for social conditions in Toronto if the Bridle Path were annihilated by a meteor? If that happened, Prof. Hulchanski (and the Globe) would probably be able to report several “low-income” neighbourhoods magically re-entering the “middle class”.
Respectable social science of this sort will ordinarily work with medians or with log-income (as the UN Human Development Index does), or it will approach inequality questions with the aid of the Gini coefficient—a metric totally absent from the Hulchanski study. No doubt Prof. Hulchanski would give the same sour-grapes defence he gave to our friend Paul: don’t have the numbers, don’t need the numbers. But there’s a further question. Why should we necessarily be concerned with between-neighbourhood inequality at all? The Cities Centre would use the same “average income” figure to describe and classify both Neighbourhood X, where everybody makes a healthy $100,000 a year, and Neighbourhood Y, where half the residents make $200,000 and half make nothing, bartering and stealing for their living. Funny sort of egalitarianism, if you ask me.
By Aaron Wherry - Wednesday, November 24, 2010 at 10:06 AM - 127 Comments
Tony Clement takes on the worldwide statistical conspiracy.
By Aaron Wherry - Wednesday, November 24, 2010 at 9:06 AM - 25 Comments
Chris Selley runs the numbers on homicide.
Statistics Canada data show that in 2009, just 18.1% of “solved” homicides — meaning those in which a suspect was identified — were committed by someone unknown to the victim. That’s 82 murders, total. (If the same rate held true among unsolved homicides as well, the total number would be 110.) … There were 515 homicides in Canada in 2007. More likely ways to die included not just the traditional heart disease (50,499 deaths), suicide (3,611) and motor vehicle accidents (2,882) but such un-newsworthy occurrences as pneumonia (5,272), renal failure (3,664), falling down (2,677), poisoning (1,347) and skin cancer (875).
By Julia Belluz - Thursday, November 18, 2010 at 11:20 AM - 0 Comments
Recent polls provide a portrait of the class of 2011
You’re not like your parents, but you confide in them. You’ve been stamped the iPod generation, but you believe in the power of print and that some technologies are evil. Recent polls provide a portrait of the class of 2011.
Seventy-nine per cent believe it’s possible to create your destiny, and 52 per cent feel you will fulfill every one of your dreams. Almost all of you feel you will make it to graduation, and nearly two-thirds say you’re engaged and enthusiastic about school.
Is there a god? Not likely. You live in the moment, and probably do not participate in religion. In fact, your belief in science may trump your belief in god.
By macleans.ca - Monday, November 8, 2010 at 10:30 AM - 3 Comments
Plus, Antonia Fraser’s marriage to Harold Pinter, the fakeness of statistics, and Stephen Sondheim
The former Canadian general and head of the UN peacekeeping mission in Rwanda during the 1994 genocide, Dallaire has always been brutally open about the horrors he saw there and their effects upon him. Only “constant therapy and an unrelenting regimen of drugs” keep the memories at bay, he writes in his new book. But nothing has managed to soothe the shock Dallaire experienced when he saw preteen killers, armed to the teeth with machetes and rifles, advancing upon him.
In some 30 wars across the world, he notes, hundreds of thousands of child fighters—their ranks endlessly renewed by kidnapping or by scooping up kids orphaned by AIDS, famine or violent conflicts—have become “the ultimate, cheap, expendable, yet sophisticated human weapon.” Children are, in fact, horrifically perfect for the job. They’re small enough to transport easily in large numbers, yet big enough to handle modern lightweight arms, and heavy enough also to set off land mines so adults can safely follow. They have no real sense of fear and, when indoctrinated young enough, their capacity for loyalty and for barbarism exceeds that of adults. The girls—40 per cent of child soldiers—double as sex slaves and, in long-lasting wars, as mothers of the next generation of fighters.
For Dallaire, almost as bad as the war situation he describes with such cold eloquence is the fact that the world seems to be doing little about it. The better to bring home the emotional truth of his subject, he crafted three fictional chapters on the abduction, indoctrination and killing (by a UN peacekeeper) of a child soldier. Dallaire pulls off fiction with considerable skill, but readers who are more interested in solutions will be relieved when he turns to practical suggestions. One in particular would make children far less useful to their adult controllers: a serious effort to stamp out the trade in lightweight weapons.
- BRIAN BETHUNE
By Colby Cosh - Thursday, July 22, 2010 at 10:32 AM - 0 Comments
A detection kit for the most common date rape drugs is going on sale throughout Canada shortly, according to the Montreal Gazette. The Gazette did not have to look far to find someone to denounce the ethical premise of such apparatuses: a spokesman for a Vancouver women’s shelter said “This is a cynical attempt to make some money and shame on the company for feeding off the fear that women, reasonably, have of being raped.”
I suppose most of us would respond with something very like Adam Smith’s classic formulation: we are not to look to a “lack of cynicism” for the answers to our social problems, any more than we look to the fellow-feeling of the butcher and the baker to provide us with sustenance. If something like the Drink Detective—which consists of a pipette and three pieces of treated paper—enabled us to end drug-facilitated rape tomorrow, that would be a very good thing indeed.
Unfortunately, almost 100% of barroom beverages contain a highly effective substance that diminishes inhibitions and impairs memory. More to the point, it is odd that a test for “date rape drugs” other than ethanol should be criticized on the premise of its effectiveness without any attempt at an inquiry into that effectiveness. The Drink Detective website, by itself, doesn’t encourage confidence. It features a supposedly independent, but thinly sourced, “technical report” into the accuracy of the kit. One press release on the site, perhaps in a ham-handed attempt to double the market for the product, recycles the urban legend that “In some countries, it is even possible to be drugged and incapacitated so that organs, such as kidneys, can be surgically removed and sold.”
You are probably wondering whether there have been any peer-reviewed studies of the Drink Detective, and why, if there are, they aren’t mentioned on the “Science” page of the product’s website. The answer to your first question is “Yes”. And you probably already have a potential answer to the second if you’ve studied statistics.
A team of public health researchers in Liverpool published a study of the Drink Detective in the journal Addiction in 2006. They found that the Drink Detective was significantly superior to a rival product, and as a technical feat of fast, cheap detection of complex molecules, the kit deserves not just praise but wonder. But is it really of much use? The authors found that the overall sensitivity of the kit was about 69.0% and its specificity was 87.9%. In plainer English, this means that for every 100 samples of adulterated booze, the test will, on average, miss (100-69), or 31; and for every 100 non-drugged drinks, the test will give (100-87.9), call it 12, false positives.
Women who are hyper-conscious of the possibility of drug-assisted rape will not be happy to hear that the Drink Detective gives a clean bill of health to almost one-third of drink-tampering sociopaths. But the false positives are a concern too: it would be easy to design a test that “caught” every single spiked drink if you didn’t care about specificity as well as sensitivity. (A heuristic of “Run straight home if a napkin becomes moistened when you dip it in your glass” would have 100% sensitivity.) In situations where the real odds of getting a spiked drink were as high as 1 in 100, a test with 88% specificity would still finger 12 innocents as toxic creeps for every 1 guilty man it identified. Even at a reasonable-sounding price per kit of $5.99, test fatigue seems likely under realistic circumstances.
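That base-rate arithmetic is worth seeing in full. The sketch below uses the sensitivity and specificity reported in the Liverpool study and an assumed spiking rate of 1 in 100 drinks:

```python
# Expected hits and false alarms per 100 drinks, given the kit's accuracy
# and an assumed 1-in-100 rate of actually spiked drinks.
sensitivity = 0.69     # P(test positive | drink spiked)
specificity = 0.879    # P(test negative | drink clean)
base_rate = 0.01       # assumed share of drinks that are really spiked

drinks = 100
spiked = drinks * base_rate
clean = drinks - spiked

true_positives = spiked * sensitivity          # ~0.7 spiked drinks caught
false_positives = clean * (1 - specificity)    # ~12 clean drinks flagged

print(f"per {drinks} drinks: {true_positives:.1f} real hits, {false_positives:.1f} false alarms")
```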
The Drink Detective’s manufacturers had some specific gripes about the Liverpool test—complaining, for instance, that the testers’ use of pharmaceutical-grade GHB was inappropriate—but they had received the benefit of the doubt in at least one large, obvious way: the kit was put through its paces, not in a dimly-lit pub toilet by experimenters half-wrecked on Cosmos, but by sober scientists working in a laboratory. It is hard to disagree with the conclusion that “Use of drug detector kits by the public in the night-time environment…may create a false sense of security (false negatives) and undue concern (false positives) among kit users.” And the same could be said—to her credit, Daisy Kler of the Vancouver Rape Relief and Women’s Shelter does say it—about the overall focus on drug-facilitated sexual assault by strangers. No one is certain how often this really happens, and the best guess is “not very”.
By Colby Cosh - Friday, July 16, 2010 at 3:28 PM - 0 Comments
The objections to the census on Biblical grounds are now a thing of the past; the objections on the ground that the census is inquisitorial have also, there is good reason to believe, gradually lost their force… it is now agreed among all civilized nations that a census is a useful and desirable thing.
Thus spake the scientist and administrator G.B. Longstaff (1849-1921) in an address to the Royal Statistical Society on June 25, 1889. Longstaff’s discussion of the imperial census activity scheduled for 1891 sheds fascinating light on today’s Canadian debate: I’m not sure anyone has yet pointed out, as Longstaff did that night, that New France, Acadia, and Newfoundland are where the first censuses of any kind since antiquity were taken, and that only then did the idea return to find acceptance in Old Europe.
It took about fifty years, mind you, to convince the restless, suspicious people of Britain to accept even the most rudimentary nose-count; among the new factors which predisposed them to accept it was Malthus’s Essay on the Principle of Population (1798). (As time goes by I become more convinced that Malthus, for his approach rather than his conclusions, belongs to the rank of Hume and Adam Smith in the history of ideas, and may even approach that of Newton and Darwin.) Readers will find matter of particular interest in Part II of the body of Longstaff’s address, wherein he discusses what questions it is appropriate to ask in a census. He commences, perhaps revealing his training as a lepidopterist, with a taxonomic observation:
Statisticians may be divided into two classes, (a) those who clamour for much information on many subjects, even though such information be confessedly very imperfect; and (b) those who, being of a more sceptical turn of mind, prefer to ask for very little, and to concentrate their efforts on getting that little with the greatest attainable accuracy.
By Colby Cosh - Monday, April 19, 2010 at 3:11 AM - 19 Comments
As the paid-up holder of a Mainstream Media club card, can I warn the sportswriters away from making too much of the statistical fluke of all eight first-round NHL playoff series starting out tied through two games? The warning will arrive too late for some, but others may yet be saved.
As a landmark of NHL parity, the large number of 1-1 results in 2010 is not going to prove very useful. Imagine that game outcomes are statistically independent of each other and that the better team has, on average, a chance p of winning each individual game. If that’s the case, then the chance of a given series standing level after two games is 2(p)(1-p).
The 1-1 tie is always, for realistic values of p, the most common outcome. In a world of perfect parity—all teams are equal, no home-ice advantage, p = 0.5—half the series will be tied 1-1 after two games. And because the chance of the better team going up 2-0 is counterbalanced by a decreased chance of the other team going up 2-0, the overall chance of a tied series doesn’t drop off very fast as you depart from the parity condition, p = 0.5. For p = 0.6, about 48% of the series are still tied 1-1 after two games. (The better team is ahead in 36%, or 0.6²; the worse team is up 2-0 in 16%, about 0.4².)
But you can see that having eight series tied 1-1 will be incredibly rare even in the world of perfect parity. The probability of that happening in a given year will be the product of the chances of a 1-1 tie in each of the series. Given an average overall value of p, the odds of all eight series starting out equal work out to, at most, (2(p)(1-p))^8—a pretty small number, demonstrating the great flukiness of the “eight ties” outcome. Even in the perfect-parity world the expected frequency works out to 1 time in every 2^8, or 256, years. In the real world, the right average figure for p is probably around .54, giving us an “eight ties” year about 1 time in 269. In a fairly extreme non-parity world where the 1-4 seeds had an average 60-40 edge—that is to say, p = 0.6—the “eight ties” outcome would happen once every 355 years.
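Those figures are easy to reproduce; here is the calculation spelled out for a few values of p:

```python
# Chance that all eight best-of-seven series sit at 1-1 after two games,
# assuming independent games and the same average p in every series.
for p, label in [(0.5, "perfect parity"), (0.54, "realistic"), (0.6, "strong favourites")]:
    tie_prob = 2 * p * (1 - p)       # one series tied after two games
    all_eight = tie_prob ** 8        # eight independent series
    print(f"p = {p} ({label}): about once every {1 / all_eight:.0f} years")
```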
In other words, using this fluke as any kind of sign, indicator, or test for parity is about like insisting on reading a book only by the light of Halley’s Comet. You’d better have a comfortable chair. And plenty of kids, so they and their progeny can continue the observations (over several millennia) after you die in it…
By Colby Cosh - Thursday, April 8, 2010 at 10:40 AM - 4 Comments
Can an anonymous stats guru turn the Blue Jays around?
Everybody who takes an introductory stats course at university learns about Student’s t-test, a technique useful with small-sample experiments. “Student,” in this case, has nothing to do with the classroom. It was the pen name of William Sealy Gosset (1876-1937), one of the most important figures in the development of modern statistics. Gosset had a celebrated scientific career, but his alter ego got the immortality. His discoveries arose from his work as a brewer and agriculturist for Guinness, which had a strict policy against the publication of trade secrets; hence the pseudonym.
Last week, the Toronto Blue Jays announced the hiring of baseball’s modern-day answer to “Student”—the New Jersey-based, Montreal-born author, programmer and analyst known to the world as “Tom Tango.” Tango is among the most respected figures in the field of “sabermetrics,” the application of scientific and quantitative methods to baseball, which is perhaps best known as the subject of Michael Lewis’s non-fiction bestseller Moneyball. A wide-ranging baseball philosopher whose topics of study range anywhere from millimetric variances in the strike zone to multi-million-dollar team payrolls, Tango is the lead author of 2006’s The Book: Playing the Percentages in Baseball, perhaps the most important sabermetric manual of the past two decades. When the Kansas City Royals’ Zack Greinke won the American League Cy Young Award last year, the right-hander said that he was especially fond of a statistic called “FIP” and pitched with it constantly in mind. FIP stands for Fielding-Independent Pitching (a method of factoring the defence out of a pitcher’s earned run average and crediting him only for events over which he has sole control). Its inventor: Tom Tango.
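For the curious, FIP is simple enough to fit in a few lines. The weights and the league constant below are the commonly cited ones, stated here as an assumption rather than taken from the article:

```python
# Fielding-Independent Pitching: an ERA-scaled estimate built only from
# outcomes the pitcher controls (home runs, walks, hit batters, strikeouts).
def fip(hr, bb, hbp, k, ip, constant=3.10):
    return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + constant

# Example: 15 HR, 50 BB, 5 HBP, 220 K in 220 innings pitched.
print(f"FIP = {fip(hr=15, bb=50, hbp=5, k=220, ip=220.0):.2f}")   # ~2.74
```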
But while he is admired, prolific and an active correspondent with other scholars, Tango remains an enigma. He keeps his real name a closely guarded secret. Long known in the online sabermetrics world as “Tangotiger,” he tacked on the “Tom” and dropped the “Tiger” solely to have something semi-respectable-looking to put on the cover of The Book. “There are a lot of old-timers who think that I should sign my Christian name,” he blogged in 2008. “I don’t see why it’s anyone’s business other than mine.”