By Emily Senger - Wednesday, February 6, 2013 - 0 Comments
Missouri professor Dr. Curtis Cooper has discovered a new largest-known prime number, and it's 17 million digits long.
The number is 2 multiplied by itself 57,885,161 times, less one (in other words, 2⁵⁷⁸⁸⁵¹⁶¹ − 1), reads a press release issued by researchers. And, for anyone who needs a high school math refresher, a prime number can be divided evenly only by one and itself.
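For the curious, you don't have to write the number out to count its digits. A back-of-the-envelope check in Python (my own arithmetic, not anything from the press release):

```python
import math

# Digits of 2**p - 1: subtracting 1 never changes the digit count,
# because 2**p is never itself a power of 10.
p = 57_885_161
digits = math.floor(p * math.log10(2)) + 1
print(digits)  # 17,425,170 -- the "17 million" in the headline
```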
The discovery was part of a project called the Great Internet Mersenne Prime Search, in which volunteers use their personal computers to look through prime candidates, with anyone who discovers one eligible for a cash prize.
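The test GIMPS actually runs on each candidate is the Lucas-Lehmer test, though in production it is wrapped in heavily optimized FFT multiplication. The bare algorithm is short enough to sketch in Python (a toy illustration, not GIMPS's actual code):

```python
def is_mersenne_prime(p: int) -> bool:
    """Lucas-Lehmer: for an odd prime exponent p, 2**p - 1 is prime
    if and only if s hits 0 after p - 2 squaring steps."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# 2**11 - 1 = 2047 = 23 * 89 correctly fails; the others pass.
print([p for p in (3, 5, 7, 11, 13) if is_mersenne_prime(p)])  # [3, 5, 7, 13]
```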
To read the entire prime number, you'd have to download a 22.5 MB file, explains CNET.
By Emma Teitel - Friday, January 11, 2013 at 9:47 AM - 0 Comments
Why is illiteracy considered a legitimate deficit, while innumeracy is seen as a punchline condition?
Tori Spelling’s resolution for 2013 is to get back into her skinny jeans. Wyclef Jean’s is to never again remain silent in the face of violence because it’s “a scar on the world’s cheek.” Mine might be more ambitious than both: I am going to learn how to add. In my life, this is anything but a trivial endeavour. I happen to be innumerate—which means I do not, and cannot, do simple math. In fact, I avoid it at all costs. Literally. I’d rather pay the whole dinner bill than try to calculate the tip.
There are many people like me, some of whom may be reading this column: otherwise seemingly well-adjusted members of society who hand the cashier a $20 bill for a coffee when they have exact change, or never bother to count the change when the cashier hands it back because they doubt they’d be able to determine if there was an error. And besides, it would take so long that everyone in line behind them would probably leave the store. (For innumerates, the fear of attempting math is compounded by the fear they’ll hold up the line indefinitely if they do attempt it.)
One recent American study found that for math-phobic people, the anticipation of numerical computation actually triggers a brain reflex commonly associated with pain. According to a recent report in Britain’s Independent, “the number of [British] adults who have numeracy skills no better than those expected of an 11-year-old has shot up from 15 million to 17 million—49 per cent of the adult population—in the last eight years.” That’s a lot of pain, and a lot of self-defeating, ironic surrender. The last time I took math was in the 10th grade. It was a remedial class called personal finance, where the only reason anyone touched a calculator was to steal the batteries. Continue…
By Colby Cosh - Sunday, November 4, 2012 at 4:19 AM - 0 Comments
The whole world is suddenly talking about election pundit Nate Silver, and as a longtime heckler of Silver I find myself at a bit of a loss. These days, Silver is saying all the right things about statistical methodology and epistemological humility; he has written what looks like a very solid popular book about statistical forecasting; he has copped to being somewhat uncomfortable with his status as an all-seeing political guru, which tends to defuse efforts to make a nickname like “Mr. Overrated” stick; and he has, by challenging a blowhard to a cash bet, also damaged one of my major criticisms of his probabilistic presidential-election forecasts. That last move even earned Silver some prissy, ill-founded criticism from the public editor of the New York Times, which could hardly be better calculated to make me appreciate the man more.
The situation is that many of Nate Silver’s attackers don’t really know what the hell they are talking about. Unfortunately, this gives them something in common with many of Nate Silver’s defenders, who greet any objection to his standing or methods with cries of “Are you against SCIENCE? Are you against MAAATH?” If science and math are things you do appreciate and favour, I would ask you to resist the temptation to embody them in some particular person. Silver has had enough embarrassing faceplants in his life as an analyst to make this obvious. Continue…
By Colby Cosh - Monday, July 30, 2012 at 12:17 AM - 0 Comments
The New York Times ran a deeply contrarian op-ed Saturday about math education in the United States. In it, political scientist Andrew Hacker argues that the youth of America is being crucified on a cross of higher math.
A typical American school day finds some six million high school students and two million college freshmen struggling with algebra. In both high school and college, all too many students are expected to fail. Why do we subject American students to this ordeal? I’ve found myself moving toward the strong view that we shouldn’t. Continue…
By Claire Ward - Tuesday, July 12, 2011 at 4:12 PM - 2 Comments
The answer is nine. Barely.
Don’t forget to read the article that inspired the experiment, about inflated attendance numbers at Toronto’s Pride parade.
By Colby Cosh - Tuesday, December 28, 2010 at 11:57 AM - 68 Comments
When the U of T Cities Centre announced a couple weeks ago that middle-class neighbourhoods are disappearing in Toronto, the Globe and Mail latched onto the study and squeezed it for all it was worth. Or, rather, what little it is worth and then some. The Globe used the study to craft a news article with a horror-movie lede, to order up a nostalgic Margaret Wente column, and to conduct a live online chat debating the issues raised. In the chat a user named “Paul” brought up a technical question for the study’s lead author, David Hulchanski:
I note that the maps drive off AVERAGE income. Do we know what they would look like if they drove off MEDIAN income? The published maps tell us that there is a growing class of people with super-high incomes. I think maps based on the median would be more informative about the middle class.
Let’s raise a glass to Paul. Even if you don’t understand why his point is important, you can see in the chat that Hulchanski’s answer is unsatisfactory: he says both that his team didn’t have median-income numbers going back far enough to make them the focus of the study and that he’s confident it wouldn’t make any difference. I think a criminal lawyer would call this “presenting an alibi and a justification at the same time.”
Hulchanski’s study found that the proportion of middle-income neighbourhoods in Toronto was 66% in 1970; it is now just 29%. Low-income neighbourhoods made up 19% of the city in 1970; that figure’s now 53%. Paul’s problem is that these types of neighbourhoods are defined relative to the mean individual income for the whole city ($88,400 in 2005). A middle-income neighbourhood is one whose residents are within 20% of the mean either way, while a low-income neighbourhood is 20%-40% below it. But a mean or average, unlike a median (i.e., the income that half the city makes more than and half makes less than), is sensitive to scale changes in individual outliers at the top of the distribution.
We can see the problem if we perform a thought experiment and imagine another city; we’ll call it Otnorot. In 1970, Otnorot had an unusual economic structure: it was divided into 100 equal-sized neighbourhoods numbered 1 to 100, each with an average real income corresponding (by total coincidence) to its number. In miserable Neighbourhood 1, the residents scrape by on 1 credit per year per person. In Neighbourhood 47, they make 47 credits on average. In Neighbourhood 100, they make 100 credits apiece, the filthy plutocratic bastards.
What would the Prof. Hulchanski of imaginary Otnorot report back to us about the economic structure of his city? The average income of the neighbourhoods (and the people in them) is, as the young Carl Friedrich Gauss could tell us instantly, the sum of the numbers 1 to 100 divided by 100: 50½ credits. Neighbourhoods 41 to 60, or 20 in all, are “middle-income” neighbourhoods within 20% of that mean. The “low-income” neighbourhoods are numbers 31 to 40; there are 10 low-income neighbourhoods.
By 2005, the vast majority of Otnorotians are living just as they and their forefathers always did. In Neighbourhoods 1-99, real incomes have not changed at all, nor have the relative population sizes changed. Neighbourhood 1 still earns 1 real credit per person, which buys exactly what it did in 1970. Neighbourhood 99 still earns 99. Only in Neighbourhood 100 has there been a change. Perhaps the residents held shares in the wildly successful Otnorotian version of Trivial Pursuit; perhaps they put their heads together and invented smell-o-vision. For whatever reason, they have gone from wealthy to superwealthy (at nobody else’s particular expense, or at least nobody’s in Otnorot), and they now earn a fantastic 8,000 credits per citizen every year.
For most Otnorotians, life hasn’t changed. The presence of the one new hyperrich neighbourhood would certainly have social effects, probably a mix of good and bad; you could, for example, almost certainly expect the Royal Otnorot Museum to acquire a hideous new glass mega-extrusion. But you wouldn’t say that the Otnorotian middle class had disappeared.
And yet—Shock! Concern!—that is exactly what Otnorot’s version of Prof. Hulchanski finds, unwisely using average incomes as his baseline. The overall average income for Otnorot is now a whopping 129½ credits a year, so no group at all outside lucky Neighbourhood 100 reaches the lower middle-income cutoff (103.6). The lower bound for a “low-income” neighbourhood, however, is now 77.7 credits. Where we once had just 10 low-income neighbourhoods out of 100, now everybody from 78 to 99 is defined as low-income, so we have 22.
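If you don’t trust the arithmetic, it fits in a few lines of Python. This sketch simply re-runs the thought experiment above with the study’s mean-based cutoffs (the incomes and cutoffs are exactly as described, nothing more):

```python
def classify(incomes):
    """Count neighbourhoods by the study's mean-based cutoffs:
    middle = within 20% of the city mean, low = 20-40% below it."""
    mean = sum(incomes) / len(incomes)
    middle = sum(0.8 * mean <= x <= 1.2 * mean for x in incomes)
    low = sum(0.6 * mean <= x < 0.8 * mean for x in incomes)
    return mean, middle, low

otnorot_1970 = list(range(1, 101))           # neighbourhood n earns n credits
otnorot_2005 = list(range(1, 100)) + [8000]  # only Neighbourhood 100 changed

print(classify(otnorot_1970))  # (50.5, 20, 10): 20 middle, 10 low
print(classify(otnorot_2005))  # (129.5, 0, 22): the "middle class" vanishes
```

Run the same cutoffs off the median instead (50.5 credits in both years) and nothing changes at all, which is exactly Paul’s point.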
It so happens that in Otnorot, lukewarm social science performed at public expense and promoted by newspaper editors is punished by means too horrendous to translate into English. Things are done differently in the real Toronto, a mercifully liberal-minded place. But the processes that so confused our alterna-Hulchanski are surely, in an oversimplified way, the same processes that have confused the real scholar. There has been a genuine, dramatic numerical increase in inequality over the past two or three decades; one need only have been looking at business-magazine “rich lists” for a while to see that billionaires, all but unknown in the early 1980s, are now as common as seagulls.
There are real social and political dangers from this, to the degree that we allow economic power to translate into social and political power. But it does not mean that the “middle class” has really disappeared or dwindled. It only means that the logarithmic scale of possible incomes has stretched out at the top in a new Gilded Age, a realm of pervasively low marginal taxes and new deregulated industries.
Toronto might really, in some sense, have become bifurcated more arrestingly between rich and poor. But the Cities Centre’s measurement procedure cannot prove that this has really happened. Would it be a good thing for social conditions in Toronto if the Bridle Path were annihilated by a meteor? If that happened, Prof. Hulchanski (and the Globe) would probably be able to report several “low-income” neighbourhoods magically re-entering the “middle class”.
Respectable social science of this sort will ordinarily work with medians or with log-income (as the UN Human Development Index does), or it will approach inequality questions with the aid of the Gini coefficient—a metric totally absent from the Hulchanski study. No doubt Prof. Hulchanski would give the same sour-grapes defence he gave to our friend Paul: don’t have the numbers, don’t need the numbers. But there’s a further question. Why should we necessarily be concerned with between-neighbourhood inequality at all? The Cities Centre would use the same “average income” figure to describe and classify both Neighbourhood X, where everybody makes a healthy $100,000 a year, and Neighbourhood Y, where half the residents make $200,000 and half make nothing, bartering and stealing for their living. Funny sort of egalitarianism, if you ask me.
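As it happens, the Gini coefficient is also only a few lines of code. Here is one standard formulation (the sorted-rank formula), applied, purely for illustration, to the two Otnorots, treating each neighbourhood’s average as a single observation:

```python
def gini(xs):
    """Gini coefficient by the sorted-rank formula:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, x sorted ascending."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    return 2 * sum(i * x for i, x in enumerate(xs, 1)) / (n * total) - (n + 1) / n

print(round(gini(list(range(1, 101))), 2))           # 1970 Otnorot: 0.33
print(round(gini(list(range(1, 100)) + [8000]), 2))  # 2005 Otnorot: 0.73
```

A sensible metric, in other words, registers the real rise in inequality without having to declare that the middle class has ceased to exist.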
By Colby Cosh - Monday, April 19, 2010 at 3:11 AM - 19 Comments
As the paid-up holder of a Mainstream Media club card, can I warn the sportswriters away from making too much of the statistical fluke of all eight first-round NHL playoff series starting out tied through two games? The warning will arrive too late for some, but others may yet be saved.
As a landmark of NHL parity, the large number of 1-1 results in 2010 is not going to prove very useful. Imagine that game outcomes are statistically independent of each other and that the better team has a probability p of winning each individual game (since the first two games of a series are played in the higher seed’s rink, think of p as the better team’s chance of winning at home). If that’s the case, then the chance of a given series standing level after two games is 2p(1 − p).
The 1-1 tie is always, for realistic values of p, the most common outcome. In a world of perfect parity—all teams are equal, no home-ice advantage, p = 0.5—half the series will be tied 1-1 after two games. And because the increased chance of the better team going up 2-0 is counterbalanced by a decreased chance of the other team going up 2-0, the overall chance of a tied series doesn’t drop off very fast as you depart from the parity condition, p = 0.5. For p = 0.6, 48% of series are still tied 1-1 after two games. (The better team is ahead 2-0 in 36%, or 0.6²; the worse team is up 2-0 in 16%, or 0.4².)
But you can see that having eight series tied 1-1 will be incredibly rare even in the world of perfect parity. The probability of that happening in a given year is the product of the chances of a 1-1 tie in each of the eight series. Given a common value of p, the chance of all eight series starting out even works out to (2p(1 − p))⁸, which can be no bigger than (½)⁸—a pretty small number, demonstrating the great flukiness of the “eight ties” outcome. Even in the perfect-parity world the expected frequency works out to 1 year in every 2⁸, or 256. In the real world, the right average figure for p is probably around 0.54, giving us an “eight ties” year about once every 269 years. In a fairly extreme non-parity world where the 1-4 seeds had an average 60-40 edge—that is to say, p = 0.6—the “eight ties” outcome would happen once every 355 years.
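These figures are easy to reproduce; here is a minimal Python sketch of the arithmetic above (nothing beyond the quoted values of p is assumed):

```python
def tie_prob(p):
    """Chance a series stands 1-1 after two games, if the better team
    wins each game independently with probability p."""
    return 2 * p * (1 - p)

for p in (0.5, 0.54, 0.6):
    all_eight = tie_prob(p) ** 8  # eight independent series all tied 1-1
    print(f"p = {p}: one series {tie_prob(p):.3f}, "
          f"all eight about once every {1 / all_eight:.0f} years")
# p = 0.5: every 256 years; p = 0.54: ~269 years; p = 0.6: ~355 years
```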
In other words, using this fluke as any kind of sign, indicator, or test for parity is about like insisting on reading a book only by the light of Halley’s Comet. You’d better have a comfortable chair. And plenty of kids, so they and their progeny can continue the observations (over several millennia) after you die in it…