Wednesday, November 7, 2012

Triumph of the 'quants'

This morning we salute an historic election victory. And no, I'm not referring to Barack Obama earning four more years in the presidential hot-seat (a victory now immortalised in the most-popular-tweet-ever). The historic victory that has the twitterati salivating with admiration* is the triumph of the nerds--and one 'nerd' in particular: Nate Silver of the New York Times blog fivethirtyeight. Silver correctly predicted the outcome in all 50 states--as illustrated in this graphic comparison of his state-by-state prediction map with a map of actual results--beating his 2008 feat of correctly predicting 49 out of 50 (he missed Indiana that time, which Obama took by about 1%).

The day before the election, Silver gave Obama a 92% chance of victory, defying the 'gut-instinct' pundits who saw the election as 'too close to call' and those who only weeks ago were talking about the Romney campaign's 'momentum' (at which point Silver still put the probability of a Romney victory at just 25%). The xkcd comic summed things up neatly: "To surprise of pundits, numbers continue to be best system for determining which of two things is larger" (cartoon here).

The financial crisis did much to discredit the value of 'quants' and their stats-based analyses. But the lesson from the crisis was not so much that 'quants-are-bad', but that quantitative (predictive) modelling needs to be applied carefully, and only where the underlying model is statistically valid and relevant (I've blogged about this previously here).

So does Silver's triumph herald a new dawn in the public's attitude towards statistics and quantitative analysis? There seems little doubt that the success of the fivethirtyeight formula--both in terms of its predictive power and its ability to attract a big audience--will change the nature (or at least the methods) of political punditry. But what about its potential to improve the image of stats and quantitative, data-based analysis more widely?

I saw a tweet today from a former economics lecturer of mine, who said he had his master's class running election prediction simulations in Stata (a statistical/econometric software programme) this week. Seems to me like a great way to inspire students' interest in these analytical techniques. Economics lecturers regularly lament the difficulty of getting undergraduates to engage with stats--many, particularly those who are more politically minded, seem simply to switch off at the sight of an equation. This appears to be almost a form of learned behaviour--evidence of a dysfunctional relationship with numbers--a deep distrust of these seemingly arcane methods of analysis and scepticism about their relevance to the 'real world'.
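(As an aside, I don't know what the Stata exercise actually looked like, but the basic idea is easy to sketch. Here is a toy Monte Carlo version in Python, with entirely made-up state probabilities and electoral-vote counts, just to show how state-level probabilities get turned into an overall win probability.)

```python
# A toy electoral-college simulation: the state win probabilities and vote
# counts below are invented for illustration, not real forecasts. Each run
# draws every state independently and checks whether candidate A reaches 270
# electoral votes.
import random

# (probability candidate A wins the state, electoral votes) -- illustrative only
swing_states = [(0.95, 55), (0.80, 29), (0.50, 29), (0.65, 18), (0.30, 38),
                (0.55, 20), (0.75, 16), (0.40, 15), (0.60, 13), (0.50, 10)]
safe_votes_a = 150  # assumed electoral votes from states not simulated
# (the remaining 145 electoral votes are assumed safe for candidate B)


def candidate_a_wins():
    votes_a = safe_votes_a
    for p_win, electoral_votes in swing_states:
        if random.random() < p_win:
            votes_a += electoral_votes
    return votes_a >= 270


n_sims = 100_000
wins = sum(candidate_a_wins() for _ in range(n_sims))
print(f"Candidate A wins in {wins / n_sims:.1%} of simulated elections")
```

A real forecasting model would, of course, derive those probabilities from polling averages and allow for correlated errors across states, but even this toy version captures the basic logic behind a probabilistic forecast like Silver's.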

The success of movements such as CoderDojo and the RaspberryPi project in promoting computer coding as a hobby for kids (big and small) proves the potential appetite for 'nerdy' pursuits, when presented as engaging, creative--as opposed to mechanistic--activities. Here's hoping the (electoral) triumph of the nerds can do something similar for getting the kids interested in numbers.

____________________________________________

*For example, @monsieurcorway tweeted: 'There's a great quote from a Romney supporter last night at Mitt's 'victory' party... He was asked if he turned up thinking Mitt would win. His response? "I read Nate Silver, that son of a bitch." Says it all. Well done, Nate - You legend.' Or this, from @alanbeattie: "Nate Silver can kill people by shooting lasers from his eyes".

Tuesday, October 30, 2012

Expert accountability

The ruling last week by an Italian court that seven scientists were guilty of manslaughter for failing to warn citizens about the threat of a major earthquake in the city of L'Aquila raises some difficult questions about accountability and ethics in relation to the provision of 'expert' opinion and scientific advice to governments and the wider public. The scientific community and global media have reacted with outrage, comparing the verdict, and the threat of lengthy jail sentences for the scientists involved, with the persecution of Galileo and the burning of witches.

I agree in principle and in spirit with this sense of outrage. The ruling appears to betray a lack of understanding of the inherent uncertainty in any scientific endeavour, which is particularly pronounced when it comes to forecasting future events. Furthermore, putting science on trial in this way threatens to curtail the progress of scientific inquiry generally, and more specifically--in the nearer term--to make the position of those charged with civil protection and disaster prevention, in Italy at least, almost untenable. However, there remains an important point relating to accountability and ethics that appears to have been overlooked amidst all the righteous indignation.

In an excellent article in Nature, Stephen S. Hall outlines the sequence of events that led up to the tragedy. In an extraordinary meeting of the National Commission for the Forecast and Prevention of Major Risks, the seven scientists--who were all members of the Commission--reached the conclusion that a major earthquake was "unlikely", though not impossible. This in turn was interpreted by a government official, speaking at a press conference, as meaning the situation in L'Aquila was "certainly normal" and posed "no danger". The same official further added that the sequence of minor quakes and tremors that had been occurring in the region was in fact "favourable ... because of the continuous discharge of energy".

This interpretation is apparently contrary to the scientific evidence. In the same article, Hall quotes Thomas Jordan, director of the Southern California Earthquake Center at the University of Southern California in Los Angeles and chair of the International Commission on Earthquake Forecasting (ICEF). Jordan suggests that in the aftermath of a medium-sized shock in a seismic swarm (a sequence of tremors), the risk of a major quake can increase anywhere from 100-fold to nearly 1,000-fold in the short term, although the overall probability of a major quake remains relatively low--at around 2%, according to a study of other earthquake-prone zones in Italy (G. Grandori et al. Bull. Seismol. Soc. Am. 78, 1538–1549; 1988), also quoted by Hall.

A 1,000-fold (or even 100-fold) increase in the probability of a major quake would have been a very different message for public consumption than the "anaesthetizing" reassurances given to the media at the press conference. Clearly, the most egregious error committed here was the over-interpretation, or indeed misinterpretation, of the scientific evidence by the government official who spoke to the press. The scientists were--apparently--correct in their assessment that a major quake remained "unlikely", although not impossible. But was this the best way of characterizing the risks for the general public?
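To see how a huge relative increase can still leave the absolute risk small, here is a quick illustrative calculation. The baseline probability below is an assumption chosen only to make the arithmetic line up with the roughly 2% figure quoted above; it is not a number taken from Hall's article.

```python
# Illustrative arithmetic only: the baseline is an assumed figure, chosen so
# that a 1,000-fold increase lands near the ~2% short-term probability quoted
# above from the Grandori et al. study.
baseline_prob = 0.00002  # assumed background chance of a major quake in a short window

for multiplier in (100, 1000):
    elevated = baseline_prob * multiplier
    print(f"{multiplier:>5}-fold increase: {elevated:.2%} chance of a major quake")

# Output:
#   100-fold increase: 0.20% chance of a major quake
#  1000-fold increase: 2.00% chance of a major quake
```

Even a thousandfold jump leaves the absolute probability at around one-in-fifty, which is precisely why the relative and absolute figures needed to be communicated together.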

One of the people affected by the L'Aquila earthquake, quoted in Hall's article, admits that he feels "betrayed by science"--"Either they didn't know certain things, which is a problem, or they didn't know how to communicate what they did know, which is also a problem."

It may be that the error here was not in mis-stating the risk, but in not being specific enough about it. There is an understandable reluctance to use statistics, probabilities and scientific terminology in the public communication of scientific evidence. But at times we take this too far. The public is not stupid, and--problems with common misunderstandings in relation to probability notwithstanding--would be better served by the scientific community and public officials avoiding condescending reassurance in favour of the clear presentation of facts.

Is woolly language such as "unlikely" really any more useful or informative than simply saying "we don't know"? Might the committee have done better to present the available statistical evidence--including details about the changes in probabilities--while acknowledging that the precise timing and location of a major quake is essentially unpredictable?

The advice could have stopped short of ordering a full-scale evacuation--which, on the basis of the best available evidence, would have been unnecessary 98% of the time--and instead simply presented that evidence, enabling people to make their own informed decisions about what level of risk they were willing to accept.

Returning to the issue of accountability and ethics, to what extent should scientific or other experts be held accountable for the advice they give to governments or to the wider public? In the case of the L'Aquila tragedy, a more relevant question might be: should researchers be held accountable for the way in which their findings are interpreted by policy-makers and, in turn, by the media? Should researchers generally consider how their findings are likely to be interpreted before making them public? This would almost certainly place an unreasonable burden on researchers.

And yet, while researchers can't be expected to control how others interpret their findings, a greater effort needs to be made to communicate the science--its achievements and its limitations--directly to the public. With the recent proliferation of sources for news, opinion and analysis, the authority of traditional media outlets, and the role of journalists and editors as the gatekeepers of public information, are increasingly being challenged. This presents both a challenge and an opportunity for scientific engagement with a wider audience. It is increasingly difficult for members of the public to distinguish the signal of scientific or expert analysis from (a) noise and (b) intentionally biased or deceitful opinion emanating from thinly disguised lobbyists portraying themselves as independent 'experts'. On the other hand, the internet enables researchers to disseminate their findings, methods and data without intermediation by journalists or politicians.

The 'science on trial' headlines may sound melodramatic, but scientists from both the social and hard sciences are right to feel they are being challenged to justify their art as at no other time in living memory. Public confidence in "science"--in its broadest sense--has been undermined by episodes such as the 'climategate' controversy. The discipline of economics, similarly, has been widely criticised for not predicting the financial crisis--and, more fundamentally, for persisting with models that appear unable to explain 'real world' phenomena. This critique certainly has some merit, and economics as a discipline is evolving to take account of the lessons from related disciplines, notably psychology, biology and epidemiology. However, it has to be recognised--both by researchers and those who would criticise their efforts--that models, by their very nature, are imperfect simplifications of the world they are trying to explain. One clear responsibility of any researcher is to think carefully about the domain of validity of the models that they use [PDF], to define their limitations, and to communicate this in an unambiguous and honest way, along with any findings from the research.

Reflecting on the trial of the seismologists, the former president of Italy's National Institute of Geophysics and Volcanology concludes that "scientists have to shut up". On the contrary, the lesson for scientists from this tragedy and the subsequent trial is to be more proactive in engaging with the public.

A good starting point might be the establishment of a voluntary code of ethics for researchers. This would include, for example, a commitment to publish annually a list of all sources of funding for one's research. Furthermore, the code might also contain a commitment to make public not only our research findings but also the data and methodology used (including relevant context, limitations and assumptions). Signing up to this code could be a prerequisite for any government advisers, and could similarly become a useful tool for the media in screening 'expert' commentators.


More generally, this code would be based on three fundamental guiding principles: honesty, transparency and humility. Going back to Hall's Nature article, he quotes a man who lost his wife and daughter in the earthquake, lamenting the fact that "the science, on this occasion, was dramatically superficial, and it betrayed the culture of prudence and good sense that our parents taught us on the basis of experience and of the wisdom of the previous generations." Perhaps the greatest lesson from this tragedy is the need for a greater degree of humility when it comes to the predictive powers of even the most sophisticated scientific models.



Saturday, August 25, 2012

Dan on crime - a lesson in the abuse of statistics

Dan O'Brien, writing in the Irish Times on Friday, claims that poverty and inequality are "not key reasons for law breaking" and that "recessions have had no discernible effect [on crime rates]". I call bullshit, and here's why. (Written as a direct reply to Dan's article, this is an edited version of the comment I left on the Irish Times site.)

************

"Aw, people can come up with statistics to prove anything Kent. Forty percent of all people know that."
            - Homer J. Simpson

************

Wow, this is a breathtakingly ill-informed piece of lazy journalism, and an outrageous abuse of statistics!

The commenters on the site have already pointed out some of the flaws in your argument, but there are other ways in which this is simply wrong.

For anyone interested in a serious discussion of incarceration, I would highly recommend David Cole's article from the New York Review of Books from a few years back (and which I previously blogged about here).

According to Cole, “most of those imprisoned are poor and uneducated, disproportionately drawn from the margins of society” (referring to the US prison population).

The US has by far the highest incarceration rate in the world. However, somewhat inconveniently for your argument, Cole also points out that up to 1975 the US incarceration rate had been steady at about 100 per 100,000. Since then, the rate has ballooned to 700 per 100,000. If putting the crooks behind bars is really what prevents crime, it seems strange that such a massive increase in the incarceration rate apparently had no preventative effect on the 'crime waves' of the 1980s, to which you also make reference.

Incidentally, it is also slightly inconvenient for your argument that Russia, a country that you refer to as having "a very high murder rate", also has the second highest incarceration rate in the world. Huh.

But of course all of these superficial correlations are meaningless anyway (as you point out yourself!). What you are doing is taking two trends that happen to be moving in the same direction (or in some cases opposite directions), and assigning causation, in blatant disregard of your own caveat about correlation not necessarily implying causation!

Your country comparisons are also spurious. You simply can't compare crime rates and income levels across countries without at least attempting to control for some other relevant factors. Any applied economist worth their salt would know this. Two such relevant factors, which you mention in your article, are the rate of drug use (or perhaps more importantly narcotics production) and demographics. Controlling for these might lead to a very different picture of the relationship between income and crime (or it may not; the point is that we simply don't know, based on the evidence you present). In any case, it seems likely that relative poverty and relative deprivation (i.e. within countries) would be more important drivers of crime than aggregate national income levels.
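To make the point concrete, here is a minimal sketch of what "controlling for other relevant factors" means in practice. The data file and variable names are hypothetical, invented for illustration; the contrast that matters is between the raw cross-country correlation and the conditional one.

```python
# A minimal sketch of a conditional cross-country comparison. The file and
# column names are hypothetical placeholders, not a real dataset.
import pandas as pd
import statsmodels.formula.api as smf

countries = pd.read_csv("country_crime_panel.csv")  # hypothetical dataset

# Naive comparison: crime rate against income alone.
naive = smf.ols("crime_rate ~ gdp_per_capita", data=countries).fit()

# Conditional comparison: the same relationship after controlling for drug use
# and demographics (two factors the article itself flags as relevant).
controlled = smf.ols(
    "crime_rate ~ gdp_per_capita + drug_use_rate + share_young_males",
    data=countries,
).fit()

print(naive.params["gdp_per_capita"], controlled.params["gdp_per_capita"])
```

If the income coefficient changes materially between the two specifications, the raw comparison was confounded; either way, the conditional estimate is the one that speaks to the claim being made.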

It is astonishing that you would make such sweeping assertions about what does or does not cause crime on the basis of so little evidence, and that the Irish Times would publish this piece seemingly without having done even the most basic fact-checking. (On that point, it is worth referring to Paul Krugman's recent article in which he outlines the fact-checking process that each of his op-ed pieces goes through before being published in the New York Times.)

You have done a disservice to economics and statistics with this article – as well as showing an almost total disregard for the other social sciences, which have produced voluminous literatures on the socio-economic causes of crime. I sincerely hope that the Irish Times will give someone with genuine expertise in this area the opportunity to write a response to this article.

Friday, August 24, 2012

The GAA, Irish culture and playing by the rules

Some thoughts in response to Sean Moran's excellent article on the (non-)enforcement of rules in the GAA, and in Irish society more generally.


(I posted this as a comment at the end of the article also)

The article raises a very important issue about Irish culture with regard to rules and rule-breakers. There is also an interesting parallel here with political and economic affairs: the nearer you get to the 'top' - in whatever context - the less stringently the rules of the game seem to be applied.

In relation to the hurling example referred to in Sean's article, I was also disappointed with the reaction of the TV pundits on Sunday (making excuses for the violence on display and refusing to countenance the idea that this shouldn't be a part of the game). 

I fully agree with the idea that hurling is - and should remain - a physically intense contact sport. That intensity - in terms of physicality, skill and speed - is part of what makes it such an attractive sport both to play and to watch. However, it is the marriage of that intensity with what is generally an honest and sporting atmosphere amongst players and spectators that sets the GAA, and hurling in particular, apart from other sports.

I thought the GAA had begun to grow out of its adolescent need to appear "manly" at all costs. Practices that were common in the past - mocking players for wearing white boots, for example, or even for wearing a helmet - seem largely to have disappeared. But sadly, experienced analysts - all true 'hurling men' and supposed experts on the game - cannot seem to make what should be a fairly simple distinction between physical intensity that is fair and sporting - shoulder-to-shoulder challenges, pulling on the ball with both hands on the hurl, etc. - and some of the cynical and ugly stuff we saw on Sunday (and in other games). The wild strokes on Michael Rice and TJ Reid were just the most egregious examples of this.

Striking your opponent with the hurl should always result in a sending off. As should interfering with another player's helmet. There is also what appears to be a fairly common practice of 'butting' your opponent with the end of the hurl - usually into the ribs or stomach - which has become the standard greeting onto the field for a newly arrived sub. This is another form of striking with the hurl which should result in a straight red card and yet seems to pass almost without comment, even when it is picked up by the cameras. 

Between the reappearance of this 'all part of the game' attitude to indiscipline and the public support from some senior GAA figures for Sean Quinn, it has been a regressive summer in terms of GAA culture.


Update: On a brighter note, this inspirational and very funny speech from Cork inter-county hurler Donal Og Cusack at the Foyle Pride Festival represents a massive and welcome step forward for the GAA and Irish culture: http://www.facebook.com/foylepride/posts/275794492524567

Wednesday, August 22, 2012

Scientific theory is always up for grabs

Laurence Kotlikoff (an economist at Boston University) has written an important op-ed piece on Bloomberg about the politicization of the economics profession: "Economists risk labeling as political hacks". While I agree with the general thrust of the argument, and much of the specifics, I was troubled by the line "Economic theory isn't up for grabs". Below is my response, left as a comment after the original article.

*****

This is an excellent article and you raise some very important points that every economist should be concerned about.

However, in the last section of the piece (titled "Consumption Spree") I think you take the argument a step too far. In particular, I have a problem with the line "Economic theory isn't up for grabs. Economic facts aren't a matter of choice." Here you are doing a disservice to economics by exaggerating its claims to scientific impartiality. No theory exists in a vacuum, and empirical "facts" must be interpreted in order for them to have any meaning (this is true even for the "hard" sciences, but especially so for a social science such as economics).

The preceding discussion on savings rates provides a perfect illustration of this. You take an existing theory (life cycle savings model) and use it to interpret some empirical facts (savings rates, tax incentives) resulting in an explanation of America's low savings rates. This is all perfectly valid. But it involves the selection of a model - based on a particular world view - and the interpretation of the empirical evidence through the prism of that model.

Your argument sounds convincing - and I have no doubt this is at least part of the explanation for low savings rates. But the reader - and certainly other economists - should be free to agree or disagree with your particular interpretation, and to offer alternatives. Indeed alternative explanations of low savings rates have been offered by people who start with a different model or world view and make a different interpretation of the available evidence.

This is how good science should work. It is always up for grabs.

*****

Monday, July 23, 2012

Media disasters?

What influence does popular media coverage have on the allocation of humanitarian aid following a natural disaster?

In his excellent short films (see here and here) Arno Waizenegger documents the responses of both the international aid community and the international media to the tsunami that devastated the Indonesian province of Aceh in December 2004. Donors pledged so much money (about US$8 billion) to the region that one aid agency - Medecins sans Frontieres - took the unprecedented step of announcing that it would not be accepting any further donations for that cause. The tsunami also became a major international "news event". Was the massive aid response directly attributable to the intense media coverage of the event? Waizenegger points to another 'silent disaster' - a civil conflict that claimed 15,000 lives and had been ongoing in the same region for 29 years prior to the tsunami - which received little international media attention (in part because the government had banned foreign journalists from entering the areas affected by the conflict) and little humanitarian assistance for its victims.

Of course, the 2004 tsunami was of such a scale - around 170,000 were killed in Aceh alone - that it was almost certain to receive international attention both from the media and from aid organizations. The question is whether relatively marginal disasters are more likely to receive humanitarian aid if given media attention. In their elegant paper on this topic "News droughts, news floods and U.S. disaster relief", Thomas Eisensee and David Stromberg find evidence that U.S. disaster relief depends on whether or not a disaster occurs during periods when there are other newsworthy events - such as the Olympic Games - that effectively crowd out media coverage of the disaster.

This is the topic of my latest research. I use newspaper archives to quantify the amount of media attention given to a particular disaster event. Based on data gathered from the Washington Post archive, for disasters occurring in developing countries between 1995 and 2010 (a sample of around 3,400 events), there appear to be some distinct patterns of media coverage across regions and disaster types. Earthquakes and storms appear to generate the most media coverage on average (with about 0.6 news articles per person killed), floods generate considerably fewer stories (around 0.14 articles per person killed), while news coverage for drought episodes is an order of magnitude lower again (at just 0.03 articles per person killed). Part of the reason for the relative lack of attention on droughts may lie in the nature of such events: they are slowly evolving crises rather than dramatic lightning strikes. This also makes it difficult to define start and end dates for a drought event. As a consequence, the search window (from two days prior to the event onset, up to 40 days after the event) is unlikely to cover the entire drought 'event' - given that some drought episodes can last months or even years - and therefore may not capture the total amount of media attention that the event receives.
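For what it's worth, the descriptive step is straightforward to sketch. The file and column names below are placeholders rather than my actual dataset: imagine one row per disaster event, with the article count from the archive search window and the reported death toll.

```python
# Sketch of the descriptive aggregation described above. "disaster_media.csv"
# and its column names are hypothetical placeholders.
import pandas as pd

disasters = pd.read_csv("disaster_media.csv")  # hypothetical file
disasters = disasters[disasters["deaths"] > 0]  # avoid dividing by zero

disasters["articles_per_death"] = disasters["articles"] / disasters["deaths"]

# Average coverage intensity by disaster type (and, analogously, by region).
print(disasters.groupby("disaster_type")["articles_per_death"].mean()
               .sort_values(ascending=False))
```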

Turning to media coverage by region, again we find a distinctive pattern in the data. Disasters that occur in Latin America and the Caribbean generate the most news coverage (with an average of around 0.6 news articles per person killed), those occurring in either the East Asia Pacific or Europe and Central Asia regions generate slightly fewer stories (on average around 0.4 per person killed), while South Asia (with just 0.16 articles per person killed) and the Middle East North Africa and Sub-Saharan Africa regions (each with just 0.11 articles per person killed) are relatively neglected. Of course these figures are simple averages and take no account of the relative distribution of disaster types across regions. The relative neglect of Africa could therefore be a reflection of the relative lack of media attention on drought events, for example. A more structured analysis may uncover whether or not these patterns represent a genuine tendency for some disaster types and regions to be relatively neglected, and whether such media biases have any influence on the political decision to grant humanitarian aid relief to a disaster-affected region.
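One possible shape for that more structured analysis, continuing with the hypothetical file from the sketch above, would be to regress article counts on disaster type and region indicators while controlling for severity, so that the regional comparison is no longer confounded by the mix of disaster types in each region.

```python
# A possible (purely illustrative) version of the "more structured analysis":
# regress log article counts on disaster-type and region dummies, controlling
# for the death toll.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

disasters = pd.read_csv("disaster_media.csv")  # hypothetical file
disasters = disasters[disasters["deaths"] > 0]

model = smf.ols(
    "np.log1p(articles) ~ C(disaster_type) + C(region) + np.log(deaths)",
    data=disasters,
).fit()
print(model.summary())
```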

The research will also extend to archive searches in major news publications of other countries (starting for linguistic convenience with Anglophone countries - US, Canada, Australia, UK and perhaps Ireland - reflecting the home bias of the author) to investigate whether the media's influence on relief decisions differs across countries. 

This is early stage research and comments or suggestions are most welcome.

Sunday, March 4, 2012

The euro crisis and the fiscal compact treaty

Frank Barry and David McWilliams both had insightful pieces on the euro crisis and the fiscal compact treaty in today's Sunday Business Post (SBP content appears to be behind a paywall). Both make the point that the treaty won't solve the euro's problems: fiscal deficits were not the cause of the eurozone crisis; rather, it was structural imbalances within the eurozone. The logic of monetary union requires a fiscal transfer union to help absorb the inevitable shocks that will occur in any large, diverse economy. Are Europeans prepared for the creation of a federal European (eurozone) state and the concomitant transfer of national sovereignty to a supra-national government?

Monday, January 2, 2012