Science vs Belief: Nate Silver and the US Presidential Election

        i.            Nate Silver and the ‘two cultures’

Despite finding myself enraged on a daily basis, I just couldn’t drag myself away from the US election coverage. The naked rancour that now dominates US politics made for compelling, if infuriating, viewing; toxic spin, not policy, seemed to be the weapon of choice. Though it came as no surprise – the emergence of, for example, the ‘Birthers’ had forewarned us that certain people weren’t going to let truth get in the way of a good story – the partisanship of the election was still shocking to behold. The opposing worldviews seem so far apart you’d be forgiven for thinking these were not two parties but two cultures staring suspiciously at each other across a chasm of mutual incomprehension (witness the post-election petitions by disgruntled Republicans asking that their states be allowed to secede from the union; as I write the Texan petition has gathered 111,000 signatories).

This culture clash, however, is not the subject of this piece (though it’s certainly part of the story); this year’s election – by way of statistician Nate Silver – has highlighted another culture clash, that between those who embrace science, its methodology and its lessons, and those who don’t. For all that the bulk of the creationists and climate change deniers fall into the Republican camp, this is far from being simply a matter of left versus right. The left has its own share of science sceptics – from credulous New Agers through MMR refuseniks to the anti-science wing of the Green movement – but more generally, ignorance of science, whether wilful or not, does not neatly follow party lines.

The backlash against Nate Silver and his scientific approach to political prognostication is not simply party political (the fact that a lot more Republicans seem to be angry with him than Democrats is largely a consequence of his conclusions rather than his methodology); it exposes the existence of a journalistic archetype who is not merely ignorant of science but actively threatened by it. C P Snow’s “Two Cultures” divide is alive and well over 50 years on; there are still those who think themselves superior because they had an education in the humanities, who feel not merely comfortable with, but proud of, their ignorance of the basics of science.

At heart this piece is an examination of the Nate Silver backlash and what it tells us about the world we live in and the imperfect humans who inhabit it. Along the way we’ll be discussing, among other things, why the word momentum should be left to the physicists, why, despite the fact that (well OK, because of the fact that) Nate Silver predicted the winner of all 50 states correctly, his model is flawed, and why Silver’s vindication may turn out to be a Pyrrhic victory for lovers of science, but above all we’ll be examining why an ignorance of probability and of the scientific method is most definitely not something to be proud of.

We’ll start, however, by explaining who Nate Silver is. When I first started writing this piece a week before the presidential election Nate Silver was all but unheard of outside of the US. Subsequently his profile has soared (at least in that section of the internet I inhabit) and I feel a bit like my favourite cult band have been catapulted into the mainstream; I know I should feel glad for him but I preferred it when he was my little secret.

For those of you who don’t know who he is, he’s been writing a blog – FiveThirtyEight – offering a statistical analysis of all the latest political polls since early 2008 (before that he was a prominent baseball statistician/computer modeller). On the blog, alongside his incisive articles unpicking the latest figures, there are constantly updated forecasts of the election results that are keenly followed by journalists and politicians as well as other interested parties such as gamblers. His spectacular track record in correctly predicting election results – the 2008 presidential election in particular was a triumph for Silver – earned him a stellar reputation such that in 2010 the New York Times paid Silver to relaunch the blog on NYTimes.com.

Unsurprisingly FiveThirtyEight was closely followed in the run-up to the 2012 election and with the increased exposure came the inevitable backlash. While some objected to Silver’s conclusions, others objected to his methodology (many objected to both). These can be crudely characterised as follows: on the one hand we have the Republican ‘we don’t like the message so we’re going to shoot the messenger’ backlash, on the other the dyed-in-the-wool politico ‘how can someone who’s never even been on the frontline possibly think they know more than me’ backlash.

      ii.            If you don’t like the facts, attack, attack, attack.

Let’s get the Republicans out of the way first. The Republican narrative after Obama’s flat performance in the first debate was that Romney now had momentum on his side, that the election was now up for grabs. The national polls certainly seemed to suggest the popular vote would be close, so when Nate Silver’s numbers failed to go along with this narrative he suddenly found himself the focus of Republican ire (though Silver did have Obama’s chances of re-election dropping significantly after the first debate, Obama still remained a good favourite, essentially because, whatever the national polls might say, the polls in the so-called battleground states were firmly in his favour).

Their efforts to undermine Silver and his conclusions took numerous forms. At the gentler end of the spectrum were attempts to bring him down a peg or two, to give an example: “more than a few political pundits and reporters, including some of his own colleagues, believe Silver is highly overrated”. Then there were the accusations of bias such as the wonderfully ironic “When you weight a poll based on what you think of the pollster and the results and not based on what is actually inside the poll (party sampling, changes in favorability, job approval, etc), it can make for forecasts that mirror what you hope will happen rather than what’s most likely to happen.” And last but not least the personal attacks such as the extraordinary “Nate Silver is a man of very small stature, a thin and effeminate man with a soft-sounding voice” which I think we can all agree really nails what’s wrong with Silver’s number-crunching.

On the face of it this all seems something of an overreaction to a few numbers spat out by a computer, but for several reasons the Republicans felt it was important to discredit Silver and his model. For a start, people are more likely to vote in close elections than in predictable ones (which may explain why the Democrats made no attempt to push forward the ‘Obama’s a big favourite’ story suggested by Silver’s numbers). More importantly, there was the simple human desire to make bad news go away. Large swathes of the Republican party had already demonstrated their susceptibility to this desire in their denial of global warming, but you’d think they’d baulk at allowing it to determine the way they ran an election campaign. Well, if the faces at Romney’s ‘victory’ party on election night and the subsequent rumours coming out of the Romney camp are anything to go by, it seems, incredibly, that they genuinely had convinced themselves they were going to win, that they based their campaign around the belief that their own analysis and their own numbers were right and everyone else, including Nate Silver, was wrong.

Part of the problem is the atmosphere of paranoia fostered by the Tea Party, the Birthers and so forth. If you imagine everything has a political motive it’s very easy to perceive Silver, a self-proclaimed Obama supporter (one, moreover, who had foolishly failed to declare that he had access to the Obama campaign’s private polls in 2008), as biased and his methodology – the fact that his model assigned weights to polls, rather than just taking a crude average – provided an obvious mechanism by which he could skew the results in favour of his bias (critics tended to conveniently ignore the fact that the weighting was predetermined according to sample size and reliability, not foisted onto the poll post-hoc depending on its results).

    iii.            The myth of momentum.

Arguably the most important rationale behind the decision to pick on Silver, however, was the belief that momentum is important in elections and thus, by undermining the idea that Romney had momentum, Silver’s numbers were harming Romney’s chances. As one Republican columnist put it, “some could argue that [Silver’s] predictions are even a sort of self-fulfilling prophesy (good numbers beget good fundraising and good press coverage, which, in turn, begets good poll numbers)”. This is quite a compliment to Silver – I accept he can sway the betting markets but I’m far from convinced he can influence the actual election.

Conventional wisdom has it that momentum is important: we’re predisposed to believe trends will continue. If a basketball player has scored three baskets in a row we think he’s more likely to score the next one; the relative performances of the US and Chinese economies in recent years suggest to us that China will become the world’s biggest economy by 2020; if Romney’s poll rating is rising we expect it to keep rising. Unfortunately conventional wisdom doesn’t know what it’s talking about. It has been consistently shown that trends such as these are descriptive not predictive, that there is little correlation, indeed often a negative correlation, between performing better than normal and continuing to perform better than normal, and that long-term performance is a vastly better predictor of the future than short-term performance.

For many people this idea is about as counter-intuitive as it comes; few findings of behavioural economics have been greeted with greater scepticism, hostility even, than what is known as the ‘hot hand fallacy’. Momentum just feels real, and unfortunately the occasions we should be paying most attention to the numbers, when they contradict our intuition, are also the occasions we’re most likely to just dismiss them as nonsense.

Arguably there is more of a case to be made for momentum being important in politics. Politics, after all, is about beliefs, and if people believe in momentum that belief could become self-fulfilling, not least because people like to be on the winning side. The classic example of this is the battle for the Republican and Democratic nominations – here the perception of momentum is important firstly because it will attract funds (vitally important in these battles) and secondly because of tactical voting (rather than picking the candidate you most want to win you might choose an acceptable candidate you perceive to have a more realistic chance). These are not really factors in the presidential race, however, and the evidence is (and yes, yet again, this is numbers talking, not conventional wisdom) that momentum doesn’t tend to be a factor in two-party elections. Silver’s own analysis of Senate races shows that, if anything, pride tends to come before a fall.

It seems to me the important concept journalists should be talking about is regression to the mean, not momentum. After the first presidential debate Romney, unsurprisingly, surged in the polls. Given this fact, which would you think more likely as the memory of Obama’s listless performance fades: that those who had switched to Romney after the debate would think better of it and switch back, or that voters who hadn’t yet switched would now decide to do so? To me it’s clearly the former – if you don’t switch quickly why would you switch at all? – but conventional wisdom apparently sees it differently. There is a good reason why you might decide to switch later – if Romney had continued to outperform Obama – and this, of course, is what we, with our ‘what is happening now is the best predictor of the future’ psychology, are liable to expect. But again we’d be far better off applying regression to the mean than momentum – which is more likely, after all, that an uncharacteristically bad performance will be repeated or that the next performance will tend towards the historical average? Unless you’re talking about physics, momentum is a word best avoided; its widespread use, particularly by sports commentators and political analysts, rests on an assumption – that being ‘on form’ is predictive rather than merely descriptive – that simply isn’t warranted by the evidence (the number of times you hear the phrase ‘the momentum has changed’ is a bit of a clue).
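For the statistically inclined, here is a minimal sketch of why a streak needn’t tell you anything about what happens next. It is a toy, not anything Silver would use: it simply assumes each attempt is independent with a fixed underlying success rate (the rate and trial count are made up), and checks whether three successes in a row make a fourth any more likely:

```python
import random

random.seed(1)

TRUE_RATE = 0.45      # a stable long-run success rate (invented for illustration)
TRIALS = 100_000

# Simulate independent attempts; look only at the attempt that follows a 'hot' streak.
after_streak = []
for _ in range(TRIALS):
    attempts = [random.random() < TRUE_RATE for _ in range(4)]
    if all(attempts[:3]):                      # three successes in a row
        after_streak.append(attempts[3])

print(f"underlying success rate:       {TRUE_RATE:.3f}")
print(f"success rate after the streak: {sum(after_streak) / len(after_streak):.3f}")
# Both come out at roughly 0.45: in this world the streak is descriptive, not predictive.
```

The real hot-hand debate is about whether actual players (or candidates) deviate from that independence assumption; the point here is only that runs of good results are exactly what you would expect to see even when nothing has changed.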

     iv.            The revenge of the non-nerds.

Reading some of the Republican attacks on Silver brings to mind a quotation I first heard in the context of the Credit Crunch – “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” (Upton Sinclair – I, Candidate for Governor: And How I Got Licked (1935)) – a quotation even more apposite when we consider the second strand of the anti-Silver backlash.

It was not just Republicans who felt threatened by Nate Silver; perhaps even more vocal in their misgivings were the journalistic establishment (albeit particularly those of a Republican bent). For a certain type of political journalist it was not so much what Silver said that annoyed them, it was how he said it, and in particular how he arrived at his conclusions.

Silver wasn’t, of course, the first statistician to attempt to analyse elections but he was the first to get enough attention to be seen as a real threat to narrative-driven journalism. It was in the wake of the Republican attack that the floodgates opened but rumblings of discontent had been going on for years. The bile, for example, from Geoffrey Dunn (a Democrat) in the Huffington Post a couple of months ago is a wonder to behold. This is good old-fashioned nerd-baiting: “Here’s a guy who’s never been centrally involved in a national election, whose political acumen comes from a calculator, and, who, I am willing to bet (and I haven’t seen the numbers) has never organized a precinct in his life, much less walked one, pontificating about the dynamics in the electoral processes as if he actually understood them”. Elsewhere Dunn complains about Silver’s lack of “moral outrage”, “Does this guy have no soul?” he asks; emotion, as we all know, leads to far more rational conclusions than cold, inhuman number-crunching. Then the coup de grace: “It’s the intangibles” – the anecdotalist’s trump card, numbers can never capture the essence of what is really going on – “He doesn’t understand that when there’s a runner on second and two out, you want Derek Jeter at the plate–I don’t care what the numbers tell you–simply because it’s Derek Jeter”.

This sort of ‘instinct trumps science, belief trumps reason, intangibles trump numbers’ nonsense can be found time and again in the many columns attacking Silver and his methods. The pundits seem outraged at the suggestion that a geek sat at a computer could possibly know better than an experienced, well-connected political journalist. The rather obvious idea that having strong ideas about politics and personal insights into the characters involved might actually be harmful to your assessment of the situation never crosses their minds. Here’s TV pundit, and former Republican Congressman, Joe Scarborough: “Nate Silver says this is a 73.6 percent chance that the president is going to win? Nobody in that campaign thinks they have a 73 percent chance — they think they have a 50.1 percent chance of winning. And you talk to the Romney people, it’s the same thing”. That Scarborough believes a load of partisan political insiders have a better idea about the election than the actual people who are going to vote (as measured by the polls Silver’s model relies on) shouldn’t surprise us; in some ways his very existence relies on this – if Silver can do a better job using publicly available information, where does this leave pundits like him who play on their insider status?

Anyone who has read Michael Lewis’s Moneyball, or seen the film, may find this all very familiar (a particularly apt resemblance given Nate Silver originally made his name as a baseball statistician, in particular through his PECOTA computer program designed to predict future player performance). Moneyball is the true story of how, in the face of fierce resistance from the baseball establishment, the statistical approach transformed baseball. Ultimately the ‘this is how we’ve always done it’ brigade and the ‘how can someone who’s never even played big league baseball possibly understand it’ crew couldn’t withstand the assault on ‘common sense’ for the simple reason that the stats boys were getting far better results. It turned out sticking your fingers in your ears and denying reality didn’t make the bad news go away.

       v.            ‘Expert Political Judgment’

However before we hang out the bunting in anticipation of the inevitable downfall of uninformed political punditry we should bear in mind journalism is not baseball. Baseball, ultimately, is about winning and once it became clear that being under the sway of anecdotes rather than data had seriously adverse consequences on the field of play the triumph of ‘Moneyball’ became inevitable. Unfortunately it isn’t immediately obvious that in the world of political journalism making instinctive rather than scientific predictions has similarly adverse consequences; certainly choosing instinct over calculation hugely increases the chance of you getting it wrong but this doesn’t actually seem to matter much.

As Philip Tetlock detailed in his seminal 2005 work Expert Political Judgment, so-called ‘political experts’ tend to be appallingly bad predictors, consistently outperformed by the crudest of algorithms (let alone the sophisticated algorithms used by the likes of Silver). What’s more, Tetlock found that the pundits who were the most certain tended to be the worst predictors. This sounds like it should be good news – that the know-it-all windbags will be exposed for what they are and gradually weeded out, leaving just thoughtful, unbiased analysts – but unfortunately the world doesn’t work like that. Pundits just aren’t held accountable for their predictions: the public seem to prefer confident pronouncements to nuanced assessments, soundbites to careful analysis, and take very little notice when a prediction turns out to be wrong. Know-it-all windbagism is incentivised – it pays to stand out, to be bullish and controversial, especially given you can trumpet your predictions if they pan out but you’re unlikely to be pulled up on them if they don’t.

The unfortunate truth is journalism is not about being right, it is about being compelling; the media exist to sell papers or to attract listeners or viewers, being genuinely informative comes a distant second. It is in the media’s interest to make every little thing look game changing, to hype up every little gaffe, to bang on about momentum, to impose a simple narrative on a complex reality and to make every election look as though it’s on a knife-edge. While Nate Silver takes a weighted average of every poll to get closer to reality the media will generally build their stories around one of the outlying (and thus probably least reliable) polls. We get the media we deserve – truth is all very nice but drama runs the show.

Given all this it seems sensible to ask, yet again, quite why so many political journalists have taken it upon themselves to belittle Silver and his methods. Silver is not really that much of a threat to their jobs; there is absolutely no reason why there couldn’t be room for both types of ‘analysis’ in the media. It seems clear there must be more to it than simply Luddites smashing the machinery. The obvious, if puzzling, explanation is that the pundits in question really did believe they were better at predicting elections than Silver. Why draw attention to Silver just before the election; why choose to fight the battle on the one issue where the stats guy had an obvious advantage over you, namely accuracy; why make yourself accountable for your predictions, unless you genuinely believe what you’re saying?

     vi.            The problem with probabilities

At heart, I fear, statistics and probability are just utterly alien concepts to many pundits – to them evidence and opinion are intimately linked, science has absolutely no role to play in political journalism. They misunderstand what Silver is doing on just about every level. In the words of journalist Ezra Klein “Lots of pundits don’t like Nate Silver because he makes them feel innumerate. Then they criticize him and prove it.”

Let’s go back to Geoffrey Dunn and a Huffington Post article from election day. Referring to a Silver comment from a year earlier he writes “Let me say that “predicting” an underdog without knowing who the opponent is going to be is an absolutely absurd move in politics”. Later he writes “six of the biggest variables [one of which is Hurricane Sandy to give you an idea of where Dunn’s coming from] in this year’s campaign, Silver never saw coming and he never factored for them in any of his predictions”. Now it could be that Dunn is too stupid to understand what Silver is all about, it could be that he is deliberately misrepresenting what Silver is saying in order to attack him, but I suspect he’s just too arrogant to even try to understand.  To anyone who knows anything about probability these statements are beyond ridiculous. What he seems to be saying is that if we don’t know everything that will happen we shouldn’t even bother trying to make informed assessments about the probabilities of future events.

Looking at what Silver actually wrote in the 2011 column Dunn references we read the following: “On Aug.12… Obama’s stock on Intrade, a popular political betting market, dipped below 50 percent for the first time. It has hovered just below the 50 percent threshold, usually at about 48 percent, ever since. Obama has gone from a modest favorite to win re-election to, probably, a slight underdog. Let’s not oversell this. A couple of months of solid jobs reports, or the selection of a poor Republican opponent, would suffice to make him the favorite again.” This is followed by several thousand more words explaining how this conclusion was reached (as Dunn rightly observes there is no mention of Hurricane Sandy). Despite claiming to have read this column Dunn is apparently still unable to comprehend that to get a workable probability of Obama’s election chances all you have to do is combine judgements as to how likely each Republican candidate is to win the nomination with assessments of Obama’s chances should that candidate win. Presumably Dunn believes, or so his remark on the events “Silver never saw coming” would suggest, that we’re literally powerless to know anything about the future – how can you decide who is favourite in an NFL match if you don’t know if either side will fumble the ball; how can you offer life insurance if you don’t know if the applicant will be knocked over by a car; how can you call California for Obama when you don’t know if he’s about to be exposed as a shape-shifting lizard?

Dunn is not alone in this; probability blindness appears to be extremely prevalent. Here is best-selling author and NYT journalist David Brooks – “If you tell me you think you can quantify an event that is about to happen that you don’t expect, like the 47 percent comment or a debate performance, I think you think you are a wizard. That’s not possible.” He has a point: it certainly isn’t possible to quantify the unknown unknowns; the absurdity of his statement is in the belief that anyone is saying they can. What probability does, and this is way, way too subtle for the likes of Brooks and Dunn, is acknowledge the uncertainty and factor it in. Thus if, two months before the election, Silver’s model says Obama would have an 80% chance of re-election were the election held immediately, Silver assumes Obama’s actual chances are lower than that because there are two months for it all to go wrong; as time goes on and the potential for unforeseeable events lessens, the two figures converge. If we want to talk ‘wizards’ let’s think about the pundits with their psychic link to the truth.
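To make that convergence concrete, here is a small, purely illustrative Python sketch of the idea – emphatically not Silver’s actual model, and the margin, polling error and drift numbers are all invented. Treat the final margin as the current polling margin plus normally distributed noise whose variance grows with the number of days remaining; the win probability is then pulled towards 50% the further out you are:

```python
from math import erf, sqrt

def win_probability(margin, poll_error, drift_per_day, days_left):
    """P(final margin > 0), assuming normal polling error plus random drift
    whose variance grows with the time remaining. Illustrative only."""
    sigma = sqrt(poll_error ** 2 + drift_per_day ** 2 * days_left)
    return 0.5 * (1 + erf(margin / (sigma * sqrt(2))))

# An invented 2.5-point lead with 2 points of polling error:
for days in (60, 30, 7, 0):
    p = win_probability(margin=2.5, poll_error=2.0, drift_per_day=0.5, days_left=days)
    print(f"{days:2d} days out: {p:.1%}")
# The 'now-cast' (0 days out) is the most confident figure; the forecast
# converges towards it as the scope for unforeseeable events shrinks.
```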

Part of the problem is that pundits, and people in general, don’t like uncertainty – this is why pundits are so bullish and why their audience responds to that – they don’t want to be told what might happen, they want to be told what will happen. They can’t understand, for example, that a probability is not a prediction, that we can’t actually know what’s going to happen but we can make a pretty good stab at quantifying the possibilities.

It is ironic then that another strand of criticism of Silver is of his certainty – how can he have the arrogance to come up with such a precise figure? Jonah Goldberg writing in the Chicago Tribune the day after the election in a piece charmingly called “Nate Silver’s numbers racket” produced the following gem – “On any given day, Silver might have announced that — given the new polling data — “the model” was now finding that the president had an 86.3 percent chance of winning. Not 86.4 percent, you fools. Not 86.1 percent, you Philistines. But 86.3 percent, you lovers of reason.” It’s hard to know what Goldberg wants here. Would he rather Silver had written ‘about 85%’, or perhaps just ‘Obama is a good favourite’? Would he rather, in other words, have had less information in more words? What he would probably rather is that Silver would just go away because he’s speaking a different language. The likes of Goldberg just cannot grasp that being specific is not the same as being certain, that the specificity is a result of the process by which the analysis was done – a computer repeatedly simulates the election, each time with minutely different variables based on the possible vote shares in each state, then calculates the percentage of simulations that result in an Obama win – to smudge that specificity would be patronising in the extreme. It is uncertainty, not certainty, that emerges from the statistical model – Silver is quantifying his uncertainty not using numbers to make it go away – a fact both threatening and bewildering to a punditry who believe their own impressionistic analysis reveals an untrammelled truth.
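For anyone curious what ‘repeatedly simulates the election’ might look like in practice, here is a stripped-down Python sketch of the general Monte Carlo approach. To be clear, the vote shares, uncertainties and state groupings below are invented for illustration, and a real model (Silver’s included) is far more careful about how errors are shared between states:

```python
import random

random.seed(2012)

# (projected two-party vote share for Obama, state-level uncertainty, electoral votes)
# All numbers are invented for illustration.
states = {
    "Ohio":           (51.5, 2.5, 18),
    "Florida":        (50.0, 2.5, 29),
    "Virginia":       (51.0, 2.5, 13),
    "Colorado":       (51.5, 2.5, 9),
    "North Carolina": (48.5, 2.5, 15),
    "Other swing":    (52.0, 2.5, 26),
    "Safe Obama":     (58.0, 1.0, 237),
    "Safe Romney":    (42.0, 1.0, 191),
}
TO_WIN = 270
SIMULATIONS = 20_000

wins = 0
for _ in range(SIMULATIONS):
    national_error = random.gauss(0, 1.5)        # error shared by every state
    electoral_votes = 0
    for share, sigma, ev in states.values():
        simulated_share = share + national_error + random.gauss(0, sigma)
        if simulated_share > 50:
            electoral_votes += ev
    if electoral_votes >= TO_WIN:
        wins += 1

print(f"Obama wins {wins / SIMULATIONS:.1%} of simulated elections")
```

The specificity Goldberg objects to drops straight out of the last line: run twenty thousand simulations and you get a percentage to one decimal place whether you like it or not.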

Implicit in Silver’s approach is not the conviction we find among pundits but the realisation that you are, in Silver’s words, “a flawed, imperfect creature who’s struggling to get better”. Where pundits seem to believe that if their account doesn’t conform with reality the fault lies in reality, the statistically based view not only accepts the fallibility of its model, it actually has a plan as to what to do if the model appears defective. In the words of another stats guy, Drew Linzer: “If it turns out there’s a flaw, we can find it, spot it and we can work on addressing it as opposed to people whose commentary is based on some thoughts in their head”; this is the crux of the matter, the adaptability of the scientific method versus the rigidity of self-generated belief.

Generally speaking the pundits seem, in common with much of humanity, to be blind to their own subjectivity while seeing it all too clearly in others. As we have seen the Republicans accused Silver of letting his liberal bias creep into his figures and weighting the polls to get the answer he wanted. Quite why he would want to do this given he, unlike them, is judged almost exclusively on results isn’t clear but fortunately for him it turns out that, as many people have observed, ‘the facts have a liberal bias’ too.

Silver succeeded precisely because he stripped the subjective biases out of the data. No one could claim that polls are perfect, they are subjective snapshots subject to sample bias and containing a large amount of random noise, but this does not mean they can’t produce meaningful results. Those who claim that polls are worthless because they are unrepresentative of true voting intentions are failing to grasp what should be obvious – imperfect and useless are not actually the same thing. Individual polls may be unreliable (though that doesn’t stop the pundits obsessing over them, particularly the even less reliable outliers) but when aggregated the randomness, through the law of large numbers, will tend to iron itself out. Moreover if a poll’s methodology leads to bias it will tend to be consistently (though not always) biased the same way and as such you can adjust the polls based on historical tendencies (which is precisely what Silver is doing in his model). But yet again the proof of the pudding is in the eating – no amount of manipulation by Silver could turn iron into gold, the success of the model itself demonstrates that the raw data being fed into it is fit for purpose. It would seem that the pollsters, in the US at least, know what they are doing.
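A toy version of that aggregation idea, again in Python and again with invented figures, might look like the following: weight each poll by its sample size and a reliability rating, and strip out the pollster’s known historical lean before averaging. Silver’s actual weights and adjustments are more involved, but this is the shape of the thing:

```python
from math import sqrt

# (pollster, Obama share, sample size, reliability rating, historical house lean)
# Every figure here is invented for illustration.
polls = [
    ("Pollster A", 49.0, 1200, 0.9, +1.0),   # historically leans a point towards Obama
    ("Pollster B", 47.0,  800, 0.7, -1.5),   # historically leans towards the Republican
    ("Pollster C", 50.0,  600, 0.8,  0.0),
]

weighted_sum, total_weight = 0.0, 0.0
for name, share, sample, rating, lean in polls:
    adjusted = share - lean              # remove the known house effect
    weight = rating * sqrt(sample)       # bigger samples and better pollsters count for more
    weighted_sum += weight * adjusted
    total_weight += weight

print(f"weighted, house-adjusted average: {weighted_sum / total_weight:.1f}%")
```

Crucially, the weights and adjustments are fixed in advance of seeing a poll’s result, which is exactly the point the ‘he skews the polls’ critics kept missing.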

Perhaps the most revealing demonstration of the widespread failure to understand the difference between probability and prediction is that no-one seems to have noticed that the election results did reveal a flaw in Silver’s model: the fact that Silver ‘called’ 50 out of 50 states correctly is actually an indictment of his model, not an affirmation of it. According to his probabilistic model, across all 50 states the underdogs’ chances add up to about 200% (Florida (49.7%), North Carolina (25.6%), Virginia (20.6%), Colorado (20.3%), Iowa (15.7%) and New Hampshire (15.4%) making up the bulk of the uncertainty); as such Silver should, on average, have got two states wrong. Perhaps the failure of the pundits to point this out might be connected to the fact that this actually suggests Silver’s model is even better than he claims, that he is being too conservative in his confidence levels (and quite how closely his projections of vote share at national and state level matched up with the reality would certainly seem to support this conclusion). Alas I fear a more likely explanation is they just can’t grasp the difference between probability and prediction.
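The arithmetic behind that ‘two states wrong’ claim is worth spelling out, because it is nothing more than adding up the underdogs’ chances. The six figures below are the ones quoted above; the half-point allowance for the other 44 states is my own rough assumption, chosen only to match the ‘about 200%’ total described here:

```python
# Underdog win probabilities in the closest states, as quoted above.
underdog = {
    "Florida": 0.497, "North Carolina": 0.256, "Virginia": 0.206,
    "Colorado": 0.203, "Iowa": 0.157, "New Hampshire": 0.154,
}
OTHER_44_STATES = 0.5   # assumed combined total for every other state

expected_misses = sum(underdog.values()) + OTHER_44_STATES
print(f"expected number of states called wrongly: {expected_misses:.1f}")
# Roughly 2: if the probabilities were well calibrated, a clean sweep of
# all 50 states was itself a mildly lucky outcome.
```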

  vii.            Gambling and accountability

To most people, I imagine, the idea that Silver would actually have been more right if he’d got two states wrong will seem wilfully foolish; to them that would mean he had been right 48 times and wrong twice as opposed to right 50 times. Probability isn’t, to put it mildly, terribly intuitive.

To a gambler like myself, however, this is crucial. Winning at gambling is not about prediction, it is about probability; it is not about who you think is going to win, it’s about what price they are to win. One of the main reasons most people lose at gambling is precisely because they fail to understand that the question to ask is not ‘who do you fancy?’ but ‘who do you think is value?’

As it happens, in this election the favourite and the value were the same thing; Obama was consistently favourite in the betting markets, yet the markets consistently rated his chances lower than Silver’s model suggested they should be. As such several of my Silver-following friends – there are few things more guaranteed to warm a shrewd gambler’s heart than a computer model coming to a different conclusion to the ‘conventional wisdom’ – built up a big position on Obama. Let’s be clear, they weren’t backing Obama because Silver said he would win but because Silver gave him a greater chance of victory than the prices available on the betting markets implied. Had the betting markets made Obama a bigger favourite than Silver did, they would have backed Romney; Silver would have correctly ‘predicted’ the election winner but, on the basis of the information he provided, the correct course of action would have been to back the loser.

Bad gamblers – like pundits – are results orientated: before the event they want to know who will win; after the event they judge whether or not it was a good bet by whether or not it won. Good gamblers – like stats guys – know they can’t predict the future so they don’t even try; they think in terms of probabilities and bet when the market’s judgement of the probabilities (i.e. the odds) is sufficiently different from their own; they understand that a value bet is a value bet regardless of whether it wins.
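If that sounds abstract, the sums are simple enough to sketch. Here is a tiny Python illustration of ‘value’ with made-up numbers: a bet is worth making when your assessment of the probability beats the probability implied by the odds, regardless of whether the selection is the likely winner:

```python
def expected_value(model_prob, decimal_odds, stake=1.0):
    """Expected profit: win (odds - 1) * stake with probability p, lose the stake otherwise."""
    return model_prob * (decimal_odds - 1) * stake - (1 - model_prob) * stake

# Invented numbers: the model gives Obama an 80% chance while the market
# offers decimal odds of 1.50 (an implied chance of about 67%).
print(f"backing Obama at 1.50:  {expected_value(0.80, 1.50):+.2f} per unit staked")

# Flip it round: if the market had made Obama a bigger favourite than the model
# (say odds of 7.00 about Romney against a 20% model chance of a Romney win),
# the value bet would have been the likely loser.
print(f"backing Romney at 7.00: {expected_value(0.20, 7.00):+.2f} per unit staked")
```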

It should come as no surprise that shrewd gamblers follow Silver’s blog, or that the betting markets proved a better guide to the election outcome than the pundits; unlike the pundits, gamblers suffer consequences if they are proved wrong. Accountability is the key: when the money’s down the truth will out. Throughout the run-up to the election Silver was constantly referencing the betting markets (in particular the betting exchanges Intrade and Betfair), and how they made Obama a good favourite too, as ammunition against the backlash – it is a Republican credo, after all, that markets are efficient.

In one of the more surreal episodes in the election Nate Silver attempted to hold Joe Scarborough accountable for the following, extraordinary outburst: “anybody that thinks that this race is anything but a tossup right now is such an ideologue, they should be kept away from typewriters, computers, laptops and microphones for the next 10 days, because they’re jokes”. This was aimed squarely at Nate Silver; in response Silver asked him to put his money where his mouth is, offering a $1000 charity bet (later raised to $2000) on the election outcome. You’d think that if Scarborough was going to make this sort of incendiary statement he’d be prepared to back it up, but apparently not (maybe Silver should have offered him 5/4 – evens on a toss-up is not actually value, after all, just the right price); all Silver got for his troubles was a dressing down from the NYT public editor (which tells you a lot about the strange American attitude to gambling).

Of course the fact that Obama won, and won easily, does not mean Scarborough was wrong – maybe it was a toss-up when he said those words – the problem is the intemperate language, the utter certainty, the complete disregard for the fact that pundits have consistently proved useless at predicting elections whereas statisticians have proved accurate. Scarborough’s outburst is classic pundit-think – if you don’t agree with me you’re a joke is the basic message – and it reminds me of nothing so much as Irish Premier Bertie Ahern’s outburst against the naysayers warning about an Irish housing bubble (shortly before it burst): “Sitting on the sidelines, cribbing and moaning is a lost opportunity. I don’t know how people who engage in that don’t commit suicide”.

 viii.            Will anything actually change?

In the words of legendary economist JK Galbraith, pundits “forecast not because they know but because they are asked. Maybe we should stop asking.” In many ways the problem is not the likes of Scarborough or Dunn, it is the media organisations themselves and, indeed, their audiences. Conceivably, by drawing attention to Silver and the scientific method the pundits have made themselves accountable for their predictions and so hastened their own downfall, but alas I rather doubt it. There will always be those for whom politics is like a sport, for whom the role of political journalists is as cheerleaders not truth-tellers. Besides, I can’t see the public embracing stats anytime soon; the narrative model of political journalism chimes with the human way of looking at the world on too fundamental a level for its obvious predictive failure to seriously dent its appeal. Perhaps some people, maybe even some political journalists, have grasped the reality that the last ten years is a much better guide to the future than the last ten minutes, that momentum is baloney, that you’re far better off nuancing the stats with the ‘intangibles’ than the other way round, but I can’t see a revolution happening anytime soon.

Nevertheless it’s been a good election for science. The Democrats have won – in itself a blow to the climate change deniers, the stem cell research impeders and so forth – but more importantly they won running a campaign rooted in evidence not gut feeling. Their amazing success in mobilising the vote didn’t come about by chance but through careful research: whether deciding how to portray Romney in attack ads or exactly what to say to potential voters on the doorstep, data not conventional wisdom showed the way. The Republicans meanwhile were mired in a world of their own making, where belief shaped the evidence and not vice versa, and they paid the price for it. Whether the climate change denying, creationist wing of the party is actually capable of grasping that ‘if I don’t like it then it isn’t happening’ isn’t actually a terribly clever approach to problem-solving, that evidence, reason and science are liable to provide better answers than ideology, emotion and belief, remains to be seen, but it’s hard to see the Republicans winning an election anytime soon if they don’t.

In many ways the backlash against Silver mirrored the Presidential election itself – one side, taking emotion not reason as their lodestone, were ultimately defeated at least in part because of their belief that their own subjective worldview gave a better insight into reality than the available evidence. Ultimately this, alas, is just human nature – we seek out evidence that agrees with us and seek out reasons to discredit that which disagrees with us. We trust our instincts and emotions too much; anecdotes, not statistics, resonate with us (charities have long been aware that “nine year old Maria is in danger of starvation” raises much more money than “two million people are in danger of starvation”). This, together with the counter-intuitive nature of probability, ensures that, alas, the Nate Silver approach will always remain a closed book to large sections of the population.

On the plus side this is good news for my gambling friends, those who are well aware that the secret is not to try to be right this time, but to try to be right over time. I’m not sure I entirely endorse Nassim Nicholas Taleb’s suggestion that the “characteristic feature of the loser is to bemoan, in general terms, mankind’s flaws, biases, contradictions and irrationality – without exploiting them for fun and profit” but while the world remains irrational shrewd gamblers might as well take advantage of it by keeping an eye on the FiveThirtyEight blog. (Note to UK gamblers – when Silver turned his attention to the 2010 UK election he was less successful. This may just have been randomness biting back but more likely British politics is not as amenable to modelling, firstly because three party systems are harder to model but mostly, one assumes, because British polls simply aren’t as accurate as their American counterparts).
