Why political journalists can’t stand Nate Silver (markcoddington.com)
206 points by plinkplonk on Nov 3, 2012 | 176 comments


This is a very polite interpretation of (some) traditional journalists' dislike for Mr Silver. It offers a consistent explanation for the dislike and vitriol, but there is another explanation to explore. Silver presents a method of investigation that is not only epistemologically different from traditional punditry, but threatening to it, and their dislike is centered on the threat his method poses, not just its intellectual merits or misunderstanding thereof. There are plenty of things for journalists to intellectually object to - why focus their efforts here?

Consider that Mr Silver has been given awards and accolades most journalists would kill for: Time's list of 100 most influential people in the world, best political blog Webby award, Rolling Stone's 100 agents of change, the editor in chief of Politico listed him as "one of the most powerful people on earth", he's given a prestigious lecture at Columbia Journalism School, etc. He's received accolades, even from traditional journalism sources, that most journalists, even at the NYTimes, would never dream of. Speaking of which, the Public Editor of the Times wrote "he’s probably (and please know that I use the p-word loosely) its most high-profile writer at this particular moment."

All this for someone who has never worked in a proper newsroom since college, never had a television show, cable or otherwise, and just started a blog four years ago because he was (supposedly) annoyed at laws that threatened his livelihood - online poker.

So imagine seeing someone like this receiving all the awards you've coveted since you started journalism in earnest. While the recoil against Mr Silver might be clothed in intellectual differences for some, the reason they care probably has more to do with seeing someone who differs from their traditional approach getting so much recognition.

Finally, I'd note that there are many traditional journalists who support Mr Silver's approach - his awards from traditional journalist organizations speak to this. Rather, it's a very specific group that seems to feel threatened.


They've hated him ever since he spoiled the 2008 primary for them. Nate has said he started following politics because of poker, but he started the 538 blog because of the Hillary vs. Obama primary.

The news media were still touting it as a close horse race when Nate knew that it was already over. Obama already had enough delegates that it was almost impossible for him to lose. He saw it as blatant deception on the part of the news media as a way to sell more newspapers. When he pointed this out, he just flat out embarrassed them.

I really miss the original 538 blog. Nate would give snarky commentary on whatever the political news of the day was. Now that he works for the NY Times all he does is crunch the numbers. It's not nearly as entertaining.


His Twitter is snarkier, like the old blog. That might change a bit since the NYT said he shouldn't be making bets with Scarborough.


He's not an employee of the NY Times, per se, though, and I don't think he's bound to their reporter's rules. He licenses the 538 brand to the Times (similar to the deal Freakonomics got).


Small pet peeve of mine -- "per se" means "in itself". So, if you read back your sentence, "He's not an employee of the NY Times, in itself", you'll realize this is an improper usage.

It's popular to mis-use "per se" when you mean "necessarily", as in, "she's not my girlfriend, per se", but this is not correct, either. I expect this is how you meant it -- "he's not really an employee of NY Times, he's more like a contractor".

Here's an appropriate use of "per se":

"An aggressive psychological interrogation, per se, is not torture. It only becomes torture when it causes physical harm or lasting psychological trauma to the prisoner." <-- note, this is just a language example, not a statement of my position :)


> He's not an employee of the NY Times, himself. His work is contracted by them.

Why wouldn't it be parsed as that?


I think whatever the relationship, it's clear he no longer comments on anything that isn't directly poll-related. Someone is telling him to "stay in his lane" and he's doing it. His pieces are all long, detailed poll analyses.

Nate could be pretty hilarious in print. I want the old short snarky articles back.


Yeah, but they can still exert pressure on him.


The Times derives much more benefit from the association than he does.


"his livelihood - online poker."

Nate was also an incredible asset to the early online poker community. He was one of the most respected members of 2+2 and really helped push poker strategy forward. His detailed and analytical approach to the game helped make a lot of people a bunch of money during the poker boom.


When I first started hearing about the 538 stuff I was like wait, isn't that Nate Tha' Great? Crazy enough, it was. This world is very small.


Is this still available online in some form?


This should give you his archived posts http://archives2.twoplustwo.com/dosearch.php?Cat=0&Forum...


Isn't online poker illegal in US now?


I don't think it's illegal to play. It's just illegal for a US financial institution to transfer funds to an entity that runs an online poker service. Or something to that effect.


It's not illegal to play poker online. I'm not an expert in the legality/legislation of online poker but the PPA[1] is a good place to start.

[1] http://theppa.org/


Playing is not illegal, but transferring money overseas can amount to wire fraud and money laundering. When Silver was doing his best poker work, the laws were far muddier.


Yeah, but you can play on foreign sites. Just use BTC and you should be safe.


After all the number-crunching it was amusing to see a headline which even a headless chicken could have come up with.

"Ohio Has 50-50 Chance of Deciding Election"

http://fivethirtyeight.blogs.nytimes.com/2012/10/23/oct-22-o...

Perhaps some pundits dislike Mr Silver because he is able to rationalise what they only have a gut feeling for.


Not so much rationalize - he's actually putting it on a statistical and quantitative foundation. There's a fundamental difference between arguing from narrative and history, and technical arguments based on the statistics of how people actually vote based on polling.


Worse than that, he's demonstrating that the 'access' and 'insights' and 'instincts' on which their status as Notable Pundits depends are actually worthless. They are simply making shit up, and what they invent is invariably a tight race scenario because that's what gets the most profitable ratings. In other words, he is demonstrating both their fraudulence and the disservice they do to democracy. And people LOVE him.


"... what they invent is invariably a tight race scenario because that's what gets the most profitable ratings."

I disagree. I much prefer the article's explanation for why they invariably invent a tight race: journalistic 'objectivity'. As Coddington pointed out, when all the highest sources on both sides of the campaign say it's tight, who is Silver to disagree?


One of the comments on the post noted that it should probably have used 'pundit' rather than 'journalist' throughout, to be more specific about the kind of journalist who objects to Silver.


I agree with this article's conclusion in Silver's case, but I think for different reasons. I think the journalistic epistemology it dismisses is actually a good one, but is not being demonstrated in the attacks on Nate Silver. Long-form, long-lead-time investigative journalism is perhaps where it's best demonstrated, and in that case blends somewhat into the epistemology used by anthropologists, sociologists, and ethnographers: the idea that to understand a situation or culture you need to spend considerable time with it, observing how it works, how the people there think and act, etc., even being yourself embedded in it for a time. But you have to actually do it, which takes considerable work and usually long periods of time, and TV pundits do not. And it's better for some things than others: ethnographers rarely claim that their expertise is in predicting the outcome of elections. If you want to predict something, especially something numerical, an ethnographer would generally say that you're asking the wrong person for that prediction, and should go ask a statistician. That might be one difference with journalists: journalists have less humility about the conclusions they can draw.

I don't think that can always be replaced by just data-crunching, in part because you just move the same problem to the 2nd-order problem of interpretive frames for data, which requires the same in-depth ethnographic field work to get right. The proper balance is a big debate in qualitative vs. quantitative sociology, though.

I do think that when it comes to predicting the outcome of elections, on the other hand, the decision is not hard, because it's almost a best case of a prediction that can be quantified based on available data. So I'd say this is more an issue of punditry vs. careful methodology (of any kind), rather than qualitative vs. quantitative epistemologies. In this specific case, it's mainly just partisanship: some people don't like that Silver's data shows Obama with an ~80% chance of winning, so plug their ears.


I'd agree with that. For the elections, it's hard to rev up the base when someone points to the numbers and says you have a poor chance of winning. Rove has an editorial at the WSJ right now, currently their #1 article, about how Romney has a good chance of winning; it feels similar to how in 2006 he claimed to have "the math" before the Republicans got their drubbing. It's almost like putting out a big-name editorial like that is the last "get the base out to vote" attempt before the election, regardless of what the actual odds are. The problem is that having someone like Silver running numbers in a clearly explained way that anyone can replicate, with a high measure of past accuracy, serves to debunk the efforts of people like Rove almost by default.


This is an important point. Elections follow similar patterns every cycle, including much bravado. Silver totally disrupts that with data--it's not just the punditry that is angry with him, it's political operatives. He's pulling the curtain back and revealing the "wizard of Oz", so to speak.

If you really want to know which side feels less-than-confident about their chances, the best indicator is "leaks" from campaign insiders that talk about the candidates' future plans. Laying the groundwork early is an irresistible urge for those who think they might be in trouble.


> In this specific case, it's mainly just partisanship: some people don't like that Silver's data shows Obama with an ~80% chance of winning, so plug their ears.

News flash: It's no secret that the news media (except for Fox) has a distinct Democratic tilt. I agree with your premise that journalists are closet partisans, but I disagree that they'd be upset at an analysis that favors Obama for partisan reasons.


Honest Q that I hope won't sound too partisan: I think it's pretty accurate that most members of the news media tilt to the Democratic side -- although I think they tend to be much more in the Clinton model rather than the Kucinich model, which is to say about what would have been considered moderate Republican in the 60s (socially fairly liberal but economically rather laissez-faire, save for a few fairly stock Democratic positions like being pro-union).

However, that doesn't necessarily mean the actual news stories are biased. There may be tilts in the sense that, say, a Keynesian and a Hayekian might sincerely intend to write the same story objectively but still reveal a bit of bias in the finished result -- but isn't that qualitatively different from the kind of editorializing we see on Fox (and, since they decided that counter-programming Fox was their ticket to success, MSNBC)?


There are plenty of right-wing journalists outside of Fox.


Then name 3 of them, from memory, without checking wikipedia.

Bonus points if they run a talk show with a political slant, like Colbert.

EDIT: thanks for the downvote, but I'm still waiting for triplets.


Michelle Malkin, Jonah Goldberg, David Brooks (debatable), Charles Krauthammer

talk show: Glenn Beck? Rush Limbaugh? Pat Buchanan is on the communist PBS show McLaughlin Group.

Colbert isn't a journalist, but talk radio is 99% conservative or if not outright conservative then advocating "common sense"-y solutions that somehow always come out conservative.

Didn't check wikipedia or even look up articles.


David Brooks, David Frum, Bill Kristol. Not talk-show guys, 'cause I don't watch much TV.


Joe Scarborough quite literally is a Republican (6 years in the House from FL-1) and runs three (I think) hours of coverage every morning on the "most liberal" of the cable networks. The whole notion of the "liberal media" as a unified force is itself a creation of the decidedly conservative wing of the media dominated by News Corp. MSNBC has a bunch of coverage that slants left. Fox is actually run by Republican partisans. Yet it's the people on the right who complain the loudest, and the reason is precisely that they have their own partisan media to push the message.

Sigh...


I think the downvote is for the attitude. Your edit didn't help. Are the lurkers supporting you in email yet?


Why should Fox be excluded?


Because they have a clear Republican tilt.


Well, yes. So the comment is essentially "journalists lean Democratic if you ignore a huge concentration of Republicans". Which is not much of a statement at all.


I'm sorry, but does Fox News actually have many journalists? From what I've seen, they mostly run pundits, so the statement is still technically true.


On the subject of actual newsroom expenses: the Pew Research Center's Project for Excellence in Journalism has an interesting piece on the budgets of major cable news networks. [1]

These charts (from [1] and [2]) are particularly interesting:

http://www.stateofthemedia.org/files/2011/01/31_Cable_Revenu...

http://stateofthemedia.org/files/2011/03/20_Cable_Cable-Chan...

http://www.stateofthemedia.org/files/2011/01/5_Cable_CNN-Lea...

http://www.stateofthemedia.org/files/2011/01/21_Cable_CNN-Re...

http://www.stateofthemedia.org/files/2011/01/22_Cable_The-Sa...

In short, when compared to CNN: despite operating fewer domestic and foreign bureaus, Fox News pulled in more revenue and had nearly as large an audience (as measured by viewers tuning in for 60 minutes or more monthly). Fox also allocates over 70% of its budget to "program expenses" (including salaries for its hosts), whereas CNN's program expenses are about 44%. The difference in staffing figures is also drastic: ~4000 for CNN, 1272 for Fox.

[1] http://stateofthemedia.org/2011/cable-essay/

[2] http://stateofthemedia.org/2011/cable-essay/data-page-2/


> I think the journalistic epistemology it dismisses is actually a good one

I disagree with this. Basically, the journalistic "epistemology" the article describes is that journalists get access to information the rest of us can't see, so we have to take their word for whatever they say about it. That doesn't seem like a good thing to me. I don't want journalists, or anybody else, to pre-digest my information for me. I want to see the raw data. If they want to show me their spin on the data in a separate article, that's OK, but if I can't see the actual data they base their spin on, I don't trust their conclusions.


Example: Judith Miller of the NYT and her reporting on Iraq.


(submitter here) The original title was "Why political journalists can’t stand Nate Silver: The limits of journalistic knowledge"

I had to edit it because of the eighty character limit. Sorry about that.

The central point of the article is about misunderstandings between people with different 'modes of knowledge', and not really about current USA politics (in which case I wouldn't have posted this here; I'm not a US citizen and so I'm detached from the partisanship). Hopefully the discussion here won't get into political argument.


I definitely hear that. In fact, Krugman wrote a piece along these lines, arguing that many journalists are very uncomfortable dealing with data. Instead of looking at lots of data and trying to draw conclusions, they want some magical insider to tell them what's really happening.

http://krugman.blogs.nytimes.com/2012/10/30/scoop-dupes/


It is not often that I find myself agreeing with Krugman (who likes to ignore data that does not fit his views). But in this case -- possibly because no data is actually involved -- I agree completely.

And unfortunately, that's true for most people, most of the time - if you asked your physician to substantiate any non-trivial recommendation he makes, you'd find that the data supporting those recommendations is severely lacking or irrelevant. But most people want the magical insider (the doctor, in this case) to tell them what's happening, rather than the facts.


I think in the case of journalism that the "doctor" wants to preserve some flexibility in interpretation that Mr. Silver takes away.


An alternative to truncating to get under 80 is to take out most of the vowels: "Why pltcl jrnlsts cn't stnd Nate Silver: Th lmts of jrnlstc knwldg". That's only 66 characters.


I don't understand why this is getting down voted. Vowel dropping preserves much more information than truncating.


Because it requires a lot more mental effort to parse a headline without vowels, and people don't want to expend mental effort to parse headlines; they want to be able to make an instant decision on the question "do I bother clicking on this headline or not?" Lack of vowels makes that much harder.


Or easier, if you're lazy like me.


Joan Didion wrote an article in the 1980s titled "Insider Baseball" that drew an analogy between reporting on elections and reporting on the goings-on in baseball locker rooms. Typical election journalism gives the reader the illusion that they're getting an inside view, but ultimately the voter is disenfranchised because the story isn't about the voter's choice, it's about the campaigns.

I remember an Isaac Asimov story about a future where only one person had to vote, because, based on that person's vote, a supercomputer could predict who would have won the election. That's the strange future of 538. Politics is no longer the art of the possible, it's just like betting on sports.


She has a wonderful authorial voice. "Insider Baseball" is here:

http://www.nybooks.com/articles/archives/1988/oct/27/insider...

Note: don't try to read it from a left/right perspective. That's not where she's coming from.


I also thought her book Political Fictions http://en.wikipedia.org/wiki/Political_Fictions was in a similar vein.


The Asimov story is called Franchise.

http://en.wikipedia.org/wiki/Franchise_(short_story)


Elections already are about the campaigns. If there is more certainty about the outcome of the election, they will inevitably be less about the campaigns. That would lower turnout... but would it be bad?


I think Paul's point was that typical election coverage is about the campaigns themselves, as if the campaign was intrinsically important [1]. But the campaign is not what's relevant to a voter. That is, they don't need a narrative about how the campaign is doing, they need information on their choices.

[1] Insider campaign coverage is certainly interesting, but that's different from being important. I'm interested in behind-the-scenes narratives in just about everything.


If that was really a problem, they could just make voting mandatory. Like it's already in some countries.


I hope that the rise of Nate Silver portends a day when journalism will not be practiced almost exclusively by people who are functionally illiterate when it comes to all things quantitative. When so much of our world is governed by relationships that are inherently quantitative, it's tragic that, of the people who make it their life's work to explain that world to us, so few are equipped to do the job.


Quantitative journalism. In light of that term I think we can see that Fox/HuffPo/etc. practices qualitative journalism, which I would translate to "feelings-based."


Julian Assange had a similar term, "scientific journalism". Scientific in the sense that you have to show your raw data.

Nate is a hybrid. The details of his model are not on Github or anything, but he does describe his general approach, and he works from publicly available information.


Don't forget to add Engadget to that list, as today's editorial (posted earlier) shows.


I rarely deign to type the names of any Nick Denton properties.


And then, god forbid, perhaps politicians who are not innumerate? A man can dream.


Why has Obama been going to Florida? The only reasonable answer is that he is looking not only at the snapshot numbers someone like Nate Silver does a good job of reporting, but the trend, way ahead of current polling. Campaigns are very numerate.

For the same reasons, Chris Christie and Michael Bloomberg feel safe throwing Romney under the bus.


Regarding Christie, I'm willing to assume that his praise of Obama is genuine, and not calculated. Christie has had to deal with an enormous disaster, and I can easily see him being grateful if the President has, in fact, been very helpful.


I should say, politicians who don't act innumerate in session.

Good point that they act extremely numerate (is that a word?) in how they respond to polling numbers.


Bloomberg is not a Republican anymore (He's an independent), for the record.


Politicians, by and large, are fairly numerate. Speechwriters rarely care about numbers, and policymakers care not at all.


> Politicians, by and large, are fairly numerate.

Got any data to back this up?


Politicians are highly numerate. They just lie about their incentives, so they appear innumerate when misjudged.


Only politicians in the vanishingly small percentage of districts that are actually competitive.


Many of those districts are still competitive, just at the primary level instead of the general election.


Primary elections are only competitive when there isn't an incumbent. And even then rarely, because there is a candidate that the party favors.


Playing to the lowest common denominator. Any politician who says they don't read the news or polls is lying.


It's even beyond that really. Most journalists are just functionally illiterate, period. Journalism is no longer a respectable profession, and it largely now draws from a pool of people who wouldn't be good at much else. As far as I can tell, the journalism majors that have a bit of intellectual horsepower all end up as lawyers.


Oh no, you have the problem precisely backwards. Journalism is now a very respectable profession. Which is why it attracts people who don't know anything.

It was much better when journalism was a trade that you went into. Street savviness was the journo's hallmark. They had no loyalty to power, because they came from the wrong side of the tracks, and their livelihood depended on exposing secrets. Great journalists were ugly, ink-stained, alcoholic wretches.

Those people are gone. Now there's two kinds of professional journalists. Middle class people who were educated to do journalism at university, fighting over the few scraps of remunerative work left. And people who want to shill for one of the dominant political coalitions. These groups get to the gym a lot more often, but are constrained by their loyalty to the system.

So, journalism would be better if it was less respectable. We also need to put a stop to all the social mobility that's happening. When you block an entire group of people from attending university or holding office on some irrational basis like ethnic origin, you usually get some really good journalists. But there is cause for hope. Perhaps the end of privacy means we'll get more people in the media like Eliot Spitzer.


I found out today that the US weather service gives percentage-chance-of-rain estimates. That's good, but not enough.


Would you rather find a certainty that every now and then turns into a lie? I wouldn't.


... It's good that one part of the government (the Navy?) presents its predictions with confidence values and so will introduce the concepts to the nation in an everyday format

but just the weather service is not enough

(what I would have said had I more time first time)


Dr. Sam Wang, a neuroscientist at Princeton University, wrote a spirited defense of Silver on the Princeton Election Consortium blog (http://election.princeton.edu), where Wang also does some insightful statistical analysis on the Presidential election.

Wang sometimes disagrees with Silver, but supports the notion of a data-driven approach. Fascinating stuff on his blog, too: he's running Bayesian prediction models, Random Drift, and has a popular meta-margin that is worth checking out.

He also takes some delightful shots at journalists and others who are trying to keep the focus on political "horse race" reporting, instead of using more rigor. And he has a great sense of humor - his "Nerds Under Attack" post is hilarious - http://election.princeton.edu/2012/10/29/nerds-under-attack/


It's hardly a war between epistemologies; journalists just have to fill time. They've had Nate Silver on to celebrate his work a dozen times when he serves the purpose and lets them tell a different story than the day before.

They're not against his methodology, they're against his current results, which don't make the race as interesting as it could be. (If Nate was the only one saying it was a close race, they'd be pushing his narrative above all others.)

News is biased, but not how most people think, it's biased towards conflict (and novelty, doomsaying, and sensationalism).

It's just entertainment, there's no permanent epistemology to entertainment.


> News is biased, but not how most people think, it's biased towards conflict (and novelty, doomsaying, and sensationalism).

They could, however, be using those entertainment factors to cover, say, the fall of real wages, or the current state of unions, or what's actually in a Congressional budget, or any number of other topics that seem to get very little air-time.


I'd also like to see more air time devoted to the Congressional budget, but I can't think of a way of presenting it that the average TV viewer would find entertaining.


The problem isn't that it's been tried and found difficult; it's that it's been assumed to be difficult and not tried (to paraphrase G. K. Chesterton).


I disagree.

I did work for a startup that specifically showed unbiased news on a variety of pertinent topics from a variety of great sources.

Yet the videos that made the most money for them (advertising revenue) were the stupid shit you'd expect: gossip, memes, sex-related content...

At the end of the day the most mainstream news stories/clips were far more lucrative than the topics a typical HNers/New Yorker reader might enjoy.


The OP referred to turning various kinds of interesting subject matter into entertainment, versus merely reporting it. I think this is actually an interesting idea, and I don't think it has really been tried.

The typical entertaining news story "writes itself". Gossip, stupid pet tricks, etc.

Indeed the entertainment industry is itself very conservative in its choices of subject matter.


That word "entertaining" is the root of the problem. People don't want to actually think about issues; they want to be entertained. You can't fix problems by being entertained.


> News is biased, but not how most people think, it's biased towards conflict

Exactly. They're called "stories" for a reason.


What about "articles?"


I work in media, specifically journalism. By and large, they're "stories." What makes news "news" is pretty much the same thing that makes entertainment "entertainment" (uniqueness, continuity, composition, etc.). There's a big difference between merely gathering and filtering the facts of the day and writing a news piece, which requires gathering and filtering facts and then presenting them in a narrative format, with quotes and conflict (which is sometimes called balance, but I tend to disagree with that argument). For most news with a short lead time, it's produced by formula (see: the inverted pyramid), and often decided by editors shooting from the hip as to what is a good story.


Your eliding of news and entertainment is troublesome.


In what way?

Conflict itself is a news value. Something may be news precisely because there is conflict, and exponentially so if that conflict affects a news organization's readers via proximity or other news values. Where no conflict can be found, it can be (and often is) generated via balance. Science writing is full of it, same with education writing and politics. Find one source that says one thing, another source that says the opposite, and you've generated built-in conflict. If the conflict is sensational enough, your editor puts it on the front page.

This is a real problem in journalism. Journalists are specifically trained as generalists. There's a culture of avoiding becoming experts in their beats, instead they strive for access to experts. The argument is that by not being an expert, one can write for a general audience more clearly. Just follow the formula.

Anyway, news is narrative. That is the chosen format. Compelling narratives do better at selling news than dry factual reporting.

There's a reason why, for example, when newspapers do polls that are outliers from the consensus they blare that information as loudly as possible. It's not because they honestly think they're right, it's that they've spent upwards of $20,000 conducting the survey and now they're going to get their worth out of it.


The problems you describe are purely a result of editorial policies that can be changed. There is no natural law that dictates that access or narrative (editorialization) be a fundamental aspect of news publishing. Season 5 of "The Wire" goes into this with some detail.


It's like ultrasound: it takes all the fun out of guessing the sex of the baby. If his methods were less reliable, or the race were closer so the result was more in doubt, he would be fine feed for the grist mill. Right now he's just a party pooper.


It's in the campaigns' interest to paint it as a tight race, because they don't want to promote political apathy. They want people to vote.

To put it another way, if his model suggested, e.g., that Obama has a 100% chance of winning, and everyone knew that, his supporters would be less inclined to vote and he could lose. So there's a chaotic element to these things that's just difficult to predict, and journalists are better placed to handle this chaotic element (another example: Hurricane Sandy). That said, given the abundance of polling data, I think his model stands to be a much better predictor than journalistic intuition.


If Obama were forecast to have a 100% chance of winning, wouldn't voters for alternative candidates also be less inclined to vote?


They might also view it as a challenge to prove the pundits wrong, and turn out in large numbers.


Depends how "alternative" those candidates are -- it might convince more people that they can vote for their actual favorite instead of issuing a strategic vote for whichever of the two major-party candidates they find less objectionable than the other.


I think the premise--or at least the wording--is wrong. Journalists like Nate Silver. It is political pundits (who are almost the opposite of journalists) who can't stand him.

Fittingly, pundits typically dislike actual journalists for the same reason: they attempt to report things accurately rather than through a party or ideological filter.


My problem with journalists is they try too hard to report things "fairly."


Agreed. Setting aside the impossibility of unbiased reporting, the current habit of "repeat what both sides say, but don't fact-check anything" is appalling. If one side is outright lying, and you are reporting what they say but not the factual truth behind it, you're doing a disservice to all involved.


There is plenty of fact checking being done by journalists, but it has shifted to "fact check" stories that are published separately.

This is in part because it takes longer to fact check things, but there is tremendous competitive pressure to be first with news. So the on-scene reporters report what was said in near-real-time, and then the fact check reporters look through and report on how true it was.


Sure, but those "fact check" stories are useless. People see the first thing. They aren't likely to see the next.

I get the rationale, but it doesn't do what reporters need to actually do to be an effective part of the political process.


If we're being honest, it's overwhelmingly political pundits of a particular political persuasion who can't stand him. Hence the rise of thinly-veiled pundit arms like the Unskewed Polls nonsense.


It's another symptom of poisoned politics here in the US. As far as I've seen, the people who are attacking Nate Silver are Republicans who simply don't like what Silver's stats are pointing to. They're the same people who claimed last month's unemployment figures were somehow manipulated because it showed a downward trend in unemployment.

It's also an insight into the state of journalism and the press in this country. It's pretty hard to find objective reporting these days, sources of 'news' seems to be full of highly opinionated punditry rather than unbiased reporting.


If Silver were predicting the race going the other way, Democrats would attack him. People tend to attack things that don't fit their desired outcomes. Leftists attack Fox for bias but they have no problem with the bias of MSNBC. It isn't bias people care about, it's bias against their own views. There are very few objective people.


Did Democrats attack 538 when Silver predicted huge losses in the Congressional races in 2010? I don't recall this.

It's not for nothing that one party is considered the anti-science party.


No one pays attention to off-year races.


Actually, Democrats attacked one another and/or Obama when the race tightened in early October. You wouldn't believe the wailing and gnashing of teeth. But seldom would you hear somebody say, well, the polls must be wrong.

The difference is that generally when the polls sour for Dems, they blame themselves collectively. When Repubs see the polls sour, they blame the polls. Neither is objective, but one of them is somewhat less isolated from reality.


Do you have anything to support your implication that MSNBC is as "ideologically biased" (my words, not yours) as Fox News, and thus as worthy of attack for their bias? Consider that, as one example, FNC's VP of News has handed down memos instructing producers to adopt a conservative stance in their stories [1].

[1]: http://mediamatters.org/research/2004/07/14/33-internal-fox-...


I hope more supposed "journalists" come out and criticize Nate Silver for his methods. Then maybe we can start having a real discussion based around facts and data with real analysis across the board.

Take for instance discussions around Social Security. I swear most of the arguments I hear have no real data to back them up. Nate Silver represents a new generation of reporting that threatens the status quo journalist. Notice I don't say traditional journalism, because I don't believe you can replace true journalism (going out and actually doing investigation and reporting). I just believe Nate Silver is leading the way to a more fact-based approach.


It may be more that the old fact-based approach that has been corrupted and stymied over the past 40 years (Watergate) is giving way to a new data-driven fact-based approach, and the people who have spent their careers in the old, broken model now have to fight for their continued relevance and employability.


I'm not very familiar with Silver's background. I was about to write up how similar this type of analysis (and the ensuing reaction to it from the established pundit class) was to the baseball world and the rise of sabermetrics. Then I saw his background and found out that that's where he started. Not surprising.


It's worth emphasizing the fact that Nate Silver's model != Nate Silver's prediction. He has explicitly said many times that if you were to ask him what he thought the outcome would be, he wouldn't go entirely by his model. Rather, he uses the model as the basis of his predictions. This confuses a lot of people because he generally doesn't make predictions publicly, but rather he analyzes what's happening based on the model.


The article seems down to me (503), but here's one that (judging from the headline alone) is about the same topic: "People Who Can't Do Math Are So Mad At Nate Silver"

http://www.theatlanticwire.com/politics/2012/10/people-who-c...

e: The article is now up for me, and it definitely covers the same topic.


Can what Nate Silver is doing be considered science?

He's dealing with terribly sketchy data. Response rates to political polls in America are south of 10%, and there's no proof that the portion of the population willing to have a conversation about their political preferences with a stranger is representative of the population as a whole.

He's also dealing with incomplete data. The 'likely voter' screen pollsters use to determine who's actually going to the polls isn't always revealed, and it varies from organization to organization. Polling is a cash-strapped industry and some polls' likely-voter screens are much more porous, cutting down on staff required and voter contacts needed to generate a 'statistically-significant' sample. But details of the likely-voter screens and response rates aren't always available.

His results are also completely unverifiable. Let's say his final prediction is 80-20 in favor of one candidate, but the other candidate wins. Well, his model did say that'll happen one time in five, so you can't really criticize it.

The appropriate amount of precision to conclude with, given all of the fuzzy inputs, is something like 'well, we can't tell who's going to win - it's close, although this one candidate's chances look slightly better'. It's not 'Candidate A's chances improved from 74.6% yesterday to 76.5% today'.

That's what makes Nate Silver so irritating - he doesn't know any more than the journalists, but he claims he does.


>The appropriate amount of precision to conclude with, given all of the fuzzy inputs, is something like 'well, we can't tell who's going to win - it's close, although this one candidate's chances look slightly better'. It's not 'Candidate A's chances improved from 74.6% yesterday to 76.5% today'.

Your post shows a horrible misunderstanding of how this all works. When you have sketchy data, you don't just disregard it -- it's still telling you something and the role of statistics here is to extract, from amidst all the noise, that information.

Just because something is hard doesn't mean it's not worth the attempt.

>His results are also completely unverifiable.

Kind of. But how his methodology performs overall, over a wide range of elections, is verifiable. Wikipedia mentions this:

>The accuracy of his November 2008 presidential election predictions—he correctly predicted the winner of 49 of the 50 states—won Silver further attention and commendation. The only state he missed was Indiana, which went for Barack Obama by 1%. He also correctly predicted the winner of all 35 Senate races that year.

Whether his models perform like this consistently is a very real way that the methodology can be verified.


A couple of points:

1) His model is easy to verify... wait for the elections to be over, and see how much his predictions correlate with reality. He predicts a great many races, and the results speak for themselves.

"The accuracy of his November 2008 presidential election predictions—he correctly predicted the winner of 49 of the 50 states—won Silver further attention and commendation. The only state he missed was Indiana, which went for Barack Obama by 1%. He also correctly predicted the winner of all 35 Senate races that year."

http://en.wikipedia.org/wiki/Nate_Silver

2) Also, his model accounts for the response rate figures you cite - that was actually the point of his most recent blog post. The reason Nate gives Romney any chance at all to win is because his model predicts that there is less than 1/5 chance that the polls are systematically biased (based on data since 1968). He thinks that if he's wrong, it's exactly for the reason you say - that those who respond to the polls are a small group that don't represent the population. He has a large enough sample size to eliminate sampling error, and we're close enough to election day to discount error due to polls being a snapshot in time. So there is only systematic bias left to discuss...

"So why, then, do we have Mr. Obama as “only” an 83.7 percent favorite to win the Electoral College, and not close to 100 percent? This is because of the other potential sources of error in polling."

And that error is simply that polls don't reflect reality, and he thinks that is about a 15% chance.

http://fivethirtyeight.blogs.nytimes.com/2012/11/03/nov-2-fo...


> His model is easy to verify... wait for the elections to be over, and see how much his predictions correlate with reality.

FWIW, he got a bunch of congressional races wrong in 2010.


He's not the Oracle at Delphi; he gives probability estimates, not inescapable prophecy. It's inevitable that some elections will go to the candidate who he predicts has a lower chance of winning. The real question is, how often? If it happens more or less often than he predicts, then his predictions are biased and he should update on that fact.
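
One simple way to test that, given enough past races, is a calibration check: bucket the forecasts by predicted probability and see how often each bucket actually won. A minimal sketch in Python (the forecast/outcome pairs below are invented for illustration, not real 538 numbers):

    from collections import defaultdict

    # Hypothetical history: (predicted win probability, 1 if that candidate won).
    # These pairs are made up for illustration, not Silver's actual forecasts.
    forecasts = [(0.80, 1), (0.75, 1), (0.80, 0), (0.60, 1), (0.55, 0),
                 (0.90, 1), (0.85, 1), (0.70, 1), (0.65, 0), (0.95, 1)]

    buckets = defaultdict(list)
    for p, won in forecasts:
        decile = round(p * 100) // 10      # group forecasts into 10-point bins
        buckets[decile].append(won)

    for decile in sorted(buckets):
        outcomes = buckets[decile]
        print(f"forecast {decile * 10}-{decile * 10 + 9}%: won {sum(outcomes)} of {len(outcomes)}")

    # A well-calibrated forecaster's ~80% picks should win roughly 80% of the
    # time over many races; consistently more or less than that is the bias
    # described above.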


One factor that messes with any model is lack of consistent data. House races aren't publicly polled nearly as often as statewide polls. And furthermore, it's not necessarily that he "got races wrong." When he says Obama wins 80% of the time, he's also saying that Romney wins 20% of the time. 1 out of 5 times, Romney will win and Obama will lose, despite being a favorite to win.


What you just posted is misleading. Silver's model gave the GOP a 2 in 3 chance of winning the House in 2010, predicting a net gain of 45-50 House seats, and the majority - http://fivethirtyeight.blogs.nytimes.com/2010/09/10/g-o-p-ha...


Also, his modeling is MUCH more accurate for presidential races, where there is more polling data, and thus a lower sampling error.


It is important to note that the cited accuracy was for the predictions made the day before (or perhaps even later) the election. It would be interesting to know how accurate the predictions were X weeks out.


I don't think Silver claims to have data or knowledge that is better than what is available to the public.

What he does different - and his isn't the only model built off of polling data, nor are 538's probabilities that different than other models - is just trying to decipher all the available polling data to reduce the noise and focus on the signal.

I also don't think he sees himself as being in the business of picking a winner, as in saying "I believe X will win tomorrow".

One thing you might be missing is that his predictions are verifiable at the state level - a result where he predicted the overall winner but only got 30 of 50 states right versus getting 49 out of 50 states right are pretty different.

Polling aggregation and models are as much "science" as the practice of polling is in the first place.


> One thing you might be missing is that his predictions are verifiable at the state level - a result where he predicted the overall winner but only got 30 of 50 states right versus getting 49 out of 50 states right are pretty different.

This really isn't true, because something like 35-40 of the states aren't even remotely up for grabs and anyone intelligent could predict them after 15 mins of looking at recent polls and previous election results. So he's trying to pick winners for 10-15 states where the race is closer, but for probably half of those, it's really not that close and you'd have a pretty good chance of getting them right if you just eyeballed it. So now we're talking about 5-7 really tight swing states, and you're basically saying that predicting 44/50 states is very different than predicting 49/50. Not so sure that it is, especially given that it was a 50/50 choice for each of those states, and he did it for one election. Also, lots of pundits and bloggers made predictions in 2008; you'd expect someone to be mostly right. Fooled by Randomness and all that.

For what it's worth, I think Silver probably is right and Obama will win (though I'd prefer he didn't), but his methods do lean towards being relatively unverifiable on a short timeline.


It seems like you are confusing predictions about this election with overall model accuracy.

The state by state predictions are the primary method by which you can judge the accuracy of his model. If his model forecasts a state as 60% for one candidate, you can assess that level of accuracy by looking at all 60% predictions.

Let's take some examples from today's date on FiveThirtyEight.

* Florida 54.8% chance of Romney win

* Virginia 67.0% chance of Obama win

* Nevada 67.9% chance of Obama win

* North Carolina 79.6% chance of Romney win

* New Hampshire 80.4% chance of Obama win

* Iowa 80.7% chance of Obama win

* Nevada 88.7% chance of Obama win

Other states like Texas, Utah, Idaho, Wyoming are projected at 100% for Romney and New York, California, Oregon, and Illinois at 100% for Obama.

If any of the 100% states go to the opposite candidate, that is a model problem. If some of the ones given extremely high percentages (95%+) go against his predictions with a high margin of victory, again that is a model problem.

Finally, some of those close races should go against the model's prediction.

Let's take the 7 states listed above. 5 of the 7 are projected for Obama and 2 for Romney. However there is only about a 37.68% chance of that exact distribution happening. I break it down as follows:

Obama-Romney:

* 0-7: 0.02%

* 1-6: 0.14%

* 2-5: 2.26%

* 3-4: 12.14%

* 4-3: 31.48%

* 5-2: 37.68%

* 6-1: 16.00%

* 7-0: 0.28%

Each of these numbers is a probabilistic statement about the likelihood of the overall event occurring based on the probabilities. They are from a single simulation of the 7 state probabilities run 10,000 times. Each of them has its own distribution, e.g. 5-2 was 37.68% in the first run, 37.38% in the next, then 38.03%, then 36.85%, etc.

This is a very simple model prediction, but by taking all 50 states into account you can get a very clear assessment of how well his model is actually predicting the outcome of the election. Complicating matters is the time series nature of the predictions.

This type of model is precisely the way you get away from "Fooled by Randomness and all that". His model is clearly articulating the amount of uncertainty in the forecast.
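
For anyone who wants to reproduce that breakdown, here is a minimal sketch of the 10,000-run simulation in Python (the only inputs are the seven state probabilities listed above; everything else is just mechanics):

    import random
    from collections import Counter

    # Obama's win probability in each of the seven states listed above
    # (1 - p where Romney was listed as the favorite).
    p_obama = [1 - 0.548,  # Florida
               0.670,      # Virginia
               0.679,      # Nevada
               1 - 0.796,  # North Carolina
               0.804,      # New Hampshire
               0.807,      # Iowa
               0.887]      # the seventh battleground listed above

    runs = 10000
    counts = Counter()
    for _ in range(runs):
        obama_wins = sum(random.random() < p for p in p_obama)
        counts[obama_wins] += 1

    for wins in range(8):
        print(f"Obama {wins} - Romney {7 - wins}: {counts[wins] / runs:.2%}")

    # The 5-2 split should land near the ~37-38% quoted above, with a few
    # tenths of a percent of run-to-run variation.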


> but his methods do lean towards being relatively unverifiable on a short timeline.

Perhaps. But I think a lot of his critics fall into the trap of thinking that Silver believes he is offering some sort of revolutionary and perfect prediction, or that he is doing some sort of groundbreaking science.

He's just trying to make a prediction based on all of the available information. If election predictions are inherently "relatively unverifiable", Nate Silver isn't capable of changing that - just working with what he is given (and doing a great job of explaining all of the ins and outs of this stuff to the layman).


In fact he says a good amount of this himself. I think some of his readers impute a greater level of "scientificness" to his numbers than he himself claims. He's had many posts throughout the fall explaining where his model is based on some assumptions that could turn out to be incorrect, and key parameters fit based on relatively limited data. For example, an important one is how you translate current poll leads to likelihood of winning on election day, i.e. why does an x% lead a week before the election give you y% chance of winning? His method is to look at the empirical distribution of poll misses in the 11 elections 1968-2008, make some normality assumptions, and use that to estimate poll->results mapping, which serves as a single estimate of a whole bunch of miscellaneous sources of error (likelihood the polls are systematically biased this year, likelihood of a last-minute change, etc.). But of course that's a small number of data points, and not IID ones, either, all of which he acknowledges. All he really claims is that this model is a reasonable attempt to integrate the available data.
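
To make that poll-lead-to-win-probability step concrete, here is a toy version of the idea (the 2.5-point error standard deviation is an invented illustrative figure, not the value Silver fits from the 1968-2008 data):

    from statistics import NormalDist

    # Treat the final polling-average miss as roughly normal, with a spread
    # standing in for all the miscellaneous error sources described above.
    # sigma = 2.5 points is an assumption for illustration only.
    poll_error = NormalDist(mu=0.0, sigma=2.5)

    def win_probability(poll_lead_pts: float) -> float:
        # The leader wins if the error against them is smaller than the lead.
        return poll_error.cdf(poll_lead_pts)

    for lead in (0.5, 1.0, 2.0, 3.0, 5.0):
        print(f"{lead} point lead -> {win_probability(lead):.0%} chance of winning")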


The headline predictions aren't his only predictions.

Silver also publishes projected margins of victory (with margins of error), and he does so for every state and every senate race, not just the battleground ones. As such, it's possible to do a fairly substantial review of his model against actual results.

Statistics are never perfect, but his predictions are far more transparent and verifiable than anything else on the market.


Electoral-Vote.com used a much simpler method (averaging polls) and got 48 out of 50 states right.

EV.com had Missouri as a tie (McCain ultimately won by 0.1%, 538 got the state right) and had Indiana wrong (as did 538).


The guy made a model that he (and the nytimes) think is reasonable. The model takes inputs and spits out a probability. As the inputs change slightly so does the output. He's well within his right to publish these changes. What's the problem? That you just can't accept the validity of day-to-day basis-point changes in 'chances of something happening' when the inputs might be dirty? Can't you just say 'oh, that's nice' and take it as a directional signal without allowing it to irritate you?

I'll take a thoughtful statistical analysis any day over the words of a single television journalist giving me second-hand information from a 'source within the campaign' especially when any campaign insider has an absurdly strong incentive to manufacture public perception in his candidates favor.


>His results are also completely unverifiable. Let's say his final prediction is 80-20 in favor of one candidate, but the other candidate wins. Well, his model did say that'll happen one time in five, so you can't really criticize it.

That's a good point. He is using percentage chances everywhere but then running only one trial (election).


Clearly a wager is the way to solve this problem (Bayesian interpretation of probability). Is it legal to gamble on an election result? They could use the following payouts - J represents a regular journalist using 50/50 odds, N represents Nate Silver using 75/25 Obama/Romney:

Romney win:

J: +1000

N: -1000

Obama win:

J: -600

N: +600

With those payouts each side has an expected value of +200 dollars for this wager, using the specified odds they believe to be true, so both should be willing to take it.
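
A quick arithmetic check of those payouts, using the odds each side believes (just a throwaway sketch of the expected-value calculation above):

    # J believes 50/50, N believes 75/25 in favor of Obama.
    p_obama_J, p_obama_N = 0.50, 0.75

    # Payouts to each party as (if Romney wins, if Obama wins).
    payout_J = (+1000, -600)
    payout_N = (-1000, +600)

    ev_J = (1 - p_obama_J) * payout_J[0] + p_obama_J * payout_J[1]  # 0.5*1000 + 0.5*(-600) = +200
    ev_N = (1 - p_obama_N) * payout_N[0] + p_obama_N * payout_N[1]  # 0.25*(-1000) + 0.75*600 = +200

    print(ev_J, ev_N)  # both +200, so each side should accept under its own odds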


Intrade and TradeSports do this. It is also trivial to simulate with play money over many elections and races.


This is, from my recollection, an incorrect assessment of Silver’s methodology.

He runs (continuous?) Monte Carlo simulations over his inputs, and when he says that there's an 80-20 chance in favour of candidate O, he's saying that 80% of the simulations resulted in a win for candidate O, not candidate R.
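
Roughly, that kind of simulation looks like the sketch below (the state probabilities and electoral-vote counts here are made up for illustration; they are not Silver's actual inputs):

    import random

    # Hypothetical battlegrounds as (Obama win probability, electoral votes).
    battlegrounds = [(0.95, 55), (0.80, 18), (0.50, 29), (0.20, 38), (0.90, 29), (0.60, 6)]
    safe_obama_ev, safe_romney_ev = 200, 163   # pretend the rest of the map is settled

    def simulate_once():
        ev = safe_obama_ev
        for p_win, votes in battlegrounds:
            if random.random() < p_win:
                ev += votes
        return ev

    runs = 10000
    obama_wins = sum(simulate_once() >= 270 for _ in range(runs))
    print(f"Obama wins in {obama_wins / runs:.1%} of simulations")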


You're forgetting that he predicts the specific numbers of the outcome, not just the result. When he predicts that candidate A will win with X electoral votes, and the result is very close to X, that's a substantial verification of his prediction, especially if it repeats several elections in a row.


> [Silver's evaluation] involves judgment, too, but because it’s based in a scientific process, we can trace how he applied that judgment to reach his conclusions.

No, as far as I know the actual model and its parameters are not public. I understand why this is the case, but without that information the basis for the conclusions is effectively as obscure as it is for those produced by the conventional journalists decried in the article.


Silver tries to be transparent about his model. For a non-technical description, see [1]. He used to have something more technical/concrete on his personal blog. I'm guessing you can still find that easily.

The fact that he doesn't include specific equations in his NYTimes blog is likely only to improve accessibility to non-technical readers.

That said, he has given enough detail for others to roughly replicate his work. See [2]

[1] http://fivethirtyeight.blogs.nytimes.com/methodology/

[2] https://github.com/jseabold/538model


You substantially underestimate the effect of the (unreported) model parameters. A person can often arbitrarily change the prediction by tweaking the parameters.

Thank you for the link to the attempt to reproduce Silver's results. I will have to spend more time to determine how closely they agree -- it isn't obvious after a cursory look.


That's a good point about the importance of parameters.

I wonder whether Silver would be responsive if you emailed him asking about parameters.


Weird, in '08 he had a page giving the model in detail, including some of the fit values (e.g. the imputed pollster "house effects"), but I can't find it this time around. Change since the NYT purchase?



It's honestly people just not understanding what he's doing, not bothering to try, and then making inane statements about it. There's a reason he's popular, and it's not that journalism is ignoring him, or "can't stand him". It's because a lot of them point at him as their reasoning for events.

Sure, a few ignorant ones say things to earn a bit of attention, but they are neither the standard nor the rule.


Silver’s methods cannot possibly produce more reliable information than the official sources themselves. These are the savviest, highest inside sources. They are the strongest form of epistemological proof — a “case closed” in an argument against calculations and numbers.

Now where have I heard that before . . .


A gist.io of the article to ease the sudden load on the site (or if you simply can't get to it due to 50* error):

http://gist.io/4007765 (Source: https://gist.github.com/4007765)


Punditry is analysis of known facts.

Journalists do it through their prism, statisticians do it through theirs (and yes, Nate Silver is a statistician, so his assumptions will affect his predictions).

Other journalists perform analysis on secret or private facts: the leaks and off-the-record briefings of normal political discourse, as well as actual real investigative journalism (by both of the investigative journalists who are left).

Pundits who don't get the difference are doomed to be replaced by pundits with a copy of SAS.

Investigative journalism is just doomed.


"Journalists get access to privileged information. . . , then evaluate, filter, and order it through the rather ineffable quality alternatively known as “news judgment,” “news sense,” or “savvy.” This norm of objectivity is how political journalists say to the public. . . .

The author uses the word "objectivity" when the right word is "subjectivity". 'savvy', 'news sense', 'gut feeling' are never objective. The author himself says so a few sentences later:

Where political journalists’ information is evaluated through a subjective and nebulous professional/cultural sense of judgment.

And yet, he concludes by going back to "objectivity":

When journalistic objectivity is confronted with scientific objectivity, its circuits are fried.

The bottom line is: this is the age-old war between what I'd call 'science' (Silver, given his methods) and 'art' ( the journalists ).


I think the author knows what he is talking about.

Journalistic objectivity is about reporting the news without your personal bias. I think you can give your subjective opinion (making predictions, etc) and still have it be considered objective journalism.

That's why his tl;dr is about the clash between the two objectivities.


>Where political journalists’ information is evaluated through a subjective and nebulous professional/cultural sense of judgment, his evaluation is systematic and scientifically based.

Oh, bullcrap. Silver is hand-weighting different polls based on whether or not he thinks they're reliable. That, in turn, is based on his own "subjective and nebulous professional/cultural sense of judgment".

In other words, he's doing exactly what the political journalists are doing and then smearing a thin layer of math over it. This isn't "scientifically based" at all.


No, even that’s TL;DR: When journalistic objectivity is confronted with scientific objectivity, its circuits are fried.

-- <social science> != Science


And <any kind of science or pseudoscience> != <scientific objectivity>. The idea of scientific objectivity is "forming conclusions based only on independently testable measurements." That seems applicable here.


political polling data is not independently testable


There's actually a fairly major independent test that will be concluded on Tuesday, November 6th.

... but you're technically correct :)


If political polling data gives you information about the outcome of an election -- if it can make your probability estimate more accurate than that of someone who hasn't seen any such data -- then I don't see what's so all-important about independent testability. Evidence is evidence; the possibility that a poll is skewed merely weakens its evidentiary value, by raising the probability that the poll would show those results even when they aren't true.
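
To put toy numbers on it (invented likelihoods, not anyone's real methodology): a possibly-skewed poll still moves the estimate, just by less.

    # Bayes update on "does the candidate really lead?" given a poll showing a lead.
    def posterior(prior, p_result_if_true, p_result_if_false):
        num = prior * p_result_if_true
        return num / (num + (1 - prior) * p_result_if_false)

    prior = 0.5  # 50/50 before seeing the poll

    # Trustworthy poll: unlikely to show a lead unless the lead is real.
    print(posterior(prior, 0.8, 0.2))  # ~0.80

    # Possibly skewed poll: decent chance it shows a lead even if there isn't one.
    print(posterior(prior, 0.8, 0.5))  # ~0.62 -- weaker evidence, but still evidence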


Huh? If you know the methodology it’s very simple to independently test polling data.


What are elections?


Check out http://electionanalytics.cs.illinois.edu/ for an alternative to Nate Silver. It uses Bayesian analysis and dynamic programming to come up with a deterministic snapshot of the election.
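
The dynamic-programming part is neat, if I understand it correctly: given each state's win probability, you can convolve them into an exact distribution over total electoral votes. A sketch of that step with invented states and probabilities (not their actual code):

    # Convolve per-state win probabilities into a distribution over total EV.
    states = [("OH", 18, 0.75), ("FL", 29, 0.45), ("VA", 13, 0.60), ("CO", 9, 0.65)]
    base = 237  # hypothetical electoral votes already "safe" for the candidate

    dist = {0: 1.0}  # P(electoral votes won so far among the toss-up states)
    for name, ev, p_win in states:
        new = {}
        for total, prob in dist.items():
            new[total + ev] = new.get(total + ev, 0.0) + prob * p_win
            new[total] = new.get(total, 0.0) + prob * (1 - p_win)
        dist = new

    p_win_election = sum(p for ev_total, p in dist.items() if base + ev_total >= 270)
    print(round(p_win_election, 3))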


There is one interesting question here: is it harmful to the electoral process if there is far less perceived uncertainty about which candidate will win?

I'm not certain that it is. The modern political media largely covers politics like sports, analyzing whether a particular event is likely to be good or bad for a candidate, and such shallowness does not make the electorate more informed.

However there is a case to be made that more certainty would lower voter turnout, and that that would be bad.

I think that the only thing that could clearly improve our political situation is proportional representation and limits on consecutive terms.


The irony is that 538 can only exist because of gut-based journalism. The polls it aggregates aren't cheap to conduct. Newspapers and television commission them so that they can be the first to run a story alerting the public to some shocking new result, even (or especially) if it's an outlier. If the public stops paying attention to traditional horse-race poll journalism, these outlets won't have any motivation to continue polling.


I don't think most people would put polling in the "gut-based journalism" category. When people dismiss horse-race reporting and the "gut" stuff, they're dismissing pundits telling us what they think will happen based on nothing but the other pundits they talk to and what the campaigns are telling them.

In a world where people place less importance on what people like Mark Halperin and Dick Morris think, news organizations would still have plenty of reason to conduct polling - that would be a world in which people value data over whatever "thought leaders" happen to think.


People may value data, but they don't value a datum. Individually, none of the dozen polls that report in every day are newsworthy except insofar as they feed the commentariat's need for something to hang stories on. 538 is newsworthy because Silver's statistics can draw meaningful results from multiple polls. In aggregate, polls are valuable, but without poor journalism their value to each organization funding individual polls is probably less than the cost of conducting them.
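
Back-of-the-envelope version of why the aggregate says more than any single poll (toy numbers, and it ignores the house effects and correlated errors that make real aggregation harder):

    import math

    n = 1000                              # respondents in one typical poll
    p = 0.5
    se_one = math.sqrt(p * (1 - p) / n)   # ~1.6 points of standard error for one poll
    se_avg = se_one / math.sqrt(12)       # ~0.5 points if you average 12 independent polls
    print(round(100 * se_one, 2), round(100 * se_avg, 2))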


“How does Scarborough know that Silver’s estimate is incorrect? He talked to sources in both campaigns. In Scarborough’s journalistic epistemology, this is the trump card: Silver’s methods cannot possibly produce more reliable information than the official sources themselves.”

Well, the campaign insiders do have access to internal polling data that the rest of us don't. *

* Although, Nate Silver apparently did get to see that data in 2008.


If traditional political journalists assume that the race is too close to call because both campaigns tell them so, then they are gullible chumps, plain and simple.

The losing campaign will try to claim the race is close because they don't want their supporters to be discouraged and not turn out.

The winning campaign will try to claim the race is close because they don't want their supporters to get lazy and not turn out.


An objective approach like Silver's removes the market for the subjective narrative (convention bumps, opinion swings after gaffes, perceptions of the candidates, etc.) that journalists rely on during the many months of campaigning. That's a threat to their jobs. It would be so amazing if the market for punditry dried up.


Except it doesn't have to. It's easy to apply a subjective narrative to changes in numbers.

For instance, Obama's expected electoral votes and chance of winning, according to 538, plummeted sharply after the first debate before turning around. Between the 3rd and 12th of October, Romney actually tripled his chance of winning, from about 13% to about 39%. Then it turned around and Romney dropped back to about 16%. It's not difficult to build a dramatic story around that.

In fact, this kind of thing is exactly what sportswriters and business journalists do all the time. Which means ESPN is on a (slightly) higher plane of journalism than most political commentary.


Amazingly, Silver himself said it better than I did above: http://www.dailykos.com/story/2012/11/03/1154733/-Nate-Silve...

Here's the quote on pundits saying the election is a toss-up: "then you should abandon the pretense that your goal is to inform rather than entertain the public".


Have you read 538? Nate has commented on all of those things. He just takes a more quantitative approach.


Experts are bad at predicting political outcomes, full stop. Even the "best" experts are absolutely stomped by simple statistical models[1].

And what does Nate Silver use? That's right.

[1] http://chester.id.au/2012/07/29/review-expert-political-judg...


If Nate Silver is right, then there is no reason to pay any further attention to the election (other than showing up and voting, of course). That means fewer people watching or reading the news and their advertisements.


Tl;dr Silver is an agent of disruption, and the incumbents are terrified of him.


From what I've heard, Nate Silver received publicity for picking 49 out of 50 states in the 2008 election. I'd like to see how his model fares in 2012 and beyond, or a higher confidence interval at the very least. Is it falsifiable? Then we might be talking about science.


What the hell do you mean "is it falsifiable"? He's making concrete predictions about the outcome of an event that is mere days away. Voters will have the opportunity to select an outcome different from what's been predicted, so it's possible for him to be wrong. Is there some other definition of falsifiable that you are using?


It's even better than that: by looking at his state-by-state predictions and comparing them to election results, we'll be able to quantify his degree of wrongness.
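
One standard way to do that, if I'm not mistaken, is a Brier score over the state-level probabilities: the mean squared difference between the forecast probability and the 0/1 outcome, lower being better. A sketch with invented numbers:

    # Quantifying "degree of wrongness" after the results are in (toy data).
    forecasts = {"OH": 0.85, "FL": 0.50, "VA": 0.79}  # forecast P(candidate wins)
    outcomes  = {"OH": 1,    "FL": 1,    "VA": 0}     # 1 = candidate actually won the state

    brier = sum((forecasts[s] - outcomes[s]) ** 2 for s in forecasts) / len(forecasts)
    print(round(brier, 3))  # 0 is perfect; a flat 50/50 "toss-up" call scores 0.25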


You're not implying that a Romney win would falsify his model, are you?


I don't think he has a specific "model" for his "predictions". From reading his recent article, he uses statistics and weights information gathered from polls to determine each candidate's probability of winning each state. You can't really be "wrong" in that sense, but the actual result can vary due to statistical sampling error, polling error, or bias in the polls.

Source: fivethirtyeight.blogs.nytimes.com/2012/11/03/nov-2-for-romney-to-win-state-polls-must-be-statistically-biased


a million likes



