Defending Nate Silver

6 Responses

  1. Dave Hoffman says:

    I think part of the problem many have with Silver is that although his method is essentially non-falsifiable, he sometimes presents it as if he’s doing something scientific.

  2. Gerard Magliocca says:

    I like Silver’s work, but one potential flaw is that the pollsters could be guilty of groupthink. In other words, polls aren’t really independent data points that can be statistically analyzed with precision. But that doesn’t mean it’s happening this year.

    The other problem, of course, is that you can’t bet the spread in elections, so a probability forecast is no fun.

  3. A.J. Sutter says:

    Dave, are you saying that he could be doing something differently that would be falsifiable? Or that predictions such as he is making — about probability distributions relating to a future event — are inherently not falsifiable? (More concretely: Silver doesn’t predict that Obama will win, which would be falsifiable if Romney in fact wins; rather, his prediction is that Obama has a 75%, or whatever, chance of winning the Electoral College, which would not be falsified by a Romney win.) If the latter, do you reject all Bayesian methods? See also the Santa Fe Institute approach to complexity, whose proponents (e.g., here and here) say that in the frequent cases where outright prediction of certain events may not be possible, a probability distribution based on repeated runs of simulations might be the best we can do. (I think this is in fact what Silver is doing.)

    For that matter, don’t econometrics-based explanations, such as those used in ELS, have falsifiability problems as well? Even some practitioners of that art have noted this issue.
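    The repeated-simulation approach described above can be sketched in a few lines. This is a minimal illustration of the general technique, not Silver’s actual model: the state win probabilities, the “safe” electoral-vote totals, and the assumption that states are drawn independently are all invented for the example.

    ```python
    import random

    # Hypothetical per-state probabilities that the Democrat wins, paired with
    # each state's electoral votes. These numbers are illustrative only.
    tossups = {
        "Ohio": (0.60, 18),
        "Florida": (0.50, 29),
        "Virginia": (0.55, 13),
    }
    SAFE_DEM = 237  # electoral votes assumed safely Democratic (illustrative)
    SAFE_REP = 241  # electoral votes assumed safely Republican (illustrative)

    def simulate_once(rng):
        """Draw each toss-up state independently and total the Democrat's EVs."""
        ev = SAFE_DEM
        for prob, votes in tossups.values():
            if rng.random() < prob:
                ev += votes
        return ev

    def win_probability(n_runs=100_000, seed=0):
        """Fraction of simulated elections in which the Democrat reaches 270."""
        rng = random.Random(seed)
        wins = sum(simulate_once(rng) >= 270 for _ in range(n_runs))
        return wins / n_runs

    print(win_probability())
    ```

    The repeated runs turn per-state probabilities into a probability distribution over Electoral College outcomes. Note that drawing states independently is itself a modeling choice — a shared polling error would correlate the draws and change the answer.
    
    
    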

  4. Brett Bellmore says:

    Replication is kind of the gold standard for science, and I doubt he’s re-run any of these elections multiple times. The basic problem here is that N is small, and every election has been different in multiple possibly significant ways.

    The other problem, of course, is that response rates for the polls have been dropping steadily, and are now in the pathetic single digits. That brings the entire practice of polling into question, and certainly makes error bars calculated from sample sizes dubious. We simply don’t know that those who respond are a representative sample of the larger population.

    That’s not a problem you can solve by just aggregating polls. They’re all subject to the same potential error source.

  5. prometheefeu says:

    @Brett Bellmore:

    Nevertheless, you could repeat the method and validate the methodology across elections, assuming he keeps his method constant. Basically, you should be able to regress election outcomes on the predicted probabilities.
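    The validation idea above — checking forecast probabilities against realized outcomes — is usually called calibration. A common single-number summary is the Brier score, sketched below on made-up data (the forecasts and outcomes are invented for illustration; they are not real election results).

    ```python
    # Toy calibration check: past probability forecasts that a candidate wins,
    # paired with what actually happened (1 = won, 0 = lost). Invented data.
    forecasts = [0.9, 0.8, 0.7, 0.6, 0.55, 0.3, 0.2, 0.1]
    outcomes = [1, 1, 1, 0, 1, 0, 0, 0]

    # Brier score: mean squared error of the probability forecasts.
    # 0 is perfect; always saying 50/50 scores 0.25.
    brier = sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)
    print(round(brier, 3))  # → 0.105
    ```

    With enough past forecasts you could go further and fit the regression the comment suggests (outcome on predicted probability): a well-calibrated forecaster’s events predicted at 70% should occur about 70% of the time. The small-N objection raised elsewhere in this thread still applies, of course.
    
    
    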

  6. Brett Bellmore says:

    That’s true; the issue is that, as I said, every election is different in potentially relevant ways, and there haven’t been a lot of Presidential elections — with the latter being the key point.

    I’d bet his model has more free parameters than there have been modern Presidential elections. Unavoidably so: there are more states than there have been Presidents elected.

    This is not to suggest that he’s insincere, or that his model is actually wrong — just that I don’t place a lot of confidence in it.