Why is the New York Times Pres. Poll so much more pro-Hillary than others
November 3, 2016 9:46 AM   Subscribe

538 gives Hillary a 60-something percent chance today. Why the huge discrepancy?
posted by DMelanogaster to Law & Government (10 answers total) 2 users marked this as a favorite
 
The NYT is broadly in line with consensus (see the 8 forecasts here). 538 has oscillated higher and lower than consensus. Nate Silver argues this is a feature not a bug. Others argue this is evidence of undue sensitivity to polling shifts amplified by strong internal assumptions. We'll know who's right on Tuesday.
posted by caek at 9:53 AM on November 3, 2016 [4 favorites]


Andrew Prokop has explained at Vox why Nate Silver is bullish on Trump.

For what it's worth, Silver's model is not so much pro-Trump as pro-uncertainty.
posted by madcaptenor at 9:56 AM on November 3, 2016 [10 favorites]


Every poll aggregator has to make a lot of assumptions. The assumptions that 538 uses (which are nicely described in that Prokop article - basically, giving much more weight to recent polls and using them to infer information in states that weren't polled) mean that it swings up and down a lot more and a lot faster than everyone else. One problem with these sites, and 538 in particular, is that the hyper-fine percentages ("Clinton has a 65.9% chance of winning") imply more precision than is really warranted. For what it's worth, the Times is hardly the most Clinton-favoring aggregator; the Princeton Election Consortium has been at >95% for a Hillary win for weeks.
posted by theodolite at 10:14 AM on November 3, 2016 [1 favorite]


Most forecasters assume the probabilities in each state are independent. 538 builds a correlation matrix into their Monte Carlo simulation, so that (in simulation) if Alabama is 10% more for Trump, then Mississippi is too. This has had a big impact on predictions for this election, since large blocks of states are moving up and down for Trump roughly simultaneously (form your own ideas here about why this is happening).
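A rough illustration of the correlated-simulation idea (this is not 538's actual model; the margins, standard deviations, and the 0.8 correlation are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state example: mean Clinton margins (in points) and a
# strong correlation between the polling errors in the two states.
means = np.array([5.0, -8.0])       # a lean-blue state and a red state
sds = np.array([4.0, 4.0])          # polling-error standard deviations
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])       # correlated errors
cov = corr * np.outer(sds, sds)

# Draw correlated margins: when one state swings toward Trump in a
# simulation, the other tends to swing with it.
margins = rng.multivariate_normal(means, cov, size=100_000)
clinton_wins_both = np.mean((margins > 0).all(axis=1))
```

Under independence you would multiply the two states' win probabilities; with correlated draws, extreme combined outcomes (sweeps either way) become more likely, which is why correlation widens the overall forecast.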

There's a ton of argument online about whether this is the right way to do it, with good points in both camps. Ultimately there just aren't enough presidential elections to really validate anyone's model.
posted by miyabo at 10:18 AM on November 3, 2016


Best answer: All of these forecasts (NY Times, Princeton Election Consortium, 538) are really models that take the same - or similar - data from polls to produce their results. The flow of data is: people are surveyed by pollsters, who then produce (based on their internal assumptions) poll results, which are then aggregated by forecasters (NYT/PEC/538) into models that produce some sort of probability.

I think that the key differences from 538 to other models that are affecting the probabilities right now (these are not the only differences) are:

Higher weighting of more recent data. If there are three polls (of equal size and quality) in a state, one 10 days ago saying Clinton +5, one nine days ago saying Clinton +6 and one today saying Clinton +1, what should be used as her margin in this state? You could say +1 which is the most current poll, you could average all three and wind up at +4, or you could go somewhere in between these, weighting the more recent one more heavily. 538's assumptions are more "aggressive" than others, using more recent results more heavily, which hurts Clinton when the margins are narrowing (as they now appear to be doing).
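The recency-weighting trade-off above can be sketched with an exponential decay; the polls and the half-life values are the hypothetical numbers from the example, not anyone's real model parameters:

```python
# Hypothetical polls: (days ago, Clinton margin in points), equal size/quality.
polls = [(10, 5.0), (9, 6.0), (0, 1.0)]

def weighted_margin(polls, half_life_days):
    """Average the margins with exponential decay: a poll half_life_days
    old counts half as much as one taken today."""
    weights = [0.5 ** (age / half_life_days) for age, _ in polls]
    return sum(w * m for w, (_, m) in zip(weights, polls)) / sum(weights)

# A very long half-life approaches the plain average (about +4);
# a short half-life approaches the newest poll (about +1).
print(weighted_margin(polls, 100))  # gentle decay, near +4
print(weighted_margin(polls, 2))    # aggressive decay, near +1
```

A more "aggressive" aggregator is one with a shorter effective half-life, which is why it moves toward the newest polls faster when the race tightens.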

Use of trend-line adjustments. Let's say two weeks ago, the national poll margin was Clinton +5 and state X was polled and Clinton was also +5 there. Today, the national poll margin is Clinton +2 but state X hasn't been polled again, so the most recent information is the two week old +5 number. What should we assume for Clinton's margin in state X? 538 does some adjusting of state-level results based on national polls, so they would say that - even though there isn't a new poll - Clinton's probably dropped in this state as she's dropped nationally. I don't think a lot of the other models do this, so state X will remain at +5 until someone does some polling there - a weird Heisenberg-uncertainty-style assumption that voter sentiment doesn't actually change until it's observed. In any case, as the national margin narrows, this means 538 thinks that more states will move. (This is generally less important at the end of the campaign as there's a lot of polling right now, but with a tightening margin it's possible that some more marginal states like NM and MN are back in play, and they aren't being polled a lot right now.)
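In its simplest form, the trend-line adjustment described above just shifts a stale state poll by however much the national race has moved since it was taken (a minimal sketch; real models blend this with other evidence):

```python
def trend_adjusted(state_margin_then, national_then, national_now):
    """Assume the state moved in step with the national race since
    the state was last polled."""
    return state_margin_then + (national_now - national_then)

# State X polled at Clinton +5 when the nation was +5; nation is now +2,
# so the adjusted estimate for state X is +2.
print(trend_adjusted(5.0, 5.0, 2.0))
```

A model without this adjustment implicitly uses `trend_adjusted(5.0, 5.0, 5.0)`, i.e. it leaves the stale +5 untouched until a new poll arrives.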

Assumption of interstate correlation. This is, in my opinion, both the most important thing to include and also the hardest thing to do; 538 weights this heavily - I know PEC does not do this at all, and I haven't seen the NYT say anything about this. These "other" models treat each state as an independent probability, so the odds of winning Nevada and the odds of winning Arizona are not connected. 538 goes to some effort to correlate these. As an example, one possibility is that Latinx turnout is lower than everyone expected. That will hurt Clinton generally, but will particularly hurt her in the Southwest. She'll be much less likely to win NV or AZ, a little less likely to win OH or NC, and her odds in IA or NH will be largely unaffected. Or let's say less-educated whites turn out in large numbers; that'll help Trump everywhere - especially in the Midwest. What the 538 model tends to say is that the election will be decided by the behaviours of these sorts of demographic groups trending nationally or regionally, rather than by each state flipping its coin in turn. In this case, because there are a lot of states that are close, the possibility for a big shift is greater - one group shifting by 2% could change the results in a number of states, and that would produce a different result.
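One way correlation like this arises is from shared demographic exposure: each state's margin responds to the same national swings in proportion to the group's share of its electorate. A toy sketch (all shares, baselines, and swing sizes here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical state baselines (Clinton margin, points) and demographic shares.
states = ["NV", "AZ", "OH", "IA"]
baseline = np.array([2.0, -2.0, -1.0, -3.0])
latino_share = np.array([0.18, 0.16, 0.03, 0.03])
white_nc_share = np.array([0.30, 0.35, 0.55, 0.60])

n = 50_000
latino_swing = rng.normal(0, 10, n)   # one national draw per simulation
white_swing = rng.normal(0, 6, n)
noise = rng.normal(0, 3, (n, 4))      # state-specific polling error

# Each simulated margin = baseline + national swings scaled by exposure.
margins = (baseline
           + np.outer(latino_swing, latino_share)
           + np.outer(white_swing, white_nc_share)
           + noise)
```

Because NV and AZ share heavy exposure to the same Latino-turnout swing, their simulated outcomes move together, while IA is driven mostly by the other factor, which is the "demographic groups, not independent coin flips" behaviour described above.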

Treatment of undecided voters. In this election, there are an unusually high number of undecided and third party voters; usually there are about 5% in the home stretch, and this time there are more like 15%. 538 uses undecided voters to increase the uncertainty in their model. I'll note that undecided voters have burned modellers in the past; in the last Canadian election, the Liberals won a substantial majority that wasn't seen in advance by models. Part of this was that a lot of the "undecided" voters were actually voters who hadn't decided between the Liberals and the New Democrats (the other left-wing party) but who had sure decided that they weren't going to vote for the Conservatives. I wonder if there's something similar here, where a lot of the undecided voters could be something like Republicans who are undecided if they can bring themselves to vote for Trump.
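One simple way to model "more undecideds means more uncertainty" (a sketch only; the base error and scaling constant are invented, not 538's numbers) is to widen the error distribution around the polled margin as the undecided share grows:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(margin, undecided_share, n=100_000, base_sd=3.0, k=20.0):
    """Simulate final margins; uncertainty grows with the undecided share."""
    sd = base_sd + k * undecided_share
    return rng.normal(margin, sd, n)

normal_year = simulate(3.0, 0.05)  # typical ~5% undecided
this_year = simulate(3.0, 0.15)    # ~15% undecided this cycle

# Same polled margin, but the trailing candidate wins noticeably
# more often in the high-undecided scenario.
print(np.mean(normal_year < 0), np.mean(this_year < 0))
```

This is why a wide pool of undecideds pushes a leader's win probability toward 50% even when the margin itself hasn't moved.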

538 tends to suggest more uncertainty - while it's true that their model suggests Trump is more likely than others, their model also suggests a Clinton landslide is more likely than the others. (It's just that the landslide isn't as important.)

One result I've seen in other domains is that averaging multiple independent models often produces a better sense of the outcome than any individual model does; looking at the half dozen models aggregated by the NYT, that would reduce the effect of the high-uncertainty 538 and overconfident PEC models and produce a high-80s Clinton result.
posted by Homeboy Trouble at 10:49 AM on November 3, 2016 [14 favorites]


We'll know who's right on Tuesday.

Except that we won't. Unless one of the models goes to 100/0, they're saying there's an x% chance of a particular candidate winning. Even if the model is 99/1 in favor of Clinton, a Trump victory wouldn't mean it was wrong, just that something very unlikely has happened.
posted by Jahaza at 10:55 AM on November 3, 2016 [26 favorites]


For polling generally, see also this Drum post regarding polling and response rates:

http://m.motherjones.com/kevin-drum/2016/11/chart-day-voting-intentions-are-probably-set-stone-now

Also interesting: a glimpse at how the polling sausage is made. Give a bunch of people the same data and ask them the state of the race: http://www.nytimes.com/interactive/2016/09/20/upshot/the-error-the-polling-world-rarely-talks-about.html?_r=0
posted by booooooze at 11:57 AM on November 3, 2016


Except that we won't. Unless one of the models goes to 100/0, they're saying there's an x% chance of a particular candidate winning. Even if the model is 99/1 in favor of Clinton, a Trump victory wouldn't mean it was wrong, just that something very unlikely has happened.

While true, I think a Trump victory (turn, turn, turn, curse, spit) would provide evidence that a model calling this a 35% outcome is better than a model calling it a <1% outcome.
I digitized the charts on three model sites and made this figure of electoral vote probabilities for Clinton. (The results aren't perfect because it's based on digitized data rather than observed probabilities.) From the figure, I think that a Trump win or Clinton landslide of over 340-350 EV would be a strong argument against the PEC model, for example.

You can see that 538 has the most uncertainty (probably due to high correlation between the states), represented as the flattest slope, while PEC has an almost vertical curve with the NYT in between them. 538 also shows a more competitive race (probably mostly due to weighting more recent polls more heavily), because its curve sits to the left (i.e. better results for Trump are more likely). The big difference is the combination of these two assumptions; if you shifted 538 over to the right so it had the same median as the NYT, it would give Clinton more like an 80% chance.
posted by Homeboy Trouble at 12:01 PM on November 3, 2016 [3 favorites]


"We'll know who was right" was glib, but it is possible to assess the relative quality of the forecasts after the election, even if those forecasts are probabilistic.

Of course if 538 gives Trump a 51% chance of becoming president while PredictWise gives him a 49% chance, and Hillary wins, then you can't say 538 was "wrong" from that information alone.

But there isn't only one event on Tuesday. There are 50 state elections, and the forecasters have made predictions for each of them. If 538 gets, say, 45/50 right (for example, in the sense of assigning probability > 50% to the actual winner) while the others get 50/50, then it would be taking skepticism to ludicrous extremes to insist the other 7 forecasters got lucky. That's just an example of how you might compare the results.
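A standard way to score the probabilistic forecasts themselves, rather than just counting winners, is the Brier score (mean squared error of the probabilities; lower is better). The model names and numbers below are invented for illustration:

```python
def brier(forecasts, outcomes):
    """forecasts: P(Clinton wins) per state; outcomes: 1 if she won, else 0."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

model_a = [0.90, 0.70, 0.40]  # hypothetical hedged model
model_b = [0.99, 0.95, 0.10]  # hypothetical confident model
results = [1, 1, 0]           # actual winners

print(brier(model_a, results), brier(model_b, results))
```

Confidence is rewarded when it's right and punished when it's wrong, so over 50 state races this kind of score can separate a well-calibrated forecaster from a lucky one.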
posted by caek at 12:25 PM on November 3, 2016 [4 favorites]


Homeboy Trouble's analysis is covered to a large extent in this recent Vox article that discusses Nate Silver's methods. There's also an interesting graph which shows how the trends for FiveThirtyEight and the Upshot follow the same trajectories, but Silver's methodology uses a weighting system for polls that shifts his probabilities (for Hillary Clinton) lower.
posted by bluesky43 at 5:33 PM on November 3, 2016


This thread is closed to new comments.