National

[ National ] [ Main Menu ]


  


440661


Date: September 03, 2024 at 17:06:38
From: old timer, [DNS_Address]
Subject: Don’t Trust the Election Forecasts


Don’t Trust the Election Forecasts

The data doesn’t support the obsession with presidential
prognostications.


By Justin Grimmer

09/03/2024 10:00 AM EDT

Justin Grimmer is the Morris M. Doyle Centennial Professor of Public
Policy in the Department of Political Science and Senior Fellow at the
Hoover Institution, Stanford University.

Even as Joe Biden’s presidential candidacy teetered and polls showed him
clearly losing to Donald Trump, the election forecasting site 538 was still
estimating that Biden was likeliest to win. It was a conclusion based on
odd modeling assumptions that led the site’s original founder, Nate Silver,
to declare the 538 model “very obviously broken” and for the site’s new
chief to acknowledge an adjustment to its model when it relaunched with
Kamala Harris’ candidacy.

The episode is notable not just for the skirmishing between rival
forecasters, but because it revealed how little value should be placed on
these projections at all.

I’m a political scientist who develops and applies machine learning
methods, including forecasting models, to political problems. The truth is we don’t have
nearly enough data to know whether these models are any good at making
presidential prognostications. And the data we do have suggests these
models may have real-world negative consequences in terms of driving
down turnout.

Statistical models that aggregate polling data and use it to estimate the
probability of each candidate winning an election have become extremely
popular in recent years. Proponents claim they provide an unbiased
projection of what will happen in November and serve as antidotes to the
ad hoc predictions of talking-head political pundits. And of course, we all
want to know who is going to win.

But the reality is there’s far less precision and far more punditry than
forecasters admit.

Election forecasts have a long history in political science, but they entered
the political mainstream because of Silver’s accurate predictions in the
2008 and 2012 elections. Now, many news outlets offer probabilistic
forecasts and use those models to declare that candidates have an
expected number of Electoral College votes and probability of winning the
election. See ABC News’ 538, The Economist and Silver Bulletin, among
others.

Are these calculated probabilities any good? Right now, we simply don’t
know. In a new paper I’ve co-authored with the University of
Pennsylvania’s Dean Knox and Dartmouth College’s Sean Westwood, we
show that even under assumptions very favorable to forecasters, we
wouldn’t know the answer for decades, centuries, or maybe even millennia.

To see why, consider one way to evaluate the forecasts: calibration. A
forecast is considered calibrated if the estimated probability of an event
happening corresponds to how often the event actually happens. So, if a
model predicts Harris has a 59 percent chance of winning, calibration
means that, across many elections in which the model gives a candidate a
59 percent chance, that candidate should win about 59 out of 100 times.
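As a concrete illustration, a calibration check can be sketched in a few lines of Python. The forecast probability and the simulated outcomes below are invented purely for illustration; the point is only that the check needs many outcomes before it says anything.

```python
import random

random.seed(0)

# Suppose a model always reports a 59 percent win probability, and the
# true win probability really is 59 percent (a perfectly calibrated model).
forecast_prob = 0.59
n_elections = 10_000  # far more elections than history provides

outcomes = [random.random() < forecast_prob for _ in range(n_elections)]

# Calibration check: the observed win rate should be close to the forecast.
win_rate = sum(outcomes) / n_elections
print(f"forecast: {forecast_prob:.2f}, observed win rate: {win_rate:.4f}")
```

With 10,000 simulated elections the observed rate lands close to 0.59; with the roughly 60 presidential elections in U.S. history, the same check is far too noisy to tell a calibrated model from a miscalibrated one.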

In our paper, we show that even under best-case scenarios, determining
whether one forecast is better calibrated than another can take 28 to
2,588 years. Focusing on accuracy — whether the candidate the model
predicted to win actually wins — doesn’t lower the needed time either.
Even focusing on state-level results doesn’t help much, because the
results are highly correlated. Again, under best-case settings,
determining whether one model is better than another at the state level
can take at least 56 years — and in some cases would take more than
4,000 years’ worth of elections.

The reason it takes so long to evaluate forecasts of presidential elections
is obvious: There is only one presidential election every four years. In fact,
we are now having only our 60th presidential election in U.S. history.
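The scale of the problem can be sketched with a standard two-proportion sample-size formula. This is back-of-the-envelope arithmetic, not the paper's actual method, and the 0.50 and 0.59 event rates are invented for illustration:

```python
from math import ceil, sqrt

# Elections needed to distinguish two event rates with a two-sided z-test
# at alpha = 0.05 and power = 0.80. Illustrative rates, not the paper's.
p1, p2 = 0.50, 0.59             # e.g. a coin flip vs. a 59 percent model
z_alpha, z_beta = 1.96, 0.8416  # critical values for alpha and power
p_bar = (p1 + p2) / 2

n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
      + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
     / (p1 - p2) ** 2)

elections_needed = ceil(n)
years_needed = 4 * elections_needed  # one presidential election per 4 years
print(elections_needed, "elections, roughly", years_needed, "years")
```

Even this generous setup lands at hundreds of elections, which at one election every four years amounts to well over a thousand years of waiting.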

Compare the information available when forecasting presidential elections
to the amount of information used when predicting stock prices,
forecasting the weather or targeting online advertising. In those settings,
forecasters commonly use millions of observations, which might be
collected almost continuously. Given the difference, it isn’t surprising that
forecasters in other settings are more easily able to identify the best
performing model.

The paucity of outcome data means that election forecasters have to
make educated guesses about how to build their statistical models.

Consider how forecasters use polling information: They often calculate a
moving average of polling results. To make this average, forecasters
assign different weights to polling firms, make assumptions about the
kinds of polling errors that are likely to occur and even how those errors
are correlated across states. Or consider how forecasters use
“fundamentals” — factors like the state of the economy, the party
currently in the White House or the president’s approval rating.
Forecasters have to decide what factors to include in their model and
which prior presidential elections are relevant for fitting their model.
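As a toy illustration of the polling-average step, here is a sketch with invented pollster weights and an invented recency half-life; real forecasters use their own weighting and decay schemes, and those choices are precisely the kind of judgment calls at issue:

```python
# Toy weighted polling average: weight each poll by an invented pollster
# quality score and by recency (exponential half-life decay).
polls = [  # (days_ago, pollster_weight, candidate_share) -- all invented
    (1, 1.0, 0.49),
    (3, 0.8, 0.51),
    (7, 0.6, 0.48),
]

HALF_LIFE_DAYS = 14  # assumption: a poll's weight halves every two weeks

def recency_weight(days_ago: float) -> float:
    return 0.5 ** (days_ago / HALF_LIFE_DAYS)

num = sum(recency_weight(d) * w * share for d, w, share in polls)
den = sum(recency_weight(d) * w for d, w, _ in polls)
print(f"weighted polling average: {num / den:.3f}")
```

Every number here, from the quality weights to the half-life to the choice of exponential decay, is a modeling decision; different defensible choices produce different averages.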

Because of the lack of outcome data, each of these assumptions is made
based on what forecasters find plausible — whether based on history or
on what produces seemingly useful predictions for this election. Either
way, these are choices made by the forecasters.

Statistical models do offer the chance for forecasters to be clear about
these assumptions, whereas pundits’ assumptions are often unstated or
difficult to determine. But without data to evaluate how the assumptions
affect calibration or accuracy, the public simply does not know whether
the modeling decisions of one forecaster are better than the modeling
decisions of the other.

While we lack evidence that probabilistic forecasts are accurate, there is
real evidence that they can create confusion and potentially deter voters
from coming to the polls.

A large-scale survey experiment conducted by Westwood, New York
University’s Solomon Messing and the University of Pennsylvania’s
Yphtach Lelkes shows that forecasts are deeply confusing to Americans —
causing them to mix up a candidate’s probability of winning with that
candidate’s expected vote share.

In their experiment, they found that sometimes when people see a model
forecast (say, a 58 percent chance of victory, or a 58 in 100 chance) they
erroneously think that this means that a candidate will win 58 percent of
the vote. Indeed, they write, “More than a third of people estimate a
candidate’s likelihood of winning to be identical to her vote share, and on
average people estimate that likelihood to be closer to the vote share than
the probability of winning after they see both types of projections.”

These election forecasts may also create a false sense of security among
some citizens about the odds of their side winning, which ultimately
causes them not to vote because they feel it’s not necessary.

In a second experiment, Westwood, Messing and Lelkes determined what
information people might use when deciding whether to participate in a
fictitious election. They found that participants were highly responsive to
information presented as a probability: a high probability that their side
would win made them less likely to cast a ballot. The same information,
presented as vote share, made little difference to their participation.

The bottom line: Probabilistic forecasts are often misinterpreted, and
when they are, they may cause voters to stay home.

It’s still possible that these forecasts may end up being the best way to
predict the outcome of presidential elections. But right now, we simply do
not know if these models are particularly accurate. And we certainly do
not know if small fluctuations in the probability of a candidate winning
represent anything other than modeling error or meaningless random
variation.



Responses:
[440663] [440686] [440664] [440668] [440685] [440690] [440687]


440663


Date: September 03, 2024 at 17:26:29
From: ao, [DNS_Address]
Subject: Re: Don’t Trust the Election Forecasts


This is fun..

Now that the polls are showing what we all knew as soon as Kamala threw
her hat in you're finding virtue in posting articles about how fickle polls can
be. Of course this stands in stark contrast to your strong belief in them
when they supported your guy and your dissing of anyone who mentioned
that polls aren't all that worthwhile so many months out, as it was when we
discounted yours..

What are you going to say next month when the margin continues to grow?
"Everybody says" it's fake news? Sheesh, you better hope your boy has an
October Surprise up his sleeve..

And no, OT, this is not trolling. I truly find the juxtaposition fascinating.


Responses:
[440686] [440664] [440668] [440685] [440690] [440687]


440686


Date: September 04, 2024 at 04:13:50
From: akira, [DNS_Address]
Subject: Re: Don’t Trust the Election Forecasts


According to ao, if old timer doesn't look at the world in black & white terms by
claiming polls to be infallible, and instead posts an article suggesting they're
imperfect, he gets criticized.

So thoughtful.


Responses:
None


440664


Date: September 03, 2024 at 18:25:05
From: old timer, [DNS_Address]
Subject: Re: Don’t Trust the Election Forecasts


sorry dude, i’m not a hypocrite like you, who only started showing polls
after sleepy joe finally stepped aside. i’m still posting polls and
posted one this morning which showed it’s still a close race. i’ve also always
agreed about the fallibility of polls as predictors. but why focus on me in all
your posts instead of the topics that are posted?

it’s also good for people to understand that polls that forecast winners are
quite different than opinion polls that show the topics that are most
important to voters, or polls that showed biden’s approval rating was in
the toilet for a long, long time. damn near every article on polls said they
were a snapshot of the moment and not very good at forecasts


Responses:
[440668] [440685] [440690] [440687]


440668


Date: September 03, 2024 at 19:37:31
From: Redhart, [DNS_Address]
Subject: Re: Don’t Trust the Election Forecasts


And I still think polls are crap lol.


Responses:
[440685] [440690] [440687]


440685


Date: September 04, 2024 at 04:06:41
From: akira, [DNS_Address]
Subject: lol


but you'll post them when they agree with you, as you've recently done...


Responses:
[440690] [440687]


440690


Date: September 04, 2024 at 08:55:48
From: Redhart, [DNS_Address]
Subject: Re: lol


Actually if you remember, there was a big kerfuffle when
I didn't post them and chose other excerpts I found more
relevant.

You yelled when I did that, as you always do when I
post just about anything.


Responses:
None


440687


Date: September 04, 2024 at 04:58:42
From: shadow, [DNS_Address]
Subject: Re: lol


...and when you post snarky little comments like this, it's
not trolling...but when others do so, it is...

Got it...


Responses:
None



Generated by: TalkRec 1.17
    Last Updated: 30-Aug-2013 14:32:46, 80837 Bytes
    Author: Brian Steele