At Deck, we work hard to produce high-quality data used by campaigns and organizations up and down the ballot to make crucial strategic decisions. In 2022, 1,112 users across 438 campaigns in 39 states, and 63 independent organizations used Deck’s data to run better campaigns. Now that the election cycle has wrapped, we are beginning the process of collecting election results and evaluating our data so we can continue to provide the most accurate information possible to help Democrats win.
At Deck, we produce a vote share forecast and probability of winning for every state and federal race in the country. In this blog post, I’ll cover exactly what happened with our forecasts, highlight the unique perspectives they offer, and talk briefly about how we build them. We’ve been creating high-quality forecasts at Deck since 2016 and have worked hard to improve those forecasts over time. Here you can read more about our 2020 and 2021 forecasts!
Deck is able to make more accurate predictions because we build models that look at real voter data, not survey responses. We use deep learning technologies to analyze millions of pieces of data to predict election outcomes, including:

- historic precinct-level election results;
- data on each candidate’s media coverage, gathered through partnerships with organizations like Aylien and Critical Mention;
- itemized campaign finance data gathered from dozens of state and local jurisdictions;
- candidate issue stances gathered by organizations like VoteSmart;
- candidate demographics gathered by organizations like Ballotpedia and the Reflective Democracy Project;
- district-level economic indicators; and
- the demographic and socioeconomic traits of the voters responsible for each precinct’s election results.
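To give a rough sense of the shape of this kind of model, here is a minimal sketch, not Deck’s actual architecture, of a small feed-forward network that maps a race’s feature vector to a predicted Democratic vote share. The feature count, layer sizes, and training data below are all placeholders:

```python
import torch
from torch import nn

# Hypothetical sketch: each race is represented by a feature vector built
# from sources like past precinct results, finance totals, and media
# coverage volume. N_FEATURES is a placeholder, not Deck's real input size.
N_FEATURES = 16

model = nn.Sequential(
    nn.Linear(N_FEATURES, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),  # constrain the output to a valid vote share in [0, 1]
)

# Synthetic stand-in data: 100 races with random features and outcomes.
X = torch.rand(100, N_FEATURES)
y = torch.rand(100, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):  # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```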
Across all state and federal races in November 2022, Deck’s median absolute error (MAE) for Democratic vote share was 2.9pp. Our forecasts underpredicted Democratic vote share by an average of 1.04pp, meaning our errors generally skewed toward underestimating Democratic performance. To run this analysis, we gathered election results from Ballotpedia and the New York Times for over 4,000 elections across the country, then compared predicted Democratic vote share to actual vote share to find our median absolute error.
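Concretely, the two summary statistics can be computed like this (a minimal sketch with made-up numbers, where a negative error means Democratic vote share was underpredicted):

```python
import numpy as np

# Made-up example values: predicted and actual Democratic vote share,
# in percentage points, one entry per race.
predicted = np.array([51.2, 48.7, 55.0, 43.1])
actual = np.array([52.5, 49.0, 56.8, 44.0])

errors = predicted - actual          # signed error; negative = underprediction
mae = np.median(np.abs(errors))      # median absolute error (pp)
bias = errors.mean()                 # average signed error (pp)
print(f"MAE: {mae:.2f}pp  bias: {bias:+.2f}pp")
```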
Deck’s forecasts were most accurate in statewide and federal races: in US Senate races, our median absolute error was 1.84pp, with Democratic vote share underpredicted by 0.74pp. By comparison, our median absolute error was 2.94pp in state house races and 3.07pp in state senate races.
| | MAE in Dem Vote Share (pp) | Dem Vote Share Bias (pp) | N Races |
| --- | --- | --- | --- |
| Total | 2.9 | -1.04 | 4393 |
| US Senate | 1.84 | -0.74 | 32 |
| US House | 2.66 | -1.13 | 403 |
| Governor | 2.75 | 1.17 | 36 |
| Other Statewide Races | 1.8 | 0.69 | 111 |
| State Senate | 3.07 | -1.36 | 806 |
| State House | 2.94 | -1.03 | 3005 |
This cycle, significant polling bias against Democrats stoked fears of a red wave. Deck’s forecasts not only predicted the final results of elections much more accurately, but, crucially, were also far less biased in doing so. This information is crucial for organizations deciding which races to spend on, decisions that hinge on how close those races are expected to be. A strong anti-Democratic bias across the board means missed chances to contest races and pick up more Democratic wins.
| | Deck MAE (pp) | Deck Bias (pp) | N Races | Poll MAE (pp) | Poll Bias (pp) | N Polls |
| --- | --- | --- | --- | --- | --- | --- |
| Total | 2.49 | -0.94 | 165 | 3.6 | -3.0 | 1236 |
| US Senate | 1.66 | 0.19 | 29 | 3.42 | -2.21 | 503 |
| US House | 2.95 | -1.9 | 103 | 5.67 | -5.82 | 233 |
| Governor | 2.8 | 1.06 | 33 | 3.2 | -2.5 | 527 |
Like many polls and forecasts, ours underpredicted Democratic vote share, but to a far lesser degree. In races where public polling was available, the median absolute error of Deck’s forecasts was 2.49pp, underpredicting Democratic vote share by 0.94pp. Polls, meanwhile, had a median absolute error of 3.6pp and underpredicted Democratic vote share by 3pp. To make this comparison, we looked at over 1,200 public polls collected by FiveThirtyEight across 165 unique Senate, House, and governor’s races.
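For illustration, here is a minimal sketch of how poll-level errors can be aggregated this way. The column names are ours, not the actual schema of the FiveThirtyEight data, and the values are made up:

```python
import pandas as pd

# Hypothetical poll-level data: one row per public poll, with the poll's
# Dem share and the race's final Dem share, in percentage points.
polls = pd.DataFrame({
    "race": ["PA-Sen", "PA-Sen", "GA-Gov"],
    "poll_dem_share": [48.0, 47.5, 46.0],
    "actual_dem_share": [51.2, 51.2, 45.9],
})

polls["error"] = polls["poll_dem_share"] - polls["actual_dem_share"]
print("Poll MAE (pp):", polls["error"].abs().median())
print("Poll bias (pp):", polls["error"].mean())
```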
Beyond absolute error and bias, we can also look at how these forecasts changed over time. The graph below shows how both the MAE of our forecasts and the bias of that error evolved over the course of the cycle.

This graph shows how stable our error was over time. That stability is important: when making strategic resource allocation decisions, your organization wants to know in June what is likely to happen in November. Although there is some fluctuation in bias, especially for state legislative campaigns, our errors are relatively stable over time.
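As a sketch of how this kind of stability can be measured, one can group daily forecast snapshots by date and recompute the error summary for each date. The data below is synthetic and the column names are ours:

```python
import pandas as pd

# Synthetic daily forecast snapshots: one row per (snapshot date, race),
# with predicted and eventual actual Dem vote share in percentage points.
snapshots = pd.DataFrame({
    "date": pd.to_datetime(["2022-06-01", "2022-06-01",
                            "2022-10-01", "2022-10-01"]),
    "race": ["AZ-Sen", "NV-Sen", "AZ-Sen", "NV-Sen"],
    "predicted": [50.1, 49.0, 51.0, 48.5],
    "actual": [51.4, 48.8, 51.4, 48.8],
})

snapshots["error"] = snapshots["predicted"] - snapshots["actual"]

# Median absolute error and bias for each snapshot date.
by_date = snapshots.groupby("date")["error"].agg(
    mae=lambda e: e.abs().median(),
    bias="mean",
)
print(by_date)
```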
Our forecasts will never replace polling; there is no substitute for asking voters directly what their opinions on an election or issue are. But our forecasts do provide a much needed – and, crucially, more stable and less biased – measure of how an election will shake out.
It’s also worth exploring where we do see variation in these forecasts. We usually don’t see drastic swings over the course of an election, but new information, like campaign finance reports, can trigger big changes, especially in state legislative races. This is because we train our models on data from a wide range of sources about candidates, elections, and the voters themselves.
This approach is why our forecasts have lower bias and are generally more stable over time: past data on real voter behavior provides an alternative perspective on how voters will act. It also allows us to produce cost-effective scores that update daily for every single state legislative race in the country, creating the opportunity to spend in lower-profile races and make a more concrete difference for Democrats.
We are excited to continue improving these scores and working with organizations to help them understand the dynamics of an election. If you would like a much more in-depth exploration of how we build forecasts, you can see that laid out here. If you have any questions, please send us an email at help@deck.tools! We look forward to hearing from you.