Obama campaign manager Jim Messina has a blunt message for pollsters


Summing up the lessons learned from a massive investment in data and technology, Obama campaign manager Jim Messina has a blunt message for pollsters: "We spent a whole bunch of time figuring out that American polling is broken."
At a Politico forum on Monday, Messina spoke about the campaign's "three looks at the electorate" that gave him a deeper understanding of "how we were doing, where we were doing it, where we were moving -- which is why I knew that most of the public polls you were seeing were completely ridiculous."
David Simas, the Obama campaign's director of opinion research, provided The Huffington Post with more details about those three sources of polling data:
• Battleground Polls. The Obama campaign never conducted a nationwide survey. For a broad overview of public opinion, it relied on lead pollster Joel Benenson to survey voters across 11 battleground states (Colorado, Florida, Iowa, Michigan, Nevada, New Hampshire, North Carolina, Ohio, Pennsylvania, Virginia and Wisconsin) at regular intervals throughout the campaign.
Benenson conducted the aggregated battleground polls once every three weeks during the spring and early summer of 2012, every other week during the late summer, and twice a week for the final two months of the campaign. These surveys were used to test messages and to glean overall strategic guidance, but not to make individual state assessments.
• State Tracking Polls. To gauge the battleground states, the campaign conducted state-specific tracking polls on a similar schedule, shifting to three-day rolling-average tracking in each state after Labor Day, with sample sizes ranging between 500 and 900 likely voters every three days. The surveys were conducted by a team of Democratic pollsters: John Anzalone, Sergio Bendixen (among Latino voters), Cornell Belcher, Diane Feldman, Lisa Grove and Paul Harstad. These surveys helped drive message testing and strategy but also tracked the standings of Obama and Mitt Romney in each state.
• Analytics. Overseen by its internal analytics staff, the campaign also conducted parallel surveys in each state to help create and refine its microtargeting models and to provide far more granular analysis of voter subgroups. These surveys used live interviewers, very large sample sizes and very short questionnaires, which focused on vote preference and strength of support, with no more than a handful of additional substantive questions. During September and October, the campaign completed 8,000 to 9,000 such calls per night.
The call centers that completed these analytics surveys typically specialize in "voter identification," the process of contacting most or all individual voters in a state to identify supporters who can then be targeted in subsequent "get out the vote" efforts. But the Obama campaign's approach to voter targeting was different. It called very large, random samples of voters to develop statistical models that generated scores applied to all voters, which were then used for get-out-the-vote and persuasion targeting.
The Obama campaign preferred such modeling over traditional brute-force voter ID calling, according to a member of the analytics staff, "because our support models more efficiently (and quite accurately) told us who supported us and who opposed us."
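The article does not detail how those scores were built, but a common approach is to fit a model on the surveyed sample and then apply it to the full voter file. Below is a minimal, hypothetical sketch using logistic regression; the features, simulated voter file and placeholder survey answers are all invented for illustration, not the campaign's actual method.

```python
# Hypothetical sketch: fit a support model on surveyed voters, then score
# the whole file. Features, data, and survey answers are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated voter file: age, party-registration flag, past-turnout count.
n_voters = 100_000
voter_file = np.column_stack([
    rng.integers(18, 90, n_voters),   # age
    rng.integers(0, 2, n_voters),     # registered with the candidate's party
    rng.integers(0, 5, n_voters),     # elections voted in, of the last five
])

# A large random sample is surveyed; stated preference is the label.
sample_idx = rng.choice(n_voters, size=5_000, replace=False)
stated_support = (rng.random(5_000) < 0.5).astype(int)  # placeholder answers

# Fit on the respondents, then score every voter on the file.
model = LogisticRegression(max_iter=1000).fit(voter_file[sample_idx], stated_support)
support_scores = model.predict_proba(voter_file)[:, 1]  # P(supports candidate)

# High scores become turnout targets; middling scores, persuasion targets.
print(support_scores[:5])
```

The efficiency gain over brute-force voter ID comes from the last step: a few thousand interviews can score the entire file, instead of one call per voter.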
The analytics staff also routinely combined all of their data sources -- Benenson's aggregate battleground survey, the state tracking polls, the analytical calls and even public polling data -- into a predictive model to estimate support for Obama and Romney in each state and media market. Their model had much in common with those created by Nate Silver for The New York Times and by Simon Jackman for HuffPost Pollster. It controlled for the "house effects" of each pollster or data collection method, and each nightly run of the model involved approximately 66,000 "Monte Carlo" simulations (a number frequently cited by Messina and others in recent weeks), which allowed the campaign to calculate its chances of winning each state.
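The campaign has not published the model's internals, but the Monte Carlo step can be illustrated with a toy version: draw each state's "true" margin from a distribution around a modeled estimate, then count the share of simulations the candidate wins. The margins, uncertainties and four-state universe below are invented; only the 66,000-simulation count comes from the campaign's account.

```python
# Toy Monte Carlo: win probabilities from simulated state margins.
# Margins, uncertainties, and the four-state universe are invented;
# only the 66,000-simulation count is from the campaign's account.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 66_000

# state: (modeled margin for the candidate, std. error, electoral votes)
states = {
    "OH": (2.9, 2.0, 18),
    "FL": (0.5, 2.0, 29),
    "VA": (1.5, 2.0, 13),
    "CO": (1.7, 2.0, 9),
}

# Draw a plausible "true" margin for each state in every simulation.
draws = {s: rng.normal(mean, sd, n_sims) for s, (mean, sd, _) in states.items()}

for s, margins in draws.items():
    print(f"{s}: P(win) = {(margins > 0).mean():.1%}")

# Electoral votes won across these states in each simulation.
ev = sum(states[s][2] * (draws[s] > 0) for s in states)
print(f"Mean electoral votes from these states: {ev.mean():.1f}")
```

In the campaign's actual model, the per-state estimates would come from the combined polling sources with house-effect corrections, not fixed numbers typed in by hand.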
The massive scope of its polling effort helped guide the Obama campaign in ways that would be impossible with conventional polling. In late October, for example, its tracking detected a roughly 5 percentage point drop in support for Obama in the Green Bay, Wis., media market. A typical tracking survey in a market that size might have only 100 interviews (with a margin of error of +/- 10 percentage points), but the Obama campaign had far more data at its disposal. "Because we were conducting close to 600 interviews in the market every three days," Simas explained, "we had confidence in the market-level decision making."
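The arithmetic behind that confidence is the standard margin-of-error formula for a sample proportion, roughly 1.96 * sqrt(p(1-p)/n) at 95 percent confidence. A quick check at the worst case, p = 0.5, reproduces the figures above:

```python
# 95% margin of error for a simple random sample: 1.96 * sqrt(p(1-p)/n).
# Uses the worst case p = 0.5; weighting design effects would widen this.
from math import sqrt

for n in (100, 600):
    moe = 1.96 * sqrt(0.5 * 0.5 / n)
    print(f"n = {n}: +/- {100 * moe:.1f} points")

# n = 100: +/- 9.8 points  (the "+/- 10" cited for a typical market sample)
# n = 600: +/- 4.0 points
```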
The internal polling and modeling also told the Obama campaign a different story about voter trends than that emerging from the public polls. Simas said that from April through the conventions, the race was "fixed" in the battleground states at a 3-to-4 point margin (50 percent for Obama, 46 or 47 percent for Romney). There was "a bit of erosion for Romney right after the [Democratic convention] and in the midst of the 47 percent video period" in mid to late September, during which Obama's advantage expanded to roughly 6 points (50 percent to 44 percent), Simas said.
Within 48 hours after the first presidential debate in early October, those voters returned to Romney and the race "settled back" into the same 3-to-4 point lead for Obama across the 11 battleground states that the campaign's polling had shown all along. "Our final projection was for a 51-48 battleground-state margin for the president, which is approximately where the race ended up," Simas said.
The most recent results compiled by HuffPost Pollster for the 11 battleground states show Obama leading Romney by a 3.6 point margin (51.1 percent to 47.5 percent), although many provisional ballots have yet to be counted and only two of the states have produced final certified results so far.
National public polls showed bigger shifts toward Obama in September and back to Romney in early October. They also indicated a late mini-surge to Obama that his campaign's internal polling and models did not detect.
Unlike most of the public media polls, the Obama campaign's surveys relied exclusively on samples drawn from the official lists of registered voters. These lists allow for a different approach to reaching cell-phone-only voters -- in some states, voters provide phone numbers when they register, and list vendors attempt to match names and addresses to mobile and landline numbers culled from commercial data -- but they also come with shortcomings, such as voters listed without phone numbers.
Pollsters who rely on these voter lists must still weight their data demographically to compensate for these limitations as well as for low response rates and uncertainty about voter turnout. "You have to decide what the electorate is going to look like," Messina explained on Monday. "That's another place where [public] pollsters just got it wrong."
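One standard technique for that kind of demographic weighting is iterative proportional fitting, or "raking": respondent weights are adjusted until the weighted sample matches assumed population targets on each dimension. The sketch below is a bare-bones, hypothetical version; the sample composition and electorate targets are invented, and real campaign weighting schemes are considerably more elaborate.

```python
# Bare-bones raking sketch: adjust weights until the weighted sample matches
# assumed electorate targets. Sample composition and targets are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
is_young = rng.random(n) < 0.25   # sample skews old: 25% under 30
is_female = rng.random(n) < 0.55  # sample: 55% women

targets = [(is_young, 0.30), (is_female, 0.52)]  # assumed electorate makeup
weights = np.ones(n)

for _ in range(50):  # iterate the margins until they converge
    for flag, target in targets:
        current = weights[flag].sum() / weights.sum()
        weights[flag] *= target / current
        weights[~flag] *= (1 - target) / (1 - current)

print("weighted share young: ", weights[is_young].sum() / weights.sum())
print("weighted share female:", weights[is_female].sum() / weights.sum())
```

The targets are the contested part: as Messina's comment suggests, two pollsters with identical raw data can reach different results if they assume different electorates.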
But what made the Obama pollsters confident they were right in their assumptions about the demographics of the likely electorate? "We spent a ton of time at the outset of the election looking at the historical trends in the battleground universe," Benenson said, adding that they drew on past exit-poll data as well as models produced by the internal analytics team.
Simas said that the campaign's analysis of official reports on who voted early in states like Ohio, Nevada and North Carolina "made me very confident that the team had nailed it on the makeup of the electorate. We were hitting our assumption targets across the board. In truth, that's the only way to know if you're right."
Obama's 2012 campaign manager Jim Messina sat down with Mike Allen at POLITICO's Playbook Breakfast, held at The W Hotel on Nov. 20, 2012.

