How to read a poll

As readers of this blog know, I am not wild about public polls – they tend to focus people on the "horse race" at the expense of other areas of the campaign, and way too often their read of the horse race, or of changes in it, is misleading. Nonetheless, they seem to proliferate, so here is a short primer on what to look for to evaluate how real they are – or are not:

1. Do the demographics of the poll match those of the electorate? The distribution in the poll by age, gender, partisanship, race, education and geography should match that of the electorate. Now, these factors vary in the electorate depending on voter registration and turnout so the exact distribution for a future election is unknowable. Additionally, while we all have access to U.S. Census data, most of us do not have access to special modeled voter files that tell us this information for earlier elections. So some “guesstimating” is necessary for casual consumers of polls.

Still, the electorate isn't radically different from the adult population, except perhaps by age as older people are more likely to vote than younger people. One thing to always watch out for is the percent college educated, because people with four-year college degrees are only a little more likely to vote but much more likely to complete polls. In Mississippi, 24 percent of adults over age 25 have four-year college degrees. The most recent MSToday poll of registered voters over age 18 had the figure at 21 percent. That is not unreasonable, although perhaps a tad low. On the other hand, I have seen polls that had the figure over 40 percent in Mississippi, which is not at all reasonable. It matters a lot because Governor Tate Reeves has more support among white voters without college experience than among white voters with four-year college degrees. It also mattered a lot in producing the polling errors of 2016, as Hillary Clinton had a lot more support than Donald Trump among voters with four-year college degrees.

2. Is the partisanship correct? In states with party registration, like California or Florida, you can see whether the number of Democrats, Republicans and independents (or decline-to-state voters as they are known in California) is correct. With a special modeled file, statisticians have estimated the probability of each voter's partisanship in every state, and polls that use those files rely on that modeling. When neither of those is available, partisanship can rely on party self-identification or on prior vote. Both of those methods are somewhat problematic. Self-identification is an attitude and can fluctuate over time – someone may see themselves as a Republican today but start thinking next week they are more of an independent, particularly if they anticipate crossing party lines in the next election. Prior vote – for whom people voted in the last election – relies on their memory, and there is a tendency to recall voting for the winner. Still, recalled presidential vote, since people felt pretty strongly about that one, can be a useful measure and can ground those who say they are independents as leaning one way or the other in reality. In the last two MSToday polls, party self-identification shifted quite a bit, from 35 percent Democratic in January to 27 percent in April, while the Republican percentage went up two points from 38 to 40 percent and the independent percentage went down five points. It may be that voters are feeling less Democratic, but it is likely that some of the change was the result of sample fluctuation. If the state's underlying partisanship is the same, the partisanship of the two samples should have been more similar than it was.
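
A quick back-of-the-envelope check shows how much of a shift like that sampling noise alone could produce. The sample sizes below are hypothetical (the poll releases discussed here don't report them), so this is a sketch of the arithmetic, not a verdict on the MSToday numbers:

    import math

    # Hypothetical sample sizes -- assumed for illustration only.
    n_jan, n_apr = 500, 500
    p_jan, p_apr = 0.35, 0.27   # Democratic self-identification, January vs. April

    # Standard error of the difference between two independent sample proportions
    se_diff = math.sqrt(p_jan * (1 - p_jan) / n_jan + p_apr * (1 - p_apr) / n_apr)

    print(f"observed shift: {p_jan - p_apr:.0%}")
    print(f"95% noise band for the difference: +/- {1.96 * se_diff:.1%}")
    # With 500 interviews in each poll, the noise band is roughly +/- 5.7 points,
    # so an 8-point swing is somewhat larger than sampling fluctuation alone would
    # usually produce -- and smaller samples would widen the band further.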

3. Know the real sample sizes (or be cautious of them). Every poll these days has been weighted. That means that when the data collection is done, the pollster looks at the sample and up-weights or down-weights respondents in some groups to reflect their representation in the electorate. If they do not have enough young people, or people without college experience, or voters in the Delta, they count those they do have extra – as if they were 1.1 persons (or more) instead of 1 – and they down-weight people in groups that are over-represented. Small weights make the poll better, but larger weights – or multiple weights – can make a very small group of people count for too much of the poll. I once did a poll that was low on both Republicans and African Americans and made the rookie mistake of up-weighting both those groups at the same time, creating a sample that had a lot of Black Republicans, which made it appear (wrongly) that my candidate was slipping among Black people.
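
For readers who like to see the mechanics, here is a minimal sketch of that arithmetic on a single dimension, with made-up shares. Real pollsters weight on several dimensions at once, usually by iterative raking, which is exactly where the Black Republican mistake above comes from:

    # Made-up target and sample shares for one weighting dimension (education).
    target_share = {"no_college": 0.70, "college": 0.30}   # assumed electorate
    sample_share = {"no_college": 0.55, "college": 0.45}   # what came back from the field

    # Each respondent's weight is the target share divided by the sample share.
    weights = {group: target_share[group] / sample_share[group] for group in target_share}
    print(weights)
    # {'no_college': 1.27..., 'college': 0.66...} -- each no-college respondent now
    # counts as about 1.27 people and each college respondent as about 0.67.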

Weighting can be tricky, and as response biases have gotten worse, it matters more. Very few public polls report their weights or the actual sample sizes they collected. Ask for them – or know that in telephone polls the pollster has probably up-weighted younger voters (especially younger men), African Americans, and Hispanics. In online polls, they have almost certainly up-weighted voters without college experience and seniors. In either case, the actual sample sizes of these groups are likely smaller than they appear.
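
One way to see how much a heavily weighted group really tells you is the effective sample size. This sketch uses Kish's common approximation and hypothetical weights:

    # Hypothetical weights for 100 interviews in one subgroup.
    weights = [2.0] * 25 + [1.0] * 50 + [0.5] * 25

    # Kish's approximation of the effective sample size under unequal weighting.
    n_effective = sum(weights) ** 2 / sum(w * w for w in weights)
    print(round(n_effective))
    # 81 -- those 100 interviews carry roughly the information of 81 equally
    # weighted ones, and the margin of error should be figured from 81, not 100.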

4. Take it all with many grains of salt. Polls can be very useful in understanding how other people are thinking about the world or about an election. But they used to be more of an exact science than they are because people used to be easier to reach. If everyone in the population of interest (people who will vote in the next election) is equally likely to be in the poll, then all the laws of probability apply and you know their opinions within a mathematical margin of error. But as the response rate to polls has plummeted, and in ways that are not at all random, those laws no longer apply. The collection has biases that have been adjusted by the pollster in line with their assumptions. The best pollster making the most studied assumptions still misses the mark sometimes. And changes in a horse race for an election that won’t happen for months may – or may not – mean anything at all. I hope to see more news coverage of what candidates are doing and saying, and leave the internal processes and strategic judgments to their campaigns – although I am still something of a poll addict and will look, even while shaking my head and wishing for more coverage of who these people are, what makes them tick, and what they would do if they win the office that they seek.

Right now, like most Mississippians, I know a lot more about Tate Reeves and the kind of leader he is than I do about Brandon Presley. That will change as the candidates, their campaigns, and the press each tell us more. All we really know right now is that Mississippians aren't satisfied with the status quo, leaving room for the challenger, whom most of us don't know very well yet.

Polling is Leaving Out Poor People

Those who follow such matters already know that pollsters under-sampled white, non-college voters in 2016. Then, in 2020, Trump voters exhibited greater than average response bias as they were less likely than others in their demographic to respond to polls.

The problems with polling are not only about Trump voters, or about election projection for that matter. The core problem is that some people are less likely to respond to polls. Pollsters “correct” for this by up-weighting those who do respond – counting their responses extra and assuming the respondents represent their demographic. Some groups who are not Trump voters but consistently require up-weighting are low income people, people in minority communities, lower propensity voters, and young people.

Low income people and lower propensity voters (groups that overlap significantly) have always been harder to poll. Some of the difference is behavioral. Low income people are often less available – more likely to work nights, to move frequently, or to use a burner phone without any listing. They may also associate polls with the government, or the media, or other elites – the establishment if you will – and have little interest in unnecessary interaction with those (which is likely part of the problem with Trump voters).

Question wording is also often a problem. If people are asked to choose among response alternatives that do not reflect their views or concerns, they are more likely to terminate the interview. Many polls on COVID vaccination do not include cost as a barrier, assuming that people know the vaccine is free although free health care is outside the experience of most people, particularly those who are lower income.

Pollsters’ increasing use of online panels may be making the problem of getting a representative sample of low income people worse. Such panels are recruited in advance and demographically “balanced” to represent the population.

The first problem is that, rather than eliminating response biases, such panels simply inject bias earlier in the process, since the panel consists of people who have agreed in advance to be polled.

Second, online panels eliminate some low income people from polling samples entirely. In 2019, 86.6 percent of households had some form of internet access, including 72 percent with smartphones. But the percentage varies by state, ethnicity, and income, according to the ACS (https://nces.ed.gov/programs/digest/d17/tables/dt17_702.60.asp). The Census Bureau has been clear about the problems of needing to weight census data in 2020 given the low response rates of low income people (https://www.census.gov/newsroom/blogs/research-matters/2020/09/pandemic-affect-survey-response.html).

Finally, if panel recruitment is by phone or mail, it may be skipping those who are more transient or who do not respond to such calls for all the reasons described above. And even when recruitment is more balanced, most panels still end up up-weighting low income people, because they do not respond at the same rate as other panelists.

Does the exclusion of low income people from polls matter? Superficially it may not matter very much to political campaign strategists because they are interested in likely voters and willingness to be polled and vote propensity are related (per Pew Research studies). However, the relative absence of low income voters may misinform the campaign about what is on people’s minds, especially in lower income states and districts. If the campaign is considering investment in organizing low income communities, the exclusion reduces the potential for that strategy.

Not-for-profit organizations that wish to provide services to low income people should be very careful about relying on polls. Research has shown large response biases in health care research (https://link.springer.com/article/10.1007/s11606-020-05677-6), for example. Collecting data on site or in person may be far more valuable, and personal interviews are becoming feasible once again.

Most of the publicly released polls on issues like COVID vaccination are reporting data by income. In some cases, the income categories are cruder than they should be (e.g. below $40K as the lowest). In virtually all public surveys, the data are weighted but information on the degree of weighting applied is unavailable. If, as in Mississippi, nearly 20 percent of the population of interest is below the poverty line, how many were interviewed in a sample of 500 before weighting? If there were only 50, that wasn’t a meaningful sample from which to weight.
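
To make that question concrete, here is the arithmetic under those round, hypothetical numbers:

    import math

    sample_size = 500
    target_share = 0.20     # share of the population of interest below the poverty line
    interviewed = 50        # hypothetical count actually reached before weighting

    weight = (target_share * sample_size) / interviewed
    moe_subgroup = 1.96 * math.sqrt(0.5 * 0.5 / interviewed)   # worst-case margin of error

    print(f"each low-income respondent counts as {weight:.1f} people")
    print(f"margin of error within that group: +/- {moe_subgroup:.0%}")
    # Each of the 50 counts as 2 people, and any figure reported for the group
    # carries a margin of error of roughly +/- 14 points -- too wide to mean much.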

Every consumer of polls should know what the unweighted data look like. And every consumer of polls should be a little skeptical of results in groups that required significant weighting or that would have been demographically unbalanced without it. If your interest is in a group that is up-weighted, like lower income people, you may have learned less than you think.

None of this should suggest that such polls are without value. But they shouldn't be seen as all-encompassing. There is no substitute for conversation, and articles like this one https://www.nytimes.com/2021/04/30/health/covid-vaccine-hesitancy-white-republican.html may be more useful and informative than some of the published online panel data in understanding what lower income communities are thinking and feeling on issues of concern.

There are other groups who are under- or over-represented in polls. Under-sampling low income people seems both egregious and important at this time. But, as I have written before, the core problems with sampling call for new research methodologies as well as for greater care by pollsters and greater caution from those who consume data.

Thoughts on “Revisiting Polling”

This week five major polling firms released a statement on "Revisiting Polling for 2021 and Beyond," which you can find here. Friends, former clients, and readers of this blog have asked me what I thought of it. This post answers that question without going behind anyone's back, especially since I applaud most of it. The group of five pollsters are all former colleagues, some are also friends, and they include some of the researchers I respect the most. (These are overlapping, not mutually exclusive, categories.)

First, I thought it was thoughtful, analytic, reflective and productive. I found it useful and interesting that unexpected Republican turnout contributed to the problem but did not account for it. I totally agree that presenting results with a range of scenarios – different turnout levels, for example – would be productive. I acknowledge that I tried to do that a few years back and found that clients adopted the optimistic scenario as the "real" one. Further, both clients and the powers-that-be appreciate expressions of certainty, even when none exists. A group effort to present results as a range may be more productive than an individual one.

Second, I welcomed the discussion of weighting procedures and the use of analytic modeling in polling. In the old days, polling used random samples. The margin of error describes the statistical uncertainty of a true random sample, but that is not how virtually any pollster is sampling these days. Instead, pollsters are weighting the data to presumptions of the electorate – often well-researched and well-grounded presumptions, but presumptions nonetheless. Apparently many of these were too optimistic on the Democratic side. I would also hope for greater transparency in identifying those presumptions in the future.

Third, the use of modeling to ground the sample in base attitudes and partisanship as well as demographics is important. If analytics says 40 percent of the electorate in question tilts Republican, then the sample should too. The more sophisticated and accurate the modeling is, the better grounded the polling will be, and the better able it will be to show change and relate other attitudes to those grounded in modeling. Using the modeling properly requires certain sampling and calling protocols, however, that were not covered in the memo. Proper alignment with modeling would, for example, have made partisan bias due to COVID behavior extremely unlikely. Modeling, however, includes a "mushy middle" of people about whom there is uncertainty. They are in a modeling middle, not a middle in reality, and even when polling and modeling match, that can be a source of error. Modeling, too, needs to be more transparent about its own level of error, and more politically astute about what is modeled and how.

Finally, and perhaps most importantly, I appreciated the opening up of the discussion to analytics practitioners and others outside of polling. In fact, I believe the resolution of “the polling problem” is outside of polling. The change in sample frame from random to weighted “representative” samples – forced by response rates – means polling will continue to rely on presumptions and will not again provide accuracy within the margin of error, except when the presumptions are correct.

The resolution, in my view, is in a great deal more clarity about what the research questions are and a lot more creativity in how to answer them. I agree with my former colleagues that polling remains an important element of political campaigns. It should not, in my view, be the only or perhaps even the dominant methodology employed. There is an emerging array of methodologies and unlimited potential for experimental design. Some are advances in projecting results and others help get at underlying attitudes and message development. Perhaps there needs to be some separation of research that fulfills those goals.

There should also be a new attitude of listening to voters rather than approaching them exclusively with an ivory tower sense of distance. People will usually tell you what they think if they think you really want to know. Analytics can do a lot more to help win elections, but analytics practitioners need to understand their own limitations too. And pollsters often ask questions in ways that are obtuse, at best, beyond the Beltway (a phrase that is meaningless to many). New ways of listening and new qualitative techniques are as important in understanding the electorate as are fixes in projections.

Consumers of polls need to understand both their value and their limitations. Elected officials certainly express more skepticism about the "horse race" number these days, but that should continue when their pollsters tell them they have 52 percent of the vote with their opponent at 48 percent. That doesn't mean you will win, and the why of it all – what voters are thinking and feeling about their own lives – is critical too.

I wish the media would stop treating polls as a central story about voters and the election. Dueling polls are much less interesting than dueling candidates, or ideas, or constituencies. And if you must cover polls, please do so in a way that is more discerning about polling quality, and far more transparent about how the poll was conducted and weighted, and how that introduces potential bias. It always does.

Failures of Punditry (and Polls)

As the year began I wrote what I called my New Year’s “irresolutions” – a set of observations on the Democratic field that I cloaked in uncertainty. I promised to come back and identify those that were wrong. There are two standouts in that regard: (1) Joe Biden’s staying power is far less certain – and I take little comfort in having been right about that before I was wrong; and (2) I now suspect that a Michael Bloomberg nomination is as likely as several other possibilities on the table.

I was not alone in being wrong, and there are two core reasons why so many were. The first is that the polls were wrong – not a single poll showed a Sanders-Buttigieg tie in Iowa with Elizabeth Warren in third; nor did a single poll show the Sanders-Buttigieg photo finish in New Hampshire with Amy Klobuchar in a strong third. In addition to the usual problems with polls (see prior posts and tweets), in Iowa, polls overestimated turnout and apparently underestimated the power of organization and the movement of late deciders. They included too many non-voters and too few who moved late to Buttigieg. In New Hampshire, there was not time for quality polling between the debate and the primary given issues with callbacks and weekend samples, so most polling missed the Klobuchar growth. Additionally, those "future former Republicans" of Buttigieg's may have been a bigger piece of the electorate than some foresaw.

The second reason pundits were wrong is that this is not an election like any we have seen before. Voters are seriously shopping for a candidate who can defeat Donald Trump. Like the pundits’, voters’ hypotheses about that shift over time, and so too do their candidate preferences. Debate performances, candidate message, perceived toughness, all matter. Since so many were so wrong, the impact of punditry seems to matter less although I am continually concerned that wrong polls can impact elections and, in their own way, thwart the voter will they intend to reflect.

The factors that made punditry and polls wrong in these first two states are operative in those that are coming up. There will not be time for quality polls between debates and primaries or between South Carolina and Super Tuesday. Voters may also change their minds about who is the strongest candidate and about what they will tolerate from candidates about whom they have mixed feelings.

Yes, polls do not show Buttigieg and Klobuchar to have much support from voters of color, but usually these are polls with small and often unbalanced samples of voters of color. Besides, African American and Hispanic voters have in the past overlooked far more egregious violations on race than these candidates are accused of. I suspect most voters of color concluded long ago that white politicians are imperfect on these issues. Additionally, I suspect these candidates will do more outreach than perhaps they have to date, maybe (or maybe not) to positive effect.

This is not a prediction that their support will grow – I don’t know – but there is no reason to rule it out either. Sanders is better known in those communities, and has a civil rights movement history from the 1960s. That doesn’t mean he has a lock on anything – and neither does Biden. Further, we have not heard yet from any voters in the south or in the southwest and we don’t really know how they are judging these candidates, or will after two more debates in their very different home states. We will have to wait and see. And the results of the next two states may or may not tell us much about Super Tuesday, when a third of delegates are chosen.

One element of current punditry I question is whether voter decisions are ideological. There is a conventional analysis that groups moderate candidates and progressive candidates and presumes some trade-off among them. The analysis is supported by voters’ second choices – as Warren is the more frequent second choice of Sanders supporters and vice versa. But some of that may reflect changeable theories of who can win. Further, there are perhaps gender dynamics in play – worth wondering whether Warren’s weakness in New Hampshire was in part attributable to Klobuchar’s growth. I don’t know.

One more irresolution I want to comment on: whether there will or even can be a first-ballot winner. Multiple candidates and the deferral of the votes of super-delegates do make it less likely, as basic arithmetic and every model shows. But candidates can release their delegates before the vote, and they can team up on prospective tickets too. More importantly, the primary process is not linear, many things can happen, and a clear winner has time and space to emerge. We will see. The only thing I do know is that we should not pre-judge results, because the situation and voters' behavior are unique to this year and to the need to defeat Donald Trump.

Big Structural Change

Increasingly, the Democratic presidential nomination seems a battle between former Vice President Joe Biden and Senator Elizabeth Warren.  Senator Bernie Sanders impacts the race but with scant signs of growth in his support.  Senator Kamala Harris and Mayor Pete Buttigieg still hold on to smaller constituencies, with life in other candidacies, including Senators Booker and Klobuchar, and with flashes of passion from former Congressman O’Rourke. 

There is still time for another candidate to emerge but the race has remained in near stasis as summer has turned to fall. 

The two leaders – Biden and Warren – are the two candidates who have presented the clearest rationales for their candidacy.  Biden fundamentally promises a return to the Obama years and Warren pledges big structural change.  The latter is making some observers nervous, resulting in a spate of polls that show general election voters are not yet ready to embrace big structural change.

The most recent NBC/Wall Street Journal poll shows a plurality of non-Democratic primary voters supporting smaller scale policy changes and majority opposition to some of Warren’s policy proposals.  The centrist Democratic organization Third Way presents data that voters want a more centrist approach on health care rather than Medicare for all.  CNN continues to show that voters prefer a candidate who can defeat Trump over one with whom they agree on the issues, which may be a false choice if they can have both.   

There are many reasons to be anxious about the 2020 election.  The stakes are extraordinarily high, we are now in an impeachment process, and, with over a year to go, many factors are simply unknowable, including the progress of Democratic candidates as they move toward the nomination and the general election, the erratic behavior of the president, and the potential for corruption of the process.

I am not, however, concerned about Warren’s articulation of the need for big structural change.  Here’s why:

  •  Warren has left herself a lot of room to define the nature of structural change.  The words establish her as the change candidate, and as a clear contrast to Biden’s return to the recent past.  As the leading woman candidate, and a Biden alternative, she would represent change in any case. Embracing that positioning seems smart and many of her proposed policies, like increasing taxes on the super wealthy, are in fact broadly popular.
  • Warren has the capacity to be a reform candidate. She is financing her campaign differently than the other candidates, and she is undaunted by demands of both big corporate interests and the super-wealthy. For the 30 years I was in polling, a message of standing up to big corporate interests to bring change has been a strong elixir. Back in 1990, in polling for the late Senator Wellstone (who, for the record, was always clear he didn't listen to his polling), 72 percent of Minnesotans said the problem in Congress was more that its members listened to special interests than that problems were beyond government solution. Similar results have been replicated in the interim, but few candidates can authentically articulate the message. Despite two Pinocchios from the Washington Post, Warren is uniquely able to articulate that her presidency would listen and respond to people and not to special interests (hopefully combined with a plan for economic growth and small business development). Genuine reform in how we conduct business in Washington would be big structural change.
  • Voters will likely be more interested in the results than in the process of change.  Warren has a variety of plans – and ways of paying for them that do not require tax increases on the middle class.  Voters favor lower health care costs, more accessible post-secondary education, more economic opportunity, fair treatment and fair pay in the work place, and Warren is talking about these issues.  Voters are not – at this point – ready to embrace Medicare for all but they may also understand that it won’t happen unless they do and there are interim steps in the process they may endorse moving forward. 
  • The impeachment process may change the context.  On the downside, it may make Washington and Congress look even more partisan and angry.  On the upside, it may focus discussion of the threats of the Trump presidency.  Democrats have so many complaints about Trump that our attacks are like spam – diverse, diffuse, and occasionally obscure to some people.  That he represents a threat to national security and to the electoral process in which people choose their own leaders can become central to arguments against him.  In either case, the process may spur greater interest in change from business as usual in Washington even if the desire for change encompasses both parties.     

None of this discussion should suggest I do not have anxieties about the leading candidates.  My principal anxiety about Warren is whether she will appear the Harvard professor who needs to be the smartest in the room, or whether she is the woman of blue collar roots motivated by instincts of caregiving and reform.  Candidate imagery and gender interact, and I am sure her campaign is well aware of the image downsides of being the “Smart Girl.”  As for Biden, his strength is in a perception that he is a known quantity and a decent man, who represents little that is radical or risky.  Other than gaffes that can undermine perceived steadiness, I worry that he will not connect with younger voters whose heightened participation is essential to prevent this electorate from being older than the 2016 electorate, a demographic change that would favor Trump.        

Additionally, Sanders may garner more support than I am crediting him with here, and others may emerge. There is room for both to happen. A three- or four-candidate late field can spur another anxiety: that no one will have a majority of delegates going into the convention.

I am not, however, anxious that Warren is the candidate of big change.  If the country moves from Trump to Warren, it will be a big change – in structure, process, and result. 

Shortly after the 2016 election, I had lunch with a colleague whom I respect.  I noted that in politics as well as physics, for every action there is a reaction.  We went from a brilliant, erudite President who believed in meritocracy to the current incumbent.  The next wave, I suggested, could bring big change.  Maybe, my colleague responded, but we are in for a whole lot of hurt in the meantime.  His prediction was correct.  We will see if mine was as well.    

Some questions for post-Labor Day Polls

I suspect we will see a spate of new polls fielding after Labor Day.  I am hoping they ask some questions beyond the horse race that tell us more about what voters are thinking around the Democratic presidential contest.  Here are some suggestions (in no particular order):

Candidate Qualities 

Here are some qualities people might look for in the candidate they ultimately support for President.  On a scale of 1 to 7, please tell me how important each one is to you, with a 1 meaning not important at all and a 7 meaning it is the most important quality.  (READ AND RANDOMIZE)

  • Can beat Trump in November
  • Shows compassion for people
  • Knows what they want to do as President
  • Would bring the country together
  • Would make significant policy changes
  • Has a new approach to governing
  • Will protect individual rights and freedoms
  • Will promote economic opportunity
  • Has the wisdom of experience
  • Will advance equality and anti-racism

Which of these qualities – or some other quality – is most important to you of all?

Electability

We know voters care about whether a candidate can beat Trump, but we don't know what qualities make a candidate stronger in their views. How about a couple of questions, like:

How important is each of these in telling you a candidate can defeat Trump in November, using a scale of 1 to 7 with a 1 meaning it is not important at all and a 7 meaning it is the most important quality? (READ AND ROTATE)

  • Tough and willing to fight
  • Has moderate issue positions
  • Popularity with Trump voters
  • Inspires young people
  • Relates to diverse communities
  • Leads Trump in the polls
  • Likeable and appealing

Is there another quality that is important in telling you a candidate can win?

Thinking about your friends and neighbors, if the Democratic candidate is a woman, will that make them more or less likely to turn out and support that candidate in November, or won’t it make any difference to them?

Thinking about your friends and neighbors, if the Democratic candidate is over age 75, will that make them more or less likely to turn out and support that candidate in November, or won’t it make any difference to them?

Thinking about your friends and neighbors who are uncomfortable with Trump, do you think they are looking more for a return to the pre-Trump years or more for new policies that will bring change?

Issues

What issues are most important to you in the 2020 election? (Open-end, multiple response)

If we elect a Democratic president in 2020, which of the following should be their top priority in their first term: (READ AND ROTATE)

  • Climate change
  • Affordable health care
  • Access to post-secondary education
  • Infrastructure like roads and bridges
  • Higher wages
  • Immigration reform
  • Criminal justice reform
  • Other (specify)

When it comes to health care, which would do more to expand access to quality affordable care – (ROTATE)  a public option in which voters can choose government-administered insurance OR Medicare for all in which everyone is in a government-administered insurance program (with response options for neither as well as don’t know)?

(IF CHOICE) Would that system be much better, somewhat better, somewhat worse, or much worse than the current system?

If there is a Democratic president, how likely is it that the proposal will become law in the next five years – very likely, somewhat likely, not very likely, or not at all likely?

The Horse Race Question

I have been concerned that asking about some 20 candidates in phone polls flattens choices, because people can only hold seven plus or minus two items in short-term memory. Consider asking the horse race in groups of 5 to 7 candidates – preferably randomizing the sets, although it is also tempting to ask the top 7 together; a rough sketch of one way to randomize the groups follows the example questions below. Then add a question like:

You indicated candidates A, B, and C were your top choices within the groups I gave you.  Which of these is your first choice among all the candidates?    

Is that the candidate you would most like to see as President or the candidate you feel can best win?  (Code for volunteered both)
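
Here is that rough sketch of how the randomized groupings might be built; the 20 candidate names are placeholders, and a real questionnaire would also rotate the order within each group:

    import random

    candidates = [f"Candidate {chr(65 + i)}" for i in range(20)]   # Candidate A ... Candidate T

    def horse_race_groups(names, group_size=5, rng=None):
        """Return one respondent's random partition of the field into small groups."""
        rng = rng or random.Random()
        shuffled = list(names)
        rng.shuffle(shuffled)
        return [shuffled[i:i + group_size] for i in range(0, len(shuffled), group_size)]

    for group in horse_race_groups(candidates):
        print(group)
    # Each respondent sees four groups of five; their top pick from each group then
    # feeds the "first choice among all the candidates" follow-up question above.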

Vote History

Most public polls are asking how likely people are to vote in the primary or caucus in their state.  Consider asking whether they voted in the 2016 contest between Hillary Clinton and Bernie Sanders and for whom they voted.

The question allows analysis of how large a primary electorate you are polling and what the standing is among those most likely of all to participate, as they have done historically. It will also show where the support for these two candidates is going.

Demographics

Basic demographics are fine, but also consider asking whether they live in a county that supported Clinton or Trump in 2016, as these voters may have different perspectives from each other.

# # #

Questions like these would say more about what voters are looking for in the next president (other than that he or she is not Trump).  Crosstabs of questions like these by candidate preference might also provide more insight as to why voters are making the initial choices they are, and how the contest may evolve.

Don’t just poll – research!

Political practitioners too often see polls and focus groups as the automatic choices for political research. Polling tells you about aggregate attitudes. Strategic reliance on polling grew with dependence on television – a medium that used to reach most everyone, so aggregate attitudes and message receptivity made sense.

In the internet age, the strategic balance is shifting to more targeted research and a more diverse array of message options. 

Here are some traditional research goals and some new (and old) approaches:

Exploratory research

One goal of almost any campaign is to avoid generic messaging.  If you want voters to think your candidate understands the unique problems of their region, the first step is to learn about those problems. In a national campaign, that can mean speaking to the dairy crisis in Wisconsin or the historic importance of the glass industry in Toledo. 

Local political conversation remains a basic. Meanwhile, Google Trends can provide localized search data; voter file analysis combined with Census data can tell you the demographics and partisanship of who votes and who does not. Sophisticated modeling like that of the Peoria Project can say a lot about people's political attitudes and interests, and the breakdown of Google affinity groups can say a lot about people's non-political interests.

Message Development:  The Core Argument

Review the candidates' records against what you have learned about voters. The core argument is almost always that your candidate will represent people's interests and the other candidate will represent someone else – be that partisan interests, special interests, or an ideology (although most voters say philosophy) that is alien to people. Alternatively, you can argue your opponent has a character flaw, but these days a lot of voters think most politicians have character flaws, so it better be an egregious flaw.

If you know the turf, have analyzed the available voter info, and read information on each candidate, the likely core arguments logically follow. On the presidential level, for example, Pete Buttigieg has to develop generational change as an argument. Joe Biden must run on his experience and the comfort of familiarity. Should either become the nominee, they will have different contrasts with Trump.  Already, Buttigieg articulates that we can’t continue the way we are while Biden promises a return to the balanced decency and rationalism of the recent past. Much is baked into who they are. The same is true in a local race. The candidate defines the message.

Message Development: The Media Mix 

Here's where there are a lot of new opportunities in the internet age. You can't have a different core argument in different media, but you have more options for how to express the argument than ever before.

Let’s take a congressional example:    

Your candidate is running against a Republican incumbent who has opposed funding an array of programs that would put money in this single media market district.  The district includes the city of Townville and surrounding rural counties. It tilts Republican and is predictably fiscally conservative as people figure that government money goes to someone else.

From exploratory research, you know a rural hospital is in danger of closing, local stores and the Family Dollar store have closed in small towns, and tariffs are hurting farmers, each of which can be tied to incumbent votes or statements opposing the ACA, opposing online sales taxes and so helping Amazon, and supporting Trump economic policies. The consequences, however, allow localized messaging that the incumbent has let bad things happen in the district.

You also want to make sure that voters who turn out for the presidential race in Townville vote down ballot. 

You are going to need a television ad that establishes a basic argument that your guy is going to put the people of the local area first, and not fall in with what party bosses tell him to do in Washington.  It should likely reinforce that he won’t waste their money on things that don’t help them. (It will be less generic than that because your candidate will be a person with a history and personality.)

Polling to sort through some options may be appropriate at this stage, but instead of testing "message" paragraphs, you might simply look for what bothers people most, since your paragraphs will never translate directly to ads and can lead swing voters to terminate the poll as they get annoyed by them.

Then you have a lot to work with on specific executions that can reinforce each other as appropriate in television, mail, and online.  Start with online testing because it is easiest to do.  You are now looking at executions of varying content, but also varying tone and style.  

Message Testing

Your creative team is not limited to a 30-second format but can use a longer-form story, a metaphor, or a meme. They can show how your candidate's spouse has to drive farther for groceries since Family Dollar closed or how ambulance response times will increase if the hospital closes.

As long as the ads are under the message umbrella, and sensitive to what you know from affinity group and other analyses of the district’s interests from Google, Facebook and other sources, the team can develop an array of options.

Internet ads can be self-testing. It is easy to see what engages interest – through clicks, viewership, or by varying search terms. Conduct brand-lift surveys. Such surveys are a standard for commercial advertisers – asking one question, exposing someone to an ad, and asking a follow-up question later to test movement. Brand-lift surveys can and should be conducted within affinity groups or whatever targeting scheme you will employ, or they can help you choose internet targets.
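
The lift calculation itself is simple; this is a bare-bones sketch with made-up counts, comparing respondents who were served the ad to a holdout group that was not (the same arithmetic works for before-and-after measurement):

    # Made-up counts of favorable responses among exposed and holdout respondents.
    exposed = {"favorable": 212, "total": 500}
    control = {"favorable": 180, "total": 500}

    lift = exposed["favorable"] / exposed["total"] - control["favorable"] / control["total"]
    print(f"brand lift: {lift:+.1%}")
    # +6.4% in this made-up example -- the ad moved favorability about 6 points
    # among those who saw it, relative to those who did not.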

You can also design your own experiments:  If you are canvassing, expose some people to one message and others to another.  Gauge their reaction.  You can also contact them a few days later and see what they retain and how their attitudes may have changed. Add to your canvassing script what people have heard lately, and you will have another measure of what is breaking through as advertising begins.  (Don’t ask them to recall the medium – people are not very good at that.) 

The end result should be a mix of messages, measured for effectiveness within affinity groups or other online targets.  Television and mail can overlap with internet messaging and some internet messaging may stand alone as it impacts a discrete group only (like efforts to reach Democratic Presidential voters in Townville).

Prediction

The final historic purpose of polls is to predict the outcome of the race.  Conduct tracking polls if you want, but they won’t guide your final resource targeting online because the sample size will be insufficient and your targets are generally behavioral not demographic.  Analysis of the canvassing stream will help as you monitor what people are hearing.  Tailored analytics can tell you whether you are above or below partisanship among people who are principally streaming online or are more traditional in their media habits, and by attitudinal groups as available.    

Currently, most analytic efforts are too divorced from campaign strategy to help but that will likely change as campaign practitioners see the broader uses for analytics and as analytics professionals are better integrated into the strategic discussions of campaigns. 

In any case, there are new tools available, designed for the internet age. Polling was not built for targeted online communications. These days, television alone will not reach everyone, and it misses opportunities for tailored messaging about issues that touch people's lives.