Entering polling’s silly season…

Soon there will be a plethora of “horse race” polls in various races and nationally, likely showing divergent results. That is partly a seasonal phenomenon, but the one-two punch of the Supreme Court decision overturning Roe v. Wade and the compelling hearings of the House Select Committee on the January 6 Attack has made the mid-terms more interesting and more contested. Plus, Republicans have nominated some truly dreadful candidates.

On the flip side, President Biden’s approval is low and the economy is perceived as weak. The last time there was a mid-term election with rising inflation (though it was rising more slowly than now) and an unpopular Democratic President was 1978. In that year, which almost no one under age 50 remembers, Republicans picked up three Senate seats and 15 House seats.

There are many differences between 1978 and 2022. So this year is looking interesting, but we cannot know what will happen based on polling alone, or even mostly. Here are two caveats on polling and then some things to watch for since polling isn’t going away.

The first caveat should be familiar to anyone who has followed this blog: No one is polling random samples. Polling simply doesn’t work the way it used to. The rule for a random sample is that everyone in the population of interest – people who will vote on November 8, 2022 – has an equal chance of being included in the poll. Caller ID means no one has been able to do that since about 1990. Instead, pollsters mimic the electorate and aim for representative samples.

Wide swings in turnout, extreme difficulty reaching some demographics, and response bias have made “representativeness” more and more difficult to achieve. Response bias is not unitary: it combines distrust of polls, distrust of the media, the desire to be seen as an individual rather than part of a labeled aggregate, the quality of the polling questions (which may cause terminations), and varying interest in politics. Response bias is greater among conservative voters than liberal voters but greatest of all in the middle of the ideological spectrum, which is often the most interesting part.

Pollsters correct for these problems by “weighting” the data. Those weights rely on assumptions about the shape of the electorate, based in part on mistakes of the past. There is no correction for “new” errors until after they have happened.
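As a back-of-the-envelope illustration of what “weighting” does (the shares below are hypothetical, not any real poll’s numbers):

```python
# Toy post-stratification weighting sketch with hypothetical shares.
# Each respondent in an under-represented group is counted extra so the
# weighted sample matches the assumed shape of the electorate.

def poststrat_weights(sample_shares, population_shares):
    """Weight per group = assumed population share / observed sample share."""
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

# Suppose non-college voters are assumed to be 50 percent of the
# electorate but make up only 40 percent of respondents.
weights = poststrat_weights(
    sample_shares={"college": 0.60, "non_college": 0.40},
    population_shares={"college": 0.50, "non_college": 0.50},
)
# Each non-college respondent now counts 1.25x; each college respondent ~0.83x.
print(weights)
```

Note that the math only works if the assumed population shares are right – which is exactly the point about weights resting on assumptions predicated on past mistakes.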

No poll, no matter how carefully constructed, should ever be seen as an absolute; an indication, yes, but not an absolute. If polls – even multiple polls – show Candidate A two points ahead of Candidate B, that does not mean Candidate A will win. That has nothing to do with the margin of error (which is addressed by multiple polls). It has more to do with common assumptions about turnout and perhaps new forms of response bias we don’t know about yet. (On the other hand, if two even semi-legitimate polls show Candidate A winning by 15 points, he or she very likely will win handily; this is all on the margins.)

Which brings me to my second caveat: Change is more often slow and incremental than sudden and dramatic. Small things happening slowly are worth attending to. Polls do not often pick these up, but deeper conversations with voters can at least create hypotheses ahead of the polls. I have found stories about how most Republicans still support Trump uninteresting. Of course they do; it is more important that the support appears less than it was. On the flip side, there is no question that many Democratic and independent women are deeply upset by the Roe v. Wade decision. The question is whether some of those who would not ordinarily vote in the midterms will turn out and vote as a consequence. Younger, lower propensity voters are not often included in polls, or not included in numbers that would allow analysis of them separately from the aggregate. In a close election, however, they could make a critical difference.

So, if you want to know what will happen, be skeptical but not dismissive of polls, be analytic about what small(ish) things may matter, and be curious about probing whether they will this time. Big upsets don’t come out of nowhere. They are also rare. But they do happen. Here are some things to watch for in polls and in the world outside them.

Small groups and sample sizes: Surprises may be produced by small groups in the electorate, like independent women, young voters, high propensity voters who stay home, low propensity voters who turn out. But an aggregate poll, even with a large sample size, cannot tell you much about groups who may be 10 percent of the electorate.

For example, the poll by Cygnal of the Georgia electorate led off the silly season. This Republican outfit put out a release on their 1200-sample Georgia poll, which included an oversample of 770 African American voters. First, they trumpeted that Republican Governor Brian Kemp led Democrat Stacey Abrams by 50 to 45 percent – actually pretty unimpressive for an incumbent Governor – and they never defined their turnout assumptions, which will matter a great deal in this race. Next, they declared that African American voters are not homogeneous, which really didn’t require a poll, and then declared that a quarter of African Americans under age 35 are supporting Kemp. Valid? Maybe, but maybe not. Even with a sample of 770 African American voters, how many were under age 35? I would guess maybe 150 – and perhaps fewer, up-weighted to about 150. Was that small sample representative of younger African Americans? And who made up the base sample anyway – those who have voted previously, or all African Americans? It may be true that Kemp is doing better than expected among younger African Americans – he has gotten a lot of press lately for not being as opposed to voting rights as other Republicans – but treating the conclusion as a prediction, without adequate information about the sample, seems a bit dubious.
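A rough sanity check on subgroup sizes (assuming, generously, simple random sampling – which, as noted above, no one actually has, and with the under-35 count being my guess, not Cygnal’s disclosure):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion,
    assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# The full oversample of 770 African American voters vs. a guessed
# ~150 respondents under age 35.
moe_full = margin_of_error(770)   # about +/- 3.5 points
moe_young = margin_of_error(150)  # about +/- 8 points
print(f"n=770: +/-{100 * moe_full:.1f} pts; n=150: +/-{100 * moe_young:.1f} pts")
```

An eight-point swing either way on a “quarter of young African Americans” finding is the difference between a real story and noise – before even considering weighting and response bias.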

Turnout assumptions: I have seen very few public polls (actually, I cannot recall one) that specify their assumptions about voter turnout. If turnout is based on self-report (reaching people at random and screening for whether they are likely to vote), the poll is likely representing turnout as both higher than it will be (there is social pressure to say you are likely to vote) and more Democratic (see above on response bias). If pollsters are selecting people based on vote history from a voter file, fine, but then say so – and even then, is the assumption that turnout patterns will look like those of 2018 (a Democratic year), or different? In any case, if information on weighting and turnout is absent, the poll is really hard to read in a meaningful way.

Defining independents: With deeper party polarization, looking at voters who say they are independent is an important task. The thing is, who they are is often volatile, and independents are often undersampled in polls. The image is that independents are people without party predilections, but some call themselves independent because neither party is sufficiently liberal or conservative for them. They may be independent, but they are not up for grabs. Then there are those who dislike both parties because they feel neither represents their moderate or centrist views. They are very important, but their distaste for politics extends to a distaste for polls, so they are often undersampled even though many of them vote. People also float in and out of calling themselves independents depending on how they feel about the parties, which creates exaggerated volatility – unhappy Republicans may say they are independent, making the category more Republican, or vice versa. Mushing all this together, independents are still generally only about 20 to 25 percent of a polling sample. See the above on sample size: a poll of 500, with roughly 100 “independents” of multiple descriptions, won’t say much about them. (Except in some places, like California, where DTS – decline to state – voters can be a constant category.)

Extrapolating from national data: There are new national polls that show the generic matchup between Democrats and Republicans far closer than it was in the spring, even while President Biden’s numbers are low. It’s intriguing, and hints that the midterms may be less of a victory for the Republicans than previously thought. Hints at. Intriguing. I’m on board with both of those but not (yet) with any prediction. There is no reasonable way to extrapolate the national data into a number of House seats or to any particular statewide race. First, the national data may simply represent an even greater separation between “red” and “blue” places than was true even six months ago. Most of the national polls do not provide a time series by region, and the above comment on turnout applies. National data also often inadvertently oversample the coasts because there are more phones per person on the coasts. That New York, New England, and California are approaching political apoplexy matters, but it doesn’t predict what voters will do in Georgia, Ohio, Wisconsin, or Nevada.

So what to do with all this if you are interested in the election? Well, that depends on your vantage point.

If you are a candidate, go figure out how many votes you need to win the election, how many you have already effectively banked by virtue of your party label or prior base, and which additional people – by demographics, geography, and perspective – you need to win, and then go plan your campaign to win them. Whether you are ahead or behind, and by a little or a lot, really doesn’t matter much in doing the intellectual work of how to win. Polling can help you understand your district better, and what people there may want to know about you and your opponent, but the strategic process of what you do with that information matters a lot more than the poll per se, which is only one tool in developing your strategy.

If you are a member of the press, use the polls to guide who you talk to, what you assess, and to enlarge your view of the range of what might happen. I wish you wouldn’t report on them as much, but I recognize that is a losing battle. It’s too easy to report polls. But please ponder the questions that emerge from them: Will Republican voters turn out in droves because they are upset with Biden, or will more than usual stay home because they have new doubts about the MAGA crowd? Is there something happening, whether local or national, that will cause younger people and lower propensity Democrats to turn out? Are the individual candidates and campaigns perceived as interesting or distinctive enough (in ways both positive and negative) to break through whatever is happening nationally? And do voters generally believe they have relevant choices in the district or state on which you are reporting, or are the candidates boring, muting any opportunities for changing turnout dynamics or partisanship?

If you are an activist, well, go get to work. Whether your candidate is ahead or behind, by a little or a lot, door-to-door canvassing to discuss the election matters and has more of a lasting impact than most anything else, even for the next election. Read the polls if you like. But don’t let predictions become a self-fulfilling prophecy, as too often happens during the silly season.

Authenticity

Congratulations to John Fetterman on winning the Democratic nomination in Pennsylvania. And kudos to you for being declared authentic.

Being authentic has long been a positive description in politics and is increasingly rare. What is it and why has John Fetterman won the authenticity award? Check out the dictionary and it means John Fetterman is an original. Not a copy. Not like everyone else.

Well, that really shouldn’t be that special, each of us being unique individuals and all. So why is only John Fetterman special in this way?

I have long believed that voters seeing individual candidate personality is critical to the candidate winning. Voters seem pretty good at getting a read on what candidates are about personally. In focus groups, I have asked questions about what a candidate would be like on a first date, whether, as a neighbor, they would look after your house when you are gone, and other questions to get at what the “guy” is like. People answer these questions easily. They do have that kind of read on people – even people running for office. Candidates who would be too polite or too grabby on that first date, or who, as neighbors, won’t pick up your mail, are less likely to win regardless of their issue positions. Even if campaign ads and messages declare them to be a fighter, if as a neighbor you can’t call on them in an emergency, you are clearly not buying that they will fight for you.

Now, plenty of ads describe candidates as growing up barefoot and poor, or the child of a single mother, or in some other way overcoming the odds just like most people have. But, would they feed your cat when you are away for the weekend?

In the olden days of polling (like in the 1990s), pollsters told candidates what people were worried about and then, on an individual basis, tried to connect those concerns with the candidate’s thinking. In the modern era of independent expenditures, half the time the pollster hasn’t met the candidate, much less derived a sense of what makes them unique – as a person as well as politically. The result is that too much messaging is pat, regurgitated shit about how people deserve X, or at least how some people deserve X, and how the candidate knows they deserve whatever it is because he/she has also overcome the odds. (The overuse of the word deserve is a pet peeve; it is fundamentally about entitlement and not respectful of what people earn.)

Not all candidates have visible tattoos, dislike suits and wear shorts and hoodies like Fetterman. He does provide more to work with than most. But every candidate has some attitudes that don’t fit the mold, or some aspects of their thinking that are original, or a real story about how they became interested in politics, or about how they are a good neighbor (told better by the neighbor, I suspect).

So, as the 2022 cycle gets going, if you want candidates to be deemed “authentic,” suggest they say some things in a way only they would say them. Messages and ads taken from common talking points will just produce an image of your candidate as a typical politician. And, believe me, those guys are never fun on a first date, and while some might say they will feed your cat, they will get busy and forget, and your cat may starve.

Now, in many cases, both the candidates would let the cat starve. Then people make a partisan choice between two cat-killers. Probably not a good year though for Democratic cat-killers. Even if they grew up poor and overcame the odds and therefore know what you deserve.

The Press, the Poll, and the Governor

So, for those who aren’t from Mississippi, here is the state of play: The state is about to overtake Louisiana, if it hasn’t already, as the number one COVID hotspot. Our Republican Governor, Tate Reeves, has been very clear that he will not issue a statewide mask mandate, including in schools, although he does (kinda) encourage vaccinations and allow local mandates. Then there is Dr. Dobbs, our telegenic and media-savvy State Health Officer, who encourages mask-wearing, sporting one at press conferences while standing right next to the naked-faced Governor.

Some of the state press are going ballistic on Reeves. I appreciate our more progressive press – they make it a lot easier to know what’s going on. They clearly care about the crisis we are in – and, perhaps, care more than the Governor does. I fear, however, that they are giving him the upper hand. Some elements of that:

1. Readers of this blog know I get frustrated by bad polling. That is no less true when it is making a point I agree with. Trumpeting an opt-in poll with a non-representative sample (https://www.sunherald.com/news/coronavirus/article253462859.html) really doesn’t help your credibility.

2. Reeves impresses me as very smart. He seems generally well informed but is making ideological decisions I disagree with. He is unwilling to take federal money if it requires even a small state expenditure; he doesn’t believe the state should mandate individual behavior; and he sees his job as running the mechanics of the government rather than leading people toward better behavior. He basically articulated all of those policy-laden precepts in his last press conference, but because he also said one of you was “virtue-signaling,” you gave him a free pass on the rest. Perhaps you took his bait?

3. Y’all seem to love Dr. Dobbs, and he does speak for the science and is far better than the Governor at demonstrating empathy. But it also looks to me like a well-orchestrated dance. He is a state employee – appointed by the Board of Health, although most of its members were appointed or re-appointed by Reeves. Dr. Dobbs took a good long while to address the equity issues in vaccine distribution, and his dance with the Governor serves to limit political opposition to Reeves. Looking at them as some kind of yin and yang fails to lift up other political voices that may be critical of Reeves. Rely on Dobbs for the science, by all means, but maybe give a few column inches to the political opposition as well – like the Mayors, supervisors, and school board Presidents who might just tell you Reeves is making their jobs harder.

The bottom line for me as a reader is that there is a lot I would like to know that I am not hearing about. Reeves is not the worst Republican Governor – that title is a toss-up between his colleagues in Florida and Texas, in my view. But he is also using the polarization of the moment to avoid discussion of some basic issues of governance. While there is a squabble about virtue signaling, he is failing to use resources available to him and defining state government responsibilities as narrowly as he can. If he believes in local decision-making, how do local leaders respond to those policies? At least one enquiring mind would like to know…

Looking at the wrong problem

So, I just got off the phone with an old friend who is on the communications side of political consulting. My friend is giving my former polling colleagues a rough time, and apparently so are others – “suggestions” that are not feasible, organizations that assign them letter grades as they would to school children, clients dismissing the need for research at all.

Now, I have been pretty clear in these pages that I think polling as a methodology no longer works the way people think. It is a rougher measure and can leave important groups out of the equation. It has value but it also has serious limits: It is far less useful than it used to be for prognosticating close elections. Low response rates allow greater risk of response bias and, as a result, sampling is more complex.

Polling risks leaving out constituencies that may be critical to winning – voters who are anti-establishment (or anti-elite) and see polling as an elite or establishment tool, and those who just don’t relate to the political frame as employed. Except perhaps for this last one, none of this is the fault of pollsters, and imposing the extant political frame on swing and low propensity voters who aren’t interested in it is hardly an error unique to pollsters. The Washington political frames to which many voters do not relate are a shared Washington responsibility.

Here are what I think are the actual remedies for better political research by campaigns:

1. More upfront strategic thinking about how to win. There is a plethora of information available for any district or state, including prior election results, demographics and analytics, and two (or more) real candidates with unique strengths and weaknesses. After studying all that, what are the hypothetical ways to win that you need to test? (Chances are there are better methods than polling for choosing which is most likely.)

2. Better analytics and better integration of them strategically. Political analytics got better and better from 2006 through 2012. Then its practitioners started competing on cost and cutting corners on what they did statistically. At the same time, people seemed to think it was a good idea to separate analytics from the process of campaigning, so it was an independent look, not integrated into the campaign process. Both of these developments were unfortunate in my view. Cutting corners made analytics less valuable as a predictor, and the separation from campaigns meant that campaigns did not have the capacity to ask for a sophisticated statistical look at the challenges that were on the table strategically. It’s time to go back to the future on analytics – an invaluable tool that should be guided strategically.

3. Tailored research that answers the strategic questions on the table. In close elections, winning is often on the margins. Hypothetically, maybe your candidate can win by moving 6 percent more Latino voters, losing a particular suburban community by a little less, or finding a way to blame the incumbent for the serious infrastructure problems in a community that usually votes for that person’s party. Strategic analysis and analytics can help you develop these options. There are experiments you can conduct to say which one(s) might help put your candidate over the top. A poll of voters in the aggregate won’t tell you which one will work anyway.

4. Integration of field data into research. Almost any good campaign has a field program in which people go talk to individual voters, including swing voters and lower propensity voters. I don’t want to mess up the open-ended nature of these conversations, which is part of what makes them valuable, but there are ways of capturing quantitative information from them – and that is about the only way you will really hear from genuinely non-partisan and non-political voters, and from those who vote irregularly. You have to start the field program earlier, but that is generally a valuable thing to do for other reasons.

So, yes, there are new challenges in political research. The biggest problem in polling is that you can no longer talk to people at random because they don’t respond at random. Careful polling makes that less of a problem and sloppy polling makes it worse but it is not feasible to eliminate the problem. The problem is the result of caller ID, telemarketing, political polarization, and changing modes of communication. The pollsters did not create the problem.

Generally, pollsters are analytic and political thinkers with a penchant for numbers. Those skill sets are important in the mix of campaign skills. Conversation about methodology is useful. Creativity on how to answer strategic questions is essential. Increasingly, the presence of advanced statistical skills on the team is important. Beating up on the pollsters won’t help to find new and better ways to conduct research.

Polling is Leaving Out Poor People

Those who follow such matters already know that pollsters under-sampled white, non-college voters in 2016. Then, in 2020, Trump voters exhibited greater than average response bias as they were less likely than others in their demographic to respond to polls.

The problems with polling are not only about Trump voters, or about election projection for that matter. The core problem is that some people are less likely to respond to polls. Pollsters “correct” for this by up-weighting those who do respond – counting their responses extra and assuming the respondents represent their demographic. Some groups who are not Trump voters but consistently require up-weighting are low income people, people in minority communities, lower propensity voters, and young people.

Low income people and lower propensity voters (groups that overlap significantly) have always been harder to poll. Some of the difference is behavioral. Low income people are often less available – more likely to work nights, to move frequently, or to use a burner phone without any listing. They may also associate polls with the government, or the media, or other elites – the establishment if you will – and have little interest in unnecessary interaction with those (which is likely part of the problem with Trump voters).

Question wording is also often a problem. If people are asked to choose among response alternatives that do not reflect their views or concerns, they are more likely to terminate the interview. Many polls on COVID vaccination do not include cost as a barrier, assuming that people know the vaccine is free although free health care is outside the experience of most people, particularly those who are lower income.

Pollsters’ increasing use of online panels may be making the problem of getting a representative sample of low income people worse. Such panels are recruited in advance and demographically “balanced” to represent the population.

The first problem is that rather than eliminating response biases, such panels simply inject bias earlier in the process, since the panel consists of people who have agreed in advance to be polled.

Second, online panels eliminate some low income people from polling samples entirely. In 2019, 86.6 percent of households had some form of internet access, including 72 percent with smartphones. But the percentage varies by state, ethnicity, and income, according to the ACS (https://nces.ed.gov/programs/digest/d17/tables/dt17_702.60.asp). The Census Bureau has also been clear about the problems of having to weight census data in 2020 given the low response rates of low income people (https://www.census.gov/newsroom/blogs/research-matters/2020/09/pandemic-affect-survey-response.html).

Finally, if panel recruitment is by phone or mail, it may be skipping those who are more transient or who do not respond to such calls for all the reasons described above. And even with pre-recruitment, most panels are up-weighting low income people because they are not responding at the same rate as other panelists even when the recruitment is more balanced.

Does the exclusion of low income people from polls matter? Superficially it may not matter very much to political campaign strategists, because they are interested in likely voters, and willingness to be polled and propensity to vote are related (per Pew Research studies). However, the relative absence of low income voters may misinform the campaign about what is on people’s minds, especially in lower income states and districts. And if the campaign is considering investment in organizing low income communities, the exclusion reduces the potential for that strategy.

Not-for-profit organizations that wish to provide services to low income people should be very careful about relying on polls. Research has shown large response biases in health care research (https://link.springer.com/article/10.1007/s11606-020-05677-6), for example. Collecting data on site or in person may be far more valuable, and personal interviews are becoming feasible once again.

Most of the publicly released polls on issues like COVID vaccination are reporting data by income. In some cases, the income categories are cruder than they should be (e.g. below $40K as the lowest). In virtually all public surveys, the data are weighted but information on the degree of weighting applied is unavailable. If, as in Mississippi, nearly 20 percent of the population of interest is below the poverty line, how many were interviewed in a sample of 500 before weighting? If there were only 50, that wasn’t a meaningful sample from which to weight.
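One standard way to put a number on what heavy weighting costs – though public releases rarely report it – is Kish’s “effective sample size,” sketched here with hypothetical weights:

```python
# Kish effective sample size: n_eff = (sum of weights)^2 / (sum of squared weights).
# The more unequal the weights, the fewer "effective" interviews you really have.

def effective_sample_size(weights):
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Hypothetical poll of 500: only 50 low income respondents, up-weighted 2x to
# stand in for 20 percent of the population; the other 450 down-weighted to fit.
weights = [2.0] * 50 + [400 / 450] * 450
n_eff = effective_sample_size(weights)
print(round(n_eff))  # 450 -- even modest corrective weighting discards interviews
```

And this measures only the loss of precision; it says nothing about whether those 50 respondents were representative of the group they stand in for, which is the deeper problem.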

Every consumer of polls should know what the unweighted data look like. And every consumer of polls should be a little skeptical of results in groups that required significant weighting or were unbalanced demographically without it. If your interest is in a group that is up-weighted, like lower income people, you may have learned less than you think.

None of this should suggest that such polls are without value. But they shouldn’t be seen as all-encompassing. There is no substitute for conversation, and articles like this one (https://www.nytimes.com/2021/04/30/health/covid-vaccine-hesitancy-white-republican.html) may be more useful and informative than some of the published online panel data in understanding what lower income communities are thinking and feeling on issues of concern.

There are other groups who are under- or over-represented in polls. Undersampling low income people seems both egregious and important at this time. But, as I have written before, the core problems in sampling call for new research methodologies as well as greater care by pollsters and greater caution from those who consume data.

Problems with polling: Redux

I haven’t posted since April because I had little to contribute to what I saw as the two overarching goals of the last six months: electing Joe Biden and developing a COVID vaccine. I did my civic duty toward the first and had nothing to contribute to the second, so it seemed a time to pause. Now I feel my free speech is restored, and for a moment at least there is some attention to one of my favorite topics – how we need to do research for campaigns differently. I have covered much of this previously, but here is a redux on the problems with polling, with some updating for 2020.

1. Samples are not random. If you ever took an intro stats course, it grounded most statistics in the need for a random sample. That means that everyone in the population of interest (e.g. people who voted November 3) has an equal probability of being included in the sample.

The margin of error presumes a random sample. How accurate a picture you get of the array of views in a population depends on the size of the sample, the breadth of the views in the population, and the randomness of the sample.

The intuitive example: Imagine a bowl of minestrone soup. If you take a small spoonful, you may miss the kidney beans. The larger the spoonful (or sample) the more likely you are to taste all the ingredients. The size of the spoon is important but not the size of the bowl. But if you are tasting cream of tomato soup, you know how it tastes with a smaller spoon. America is definitely more like minestrone than cream of tomato.
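The spoon-versus-bowl point can be checked against the textbook formula: the margin of error depends on the sample size, and the population size enters only through a finite-population correction that is negligible for any realistic electorate.

```python
import math

def moe(n, N=None, z=1.96):
    """95% margin of error for a proportion near 50% in a simple random
    sample; N applies the finite-population correction (the bowl size)."""
    base = z * math.sqrt(0.25 / n)
    if N is None:
        return base
    return base * math.sqrt((N - n) / (N - 1))

# The same spoonful (n = 1,000) from a city-sized bowl and a national one:
city = moe(1000, N=100_000)         # about 3.08 points
country = moe(1000, N=250_000_000)  # about 3.10 points -- the bowl barely matters
print(round(100 * city, 2), round(100 * country, 2))
```

The spoon size (n) is what drives precision; whether the bowl holds a city or a country changes the answer by a few hundredths of a point.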

The problem with polling has little to do with the margin of error, which remains unchanged. The problem is that pollsters have not used random samples for a generation. The advent of caller ID and people’s annoying proclivity to decline calls from unknown numbers (a proclivity I share), some changes in phone technology with fiber optics – including a proliferation of numbers that are not geographically grounded – and an explosion of polls and surveys (How was your last stay at a Hilton?) have made the act of sharing your opinion pretty unspecial.

Not to worry, we pollsters said. Samples can still be representative.

2. The problem with “representative” samples. A representative sample is one constructed to meet the demographics and partisanship of the population of interest (e.g. voters in a state) in order to measure the attitudes of that representative sample.

The researcher “corrects” the data through a variety of techniques, principally stratified samples and weighting. A stratified sample separates out particular groups and samples them separately. Examples include cluster samples, which stratify by geography, and age stratified samples, which use a separate sample for young people, who are hard to reach.

Professional pollsters usually sample from “modeled” files that tell how many likely voters are in each group and their likely partisanship. They up-weight – that is, count extra – the people they are short of. They may up-weight conservative voters without college experience, for example, to keep both demographics and partisanship in line with the model for that state or population. Virtually every poll you see has weighted the data to presumptions about demographics and partisanship.

Back to the minestrone soup example: Samples are drawn and weighted according to the recipe developed before the poll is conducted. We presume the soup has a set quantity of kidney beans because that’s what the recipe says. But voters don’t follow the recipe – they add all kinds of spices on their own. Pollsters also get in a rut on who will vote – failing to stir the soup before tasting it.

Most of the time, though, the assumptions are right. The likely voters vote and the unlikely voters do not, and partisanship reflects the modeling done the year before. But disruptive events happen. In 1998 in Minnesota, most polls (including my own) were wrong because unlikely voters participated and turnout was unexpectedly high, particularly in Anoka County, home of Jesse Ventura, who became Governor that year. That phenomenon is parallel to the Trump factor in 2016 and even more so in 2020. Unexpected people voted in unexpected numbers. If the polls are right in 2022, as they generally were in 2018, it is not because the problem is fixed but because conventional wisdom is right again, which would be a relief to more than pollsters, I expect.

3. What’s next. I hope part of what’s next is a different approach to research. If campaigns and their allies break down the core questions they want to answer, they will discover that there is a far bigger and more varied toolbox of research techniques available to them. The press could also find more interesting things to write about that help elucidate attitudes rather than predict behavior.

Analytics has a great deal more to offer. That is especially so if analytics practitioners become more interested in possibilities rather than merely assigning probabilities. Analytics has become too much like polling in resting on assumptions. Practitioners have shrunk their samples and traded classical statistics for solely Bayesian models.

Please bear with me for another few sentences on that: classical statistics make fewer assumptions; Bayesian statistics measure against assumptions. When I was in grad school (back when Jimmy Carter was President – a Democrat from Georgia!), people made fun of Bayesian models saying it was like looking for a horse, finding a donkey, and concluding it was a mule. We will never collect or analyze data the way we did in the 1970s and 80s, but some things do come around again.
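The horse-to-mule joke can be made concrete. Below, a classical estimate is just the sample proportion, while a Bayesian estimate starts from a prior assumption (here, a strong prior that the race is near 50/50) and is pulled toward it. All numbers are invented for illustration; this is the standard Beta-Binomial posterior mean, not anything from the post.

```python
# Classical vs. Bayesian estimate of a candidate's support, made-up data.
successes, trials = 30, 100        # 30 of 100 respondents back the candidate

# Classical: the data speak for themselves.
classical_estimate = successes / trials                      # 0.30

# Bayesian: a strong Beta(50, 50) prior encodes the assumption
# "this race is always close"; the posterior mean blends prior and data.
prior_a, prior_b = 50, 50
bayes_estimate = (prior_a + successes) / (prior_a + prior_b + trials)  # 0.40

# Looking for a horse (the prior), finding a donkey (the data),
# concluding it's a mule (the posterior, stuck in between).
```

With weaker priors the two estimates converge; the trouble the post points at is analysts who keep the strong prior and shrink the sample.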

It would also be helpful if institutional players were less wedded to spreadsheets that line up races by the simple probability of winning and instead helped look for the unexpected threats and opportunities. In those years when everything is as expected, there are fewer of those. But upset wins are always constructed from what is different, special, unusual, and unexpected in the context of candidate and moment. Frankly, finding those is what has always interested me most, because that is where change comes from.

More on all of this in the weeks and months ahead, and more on all the less wonky things I plan to think about: Democrats, the south, shifting party alignments, economic messaging, and my new home state of Mississippi. I am glad to be writing again, now that I feel more matters in this world than just Joe Biden and vaccines.

Some questions for post-Labor Day Polls

I suspect we will see a spate of new polls fielding after Labor Day.  I am hoping they ask some questions beyond the horse race that tell us more about what voters are thinking around the Democratic presidential contest.  Here are some suggestions (in no particular order):

Candidate Qualities 

Here are some qualities people might look for in the candidate they ultimately support for President.  On a scale of 1 to 7, please tell me how important each one is to you, with a 1 meaning not important at all and a 7 meaning it is the most important quality.  (READ AND RANDOMIZE)

  • Can beat Trump in November
  • Shows compassion for people
  • Knows what they want to do as President
  • Would bring the country together
  • Would make significant policy changes
  • Has a new approach to governing
  • Will protect individual rights and freedoms
  • Will promote economic opportunity
  • Has the wisdom of experience
  • Will advance equality and anti-racism

Which of these qualities – or some other quality – is most important to you of all?

Electability

We know voters care about whether a candidate can beat Trump but we don’t know what qualities make a candidate stronger in their views.  How about a couple questions, like:

How important is each of these in telling you a candidate can defeat Trump in November, using a scale of 1 to 7 with a 1 meaning it is not important at all and a 7 meaning it is the most important quality?  (READ AND ROTATE)

  • Tough and willing to fight
  • Has moderate issue positions
  • Popularity with Trump voters
  • Inspires young people
  • Relates to diverse communities
  • Leads Trump in the polls
  • Likeable and appealing

Is there another quality that is important in telling you a candidate can win?
Thinking about your friends and neighbors, if the Democratic candidate is a woman, will that make them more or less likely to turn out and support that candidate in November, or won’t it make any difference to them?

Thinking about your friends and neighbors, if the Democratic candidate is over age 75, will that make them more or less likely to turn out and support that candidate in November, or won’t it make any difference to them?

Thinking about your friends and neighbors who are uncomfortable with Trump, do you think they are looking more for a return to the pre-Trump years or more for new policies that will bring change?

Issues

What issues are most important to you in the 2020 election? (Open-ended, multiple response)

If we elect a Democratic president in 2020, which of the following should be their top priority in their first term: (READ AND ROTATE)

  • Climate change
  • Affordable health care
  • Access to post-secondary education
  • Infrastructure like roads and bridges
  • Higher wages
  • Immigration reform
  • Criminal justice reform
  • Other (specify)

When it comes to health care, which would do more to expand access to quality affordable care – (ROTATE)  a public option in which voters can choose government-administered insurance OR Medicare for all in which everyone is in a government-administered insurance program (with response options for neither as well as don’t know)?

(IF CHOICE) Would that system be much better, somewhat better, somewhat worse, or much worse than the current system?

If there is a Democratic president, how likely is it that the proposal will become law in the next five years – very likely, somewhat likely, not very likely, or not at all likely?

The Horse Race Question

I have been concerned that asking about roughly 20 candidates in phone polls flattens choices because people can only hold seven plus or minus two items in short-term memory.  Consider asking the horse race in groups of 5 to 7 candidates – preferably randomizing the sets, although it is also tempting to ask the top 7 together.  Then add a question like:

You indicated candidates A, B, and C were your top choices within the groups I gave you.  Which of these is your first choice among all the candidates?    

Is that the candidate you would most like to see as President or the candidate you feel can best win?  (Code for volunteered both)
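One way to build those randomized sets in practice is to shuffle the full list and cut it into chunks of at most 7. This is a sketch of the grouping step only, with placeholder candidate names; the two-stage follow-up questions above would then be asked of each respondent's within-group picks.

```python
import random

# Sketch: split a long candidate list into randomized groups of at most 7,
# respecting the "seven plus or minus two" short-term-memory constraint.
candidates = [f"Candidate {i}" for i in range(1, 21)]   # 20 placeholder names

def randomized_groups(names, group_size=7, seed=None):
    """Shuffle the candidate list and cut it into consecutive groups."""
    rng = random.Random(seed)
    shuffled = names[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + group_size]
            for i in range(0, len(shuffled), group_size)]

groups = randomized_groups(candidates, group_size=7)
# 20 candidates -> three groups of 7, 7, and 6
```

Each respondent would see a fresh shuffle, so no candidate is systematically buried at the end of a long read list.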

Vote History

Most public polls are asking how likely people are to vote in the primary or caucus in their state.  Consider asking whether they voted in the 2016 contest between Hillary Clinton and Bernie Sanders and for whom they voted.

The question allows analysis of how large a primary electorate you are polling and what the standing is among those most likely of all to participate, as they have done so historically.  It will also suggest where the support for these two candidates is going. 

Demographics

Basic demographics are fine, but also consider asking whether they live in a county that supported Clinton or Trump in 2016, as these voters may have different perspectives from each other.

# # #

Questions like these would say more about what voters are looking for in the next president (other than that he or she is not Trump).  Crosstabs of questions like these by candidate preference might also provide more insight as to why voters are making the initial choices they are, and how the contest may evolve.

Buying a Ticket Out of Iowa – Online

The Iowa caucuses help frame who is a contender for the nomination, which is especially important in such a large Democratic field.  Historically, there are “three tickets out of Iowa.” Only once in recent political history has anyone become President without a top-three finish in the Iowa caucuses (although a whole lot of precedent-breaking is going on, including early California voting right after the Iowa caucus). 

One element of precedent-breaking that is certain: online communications matter more.  Yes, Iowa is an older electorate and people expect a strong field organization.  But it is also heavily wired and online engagement is starting high and growing (although the number one search name nationally in the last 30 days was not a candidate but Nipsey Hussle – I checked.)

Here’s one approach to researching and defining your target online:

1.   How many voters do you need?  First, decide the size of your initial target.  There are complexities in that basic calculus.  Democrats have not always released raw totals as opposed to delegate percentages so history is limited and the number of caucus attendees per delegate varies across the state.  Additionally, 10 percent of delegates this year will be chosen by a mobile phone caucus held before the in-person caucus event and campaigns will ultimately need to have separate goals for each stage.

As a starting point (unless your campaign is Sanders or Biden, both of which have higher bars), I recommend finding 75K voters – reasonably distributed across the state – who become committed to your candidacy.  The highest Democratic caucus turnout was 239K in 2008.  Even with the mobile caucus, 300K this year would be a stretch, as it represents nearly half the registered Democrats.  Thus, 75K should produce at least a 25 percent popular finish and meet the viability threshold everywhere; it is also competitive with Sanders, who won less than half of 2016’s 171K participants, not all of whom will either stick with him or return to the caucuses.
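The arithmetic behind the 75K target can be checked in a few lines. The only inputs are the turnout figures quoted above; the "stretch" turnout scenario is the post's, not a projection.

```python
# Sanity-checking the 75K caucus target with the figures from the post.
target = 75_000
record_2008 = 239_000        # highest Democratic caucus turnout
stretch_2020 = 300_000       # a stretch scenario even with the mobile caucus
sanders_2016_max = 171_000 / 2   # Sanders won less than half of 171K

# Worst case for the target: at the stretch turnout, 75K is still 25%.
share_at_stretch = target / stretch_2020         # 0.25

# And at any lower turnout the share only rises.
share_at_record = target / record_2008           # ~0.31
```

So the target is robust to the turnout question: it clears a quarter of the vote even in the highest-turnout scenario, and it sits within reach of Sanders's 2016 ceiling.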

2.  How to find your vote.   Every campaign will individually target repeat caucus attendees but well under 100K participated in both 2008 and 2016.  So talk to the repeats and monitor their choices – at least until they stop answering their phone (although you should have their emails by then).  

The next layer is people who will turn out in the caucuses because they are excited about one of the current candidates – that is the factor that ultimately expands the caucus universe; people who are caucusing not out of habit, civic duty, or party loyalty but because they really support someone.  Finding your own unique base also means less immediate competition for those voters and time to engage and mobilize them.  That’s where online strategies help – in finding the people with whom your candidacy resonates enough to draw their participation and who are not prior caucus attendees (whom everyone will target in field, mail and online).

3.  Defining your target online.  As soon as interest from outside the regular caucus universe is in the thousands – and preferably close to 5,000 – your internet team can do “look alike” modeling to find people whose internet behavior is similar to theirs.  Those who have opted in to your emails, attended an event, contributed money, or simply visited your web site tell the campaign in online terms, rather than simply demographics, whom it is attracting.  The most likely next set are people who behave like them in terms of their online habits.  Exclude from that modeling regular caucus attendees so you are finding more people who are attracted specifically to your campaign. 
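A toy version of the look-alike step: represent each voter by a vector of online-behavior features, average the opted-in supporters into a "seed" profile, exclude the regular caucus attendees everyone else is chasing, and rank the rest by similarity. Every vector, name, and feature below is invented; real look-alike modeling runs on ad-platform data at vastly larger scale.

```python
import math

# Hypothetical online-behavior feature vectors (e.g., content affinities).
seeds = [[1.0, 0.8, 0.1],            # opted-in supporters: the seed audience
         [0.9, 1.0, 0.0]]
pool = {"voter_a": [0.95, 0.9, 0.05],    # behaves like the seeds
        "voter_b": [0.10, 0.0, 1.00],    # does not
        "voter_c": [0.80, 0.7, 0.20]}
regular_attendees = {"voter_c"}          # excluded: everyone targets them

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Average the seed supporters into one profile, then score the pool.
centroid = [sum(col) / len(seeds) for col in zip(*seeds)]
scores = {name: cosine(vec, centroid)
          for name, vec in pool.items() if name not in regular_attendees}
best_prospect = max(scores, key=scores.get)
```

The exclusion set is doing real strategic work here: it keeps the model pointed at people attracted specifically to this campaign rather than at habitual caucus-goers.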

Add to the look alike model message-driven targets through affinity targets or search terms.  If your candidate is a veteran, people who search for veterans benefits online can receive an ad about your candidate.  Or if your candidate has been a leader on climate change, those who search on that issue should hear about it.  If your candidate just announced a student loan policy initiative, perhaps it is time to buy “student loan” as a search term. Such a candidate could use search to drive voters to a web site – and use look alike modeling to deliver ad content.

As voters engage, they will help refine your model and, if they opt in, they become part of your target.

4.  Adapt your message for online.  The expression of the candidate’s message and narrative is different online than it would be in television or a speech. A 30-second television ad buys emotional impact, especially since those who watch television will see it 20 times.  Online ads are more about starting a conversation – piquing curiosity first rather than creating a dramatic moment.   Klobuchar’s recent video promotes her name – and also that she is smart, funny, practical and a mother.  That is not her full message but it is an introduction.

Internet engagement is slower than television impact.  Start now to build your narrative and the process will tell you a lot. 

Your narrative will likely include three elements of message by the end:  (1) that yours is the right candidate to take on Trump – because of their fighting personality, because of elements of the contrast, or because a lot of the same voters like them (although be careful there); (2) that they have an optimistic notion of what the future looks like – while it is true that we are all going to die, that doesn’t get people to want to talk to you more; and (3) personal intangibles that meet the moment – voters can’t tell you what these are yet, but internet testing and modeling may help you figure it out. 

Internet response and the results of your canvassing data stream can replace many traditional functions of polling because they can tell you who is attracted to a candidacy, and some well-placed questions in the canvass data stream (or in a brand lift survey) can tell you why.  

One question all this won’t answer is who is ahead right now.  However, until we can say who the electorate is, that’s not very answerable or interesting.  And 75K committed but geographically dispersed supporters put you in the running.

5.  Do you need a pollster?  You certainly need someone in your campaign whose job is to explore what voters may be thinking, listen to them, figure out how to reach out to them, and quantify their response.  That is the pollster’s traditional role: to focus on voters rather than the news cycle or Twittersphere and thus to save campaigns from isolating themselves within their own bubble.  Your campaign needs that regardless of how research and communication techniques evolve.   

Good luck!