This week five major polling firms released a statement on “Revisiting Polling for 2021 and Beyond,” which you can find here. Friends, former clients, and readers of this blog have asked me what I thought of it. This post answers that question without going behind anyone’s back, especially since I applaud most of it. The five pollsters are all former colleagues; some are also friends; and they include some of the researchers I respect the most. (These are overlapping, not mutually exclusive, categories.)
First, I thought it was thoughtful, analytic, reflective, and productive. I found it useful and interesting that unexpected Republican turnout contributed to the problem but did not fully account for it. I agree completely that presenting results with a range of scenarios – different turnout levels, for example – would be productive. I acknowledge that I tried to do that a few years back and found that clients adopted the optimistic scenario as the “real” one. Further, both clients and the powers-that-be appreciate expressions of certainty, even when none exists. A group effort to present results as a range may be more productive than an individual one.
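To make the scenario idea concrete, here is a minimal sketch of what presenting a result as a range of turnout scenarios might look like. Every number below is invented for illustration; no real poll or model is being reproduced.

```python
# Hypothetical sketch: report a poll as a range across turnout scenarios
# rather than a single point estimate. All figures are invented.

def projected_share(support_by_group, turnout_by_group):
    """Weight each group's candidate support by its assumed turnout."""
    total_votes = sum(turnout_by_group.values())
    return sum(support_by_group[g] * turnout_by_group[g]
               for g in support_by_group) / total_votes

# Invented support for the candidate within each partisan group
support = {"dem": 0.95, "rep": 0.05, "ind": 0.50}

# Three invented turnout scenarios (millions of voters per group)
scenarios = {
    "low GOP turnout":  {"dem": 3.0, "rep": 2.4, "ind": 1.6},
    "expected":         {"dem": 3.0, "rep": 2.7, "ind": 1.6},
    "high GOP turnout": {"dem": 3.0, "rep": 3.1, "ind": 1.6},
}

for name, turnout in scenarios.items():
    print(f"{name}: {projected_share(support, turnout):.1%}")
```

With these made-up numbers, the same poll reads as roughly 53.9 percent under low Republican turnout but 49.4 percent under high Republican turnout – a range that spans winning and losing, which is exactly the uncertainty a single topline hides.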
Second, I welcomed the discussion of weighting procedures and the use of analytic modeling in polling. In the old days, polling used random samples. The margin of error describes the statistical uncertainty of a true random sample, but that is not how virtually any pollster samples these days. Instead, pollsters weight the data to presumptions about the electorate – often well-researched and well-grounded presumptions, but presumptions nonetheless. Apparently many of these were too optimistic on the Democratic side. I would also hope for greater transparency in identifying those presumptions in the future.
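For readers who want the arithmetic: the familiar “plus or minus” figure comes from the textbook formula for a simple random sample, shown below. The point of the paragraph above is that this formula stops describing the real error once the sample is weighted to presumptions.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random
    sample of size n: z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person simple random sample at a 50/50 split:
# roughly +/- 3.1 percentage points
print(f"+/- {margin_of_error(0.5, 1000):.1%}")
```

That +/- 3.1 points is only the sampling error of a genuinely random sample; it says nothing about error introduced by weighting assumptions, which is where recent misses largely came from.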
Third, the use of modeling to ground the sample in base attitudes and partisanship as well as demographics is important. If analytics says 40 percent of the electorate in question tilts Republican, then the sample should too. The more sophisticated and accurate the modeling is, the better grounded the polling will be, and the better able it will be to show change and to relate other attitudes to those grounded in the modeling. Using the modeling properly requires certain sampling and calling protocols, however, that were not covered in the memo. Proper alignment with modeling would, for example, have made partisan bias due to COVID behavior extremely unlikely. Modeling, however, includes a “mushy middle” of people about whom there is uncertainty. They are in a modeling middle, not a middle in reality, and even when polling and modeling match, that can be a source of error. Modeling, too, needs to be more transparent about its own level of error, and more politically astute about what is modeled and how.
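The mechanics of aligning a sample with a modeled target can be sketched very simply. Assuming the 40-percent-Republican example above, each respondent in a partisan group is up- or down-weighted so the weighted sample matches the model; the raw sample mix and targets below are invented for illustration.

```python
# Hypothetical sketch of weighting a raw sample to a modeled partisan
# target. If the model says 40% of the electorate tilts Republican,
# Republican respondents are weighted up until the sample agrees.
# All shares are invented.

def partisan_weights(sample_shares, target_shares):
    """Weight for each group = modeled target share / raw sample share."""
    return {g: target_shares[g] / sample_shares[g] for g in sample_shares}

raw    = {"rep": 0.34, "dem": 0.40, "ind": 0.26}  # invented raw sample mix
target = {"rep": 0.40, "dem": 0.36, "ind": 0.24}  # invented modeled electorate

for group, w in partisan_weights(raw, target).items():
    print(f"{group}: weight {w:.2f}")
```

Note what this buys and what it costs: the weighted sample now matches the model exactly, so any error in the modeled 40 percent flows straight into the poll – which is why the model’s own level of error needs to be stated, not assumed away.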
Finally, and perhaps most importantly, I appreciated the opening up of the discussion to analytics practitioners and others outside of polling. In fact, I believe the resolution of “the polling problem” is outside of polling. The change in sample frame from random to weighted “representative” samples – forced by response rates – means polling will continue to rely on presumptions and will not again provide accuracy within the margin of error, except when the presumptions are correct.
The resolution, in my view, is a great deal more clarity about what the research questions are and a lot more creativity in how to answer them. I agree with my former colleagues that polling remains an important element of political campaigns. It should not, in my view, be the only or perhaps even the dominant methodology employed. There is an emerging array of methodologies and unlimited potential for experimental design. Some are advances in projecting results and others help get at underlying attitudes and message development. Perhaps there needs to be some separation of research that fulfills those goals.
There should also be a new attitude of listening to voters rather than approaching them exclusively with an ivory tower sense of distance. People will usually tell you what they think if they think you really want to know. Analytics can do a lot more to help win elections, but analytics practitioners need to understand their own limitations too. And pollsters often ask questions in ways that are obtuse, at best, beyond the Beltway (a phrase that is meaningless to many). New ways of listening and new qualitative techniques are as important in understanding the electorate as are fixes in projections.
Consumers of polls need to understand both their value and their limitations. Elected officials certainly express more skepticism about the “horse race” number these days, but that skepticism should continue when their pollsters tell them they have 52 percent of the vote with their opponent at 48 percent. That does not mean they will win, and the why of it all – what voters are thinking and feeling about their own lives – is critical too.
I wish the media would stop treating polls as a central story about voters and the election. Dueling polls are much less interesting than dueling candidates, or ideas, or constituencies. And if you must cover polls, please do so in a way that is more discerning about polling quality, and far more transparent about how the poll was conducted and weighted, and how that introduces potential bias. It always does.
One thought on “Thoughts on “Revisiting Polling””
Brilliant. There are so many good things here but my favorite is “modeling, too, needs to be transparent about its own level of error.” In 2018 I would say to the kids at For Our Future that there should be +/- on models and they would look at me like “you poor dear.”