Will Magnolia Tribune Use Fake Polls to Create Fake News?

I hope not. But watch for it. Because the Magnolia Tribune is polling its subscribers on issues on which it is taking a position. The questions are biased and the sample is hardly random. If the purpose is to probe subscriber views, that’s their own business. If the data are presented as more than that, it will be fake news.

I used to be a professional political pollster because I care about what people are thinking and feeling. It’s important in political campaigns and in understanding public dialog. I subscribe to the Magnolia Tribune because it tends to telegraph what I believe will be the right-wing messages in Mississippi this year, and it is useful to know what those are. I have learned from the Magnolia Tribune that the right will attack Medicaid expansion as not helping the poor, based on biased research from states that charged Medicaid recipients a premium, with the result that lower-income people in those states dropped out. More recently, I have learned they will argue that more state help for schools will force up local property taxes, which is not necessarily true, especially as in lower-income districts it will bring in more Title I federal money. Finally, I have learned they will argue that if the state spends money on its Capital City, it should be able to overrule local leadership, although I suspect people in Tishomingo County want self-government despite the money the state (and the feds) spend there. Mississippi has a long history of strong local and county government.

Now the Magnolia Tribune is conducting a “poll” of subscribers on these and other issues. The questions follow a paragraph arguing one side and then provide a button so subscribers can express their opinions. Nothing wrong with that if the purpose is to see whether active subscribers share the editorial opinions of the Magnolia Tribune. (I don’t share them, but it won’t surprise me to learn that most subscribers do.) If the results are presented as a poll of public attitudes, representing a broader population than those who answered it, it will be fake news.

Polls that mirror public or voter attitudes are much harder to conduct than in the past because response rates are low. Good pollsters reach out repeatedly to try to raise the response rate, and they try extra hard to reach those who are hardest to reach – young people, people of color, and people in the political middle. They then count the respondents they do reach from harder-to-reach groups extra by “upweighting” their responses. There are problems with these procedures too, as I have written about in this space, but at least it is an honest effort to be representative. I believe the Mississippi Today poll early this year was an honest effort, although an imperfect one. Pretending self-selected subscribers are representative of anything else is not an honest effort at a professional poll. I hope the Magnolia Tribune does not present its results as more than they are.

I haven’t seen any in-depth recent polls that address the issues at hand, but I suspect the following is still true: Most voters are not policy wonks. They want good schools because having them is good for kids and for the Mississippi economy. If only they had political leaders who would go about that effort honestly. If we are all lucky, they will get them. They are apparently pretty clear they don’t have such leaders now.

The midterms, prophecy, and blood sacrifice

I am just back from two weeks in Greece. A visit to the cradle of democracy and contemplation of events Before the Common Era provide perspective. Besides, Greece is beautiful, and retired people get to travel in October. But so much back here is messier now than when I left.

Despite all the polls, analytics, and forecasts, I think it is unwise to be too confident that any of us know what will happen in 11 days. A lot is close; in the last few election cycles, close polling has presaged a wave in one direction or the other, and the trend over the last couple of weeks has not been good for the Democrats. But the past is an imperfect predictor of the future, or even of the present. I am concerned, also, that such prophecies become self-fulfilling, creating rather than measuring momentum. Past performance is a useful predictor in targeting as well, but it does seem to me a bit overdone. Upsets do happen as a result of candidates or chemistry. The first U.S. Senate race I polled for was Paul Wellstone’s in 1990, back when I was too new and naive to understand he couldn’t win. (For those who do not remember, he did indeed win.)

So, having learned from the oracle at Delphi how to be properly ambiguous, my prediction is JUSTICE WILL PREVAIL NOT LOSE GROUND NOW. The meaning of that depends on how you see justice and on whether you place the comma before or after the word “not.” Thus it’s correct – if interpreted properly.

If this is a wave election, there will be blood. It seems rather likely that many will call for a blood sacrifice of the pollsters. I do not think that will work any better than the blood sacrifices of pre-classical, pre-democratic Greece. True, if you sacrifice animals or even people after an earthquake, you are unlikely to have another earthquake right away, but that may well have nothing to do with the sacrifice.

Now, I have been very clear in this blog and to anyone who asks that I believe people need to change and expand their research protocols. Polling is hardly the only form of research available, and it does not work the way it used to – or the way people think it does. It also looks at the aggregate, which is less useful in the internet age; it encourages aggregate media like TV and lessens the emphasis on organizing on the ground or through internet networks. There is utility in knowing aggregate attitudes, but as an early step in a strategic process that now, in my view, over-relies on polling.

But the problems with polling should not swamp an examination of the problems with campaigning, which seems far less connected to people than in times past. And the media’s coverage of politics seems highly problematic and often destructive of the democratic process. It emphasizes polarization for the drama, forecasting and predictions for their ease, and in the process makes change, creativity, and conversation with the middle more difficult for everyone. The middle, which is bigger than some think and includes soft partisans, is increasingly non-participatory in polls and in reality, which also makes the polarization worse.

So, yes, we need better ways of doing research. But we also need a different attitude about listening to people and their views, more individual contact, and less nationalization. And far less forecasting, which does not, as far as I can see, contribute much to the dialog and which risks creating a conversation from the top down that alters results from the bottom up. Besides, even if you sacrifice the pollsters, it won’t affect the timing of the next earthquake.

JUSTICE WILL PREVAIL NOT LOSE GROUND NOW

Entering polling’s silly season…

Soon there will be a plethora of “horse race” polls in various races and nationally, likely showing divergent results. That is partly a seasonal phenomenon, but the one-two punch of the Supreme Court decision on Roe v. Wade and the compelling hearings by the House Select Committee on the January 6 Attack has made the mid-terms more interesting and more contested. Plus, Republicans have nominated some truly dreadful candidates.

On the flip side, President Biden’s approval is low and the economy is perceived as weak. The last time there was a mid-term election with rising inflation (although rising less steeply than now) and an unpopular Democratic President was 1978. In that year, which almost no one under age 50 remembers, Republicans picked up three Senate seats and 15 House seats.

There are many differences between 1978 and 2022. So this year is looking interesting, but we cannot know what will happen based on polling alone, or even mostly. Here are two caveats on polling and then some things to watch for since polling isn’t going away.

The first caveat should be familiar to anyone who has followed this blog: No one is polling random samples. Polling simply doesn’t work the way it used to. The rule for a random sample is that everyone in the population of interest – people who will vote on November 8, 2022 – has an equal chance of being included in the poll. Caller ID means no one has been able to do that since about 1990. Instead, pollsters mimic the electorate and aim for representative samples. Widely swinging turnout, extreme difficulties reaching some demographics, and response bias have made “representativeness” more and more difficult. Response bias is not unitary. It combines distrust of polls, distrust of the media, the desire to be seen as an individual rather than part of a labeled aggregate, the quality of the polling questions (which may cause terminations), and varying interest in politics. Response bias is greater among conservative voters than liberal voters but is greatest of all in the middle of the ideological spectrum, which is often the most interesting. Pollsters correct for these problems by “weighting” the data. Those weights rely on assumptions about the shape of the electorate, predicated in part on mistakes of the past. There is no correction for “new” errors until after they have happened.
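To make the mechanics concrete, here is a toy simulation (invented numbers throughout) of differential response bias: the electorate is an even 50/50, but one side answers the phone at half the other’s rate, so the raw sample skews. Note that the “correction” works only because we assume the very split the poll was supposed to measure; real pollsters weight on correlated variables like party registration and demographics rather than the vote itself, but the arithmetic is the same.

```python
import random

random.seed(0)

# Invented electorate: exactly 50% support A, 50% support B.
population = ["A"] * 50_000 + ["B"] * 50_000

# Assumed differential response: B supporters answer half as often.
response_rate = {"A": 0.10, "B": 0.05}

sample = [v for v in population if random.random() < response_rate[v]]
raw_a = sample.count("A") / len(sample)
print(f"raw sample: {raw_a:.1%} A")  # roughly 67% A in a 50/50 electorate

# Weighting pulls the estimate back only if the assumed 50/50 split
# is right -- the assumption does the work, not the data.
assumed = {"A": 0.5, "B": 0.5}
achieved = {v: sample.count(v) / len(sample) for v in ("A", "B")}
weight = {v: assumed[v] / achieved[v] for v in ("A", "B")}
weighted_a = (sample.count("A") * weight["A"]) / (
    sample.count("A") * weight["A"] + sample.count("B") * weight["B"]
)
print(f"weighted sample: {weighted_a:.1%} A")  # back to 50.0%, by construction
```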

No poll, no matter how carefully constructed, should ever be seen as an absolute; an indication, yes, but not an absolute. If polls – even multiple polls – show Candidate A two points ahead of Candidate B, that does not mean Candidate A will win. That has nothing to do with the margin of error (which is addressed by multiple polls). It has more to do with common assumptions about turnout and perhaps new forms of response bias which we don’t know about yet. (On the other hand, if even two semi-legitimate polls show Candidate A winning by 15 points, he or she very likely will win handily; all of this is on the margins.)

Which brings me to my second caveat: Change is more often slow and incremental than sudden and dramatic. Small things happening slowly are worth attending to. Polls do not often pick these up, but deeper conversations with voters can at least create hypotheses ahead of the polls. I have found stories about how most Republicans still support Trump uninteresting. Of course they do; it is more important that the support appears less than it was. On the flip side, there is no question that many Democratic and independent women are deeply upset by the Roe v. Wade decision. The question is whether some of those who would not ordinarily vote in the midterms will turn out and vote as a consequence. Younger, lower propensity voters are not often included in polls, or not included in numbers that would allow analysis of them separately from the aggregate. In a close election, however, they could make a critical difference.

So, if you want to know what will happen, be skeptical but not dismissive of polls, be analytic about what small(ish) things may matter, and be curious about probing whether they will this time. Big upsets don’t come out of nowhere. They are also rare. But they do happen. Here are some things to watch for in polls and in the world outside them.

Small groups and sample sizes: Surprises may be produced by small groups in the electorate, like independent women, young voters, high propensity voters who stay home, low propensity voters who turn out. But an aggregate poll, even with a large sample size, cannot tell you much about groups who may be 10 percent of the electorate.

For example, the poll by Cygnal of the Georgia electorate led off the silly season. This Republican outfit put out a release on its 1,200-sample Georgia poll, which included an oversample of 770 African American voters. First, they trumpeted that Republican Governor Brian Kemp led Democrat Stacey Abrams by 50 to 45 percent – which is actually pretty unimpressive for an incumbent Governor – and they never defined their turnout assumptions, which will matter a great deal in this race. Next, they declared that African American voters are not homogeneous, which really didn’t require a poll, and then declared that a quarter of African Americans under age 35 are supporting Kemp. Valid? Maybe, but maybe not. Even with a sample of 770 African American voters, how many were under age 35? I would guess maybe 150 – and possibly fewer, upweighted to about 150. Was that small sample representative of younger African Americans? And who was the base sample of, anyway – those who have voted previously, or all African Americans? It may be true that Kemp is doing better than expected among younger African Americans – he has gotten a lot of press lately for not being as opposed to voting rights as other Republicans – but the conclusion, without adequate information about the sample, that this is somehow a prediction seems a bit dubious.
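For a rough sense of scale: even under the textbook assumption of a random sample (which, as noted above, no longer holds), a subgroup’s margin of error depends on the subgroup’s size, not the full sample’s. If my guess of roughly 150 respondents under age 35 is right, then at the reported 25 percent support:

```latex
\text{MOE} \approx z\,\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.25 \times 0.75}{150}} \approx 0.07
```

So that “quarter of African Americans under 35” could plausibly be anywhere from about 18 to 32 percent – before even asking whether the 150 who were reached resemble the ones who were not.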

Turnout assumptions: I have seen very few public polls (actually, I cannot recall one) that have specified assumptions about voter turnout. First, if turnout is based on self-report (reaching people at random and screening for whether they are likely to vote), the poll likely represents turnout as both higher than it will be (there is social pressure to say you are likely to vote) and more Democratic (see above on response bias). If pollsters are selecting people based on vote history from a voter file, well, fine, but then say so – and even then, is the assumption that turnout patterns will be like those of 2018 (which was a Democratic year), or different? In any case, if the information on weighting and turnout is absent, the poll is really hard to read in a meaningful way.

Defining independents: With deeper party polarization, looking at voters who say they are independent is an important task. The thing is, who they are is often volatile, and independents are often undersampled in polls. The image is that independents are people without party predilections, but some independents call themselves that because neither party is sufficiently liberal or conservative for them. They may be independent, but they are not up for grabs. Then there are those who don’t like either party because they feel neither represents their moderate or centrist views. They are very important, but their distaste for politics extends to a distaste for polls, and so they are often undersampled even while many of them vote. People also float in and out of calling themselves independents depending on how they are feeling about the parties, which creates exaggerated volatility – unhappy Republicans may say they are independent, making the category more Republican, or vice versa. Mushing all this together, independents are still generally only about 20 to 25 percent of a polling sample. See the above on sample size. A poll of 500, with roughly 100 “independents” of multiple descriptions, won’t say much about them. (Except in some places, like California, where DTS – decline to state – voters can be a constant category.)

Extrapolating from national data: There are new national polls showing the generic match-up between Democrats and Republicans far closer than it was in the spring, even while President Biden’s numbers are low. It’s intriguing, and it hints that the midterms may be less of a victory for the Republicans than previously thought. Hints at. Intriguing. I’m on board with both of those, but not (yet) with any prediction. There is no reasonable way to extrapolate the national data into a number of House seats or to any particular statewide race. First, the national data may simply represent an even greater separation between “red” and “blue” places than was true even six months ago. Most of the national polls do not provide a time series by region, and the above comment on turnout applies. National data often inadvertently oversample the coasts because there are more phones per person on the coasts. That New York, New England, and California are approaching political apoplexy matters, but it doesn’t predict what voters will do in Georgia, Ohio, Wisconsin, or Nevada.

So what to do with all this if you are interested in the election? Well, that depends on your vantage point.

If you are a candidate, go figure out how many votes you need to win the election, how many you have already effectively banked by virtue of your party label or prior base, and who the additional people you need are – by demographics, geography, and perspective – and then go plan your campaign to win them. Whether you are ahead or behind, by a little or a lot, really doesn’t matter much in doing the intellectual work of how to win. Polling can help you understand your district better, and what people there may want to know about you and your opponent, but the strategic process of what you do with that information matters a lot more than the poll per se, which is only one tool in developing your strategy.

If you are a member of the press, use the polls to guide whom you talk to and what you assess, and to enlarge your view of the range of what might happen. I wish you wouldn’t report on them as much, but I recognize that is a losing battle. It’s too easy to report polls. But please ponder the questions that emerge from them: Will Republican voters turn out in droves because they are upset with Biden, or will more than usual stay home because they have new doubts about the MAGA crowd? Is there something happening, whether local or national, that will cause younger people and lower propensity Democrats to turn out? Are the individual candidates and campaigns perceived as interesting or distinctive enough (in ways positive or negative) to break through whatever is happening nationally? And do voters generally believe they have relevant choices in the district or state on which you are reporting, or are the candidates boring, muting any opportunity for changing turnout dynamics or partisanship?

If you are an activist, well, go get to work. Whether your candidate is ahead or behind, by a little or a lot, door-to-door canvassing to discuss the election matters – and it has more of a lasting impact than almost anything else, even for the next election. Read the polls if you like. But don’t let predictions become a self-fulfilling prophecy, as too often happens during the silly season.

Problems with polling: Redux

I haven’t posted since April because I had little to contribute to what I saw as the two overarching goals of the last six months: electing Joe Biden and developing a COVID vaccine. I did my civic duty toward the first and had nothing to contribute to the second, and so it seemed a time to pause. Now I feel my free speech is restored, and for a moment at least there is some attention to one of my favorite topics – how we need to do research for campaigns differently. I have covered much of this previously, but here is a redux on the problems with polling, with some updating for 2020.

1. Samples are not random. If you ever took an intro stats course, it grounded most statistics in the need for a random sample. That means that everyone in the population of interest (e.g. people who voted November 3) has an equal probability of being included in the sample.

The margin of error presumes a random sample. How accurate a picture you get of the array of views in a population depends on the size of the sample, the breadth of the views in the population, and the randomness of the sample.
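For reference, the textbook formula at 95 percent confidence – valid only under that random-sample assumption – is:

```latex
\text{MOE} = 1.96\,\sqrt{\frac{p(1-p)}{n}}
```

where p is the observed proportion and n is the sample size; for n = 1000 and p = 0.5, that works out to about plus or minus 3.1 points. Notice that the size of the population appears nowhere in the formula, which is the point of the soup analogy below.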

The intuitive example: Imagine a bowl of minestrone soup. If you take a small spoonful, you may miss the kidney beans. The larger the spoonful (or sample), the more likely you are to taste all the ingredients. The size of the spoon is important, but not the size of the bowl. If you are tasting cream of tomato soup, though, a smaller spoon will tell you how it tastes. America is definitely more like minestrone than cream of tomato.
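For readers who like to see the arithmetic, here is a minimal simulation of the soup intuition, with made-up numbers (5 percent kidney beans): bowls 100 times apart in size yield nearly the same tasting error, while bigger spoons shrink it steadily.

```python
import random

random.seed(1)

def make_bowl(size: int, bean_share: float = 0.05) -> list:
    """A bowl of soup: `size` spoon-sized bits, `bean_share` of them beans."""
    n_beans = int(size * bean_share)
    return ["bean"] * n_beans + ["broth"] * (size - n_beans)

def avg_error(bowl: list, spoon_size: int, trials: int = 200) -> float:
    """Average error in the estimated bean share across many spoonfuls."""
    true_share = bowl.count("bean") / len(bowl)
    errors = [
        abs(random.sample(bowl, spoon_size).count("bean") / spoon_size - true_share)
        for _ in range(trials)
    ]
    return sum(errors) / trials

small, large = make_bowl(10_000), make_bowl(1_000_000)

# Same spoon, bowls 100x apart in size: nearly identical error.
print(round(avg_error(small, 500), 4), round(avg_error(large, 500), 4))

# Same bowl, bigger spoons: the error shrinks steadily.
for spoon in (20, 100, 500):
    print(spoon, round(avg_error(large, spoon), 4))
```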

The problem with polling has little to do with the margin of error, which remains unchanged. The problem is that pollsters have not used random samples for a generation. The advent of caller ID, people’s annoying proclivity to decline calls from unknown numbers (a proclivity I share), changes in phone technology with fiber optics – including a proliferation of numbers that are not geographically grounded – and an explosion of polls and surveys (How was your last stay at a Hilton?) have made the act of sharing your opinion pretty unspecial.

Not to worry, we pollsters said. Samples can still be representative.

2. The problem with “representative” samples. A representative sample is one constructed to meet the demographics and partisanship of the population of interest (e.g. voters in a state) in order to measure the attitudes of that representative sample.

The researcher “corrects” the data through a variety of techniques, principally stratified samples and weighting. A stratified sample separates out particular groups and samples them separately. Examples include cluster samples, which stratify by geography, and age-stratified samples, which use a separate sample for young people, who are hard to reach.

Professional pollsters usually sample from “modeled” files that tell them how many likely voters are in each group and their likely partisanship. They upweight – that is, they count extra the people they are short of. They may upweight conservative voters without college experience, for example, to keep both demographics and partisanship in line with the model for that state or population. Virtually every poll you see has weighted the data to presumptions about demographics and partisanship.
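A minimal sketch of that weighting arithmetic, with invented target shares standing in for the modeled file:

```python
# Invented targets from a hypothetical modeled voter file.
target_share = {"college": 0.40, "non_college": 0.60}

# What the completed interviews actually looked like: non-college
# voters are harder to reach, so the raw sample is short of them.
completes = {"college": 520, "non_college": 280}
n = sum(completes.values())

# Each respondent's weight = target share / achieved share.
weights = {g: target_share[g] / (completes[g] / n) for g in completes}
print(weights)  # college ~0.62, non_college ~1.71

# Every non-college interview now counts about 1.7 times and every
# college interview about 0.6 times. The weighted sample matches the
# model exactly -- but only if the model's 40/60 split is right.
```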

Back to the minestrone soup example: Samples are drawn and weighted according to the recipe developed before the poll is conducted. We presume the soup has a set quantity of kidney beans because that’s what the recipe says. But voters don’t follow the recipe – they add all kinds of spices on their own. Pollsters also get in a rut on who will vote – failing to stir the soup before tasting it.

Most of the time, though, the assumptions are right. The likely voters vote and the unlikely voters do not, and partisanship reflects the modeling done the year before. But disruptive events happen. In 1998 in Minnesota, most polls (including my own) were wrong because unlikely voters participated and turnout was unexpectedly high, particularly in Anoka County, home of Jesse Ventura, who became Governor that year. That phenomenon is parallel to the Trump factor in 2016, and even more so in 2020: unexpected people voted in unexpected numbers. If the polls are right in 2022, as they generally were in 2018, it will not be because the problem is fixed but because conventional wisdom is right again – which would be a relief to more than pollsters, I expect.

3. What’s next. I hope part of what’s next is a different approach to research. If campaigns and their allies break down the core questions they want to answer, they will discover that there is a far bigger and more varied toolbox of research techniques available to them. The press could also find more interesting things to write about that help elucidate attitudes rather than predict behavior.

Analytics has a great deal more to offer. That is especially so if analytics practitioners become more interested in possibilities rather than merely assigning probabilities. Analytics has become too much like polling in resting on assumptions. Practitioners have shrunk their samples and traded classical statistics for solely Bayesian models.

Please bear with me for another few sentences on that: classical statistics make fewer assumptions; Bayesian statistics measure against assumptions. When I was in grad school (back when Jimmy Carter was President – a Democrat from Georgia!), people made fun of Bayesian models, saying it was like looking for a horse, finding a donkey, and concluding it was a mule. We will never collect or analyze data the way we did in the 1970s and ’80s, but some things do come around again.

It would also be helpful if institutional players were less wedded to spreadsheets that line up races by the simple probability of winning and instead helped look for the unexpected threats and opportunities. In years when everything goes as expected, there are fewer of those. But upset wins are always constructed from what is different, special, unusual, and unexpected in the candidates and the moment. Frankly, finding those is what has always interested me most, because that’s where change comes from.

More on all of this in the weeks and months ahead, and more on all the less wonky things I plan to think about: Democrats, the South, shifting party alignments, economic messaging, and my new home state of Mississippi. I am glad to be writing again, now that I feel more matters in this world than just Joe Biden and vaccines.

People Do Not Want To Be Polled

The core problem with polling is that people do not wish to be polled. Those who answer their phones when the caller is unknown to them are unusual and atypical. And even many who do answer choose not to complete the poll.

This year’s telephone polling results were closer to the final election results than in 2016. Much of the improvement, however, was due to the nature of the mid-term electorate, not because the polls themselves were better. The mid-term electorate was highly polarized, and rabid partisans are easier to poll than voters in the middle. Polls were still wrong where those in the middle did not break proportionately to the partisans.

Back in the 1980s, polling achieved representative samples of voters by calling phone numbers at random.  The definition of random is that everyone in the universe of interest (people who will vote in the next election) has an equal chance of being polled. With the advent of cell phones, caller ID, and over-polling, samples have not been random for a while – not since the last century anyway. 

Pollsters replaced random samples with representative ones. Political parties and commercial enterprises have “modeled” files – for every name on the voter file, there is information on the likely age, gender, and race or ethnicity and, using statistics, the chances that the individual will vote as a Democrat or Republican. If the sample matches the distribution of these measures on the file, then it is representative and the poll should be correct.

There are three problems (at least) with that methodology: (1) There may be demographics the pollster is not balancing that are important; pollsters got the 2016 election wrong in part because they included too few voters without college experience in their samples, and college and non-college voters were more different politically than they had been before. (2) Rather than letting the research determine the demographics of the electorate, the pollster has to make assumptions about who will turn out in order to make the sample representative – including how many Democrats and how many Republicans. When those assumptions are wrong, so are the polls. This year, conventional wisdom was correct, and so the polls looked better.

The third problem is perhaps the most difficult and follows from the first two: pollsters “weight” the data to their assumptions. If there are not enough voters under 30 in the sample (and they are harder to reach), then pollsters count the under-30 voters they did reach extra – up-weighting the number of interviews with young people to what it “should” have been according to the assumptions. Often, however, the sample of one group or another wasn’t only too small; it was an inadequate representation in the first place. A skewed sample of young people is still skewed when you pretend it is bigger than it actually was.
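A toy illustration of that last point, with invented numbers throughout: suppose voters under 30 are really 15 percent of the electorate and 60 percent Democratic, but the poll reaches only 50 of them, and those 50 are an atypical slice splitting 45 percent Democratic. Up-weighting fixes the group’s size in the sample, not its skew.

```python
# Truth (unknown to the pollster): 15% of voters are under 30, and 60%
# of them vote Democratic; the other 85% are 48% Democratic.
true_topline = 0.15 * 0.60 + 0.85 * 0.48
print(f"true Democratic share: {true_topline:.1%}")       # 49.8%

# The poll: 800 completes, only 50 under 30 -- and those 50 are a
# skewed slice, 45% Democratic instead of 60%.
n, n_young = 800, 50
dem_young, dem_old = 0.45, 0.48

# Up-weight young interviews from 50 to the 120 the model says there
# "should" be (15% of 800), and down-weight the rest to match.
weight_young = (0.15 * n) / n_young          # 2.4
weight_old = (0.85 * n) / (n - n_young)      # ~0.91

weighted_topline = (
    n_young * weight_young * dem_young
    + (n - n_young) * weight_old * dem_old
) / n
print(f"weighted poll estimate: {weighted_topline:.1%}")  # ~2 points low

# The weights made the age mix match the model, but the skewed 45%
# among young respondents is still 15 points off -- now counted 2.4x.
```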

The problems can be minimized by making more calls to reduce the need to up-weight the data. If 30 percent of some groups of voters complete interviews but only 10 percent of other groups do, just make three times the number of calls to the hard-to-reach groups. That is what my firm and others did this year. It is, however, an expensive proposition, and it still does not ensure that the people who completed interviews are representative of those who did not.
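The arithmetic behind that is simple division – the budget is what hurts. To land the same number of completed interviews from each group:

```latex
\text{calls needed} = \frac{\text{target completes}}{\text{response rate}}, \qquad \frac{300}{0.30} = 1000 \quad \text{vs.} \quad \frac{300}{0.10} = 3000
```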

Next Post:  The Self-Selecting Internet