Women and Electability – Part 2

In Part 1 of this post, I argued that there is no solid reason to consider women less electable than men in the 2020 Presidential contest.  Women candidates do need to grapple, however, with four areas that may create misperceptions of their potential: first, implicit bias; second, the nature of leadership archetypes (and negative stereotypes); third, managing the strong value women voters place on caregiving and the “Caregiver” archetype; and finally, the differing nature of media coverage of women candidates.  None of these are barriers, but they are considerations for women candidates and those observing them.

1.    Association and Implicit Bias.  One reason men may currently be considered more electable is simply how often they have been elected.  Older men look more like the panoply of former Presidents than women do, even though men have given up both the wig of our first president and the mutton chops of many.  People are more used to seeing men in leadership positions, and so they associate men with leadership qualities.

Both academic and popular research shows that people have stronger associations between men and leadership qualities than between women and leadership qualities.  Such “implicit bias” is not necessarily unconscious, and it does not necessarily predict behavior.  In fact, some recent academic literature suggests it does not predict behavior at all.  Still, the bias exists.

To measure yours, the American Association of University Women has provided a test online.  It is anonymous and instructive:  AAUW Implicit Association Test of Gender Bias.   Implicit bias is not a barrier because, at a time when people may want change – perhaps big, structural change, as one candidate promises – such associations may not matter, or may even underline the change a woman might bring. 

2.   Archetypes and Negative Stereotypes.  At a deeper level than the associations shown in implicit bias tests, there is leadership imagery that is sometimes more male than female. Jungian psychology introduced the idea that we share unconscious ideas, often gender-associated.  Thirty years ago, Robert L. Moore and Douglas Gillette wrote King, Warrior, Magician, Lover about the archetypes of mature men (as opposed to other archetypes like “the Trickster,” who survives challenges through trickery and deceit, which may remind you of someone). 

The use of archetypes for communications and branding is recounted in Margaret Mark and Carol Pearson’s classic book, The Hero and the Outlaw: Building Extraordinary Brands through the Power of Archetypes.  Their work helped establish Nike as “the hero” brand, and Apple as “the magician.” 

Some archetypes are more gender-laden than others.  The King is certainly a gender-based term (and the Queen has different associations), as is the “Everyman” or “Regular Joe” that their book discusses. There are, however, women Warriors, Magicians, and Sages, which are among the highly desirable leadership archetypes, in mythology, in history and in popular culture.

The shadow (a Jungian term too) of the Queen archetype bears watching. There we find negative stereotypes like the manipulative “Queen Bee” who destroys other women while the men work for her; the Queen with her clique of Mean Girls so well-profiled by the movie of that name; the Bossy Beyatch (to use the more acceptable colloquialism), and her sister the “Angry Woman,” with the latter two carrying qualities less acceptable for women than for men.      

Another archetype, the Innocent, is not inherently negative but not what voters want in a President.  Children and some women are perceived as The Innocent and we do not want a President who is untutored in the ways of the world. 

The “Good Girl” or “Daddy’s Girl” archetype is more likeable but follows status quo authority a little too much rather than bringing change, and generally follows men rather than aligning equally with other women.

The “Victim” archetype is also not a desirable President.  The victim feels powerless and blames others for their predicament.  There is a fine line for women leaders in talking about discrimination against women and sounding like they believe women – perhaps including them – are victims.  The President of the United States should show compassion for victims but should never be a victim.

3.  The Caregiver.    One of the biggest challenges for women candidates is integrating one of the most powerful, positive – and generally female – archetypes:  The Caregiver.  The Caregiver is compassionate, generous, thoughtful and kind.  It is reputedly part of the branding of Campbell’s Soup, Johnson & Johnson, and McDonald’s – with billions and billions sold.  

Women identify with caregiving.  In a survey I conducted for a client many years ago, nearly 70 percent of women voters said caregiving was one of their most important values.  The importance of caregiving is presumably why George W. Bush modified his declaration of being a conservative with the word “compassionate.”

Women candidates – despite their self-evident ambition and aggression – generally have advantages on compassion, as they do on issues associated with it like health care.  Failing to display Caregiver qualities can alienate other women, who value caregiving in themselves and in leaders. The challenge is in nurturing the caregiver, which almost all successful women candidates do, without appearing to be the Innocent or the “Good Girl.” 

Part of the answer for women candidates in balancing strength and compassion is to define who and what they fight for:  in the mythological world from which archetypes derive, the male fights to be the Alpha male; to win the competition for its own sake.  The woman or female fights to protect her cubs (if she is a lioness or a bear), or her children, family or community. The behavior may look the same but the motivation is different.  

Note that if she fights for victims, then you have to see yourself as a victim to believe she fights for you – and most people do not see themselves that way – and even fewer want to be a victim.  The strong Caregiver fights for what she loves to make it stronger.  The Caregiver is not patronizing.

4.  Media Bias.  Others have written about how the media cover men and women differently and how some men candidates do the same (Suzanna Danuta Walters Washington Post Op Ed).  I do think the coverage is more balanced than it has been in the past, and some in the press clearly make a conscious effort to diversify their sources.  Still, the reality remains that most of the press corps covering the presidential campaign and reporting it on television are men, and, indeed, white men.  The media need to understand and give women candidates credit for messaging and strategies that incorporate gender differences – women candidates and their strategies are not supposed to be just like men’s. I hope the press talk to more women – and many more people of color – who live outside the bubble of punditry about what they hear the candidates saying, and what they are listening for.  The perspective is likely to be different, but also more reflective of the majority of voters.

# # #

It is still 230 days until the Iowa Caucuses when the first votes are cast.  The electorate in Iowa will very likely be larger and younger than eight years ago, and in other states it will be larger, younger, and more ethnically and racially diverse than in the past.  In every state, the majority of the primary electorate will be female. 

In what may be a historically large primary and general election, pollsters don’t quite know who to talk to – and not everyone wants to talk to pollsters.  The best anyone can do at this point is to reach out to the extent they can, and be aware that the dynamics of gender, race, ethnicity – and generation – are changing the electorate in ways that may be difficult to predict.  The picture may look similar in seven months.  Or it may be very different, indeed. 

Women and Electability – Part 1

Punditry has focused lately on Democratic voters’ desire for an electable presidential candidate, with the suggestion that the electability criterion biases them against women candidates.  Some of the discussion has been silly:  the random Bernie Sanders staffer declaring, with more arrogance than evidence, that Senator Elizabeth Warren is unelectable; the challenger to the popular Mayor of Duluth, Minnesota, announcing that voters don’t want a Mommy (which likely helped Mayor Emily Larson, as I hear there are many Mommies in Duluth).  On the other side, any number of women have tweeted that individual women presidential candidates have never lost an election – clear proof of nothing at all, especially as our last Democratic president had been through the learning experience of an electoral loss.   

The problem is not punditry alone.  Voters, too, are uncertain that a woman is as electable as a man (Pew Study on Women in Leadership), which may make the matter somewhat relevant to which candidate they choose. I say only somewhat because I question whether electability will matter as much – or be framed the same way – as we all get closer to actual voting.  Further, Iowa prospective caucus attendees have now told the Des Moines Register poll that women may have a very slight advantage over men in defeating Trump.   

The reality is that there has not been a test of whether women, writ large, are less electable.  In the one circumstance in which a major party nominated a woman, she lost.  Further, she lost to someone many women believe is an embodiment of toxic masculinity. That experience may have created doubt but not a reality.

The loss of one woman in a particular election year is insufficient data to draw the conclusion that women are less electable.  The 2016 election had unique dynamics that will not replicate in 2020.  Hillary Clinton was an imperfect candidate (as all candidates are) and had a complex individual history (as all candidates do).  She is not a generic woman – and neither is anyone else.  Additionally, she won the popular vote and narrowly missed an Electoral College win (by some 70,000 votes) across three states.  Since the 2016 election, Michigan elected Gretchen Whitmer Governor, Wisconsin re-elected Tammy Baldwin Senator, and Pennsylvania sent four new women to Congress in a national wave that elected more women than ever before.   

Indeed, the 2018 election results suggest that interest in increasing women’s political leadership role has continued unabated and perhaps has grown.  Polling, including the Pew study cited above, suggests that voters want to increase the number of women in leadership and believe women generally work harder, are more compassionate, and are perhaps more honest than men.  These advantages are stronger among women voters, who are not only a majority of the electorate but the overwhelming majority of the Democratic primary electorate.  The first step to being elected is being nominated.   

Having a woman head of state is still the exception rather than the rule – most countries have never elected one.  More than 50 (59 is the most recent figure I could find) have done so, however.  Sri Lanka led the way in 1960.  Currently, there are women heads of state on every continent except Antarctica and here in North America. 

Despite the growth in women’s political leadership it would not be intellectually honest to argue the advantages for men have been eradicated.  Some advantages for men candidates rest in institutional support and most certainly in the nature of media coverage (See Jess McIntosh and Alexandra Rojas on CNN).  There are also differences in how men and women are perceived and how those perceptions interact with campaign messages for women candidates.

Next week, I will publish the second part of this post, which looks at some of those differences.  

A Consumer Guide to Polls

I have tweeted that many of the public polls are “flawed and meaningless,” which was a bit hyperbolic on my part.  It was born of frustration with the high number and lower quality of polls and their own tendency toward hyperbolic analysis and over-prediction. So here are some more considered thoughts, using more than 280 characters.  

There is no such thing as a perfect poll.  If there were, it would include only people who are going to vote in the election of interest, and the sample would match their demographics precisely, both overall and within subgroups.  That won’t happen because there is no way of knowing exactly who will vote – voters don’t yet know if they will – and while the pollster can work at getting the demographics right, there is always the chance that they are not, or that samples are affected by response bias.   

That said, not all polls are created equal.  Some are conducted responsibly and analyzed thoroughly, with the pollster applying their own skeptical and analytic oversight in reporting the results.  Other polls are “quick and dirty” with less caution in sampling and an analysis that seems to stop at the top lines. 

Here are some things to look for to separate the better ones from those that are fundamentally flawed.

The Sample.  No one knows exactly who is going to vote.  Voters can tell you whether they currently think they will, but they are not very accurate about that.  Campaign pollsters usually use a model based on vote history, which means they are pretty sure they are talking to likely voters but may also exclude some people who are new to the electorate.  Most public polls use a random sample and self-reported vote intention, which is more inclusive – often more inclusive than reality. 

Only about 58 percent of eligible voters participated in the 2016 general election and 28 percent in the primaries (across both parties).  Democratic turnout may be higher in 2020 but when a poll includes all adults and reports that 47 percent of those who are registered are likely to vote in the Democratic primary, something is likely wrong.  (And not all states require advance registration so that screen alone introduces a small bias.)

Consumers of polling should make a judgment about how good a job the pollster did finding the electorate of interest before considering the results.

Demographics.   We know the demographics of who has voted in the past (using voter file analysis) and chances are who will vote in the future is roughly similar, although how similar is a matter of conjecture.  Roughly half of Democratic primary voters are of color, and they are more female and have more formal education than average. In the past, Democratic primary voters have been older than average although millennial participation increased in 2018 and may in the 2020 primaries (https://www.brookings.edu/research/the-2018-primaries-project-the-demographics-of-primary-voters/).   A poll of primary voters that matches overall census demographics is very likely wrong.  A poll that doesn’t consider the potential for turnout shifts is over-confident.

Data Weighting.   These days nearly all polls weight by demographics because different groups of voters have different probabilities of responding to polls (although there are stratification procedures in sampling that minimize the need for weighting, few public polls use them).  A procedure called “raking” weights the data to match expected demographics.  How finely honed those demographic goals are can affect the sample in unexpected ways.  An initial sample that is short on African Americans and on young voters can end up double-weighting young African Americans, leading to wrong conclusions about both young people and African Americans.   

Few polls report how much weighting they did or how, which means analysis of subgroups within the electorate can be dicey.  Because younger voters are less likely to complete polls, I generally assume they have been weighted and am more cautious about results by age.  (My former firm used to stratify by age to minimize weighting – but that is an expensive process.) 
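For readers who want to see the mechanics, here is a minimal sketch of raking (iterative proportional fitting) on a toy sample. The respondents, categories, and target margins are all hypothetical, and real pollsters rake over more variables with more care; the point is to show how a sample short on two groups at once can pile weight onto the one respondent who belongs to both.

```python
def rake(sample, targets, iters=100):
    """Iterative proportional fitting ("raking"): repeatedly scale
    respondent weights so weighted shares match each variable's target
    margins.
    sample:  list of dicts, e.g. {"age": "18-34", "race": "Black"}
    targets: {"age": {"18-34": 0.30, "35+": 0.70}, ...} (hypothetical)
    """
    weights = [1.0] * len(sample)
    for _ in range(iters):
        for var, margins in targets.items():
            total = sum(weights)
            # current weighted share of each category of this variable
            shares = {cat: sum(w for w, r in zip(weights, sample)
                               if r[var] == cat) / total
                      for cat in margins}
            weights = [w * (margins[r[var]] / shares[r[var]])
                       if shares[r[var]] > 0 else w
                       for w, r in zip(weights, sample)]
    return weights

# A toy sample of 10 that is short on young voters AND on Black voters:
sample = ([{"age": "18-34", "race": "Black"}] +
          [{"age": "18-34", "race": "White"}] +
          [{"age": "35+", "race": "Black"}] +
          [{"age": "35+", "race": "White"}] * 7)
targets = {"age": {"18-34": 0.30, "35+": 0.70},
           "race": {"Black": 0.30, "White": 0.70}}
weights = rake(sample, targets)
# The lone young Black respondent ends up with the largest weight –
# the double-weighting described above.
```

After raking, the weighted margins hit the targets, but the young Black respondent carries more than twice the weight of an older White respondent, which is exactly why subgroup results from heavily weighted polls deserve caution.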

Days in the Field.  One way to get a better demographic distribution with less weighting is to stay in field longer and try each prospective respondent multiple times.  Be wary of polls that fielded only a day or two, especially if all the calling was on a weekend.  Chances are the data are weighted extra-heavily because it takes longer to get a more representative sample.

Decidedness.  Campaign polls usually ask something like, “Are you certain you will support that candidate or do you think you might change your mind?”  When you interrupt someone’s evening to ask whom they support for President, they may give you an answer they believe is a firm commitment, or they may just pick the candidate they know best or have heard about most recently.  Some measure of certainty is useful.  It is rare to see most supporters certain of their choice until the closing weeks. 

Additionally, Mark Blumenthal presented work at the AAPOR conference showing that news events can create polling response bias. If someone has been in the news recently, their supporters may be more willing to complete your poll.

Relevance.  By the time I vote on March 10, 2020, the field of candidates will be different – many of the current 23 will likely have suspended their campaigns, and there could still be new entries.  The current preferences of voters in later states are not, I would submit, terribly relevant to the process that will winnow candidates before they get to the later states. 

The dialogue of the race may also change.  Voters’ focus on perceived electability may shift with perceptions of Trump’s fortunes or simply as voters know the candidates and the differences among them better. 

In any case, national polls of primary voters have odd samples as the rules are different state by state.  They also are imposing simultaneous responses on a sequential process.  

Prediction.  Polls don’t predict.  It is a cliché – but still true – that they are “snapshots in time.”   The “horse race” alone is the more-or-less casual preference of someone who may or may not have given the matter much thought, given that they will not act on their preference for at least eight months. 

Analysis of other data allows some cautious hypotheses – candidates who are less known have more room for growth; candidates whom voters actively oppose surely have less. 

A Pop Quiz.  Given all of this, which of the following statements about the Democratic primary election is most likely true?

  1.  Joe Biden is the front runner.
  2.  Joe Biden has almost 40 percent of the vote.
  3.  Polls consistently show Biden with more early support than other candidates.
  4.  We know nothing at all about any of this yet.

I would submit that #3 is true – Biden has more early support than others – but what that will mean eight months from now, or a year from now, is a matter of conjecture.  Voters like Joe Biden, but others have more room for growth.  Polls, however, should not create self-fulfilling prophecies or false narratives. 

I am going to find the election very interesting to watch. I really don’t have firm predictions on how it will develop.  To me, that’s what makes it interesting.  

Buying a Ticket Out of Iowa – Online

The Iowa caucuses help frame who is a contender for the nomination, which is especially important in such a large Democratic field.  Historically, there are “three tickets out of Iowa.” Only once in recent political history has anyone become President without a top-three finish in the Iowa caucuses (although a whole lot of precedent-breaking is going on, including early California voting right after the Iowa caucus). 

One element of precedent-breaking that is certain: online communications matter more.  Yes, Iowa is an older electorate and people expect a strong field organization.  But it is also heavily wired and online engagement is starting high and growing (although the number one search name nationally in the last 30 days was not a candidate but Nipsey Hussle – I checked.)

Here’s one approach to researching and defining your target online:

1.   How many voters do you need?  First, decide the size of your initial target.  There are complexities in that basic calculus.  Democrats have not always released raw totals as opposed to delegate percentages so history is limited and the number of caucus attendees per delegate varies across the state.  Additionally, 10 percent of delegates this year will be chosen by a mobile phone caucus held before the in-person caucus event and campaigns will ultimately need to have separate goals for each stage.

As a starting point (unless your campaign is Sanders or Biden, both of which have higher bars), I recommend finding 75K voters – reasonably distributed across the state – who become committed to your candidacy.  The highest Democratic caucus turnout was 239K in 2008.  Even with the mobile caucus, 300K this year would be a stretch, as it represents nearly half the registered Democrats.  Thus, 75K should produce at least a 25 percent popular finish and threshold everywhere; it is also competitive with Sanders, who won less than half of 2016’s 171K participants, not all of whom will either stick with him or return to the caucuses.
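The back-of-envelope math behind the 75K goal can be checked in a few lines. These figures come from the post itself; the turnout scenarios are estimates, not data.

```python
# Vote-goal arithmetic for the 75K Iowa target (figures from the post;
# the 300K "stretch" turnout is a scenario, not a projection).
target = 75_000                      # committed supporters, statewide
record_2008 = 239_000                # highest Democratic caucus turnout
stretch_2020 = 300_000               # a stretch even with the mobile caucus
sanders_2016_ceiling = 171_000 // 2  # "less than half" of 2016's 171K

share_at_stretch = target / stretch_2020  # 25% even at record-smashing turnout
share_at_record = target / record_2008    # roughly 31% at the 2008 record
```

Even under the most expansive turnout scenario, 75K committed supporters clears the 25 percent mark, and it sits within range of the ceiling on Sanders's returning 2016 base.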

2.  How to find your vote.   Every campaign will individually target repeat caucus attendees but well under 100K participated in both 2008 and 2016.  So talk to the repeats and monitor their choices – at least until they stop answering their phone (although you should have their emails by then).  

The next layer is people who will turn out in the caucuses because they are excited about one of the current candidates – that is the factor that ultimately expands the caucus universe; people who are caucusing not out of habit, civic duty, or party loyalty but because they really support someone.  Finding your own unique base also means less immediate competition for those voters and time to engage and mobilize them.  That’s where online strategies help – in finding the people with whom your candidacy resonates enough to draw their participation and who are not prior caucus attendees (whom everyone will target in field, mail and online).

3.  Defining your target online.  As soon as interest from outside the regular caucus universe is in the thousands – and preferably close to 5,000 – your internet team can do “look alike” modeling to find people whose internet behavior is similar to theirs.  Those who have opted in to your emails, attended an event, contributed money, or simply visited your web site tell the campaign in online terms, rather than simply demographics, whom it is attracting.  The most likely next set are people who behave like them in terms of their online habits.  Exclude from that modeling regular caucus attendees so you are finding more people who are attracted specifically to your campaign. 
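In practice, look-alike modeling is done inside ad platforms with proprietary features, but the core idea can be sketched simply: represent each person as a vector of online behaviors, average the opted-in supporters into a profile, score prospects by similarity, and drop the regular caucus attendees. Everything below – the feature vectors, IDs, and function names – is a hypothetical illustration, not any platform's actual method.

```python
import math

def cosine(a, b):
    """Cosine similarity between two behavior vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalike_scores(seed_vectors, prospects, exclude_ids):
    """Score prospects by similarity to the seed audience's centroid.
    seed_vectors: behavior vectors for opted-in supporters
    prospects:    {person_id: behavior vector}
    exclude_ids:  regular caucus attendees (everyone targets them anyway)
    """
    centroid = [sum(col) / len(seed_vectors) for col in zip(*seed_vectors)]
    return {pid: cosine(vec, centroid)
            for pid, vec in prospects.items() if pid not in exclude_ids}

# Toy vectors (hypothetical features: visits to climate, veterans,
# and entertainment content):
seed = [[3, 1, 0], [2, 2, 0]]
prospects = {"a": [3, 1, 0], "b": [0, 0, 5], "c": [2, 2, 0]}
scores = lookalike_scores(seed, prospects, exclude_ids={"c"})
# "c" is excluded as a repeat attendee; "a" scores high, "b" scores low.
```

Prospect "a", whose behavior resembles the seed audience, scores near 1; "b", who consumes unrelated content, scores near 0; and "c" never enters the model because repeat attendees are handled by everyone's field, mail, and phone programs.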

Add to the look-alike model message-driven targets through affinity audiences or search terms.  If your candidate is a veteran, people who search for veterans benefits online can receive an ad about your candidate.  Or if your candidate has been a leader on climate change, those who search on that issue should hear about it.  If your candidate just announced a student loan policy initiative, perhaps it is time to buy “student loan” as a search term. Such a candidate could use search to drive voters to a web site – and use look-alike modeling to deliver ad content. 

As voters engage, they will help refine your model and, if they opt in, they become part of your target.

4.  Adapt your message for online.  The expression of the candidate’s message and narrative is different online than it would be in television or a speech. A 30-second television ad buys emotional impact, especially since those who watch television will see it 20 times.  Online ads are more about starting a conversation – piquing curiosity first rather than creating a dramatic moment.   Klobuchar’s recent video promotes her name – and also that she is smart, funny, practical and a mother.  That is not her full message but it is an introduction.

Internet engagement is slower than television impact.  Start now to build your narrative and the process will tell you a lot. 

Your narrative will likely include three elements of message by the end:  (1) yours is the right candidate to take on Trump – because of their fighting personality, because of elements of the contrast, or because a lot of the same voters like them (though be careful there); (2) they have an optimistic notion of what the future looks like – while it is true that we are all going to die, that doesn’t get people to want to talk to you more; and (3) personal intangibles that meet the moment – voters can’t tell you what these are yet, but internet testing and modeling may help you figure it out. 

Internet response and the results of your canvassing data stream can replace many traditional functions of polling because they can tell you who is attracted to a candidacy, and some well-placed questions in the canvass data stream (or in a brand lift survey) can tell you why.  

One question all this won’t answer is who is ahead right now.  However, until we can say who the electorate is, that’s not very answerable or interesting.  And 75K committed but geographically dispersed supporters put you in the running.

5.  Do you need a pollster?  You certainly need someone in your campaign whose job is to explore what voters may be thinking, listen to them, figure out how to reach out to them, and quantify their response.  That is the pollster’s traditional role: to focus on voters rather than the news cycle or Twittersphere and thus to save campaigns from isolating themselves within their own bubble.  Your campaign needs that regardless of how research and communication techniques evolve.   

Good luck!  

To Find the Answer, Change the Questions

Earlier blog posts outlined issues around polls and who completes them.  There is a more fundamental question: whether polling is a tool of the television age and not of the internet age.

Polling came into its heyday with television communications.  Advertisers wanted to know about how to appeal to the aggregate of television viewers back when 12 million people generally watched Ed Sullivan on Sunday nights – a variety show with something for almost everyone and acts ranging from Elvis, to Sam Cooke, to Jim Henson’s Muppets, to Maria Callas. (As an aside, only one of them was not born in my new home state of Mississippi). 

Now, there are no variety shows and most people stream videos in line with their specific interests. Opera lovers do not need to watch Elvis and vice versa.    (https://www.cnbc.com/2018/03/29/nearly-60-percent-of-americans-are-streaming-and-most-with-netflix-cnbc-survey.html).

Advertising to the aggregate still has value.  Half of Americans prefer to watch rather than read the news and most of those watchers do so on television.  (https://www.journalism.org/2018/12/03/americans-still-prefer-watching-to-reading-the-news-and-mostly-still-through-television/).  But the online only audience is growing, and has the added advantage of being able to appeal to people according to their interests rather than forcing Maria Callas on the Elvis crowd or vice versa.  

The internet is a fundamentally different medium than television.  It requires a different kind of message and message delivery.  Different research is needed to design and evaluate that.

1.  The internet is about engagement.   In his classic 1964 work, Understanding Media, Marshall McLuhan distinguished between “hot” media that are low in audience participation and “cool” media that are high in it.  His analogy was that a lecture is a hot presentation and a seminar a cool one, allowing more participation.  He argued that television is a “cool” medium because it demands an audience response, whereas movies are hotter; laugh tracks were a standard element of television shows but not of movies. But if television is “cool,” the internet must be cold:  the medium is defined by audience response. 

Traditional polling does not generally look at what may interest people, or what they want to know, but at what they think given a limited number of pre-coded options.    It doesn’t say nearly as much about what will engage people cognitively, visually, or emotionally.

2.  The audience is different. The internet does not define audiences the way TV does.  TV buyers buy time to reach broad demographics – like women 35 plus.  Internet buyers can reach people based on interests using Google affinity audiences, custom affinity groups, lists or look-alike targeting (https://support.google.com/displayvideo/answer/6021489?hl=en ). They can explicitly include or exclude political junkies, strong partisans, and news junkies.   They can target country music listeners, classic rock aficionados, or those who respond to heavy metal – or opera. 

Polling does not look at audiences that way and so does not effectively target for internet advertising.   Your candidate – or issue or idea – cannot be different things to different people; that would produce an inauthentic mishmash, and people still do talk to each other.  But people who are searching online for Kim Kardashian, World Cup Soccer, or Crafting Hacks will not likely engage with the same content, even if they are all women 35 plus.

3.  The format(s) are different.  Online ads come in a variety of formats with different purposes and goals.  In most online ads, you do not have people for a full 30 seconds, although if you get their attention, they may stay with you far longer than the typical television ad.  The format of the ad must be tailored to its purpose, with a very broad range of choices.   And the ad will most often be seen on a screen far smaller than the average television set.  

Traditional polling may tell you what voters – at least those willing to be polled – believe distinguishes Candidate A from Candidate B to Candidate A’s advantage. Even so, that is only a first step to figuring out how that message may be expressed.   For online communications, add the need to strategically build the argument in a way that engages people over the long haul. (There is no such thing as 2,000 gross rating points of internet.)

# # #

I believe we are just beginning to learn how to use online communications in politics – there are more messaging options, targets, social networks, and connections than dreamed of in Marshall McLuhan’s television philosophy.

The questions and the methods for internet communications are likely situation-specific.  For the next blog post, I will outline a possible internet research strategy for a campaign trying, for example, to break into the top tier in the Iowa Democratic caucuses.  Look for that before the end of April.      

Insurgency and Consultants

This blog is mostly about research but I decided to interrupt the regularly scheduled program to weigh in on the DCCC “blacklist” of consultants who help candidates running in primaries against Democratic Caucus members.

First, let me say the rule is neither new nor surprising.  The DCCC is funded by Members and by donors who support the current Caucus and so of course it serves to protect its own membership.  Second, the DCCC has always had its favorites, often former staff who have become consultants, and who are self-evidently in a mutually supportive relationship with the status quo at the DCCC (which I am quite certain is also the case at the NRSC).  The only difference is there is now a form, rather than the classic “hire your friends” habit that remains part of most political (and other) institutions.     

I did immediately wonder, however, whether anyone really thinks depriving insurgent candidates of establishment consultants hampers their chances.  While smart and with abundant technical skills, consultants are generally not well-prepared to assist insurgent revolts. 

Still, my firm did help some insurgents historically (and in most cycles we were not a Committee favorite).  So here, free of charge, are some questions insurgents should answer for themselves in assessing and planning strategy.  These questions should serve as a guide as well for incumbents in assessing their vulnerabilities. 

What has changed?   The insurgent is working to unseat someone the district has chosen before, so you need to identify what has changed in order to develop an insurgent strategy.  The change can be among voters, as the result of shifting demographics, district lines, or levels of participation.  Or the change can be in the incumbent, or in his or her relationship to the district:  he or she doesn’t live there, doesn’t communicate, votes against voters’ interests, or has become self-important or distant in some way.

Where can you find the votes to win?  In a congressional district, usually fewer than 60,000 votes are cast in a primary, although in some districts the number may inflate this year if the primary coincides with the presidential primary.  In many cases, the number is far lower:  in NY-14, when AOC defeated Joe Crowley, fewer than 30,000 votes were cast. Come up with a reasonably high guesstimate and figure out where your 50 percent plus 1 can come from.  If you are a different kind of candidate who can excite a different level of participation, support may come from people who have not previously voted in a primary.  (Incumbents:  do not limit communications to the core of party activists who have voted in the last four primaries.)
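The vote-goal arithmetic above can be sketched in a few lines. The turnout figure and buffer below are illustrative assumptions, not projections for any real district:

```python
# Hypothetical win-number calculation for a two-way primary.
def win_number(expected_turnout: int) -> int:
    """Votes needed for 50 percent plus 1 of expected turnout."""
    return expected_turnout // 2 + 1

turnout_guess = 30000               # assumed: roughly a low-turnout primary
padded = int(turnout_guess * 1.2)   # pad the guesstimate upward by 20 percent
print(win_number(padded))           # 18001 votes to win
```

Padding the estimate keeps the vote goal “reasonably high,” so a surge in turnout does not leave the campaign short.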

What do you have to say that is new and different and relates to the above?  The insurgent is unlikely to win unless (a) something has changed and (b) the insurgent is representative of that change and/or the need for change.  I have seen insurgents win because voters wanted economic change and the insurgent spoke to that; because the incumbent (state legislator) also held a second job and the insurgent knocked on every door and said he would work for voters full time; because the incumbent was invisible in key constituencies and the insurgent had the capacity to motivate those same voters; because the insurgent had experienced and survived discrimination by institutions the incumbent embraced.  In each case, the key element was clarity in what needs to change and an insurgent who represented the change.

Are you willing to go door-to-door?    Door-to-door canvassing remains the most effective form of communication, and the incumbent, stuck in Washington (or your state capitol if you are running for a legislative seat), won’t be able to do as much of it.  If people have talked to you personally, and haven’t heard from the incumbent except through paid advertising, you are advantaged.  Internet organizing (as opposed to using the internet like a TV channel) can also be very effective, since people are hearing from those they know and trust.

# # #

If, as an insurgent, you cannot answer these questions affirmatively, you probably will not win.  And if incumbents are genuinely and broadly reaching out to their district’s constituencies – seeking advice and genuinely listening across divides of age, gender, race, ethnicity, and income – they are unlikely to lose.

Far less important in the calculus:  Who has consultants approved by the DCCC.

Next Post:  Back to the regularly scheduled program.   

Could AI Write This Blog?

Yes in theory, but not in today’s reality.  In my last post, I suggested that polls are still usually predictive but that modeling is often more so.  Quick-and-dirty analytics, however, has few advantages over quick-and-dirty polling.

About 15 years ago, campaigns started to use statistical modeling to produce efficiencies and better targeting.  Before that, campaigns targeted persuadable precincts.  Statistical modeling helps find the individual swing voters in precincts that are generally Democratic or Republican, a process that is more inclusive and allows more efficient use of resources. 

Increasingly, campaign communications are directed to individuals – online, through the mail, or through door knocks or addressable TV – and not exclusively (or even mostly) through broadcast media like television.  Polling analyzes people in the aggregate – telling a campaign what percent of men or women, younger or older voters, as examples, support a particular candidate.  Polls also say which groups are more undecided or seem more likely to move in response to arguments about the candidates.  Modeling makes those same predictions at the individual level, improving efficiency and targeting accuracy.

A decade ago, modeling used commercially available data and advanced statistics to make predictions.  A woman in an urban area where an unusually high number of people are college educated and of color is probably a Democrat – especially if she has a frequent flyer card that shows international vacation travel.  An older man in a rural area that has few people of color is more likely a Republican – especially if he has a hunting license and a subscription to a gun magazine.  Those are stereotypical examples, but the plethora of available data on where people live, shop, and travel, and on what they read, helps make probabilistic predictions at the individual level.
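A toy version of that kind of individual-level scoring might look like the sketch below. Every variable and weight here is invented for illustration – real models estimate their weights from data rather than setting them by hand:

```python
import math

# Illustrative-only individual-level scoring; variables and weights
# are invented for the example, not drawn from an actual campaign model.
WEIGHTS = {
    "urban": 0.8,
    "college_grad": 0.6,
    "intl_travel": 0.5,
    "hunting_license": -0.9,
    "gun_magazine": -0.7,
}

def dem_probability(voter: dict) -> float:
    """Logistic score: estimated probability this individual votes Democratic."""
    z = sum(w for k, w in WEIGHTS.items() if voter.get(k))
    return 1 / (1 + math.exp(-z))

urban_traveler = {"urban": True, "college_grad": True, "intl_travel": True}
rural_hunter = {"hunting_license": True, "gun_magazine": True}
print(round(dem_probability(urban_traveler), 2))  # 0.87
print(round(dem_probability(rural_hunter), 2))    # 0.17
```

The output is a probability per voter, not a certainty – which is the point the post returns to: modeling says “likely,” never “will.”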

The process, however, still requires a lot of data collection to make the less obvious associations between people’s behavior and their voting habits.  To make modeling less expensive, the next iteration started with a set of assumptions and pre-established algorithms, allowing modelers to collect less data (and do less analysis) to achieve results.  Currently, most modeling is done with artificial intelligence and machine learning, which is even more efficient and uses smaller samples than before.

AI often skips the step of understanding why certain variables are predictive, of asking how this individual situation may be different, or of analyzing in depth the patterns of error and mistaken assumptions that lead to those questions.  Those who assumed people who voted for Obama would vote for Clinton made errors; voters who supported Trump in 2016 did not so reliably vote Republican last year. Failing to consider the why and the underlying dynamics led to strategic errors.

Opinion research at its best has more depth of understanding than AI produces, plus some judgment calls, or hunches, or perhaps a little artistry, which machine learning does not (yet?) produce.  Analytics is unquestionably a boon.  Using advanced statistics is an important tool for prediction and targeting, especially as samples are increasingly skewed.  It is not, however, a replacement for strategy or judgment, nor does it help much (although it could) in understanding what people are thinking and feeling, how they perceive a candidate, or how that candidate can improve his or her relationship with constituents.

Next Post:  To find the answer, change the question 

Living with Uncertainty

Given the problems with polls, do they accurately predict elections?  They usually do – provided their assumptions are more or less correct and their samples inclusive, which is harder than it used to be.  As Nobel Prize-winning physicist Richard Feynman said, “It is scientific only to say what’s more likely or less likely, and not to be proving all the time what’s possible or impossible.”

Some studies have concluded polls are no less accurate now than they have ever been, but that should not be too reassuring.  Polls have always made assumptions about who is going to vote, and wrong assumptions have long led to wrong predictions – like the prediction that Dewey would beat Truman in 1948.  Samples were off then because people without phones supported Truman.  Samples were off in 2016 when polls included too many voters with college experience and made wrong assumptions about voter turnout.

We never really know the outcome of an election before it happens.  At best, we know what outcome is more likely (and sometimes much more likely).  Most of us do not really have a reason to know the outcome of the election before it happens.  If we work for a partisan committee like the NRSC or DSCC, we may be concerned with allocating resources.  If we work for the media, we may feel polls are newsworthy (although I wish y’all found them less so; is it really news that someone is more likely to win than someone else?).

Polling, somewhat ironically, shows that voters do not trust the polls they hear about in the media – and those who took that poll arguably trusted polls more than the average voter, or why bother?  Media coverage of bad polling can create electoral outcomes, a concern raised by a bipartisan group of pollsters (https://www.huffingtonpost.com/2010/11/08/pollsters-raise-alarm-ina_n_780705.html) back in 2010.

Modeling can help with prediction because it develops a predictive algorithm or formula that does not require a random or representative sample.  The process does require an adequate sample, however, and often more analysis and examination of error than is typically applied (the subject of a subsequent blog post).  And modeling still provides only a probability, not an absolute.  Two plus two may always equal four, but neither polling nor modeling is arithmetic; each says there is some level of probability that Candidate X will win, or that voter Y will support him or her.

The margin of error does not help.  It describes the statistical chance that the poll is off by more than that number of points, assuming the sample is truly random, which is rarely the case, or representative, which is increasingly arguable.
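For reference, here is the textbook formula behind that number – a quick sketch assuming the ideal case of a truly random sample at 95 percent confidence, which, as noted above, rarely holds in practice:

```python
import math

# Standard margin-of-error formula at 95 percent confidence (z = 1.96),
# valid only for a truly random sample.
def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the confidence interval, in percentage points."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(600), 1))   # 4.0 points
print(round(margin_of_error(1000), 1))  # 3.1 points
```

The formula says nothing about non-random error – skewed samples, wrong turnout assumptions – which is why the stated margin understates the real uncertainty.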

More skepticism about polls is healthy.  It reduces the risk of cutting off resources from a campaign that can win, or affecting electoral outcomes through publicizing wrong polling.  As for campaign strategy, we could use some new thinking about how we listen to voters that might make campaigns more interesting and engaging to more people, even while their outcome remains uncertain.  It is sometimes the job of campaign strategists to make what seems impossible, in fact not only possible but real, and polling alone does not do that.

Next Post:  Could AI have written this blog?

The Self-Selecting Internet

The most obvious solution to the problems with telephone polling is to administer polls online.  That solves the burgeoning cost of polling but not the problem of whether the sample is representative of the electorate.  Internet polling is less expensive, and many companies provide polling panels that can mirror population demographics.  But there is no way around the reality that people who are less interested in politics are disinclined to complete polls about politics, even if they are interested enough to vote.

Internet respondents, for the most part (there are exceptions), are people who have signed up to be on a panel and take a lot of polls.  Should we assume that those who subscribe to a panel in exchange for a reward of some kind are representative of those who do not?  I do not think so – especially when the invitation to complete a poll often tells you what it is about.  (As a panelist, I chose to complete recent polls on feminism and on the Supreme Court but not on several other topics.)

One group that is often underrepresented in both telephone and online polls is people in the middle of the political spectrum.  In 2018, most voters knew early on which party they would support, particularly in federal races (at least according to the polls). The election depended on voter turnout patterns and on the relatively small number of people in the middle who were undecided, conflicted, not yet paying attention, disinterested, or considering split tickets.

Voters in the middle are less likely than rabid partisans to want to share their political views, whether probed online or on the phone.  If you are tired of arguments about President Trump from either perspective, you are less likely to agree to spend 10 or 15 minutes talking (or writing) about him.  Internet polls often show even fewer undecided voters than telephone polls do.

Luckily for the pollsters, the middle was a small group in this year’s election, so its absence did not skew too many polls.  Some polls were wrong in Ohio because voters in the middle were disproportionately likely to support Republican Mike DeWine for Governor and Democrat Sherrod Brown for U.S. Senate. Those who were careful to poll the middle correctly predicted the result in each race.  Those who polled more partisans and fewer voters in the middle got it wrong.

Online polling is also prone to leave out another significant group:  people who are not online.  Telephone polling tells us that 80 to 85 percent of voters are online, but 15 to 20 percent still say they are not.  Combining online and telephone samples can fill that gap – except that both will leave out those people who simply, for whatever reason, do not want to be polled.

Next Post:  Living with Uncertainty 

People Do Not Want To Be Polled

The core problem with polling is that people do not wish to be polled.  Those who answer their phones when the caller is unknown to them are unusual and atypical.  And even many who do answer do not choose to complete the poll.    

This year’s telephone polling results were closer to the final election results than in 2016.  Much of the improvement, however, came from the nature of the mid-term electorate, not from better polls.  The mid-term electorate was highly polarized, and rabid partisans are easier to poll than voters in the middle.  Polls were still wrong when those in the middle did not break proportionately to the partisans.

Back in the 1980s, polling achieved representative samples of voters by calling phone numbers at random.  The definition of random is that everyone in the universe of interest (people who will vote in the next election) has an equal chance of being polled. With the advent of cell phones, caller ID, and over-polling, samples have not been random for a while – not since the last century anyway. 

Pollsters replaced random samples with representative ones. Political parties and commercial enterprises have “modeled” files – for every name on the voter file, there is information on the likely age, gender, race or ethnicity and, using statistics, the chances that individual will vote as a Democrat or Republican.  If the sample matches the distribution of these measures on the file, then it is representative and the poll should be correct.

There are (at least) three problems with that methodology.  First, there may be demographics the pollster is not balancing that are important:  pollsters got the 2016 election wrong in part because their samples included too few voters without college experience, and college and non-college voters were more different politically than they had been before.  Second, rather than letting the research determine the demographics of the electorate, the pollster must make assumptions about who will turn out in order to make the sample representative – including how many Democrats and how many Republicans.  When those assumptions are wrong, so are the polls.  This year, conventional wisdom was correct, and so the polls looked better.

The third problem is perhaps the most difficult and follows from the first two:  pollsters “weight” the data to their assumptions.  If there are not enough voters under 30 in the sample (they are harder to reach), pollsters count the under-30 voters they did reach extra – up-weighting the number of interviews with young people to what it “should” have been according to the assumptions.  Often, however, the sample of one group or another was not only too small but also unrepresentative in the first place – a skewed sample of young people is still skewed when you pretend it is bigger than it actually was.
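The up-weighting step itself is simple arithmetic, sketched below with invented shares for illustration:

```python
# Sketch of up-weighting with invented numbers: if under-30 voters should
# be 15 percent of the sample but only 5 percent of completed interviews,
# each young respondent is counted three times.
def weight(target_share: float, achieved_share: float) -> float:
    """Multiplier applied to each respondent in an underrepresented group."""
    return target_share / achieved_share

w_young = weight(0.15, 0.05)
print(round(w_young, 2))  # 3.0

# The catch the post describes: weighting scales the group up but cannot
# fix a skew within the group. Fifty unrepresentative interviews weighted
# to count as 150 are still fifty unrepresentative interviews.
```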

The problems can be minimized by making more calls to reduce the need to up-weight the data.  If 30 percent of some groups of voters complete interviews but only 10 percent of other groups do, just make three times as many calls to the hard-to-reach groups.  That is what my firm and others did this year.  It is, however, an expensive proposition, and it still does not ensure that the people who completed interviews are representative of those who did not.

Next Post:  The Self-Selecting Internet