Yes, in theory but not in today’s reality. In my last post, I suggested that polls are still usually predictive, but modeling is often more so. Quick-and-dirty analytics, however, has few advantages over quick-and-dirty polling.
About 15 years ago, campaigns started to use statistical modeling to produce efficiencies and better targeting. Before that, campaigns targeted persuadable precincts. Statistical modeling helps find the individual swing voters in precincts that are generally Democratic or Republican, a process that is more inclusive and allows more efficient use of resources.
Increasingly, campaign communications are directed to individuals – online, through the mail, or through door knocks or addressable TV – and not exclusively (or even mostly) through broadcast media like television. Polling analyzes people in the aggregate – telling a campaign what percentage of men or women, or of younger or older voters, for example, supports a particular candidate. Polls also show which groups are more undecided or seem more likely to move in response to arguments about the candidates. Modeling makes those same predictions at the individual level, improving prediction, efficiency, and targeting accuracy.
A decade ago, modeling used commercially available data and advanced statistics to make predictions. A woman in an urban area where an unusually high number of people are college educated and of color is probably a Democrat – especially if she has a frequent flyer card that shows international vacation travel. An older man in a rural area that has few people of color is more likely a Republican – especially if he has a hunting license and a subscription to a gun magazine. Those are stereotypical examples, but the wealth of available data on where people live, shop, and travel, and on what they read, helps make probabilistic predictions at the individual level.
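To make the idea concrete, here is a minimal sketch of how consumer-file attributes can be turned into an individual-level probability. The features, weights, and names are invented for illustration – real campaign models are trained on far larger data – but the logistic-regression form shown is the standard workhorse for this kind of scoring.

```python
import math

def support_probability(features, weights, bias=0.0):
    """Logistic model: turn weighted attributes into P(supports candidate)."""
    score = bias + sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-score))

# Hypothetical weights, echoing the stereotypical examples above.
weights = {
    "urban": 1.2,             # lives in an urban area
    "college_area": 0.8,      # high share of college graduates nearby
    "intl_travel": 0.6,       # frequent-flyer record with international trips
    "hunting_license": -1.1,  # holds a hunting license
    "gun_magazine": -0.9,     # subscribes to a gun magazine
}

urban_traveler = {"urban": 1, "college_area": 1, "intl_travel": 1,
                  "hunting_license": 0, "gun_magazine": 0}
rural_hunter = {"urban": 0, "college_area": 0, "intl_travel": 0,
                "hunting_license": 1, "gun_magazine": 1}

print(round(support_probability(urban_traveler, weights), 2))  # → 0.93
print(round(support_probability(rural_hunter, weights), 2))    # → 0.12
```

The output is a probability per voter, not a group percentage – which is exactly what lets a campaign target individuals rather than precincts.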
The process, however, still requires a lot of data collection to find the less obvious associations between people’s behavior and their voting habits. To make modeling less expensive, the next iteration started from a set of assumptions and pre-established algorithms, allowing modelers to collect less data (and do less analysis) to achieve results. Currently, most modeling is done with artificial intelligence and machine learning, which is more efficient still and works with smaller samples than earlier approaches required.
AI often skips the step of understanding why certain variables are predictive, of asking how a particular situation may be different, or of analyzing in depth the patterns of errors and misassumptions that would prompt those questions. Those who assumed people who voted for Obama would vote for Clinton made errors; voters who supported Trump in 2016 did not so reliably vote Republican last year. Failing to consider the why and the underlying dynamics led to strategic errors.
Opinion research at its best has more depth of understanding than AI produces, plus some judgment calls, or hunches, or perhaps a little artistry, which machine learning does not (yet?) produce. Analytics is unquestionably a boon. Advanced statistics is an important tool for prediction and targeting, especially as samples are increasingly skewed. It is not, however, a replacement for strategy or judgment, nor does it help much (although it could) in understanding what people are thinking and feeling, how they perceive a candidate, or how that candidate can improve their relationship with their constituencies.
Next Post: To find the answer, change the question