Experience tells marketing execs that customer research always pays off, but the fact that many forecasts were off target in recent elections around the world—from Brexit to the 2016 US election—has made them wary of surveys.
However, if marketers are aware of potential bias in their surveys, they can take steps to reduce its effects.
Truth be told, the margin of error for most of the 2016 election polls was just two to three percentage points, a typical level of sampling error. So what's the problem? The problem is that the errors all pointed in the same direction. When that happens, the data contain a bias.
Most experts are attributing the polling misstep to one of two possible culprits.
The first is biased non-response, which occurs when only certain kinds of people are willing to give their opinion. Related to this is the "shy Trumper" hypothesis, where those intending to vote for Trump, or for Brexit, may have been reluctant to reveal their intention in the heated political climate.
What marketers need to know is that biased non-response is not exclusive to politics. The market research industry faces exactly the same problem. Think of a survey where only customers with a grudge or employees of the month had motives to participate. The results would be skewed in both cases.
If the sample is biased, and the responses are not weighted to correct for this because pollsters are unaware of the bias, then generalizations about the population based on those responses will be flawed.
The second culprit is social desirability bias: sometimes survey responses are a poor measure of voting intentions simply because people are not completely honest in their answers, since support for a candidate or political outcome is considered socially undesirable.
Experts call this the "Bradley effect" after Tom Bradley, an African-American Democrat who lost the 1982 California governor's race despite having a lead in the polls. The hypothesis is that some voters not only were reluctant to reveal their preference for the white candidate, for fear of opening themselves to criticism, but actually claimed a preference for Bradley, skewing the results even more than simple non-response would.
Like political polling, market research is not free of social desirability bias, and we’ll talk later about the best ways to reduce its effects.
Frequently, companies don’t take any measures to determine if their surveys are affected by biased non-response, which means that they may be deciding their next moves based on flawed data.
Dave Vannette, writing for Qualtrics, says, “In most cases, the simplest approach is to create a new variable (column) that is 0 if the customer did not participate in the survey and 1 if the customer did participate in the survey.”
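Vannette's suggestion can be sketched in a few lines of pandas. The column names and figures below are illustrative assumptions, not from any real dataset; the point is that once every customer has a 0/1 participation flag, you can compare the two groups on attributes you know for everyone.

```python
import pandas as pd

# Hypothetical customer records with attributes known for everyone,
# whether or not they answered the survey (all values are made up).
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "age":         [23, 35, 41, 29, 62, 55],
    "tenure_yrs":  [1, 4, 7, 2, 10, 9],
})
responded_ids = {2, 5, 6}  # customers who completed the survey

# The new 0/1 variable Vannette describes:
customers["responded"] = customers["customer_id"].isin(responded_ids).astype(int)

# Compare known attributes across the two groups; a large gap hints
# at non-response bias on that attribute.
print(customers.groupby("responded")[["age", "tenure_yrs"]].mean())
```

Here the responders skew noticeably older and longer-tenured than the non-responders, which is exactly the kind of red flag this check is meant to surface.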
If marketers identify a non-response bias in their data, it’s still possible that the bias is irrelevant to the matter at hand. Going back to the political examples, a non-response bias related to geography or age would have been relevant in the Brexit vote, because people in different regions or age groups had different views on the EU. On the other hand, if, say, tall people were much more likely to participate in the pre-election polls than short people, it wouldn’t have made much of a difference.
And of course, as Dave Vannette says, “Even when nonresponse bias is present it can often be adjusted for using weighting”—that is, a statistical technique in which certain data items receive more emphasis than others within the same group.
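One common form of weighting is post-stratification: each respondent is weighted by the ratio of their group's share of the population to its share of the sample. The sketch below assumes we know the true age-group shares (say, from the full customer database); the groups, shares, and ratings are invented for illustration.

```python
import pandas as pd

# Survey respondents, skewed toward older customers (illustrative data).
responses = pd.DataFrame({
    "age_group":    ["18-34", "18-34", "35-54", "55+", "55+", "55+"],
    "satisfaction": [6, 7, 8, 9, 9, 8],  # 1-10 rating
})

# Known population shares, e.g. from the full customer database.
population_share = {"18-34": 0.50, "35-54": 0.30, "55+": 0.20}

# Weight = population share / sample share, computed per group.
sample_share = responses["age_group"].value_counts(normalize=True)
responses["weight"] = responses["age_group"].map(
    lambda g: population_share[g] / sample_share[g]
)

unweighted = responses["satisfaction"].mean()
weighted = (responses["satisfaction"] * responses["weight"]).sum() / responses["weight"].sum()
print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")
```

Because the over-represented 55+ group rated satisfaction highest, the weighted average comes out lower than the raw average, which is the correction weighting is meant to deliver.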
Marketers can take several steps to avoid social desirability bias in customer research.
There are also useful questionnaires that allow you to measure social desirability and test your results, like the Balanced Inventory of Desirable Responding and the Marlowe-Crowne Social Desirability Scale.
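Scales like these are typically scored by counting how many answers match a "socially desirable" key. The sketch below shows that general mechanic only; the items, keyed answers, and cutoff are placeholders, not the actual Marlowe-Crowne or BIDR content, which you would substitute from the published instruments.

```python
# Placeholder key: item number -> the socially desirable true/false answer.
# NOT the real Marlowe-Crowne items; shown only to illustrate scoring.
KEYED_ANSWERS = {1: True, 2: False, 3: True, 4: False, 5: True}

def desirability_score(answers: dict) -> int:
    """Count responses that match the socially desirable key."""
    return sum(answers[i] == KEYED_ANSWERS[i] for i in KEYED_ANSWERS)

respondent = {1: True, 2: False, 3: False, 4: False, 5: True}
score = desirability_score(respondent)
CUTOFF = 4  # placeholder threshold for flagging a respondent
print(score, score >= CUTOFF)
```

A high score suggests the respondent tends to answer in whatever way looks best, so their other survey answers can be flagged or down-weighted accordingly.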
It can take time to figure out what went wrong after a polling misstep. Following the first of a series of polling misfires, in the British general election of 2015, a panel of experts spent no less than six months determining what had happened. From the marketer's point of view, though, the chief takeaway is that the efficacy of survey research is not in question, even if this is "a time of extensive experimentation and innovation in the field," according to the Pew Research Center. The challenges pollsters have faced are now helping marketers identify problems in their own surveys and improve them for the future.