Some survey researchers argue that telephone polls are "dead" and should be replaced with non-probabilistic samples from online panels. Response rates for random-digit-dial (RDD) samples have declined in recent years, with many such surveys reporting response rates consistently below 10%.
At issue is not only the quality of low-response telephone surveys, but also whether and when non-probabilistic samples, such as those obtained through Internet panels, can serve as feasible and reliable substitutes or alternatives.
To reduce distortions in non-probabilistic samples, the online survey research industry has developed several methods, including:
- Propensity modeling: a logistic regression model is constructed to "predict" the probability of belonging to one sample rather than the other. The non-probabilistic sample can then be weighted using the inverse of the predicted probability derived from the propensity model.
- Sample matching: a probabilistic sample is used as a gold standard, and online non-probability panelists are matched to its cases on a one-to-one basis using a specified set of variables.
Unlike the propensity method, sample matching is not an explicit weighting technique, but a method that essentially attempts to balance a non-probabilistic sample against a probabilistic sample on target variables.
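The propensity approach described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a single continuous covariate, generates made-up data for the two samples, fits the logistic model by plain gradient ascent, and weights panel cases by the odds of reference-sample membership (one common form of the inverse-propensity weight the text describes).

```python
import math
import random

# Illustrative data only: a "reference" probability sample (s = 1) and an
# online panel (s = 0) that skews higher on a single covariate x.
random.seed(0)
reference = [(random.gauss(0.0, 1.0), 1) for _ in range(200)]
panel = [(random.gauss(0.7, 1.0), 0) for _ in range(200)]
data = reference + panel

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit logistic regression P(s = 1 | x) by gradient ascent on the likelihood.
b0 = b1 = 0.0
lr = 0.1
for _ in range(3000):
    g0 = g1 = 0.0
    for x, s in data:
        r = s - sigmoid(b0 + b1 * x)   # residual
        g0 += r
        g1 += r * x
    b0 += lr * g0 / len(data)
    b1 += lr * g1 / len(data)

# Weight each panel case by the odds of reference membership, p / (1 - p),
# so the weighted panel mimics the reference covariate distribution.
weights = [sigmoid(b0 + b1 * x) / (1.0 - sigmoid(b0 + b1 * x)) for x, _ in panel]

ref_mean = sum(x for x, _ in reference) / len(reference)
raw_mean = sum(x for x, _ in panel) / len(panel)
wtd_mean = sum(w * x for (x, _), w in zip(panel, weights)) / sum(weights)
```

After weighting, the panel's mean of `x` moves toward the reference sample's mean, which is the entire point of the adjustment.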
This article compares low-response probabilistic samples with non-probabilistic samples in terms of basic data quality, using an elementary approach built on the most fundamental survey data available: demographics.
Methods and Data:
The data used for this study come from five main sources:
- Non-probabilistic Internet panel from a Centris survey on communication, entertainment, and telephony (Panel 1)
- Telephone RDD omnibus survey (Telephone 1, with a limited cell-phone-only version)
- Telephone RDD survey of the general population aged 18 to 54 (Telephone 2)
- Non-probabilistic Internet panel from a sports survey (Panel 2)
- The 2013 National Health Interview Survey (NHIS)
The two non-probabilistic samples were obtained from two different Internet panels.
While our study brings together non-probabilistic and probabilistic samples of various sizes and scopes, demographic variables are common to all of them. To facilitate comparison, we identified a key set of variables that are not susceptible to satisficing, social desirability bias, or other measurement errors that could confound the effect of sample type with that of the interview mode used.
For our analysis, we consider four specific demographic variables: age group (18-34, 35-49, 50-64, 65+); race/ethnicity (non-Hispanic white, non-Hispanic black, Hispanic, non-Hispanic other); education (less than high school, high school, some college, college or beyond); and region (Northeast, South, Midwest, West).
We used the US Census Bureau's 2012 American Community Survey (ACS) as the "gold standard" source of population benchmarks for assessing estimated bias.
For each possible pair (A, B) of demographic variables, conditional distributions are evaluated using the cross-tabulation of demographic variable A (rows) by demographic variable B (columns).
The idea behind our elementary tabulation approach is simple: quantify the bias in estimates of one demographic variable within each level of a second demographic variable, or, more simply, examine the conditional distribution of one demographic variable given a category of another.
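This conditional-distribution calculation can be sketched as follows. The records and the benchmark figures below are made up for illustration; they are not taken from the article's surveys or from the ACS:

```python
from collections import defaultdict

# Hypothetical toy records (age group, education) with unit weights;
# a real analysis would use each survey's cases and final weights.
sample = [
    ("18-34", "College+"), ("18-34", "College+"), ("18-34", "High School"),
    ("18-34", "High School"), ("35-49", "College+"), ("35-49", "High School"),
]
weights = [1.0] * len(sample)

def conditional_dist(rows, wts, given_age):
    """Estimate P(education | age group == given_age) with weights."""
    totals, denom = defaultdict(float), 0.0
    for (age, edu), w in zip(rows, wts):
        if age == given_age:
            totals[edu] += w
            denom += w
    return {k: v / denom for k, v in totals.items()}

est = conditional_dist(sample, weights, "18-34")

# Hypothetical benchmark for the same cell (a stand-in for an ACS figure).
benchmark = {"College+": 0.4, "High School": 0.6}

# Average absolute bias across categories, in proportion terms.
avg_abs_bias = sum(abs(est[k] - benchmark[k]) for k in benchmark) / len(benchmark)
```

Repeating this over every (A, B) pair and every category of B yields the grid of estimated biases the analysis compares across samples.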
All of this is done on both the unweighted samples and the samples weighted with common techniques (propensity score, raking, matching), comparing the results.
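Of the weighting techniques mentioned, raking reduces to a short iterative proportional fitting loop. This is a minimal sketch with made-up records and target margins, not the weighting specification actually used in the study:

```python
# Minimal raking (iterative proportional fitting) sketch in pure Python.
# Records and target margins are illustrative only.
records = [("18-34", "M"), ("18-34", "F"), ("35+", "M"), ("35+", "F"), ("35+", "F")]
weights = [1.0] * len(records)

targets = [
    (0, {"18-34": 0.5, "35+": 0.5}),   # target age margin
    (1, {"M": 0.5, "F": 0.5}),         # target sex margin
]

for _ in range(50):                     # cycle over margins until convergence
    for dim, target in targets:
        total = sum(weights)
        # current weighted share of each category on this dimension
        share = {}
        for rec, w in zip(records, weights):
            share[rec[dim]] = share.get(rec[dim], 0.0) + w / total
        # rescale each weight so this margin matches its target
        weights = [w * target[rec[dim]] / share[rec[dim]]
                   for rec, w in zip(records, weights)]
```

Each pass scales the weights so one margin matches its target; cycling over the margins converges when no cell of the cross-tabulation is empty.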
In our analysis, the unweighted samples from the Internet panels showed higher and more variable estimated bias than the low-response probabilistic samples, with no systematic pattern emerging. The largest errors in the non-probabilistic samples examined are largely driven by education and ethnicity. Once weighted, the non-probabilistic samples do not catch up with the telephone samples: the estimated average bias, although reduced, remains higher than that of the probabilistic samples.
Many have been quick to claim that, given the low response rates currently achieved in telephone survey research, the very concept of "probabilistic sampling" is void. But the response rate alone does not determine whether a probabilistically drawn sample yields a representative set of respondents.
What is critical is the degree to which nonresponse is systematic rather than random. In our view, the research industry does not yet understand when nonresponse in probability-sample surveys is random and when it is systematic. Research that has found little or no bias suggests that nonresponse may be less systematic than many suspect.
This does not dismiss the threat posed by nonresponse, but in this article, low-response probability-based telephone surveys attain roughly two and a half times less bias, and require half the sample size for equivalent statistical power, compared with the non-probabilistic samples tested. Whether these benefits are worth the cost of the probabilistic approach is up to the investigator and his or her research needs.
DAVID DUTWIN, TRENT D. BUSKIRK