
Theory and practice in non-probability surveys - 1

INTRODUCTION

In recent years, the use of non-probability surveys has increased, driven by their lower costs and by the declining response rates of traditional surveys. In surveys of this type, however, respondents are self-selected, which makes design-based inference methods inapplicable and raises concerns about biased results.

Selection bias refers to a systematic difference between a statistical estimate and the true population parameter, caused by problems in the sample's composition (rather than by measurement error).

Selection bias usually derives from two sources:

  • undercoverage: the sampling frame omits parts of the target population
  • nonresponse: sampled units do not complete the survey

Both concepts presuppose a process that begins with a complete population and randomly selects a subset from it.
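The mechanics of self-selection bias can be illustrated with a minimal simulation. All numbers here are invented for the sketch: a 50% true support rate, with supporters assumed three times as likely to opt in to the survey.

```python
import random

random.seed(0)

# Hypothetical population of 100,000 people. Opinion is correlated with
# willingness to respond, so volunteers are unrepresentative.
population = []
for _ in range(100_000):
    supports = random.random() < 0.50          # true support rate: 50%
    # Supporters are assumed three times as likely to volunteer a response.
    responds = random.random() < (0.30 if supports else 0.10)
    population.append((supports, responds))

true_mean = sum(s for s, _ in population) / len(population)
sample = [s for s, r in population if r]
sample_mean = sum(sample) / len(sample)

print(f"true support:   {true_mean:.3f}")
print(f"sample support: {sample_mean:.3f}")   # overstates support
```

With these assumed response rates, the self-selected sample reports support around 75% even though the true rate is 50%; no amount of extra sample size fixes this, because the bias is systematic.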

Many non-probability surveys do not originate from anything resembling a sampling frame. For surveys of this type, the processes that lead a respondent to be included in the sample are numerous, potentially arbitrary, and may bear no resemblance at all to the traditional probability-survey process.

Research has focused on identifying the conditions under which valid statistical inferences about causal effects can be drawn from observed data. There are two contexts: the causal one (where the parameter of interest is the contrast between experimental treatments) and the survey one (where a wide range of quantities is estimated, including means, totals, correlations, and other measures of association).

Despite the differences, the conditions that produce selection bias in causal analyses also apply in the survey context.

We identify three components that determine whether self-selection can lead to biased results:

  • EXCHANGEABILITY: are the confounding variables fully known and measured for all sampled units?
  • POSITIVITY: does the sample include all the necessary types of units in the target population, or are groups with distinct characteristics missing?
  • COMPOSITION: does the sample's distribution match the target population on the confounding variables, or can it be adjusted to do so?

The article is divided into two parts:

  1. We describe how these components apply in the context of randomized experiments and probability surveys, before showing how they extend to observational studies and non-probability surveys.
  2. We provide a critical review of current practices in non-probability data collection and their implications for selection bias.

RANDOMIZATION AND UNBIASED INFERENCE IN EXPERIMENTS AND SURVEYS

In experiments, a patient's outcome may differ depending on whether he or she receives treatment A or treatment B. Before a treatment is chosen, both outcomes are possible, but we observe only the outcome under the treatment actually administered.

The causal effect is the difference between the two potential outcomes. Although we can never observe both outcomes for a single individual, we can compare the average outcome of people receiving treatment A with that of people receiving treatment B to draw inferences about which treatment is better.

When treatments are randomly assigned, we can be reasonably sure that observed differences in outcomes between the treatment conditions are due to the treatments themselves and not to some other difference between the two groups.
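The logic of potential outcomes and randomized comparison can be sketched in a short simulation. The numbers are invented: each subject has an outcome under A and an outcome under B, with treatment B assumed to add 2 points.

```python
import random

random.seed(1)

# Invented potential outcomes: each subject carries both an outcome under A
# and an outcome under B, though only one will ever be observed.
subjects = []
for _ in range(50_000):
    y_a = random.gauss(10, 3)        # outcome if given treatment A
    subjects.append((y_a, y_a + 2))  # outcome if given treatment B

# Random assignment: a fair coin picks the treatment, and only the outcome
# under the assigned treatment is observed.
observed_a, observed_b = [], []
for y_a, y_b in subjects:
    if random.random() < 0.5:
        observed_a.append(y_a)
    else:
        observed_b.append(y_b)

# Difference in observed group means estimates the causal effect.
effect_estimate = (sum(observed_b) / len(observed_b)
                   - sum(observed_a) / len(observed_a))
print(f"estimated causal effect: {effect_estimate:.2f}")  # true effect is 2.0 by construction
```

Because the coin flip is independent of the potential outcomes, the two groups are exchangeable and the simple difference in means recovers the true effect.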

When treatments are not randomly assigned, these evaluations are more difficult.

For example, if patients who receive treatment A tend to fare worse, but treatment A is usually given to sicker patients, it is hard to know whether the difference is due to the treatment or to the fact that the patients who received it were in worse condition to begin with. The underlying severity of the disease is known as a confounder. Confounders are variables associated with both the choice of treatment and the outcome of interest, and they are the primary source of selection bias in causal analyses.
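This confounding scenario can be sketched numerically. All figures are invented: the two treatments are assumed equally effective (true effect 0), and severity alone drives both the treatment choice and the outcome.

```python
import random

random.seed(2)

def mean(xs):
    return sum(xs) / len(xs)

# Invented scenario: severely ill patients are far more likely to receive
# treatment A; severity (the confounder) lowers the outcome by 4 points.
records = []
for _ in range(100_000):
    sick = random.random() < 0.5
    gets_a = random.random() < (0.8 if sick else 0.2)
    outcome = 10 - (4 if sick else 0) + random.gauss(0, 1)
    records.append((sick, gets_a, outcome))

# Naive comparison mixes the treatment effect with the severity imbalance.
naive_diff = (mean([y for _, a, y in records if a])
              - mean([y for _, a, y in records if not a]))

# Adjust for the confounder: compare A and B within each severity stratum,
# then average the stratum-specific differences.
strata_diffs = []
for level in (True, False):
    a = [y for s, t, y in records if s == level and t]
    b = [y for s, t, y in records if s == level and not t]
    strata_diffs.append(mean(a) - mean(b))
adjusted_diff = mean(strata_diffs)

print(f"naive A-B difference:    {naive_diff:+.2f}")     # spuriously negative
print(f"adjusted A-B difference: {adjusted_diff:+.2f}")  # near the true zero
```

The naive comparison makes treatment A look harmful purely because its patients were sicker; conditioning on the measured confounder removes the artifact. This only works, of course, if severity is actually measured.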

A probability-based survey is essentially a random experiment in which the group of subjects is the set of units in the sampling frame and the treatment is selection into the survey. Unlike experiments, in which we observe outcomes for both treated and untreated subjects, in surveys we observe outcomes only for the selected units, under the assumption that selected and non-selected units do not differ systematically.

STRONG IGNORABILITY: EXCHANGEABILITY AND POSITIVITY

"Strong ignorability" means the conditions for which the inference on causal effects can be assessed without selection errors:

  • INTERCHANGEABILITY: requires the mechanism for which the subjects are assigned a treatment to be independent of the measured result, both unconditionally and conditionally to the observed covariates.
  • POSITIVITY: it must be possible, for each subject, to receive any of the treatments. This requires that all subjects have a positive chance of receiving treatment.

In experiments, random assignment of the treatment guarantees that, on average, the conditions of exchangeability and positivity are met. The same holds for probability surveys.

COMPOSITION

In randomized experiments, various methods exist to allow experimental results to be generalized to target populations (e.g., reweighting strategies that aim to equate the experimental sample and the population on observed characteristics).

While experiments must ensure comparability both between the treatment and control groups and between the sample and the population, surveys need only address the correspondence between sample and population. A sample's composition will match that of the population when all units have an equal probability of selection, which implies unconditional exchangeability.

When the selection probabilities are unequal but known for every unit on the frame, the situation is equivalent to conditional exchangeability, and weighting observations by the inverse of their selection probability produces unbiased population estimates.
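A minimal sketch of inverse-probability weighting, with two invented groups sampled at known but unequal rates:

```python
import random

random.seed(3)

# Invented frame: group X (incomes around 50,000) is sampled with known
# probability 0.4; group Y (incomes around 20,000) with probability 0.1.
population = ([(random.gauss(50_000, 5_000), 0.4) for _ in range(10_000)]
              + [(random.gauss(20_000, 5_000), 0.1) for _ in range(10_000)])

true_mean = sum(y for y, _ in population) / len(population)

# Draw the sample: each unit enters with its own selection probability.
sample = [(y, p) for y, p in population if random.random() < p]

unweighted = sum(y for y, _ in sample) / len(sample)
# Weight each observation by the inverse of its selection probability
# (a Horvitz-Thompson-style ratio estimator).
weighted = sum(y / p for y, p in sample) / sum(1 / p for _, p in sample)

print(f"true mean:       {true_mean:10,.0f}")
print(f"unweighted mean: {unweighted:10,.0f}")  # overrepresents group X
print(f"weighted mean:   {weighted:10,.0f}")    # close to the true mean
```

The unweighted mean drifts toward the oversampled group, while the inverse-probability weights restore each group to its share of the population. Note that this works only because the selection probabilities are known exactly.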

In both cases, random selection ensures that, on average, the sample will match the target population in the distribution of any variable measured in the survey.

EXTENDING THE FRAMEWORK TO NON-RANDOM SAMPLES

However, the conditions above are only guaranteed when randomization succeeds completely, which rarely happens. In experiments, subjects drop out of trials or are lost to follow-up. In surveys, sampling frames may not fully cover the target population, and a portion of the sampled units is never observed. When such problems occur, the usual solution is to apply statistical adjustments to correct the imbalances. In doing so, we rely on a model that assumes exchangeability and positivity hold, and that the adjustment reconstructs the correct sample composition with respect to the confounding covariates.

The same is true for surveys that do not use probability sampling. When units are not randomly selected from the target population, researchers must rely on statistical models. Probability-based surveys with undercoverage or nonresponse problems must likewise specify a model relating the observed units to the unobserved ones. For probability samples, the initial design does most of the work of ensuring exchangeability, positivity, and the correct sample composition; statistical models are used at the estimation stage to correct what are hoped to be minor biases. Non-probability samples, by contrast, cannot rely on randomness to help meet these requirements and must instead rely on models at every stage of the survey process, from sample selection to estimation. As in causal analyses, researchers can never know for certain that these requirements have been met.
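A minimal sketch of such a model-based adjustment: post-stratification of a self-selected sample on a single invented covariate (age group), with population shares assumed known from external data. All rates below are hypothetical.

```python
import random

random.seed(4)

# Invented non-probability sample: young people opt in far more often, and
# age (assumed here to be the only confounder) is related to the outcome.
pop_share = {"young": 0.30, "old": 0.70}   # assumed known population shares
opt_in    = {"young": 0.30, "old": 0.05}   # unknown to the analyst
support   = {"young": 0.70, "old": 0.40}   # true support by age group

true_mean = sum(pop_share[g] * support[g] for g in pop_share)

sample = []
for g, share in pop_share.items():
    for _ in range(int(100_000 * share)):
        if random.random() < opt_in[g]:
            sample.append((g, random.random() < support[g]))

raw = sum(y for _, y in sample) / len(sample)

# Post-stratification: reweight each age group to its known population
# share. This relies on a model -- exchangeability within age groups --
# rather than on random selection.
n_g = {g: sum(1 for s, _ in sample if s == g) for g in pop_share}
w = {g: pop_share[g] / (n_g[g] / len(sample)) for g in pop_share}
adjusted = sum(w[g] * y for g, y in sample) / sum(w[g] for g, _ in sample)

print(f"true support:    {true_mean:.3f}")
print(f"raw sample:      {raw:.3f}")       # pulled toward the young group
print(f"post-stratified: {adjusted:.3f}")  # close to the truth
```

The adjustment succeeds here only because, by construction, age fully explains who opts in. If an unmeasured variable also drove participation, the reweighted estimate would remain biased, which is exactly the exchangeability concern the article raises for non-probability samples.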


Authors:
ANDREW W. MERCER *, FRAUKE KREUTER, SCOTT KEETER, ELIZABETH A. STUART

