A mixed-mode approach is theoretically quite attractive. Not only does it address the problem of non-coverage, it is also a way of trying to increase response rates. By following up, in a sequential design, the sampled units with Internet access who have not responded online with another data collection mode, we expect to obtain higher response rates.
However, the first results from real studies are less positive: they tell us that mixed-mode designs do not necessarily produce higher overall response rates than a unimodal design using the "best" survey mode. This is especially true when the different modes are offered simultaneously.
More important than response rates per se are the bias created by non-response and non-coverage errors, and sample representativeness. Generally, the representativeness of a mixed-mode survey is as good as, but no better than, that of a face-to-face survey. Using the web in a mixed-mode survey can help reduce costs.
However, this saving is not systematic either: it depends on the total sample size and on the use of incentives. The mixed-mode approach also has its drawbacks:
- it makes the implementation of the survey more difficult: more preparation work to adapt the questionnaires to the specificities of each mode, specification of data collection rules, monitoring of fieldwork, more complex data processing and analysis, etc.
- it may affect the comparability of the data. The data collection modes differ in their properties (presence or absence of an interviewer, visual or auditory stimulus, etc.). The mode through which an individual responds can therefore push the same respondent to give different answers: this is what is sometimes called the "mode effect".
Therefore, the comparability of results across data collection modes must be checked before the responses from a mixed-mode survey are combined into a single analysis. In this document we compare the estimated quality of single items and of complex concepts between unimodal and mixed-mode surveys.
Methods and Data:
In this study, two surveys were compared:
- Round 6 of the European Social Survey (ESS), in which the data were collected through face-to-face interviews (with show cards) at the respondents' homes
- the ESS mixed-mode survey, carried out in parallel (2012-2013) with the main survey by the same agency. The design was sequential: a web survey was offered first and, if respondents did not respond, it was followed by a face-to-face interview.
Comparisons were made in the UK and Estonia. The two types of survey are compared in terms of the quality of single items and of composite scores.
Quality is defined as the share of the variance in the observed responses that is explained by the latent concept of interest, i.e. the product of validity and reliability. The procedure used to assess the quality of individual items is the MultiTrait-MultiMethod (MTMM) approach, which consists of repeating several questions using different response scales. Reliability and validity coefficients can then be estimated for each question.
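The item-level quality just defined can be illustrated with a small sketch, assuming standardized reliability and validity coefficients as in the MTMM tradition (the coefficient values below are purely illustrative, not estimates from this study):

```python
# Sketch: item quality as the variance share explained by the latent
# concept, i.e. the product of reliability and validity. With
# standardized coefficients r and v, reliability = r^2, validity = v^2,
# so quality = (r * v)^2.

def item_quality(reliability_coef: float, validity_coef: float) -> float:
    """Share of observed-response variance explained by the latent
    concept of interest, given standardized coefficients r and v."""
    return (reliability_coef * validity_coef) ** 2

# Hypothetical MTMM estimates for one question asked with two response scales
q_11point = item_quality(reliability_coef=0.90, validity_coef=0.95)
q_agree5 = item_quality(reliability_coef=0.85, validity_coef=0.92)

print(round(q_11point, 3))  # 0.731
print(round(q_agree5, 3))   # 0.612
```

Comparing such quality estimates across modes, question by question, is what allows the unimodal and mixed-mode designs to be contrasted at the item level.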
As for the quality of the composite score (CS), it is the strength of the relationship between the CS and the variable we are really interested in, that is, the latent complex concept. We calculate this quality using the formula provided by Saris and Gallhofer (2014), in which the error terms are the random components in the relationships.
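The idea of composite-score quality as the strength of the CS–latent relationship can be illustrated by simulation. This sketch is ours and is not the Saris and Gallhofer (2014) formula (which works from estimated model coefficients); it simply shows the quantity being targeted, with all numbers illustrative:

```python
# Sketch: composite-score (CS) quality as the squared correlation between
# the CS and the latent concept it is meant to measure.
import random

random.seed(42)
N = 50_000

latent = [random.gauss(0, 1) for _ in range(N)]
# Three items measuring the latent concept with independent random error
items = [[f + random.gauss(0, 0.6) for f in latent] for _ in range(3)]
cs = [sum(vals) for vals in zip(*items)]  # unweighted sum composite

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

# Quality of the CS: variance share of the CS explained by the latent concept
quality_cs = corr(cs, latent) ** 2
print(round(quality_cs, 2))  # close to the theoretical 0.89 for this setup
```

In this setup the theoretical quality is var-based: 9 / (9 + 3 × 0.36) ≈ 0.89; a low value would signal that the CS is a poor proxy for the latent complex concept.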
Since researchers are often interested not in individual items but in complex (or composite) concepts, we have also considered the quality of the composite scores (CS).
The quality estimates of the mixed-mode survey are very similar to those of the face-to-face survey.
However, we must be careful in generalizing these results, because the amount of evidence is still limited. So far, the results in terms of the quality of individual items and composite scores are quite positive for the implementation of mixed-mode surveys. Nevertheless, several other quality indicators must be taken into account before deciding on a mixed-mode approach: for example, the quality of answers to open questions, non-response bias, and social desirability bias.