There are different methods for assessing the reliability of questionnaires. They can be divided into inexpensive ex-ante methods that require only desk work, methods that can be applied in the laboratory, and methods that require pilot surveys and therefore substantial resources.
Ex ante there are four established methods:
- QUAID (Question Understanding Aid), a software tool;
- SQP (Survey Quality Predictor), also software-based;
- expert review, in which experts evaluate the questions;
- QAS (Question Appraisal System), based on a checklist against which each question in the questionnaire is verified, item by item.
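A QAS-style appraisal can be pictured as applying a fixed checklist of potential problems to every question. The following is a minimal illustrative sketch, not taken from the article: the checklist items, the sample question, and the `appraise` helper are all hypothetical.

```python
# Hypothetical checklist items, loosely in the spirit of a QAS-style review.
CHECKLIST = [
    "Is the question worded clearly?",
    "Does it avoid double-barreled phrasing?",
    "Are the response categories exhaustive and mutually exclusive?",
]

def appraise(question, judgments):
    """Return the checklist items the question fails.

    `judgments` maps a checklist item to True (passes) or False (fails);
    items not judged are assumed to pass.
    """
    return [item for item in CHECKLIST if not judgments.get(item, True)]

# A reviewer flags this question as double-barreled (price AND quality).
flags = appraise(
    "How satisfied are you with the price and quality of the product?",
    {"Does it avoid double-barreled phrasing?": False},
)
print(flags)
```

The point of the sketch is only that QAS makes the review systematic: every question is checked against the same list, so problems are flagged consistently rather than at each reviewer's discretion.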
In the laboratory, with the support of small groups of respondents, there are cognitive interviews, in which the respondent reports what he or she understands of each question and explains aloud how he or she arrives at the answer.
With pilot surveys, instead, there are two methods: behavior coding, which consists in listening to the interviews and classifying the problems that arise with each question, and response latency, i.e. the time that elapses between the question and the answer.
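Response latency can be operationalized as the elapsed time between the end of the question and the start of the answer. A minimal sketch, assuming hypothetical timestamped events (the question IDs and timestamps are invented for illustration):

```python
from datetime import datetime

# Each tuple: (question id, timestamp when the question ended,
#              timestamp when the respondent began answering).
events = [
    ("Q1", "2016-09-01 10:00:05", "2016-09-01 10:00:07"),
    ("Q2", "2016-09-01 10:00:30", "2016-09-01 10:00:41"),
]

FMT = "%Y-%m-%d %H:%M:%S"

# Latency in seconds = answer start minus question end.
latencies = {
    qid: (datetime.strptime(start, FMT) - datetime.strptime(end, FMT)).total_seconds()
    for qid, end, start in events
}

# An unusually long latency may flag a question that respondents
# find hard to understand or to answer.
for qid, secs in latencies.items():
    print(qid, secs)
```

In practice the timestamps would come from interview recordings or CATI/CAPI software logs rather than a hand-written list.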
These seven methods differ in the resources they require and in how reliably they evaluate individual questions. The experiment tries to determine which of these methods is best. The conclusions suggest that some methods are more effective than others depending on the type of question.
There are other methods, not mentioned in this article, for assessing the reliability of questions: for example, at GOR in Berlin this year a study was presented comparing cognitive interviews with online probing, a promising technique that still needs experimentation.
How Accurately Do Different Evaluation Methods Predict the Reliability of Survey Questions?
By Aaron Maitland and Stanley Presser, JSSAM, September 2016