2021 Jun 25;36(6):982–995. doi: 10.1093/heapol/czab048

Table 2.

Approaches to strengthening surveys

Expert review

Description: Subject-area experts review the survey tool and judge how well each questionnaire item truly reflects the construct it is intended to measure.

Comparison to cognitive interviewing:
  • An important form of validation, but it provides no insight into how respondents understand and interpret the survey questions.

Issue: Experts cannot predict how survey respondents will interpret the questions.

Respondent-driven pretesting

Description: A small group of participants with the same characteristics as the target survey population complete the survey. Researchers elicit feedback during the survey or in debriefings at the end. Feedback elicitation can include targeted probes about questions that appeared problematic, in-depth exploration of each question, probing on a random subset of questions, or asking participants to rate how clear each question was.

Comparison to cognitive interviewing:
  • Respondent-driven pretesting may overlap with cognitive interviewing (e.g. eliciting in-depth reflection on how participants interpret questions and formulate answers as they proceed through the survey).
  • However, it may also differ from cognitive interviewing by focusing instead on post-survey reflections through ratings or group debriefs (Ruel, Wagner and Gillespie, 2016).

Issue: Low methodological clarity: it can be the same as cognitive interviewing or quite different.

Translation and back-translation

Description: After a survey is translated from the origin to the target language, a different translator 'blindly' translates it back. Differences are then compared and resolved (Weeks, Swerissen and Belfrage, 2007).

Comparison to cognitive interviewing:
  • Back-translation involves the same close attention to language and meaning as cognitive interviewing.
  • However, it does not examine cultural appropriateness or the extent to which questions achieve cognitive match between researchers and respondents.

Issue: Relies on bilingual translators whose worldview and experience do not match the target population's, leaving them unable to comment on the tool's appropriateness.

Pilot testing

Description: Enumerators administer the survey to a small group of participants with the same characteristics as the target survey population, in conditions as close to real-world as possible.

Comparison to cognitive interviewing:
  • Pilot testing explores survey length, modality (e.g. are the computer-assisted personal interviewing (CAPI) programming and tablet hardware functioning properly?) and skip patterns, and catches obvious problems with content and translation.
  • Pilot testing is undertaken by members of the quantitative enumeration team who will conduct the survey at scale and focuses on the practical application of the survey questions. One pilot test goes through the whole survey tool with a sample participant.
  • Cognitive testing is undertaken by specially trained qualitative researchers, with a focus on extensive probing to understand the cognitive process underlying each response. One cognitive interview goes through a curated subset of questions from the survey tool with a sample participant.
  • Cognitive interviewing is not optimal for exploring survey length, modality and skip patterns, but it involves in-depth exploration of how well the content resonates with local worldviews, and close attention to vocabulary, syntax, response options, question style and conceptual nuance.

Issue: Pilot testing focuses on the mechanics of implementation, whereas cognitive testing focuses on whether the survey questions achieve shared understanding between researcher intent and respondent interpretation.