Table 2. Approaches to strengthening surveys
| Approach | Description | Comparison to cognitive interviewing / issue |
|---|---|---|
| Expert review | Subject area experts review the survey tool and judge how well each questionnaire item truly reflects the construct it is intended to measure | Experts are unable to predict how the survey respondents will interpret the questions |
| Respondent-driven pretesting | A small group of participants with the same characteristics as the target survey population complete the survey. Researchers elicit feedback during the survey or at the end through debriefings. Feedback elicitation can include targeted probes about questions that appeared problematic, in-depth exploration of each question, probing on a random sub-set of questions, or asking participants to rate how clear each question was | Low methodological clarity: can be the same as cognitive interviewing or quite different |
| Translation and back translation | After translating a survey from the origin to the target language, a different translator ‘blindly’ translates the survey back. Differences are then compared and resolved (Weeks, Swerissen and Belfrage, 2007) | Involves bilingual translators whose world view and experience do not match the target population’s, making them unable to comment on the tool’s appropriateness |
| Pilot testing | Enumerators administer the survey to a small group of participants with the same characteristics as the target survey population, in as close to real-world conditions as possible | Focuses on the mechanics of implementation, whereas cognitive testing focuses on whether the survey questions achieve shared understanding between researcher intent and respondent interpretation |