Abstract
Social network analysis depends on how social ties to others are elicited during interviews, a process easily affected by respondent and interviewer behaviors. We investigate how the number of self-reported important social contacts varied within a single data collection round. Our data come from HAALSI, a comprehensive population-based survey of individuals aged 40 years and older conducted over thirteen months at the Agincourt health and demographic surveillance site in rural South Africa. As part of HAALSI, interviewers elicited detailed egocentric network data. The average number of contacts reported by the 5059 respondents both varied significantly across interviewers and fell over time as the data collection progressed, even after adjusting for respondent, interviewer and respondent-interviewer dyad characteristics. Contact numbers rose substantially after a targeted interviewer intervention. We conclude that checking (and adjusting) for interviewer effects, even within one data collection round, is critical to valid and reliable social network analysis.
INTRODUCTION
Measurements of social networks depend on the number and type of social ties to others (Berkman et al. 2000; Smith and Christakis 2008). These ties are typically elicited through interviews, a process easily affected by respondent or interviewer characteristics and behaviors. Understanding social network structure and composition requires substantial amounts of information from respondents (“egos”) about the people (“alters”) they have relationships with (Marsden 1990). Notably, the survey burden associated with network data collection depends heavily on the number of alters elicited through “name generator” questions: each alter named leads to the repetition of all follow-up questions characterizing the ego-alter relationship (“name interpreters”) (Burt 1984).
Interviewers have been identified as a key source of variation in survey responses, particularly for questions which are attitudinal, ambiguous or have complex skip patterns (West and Blom 2016). Several studies have previously identified interviewer effects on network size (Brüderl, Huyer-May and Schmiedeberg 2013; Josten and Trappmann 2016; Marsden 2003; Paik and Sanchagrin 2013; van Tilburg 1998). These interviewer effects may arise from differential understanding of survey questions, and therefore differences in how questions are presented to respondents. Interviewers can also affect which alters are elicited due to their own characteristics (e.g. sex, race, age or experience), or the nature of the interviewer-respondent dyad (e.g. gender, race or age homophily), leading to different lines of enquiry, levels of probing, or expectations of social acceptability (Collins 1980; Hox 1994; Marsden 2003; Phung et al. 2015).
Furthermore, if respondents or interviewers are aware that naming more alters substantially increases survey length, then either group may consciously or unconsciously seek to minimize the number of alters named (Eagle and Proeschold-Bell 2015; Van der Zouwen and Van Tilburg 2001). In cross-sectional surveys, the opportunities for respondents to learn are limited, but those for interviewers increase as the survey period progresses. Interviewers may try to reduce survey burden, either for themselves or for respondents, by favoring language or probes that decrease the number of alters elicited. Indeed, past studies in Europe have found evidence of interviewers intentionally filtering out questions by entering fewer responses that would trigger more questions. Such filtering behavior has been observed among interviewers compensated by the interview rather than by the hour (Josten and Trappmann 2016; Kosyakova, Skopek and Eckman 2014), among interviewers with prior experience using the relevant screening tool (Matschinger, Bernert and Angermeyer 2005), and where interviewers are under substantial pressure to complete more interviews (Schnell and Kreuter 2000).
We aim to extend this literature by assessing how the number of alters elicited systematically changed over the course of a cross-sectional social network survey of older adults in rural South Africa. We show a substantial drop in alter numbers over time, and a swift reversal following retraining, providing substantial evidence for interviewer effects.
METHODS
Survey design
The social network module was one component of the baseline wave of the “Health and Aging in Africa: a Longitudinal Study of an INDEPTH community” (HAALSI) questionnaire conducted in 27 of the 31 villages that comprise the study site of the MRC/Wits Rural Public Health and Health Transitions Research Unit in Mpumalanga Province, South Africa (hereafter, “Agincourt”) (Kahn et al. 2012). The HAALSI study is a population-based longitudinal cohort of men and women aged 40 years and over in rural South Africa, drawn as a random sample of approximately 40% of all age-eligible individuals in the Agincourt demographic surveillance area; 85.9% of eligible individuals approached consented to participate. Interviews progressed from village to village throughout the study period, with interviewers randomly assigned to potential participants within each village.
Data were collected using computer-assisted personal interviews (CAPI). The baseline survey was modeled closely on sister health and aging studies including the US Health and Retirement Study (HRS), LASI in India and CHARLS in China (Arokiasamy et al. 2012; Sonnega et al. 2014; Zhao et al. 2014). It comprised an approximately three-hour household visit including structured quantitative interviews, anthropometric and physiological measurements and blood draws. Inclusion of a social network module, however, was unique to HAALSI amongst the HRS sister studies. The structure of the HAALSI social network module was modeled on the network module in the US-based National Social Life, Health, and Aging Project (NSHAP) (Cornwell, Laumann and Schumm 2008). The HAALSI module, which was started around 30 minutes into the household visit, included one name generator question: “Please tell me the names of 6 adults with whom you have been in communication either in person or by phone or by internet in the past 6 months, starting with the person who is most important to you for any reason”. This question aimed to capture the respondents’ most meaningful recent relationships – those most likely to impact their health and wellbeing. If the respondent was married and the spouse was not named by the respondent, then the spouse’s name was added as a seventh response. Neither the interviewers, nor the CAPI program, forced respondents to name six alters, despite the name generator wording.
Respondents were then asked a series of “name interpreter” questions about each named alter, including: (i) the alter’s socio-demographics (age, sex, residential location, relationship to respondent); (ii) frequency of contact (in-person, by phone/text/email); and (iii) frequency of receiving four types of social support (emotional, informational, physical and financial) from the alter.
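The burden mechanism at the heart of this design — every alter named triggers the full battery of name interpreter questions — can be sketched as a simple roster loop. This is a hypothetical illustration, not HAALSI's actual CAPI code; the question names are invented, though the count of ten interpreter items per alter follows the module description above.

```python
# Hypothetical sketch of a roster-style network module: each alter named
# triggers every name interpreter question, so interview length grows
# roughly linearly with the number of alters elicited.

NAME_INTERPRETERS = [
    # (i) alter socio-demographics
    "age", "sex", "residential_location", "relationship_to_respondent",
    # (ii) frequency of contact
    "contact_in_person", "contact_phone_text_email",
    # (iii) frequency of four types of social support
    "support_emotional", "support_informational",
    "support_physical", "support_financial",
]

def questions_for_interview(n_alters: int, spouse_added: bool = False) -> int:
    """Count the name-interpreter questions asked for a given roster."""
    total_alters = n_alters + (1 if spouse_added else 0)
    return total_alters * len(NAME_INTERPRETERS)

# Naming six alters instead of two triples this part of the interview.
print(questions_for_interview(2))  # 20 follow-up questions
print(questions_for_interview(6))  # 60 follow-up questions
```

Under this sketch, an interviewer who steers a respondent from six alters down to two removes forty follow-up questions from the visit — a concrete incentive for the filtering behaviors discussed in the introduction.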
HAALSI was granted ethics approval by the University of the Witwatersrand Human Research Ethics Committee, the Harvard T.H. Chan School of Public Health Office of Human Research Administration and the Mpumalanga Provincial Research and Ethics Committee.
Interviewer recruitment and training
HAALSI interviewers were recruited from within the local resident community, amongst those with high school graduation (“matric pass”) and fluent in both English and xiTsonga (the local language). Twenty of the 29 applicants selected for training were retained for the survey. Four supervisors with previous experience supervising the Agincourt demographic census oversaw the work of the fieldworkers. Both interviewers and supervisors received training specific to the social network module. Interviews began in November 2014. Three interviewers did not continue after January 2015 as performance requirements were not met. In May 2015 an additional seven interviewers were employed to accelerate survey completion. In September 2015, after the bulk of interviews were completed, the three best-performing fieldworkers (based on quantitative data and supervisor reports) were retained to revisit previously unavailable respondents, with other fieldworkers providing occasional assistance. Survey enrollment closed in December 2015. All but four of the 27 interviewers who worked on HAALSI had previous interviewer experience in Agincourt.
Data monitoring
Beginning in January 2015, HAALSI researchers produced monthly data quality monitoring reports until the end of the survey. Key results were shared with on-site and off-site project managers, who then informed field supervisors at weekly study management meetings and during periodic re-training sessions with field staff (supervisors and interviewers). Social network module re-training in February focused on questions relating to alter ages (which had been missing in 25% of cases) and conflict (5–10% don’t know/refused). Although declining rates of alter elicitation were noted early on, the extent of decline was not clear until later in the year. As a result, the issue was not presented to the field team until June. Despite subsequent discussions with field staff, alter numbers continued to decline to a nadir in September 2015. In October and November 2015, supervisors and project managers held intensive weekly meetings with the remaining interviewers during which they discussed the issue of low alter numbers.
Statistical analyses
We first described how the number of alters reported varied by: (i) respondent characteristics, using Kruskal-Wallis χ2 tests; (ii) interviewer and their characteristics; and (iii) date of interview, including what types of alters were named differentially over time. We then conducted multilevel Poisson regression (having tested for and rejected overdispersion), in which respondents were nested within interviewers with random intercepts. Poisson models allowed us to model the count of alters elicited; multilevel models allowed us to decompose the variation in number of alters elicited into parts within and between interviewers. These regression models began with a null model with no covariates, then added in turn: month of interview; village of respondent; other respondent characteristics; interviewer characteristics and respondent-interviewer dyad characteristics.
Finally, we examined the extent of variation in the rate of decline across interviewers by running a multilevel model on only the first 11 months of data, i.e. the period prior to intensive interviewer supervision, including both random intercepts and slopes for each interviewer. The final model was of the form:

log(E[Y_ij]) = X_ij β + Z_j γ + D_ij δ + e_ij + u_j + s_j · month_ij

where Y_ij is the count of alters named by respondent i with interviewer j. The model contained three sets of fixed effects: X_ij is a vector of respondent-level covariates, including an indicator for month of interview; Z_j a vector of interviewer-level covariates; and D_ij a vector of respondent-interviewer dyadic covariates. The model contained three random effects: e_ij at the individual level, a random intercept u_j for each interviewer, and a random slope s_j for interviewers across months. This last term further decomposed the variation in alter numbers seen between interviewers by allowing the rate at which interviewers decrease the average number of alters they elicit to vary, so we could determine whether rates declined over time for all, or only some, interviewers.
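To illustrate the decomposition this random-intercept, random-slope structure performs, the following stdlib-only sketch simulates alter counts that decline at interviewer-specific rates and then summarizes the between-interviewer spread in levels and the within-interviewer trend over months. All parameter values are invented for illustration; they are not the HAALSI estimates, and this simulation is not a substitute for fitting the actual multilevel Poisson model.

```python
import math
import random
import statistics

random.seed(42)

N_INTERVIEWERS = 27
INTERVIEWS_PER_MONTH = 8
N_MONTHS = 11  # the pre-intervention period used for the random-slope model

# Interviewer-specific intercepts (u_j) and negative monthly slopes (s_j),
# mimicking the model's random effects; values are illustrative only.
intercepts = [random.gauss(math.log(4.0), 0.25) for _ in range(N_INTERVIEWERS)]
slopes = [random.gauss(-0.08, 0.03) for _ in range(N_INTERVIEWERS)]

def poisson(lam: float) -> int:
    """Poisson draw via Knuth's inversion (stdlib has no poisson sampler)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Simulate (interviewer, month, alter count) triples.
data = []
for j in range(N_INTERVIEWERS):
    for m in range(N_MONTHS):
        lam = math.exp(intercepts[j] + slopes[j] * m)
        for _ in range(INTERVIEWS_PER_MONTH):
            data.append((j, m, poisson(lam)))

# Between-interviewer variation: mean count per interviewer.
per_interviewer = [
    statistics.mean(c for j2, _, c in data if j2 == j)
    for j in range(N_INTERVIEWERS)
]
# Within-interviewer time trend: mean count per month.
per_month = [
    statistics.mean(c for _, m2, c in data if m2 == m)
    for m in range(N_MONTHS)
]

print("interviewer means range:", min(per_interviewer), max(per_interviewer))
print("first vs last month mean:", per_month[0], per_month[-1])
```

With every simulated slope drawn near −0.08, mean counts fall for essentially all interviewers, which is the pattern the random-slope term is designed to detect and quantify.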
RESULTS
A total of 5,059 individuals responded to the social network module of HAALSI, describing 15,549 alters in the social network module (Supplementary Table 1). Respondents named a median of three alters, with 252 (5.0%) individuals reporting no alters and 532 (10.5%) reporting six alters (227 of whom also had an unnamed spouse who was added as a seventh alter). Over three-quarters (77.8%) of named alters were relatives, mainly living in the same household (34.1%) or village (35.1%) as the respondent. Most non-relative alters (76%) also lived in the same village as the respondent.
The 27 interviewers conducted between 25 and 351 interviews each, with a median of 211 (Table 1). The mean number of alters elicited per interview varied widely across interviewers – ranging from 1.4 to 6.1. Interviewers who were female, younger and who conducted more interviews elicited more alters, on average. There was also considerable variation across time, with the mean number of alters elicited in each study month falling from 4.8 in November 2014 to a low of 1.7 in September 2015, before rebounding to 3.1 in the last two months of data collection following re-training (Supplementary Figure 1). This downward trend and rebound occurred within interviewers (Figure 1), and appears to correspond to a fall in alters who lived outside the respondents’ household, particularly within the Agincourt study area (Supplementary Figure 2).
Table 1:
Descriptive statistics for HAALSI interviewers
| Interviewers | Respondents interviewed | Mean number of alters named | |||||
|---|---|---|---|---|---|---|---|
| N | % | N | % | N | 95% CI | χ2 test | |
| Sex | |||||||
| Male | 7 | 25.9% | 1519 | 30.0% | 3.18 | [3.09 – 3.27] | |
| Female | 20 | 74.1% | 3540 | 70.0% | 3.03 | [2.97 – 3.08] | 17.9 (p<0.001) |
| Age | |||||||
| 20–29 | 17 | 63.0% | 3672 | 72.6% | 3.15 | [3.10 – 3.21] | |
| 30–39 | 7 | 25.9% | 1056 | 20.9% | 3.15 | [3.04 – 3.25] | |
| 40–49 | 3 | 11.1% | 331 | 6.5% | 1.95 | [1.80 – 2.11] | 159.6 (p<0.001) |
| Interviews, total | |||||||
| < 200 interviews | 13 | 48.1% | 1284 | 25.4% | 2.32 | [2.22 – 2.41] | |
| ≥ 200 interviews | 14 | 51.9% | 3775 | 74.6% | 3.29 | [3.23 – 3.34] | 323.2 (p<0.001) |
| Interviews per month | |||||||
| < 17 interviews | 13 | 48.1% | 1452 | 28.7% | 3.00 | [2.90 – 3.09] | |
| ≥ 17 interviews | 14 | 51.9% | 3720 | 73.5% | 3.10 | [3.05 – 3.15] | 9.30 (p=0.002) |
| Total | 27 | 5059 | |||||
χ2 test is a Kruskal-Wallis test of equality of ranks with one degree of freedom comparing mean number of alters reported by respondents interviewed by interviewers from each category.
Figure 1:

Mean number of alters reported to each interviewer in each month by HAALSI respondents
The multilevel regression models show that approximately half of the variance seen at the interviewer level in the null model can be explained by the month of interview (Table 2). In addition to month of interview, respondents’ age, gender, education level, marital status and household wealth are significant predictors of the number of alters named. No other factors, however, are able to explain the remaining 50% of interviewer-level variance. Neither interviewer gender nor age, nor dyadic homophily on these characteristics, predicted the number of alters. Finally, when considering only the period up to September 2015, there was significant variability in how rapidly alter elicitation rates fell over time, although all 27 interviewers had a significantly negative slope coefficient (Supplementary Figure 3). This rate of decline was positively associated with level of alter elicitation, such that interviewers with higher elicitation rates saw slower fall-off in these rates. The predicted incidence rate ratio of elicited alters from model 7 shows a clear decline until September 2015, followed by a sharp rise (Supplementary Figure 4).
Table 2:
Summary of mixed-effect Poisson regressions for number of alters elicited by HAALSI interviewers
Models 1–7 were fit to all 13 months of interviews (n=5,059); models 8–9 to the first 11 months (n=4,856).

| | Model 1: Null model | Model 2: add Months | Model 3: add Villages | Model 4: add Respondent characteristics | Model 5: add Interviewer characteristics | Model 6: add Dyad characteristics | Model 7: Final model | Model 8: Random intercepts | Model 9: Random slopes |
|---|---|---|---|---|---|---|---|---|---|
| Month of interview † | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | |
| Respondent | |||||||||
| Village of residence | 0.39 | ||||||||
| Sex and Age decade | 0.03 | 0.005 | 0.35 | 0.02 | 0.03 | 0.04 | |||
| Education | 0.04 | 0.01 | 0.01 | 0.01 | 0.02 | 0.01 | |||
| Country of origin | 0.78 | ||||||||
| Marital status | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | |||
| Household size | 0.61 | ||||||||
| Household wealth | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 | 0.005 | |||
| Interviewer | |||||||||
| Age | 0.31 | ||||||||
| Gender | 0.63 | ||||||||
| Total number of interviews | 0.71 | ||||||||
| Respondent-interviewer dyad | |||||||||
| Gender homophily | 0.54 | ||||||||
| Age difference | 0.81 | ||||||||
| Akaike Information Criterion | 18,346.5 | 17,805.4 | 17,830.3 | 17,191.8 | 17,197.1 | 17,200.2 | 17,192.1 | 16,582.8 | 16,495.0 |
| Interviewer variance (intercept) | 0.12 | 0.06 | 0.06 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 |
| [0.05 – 0.18] | [0.03 – 0.10] | [0.03 – 0.10] | [0.02 – 0.08] | [0.02 – 0.08] | [0.02 – 0.08] | [0.02 – 0.09] | [0.02 – 0.08] | [0.02 – 0.08] | |
| Interviewer variance (per month) | 0.01 | ||||||||
| [0.00 – 0.02] | |||||||||
All statistics for covariates are p-values from Wald tests for a linear hypothesis that all k categories of each variable (as shown in Supplementary Table 1) are jointly equal to zero, with an associated k–1 degrees of freedom.
† Month of interview is categorical for models 1–7, to allow for non-linearities over time, and continuous for models 8 and 9 to allow for meaningful random slope coefficients. Point estimates and confidence intervals for all regressions are provided as Supplementary Table 2.
DISCUSSION
In this study of an egocentric social network data collection process within a large, cross-sectional survey in rural South Africa, we show that alter elicitation rates fell systematically as the survey progressed, even after adjusting for respondent, interviewer and respondent-interviewer dyad characteristics. Even more compellingly, we show that after fieldworkers and supervisors began meeting weekly to discuss fieldwork progress (including reviewing alter elicitation numbers), alter numbers rose sharply for the remainder of the study.
There are several possible explanations for the fall in alter numbers over time. First, later respondents may have learned from friends and family that the interview process was lengthy and could be shortened by reporting fewer alters. Given that the social network module was only one of several within the questionnaire, this seems unlikely. Second, later respondents may truly have had fewer alters. Since this study was rolled out across consecutive villages in an overlapping fashion, the decline over time could represent a geographic pattern. Furthermore, later respondents included those who could not easily be found by the field team; such hard-to-find individuals might have had fewer alters. However, neither of these explanations accounts for the sharp uptick observed in the last two months of data collection. Third, increased experience may have improved interviewers’ ability to elicit the truly important people in respondents’ lives. Much of the drop in alter nominations was of kin not living in the same household, who might have been less vital to respondents. Yet, there was also a drop in the number of alters in daily contact with respondents (Supplementary Figure 5), which suggests that at least some truly pertinent alters were lost.
Finally, interviewers may have learned to reduce survey length by eliciting fewer alters. Although interviewers were salaried, rather than paid per interview, there were substantial pressures on fieldworkers to complete interviews more rapidly – since the completion target of two interviews per day was regularly missed throughout the survey period. If interviewers learned that certain modules could be shortened, they may have guided respondents to report fewer alters to speed up the process. This explanation is supported by the sharp increase in alter numbers seen once interviewers were made aware that their elicitation rates were being observed, and that higher numbers of alters were expected (i.e. a Hawthorne effect).
Variation in alter elicitation rates across interviewers – even with randomized assignment to respondents – is not surprising and has been seen previously for consent rates, reporting on sensitive topics, and indeed naming of alters (Brüderl, Huyer-May and Schmiedeberg 2013; Josten and Trappmann 2016; Marsden 2003; Paik and Sanchagrin 2013; van Tilburg 1998). In contrast, variation over time in elicitation rates within interviewers across a survey period has been reported less often. This study is the first to empirically examine possible interviewer learning effects outside of Germany and the second to examine social network data specifically. Our results corroborate findings that interviewers may lead respondents away from longer interviews after learning about the interview process and when under time pressures (Matschinger, Bernert and Angermeyer 2005; Schnell and Kreuter 2000), and suggest that such behavior occurs even when interviewers are salaried (Josten and Trappmann 2016; Kosyakova, Skopek and Eckman 2014) and may be placing future employment opportunities at risk – even when stable jobs are very scarce. Interestingly, our findings suggest that not just a sub-group (Matschinger, Bernert and Angermeyer 2005), but all interviewers elicited fewer alters over time, possibly because they worked as a single cohesive field team.
Future social network studies should develop ways to minimize interviewer-associated variation over time in labor-intensive surveys. Possible approaches include: (1) improved training to increase standardization of name generator question delivery and probing; (2) more feedback of alter elicitation rates, and more careful education of the fieldwork team about the importance of comprehensive alter elicitation; (3) computer programming strongly encouraging collection of a fixed or minimum number of alters from all respondents (although this requires careful training and allowing interviewers to relax the constraint in the case of truly isolated individuals); and (4) use of self-interview methods, so interviewers are not involved in alter elicitation.
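Approach (3) could take the form of a soft constraint in the CAPI instrument. The sketch below is hypothetical — the function, threshold and messages are invented, not drawn from HAALSI or any real CAPI platform — but it shows how a minimum-alter prompt can coexist with an auditable interviewer override for truly isolated respondents.

```python
# Hypothetical CAPI-style soft constraint: prompt until a minimum number of
# alters is named, but let the interviewer record an explicit override for
# truly isolated respondents, so the constraint is auditable, not absolute.

MIN_ALTERS = 3  # illustrative threshold, not a HAALSI parameter

def elicit_alters(responses, override_reason=None):
    """Validate a roster of named alters against the soft minimum.

    responses: alter names entered so far (blank entries are ignored).
    override_reason: interviewer-supplied justification for fewer alters.
    Returns (accepted, message).
    """
    named = [r for r in responses if r.strip()]
    if len(named) >= MIN_ALTERS:
        return True, "ok"
    if override_reason:
        # Logged overrides let supervisors monitor elicitation behavior.
        return True, f"accepted with override: {override_reason}"
    return False, (
        f"Only {len(named)} of at least {MIN_ALTERS} alters named; "
        "please probe again or record an override reason."
    )

print(elicit_alters(["Thandi", "Sipho", "Grace"]))
print(elicit_alters(["Thandi"], override_reason="respondent lives alone"))
print(elicit_alters([""]))
```

Because every under-minimum roster either blocks progress or leaves an override record, supervisors can distinguish genuinely isolated respondents from interviewer-driven under-elicitation during routine data monitoring.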
We also note that while roster-style modules, with multiple questions asked about every alter named, may be particularly susceptible to interviewer effects given the impact of roster length on interview burden, other parts of an interview with loops and skip patterns may also be open to control by interviewers (Brüderl, Huyer-May and Schmiedeberg 2013). Future work could usefully examine such potential effects.
Strengths and limitations
A major strength of this study is its use of standard social network data collection methods, modeled on the NSHAP study. In addition, this study was based on an existing longitudinal surveillance platform, ensuring well-trained fieldworkers and a strong fieldwork infrastructure. Given these standardized data collection approaches, the effects we report are likely to generalize to many other settings. Our key weakness is the inability to entirely rule out non-interviewer-led explanations for the observed temporal trends. However, the striking pattern of rebound following intensive interviewer re-training strongly suggests interviewers’ importance in generating the patterns of alter numbers seen. Furthermore, in settings where social networks can be collected via respondent-driven methods (e.g. self- or online-interviews) or via analysis of email or social media patterns, the concerns raised here may be less pressing. Nevertheless, interviewers can help increase the validity and reliability of alter elicitation, if carefully trained and supervised, and thus interviewers are likely to be used – and interviewer effects remain a concern – even in more literate and computer-connected populations.
Conclusions
The time and effort required from both interviewers and respondents to measure networks is considerable, and social network data are thus vulnerable to measurement error, as our findings suggest. We therefore recommend that researchers design network data collection processes to minimize opportunities for interviewer effects, continuously monitor data collection processes, and consider adjustment for both interview date and interviewer identity in any analyses they conduct – even for cross-sectional data.
Supplementary Material
Acknowledgements
The HAALSI study is funded by the National Institute on Aging from NIH (P01 AG041710), and is nested within the MRC/Wits Rural Public Health & Health Transitions Research Unit (Agincourt), supported by Wellcome Trust (grants 058893/Z/99/A; 069683/Z/02/Z; 085477/Z/08/Z; 085477/B/08/Z), the University of the Witwatersrand, and the South African Medical Research Council. TB was supported by the Wellcome Trust, and by National Institutes of Health (grants R01-AI124389 and R01-HD084233).
We would like to thank the study team, participants and community of Bushbuckridge, without whom this study would have not been possible. We acknowledge useful comments from participants at Sunbelt XXXVI.
REFERENCES
- Arokiasamy P, Bloom D, Lee J, Feeney K, and Ozolins M. 2012. “Longitudinal study on aging in India: Vision, design, implementation, and preliminary findings.” Pp. 36–74 in Aging in Asia: Findings from New and Emerging Data Initiatives, edited by Smith JP and Majumundar M. Washington, DC: National Academies Press.
- Berkman LF, Glass T, Brissette I, and Seeman TE. 2000. “From social integration to health: Durkheim in the new millennium.” Social Science and Medicine 51(6):843–57.
- Brüderl J, Huyer-May B, and Schmiedeberg C. 2013. “Interviewer behavior and the quality of social network data.” Pp. 147–60 in Interviewers’ Deviations in Surveys: Impact, Reasons, Detection and Prevention, edited by Winkler P, Porst R, and Menold N. Frankfurt: Peter Lang.
- Burt RS. 1984. “Network items and the general social survey.” Social Networks 6(4):293–339.
- Collins M. 1980. “Interviewer variability: a review of the problem.” Journal of the Market Research Society 22(2):77–95.
- Cornwell B, Laumann EO, and Schumm LP. 2008. “The social connectedness of older adults: A national profile.” American Sociological Review 73(2):185–203.
- Eagle DE, and Proeschold-Bell RJ. 2015. “Methodological considerations in the use of name generators and interpreters.” Social Networks 40:75–83.
- Hox JJ. 1994. “Hierarchical regression models for interviewer and respondent effects.” Sociological Methods & Research 22(3):300–18.
- Josten M, and Trappmann M. 2016. “Interviewer effects on a network-size filter question.” Journal of Official Statistics 32(2):1–25.
- Kahn K, Collinson MA, Gómez-Olivé FX, Mokoena O, Twine R, Mee P, Afolabi SA, Clark BD, Kabudula CW, and Khosa A. 2012. “Profile: Agincourt health and socio-demographic surveillance system.” International Journal of Epidemiology 41(4):988–1001.
- Kosyakova Y, Skopek J, and Eckman S. 2014. “Do interviewers manipulate responses to filter questions? Evidence from a multilevel approach.” International Journal of Public Opinion Research 27(3):417–31.
- Marsden PV. 1990. “Network data and measurement.” Annual Review of Sociology 16:435–63.
- Marsden PV. 2003. “Interviewer effects in measuring network size using a single name generator.” Social Networks 25(1):1–16.
- Matschinger H, Bernert S, and Angermeyer MC. 2005. “An analysis of interviewer effects on screening questions in a computer assisted personal mental health interview.” Journal of Official Statistics 21(4):657.
- Paik A, and Sanchagrin K. 2013. “Social isolation in America: an artifact.” American Sociological Review 78(3):339–60.
- Phung TD, Hardeweg B, Praneetvatakul S, and Waibel H. 2015. “Non-sampling error and data quality: what can we learn from surveys to collect data for vulnerability measurements?” World Development 71:25–35.
- Schnell R, and Kreuter F. 2000. “An investigation of the discrepancy in the results of nearly identical victimization surveys.” KZfSS Kölner Zeitschrift für Soziologie und Sozialpsychologie 52(1):96–117.
- Smith KP, and Christakis NA. 2008. “Social networks and health.” Annual Review of Sociology 34:405–29.
- Sonnega A, Faul JD, Ofstedal MB, Langa KM, Phillips JW, and Weir DR. 2014. “Cohort profile: the Health and Retirement Study (HRS).” International Journal of Epidemiology 43(2):576–85.
- Van der Zouwen J, and Van Tilburg T. 2001. “Reactivity in panel studies and its consequences for testing causal hypotheses.” Sociological Methods & Research 30(1):35–56.
- van Tilburg T. 1998. “Interviewer effects in the measurement of personal network size: a nonexperimental study.” Sociological Methods & Research 26(3):300–28.
- West BT, and Blom AG. 2016. “Explaining interviewer effects: A research synthesis.” Journal of Survey Statistics and Methodology:smw024.
- Zhao Y, Hu Y, Smith JP, Strauss J, and Yang G. 2014. “Cohort profile: The China Health and Retirement Longitudinal Study (CHARLS).” International Journal of Epidemiology 43(1):61–8.
