Author manuscript; available in PMC: 2013 Jan 1.
Published in final edited form as: Int J Mult Res Approaches. 2012 Jan 1;6(1):4406. doi: 10.5172/mra.2012.6.1.56

Maintaining Superior Follow-Up Rates in a Longitudinal Study: Experiences from the College Life Study

Kathryn B Vincent 1, Sarah J Kasperski 2, Kimberly M Caldeira 1, Laura M Garnier-Dykstra 2,3, Gillian M Pinchevsky 2,4, Kevin E O’Grady 5, Amelia M Arria 6
PMCID: PMC3255097  NIHMSID: NIHMS328766  PMID: 22247739

Abstract

Longitudinal studies are often considered a gold standard for research, but the operational management of such studies is seldom discussed in detail. This paper describes strategies used to track and maintain high levels of participation in a longitudinal study involving annual personal interviews with a cohort of 1,253 undergraduates (first-time, first-year students at the time of enrollment) at a large public mid-Atlantic university.

Keywords: College Students, Follow-up Rates, Longitudinal Studies, Methodology, Recruitment, Study Attrition, Young Adults

INTRODUCTION

Longitudinal research provides invaluable information for understanding the onset and development of social and public health issues affecting large numbers of individuals. Organizational scholars have recently drawn attention to a need for more systematic longitudinal studies, especially those with three or more waves of data collection (Ployhart & Vandenberg, 2010). Prior research has examined a variety of strategies for increasing follow-up rates in longitudinal studies, including using detailed contact logs, having convenient locations, providing incentives, and consistently monitoring the progress of staff (Cottler, Compton, Ben-Abdallah, Horne, & Claverie, 1996; Cottler, Zipp, Robins, & Spitznagel, 1987; Scott, Sonis, Creamer, & Dennis, 2006; Stouthamer-Loeber, van Kammen, & Loeber, 1992). Other research has examined which types of people are most likely to be difficult to contact or retain in longitudinal studies, such as college students (Kypri, Gallagher, & Cashell-Smith, 2004), substance abusers (Scott, 2004), young black males (Cottler et al., 1987), and cocaine users (Nemes, Wish, Wraight, & Messina, 2002). Other characteristics of difficult-to-reach participants include mobility, addiction severity, criminal justice system involvement, and social withdrawal (Scott, 2004). Mixed findings have been reported for the relationship between employment status and difficulty in locating participants (Cottler et al., 1996; Cottler et al., 1987; Nemes et al., 2002). These data suggest that there are multiple reasons for poor follow-up rates and that multi-pronged strategies are necessary to prevent attrition. Low follow-up rates in longitudinal studies can yield biased estimates of the participant characteristics under study (Graham & Donaldson, 1993; Nemes et al., 2002; Prinz et al., 2001).

Considerable research supports the notion that numerous contact attempts can raise follow-up rates, even among participants who have refused past interviews (Cottler et al., 1996; Stouthamer-Loeber et al., 1992). Cottler et al. (1987) reported that 66% of the participants who would eventually complete the interview completed it by the fifth attempt, while 95% completed it by the 14th attempt. Scott et al. (2006) noted that as interviewers build positive rapport with participants, the number of necessary contact attempts generally decreases in successive follow-up waves.

Although prior research studies have been very helpful in standardizing methodology for longitudinal studies, contemporary culture and technological advancements present new challenges to investigators. Enrolling young people in complex longitudinal studies can be an especially difficult task because of their involvement in a variety of competing activities. Recent research suggests that supplementing traditional tracking methods (e.g., phone, email, and talking to known associates) with new technology may be useful for certain demographic populations. College students are highly connected through the Internet and mobile phones. Social networking sites (SNS) are widely used among college students (Ellison, Steinfield, & Lampe, 2006; Hargittai, 2008; Lampe, Ellison, & Steinfield, 2006), with Facebook® and MySpace® ranking as the most popular (Hargittai, 2008). Achieving success in a longitudinal study requires implementing traditional and innovative tracking strategies as well as continually monitoring the success of these methods. That success will ultimately depend on the staff tasked to carry them out; thus, effective management is essential, including maintaining employee morale and continually monitoring progress at every level.

The overarching goal of this paper is to describe our experiences in conducting the College Life Study, a large long-term longitudinal study of 1,253 college students in which excellent follow-up rates have been maintained. The paper begins with a brief description of the study. Next, we describe the traditional strategies we employed to enhance continued participation during the first four years of data collection. Third, we describe the management structure, employee selection, maintenance of staff morale, and monitoring systems we employed in order to track our progress with participant follow-up. Fourth, we present data on the factors related to attrition in our study as well as information on how various contact methods influenced successful follow-up. Finally, on the basis of these experiences, we make detailed recommendations regarding how to carry out this type of research in a contemporary context.

DESCRIPTION OF THE STUDY

The College Life Study (CLS) is an ongoing longitudinal prospective study of young adults, all of whom were members of the fall 2004 cohort of incoming students at one large, public, mid-Atlantic university. Recruitment occurred in two stages. First, during new-student orientation in the summer of 2004, we invited all incoming first-time first-year students ages 17 to 19 to participate in a brief web-based screener (n=3,401; response rate=89%). Next, we sampled students for longitudinal follow-up, beginning with a two-hour face-to-face baseline interview during their first year of college. We deliberately oversampled students who used an illicit drug or nonmedically used a prescription drug at least once during high school to optimize statistical power for analyses on college drug use. Accordingly, we statistically weighted the data to enhance generalizability of prevalence estimates. The resulting sample of 1,253 students (response rate=87%) was similar to the first-year class with respect to race, gender, and socioeconomic status (Arria et al., 2008). We obtained a federal Certificate of Confidentiality and informed consent for both the screener and longitudinal follow-up following University IRB-approved protocols. Participants received $5 for completing the screener and $50 for each annual assessment, plus a $20 bonus for on-time completion of follow-ups (see below).

We conducted baseline interviews over a nine-month period corresponding to the 2004–05 academic year. Subsequent annual assessments were conducted on or around the anniversary of each participant’s baseline assessment. All 1,253 participants were eligible to participate in every follow-up assessment regardless of continued college attendance or participation in previous assessments. Interview content was largely consistent from year to year, and included both interviewer-administered sections and self-administered questionnaires. An earlier publication focused on the initial study design and follow-up rates during the first two years of the study (Arria et al., 2008).

TRADITIONAL STRATEGIES TO ENHANCE PARTICIPATION

Establishing a Study Identity

Prior to beginning recruitment for the longitudinal study, we implemented branding strategies such as a colorful logo to create a familiar, recognizable presence to study participants. Additionally, we developed a study website (www.cls.umd.edu) to provide participants with general information about the study, including photographs of research staff and our office building, and established a dedicated CLS email account, which we staffed seven days a week to ensure prompt response to participant communications. Overall, we aimed to project a friendly, accessible, and recognizable face for the study.

Collecting and Updating Locator Information

Consistent with procedures described by others (Cottler et al., 1996), we asked participants to provide contact information on a locator sheet at the beginning of each interview, starting with the screener. Additional locator updates were requested via email and phone at six-month intervals. We encouraged participants to provide as much locator information as possible. Unlike postal addresses and residential phone numbers, which change frequently in this population, participants’ mobile phone numbers and personal email addresses remained relatively stable throughout the follow-up period, and were therefore invaluable.

New locator data were entered into a database after each update, including the date the information was collected. This created a cumulative history of locator data from which older information could be accessed if more recent information did not lead to successful contacts. To simplify the locator update for participants, locator sheets were generated automatically from the database and pre-populated with the most recent contact information for participants to review and/or correct. As a last resort, interviewers used a variety of free public search mechanisms available on the Internet to find additional locator data. To protect confidentiality, locator data and interview data were stored separately under different ID numbers that could not be linked except by the PI or her designee.
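The cumulative locator-history idea can be sketched as follows. This is an illustrative in-memory model, not the actual CLS database; the function and field names are hypothetical:

```python
from datetime import date

# Illustrative cumulative locator history: each update is appended with its
# collection date rather than overwriting earlier records, so older contact
# information remains available as a fallback.
locator_history = {}  # participant_id -> list of dated locator records

def add_locator_update(participant_id, collected_on, **contact_fields):
    """Append a dated locator record for a participant."""
    record = {"collected_on": collected_on, **contact_fields}
    locator_history.setdefault(participant_id, []).append(record)

def contact_records_newest_first(participant_id):
    """Return all locator records, most recent first, so interviewers can
    fall back to older information if recent contacts fail."""
    records = locator_history.get(participant_id, [])
    return sorted(records, key=lambda r: r["collected_on"], reverse=True)

def prefill_locator_sheet(participant_id):
    """Pre-populate a locator sheet with the most recent record on file."""
    records = contact_records_newest_first(participant_id)
    return records[0] if records else None
```

Keeping every dated record, rather than only the latest, is what allows older addresses or numbers to be retried when newer ones fail.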

Management of Follow-up Data

We generated personalized timelines for each participant with key dates pre-printed directly onto disposition sheets. Each deadline was computed in relation to the anniversary of the participant’s baseline interview. First, starting in Year 2, the date when the interviewer should begin contacting the participant for follow-up was set at −42 days (i.e., 6 weeks pre-anniversary). Next, we developed a target window to define on-time completion, which was set at −14 to +14 days (i.e., within two weeks pre- or post-anniversary). Finally, the date for ceasing contact attempts, the “time-out date,” was set at +56 days (i.e., eight weeks post-anniversary). We later found that the interval between the start date and the target window was too long, and since in-window completion was tied to a bonus payment (see below), we did not want to penalize participants for completing their interview early. We therefore lengthened the target window in Year 3 (−42 to +14 days) with a start date at −45 days. In Year 4 we further modified the timeline to create a more balanced target window of −28 to +28 days, with a start date at −31 days.
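The anniversary-based timeline reduces to simple date arithmetic. The sketch below encodes the offsets stated above; note that the time-out offset is only stated for Year 2 (+56 days), so its reuse in Years 3 and 4 is an assumption:

```python
from datetime import date, timedelta

# Offsets (in days) from each participant's baseline-interview anniversary,
# per the Year 2-4 schedules described in the text.
SCHEDULES = {
    2: {"start": -42, "window": (-14, 14), "time_out": 56},
    3: {"start": -45, "window": (-42, 14), "time_out": 56},  # time-out assumed
    4: {"start": -31, "window": (-28, 28), "time_out": 56},  # time-out assumed
}

def follow_up_dates(baseline: date, year: int) -> dict:
    """Compute the contact start date, on-time target window, and time-out
    date for a given assessment year (leap-day baselines not handled)."""
    anniversary = baseline.replace(year=baseline.year + (year - 1))
    s = SCHEDULES[year]
    lo, hi = s["window"]
    return {
        "start": anniversary + timedelta(days=s["start"]),
        "window": (anniversary + timedelta(days=lo),
                   anniversary + timedelta(days=hi)),
        "time_out": anniversary + timedelta(days=s["time_out"]),
    }
```

For example, a participant interviewed at baseline on 15 October 2004 would, under the Year 2 schedule, have a contact start date of 3 September 2005, an on-time window of 1–29 October 2005, and a time-out date of 10 December 2005.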

Follow-up Procedures

We began follow-up on the start date and continued for as long as necessary to schedule the interview, so long as some encouraging communication was received from the participant. Initial attempts made via email and phone were followed by additional attempts using whichever method seemed most effective for each respective participant. In many cases, the entire scheduling process occurred over a few email exchanges without a single phone contact. Consistent with standard practice, interviewers varied the days and times of their attempts, and after leaving a message, waited no more than three days for a response before making another attempt. Upon receiving a reply from a participant, interviewers responded within 24 hours. Interviewers used sample scripts for written and phone communications, but were encouraged to personalize their communications appropriately. If no successful contacts had been made by the anniversary date, we sent a friendly letter to all available addresses for that participant. At the end of each semester, we also sent a similar letter to any participants who remained difficult-to-reach. Finally, we ceased contact attempts only if no response had been elicited by the time-out date, or if the interview was still not completed by the end of the assessment interval.

With respect to voicemail, several strategies proved effective in eliciting a return call. Messages left after business hours seemed to elicit more returned calls, as did messages left when the recruiter called from a mobile phone, possibly because these strategies helped to make the message seem more personal and less like a telemarketing call. Many participants paid attention to the originating phone number and were therefore more likely to return calls to a mobile number, as opposed to a university office number. For this reason, most interviewers began using their own personal mobile phones for recruitment calls. On the other hand, some participants seemed more influenced by the legitimacy and seriousness of a university number, and were therefore more likely to respond to calls originating from our office phones.

When repeated email and phone attempts failed to establish contact, alternative methods were attempted. The study coincided with the emergence of SNS, which interviewers often used to contact participants. Instant messaging, text messaging, and Skype® also became increasingly useful as these technologies became more ubiquitous during the follow-up period.

A disposition sheet for each participant contained a complete history of all attempts made to contact the participant and schedule interviews for that assessment, including whether communication was initiated by the interviewer or the participant, as well as any notations about rescheduling and reasons for refusing to participate, where applicable.

Offering Flexibility with Respect to Interview Locations

Most interviews took place in our main office, located adjacent to the main campus, which was preferable due to the availability of extra supplies and other staff in case of an emergency. To offer participants greater flexibility and convenience, we procured several additional private offices in academic buildings near the center of campus. As a safety precaution, we prohibited interviewers from conducting interviews inside participants’ homes or dormitory rooms. We also adapted the interview for phone administration, which became increasingly important during Years 3 and 4, when many participants were studying abroad, graduating, or relocating for other reasons.

Participant Payments

We informed participants at baseline that they would receive $50 for each interview they completed. Beginning in Year 2, we began offering an additional $20 “bonus” for on-time completion, that is, within the target window. Recruitment communications always specified the target window end date to encourage participants to take advantage of the bonus payment. The $20 bonus was also awarded on a discretionary basis, for example, when the interview was completed early or late due to interviewer error, or when the participant took the initiative to request an early interview before they left for a study abroad period.

MANAGEMENT AND OPERATIONS

Organizational Structure

At study outset, we established a loosely-tiered structure of management and communication that allowed senior staff to exert authority over operations and communicate important information to field staff on a regular basis, while still allowing a great deal of communication from the “ground up.” Senior staff encouraged communication from field staff on issues related to both operations and interview content. Interviewers’ suggestions were taken seriously and frequently implemented, such as adding new items assessing energy drink use. This strategy leveraged field staff’s “front line” perspective to observe new social trends as they emerged, document participants’ reactions to interview questions, and ensure that the assessment remained both relevant to the times and minimally burdensome, since shortcomings in either respect could affect follow-up rates.

Two senior staff monitored the day-to-day activities of the interview team: the Recruitment Coordinator and the Lead Interviewer. The Recruitment Coordinator meticulously monitored the progress of every interviewer and every participant via databases and weekly meetings (described below) and maintained responsibility for hiring and training interviewers, managing data entry, and other organizational tasks. As the sole full-time interviewer on the team, the Lead Interviewer served as a bridge between interviewers and senior staff. The Lead Interviewer’s role presented several benefits. First, she provided an open door for interviewers to troubleshoot recruitment problems and clarify interview procedures in a less intimidating context than with senior staff. Second, as a seasoned interviewer with well-developed observational skills, she provided valuable insights to senior staff about aspects of participants’ lives that were not captured in the assessment, such as emerging drug trends and recurring themes. Third, by serving as the primary contact for fielding interviewers’ procedural questions, the Lead Interviewer facilitated uniformity in adherence to study protocols, sharing of new recruitment strategies, and timely resolution of problems. The Lead Interviewer also ensured that each interviewer felt supported and highly valued for their contributions to the study.

All of the interviewers were responsible for contacting and scheduling interviews for their own caseload of participants. Most interviewers were graduate and advanced undergraduate students employed on an hourly basis. Because turnover rates are generally very high among student employees, we took great care to develop protocols for screening, hiring, training, and monitoring interview staff to optimize employee retention. We strongly believe that retaining successful staff is beneficial for maintaining high follow-up rates and high standards of data quality, due to experienced interviewers’ greater familiarity with study materials, quality assurance, and resourceful recruitment techniques.

Qualifications, Screening, and Hiring of Interviewers

Postings for interviewer positions were disseminated in a variety of venues including email lists within academic programs and the university’s career center. Screening applicants first via email and phone, we sought out evidence of strong communication skills, experience with customer service, ability to manage a complex schedule with competing responsibilities and priorities, conscientious work ethic, enthusiasm, attention to detail, and an upbeat, gregarious attitude. Next, two senior staff trained in behavioral interviewing techniques interviewed applicants in person. Finally, applicants provided an unprepared writing sample in response to a problem-oriented prompt; we later scrutinized responses for evidence of professionalism, ethical judgment, problem-solving, pragmatism, basic communication skills, and legibility of handwriting.

Interviewer Training

Training took place in four stages. First, new hires reviewed the study manual and interview materials, and conducted five mock interviews with friends or relatives. Second, they attended two group training sessions covering study protocols, administrative policies, interviewing skills, and tracking methods. The group also conducted a mock interview in a round-robin format, allowing more experienced interviewers to share insights and effective strategies for successfully navigating trickier items and sections. Third, new interviewers observed at least two actual interviews being administered by experienced staff. Fourth, new interviewers began scheduling and conducting their own interviews, but received one-on-one critiques and coaching from experienced staff who observed at least their first two interviews. Interviewers began conducting interviews without supervision after their competence was established and documented. Additional coaching and observations were provided as needed. To ensure continued adherence to protocols, interviewers continued to submit to observations and coaching from experienced staff at least once every semester.

Weekly Meetings

Interviewer supervision occurred in an individual, 30-minute, face-to-face weekly meeting with the Recruitment Coordinator, who supervised the entire team of interviewers and allocated all recruitment and interviewing tasks among the team. The importance of regular face-to-face time with a supervisor cannot be overstated, especially given the solitary, independent nature of the recruitment and interviewing roles. Furthermore, considering that student employees often had very limited work experience, regular supervision was extremely important for both quality assurance and professional development. Weekly meetings focused on reviewing recruitment progress on every participant in the interviewer’s caseload and all errors detected in the interviewer’s completed interviews from the previous week. In this way, interviewers received detailed, thorough, and individualized feedback consistently every week, thus promoting a serious attitude about the study and high standards for quality. Simultaneously, by cultivating an appreciation for each individual interviewer’s strengths, weaknesses, and personal motivations, the Recruitment Coordinator was able to provide tailored, individualized supervision and coaching. We believe this approach boosted morale and fostered interviewers’ feelings of investment in the study, all of which promoted staff retention, adherence to study protocols, and better overall job performance.

Maintaining Staff Morale

Creating and maintaining a team-oriented culture was important to our success. From the outset, the PI developed a mission statement for the study so that all staff had a shared understanding of our goals, encapsulated as “understanding the health-related behaviors of young adults during a critical life transition.”

The team culture was most apparent in the way recruitment and interviewing responsibilities were shared and rotated among the interviewers. At the start of each month, each interviewer received a new caseload of participants for whom they had sole responsibility (approximately 25 participants at any given time). As the semester progressed, the Recruitment Coordinator worked with interviewers to reassign any difficult-to-reach participants to another interviewer who had either more experience, a markedly different personality, or a history of successfully reaching that participant for a prior assessment. We thereby both increased the likelihood of successful recruitment and prevented newer interviewers from becoming overly discouraged. Furthermore, we trained the entire research team to conduct interviews, including the PI and other senior staff, so that interviewers could always find emergency coverage at a moment’s notice.

Because recruiting and interviewing can be tedious work, the social cohesiveness of the interviewing team was crucial to helping interviewers stay focused and preventing burnout. Interviewers were encouraged to initiate social activities on their own, both with and without the presence of senior staff (e.g., group lunches, holiday gift exchanges, camping trips). In addition to acknowledging interviewers’ individual accomplishments in their weekly meetings, senior staff also acknowledged team accomplishments on a regular basis, such as posting follow-up rates and sending group emails about publications, media mentions, and conference presentations.

ANALYSES

Descriptive Analyses of Follow-up Rates

Table 1 presents a detailed breakdown of the final interview disposition in Years 2 through 4, among all 1,253 participants in the original baseline sample. Annual follow-up rates were 91.1%, 87.9%, and 87.6%, respectively, with high rates of on-time or early completion. Note that participants who “refused” one assessment could be retained in the next assessment, as eligibility was never contingent on earlier follow-ups. However, in each year a small number of “inactive” participants were not contacted at all, because they had already expressed a desire to drop out of the study. Thus, although results shown in Table 1 reflect conventional methods for computing follow-up rates, the effectiveness of actual recruitment efforts expended might be better approximated by excluding the “inactive” individuals (i.e., follow-up rates of 91.5%, 89.4%, and 92.0%, respectively; data not shown in a table). Accordingly, analyses of data on recruitment effort (i.e., attempts, methods, number of interviewers) reported below are based on the “active” participants only, since no contact attempts were made for “inactive” participants.
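Both sets of rates follow directly from the disposition counts in Table 1; a minimal check in Python:

```python
# Follow-up rates computed two ways from the Table 1 counts: the conventional
# rate (all 1,253 baseline participants as the denominator) and the rate among
# "active" participants only (excluding those who asked not to be contacted).
BASELINE_N = 1253
completed = {2: 1142, 3: 1101, 4: 1097}
inactive  = {2: 5,    3: 22,   4: 60}

def follow_up_rates(year):
    """Return (conventional %, active-only %) for a given assessment year."""
    conventional = 100 * completed[year] / BASELINE_N
    active_only  = 100 * completed[year] / (BASELINE_N - inactive[year])
    return conventional, active_only
```

Excluding the inactive participants shrinks the denominator, which is why the Year 4 rate rises from 87.6% to 92.0% even though the count of completed interviews is unchanged.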

Table 1.

Follow-up disposition for 1,253 baseline participants in the College Life Study, by assessment year

                         Year 2            Year 3            Year 4
                         n       %         n       %         n       %
Completed                1,142   91.1%     1,101   87.9%     1,097   87.6%
  Completed early           82    6.5%        12    1.0%        11    0.9%
  Completed in window      762   60.8%       973   77.7%     1,038   82.8%
  Completed late           298   23.8%       116    9.3%        48    3.8%
Timed out                   74    5.9%        86    6.9%        69    5.5%
Refused                     32    2.6%        44    3.5%        27    2.2%
Inactive^a                   5    0.4%        22    1.8%        60    4.8%
Total                    1,253  100.0%     1,253  100.0%     1,253  100.0%
a Participants who asked that research staff stop contacting them were considered “Inactive” for all future waves of data collection.

Correlates of Attrition

To evaluate attrition bias, participants who did and did not complete a follow-up assessment were compared on the basis of several characteristics measured at baseline: sex; combined SAT scores; race; maternal educational level (a proxy for socioeconomic status); importance of religion; depression and anxiety symptoms [i.e., via the Beck Depression Inventory (Beck, Rush, Shaw, & Emery, 1979), Beck Anxiety Inventory (Beck, Epstein, Brown, & Steer, 1988), and Center for Epidemiologic Studies Depression scale (Radloff, 1977)]; attention-deficit/hyperactivity disorder (via self-reported diagnosis); number of illicit drugs and prescription drugs used nonmedically in lifetime (i.e., marijuana, inhalants, hallucinogens, cocaine, amphetamines/methamphetamine, heroin, ecstasy, and nonmedical use of stimulants, analgesics, tranquilizers/benzodiazepines); typical number of drinks per drinking day; age at first alcohol intoxication; and alcohol and cannabis use disorders, based on the DSM-IV criteria for substance abuse and dependence (American Psychiatric Association, 1994).

With respect to attrition bias (see Table 2), individuals who completed the interview in Years 2 through 4 were less likely to be male and had lower levels of Year 1 alcohol consumption, as compared to non-completers. In addition, participants in Years 3 and 4 used fewer illicit drugs in their lifetime on average, as compared to non-completers. However, individuals who met criteria for alcohol or cannabis use disorders were represented similarly among completers and non-completers.

Table 2.

Comparison of Baseline Characteristics of 1,253 Participants who Did and Did Not Complete Follow-up Assessments in Years 2, 3, and 4

                                               Year 2                          Year 3                          Year 4
                                               Completed      Did Not         Completed      Did Not         Completed      Did Not
Sex (% Male)***                                47.3           61.3            46.5           63.2            46.2           64.7
Race (% White)                                 70.9           70.3            70.7           71.5            70.3           74.2
Mother’s education (% with graduate degree)    35.5           37.6            36.2           31.6            35.6           35.8
Combined SAT score, mean (SD)                  1267.9 (119.7) 1276.0 (117.4)  1269.5 (119.7) 1262.7 (118.1)  1268.3 (118.4) 1271.1 (127.3)
Importance of religion (%)
  Not important*                               24.8           36.9            24.9           33.6            24.9           33.3
  Slightly important                           24.1           23.4            24.8           19.1            24.8           19.2
  Moderately important                         31.3           27.0            31.1           29.6            30.7           32.1
  Extremely important                          19.7           12.6            19.3           17.8            19.6           15.4
ADHD (% with current diagnosis)                3.6            5.4             3.9            2.6             3.9            2.6
Beck Depression Inventory, mean (SD)           5.4 (5.2)      5.3 (5.5)       5.4 (5.2)      5.3 (5.4)       5.4 (5.2)      5.0 (5.3)
Beck Anxiety Inventory, mean (SD)              7.6 (7.0)      8.2 (8.8)       7.5 (7.0)      8.3 (8.3)       7.6 (6.9)      8.2 (8.6)
CES Depression scale, mean (SD)                10.7 (7.5)     12.1 (9.7)      10.7 (7.6)     11.8 (8.5)      10.7 (7.5)     11.5 (9.1)
# of illicit drugs ever used, mean (SD)**      1.4 (1.6)      1.7 (1.7)       1.4 (1.5)      2.0 (2.1)       1.4 (1.5)      2.0 (2.0)
Typical # drinks/drinking day, mean (SD)^a,*** 4.8 (2.7)      5.6 (3.0)       4.8 (2.7)      5.6 (3.0)       4.8 (2.6)      5.5 (3.3)
Age at first intoxication, mean (SD)^b         15.7 (1.7)     15.6 (1.6)      15.6 (1.7)     15.7 (1.6)      15.6 (1.7)     15.7 (1.6)
Alcohol use disorder (%)                       26.6           28.8            26.7           27.6            26.9           26.3
Cannabis use disorder (%)                      14.5           17.4            14.7           15.3            14.3           18.2
a Restricted to individuals who drank alcohol at least once in their lifetime.

b Restricted to individuals who were drunk on alcohol at least once in their lifetime.

* Denotes statistically significant differences between completers and non-completers in Year 2 (α = .05).

** Denotes statistically significant differences between completers and non-completers in Years 3 and 4 (α = .05).

*** Denotes statistically significant differences between completers and non-completers in Years 2 through 4 (α = .05).

Analysis of Average Effort Expended per Participant

To quantify the amount of effort expended to follow-up each participant, we abstracted data from disposition sheets for Years 2 through 4 to derive a series of three count variables representing the number of attempts interviewers made to contact the participant and schedule the interview, the number of different contact methods the interviewer used (e.g., email, mobile phone, instant messaging), and the number of different interviewers involved in contacting the participant. We excluded the Year 1 disposition sheets due to excessive missing data.

Table 3 presents an overview of the effort required to attain our follow-up rates each year. As can be seen, contact attempts averaged between four and five per year, with 75% of all completed interviews completed within the first five attempts. It is apparent that a great deal of effort was expended on obtaining the last 10% of participants who were hardest to reach.

Table 3.

Effort expended for completed interviews, by assessment year^a

                                      Mean   SD    Min   Max   Percentiles
                                                               75th   90th   95th
Number of contact attempts
  Year 2                              4.3    5.2   1     43    5      10     15
  Year 3                              4.5    5.5   0     53    5      9      14
  Year 4                              4.7    6.0   0     54    5      11     16
Number of contact methods attempted
  Year 2                              1.9    0.8   1     4     2      3      3
  Year 3                              2.1    0.9   1     5     3      3      4
  Year 4                              2.2    0.9   1     5     3      3      4
Number of interviewers
  Year 2                              1.2    0.4   1     4     1      2      2
  Year 3                              1.2    0.5   1     4     1      2      2
  Year 4                              1.2    0.5   1     5     1      2      2
a Results depict effort only for participants who completed the interview (Year 2, n=1,142; Year 3, n=1,101; Year 4, n=1,097).

To understand how much benefit was derived from the extra effort expended on the participants who were the most difficult to schedule and complete, Figures 1a through 1c plot the completion rate as a cumulative percent of all active participants for each assessment by three different measures of recruitment effort: number of contact attempts, number of interviewers needed to recruit the participant, and number of contact methods used. Results were strikingly similar for all three follow-up years, with completion reaching approximately 82% by 10 attempts and slowing thereafter. However, the extra effort expended after 10 attempts resulted in gains of 7% to 9% in the final completion rate. With respect to number of interviewers, substantial gains resulted from the involvement of a second interviewer only. Finally, diversifying the number of contact methods attempted resulted in substantial gains until the fourth method.
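Curves like those in Figure 1a can be derived from disposition-sheet data along these lines. The sketch is illustrative; the per-participant attempt counts are not published here, and the sample data in the comments are hypothetical:

```python
from collections import Counter

def cumulative_completion(attempts_to_complete, n_active):
    """Cumulative percent of active participants completed by each attempt
    number.  `attempts_to_complete` lists, for each completer, the attempt
    on which the interview was completed; `n_active` is the denominator of
    all active participants (completers and non-completers)."""
    counts = Counter(attempts_to_complete)
    curve, done = {}, 0
    for attempt in range(1, max(counts) + 1):
        done += counts.get(attempt, 0)
        curve[attempt] = 100 * done / n_active
    return curve

# Hypothetical example: of 5 active participants, two completed on the first
# attempt, one on the second, one on the third, and one never completed.
```

Because non-completers stay in the denominator, the curve plateaus below 100%, which is exactly the shape described for all three follow-up years.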

Figure 1.

Figure 1a. Cumulative percent of interviews completed, by number of contact attempts
Figure 1b. Cumulative percent of interviews completed, by number of interviewers
Figure 1c. Cumulative percent of interviews completed, by number of contact methods

Note: Percentages were computed among active participants at each assessment (n=1,248 in Year 2; n=1,231 in Year 3; n=1,193 in Year 4).
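The curves in Figure 1a amount to a running count of completions at or below each attempt threshold, divided by the number of active participants. A minimal sketch, using hypothetical toy data rather than the study's actual records:

```python
# Cumulative completion curve, as plotted in Figure 1a (hypothetical data).
def cumulative_completion(attempts_per_completion, n_active):
    """Percent of active participants completed by each attempt threshold k."""
    max_attempts = max(attempts_per_completion)
    curve = []
    for k in range(1, max_attempts + 1):
        done = sum(1 for a in attempts_per_completion if a <= k)
        curve.append(round(100 * done / n_active, 1))
    return curve

# Toy example: 8 completed interviews among 10 active participants
curve = cumulative_completion([1, 1, 2, 2, 3, 5, 5, 9], 10)
```

The final element of the curve is the overall completion rate; the shape of the curve shows where additional attempts stop paying off.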

Analysis of Variation in Response Rates by Contact Methods

To summarize the different contact methods used to reach active participants in Year 4, we counted the number of participants contacted by each method and the number for whom that method was productive, meaning it resulted in some form of a reply (i.e., the participant answered the phone, replied to the email, returned our call, etc.). Of the 1,193 individuals eligible for the Year 4 assessment, all but one were contacted via email (99.9%). Less prevalent contact methods were phone (70.6%), social networking sites (37.0%), and Skype® (0.3%). Most contact methods were productive at eliciting a reply for about two-thirds of individuals contacted by that method (67.1% for email, 69.2% for phone, 66.7% for Skype®), with the sole exception that social networking attempts were productive in only one-third of cases (31.3%), possibly reflecting the fact that it was usually a contact method of last resort.
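The productivity tally described above can be computed from an attempt-level log. The sketch below uses hypothetical field names and toy data, counting each participant at most once per method, as in our analysis:

```python
# Per-method productivity: participants attempted vs. participants who replied.
# The log format and data here are hypothetical illustrations.
from collections import defaultdict

def productivity(attempt_log):
    """attempt_log: iterable of (participant_id, method, replied) tuples."""
    attempted, replied = defaultdict(set), defaultdict(set)
    for pid, method, got_reply in attempt_log:
        attempted[method].add(pid)   # a participant counts once per method
        if got_reply:
            replied[method].add(pid)
    return {m: len(replied[m]) / len(attempted[m]) for m in attempted}

log = [
    (1, "email", True), (1, "phone", False),
    (2, "email", False), (2, "email", True),  # repeat attempts count once
    (3, "email", False), (3, "sns", True),
]
rates = productivity(log)
```

Because repeat attempts are deduplicated by participant, a method's rate reflects the share of people it eventually reached, not the share of attempts that succeeded.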

Analysis of Variation in Response Rates by Interview Location

To summarize the variability in interview locations (e.g., on-site, on-campus, off-site) and formats (e.g., in-person, phone, Skype®), we coded a single variable representing interview condition for each assessment. The proportion of interviews completed “on-site,” meaning in-person in our main offices, was very high in Year 3 (85.2%) and Year 4 (82.0%) but lower in Year 2 (67.0%). One possible explanation is that participants became more aware that they could often be paid on the spot in our main offices, rather than waiting for their payment to be mailed to them. In addition, housing for upper-class students was concentrated close to our main offices, whereas most of the first-year housing was located on the opposite side of campus.

To understand how much our completion rates benefited from our flexibility in interview conditions, we compared individuals within each interview condition on the basis of in-window completion and contact effort. Results are presented in Table 4 for the 1,142 individuals who completed the Year 2 interview, because this interview had the greatest variability in interview conditions. Although 67% of Year 2 interviews occurred in our main offices, many participants took advantage of the alternative venues we offered. Relatively few (5.5%) completed their interview by phone in Year 2, with this proportion increasing in Year 3 (10.1%) and Year 4 (8.3%; data not shown in a table). In every comparison, phone interviews required significantly more effort than any of the in-person conditions—an average of 15.4 contact attempts, as compared with in-person interviews in the main office (3.9), on-campus (3.0), or off-site (3.2). Phone interviews were also significantly more likely to be completed late. Because phone interviews were typically offered as a last resort, these results are not surprising. Slight but significant differences were also observed between in-person interviews in the main office and on-campus offices: on-campus interviews were significantly more likely to be on-time (77.0% vs. 66.8%) and required fewer contact attempts on average (3.0 vs. 3.9). Thus, the availability of on-campus offices appears to have provided a convenience to participants in their second year of college and helped us obtain more on-time completions with less effort. Conversely, the availability of a phone format enabled us to complete interviews with more difficult-to-reach participants. No significant differences were observed between phone and in-person interviewees on other Year 1 characteristics, with the exception that participants interviewed by phone at Year 2 had significantly higher Beck Depression Inventory scores. Comparisons of recruitment effort and timeliness in Years 3 and 4 yielded similar results, although the main-office vs. on-campus comparisons were not always statistically significant (data not shown in a table).

Table 4.

Timeliness of completion and effort expended for 1,142 participants in Year 2, by interview condition

                                       Main Office       On-Campus Offices   Off-Site*        Phone              All
                                       n=765 (67.0%)     n=283 (24.8%)       n=31 (2.7%)      n=63 (5.5%)        n=1,142 (100.0%)
Completed early (%)                    9.4a              2.1a                6.5              3.2                7.2
Completed in window (%)                66.8ab            77.0ac              58.1d            23.8bcd            66.7
Completed late (%)                     23.8b             20.8c               35.5d            73.0bcd            26.1
Contact attempts, Mean (SD)            3.9 (4.1)ab       3.0 (2.8)ac         3.2 (2.8)d       15.4 (10.9)bcd     4.3 (5.2)
Contact methods attempted, Mean (SD)   1.9 (0.8)b        1.7 (0.8)c          1.8 (0.7)d       3.2 (1.0)bcd       1.9 (0.9)
Interviewers, Mean (SD)                1.2 (0.4)b        1.1 (0.3)c          1.0 (0.2)d       1.8 (0.6)bcd       1.2 (0.4)

* Off-site locations include miscellaneous venues that were not specifically designated as research offices, including dormitory lounges and outdoors.
a–d Pairs of matching letters denote statistically significant differences between columns (p<.05).

DISCUSSION

This paper describes recruitment and follow-up strategies utilized in a longitudinal study of a large college student sample. We attained consistently high follow-up rates of 88% to 91% with minimal attrition bias. Our experiences reinforce that creating a study identity, updating locator data semiannually, offering cash incentive payments, and employing persistent and gregarious interviewers tend to promote high follow-up rates.

The present study offers several contributions to the methodological literature on longitudinal studies. First, we have demonstrated that college students are a feasible population to enroll and retain in a longitudinal study, despite the high levels of instability that typically characterize the college years. Second, our study provides a model for leveraging modern communications technology to enhance tracking and participation, and, perhaps more importantly, demonstrates the feasibility of adopting a flexible approach that adapts to emerging trends in real time. Based on our experience, we strongly believe that keeping in step with communications technology is crucial for research staff in order to earn trust and buy-in from adolescents and young adults. Third, we have demonstrated the effectiveness of a cost-efficient organizational structure that capitalizes on the natural socio-cultural commonalities that exist between student participants and student interviewers, without sacrificing important research outcomes, such as confidentiality, high follow-up rates, and data integrity.

As others have noted (Cottler et al., 1996), persistence, resourcefulness, and creativity are essential personal qualities for interviewers to achieve high follow-up rates. We believe these qualities are reflected in our present analyses of recruitment effort. Rather than putting a limit on the number of contacts, we continued our contact attempts as long as there was some response on the part of the participant. As a result, we have produced a set of empirical data capable of suggesting logical thresholds that could be applied in future research where the desired trade-offs between budget and follow-up rates may vary. For instance, completion rates greater than 60% could not be reasonably expected had we limited our efforts to only two methods of contacting participants, especially in Years 3 and 4. On the other hand, only minimal gains in completion rates were obtained from increasing our efforts beyond four different methods of contact.

Similarly, although most completed interviews required fewer than five contact attempts, considerably more attempts were required to make the difference between a modest and a high follow-up rate. Referencing our Year 4 data, limiting our efforts to 11 contact attempts would have resulted in a strong but suboptimal follow-up rate of 82.8% (i.e., 90% of completed interviews required no more than 11 attempts—the 90th percentile in Table 3—and 90% of the 92.0% observed completion rate is 82.8%). Such a follow-up rate may be adequate for studies where resources are insufficient to permit extremely high numbers of attempts. On the other hand, it may be more desirable to allocate resources in favor of a higher follow-up rate at the expense of a slightly smaller sample.
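The 82.8% figure is simply the product of two proportions, which can be checked directly:

```python
# Back-of-envelope check of the capped follow-up rate discussed in the text.
share_within_11_attempts = 0.90   # 90th percentile of contact attempts (Table 3, Year 4)
observed_completion_rate = 0.920  # Year 4 follow-up rate
capped_follow_up = share_within_11_attempts * observed_completion_rate
```

Rounded to three decimals, `capped_follow_up` recovers the 82.8% reported above.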

Limitations

Some limitations to the present analyses must be acknowledged. First, because we were constantly improving our follow-up procedures, there were inconsistencies from year to year in how data were recorded. Second, because we recruited students from only one university, our findings may not be generalizable to other types of academic institutions or other populations, such as criminal offenders. Third, because we adopted a diverse range of recruitment methods and allowed them to evolve over time, we cannot draw any conclusions about the actual impact of any single strategy on follow-up rates. Fourth, the degree to which the validity or consistency of the data was compromised in cases that required a high number of contacts for follow-up is an important topic for future research. Lastly, our recommendations are not meant to be exhaustive and are designed to build on, rather than supplant, other time-tested strategies that were not discussed here, such as stressing confidentiality.

Recommendations

Track contact with participants

Our experiences can be useful to other investigators conducting longitudinal studies, and in particular, studies involving college students. First, careful warehousing of contact information is a worthwhile investment of resources. When contact and locator information are stored together in a relational database (but separately from research interview data, to preserve confidentiality), detailed disposition sheets can be printed that assist the interviewer in tailoring their approach in subsequent assessments. With large samples it is desirable to automate as many tasks as possible, which is greatly facilitated by an efficient database design. Also, in the aggregate, data summaries can be invaluable to senior staff as they make resource allocation decisions, for example, to budget for staffing needs in an upcoming assessment, and to adjust spending priorities in real time when progress is slower than expected.
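One way to realize the separation described above is to keep locator and attempt data in their own relational database, linked to interview data only by a study ID. The sketch below is an assumed, minimal schema for illustration (using SQLite via Python); it is not the study's actual database design.

```python
# Minimal sketch of a locator database kept separate from interview data.
# Schema, table names, and fields are hypothetical illustrations.
import sqlite3

locator_db = sqlite3.connect(":memory:")  # in practice, a separate file or server
locator_db.executescript("""
CREATE TABLE participant (
    study_id TEXT PRIMARY KEY           -- links to interview data only via ID
);
CREATE TABLE contact_info (
    study_id TEXT REFERENCES participant(study_id),
    kind     TEXT CHECK (kind IN ('email','phone','mail','sns')),
    value    TEXT,
    updated  DATE
);
CREATE TABLE contact_attempt (
    study_id    TEXT REFERENCES participant(study_id),
    attempted   DATE,
    method      TEXT,
    interviewer TEXT,
    outcome     TEXT                    -- e.g. 'no answer', 'replied', 'scheduled'
);
""")

# A per-participant disposition sheet is then a simple query:
rows = locator_db.execute(
    "SELECT attempted, method, outcome FROM contact_attempt "
    "WHERE study_id = ? ORDER BY attempted", ("P0001",)
).fetchall()
```

Storing attempts as individual rows also makes the aggregate summaries used for staffing and budget decisions a matter of a GROUP BY query rather than manual tallying.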

Integrate multimodal recruitment strategies

We strongly advocate for the use of multimodal contact methods. In addition to the standard practice of varying the times and days of contact attempts, we regard it as equally important that interviewers vary their methods of communicating with participants, at least until contact is established. Although newer communication technologies, such as mobile phones, email, and SNS, played an important role in maintaining contact with our sample, traditional modes of communication, including “snail” mail and residential phones, remain useful.

Leverage advances in communications technology

Recent advances in communications technology present a double-edged sword for longitudinal studies. Many newer contact methods are more stable than in the past (e.g., email and mobile phones) and communication can be less labor-intensive and less costly (e.g., email vs. “snail” mail), providing obvious advantages for reaching a highly mobile population like college students. Unfortunately, the explosion of communications technology has also had the effect of bombarding students with aggressive marketing messages and appeals for information from a variety of sources, making it increasingly difficult for researchers to capture the attention of savvy students who are skeptical of monetary offers that might sound too good to be true. Even with the ability to offer cash payments to our participants, our interviewers had to be very skillful to break through the barrage of communications participants receive on a daily basis to make our study seem special and important. Varying the methods of communication provides the flexibility to appeal to a range of personal preferences in the target population, such as whether the originating number resembles a university extension or a personal mobile phone number. Personalized communications are, we believe, the most effective way of getting noticed and setting the study apart from unsolicited communications like spam and telemarketers.

Understand the capabilities of student interviewers

Because student research assistants played a major role in this study, it is worthwhile to point out the distinct strengths and weaknesses they brought to bear. As members of the same age group and campus community, our study population and student staff shared social and cultural bonds that promoted the easy establishment of trust and rapport. From the participant’s perspective, the presence of a friendly, sympathetic voice on the phone quickly set the student interviewer apart from other university callers. Furthermore, our interviewers’ familiarity with the geography, routines, and culture of this large campus enabled them to adapt to unpredictable schedule changes (e.g., finding alternate interview locations) and administer the interview with greater understanding.

One of our greatest concerns in using student staff was the potential for participants and interviewers to cross paths in social and academic situations after the interview, which could be embarrassing or potentially lead to a breach of confidentiality. Although to our knowledge such a breach never occurred, we were mindful of the risk and therefore trained all interviewers extensively on the importance of maintaining confidentiality, which included role-playing a variety of social scenarios in which a confidentiality breach could occur. These strategies were effective in minimizing social contact between interviewers and participants and adequately prepared our interviewers to handle the few inadvertent contacts that did occur, such as when the participant was known to the researcher indirectly via younger or older siblings. Another disadvantage of student researchers is that, because of their academic commitments, they tend to have limited hours available for scheduling interviews. Consequently we had to hire a greater number of interviewers to fulfill our needs, increasing the supervisory and administrative demands.

Screen applicants carefully

We identified several key skills and qualities possessed by our more effective interviewers. Besides being friendly and personable, the interviewer should also be compassionate and respectful of the stresses and demands typical of college life. Also, the quality of an interviewer’s phone voice is important for projecting appreciation and positive energy. Effective follow-up demands persistent enthusiasm from interviewers because of the difficulty in reaching some participants. Also, interviewers must have strong instincts about different personality types and be equipped to customize their style to the needs of the participant. For example, interviewers must be able to discern a participant’s level of organization and responsiveness in order to provide—or refrain from providing—extra appointment reminders. Perhaps most importantly, successful student interviewers possess a level of maturity, organization, and time-management skills sufficient to balance the competing demands of school and work, and a fundamental appreciation for research data to fuel a personal commitment to maintaining the details of research protocols.

Establish a strong organizational structure

In a study with a large sample size and a large field staff (10–15 part-time interviewers), outstanding organizational skills are an absolute requirement for maintaining high follow-up rates. In our case, the Recruitment Coordinator functioned as a central team member by monitoring progress on every participant and maintaining open lines of communication with every interviewer. She performed these tasks with great regularity, usually on a weekly or daily basis, and relied heavily on the use of relational databases to automate as many routines as possible. For example, she generated weekly progress reports for both the entire team and each individual interviewer, including timeliness of completion and number of overdue participants. We believe this level of monitoring was the key to our success, because problems could be identified and solved rapidly by redistributing workloads among staff, thereby preventing any difficult-to-reach participants from falling through the cracks. It also permitted the Recruitment Coordinator to maintain a cash management system that was nearly seamless, such that payment delays of more than a few days were quite rare. While sufficient incentive payments are desirable for achieving high follow-up rates (Festinger et al., 2005), we strongly believe that timeliness is also important for keeping participants happy and motivated, especially in longitudinal studies where the goal is to retain participants for many years. Lastly, a strong organizational structure helps create a work environment in which interviewers feel they are part of a cohesive team with shared goals, and that the work of every individual is important to the team. Interviewers are more likely to maintain a high level of motivation—and less likely to burn out—if they receive individualized monitoring, feedback, mentoring, and opportunities for growth.
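The weekly monitoring routine described above reduces to grouping open cases by interviewer and flagging overdue participants. A minimal sketch, with hypothetical field names and toy data (the study used relational-database reports, not this script):

```python
# Sketch of a weekly progress report: open cases per interviewer plus
# a list of overdue participants. Data and fields are hypothetical.
from collections import Counter
from datetime import date

def weekly_report(cases, today):
    """cases: list of dicts with 'pid', 'interviewer', 'due', 'completed'."""
    open_cases = [c for c in cases if not c["completed"]]
    per_interviewer = Counter(c["interviewer"] for c in open_cases)
    overdue = [c["pid"] for c in open_cases if c["due"] < today]
    return per_interviewer, overdue

cases = [
    {"pid": "P1", "interviewer": "A", "due": date(2011, 9, 1), "completed": False},
    {"pid": "P2", "interviewer": "A", "due": date(2011, 9, 20), "completed": False},
    {"pid": "P3", "interviewer": "B", "due": date(2011, 9, 5), "completed": True},
]
per_interviewer, overdue = weekly_report(cases, date(2011, 9, 10))
```

Running such a report weekly makes workload imbalances visible early, so cases can be redistributed before participants fall out of their assessment window.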

Budget thoughtfully

Resources are always limited in research studies, yet our experiences indicate that a great deal can be accomplished within a limited budget through careful planning and adapting throughout the course of the study. First, as noted above, certain economies can be realized by employing student interviewers on an hourly basis, although the trade-off is the need for more supervisory staff. In this scenario, it is desirable to budget for redundancies, realizing that some student interviewers will not be successful often due to unexpected personal problems, despite being highly qualified. By hiring a large number of hourly staff and cross-training them to perform a variety of operations (e.g., recruiting, interviewing, quality assurance reviews, filing) we maintained a highly adaptable team and ensured adequate coverage throughout periods of fluctuating demand. Second, investigators should plan for staff responsibilities to evolve as the study progresses. For instance, in the first two years, the PI may need to devote 50% effort to hands-on development of study materials, staff training, and even administering interviews, but can reduce her effort to a more supervisory role in later years. For an interview study such as ours, we recommend budgeting for three full-time senior staff beginning with the first year: Project Director (Master’s level), Recruitment Coordinator (Master’s level), and Lead Interviewer (Bachelor’s level). All three play a role in establishing effective systems and infrastructure that can later be maintained by hourly staff, thereby freeing up senior staff for other tasks that become important as the study matures (e.g., analysis, manuscript preparation, grant writing). 
Third, investigators would be wise to budget for other important—yet easily overlooked—needs, including participant payments, rent for off-site interview spaces, phone charges (including interviewers’ mobile phones), and technology infrastructure (such as web site development and hosting, software for creating optically scannable forms, etc.).

Customer service principles should infuse all aspects of recruitment

While cash payments were highly motivating for most participants, we also acknowledge the importance of other forms of reward. We spent considerable time training interviewers on customer service principles to ensure that participants would always have a positive experience every time they had contact with any member of the research team. Even when contact attempts become numerous, interviewers should strive to convey a sincere attitude of concern for participants’ time and an appreciation for their unique story and contribution to the study. For example, when calling, interviewers should acknowledge the participant’s busy schedule and make sure that they have not called at an inconvenient time. Many of our participants have commented positively about this aspect of our communications with them. Other examples include common courtesies like always being on time, never keeping a participant waiting (even when they show up hours or days early), thanking them for their time and honesty, stressing confidentiality at every stage, and taking time to convey the significance of the study. When participants disclose personal difficulties, interviewers can help them find appropriate resources. Some of these actions might seem obvious, yet we found that interviewers applied them inconsistently at first; we therefore recommend that other researchers incorporate them explicitly into training materials, scripts, and role-plays. During routine data spot-checks conducted by phone post-interview, we consistently received positive feedback from participants when we asked them to evaluate how their interviewer treated them.

Applicability to Alternate Settings

Many of our recommendations have broad applicability to non-university settings and studies of populations other than college students, such as in a workplace. An approach emphasizing personalized communications, multimodal contact strategies, thoughtful use of communications technologies, and adherence to customer service principles would be critical to success in most settings. In many workplaces, the ability to access employees in one central location might streamline recruitment and follow-up efforts, yet different work environments will have different varieties of specific technologies and vehicles for communication (e.g., employee newsletters, company web sites), so the researcher must learn which strategies employees use most and then leverage as many as possible. Personalized communications might be especially important in a work setting to set research inquiries apart from work-related communications.

One major difference in our recommendations pertains to the choice of interviewer staff. When participants and interviewers are both college students, their social and generational commonalities create opportunities to foster connection and rapport. In most work settings, however, drawing research interviewers and participants from the same pool of employees would be ill-advised, due to obvious ethical concerns about confidentiality and the possible perception of coercion to participate. Researchers can take steps to select interviewers who have significant commonalities with the participant pool, at least demographically (e.g., age, race, sex) and, if possible, experientially (e.g., work in a similar field, live in the same region). Nevertheless, these factors cannot outweigh the importance of an interviewer’s interpersonal skills for establishing rapport and fostering participant buy-in, which reiterates the need for well-developed processes for screening, training, and supervision.

Conclusions

Longitudinal studies have been and will continue to be the gold standard study design for disentangling the impact of multiple influences on behavior in developmental and naturalistic epidemiologic studies. The recommendations presented in this paper add to the existing body of literature on methods for successfully conducting longitudinal studies with minimal attrition. In particular, our experiences can serve as a model for establishing future cohort studies, especially those that involve technologically savvy young adults. Keeping up with and leveraging advances in communications technology is especially critical for success. Not only are the strategies outlined in this paper useful for maintaining high follow-up rates, but they also contribute to high-quality data which can be used to answer many significant questions about young adult health and risk behaviors.

Acknowledgments

This study was funded by the National Institute on Drug Abuse (R01DA14845, Dr. Arria, PI). Special thanks are extended to Adam Grant, Lauren Stern, Emily Winick, Elizabeth Zarate, the interviewing team, and the participants.

References

  1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-IV. 4th ed. Washington, DC: American Psychiatric Press; 1994.
  2. Arria AM, Caldeira KM, O’Grady KE, Vincent KB, Fitzelle DB, Johnson EP, Wish ED. Drug exposure opportunities and use patterns among college students: Results of a longitudinal prospective cohort study. Substance Abuse. 2008;29(4):19–38. doi:10.1080/08897070802418451.
  3. Beck AT, Epstein N, Brown G, Steer RA. An inventory for measuring clinical anxiety: Psychometric properties. Journal of Consulting and Clinical Psychology. 1988;56(6):893–897. doi:10.1037//0022-006x.56.6.893.
  4. Beck AT, Rush AJ, Shaw BF, Emery G. Cognitive therapy of depression. New York: The Guilford Press; 1979.
  5. Cottler LB, Compton WM, Ben-Abdallah A, Horne M, Claverie D. Achieving a 96.6 percent follow-up rate in a longitudinal study of drug abusers. Drug and Alcohol Dependence. 1996;41(3):209–217. doi:10.1016/0376-8716(96)01254-9.
  6. Cottler LB, Zipp JF, Robins LN, Spitznagel EL. Difficult-to-recruit respondents and their effect on prevalence estimates in an epidemiologic survey. American Journal of Epidemiology. 1987;125(2):329–339. doi:10.1093/oxfordjournals.aje.a114534.
  7. Ellison N, Steinfield C, Lampe C. Spatially bounded online social networks and social capital: The role of Facebook. Paper presented at the Annual Conference of the International Communication Association; Dresden, Germany; 2006. http://balzac.cnsi.ucsb.edu/inscites/wiki/images/8/85/Ellison_et_al_The_Role_of_Facebook.pdf
  8. Festinger DS, Marlowe DB, Croft JR, Dugosh KL, Mastro NK, Lee PA, Patapis NS. Do research payments precipitate drug use or coerce participation? Drug and Alcohol Dependence. 2005;78(3):275–281. doi:10.1016/j.drugalcdep.2004.11.011.
  9. Graham JW, Donaldson SI. Evaluating interventions with differential attrition: The importance of nonresponse mechanisms and use of follow-up data. Journal of Applied Psychology. 1993;78(1):119–128. doi:10.1037/0021-9010.78.1.119.
  10. Hargittai E. Whose space? Differences among users and non-users of social network sites. Journal of Computer-Mediated Communication. 2008;13(1):276–297.
  11. Kypri K, Gallagher SJ, Cashell-Smith ML. An Internet-based survey method for college student drinking research. Drug and Alcohol Dependence. 2004;76(1):45–53. doi:10.1016/j.drugalcdep.2004.04.001.
  12. Lampe C, Ellison N, Steinfield C. A face(book) in the crowd: Social searching vs. social browsing. In: Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work; 2006.
  13. Nemes S, Wish E, Wraight B, Messina N. Correlates of treatment follow-up difficulty. Substance Use and Misuse. 2002;37(1):19–45. doi:10.1081/ja-120001495.
  14. Ployhart RE, Vandenberg RJ. Longitudinal research: The theory, design, and analysis of change. Journal of Management. 2010;36(1):94–120. doi:10.1177/0149206309352110.
  15. Prinz RJ, Smith EP, Dumas JE, Laughlin JE, White DW, Barron R. Recruitment and retention of participants in prevention trials involving family-based interventions. American Journal of Preventive Medicine. 2001;20(Suppl 1):31–37. doi:10.1016/s0749-3797(00)00271-3.
  16. Radloff LS. The CES-D Scale: A self-report depression scale for research in the general population. Applied Psychological Measurement. 1977;1(3):385–401.
  17. Scott CK. A replicable model for achieving over 90% follow-up rates in longitudinal studies of substance abusers. Drug and Alcohol Dependence. 2004;74(1):21–36. doi:10.1016/j.drugalcdep.2003.11.007.
  18. Scott CK, Sonis J, Creamer M, Dennis ML. Maximizing follow-up in longitudinal studies of traumatized populations. Journal of Traumatic Stress. 2006;19(6):757–769. doi:10.1002/jts.20186.
  19. Stouthamer-Loeber M, van Kammen W, Loeber R. The nuts and bolts of implementing large-scale longitudinal studies. Violence and Victims. 1992;7(1):63–78.
