Abstract
Background:
Retention can be difficult in longitudinal trials, especially among minoritized groups and individuals with low socioeconomic status (SES), who may experience more barriers to research participation. Organized retention strategies may help; however, few studies have described such strategies in detail.
Methods:
We employed several strategies throughout a 15-month randomized controlled trial to encourage retention among a diverse sample of adults with type 2 diabetes. Participants were randomized to receive mobile health support for diabetes self-care for 12 months or an attention control. Participants completed assessments at 3, 6, 12, and 15 months post-baseline. We used three main categories of retention strategies: flexibility in participation (e.g., multiple methods for data collection), communication (e.g., tracking contacts), and community building (e.g., study branding, newsletters). We monitored participants’ use of strategies and examined associations between participant characteristics and retention.
Results:
Retention remained high (≥90%) at each follow-up assessment. Participants used various methods for survey completion: online (34%), in-person (31%), and mail (30%). Most (73%) used a mail-in A1c kit at least once. Multiple completion methods were important for retaining minoritized and lower SES participants, who completed assessments in person more frequently. Communication also facilitated retention: 39% of participants used a study Helpline, and tracking systems helped maintain contact.
Conclusions:
Retaining disadvantaged patients in clinical trials is necessary so findings generalize to and can benefit these populations. Retention strategies that reduce barriers to participation and engage participants and community partners can be successful. Future studies should assess the impact of retention strategies.
Trial Registration:
ClinicalTrials.gov NCT02409329
Keywords: retention, retention strategies, barriers to participation, type 2 diabetes, longitudinal trial, racial disparities
1. Introduction
In longitudinal trials, retention can be challenging, and poor retention seriously jeopardizes study integrity by limiting generalizability, reducing analytical power to detect intervention effects, and introducing bias [1,2]. Retention is especially challenging among persons with minoritized race/ethnicity and with low socioeconomic status (SES); these groups are traditionally harder to engage in research and less likely to participate in clinical trials than non-Hispanic Whites and those with high SES [2,3]. Considering both minoritized groups and persons with low SES have a disproportionate prevalence of disease and experience worse outcomes [4-7], attention to their recruitment and retention in clinical trials is critical.
Historically underrepresented groups experience myriad barriers to research participation. Even after overcoming potential barriers to enrolling in a study (e.g., awareness, mistrust) [8,9], other factors can impede continued participation. Competing demands for time often lead to attrition [10,11], such as balancing multiple jobs and having more family obligations (e.g., providing childcare, caring for sick relatives) [12]. For example, difficulty rescheduling study visits has been reported as a reason for withdrawal in ethnically diverse samples [13]. Financial strain and the costs associated with study participation present further barriers [14,15]. Lack of transportation is particularly challenging when studies require multiple in-person follow-up visits and participants live far from the study site [12,16]. Additionally, persons with lower SES may experience housing instability or frequently changing contact information, which makes maintaining communication with the research team difficult. These barriers to research participation indicate specific strategies may be needed to help retain diverse and disadvantaged patients. There has been an increase in reporting on approaches for recruiting minorities and persons with low SES, but retention strategies in long-term trials tend to be underreported or, when reported, lack sufficient detail [17].
We implemented several strategies throughout a 15-month randomized controlled trial (RCT) to encourage retention among a diverse patient sample with type 2 diabetes (T2D) and monitored participants’ use of the strategies to determine their relative success. Our research and selection of retention strategies were informed by community-engaged research practices with our clinic partners (which include reciprocal relationships and clear, frequent communication) [18] and agile methods (which require iterative improvements based on feedback from target users - in this case, patients and clinic staff) [19]. Our goals for the study did not include formal development and testing of retention strategies; rather, our goals were to conduct a rigorous, high-quality trial, ensure representation of minoritized and historically underrepresented groups for whom and with whom the interventions were designed, and form and maintain positive relationships with clinic partners. Some retention strategies were based on our prior research, some were added in response to clinic/patient suggestions, and others were added to express appreciation to clinics/patients or to improve workflows. Our purpose herein is to retrospectively classify and describe our strategies, report the success of strategies (when measured), and discuss the lessons learned to help inform future trials and guide future systematic examination of retention strategies among historically underrepresented groups.
2. Study Context
2.1. Intervention Overview
We conducted a 15-month RCT to evaluate two mobile phone-delivered interventions for T2D self-care: REACH (Rapid Encouragement/Education And Communications for Health) and FAMS (Family focused Add-on for Motivating Self-care). REACH provides tailored and targeted text messages designed to improve diabetes medication adherence, whereas FAMS provides monthly phone coaching to set diabetes self-care goals with related text message content designed to help patients improve family/friend involvement in their diabetes care. We worked with MEMOTEXT, an algorithmic communications and data management platform, to develop the text message functionality for both interventions. Details on both interventions, the RCT protocol, and REACH and FAMS effects have been published [20-23]. Procedures were approved by the Vanderbilt University Institutional Review Board. This study is registered with ClinicalTrials.gov (NCT02409329).
2.2. Recruitment Strategies & Eligibility Criteria
Between May 2016 and December 2017, we recruited participants from 13 community health clinics located in and around Nashville, TN and 3 Vanderbilt primary care clinics. Community health clinics serve as safety net clinics and disproportionately serve Black/African Americans, Hispanics, and persons without health insurance [24]. At Vanderbilt, we used information in the electronic health record (EHR) to prioritize recruitment of patients from minoritized race/ethnicity groups and/or who had public health insurance only to recruit a population similar to that of our community health clinics. We recruited participants primarily using opt-in or opt-out letters mailed from clinics, followed by a phone call from a research assistant (RA) to assess interest. Other recruitment methods included interest boxes and flyers in clinics, provider referral, and in-person recruitment in clinic waiting rooms and clinic-community events. Eligible participants were adults (≥18 years old) with a T2D diagnosis prescribed at least one daily diabetes medication. We excluded non-English speakers, patients whose most recent hemoglobin A1c (A1c) in the last 12 months was <6.8% (51 mmol/mol), patients who did not have a cell phone with texting capability, and those who could not receive, read, and respond to a text message after instruction from a trained RA.
Several aspects of our inclusion/exclusion criteria supported our recruitment efforts. We found that participants without prior experience using text messaging could easily participate after a brief, hands-on lesson with an RA. We had no exclusion criteria based on comorbidities, which may have enhanced our ability to enroll disadvantaged groups. Also, we did not require an A1c in the last 12 months; having this as an inclusion criterion would have excluded many patients with discontinuities in care. Instead, we only used the presence of a recent low A1c as an exclusion criterion. The benefit of this approach was a lower burden for interested participants to enroll; the downside was that some enrolled participants had a low baseline A1c with little room for improvement, and we did not withdraw participants from the study based on baseline A1c results. Finally, our partnerships with and presence in community clinics for recruitment may have overcome participants’ barriers related to transportation and mistrust.
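To make these criteria concrete, the sketch below encodes the screening rules described above as a single check. It is illustrative only: the function and field names are hypothetical, and in practice screening combined EHR review with RA assessment rather than one automated function.

```python
# Hypothetical screening helper reflecting the eligibility criteria above.
# Field names are illustrative; actual screening combined EHR review and RA assessment.
def is_eligible(age, has_t2d, daily_diabetes_meds, speaks_english,
                recent_a1c_percent, has_texting_phone, can_text_after_instruction):
    """recent_a1c_percent: most recent A1c within the last 12 months, or None if none recorded."""
    if age < 18 or not has_t2d or daily_diabetes_meds < 1:
        return False
    if not speaks_english:
        return False
    # A recent low A1c (<6.8%) excludes; a *missing* recent A1c does not.
    if recent_a1c_percent is not None and recent_a1c_percent < 6.8:
        return False
    # Must have a cell phone with texting and be able to use it after instruction from an RA.
    return has_texting_phone and can_text_after_instruction

print(is_eligible(55, True, 1, True, 8.2, True, True))   # True
print(is_eligible(55, True, 1, True, None, True, True))  # True: no recent A1c is not exclusionary
print(is_eligible(55, True, 1, True, 6.5, True, True))   # False: recent A1c below 6.8%
```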
2.3. Procedures
Interested and eligible participants completed enrollment, which included informed consent, a baseline survey, and an A1c test. All patient data were entered into REDCap (Research Electronic Data Capture) [25]. After enrollment, our team’s statistician used R software to randomize participants to a control or an intervention group (REACH or REACH + FAMS). Specifically, we used optimal multivariate matching to ensure balance across conditions (detailed in [23]). All intervention participants received text messages for 12 months and, if assigned to REACH + FAMS, also received phone coaching for the first 6 months. All participants, including the control group, received a text message when A1c results for the study were available, access to a study Helpline to ask study-related questions or questions about their diabetes medications, and quarterly study newsletters (all detailed below). All participants were asked to complete subsequent surveys and A1c tests at follow-up points 3, 6, 12, and 15 months post-baseline. We collected EHR data at all assessments. We compensated participants up to $210 for completing all study measures. We did not compensate participants for responding to texts and did not provide cell phones or data plans.
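To convey the general idea of the matched randomization step, the simplified sketch below groups participants with similar baseline covariates and randomly assigns one member of each group to each arm. The trial itself used optimal multivariate matching implemented in R (detailed in [23]); this greedy Python version, with hypothetical covariates, is only an approximation of that approach.

```python
# Illustrative sketch of matched randomization into three arms.
# The study used optimal multivariate matching in R (see [23]); this greedy
# Python version forms triplets of similar participants using a standardized
# Euclidean distance and randomizes within triplets. Covariates are hypothetical.
import numpy as np

rng = np.random.default_rng(2016)

def matched_randomization(covariates, arms=("Control", "REACH", "REACH + FAMS")):
    """Greedily group participants into triplets with similar covariate
    profiles, then randomly assign one member of each triplet to each arm."""
    X = np.asarray(covariates, dtype=float)
    # Standardize so each covariate contributes comparably to the distance.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    unassigned = list(range(len(Z)))
    assignments = {}
    while len(unassigned) >= len(arms):
        seed = unassigned.pop(0)
        # Distance from the seed participant to all remaining participants.
        dists = np.linalg.norm(Z[unassigned] - Z[seed], axis=1)
        nearest = [unassigned[i] for i in np.argsort(dists)[: len(arms) - 1]]
        triplet = [seed] + nearest
        for idx in nearest:
            unassigned.remove(idx)
        for participant, arm in zip(triplet, rng.permutation(arms)):
            assignments[participant] = arm
    # Any leftover participants (fewer than one full triplet) are assigned at random.
    for participant in unassigned:
        assignments[participant] = rng.choice(arms)
    return assignments

# Example: made-up baseline age, A1c, and years of education for 9 participants.
example = [[55, 8.6, 12], [60, 9.1, 14], [48, 7.9, 16],
           [62, 10.2, 12], [51, 8.0, 13], [57, 9.5, 11],
           [45, 7.5, 18], [66, 8.8, 10], [53, 9.9, 15]]
print(matched_randomization(example))
```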
2.4. Follow-up Protocol
Once a participant enrolled in the study, we projected subsequent dates for completing their follow-up study materials based on their enrollment date. Participants had a ±21-day window around each follow-up date to complete study materials (i.e., a survey and A1c test). After the follow-up window closed, we did not contact participants further, but would accept data returned outside the window. Participants who missed one or more follow-up assessments were still contacted to complete subsequent follow-ups.
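A minimal sketch of the windowing logic described above, assuming follow-up targets are projected from the enrollment date and each has a ±21-day completion window; the function and field names are illustrative, and the month-to-day conversion is an approximation.

```python
# Illustrative calculation of projected follow-up dates and their ±21-day windows.
from datetime import date, timedelta

FOLLOW_UP_MONTHS = (3, 6, 12, 15)
WINDOW_DAYS = 21

def follow_up_windows(enrollment_date: date):
    """Return each follow-up's target date and the dates its window opens/closes."""
    windows = []
    for months in FOLLOW_UP_MONTHS:
        target = enrollment_date + timedelta(days=round(months * 30.44))  # approximate months
        windows.append({
            "follow_up": f"{months} months",
            "target": target,
            "window_opens": target - timedelta(days=WINDOW_DAYS),
            "window_closes": target + timedelta(days=WINDOW_DAYS),
        })
    return windows

for w in follow_up_windows(date(2016, 5, 2)):
    print(w["follow_up"], w["window_opens"], "to", w["window_closes"])
```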
2.5. Study Participants
We exceeded our recruitment goal of 500 participants (N=512). However, we administratively withdrew 6 participants before randomization for not completing enrollment procedures within the run-in period (n=5) or for inappropriate behavior toward study staff (n=1). Therefore, our final sample for the RCT included 506 participants (Table 1). Our sample was 54% female with nearly half (43%) recruited from community health clinics. Over half (52%) identified as a racial/ethnic minority, 61% reported annual household incomes of <$35,000, and 42% had ≤12 years of education. Almost half (48%) were underinsured (i.e., uninsured or publicly insured only), including 23% who were uninsured. Twelve percent of our sample was homeless at the time of enrollment. The average A1c at enrollment was 8.6% (SD 1.8%).
Table 1.
Participant characteristics at baseline (N=506)
Characteristic | M ± SD or n (%) |
---|---|
Age, years | 55.9 ± 9.6 |
Gender, male | 232 (45.8) |
Race/Ethnicity a | |
Non-Hispanic White | 242 (48.1) |
Non-Hispanic Black | 198 (39.4) |
Non-Hispanic Other race(s) | 32 (6.3) |
Hispanic | 31 (6.2) |
Socioeconomic Status | |
Education, years b | 14.1 ± 3.1 |
Annual Household Income, USD c | |
< $10,000 | 92 (19.8) |
$10,000 – $34,999 | 189 (40.7) |
$35,000 – $54,999 | 68 (14.7) |
≥ $55,000 | 115 (24.8) |
Health Insurance d | |
Uninsured | 117 (23.3) |
Public Only | 126 (25.1) |
Private | 259 (51.6) |
Homeless e | 58 (11.6) |
Health Literacy (BHLS) | 13.1 ± 2.5 |
Numeracy (SNS) | 4.4 ± 1.3 |
Clinic Site, Community Health Clinic | 216 (42.7) |
Diabetes Duration, years | 11.0 ± 7.9 |
Diabetes Medication Type | |
Oral only | 260 (51.4) |
Insulin only | 83 (16.4) |
Both Oral and Insulin | 163 (32.2) |
Hemoglobin A1c, % | 8.6 ± 1.8 |
USD, United States Dollars; BHLS, Brief Health Literacy Scale; SNS, Subjective Numeracy Scale
a 3 participants did not report race/ethnicity.
b 8 participants did not report years of education.
c 42 participants did not report income.
d 4 participants did not know their insurance status.
e 7 participants did not report information on housing; homeless as defined by US Department of Health and Human Services, Section 330(h)(5)(A) and HRSA/Bureau of Primary Health Care, Program Assistance Letter 99-12, Health Care for the Homeless Principles of Practice.
2.6. Retention numbers
We considered participants “retained” at each follow-up timepoint if they completed the survey and/or A1c test. Our goal was to have ≥80% of participants retained at each follow-up. We ultimately exceeded this goal and retained ≥90% at each timepoint. Retention remained high overall throughout the trial (Figure 1).
Figure 1.
Retention rates and numbers throughout the REACH trial. Participants who withdrew or were deceased were not removed from calculated retention rates.
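The retention definition above can be summarized in a short sketch: a participant counts as retained at a timepoint if they completed the survey and/or the A1c test, and withdrawals and deaths remain in the denominator. The records below are invented for illustration, not study data.

```python
# Hypothetical sketch of the retention definition used in this trial.
records = [
    # (participant_id, timepoint, survey_done, a1c_done)
    (1, "3m", True, True),
    (2, "3m", False, True),
    (3, "3m", False, False),   # not retained at 3 months
    (1, "6m", True, False),
]

def retention_rate(records, timepoint, n_randomized):
    """Retained = completed the survey and/or A1c; denominator is all randomized participants."""
    retained = {pid for pid, tp, survey, a1c in records
                if tp == timepoint and (survey or a1c)}
    return len(retained) / n_randomized

print(f"3-month retention: {retention_rate(records, '3m', 3):.0%}")  # 67%
```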
Table 2 reports nonparametric associations between participants’ baseline characteristics and the number of follow-up assessments completed. Females completed more surveys and A1c tests than males. Older age was associated with completion of more A1c tests, and participants recruited from Vanderbilt completed more A1c tests than those recruited from community health clinics. We believe these associations may be attributable to care plans (e.g., more A1c tests completed as part of regular clinical care) rather than to study participation, since survey completion was not associated with older age or care site. Assigned condition, race/ethnicity, annual household income, housing status, education, health literacy, and baseline A1c were not associated with the number of completed follow-up assessments.
Table 2.
Nonparametric associations between participant baseline characteristics and number of follow-up assessments completed.
Characteristic | Completed Surveys (M±SD) | P value | Completed A1c Tests (M±SD) | P value | Completed Both (M±SD) | P value |
---|---|---|---|---|---|---|
Condition | | .644 | | .777 | | .483 |
Control | 3.5±1.1 | | 3.6±0.9 | | 3.4±1.2 | |
REACH only | 3.5±1.0 | | 3.6±0.9 | | 3.4±1.0 | |
REACH + FAMS | 3.5±1.0 | | 3.6±0.8 | | 3.4±1.0 | |
Gender | | **<.001** | | **.003** | | **<.001** |
Male | 3.3±1.1 | | 3.5±1.0 | | 3.2±1.2 | |
Female | 3.6±1.0 | | 3.7±0.8 | | 3.5±1.0 | |
Race/ethnicity | | .565 | | .590 | | .457 |
Non-Hispanic White | 3.4±1.1 | | 3.6±0.9 | | 3.3±1.2 | |
Non-Hispanic Black | 3.5±1.0 | | 3.6±0.8 | | 3.5±1.0 | |
Other a | 3.5±1.0 | | 3.6±1.0 | | 3.4±1.1 | |
Household Income | | .400 | | .964 | | .784 |
<$10,000 | 3.5±1.0 | | 3.6±0.9 | | 3.4±1.1 | |
$10,000 – $34,999 | 3.5±1.0 | | 3.6±1.0 | | 3.4±1.1 | |
$35,000 – $54,999 | 3.6±1.0 | | 3.6±0.8 | | 3.4±1.1 | |
≥ $55,000 | 3.4±1.1 | | 3.7±0.7 | | 3.3±1.1 | |
Recruitment Site | | .529 | | **.046** | | .310 |
Community Health Clinic | 3.5±1.0 | | 3.5±0.9 | | 3.4±1.1 | |
Vanderbilt Primary Care | 3.5±1.1 | | 3.7±0.8 | | 3.4±1.1 | |
Housing Status | | .289 | | 1.00 | | .387 |
Stable housing | 3.5±1.1 | | 3.6±0.9 | | 3.4±1.1 | |
Homeless | 3.6±1.0 | | 3.6±0.9 | | 3.4±1.1 | |
Characteristic (continuous) | rho | P value | rho | P value | rho | P value |
Age, years | .054 | .225 | .128 | **.004** | .075 | .092 |
Education, years | −.033 | .462 | −.034 | .450 | −.046 | .303 |
Health Literacy score | .037 | .412 | .015 | .729 | .049 | .275 |
Hemoglobin A1c | .011 | .800 | .033 | .469 | .012 | .782 |
a Including Hispanic and multiracial.
Number of follow-up assessments completed ranged from 0 to 4. Kruskal-Wallis tests for categorical characteristics; Spearman’s rho for continuous characteristics. P values <.05 shown in bold.
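For readers interested in reproducing this style of analysis, the sketch below shows the two nonparametric tests named in the table footnote, assuming scipy is available; the data are simulated stand-ins, not study data.

```python
# Illustrative versions of the Table 2 analyses: Kruskal-Wallis for categorical
# baseline characteristics and Spearman's rho for continuous ones, against the
# number of completed follow-up assessments (0-4). Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 506
completed = rng.integers(0, 5, size=n)           # follow-ups completed, 0-4
gender = rng.choice(["Male", "Female"], size=n)  # example categorical characteristic
age = rng.normal(55.9, 9.6, size=n)              # example continuous characteristic

# Kruskal-Wallis: does the distribution of completed assessments differ by gender?
groups = [completed[gender == g] for g in np.unique(gender)]
H, p_kw = stats.kruskal(*groups)

# Spearman's rho: is age associated with the number of completed assessments?
rho, p_sp = stats.spearmanr(age, completed)

print(f"Kruskal-Wallis p={p_kw:.3f}; Spearman rho={rho:.3f}, p={p_sp:.3f}")
```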
3. Retention Strategies
During the trial, we employed a variety of strategies to retain participants. Drawing from prior review articles classifying retention strategies in trials [1,26-28], we categorized our retention strategies into three types: (1) flexibility in participation strategies, (2) communication strategies, and (3) community-building strategies. All strategies are described in Table 3.
Table 3.
Retention strategies used in REACH trial
Category | Strategy/Item | Frequency/Timepoint | Description |
---|---|---|---|
Flexibility in Participation Strategies | Survey completion methods | Enrollment and each follow-up | We offered several formats for participants to complete their surveys: (1) in person with an RA either at the clinic or our office, (2) paper via mail, (3) online via REDCap, or (4) over the phone with an RA. |
 | A1c completion methods | Enrollment and each follow-up | While venipuncture at clinics was the preferred method for A1c testing, we also accepted point-of-care testing at clinics and mail-in A1c test kits, which participants could complete at home. |
 | Increased compensation over time and choice in payment method | Following enrollment and each follow-up | We incrementally increased compensation for both survey and A1c test completion at each follow-up. We also offered two payment methods for participants to choose from: checks and pre-loaded debit cards. |
 | Options for limited participation instead of full withdrawal | When participants expressed interest in withdrawing from the study | We assessed which parts of the study (text messaging, surveys, A1c tests) participants were not interested in or having problems with and offered alternative solutions to continue data collection and avoid full withdrawal (e.g., if annoyed by text messages, we asked if they would continue to complete surveys and A1c tests). |
Communication Strategies | Contact and tracking methods | Enrollment and each follow-up | We used both phone calls and text messages to remind participants about upcoming follow-up assessments. We also collected and used secondary contact numbers (e.g., work number, a family member or friend’s number) in case we could not reach participants on their primary number. We used a tracking system to log and organize all communications. |
 | Reminder protocols and accountability checks | All follow-up assessments while the window was open | We developed a protocol that specified when and how RAs should contact participants during a follow-up window until study materials were completed. Accountability checks helped ensure each RA followed the protocol and identified missed contact attempts. |
 | Check-in phone call during assessment gap | 9 months | We reminded participants about the timeline (i.e., how long they had been in the study and the time remaining) and verified both their contact information and preferred method for completing the 12-month survey. We did this because there was no 9-month study assessment and we did not want to lose track of participants. |
 | Accessing A1c results | After each study assessment | We offered two methods to view A1c results: online via a study website or by calling the study Helpline. |
 | Study Helpline | Continuous access throughout study | The Helpline provided a phone number for participants to use to ask study-related questions, report technical problems with the messaging system, and ask our study pharmacist questions related to their diabetes medications. Participants also used it to report changes to their contact information. |
Community Building Strategies | Fostering clinic partnerships | Continuously throughout trial | We regularly updated clinics about study progress, helped support clinic initiatives through volunteering, and offered small gifts of gratitude. |
 | Newsletters | Quarterly throughout trial | Newsletters provided information on healthy living with diabetes; example articles included tips for traveling with diabetes medications, instructions for simple chair yoga, and healthy recipes. Returned-to-sender newsletters cued us to contact participants/secondary contacts for new contact information. |
 | Study branded non-financial incentives | 3-month and 6- or 12-month follow-up | Magnets provided at 3 months displayed the REACH logo, a thank you message for participating in the study, and the phone number for the Helpline. At 6 or 12 months, we provided a t-shirt or water bottle displaying the REACH logo. |
 | Birthday cards | Within 2 weeks of participant’s birthday | We mailed personalized birthday cards with a hand-written note, signed by study team members, to participants in the month of their birthday. |
RA, Research Assistant; REDCap, Research Electronic Data Capture; REACH, Rapid Encouragement/Education and Communications for Health
3.1. Flexibility in Participation Strategies
3.1.1. Survey Completion Methods
We offered several formats for participants to complete their baseline and follow-up surveys: (1) in person with an RA either at their clinic or our office, (2) paper via mail, (3) online via REDCap, or (4) over the phone with an RA. Using a variety of methods provided flexibility for participants to complete their survey in the most convenient way possible, based on their preference and circumstances. For example, if a participant came to the clinic for an appointment within his/her follow-up window, an RA could meet him/her at the clinic before/after the appointment to complete the survey. Offering remote survey options (mail, online, or phone) allowed participants to complete their survey at home independently and reduced study burdens such as transportation and limited time. If participants did not have internet access or were not comfortable with computers, they could choose to complete the survey on paper or over the phone rather than online. If a participant requested a paper survey, we mailed them the survey along with a labeled, prepaid return envelope.
We encouraged participants to complete their enrollment procedures and baseline survey in person with an RA before/after a clinic visit to help align follow-up assessments with subsequent clinic visits. This also helped establish a face-to-face connection with the participant and allowed the RA to guide them through the survey measures to ensure comprehension. Sixty-nine percent of participants completed their baseline survey face-to-face with an RA. After enrollment, there was more variation in participants’ selected survey method. Across all follow-up assessments, participants showed an approximately equal preference for completing the survey online (34%), in person (31%), and by mail (30%) (Figure 2). Surveys via phone were the least popular option (4%), likely because most surveys were long and it was difficult to follow the questions and response options over the phone. The increase in 15-month survey completion by phone is likely because this assessment was much shorter and therefore easier to complete by phone.
Figure 2.
Percent of participants who used each survey method across assessments
We examined whether race/ethnicity, income (annual household income <$35,000 vs. ≥$35,000), or recruitment site (community clinic vs. Vanderbilt) was associated with follow-up survey completion methods and, separately, with use of multiple survey completion methods over time. We used non-parametric tests of difference (Fisher’s exact, Kruskal-Wallis, or Mann-Whitney U tests). At each follow-up assessment, race/ethnicity was associated with method of survey completion (all p values <.05). Compared to non-Hispanic White participants, non-Hispanic Black participants completed their surveys online less frequently (on average 16% less), and both non-Hispanic Black participants and participants reporting another race/ethnicity completed their surveys in person an average of 15% more frequently. Compared to participants with higher income, participants with lower income completed their survey online much less frequently (on average 39% less) and more frequently completed their survey in person (on average 26% more) or on paper (on average 24% more; all p values <.001). Participants recruited from community clinics completed their survey in person much more frequently (on average 34% more) and online much less frequently (on average 32% less) than those recruited from Vanderbilt (all p values <.001). Participants from minoritized groups did not use more or fewer survey completion methods than non-Hispanic White participants over the study period, but participants with lower income (1.67±0.69 different methods vs. 1.55±0.63 for those with higher income, p<.001) and those recruited from community clinics (1.74±0.70 different methods vs. 1.46±0.60 for those from Vanderbilt, p<.001) used more methods.
3.1.2. A1c Methods
We used multiple methods to collect A1c values. Participants may have had a lab-drawn venipuncture or point-of-care A1c test as part of their regular clinical care when they were due for their study A1c. In cases where a clinic and study A1c were aligned, we collected the A1c result from the EHR. If a patient was not scheduled to receive an A1c test as part of their care, the study staff worked with clinic staff to request the test and the study paid for the costs of laboratory testing. To accommodate participants who were unable to come into the clinic during their follow-up window, we offered mail-in A1c test kits, provided by CoreMedica Laboratories (Lee’s Summit, MO) which participants could complete at home. We also sent participants home with a mail-in kit if they met an RA in the clinic for a survey, but the clinic could not accommodate a venipuncture test for the participant due to workflow or scheduling.
Sixty percent of A1c tests were done at the clinic (via venipuncture or point-of-care) and 40% were done with mail-in kits. However, 73% of participants used a mail-in kit at least once during their study participation, suggesting the remote option for A1c data collection was helpful for most participants. We examined if race/ethnicity, income (annual household income <$35,000 vs. ≥$35,000), or recruitment site (community clinic vs. Vanderbilt) were associated with preferred method for follow-up A1c completion and, separately, proportion of follow-up study A1c values completed using kits. We did not find any evidence of differential kit use by race/ethnicity. We found evidence of less kit use at the 3-month follow-up among lower income participants (37% used a kit compared to 56% of higher income participants, p<.001), but this difference did not persist in subsequent follow-ups. We did see consistent evidence that participants from community clinics were less likely to use kits at each time point (all p values <.01). However, this gap narrowed over time, with 29% of community clinic participants using a kit at 3 months and 54% at 15 months, compared to a steady ~60% of Vanderbilt participants. Across follow-ups, 40% of study A1c values were completed via kit for community clinic participants whereas 60% were completed via kit for Vanderbilt participants (p<.001).
3.1.3. Compensation
We incrementally increased compensation for both survey and A1c test completion at each follow-up to incentivize participants to continue participation over the 15 months. Specifically, we paid participants the following amounts for completing their A1c and survey at each timepoint: Baseline ($20); 3 months ($35); 6 months ($50); 12 months ($65); and 15 months ($40, reflecting the briefer 15-month survey). We also offered two payment methods for participants to choose from: checks and preloaded debit cards. For our study population, it was critical to offer an alternative to checks because some participants noted they could not cash a check due to a lack of identification or transportation to a bank.
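As a quick arithmetic check, the per-assessment amounts listed above sum to the $210 maximum reported in Section 2.3; a short verification, with the schedule written as a hypothetical mapping:

```python
# Per-assessment compensation schedule stated above; the amounts sum to the $210 maximum.
compensation = {"baseline": 20, "3 months": 35, "6 months": 50, "12 months": 65, "15 months": 40}
assert sum(compensation.values()) == 210
print(f"Maximum total compensation: ${sum(compensation.values())}")
```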
3.1.4. Limited Participation Options
To minimize data loss and the number of participants withdrawing from the study, we offered options for limited participation instead of complete study withdrawal when participants expressed interest in withdrawing. We trained RAs to assess reasons for a participant’s desire to withdraw from the study and whether the participant would be open to a limited participation option. For participants who said they wanted to withdraw because they found the text messaging and/or phone coaching components burdensome, we offered the option to still complete and be compensated for surveys and/or A1c tests. For participants who did not want to receive intervention components or complete surveys and A1c testing, we requested permission to continue collecting their EHR data, including data such as A1c test results completed at their clinic as standard of care.
Throughout the study, 5.5% (n=28) of participants expressed an interest in withdrawing. When given, the primary reasons were life events (e.g., caring for a sick family member, long-term hospitalization, leaving the country) or high study burden. In addition, a few participants assigned to the control condition wanted to withdraw because they felt they were not receiving benefit from the study. By communicating limited participation options, we were able to prevent 3.2% (n=16) from withdrawing completely. Figure 1 shows which limited participation options were selected and when during the trial. Only 2.3% (n=12) withdrew from the study completely.
3.2. Communication Strategies
3.2.1. Contact and Tracking Methods
We used multiple methods of contact to remind participants about upcoming follow-up assessments and to prompt completion of study materials. Phone calls were our primary method of contact. We collected and used secondary contact numbers (another phone number where the participant could be reached, such as the number of a family member or friend) when participants could not be reached at the primary number. At all follow-up assessments, we asked participants to confirm their contact information, including both primary and secondary phone numbers. Repeatedly confirming contact information was important with our participant population because many were transient and frequently changed phone numbers. In addition to phone calls, we also employed text messages for study-related communications.
Our team comprised multiple RAs; therefore, we learned it was critical to keep our communications with participants highly organized. We tracked all our communications with participants in detail in a secure Microsoft Access database. This tracking system allowed any RA to view the communication history for any participant in the study. We identified patterns in when and by which method (morning or afternoon; call or text) a participant was most likely to be responsive, which improved our success in reaching them. To accommodate participants’ schedules and make it as easy as possible for participants to reach us, RAs were available via phone on some evenings and weekends as needed. Because multiple RAs could interact with a participant over the course of the study, this tracking system also helped create a seamless experience for participants.
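To illustrate the kind of tracking system described above, the sketch below implements a minimal shared contact log. The study used a Microsoft Access database, so this Python/sqlite3 version, with illustrative table and column names, is only an analogy to show the structure.

```python
# Hypothetical minimal contact log analogous to the study's Access tracking database.
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE contact_log (
        participant_id INTEGER NOT NULL,
        contacted_at   TEXT    NOT NULL,   -- ISO timestamp of the attempt
        method         TEXT    NOT NULL,   -- 'call' or 'text'
        ra_initials    TEXT    NOT NULL,   -- which RA made the attempt
        outcome        TEXT                -- e.g., 'reached', 'voicemail'
    )
""")

def log_contact(participant_id, method, ra_initials, outcome):
    conn.execute(
        "INSERT INTO contact_log VALUES (?, ?, ?, ?, ?)",
        (participant_id, datetime.now().isoformat(), method, ra_initials, outcome),
    )

log_contact(101, "call", "AB", "voicemail")
log_contact(101, "text", "CD", "reached")

# Any RA can review a participant's full communication history.
for row in conn.execute(
        "SELECT contacted_at, method, ra_initials, outcome "
        "FROM contact_log WHERE participant_id = ? ORDER BY contacted_at", (101,)):
    print(row)
```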
3.2.2. Reminder Protocols and Accountability Checks
We developed and implemented a detailed and structured protocol for when to communicate with study participants regarding study appointments and completion of surveys and A1c tests. This protocol specified when and how often RAs should contact participants throughout the follow-up window until study materials were completed or the window closed. Study coordinators also conducted weekly “accountability checks” to ensure each RA was following the protocol for contact attempts. This check also helped identify and address any inadvertently missed contact attempts.
3.2.3. Check-in Call During Study Gap
Most follow-up assessments were separated by 3 months; however, there was a longer, 6-month gap between the 6- and 12-month follow-ups. To maintain contact with participants during this time, RAs called participants around 9 months post-baseline to check in. RAs would verify their contact information, remind them of the study timeline, and ask their preferred method for completing the upcoming 12-month survey and A1c test.
3.2.4. Accessing A1c Results
When participants completed A1c tests for the study, the study team paid for these tests and was responsible for communicating the results to participants. We provided participants with two methods to access their A1c test results: via a study website or by calling the study Helpline. When A1c results for the study were available, participants received a text message with details on how to access their result. To access results online, participants could click a link in the text message that led to a HIPAA compliant website hosted by MEMOTEXT. The text also provided the Helpline number, which participants could call to have the study team tell them their result over the phone.
Most participants (78%; n=393) accessed A1c results for the study at least once. Of those who accessed their results, most used the website (85%; 333/393) while around 15% (60/393) used the Helpline.
3.2.5. Study Helpline
We offered a HIPAA compliant study Helpline (hosted by MEMOTEXT) as a central point of contact for participants to ask study-related questions, report technical problems with the text messaging, and speak to a clinical pharmacist about their diabetes medications. At enrollment, RAs told participants about the Helpline and asked them to save the phone number in their phone. When participants called, a recording instructed them to leave a voicemail with their question or concern. We returned calls within one business day. When calls were specifically related to participants’ diabetes medications, a clinical pharmacist returned the call to answer questions and refer participants to their healthcare provider when needed.
In total, there were 480 calls to the Helpline and over one-third of participants (39%, n=201) called the Helpline at least once. The most common reason for calls was regarding completion of study follow-up assessments (248 calls). Other common reasons for calls included: requesting and asking questions about A1c results (107 calls), updating study-related information including contact information and timing for text message delivery (45 calls), and reporting technical problems (33 calls). Only 5 participants (<1%) called to ask the pharmacist questions about their diabetes medications.
3.3. Community Building Strategies
3.3.1. Fostering Clinic Partnerships
We partnered with Vanderbilt primary care clinics and community health clinics to recruit participants for the study, but we also needed to maintain strong, positive partnerships with the clinics throughout the trial to help retain participants. First, we provided regular updates to clinics on our study progress via quarterly newsletters (detailed below). We also demonstrated commitment to the partnerships by supporting clinic initiatives (e.g., participating in health fairs and volunteering to help with the community garden). At certain holidays, we would send a team member to each of our clinics to drop off items expressing gratitude (e.g., snacks, flowers for the front desk, or a simple thank you card). When our team was in the clinic, we wore t-shirts with our study logo. We think this created a sense of familiarity and rapport with our team, as evidenced by multiple clinic staff unexpectedly asking for study-branded t-shirts (described below under “Study Branded Non-Financial Incentives”). Strong partnerships with clinic sites supported study data collection efforts. For instance, clinic staff who recognized the study and study team members were more willing to help us find space to meet with participants for in-person assessments and to schedule venipuncture A1c tests for the study. We aimed to have a consistent presence in the clinics to help foster relationships with clinic staff and build trust with participants as they saw our team working with their clinic staff.
3.3.2. Newsletters
Quarterly, we sent a participant newsletter and a clinic newsletter. Participant newsletters provided information on healthy living with diabetes (e.g., brief articles with tips for diabetes management and healthy recipes). We also provided information about resources in the community, such as hours and locations for local farmers’ markets and when community centers offered free cooking or exercise classes. Providing this information helped us engage and connect with participants between follow-up assessments, particularly those in the control condition who were not receiving text messages. Mailed newsletters that were returned to sender prompted our team to call participants’ phone numbers and secondary contacts, helping us catch changes to contact information quickly and between study assessments.
The clinic newsletter provided updates about study progress, new team members, and enrollment and retention numbers by clinic site. These were intended to engage our clinic partners and inform new and existing clinic staff about our study. Newsletters included pictures of our study staff to enhance familiarity. We took multiple copies of printed clinic newsletters to front desk staff at clinics and asked them to distribute to interested staff. We also emailed electronic versions to our contacts at each clinic (clinic directors, lead physicians, or pharmacists) and asked them to share with their staff.
3.3.3. Study Branded Non-Financial Incentives
We distributed small tokens of appreciation branded with our study logo (“study swag”) to participants at various follow-up assessments, either in person or via mail for those participating remotely. These items were meant to thank our participants for their time and contributions to the project, create a sense of belonging and community, and build recognition for our study logo.
At the 3-month follow-up, participants were given a magnet which displayed the study logo, the phone number for the Helpline, and a thank-you note for participating in the study. At 6 months, participants were given either a water bottle or a high-quality t-shirt branded with the logo, per their preference. We also gave clinic staff and providers study swag; we did not initially plan for this, but after several clinic staff members asked about it, we ultimately provided t-shirts to all interested providers and staff at our partnering clinics. We think this further enhanced community building and recognition for our study and indicated to patients that their clinic supported the study.
3.3.4. Birthday Cards
We mailed personalized birthday cards to all participants within 2 weeks of their birthday. Cards displayed the study logo and included a hand-written note expressing our appreciation for the participant, signed by the principal investigator and study team members. Participants often told study staff how much they appreciated the birthday cards, but we did not track how frequently this occurred.
4. Discussion
We implemented a variety of strategies to overcome common barriers to participation and encourage retention among a patient sample in which over half had a minoritized race or ethnicity and over half had lower SES. Throughout a 15-month RCT, we retained ≥90% of participants at each follow-up (based on completion of either survey or A1c data), and retention did not vary by study condition, race/ethnicity, or socioeconomic factors (i.e., education, income, insurance, housing status). Only 2% of the sample withdrew completely from the study. We organized our retention strategies into three main categories: flexibility in participation, communication, and community building. Our success was likely attributable to combinations of these strategies, but the available data suggest that strategies allowing for flexibility in participation were crucial, particularly for retaining persons with lower SES, who completed more assessments in person and used more completion methods over the course of the study. Non-Hispanic Black participants also completed more assessments in person and fewer online. These findings may reflect trust and familiarity with community clinics and/or disparities in Internet access and comfort [29]. Our partnership with community clinics may have overcome barriers related to transportation, as several clinics served patients housed in nearby shelters or public housing and/or were walkable from participants’ homes. Our conclusions about the value of flexibility in participation are supported by other studies [30-32]. For example, DeFrank et al. [31] had high retention (≥84% at each of three follow-ups) with a racially diverse and low-income sample of parent-child dyads in a pilot study assessing the feasibility of a web-based intervention for decreasing obesity risk. Their study also offered flexibility, including options for how participants completed their follow-up surveys (by phone, video call, or in person; with or without assistance), the type of gift card they received as their incentive (e.g., Target, Aldi, Modell’s), and how they received it (by mail or in person) [31].
Studies that prioritize uniformity in data collection methods may sacrifice data completeness and/or the participation of underserved and historically underrepresented groups; however, measurement error introduced by multiple data collection methods could inflate standard errors and obscure intervention effects. This trade-off should be considered carefully in the context of the research purpose and the state of the larger area of study. In this context, we concluded that noise introduced by variations in A1c method or in completion of self-report behavioral measures was preferable to systematically missing data that could bias our effects (e.g., persons who do not regularly have clinic visits or do not come to the clinic for a study visit are likely different in meaningful ways from those who do).
For studies that are unable to provide flexibility in data collection, frequent and structured communication may be a similarly impactful retention strategy [1,32,33]. Taani et al. [34] had >83% retention in a 6-month RCT that recruited African Americans with low income and hypertension from a free community clinic system. They cited ongoing communications as a significant reason for their high retention - in particular, communications within the study team, between the team and participants, and between the team and partnering clinics. Similar to our study, their team developed tracking systems and protocols for maintaining contact with participants and reminding them to complete follow-up materials [34]. In our experience, these systems helped create a high-functioning and organized research team, which likely contributed to improved retention. In addition, offering a dedicated study phone line for participants to call may have helped retain participants. Ensuring participants assigned to control conditions receive services and/or items of value as a part of study participation is important for even retention across study arms, and potentially more ethical when working with underserved groups or groups at risk for adverse outcomes [35]. The ability to use the Helpline to ask questions about diabetes medications had perceived value but, considering so few participants used it for this specific purpose, it was unlikely to impact outcomes and is therefore an ideal offering for an attention control. The Helpline combined with study newsletters, access to study A1c results, and study swag – all provided to both study arms – may have collectively helped us achieve non-differential retention rates.
We did not measure the relative contributions of our community-building strategies, but other studies using similar strategies have reported success in retaining minority and low-income populations. Nicholson et al. [33] recruited a low-income, racially diverse group of mothers into a longitudinal trial addressing childhood obesity and attributed their high retention rates in part to study branding. Specifically, they created a logo for their study that was used on all materials given to participants to maximize their identification with the project, including reminder postcards, thank you notes, and a newsletter. Partnering with community clinics and fostering those partnerships has also been reported as vital to the recruitment and retention of underrepresented populations in other studies [36-38]. Through sharing our study progress with clinic partners and regularly demonstrating our commitment to the partnership, we created a reciprocal relationship that facilitated data collection and retention.
The total potential compensation of $210 over the 15-month study likely aided our retention success. Some IRBs consider increasing compensation over time to be coercive and resist compensation perceived to be “high,” particularly for studies recruiting persons with lower SES [39]. Importantly, systematic examination has found that persons from minoritized groups and persons with lower SES view financial compensation for research participation as “fair” when it is higher, whereas persons who are White or have higher SES view lower or no compensation as fair [40]. For people who are less confident that they or their community will benefit from research findings, financial compensation may be necessary to demonstrate respect for their time and effort [12]. Considerations of the right incentive balance need to include voices from minoritized and historically underrepresented groups [41] and be mindful that the personal costs of participation are often higher for persons with lower SES. There are costs associated with good retention, whether to support study staff’s outreach efforts or to compensate participants such that they prioritize study participation. In studies like ours, where the risk of adverse events or harm is low, we concluded that shifting costs to participant compensation was important for our goal of evaluating the efficacy of our intervention in minoritized and lower SES groups.
Ultimately, a multifaceted approach that aims to reduce study burden on participants and increase engagement may be the most promising way to maintain retention in longitudinal trials. However, more strategies may not always be better or feasible. Teague et al. [28] conducted a systematic review and meta-analysis of 143 longitudinal cohort studies and found that adding more retention strategies did not result in higher retention rates. The authors point out that the number of strategies used is important to consider, as some strategies may be costly in both time and money [28]. We found that wise uses of study funds included study staff time in community clinics, participant compensation, and study swag. On the other hand, given how seldom participants used the Helpline to leave a message about their diabetes medications, the HIPAA compliant aspects of the Helpline were costly and unnecessary.
Our exploration of retention strategies was post hoc; therefore, we did not measure several strategies and cannot determine which contributed most to retention. Future studies could administer a survey asking participants about their perceptions of specific strategies to help gauge their impact, although the effects of some strategies may operate outside participants’ awareness (e.g., a sense of belonging and community). If feasible, studies could randomize participants to different strategies to evaluate their impact on retention. On the other hand, we recommend allowing retention strategies to evolve throughout the course of a study because certain strategies may be more effective early in study participation and others later (e.g., meeting participants in person for study enrollment and then allowing options thereafter). We also think it wise to add or change retention strategies in response to feedback (per agile methods). This study recruited patients with T2D from a specific region in Tennessee and required English-speaking adults who owned a mobile phone; therefore, we acknowledge our findings may not generalize to other regions and patient populations.
5. Conclusion
A combination of retention strategies aimed at reducing barriers to participation and engaging patients and clinic partners contributed to high retention in a long-term RCT among racially and socioeconomically diverse patients. Including multiple options for completing study assessments may be key for long-term retention in diverse patient populations. We encourage future studies to plan their retention strategies early in study design, to include necessary funds in study budgets to purchase supplies and create infrastructure (e.g., tracking databases), and to plan to measure their retention efforts when possible to evaluate their relative success. Learning how to successfully retain historically underrepresented and at-risk groups in clinical trials is necessary to reduce and avoid widening persistent disparities in health.
Highlights:
We retained ≥90% of a racially and socioeconomically diverse sample over 15 months.
Offering multiple options for study assessment completion may be key for retention.
Studies should plan to budget for and measure success of retention strategies.
Acknowledgements:
The authors thank the REACH team, our partnering clinics (i.e., Faith Family Medical Center, The Clinic at Mercury Courts, Connectus Health, Shade Tree Clinic, Neighborhood Health, Vanderbilt Adult Primary Care), and the participants for their contributions to this research.
Funding:
This research was funded by the National Institutes of Health (NIH/NIDDK R01-DK100694). LSM was also supported by a career development award from NIH/NIDDK (K01-DK106306), and LAN was supported by a career development award from NIH/NHLBI (K12-HL137943).
Abbreviations:
- SES
Socioeconomic status
- T2D
Type 2 diabetes
- REACH
Rapid Encouragement/Education And Communications for Health
- FAMS
Family focused Add-on for Motivating Self-care
- A1c
Hemoglobin A1c
- EHR
Electronic health record
- RA
Research assistant
- REDCap
Research Electronic Data Capture
Footnotes
Competing interests: The authors have no competing interests to declare.
Declaration of interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References
- 1. Abshire M, Dinglas VD, Cajita MIA, Eakin MN, Needham DM, Himmelfarb CD. Participant retention practices in longitudinal clinical research studies with high retention rates. BMC Med Res Methodol 2017;17(30). doi: 10.1186/s12874-017-0310-z.
- 2. Gul RB, Ali PA. Clinical trials: The challenge of recruitment and retention of participants. J Clin Nurs 2010;19(1-2):227–33. doi: 10.1111/j.1365-2702.2009.03041.x. PMID: 20500260.
- 3. George S, Duran N, Norris K. A systematic review of barriers and facilitators to minority research participation among African Americans, Latinos, Asian Americans, and Pacific Islanders. Am J Public Health 2014;104(2):e16–e31. doi: 10.2105/AJPH.2013.301706. PMC3935672. PMID: 24328648.
- 4. Kirk JK, Passmore LV, Bell RA, et al. Disparities in A1c levels between Hispanic and non-Hispanic White adults with diabetes: A meta-analysis. Diabetes Care 2008;31(2):240–46. doi: 10.2337/dc07-0382. PMID: 17977939.
- 5. Centers for Disease Control and Prevention. National diabetes statistics report. Atlanta, GA: US Department of Health and Human Services, 2020.
- 6. Peek ME, Cargill A, Huang ES. Diabetes health disparities: A systematic review of health care interventions. Med Care Res Rev 2007;64(5_suppl):101S–56S. doi: 10.1177/1077558707305409. PMC2367214. PMID: 17881626.
- 7. Nelson LA, Ackerman MT, Greevy RA Jr., Wallston KA, Mayberry LS. Beyond race disparities: Accounting for socioeconomic status in diabetes self-care. Am J Prev Med 2019;57(1):111–16. doi: 10.1016/j.amepre.2019.02.013. PMC6589128. PMID: 31130463.
- 8. Dancy BL, Wilbur J, Talashek M, Bonner G, Barnes-Boyd C. Community-based research: Barriers to recruitment of African Americans. Nurs Outlook 2004;52(5):234–40. doi: 10.1016/j.outlook.2004.04.012. PMID: 15499312.
- 9. Hamel LM, Penner LA, Albrecht TL, Heath E, Gwede CK, Eggly S. Barriers to clinical trial enrollment in racial and ethnic minority patients with cancer. Cancer Control 2016;23(4):327–37. doi: 10.1177/107327481602300404. PMC5131730. PMID: 27842322.
- 10. Coleman-Phox K, Laraia BA, Adler N, Vieten C, Thomas M, Epel E. Recruitment and retention of pregnant women for a behavioral intervention: Lessons from the Maternal Adiposity, Metabolism, and Stress (MAMAS) study. Prev Chronic Dis 2013;10. doi: 10.5888/pcd10.120096. PMC3592785. PMID: 23469765.
- 11. Ejiogu N, Norbeck JH, Mason MA, Cromwell BC, Zonderman AB, Evans MK. Recruitment and retention strategies for minority or poor clinical research participants: Lessons from the Healthy Aging in Neighborhoods of Diversity across the Life Span study. Gerontologist 2011;51(suppl_1):S33–S45. doi: 10.1093/geront/gnr027.
- 12. Brown DR, Fouad MN, Basen-Engquist K, Tortolero-Luna G. Recruitment and retention of minority women in cancer screening, prevention, and treatment trials. Ann Epidemiol 2000;10(8):S13–S21. doi: 10.1016/s1047-2797(00)00197-6.
- 13. Janson SL, Alioto ME, Boushey HA; Asthma Clinical Research Network. Attrition and retention of ethnically diverse subjects in a multicenter randomized controlled research trial. Controlled Clin Trials 2001;22(6):S236–S43. doi: 10.1016/s0197-2456(01)00171-4. PMID: 11728627.
- 14. Unger JM, Hershman DL, Albain KS, et al. Patient income level and cancer clinical trial participation. JAMA Oncol 2013;31(5):536–42. doi: 10.1200/JCO.2012.45.4553. PMC3565180. PMID: 23295802.
- 15. Zafar SY, Peppercorn JM, Schrag D, et al. The financial toxicity of cancer treatment: A pilot study assessing out-of-pocket expenses and the insured cancer patient's experience. Oncologist 2013;18(4):381. doi: 10.1634/theoncologist.2012-0279. PMC3639525. PMID: 23442307.
- 16. Schmotzer GL. Barriers and facilitators to participation of minorities in clinical trials. Ethn Dis 2012;22(2):226–30. PMID: 22764647.
- 17. Nicholson LM, Schwirian PM, Groner JA. Recruitment and retention strategies in clinical studies with low-income and minority populations: Progress from 2004-2014. Contemp Clin Trials 2015;45(Pt A):34–40. doi: 10.1016/j.cct.2015.07.008. PMID: 26188163.
- 18. Sheridan S, Schrandt S, Forsythe L, Hilliard TS, Paez KA. The PCORI engagement rubric: Promising practices for partnering in research. Ann Fam Med 2017;15(2):165–70. doi: 10.1370/afm.2042.
- 19. Nelson LA, Threatt AL, Martinez W, Acuff SW, Mayberry LS. Agile science: What and how in digital diabetes research. In: Klonoff DC, Kerr D, Mulvaney SA, eds. Diabetes Digital Health. Cambridge, MA: Elsevier Inc., 2020.
- 20. Mayberry LS, Berg CA, Harper KJ, Osborn CY. The design, usability, and feasibility of a family-focused diabetes self-care support mHealth intervention for diverse, low-income adults with type 2 diabetes. J Diabetes Res 2016;2016:7586385. doi: 10.1155/2016/7586385. PMC5116505. PMID: 27891524.
- 21. Nelson LA, Mayberry LS, Wallston K, Kripalani S, Bergner EM, Osborn CY. Development and usability of REACH: A tailored theory-based text messaging intervention for disadvantaged adults with type 2 diabetes. JMIR Hum Factors 2016;3(2):e23. doi: 10.2196/humanfactors.6029. PMC5034151. PMID: 27609738.
- 22. Nelson LA, Wallston KA, Kripalani S, et al. Mobile phone support for diabetes self-care among diverse adults: Protocol for a three-arm randomized controlled trial. JMIR Res Protoc 2018;7(4):e92. doi: 10.2196/resprot.9443. PMC5915673. PMID: 29636319.
- 23. Nelson LA, Greevy RA, Spieker A, et al. Effects of a tailored text messaging intervention among diverse adults with type 2 diabetes: Evidence from the 15-month REACH randomized controlled trial. Diabetes Care 2020. doi: 10.2337/dc20-0961. PMID: 33154039.
- 24. Hull PC, Emerson JS, Schlundt D, Reese MC, Cain VA, Levine RS. Nashville safety net assessment. 2010. http://sncmt.org/wp-content/uploads/2013/05/TSU-Report-2.pdf.
- 25. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 2009;42(2):377–81. doi: 10.1016/j.jbi.2008.08.010. PMC2700030. PMID: 18929686.
- 26. Brueton VC, Tierney JF, Stenning S, et al. Strategies to improve retention in randomised trials: A Cochrane systematic review and meta-analysis. BMJ Open 2014;4(2):e003821. doi: 10.1136/bmjopen-2013-003821. PMC3918995. PMID: 24496696.
- 27. Horrigan J. Digital readiness gaps. Pew Research Center, 2016. https://www.pewresearch.org/internet/chart/cell-phone-activities/.
- 28. Teague S, Youssef GJ, Macdonald JA, et al. Retention strategies in longitudinal cohort studies: A systematic review and meta-analysis. BMC Med Res Methodol 2018;18(1):151. doi: 10.1186/s12874-018-0586-7. PMC6258319. PMID: 30477443.
- 29. Anderson M. Digital divide persists even as lower-income Americans make gains in tech adoption. Fact Tank: News in the Numbers. Washington, DC: Pew Research Center, 2017.
- 30. Watson NL, Mull KE, Heffner JL, McClure JB, Bricker JB. Participant recruitment and retention in remote eHealth intervention trials: Methods and lessons learned from a large randomized controlled trial of two web-based smoking interventions. J Med Internet Res 2018;20(8):e10351. doi: 10.2196/10351.
- 31. DeFrank G, Singh S, Mateo KF, et al. Key recruitment and retention strategies for a pilot web-based intervention to decrease obesity risk among minority youth. Pilot Feasibility Stud 2019;5:109. doi: 10.1186/s40814-019-0492-8. PMC6727497. PMID: 31516726.
- 32. Coday M, Boutin-Foster C, Goldman Sher T, et al. Strategies for retaining study participants in behavioral intervention trials: Retention experiences of the NIH Behavior Change Consortium. Ann Behav Med 2005;29 Suppl:55–65. doi: 10.1207/s15324796abm2902s_9. PMID: 15921490.
- 33. Nicholson LM, Schwirian PM, Klein EG, et al. Recruitment and retention strategies in longitudinal clinical studies with low-income populations. Contemp Clin Trials 2011;32(3):353–62. doi: 10.1016/j.cct.2011.01.007. PMC3070062. PMID: 21276876.
- 34. Taani MH, Zabler B, Fendrich M, Schiffman R. Lessons learned for recruitment and retention of low-income African Americans. Contemp Clin Trials Commun 2020;17:100533. doi: 10.1016/j.conctc.2020.100533. PMC7083755. PMID: 32211558.
- 35. Mayberry LS, Lyles CR, Oldenburg B, Osborn CY, Parks M, Peek ME. mHealth interventions for disadvantaged and vulnerable people with type 2 diabetes. Curr Diab Rep 2019;19(12):148. doi: 10.1007/s11892-019-1280-9. PMID: 31768662.
- 36. Brannon EE, Kuhl ES, Boles RE, et al. Strategies for recruitment and retention of families from low-income, ethnic minority backgrounds in a longitudinal study of caregiver feeding and child weight. Child Health Care 2013;42(3):198–213. doi: 10.1080/02739615.2013.816590. PMC3782992. PMID: 24078763.
- 37. Huang B, De Vore D, Chirinos C, et al. Strategies for recruitment and retention of underrepresented populations with chronic obstructive pulmonary disease for a clinical trial. BMC Med Res Methodol 2019;19(1):39. doi: 10.1186/s12874-019-0679-y. PMC6385381. PMID: 30791871.
- 38. Kogan JN, Bauer MS, Dennehy EB, et al. Increasing minority research participation through collaboration with community outpatient clinics: The STEP-BD community partners experience. Clin Trials 2009;6(4):344–54. doi: 10.1177/1740774509338427. PMID: 19587069.
- 39. Resnik DB. Bioethical issues in providing financial incentives to research participants. Medicoleg Bioeth 2015;5:35–41. doi: 10.2147/MB.S70416. PMC4719771. PMID: 26807399.
- 40. Allen SC, Lohani M, Hendershot KA, et al. Patient perspectives on compensation for biospecimen donation. AJOB Empir Bioeth 2018;9(2):77–81. doi: 10.1080/23294515.2018.1460633. PMC6299829. PMID: 29611768.
- 41. Brown B, Galea JT, Dube K, et al. The need to track payment incentives to participate in HIV research. IRB 2018;40(4):8–12. PMID: 30387975.