Abstract
Women veterans (WV) are the fastest growing veteran subpopulation and are recognized to be at increased risk for suicide compared with civilians. Improving engagement (e.g., response rates) of WV in survey research is critical to ensuring valid and generalizable findings, which can inform suicide prevention programs tailored for this population. Many factors are known to influence response rates, yet little is known about ways to optimize survey response rates among WV. Three recruitment cohorts (Waves 1 [W1], 2a [W2a], and 2b [W2b]) of WV were invited to participate in an online survey for a national, mixed-methods study examining suicide risk among WV using reproductive health care services paid for or provided by the Veterans Health Administration. To examine the effects of enhanced recruitment efforts, standard recruitment materials were mailed to all three cohorts, with the additions of: a study flyer aiming to build trust between participants and researchers (W2a, W2b) and a paper survey (W2b). Characteristics of responders and non-responders were compared by wave and across survey modalities. Response rates were significantly higher for groups receiving enhanced (W2a = 17.1%; W2b = 24.6%) versus standard (W1 = 12.2%) recruitment materials. WV residing in rural areas were significantly more likely to respond by paper (37.1%) than online (19.8%). Non-respondents were disproportionately racial and ethnic minorities. Disclosure of sensitive information (e.g., military sexual trauma) did not differ by survey modality. Findings suggest that enhanced recruitment materials improve survey response rates among WV, an important consideration for future research with this population.
Keywords: recruitment, reproductive health, suicide, veterans, women
1 | INTRODUCTION
Across fields of research (e.g., health, marketing, politics), survey response rates are considered an important benchmark of study success. Though low response rates do not always confer nonresponse bias, the likelihood of nonresponse bias increases as response rates fall; this, in turn, can affect the generalizability and validity of study findings (Groves, 2006; Hartge & Cahill, 2008; Johnson & Wislar, 2012). There is no agreed-upon optimal or minimal response rate to reduce the likelihood of nonresponse bias, though some argue that high response rates (i.e., ≥70%) are desirable (Fowler, 2001; Groves, 2006). From a feasibility standpoint, while survey response rates of 60–70% have been achieved in the United States, recent large-scale health surveys have achieved response rates closer to 50% (e.g., 2018 Behavioral Risk Factor Surveillance System median response rate = 49.9%; Centers for Disease Control and Prevention, 2019; Davern et al., 2010; Hartge & Cahill, 2008). Response rates of this magnitude can still be difficult to obtain, particularly as barriers to recruiting participants (e.g., changing societal perceptions) continue to rise and change (Hartge & Cahill, 2008). Many factors influence response rates, including characteristics of the recruitment base (e.g., demographics, population type), sampling and recruitment methods (e.g., probability sampling from a known sampling frame with direct invitation of a purposefully selected group vs. non-probability sampling of an easily accessible group), number of contact attempts, survey completion incentives (e.g., monetary compensation), survey sponsorship (e.g., governmental, academic, commercial), as well as survey topic, length, and modality (Daikeler, Bosnjak, & Manfreda, 2019; Fan & Yan, 2010; Guo, Kopec, Cibere, Li, & Goldsmith, 2016).
1.1 | Survey modality: online data collection
Online surveys are an increasingly popular mode of survey administration. They are cost-effective, can be used to cast a wide geographic net, and can facilitate data collection from large samples over a short period of time. For these reasons, online surveys provide a convenient mode for collecting self-report data to inform knowledge and practice, including in regard to suicide prevention. Despite these advantages, online-only surveys may yield response rates approximately 10% lower than other survey modes (e.g., mail, telephone, multi-mode), depending upon characteristics of the population being sampled (Fan & Yan, 2010). For example, a 2008 systematic review found response rates among college students for online surveys to be 3% higher than for mailed paper surveys; in contrast, online survey response rates were 10–23% lower than for paper surveys in other adult samples (e.g., business professionals, employees, general consumers; Shih & Fan, 2008). Indeed, a recent systematic review found an average response rate of 34.2% across 98 online surveys published between 2007 and 2015, with substantial variation in response rates by population (Poynton, DeFouw, & Morizio, 2019).
1.2 | Survey topic: sensitive research
There is a paucity of research systematically examining the impact of sensitive topics (e.g., sexuality, mental health, suicidality) on nonresponse bias. However, existing studies do demonstrate that it is possible to achieve high response rates in such work. Related specifically to work on suicidality, Kessler et al. (2013) reported strong response rates (72.0–90.8%) across three large-scale survey studies addressing suicide risk and resilience in military populations. The authors noted that the recruitment settings, message and an “aggressive campaign mounted to disseminate this message” (Kessler et al., 2013, p. 300) likely contributed to their higher than average response rates. A meta-analysis of international studies examining suicidal thoughts and behaviors among college students found a median response rate of 74% (interquartile range: 37–89%), but response rates were significantly lower in North American samples (38%; Mortier et al., 2018).
The use of online surveys to collect data on sensitive topics may result in even lower response rates. A 2019 systematic review of online surveys examining sexual assault on college campuses found an average response rate of 24.6% (Rosenberg, Townes, Taylor, Luetke, & Herbenick, 2019). Research is lacking, however, regarding how online data collection influences response to surveys on suicidality. Even if online administration of surveys on sensitive topics is found to reduce overall response rates, a limited body of research does suggest that participants are equally likely to disclose sensitive information (e.g., regarding sexual behaviors, functioning, and health; Muehlhausen et al., 2015) and socially undesirable traits (Gnambs & Kaspar, 2017) across different survey modalities (e.g., paper and online).
1.3 | Recruitment base: women veterans
Although response rate trends have been examined across populations and survey modalities (Daikeler et al., 2019), there is a dearth of research exploring nonresponse bias and its impact in survey research with veterans. This is true both among the veteran population as a whole, as well as in specific veteran subgroups for which there is a critical need to improve understanding of health and health care utilization. Women veterans are one such population. Women veterans currently represent the fastest growing sub-population of veterans in the United States (10% of veterans in 2017 and expected to comprise 14% of the veteran population by 2030; National Center for Veterans Analysis & Statistics, 2017). Despite this, women veterans are often underrepresented in veterans’ health research—overall and in suicide prevention research specifically (Hoffmire & Denneson, 2018). This is concerning given that suicide rates increased by 60.5% among women veterans from 2005 to 2017 (Office of Mental Health & Suicide Prevention, 2019a). Furthermore, in 2017, after accounting for age, women veterans experienced a suicide rate 2.2 times higher than that of adult civilian women (Office of Mental Health & Suicide Prevention, 2019b).
Prior paper surveys among women veterans have reported response rates ranging from 11.4% to 36.2% (Coughlin et al., 2017; Eber et al., 2013; Scott et al., 2014), whereas one telephone survey achieved an overall response rate of 28% (Borrero et al., 2017). Two studies examining response rates among Reserve component, Persian Gulf War era veterans obtained modest response rates that were slightly higher for women (29.5%) than men (27.0%; Schumm et al., 1999; Schumm et al., 2000). These studies indicate that response rates may be lower among women veterans compared with other populations. However, there is a paucity of research examining how survey modality (e.g., online vs. paper), recruitment methods, and survey topics influence women veterans’ willingness to participate in survey research. Most prior literature examining women veterans’ survey response rates has not included online survey modalities and/or suicide-related survey content.
1.4 | Present study
Given these knowledge gaps, research is warranted to identify approaches to improve engagement of women veterans into survey research, particularly that which focuses on sensitive topics, such as suicidality and reproductive health, to reduce the potential for nonresponse bias and ensure valid, generalizable results. Considering the recruitment success Kessler et al. (2013) achieved by employing a strong recruitment message, one novel approach to explore with women veterans is enhancement of recruitment materials to facilitate trust and rapport between prospective participants and the research team. Though this approach has not been evaluated with women veterans to date, qualitative research suggests that such a message may be well-received. Specifically, women veterans are more likely to engage in health care services when a trusting relationship has been established with their clinicians; this is even more important when women veterans are disclosing sensitive information, such as history of military sexual trauma (Kotzias et al., 2019). It is therefore reasonable to postulate that enhanced, rapport-building recruitment materials may be an avenue to improve survey response rates among women veterans.
Accordingly, we aimed to assess the degree to which (a) enhanced (i.e., rapport-building) recruitment materials and (b) survey modalities (online only vs. online and paper) influenced women veterans’ willingness to participate in survey research on sensitive topics. We hypothesized that the highest response rates would occur in the context of enhanced recruitment materials, particularly when combined with the addition of a paper survey option. Exploratorily, we aimed to evaluate whether survey completion mode was associated with participants’ demographic characteristics and disclosure of potentially sensitive information (e.g., suicidal ideation, suicide attempt, history of military sexual trauma).
2 | METHODS
2.1 | Procedures
This recruitment sub-study was conducted in the context of a national, mixed-methods study examining suicide risk among women veterans using reproductive health care (RHC) services that were paid for or provided by the Department of Veterans Affairs (VA)—subsequently referred to as VA RHC. The broader study had an overarching objective of evaluating VA RHC settings as potential targets for upstream suicide prevention services for women veterans. The study included: (a) a secondary, retrospective cohort analysis of VA administrative and clinical records for the population of post-9/11 women veterans who separated from military service between October 1, 2009 and September 30, 2018, were 18–44 years of age (i.e., of reproductive age) at separation, and had used VA RHC services since separating from military service; (b) a cross-sectional survey of a random sample of women veterans in this population who used VA RHC services in the year before study initiation (Fiscal Year [FY] 2018); and (c) qualitative interviews with a subset of survey respondents. This manuscript focuses on recruitment efforts for the survey component specifically and was driven by a clear need to improve response rates following the first wave of recruitment. Results from the broader survey will be presented in future manuscripts.
2.2 | Survey participants
The survey sampling frame was constructed from clinical and administrative records contained within the VA Corporate Data Warehouse (CDW) and the VA-Department of Defense Identity Repository (VADIR). All women veterans who used VA RHC during FY 2018 (October 1, 2017–September 30, 2018) were identified from CDW records. Last date of separation from military service was obtained from VADIR to restrict the sampling frame to those who separated between October 1, 2009 and September 30, 2018 and were 18–44 years at separation. Of 82,840 eligible women veterans, 750 were initially selected for Wave 1, using stratified random sampling with proportional allocation to ensure the sample was comparable to the eligible sampling frame (Cochran, 1999). Sampling stratification variables included age group (<30, ≥30) and rurality of residence (urban [Census tracts with at least 30% of the population residing in an urbanized area], rural [land areas not defined as urban or highly rural] or highly rural [sparsely populated areas […] which is typically a town of no more than 2,500 people], or missing/unknown rurality; Office of Rural Health, 2020). Rural and highly rural categorizations were collapsed due to sparse data for highly rural veterans.
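The stratified random sampling with proportional allocation described above can be sketched as follows. The overall frame size (82,840) and target sample (750) come from the study; the per-stratum counts below are hypothetical, and `proportional_allocation` and `stratified_sample` are illustrative helpers, not the study's actual code.

```python
import random

def proportional_allocation(stratum_sizes, total_n):
    """Allocate total_n across strata in proportion to stratum size,
    using largest-remainder rounding so allocations sum to total_n."""
    N = sum(stratum_sizes.values())
    raw = {h: total_n * size / N for h, size in stratum_sizes.items()}
    alloc = {h: int(r) for h, r in raw.items()}  # floor of each raw share
    shortfall = total_n - sum(alloc.values())
    # hand the remaining slots to the strata with the largest fractional parts
    for h in sorted(raw, key=lambda h: raw[h] - alloc[h], reverse=True)[:shortfall]:
        alloc[h] += 1
    return alloc

def stratified_sample(strata_ids, total_n, seed=0):
    """Draw a simple random sample of the allocated size within each stratum."""
    alloc = proportional_allocation(
        {h: len(ids) for h, ids in strata_ids.items()}, total_n)
    rng = random.Random(seed)
    return {h: rng.sample(ids, alloc[h]) for h, ids in strata_ids.items()}

# Hypothetical stratum counts (age group x rurality) summing to the study's
# eligible frame of 82,840; the per-stratum splits are invented for illustration.
frame = {("<30", "urban"): 30000, ("<30", "rural"): 8000,
         (">=30", "urban"): 36000, (">=30", "rural"): 8840}
print(proportional_allocation(frame, 750))
```

Proportional allocation keeps each stratum's share of the sample equal to its share of the frame, which is what makes the drawn sample comparable to the eligible population on the stratification variables.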
Wave 1 participants received standard recruitment materials. Using the same approach, an additional 1,500 women veterans were subsequently selected for Wave 2, to receive enhanced recruitment efforts. The Wave 2 recruitment sample was randomly assigned to one of two groups (Wave 2a [N = 750] and Wave 2b [N = 750]), which differed from one another in regard to the inclusion of a paper survey response option.
2.3 | Recruitment strategy
2.3.1 | Mailing contents
Recruitment materials for Wave 1 were sent in standard, white VA business envelopes and included the following: a standardized invitation letter (with instructions and personalized login information to complete the survey online), a postcard consent form, and a wallet-sized Veterans Crisis Line card. Recruitment materials invited potential participants to engage in a research project aiming to improve suicide prevention for women veterans. Mailing contents for Wave 2a and 2b were sent in large yellow VA envelopes and included all of the Wave 1 contents, in addition to a study flyer (described below). Wave 2a and 2b recruitment materials were identical, except Wave 2b uniquely included a paper survey, a pre-paid addressed business reply envelope to return the survey, and instructions for completing the survey by paper or online. Participants who did not respond to the initial mailing were sent two additional mailings inviting participation, sent 4 weeks apart. Initial invitations were sent in December 2018 (Wave 1) and April 2019 (Waves 2a and 2b).
2.3.2 | Mode of participation
Participants in Waves 1 and 2a could participate only online, whereas participants in Wave 2b were provided the option to participate online or via the paper survey. Before accessing any survey questions in the secure online survey platform, participants were required to enter their unique study ID and access code pair, to provide consent, and to confirm their eligibility. Only participants who provided specific responses to the following questions were permitted to progress further into the survey: (a) Do you wish to participate in this study and take this survey? (yes); (b) Have you ever served in the U.S. military? (yes); (c) Are you currently serving in the U.S. military including Active Duty service, the National Guard, or the Reserves? (no). Wave 2b participants were provided a paper survey with their unique study ID already recorded; they were directed not to record their name or other personally identifiable information on the survey and were asked the same series of questions regarding interest and eligibility. Only those who confirmed eligibility were considered eligible upon receipt of their paper survey.
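The three-question screening gate described above can be expressed as a small predicate. This is a minimal sketch of the logic (proceed only on yes / yes / no); the function name and boolean interface are illustrative, not part of the study's survey platform.

```python
def eligible_to_proceed(wants_to_participate: bool,
                        ever_served: bool,
                        currently_serving: bool) -> bool:
    """Mirror the survey's screening logic: a respondent may progress
    only if she consents (yes), has served in the U.S. military (yes),
    and is not currently serving (no)."""
    return wants_to_participate and ever_served and not currently_serving

# A currently serving respondent is screened out despite consenting.
print(eligible_to_proceed(True, True, True))   # screened out
print(eligible_to_proceed(True, True, False))  # may proceed
```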
2.3.3 | Enhanced recruitment (Wave 2a and 2b)
Enhancements made to the recruitment strategy for Wave 2a and 2b were informed by prior research and preliminary findings from Wave 1 qualitative interviews, which suggested that participants were more comfortable disclosing experiences with mental health and suicidality when they felt a personal connection and trust with their healthcare provider. As such, our team created a color flyer describing the study and the study team, with the intent of helping potential participants get to know and begin to build trust with the team. The flyer (available upon request from the corresponding author) included a photograph of our core team of investigators and staff, accompanied by brief descriptions of: (a) the project’s goals in helping to prevent suicide among women veterans; (b) the team’s skillset and experience; and (c) the Principal Investigator’s training and relevant experience. The flyer also included a link to a study website hosted within our institution’s website, which provided additional details regarding team members’ backgrounds and experiences, and which would be used to share findings with participants. Thus, Wave 2a and 2b participants could “meet our team” by reviewing the flyer included with each mailing, as well as through associated web content.
2.4 | Survey content
In the survey, participants were asked questions covering the following domains: demographics and military history, health care use and insurance, general health status and chronic conditions, pain, insomnia, mental health conditions and symptoms, reproductive health functioning and conditions, family and relationship functioning and satisfaction, self-harm history, lifetime adverse experiences, and access to lethal means. Many of these domains address sensitive topics. Although participation was not anonymous, no identifying information was collected within the survey itself. Rather, participants were provided a unique study ID and access code pair to enter with their survey. Participants received $20 for participation. This study was approved by the local Institutional Review Board.
2.5 | Statistical analysis
Response rates with 95% confidence intervals (CIs) were computed overall and compared across the three recruitment waves. When doing so, anyone who initiated the survey was included as a responder, regardless of whether they self-identified as ineligible or failed to confirm eligibility (n = 11) or failed to complete the full survey (n = 18). Potential participants for whom the mailing was returned undeliverable (e.g., due to change of, or non-forwardable, address) were removed from the denominator when calculating response rates. Thus, the number of survey respondents was divided by the number of targeted participants presumably reached within each wave. Response rates were compared across the three waves using the χ2 test and sample-weighted regressions to test for linear trend of proportions. Post-hoc pairwise comparisons (χ2) were also conducted between recruitment waves.
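For concreteness, the response-rate computation described above (responders divided by mailings presumed received) can be sketched with the unweighted wave-level counts reported in Table 1 and a simple normal-approximation 95% CI. This is an illustrative, unweighted sketch; the study used sampling weights and SAS survey procedures, so published intervals may differ slightly.

```python
import math

def response_rate(responders, mailed, undeliverable):
    """Response rate (%) with a normal-approximation 95% CI.

    The denominator excludes undeliverable mailings, mirroring the
    paper's approach of dividing by targeted participants presumably reached.
    """
    n = mailed - undeliverable
    p = responders / n
    se = math.sqrt(p * (1 - p) / n)
    lo, hi = p - 1.96 * se, p + 1.96 * se
    return round(100 * p, 1), (round(100 * lo, 1), round(100 * hi, 1))

# Wave-level counts from Table 1: (responders, mailings sent, undeliverable)
for wave, counts in {"1": (86, 750, 45),
                     "2a": (121, 750, 41),
                     "2b": (174, 750, 43)}.items():
    rate, ci = response_rate(*counts)
    print(f"Wave {wave}: {rate}% (95% CI {ci[0]}, {ci[1]})")
```

Running this reproduces the wave-level response rates reported in the Results (12.2%, 17.1%, and 24.6%).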
Rao–Scott χ2 and t tests were used, as appropriate, to compare responders and non-responders overall and responders by wave across various sociodemographic characteristics (rurality and region of residence, age at military separation and survey completion, race, ethnicity), last branch of military service, and type of VA RHC used (paid vs. provided) in the year before survey invitation. These variables were obtained from CDW and/or VADIR records and thus available for all women in the eligible cohort, not only those completing the survey. Finally, Rao–Scott χ2 tests were used to compare Wave 2b respondents by mode of survey completion (i.e., online vs. paper) on a limited set of demographic characteristics (age group at time of survey completion, race, ethnicity, and rurality), as well as on potentially sensitive topics of interest (military sexual trauma history, suicidal ideation, and suicide attempt). In all analyses, a p < .05 was used to denote statistical significance. SAS v9.4 was used to complete all analyses. SAS survey procedures with sampling weights were used for all analyses to account for the stratified sampling design.
3 | RESULTS
A total of 381 women veterans initiated the survey, and 352 completed it. The majority of participants completed the survey online (n = 289; 82.1%). Of the 159 Wave 2b women who completed the survey, 63 (39.6%) did so by paper. Figure 1 and Table 1 include details regarding recruitment efforts and response, by wave. For online respondents, the median survey time was 35 minutes, and approximately 87% completed the survey in less than 1 hour. Although all responders had used RHC provided by the VA in FY 2018, 30.7% also reported using RHC paid for by VA but provided by community providers during the same timeframe (Table 2). The frequency of undeliverable mailings was low and similar across survey waves at approximately 6% (Table 1). Likewise, the completion rate among survey initiators was 92% overall, with little variability across waves (Table 1).
FIGURE 1. Survey recruitment and response flow diagram. †Although no formal opt-out procedure was implemented, three participants contacted the study team to opt out of the study; this occurred only in Wave 2b.
TABLE 1. Summary of recruitment methods and success by wave

| Wave | Mailings sent (n) | Personalized recruitment materials | Potential participation modalities | Undeliverable, n (%)ᵃ | Survey responders, n (%)ᵇ | Survey completers, n (%)ᶜ |
|---|---|---|---|---|---|---|
| 1 | 750 | No | Online | 45 (6.0) | 86 (12.2) | 79 (91.9) |
| 2a | 750 | Yes | Online | 41 (5.5) | 121 (17.1) | 114 (94.2) |
| 2b | 750 | Yes | Online or paper | 43 (5.7) | 174 (24.6) | 159 (91.4) |
| Total | 2,250 | – | – | 129 (5.7) | 381 (17.9) | 352 (92.4) |

ᵃ Percentages are weighted and reported out of number of mailings sent.
ᵇ Percentages are weighted and computed out of number of mailings estimated to be received (mailings sent minus those returned undeliverable).
ᶜ Weighted percentage of survey responders (i.e., initiators) who completed the full survey.
TABLE 2. Comparison of military history and demographic characteristics between responders and non-responders and across recruitment waves

| Characteristic | Non-responders (N = 1,869) | Responders (N = 381) | Wave 1 responders (n = 86) | Wave 2a responders (n = 121) | Wave 2b responders (n = 174) |
|---|---|---|---|---|---|
| Age at separation, M (95% CI) range | 29.3 (29.1, 29.4) 18–44 | 29.3 (28.8, 29.9) 18–44 | 30.1 (28.8, 31.4) 18–44 | 29.1 (28.1, 30.1) 19–44 | 29.1 (28.4, 29.9) 19–44 |
| Age at surveyᵃ, M (95% CI) range | 34.2 (34.0, 34.4) 21–53 | 34.2 (33.6, 34.9) 19–53 | 34.4 (33.0, 35.8) 19–51 | 34.0 (32.9, 35.1) 21–53 | 34.3 (33.5, 35.1) 22–52 |
| **RHC care used, % (95% CI)** | | | | | |
| VA provided only | 69.5 (67.4, 71.6) | 69.4 (64.8, 74.0) | 70.9 (61.3, 80.6) | 71.9 (63.9, 80.0) | 66.9 (59.9, 73.9) |
| VA paid or providedᵇ | 30.5 (28.4, 32.6) | 30.6 (26.0, 35.2) | 29.1 (19.4, 38.7) | 28.1 (20.0, 36.1) | 33.1 (26.1, 40.1) |
| **Ruralityᶜ, % (95% CI)** | | | | | |
| Rural | 20.9 (20.2, 21.7) | 21.6 (17.8, 25.4) | 14.0 (7.2, 20.8) | 22.3 (16.2, 28.5) | 24.9 (20.3, 29.5) |
| Urban | 79.1 (78.3, 79.8) | 78.4 (74.6, 82.2) | 86.0 (79.2, 92.8) | 77.7 (71.5, 83.8) | 75.1 (70.5, 79.7) |
| **Raceᶠ, % (95% CI)** | | | | | |
| White | 53.6 (51.4, 55.8) | 66.5 (61.8, 71.2) | 72.1 (62.5, 81.6) | 66.9 (58.5, 75.4) | 63.4 (56.3, 70.6) |
| Black | 30.9 (28.8, 32.9) | 20.0 (16.0, 24.0) | 17.4 (9.4, 25.5) | 18.2 (11.3, 25.1) | 22.5 (16.3, 28.7) |
| Other | 15.6 (13.9, 17.2) | 13.5 (10.1, 17.0) | 10.5 (4.0, 17.0) | 14.9 (8.5, 21.3) | 14.1 (8.9, 19.2) |
| **Regionᵈ,ᶠ, % (95% CI)** | | | | | |
| Northeast | 8.4 (7.1, 9.6) | 10.5 (7.4, 13.6) | 12.8 (5.7, 19.9) | 10.7 (5.2, 16.3) | 9.2 (4.9, 13.5) |
| Midwest | 14.1 (12.5, 15.6) | 19.2 (15.2, 23.2) | 17.4 (9.4, 25.5) | 18.2 (11.3, 25.1) | 20.8 (14.7, 26.8) |
| South | 53.9 (51.7, 56.2) | 50.7 (45.7, 55.7) | 58.1 (47.6, 68.7) | 51.2 (42.3, 60.2) | 46.7 (39.3, 54.1) |
| West | 23.6 (21.7, 25.5) | 19.6 (15.6, 23.5) | 11.6 (4.8, 18.4) | 19.8 (12.7, 27.0) | 23.3 (17.0, 29.6) |
| **Last branch of serviceᵉ,ᶠ, % (95% CI)** | | | | | |
| Army | 47.4 (45.2, 49.7) | 46.5 (41.5, 51.5) | 46.4 (35.7, 57.1) | 40.5 (31.7, 49.3) | 50.7 (43.3, 58.2) |
| Coast Guard | 0.8 (0.4, 1.1) | 1.3 (0.2, 2.5) | 1.2 (0.0, 3.5) | 1.7 (0.0, 3.9) | 1.2 (0.0, 2.7) |
| Air Force | 17.3 (15.6, 19.0) | 24.6 (20.2, 28.9) | 28.6 (18.9, 38.2) | 22.3 (14.9, 29.8) | 24.2 (17.8, 30.6) |
| Marine Corps | 10.3 (9.0, 11.7) | 9.8 (6.8, 12.8) | 11.9 (5.0, 18.9) | 10.8 (5.2, 16.3) | 8.1 (4.0, 12.1) |
| Navy | 24.2 (22.3, 26.1) | 17.8 (14.0, 21.7) | 11.9 (5.0, 18.9) | 24.8 (17.1, 32.5) | 15.8 (10.4, 21.2) |
| **Ethnicityᵉ,ᶠ, % (95% CI)** | | | | | |
| Hispanic | 15.6 (13.9, 17.2) | 11.2 (8.0, 14.3) | 8.1 (2.3, 14.0) | 12.8 (6.7, 19.0) | 11.5 (6.8, 16.3) |
| Not Hispanic | 84.4 (82.8, 86.1) | 88.8 (85.7, 92.0) | 91.9 (86.0, 97.7) | 87.2 (81.1, 93.3) | 88.5 (83.7, 93.2) |

Note: No statistically significant differences were observed across waves.
Abbreviations: CI, confidence interval; RHC, reproductive healthcare; VA, Department of Veterans Affairs.
ᵃ For non-responders, age at survey completion is computed as their age at the date of study closure (July 31, 2019).
ᵇ Participants with RHC services paid for but not provided by VA (n = 2) are included in the “VA paid or provided” group.
ᶜ Veterans with unknown rurality (n = 4) were excluded from these analyses; rurality was defined using the following categories: Urban: Census tracts with at least 30 percent of the population residing in an urbanized area as defined by the Census Bureau; Rural (includes highly rural): land areas not defined as urban.
ᵈ Geographic regions were defined as including the following states: Northeast: CT, ME, MA, NH, NJ, NY, PA, RI, VT; Midwest: IL, IN, IA, KS, MI, MN, MO, NE, ND, OH, SD, WI; South: AL, AR, DE, DC, FL, GA, KY, LA, MD, MS, NC, OK, SC, TN, TX, VA, WV, Puerto Rico, US Virgin Islands; West: AK, AZ, CA, CO, HI, ID, MT, NV, NM, OR, UT, WA, WY, Guam, American Samoa.
ᵉ Veterans with unknown last branch of service (n = 8) and unknown/missing ethnicity (n = 27) were excluded from these analyses.
ᶠ Statistically significant differences observed (Rao–Scott χ² test p < .05) between responders and non-responders.
3.1 | Survey response rates
The overall survey response rate was 17.9% (95% CI: 16.3, 19.6). A significant, increasing linear trend in response rates was observed from Wave 1 to Wave 2b (Figure 2). The initial survey wave with standard recruitment materials (Wave 1) yielded a response rate of only 12.2%. With the addition of enhanced materials alone (Wave 2a), the response rate increased to 17.1%. Lastly, also providing participants the choice of participating online or by paper (in the presence of enhanced recruitment materials; Wave 2b) yielded the highest response rate of 24.6%.
FIGURE 2. Survey response rates¹,², by wave and participation mode. ¹For each wave, the numeric value and 95% confidence interval error bars reflect total response rates, across modes. ²A statistically significant, increasing linear trend was observed for response rates across study waves (p < .0001); all post-hoc pairwise Rao–Scott χ² comparisons between study waves were significant at p < .01.
3.2 | Nonresponse bias
Survey responders differed significantly from non-responders (Table 2) by ethnicity, last branch of military service, region of residence, and race. Responders were more likely to have served in the Air Force (24.6% vs. 17.3%), to reside in the Northeast (10.5% vs. 8.4%) or Midwest regions (19.2% vs. 14.1%), and to report their race as White (66.5% vs. 53.6%). Non-responders were more likely to have served in the Navy (24.2% vs. 17.8%) and to report their race as Black (30.9% vs. 20.0%). Furthermore, the proportion of responders of Hispanic ethnicity (11.2%) was significantly lower than that of non-responders (15.6%).
When comparing responder characteristics across recruitment waves, no statistically significant differences were identified. However, a pattern of increasing response with increasing recruitment enhancements (Wave 1 < 2a < 2b) was observed for veterans who were rural, who lived in Midwestern and Western regions, and who reported their race as Black or Other and their ethnicity as Hispanic (Table 2).
3.3 | Survey mode effects
Wave 2b included 174 survey responders, over one-third of whom (n = 63, 36.2%) opted to complete the survey on paper (Figure 1). The only statistically significant difference noted between paper and online respondents concerned rurality; there was an increased proportion of rural veterans among paper responders (37.1%), compared with online responders (19.8%; Table 3). Notably, within the total recruitment sample (n = 2,250), 21.0% of veterans lived in rural or highly rural regions (data not shown). Though not statistically significant, the proportion of participants disclosing suicidal ideation (lifetime and past-month) and lifetime suicide attempt was consistently higher among those completing the survey by paper (Table 3). In particular, the proportion of veterans reporting past-month suicidal ideation was nearly double among paper responders (19.2%), compared with online responders (10.5%).
TABLE 3. Comparison of Wave 2b responders by mode of survey completion

| | Online, n | Online, % | Paper, n | Paper, % | χ² p-value |
|---|---|---|---|---|---|
| **Age at survey, years** | | | | | .82 |
| <30 | 28 | 29.2 | 15 | 25.3 | |
| 30–45 | 63 | 65.6 | 41 | 68.0 | |
| 45+ | 5 | 5.2 | 4 | 6.7 | |
| **Race** | | | | | .42 |
| White | 58 | 60.4 | 41 | 66.8 | |
| Black | 15 | 15.6 | 11 | 17.9 | |
| Other | 23 | 24.0 | 10 | 15.3 | |
| **Ethnicity** | | | | | .28 |
| Hispanic | 17 | 17.7 | 7 | 11.4 | |
| Non-Hispanic | 79 | 82.3 | 55 | 88.6 | |
| **Ruralityᵃ** | | | | | .02 |
| Rural/highly rural | 19 | 19.8 | 23 | 37.1 | |
| Urban | 77 | 80.2 | 39 | 62.9 | |
| **Military sexual trauma** | | | | | |
| Any | 65 | 69.2 | 46 | 73.9 | .52 |
| Sexual harassment | 62 | 64.6 | 44 | 70.5 | .74 |
| Sexual assault | 44 | 45.8 | 26 | 40.7 | .55 |
| **Suicidal ideation** | | | | | |
| Lifetime | 37 | 39.0 | 30 | 47.1 | .31 |
| Past-month | 10 | 10.5 | 12 | 19.2 | .12 |
| **Suicide attempt** | | | | | |
| Lifetime | 23 | 24.2 | 20 | 31.1 | .34 |
| Past-year | 4 | 4.2 | 4 | 5.6 | .67 |

Note: This analysis was not restricted to the subset of individuals with complete data on all variables of interest. As such, a few participants were missing data on specific characteristics of interest, such as age (n = 3); race (n = 1); ethnicity (n = 1); rurality (n = 1); lifetime and past-month suicidal ideation (n = 1); lifetime suicide attempt (n = 1); or past-year suicide attempt (n = 1).
ᵃ Statistically significant differences observed (Rao–Scott χ² test p < .05) between online and paper respondents.
4 | DISCUSSION
As suicide rates among women veterans continue to rise (Office of Mental Health & Suicide Prevention, 2019a), understanding ways to increase women veterans’ participation in suicide prevention research is increasingly important. Although prior studies have examined the utility of different recruitment methods in other populations, to our knowledge, this study is the first to test the effectiveness of various recruitment strategies with women veterans specifically. Enhanced recruitment materials were developed to help participants build trust and connection with the research team from afar. This approach was inspired by our initial qualitative interview findings with participants, as well as prior research indicating that women veterans are more likely to engage in healthcare services when they have a trusting relationship with their clinicians, especially when discussing sensitive information similar to that assessed in the survey (e.g., suicidality and military sexual trauma history; Kotzias et al., 2019).
Our study revealed several important findings relevant to conducting survey-based suicide-related research with women veterans. Providing women veterans with recruitment materials aimed at building rapport (i.e., the study flyer describing our team and our rationale for conducting the study) and offering a choice of survey modality appear to be important for maximizing participation. Overall, we doubled our response rate from Wave 1 to Wave 2b. Response rates were significantly higher when women veterans were provided enhanced (17.1%), rather than standard (12.2%), recruitment materials and significantly higher still (24.6%) when they were also given a choice of how to complete the survey (i.e., by paper or online). Of note, more than a third of participants in Wave 2b chose to participate via paper. This was somewhat surprising given the target population (i.e., women veterans of reproductive age) and the potential inconvenience of participating through this method. Overall, our response rates were comparable to those reported in prior survey research among women veterans, which ranged from 11.4% to 36.2% (Borrero et al., 2017; Coughlin et al., 2017; Eber et al., 2013; Scott et al., 2014). Notably, however, none of these prior studies utilized online surveys or rapport-building recruitment materials.
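The wave comparison described above can be sketched as a plain Pearson chi-square test of independence on a waves-by-outcome contingency table. This is an illustration, not the study's actual analysis: the per-wave denominators below (1,000 invitations per wave) are assumed for demonstration, with responder counts chosen to match the reported rates of 12.2%, 17.1%, and 24.6%.

```python
# rows: recruitment waves; columns: [responders, non-responders]
# Counts are hypothetical -- only the response-rate percentages come from the paper.
table = [
    [122, 878],  # Wave 1: standard materials (12.2%)
    [171, 829],  # Wave 2a: + rapport-building flyer (17.1%)
    [246, 754],  # Wave 2b: + flyer and paper-survey option (24.6%)
]

n = sum(sum(row) for row in table)
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]

# Pearson chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = (row total * column total) / grand total under independence
chi2 = sum(
    (obs - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i, row in enumerate(table)
    for j, obs in enumerate(row)
)

dof = (len(table) - 1) * (len(col_totals) - 1)  # (3 - 1) * (2 - 1) = 2
CRITICAL_05 = 5.991  # chi-square critical value for df = 2 at alpha = .05
print(f"chi2 = {chi2:.1f} (df = {dof}); significant at .05: {chi2 > CRITICAL_05}")
```

Under these assumed denominators the statistic is roughly 53, far above the critical value, mirroring the significant wave differences reported in the paper; with the study's true denominators the exact statistic would differ.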
Another key finding concerns rural women veterans. Though not statistically significant, a clear trend was noted for rural veterans and veterans living in Midwestern and Western regions, who became increasingly likely to respond as recruitment tactics were enhanced across waves. Given that there are more rural communities in the Midwestern and Western regions of the United States, these findings may be related (Ratcliffe, Burd, Holder, & Fields, 2016). In addition, when given a choice of participation mode, rural veterans were significantly more likely than urban veterans to participate on paper. Given elevated suicide risk among rural veterans (McCarthy et al., 2012), these findings are important for enhancing understanding of ways to increase representation of rural women veterans in survey research. Future research targeting rural women veterans should carefully consider recruitment approaches and potential survey mode effects.
Maximizing response rates among WV is particularly important given our findings on nonresponse bias. We showed that non-respondents were disproportionately of Black race and Hispanic ethnicity, as well as from the Navy branch of service. Across survey waves, however, the proportion of respondents from these racial and ethnic minority groups increased, suggesting that the enhanced recruitment methods may be more successful at securing their participation.
Finally, results from our exploratory analysis also suggest that providing women veterans with choices for how to participate in survey research is essential. In particular, providing women veterans the option to participate via paper may enable more disclosure of potentially sensitive or stigmatizing experiences, such as suicidal ideation or suicide attempts. However, these findings were not statistically significant, potentially because the number of respondents was too small to answer this question. Thus, these findings should be considered tentative and examined in subsequent research. Future research is warranted to further evaluate survey mode effects related to disclosure of suicide ideation and attempt, which could have important implications for suicide research and surveillance. The increased time and resources required to include a paper survey option are notable but may be warranted when attempting to collect potentially sensitive information, particularly regarding low-base rate events (e.g., suicide attempts).
Relatedly, another important research question to consider is whether offering anonymity impacts reporting of suicidal ideation and attempt among veterans. Although study participants were provided an anonymous study ID and access code to enter the survey and no PII was collected or stored in the survey itself, participants were aware that their identifying information would be retained in a separate, secure study database. The lack of full anonymity could have deterred some veterans from disclosing this information, although prior research with other populations has been mixed (Anestis & Green, 2015; Hom, Stanley, & Joiner, 2016).
Specific study limitations are worth noting when interpreting our findings. First, the lack of a comparison condition solely offering a paper option without enhanced recruitment materials limits some of the conclusions that can be made. In addition, as Waves 1 and 2 mailings were sent at different times of the year, we are unable to determine the potential role, if any, that differing mailing seasons had, although we consider this to be unlikely. Another limitation is that this study was not powered to test the exploratory aim; combined with the low base rate of suicidal ideation and attempt and the relatively small number of participants who took the survey via paper, this limits the conclusions that can be drawn related to the impact of mode on disclosure of potentially sensitive information. Further, while the specific focus on women veterans of reproductive age using VHA RHC services fills an important gap, it limits generalizability to veterans who are male, older, or not using VHA services. Our findings are also specific to sensitive topics, in particular suicidality and reproductive health, potentially limiting generalizability to broader surveys of women veterans. Further research is warranted to build this knowledge base and would be facilitated by the inclusion of similar recruitment sub-studies, or efforts to pilot test recruitment methods before launching survey research studies. Such efforts are critical to advance knowledge regarding how to best engage veterans in research and build trust and rapport in such contexts. This is particularly important when studying potentially sensitive, high-priority issues, such as suicide. Finally, although the recruitment materials utilized in this study appear to have improved survey response rates considerably, assessment of nonresponse bias in the overall respondent sample indicated that survey responders differed from non-responders on a limited set of important variables.
Although it would not be appropriate to use nonresponse weights for the aims of this sub-study, it may be important to utilize nonresponse weights accounting for differences in region of residence, last branch of military service, and race in future analyses with these data.
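The cell-weighting idea behind such nonresponse adjustments can be sketched as follows: within each adjustment stratum (e.g., race, region, or branch of service), respondents receive a weight equal to the inverse of that stratum's response rate, so that the weighted respondent pool reproduces the sampling frame's composition. All strata and counts below are hypothetical, not the study's data.

```python
# Hypothetical invited counts and respondent counts by stratum (illustrative only)
sampling_frame = {"Black": 400, "Hispanic": 300, "White": 1300}
respondents = {"Black": 40, "Hispanic": 36, "White": 260}

# Nonresponse weight per stratum = inverse of that stratum's response rate
weights = {s: sampling_frame[s] / respondents[s] for s in sampling_frame}

# Weighted respondent totals recover each stratum's share of the original frame
weighted_totals = {s: respondents[s] * weights[s] for s in respondents}

for s in sampling_frame:
    rate = respondents[s] / sampling_frame[s]
    print(f"{s}: response rate {rate:.0%}, weight {weights[s]:.2f}")
```

In practice, weights like these would be combined with the survey's sampling weights, possibly smoothed or trimmed, and applied through design-based survey analysis software rather than computed by hand.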
In conclusion, enhanced, rapport-building recruitment methods were used successfully in this study to increase the response rate of women veterans of reproductive age. Findings from this study can be used to inform the design of future surveys with veterans in an effort to reduce nonresponse bias and improve validity of study findings.
Funding information
Department of Veterans Affairs Health Services Research and Development, Grant/Award Number: 1I21HX002526-01A1
Footnotes
Publisher's Disclaimer: The material presented is based upon work supported in part by the Department of Veterans Affairs and the Rocky Mountain MIRECC for Suicide Prevention. The views expressed are those of the authors and do not necessarily represent the views or policy of the VA or the United States Government. Preliminary study findings were presented at the August 2019 VA-DoD Suicide Prevention Conference, Nashville, TN, and the October 2019 IASR/AFSP International Summit on Suicide Research, Miami, FL.
CONFLICT OF INTERESTS
The authors declare that there are no conflicts of interest.
REFERENCES
- Anestis MD, & Green BA (2015). The impact of varying levels of confidentiality on disclosure of suicidal thoughts in a sample of United States National Guard personnel. Journal of Clinical Psychology, 71(10), 1023–1030. 10.1002/jclp.22198
- Borrero S, Callegari LS, Zhao X, Mor MK, Sileanu FE, Switzer G, … Schwarz EB (2017). Unintended pregnancy and contraceptive use among women veterans: The ECUUN study. Journal of General Internal Medicine, 32(8), 900–908. 10.1007/s11606-017-4094-3
- Centers for Disease Control and Prevention. (2019). Behavioral Risk Factor Surveillance System: 2018 summary data quality report. Retrieved from https://www.cdc.gov/brfss/annual_data/2018/pdf/2018-sdqr-508.pdf
- Cochran WG (1999). Sampling techniques. New York, NY: John Wiley & Sons.
- Coughlin SS, Aliaga P, Barth S, Eber S, Maillard J, Mahan C, … Williams M (2017). The effectiveness of a monetary incentive on response rates in a survey of recent U.S. veterans. Survey Practice, 4(1), 1–10. 10.29115/SP-2011-0004
- Daikeler J, Bosnjak M, & Manfreda KL (2019). Web versus other survey modes: An updated and extended meta-analysis comparing response rates. Journal of Survey Statistics and Methodology, 8, 1–27. 10.1093/jssam/smz008
- Davern M, McAlpine D, Beebe TJ, Ziegenfuss J, Rockwood T, & Call KT (2010). Are lower response rates hazardous to your health survey? An analysis of three state telephone health surveys. Health Services Research, 45(5), 1324–1344. 10.1111/j.1475-6773.2010.01128.x
- Eber S, Barth S, Kang H, Mahan C, Dursa E, & Schneiderman A (2013). The national health study for a new generation of United States veterans: Methods for a large-scale study on the health of recent veterans. Military Medicine, 178(9), 966–969. 10.7205/MILMED-D-13-00175
- Fan W, & Yan Z (2010). Factors affecting response rates of the web survey: A systematic review. Computers in Human Behavior, 26(2), 132–139. 10.1016/j.chb.2009.10.015
- Fowler FJ (2001). Survey research methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
- Gnambs T, & Kaspar K (2017). Socially desirable responding in web-based questionnaires: A meta-analytic review of the candor hypothesis. Assessment, 24(6), 746–762. 10.1177/1073191115624547
- Groves RM (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 646–675. https://www.jstor.org/stable/4124220
- Guo Y, Kopec JA, Cibere J, Li LC, & Goldsmith CH (2016). Population survey features and response rates: A randomized experiment. American Journal of Public Health, 106(8), 1422–1426. 10.2105/AJPH.2016.303198
- Hartge P, & Cahill J (2008). Field methods in epidemiology. In Rothman KJ, Greenland S, & Lash TL (Eds.), Modern epidemiology (3rd ed., pp. 492–510). Philadelphia, PA: Lippincott Williams & Wilkins.
- Hoffmire CA, & Denneson LM (2018). Concerning trends in suicide among female veterans points to a need for more research on tailored interventions. VA HSR&D Forum, 9. Retrieved from https://www.hsrd.research.va.gov/publications/forum/spring18/Forum_spring2018.pdf
- Hom MA, Stanley IH, & Joiner TE (2016). The web-based assessment of suicidal and suicide related symptoms: Factors associated with disclosing identifying information to receive study compensation. Journal of Personality Assessment, 90(6), 616–625. 10.1080/00223891.2016.1180528
- Johnson TP, & Wislar JS (2012). Response rates and nonresponse errors in surveys. JAMA: Journal of the American Medical Association, 307(17), 1805–1806. 10.1001/jama.2012.3532
- Kessler RC, Heeringa SG, Colpe LJ, Fullerton CS, Gebler N, Hwang I, … Ursano RJ (2013). Response bias, weighting adjustments and design effects in the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). International Journal of Methods in Psychiatric Research, 22(4), 288–302. 10.1002/mpr.1399
- Kotzias V, Engel CC, Ramchand R, Ayer L, Predmore Z, Ebener P, … Karras E (2019). Mental health service preferences among women veterans in crisis: Perspectives of Veterans Crisis Line responders. Journal of Behavioral Health Services & Research, 46(1), 29–42. 10.1007/s11414-018-9635-6
- McCarthy JF, Blow FC, Ignacio RV, Ilgen MA, Austin KL, & Valenstein M (2012). Suicide among patients in the Veterans Affairs health system: Rural–urban differences in rates, risks, and methods. American Journal of Public Health, 102(S1), S111–S117. 10.2105/AJPH.2011.300463
- Mortier P, Auerbach RP, Alonso J, Axinn WG, Cuijpers P, Ebert DD, … Bruffaerts R (2018). Suicidal thoughts and behaviors among college students and same-aged peers: Results from the World Health Organization world mental health surveys. Social Psychiatry and Psychiatric Epidemiology, 53, 279–288. 10.1007/s00127-018-1481-6
- Muehlhausen W, Doll H, Quadri N, Fordham B, O'Donohoe P, Dogar N, & Wild DJ (2015). Equivalence of electronic and paper administration of patient-reported outcome measures: A systematic review and meta-analysis of studies conducted between 2007 and 2013. Health and Quality of Life Outcomes, 13, 13. 10.1186/s12955-015-0362-x
- National Center for Veterans Analysis and Statistics. (2017). Table 1L: VetPop2016 living veterans by age group, gender, 2015–2045. Retrieved from https://va.gov/vetdata/Veteran_Population.asp
- Office of Mental Health and Suicide Prevention. (2019a). National veteran suicide prevention annual report. Retrieved from U.S. Department of Veterans Affairs: https://www.mentalhealth.va.gov/suicide_prevention/data.asp
- Office of Mental Health and Suicide Prevention. (2019b). 2005–2017 national suicide data appendix. Retrieved from U.S. Department of Veterans Affairs: https://www.mentalhealth.va.gov/suicide_prevention/data.asp
- Office of Rural Health. (2020). Rural definition. Retrieved from http://ruralhealth.va.gov/aboutus/ruralvets.asp#def
- Poynton TA, DeFouw ER, & Morizio LJ (2019). A systematic review of online response rates in four counseling journals. Journal of Counseling & Development, 97(1), 33–42. 10.1002/jcad.12233
- Ratcliffe M, Burd C, Holder K, & Fields A (2016). Defining rural at the U.S. Census Bureau: American community survey and geography brief (ACSGEO-1). Retrieved from https://www.census.gov/content/dam/Census/library/publications/2016/acs/acsgeo-1.pdf
- Rosenberg M, Townes A, Taylor S, Luetke M, & Herbenick D (2019). Quantifying the magnitude and potential influence of missing data in campus sexual assault surveys: A systematic review of surveys 2010–2016. Journal of American College Health, 67(1), 42–50. 10.1080/07448481.2018.1462817
- Schumm WR, Bollman SR, Jurich AP, Castelo C, Sanders D, & Webb FJ (2000). Understanding mail survey response rates among male Reserve component Gulf War era veterans. Psychological Reports, 87, 859–880.
- Schumm WR, Jurich AP, Bollman SR, Sanders D, Castelo C, & Webb FJ (1999). Understanding mail survey response rates among female Reserve component veterans serving during the Persian Gulf war. Psychological Reports, 85, 653–664.
- Scott JC, Pietrzak RH, Southwick SM, Jordan J, Silliker N, Brandt CA, & Haskell SG (2014). Military sexual trauma interacts with combat exposure to increase risk for posttraumatic stress symptomology in female Iraq and Afghanistan veterans. Journal of Clinical Psychiatry, 75(6), 637–643. 10.4088/JCP.13m08808
- Shih T, & Fan X (2008). Comparing response rates from web and mail surveys: A meta-analysis. Field Methods, 20(3), 249–271. 10.1177/1525822X08317085
