Author manuscript; available in PMC: 2018 Sep 1.
Published in final edited form as: J Pain Symptom Manage. 2017 Jul 13;54(3):368–375. doi: 10.1016/j.jpainsymman.2017.07.005

Mind the Mode: Differences in Paper vs. Web-based Survey Modes among Women with Cancer

Teresa L Hagan 1, Sarah M Belcher 2, Heidi S Donovan 2
PMCID: PMC5610085  NIHMSID: NIHMS892513  PMID: 28711752

Abstract

Context

Researchers administering surveys seek to balance data quality, sources of error, and practical concerns when selecting an administration mode. Rarely are decisions about survey administration based on the background of study participants, though socio-demographic characteristics like age, education, and race may contribute to participants’ (non)responses.

Objectives

In this study, we describe differences between paper- and web-based surveys administered in a national study of women with a history of cancer in order to compare the ability of each survey administration mode to provide high-quality, generalizable data.

Methods

We compared paper- and web-based survey data by socio-demographic characteristics of respondents, missing data rates, scores on primary outcome measure, and administrative costs and time using descriptive statistics, tests of mean group differences, and linear regression.

Results

Our findings indicate that patients from potentially vulnerable groups more often preferred paper questionnaires and that data quality, responses, and costs varied significantly by mode and by participants' demographic characteristics. We provide targeted suggestions for researchers conducting survey research to reduce survey error and increase generalizability of study results to the patient population of interest.

Conclusions

Researchers must carefully weigh the pros and cons of survey administration modes to ensure a representative sample and high-quality data.

Keywords: Surveys and Questionnaires, Selection Bias, Neoplasms, Vulnerable Populations

Background

Scientists conducting self-administered survey research must choose whether to use paper-based or web-based surveys or a combination of both. While questions about the use of different modes may seem innocuous, it is important that researchers consider the consequences of each approach. Two central goals of survey research are to (1) include the broadest possible sample from the population of interest to create generalizable results and (2) reduce error in survey responses (Fowler, 2014). Survey research, therefore, must ensure that the survey can be accessed and completed by a representative sample of the target population and can be easily administered, accurately completed, and seamlessly collected.

There are pros and cons to using both paper- and web-based questionnaires. Table 1 summarizes the types of error present within survey research, how they differ across survey modes, and possible ways to reduce these errors using examples from our study (Waltz et al., 2017). Paper-based administration ensures uniform presentation of questionnaire design and flow. Simple procedures can be used to de-identify questionnaires (e.g., labeling surveys with study ID numbers), reducing the risk of identifiable patient data being inadvertently discovered. Major downsides of paper-based administration include the financial costs (e.g., postage, survey materials, and mailing supplies), participants purposefully or accidentally skipping items, and the time required to process survey data (Wright, 2005).

Table 1.

Types of Error and their Associated Risks and Error Reduction Strategies

Sampling error
 Definition: The degree to which the sample does not represent the population being described.
 Risk (paper-based survey): Missing eligible individuals who cannot be reached for sampling during the study.
 Risk (web-based survey): Missing eligible individuals who belong to groups or organizations without an online presence or web capacity.
 Error reduction (paper-based): Plan a broad outreach strategy; include multiple organizations and partners representing multiple geographic locations.
 Error reduction (web-based): Provide the opportunity for web-based survey completion to individuals recruited from non-web-based organizations.

Measurement error
 Definition: The degree to which imperfections or lack of clarity in the measurement tool cause errors in measurement.
 Risk (paper-based survey): Format and layout of paper questionnaires lead to missed questions, misinterpretation of items, etc. (especially with items requiring branching logic).
 Risk (web-based survey): Format and layout of web-based questionnaires lead to missed questions, misinterpretation of items, etc.
 Error reduction (paper-based): Carefully format paper surveys to direct participants' attention to clearly worded items, defined response options, and the flow of survey pages; pilot test all surveys; randomly order the surveys.
 Error reduction (web-based): Carefully format web-based surveys for multiple platforms; pilot test across multiple web-based platforms; randomly order the surveys.

Coverage error
 Definition: The degree to which a survey statistic differs from the true value because the sample frame does not cover the population.
 Risk (paper-based survey): Missing eligible individuals without an accurate street address.
 Risk (web-based survey): Missing eligible individuals without access to the internet or a smartphone; missing individuals with sensitive spam filters who may not receive the survey.
 Error reduction (paper-based): Provide surveys directly to individuals at local meetings, organizations, etc.
 Error reduction (web-based): Provide internet access and/or devices to eligible participants; send personalized emails rather than mass mailings from a verified email account, preferably from an academic institution or organization.

Non-response error
 Definition: The number of accessed members of a population who do not respond to or complete the survey.
 Risk (paper-based survey): Unreturned survey packets.
 Risk (web-based survey): Unopened emails containing the link to the survey; opened emails but incomplete surveys.
 Error reduction (paper-based): Postcard reminders to complete the survey at specified time points; replacement surveys.
 Error reduction (web-based): Email reminders to complete the survey at specified time points; reminders of missed items within the survey.

Web-based administration allows researchers to control the progression of questionnaire items and to manipulate the order of questionnaire presentation. The convenience and low cost associated with web-based administration make this method appealing to academic researchers trying to reduce overhead costs (Greenlaw and Brown-Welty, 2009). Access to the internet continues to increase: 84% of American adults age ≥18 use the internet, including 58% of senior citizens and 78% of rural Americans, with equal utilization across genders (Perrin and Duggan, 2015). Yet the widening digital divide in computer literacy and in computer and internet use patterns by race, ethnicity, income, educational attainment, and health status means that certain groups of people may be left out of research conducted primarily online (Choi and DiNitto, 2013; Perrin and Duggan, 2015). Moreover, security concerns raise human subjects issues regarding patient privacy and breaches of confidentiality.

While several research studies have analyzed differences in response rates (Kaplowitz et al., 2004; Sax et al., 2003) and survey scores between paper- and web-based survey administration modes (Hoebel et al., 2014), few studies have specifically analyzed differences among respondents with cancer. Bennett and colleagues (2016) demonstrated similar outcomes and patient acceptability across administration modes of the US National Cancer Institute's Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE). One study among testicular cancer survivors noted socio-demographic differences between participants completing web- vs. paper-based modes (Smith et al., 2013). Growing evidence supports the use of the internet by cancer survivors (van de Poll-Franse and van Eenbergen, 2008), yet research on survey administration methodology has not specifically considered this patient population, despite it being a group often burdened by cancer and treatment concerns. Furthermore, no studies have examined differences related to socio-demographic factors and participant choice of survey administration mode.

The purpose of this study is to determine if systematic differences in survey responses exist between self-administered paper- and web-based survey modes among women with a history of cancer. We aimed to evaluate whether differences existed in the data collected by paper- and web-based questionnaires, and, if so, what the implications of these differences are for the quality and generalizability of the data received. Specifically, we compare paper- and web-based questionnaire responses using the following criteria: (1) socio-demographic characteristics of respondents, (2) missing data rates, (3) scores on our primary outcome (i.e., self-advocacy), and (4) administrative costs and time. Based on our findings, we provide recommendations to survey researchers looking to reduce measurement error, sampling bias, and administrative burden while ensuring broad reach within their target population.

Methods

Parent Study

The Self-Advocacy Study was a national cross-sectional survey study that aimed to evaluate the psychometric properties of a new measure of self-advocacy for women with a history of cancer (Hagan et al., 2017). Participants were eligible if they had an adult diagnosis of invasive cancer (e.g., not basal cell carcinoma or in situ cancer) and could read and write in English. Participants received either a paper- or web-based questionnaire packet that included a battery of ten questionnaires, including the Female Self-Advocacy in Cancer Survivorship (FSACS) Scale and additional validated psycho-social, socio-demographic, and health history measures. The 318 participants reported 20 different cancer diagnoses; the most common diagnosis was breast (38.4%), followed by ovarian (20.1%) cancer. Participants were, on average, 8.9 (SD=8.4) years past their original cancer diagnosis; 19.8% were within a year of their diagnosis and 44.7% were more than five years past their diagnosis. This study was approved by the University of Pittsburgh Institutional Review Board.

Survey Methodology

We used Dillman's Tailored Design Method (2014) to direct procedures for both modes of survey administration. Dillman focuses on ways to design surveys that engage the participant, reduce error, and enhance data quality. This method, grounded in social exchange theory, is described in one of the best-known texts on evidence-based methods for conducting survey research.

Survey Administration

Potential participants were introduced to the study in person or via online or mailed invitation, with the invitation mode matching the mode of first researcher contact (e.g., mailings from a tumor registry, in-person at an advocacy event attended by a study team member, online through a cancer organization listserv). After providing informed consent to participate in the research study, all participants were asked to select which self-administered survey mode (paper- or web-based) they preferred. We let participants choose their survey mode to reduce barriers to completing the study, support participants' autonomy, and increase trust between participants and the research team. Once participants made their decision, we used that mode exclusively for questionnaire administration, reminders, and any further communication. The study schema in Figure 1 shows the methods by which participants were recruited to the study, their preferred survey modality, and survey completion rates by mode. Of note, 36 participants who initially received paper-based surveys and 11 participants who initially received web-based surveys requested the alternative mode.

Figure 1.

Study schema for paper- and web-based survey administration and completion

aFor hand-delivered paper-based surveys and emailed web-based surveys, we do not know how many potential participants were originally contacted because the paper-based surveys were hand-delivered by leaders of advocacy organizations and the emails were sent to broad distribution networks owned by advocacy and cancer organizations.

bThree participants completed the survey packet who were later determined to be ineligible because their self-reported cancer type did not meet inclusion criteria (e.g., non-invasive cancer diagnosis).

We used the web-based survey and data management platform Qualtrics (University of Pittsburgh, 2016) to build and administer the web-based questionnaires. We ensured that the wording and layout of each questionnaire were as close as possible to the paper-based version and that the smartphone and personal computer interfaces were similar. The FSACS Scale, our primary outcome of interest, was always presented first, and participants were automatically reminded if they skipped a scale item in order to reduce missing data.

We sent reminders to both groups of participants. Participants completing the paper questionnaires were sent postcards and a replacement packet if their completed packet was not returned by days 14 and 21, respectively. Of note, we could not do this for participants we recruited from the Pennsylvania Tumor Registry due to the regulations of the registry. Participants completing web-based questionnaires were sent email reminders on days 5 and 10 with a personalized link to their questionnaires. We sent reminders sooner for the web-based questionnaires since they are delivered immediately, whereas the paper questionnaires required time to be delivered. Participants in both groups received a $10 Amazon.com gift code after completing the survey packet.
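To make the two follow-up schedules concrete, the sketch below encodes them as data. This is an illustrative Python sketch under our own assumptions about how such tracking might be scripted, not the study's actual tracking system; the dictionary and function names are hypothetical.

```python
from datetime import date, timedelta

# Follow-up schedule described above: postcard and replacement packet for paper,
# two email reminders for web. Days are counted from when the packet/link was sent.
REMINDER_SCHEDULE = {
    "paper": [(14, "mail postcard reminder"), (21, "mail replacement packet")],
    "web": [(5, "send email reminder"), (10, "send email reminder")],
}

def reminders_due(mode: str, sent_on: date) -> list[tuple[date, str]]:
    """Return (date, action) pairs for one participant's follow-up contacts."""
    return [(sent_on + timedelta(days=d), action) for d, action in REMINDER_SCHEDULE[mode]]

# Example: a paper packet mailed March 1 triggers follow-ups on March 15 and March 22.
for due, action in reminders_due("paper", date(2017, 3, 1)):
    print(due.isoformat(), "-", action)
```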

Analyses

We assessed differences between paper- and web-based administration modes in four ways using descriptive statistics and independent sample t-tests. First, we compared participants’ socio-demographic characteristics by administration mode. Second, we compared the extent of missing data across the two survey modes. We concentrated on two types of missing data: (1) having more than 20% missing data on more than one questionnaire and (2) incomplete responses on our most important study variable, the FSACS Scale. Third, we compared participants’ scores on the FSACS Scale by administration mode using linear regression models assessing the impact of mode. We looked at the three distinct sub-dimensions of the FSACS Scale separately: (1) Informed Decision Making, (2) Connected Strength, and (3) Effective Communication with the Health Care Team. Finally, to assess the administrative costs and time associated with each mode, we describe the process of administering both survey modes and total costs associated with the paper-based surveys. In total, these four analyses provide a comprehensive comparison between paper- and web-based survey administration modes and allow us to assess which mode reduces survey error while ensuring survey access and reach.
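For readers who want to see how these comparisons translate into an analysis workflow, a minimal Python sketch follows. It is not the authors' analysis code; the file name, column names, and questionnaire prefixes are hypothetical placeholders, and it assumes one row per respondent with a "mode" column indicating paper or web completion.

```python
# Minimal sketch of the comparisons described above (not the authors' code).
# Assumes a hypothetical export "survey_responses.csv" with one row per respondent,
# a "mode" column ("paper" or "web"), "age" and "education" columns, and item
# columns named with a questionnaire prefix (e.g., "fsacs_1", "fsacs_2", ...).
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # hypothetical data file

# (1) Socio-demographic differences by mode: t-test for age, chi-square for education
paper, web = df[df["mode"] == "paper"], df[df["mode"] == "web"]
t_stat, p_age = stats.ttest_ind(paper["age"].dropna(), web["age"].dropna())
chi2, p_edu, dof, _ = stats.chi2_contingency(pd.crosstab(df["mode"], df["education"]))

# (2) Missing data: flag respondents with >20% missing items on more than one questionnaire
questionnaire_prefixes = ["fsacs", "qol", "symptoms"]  # hypothetical prefixes for the battery

def prop_missing(prefix: str) -> pd.Series:
    items = df[[c for c in df.columns if c.startswith(prefix + "_")]]
    return items.isna().mean(axis=1)  # per-respondent fraction of missing items

over_20pct = pd.DataFrame({p: prop_missing(p) > 0.20 for p in questionnaire_prefixes})
df["high_missing"] = over_20pct.sum(axis=1) > 1
print(df.groupby("mode")["high_missing"].mean())  # compare missing-data rates by mode
```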

Results

Socio-demographic Differences

Participants who completed paper-based surveys differed from those who completed web-based surveys on all demographic variables including age, education, annual household income, healthcare insurance, home location, race, marital status, and employment status (Table 2). Descriptive statistics indicate that participants who are older, unmarried, retired, non-White, had less education, had lower income, and did not have private healthcare insurance tended to request paper-based survey packets.

Table 2.

Demographic Information by Survey Administration Mode

Values are N (%) unless otherwise noted. Paper-based: n = 137; Web-based: n = 181.

Age, M (SD): Paper 66.19 (11.26); Web 52.54 (11.86); t(df=314) = −10.35, p < .001**

Education: χ2(df=3) = 35.62, p < .001**
 High school or less (n=73): Paper 50 (36.50); Web 23 (12.71)
 Vocational or 2-year degree (n=72): Paper 36 (26.28); Web 36 (19.89)
 Bachelor degree (n=83): Paper 27 (19.71); Web 56 (30.94)
 ≥ Some graduate school (n=84): Paper 21 (15.33); Web 63 (34.81)

Income: χ2(df=3) = 32.76, p < .001**
 <$20k (n=27): Paper 20 (14.60); Web 7 (3.87)
 $20k–49k (n=65): Paper 40 (29.20); Web 25 (13.81)
 $50k–79k (n=70): Paper 26 (18.98); Web 44 (24.31)
 $80k–149k (n=74): Paper 20 (14.60); Web 54 (29.83)
 ≥$150k (n=20): Paper 4 (2.92); Web 16 (8.84)

Health insurance: χ2(df=4) = 24.42, p < .001**
 Private only (n=163): Paper 51 (37.23); Web 112 (61.88)
 Medicare only (n=27): Paper 16 (11.68); Web 11 (6.08)
 Medicaid only (n=10): Paper 3 (2.19); Web 7 (3.87)
 Medicare and private (n=60): Paper 33 (24.09); Web 27 (14.92)
 Veteran’s Administration, Social Security, Disability, or Other (n=58): Paper 33 (24.09); Web 24 (13.26)

Location: χ2(df=3) = 11.23, p = .011*
 Rural (n=22): Paper 15 (10.95); Web 7 (3.87)
 Suburban (n=146): Paper 53 (38.69); Web 93 (51.38)
 Urban (n=99): Paper 49 (35.77); Web 50 (27.62)
 Other (n=43): Paper 15 (10.95); Web 28 (15.47)

Race: χ2(df=3) = 9.91, p = .019*
 White (n=281): Paper 117 (85.40); Web 164 (90.61)
 Black (n=25): Paper 17 (12.41); Web 8 (4.42)
 Asian (n=3): Paper 0 (0.00); Web 3 (1.66)
 Other (n=8): Paper 2 (1.46); Web 6 (3.31)

Marital status: χ2(df=1) = 18.52, p < .001**
 Married or living with significant other (n=216): Paper 75 (54.74); Web 141 (77.90)
 Not married or partnered (n=101): Paper 61 (44.53); Web 40 (22.10)

Employment status: χ2(df=5) = 42.41, p < .001**
 Working (n=133): Paper 35 (25.55); Web 98 (54.14)
 Unemployed (n=9): Paper 2 (1.46); Web 7 (3.87)
 Retired (n=101): Paper 67 (48.91); Web 34 (18.78)
 Disabled (n=40): Paper 20 (14.60); Web 20 (11.05)
 Homemaker (n=25): Paper 10 (7.30); Web 15 (8.29)
 Other (n=7): Paper 1 (0.73); Web 6 (3.31)

* p < .05; ** p < .001

Missing Data

Table 3 compares missing data by survey administration mode. Compared to participants who completed web-based questionnaires, participants who completed paper-based questionnaires were more likely to have missed >20% of items on one or more questionnaires (21.1% vs. 10.2%) and were more likely to miss items on the FSACS Scale (23.5% vs. 5.5%).

Table 3.

Survey Information by Administration Mode

Paper-based Surveys: N = 137; Web-based Surveys: N = 181.

 Responders who did not complete >80% of at least 2 of the 10 surveys in the packet, N (%)a: Paper 37 (21.1); Web 21 (10.2)
 Missing data on the Female Self-Advocacy in Cancer Survivorship Scale, N (%) of incomplete surveys: Paper 32 (23.5); Web 10 (5.5)

a Out of N = 175 for paper-based surveys and N = 206 for web-based surveys

Differences in Outcome Variable Scores

Based on the socio-demographic differences known to impact self-advocacy, we controlled for age and education in our regression analyses. To reduce multicollinearity concerns and to avoid overfitting the model, we did not include all variables that were significantly different across survey modes presented in Table 2. Across two of the three sub-dimensions of the FSACS Scale, participants' scores did not significantly differ by survey administration mode. However, the regression model was significant for the Connected Strength subscale (a woman's ability to receive support from others while also giving support to those who depend on her) (F(3, 309) = 5.02, p = .02), with survey administration mode significantly associated with the scale score (beta = −.14, t(312) = −2.11, p = .04). Participants who completed web-based questionnaires (M = 34.01, SD = 5.54) scored significantly higher on the Connected Strength sub-dimension than participants who completed the paper-based questionnaires (M = 31.95, SD = 6.27).
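A minimal sketch of this kind of mode-effect regression is shown below, using statsmodels. It is not the authors' code; the file name and column names ("connected_strength", "mode", "age", "education_years") are hypothetical placeholders for the study variables.

```python
# Minimal sketch of the mode-effect regression described above (not the authors' code).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")            # hypothetical data file
df["web_mode"] = (df["mode"] == "web").astype(int)  # 0 = paper-based, 1 = web-based

# Connected Strength score regressed on survey mode, controlling for age and education
model = smf.ols("connected_strength ~ web_mode + age + education_years", data=df).fit()
print(model.summary())  # inspect the web_mode coefficient, its t-value, and p-value
```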

Administrative Costs and Time

The web-based administration mode required minimal cost and time. We built and piloted the web-based questionnaires on Qualtrics over the course of two weeks. We developed standard, personalized, automated emails to send participants based on whether or not they had completed the questionnaires. Because Qualtrics is a data management system in addition to a survey delivery system, survey data was automatically entered into a secure, HIPAA-compliant format that could be exported to statistical software.

The paper-based administration mode was considerably more time- and labor-intensive. It was used mostly to reach participants from the Pennsylvania Tumor Registry, our largest recruitment site and the site with the most representative sample of potential participants. We mailed 897 introductory packets to potential participants at a cost of $3,420.28, including postage, paper, ink cartridges, thank you cards, and student worker salaries for data entry. The elapsed time between the study team sending questionnaire packets to participants and receiving completed surveys was substantially longer for the paper-based mode (M = 31.5 days; range: 8–93 days) than for the web-based mode (M = 3.2 days; range: 0–24 days). We did not document the time associated with preparing, receiving, and entering data from paper-based questionnaires, but we note the substantial effort required to coordinate and manage data from the paper-based survey administration mode.
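As a rough, back-of-the-envelope illustration using the figures above, the mailing cost alone works out to about $3,420.28 / 897 ≈ $3.81 per introductory packet sent, before accounting for the undocumented staff time spent preparing packets and entering returned data.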

Discussion

Our results demonstrate substantial differences in the demographics and survey responses of participants who completed questionnaires using paper versus web-based administration mode. Although paper-based questionnaires were more expensive to administer and had higher missing data rates, they were preferred by subgroups who are often excluded from research (elderly, minorities, and those with lower socio-economic status).

Acknowledging differences in responses as well as the practical concerns of each survey mode can assist researchers in applying the most appropriate mode for their patient population. For example, our results support research demonstrating that participants who are older and less educated prefer paper questionnaires (Zuidgeest et al., 2011). Because correlations between self-advocacy and age and education were identified in the parent study, providing an option for paper questionnaires ensured that our results did not exclude eligible patients who were older or had less formal education. Furthermore, we aimed to include as representative a sample as possible by recruiting from the Pennsylvania Tumor Registry, so that we could randomly sample potential participants from the entire registry and target specific subgroups of women. While our response rate from the Tumor Registry was only 17.3%, a low rate compared with other studies that have used this registry (Kelly et al., 2010), our inclusion criteria were much broader than those of previous studies and included women diagnosed with cancer from 1985 through 2013. The low response rate may therefore reflect the higher disease burden of this patient population of cancer survivors and the limits of contacting individuals through a state tumor registry (i.e., members' addresses and health status are not updated). Also, because participants needed to opt into the study and take extra action to request web-based questionnaires, participation may have been suppressed among people who prefer web-based surveys. The low response rate did not justify the high cost and labor associated with the registry, including sending reminder notices and replacement questionnaires.

Our findings have direct implications for researchers who want to include patients from potentially vulnerable backgrounds and those not typically included in medical research. Researchers who aim to include a diverse, representative sample must balance the pros and cons of each survey administration mode. While paper-based surveys may offer further reach to participants from less socio-demographically advantaged backgrounds (i.e., low sampling error), researchers must plan for the time and funds necessary to ensure that paper-based surveys are delivered in a timely manner. Questionnaire packets should be pilot tested to reduce missing data due to modifiable survey design factors and systems should be put into place to follow-up on any missed items when appropriate (i.e., measurement error). Web-based survey administration results in fast turn-around times, and back-end processing can be used to reduce missing data (i.e., low measurement error). However, web-based mode may exclude potential participants with poor internet access and a preference for paper-based surveys (i.e., high sampling and coverage errors). Ultimately, since few participants requested to change their survey mode from the one by which they were initially contacted, researchers should carefully consider how survey mode may impact the representativeness of their sample in addition to their data quality and responses.

Based on our findings, we suggest researchers apply a multi-mode survey design to address the disadvantages of each mode. While offering multiple modes requires a concerted effort to ensure the surveys remain as equivalent as possible, providing this flexibility may permit researchers to access subgroups of the patient population that would be excluded or less likely to participate if a single survey administration mode were offered. Most importantly, by increasing response rates, providing both modes will likely increase the validity and generalizability of the researchers' findings and therefore have greater impact statistically and clinically. For example, since our primary outcome of interest (i.e., the FSACS Scale) differed by survey mode, we would have had vastly different results if we had only offered participants a mailed or web-based option. Providing both options in a thoughtful, proactive way can help ensure the research study has broad reach and a more accurate assessment of the researchers' population of interest.

Limitations of this study include the non-random nature of survey mode distribution. Our goal was to increase convenience and flexibility for participants in the parent study, and we therefore employed broad recruitment strategies that limited calculation of the total number of potential participants introduced to the study. Also, we did not collect information about participants' reasons for selecting a specific survey mode. We did not analyze whether differences in questionnaire formatting between survey administration modes impacted results, though the aesthetics and presentation could have influenced participants' behavior.

To help ensure study results are generalizable, researchers should take additional steps to ensure all populations can access and complete surveys, especially for patients who are potentially vulnerable based on their socio-demographic history. Offering multiple survey administration modes, piloting questionnaires in the targeted population, and targeting specific groups of patients can help ensure high quality and impactful results.

Acknowledgments

Teresa Hagan was supported by a Doctoral Degree Scholarship in Cancer Nursing, DSCN-14-077-01-SCN, from the American Cancer Society. This study was also supported by the National Institute of Nursing Research/National Institutes of Health grant F31NR014066 (Hagan).

Footnotes

Conflicts of Interest: The authors declare no conflicts of interest.


References

  1. Bennett AV, Dueck AC, Mitchell SA, et al. Mode equivalence and acceptability of tablet computer-, interactive voice response system-, and paper-based administration of the US National Cancer Institute's Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE). Health Qual Life Outcomes. 2016;14(1):24. doi: 10.1186/s12955-016-0426-6.
  2. Choi NG, DiNitto DM. The digital divide among low-income homebound older adults: Internet use patterns, eHealth literacy, and attitudes toward computer/Internet use. J Med Internet Res. 2013;15(5):e93. doi: 10.2196/jmir.2645.
  3. Dillman DA, Smyth JD, Christian LM. Internet, Phone, Mail, and Mixed-mode Surveys: The Tailored Design Method. 4th ed. Hoboken, NJ: John Wiley & Sons; 2014.
  4. Fowler FJ Jr. Survey Research Methods. 5th ed. New York: SAGE; 2014. p. 6.
  5. Greenlaw C, Brown-Welty S. A comparison of web-based and paper-based survey methods: testing assumptions of survey mode and response cost. Eval Rev. 2009;33(5):464–480. doi: 10.1177/0193841X09340214.
  6. Hagan TL, Cohen SM, Rosenzweig M, et al. Validating the Female Self-Advocacy in Cancer Survivorship Scale: capturing how patients get their needs met. Under review.
  7. Hoebel J, von der Lippe E, Lange C, et al. Mode differences in a mixed-mode health interview survey among adults. Arch Public Health. 2014;72(1):1. doi: 10.1186/2049-3258-72-46.
  8. Kaplowitz MD, Hadlock TD, Levine R. A comparison of web and mail survey response rates. Public Opin Q. 2004;68(1):94–101.
  9. Kelly BJ, Fraze TK, Hornik RC. Response rates to a mailed survey of a representative sample of cancer patients randomly drawn from the Pennsylvania Cancer Registry: a randomized trial of incentive and length effects. BMC Med Res Methodol. 2010;10(1):65. doi: 10.1186/1471-2288-10-65.
  10. Perrin A, Duggan M. Americans' internet access: 2000–2015. Pew Research Center; 2015. Available at: http://www.pewinternet.org/files/2015/06/2015-06-26_internet-usage-across-demographics-discover_FINAL.pdf. Accessed February 17, 2017.
  11. Sax LJ, Gilmartin SK, Bryant AN. Assessing response rates and nonresponse bias in web and paper surveys. Res High Educ. 2003;44(4):409–432.
  12. Smith AB, King M, Butow P, et al. A comparison of data quality and practicality of online versus postal questionnaires in a sample of testicular cancer survivors. Psychooncology. 2013;22(1):233–237. doi: 10.1002/pon.2052.
  13. University of Pittsburgh. Qualtrics Survey Service. Available at: http://technology.pitt.edu/service/qualtrics-survey-service. Accessed January 31, 2017.
  14. van de Poll-Franse LV, van Eenbergen MC. Internet use by cancer survivors: current use and future wishes. Support Care Cancer. 2008;16(10):1189–1195. doi: 10.1007/s00520-008-0419-z.
  15. Waltz CF, Strickland OL, Lenz ER. Measurement in Nursing and Health Research. 5th ed. New York: Springer Publishing Company; 2017.
  16. Wright KB. Researching Internet-based populations: advantages and disadvantages of online survey research, online questionnaire authoring software packages, and web survey services. J Comput Mediat Commun. 2005;10(3).
  17. Zuidgeest M, Hendriks M, Koopman L, et al. A comparison of a postal survey and mixed-mode survey using a questionnaire on patients' experiences with breast care. J Med Internet Res. 2011;13(3):e68. doi: 10.2196/jmir.1241.
