Author manuscript; available in PMC: 2018 Sep 1.
Published in final edited form as: Eval Health Prof. 2016 Jan 10;40(3):332–358. doi: 10.1177/0163278715625738

Survey Methods to Optimize Response Rate in the National Dental Practice–Based Research Network

Ellen Funkhouser 1, Kavya Vellala 2, Camille Baltuck 3, Rita Cacciato 4, Emily Durand 5, Deborah McEdward 6, Ellen Sowell 1, Sarah E Theisen 7, Gregg H Gilbert 1; National Dental PBRN Collaborative Group1
PMCID: PMC5002250  NIHMSID: NIHMS811094  PMID: 26755526

Abstract

Surveys of health professionals typically have low response rates, and these rates have been decreasing in recent years. We report on the methods used in a successful survey of dentist members of the National Dental Practice–Based Research Network. The objectives were to quantify the (1) increase in response rate associated with successive survey methods, (2) time to completion with each successive step, (3) contribution from the final method and personal contact, and (4) differences in response rate and mode of response by practice/practitioner characteristics. Dentist members of the network were mailed an invitation describing the study. Subsequently, up to six recruitment steps were followed: an initial e-mail, two e-mail reminders at 2-week intervals, a third e-mail reminder accompanied by a postal mailing of a paper questionnaire, a second postal mailing of the paper questionnaire, and staff follow-up. Of the 1,876 invited, 160 were deemed ineligible and 1,488 (87% of 1,716 eligible) completed the survey. Completion by step: initial e-mail, 35%; second e-mail, 15%; third e-mail, 7%; fourth e-mail/first paper, 11%; second paper, 15%; and staff follow-up, 16%. Overall, 76% completed the survey online and 24% on paper. Completion rates increased both in absolute numbers and proportionally with the later methods of recruitment. Participation rates varied little by practice/practitioner characteristics. Completion on paper was more likely by older dentists. Multiple methods of recruitment resulted in a high participation rate: Each step and method produced incremental increases, with the final step producing the largest increase.

Keywords: survey methods, participation rates, response rates, online surveys, health professions, dentists

Introduction

Surveys of health-care professionals are a valuable tool in health services and policy research because they are a cost-effective method to assess knowledge, attitudes, and practices in the delivery of health care (VanGeest, Johnson, & Welch, 2007). Response rate, a measure of the representativeness of the sample, is the statistic most commonly cited to indicate the quality of a survey (Baruch & Holtom, 2008; Rogelberg & Stanton, 2007). These rates, historically and currently, have been lower for health professionals than for the general public (Asch, Jedrziewski, & Christakis, 1997; Cummings, Savitz, & Konrad, 2001; Sudman, 1985). Beginning with Sudman’s seminal 1985 article, the reasons cited for lower response rates in physician surveys include lack of time, low salience or perceived importance, concerns about confidentiality, and concern about bias in the survey, either in general or for specific questions, including questions that do not allow a full range of responses. The presence of “gatekeepers,” office personnel who in effect screen mail and e-mail requests directed to the health-care professionals for whom they work, has recently been cited as a major reason for low response among health professionals (Klabunde et al., 2012).

In addition to historically lower response rates among health-care professionals, most reviews find that response rates have been declining (Cho, Johnson, & VanGeest, 2013; Cull, O’Connor, Sharp, & Tang, 2005; McLeod, Klabunde, Willis, & Stark, 2013). In a review of 50 surveys of pediatricians from 1994 to 2002, Cull, O’Connor, Sharp, and Tang (2005) found that response rates decreased from 70% in 1994–1998 to 63% in 1999–2002. Cho, Johnson, and VanGeest (2013), in a meta-analysis of 48 surveys of health professionals from 1948 to 2012, found that response rates decreased from over 80% before 1960 to around 50% in 2000 and then to 42% in 2012. In a review of surveys of health-care providers conducted between 2000 and 2010, using a 60% response rate as a benchmark, the percentage of surveys meeting this benchmark decreased from 61% in 1998–2000 to 36% in 2005–2008 (McLeod et al., 2013). Possible reasons for declining response rates among health-care professionals are increased requests to complete such surveys and increased workloads, both in number of patients and in administrative obligations, although this has not been explicitly demonstrated (Klabunde et al., 2012).

Response rates are not the only measure of quality. Response bias, or nonresponse bias, occurs when those who respond differ from those who do not on the outcome of interest; this bias has grown in importance as a measure of survey quality (Johnson & Wislar, 2012; Shelley, Brunton, & Horner, 2012). Nonresponse bias can be assessed in a number of ways. The most common is comparing characteristics of those who respond with those who do not, preferably on the outcome measure. Because this information is rarely available for nonresponders, a surrogate or correlate of the outcome measure may be used. Other approaches involve comparing early and late responders or following up more extensively on initial nonresponders. The latter provides only a limited assessment of nonresponse bias because late responders and responders to more extensive follow-up are still responders and thus may not reflect the characteristics of true nonresponders. Few studies report on potential response bias (Asch et al., 1997; Cummings et al., 2001). Cummings, Savitz, and Konrad (2001), in a review of 27 mailed physician surveys published between 1986 and 1995, reported that only 18% estimated any type of response bias. Studies that have assessed potential response bias in physician surveys have found little (Field et al., 2002; Kellerman & Herold, 2001; McFarlane, Olmsted, Murphy, & Hill, 2006).

The review by Cho et al. (2013) found minimal response bias; specifically, response was slightly higher for younger professionals, females, and nonspecialty professionals. Higher response rates were reported for (1) mail (57%) than online (38%) or mixed mode (49%), (2) monetary (60%) than nonmonetary (48%) or no incentive (48%), (3) physicians (55%) than nurses (51%) or other health professionals (46%), (4) one (57%) or two follow-up reminders (66%) than none (43%) or three (49%), (5) non-U.S. (57%) than U.S. (43%) setting, and (6) non-RCT (57%) than Randomized Controlled Trial (RCT) (50%) study designs.

Monetary incentives have consistently been found to increase response rates (Asch, Christakis, & Ubel, 1998; Halpern, Ubel, Berlin, & Asch, 2002; Kasprzyk, Montano, St. Lawrence, & Phillips, 2001; Keating, Zaslavsky, Goldstein, West, & Ayanian, 2008; Leung, Ho, Chan, Johnston, & Wong, 2002; Robertson, Walkom, & McGettigan, 2005). Even small amounts, for example, US$1, US$2, and US$5, increase participation (VanGeest et al., 2007). Prepaid incentives are more effective than promised incentives (Delnevo, Abatemarco, & Steinberg, 2004; Leung et al., 2004). In general, most studies have found that the larger the incentive, the larger the effect (Asch et al., 1998; Halpern et al., 2002; Kasprzyk et al., 2001; Keating et al., 2008), but not all (Burt & Woodwell, 2005; VanGeest, Wynia, Cummins, & Wilson, 2001). An optimal amount has not been determined (Klabunde et al., 2012). Nonmonetary incentives rarely increase participation (Burt & Woodwell, 2005; Halpern et al., 2002).

Studies of factors affecting response rates among nonphysician health-care providers have been few compared with those among physicians, and most were conducted primarily by mail (Guise, Chambers, Valimaki, & Makkonen, 2010; Hawley, Cook, & Jensen-Doss, 2009; Hill, Fahrney, Wheeless, & Carson, 2006; Paul, Walsh, & Tzelepis, 2005; Ulrich et al., 2005; VanGeest & Johnson, 2011). In general, findings among nonphysician providers are consistent with those among physicians, namely, that monetary incentives, even small amounts, increase response rates compared with no incentive, nonmonetary incentives, or a lottery. Hawley, Cook, and Jensen-Doss (2009), in a study including both physician and nonphysician providers, found a lower response rate among psychiatrists than among therapists, psychologists, counselors, or social workers. This contrasts with the across-studies comparison by Cho et al. (2013), who found that physicians typically responded at a higher rate than nonphysician providers. VanGeest and Johnson (2011), in their review of studies among nurses, found that nurses responded well to telephone strategies, in contrast to physicians (Cho et al., 2013), who typically respond very poorly to telephone surveys.

Use of online (also referred to as web-based or electronic) methodology to conduct surveys has become increasingly popular. Online methodology has many advantages over postal mail or telephone methods: it is quicker, less expensive, and typically yields higher rates of item completeness. The majority of online survey costs are for programming and enabling e-mail delivery. Because these are largely initial costs, the cost efficiency of online surveys increases as the sample size increases. Often-cited limitations of online surveys are low response rates and difficulty in specifying the sampling base (Braithwaite, Emery, de Lusignan, & Sutton, 2003; de Leeuw, 2012; van Selm & Jankowski, 2006). The latter occurs when the list of e-mail addresses for the target population is incomplete or outdated. Beebe, Locke, Barnes, Davern, and Anderson (2007) conducted a randomized mixed-mode study of 500 physicians. One group was contacted by e-mail first and asked to complete the survey online. The other group was sent an invitation and questionnaire via postal mail. One week after the initial notification and request for participation, each nonrespondent was sent a reminder in the same mode (online or paper) as the original request. After another week, each nonrespondent was sent another reminder and request, this time in the “other” mode. The response rate (completed/eligible) was calculated for each arm, web/mail and mail/web, and the significance of the difference was determined: web/mail, 62.9% and mail/web, 70.2%; p = .07. They concluded that mail then web yields a slightly higher response; however, if time is a factor, they recommended using web then mail. Schleyer and Forrest (2000) presented a cost–benefit analysis comparing postal and electronic mail. If the basic assumption that valid e-mail addresses are available for the study population is met, electronic administration is more cost-effective at sample sizes of 348 or more; furthermore, the cost–benefit grows with study size.
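
To make the cost logic concrete, the following is a minimal sketch of a breakeven calculation in the spirit of Schleyer and Forrest (2000): an online survey carries a large fixed setup cost but a small marginal cost per respondent, whereas postal cost scales with every questionnaire mailed. All dollar figures below are invented for illustration; they are not the values from their analysis.

```python
# Hypothetical cost model: electronic surveys carry a large fixed cost
# (programming, e-mail setup) but a small per-respondent cost, while postal
# surveys cost roughly the same per respondent regardless of sample size.

FIXED_ELECTRONIC = 2500.0   # assumed one-time programming/setup cost (US$)
PER_ELECTRONIC = 0.50       # assumed marginal cost per online respondent (US$)
PER_POSTAL = 7.75           # assumed cost per mailed questionnaire (US$)

def breakeven_sample_size() -> int:
    """Smallest n at which total electronic cost drops below total postal cost."""
    n = 1
    while FIXED_ELECTRONIC + PER_ELECTRONIC * n >= PER_POSTAL * n:
        n += 1
    return n

if __name__ == "__main__":
    n = breakeven_sample_size()
    # With these invented costs: 2500 / (7.75 - 0.50) yields n = 345, the same
    # order as the ~348 threshold reported by Schleyer and Forrest (2000).
    print(f"Electronic becomes cheaper at n = {n}")
```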

For more than two decades, access to the Internet has seldom been an issue when surveying health-care professionals. The challenge has been, and remains, how to catch professionals’ attention sufficiently to elicit a questionnaire response. This is especially challenging in the current era of information overload. As noted above, few surveys of health-care providers have obtained response rates above 70% (reviews: Flanigan, McFarlane, & Cook, 2008; McLeod et al., 2013; Shelley et al., 2012; VanGeest et al., 2007). The Medical Expenditure Panel Survey, Medical Provider Component, achieved 80–95% response, depending on provider type, in the 2006 survey (Stagnitti, Beauregard, & Solis, 2008); however, these are follow-up surveys of providers of patients who participated, conducted primarily by telephone, and so may not be directly comparable to a typical stand-alone survey. Virtually all surveys of dentists with response rates over 80% have been conducted by postal mail and outside of the United States: Swedish orthodontists, 87% (n = 157; Bjerklin & Bondemark, 2008); British general dentists, 86% (n = 75; Sutton, Ellituv, & Seed, 2005); Ugandan dentists, 82% (Mutyabule & Whaites, 2002); and British periodontists, 82% (n = 459; McCrea, 2008).

Shelley, Brunton, and Horner (2012) reviewed 53 surveys of dental radiology published between 1983 and 2010 to develop recommendations for future researchers. They argued that study characteristics other than response rates should be considered, one example being specification of the sampling base. In their review, Shelley et al. (2012) reported a mean response rate of 74%; however, the review included surveys conducted at meetings, where the response rate is effectively 100%. As they noted, such surveys are not comparable to a typical survey, and response rates from these contexts are of lesser value as a quality indicator.

To our knowledge, no large survey of health-care providers (which we define as having more than 1,000 potential respondents in the sampling frame) with an online component has reported a response rate of over 70%. Using methods described by Schleyer and Forrest (2000), the modified tailored approach of Dillman (2007), and the recommendations from Klabunde et al. (2012), which summarize the proceedings of a 2010 National Cancer Institute (NCI) workshop on surveying health-care providers, we report our experience using a large online component to obtain an excellent response to a survey of dentists in the National Dental Practice–Based Research Network.

The objectives of this report are to quantify the (1) increase in response rate associated with successive survey methods in a questionnaire completed by dentists and by three sequential categories of recruitment (electronic, paper, and personal follow-up), (2) time to completion with each successive step, (3) contribution from the final method (follow-up by study staff [regional coordinator]), and (4) differences in response rate and mode of response by practice/practitioner characteristics.

Method

The National Dental Practice–Based Research Network (“network”) is a consortium of dentists and dental organizations focusing on improving the scientific basis for clinical decision making (Gilbert et al., 2013). Its mission is “To improve oral health by conducting dental practice-based research and by serving dental professionals through education and collegiality.” It is committed to maximizing the practicality of conducting research about clinical practice across geographically dispersed regions and diverse practice types. The network comprises six geographic regions, each with a regional director and coordinator for administrative purposes. Many details about the network are available at its website, www.nationaldentalpbrn.org. This study was approved by the respective institutional review board(s) of each of the network’s regions.

Enrollment Questionnaire

As part of the network enrollment process, practitioners complete an Enrollment Questionnaire that describes themselves, their practice(s), and their patient population. A copy of the questionnaire is publicly available (National Dental Practice-Based Research Network [PBRN] Study Results Page). Questionnaire items from the Enrollment Questionnaire, which had documented test/retest reliability, were taken from our previous work in a practice-based study of dental care and a PBRN that ultimately led to the National Dental PBRN (Florida Dental Care Study, 2015; Gilbert et al., 2011). The typical enrollee completes the questionnaire online, although a paper option is available. Invitations to enroll are typically done by mass mailings or by face-to-face request during a dental professional meeting. These invitations are one time only; they are not followed up by any further mail, e-mail, or personal contact.

Content of the Isolation Techniques Questionnaire

The questionnaire itself first confirmed that the respondent was a general dentist who performs at least one root canal treatment each month (a stricter criterion than the “do at least some” criterion taken from the Enrollment Questionnaire); respondents were then asked for the number of root canal treatments performed each month and the frequency and type of isolation methods used. The questionnaire comprised 57 questions printed on eight pages. A copy of the full questionnaire is publicly available (National Dental PBRN Study Results Page). There is no overlap in the information requested between the enrollment and isolation techniques questionnaires.

Electronic Development and Testing of Online Isolation Techniques Questionnaire

An online web survey system was used for primary data collection and management. The system tracked all activity for each participating dentist. Each component was tested, and a full system test was executed to ensure that the system functioned as expected; functional testing included screen review, navigation assessment, and data entry. The web survey system was tested on Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, and Apple Safari browsers. The web survey was rendered as a series of hypertext markup language (HTML) pages. Users advanced through the pages by selecting a “Next” button. The content on each page was limited to minimize scrolling. Respondents could skip any question except the eligibility question, but when they tried to advance to a different screen or submit a page with an omitted question, they were prompted to confirm that they wanted to leave the screen without answering. The system timed out after 30 min of inactivity; when respondents logged back in, the system returned them to the screen where they had left off. Respondents were also allowed to save their responses and continue at a later time. User acceptance testing of the web-based survey was performed by members at each of the network administrative regions. The user testing evaluated the readability, feasibility, and Internet browser compatibility of the electronic survey to ensure that the system functioned as expected and was consistent with all protocol requirements.
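
The page-flow rules just described can be modeled compactly. The following sketch is our illustration, not the network’s actual system: respondents may skip any question except eligibility, are asked to confirm before leaving a page with an omitted answer, and resume at the page where they left off after the 30-min timeout. All names are invented.

```python
# Minimal model (ours, for illustration) of the survey page-flow rules.
import time

TIMEOUT_SECONDS = 30 * 60  # 30-minute inactivity timeout, as described

class SurveySession:
    def __init__(self, pages):
        self.pages = pages                 # list of pages; each page is a list of question ids
        self.answers = {}                  # question id -> response
        self.current_page = 0              # resume point, persisted between logins
        self.last_activity = time.time()

    def timed_out(self) -> bool:
        return time.time() - self.last_activity > TIMEOUT_SECONDS

    def resume(self) -> int:
        """After logging back in, land on the page the respondent left off."""
        self.last_activity = time.time()
        return self.current_page

    def answer(self, qid, response):
        self.answers[qid] = response
        self.last_activity = time.time()

    def next_page(self, confirm_skip=False) -> bool:
        """Advance; require confirmation if any question on the page is omitted.
        The eligibility question (assumed to sit alone on page 0) cannot be skipped."""
        omitted = [q for q in self.pages[self.current_page] if q not in self.answers]
        if self.current_page == 0 and omitted:
            return False                   # eligibility must be answered
        if omitted and not confirm_skip:
            return False                   # respondent is prompted to confirm
        self.current_page += 1
        return True

# Usage with invented question ids:
session = SurveySession(pages=[["eligible"], ["q1", "q2"]])
session.answer("eligible", "yes")
assert session.next_page()                    # advances to page 1
assert not session.next_page()                # q1/q2 omitted: prompt, no advance
assert session.next_page(confirm_skip=True)   # confirmed skip is allowed
```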

Administration of the Isolation Techniques Questionnaire

By January 31, 2014, more than 5,000 persons had completed an Enrollment Questionnaire; 1,876 of these persons were invited to participate in the Isolation Techniques questionnaire because they met four criteria: (1) general dentist, (2) currently practicing/seeing patients, (3) reported performing at least some root canal treatment, and (4) selected the “limited” or “full” participation level, as compared to the “information only” level of participation in the network. Preprinted invitation letters were mailed to eligible practitioners, inviting them to participate and informing them that they would receive an e-mail with a link to the electronic version of the questionnaire. At that time, practitioners were given the option to request a paper version of the survey; none did. Practitioners were asked to complete the questionnaire within 2 weeks. Two reminder e-mails were sent at 2-week intervals to those who had not completed the questionnaire. A postal reminder was sent with the third e-mail reminder, again 2 weeks after the prior reminder (6 weeks after the initial e-mail request); a printed version of the questionnaire was included with the postal reminder, offering practitioners the option of completing the questionnaire on paper. After an additional 2 weeks, another postal reminder with a printed questionnaire was sent. If a response was still not received within 2 weeks, regional coordinators followed up to ensure that the network communications had been received and to ascertain whether the dentist was interested in participating. There was no specific protocol regarding the mode and order of contact by the regional coordinators. They followed up with telephone calls, faxes, and personal e-mails (from themselves, as opposed to the network Coordinating Center); some started with telephone calls, while others focused on e-mails. Additionally, each region holds annual meetings for practitioners to inform them about current and planned network studies and elicit their input. Practitioners in regions that held an annual practitioner meeting between February and April 2014 (South Central, Midwest, and Northeast) who had received at least the initial e-mail invitation but had not completed the survey were offered the opportunity to complete a paper survey at the meeting.
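
The contact sequence amounts to a schedule of six steps at 2-week intervals. The sketch below is ours, not the network’s software; it assumes a strict 14-day spacing, which the actual mailings approximated, and prints the planned timeline from the survey launch date.

```python
# Sketch of the six-step contact schedule described above (14-day spacing assumed).
from datetime import date, timedelta

STEPS = [
    "initial e-mail with survey link",
    "first e-mail reminder",
    "second e-mail reminder",
    "third e-mail reminder + first postal mailing of paper questionnaire",
    "second postal mailing of paper questionnaire",
    "regional coordinator follow-up (phone, fax, or personal e-mail)",
]

def contact_schedule(start: date):
    """Yield (day offset, planned date, step description) for one practitioner."""
    for i, step in enumerate(STEPS):
        offset = 14 * i
        yield offset, start + timedelta(days=offset), step

# January 31, 2014 was the start of data collection; collection closed at 12 weeks.
for offset, when, step in contact_schedule(date(2014, 1, 31)):
    print(f"day {offset:3d}  {when}  {step}")
```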

Data collection was closed 12 weeks after the original e-mail invitation. Practitioners or their business entities were remunerated US$50 for completing the questionnaire, provided they confirmed at the end of the survey that they would like remuneration (86% did so); this incentive was used because monetary incentives have consistently been shown to increase participation (VanGeest et al., 2007). Survey data were collected from January 31, 2014, to July 15, 2014; completion of the isolation techniques questionnaire was not linked in any way to when the enrollment questionnaire had been completed. After survey collection, regional coordinators’ follow-up logs were reviewed to ascertain whether the network communications (e-mail links and postal questionnaires) had been received, whether the practitioner had moved locations, and the number of contact attempts made.

To document test/retest reliability, 43 respondents completed the same questionnaire twice online. The mean (SD) time between test and retest was 15.5 (3.0) days. The agreement between Time 1 and Time 2 for individual questionnaire items was quantified using a mean weighted κ score, which was 0.62, with an interquartile range of 0.46–0.79.
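
As a hypothetical illustration of this statistic, the sketch below computes a linearly weighted kappa for one ordinal item answered at two time points, using invented responses and scikit-learn’s cohen_kappa_score; the study computed such a kappa for each item and reported the mean across items.

```python
# Linearly weighted kappa for a single ordinal item at Time 1 vs. Time 2.
# The responses below are invented; they are not study data.
from sklearn.metrics import cohen_kappa_score

time1 = [3, 2, 4, 1, 3, 2, 5, 4, 2, 3]  # invented 5-point ordinal responses
time2 = [3, 2, 3, 1, 4, 2, 5, 4, 2, 3]

kappa = cohen_kappa_score(time1, time2, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```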

Of the 1,876 dentists invited, 24 were deemed ineligible before beginning the questionnaire (4 had died, 15 were no longer practicing, and 5 no longer provided root canal treatment) and 136 were determined ineligible after completing the questionnaire (3 were no longer general dentists and 133 reported not doing at least one root canal treatment each month), leaving 1,716 eligible dentists; of these, 22 were active refusals and 194 were nonrespondents. Overall, 1,500 responded: 1,488 (87%) completed the entire survey, 6 answered only the first question, 3 erroneously checked that they were not a general dentist and consequently were electronically skipped to the end of the survey, and 3 completed varying parts of the first half of the questionnaire. The average time to complete the survey was 15 min.

Analysis

Participation at each stage was calculated two ways: (1) incremental, the proportion of the remaining eligible who participated; and (2) proportional, the proportion of all participants accrued at each stage. Participation was also categorized into three groups: (1) after the initial postal notification letter, recruited and completed electronically, that is, online; (2) recruited with e-mail and postal contact, completed on paper; and (3) required follow-up by regional coordinators, completed online or on paper. The significance of differences in the proportion of eligible dentists who participated according to practice/practitioner characteristics was ascertained in bivariate analysis using χ2 tests. Among those whom regional coordinators followed up, we also tested differences between those who did and did not participate and, among participants, between those who completed the survey online and on paper. Independent associations with participation were assessed using logistic regression. Where indicated, some categories were collapsed (e.g., region and practice type), bivariate analyses rerun, and the dichotomous grouping entered into the model. Characteristics with p < .20 in bivariate analysis were entered into the model; stepwise regression was then used, removing variables until only those with p < .05 remained. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated from the models. All analyses were performed using SAS 9.4 (SAS Institute, Cary, NC). Of the 1,488 completed surveys, 21 were completed at annual practitioner meetings. To enhance generalizability, the analysis described below excludes these 21 practitioners.
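
As a worked illustration of the two calculations, the short sketch below reproduces the incremental and proportional rates in Table 1 from the per-step completion counts (the study’s own analyses were done in SAS; this sketch is ours).

```python
# Reproduce Table 1's incremental and proportional participation rates from the
# per-step completion counts (1,695 eligible after excluding 21 meeting completions).
completed_by_step = {
    "initial e-mail": 519,
    "first e-mail reminder": 222,
    "second e-mail reminder": 97,
    "third e-mail/first postal reminder": 165,
    "second postal reminder": 223,
    "regional coordinator follow-up": 244,
}

eligible = 1695
total_completed = sum(completed_by_step.values())  # 1,470

remaining = eligible
for step, n in completed_by_step.items():
    incremental = n / remaining          # of remaining eligible, share participating
    proportional = n / total_completed   # of all participants, share accrued here
    print(f"{step:38s} incremental {incremental:4.0%}  proportional {proportional:4.0%}")
    remaining -= n

print(f"overall participation: {total_completed / eligible:.0%}")  # 87%
```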

Results

Completion Rates and Type

As shown in Table 1, 31% of the remaining eligible practitioners completed the survey after the initial e-mail; 19% and 10% after the second and third e-mail reminders, respectively; 19% and 32% after the fourth e-mail/first postal and second postal reminders, respectively; and, of the practitioners remaining after that, 52% completed the survey either more than 2 weeks after the second postal reminder but before regional coordinators followed up or after regional coordinators followed up. Proportionally, 35% of participants responded within 2 weeks of the first e-mail, an additional 22% within 2 weeks of the second and third e-mails, another 26% within 2 weeks of the fourth e-mail/first postal and second postal reminders, 4% more than 2 weeks after the second postal reminder but before being contacted by a regional coordinator, and 13% as a result of regional coordinator follow-up. Overall, 66% were recruited electronically; 21% with postal follow-up, completing on paper; and the final 13% with follow-up by regional coordinators, completing online or on paper. A total of 24% (353/1,470) were completed on paper.

Table 1.

Participation Rates by Recruitment Step.

| Recruitment Step | Eligible, n (a) | Participated, n (b) | Participated, % of Remaining Eligible | Days From Initial E-mail to Participation, Median (Interquartile Range) | Median Days From Prior Step to Participation |
|---|---|---|---|---|---|
| 1. Initial e-mail | 1,695 | 519 | 31 | 2 (1–4) | — |
| 2. First e-mail reminder | 1,176 | 222 | 19 | 16 (15–19) | 2 |
| 3. Second e-mail reminder | 954 | 97 | 10 | 30 (29–33.5) | 2 |
| 4. Third e-mail/first postal reminder | 857 | 165 | 19 | 54 (46–57) | 12 |
| 5. Second postal reminder | 692 | 223 | 32 | 62 (61–67) | 6 |
| 6. Regional coordinator follow-up (c) | 469 | 244 | 52 | 81 (75–91) (d) | 25 |
| Total | 1,695 | 1,470 | 87 |  |  |

a. Excludes 21 surveys completed at regional meetings.

b. Within 2 weeks of steps 1–5.

c. Of the 244, 55 were completed before regional coordinator contact.

d. Median number of days for those 55 was 75; for the remaining 189, the median was 83 days.

Time to Completion

The median time to completion was 2 days after the initial e-mail, for those completing before the reminder e-mail sent 2 weeks later. Similarly, there were spikes at 2 days after the second and third e-mail reminders, with the diminishing recruitment gains described above. Spikes after the postal reminders came later, at 12 and 6 days, respectively; these represented greater recruitment gains than the simple e-mail reminders. The recruitment time involving follow-up by regional coordinators was longer still, peaking at 25 days, but was nonetheless accompanied by a substantial gain in response.

Contribution by Regional Coordinators

There were 469 eligible practitioners from whom surveys had not been received 2 weeks after the second postal mail request and for whom regional coordinators followed up to ascertain whether they had received the study information. A total of 55 surveys were completed before regional coordinators attempted follow-up (28 online and 27 on paper). Of the 411 practitioners whom regional coordinators contacted or attempted to contact, 189 (46%) completed the survey. The median number of times a regional coordinator contacted or attempted to contact a practitioner was three (interquartile range: 2–3; range: 1–11); fewer contacts were needed when surveys were completed than when not (medians: 2 vs. 3, p < .001). Of the 189 completed surveys, 77 (41%) involved only one contact attempt.

Thirty practitioners were no longer at the practice of record; new practice information was obtained for 27, of whom 10 completed the survey. Only seven practitioners or their offices reported not receiving the survey information; when it was resent, by e-mail or fax, all seven completed the survey. The offices of three practitioners refused information (hung up) when regional coordinators tried to ascertain whether the information had been received. Overall, completed surveys were obtained from 46% (n = 189 of the 411) of practitioners after follow-up by regional coordinators. Of these, 127 (67%) were completed online and 62 (33%) on paper.

Associations With Overall Participation and by Type

In bivariate (Table 2) and adjusted analyses, higher proportions of practitioners from the Western region participated, as did those who were members of at least one dental association and those who either worked in large group (managed care) practices or were owners of private practices, compared with their counterparts. Among the 411 practitioners whom regional coordinators contacted or attempted to contact, there were no significant differences in practice/practitioner characteristics between those who ultimately participated and those who did not, based on bivariate analyses (Table 2). In adjusted analyses, however, there were significant differences. Male practitioners were more likely than females to ultimately participate after follow-up by regional coordinators (OR = 2.1, 95% CI = [1.2, 3.5], p = .006). The likelihood of ultimately participating decreased with years since dental degree (per 10 years, OR = 0.78, 95% CI = [0.66, 0.93], p = .004).
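
For readers who want to reproduce this style of adjusted analysis, the following is a minimal sketch of a logistic regression yielding ORs and 95% CIs. The data frame, variable names, and coefficients are all invented for illustration; the study’s own models were fit in SAS 9.4.

```python
# Logistic regression of ultimate participation on gender and years since
# dental degree (per 10 years), with ORs and 95% CIs from the fitted model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 411  # practitioners followed up by regional coordinators
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "decades_since_degree": rng.uniform(0, 4, n),  # per 10 years, as in the text
})
# Simulated outcome with invented effect sizes, for demonstration only.
logit = -0.2 + 0.7 * df["male"] - 0.25 * df["decades_since_degree"]
df["participated"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["male", "decades_since_degree"]])
fit = sm.Logit(df["participated"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)                 # exponentiated coefficients
ci = np.exp(fit.conf_int())                      # 95% CI on the OR scale
print(pd.concat(
    [odds_ratios.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
))
```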

Table 2.

Participation Rate, by Practitioner/Practice Characteristics.

| Practitioner/Practice Characteristic | Overall, n (%) | Participated, n (%) | Of Those Followed Up by Regional Coordinators (n = 411), Participated (n = 189), n (%) | Of Participants, Completed on Paper, n (%) |
|---|---|---|---|---|
| Gender |  |  |  |  |
|  Female | 389 (23) | 331 (85) | 39 (40) | 62 (19) |
|  Male | 1,292 (77) | 1,130 (87) | 149 (48) | 290 (26) |
|  p value |  | .2 | .15 | .01 |
| Race/ethnicity (a) |  |  |  |  |
|  White | 1,321 (79) | 1,156 (88) | 145 (47) | 292 (25) |
|  Black/African American | 82 (5) | 65 (79) | 9 (35) | 16 (25) |
|  Asian/Pacific Islander | 163 (10) | 145 (89) | 21 (54) | 24 (17) |
|  Other | 12 (1) | 9 (75) | 1 (25) | 1 (11) |
|  Hispanic/Latino | 91 (5) | 78 (86) | 12 (46) | 18 (23) |
|  p value |  | .14 | .5 | .2 |
| Age (years) |  |  |  |  |
|  <35 | 186 (11) | 157 (84) | 23 (44) | 14 (9) |
|  35–44 | 367 (22) | 328 (89) | 46 (55) | 64 (20) |
|  45–54 | 350 (21) | 305 (87) | 46 (51) | 90 (30) |
|  55–64 | 592 (35) | 520 (88) | 59 (44) | 136 (26) |
|  65 and older | 186 (11) | 154 (83) | 14 (31) | 48 (31) |
|  p value |  | .2 | .10 | <.001 |
| Years since dental school graduation |  |  |  |  |
|  <10 | 318 (19) | 274 (86) | 43 (50) | 38 (14) |
|  10–19 | 334 (20) | 296 (89) | 44 (54) | 66 (22) |
|  20–29 | 382 (23) | 334 (87) | 43 (46) | 96 (29) |
|  30+ | 655 (39) | 563 (86) | 57 (39) | 153 (27) |
|  p value |  | .6 | .10 | <.001 |
| Additional formal training after dental school |  |  |  |  |
|  No | 1,009 (60) | 865 (86) | 108 (43) | 228 (26) |
|  Yes | 686 (41) | 605 (88) | 81 (51) | 125 (21) |
|  p value |  | .14 | .13 | .02 |
| Membership in any dental organizations |  |  |  |  |
|  No | 228 (13) | 184 (81) | 24 (36) | 43 (23) |
|  Yes | 1,467 (87) | 1,286 (88) | 165 (48) | 310 (24) |
|  p value |  | .004 | .09 | .8 |
| Practice |  |  |  |  |
|  Practice type |  |  |  |  |
|   Owner of private practice | 1,239 (74) | 1,082 (87) | 134 (46) | 299 (28) |
|   Associate of small group private practice | 215 (13) | 174 (81) | 27 (40) | 29 (17) |
|   Member of large group practice (HP/PDA) (b) | 103 (6) | 98 (95) | 11 (69) | 8 (8) |
|   Public, community, and publicly funded | 70 (4) | 62 (89) | 9 (53) | 11 (18) |
|   Federal government, academic, and other managed care | 59 (4) | 49 (83) | 8 (42) | 5 (10) |
|   p value |  | .007 | .3 | <.001 |
|  More than one practice location |  |  |  |  |
|   No | 1,428 (84) | 1,239 (87) | 154 (45) | 307 (25) |
|   Yes | 263 (16) | 230 (87) | 35 (53) | 46 (20) |
|   p value |  | .8 | .2 | .12 |
|  Locale of practice |  |  |  |  |
|   Urban—inner city | 184 (11) | 156 (85) | 28 (49) | 39 (25) |
|   Urban—not inner city | 449 (27) | 400 (89) | 51 (50) | 101 (25) |
|   Suburban | 760 (45) | 660 (87) | 76 (44) | 144 (22) |
|   Rural | 289 (17) | 251 (87) | 33 (47) | 68 (27) |
|   p value |  | .5 | .8 | .3 |
| Patient population |  |  |  |  |
|  Percent patients with private insurance |  |  |  |  |
|   <40 | 244 (15) | 202 (83) | 35 (45) | 48 (24) |
|   40–79 | 987 (60) | 865 (88) | 103 (46) | 224 (26) |
|   p value |  | .09 | .9 | .06 |
|  Percent patients who come in regularly |  |  |  |  |
|   <50 | 310 (19) | 265 (85) | 42 (49) | 63 (24) |
|   50–79 | 995 (60) | 868 (87) | 106 (46) | 212 (24) |
|   80+ | 351 (21) | 311 (89) | 35 (45) | 69 (22) |
|   p value |  | .5 | .8 | .7 |
|  Region (c) |  |  |  |  |
|   Western | 186 (11) | 173 (93) | 21 (64) | 25 (14) |
|   Midwest | 162 (10) | 135 (83) | 16 (38) | 25 (19) |
|   South Central | 390 (23) | 344 (88) | 51 (53) | 112 (33) |
|   South Atlantic | 288 (17) | 252 (88) | 20 (36) | 63 (25) |
|   Northeast | 360 (21) | 310 (86) | 32 (39) | 72 (23) |
|   p value |  | .02 | .06 | <.001 |

a. Although race and Hispanic/Latino ethnicity are separate questions in the Enrollment Questionnaire, some Hispanic/Latino participants did not provide a race or indicated “Hispanic/Latino” as their race; race and ethnicity were therefore combined.

b. Either HealthPartners Dental Group in greater Minneapolis, MN, or Permanente Dental Associates in greater Portland, OR.

c. Reported on the Enrollment Questionnaire as the state, subsequently categorized into one of the six regions of the network.

Among participating practitioners, completion on paper was more common among those who were male, were older, had more years since dental school graduation, had no additional training, were owners of private practices, or were from the South Central region (Table 2). In adjusted analyses, all of these associations remained significant except years since dental school graduation.

Discussion

An exceptionally high participation rate (87%) was obtained. Each step produced incremental gains in response: diminishing gains with the two reminder e-mails, 19% and 10%, respectively; increasing gains with the postal reminders accompanied by a printed questionnaire, 19% and 32%, respectively; and the largest gain with the final step of personal contact, 52%. Response peaked at 2 days after the e-mail reminders, at 12 and 6 days after the postal reminders, and at 25 days for personal contact. Practice/practitioner characteristics differed by mode of response, paper versus electronic, but not by whether response was obtained only after regional coordinator personal contact.

In conducting any survey, regardless of mode (postal, telephone, electronic, or a combination), reminders are needed to obtain acceptable response rates. Four reminders have been advocated as an appropriate number (Dillman, 2000, 2007). Typically, there is decreasing gain with successive reminders; however, most studies that report gain by reminder do so using the same mode of reminder, primarily either electronic or postal. For example, Toledo and colleagues (2015), in an online survey of 5,433 primary health-care professionals in Spain, used four e-mail reminders sent at 10-day intervals; the overall response was only 36%. A 7% response was obtained in the 24 hr after the initial e-mail. Steady but diminishing responses of 1–2% per day were observed until the first e-mail reminder was sent after 10 days, after which a 4% response was observed in the next 24 hr. The pattern repeated, with smaller daily increases of 0.5–1% and smaller peaks of 2% and then 1%, following the second through fourth reminders. Staying within the same mode, electronic (e-mail), we also found diminishing gains across our two e-mail reminders. In contrast, we found increasing response when the mode of reminder was changed from e-mail to postal with a printed copy of the questionnaire included, and even more so when the reminder mode was changed from postal to personal contact. This was most notable for the last step, personal contact by regional coordinators. Excluding practitioners who responded before regional coordinators attempted contact, nearly half (46%) responded after being contacted; this was the highest proportional response of any step, and the absolute number responding (n = 189) was comparable to the prior two steps.

The 2008 National Sample Survey of Registered Nurses (U.S. Department of Health and Human Services, 2010) used a multimodal approach similar to ours. Its protocol differed in that the nurses had to enter a web address, whereas our participants only had to click on a link provided in an e-mail. Also, in the last step, the nursing survey could be completed over the telephone, an option not provided in our survey. Their overall response rate was 62%: 27% paper, 24% online, and 10% telephone. Stepwise response rates were not presented. The physician study most comparable to the current dental study is that of Kroth and colleagues (2009), who examined clinicians’ response rates across three medical PBRNs. Their survey, like ours, was of active network members with valid e-mail addresses. Their initial invitation was via e-mail, with five rounds of electronic solicitation for an online questionnaire and two rounds of a paper version mailed to nonresponders. The electronic solicitations (e-mails) were personalized, came from the physician’s local practice-based network, and contained a customized link to the online survey that provided automatic log-in. They had no final telephone follow-up. As with ours, the greatest response came within 2 days of the initial e-mail (12%), with diminishing responses to the second through fourth e-mails; the paper option sent with the third e-mail drew a modest response, and there was no discernible response to either the fifth e-mail or the second paper option. Their overall response rate was 61%: 46% online and 15% on paper. There were two primary differences between their recruitment methods and ours: (1) their initial invitation was via e-mail, whereas ours was postal mail followed immediately by e-mail with an embedded link; and (2) we had a last step of personal/staff outreach. From their graph, the estimated response within 2 weeks of the initial e-mail invitation is 15% (120/805); ours was 31%. While it is doubtful that a change in the mode of initial invitation would account for a doubling of response rate, it is conceivable that it could account for a 5–10% difference. We could find no studies that assessed differences in response rates by mode of initial invitation in a “defined” population, that is, a nonrandom group such as a practice-based network. Our response rate, excluding the last step, was 76% ([1,470 − 244 + 55]/1,695), which would make the response rates in the two studies comparable.

There can be many obstacles when surveying health professionals. The importance of these surveys, and their possible difficulties, is why the NCI convened a workshop in November 2010 to discuss the challenges (Klabunde et al., 2012). The first topic area identified for improvement was identification of an appropriate sampling frame (the NCI workshop focused on physicians). Ideally, a sampling frame should be complete and current, include no duplications, and have no ineligible persons; such a sampling frame is very rare. Our sampling frame of network dentists meets these criteria except that it contained some ineligibles, who were screened out via the first questions on the survey. The underlying question regarding our sampling frame is whether network dentists are representative of U.S. dentists. Network members are not recruited randomly, so factors associated with network participation (e.g., an interest in clinical research) may make network dentists unrepresentative of dentists at large. While it cannot be asserted that network dentists are entirely representative, we can state that they have much in common with dentists at large, while also offering substantial diversity in these characteristics. This assertion is warranted because (1) substantial percentages of network dentists are represented in the various response categories of the characteristics listed in Table 2, (2) findings from several network studies document that network dentists report patterns of diagnosis and treatment that are similar to patterns determined from nonnetwork general dentists (Gordan et al., 2009; Norton et al., 2014), and (3) network dentists are similar to nonnetwork dentists in the best available national source, the 2010 American Dental Association Survey of Dental Practice (American Dental Association, 2012; Makhija et al., 2009). Although not stated as an objective of the NCI workshop, Shelley et al. (2012), in their review of dental studies, expanded on the desirable properties of a sampling frame. In an appropriate sampling frame, every member should have an equal chance of being selected, and random sampling should be used. If the sampling frame is large, an appropriate sample size estimate should be made so as to avoid having to survey the entire sampling frame; sampling the entire frame is a waste of resources (Dillman, Smyth, & Christian, 2009). We estimated that we needed a sample of 1,000. Because we were unsure what the participation rate would be, and wanted to assess the yield of having staff follow up with nonresponders as a last step, we decided to survey the entire group.

A second topic from the NCI workshop concerned how to optimize a mixed-mode approach, namely, one that uses both postal and electronic mail. Response rates are usually higher for mail than for telephone or electronic surveys, with telephone being extraordinarily difficult; at the same time, electronic communication will only become more pervasive than it already is. We had no randomization component to make direct comparisons, but our design was intended to optimize response. We started with mail notification, as studies using electronic notification (e-mail) had obtained poor response. We followed the mail notification with e-mail because this allowed embedding a link on which the practitioner could simply click to begin the survey. We sent two e-mail reminders at 2-week intervals; other studies have found that the gain from additional reminders decreases markedly after two. Also, in our prior work, we found that practitioners who completed paper forms differed from those who completed electronically (Funkhouser et al., 2014); thus, we knew that we wanted to include a mail component as well.

The third topic area from the NCI workshop concerned the role of gatekeepers. Our last step, having network staff follow up with nonresponders, addressed that challenge. Of the practitioners who completed the survey at this step, almost half (44%) required only one call, in essence getting past a gatekeeper, yet there were others who were not responsive even after 10–11 attempts by call, e-mail, or fax.

Regarding the issue of personal outreach, we believe that this study demonstrates its potential utility for other organizations. Not only did outreach increase the number of practitioners who responded, it also identified practitioners who were no longer eligible, thereby reducing the denominator. Because organizations will have contact information for their sampling frame, or may be able to access public information to identify additional contact information (which our research assistants sometimes used), they can make their own assessment of the utility of this approach for their sample or some targeted subset of it (based on the subset’s projected potential for a higher response rate). We estimate that following up with the 411 practitioners who had not responded after the e-mail and postal contacts required approximately 15 work-days of staff time. The majority of practitioners in our study had not participated in other studies, nor had they had previous contact with our research assistants. It is typically only after network enrollees have completed an in-office clinical study or attended a regional meeting of network practitioners that they begin to develop a professional relationship with the network’s research assistants.

A limitation of the study may be the relatively uncommon sampled population, namely, members of the National Dental PBRN. Because this was the first survey of the new national network (recruitment into the network began in April 2012 and the survey was conducted in early 2014), practitioners may have been especially likely to respond. We think this largely explains the high response, 31%, to our initial invitation. The incremental increases with the second and third e-mail reminders and the two postal reminders should be applicable to other researchers and populations where some type of existing relationship exists. The comparability of the response rates to the initial contact and early reminders between our study and that of Kroth et al. (2009) supports this. The large proportional and absolute gain with the final step, personal contact, surprised us. We do not know whether others will find it similarly beneficial; we present it so that others may try. The gain from personal outreach comprised 13% of our respondents, not much larger than the 10% completed by telephone in the 2008 National Sample Survey of Registered Nurses (U.S. Department of Health and Human Services, 2010). Although the nurses’ survey was completed on the telephone and ours was not, both used personal contact, which can be expensive and may not be as fruitful in other populations. In their review of surveys of nurses, VanGeest and Johnson (2011) found that nurses responded to telephone strategies comparably to mail. This differs markedly from studies among physicians, which find poor response to telephone strategies (Cho et al., 2013). Gatekeepers in medical offices may make personal contact by telephone extremely difficult. In our study, personal contact did not necessarily mean contact with the dentist himself or herself, and usually it was not; it entailed reaching a person in the office and verifying that the dentist had received the materials. There have been no comparable studies evaluating telephone strategies for obtaining survey responses from dentists. We speculate that trying to get a dentist to complete a survey on the telephone would have had a very poor response, as it has with physicians.

In summary, we believe that other organizations, such as other PBRNs (of which there are many), membership associations (such as health-care professional organizations), or large cohorts from ongoing studies, may be able to use our methods in a cost-efficient manner to maximize their response rates. Using a multimodal protocol, it is possible to obtain a high participation rate with a large online component. Although the response steps were not randomized, we believe it is unlikely that additional e-mail reminders, without postal and/or telephone follow-up, would have meaningfully increased the response above 57%. Also, as we have reported previously from an earlier survey, there appears to be a difference between practitioners who respond on paper and those who do so online (Funkhouser et al., 2014). Of note, the late responders (those who did not participate until after follow-up by regional coordinators) did not differ from early responders in any characteristic assessed, including mode of completion. Also, the high absolute and proportional response (52%) to the last step (personal follow-up) is noteworthy. Although this step adds to the cost, the yield is large.

Acknowledgments

The authors thank Dr. Wynne Norton, assistant professor, School of Public Health, University of Alabama at Birmingham, for her work during the development of the study protocol and questionnaire. An Internet site devoted to details about the nation’s network is located at http://NationalDentalPBRN.org. Persons who comprise the National Dental PBRN Collaborative Group are listed at http://NationalDentalPBRN.org/users/publications. Opinions and assertions contained herein are those of the authors and are not to be construed as necessarily representing the views of the respective organizations or the National Institutes of Health. The informed consent of all human subjects who participated in this investigation was obtained after the nature of the procedures had been explained fully.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by grant U19-DE-22516 from the National Institutes of Health (NIH).

Footnotes

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

1. American Dental Association. American Dental Association Survey Center: The 2010 survey of dental practice. Chicago, IL: Author; 2012, Jul.
2. Asch DA, Christakis NA, Ubel PA. Conducting physician mail surveys on a limited budget: A randomized trial comparing $2 bill versus $5 bill incentives. Medical Care. 1998;36:95–99. doi: 10.1097/00005650-199801000-00011.
3. Asch DA, Jedrziewski MK, Christakis NA. Response rates to mail surveys published in medical journals. Journal of Clinical Epidemiology. 1997;50:1129–1136. doi: 10.1016/s0895-4356(97)00126-1.
4. Baruch Y, Holtom BC. Survey response rates and trends in organizational research. Human Relations. 2008;61:1139–1160.
5. Beebe TJ, Locke GR 3rd, Barnes SA, Davern ME, Anderson KJ. Mixing web and mail methods in a survey of physicians. Health Services Research. 2007;42:1219–1234. doi: 10.1111/j.1475-6773.2006.00652.x.
6. Bjerklin K, Bondemark L. Management of ectopic maxillary canines: Variations among orthodontists. The Angle Orthodontist. 2008;78:852–859. doi: 10.2319/070307-306.1.
7. Braithwaite D, Emery J, de Lusignan S, Sutton S. Using the Internet to conduct surveys of health professionals: A valid alternative? Family Practice. 2003;20:545–551. doi: 10.1093/fampra/cmg509.
8. Burt CW, Woodwell D. Tests of methods to improve response to physician surveys. Paper presented at the November 2005 Federal Committee on Statistical Methodology; Arlington, VA; 2005.
9. Cho YI, Johnson TP, VanGeest JB. Enhancing surveys of health care professionals: A meta-analysis of techniques to improve response. Evaluation & the Health Professions. 2013;36:382–407. doi: 10.1177/0163278713496425.
10. Cull WL, O’Connor KG, Sharp S, Tang SS. Response rates and response bias for 50 surveys of pediatricians. Health Services Research. 2005;40:213–226. doi: 10.1111/j.1475-6773.2005.00350.x.
11. Cummings SM, Savitz LA, Konrad TR. Reported response rates to mailed physician questionnaires. Health Services Research. 2001;35:1347–1355.
12. de Leeuw ED. Counting and measuring online: The quality of internet surveys. Bulletin of Sociological Methodology. 2012;114:68–78.
13. Delnevo CD, Abatemarco DJ, Steinberg MB. Physician response rates to a mail survey by specialty and timing of incentive. American Journal of Preventive Medicine. 2004;26:234–236. doi: 10.1016/j.amepre.2003.12.013.
14. Dillman DA. Mail and internet surveys: The tailored design method, 2007 update with new internet, visual, and mixed-mode guide. Hoboken, NJ: John Wiley; 2007.
15. Dillman DA, Smyth JD, Christian LM. Internet, mail and mixed-mode surveys: The tailored design method. Hoboken, NJ: John Wiley; 2009.
16. Field TS, Cadoret CA, Brown ML, Ford M, Greene SM, Hill D, … Zapka JM. Surveying physicians: Do components of the “Total Design Approach” to optimizing survey response rates apply to physicians? Medical Care. 2002;40:596–606. doi: 10.1097/00005650-200207000-00006.
17. Flanigan TS, McFarlane E, Cook S. Conducting survey research among physicians and other medical professionals: A review of current literature. Section on Survey Research Methods, AAPOR; 2008:4136–4147.
18. Florida Dental Care Study. 2015. Retrieved from http://nersp.nerdc.ufl.edu/~gilbert/
19. Funkhouser E, Fellows JL, Gordan VV, Rindal DB, Foy PJ, Gilbert GH, for the National Dental Practice-Based Research Network Collaborative Group. Supplementing online surveys with mailed option to reduce bias and improve response rate: The National Dental PBRN. Journal of Public Health Dentistry. 2014;74:276–282. doi: 10.1111/jphd.12054.
20. Gilbert GH, Richman JS, Gordan VV, Rindal DB, Fellows JL, Benjamin PL, … Williams OD, for the Dental Practice-Based Research Network Collaborative Group. Lessons learned during the conduct of clinical studies in The Dental PBRN. Journal of Dental Education. 2011;75:453–465.
21. Gilbert GH, Williams OD, Korelitz JJ, Fellows JL, Gordan VV, Makhija SK, Foy PJ, for the National Dental Practice-Based Research Network Collaborative Group. Purpose, structure and function of the United States National Dental Practice-Based Research Network. Journal of Dentistry. 2013;41:1051–1059. doi: 10.1016/j.jdent.2013.04.002.
22. Gordan VV, Garvan CW, Heft MW, Fellows J, Qvist V, Rindal DB, Gilbert GH, for the Dental Practice-Based Research Network Collaborative Group. Restorative treatment thresholds for interproximal primary caries based on radiographic images: Findings from The Dental PBRN. General Dentistry. 2009;57:654–663.
23. Guise V, Chambers M, Valimaki M, Makkonen P. A mixed-mode approach to data collection: Combining web and paper questionnaires to examine nurses’ attitudes to mental illness. Journal of Advanced Nursing. 2010;66:1623–1632. doi: 10.1111/j.1365-2648.2010.05357.x.
24. Halpern SD, Ubel PA, Berlin JA, Asch DA. Randomized trial of $5 versus $10 monetary incentives, envelope size, and candy to increase physician response rates to mailed questionnaires. Medical Care. 2002;40:834–839. doi: 10.1097/00005650-200209000-00012.
25. Hawley KM, Cook JR, Jensen-Doss A. Do non-contingent incentives increase survey response rates among mental health providers? A randomized trial comparison. Administration and Policy in Mental Health. 2009;36:343–348. doi: 10.1007/s10488-009-0225-z.
26. Hill CA, Fahrney K, Wheeless SC, Carson CP. Survey response inducements for registered nurses. Western Journal of Nursing Research. 2006;28:322–334. doi: 10.1177/0193945905284723.
27. Hing E, Schappert SM, Burt CW, Shimizu IM. Effects of form length and item format on response patterns and estimates of physician office and hospital outpatient department visits: National Ambulatory Medical Care Survey and National Hospital Ambulatory Medical Care Survey. Vital Health Statistics. 2005;2:1–32.
28. Jepson C, Asch DA, Hershey JC, Ubel PA. In a mailed physician survey, questionnaire length had a threshold effect on response rate. Journal of Clinical Epidemiology. 2005;58:103–105. doi: 10.1016/j.jclinepi.2004.06.004.
29. Johnson TP, Wislar JS. Response rates and nonresponse errors in surveys. Journal of the American Medical Association. 2012;307:1805–1806. doi: 10.1001/jama.2012.3532.
30. Kasprzyk D, Montano DE, St. Lawrence JS, Phillips WR. The effects of variations in mode of delivery and monetary incentive on physicians’ responses to a mailed survey assessing STD practice patterns. Evaluation & the Health Professions. 2001;24:3–17. doi: 10.1177/01632780122034740.
31. Keating NL, Zaslavsky AM, Goldstein J, West DW, Ayanian JZ. Randomized trial of $20 versus $50 incentives to increase physician survey response rates. Medical Care. 2008;46:878–881. doi: 10.1097/MLR.0b013e318178eb1d.
32. Kellerman SE, Herold J. Physician response to surveys: A review of the literature. American Journal of Preventive Medicine. 2001;20:61–67. doi: 10.1016/s0749-3797(00)00258-0.
33. Klabunde CN, Willis GB, McLeod CC, Dillman DA, Johnson TP, Greene SM, Brown ML. Improving the quality of surveys of physicians and medical groups: A research agenda. Evaluation & the Health Professions. 2012;35:477–506. doi: 10.1177/0163278712458283.
34. Kroth PJ, McPherson L, Leverence R, Pace W, Daniels E, Rhyne RL, Williams RL. Combining web-based and mail surveys improves response rates: A PBRN study from PRIME Net. Annals of Family Medicine. 2009;7:245–248. doi: 10.1370/afm.944.
35. Leung GM, Ho LM, Chan MF, Johnston JM, Wong FK. The effects of cash and lottery incentives on mailed surveys to physicians: A randomized trial. Journal of Clinical Epidemiology. 2002;55:801–807. doi: 10.1016/s0895-4356(02)00442-0.
36. Leung GM, Johnston JM, Saing H, Tin KYK, Wong IOL, Ho L. Prepayment was superior to postpayment cash incentives in a randomized postal survey among physicians. Journal of Clinical Epidemiology. 2004;57:777–784. doi: 10.1016/j.jclinepi.2003.12.021.
37. Makhija SK, Gilbert GH, Rindal DB, Benjamin PL, Richman JS, Pihlstrom DJ, for the Dental Practice-Based Research Network Collaborative Group. Dentists in practice-based research networks have much in common with dentists at large: Evidence from the Dental Practice-Based Research Network. General Dentistry. 2009;57:270–275.
38. McCrea SJ. Pre-operative radiographs for dental implants—Are selection criteria being followed? British Dental Journal. 2008;204:675–682; discussion 666. doi: 10.1038/sj.bdj.2008.524.
39. McFarlane E, Olmsted MG, Murphy J, Hill CA. Nonresponse bias in a mail survey of physicians. Paper presented at the May 2006 annual conference of the American Association for Public Opinion Research; Montreal, Quebec; 2006.
40. McLeod CC, Klabunde CN, Willis GB, Stark D. Health care provider surveys in the United States, 2000–2010: A review. Evaluation & the Health Professions. 2013;36:106–126. doi: 10.1177/0163278712474001.
41. Mutyabule TK, Whaites EJ. Survey of radiography and radiation protection in general dental practice in Uganda. Dentomaxillofacial Radiology. 2002;31:164–169. doi: 10.1038/sj/dmfr/4600685.
42. National Dental Practice-Based Research Network. Retrieved from http://www.nationaldentalpbrn.org/
43. National Dental Practice-Based Research Network. Study Results Page, section entitled “Isolation Techniques Used During Root Canal Treatment.” Retrieved from http://www.nationaldentalpbrn.org/study-results.php
44. Norton WE, Funkhouser E, Makhija SK, Gordan VV, Bader JD, Rindal DB, Gilbert GH, for the National Dental Practice-Based Research Network Collaborative Group. Concordance between clinical practice and published evidence: Findings from the National Dental Practice-Based Research Network. Journal of the American Dental Association. 2014;145:22–31. doi: 10.14219/jada.2013.21.
45. Paul CL, Walsh RA, Tzelepis F. A monetary incentive increases postal survey response rates for pharmacists. Journal of Epidemiology and Community Health. 2005;59:1099–1101. doi: 10.1136/jech.2005.037143.
46. Robertson J, Walkom EJ, McGettigan P. Response rates and representativeness: A lottery incentive improves physician survey return rates. Pharmacoepidemiology and Drug Safety. 2005;14:571–577. doi: 10.1002/pds.1126.
47. Rogelberg S, Stanton J. Understanding and dealing with organizational survey nonresponse. Organizational Research Methods. 2007;10:195–209.
48. SAS Institute, Inc. SAS/STAT version 9.4. Cary, NC: SAS Institute; 2014.
49. Schleyer TK, Forrest JL. Methods for the design and administration of web-based surveys. Journal of the American Medical Informatics Association. 2000;7:416–425. doi: 10.1136/jamia.2000.0070416.
50. Shelley AM, Brunton P, Horner K. Questionnaire surveys of dentists on radiology. Dentomaxillofacial Radiology. 2012;41:267–275. doi: 10.1259/dmfr/58627082.
51. Stagnitti MN, Beauregard K, Solis A. Design, methods, and field results of the Medical Expenditure Panel Survey Medical Provider Component (MEPS-MPC)—2006 calendar year data. Methodology Report No. 23. Rockville, MD: Agency for Healthcare Research and Quality; 2008, Nov 7.
52. Sudman S. Mail surveys of reluctant professionals. Evaluation Review. 1985;9:349–360.
53. Sutton F, Ellituv ZN, Seed R. A survey of self-perceived educational needs of general dental practitioners in the Merseyside region. Primary Dental Care: Journal of the Faculty of General Dental Practitioners. 2005;12:78–82. doi: 10.1308/1355761054348468.
54. Toledo D, Aerny N, Soldevila N, Baricot M, Godoy P, Castilla J, Diaz J, for the CIBERESP Working Group for the Survey on Influenza Vaccination in Primary Health Care Workers. Managing an online survey about influenza vaccination in primary healthcare workers. International Journal of Environmental Research and Public Health. 2015;12:541–553. doi: 10.3390/ijerph120100541.
55. Ulrich CM, Danis M, Koziol D, Garrett-Mayer E, Hubbard R, Grady C. Does it pay to play? A randomized trial of prepaid financial incentives and lottery incentives in surveys of non-physician healthcare professionals. Nursing Research. 2005;54:178–183. doi: 10.1097/00006199-200505000-00005.
56. U.S. Department of Health and Human Services, Health Resources and Services Administration. The registered nurse population: Findings from the 2008 National Sample Survey of Registered Nurses. Rockville, MD: 2010. Retrieved from http://healthcaredelivery.cancer.gov/physician_surveys/appendix.html
57. VanGeest JB, Johnson TP. Surveying nurses: Identifying strategies to improve participation. Evaluation & the Health Professions. 2011;34:487–511. doi: 10.1177/0163278711399572.
58. VanGeest JB, Johnson TP, Welch VL. Methodologies for improving response rates in surveys of physicians: A systematic review. Evaluation & the Health Professions. 2007;30:303–321. doi: 10.1177/0163278707307899.
59. VanGeest JB, Wynia MK, Cummins DS, Wilson IB. Effects of different monetary incentives on the return rate of a national survey of physicians. Medical Care. 2001;39:197–201. doi: 10.1097/00005650-200102000-00010.
60. van Selm M, Jankowski NW. Conducting online surveys. Quality & Quantity. 2006;40:435–456.
