PLOS One. 2023 Aug 22;18(8):e0289628. doi: 10.1371/journal.pone.0289628

A randomised controlled trial of email versus mailed invitation letter in a national longitudinal survey of physicians

Benjamin Harrap 1, Tamara Taylor 2, Grant Russell 3, Anthony Scott 4,*
Editor: Fares Alahdab
PMCID: PMC10443851  PMID: 37607168

Abstract

Despite their low cost, email invitations to distribute surveys to medical practitioners have been associated with lower response rates. This research compares response rates between an email approach plus online completion and a mailed invitation letter plus a choice of online or paper completion. A parallel randomised controlled trial was conducted during the 11th annual wave of the nationally representative Medicine in Australia: Balancing Employment and Life (MABEL) longitudinal survey of doctors. The control group was invited using a mailed paper letter (including a paper survey plus instructions to complete online) and three mailed paper reminders. The intervention group was approached in the same way apart from the second reminder, when they were approached by email only. The primary outcome is the response rate and the statistical analysis was blinded. 18,247 doctors were randomly allocated to the control (9,125) or intervention group (9,122), with 9,107 and 9,108 respectively included in the analysis. Using intention to treat analysis, the response rate in the intervention group was 35.92% compared to 37.59% in the control group, a difference of -1.66 percentage points (95% CI: -3.06 to -0.26). The difference was larger for General Practitioners (-2.76 percentage points, 95% CI: -4.65 to -0.87) than for other specialists (-0.47 percentage points, 95% CI: -2.53 to 1.60). For those who supplied an email address, the average treatment effect on the treated was larger in magnitude at -2.63 percentage points (95% CI: -4.50 to -0.75) for all physicians, -3.17 percentage points (95% CI: -5.83 to -0.53) for General Practitioners, and -2.10 percentage points (95% CI: -4.75 to 0.56) for other specialists. For qualified physicians, using email to invite participants to complete a survey leads to lower response rates compared to a mailed letter. These lower response rates need to be traded off against the lower costs of using email rather than mailed letters.

Background

Web surveys have consistently lower response rates than all other survey modes [1]. Surveys of medical practitioners remain a key source of information about clinical practice, health service delivery, and clinical attitudes and experience. A key issue with survey data is that external validity can be low: the sample may be unrepresentative because of response bias caused by recruitment methods and the non-random selection of physicians who complete the survey. Although a low response rate does not necessarily mean low external validity [2], response rates remain a central focus of the survey methods literature for physicians [3–5].

Systematic reviews and meta-analyses have examined different methods of increasing response rates in surveys of medical practitioner populations [3,6–8], such as changing features of survey design and offering incentives. Email contact and online survey completion are popular because costs are lower, but research has shown that response rates also tend to be lower, with a mailed approach more effective and recommended [3]. For example, in a meta-analysis of 48 studies of health professionals, three studies found that mailed surveys were associated with higher response rates than online/web modes, with no difference in response rates between online modes and mixed modes [3]. Pit et al. [6] conducted a systematic review of methods used to increase response rates for GPs, and found postal surveys were more effective than phone or email surveys (as a singular method of distribution), and that a sequential mixed mode of reminders was more effective than using online only or online and paper surveys concurrently. Beebe et al. [9] found that a sequential mixed-mode (web followed by mail) survey of health professionals had a higher response rate than mail only, but found no statistically significant differences between mail only and web only, though the sample sizes were small. No differences were found between web only, mail only, and mixed modes in a more recent randomised study of physicians [10]. Other key studies have examined mixed modes that compare combinations of mail and online approaches, but do not directly compare mail and online [11–13].

Most of these studies used data that are now more than 10 years old. As the use of email and the internet becomes more universal, including the more widespread use of electronic medical records, it is important to re-examine this issue. Nevertheless, for physician cohorts who are less familiar with the internet, mainly older physicians, there is uncertainty as to whether response rates would differ, and a risk that response rates will be lower with an email approach or online completion. Older people are less likely to respond to email than younger people [14,15], and if a mailed approach is used and physicians are given the choice between paper or online completion, the latter is less likely for older physicians [16].

The aim of this research is to compare response rates between an email approach and a mailed approach within a national longitudinal survey of physicians. More specifically, we introduce an email approach in the second of three reminders sent to non-responding physicians. In the first ten annual waves of the survey, the main mailout and all three reminders were delivered by mail only. Our null hypothesis was that there would be no difference in the response rate when delivering the reminder by email or mail.

Methods

Reporting and design of the randomised trial are based on the Consolidated Standards of Reporting Trials (CONSORT) guidelines [17]. The study was approved by The University of Melbourne Faculty of Business and Economics Human Ethics Advisory Group (Ref. 0709559) and the Monash University Standing Committee on Ethics in Research Involving Humans (Ref: 195535 CF07/1102–2007000291). Participant consent was obtained through voluntarily completing the survey.

Participants

The research was conducted within the context of the Medicine in Australia: Balancing Employment and Life (MABEL) survey. This was a longitudinal panel survey of all medical practitioners in Australia, collecting 11 annual waves of data from around 9,000 to 10,000 physicians per wave [18]. The original respondents in Wave 1 (2008) were followed up annually, with the addition of a cohort of new doctors entering the sample frame from Wave 2 and each subsequent wave [19]. Each wave therefore had a mixture of doctors from different cohorts. Responses for each wave were gathered using a sequential mixed-mode design based on an earlier RCT [11]. The MABEL survey is sent to all types of medical practitioners in Australia.

The sample frame for MABEL is the Medical Directory of Australia, a national database of doctors held by the Australasian Medical Publishing Company (AMPCo). We use participants from Wave 11, administered between August 2018 and April 2019. Doctors were excluded if they had previously requested to withdraw from the MABEL survey or were known to be deceased. Junior doctors were excluded because a small experiment in 2016 supported the use of an email approach for junior doctors, which was adopted in subsequent waves for this group only [18].

The invitation included a mailed letter that contained unique log-in details for online completion to enable longitudinal tracking, as well as a paper copy of the survey with a unique username printed on the cover. Respondents could choose the mode of completion. The first reminder used a mailed paper letter containing instructions for online completion but no paper survey. The second reminder used a mailed letter with instructions for online completion and included a paper copy of the survey. The third reminder included only a mailed paper letter with instructions for online completion.

Intervention

The mailout for the intervention group included an email approach for the second reminder. Both the intervention and control groups were approached four times: the initial invitation plus three reminders. In the control group, all four approaches used a mailed paper letter sent to each participant's work address. All survey materials are available at www.mabel.org.au.

The intervention group only differed at the second reminder, where they were approached by email and could only complete online, receiving no paper letter or paper copy of the survey. The comparison between the intervention and control group therefore includes a different method of approach and a different method of completion: email approach plus online completion versus mailed approach plus a choice of online or paper completion. The email included the same text as the paper letter plus a link and instructions for online completion. Emails were sent to email addresses from the AMPCo database or from email addresses provided by participants in earlier waves of the survey.

Outcomes

The primary outcome for this study is the response rate at the end of recruitment. A medical practitioner was considered to have responded if they returned their survey (via mail or online completion) with Section A completed, which included questions on whether they were currently participating in clinical practice, and with at least one question answered from Section B. Surveys returned blank were counted as refusals to participate. The response rate was calculated as the total number of responses divided by the total number of surveys distributed (minus surveys that could not be sent by the mailing house because the doctor was deceased or had no valid mailing address).
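The two rules above (what counts as a response, and how the rate is calculated) can be sketched in code. This is an illustrative reconstruction, not the authors' processing code, and the function and field names are hypothetical:

```python
# Sketch of the response definition and response-rate calculation described
# above. Names are hypothetical; the MABEL processing pipeline is not public.

def is_response(section_a_complete: bool, section_b_answered: int) -> bool:
    """A returned survey counts as a response only if Section A is complete
    and at least one Section B question was answered; blank returns count
    as refusals, not responses."""
    return section_a_complete and section_b_answered >= 1

def response_rate(n_responses: int, n_distributed: int, n_undeliverable: int) -> float:
    """Responses divided by surveys distributed, excluding surveys the mailing
    house could not send (doctor deceased or no valid mailing address)."""
    return n_responses / (n_distributed - n_undeliverable)
```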

Sample size

The sample for the trial included 18,247 GPs and non-GP specialists eligible to be invited to complete a survey in Wave 11. This included: (i) 13,382 doctors who had previously completed at least one MABEL survey since 2008 (defined as continuing doctors); (ii) a cohort of 1,862 doctors new to the sample frame in 2018; and (iii) 3,003 doctors from a 'boost' sample comprising a 10% random sample of those who had never responded to an invitation to participate in MABEL. The total sample size of 18,247 doctors is sufficient to detect a two-sided difference of at least two percentage points in the response rate (alpha 0.05 and power 0.8). This assumes a response rate of 42.4% in the control group (an estimate from Wave 10 of MABEL).
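The sample-size reasoning can be checked with the standard two-proportion power formula. The software the authors actually used for this calculation is not stated, so treat the following stdlib-only Python as an illustrative cross-check under that standard formula, not as their computation:

```python
# Approximate sample size per arm for detecting a difference between two
# proportions (two-sided test), using the usual normal-approximation formula.
from statistics import NormalDist

def n_per_group(p1: float, diff: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """n per arm to detect a difference `diff` between p1 and p2 = p1 - diff."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided alpha
    z_b = NormalDist().inv_cdf(power)            # critical value for power
    p2 = p1 - diff
    pbar = (p1 + p2) / 2
    num = (z_a * (2 * pbar * (1 - pbar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
    return (num / diff) ** 2

# With the Wave 10 control response rate of 42.4% and a 2-percentage-point
# difference, this gives roughly 9,500 doctors per arm, slightly above the
# ~9,120 per arm allocated here, consistent with a minimum detectable
# difference of just over two percentage points.
```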

Randomisation

A parallel-arm design with 1:1 allocation was used, with 18,247 doctors randomly allocated to either the control or intervention group. Allocation was stratified by doctor type (GP, specialist), continuing or new, and boost sample, to ensure the proportions of these groups of doctors in the intervention and control groups were the same. This is important because new doctors and boost-sample doctors are likely to have lower response rates, and specialists had higher response rates than GPs in previous waves. We tested a two-sided hypothesis as it was unclear, a priori, whether the intervention group would have a higher or lower response rate. Randomisation was performed using the sample command in Stata 15.1 statistical software.
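Stratified 1:1 allocation of this kind can be sketched as follows. The authors used Stata's `sample` command; this stdlib Python version is illustrative only, and the stratum encoding is a hypothetical stand-in for the three stratification variables named above:

```python
# Minimal sketch of stratified 1:1 randomisation: shuffle within each stratum,
# then split each stratum in half, so group proportions match per stratum.
import random
from collections import defaultdict

def stratified_allocate(doctors, seed=0):
    """doctors: list of (doctor_id, stratum) pairs, where stratum encodes
    doctor type (GP/specialist), continuing/new, and boost-sample status.
    Returns {doctor_id: 'control' | 'intervention'} with a 1:1 split
    within every stratum (odd strata differ by at most one doctor)."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for doc_id, stratum in doctors:
        by_stratum[stratum].append(doc_id)
    allocation = {}
    for ids in by_stratum.values():
        rng.shuffle(ids)
        half = len(ids) // 2
        for doc_id in ids[:half]:
            allocation[doc_id] = "control"
        for doc_id in ids[half:]:
            allocation[doc_id] = "intervention"
    return allocation
```

Splitting within strata (rather than one global shuffle) is what guarantees the two arms have the same mix of GPs/specialists, continuing/new doctors, and boost-sample doctors.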

Randomisation took place (by TT) before the first invitation for Wave 11 was mailed out in August 2018. Group allocation was kept separate (in a separate electronic data file) from the main mailout and the first reminder so researchers handling the responses and reminders were blinded to group allocation during this process. The second reminder was prepared in late November 2018 by TT. The list of those eligible for a second reminder was then merged with the file containing the intervention and control group identifiers to indicate who should receive an email. A separate file indicating whether doctors had an email address was also merged onto this file. AMPCo was sent a list of doctor identifiers indicating if they should be approached using a mailed letter or an email.

Statistical methods

The analysis was conducted by BH who was blinded to group allocation until after the analysis was complete and checked, and who was not involved in the randomisation or any data collection. Baseline characteristics of the intervention and control groups are compared with each other, and with the population of medical practitioners in Australia. The main analysis was based on intention to treat, which estimates the average treatment effect (ATE).

The proportions responding in each group were compared using a 2x2 table and a Pearson chi-squared test. An adjusted analysis was also conducted using multivariable logistic regression to examine the probability of response between the two groups after adjusting for covariates. Sub-group analyses were also conducted using separate logistic regressions for GPs and other specialists. Covariates included age, gender, whether qualified overseas, quintiles of the socio-economic status of patients in each respondent's postcode (measured using the Socio-Economic Indexes For Areas (SEIFA) Index of Relative Socio-Economic Disadvantage [20]), and the proportions of the population in the postcode over 65 years old and under 5 years old. Finally, the rurality of the work location was measured using the Modified Monash Model (MMM) classification [21]: major cities (MMM1); areas within 20km of a town with a population of 50,000 (MMM2); areas within 15km of a town with a population of 15,000 to 50,000 (MMM3); areas within 10km of a town with a population of 5,000 to 15,000 (MMM4); and all other remote and rural areas (MMM5-7), which were grouped with MMM4 for the analysis. Statistical analysis was conducted using Stata [22].
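The unadjusted comparison can be reproduced from the counts reported in Table 4 with a few lines of stdlib Python. The paper does not state how its confidence intervals were computed; the normal-approximation (Wald) interval below is an assumption that happens to match the published figures:

```python
# Pearson chi-squared test on a 2x2 table and a Wald 95% CI for the
# difference in response rates (an assumed CI method, not confirmed by the paper).
import math

def compare_rates(r1, n1, r2, n2):
    """r/n = responders/invited for control (1) and intervention (2).
    Returns (chi2, diff, ci_low, ci_high); diff is intervention minus control."""
    p1, p2 = r1 / n1, r2 / n2
    pbar = (r1 + r2) / (n1 + n2)                       # pooled response rate
    chi2 = (p2 - p1) ** 2 / (pbar * (1 - pbar) * (1 / n1 + 1 / n2))
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return chi2, diff, diff - 1.96 * se, diff + 1.96 * se

# All-doctors counts from Table 4: control 3423/9107, intervention 3272/9108.
chi2, diff, lo, hi = compare_rates(3423, 9107, 3272, 9108)
# diff*100 ≈ -1.66 percentage points, 95% CI ≈ (-3.06, -0.26), chi2 ≈ 5.4
```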

A proportion of doctors allocated to the intervention and control groups did not supply a valid email address to AMPCo. Randomisation allocated these doctors approximately equally across the intervention and control groups. Those without an email address in the intervention group could not adhere to their allocated group and were instead approached by mailed paper letter rather than email. The intention to treat analysis includes them in the intervention group. This is appropriate because, in practice, not all doctors are willing to provide email addresses, so the main results are applicable to the population of doctors whether or not they are willing to supply an email address.

However, this leads to an underestimate of the effect of the intervention for those who actually received an email compared to those in the control group who had also supplied an email address. In addition to the intention to treat analysis, we therefore calculate the average treatment effect on the treated (ATET). This compares those who had a valid email address in both the intervention and control groups.

Results

A comparison of the sample used in the trial with the population of GPs and specialists in clinical practice in 2018 shows that the trial sample was more likely to be female, slightly younger, less likely to be from New South Wales, and more likely to be from a non-metropolitan area. The proportion who are specialists, the socio-economic status of the population, and the proportions of the population aged under 5 and over 65 years old are similar (Table 1). Descriptive statistics comparing the characteristics of the intervention and control groups are shown in Tables 2 and 3. The flow diagram in Fig 1 shows each step of the study and how the final sample was determined. Comparisons of response rates are shown in Table 4, overall and for the subgroups of GPs and non-GP specialists. The response rate in the intervention group was 35.92% compared to 37.59% in the control group, a difference of -1.66 percentage points (95% CI: -3.06 to -0.26). After adjustment for covariates, this increases in magnitude to -1.93 percentage points (95% CI: -3.36 to -0.50). The difference was larger for GPs (-2.76 percentage points, 95% CI: -4.65 to -0.87) than for non-GP specialists (-0.47 percentage points, 95% CI: -2.53 to 1.60).

Table 1. Comparison of trial participants with population of GPs and specialists in 2018.

Population of GPs and specialists (2018) Trial participants
  Mean SD n Mean SD n a
Specialist (%) 48.8 50.0 43892 48.0 50.0 18237
Male (%) 64.0 48.0 43887 56.9 49.5 18239
Age 51.8 11.9 43392 50.0 12.4 18037
<35 yrs old 7.6 26.6 43392 12.9 33.5 18037
35–39 yrs old 9.0 28.6 43392 9.9 29.9 18037
40–44 yrs old 14.2 34.9 43392 14.5 35.2 18037
45–49 yrs old 14.5 35.2 43392 13.6 34.3 18037
50–54 yrs old 14.1 34.8 43392 12.6 33.2 18037
55–59 yrs old 13.4 34.1 43392 12.5 33.1 18037
60–64 yrs old 11.3 31.7 43392 10.4 30.5 18037
65–69 yrs old 7.6 26.5 43392 6.9 25.4 18037
70+ yrs old 8.1 27.3 43392 6.7 25.1 18037
Australian Capital Territory 1.7 13.0 43892 1.7 12.9 18244
New South Wales 32.7 46.9 43892 28.0 44.9 18244
Northern Territory 0.7 8.1 43892 1.1 10.5 18244
Queensland 20.1 40.1 43892 20.2 40.1 18244
South Australia 7.4 26.2 43892 7.5 26.3 18244
Tasmania 2.2 14.6 43892 2.7 16.2 18244
Victoria 25.2 43.4 43892 28.1 44.9 18244
Western Australia 10.0 30.0 43892 10.8 31.0 18244
MMM1 (Major cities) 80.5 39.6 42967 74.9 43.4 18244
MMM2 8.3 27.5 42967 9.9 29.9 18244
MMM3 5.9 23.5 42967 7.2 25.8 18244
MMM4 2.3 14.9 42967 2.8 16.5 18244
MMM5 2.5 15.7 42967 3.9 19.3 18244
MMM6 0.4 6.6 42967 1.2 10.7 18244
MMM7 0.1 3.5 42967 0.2 4.7 18244
SEIFA Q1 (High SES) 21.0 40.7 43679 21.6 41.2 18131
SEIFA Q2 19.6 39.7 43679 19.9 39.9 18131
SEIFA Q3 20.0 40.0 43679 20.0 40.0 18131
SEIFA Q4 20.1 40.1 43679 19.7 39.8 18131
SEIFA Q5 (Low SES) 19.2 39.4 43679 18.8 39.1 18131
Percent of popn under 5 yrs 5.694 1.617 43728 5.665 1.599 18151
Percent of popn above 65 yrs 13.160 4.571 43728 13.403 4.697 18151

a. These counts are slightly lower than the 18,247 trial participants because of missing values for some characteristics.

Table 2. Characteristics of participants in intervention and control groups.

Intervention Control N
(Intervention)
N
(Control)
Specialist (n, %) 4376 (48.0) 4377 (48.0) 9122 9125
Male (n, %) 5164 (56.6) 5220 (57.2) 9118 9120
Age (mean, sd) 49.9 (12.5) 49.8 (12.5) 8423 8349
Age categories (n, %) 8423 8349
    <35 yrs old 1129 (13.4) 1158 (13.9)
    35–39 yrs old 849 (10.1) 846 (10.1)
    40–44 yrs old 1239 (14.7) 1195 (14.3)
    45–49 yrs old 1118 (13.3) 1110 (13.3)
    50–54 yrs old 1052 (12.5) 999 (12.0)
    55–59 yrs old 1015 (12.1) 1063 (12.7)
    60–64 yrs old 869 (10.3) 862 (10.3)
    65–69 yrs old 570 (6.8) 577 (6.9)
    70+ yrs old 582 (6.9) 539 (6.5)
State/Territory (n, %) 9120 9124
    Australian Capital Territory 147 (1.6) 163 (1.8)
    New South Wales 2523 (27.7) 2588 (28.4)
    Northern Territory 100 (1.1) 103 (1.1)
    Queensland 1816 (19.9) 1865 (20.4)
    South Australia 690 (7.6) 673 (7.4)
    Tasmania 248 (2.7) 246 (2.7)
    Victoria 2557 (28.0) 2561 (28.1)
    Western Australia 1039 (11.4) 925 (10.1)
Rurality (n, %) 9120 9124
MMM1 (Major cities) 6815 (74.7) 6844 (75.0)
MMM2 905 (9.9) 903 (9.9)
MMM3 652 (7.1) 658 (7.2)
MMM4 261 (2.9) 250 (2.7)
MMM5 354 (3.9) 350 (3.8)
MMM6 113 (1.2) 98 (1.1)
MMM7 20 (0.2) 21 (0.2)
Socio-economic status of postcode (n, %) 9059 9072
SEIFA Q1 (High SES) 1855 (20.5) 1873 (20.6)
SEIFA Q2 1807 (19.9) 1729 (19.1)
SEIFA Q3 1766 (19.5) 1859 (20.5)
SEIFA Q4 1882 (20.8) 1784 (19.7)
SEIFA Q5 (Low SES) 1749 (19.3) 1827 (20.1)
Percent of popn. under 5 yrs (mean, sd) 5.7 (1.6) 5.7 (1.6) 9067 9084
Percent of popn. above 65 yrs (mean, sd) 13.4 (4.7) 13.4 (4.7) 9067 9084
Boost sample (n, %) 1501 (16.5) 1502 (16.5) 9122 9125
Continuing doctor (n, %) 6690 (73.3) 6692 (73.3) 9122 9125
Received incentive cheque (n, %) 127 (1.4) 114 (1.2) 9122 9125
Online completion (n, %) 1638 (52.0) 1515 (46.1) 3152 3286

Table 3. Characteristics of control and intervention groups: Participants who supplied an email address.

Intervention group:
have email and sent email
Control group: have email but were sent mail
N
(Intervention)
N
(Control)
Specialist (n, %) 1904 (50.6) 1,857 (50.5) 3765 3674
Male (n, %) 2092 (55.6) 2087 (56.8) 3765 3673
Age (mean, sd) 48.8 (11.3) 48.7 (11.6) 3728 3613
Age categories (n, %) 3728 3613
    <35 yrs old 457 (12.3) 489 (13.5)
    35–39 yrs old 427 (11.5) 423 (11.7)
    40–44 yrs old 624 (16.7) 578 (16.0)
    45–49 yrs old 561 (15.0) 537 (14.9)
    50–54 yrs old 483 (13.0) 431 (11.9)
    55–59 yrs old 440 (11.8) 449 (12.4)
    60–64 yrs old 359 (9.6) 335 (9.3)
    65–69 yrs old 218 (5.8) 210 (5.8)
    70+ yrs old 159 (4.3) 161 (4.5)
State/Territory (n, %) 3765 3674
    Australian Capital Territory 65 (1.7) 71 (1.9)
    New South Wales 992 (26.3) 1021 (27.8)
    Northern Territory 40 (1.1) 46 (1.3)
    Queensland 750 (19.9) 759 (20.1)
    South Australia 240 (6.4) 257 (7.0)
    Tasmania 85 (2.3) 94 (2.6)
    Victoria 1114 (29.6) 1015 (27.6)
    Western Australia 479 (12.7) 411 (11.2)
Rurality (n, %) 3765 3674
    MMM1 (Major cities) 2827 (75.1) 2769 (75.4)
    MMM2 364 (9.7) 364 (9.9)
    MMM3 269 (7.1) 264 (7.2)
    MMM4 106 (2.8) 106 (2.9)
    MMM5 143 (3.8) 126 (3.4)
    MMM6 45 (1.2) 33 (0.9)
    MMM7 11 (0.3) 12 (0.3)
Socio-economic status of postcode (n, %) 3732 3655
    SEIFA quintile 1 778 (20.9) 718 (19.6)
    SEIFA quintile 2 724 (19.4) 701 (19.2)
    SEIFA quintile 3 718 (19.2) 786 (21.5)
    SEIFA quintile 4 769 (20.6) 696 (19.0)
    SEIFA quintile 5 743 (19.9) 754 (20.6)
Percent of popn. under 5 yrs (mean, sd) 5.6 (1.6) 5.6 (1.6) 3737 3658
Percent of popn. above 65 yrs (mean, sd) 13.3 (4.7) 13.4 (4.7) 3737 3658
Boost sample (n, %) 74 (2.0) 100 (2.7) 3765 3674
Continuing doctor (n, %) 3605 (95.8) 3500 (95.3) 3765 3674
Received incentive cheque (n, %) 49 (1.3) 40 (1.1) 3765 3674
Online completion (n, %) 533 (71.3) 399 (48.1) 748 830

Fig 1. Flow chart.


Table 4. Comparison of response rates.

Columns (left to right): All doctors, GPs, Non-GP specialists.

Intention to treat analysis a

Unadjusted analysis
Control (# responded/# invited, %)  3423/9107 (37.59)  1616/4734 (34.14)  1807/4373 (41.32)
Intervention (# responded/# invited, %)  3272/9108 (35.92)  1487/4739 (31.38)  1785/4369 (40.86)
Pearson χ2(1)  5.4 (p = 0.020)  8.2 (p = 0.004)  0.20 (p = 0.658)
Odds ratio (95% CI)  0.931** (0.877 to 0.989)  0.882*** (0.810 to 0.961)  0.981 (0.901 to 1.07)
Difference in response rate, intervention minus control (percentage points, 95% CI)  -1.66** (-3.06 to -0.26)  -2.76*** (-4.65 to -0.87)  -0.47 (-2.53 to 1.60)

Adjusted analysis b
Odds ratio (95% CI)  0.916*** (0.859 to 0.977)  0.879*** (0.802 to 0.964)  0.936 (0.853 to 1.03)
Difference in response rate, intervention minus control (percentage points, 95% CI)  -1.93*** (-3.36 to -0.50)  -2.52*** (-4.33 to -0.72)  -1.56 (-3.80 to 0.67)

Average treatment effect on the treated (ATET) a

Unadjusted analysis
Control (# responded/# invited, %)  847/3672 (23.07)  417/1816 (22.96)  430/1856 (23.17)
Intervention (# responded/# invited, %)  769/3763 (20.44)  368/1860 (19.78)  401/1903 (21.07)
Pearson χ2(1)  7.7 (p = 0.006)  5.5 (p = 0.019)  2.4 (p = 0.122)
Odds ratio (95% CI)  0.857*** (0.767 to 0.957)  0.827** (0.707 to 0.969)  0.885 (0.759 to 1.03)
Difference in response rate, intervention minus control (percentage points, 95% CI)  -2.63*** (-4.50 to -0.75)  -3.17** (-5.83 to -0.53)  -2.10 (-4.75 to 0.56)

Adjusted analysis b
Odds ratio (95% CI)  0.847*** (0.750 to 0.957)  0.832** (0.695 to 0.995)  0.859* (0.726 to 1.01)
Difference in response rate, intervention minus control (percentage points, 95% CI)  -2.37*** (-4.11 to -0.63)  -2.50** (-4.93 to -0.07)  -2.33* (-4.91 to 0.25)

a. ITT: Intention to Treat. ATET: Average Treatment Effect on the Treated.

b. Adjusted analyses are based on logistic regression including all independent variables in Table 1 (except for mode of completion) and have slightly smaller sample sizes because of missing values for some independent variables.

* 0.05 < p ≤ 0.10; ** 0.01 < p ≤ 0.05; *** p ≤ 0.01.

The estimates of the ATET are shown in the bottom half of Table 4. This analysis compares only those who were approached by email in the intervention group to those in the control group who had supplied an email address but were approached by mailed letter. Of those who were sent a second reminder in the intervention group, 43.0% (2840/6605) did not have an email address, compared to 43.8% (2866/6540) in the control group. The overall ATET is larger in magnitude than the ITT effect (-2.63 percentage points, 95% CI: -4.50 to -0.75), as it is for GPs (-3.17 percentage points, 95% CI: -5.83 to -0.53) and specialists (-2.10 percentage points, 95% CI: -4.75 to 0.56). The overall difference falls to -2.37 percentage points after adjustment for covariates.

Discussion

Using email to approach potential survey subjects is often preferred because of its low cost. It is likely to have gained popularity during the COVID-19 pandemic, given the difficulties of collecting survey data using face-to-face interviews. Several studies have shown that, in surveys of physicians, an emailed approach can lead to lower response rates, potentially increasing response bias and reducing external validity. This is also the case in other populations [1]. However, these studies are of specific samples and most are over 10 years old. One might assume that, since then, email, the internet, and online survey completion have become more familiar to physician populations.

Our results confirm that, in a nationally representative population of qualified GPs and non-GP specialists, response rates are lower with an emailed approach. The 1.93 percentage point fall in the response rate is from the ITT analysis and so is relevant to physician populations approached initially by mail, or where it is unknown a priori whether they have an email address. The ATET of a 2.37 percentage point fall in the response rate is relevant to physician populations where only an email address is available to researchers, and so to physicians willing to supply an email address. The fall in the response rate is larger for GPs than for non-GP specialists. The effect size seems quite small, possibly because the control group, though approached by mail, still had the option of completing the survey online given the mixed mode of completion available to them. Our estimates are therefore conservative compared to using mail by itself, and are more relevant to surveys using mixed modes of delivery.

Weaver et al. (2019) randomised around 1,200 physicians in Minnesota and found a web-only response rate of 15.2% compared to 18.9% in the mail-only mode, a difference of 3.7 percentage points, slightly larger than in our study. They also compared mixed modes (web-mail and mail-web), so the sample sizes in the web-only and mail-only groups were relatively small and this difference was not statistically significant. Beebe et al. (2018) randomised 686 physicians, nurses and physician assistants and found a response rate of 38.2% for web only compared to 32.1% for mail only after two reminders, a slightly larger effect size (4.1 percentage points) in the opposite direction to our results. Again, however, this difference was not statistically significant.

The results should also be interpreted in the context of our longitudinal study, in which the first 10 annual waves combined a mixed mode of completion with a single method of approach: a mailed paper letter for the main mailout and three reminders. The intervention changed the method of approach to email for the second reminder. We used the second reminder, rather than the first mailout, to avoid a potentially large fall in the total number of responses if email proved worse than mail; by the second reminder, fewer potential respondents remained to be included in the trial. We do not think the percentage difference in response rates would differ if the intervention had been applied to the first mailout or to the first or third reminder. At the second reminder, the intervention group were approached by email and could only complete the survey online, whilst the control group were approached by mailed letter that included a paper copy of the survey and so had a choice of online or paper completion. A limitation is that we are therefore not comparing only the method of approach. Though the control group were approached by mail, they could choose online or paper completion, and so their response rate might have been lower if they could complete only online, or higher if they could complete only on paper. For those who responded, there is evidence that those in the intervention group were more likely to complete the survey online, presumably because this group were approached by email in the second reminder. In the ITT analysis, 46.1% of the control group responded online compared to 52.0% of the intervention group (Table 2). In the ATET analysis of those who supplied an email address, 48.1% of the control group completed the survey online compared to 71.3% of the intervention group (Table 3).

The study was conducted exclusively within the Australian context, which may limit the generalisability of the findings to other countries with different healthcare systems, professional cultures, and survey response patterns. It would be beneficial to conduct similar studies in other countries to better understand how the intervention performs in different settings. In addition, we have not examined differences in non-response bias, which could arise if those who are more likely to respond to email differ in unobserved ways, changing the composition of the doctors who respond; for example, younger doctors are more likely to respond to email. Our previous research in the same context found evidence of response bias in mixed modes (mail-online and online-mail), and that young, male doctors working in remote areas are more likely to complete the online survey [11,23].

Our definition of a complete response, of at least one question answered from Section B, is quite conservative compared to what is generally recommended in the literature [24]. However, some responses to current working status (Section A in the survey), such as 'retired' or 'not working in clinical practice', meant that respondents were not required to complete the rest of the survey, or were directed to complete only certain sections, e.g. demographics. Nevertheless, for respondents who answered at least one question from Section B, item response rates are over 90% for all sections of the survey across all 11 waves [18].

As we were uncertain about the impact of email on response rates, we took a cautious approach: rather than using email in the first and main mailout, we used it in the second reminder, where potential adverse impacts on the overall response rate would be minimised. Our results are therefore conservative estimates of the impact of using an email approach versus a mailed paper letter. If the intervention had been delivered during the main mailout, the statistical power would have been higher. Although we run a pilot survey every year and could have tested our hypothesis using the pilot sample of around 2,000 doctors, the pilot sample was not large enough, and most pilot responses are merged into the main wave if the surveys do not change, so they still count towards the main response rate.

Conclusions

The use of an emailed approach and online completion avoids printing and postage costs and the costs of manual data entry for those who responded to the second reminder. However, the lower response rate means that costs increase for the third mailed reminder, as a higher number of participants needed to be approached. It remains unclear what advice to give researchers, as one might be willing to accept a lower response rate if costs are also lower. Often budgets for such surveys are very limited and email is the only option. In this case, it is important to optimise other survey characteristics (e.g. survey length) that can increase response rates, using existing evidence [1,25] and new research in the context of physicians. In addition, qualitative research is helpful to better understand response behaviours [23]. However, we do recommend that researchers attempt to negotiate larger budgets for physician surveys so that mailed paper letters can be used where possible and survey responses remain externally valid. This is necessary to be able to make valid recommendations from survey research. Further research should test a mix of methods of approaching potential respondents, with appropriate sample size calculations, in which the same subjects are approached by email as well as by mailed letter or other types of contact such as text messaging and social media, though these methods might be less effective in older physician populations.

Acknowledgments

We thank the doctors who participated in the MABEL survey.

Data Availability

Data can be requested using the following non-author email: mabel-admin@unimelb.edu.au at the Melbourne Institute: Applied Economic and Social Research, The University of Melbourne. Data can also be requested from the Australian Data Archive (https://dataverse.ada.edu.au/dataverse/mabel), which provides access to the main MABEL survey data but not the data used in this trial.

Funding Statement

This research used data from the MABEL longitudinal survey of doctors. Funding for MABEL was provided by the National Health and Medical Research Council (2007 to 2016: 454799 and 1019605); the Australian Department of Health and Ageing (2008); Health Workforce Australia (2013); The University of Melbourne, Medibank Better Health Foundation, the NSW Department of Health, and the Victorian Department of Health and Human Services (2017); and the Australian Government Department of Health, the Australian Digital Health Agency, and the Victorian Department of Health and Human Services (2018). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. AS was the Chief Investigator for all above grants but did not receive salary or payments. BH and TT were employed using the above research funding. GR did not receive salary or payments.

References

  • 1. Daikeler J, Bošnjak M, Lozar Manfreda K. Web Versus Other Survey Modes: An Updated and Extended Meta-Analysis Comparing Response Rates. Journal of Survey Statistics and Methodology. 2020;8(3):513–39. doi: 10.1093/jssam/smz008
  • 2. Johnson TP, Wislar JS. Response rates and nonresponse errors in surveys. JAMA. 2012;307(17):1805–6. doi: 10.1001/jama.2012.3532
  • 3. Cho YI, Johnson TP, VanGeest JB. Enhancing Surveys of Health Care Professionals: A Meta-Analysis of Techniques to Improve Response. Evaluation & the Health Professions. 2013;36(3):382–407. doi: 10.1177/0163278713496425
  • 4. Klabunde CN, Willis GB, McLeod CC, Dillman DA, Johnson TP, Greene SM, et al. Improving the Quality of Surveys of Physicians and Medical Groups: A Research Agenda. Evaluation & the Health Professions. 2012;35(4):477–506. doi: 10.1177/0163278712458283
  • 5. Galea S, Tracy M. Participation Rates in Epidemiologic Studies. Ann Epidemiol. 2007;17(9):643–53. doi: 10.1016/j.annepidem.2007.03.013
  • 6. Pit S, Vo T, Pyakurel S. The effectiveness of recruitment strategies on general practitioner's survey response rates: a systematic review. BMC Medical Research Methodology. 2014;14:76. doi: 10.1186/1471-2288-14-76
  • 7. McLeod CC, Klabunde CN, Willis GB, Stark D. Health care provider surveys in the United States, 2000–2010: a review. Eval Health Prof. 2013;36(1):106–26. doi: 10.1177/0163278712474001
  • 8. VanGeest JB, Johnson TP, Welch VL. Methodologies for Improving Response Rates in Surveys of Physicians: A Systematic Review. Eval Health Prof. 2007;30(4):303–21. doi: 10.1177/0163278707307899
  • 9. Beebe TJ, Jacobson RM, Jenkins SM, Lackore KA, Rutten LJF. Testing the Impact of Mixed-Mode Designs (Mail and Web) and Multiple Contact Attempts within Mode (Mail or Web) on Clinician Survey Response. Health Services Research. 2018. doi: 10.1111/1475-6773.12827
  • 10. Weaver L, Beebe TJ, Rockwood T. The impact of survey mode on the response rate in a survey of the factors that influence Minnesota physicians' disclosure practices. BMC Medical Research Methodology. 2019;19(1). doi: 10.1186/s12874-019-0719-7
  • 11. Scott A, Jeon S-H, Joyce C, Humphreys J, Kalb G, Witt J, et al. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors. BMC Medical Research Methodology. 2011;11:126. doi: 10.1186/1471-2288-11-126
  • 12. Beebe TJ, Locke GR, Barnes SA, Davern ME, Anderson KJ. Mixing Web and Mail Methods in a Survey of Physicians. Health Services Research. 2007;42(3p1):1219–34. doi: 10.1111/j.1475-6773.2006.00652.x
  • 13. Beebe TJ, McAlpine DD, Ziegenfuss JY, Jenkins S, Haas L, Davern ME. Deployment of a Mixed-Mode Data Collection Strategy Does Not Reduce Nonresponse Bias in a General Population Health Survey. Health Services Research. 2012;47(4):1739–54. doi: 10.1111/j.1475-6773.2011.01369.x
  • 14. Lusk C, Delclos GL, Burau K, Drawhorn DD, Aday LA. Mail versus Internet surveys: Determinants of method of response preferences among health professionals. Evaluation & the Health Professions. 2007;30(2):186–201. doi: 10.1177/0163278707300634
  • 15. Smyth JD, Olson K, Millar MM. Identifying predictors of survey mode preference. Social Science Research. 2014;48:135–44. doi: 10.1016/j.ssresearch.2014.06.002
  • 16. Taylor T, Scott A. Do Physicians Prefer to Complete Online or Mail Surveys? Findings From a National Longitudinal Survey. Evaluation & the Health Professions. 2018;42(1):41–70. doi: 10.1177/0163278718807744
  • 17. Schulz KF, Altman DG, Moher D, for the CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMC Medicine. 2010;8(1):18.
  • 18. Szawlowski S, Harrap B, Leahy A, Scott A. Medicine in Australia: Balancing Employment and Life (MABEL). MABEL User Manual: Wave 11 Release. Melbourne: Melbourne Institute: Applied Economic and Social Research, The University of Melbourne; 2020.
  • 19. Joyce C, Scott A, Jeon S, Humphreys J, Kalb G, Witt J, et al. The "Medicine in Australia: Balancing Employment and Life (MABEL)" longitudinal survey: Protocol and baseline data for a prospective cohort study of Australian doctors' workforce participation. BMC Health Services Research. 2010;10(1):50. doi: 10.1186/1472-6963-10-50
  • 20. ABS. Socio-Economic Indexes for Areas (SEIFA). Technical Paper 2033.0.55.001. Canberra: Australian Bureau of Statistics; 2016.
  • 21. Department of Health. Modified Monash Model. Canberra: Australian Government; 2021.
  • 22. StataCorp. Stata Statistical Software: Release 14. College Station, TX: StataCorp LP; 2015.
  • 23. Taylor T, Scott A. Do Physicians Prefer to Complete Online or Mail Surveys? Findings From a National Longitudinal Survey. Evaluation & the Health Professions. 2019;42(1):41–70. doi: 10.1177/0163278718807744
  • 24. American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 9th ed. American Association for Public Opinion Research; 2016.
  • 25. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Wentz R, Kwan I, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews. 2009;(3):MR000008.

Decision Letter 0

Fares Alahdab

10 Apr 2023

PONE-D-22-23734

A randomised controlled trial of email versus mailed invitation letter in a national longitudinal survey of physicians

PLOS ONE

Dear Dr. Scott,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

ACADEMIC EDITOR:

Thank you for submitting your paper to our journal. We appreciate the effort and time you have invested in your research. After a thorough review, we believe your study has merit and potential to contribute to the field.

In order to help you improve the quality of your manuscript, we have provided detailed feedback and suggestions. We kindly ask you to consider these comments and critiques carefully as you revise your paper. Addressing these points will not only strengthen the methodology and discussion but also enhance the overall clarity, organization, and significance of your study.

Once you have made the necessary revisions, please resubmit your manuscript for further review. We look forward to receiving your revised paper and evaluating its potential for publication in our journal.

Thank you for considering our feedback, and we hope to receive your revised manuscript soon.

==============================

Please submit your revised manuscript by May 25 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Fares Alahdab, MD, MSc

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.​

Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

Reviewer #2: Yes

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I have several concerns about the current study such that I cannot recommend it for publication at PLOS one.

First, the topic is about a survey method, and it is comparing existing methods (online vs. letter recruitment for survey) rather than an innovative method. The sample size is restricted to be physicians, and the effect that was found is extremely small (i.e., a difference in about 2% response rate) albeit significant, and it is confirming previous findings. Given these, I think this study is more suitable for a specialized journal on survey methods.

Second, the study was “administered between August 2018 and April 2019” I don’t know what the policy of PLOS-one is about this issue, but some journals do not accept results from surveys that are more than 3 years old. Furthermore, given that the motivation for the current study is that similar studies use more than 10-year-old data and the authors argued for an increase in internet usage. As we all know a lot of things happened since April of 2019 with emails and zoom meetings serving as essential tools in our society, so I am not sure whether the current results can be treated as up-to-date findings.

Third, the methods are idiosyncratic in that the main manipulation took place only during the second reminder after the recruitment took place over mails and the letter reminder. To quote the authors, “More specifically, we introduce an email approach in the second of three reminders sent to non-responding physicians. In the first ten annual waves of the survey, the main mailout and all three reminders were delivered by mail only.” What was the reason that this specific mixed method was used for the study? Are there any empirical or theoretical basis to suggest that this method would be the best? In the discussion, the authors acknowledge these as potential limitations, and I think these limitations seriously compromise the generalizability of the findings.

Fourth, it is unclear what the outcome measures; are they response rates only after the second reminder or after the first reminder? It sounds like it is the response rate regardless of the timepoint, and this can be an issue as noted in the next point.

Fifth, the results reported in the abstract and as the main findings included data from two third of the participants in the intervention group who did not even receive the experimental treatment because they did not even have valid email addresses! These data should not even be reported. The authors also report results from those who had email addresses in both the intervention and control groups, and the response rates of both groups among these people are reduced from mid- to high thirty percent of the overall response rates down to low 20’s. Could it be the case that those who provided email addresses were more likely to have responded even before the manipulation?

Other less major questions:

Was the survey anonymous? If so, how was the online completion tracked in terms of which condition they belonged? If the survey responses had identifiers, couldn’t that be a factor that might compromise the generalizability of the findings?

First sentence of Background; “Web surveys have lower response rates than other survey modes.” This is a strong statement and perhaps incorrect given the findings of the current study. Also, what are “other survey modes”?

“…and this study also included nurses and physician assistants as well as physicians” This can sound somewhat politically incorrect as if nurses and physician assistants’ results are meaningless.

Reviewer #2: The article provides an important contribution regarding surveys and data collection. It makes a good point, indicating that most of the studies in this area are ten years old and thus requires an update. It utilizes sound statistical methods and contributes to existing literature.

Reviewer #3: Overall, the paper is well-structured, and the research question is clearly defined. The authors aim to compare the response rates between email and mailed approaches within a national longitudinal survey of physicians. I agree with the authors that this paper is up to date, considering that most studies in this area use data more than 10 years old, which justifies the need to re-examine this issue. The methods used in the study are robust and well-described, following the CONSORT guidelines and using appropriate statistical methods.

The paper provides a clear comparison of response rates between the intervention and control groups, as well as subgroups of GPs and non-GP specialists. The results indicate that the response rate was lower for the email approach compared to the mailed approach. This difference was larger for GPs compared to non-GP specialists.

In conclusion, the paper contributes valuable insights to the survey methods literature for physicians, particularly in the context of the increasing use of email and the internet in medical practice. The findings have practical implications for survey design and recruitment methods targeting physicians.

Strengths:

1. The study followed the Consolidated Standards of Reporting Trials (CONSORT) guidelines, which ensures transparent and accurate reporting of the research process.

2. The research was conducted within the context of the Medicine in Australia: Balancing Employment and Life (MABEL) survey, a well-established longitudinal panel survey of all medical practitioners in Australia, providing a strong foundation for the study.

3. The sample frame for MABEL, the Medical Directory of Australia, is a national database of doctors, ensuring a comprehensive representation of the medical practitioner population.

4. The study used a randomized controlled trial with a parallel-arm design and 1:1 allocation, which allows for a rigorous comparison between the intervention and control groups.

5. The researchers employed stratified randomization to ensure that the proportions of different doctor types (GP, specialist), continuing or new, and boost sample in the intervention and control groups were the same.

6. The analysis was conducted by a researcher who was blinded to group allocation until after the analysis was complete and checked, reducing the risk of bias.

Weaknesses:

1. The study relied on data from Wave 11, which was administered between August 2018 and April 2019, limiting the generalizability of the findings to more recent years.

2. The study excluded junior doctors, who might have different response rates and preferences compared to more experienced doctors, reducing the generalizability of the findings.

3. A significant proportion of the intervention group (43%) did not have an email address and were instead approached by mailed paper letter, which might underestimate the effect of the intervention for those who received an email.

4. The intervention only differed at the second reminder, which may not have been enough to fully capture the difference in response rates between email and mailed approaches.

5. The response rate differences between the intervention and control groups might be influenced by other factors not considered in the study, such as the timing of reminders and the content of the survey.

Suggestions for improvement:

1. The authors could consider updating the study with more recent data to ensure the relevance of the findings to current survey methods and physician response rates.

2. To further assess the generalizability of the findings, the authors could include junior doctors in the sample (for future work) and analyze their response rates separately, providing insights into different age groups and experience levels.

3. The researchers could explore additional ways to improve email deliverability and reduce the proportion of participants without an email address to better assess the intervention's effectiveness.

4. The intervention could be extended to more than one reminder stage to better understand the cumulative effect of using email approaches over multiple reminders.

5. The authors could conduct sensitivity analyses to determine the impact of other factors, such as timing of reminders and survey content, on response rates in both intervention and control groups. This would provide a more comprehensive understanding of the factors influencing physician response rates.

6. Limited generalizability across different populations: The study was conducted exclusively within the Australian context, which may limit the generalizability of the findings to other countries with different healthcare systems, professional cultures, and survey response patterns. It would be beneficial to conduct similar studies in other countries to better understand how the intervention is performed in different settings.

7. Lack of examination of non-response bias: The study focused on response rates but did not explore potential non-response bias, which may arise if the doctors who chose to participate in the survey differ systematically from those who did not. It is important to investigate whether the intervention affected the composition of respondents in any way, as this may have implications for the interpretation of the survey results and their representativeness of the target population.

8. Limited exploration of factors influencing email effectiveness: The study did not investigate factors that could influence the effectiveness of email approaches, such as the timing of the email, subject lines, or the formatting of the email content. A more detailed investigation of these factors could help identify best practices for optimizing email reminders in future survey administration.

9. Single intervention approach: The study tested only one email-based intervention, which may not represent the full range of potential interventions that could be implemented to improve response rates. Future research could explore other interventions, such as varying the incentives for participation or using different communication channels, to determine the most effective strategies for increasing response rates.

10. No qualitative insights: The study did not provide any qualitative insights into the reasons behind the observed response rate differences between the intervention and control groups. Conducting interviews or focus groups with a subset of physicians could help researchers better understand the reasons for their survey participation preferences, which could inform future survey design and administration strategies.

11. Provide more information on the implications of their findings for healthcare and survey research communities.

12. Address the study's limitations in more detail and discuss potential ways to overcome them in future research.

13. Elaborate on the practical implications of their findings, such as cost savings, logistical benefits, and challenges or barriers to implementing email reminders.

14. Clearly outline the next steps needed for future research to build upon the current study and address its limitations.

Writing Quality, Understanding Easiness, Clarity, and Organization:

- Overall, the writing quality of the paper seems to be good, with clear descriptions of the methods, results, and conclusions. The organization appears logical, with sections following a conventional structure for a research article.

- Suggestions for improvement:

a) Some portions of the text, particularly in the Methods section, could be further clarified for better readability. Consider rephrasing complex sentences and providing more straightforward explanations of the study design and data analysis.

b) In the Introduction and Discussion sections, the authors could provide more context on the importance of their research question and the implications of their findings for the broader healthcare and survey research communities.

Content Accuracy, Comprehensiveness, Clinical Usefulness, and Significance:

- The content appears accurate, and the methodology is comprehensive. However, some concerns have been raised regarding the study's limitations, which could impact the clinical usefulness and significance of the findings.

- Suggestions for improvement:

a) Address the weaknesses mentioned in previous comments, such as the limited generalizability of the study, potential non-response bias, and factors influencing email effectiveness.

b) Explore additional intervention approaches, such as varying incentives for participation, using different communication channels, or incorporating personalized messaging to identify the most effective strategies for increasing response rates.

c) Consider conducting a qualitative investigation to understand the reasons behind physicians' survey participation preferences, which could inform future survey design and administration strategies.

d) Provide a more in-depth literature review to better contextualize the study within the broader research landscape and identify gaps in current knowledge.

e) Discuss the practical implications of the study findings, including potential cost savings or logistical benefits of using email reminders, as well as any challenges or barriers to implementation.

f) Include a more detailed description of the target population and the representativeness of the sample to help readers understand the context and relevance of the study findings.

g) Present a clear plan for future research, outlining the next steps needed to build upon the current study and address its limitations.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Aug 22;18(8):e0289628. doi: 10.1371/journal.pone.0289628.r002

Author response to Decision Letter 0


30 Jun 2023

Response to editor's comments (shown by *)

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

*We have made the above changes to style.

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research.

*We have made changes to the methods section and the online submission form.

Response to reviewers' comments

Reviewer #1: I have several concerns about the current study such that I cannot recommend it for publication at PLOS one.

First, the topic is about a survey method, and it is comparing existing methods (online vs. letter recruitment for survey) rather than an innovative method. The sample size is restricted to be physicians, and the effect that was found is extremely small (i.e., a difference in about 2% response rate) albeit significant, and it is confirming previous findings. Given these, I think this study is more suitable for a specialized journal on survey methods.

*Survey methods are essential to all applied researchers, and response rates are a feature of many studies and surveys undertaking primary data collection, especially those collecting data from health care providers. The size of a treatment effect should not influence publication, as this would lead to publication bias. We think the context of our study is innovative given the increasing complexity of survey design and administration.

Second, the study was “administered between August 2018 and April 2019”. I don’t know what the policy of PLOS ONE is about this issue, but some journals do not accept results from surveys that are more than 3 years old. Furthermore, the motivation for the current study is that similar studies use more than 10-year-old data, and the authors argue there has been an increase in internet usage. As we all know, a lot has happened since April 2019, with emails and Zoom meetings serving as essential tools in our society, so I am not sure whether the current results can be treated as up-to-date findings.

*The pandemic caused a delay in the publication of this research, and so we should not be disadvantaged by this. We agree that the pandemic may have changed people’s preferences towards online communication and away from face-to-face contact (we now mention this at the beginning of the discussion), but both alternatives in our study (mail and email) should not have been affected by this, as neither requires face-to-face contact. We did not compare email with the use of face-to-face interviews, which might then have affected the generalisability of our results post-pandemic.

Third, the methods are idiosyncratic in that the main manipulation took place only during the second reminder, after recruitment and the first reminder took place by mailed letter. To quote the authors, “More specifically, we introduce an email approach in the second of three reminders sent to non-responding physicians. In the first ten annual waves of the survey, the main mailout and all three reminders were delivered by mail only.” What was the reason that this specific mixed method was used for the study? Is there any empirical or theoretical basis to suggest that this method would be the best? In the discussion, the authors acknowledge these as potential limitations, and I think these limitations seriously compromise the generalizability of the findings.

*As noted, we acknowledge these limitations in the discussion and have added text to highlight that the percentage-point difference in response rates should not be any different had we implemented the intervention in the first mailout, the first reminder, or the third reminder. We have also noted in the text that we used the second reminder to avoid a large fall in the total number of responses if email performed worse than mail: at the second reminder, fewer respondents in total would be exposed to the intervention.

Fourth, it is unclear what the outcome measures are; are they response rates only after the second reminder or after the first reminder? It sounds like it is the response rate regardless of the timepoint, and this can be an issue as noted in the next point.

*Yes, the outcomes are the final response rates at the end of recruitment. We have added text to p6 to make this clear.

Fifth, the results reported in the abstract and as the main findings included data from two-thirds of the participants in the intervention group who did not even receive the experimental treatment because they did not have valid email addresses! These data should not even be reported. The authors also report results from those who had email addresses in both the intervention and control groups, and the response rates of both groups among these people are reduced from the mid- to high-thirty percent overall response rates down to the low 20s. Could it be the case that those who provided email addresses were more likely to have responded even before the manipulation?

*Excluding this group would bias the results and make them less generalisable to real-world settings, where email addresses are not always available. This was an intention-to-treat analysis with the intervention occurring at the second reminder. Not having access to email is what happens in real-world settings and so provides more generalisable results relevant to the population of doctors. Those who provided email addresses could be more likely to respond, but randomisation ensures the pre-intervention propensity to respond and the pre-intervention proportion with an email address are the same in the intervention and control groups.

Other less major questions:

Was the survey anonymous? If so, how was the online completion tracked in terms of which condition respondents belonged to? If the survey responses had identifiers, couldn’t that be a factor that might compromise the generalizability of the findings?

*No MABEL survey wave was anonymous; this is necessary in longitudinal surveys that track respondents over time. Respondents were tracked as usual as part of the longitudinal survey. For the online version of the survey, respondents were required to log in with their unique username and password. Paper versions of the survey had the respondent’s username printed on the front. Text has been added/moved on p5.

First sentence of Background: “Web surveys have lower response rates than other survey modes.” This is a strong statement and perhaps incorrect given the findings of the current study. Also, what are “other survey modes”?

*This is a strong statement because it comes from a large systematic review whose results are unambiguous. Other modes include face-to-face, mail, telephone, and email. We have clarified the text in this sentence.

“…and this study also included nurses and physician assistants as well as physicians” This can sound somewhat politically incorrect, as if the results for nurses and physician assistants are meaningless.

*We have modified the text here.

Reviewer #2: The article provides an important contribution regarding surveys and data collection. It makes a good point, indicating that most of the studies in this area are ten years old and thus requires an update. It utilizes sound statistical methods and contributes to existing literature.

Reviewer #3: Overall, the paper is well-structured, and the research question is clearly defined. The authors aim to compare the response rates between email and mailed approaches within a national longitudinal survey of physicians. I agree with the authors that this paper is up to date, considering that most studies in this area use data more than 10 years old, which justifies the need to re-examine this issue. The methods used in the study are robust and well-described, following the CONSORT guidelines and using appropriate statistical methods.

The paper provides a clear comparison of response rates between the intervention and control groups, as well as subgroups of GPs and non-GP specialists. The results indicate that the response rate was lower for the email approach compared to the mailed approach. This difference was larger for GPs compared to non-GP specialists.

In conclusion, the paper contributes valuable insights to the survey methods literature for physicians, particularly in the context of the increasing use of email and the internet in medical practice. The findings have practical implications for survey design and recruitment methods targeting physicians.

Strengths:

1. The study followed the Consolidated Standards of Reporting Trials (CONSORT) guidelines, which ensures transparent and accurate reporting of the research process.

2. The research was conducted within the context of the Medicine in Australia: Balancing Employment and Life (MABEL) survey, a well-established longitudinal panel survey of all medical practitioners in Australia, providing a strong foundation for the study.

3. The sample frame for MABEL, the Medical Directory of Australia, is a national database of doctors, ensuring a comprehensive representation of the medical practitioner population.

4. The study used a randomized controlled trial with a parallel-arm design and 1:1 allocation, which allows for a rigorous comparison between the intervention and control groups.

5. The researchers employed stratified randomization to ensure that the proportions of different doctor types (GP, specialist), continuing or new, and boost sample in the intervention and control groups were the same.

6. The analysis was conducted by a researcher who was blinded to group allocation until after the analysis was complete and checked, reducing the risk of bias.

Weaknesses:

1. The study relied on data from Wave 11, which was administered between August 2018 and April 2019, limiting the generalizability of the findings to more recent years.

*COVID delayed the publication of this paper. There is no a priori reason to suggest that the results are less relevant in 2023. COVID might have changed preferences towards the use of internet/email and away from face-to-face contact, but our intervention did not compare email with face-to-face survey completion.

2. The study excluded junior doctors, who might have different response rates and preferences compared to more experienced doctors, reducing the generalizability of the findings.

*Yes, this is correct: our findings are generalisable only to qualified GPs and specialists.

3. A significant proportion of the intervention group (43%) did not have an email address and were instead approached by mailed paper letter, which might underestimate the effect of the intervention for those who received an email.

*Yes: we have accounted for this by also conducting the analysis excluding those without email addresses (the average treatment effect on the treated), where the treatment effect is slightly higher.

4. The intervention only differed at the second reminder, which may not have been enough to fully capture the difference in response rates between email and mailed approaches.

*See response to Reviewer 1. A smaller total number of respondents are exposed to the intervention at the second reminder. We believe this would not have influenced the size of the percentage change in response rates between intervention and control, but would have led to lower power to detect a difference (pp14-15). However, our sample size calculation meant that we had sufficient power.

5. The response rate differences between the intervention and control groups might be influenced by other factors not considered in the study, such as the timing of reminders and the content of the survey.

*This is correct; however, randomisation ensured these were the same in the intervention and control groups.

Suggestions for improvement:

1. The authors could consider updating the study with more recent data to ensure the relevance of the findings to current survey methods and physician response rates.

*This is not possible as the last wave of the survey was in 2018-19.

2. To further assess the generalizability of the findings, the authors could include junior doctors in the sample (for future work) and analyze their response rates separately, providing insights into different age groups and experience levels.

*Junior doctors were not included in the randomisation as we focused only on more senior doctors. We will consider this for further research.

3. The researchers could explore additional ways to improve email deliverability and reduce the proportion of participants without an email address to better assess the intervention's effectiveness.

*Noted.

4. The intervention could be extended to more than one reminder stage to better understand the cumulative effect of using email approaches over multiple reminders.

*Yes we could have used email in all reminders as an intervention. There are various types of mixed mode that can be used.

5. The authors could conduct sensitivity analyses to determine the impact of other factors, such as timing of reminders and survey content, on response rates in both intervention and control groups. This would provide a more comprehensive understanding of the factors influencing physician response rates.

*We have another paper more generally examining factors influencing response rates. Our study was focused on the specific intervention and other factors were held constant because of randomisation.

Taylor T, Scott A. Do Physicians Prefer to Complete Online or Mail Surveys? Findings From a National Longitudinal Survey. Evaluation & the Health Professions. 2018;42(1):41-70. doi: 10.1177/0163278718807744

6. Limited generalizability across different populations: The study was conducted exclusively within the Australian context, which may limit the generalizability of the findings to other countries with different healthcare systems, professional cultures, and survey response patterns. It would be beneficial to conduct similar studies in other countries to better understand how the intervention performs in different settings.

*We have included this as a limitation in the discussion on p15.

7. Lack of examination of non-response bias: The study focused on response rates but did not explore potential non-response bias, which may arise if the doctors who chose to participate in the survey differ systematically from those who did not. It is important to investigate whether the intervention affected the composition of respondents in any way, as this may have implications for the interpretation of the survey results and their representativeness of the target population.

*This is an important issue if those who prefer email differ systematically from those who do not, and if these factors are also correlated with specific study outcomes (e.g. differences in job satisfaction). We have acknowledged this possibility but also cited evidence from our previous research using MABEL that examined response rates and response bias (p15). This found that response bias could be an issue depending on the specific study objectives.

8. Limited exploration of factors influencing email effectiveness: The study did not investigate factors that could influence the effectiveness of email approaches, such as the timing of the email, subject lines, or the formatting of the email content. A more detailed investigation of these factors could help identify best practices for optimizing email reminders in future survey administration.

*This was not an objective of our study, but we have noted this in the conclusion as an option for further research (p16).

9. Single intervention approach: The study tested only one email-based intervention, which may not represent the full range of potential interventions that could be implemented to improve response rates. Future research could explore other interventions, such as varying the incentives for participation or using different communication channels, to determine the most effective strategies for increasing response rates.

*As above – we have acknowledged this in the conclusions as an area of further research (p16).

10. No qualitative insights: The study did not provide any qualitative insights into the reasons behind the observed response rate differences between the intervention and control groups. Conducting interviews or focus groups with a subset of physicians could help researchers better understand the reasons for their survey participation preferences, which could inform future survey design and administration strategies.

*We have included this as further research in the conclusion. We did explore qualitative insights in a previous study which we have now referenced: Taylor T, Scott A. Do Physicians Prefer to Complete Online or Mail Surveys? Findings From a National Longitudinal Survey. Evaluation & the Health Professions. 2019;42(1):41-70. doi: 10.1177/0163278718807744

11. Provide more information on the implications of their findings for healthcare and survey research communities.

*We have added text (as above) to the conclusion.

12. Address the study's limitations in more detail and discuss potential ways to overcome them in future research.

* We have included this in the discussion in response to all of the previous comments.

13. Elaborate on the practical implications of their findings, such as cost savings, logistical benefits, and challenges or barriers to implementing email reminders.

*This suggestion is potentially useful but is not directly related to the aims of our research.

14. Clearly outline the next steps needed for future research to build upon the current study and address its limitations.

*We have done this on the basis of previous suggestions by this referee, in the discussion and conclusion.

Writing Quality, Understanding Easiness, Clarity, and Organization:

- Overall, the writing quality of the paper seems to be good, with clear descriptions of the methods, results, and conclusions. The organization appears logical, with sections following a conventional structure for a research article.

- Suggestions for improvement:

a) Some portions of the text, particularly in the Methods section, could be further clarified for better readability. Consider rephrasing complex sentences and providing more straightforward explanations of the study design and data analysis.

*We have edited the methods section to clarify things where possible, though with no specific guidance from the reviewer we hope we have addressed this concern.

b) In the Introduction and Discussion sections, the authors could provide more context on the importance of their research question and the implications of their findings for the broader healthcare and survey research communities.

*We have added text in the discussion and conclusions

Content Accuracy, Comprehensiveness, Clinical Usefulness, and Significance:

- The content appears accurate, and the methodology is comprehensive. However, some concerns have been raised regarding the study's limitations, which could impact the clinical usefulness and significance of the findings.

- Suggestions for improvement:

a) Address the weaknesses mentioned in previous comments, such as the limited generalizability of the study, potential non-response bias, and factors influencing email effectiveness.

*Done – see above responses

b) Explore additional intervention approaches, such as varying incentives for participation, using different communication channels, or incorporating personalized messaging to identify the most effective strategies for increasing response rates.

*Mentioned as further research in the conclusion

c) Consider conducting a qualitative investigation to understand the reasons behind physicians' survey participation preferences, which could inform future survey design and administration strategies.

*Mentioned as further research in the conclusion

d) Provide a more in-depth literature review to better contextualize the study within the broader research landscape and identify gaps in current knowledge.

*We are limited on space and have kept the literature review focussed and specific to the research questions we are addressing.

e) Discuss the practical implications of the study findings, including potential cost savings or logistical benefits of using email reminders, as well as any challenges or barriers to implementation.

*This is discussed in the conclusion.

f) Include a more detailed description of the target population and the representativeness of the sample to help readers understand the context and relevance of the study findings

*We have included a new Table 1 comparing the respondents in trial to the population of GPs and specialists in 2018. We have included a few sentences in the text on p8 to describe this table.

g) Present a clear plan for future research, outlining the next steps needed to build upon the current study and address its limitations.

*Mentioned as further research in the conclusion.

Attachment

Submitted filename: Reviewer response.docx

Decision Letter 1

Fares Alahdab

24 Jul 2023

A randomised controlled trial of email versus mailed invitation letter in a national longitudinal survey of physicians.

PONE-D-22-23734R1

Dear Dr. Scott,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Fares Alahdab, MD, MSc

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: Thank you for addressing my comments and revising your manuscript accordingly. I have no further comments to add.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: No

**********

Acceptance letter

Fares Alahdab

14 Aug 2023

PONE-D-22-23734R1

A randomised controlled trial of email versus mailed invitation letter in a national longitudinal survey of physicians.

Dear Dr. Scott:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Fares Alahdab

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.


    Data Availability Statement

    Data can be requested via the following non-author email address: mabel-admin@unimelb.edu.au at the Melbourne Institute: Applied Economic and Social Research at the University of Melbourne. In addition, data can be requested from the Australian Data Archive: https://dataverse.ada.edu.au/dataverse/mabel. This provides access to the main MABEL survey data but not the data used in this trial.


