PLOS ONE. 2020 Jun 23;15(6):e0234817. doi: 10.1371/journal.pone.0234817

It’s how you say it: Systematic A/B testing of digital messaging cut hospital no-show rates

Adi Berliner Senderey 1,2,*,#, Tamar Kornitzer 3, Gabriella Lawrence 1,4, Hilla Zysman 3, Yael Hallak 5, Dan Ariely 3,5,#, Ran Balicer 1,6,#
Editor: Sreeram V Ramagopalan
PMCID: PMC7310733  PMID: 32574181

Abstract

Failure to attend hospital appointments has a detrimental impact on care quality. Documented efforts to address this challenge have only modestly decreased no-show rates. Behavioral economics theory has suggested that more effective messages may lead to increased responsiveness. In complex, real-world settings, it has proven difficult to predict the optimal message composition. In this study, we aimed to systematically compare the effects of several pre-appointment message formats on no-show rates. We randomly assigned members of Clalit Health Services (CHS), the largest payer-provider healthcare organization in Israel, who had scheduled outpatient clinic appointments in 14 CHS hospitals, to one of nine groups. Each individual received a pre-appointment SMS text reminder five days before the appointment, which differed by group. No-show and advanced cancellation rates were compared between the eight alternative messages, with the previously used generic message serving as the control. A total of 161,587 CHS members who received pre-appointment reminder messages were included in this study. Five message frames differed significantly from the control group. Members who received a reminder designed to evoke emotional guilt had a no-show rate of 14.2%, compared with 21.1% in the control group (odds ratio [OR]: 0.69, 95% confidence interval [CI]: 0.67, 0.76), and an advanced cancellation rate of 26.3% compared with 17.2% in the control group (OR: 1.2, 95% CI: 1.19, 1.21). Four additional reminder formats significantly reduced no-show rates compared with the control, though none was as effective as the best-performing message format. Carefully selecting the narrative of pre-appointment SMS reminders can lead to a marked decrease in no-show rates. The process of A/B testing, selecting, and adopting optimal messages is a practical example of implementing the learning healthcare system paradigm, and could prevent up to one-third of the 352,000 annually unattended appointments in Israel.

Introduction

Unattended medical appointments are a frequent event. Hospital outpatient clinics have reported no-show rates of 19.3%–43.0% globally [1]. These events negatively impact care quality worldwide, causing major disruptions to clinical management, delays in scheduled care, and reduced patient satisfaction. Healthcare providers consider the no-show phenomenon intractable and invest extensive effort and resources in controlling it and in creating workarounds such as overbooking [1–4].

Short message services (SMSs) are frequently used as pre-appointment reminders by health service providers to reduce appointment no-shows and to provide information that relates to, and encourages, the desired behavior of keeping or canceling the appointment [5–11]. There is strong evidence that even simple SMS reminders are effective in reducing non-attendance compared to no reminders at all, though their impact is small [3, 12–14]; thus, the reminders’ performance is considered sub-optimal. Health providers who use reminders still experience substantial no-show rates of 21%–25%, resulting in decreased quality of care [3, 9, 12, 13, 15, 16]. Simple, straightforward reminders implicitly assume that one key reason patients do not attend their appointments is forgetfulness. Yet, there is vast evidence suggesting that other reasons for non-attendance without notification are more prominent and that a more holistic approach to this issue is needed [17–20].

Previous studies have demonstrated that the strategic narrative of the reminder may increase compliance in the healthcare domain. For example, a study that focused on human papillomavirus (HPV) infection investigated the impact of differential text messages on child HPV vaccination rates and found that persuasive text reminders, emphasizing the potential threat for the child, improved HPV vaccination rates [21].

Additionally, findings from two randomized controlled trials concluded that missed hospital appointments might be reduced by rephrasing appointment reminders to state appointment costs [3]. Stating the specific cost of the appointment induced feelings of guilt, which in turn led to the lowest no-show rates. However, mentioning specific costs can antagonize patients or make them doubt the authenticity of the reminder. We suggest that the same guilt effect can be evoked without stating the cost, by emphasizing emotional guilt. In addition, there is no consensus as to which motivators would optimally increase the likelihood that members will either attend their appointment or cancel it in advance.

Behavioral economics theory suggests that different motivational narratives, such as fairness to others or adherence to social norms, can dramatically increase a message’s impact compared with a generic informative format [21–28]. Many retail and finance industries’ policies and practices reflect these theories and are designed to prompt people towards particular choices. Such policies and practices might have been relatively underused in healthcare practice.

This study aimed to assess whether using specific message formats for appointment reminders influences advanced cancellation and no-show rates. We examined whether a change in the narrative of the current SMS reminder increased members' engagement compared to the SMS reminder currently in use, and whether specific strategic narratives are more effective than others. Implementing our research results in daily clinical practice can affect members’ behavior, improve quality of care, and potentially serve as a practical example for the learning health system [29].

Materials and methods

Setting and data sources

This study was based on data from individuals with scheduled appointments to one of 596 outpatient clinics located within 14 Clalit Health Services (CHS) hospitals. CHS is the largest payer-provider healthcare organization in Israel; it provides primary, specialty, and inpatient care to over 52% of the Israeli population and has 4.4 million members. CHS’s comprehensive healthcare data warehouse combines hospital and community medical records. All Israeli citizens are covered by one of four healthcare organizations, and while it is possible to switch between the organizations, membership turnover within CHS is less than 1% annually [30], allowing for consistent longitudinal follow-up. CHS’s electronic health records (EHR) contain administrative and clinical data, socio-demographic information, diagnoses from community and hospital settings, recorded chronic diseases, clinical markers, and appointment-related details. All members’ information was extracted from the CHS EHR as of the index date (the appointment date), together with current demographic data.

CHS operates an appointment reminder system that automatically sends a text message five days prior to a scheduled appointment, with a link to an internet-based system that allows the member to confirm or cancel the appointment in advance. Data from the CHS SMS appointment reminder system were retrieved and appended to the abovementioned data points.

Study population and design

The population eligible for this study included all CHS members aged 18 years or older with scheduled appointments between December 1, 2018 and March 31, 2019, at one of the 596 outpatient clinics within CHS’ 14 hospitals. All participants had a valid cell phone number in the CHS EHR and consented to receive phone-based appointment reminders. The index date was defined as the date of the scheduled appointment (Fig 1). Randomization was performed by a dedicated randomization program, and participants were assigned to one of nine messages issued five days in advance of the appointment. Members with multiple appointments during the study period could receive the same or differently framed messages for each appointment.
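The manuscript states only that assignment was performed by a randomization program. The following is a minimal illustrative sketch in R, not the actual CHS program and with hypothetical object and column names, of per-appointment random assignment to the nine message frames.

```r
# Illustrative sketch only: simple random assignment of each appointment to one
# of the nine message frames. Object and column names are hypothetical.
set.seed(20181201)  # arbitrary seed, for reproducibility of the example

message_frames <- c("control", "standard", "personal_request", "professional_figure",
                    "appointment_cost", "emotional_relatives", "emotional_guilt",
                    "social_norm", "social_identity")

# `appointments` holds one row per scheduled appointment; members may appear more
# than once and may therefore receive the same or different frames per appointment
appointments <- data.frame(appointment_id = 1:5,
                           member_id = c(101, 102, 102, 103, 104))
appointments$frame <- sample(message_frames, nrow(appointments), replace = TRUE)
appointments
```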

Fig 1. Flow chart of the population.


Variables definitions

Appointment reminders

CHS members were randomly assigned to one of nine possible message frames reminding them of their upcoming appointment (see Table 1). Eight variations were designed based on the following principles: the ‘social norm’ versions highlighted the idea that social identity and descriptive norms potentially motivate individuals to perform certain actions [31–34]. The ‘emotional’ versions aimed to provoke an emotional reaction in order to prompt members to take action, either by mentioning people they care about [35] or by evoking feelings of sympathy or empathy [36]. The ‘appointment cost’ version was based on the opportunity cost effect [37]: although members do not directly pay the healthcare organization for missed appointments, this narrative highlights the amount of money they cause the organization to lose by not showing up [38]. Both the ‘professional figure’ and ‘personal’ versions relied on the messenger effect, which suggests that people’s compliance with a message is affected by the figure who delivers it, for example an actual name or authority figure rather than an automated device [39]. The ninth frame was the reminder message routinely used by CHS in recent years, which was retained as the control group.

Table 1. Different framings of SMS reminders sent to members five days prior to the scheduled appointment.
Control message Hello, this is a reminder for a hospital appointment you have scheduled. Click the link to confirm or cancel attendance to the appointment
Standard message Hello, you have a hospital appointment at [clinic] on [date] at [time]. Click the link to confirm or cancel attendance to the appointment
Personal request message Hello, this is Lior from [name] hospital. I wanted to remind you of the appointment you scheduled. Click the link to confirm or cancel attendance to the appointment
Professional figure message Hello, your caregiving physician wishes to remind you that you have scheduled an appointment and looks forward to seeing you in the clinic. Click the link to confirm or cancel attendance to the appointment
Appointment cost message Hello, this is a reminder for a hospital appointment you have scheduled. Non-attendance without advanced notice costs National Health Services approximately 200 NIS*. Click the link to confirm or cancel attendance to the appointment
Emotional relatives message Hello, this is a reminder for a hospital appointment you have scheduled. Your family members will be pleased to know that you are taking care of your physical state. Click the link to confirm or cancel attendance to the appointment
Emotional guilt message Hello, this is a reminder for a hospital appointment you have scheduled. Not showing up to your appointment without canceling in advance delays hospital treatment for those who need medical aid. Click the link to confirm or cancel attendance to the appointment
Social norm message Hello. Join the national effort to shorten appointment availability, and let us know if you intend to attend the appointment you have scheduled. Click the link to confirm or cancel attendance to the appointment
Social identity message Hello, this is a reminder for a hospital appointment you have scheduled. Most of the patients in our clinic make sure to confirm their appointment in advance. Click the link to confirm or cancel attendance to the appointment

Abbreviations: NIS, New Israeli Shekel

*200 New Israeli Shekels equal approximately 55.55 US Dollars or 49.43 Euros

These messages were translated from the original Hebrew version.

Outcomes

The primary outcome was a no-show event, defined as a scheduled appointment that a CHS member failed to attend. No-show rates were calculated as the number of no-show appointments out of the total number of appointments scheduled. The secondary outcome was advanced cancellation, defined as a scheduled appointment that a member canceled before the appointment date and time. Advanced cancellation rates were calculated as the number of cancellations out of the total number of appointment reminders sent.
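As a concrete illustration of the outcome definitions above, the following minimal R sketch (with a hypothetical data frame and column names, not taken from the study code) computes the two rates from a table in which each row is one scheduled appointment for which a reminder was sent.

```r
# Illustrative sketch only; `reminders` and its columns are hypothetical.
# Each row is one scheduled appointment for which an SMS reminder was sent;
# `status` is one of "attended", "no_show", or "cancelled_in_advance".
reminders <- data.frame(status = c("attended", "no_show", "cancelled_in_advance",
                                   "attended", "no_show"))

# Primary outcome: no-shows out of all scheduled appointments
no_show_rate <- mean(reminders$status == "no_show")

# Secondary outcome: advance cancellations out of all reminders sent
advanced_cancellation_rate <- mean(reminders$status == "cancelled_in_advance")
```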

Baseline measurements

Sociodemographic variables were measured at the index date and included biological sex, age (years), socioeconomic status (SES; low, medium, high; based on clinic-level data), population sector (Jewish, non-Jewish), and immigrant status (immigrated to Israel, born in Israel). Clinical characteristics included smoking status (current, former, or non-smoker, as reported in the EHR), body mass index (computed from documented weight and height measurements), and the Charlson Comorbidity Index (computed from risk factors to evaluate an age-comorbidity score [40]). Comorbidity variables were evaluated as of the index date and included cardiovascular diseases (yes/no; defined as any of the following: acute myocardial infarction, unstable angina pectoris, angina pectoris, acute coronary syndrome, percutaneous transluminal coronary angioplasty, coronary artery bypass graft, ischemic heart disease, ischemic stroke), diabetes (yes/no), chronic kidney disease (yes/no; defined as the last eGFR value prior to the index date being less than 60 ml/min/1.73m2), celiac disease (yes/no), and inflammatory bowel disease (yes/no) (see S1 Table). We extracted these diagnoses from community and hospital records, as well as from the CHS chronic disease registry.
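To make one of these derived variables concrete, the base R sketch below (hypothetical object and column names, not the authors’ extraction code) derives the binary chronic kidney disease flag from the last eGFR value recorded before the index date.

```r
# Illustrative sketch only: `egfr_results` is a hypothetical table with one row
# per lab result, containing member_id, test_date, egfr, and the member's index_date.
prior <- egfr_results[egfr_results$test_date < egfr_results$index_date, ]
prior <- prior[order(prior$member_id, prior$test_date), ]            # oldest to newest
last_egfr <- prior[!duplicated(prior$member_id, fromLast = TRUE), ]  # most recent result per member
last_egfr$ckd <- last_egfr$egfr < 60                                 # < 60 ml/min/1.73 m^2 -> CKD = yes
```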

Appointment characteristics included past non-attendance, i.e., ‘chronic no-show’ (yes/no; defined as members who missed appointments at least two times in a row within one year prior to the index date), time to appointment (calculated as the difference in days between the date the appointment was scheduled and the date of the appointment), and clicking on the link (yes/no; defined as whether the member clicked on the link in the SMS reminder message).

Statistical analysis

Socio-demographic, clinical, and appointment-related variables were summarized within the nine SMS groups. Continuous variables are presented as means and standard deviations unless skewed, in which case they are presented as medians and interquartile ranges. Categorical variables are presented as absolute numbers and percentages.

To assess whether any of the message frames resulted in a lower risk of no-show compared with the existing message frame, multinomial testing was performed. Univariate and multivariate analyses were conducted using binary logistic regression models, accounting for the features determined via an automated generic framework to be most predictive of no-show behavior (i.e., sociodemographic, clinical, and appointment-related variables).

Message frames were considered as treatment variables, with the existing message serving as the reference group and the record of attendance as the binary outcome variable. Secondary analyses were conducted to assess the effect of the different message frames on canceling appointments in advance.

Statistical analyses were conducted using the R language (version 3.5.3, R Foundation for Statistical Computing, Vienna, Austria). All statistical tests were 2-tailed, and a 5% significance threshold was maintained.
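A minimal R sketch of this modeling approach is shown below. It assumes an appointment-level data frame `df` with hypothetical column names and is not the authors’ exact code; the adjusted model uses the covariates described in the footnote to Table 3.

```r
# Illustrative sketch only (hypothetical column names). The message frame is a
# factor with the routinely used message as the reference level.
df$frame <- relevel(factor(df$frame), ref = "control")

# Univariate model: no_show is 1 if the appointment was missed, 0 otherwise
fit_uni <- glm(no_show ~ frame, data = df, family = binomial)

# Multivariate model adjusted for sociodemographic, clinical, and
# appointment-related covariates (see the footnote to Table 3)
fit_multi <- glm(no_show ~ frame + age + sex + ses + sector + immigrant +
                   smoking + bmi + charlson + cvd + diabetes + ckd + celiac + ibd +
                   past_no_show + time_to_appointment + specialty + clicked_link,
                 data = df, family = binomial)

# Odds ratios with Wald 95% confidence intervals relative to the control message
exp(cbind(OR = coef(fit_multi), confint.default(fit_multi)))

# The same structure applies to the secondary outcome, e.g.
# glm(advanced_cancellation ~ frame, data = df, family = binomial)
```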

Ethics

This study was reviewed by the IRB of the CHS organization, which determined that it was not a clinical trial but rather an organizational initiative to optimize internal policy. It received an exemption from the requirement for individual informed consent, since it was determined that the various intervention arms posed no harm to members. It was not registered as a clinical trial for these reasons: obtaining consent would introduce a burden to the members larger than the intervention itself; obtaining informed consent would cause serious practical problems that would undermine the trial results (particularly for the control group); and the risk of harm was low, since the intervention merely consisted of small modifications to existing routine processes.

Results

During the study period, between December 2018 and March 2019, there were 218,066 scheduled appointments in CHS’s hospital outpatient clinics, of which 161,587 had a valid associated mobile telephone number with approval for receiving phone-based appointment reminders (Fig 1). Among those who received one of the nine SMS appointment reminders (Table 1), in 104,469 (64.6%) cases the reminder’s accompanying link was opened within 48 hours of receiving the message.

Socio-demographic, clinical, and appointment-related characteristics by type of appointment reminder are shown in Table 2. More than half of the eligible population was female (55.4%), and the average age was 59.3 years. The distribution of members’ characteristics and appointment information was similar across the nine treatment groups, and no significant differences were found (all p values > 0.05) (Table 2).

Table 2. Socio-demographic, clinical and appointment-related characteristics by SMS group.

Variables Population Control Standard Social norm Social identity Emotional relatives Emotional guilt Appointment cost Personal Request Professional figure
Individuals, N 161,587 18,086 18,038 17,467 17,937 17,501 17,646 18,156 18,103 18,653
Female, n (%) 89,599 (55.4%) 10,070 (55.7%) 10,040 (55.7%) 9,696 (55.5%) 9,865 (55.0%) 9,783 (55.9%) 9,904 (56.1%) 10,120 (55.7%) 10,019 (55.3%) 10,102 (54.2%)
Age, mean (SD) 59.3 (18.3) 59.5 (18.4) 59.6 (18.2) 59.7 (18.4) 59.3 (18.2) 58.8 (18.2) 59.2 (18.3) 59.6 (18.1) 58.8 (18.7) 59.2 (17.9)
Socio-economic status, n (%)
Low 25,702 (16.1%) 2,962 (16.6%) 2,789 (15.6%) 2,719 (15.7%) 2,828 (15.9%) 2,834 (16.4%) 2,855 (16.3%) 2,969 (16.5%) 2,807 (15.7%) 2,939 (15.9%)
Medium 57,829 (36.2%) 6,332 (35.4%) 6,703 (37.6%) 6,177 (35.8%) 6,130 (34.6%) 6,431 (37.1%) 6,371 (36.5%) 6,448 (35.9%) 6,563 (36.7%) 6,674 (36.2%)
High 76,270 (47.7%) 8,573 (48.0%) 8,330 (46.7%) 8,381 (48.5%) 8,776 (49.5%) 8,060 (46.5%) 8,248 (47.2%) 8,556 (47.6%) 8,510 (47.6%) 8,836 (47.9%)
Missing 1,786 (1.1%) 219 (1.2%) 216 (1.2%) 190 (1.1%) 203 (1.1%) 176 (1.0%) 172 (1.0%) 183 (1.0%) 223 (1.2%) 204 (1.1%)
Sector, n (%)
Non-Jewish 14,883 (9.2%) 1,745 (9.6%) 1,642 (9.1%) 1,572 (9.0%) 1,684 (9.4%) 1,702 (9.7%) 1,631 (9.2%) 1,690 (9.3%) 1,509 (8.3%) 1,708 (9.2%)
Jewish 146,704 (90.8%) 16,341 (90.4%) 16,396 (90.9%) 15,895 (91.0%) 16,253 (90.6%) 15,799 (90.3%) 16,015 (90.8%) 16,466 (90.7%) 16,594 (91.7%) 16,945 (90.8%)
Immigrants, n (%) 68,082 (42.1%) 7,660 (42.4%) 7,438 (41.2%) 7,600 (43.5%) 7,429 (41.4%) 7,253 (41.4%) 7,525 (42.6%) 7,609 (41.9%) 7,580 (41.9%) 7,988 (42.8%)
Clinical characteristics
Smoking status, n (%)
Non-smoker 91,017 (61.9%) 10,237 (62.0%) 10,076 (61.8%) 9,990 (62.9%) 10,075 (61.4%) 9,785 (61.7%) 9,944 (62.0%) 10,510 (63.1%) 10,105 (61.7%) 10,295 (60.8%)
Former smoker 36,849 (25.1%) 4,106 (24.9%) 4,154 (25.5%) 4,007 (25.2%) 4,168 (25.4%) 3,933 (24.8%) 3,923 (24.4%) 4,014 (24.1%) 4,202 (25.7%) 4,342 (25.6%)
Current smoker 19,120 (13.0%) 2,166 (13.1%) 2,075 (12.7%) 1,894 (11.9%) 2,153 (13.1%) 2,140 (13.5%) 2,181 (13.6%) 2,141 (12.8%) 2,063 (12.6%) 2,307 (13.6%)
Missing 14,601 (9.0%) 1,577 (8.7%) 1,733 (9.6%) 1,576 (9.0%) 1,541 (8.6%) 1,643 (9.4%) 1,598 (9.1%) 1,491 (8.2%) 1,733 (9.6%) 1,709 (9.2%)
BMI, mean (SD) 27.3 (5.5) 27.3 (5.3) 27.2 (5.5) 27.4 (5.4) 27.4 (5.5) 27.3 (5.5) 27.3 (5.3) 27.2 (5.4) 27.3 (5.6) 27.3 (5.6)
Charlson score, mean (SD) 4.2 (3.4) 4.3 (3.5) 4.2 (3.4) 4.3 (3.6) 4.1 (3.4) 4.1 (3.4) 4.2 (3.5) 4.2 (3.4) 4.2 (3.5) 4.2 (3.4)
Missing, n (%) 18,017 (11.2%) 1,932 (10.7%) 2,135 (11.8%) 1,941 (11.1%) 1,963 (10.9%) 1,989 (11.4%) 1,927 (10.9%) 1,896 (10.4%) 2,175 (12.0%) 2,059 (11.0%)
Cardiovascular diseases, n (%) 32,141 (19.9%) 3,706 (20.5%) 3,591 (19.9%) 3,740 (21.4%) 3,404 (19.0%) 3,366 (19.2%) 3,506 (19.9%) 3,718 (20.5%) 3,370 (18.6%) 3,740 (20.1%)
Diabetes, n (%) 47,804 (29.6%) 5,233 (28.9%) 5,340 (29.6%) 5,432 (31.1%) 5,391 (30.1%) 5,087 (29.1%) 5,145 (29.2%) 5,438 (30.0%) 5,144 (28.4%) 5,594 (30.0%)
CKD, n (%) 77,133 (51.7%) 8,732 (52.0%) 8,572 (51.7%) 8,644 (53.4%) 8,478 (51.1%) 8,123 (50.4%) 8,400 (51.8%) 8,800 (52.1%) 8,420 (50.9%) 8,964 (52.0%)
Celiac, n (%) 844 (0.5%) 97 (0.5%) 68 (0.4%) 99 (0.6%) 116 (0.6%) 108 (0.6%) 60 (0.3%) 105 (0.6%) 86 (0.5%) 105 (0.6%)
IBD, n (%) 3,379 (2.1%) 443 (2.4%) 408 (2.3%) 330 (1.9%) 383 (2.1%) 328 (1.9%) 361 (2.0%) 376 (2.1%) 378 (2.1%) 372 (2.0%)
Appointment characteristics
Past behavior of non-attendance, n (%) 1,665 (1.0%) 169 (0.9%) 187 (1.0%) 217 (1.2%) 144 (0.8%) 197 (1.1%) 225 (1.3%) 155 (0.9%) 183 (1.0%) 188 (1.0%)
Time to appointment, median (IQR) (days) 41.0 (21.0–91.0) 42.0 (21.0–92.0) 42.0 (21.0–91.0) 41.0 (20.2–91.0) 42.0 (21.0–91.0) 40.0 (21.0–91.0) 39.0 (20.0–91.0) 41.0 (20.0–91.0) 41.0 (21.0–91.0) 41.0 (20.0–90.0)
Clicked the link, n (%) 104,469 (64.7%) 11,889 (65.7%) 11,736 (65.1%) 10,775 (61.7%) 11,416 (63.6%) 11,006 (62.9%) 11,525 (65.3%) 11,950 (65.8%) 11,848 (65.4%) 12,324 (66.1%)

Abbreviations: SMS, short message service; SD, standard deviation; IQR, interquartile range; BMI, body mass index; SES, socioeconomic status; CKD, chronic kidney disease; IBD, inflammatory bowel disease.

Fig 2 and Table 3 present no-show and advanced cancellation rates in the groups receiving one of the eight alternative message frames compared with the generic control. Five out of the eight alternative message reminders presented in Table 1 (‘appointment cost’, ‘emotional relatives’, ‘emotional guilt’, ‘social norm’, and ‘social identity’) had significantly lower rates of no-shows and higher rates of canceling in advance compared with the routinely used message reminder. The ‘emotional guilt’ reminder frame led to the lowest no-show and highest advanced cancellation rates. Members who received the ‘emotional guilt’ message reminder had a no-show rate of 14.2% compared with 21.1% in the control group (odds ratio [OR]: 0.69, 95% confidence interval [CI]: 0.67, 0.76), and an advanced cancellation rate of 26.3% compared with 17.2% in the control group (OR: 1.2, 95% CI: 1.19, 1.21).

Fig 2. No-show and advanced cancellation rates by SMS group.


Abbreviations: SMS, short message service. No-show rate was calculated as the number of people who did not attend an appointment (and did not cancel in advance) out of the total population with appointments. Advanced cancellation rate was calculated as the number of people who canceled an appointment in advance out of the total number who clicked on the SMS link.

Table 3. The effect of the specific SMS framing on no-show or advanced cancellation according to univariate and multivariate analyses.

Univariate analysis Multivariate analysis
No-show OR [CI] Advanced Cancellation OR [CI] No-show OR [CI] Advanced Cancellation OR [CI]
Control message 1 1 1 1
Standard message 1.03 [0.97, 1.1] 0.98 [0.94, 1.03] 1.01 [0.96,1.1] 0.97 [0.92, 1.03]
Personal request message 0.93 [0.77, 1.23] 1.03 [0.98, 1.08] 0.93 [0.75, 1.23] 1.02 [0.97, 1.1]
Professional figure message 0.87 [0.76,1.45] 1.05 [1.0, 1.09] 0.87 [0.78,1.46] 1.05 [1.0, 1.1]
Appointment cost message 0.72 [0.68, 0.77]*** 1.25 [1.1,1.27]* 0.72 [0.67,0.77]** 1.27 [1.1,1.29]**
Emotional relatives message 0.77 [0.79, 0.82]*** 1.18 [1.17,1.19]*** 0.77 [0.79,0.82]*** 1.17 [1.16,1.19]**
Emotional guilt message 0.69 [0.67, 0.76]*** 1.2 [1.19,1.21]*** 0.69 [0.67,0.75]*** 1.2 [1.19,1.22]***
Social norm message 0.73 [0.61, 0.79]** 1.08 [1.07, 1.08]* 0.73 [0.64, 0.79]** 1.09 [1.07, 1.1]**
Social identity message 0.83 [0.76, 0.87]** 1.19 [1.11, 1.24]** 0.82 [0.75,0.88]** 1.19 [1.10, 1.24]**

Abbreviations: SMS, short message service; OR, odds ratio; CI, confidence interval

All analyses were based on logistic regressions

Multivariate analysis models were adjusted for age (years), sex (male or female), socioeconomic status (low or medium/high), population sector (Jewish or Non-Jewish), immigrant status (immigrated to Israel or born in Israel), smoking status (current smoker or nonsmoker/former smoker), body mass index (kg/m2), Charlson score, diagnosis of heart condition (yes or no), diagnosis of diabetes (yes or no), diagnosis of chronic kidney disease (yes or no), diagnosis of celiac disease (yes or no), diagnosis of inflammatory bowel disease (yes or no), past behavior of non-attendance (≥2 missed appointments), time to appointment, clinic specialty, and clicked the link (yes or no).

Favorable results were found among members who received the ‘appointment cost’ message, with a 15.3% no-show rate (OR: 0.72, 95% CI: 0.68, 0.77) and a 27.4% advanced cancellation rate (OR: 1.25, 95% CI: 1.1, 1.27). Similar results of 15.6% (OR: 0.77, 95% CI: 0.79, 0.82) no-show rate and 23.4% (OR: 1.18, 95% CI: 1.17, 1.19) advanced cancellation rate were found among the members who received the ‘emotional relatives’ framed message.

The ‘social norm’ and ‘social identity’ framed messages were associated with no-show rates of 17.8% (OR: 0.73, 95% CI: 0.61, 0.79) and 17.7% (OR: 0.83, 95% CI: 0.76, 0.87), and advanced cancellation rates of 21.8% (OR: 1.08, 95% CI: 1.07, 1.08) and 24.6% (OR: 1.19, 95% CI: 1.11, 1.24), respectively. The ‘standard’, ‘personal request’, and ‘professional figure’ messages did not produce significantly different results compared with the control message (Fig 2, Table 3).

The multivariate analysis showed that the effects of the message frames on no-show and advanced cancellation remained essentially unchanged after adjusting for socio-demographic variables, clinical and appointment-related characteristics, and past non-attendance behavior (Table 3).

Discussion

We have shown that careful design of SMS narratives based on behavioral economic principles can reduce hospital outpatient clinic no-show rates by over 30 percent. Out of nine differently framed reminders, five produced statistically significant lower no-show rates and higher advanced cancellation rates. The emotional guilt and specific cost message frames showed the greatest nominal differences in no-show rates and advanced cancellation rates compared with the control group (14.2% and 15.3% compared to 21.1% in no-show rates and 26.3% and 27.4% compared to 17.2% in advanced cancellation rates, respectively).

While many health interventions approach behavioral challenges by emphasizing the need to support and prompt the individual through reminders, these results indicate that different messages can influence no-show and cancellation rates [5, 6, 12, 41–43]. These results are aligned with behavioral economic theories; however, the current data do not offer adequate support that the varied effects resulted from those specific psychological mechanisms. Future research is needed to explore this topic.

These results highlight the potential of introducing behavioral economics principles into multiple avenues of healthcare delivery in order to improve member adherence and reduce waste in care provision.

Our study ventures beyond the published literature by including a standard alternative message alongside the affect-based alternatives, possibly indicating that the reduction in no-show rates was in fact due to a change in context and not merely in response to a simple change in wording.

This study had several limitations. First, all reminder messages were sent five days prior to the appointment. This five-day period might be considered relatively long, as previous studies have reported shorter periods of one to three days between sending the reminder and the appointment date [6]. Sending the reminder SMS five days before the appointment may have allowed the psychological effect of the reminder to decay over time, meaning that even members who confirmed attendance might ultimately not attend the appointment. However, if at the time of the reminder members had forgotten about the appointment, this five-day window may have enabled them to rearrange their schedule in order to attend.

Another possible limitation was the inability to distinguish between members who read the SMS reminders and those who did not. However, we retain our confidence in the overall conclusions, since 64.6% of the recipients clicked on the link within 48 hours, indicating that the majority of people read the message. Also, the assignment of the SMS frame for each appointment was properly randomized, and there is no reason to suspect additional potential confounders. Furthermore, all members within the study period had no more than three appointments. It is possible that receiving the same message when scheduling new appointments diminished the effect over time. The multiple and varying number of appointments per patient in the sample led to correlated error variance in the study dataset, which may have produced biased error estimates. However, due to the short period of the study (December 2018 to March 2019), not many patients had repeated appointments (approximately 9.4% of the total population).

It is important to note that the expected effect of rephrasing reminder messages may be limited, as the messages were designed with the ‘average’ person in mind rather than customized at the individual level. While we found that the effect of sending alternative messages was maintained after adjustment for various covariates, it is possible that interactions with individual characteristics modify the impact of specific message frames on no-shows. Future research should focus on customizing the content per person to further reduce no-show rates.

A major strength of this study is that all 14 CHS hospitals, located throughout the country, were included. This means that the effect of different SMS versions on no-shows was assessed among participants from diverse backgrounds, and thus the findings can be generalized more broadly for policy purposes.

The number of unattended appointments across all outpatient clinics in CHS’ 14 hospitals is approximately 600,000 annually (18.7% of all outpatient clinic appointments). Our results indicate that replacing the current reminder message with a carefully designed message can potentially save 187,000 appointments annually. Nationally, this change can potentially result in saving approximately 350,000 unattended appointments, thereby improving the quality of care across the country.

Since May 2019, CHS has changed its policy to adopt the ‘emotional guilt’ narrative in all outpatient clinics for all messages used in daily practice (more than 3 million appointments a year), and is monitoring the scale of the real-world impact of this change. This can serve as an example of how knowledge gained by a learning healthcare system can be implemented in routine clinical practice and effect changes in the organization’s policy [29, 44]. It is worth noting that such a change may have unintended consequences beyond improving visit attendance, such as a reduction in clinic or physician satisfaction ratings.

The era of digital health enables healthcare providers to systematically customize their interactions with members in order to more effectively encourage healthier behavior [45]. This simple example of how strategic use of traditional messaging substantially impacts members’ behavior shows the untapped potential of smart messaging in health care. Improvement in member engagement depends on the utilization of technical add-ons, but even more so on the nuances of how the messaging is constructed.

Supporting information

S1 Table. Codes used for variable definitions.

(DOCX)

Acknowledgments

We thank Oran Huberman, Meirav Visel, Tzahi Israel, and Moshe Gerlitz for their contribution to the conceptual framing and design of the intervention; Sydney Krispin and Becca Feldman for their editorial support and for reviewing the manuscript; and Galit Benbenishiti, Nachum Yosef, and Ilan Gofer for their help in the design and implementation of the randomization process.

Data Availability

The data underlying the results presented in the study are available from Clalit Health Services (http://clalitresearch.org/).

Funding Statement

The Israeli Ministry of Finance provided funds to the commercial company "Kayma Labs" to permit the development of the randomization process. No other funds were received by the authors in connection with this study. No funding bodies had any role in the study design, data collection, analysis, decision to publish, or preparation of the manuscript. The authors TK, HZ, and DA are employed by “Kayma Labs”, though this company did not fund this research paper.

References

  • 1.Nelson A., et al. , Predicting scheduled hospital attendance with artificial intelligence. npj Digital Medicine, 2019. 2(1). 10.1038/s41746-019-0090-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Elvira C., et al. , Machine-Learning-Based No Show Prediction in Outpatient Visits. International Journal of Interactive Multimedia and Artificial Intelligence, 2018. 4(7): p. 29. [Google Scholar]
  • 3.Hallsworth M., et al. , Stating Appointment Costs in SMS Reminders Reduces Missed Hospital Appointments: Findings from Two Randomised Controlled Trials. PLoS One, 2015. 10(9): p. e0137306 10.1371/journal.pone.0137306 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Ellis D.A., et al. , Demographic and practice factors predicting repeated non-attendance in primary care: a national retrospective cohort analysis. Lancet Public Health, 2017. 2(12): p. e551–e559. 10.1016/S2468-2667(17)30217-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.de Jongh T., et al. , Mobile phone messaging for facilitating self-management of long-term illnesses. Cochrane Database Syst Rev, 2012. 12: p. CD007459 10.1002/14651858.CD007459.pub2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Gurol‐Urganci I., et al. , Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database of Systematic Reviews, 2013(12). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Guy R., Hocking J., Wand H., Stott S., Ali H., and Kaldor J., How effective are SMS reminders at increasing clinic attendance? A meta-analysis and systematic review. Health Services Research, 2012. 47(2): p. 614–632. 10.1111/j.1475-6773.2011.01342.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Young K.E., Franklin A.B., and Ward J.P., Infestation of northern spotted owls by hippoboscid (Diptera) flies in northwestern California. J Wildl Dis, 1993. 29(2): p. 278–83. 10.7589/0090-3558-29.2.278 [DOI] [PubMed] [Google Scholar]
  • 9.Lin C.L., Mistry N., Boneh J., Li H., and Lazebnik R., Text Message Reminders Increase Appointment Adherence in a Pediatric Clinic: A Randomized Controlled Trial. International Journal of Pediatrics, 2016. 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Kuerbis A., van Stolk-Cooke K., and Muench F., An exploratory study of mobile messaging preferences by age: Middle-aged and older adults compared to younger adults. J Rehabil Assist Technol Eng, 2017. 4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Kerrison R.S., et al. , Text-message reminders increase uptake of routine breast screening appointments: a randomised controlled trial in a hard-to-reach population. Br J Cancer, 2015. 112(6): p. 1005–10. 10.1038/bjc.2015.36 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Haynes L.C., et al. , Collection of Delinquent Fines: An Adaptive Randomized Trial to Assess the Effectiveness of Alternative Text Messages. Journal of Policy Analysis and Management, 2013. 32(4): p. 718–730. [Google Scholar]
  • 13.Hallsworth M., List J., Metcalfe R., and Vlaev I., The Behavioralist as Tax Collector: Using Natural Field Experiments to Enhance Tax Compliance. Journal of Policy Analysis and Management, 2013. 32: p. 718–730. [Google Scholar]
  • 14.Junod Perron N., et al. , Text-messaging versus telephone reminders to reduce missed appointments in an academic primary care clinic: a randomized controlled trial. BMC Health Serv Res, 2013. 13: p. 125 10.1186/1472-6963-13-125 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Dantas L.F., et al. , No-shows in appointment scheduling—a systematic literature review. Health Policy, 2018. 122(4): p. 412–421. 10.1016/j.healthpol.2018.02.002 [DOI] [PubMed] [Google Scholar]
  • 16.McLean S.M., et al. , Appointment reminder systems are effective but not optimal: results of a systematic review and evidence synthesis employing realist principles. Patient Prefer Adherence, 2016. 10: p. 479–99. 10.2147/PPA.S93046 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Neal R.D., et al. , Reasons for and consequences of missed appointments in general practice in the UK: questionnaire survey and prospective review of medical records. BMC Fam Pract, 2005. 6: p. 47 10.1186/1471-2296-6-47 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Martin C., Perfect T., and Mantle G., Non-attendance in primary care: the views of patients and practices on its causes, impact and solutions. Family Practice, 2005. 22(6): p. 638–643. 10.1093/fampra/cmi076 [DOI] [PubMed] [Google Scholar]
  • 19.van Baar J.D., et al. , Understanding reasons for asthma outpatient (non)-attendance and exploring the role of telephone and e-consulting in facilitating access to care: exploratory qualitative study. Quality and Safety in Health Care, 2006. 15(3): p. 191–195. 10.1136/qshc.2004.013342 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Crosby L.E., et al. , Perceived barriers to clinic appointments for adolescents with sickle cell disease. J Pediatr Hematol Oncol, 2009. 31(8): p. 571–6. 10.1097/MPH.0b013e3181acd889 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.McGlone M.S., Stephens K.K., Rodriguez S.A., and Fernandez M.E., Persuasive texts for prompting action: Agency assignment in HPV vaccination reminders. Vaccine, 2017. 35: p. 4295–4297. 10.1016/j.vaccine.2017.06.080 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Rice T., The behavioral economics of health and health care. Annu Rev Public Health, 2013. 34: p. 431–47. 10.1146/annurev-publhealth-031912-114353 [DOI] [PubMed] [Google Scholar]
  • 23.Soler R.E., et al. , Nudging to Change: Using Behavioral Economics Theory to Move People and Their Health Care Partners Toward Effective Type 2 Diabetes Prevention. Diabetes Spectr, 2018. 31(4): p. 310–319. 10.2337/ds18-0022 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Thaler R.H. and Sunstein C.R., Nudge: Improving Decisions About Health, Wealth, and Happiness. 2008. [Google Scholar]
  • 25.King D., et al. , Approaches based on behavioral economics could help nudge patients and providers toward lower health spending growth. Health Aff (Millwood), 2013. 32(4): p. 661–8. [DOI] [PubMed] [Google Scholar]
  • 26.Barkan R., et al. , The pot calling the kettle black: distancing response to ethical dissonance. J Exp Psychol Gen, 2012. 141(4): p. 757–73. 10.1037/a0027588 [DOI] [PubMed] [Google Scholar]
  • 27.Bertoni M., Corazzini L., and Robone S., Promoting breast cancer screening take-ups with zero cost: Evidence from an experiment on formatting invitation letters in Italy. IZA Discussion Paper, 2019. 12193. [Google Scholar]
  • 28.Ding X., et al. , Designing risk prediction models for ambulatory no-shows across different specialties and clinics. J Am Med Inform Assoc, 2018. 25(8): p. 924–930. 10.1093/jamia/ocy002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.McGinnis J.M., Aisner D., and Olsen L., The Learning Healthcare System: Workshop Summary. National Academies Press, June 2007. [PubMed] [Google Scholar]
  • 30.Rosen B., Waitzberg R., and Merkur S., Israel: Health System Review. Health Syst Transit, 2015. 17(6): p. 1–212. [PubMed] [Google Scholar]
  • 31.Hogg M.A. and Reid S.A., Social Identity, Self-Categorization, and the Communication of Group Norms. Communication Theory, 2006. 16(1): p. 7–30. [Google Scholar]
  • 32.Goldstein N.J., Cialdini R.B., and Griskevicius V., A Room with a Viewpoint: Using Social Norms to Motivate Environmental Conservation in Hotels. Journal of Consumer Research, 2008. 35(3): p. 472–482. [Google Scholar]
  • 33.Griskevicius V., et al. , Going along versus going alone: when fundamental motives facilitate strategic (non)conformity. J Pers Soc Psychol, 2006. 91(2): p. 281–94. 10.1037/0022-3514.91.2.281 [DOI] [PubMed] [Google Scholar]
  • 34.Shapiro J.R. and Neuberg S.L., When do the stigmatized stigmatize? The ironic effects of being accountable to (perceived) majority group prejudice-expression norms. J Pers Soc Psychol, 2008. 95(4): p. 877–98. 10.1037/a0011617 [DOI] [PubMed] [Google Scholar]
  • 35.Barrett L.F. and Salovey P., The Wisdom in Feeling. Guilford, New York, 2002.
  • 36.Christian R.C. and Alm J., Empathy, sympathy, and tax compliance. Journal of Economic Psychology, 2014. 40: p. 62–82. [Google Scholar]
  • 37.Zellermayer O., The pain of paying. unpublished dissertation, Department of Social and Decision Sciences. Carnegie Mellon University, Pittsburgh, PA, 1996. [Google Scholar]
  • 38.Frederick S., et al. , Opportunity Cost Neglect. Journal of Consumer Research, 2009. 36(4): p. 553–561. [Google Scholar]
  • 39.Kassin S.M., Deposition Testimony and the Surrogate Witness: Evidence for a "Messenger Effect" in Persuasion. Personality and Social Psychology Bulletin, 1983. 9(2): p. 281–288. [Google Scholar]
  • 40.Charlson M., et al. , Validation of a combined comorbidity index. J Clin Epidemiol, 1994. 47(11): p. 1245–51. 10.1016/0895-4356(94)90129-5 [DOI] [PubMed] [Google Scholar]
  • 41.Hasvold P.E. and Wootton R., Use of telephone and SMS reminders to improve attendance at hospital appointments: a systematic review. Journal of Telemedicine and Telecare, 2011. 17(7): p. 358–364. 10.1258/jtt.2011.110707 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Koshy E., Car J., and Majeed A., Effectiveness of mobile-phone short message service (SMS) reminders for ophthalmology outpatient appointments: Observational study. BMC Ophthalmology, 2008. 8(1): p. 9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Downer S.R., et al. , SMS text messaging improves outpatient attendance. Australian Health Review, 2006. 30(3): p. 389–396. 10.1071/ah060389 [DOI] [PubMed] [Google Scholar]
  • 44.Budrionis A. and Bellika J.G., The Learning Healthcare System: Where are we now? A systematic review. Journal of Biomedical Informatics, 2016. 64: p. 87–92. 10.1016/j.jbi.2016.09.018 [DOI] [PubMed] [Google Scholar]
  • 45.Michie S., van Stralen M.M., and West R., The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci, 2011. 6: p. 42 10.1186/1748-5908-6-42 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Sreeram V Ramagopalan

8 May 2020

PONE-D-20-04898

It’s how you say it: systematic A/B testing of digital messaging cut hospital no-show rates

PLOS ONE

Dear Mrs. Berliner Senderey,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Jun 21 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Sreeram V. Ramagopalan

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Competing Interests section:

"The authors have declared that no competing interests exist."

We note that one or more of the authors are employed by a commercial company: Kayma labs

a. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.

Please also include the following statement within your amended Funding Statement.

“The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.”

If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

b. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc. 

Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

c. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

3. Your ethics statement must appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please also ensure that your ethics statement is included in your manuscript, as the ethics section of your online submission will not be published alongside your manuscript.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The submitted manuscript has several strong features making it worthy of publication. The topic chosen has substantial impact on both outcomes and financing in Israel’s healthcare system, and likely in all national healthcare systems. The study population and dataset have high ecological validity, the intervention design seems thorough and well-controlled, and the analyses are appropriate for the general study objectives. Most of the line-specific comments below have minor or moderate impacts on the overall quality of the study design and analysis.

The major challenge with the manuscript, as written, is that the conclusions extend well beyond what can be supported by a conservative interpretation of the data and analyses. The analyses show that varying the message content of appointment reminders can influence no-show and cancellation rates. The specific message alternatives drafted by the study authors are inspired by well-regarded and influential behavioral theories. However, the authors have offered no analyses that demonstrate whether the specific stimuli they drafted for this study are actually perceived by study participants (or people like the study participants) to have the intended affective and motivational impacts. The existing data support that different messages are more effective, but do not currently support hypotheses about _why_ they are effective.

I hate to be the reviewer that requests more data and/or analysis, especially for an open access journal submission. However, there is a straightforward fix if you want to go beyond the conclusion that “different messages can influence no-show and cancellation rates”. You’d need to get independent ratings of the message content, separately from their effect on no-show and cancellation rates. Even post hoc, even with a convenience sample of Israelis, consistent Likert-scale ratings of the messages against statements like “This message makes me feel guilt” or “This message makes me feel peer pressure” could support the argument that the stimuli you drafted are good representations of the behavioral economic principles they’re supposed to represent. I can’t tell whether any of the authors are behavioral or social psychologists, but a relatively quick consultation with a research psychologist could yield a good set of rating questions to support the various source theories.

If the authors do not wish to conduct additional analysis to show that the stimuli are rated as having the effect intended by these various theories, then a resubmitted manuscript should substantially scale back the interpretation of results. It would be appropriate, consistent with the often theory-free discipline of A/B testing, to note simply that different messages influence appointment behaviors, and imply that continued A/B testing could optimize appointment behaviors even if no underlying theory is applied. It would even be appropriate to speculate that the varied appointment effects are consistent with the source behavior theories, with a heavy “more research is needed” caveat as a limitation. But the current data do not offer adequate support that any of these behavioral theories is responsible for study effects, nor that one theory predicts stronger effects than another.

Line-specific comments:

• Abstract line 11 - appears to contain a typo “Clalit’sto”. Additional typos and grammar notes appear in subsequent lines, but my review was not exhaustive. Recommend that the resubmitted manuscript first get a thorough proofread by a native English speaker.

• Text line 53 - “randomized” not “randomize”

• Text line 61 - “suggests” not “suggest”

• Text line 61 – “Behavioral economic theory” does correlate message contents and their “different motivational narratives” with varying impact, but “_testing_ different motivational narratives” is just a good practice encouraged in behavioral economics. If I’m understanding your sentence correctly, I think it is stronger with the word “testing” removed.

• Text line 65 – Be careful making a value assertion here about “underuse” - how much behavioral economics use would be enough? How would we determine this?

• Text line 72 - “affect” not “effect”

• Text line 73 - “the learning health system” not “health learning system”

• Text line 79 - “registerd” is misspelled

• Text line 155 - The methods are not clearly described, but much of the source data for CCI appear to be diagnoses available in the system across each subject’s entire medical history with Clalit. Access to diagnoses is likely unequal across the cohort, based on patient age and duration of using this particular clinic system. This is a source of noise that may have weakened the likelihood of showing positive correlations with CCI in study results. Randomization would have balanced out the degree of “availability” bias that this would have introduced in CCI calculations between treatment arms, but the chance of detecting a CCI main effect would still be weakened throughout the population.

• Text line 191 – It appears that this is multinomial testing, not serial A/B testing as discussed in marketing disciplines. The medical/health services research audience for your paper will not necessarily know what A/B testing is, so it should be defined more explicitly, or the market research term should be removed from this article.

• Text line 209 - Text lines 301-302, later in the paper, clarify that the 64.6% statistic _only_ refers to click-through behavior. This line implies that it refers both to reading and to click-through. The number of messages that were read is likely larger than 64.6%, but is not measured or reported separately. Please clarify the language accordingly.

• Text line 215 - Table 2 contains no statistical inference tests to support that “no significant differences were found”. It is not necessary to modify Table 2 to include tests if they were done and indeed not significant. If tests were done, a parenthetical phrase in this sentence would suffice (e.g., all p values > X). If tests were not done, please remove the “no significant differences were found” phrase.

• Text line 272 - your analysis models do not support your ability to conclude that specific messages led to the “lowest” no-show rates or the “highest” advanced cancellation rates. This would require pairwise comparisons among the messages that reduced no-shows and increased cancellations. You can observe that specific messages had the greatest “nominal” differences from the control group, which acknowledges that “lowest” and “highest” are not supported by statistical inference testing. In reality, the pairwise ORs for no-shows and cancellations are quite comparable in magnitude among the five of your messages with significant differences from the control group.

• Text line 279 - “substancial” is misspelled

• Text lines 278-281 - is your effect due to a “mere change in narrative” or specific “behavioral economics principles”? Your analyses don’t allow you to distinguish. The messages included in Supplement 1 would appear to a prudent layperson to match the principles listed in each row. But you’ve reported no independent message testing to confirm that the messages are perceived that way. This does not allow you to defend against any number of alternative explanations that have nothing to do with the specific behavioral economics principles. For example, is the difference because the effective messages have the longest word/character counts? Is this a Hawthorne effect, given that the control message had been in use for a while and most of the experimental messages were noticeable changes from the prior one? Is it because the effective messages provide a reason, any reason, to show up or cancel in advance (see Ellen Langer’s 1978 study on “placebic” information in persuasion)?

• Text lines 285-286 - You’ve shown no data to confirm whether patients receiving these messages felt any ‘affect’ or any specific emotional response, so the current version of your study cannot be evaluated against specific frameworks such as MINDSPACE.

• Text lines 306-308 - This is a minor point given the larger context of the study, but the multiple and varying number of appointments per patient in the sample leads to correlated error variance in your study dataset, which may have produced biased error estimates. If a study dataset allows some patients to count once, but others to count multiple times, a statistical analysis model that accounts for this correlation in error variances is often used. In your resubmission, it’s probably better to mention this as a limitation than to go back and apply a more sophisticated analysis model (e.g., generalized estimating equations) that may not improve your precision.

• Text lines 327-328 - it might make sense to acknowledge that a change like this might have additional unintended consequences besides improving visit attendance. For example, does using a guilt-based message encourage patient resentment that might reduce clinic or physician satisfaction ratings?

• Text line 332 - “customize” instead of “costumize”?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Vernon F. Schabert, Ph.D.

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Jun 23;15(6):e0234817. doi: 10.1371/journal.pone.0234817.r002

Author response to Decision Letter 0


1 Jun 2020

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Response: Thank you. We have reviewed PLOS ONE’s style requirements and believe we have made the necessary formatting changes.

2. Thank you for stating the following in the Competing Interests section:

"The authors have declared that no competing interests exist."

We note that one or more of the authors are employed by a commercial company: Kayma labs

a. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection, and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.

Please also include the following statement within your amended Funding Statement.

“The funder provided support in the form of salaries for authors [insert relevant initials] but did not have any additional role in the study design, data collection, and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.”

If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

Response: We have included the following section at the end of the revised manuscript: “The authors TK, HZ, DA are employed by the commercial company “Kayma Labs”, though this company did not fund this research paper.

The Israeli Ministry of Finance funded this research but did not play a role in the study design, data collection, and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries or research materials”.

b. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc.

Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Response: We have included the following section in the revised manuscript: “Competing Interests Statement: The authors TK, HZ, DA are employed by the commercial company “Kayma Labs”. This commercial affiliation does not alter our adherence to PLOS ONE policies on sharing data and materials.”

c. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

Response: We have included both an updated Funding Statement and Competing Interests Statement in our cover letter above.

3. Your ethics statement must appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please also ensure that your ethics statement is included in your manuscript, as the ethics section of your online submission will not be published alongside your manuscript.

Response: The ethics statement was moved to the methods section.

Reviewers' comments:

Reviewer's Responses to Questions

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Response: We appreciate the reviewer’s remark and hope that the responses below (specifically # 5) will address this appropriately and help to draw better conclusions based on the data presented.

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exceptions (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians, and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

5. Review Comments to the Author

Reviewer #1: The submitted manuscript has several strong features making it worthy of publication. The topic chosen has substantial impact on both outcomes and financing in Israel’s healthcare system, and likely in all national healthcare systems. The study population and dataset have high ecological validity, the intervention design seems thorough and well-controlled, and the analyses are appropriate for the general study objectives. Most of the line-specific comments below have minor or moderate impacts on the overall quality of the study design and analysis.

The major challenge with the manuscript, as written, is that the conclusions extend well beyond what can be supported by a conservative interpretation of the data and analyses. The analyses show that varying the message content of appointment reminders can influence no-show and cancellation rates. The specific message alternatives drafted by the study authors are inspired by well-regarded and influential behavioral theories. However, the authors have offered no analyses that demonstrate whether the specific stimuli they drafted for this study are actually perceived by study participants (or people like the study participants) to have the intended affective and motivational impacts. The existing data support that different messages are more effective, but do not currently support hypotheses about _why_ they are effective.

I hate to be the reviewer that requests more data and/or analysis, especially for an open access journal submission. However, there is a straightforward fix if you want to go beyond the conclusion that “different messages can influence no-show and cancellation rates”. You’d need to get independent ratings of the message content, separately from their effect on no-show and cancellation rates. Even post hoc, even with a convenience sample of Israelis, consistent Likert-scale ratings of the messages against statements like “This message makes me feel guilt” or “This message makes me feel peer pressure” could support the argument that the stimuli you drafted are good representations of the behavioral economic principles they’re supposed to represent. I can’t tell whether any of the authors are behavioral or social psychologists, but a relatively quick consultation with a research psychologist could yield a good set of rating questions to support the various source theories.

If the authors do not wish to conduct additional analysis to show that the stimuli are rated as having the effect intended by these various theories, then a resubmitted manuscript should substantially scale back the interpretation of results. It would be appropriate, consistent with the often theory-free discipline of A/B testing, to note simply that different messages influence appointment behaviors, and imply that continued A/B testing could optimize appointment behaviors even if no underlying theory is applied. It would even be appropriate to speculate that the varied appointment effects are consistent with the source behavior theories, with a heavy “more research is needed” caveat as a limitation. But the current data do not offer adequate support that any of these behavioral theories is responsible for study effects, nor that one theory predicts stronger effects than another.

Response: Thank you for this important and insightful comment.

We agree that the existing data can only support the conclusion that different messages are more effective, even if no underlying theory is applied, and that the current data do not support hypotheses about why they are effective or indicate that one theory predicts stronger effects than another.

We have rephrased and scaled back the interpretation of the results in our discussion section, noting that future research is needed to demonstrate whether the specific wording was perceived by study participants as having the intended affective and motivational impacts.

Please also see our response below to the comment regarding lines 278-281.
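For illustration only (and not part of the study analyses), a minimal Python sketch of the kind of post hoc manipulation check the reviewer describes: summarizing Likert-scale ratings of each message frame and testing whether the frames differ on the targeted item. The data and column names below are hypothetical.

import pandas as pd
from scipy.stats import kruskal

# Each row: one rater's 1-5 Likert response to one message frame on the item
# "This message makes me feel guilty." (hypothetical ratings, not study data)
ratings = pd.DataFrame({
    "message": ["control", "control", "control",
                "guilt", "guilt", "guilt",
                "social_norm", "social_norm", "social_norm"],
    "guilt_rating": [2, 1, 2, 5, 4, 5, 3, 2, 3],
})

# If the manipulation works as intended, the "guilt" frame should score
# highest on the guilt item.
print(ratings.groupby("message")["guilt_rating"].mean())

# Omnibus non-parametric test that at least one frame differs on the item.
groups = [g["guilt_rating"].to_numpy() for _, g in ratings.groupby("message")]
stat, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")

In practice, the same summary would be repeated for each targeted construct (guilt, peer pressure, social norms, and so on) across all nine frames, ideally with rating items drafted in consultation with a research psychologist, as the reviewer suggests.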

Line-specific comments:

• Abstract line 11 - appears to contain a typo “Clalit’sto”. Additional typos and grammar notes appear in subsequent lines, but my review was not exhaustive. Recommend that the resubmitted manuscript first get a thorough proofread by a native English speaker.

Response: We thank the reviewer for this comment. We corrected the typos and grammar accordingly and had a professional native English speaker proofread the revised manuscript.

• Text line 53 - “randomized” not “randomize”

Response: Corrected

• Text line 61 - “suggests” not “suggest”

Response: Corrected

• Text line 61 – “Behavioral economic theory” does correlate message contents and their “different motivational narratives” with varying impact, but “_testing_ different motivational narratives” is just a good practice encouraged in behavioral economics. If I’m understanding your sentence correctly, I think it is stronger with the word “testing” removed.

Response: We fully agree with the reviewer's comment and removed “testing” from the sentence.

• Text line 65 – Be careful making a value assertion here about “underuse” - how much behavioral economics use would be enough? How would we determine this?

Response: We thank the reviewer for the comment. The sentence was rephrased in the manuscript accordingly: “Such policies and practices might have been relatively underused in healthcare practice”.

• Text line 72 - “affect” not “effect”

Response: Corrected

• Text line 73 - “the learning health system” not “health learning system”

Response: Corrected

• Text line 79 - “registerd” is misspelled

Response: Corrected

• Text line 155 - The methods are not clearly described, but much of the source data for CCI appear to be diagnoses available in the system across each subject’s entire medical history with Clalit. Access to diagnoses is likely unequal across the cohort, based on patient age and duration of using this particular clinic system. This is a source of noise that may have weakened the likelihood of showing positive correlations with CCI in study results. Randomization would have balanced out the degree of “availability” bias that this would have introduced in CCI calculations between treatment arms, but the chance of detecting a CCI main effect would still be weakened throughout the population.

Response: We thank the reviewer for this comment. It is important to emphasize that the study was performed using electronic health record data from Clalit Health Services (CHS), the largest of four national health funds in Israel. All Israeli citizens are covered by one of the health funds and can switch between them at any time; however, switching rates are relatively low (about 1% annually), which allows consistent longitudinal follow-up. All participants in this study had been CHS members for more than one year, so it is unlikely that diagnoses are missing or undocumented. We agree that the randomization assigning each participant to one of the nine messages balanced this bias across treatment arms, but that it might still weaken the power to detect a main effect of CCI across the population.

We have added the following section to the methods: “All Israeli citizens are covered by one of four healthcare organizations, and while it is possible to switch between the organizations, membership turnover within CHS is less than 1% annually, allowing for consistent longitudinal follow-up.”
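As a purely illustrative aside, and not the method used in the study, one common way to reduce unequal diagnosis availability when computing a comorbidity score is to restrict the lookback to a fixed window before the index appointment. A minimal Python sketch follows; the weights shown are only a small illustrative subset of the Charlson weights, and the column names are hypothetical.

import pandas as pd

CHARLSON_WEIGHTS = {  # illustrative subset of Charlson weights only
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes_uncomplicated": 1,
    "metastatic_solid_tumor": 6,
}

def cci_fixed_lookback(diagnoses, appt_date, lookback_days=365):
    """Sum weights for conditions coded within a fixed window before the appointment."""
    window = diagnoses[
        (diagnoses["diagnosis_date"] >= appt_date - pd.Timedelta(days=lookback_days))
        & (diagnoses["diagnosis_date"] < appt_date)
    ]
    present = set(window["condition"]) & set(CHARLSON_WEIGHTS)
    return sum(CHARLSON_WEIGHTS[c] for c in present)

# Hypothetical usage: only diagnoses within the last 365 days contribute.
diagnoses = pd.DataFrame({
    "condition": ["myocardial_infarction", "diabetes_uncomplicated", "metastatic_solid_tumor"],
    "diagnosis_date": pd.to_datetime(["2018-06-01", "2017-01-10", "2018-11-20"]),
})
print(cci_fixed_lookback(diagnoses, pd.Timestamp("2019-01-15")))  # -> 7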

• Text line 191 – It appears that this is multinomial testing, not serial A/B testing as discussed in marketing disciplines. The medical/health services research audience for your paper will not necessarily know what A/B testing is, so it should be defined more explicitly, or the market research term should be removed from this article.

Response: We agree that this is not serial A/B testing as known in marketing disciplines. We randomized each patient to one of nine message reminders and compared the groups at a single time point. The sentence was rephrased in the manuscript accordingly: “compared to the currently existing message frame, we performed multinomial testing”.
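To make the distinction concrete for readers less familiar with the marketing term, a minimal Python sketch of a single multinomial (omnibus) comparison across reminder arms and outcome categories, as opposed to serial two-arm A/B tests. The counts below are placeholders, not the study data, and only three of the nine arms are shown.

import numpy as np
from scipy.stats import chi2_contingency

# Rows = message arms (control first); columns = [attended, no-show, cancelled in advance]
counts = np.array([
    [11000, 3800, 3100],   # control (placeholder counts)
    [12500, 2600, 4800],   # emotional guilt (placeholder counts)
    [12200, 2800, 4600],   # specific cost (placeholder counts)
])

# One omnibus chi-square test over all arms and outcomes.
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")

An omnibus test of this kind only indicates that the arms differ somewhere; per-arm comparisons against the control, as reported in the paper, are still needed to characterize which messages differ.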

• Text line 209 - Text lines 301-302, later in the paper, clarify that the 64.6% statistic _only_ refers to click-through behavior. This line implies that it refers both to reading and to click-through. The number of messages that were read is likely larger than 64.6%, but is not measured or reported separately. Please clarify the language accordingly.

Response: We thank the reviewer for this comment. Indeed, due to privacy policy limitations, we could only count the click-through rate and could not track the rate at which participants opened the message. Thus, as mentioned in the comment, it is more than likely that the proportion of messages that were read was higher than 64.6%. The sentence was corrected accordingly in the manuscript: “Among those who received one of the nine SMS appointment reminders (Table 1), in 104,469 (64.6%) cases, the reminder’s accompanying link was opened within 48 hours of receiving the message.”

• Text line 215 - Table 2 contains no statistical inference tests to support that “no significant differences were found”. It is not necessary to modify Table 2 to include tests if they were done and indeed not significant. If tests were done, a parenthetical phrase in this sentence would suffice (e.g., all p values > X). If tests were not done, please remove the “no significant differences were found” phrase.

Response: Thank you for this important comment. Statistical tests were conducted to examine differences between the groups, and none were significant. We added the phrase you suggested to the manuscript: “(e.g., all p values > 0.05)”.
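As an illustration of the kind of balance check referred to here, a minimal Python sketch that tests each categorical baseline characteristic against the randomization arm and collects the p values. The data frame and column names are hypothetical, not the study data.

import pandas as pd
from scipy.stats import chi2_contingency

def balance_pvalues(df, arm_col, covariates):
    """Chi-square p value per covariate from arm-by-covariate contingency tables."""
    pvals = {}
    for cov in covariates:
        table = pd.crosstab(df[arm_col], df[cov])
        _, p, _, _ = chi2_contingency(table)
        pvals[cov] = p
    return pd.Series(pvals)

# Hypothetical usage with a toy cohort:
cohort = pd.DataFrame({
    "message_arm": ["control", "guilt", "control", "guilt", "control", "guilt"],
    "sex": ["F", "M", "M", "F", "F", "M"],
})
print(balance_pvalues(cohort, "message_arm", ["sex"]))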

• Text line 272 - your analysis models do not support your ability to conclude that specific messages led to the “lowest” no-show rates or the “highest” advanced cancellation rates. This would require pairwise comparisons among the messages that reduced no-shows and increased cancellations. You can observe that specific messages had the greatest “nominal” differences from the control group, which acknowledges that “lowest” and “highest” are not supported by statistical inference testing. In reality, the pairwise ORs for no-shows and cancellations are quite comparable in magnitude among the five of your messages with significant differences from the control group.

Response: We thank the reviewer for this insightful comment. We agree that we can only compare each alternative to the control group, not to the other alternatives. Hence, we have corrected this paragraph in the manuscript: “The emotional guilt and specific cost message frames showed the greatest nominal differences in no-show rates and advanced cancellation rates compared with the control group (14.2% and 15.3% compared to 21.1% in no-show rates, and 26.3% and 27.4% compared to 17.2% in advanced cancellation rates, respectively)”.
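A minimal Python sketch of the comparison the design does support: each alternative message versus the control, expressed as an odds ratio with a Wald 95% confidence interval computed from a 2x2 table. The counts below are placeholders, not the study data.

import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table: arm [a events, b non-events] vs control [c, d]."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    return or_, np.exp(np.log(or_) - z * se), np.exp(np.log(or_) + z * se)

# Placeholder example: no-shows vs attendances, one alternative arm vs control.
print(odds_ratio_ci(a=2600, b=15700, c=3800, d=14200))

A Wald interval on the log odds ratio is one simple choice; the key point, as the reviewer notes, is that these contrasts are all against the control arm rather than between the alternative messages.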

• Text line 279 - “substancial” is misspelled

Response: Thank you. We have rewritten this section, and the word no longer appears.

• Text lines 278-281 - is your effect due to a “mere change in narrative” or specific “behavioral economics principles”? Your analyses don’t allow you to distinguish. The messages included in Supplement 1 would appear to a prudent layperson to match the principles listed in each row. But you’ve reported no independent message testing to confirm that the messages are perceived that way. This does not allow you to defend against any number of alternative explanations that have nothing to do with the specific behavioral economics principles. For example, is the difference because the effective messages have the longest word/character counts? Is this a Hawthorne effect, given that the control message had been in use for a while and most of the experimental messages were noticeable changes from the prior one? Is it because the effective messages provide a reason, any reason, to show up or cancel in advance (see Ellen Langer’s 1978 study on “placebic” information in persuasion)?

Response: We thank the reviewer for this important comment. We agree that the results only show that different messages can influence appointment behaviors, even if no underlying theory is applied. It can only be speculated that the varied appointment effects are consistent with the source behavioral theories. The current data do not offer adequate support that any of these behavioral theories is responsible for the study effects, nor that one theory predicts stronger effects than another. We have added the following to the discussion section (at the end of the second paragraph):

"These results indicate that different messages can influence no-show and cancellation rates. These results are aligned with behavioral economic theories. However, the current data do not offer adequate support that the varied effects have resulted from those specific psychological mechanisms. Future research is needed to explore this topic”.

• Text lines 285-286 - You’ve shown no data to confirm whether patients receiving these messages felt any ‘affect’ or any specific emotional response, so the current version of your study cannot be evaluated against specific frameworks such as MINDSPACE.

Response: We agree with the reviewer and have adjusted the paragraph in the discussion section accordingly.

• Text lines 306-308 - This is a minor point given the larger context of the study, but the multiple and varying number of appointments per patient in the sample leads to correlated error variance in your study dataset, which may have produced biased error estimates. If a study dataset allows some patients to count once, but others to count multiple times, a statistical analysis model that accounts for this correlation in error variances is often used. In your resubmission, it’s probably better to mention this as a limitation than to go back and apply a more sophisticated analysis model (e.g., generalized estimating equations) that may not improve your precision.

Response: We thank the reviewer for this insightful comment. We agree that the multiple and varying number of appointments per patient in the sample may have led to correlated error variance in our study dataset.

Given that the eligible population for this study included patients who had an appointment between December 2018 and March 2019, few patients had repeated appointments; we therefore did not employ a model that accounts for within-patient correlation, such as generalized estimating equations. We note this as a potential limitation and have addressed the issue in the limitations section as follows: “The multiple and varying number of appointments per patient in the sample led to correlated error variance in the study dataset, which may have produced biased error estimates. However, due to the short period of the study (December 2018 – March 2019), not many patients had repeated appointments (approximately 9.4% of the total population).”
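For completeness, a minimal Python sketch of the alternative the reviewer mentions: a GEE logistic model with an exchangeable working correlation, clustering repeated appointments within patients. The dataset below is tiny and synthetic, and the variable names are hypothetical.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic example: some patients contribute more than one appointment.
df = pd.DataFrame({
    "patient_id":  [1, 1, 2, 3, 3, 4, 5, 5, 6, 7],
    "message_arm": ["control", "guilt", "control", "guilt", "guilt",
                    "control", "guilt", "control", "guilt", "control"],
    "no_show":     [1, 0, 0, 1, 0, 1, 0, 1, 0, 0],
})

# GEE logistic regression, clustering on patient_id with an exchangeable
# working correlation structure.
model = smf.gee(
    "no_show ~ C(message_arm, Treatment(reference='control'))",
    groups="patient_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())

With only a small share of patients contributing repeated appointments, such a model would be expected to yield estimates close to those of an ordinary logistic regression, which is consistent with treating this as a limitation rather than re-analyzing the data.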

• Text lines 327-328 - it might make sense to acknowledge that a change like this might have additional unintended consequences besides improving visit attendance. For example, does using a guilt-based message encourage patient resentment that might reduce clinic or physician satisfaction ratings?

Response: We thank the reviewer for this insightful comment, and have addressed this in the discussion section: “It is worth noting that such a change may have additional unintended consequences other than improving visit attendance, such as a reduction in clinic or physician satisfaction ratings”.

• Text line 332 - “customize” instead of “costumize”?

Response: Yes, we meant “customize”. We have corrected this mistake.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Sreeram V Ramagopalan

3 Jun 2020

It’s how you say it: Systematic A/B testing of digital messaging cut hospital no-show rates

PONE-D-20-04898R1

Dear Dr. Berliner Senderey,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Sreeram V. Ramagopalan

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Sreeram V Ramagopalan

8 Jun 2020

PONE-D-20-04898R1

It’s how you say it: Systematic A/B testing of digital messaging cut hospital no-show rates

Dear Dr. Berliner Senderey:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Sreeram V. Ramagopalan

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Codes used for variable definitions.

    (DOCX)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    The data underlying the results presented in the study are available from clalit health services http://clalitresearch.org/.

