PLOS One. 2020 Nov 19;15(11):e0242210. doi: 10.1371/journal.pone.0242210

Integrating remote monitoring into heart failure patients’ care regimen: A pilot study

Albert Sohn 1, William Speier 1, Esther Lan 2, Kymberly Aoki 2, Gregg C Fonarow 3, Michael K Ong 2, Corey W Arnold 1,4,*
Editor: Robert Ehrman
PMCID: PMC7676713  PMID: 33211733

Abstract

Background

Around 50% of hospital readmissions due to heart failure are preventable, with lack of adherence to prescribed self-care as a driving factor. Remote tracking and reminders issued by mobile health devices could help promote self-care and thereby reduce these readmissions.

Objective

We sought to investigate two factors: (1) feasibility of enrolling heart failure patients in a remote monitoring regimen that uses wireless sensors and patient-reported outcome measures; and (2) their adherence to using the study devices and completing patient-reported outcome measures.

Methods

Twenty heart failure patients participated in piloting a remote monitoring regimen. Data collection included: (1) physical activity using wrist-worn activity trackers; (2) body weight using bathroom scales; (3) medication adherence using smart pill bottles; and (4) patient-reported outcomes using patient-reported outcome measures.

Results

We evaluated 150 hospitalized heart failure patients and enrolled 20 individuals. Two factors contributed to 50% (65/130) being excluded from the study: smartphone ownership and patient discharge. Over the course of the study, 60.0% of the subjects wore the activity tracker for at least 70% of the hours, and 45.0% used the scale for more than 70% of the days. The pill bottle was used less than 10% of the days by 55.0% of the subjects.

Conclusions

Our method of recruiting heart failure patients prior to hospital discharge may not be feasible as the enrollment rate was low. Once enrolled, the majority of subjects maintained a high adherence to wearing the activity tracker but low adherence to using the pill bottle and completing the follow-up surveys. Scale usage was fair, but it received positive reviews from most subjects. Given the observed usage and feedback, we suggest mobile health-driven interventions consider including an activity tracker and bathroom scale. We also recommend administering a shorter survey more regularly and through an easier interface.

Introduction

Background

In the United States, at least 6.2 million adults currently live with heart failure [1]. Heart failure prevalence is projected to increase by 46% between 2012 and 2030, resulting in more than 8 million adults with heart failure [1]. Physical inactivity, obesity, and smoking are well-known lifestyle risk factors for heart failure [2]. Guidelines for secondary prevention after diagnosis emphasize physical activity, weight management, smoking cessation, and medication adherence [3]. Despite advances in medications and guidelines for heart failure management, heart failure considerably increases the risk of morbidity and mortality. Incidence of, morbidity resulting from, and hospitalization due to heart failure have substantial financial implications. Total direct medical costs of heart failure are expected to rise from $21 billion to $53 billion between 2012 and 2030 [1]. Total direct and indirect costs are estimated to increase by 127% from $30.7 billion in 2012 to $69.7 billion in 2030 [1].

Hospitalization is common among heart failure patients and is a significant driver of heart failure-related costs. Annually, there are more than 4 million hospitalizations with a primary or secondary diagnosis of heart failure [4], and they constitute up to 79% of the costs for heart failure treatment [5]. In 2013, heart failure was the sixth most expensive condition treated in US hospitals as costs reached $10.2 billion [6], with readmissions accounting for $2.7 billion [7]. Among heart failure patients, 83% are hospitalized at least once, and 43% are hospitalized at least four times [8]. Within the first 30 days of discharge, hospital readmission rates for heart failure patients exceed 20% [9, 10], while they near 50% within 6 months of discharge [11]. However, as high as 75% of the 30-day readmissions may be preventable [12] by addressing the patients’ lack of information, comprehension, or adherence to prescribed self-care [13, 14].

Health care expenditures for heart failure increase with an aging population, and thus preventing heart failure and improving care efficiency are imperative. Since weight change or medication non-adherence alone, for example, may not correlate with hospitalizations [15–17], past home monitoring interventions have utilized a variety of methods, including wireless sensors, telephone services, websites, and home visits from nurses [18–21]. Results of these interventions designed to improve health outcomes and reduce readmissions are inconclusive. Additionally, patient adherence to telemedicine interventions is often low [22]. The poor adherence and negative results are in part due to the high treatment burden home monitoring interventions place upon patients. In most cases, they require patients to engage in novel behaviors such as using unfamiliar hardware and following high-frequency manual measurement regimens (e.g., taking one’s blood pressure or heart rate multiple times a day).

Objective

Mobile health (mHealth), defined as the application of mobile technology (e.g., software apps on mobile devices, wireless sensors, etc.) in health care, may be a preferred minimally invasive alternative to telemedicine interventions [23, 24]. Activity trackers are examples of mHealth devices that have been studied due to individuals’ relatively high adherence to wearing them upon recommendation. A previous study using commercial activity trackers produced adherence rates as high as 90% [25]. There were two main factors we sought to learn more about in this pilot study: (1) is it feasible to enroll heart failure patients in a remote monitoring regimen that uses wireless sensors and patient-reported outcome (PRO) measures; and (2) once enrolled, how adherent would patients be in using the study devices and completing PRO measures. In this work, we investigate the feasibility of recruitment and potential of the remote monitoring regimen by detailing preliminary results while identifying real-world problems and solutions to enable larger future studies.

Methods

Ethics

Data collection and analysis presented in this work were carried out under research protocol #17–001312 approved by the University of California, Los Angeles Institutional Review Board. We obtained signed informed consent from all participants in the study.

Recruitment

All patients who were admitted as an inpatient or for observation from May 2018 to June 2019 were prescreened for inclusion in the study. Those who were 50 years of age or older and were being actively treated for heart failure were considered eligible for the study. Additional criteria included ownership of a compatible smartphone device (iOS or Android) with cellular voice and data, in addition to access to a Wi-Fi connection in their home. Eligible patients who were interested in the study had to score 3.5 or higher on a shortened version of the Mobile Device Proficiency Questionnaire (MDPQ) to enroll (S1 Table in S1 Appendix) [26]. Exclusion criteria included having a cognitive disability (e.g., dementia), being unable to communicate in English, having visual or auditory impairments to the extent that a smartphone could not be used, having a full-time caregiver, and enrollment or being in the process of enrolling in hospice. Prior to discharge and enrollment in the study, eligible patients signed an informed consent form, which described the baseline survey, follow-up surveys, and institutional review board-approved procedures. Subjects were not paid but were allowed to keep the study devices, which they were given at the beginning of the study after completing the baseline survey in person with a study coordinator.

Each subject’s New York Heart Association (NYHA) classification and ejection fraction (EF) were recorded to describe the patients’ heart failure according to the severity of their symptoms and limitations. The NYHA classification categorizes the severity of heart failure by considering heart failure patients’ symptoms at rest and during physical activity [27]. EF indicates the percentage of blood leaving the left ventricle when it contracts and is a measurement of the heart’s degree of function.

Remote monitoring regimen

Upon providing signed informed consent, subjects received a Fitbit Charge 2 (Fitbit, Inc., San Francisco, CA, USA), a bathroom scale (BodyTrace, Inc., Palo Alto, CA, USA), and a smart pill bottle. The Fitbit Charge 2 is a wrist-worn consumer product that uses a combination of accelerometers and optical sensors to track activity, heart rate (HR), and sleep based on arm movement and wrist capillary oxygenation. Subjects were asked to wear the Fitbit activity tracker continuously (Table 1), with interruptions for only activities involving water and charging the device. Data synced to users’ smartphones via the Fitbit app, where it was then uploaded to the Fitbit cloud database and subsequently pulled to the research server via Fitbit’s application programming interfaces (APIs). The BodyTrace scale is a wireless bathroom scale that digitally captures weight. Weights were automatically uploaded to the BodyTrace cloud database via cellular modem data connection after every use (Table 1) and were available via an API. The smart pill bottle has a smart cap that automatically tracks its removal from the bottle. This signal conveys information on medication consumption and was sent to its companion smartphone app via Bluetooth (Table 1). Cap removal events were available via an API.

Table 1. mHealth for heart failure protocol data collection.

Data Collection Baseline Day 30 Day 90 Day 180
Remote Monitoring
Fitbit Charge 2 Continuous
BodyTrace Bathroom Scale Upon Usage
Smart Pill Bottle Upon Usage
Post-Discharge Questionnaires
Rapid Estimate of Adult Literacy in Medicine (REALM-7)
Demographics
Health Information National Trends Survey (HINTS)
Minnesota Living with Heart Failure Questionnaire (MLHFQ)
Self-Care of Heart Failure Index (SCHFI)
Geriatric Depression Scale (GDS-15)
Lubben Social Network Scale (LSNS-6)
Seattle Angina Questionnaire (SAQ-7)
Kansas City Cardiomyopathy Questionnaire (KCCQ-12)
Patient-Reported Outcomes Measurement Information System (PROMIS) Global Health
PROMIS Physical Function Short Form (SF)
PROMIS Fatigue SF
PROMIS Anxiety SF
PROMIS Depression SF
PROMIS Sleep Disturbance SF
PROMIS Social Isolation SF
Hospital Readmission Questions
Emergency Room (ER) Questions
Device Questions

A web-based data integration platform was utilized to collect all data streams for remote monitoring. This platform used vendor APIs to retrieve data every hour from Fitbit and once per day from BodyTrace’s and the smart pill bottle’s APIs. If the Fitbit activity tracker did not sync for 48 hours, study personnel issued a text message to the subject followed by a phone call if it did not sync for 72 hours. If the bathroom scale or the smart pill bottle did not sync for 72 hours, the subject received a text message as well as a phone call after 96 hours. Subjects were allowed to opt out of these reminders.
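
The escalation rules above (a text message after a device-specific silent period, then a phone call 24 hours later) can be sketched as follows. This is an illustrative sketch only; the device keys, function name, and datetime-based interface are our assumptions, not the study platform’s actual code:

```python
from datetime import datetime, timedelta

# Hours of silence before a text message is issued, per device
# (a phone call follows 24 h after the text threshold).
TEXT_THRESHOLD_HOURS = {"fitbit": 48, "scale": 72, "pill_bottle": 72}

def reminder_action(device: str, last_sync: datetime, now: datetime) -> str:
    """Return the reminder to issue for a device that has not synced."""
    hours_silent = (now - last_sync) / timedelta(hours=1)
    text_after = TEXT_THRESHOLD_HOURS[device]
    if hours_silent >= text_after + 24:
        return "phone_call"    # e.g., Fitbit silent for 72 h or more
    if hours_silent >= text_after:
        return "text_message"  # e.g., Fitbit silent for 48-72 h
    return "none"
```

Subjects who opted out of reminders would simply be skipped before this check is made.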

Study IDs were used to identify subjects, and collected data were stored in a HIPAA-compliant encrypted database. Data recorded for each subject included the subject’s contact information and study-specific information including discharge date and study completion date, as well as withdrawal date and expiration date, if applicable. Because some of the daily readings may have ceased if a subject were hospitalized, study personnel monitored the university-based hospital system for hospital readmissions. When appropriate, readmission date(s) and readmission discharge date(s) were documented. Withdrawal of participation occurred if a subject requested withdrawal. In the event that a subject withdrew from the study or expired, study personnel stopped contacting the subject for follow-up surveys.

Post-discharge surveys

Study personnel contacted each subject on four separate occasions to complete a total of four surveys. The first was administered during the enrollment process and served as the baseline. It consisted of 16 sections (Table 1), including one that encompassed questions about sociodemographic characteristics (S2 Table in S1 Appendix). After 30 days, 90 days, and 180 days, a follow-up survey was administered via phone by a member of the study team. Study team members called subjects during a window that started three days before and ended three days after the aforementioned follow-up periods. If a subject could not be contacted during that time frame, a paper copy of the follow-up survey was mailed to the subject’s home address along with a pre-addressed, postage-paid return envelope. The 30-day, 90-day, and 180-day follow-up surveys each consisted of 13 sections (Table 1), which included questions about hospital readmission(s), emergency room (ER) visit(s), and study devices (S3 Table in S1 Appendix).

The baseline survey included the Rapid Estimate of Adult Literacy in Medicine (REALM-7) to determine the subjects’ health literacy [28] and the Health Information National Trends Survey (HINTS) to assess the subjects’ experience with technology (Table 1) [29]. Subjects also completed the Minnesota Living with Heart Failure Questionnaire (MLHFQ), Self-Care of Heart Failure Index (SCHFI), Geriatric Depression Scale (GDS-15), Lubben Social Network Scale (LSNS-6), Seattle Angina Questionnaire (SAQ-7), Kansas City Cardiomyopathy Questionnaire (KCCQ-12), and seven different Patient-Reported Outcomes Measurement Information System (PROMIS) questionnaires (Table 1). The MLHFQ is a 21-item patient-oriented measurement of health-related quality of life (HRQOL) [30]. It measures how heart failure affects a patient in three specific areas: physical, emotional, and socioeconomic. The GDS-15 is a screening test for depression among elderly populations [31], while the LSNS-6 measures the strength of social support networks among elderly populations [32]. The SCHFI assesses a patient’s ability to manage their heart failure via 22 total items in three subscales: maintenance, management, and confidence [33]. The SAQ-7 and KCCQ-12 questionnaires evaluate patients’ HRQOL in regard to angina and heart failure, respectively [34, 35]. Lastly, the PROMIS questionnaires are publicly available individual-centered PRO measures [36, 37].

Scoring

Physical (0–40) and emotional (0–25) scores for the MLHFQ were calculated by summation of corresponding responses. Lower scores signified better HRQOL with respect to physical and emotional well-being, while higher scores signified worse HRQOL [30]. Addition of all 21 responses generated a total score, creating a possible range of 0 to 105. The following represents the classification of scores: good (<24), moderate (24–45), and poor (>45) HRQOL [30]. For the GDS-15, each yes or no question had a designated answer that was indicative of depression [31]. The number of answers matching those indicative of depression was the total score. Scores ≤5 are not suggestive of depression, whereas >5 suggests depression and ≥10 is almost always indicative of depression [31]. LSNS-6 total scores were derived by addition of corresponding responses. Each question was scored from 0 to 5, with less than monthly, none, and never representing 0 and daily, nine or more, and always denoting 5 [32]. The total score has a range of 0 to 30, and scores ≤12 suggest risk of social isolation [32].
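
The MLHFQ and GDS-15 cutoffs above amount to simple threshold classifiers. A minimal sketch (function names are ours, not from the study’s analysis code):

```python
def mlhfq_category(total_score: float) -> str:
    """Classify an MLHFQ total score (0-105) into HRQOL bands."""
    if total_score < 24:
        return "good"
    if total_score <= 45:
        return "moderate"
    return "poor"

def gds15_category(score: int) -> str:
    """Interpret a GDS-15 score (0-15) using the cutoffs in the text."""
    if score >= 10:
        return "almost always indicative of depression"
    if score > 5:
        return "suggests depression"
    return "not suggestive of depression"
```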

Raw scores for the three SCHFI subscales were determined by summation of responses in each section. The management raw score was calculated only if the subject acknowledged having trouble breathing or ankle swelling within the month before taking the survey [33]. Standardization of the raw scores was performed to a 0 to 100 scale, with higher scores indicating better self-care. Adequate self-care is defined as scores ≥70 for all sections of the SCHFI [33]. Addition of all corresponding responses for both the SAQ-7 and KCCQ-12 questionnaires produced raw scores, which were then standardized to a 0 to 100 range. Scores for both questionnaires are classified as poor (0–24), fair (25–49), good (50–74), and excellent (75–100) HRQOL in regard to angina and heart failure, respectively [34, 35]. Raw scores for the PROMIS questionnaires were computed by summation of responses to each questionnaire and then converted to t scores, which are standardized to a mean of 50 and a standard deviation (SD) of 10 [36, 37]. For function measures, scores ≥40 are considered normal, and scores <40 represent moderate to severe adverse health effects; for symptom measures, scores ≤60 are considered normal, and scores >60 denote moderate to severe adverse health effects [36, 37].
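
The two standardizations above can be expressed compactly. Note the t-score function below is a linear illustration of the mean-50/SD-10 convention only; in practice, PROMIS raw sums are converted via published lookup tables rather than a linear formula:

```python
def standardize_0_100(raw: float, raw_min: float, raw_max: float) -> float:
    """Rescale a summed raw score to 0-100 (as for SCHFI, SAQ-7, KCCQ-12)."""
    return 100.0 * (raw - raw_min) / (raw_max - raw_min)

def t_score(raw: float, ref_mean: float, ref_sd: float) -> float:
    """Linear sketch of a t score: mean 50, SD 10 in a reference population."""
    return 50.0 + 10.0 * (raw - ref_mean) / ref_sd
```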

Statistical analysis

Each questionnaire in the baseline and follow-up surveys was scored prior to statistical analyses. Missing items in questionnaires using Likert scales were substituted by the mean of the subject’s responses from the same questionnaire. However, those missing more than 20% of the items were deemed incomplete and not considered, since reliability declines when missingness is >20% [38]. The cohort was characterized using proportions, means, SDs, medians, and interquartile ranges (IQRs). For each questionnaire, summaries of responses and scores were reported, if applicable.
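
The imputation and exclusion rule described above can be sketched as follows (the function name and list-of-responses interface are illustrative assumptions):

```python
def score_likert_items(responses, missing_limit=0.2):
    """Mean-impute a subject's missing Likert items; return None when more
    than 20% of items are missing (questionnaire deemed incomplete)."""
    n_missing = sum(1 for r in responses if r is None)
    if n_missing / len(responses) > missing_limit:
        return None  # excluded from analysis
    observed = [r for r in responses if r is not None]
    item_mean = sum(observed) / len(observed)
    return [item_mean if r is None else r for r in responses]
```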

Adherence to wearing the Fitbit Charge 2 was calculated using two methods: (1) HR by hour (HR-hour); and (2) HR by minute (HR-minute). HR-hour is the number of hours in the study during which each subject had a heart rate recording divided by the total number of hours in the study. Similarly, HR-minute is the number of minutes in the study during which each subject had a heart rate recording divided by the total number of minutes in the study. To distinguish the subject from others in the household for adherence to weight measurement, weight data for each subject were averaged, and their SDs were calculated. Only weights within 3.5 SDs of each subject’s mean weight were defined to represent subject usage. For both the bathroom scale and smart pill bottle, adherence was calculated by taking the number of days in the study that each subject had data recorded on the respective device and dividing it by the total number of days in the study.
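
The adherence ratio and the 3.5-SD weight filter can be sketched as follows (a simplified illustration; names are ours, not the study’s analysis code):

```python
from statistics import mean, stdev

def hr_hour_adherence(hours_with_hr: int, total_study_hours: int) -> float:
    """HR-hour: fraction of study hours containing a heart rate recording."""
    return hours_with_hr / total_study_hours

def filter_subject_weights(readings, z_limit=3.5):
    """Keep only readings within 3.5 SDs of the subject's mean weight,
    to exclude likely use of the shared scale by other household members."""
    m, sd = mean(readings), stdev(readings)
    if sd == 0:
        return list(readings)
    return [w for w in readings if abs(w - m) <= z_limit * sd]
```

One caveat of this simple rule is that a single extreme reading inflates the SD, so outliers are excluded reliably only when the subject’s own readings form a tight cluster.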

Correlation analyses were conducted with subject characteristics (i.e., NYHA classification, EF, age, education, and annual income) to quantify their relationships with the subjects’ adherence to using the study devices and the devices’ perceived helpfulness. Correlations between PROs and both device adherence and perceived helpfulness were determined in the same manner. A significance level of .05, corresponding to a 95% CI, was used for all analyses. If a subject was readmitted during the monitoring period, adherence rates were calculated only up to the day before the readmission to avoid partial data on the day of admission. Any surveys completed by subjects whose first readmission occurred before the halfway mark of their next follow-up survey were not considered.
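
A generic illustration of the kind of correlation analysis described, computing Pearson’s r with an approximate 95% CI via the Fisher z transform (the paper does not state which CI method was used; this is one standard choice):

```python
import math

def pearson_r_ci(x, y, z_crit=1.96):
    """Pearson correlation with an approximate 95% CI (Fisher z transform)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    z = math.atanh(r)               # Fisher z transform
    se = 1 / math.sqrt(n - 3)       # approximate standard error of z
    lo = math.tanh(z - z_crit * se)
    hi = math.tanh(z + z_crit * se)
    return r, (lo, hi)
```

With the small sample here (n = 20 or fewer per analysis), such intervals are wide, which is visible in the CIs reported in the Results.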

Results

Demographics

We evaluated 150 hospitalized heart failure patients between May 2018 and June 2019 for study eligibility. Of these 150 patients, 23 (15.3%) were discharged before they could be approached regarding the study, and 32 (21.3%) declined to participate. Another 69 (46.0%) patients did not meet the inclusion criteria, including 42 (28.0%) without a smartphone and eight (5.3%) who did not meet the minimum MDPQ score. In total, 20 (13.3%) heart failure patients were enrolled in the study.

The subjects’ mean age was 65.3 years (SD 9.3; range 50–86). Half of the subjects were women, and 36.8% were African American (Table 2). A high school degree was the highest level of education for 25.0% of the subjects, whereas 35.0% had received a bachelor’s degree or higher. The proportions of subjects whose families earned less than $50,000 (52.6%) and more than $50,000 (47.4%) were fairly similar. In regard to heart failure, 81.8% of the subjects were determined to be NYHA Class III or IV, and 65.0% had EFs less than 50%.

Table 2. Demographics of the study population.

Characteristic Value
Age (years; n = 20), mean (SD) 65.3 (9.3)
Sex (n = 20), n (%)  
Male 10 (50.0)
Female 10 (50.0)
Hispanic or Spanish origin (n = 20), n (%)  
No 18 (90.0)
Yes 2 (10.0)
Race or Ethnicity (n = 19), n (%)  
White 12 (63.2)
Black or African American 7 (36.8)
Asian 0 (0.0)
American Indian or Alaska Native 0 (0.0)
Native Hawaiian or other Pacific Islander 0 (0.0)
Education (n = 20), n (%)  
High school 5 (25.0)
Some college, associate degree, or trade school 8 (40.0)
Bachelor's degree 4 (20.0)
Master's degree or above 3 (15.0)
Annual Income (US $; n = 19), n (%)  
0–25,000 4 (21.1)
25,001–50,000 6 (31.6)
50,001–75,000 2 (10.5)
75,001 or more 7 (36.8)
New York Heart Association class (n = 11), n (%)
I 0 (0.0)
II 2 (18.2)
III 5 (45.5)
IV 4 (36.4)
Ejection fraction (n = 20), n (%)  
≤40% 13 (65.0)
41%-49% 0 (0.0)
≥50% 7 (35.0)

Access to technology

In contrast to only 12 subjects owning a tablet (63.2%), 17 subjects owned a smartphone (89.5%) (Table 3). All subjects had access to the internet through a wireless network, and the majority had additional internet access through a cellular network (75.0%). Most subjects had experience accessing the internet or their email account(s) (85.0%) and researching information about heart failure (70.0%). Fewer subjects had previously used apps on their smartphones to achieve health-related goals (52.6%), to make decisions about treatment (44.4%), or to ask a doctor new questions or get a second opinion (47.4%).

Table 3. Patient answers to the Health Information National Trends Survey.

Question No Yes
Do you ever go on-line to access the Internet or World Wide Web, or to send and receive e-mail? (n = 20) 3 (15.0) 17 (85.0)
When you use the Internet, do you ever access it through a regular dial-up telephone line? (n = 20) 18 (90.0) 2 (10.0)
When you use the Internet, do you ever access it through broadband such as DSL, cable or FiOS? (n = 20) 6 (30.0) 14 (70.0)
When you use the Internet, do you ever access it through a cellular network (i.e., phone, 3G/4G)? (n = 20) 5 (25.0) 15 (75.0)
When you use the Internet, do you ever access it through a wireless network (Wi-Fi)? (n = 20) 0 (0.0) 20 (100.0)
In the past 12 months, have you used the Internet to look for heart failure information for yourself? 6 (30.0) 14 (70.0)
Do you own a tablet? (n = 19) 7 (36.8) 12 (63.2)
Do you own a smartphone? (n = 19) 2 (10.5) 17 (89.5)
Do you own a cell phone? (n = 19) 3 (15.8) 16 (84.2)
On your tablet or smartphone, do you have any software applications or "apps" related to health? (n = 20) 5 (25.0) 15 (75.0)
Have the apps on your smartphone or tablet related to health helped you achieve a health-related goal such as quitting smoking, losing weight, or increasing physical activity? (n = 19) 9 (47.4) 10 (52.6)
Have the apps on your smartphone or tablet related to health helped you make a decision about how to treat an illness or condition? (n = 18) 10 (55.6) 8 (44.4)
Have the apps on your smartphone or tablet related to health led you to ask a doctor new questions, or to get a second opinion from another doctor? (n = 19) 10 (52.6) 9 (47.4)

Patient-reported outcomes

The median MLHFQ score was 66.5 (IQR: 42.1–73.8) at baseline, corresponding to a poor HRQOL (Table 4), while the median KCCQ score (45.7, IQR: 35.7–58.6) at baseline suggested a fair HRQOL in regard to heart failure. The median SAQ score (57.4, IQR: 48.9–72.4) at baseline suggested a good HRQOL with respect to angina. Subjects had adequate ability to perform maintenance behaviors (median SCHFI maintenance score of 70.0, IQR: 51.9–83.3 at baseline) but an inadequate confidence level (median SCHFI confidence score of 66.7, IQR: 55.6–77.8 at baseline). The median SCHFI management score (60.0, IQR: 45.0–75.0) at baseline also revealed inadequate ability to manage heart failure for the 18 subjects who had experienced recent breathing complications or ankle swelling.

Table 4. Questionnaire scores at baseline.

Questionnaire Median Score (IQR)
Minnesota Living with Heart Failure Questionnaire
Physical (n = 20) 28.5 (16.5–34.0)
Emotional (n = 20) 11.0 (8.0–15.0)
Total (n = 20) 66.5 (42.1–73.8)
Self-Care of Heart Failure Index
Maintenance (n = 19) 70.0 (51.9–83.3)
Management (n = 18) 60.0 (45.0–75.0)
Confidence (n = 19) 66.7 (55.6–77.8)
Geriatric Depression Scale (n = 20) 4.0 (2.0–5.5)
Lubben Social Network Scale
Family (n = 20) 20.0 (15.0–21.0)
Friendships (n = 19) 17.0 (14.0–19.0)
Seattle Angina Questionnaire (n = 17) 57.4 (48.9–72.4)
Kansas City Cardiomyopathy Questionnaire (n = 19) 45.7 (35.7–58.6)
PROMIS Global Health
Physical (n = 17) 37.4 (34.9–45.0)
Mental (n = 17) 45.8 (41.1–53.4)
PROMIS Physical Function (n = 19) 36.0 (30.3–39.3)
PROMIS Fatigue (n = 19) 62.7 (57.0–69.0)
PROMIS Anxiety (n = 19) 55.6 (48.8–60.7)
PROMIS Depression (n = 18) 54.8 (49.0–58.9)
PROMIS Sleep Disturbance (n = 18) 55.2 (54.3–61.7)
PROMIS Social Isolation (n = 18) 46.8 (34.8–51.8)

According to the median and IQR scores at baseline for the GDS-15 (4.0, IQR: 2.0–5.5), the LSNS-6 Family subscale (20.0, IQR: 15.0–21.0), and the LSNS-6 Friendships subscale (17.0, IQR: 14.0–19.0), most subjects were not depressed or at-risk for social isolation (Table 4). Similarly, median scores for the following PROMIS questionnaires concerning mental health at baseline were within the normal range: Global Mental Health subscale (45.8, IQR: 41.1–53.4), Anxiety (55.6, IQR: 48.8–60.7), Depression (54.8, IQR: 49.0–58.9), Sleep Disturbance (55.2, IQR: 54.3–61.7), and Social Isolation (46.8, IQR: 34.8–51.8). On the other hand, median scores for the PROMIS questionnaires concerning physical health at baseline revealed moderate to severe adverse health effects: Global Physical Health subscale (37.4, IQR: 34.9–45.0), Physical Function (36.0, IQR: 30.3–39.3), and Fatigue (62.7, IQR: 57.0–69.0).

Fewer subjects completed the SAQ and the management section of the SCHFI in their follow-up surveys because they no longer experienced the symptoms (chest pain, chest tightness, or angina, and trouble breathing or ankle swelling, respectively) that made them eligible to complete those questionnaires (S4 Table in S1 Appendix). Fig 1, which shows the average changes in subjects’ questionnaire scores, indicates improvements in heart failure maintenance, heart failure management, angina, heart failure, physical function, fatigue, anxiety, depression, sleep disturbance, and social isolation.

Fig 1. Average changes in patient-reported outcomes.

Fig 1

For non-PROMIS questionnaires, a positive change indicates improvement in health status. A positive change also signifies improvement in health status for the following PROMIS questionnaires: Global Physical Health, Global Mental Health, and Physical Function. Conversely, a negative change is indicative of improvement in health status for the following PROMIS questionnaires: Fatigue, Anxiety, Depression, Sleep Disturbance, and Social Isolation.

Remote patient monitoring

Over the course of the study, one subject withdrew and three subjects expired, including one who made the transition to hospice. There were 10 different subjects who were readmitted to the hospital at least once and a total of 21 all-cause hospital readmissions. Two (9.5%) readmissions occurred within 30 days of discharge, while 11 (52.4%) occurred between 30 and 90 days and 8 (38.1%) between 90 and 180 days. Of the 21 hospital readmissions, four (19.0%) included heart failure in the admission diagnosis.

Fig 2 illustrates the proportion of activity tracker (HR-hour and HR-minute), bathroom scale, and smart pill bottle usage across all subjects. The median usage percentage of the activity tracker was 79.1% for HR-hour and 75.4% for HR-minute. Median usage percentages of the bathroom scale and smart pill bottle were 59.7% and 2.8%, respectively.

Fig 2. Histograms of activity tracker, bathroom scale, and smart pill bottle usage.

Fig 2

Median (IQR) usage percentages were 79.1% (27.1%–90.6%), 75.4% (23.4%–84.9%), 59.7% (24.6%–79.4%), and 2.8% (0.0%–54.3%) for HR-hour, HR-minute, bathroom scale, and smart pill bottle, respectively.

Correlations

HR-hour was selected for analysis of summary statistics and correlations. Device usage was not significantly correlated with collected subject characteristics (i.e., NYHA classification, EF, age, education, and annual income). On the other hand, HR-hour generated significant negative correlations with changes in SCHFI confidence subscale scores (r = -0.61; 95% CI: -0.87, -0.09), as well as changes in SAQ scores (r = -0.94; 95% CI: -1.00, -0.30). For bathroom scale usage, its negative correlation with follow-up SCHFI confidence subscale scores was the only significant result (r = -0.72; 95% CI: -0.90, -0.30). Smart pill bottle usage did not produce any significant correlations with PROs or their changes.

Perceived helpfulness of the activity tracker (S5 Table in S1 Appendix) produced significant positive correlations with HR-hour (r = 0.79; 95% CI: 0.41, 0.93) and follow-up SCHFI management subscale scores (r = 0.75; 95% CI: 0.09, 0.95). Additionally, it formed a significant negative correlation with subject age (r = -0.77; 95% CI: -0.93, -0.37). Subject age and the smart pill bottle’s perceived helpfulness were also significantly negatively correlated (r = -0.72; 95% CI: -0.91, -0.31). Perceived helpfulness of the bathroom scale was significantly positively correlated with its usage (r = 0.54; 95% CI: 0.01, 0.83) and significantly negatively correlated with EF (r = -0.59; 95% CI: -0.85, -0.09).

Discussion

Principal findings

Smartphone ownership and patient discharge were some of the biggest factors that adversely affected our enrollment rate. Results show that 42 heart failure patients did not own a smartphone, and 23 were discharged before they could be approached regarding the study. These two factors accounted for 50% (65/130) of the patients who were excluded from the study.

The median usage percentage of the activity tracker was 79.1%, and 60.0% of the subjects wore the device for at least 70% of the hours (Fig 2). When asked about their opinion of the activity tracker (S3 Table in S1 Appendix), most subjects alluded to the usefulness of the step count and heart rate tracking features. The notification and community features were mentioned as well but not as extensively. The bathroom scale’s median usage percentage was lower at 59.7%, but nearly half (45.0%) of the subjects surpassed a usage rate of 70% (Fig 2). Despite using the bathroom scale at a lower rate than the activity tracker, the majority of subjects found it easy to incorporate into their lives and tried to use it every day. Conversely, subjects were much less adherent to using the smart pill bottle: over half (55.0%) of the subjects used the device on less than 10% of the days, including seven (35.0%) who did not use it at all (Fig 2). The most common feedback subjects provided regarding the pill bottle was that its medication reminders were helpful but that their pill box was preferable for managing their medications. Lastly, the 30-day, 90-day, and 180-day follow-up surveys had completion rates of 50%, 55%, and 65%, respectively.

Common to all three study devices was a decrease in usage over the course of the study (S6 Table in S1 Appendix). This is consistent with previous eHealth studies, in which patients gradually lost interest in or became burdened by study procedures [39]. In a similar study of chronically ill patients using telemonitoring devices, including a Fitbit activity tracker, patients felt overwhelmed by having multiple devices and used only those of interest to them [40]. Despite study personnel’s efforts to monitor and improve adherence by issuing reminders, many subjects in our study chose not to continue using select devices.

Usage of the activity tracker significantly and negatively correlated with changes from baseline in SCHFI confidence subscale scores and in SAQ scores. These negative correlations may suggest that subjects who became less confident in their self-care, or who began to experience worse chest pain, chest tightness, or angina over the course of the study, used the activity tracker more. The significant negative correlation between changes in SAQ scores and bathroom scale usage may likewise indicate that those whose chest pain, chest tightness, or angina worsened used the bathroom scale more. Subjects with lower EFs found the bathroom scale more helpful, which may reflect physicians’ frequent recommendation of daily weight monitoring as part of heart failure self-management [41]. Furthermore, the subjects’ usage of and liking for the bathroom scale suggest the potential for health professionals to prevent fluid volume overload in heart failure patients by remotely monitoring weight and applying algorithms that predict hospitalizations [42].

Limitations and future directions

This pilot study had a small sample of 20 patients and was confined to those admitted as inpatients or for observation at a university-based health system. It was also restricted to heart failure patients over the age of 50. More than a third (36.8%) of the subjects were African American, while the remaining (63.2%) were white (Table 2). Consequently, the results may not generalize to the broader population with heart failure.

Surveys were only available in English, so English literacy was required; future studies should include translated versions in other languages. Though subjects had no incentive to give false answers, it is possible that an individual became fatigued and answered questions without giving them real consideration. Having study coordinators personally administer the surveys should reduce this risk, but it remains a limitation. While some questionnaires asked health-related questions over a specified time frame (e.g., the previous 2 or 4 weeks), most concerned the subjects’ status at the time each survey was administered. As a result, daily variability in PROs was not captured given the intervals between surveys.

The number of medications each patient was taking, and specific information about those medications, were not considered, as only one smart pill bottle was issued to each patient. Data collection may also have been affected by incorrect device usage: for instance, subjects may not have worn the activity tracker properly, synced it regularly, or stood on the bathroom scale long enough for the weight measurement to transmit.

Conclusions

The low enrollment rate suggests difficulties in recruiting heart failure patients prior to their hospital discharge. Though the continued rise of smartphone ownership among older adults may improve the enrollment rate [43], adjustments to the inclusion criteria should be considered. Relaxing the smartphone ownership requirement by allowing a caretaker to sync the activity tracker with his/her smartphone may accommodate patients who do not own one. Enrollment may also be improved by recruiting patients outside of the hospital (e.g., at an outpatient appointment or by phone).

Considering the subjects’ usage of and feedback regarding the activity tracker and bathroom scale, including those devices in a remote monitoring regimen may be feasible. Since Fitbit and BodyTrace provide data access in real time, patients may be able to use the data and predictive algorithms to monitor their physical health, in addition to health professionals remotely monitoring them. Moreover, given their non-invasive nature, low cost, and wide availability, the activity tracker and bathroom scale may have the potential to become a preferred alternative to more invasive and costly interventions, such as implantable pulmonary artery monitors [44]. On the other hand, pulmonary artery monitors may be preferable for some patients based on the relative predictive accuracy of the two approaches. A larger follow-up study is necessary to demonstrate the predictive value of these remote monitoring devices.

While the activity tracker and bathroom scale were positively received, the smart pill bottle was generally not useful and the follow-up surveys had low completion rates. For populations with complex medication regimens, monitoring medication usage is challenging and likely cannot be accomplished with a single pill bottle. There is a critical need for remote sensing technologies to capture medication adherence information. To improve the completion rates of follow-up surveys, administering abbreviated versions of the surveys more regularly and through an easier interface, such as a smartphone app, should be considered.

Supporting information

S1 Appendix

(DOCX)

S1 Fig. Average changes in patient-reported outcomes 30 days after discharge.

For non-PROMIS questionnaires, a positive change indicates improvement in health status. A positive change also signifies improvement in health status for the following PROMIS questionnaires: Global Physical Health, Global Mental Health, and Physical Function. Conversely, a negative change is indicative of improvement in health status for the following PROMIS questionnaires: Fatigue, Anxiety, Depression, Sleep Disturbance, and Social Isolation.

(TIF)

S2 Fig. Average changes in patient-reported outcomes 90 and 180 days after discharge.

For non-PROMIS questionnaires, a positive change indicates improvement in health status. A positive change also signifies improvement in health status for the following PROMIS questionnaires: Global Physical Health, Global Mental Health, and Physical Function. Conversely, a negative change is indicative of improvement in health status for the following PROMIS questionnaires: Fatigue, Anxiety, Depression, Sleep Disturbance, and Social Isolation.

(TIF)

S1 Dataset

(XLSX)

S2 Dataset

(XLSX)

S3 Dataset

(XLSX)

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

This work was supported by the National Institutes of Health (https://www.nih.gov) and the National Heart, Lung, and Blood Institute (https://www.nhlbi.nih.gov) under grant R56HL135425 awarded to C.W.A., PhD. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Heidenreich PA, Albert NM, Allen LA, Bluemke DA, Butler J, Fonarow GC, et al. Forecasting the impact of heart failure in the United States: a policy statement from the American Heart Association. Circ Heart Fail. 2013;6:606–19. 10.1161/HHF.0b013e318291329a [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Del Gobbo LC, Kalantarian S, Imamura F, Lemaitre R, Siscovick DS, Psaty BM, et al. Contribution of Major Lifestyle Risk Factors for Incident Heart Failure in Older Adults: The Cardiovascular Health Study. JACC Heart Fail. 2015;3(7):520–528. 10.1016/j.jchf.2015.02.009 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Yancy CW, Jessup M, Bozkurt B, Butler J, Casey DE Jr, Drazner MH, et al. 2013 ACCF/AHA guideline for the management of heart failure: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. Circulation. 2013;62(16):e147–239. 10.1016/j.jacc.2013.05.019 [DOI] [PubMed] [Google Scholar]
  • 4.Blecker S, Paul M, Taksler G, Ogedegbe G, Katz S. Heart failure–associated hospitalizations in the United States. J Am Coll Cardiol. 2013;61(12):1259–67. 10.1016/j.jacc.2012.12.038 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Dunlay SM, Shah ND, Shi Q, Morlan B, VanHouten H, Long KH, et al. Lifetime costs of medical care after heart failure diagnosis. Circ Cardiovasc Qual Outcomes. 2011;4(1):68–75. 10.1161/CIRCOUTCOMES.110.957225 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Torio CM, Moore BJ. National Inpatient Hospital Costs: The Most Expensive Conditions by Payer, 2013: Statistical Brief #204. 2016 May. In: Healthcare Cost and Utilization Project (HCUP) Statistical Briefs [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US). Available from: https://www.ncbi.nlm.nih.gov/books/NBK368492/ [PubMed]
  • 7.Fingar K, Washington R. Trends in Hospital Readmissions for Four High-Volume Conditions, 2009–2013: Statistical Brief #196. 2015 Nov. In: Healthcare Cost and Utilization Project (HCUP) Statistical Briefs [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US). Available from: https://www.ncbi.nlm.nih.gov/books/NBK338299/. [PubMed]
  • 8.Dunlay SM, Redfield MM, Weston SA, Therneau TM, Hall Long K, Shah ND, et al. Hospitalizations after heart failure diagnosis: a community perspective. J Am Coll Cardiol. 2009; 54(18):1695–702. 10.1016/j.jacc.2009.08.019 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Keenan PS, Normand ST, Lin Z, Drye EE, Bhat KR, Ross JS, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1(1):29–37. 10.1161/CIRCOUTCOMES.108.802686 [DOI] [PubMed] [Google Scholar]
  • 10.Krumholz HM, Merrill AR, Schone EM, Schreiner GC, Chen J, Bradley EH, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30-day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407–13. 10.1161/CIRCOUTCOMES.109.883256 [DOI] [PubMed] [Google Scholar]
  • 11.Krumholz HM, Parent EM, Tu N, Vaccarino V, Wang Y, Radford MJ, et al. Readmission after hospitalization for congestive heart failure among medicare beneficiaries. Arch Intern Med. 1997;157(1):99–104. [PubMed] [Google Scholar]
  • 12.Desai AS, Stevenson LW. Rehospitalization for heart failure: predict or prevent? Circulation. 2012;126(4):501–6. 10.1161/CIRCULATIONAHA.112.125435 [DOI] [PubMed] [Google Scholar]
  • 13.Windham G, Bennett R, Gottlieb S. Care management interventions for older patients with congestive heart failure. Am J Manag Care. 2003;9(6):447–59. [PubMed] [Google Scholar]
  • 14.Tsuyuki RT, McKelvie RS, Arnold JM, Avezum A Jr, Barretto AC, Carvalho AC, et al. Acute precipitants of congestive heart failure exacerbations. Arch Intern Med. 2001;161(19):2337–42. 10.1001/archinte.161.19.2337 [DOI] [PubMed] [Google Scholar]
  • 15.Fudim M, Hernandez AF, Felker GM. Role of Volume Redistribution in the Congestion of Heart Failure. J Am Heart Assoc. 2017;6(8):e006817 10.1161/JAHA.117.006817 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Hollenberg SM, Warner Stevenson L, Ahmad T, Amin VJ, Bozkurt B, Butler J, et al. 2019 ACC Expert Consensus Decision Pathway on Risk Assessment, Management, and Clinical Trajectory of Patients Hospitalized With Heart Failure: A Report of the American College of Cardiology Solution Set Oversight Committee. J Am Coll Cardiol. 2019;74(15):1966–2011. 10.1016/j.jacc.2019.08.001 . [DOI] [PubMed] [Google Scholar]
  • 17.Fonarow GC, Abraham WT, Albert NM, Stough WG, Gheorghiade M, Greenberg BH, et al. Factors identified as precipitating hospital admissions for heart failure and clinical outcomes: findings from OPTIMIZE-HF. Arch Intern Med. 2008;168(8):847–54. 10.1001/archinte.168.8.847 . [DOI] [PubMed] [Google Scholar]
  • 18.Suh MK, Chen CA, Woodbridge J, Tu MK, Kim JI, Nahapetian A, et al. A remote patient monitoring system for congestive heart failure. J Med Syst. 2011;35(5):1165–79. 10.1007/s10916-011-9733-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Wakefield BJ, Holman JE, Ray A, Scherubel M, Burns TL, Kienzle MG, et al. Outcomes of a home telehealth intervention for patients with heart failure. J Telemed Telecare. 2009;15(1):46–50. 10.1258/jtt.2008.080701 [DOI] [PubMed] [Google Scholar]
  • 20.Zan S, Agboola S, Moore SA, Parks KA, Kvedar JC, Jethwani K. Patient engagement with a mobile web-based telemonitoring system for heart failure self-management: a pilot study. JMIR Mhealth Uhealth. 2015;3(2):e33 10.2196/mhealth.3789 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Morcillo C, Valderas JM, Aguado O, Delás J, Sort D, Pujadas R, et al. [Evaluation of a home-based intervention in heart failure patients. Results of a randomized study]. Rev Esp Cardiol. 2005;58(6):618–25. [PubMed] [Google Scholar]
  • 22.Ware P, Dorai M, Ross HJ, Cafazzo JA, Laporte A, Boodoo C, et al. Patient Adherence to a Mobile Phone-Based Heart Failure Telemonitoring Program: A Longitudinal Mixed-Methods Study. JMIR mHealth uHealth. 2019;7(2):e13259 10.2196/13259 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Kumar S, Nilsen WJ, Abernethy A, Atienza A, Patrick K, Pavel M, et al. Mobile health technology evaluation: the mHealth evidence workshop. Am J Prev Med. 2013;45(2):228–36. 10.1016/j.amepre.2013.03.017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Steinhubl SR, Muse ED, Topol EJ. The emerging field of mobile health. Sci Transl Med. 2015;7(283):283rv3 10.1126/scitranslmed.aaa3487 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Speier W, Dzubur E, Zide M, Shufelt C, Joung S, van Eyk JE, et al. Evaluating utility and compliance in a patient-based eHealth study using continuous-time heart rate and activity trackers. J Am Med Inform Assoc. 2018;25(10):1386–91. 10.1093/jamia/ocy067 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Roque NA, Boot WR. A New Tool for Assessing Mobile Device Proficiency in Older Adults: The Mobile Device Proficiency Questionnaire. J Appl Gerontol. 2018;37(2):131–156. 10.1177/0733464816642582 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Criteria Committee of New York Heart Association. In: Nomenclature and Criteria for Diagnosis of Diseases of the Heart and Great Blood Vessels. Ninth Edition. Dolgin M, editor. Boston: Little, Brown & Co; 1994.
  • 28.Arozullah AM, Yarnold PR, Bennett CL, Soltysik RC, Wolf MS, Ferreira RM, et al. Development and validation of a short-form, rapid estimate of adult literacy in medicine. Med Care. 2007;45(11):1026–33. 10.1097/MLR.0b013e3180616c1b [DOI] [PubMed] [Google Scholar]
  • 29.Rutten LJ, Davis T, Beckjord EB, Blake K, Moser RP, Hesse BW. Picking up the pace: changes in method and frame for the health information national trends survey (2011–2014). J Health Commun. 2012;17(8):979–89. 10.1080/10810730.2012.700998 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Behlouli H, Feldman DE, Ducharme A, Frenette M, Giannetti N, Grondin F, et al. Identifying relative cut-off scores with neural networks for interpretation of the Minnesota Living with Heart Failure questionnaire. Conf Proc IEEE Eng Med Biol Soc. 2009;2009:6242–6. . [DOI] [PubMed]
  • 31.Burke WJ, Roccaforte WH, Wengel SP. The short form of the Geriatric Depression Scale: a comparison with the 30-item form. J Geriatr Psychiatry Neurol. 1991;4(3):173–8. 10.1177/089198879100400310 [DOI] [PubMed] [Google Scholar]
  • 32.Lubben J, Blozik E, Gillmann G, Iliffe S, von Renteln Kruse W, Beck JC, et al. Performance of an abbreviated version of the Lubben social network scale among three European community-dwelling older adult populations. Gerontologist. 2006;46(4):503–13. 10.1093/geront/46.4.503 [DOI] [PubMed] [Google Scholar]
  • 33.Riegel B, Lee CS, Dickson VV, Carlson B. An update on the self-care of heart failure index. J Cardiovasc Nurs. 2009;24(6):485–97. 10.1097/JCN.0b013e3181b4baa0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Spertus JA, Jones P, McDonell M, Fan V, Fihn SD. Health status predicts long-term outcome in outpatients with coronary disease. Circulation. 2002;106(1):43–9. 10.1161/01.cir.0000020688.24874.90 [DOI] [PubMed] [Google Scholar]
  • 35.Green CP, Porter CB, Bresnahan DR, Spertus JA. Development and evaluation of the Kansas City Cardiomyopathy questionnaire: a new health status measure for heart failure. J Am Coll Cardiol. 2000;35(5):1245–55. 10.1016/s0735-1097(00)00531-3 [DOI] [PubMed] [Google Scholar]
  • 36.Broderick JE, DeWitt EM, Rothrock N, Crane PK, Forrest CB. Advances in patient-reported outcomes: the NIH PROMIS(®) measures. EGEMS (Wash DC) 2013;1(1):1015 10.13063/2327-9214.1015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Hays RD, Bjorner JB, Revicki DA, Spritzer KL, Cella D. Development of physical and mental health summary scores from the patient-reported outcomes measurement information system (PROMIS) global items. Qual Life Res. 2009;18(7):873–80. 10.1007/s11136-009-9496-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Downey RG, King C. Missing data in Likert ratings: a comparison of replacement methods. J Gen Psychol. 1998;125(2):175–91. 10.1080/00221309809595542 [DOI] [PubMed] [Google Scholar]
  • 39.Eysenbach G. The law of attrition. J Med Internet Res. 2005;7(1):e11 10.2196/jmir.7.1.e11 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Shaw RJ, Steinberg DM, Bonnet J, Modarai F, George A, Cunningham T, et al. Mobile health devices: will patients actually use them? J Am Med Inform Assoc. 2016;23(3):462‐466. 10.1093/jamia/ocv186 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Evangelista LS, Shinnick MA. What do we know about adherence and self-care? J Cardiovasc Nurs. 2008;23(3):250–7. 10.1097/01.JCN.0000317428.98844.4d [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Allen LA, O'Connor CM. Management of acute decompensated heart failure. CMAJ. 2007;176(6):797‐805. 10.1503/cmaj.051620 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Anderson M, Perrin A. Pew Research Center: Internet & Technology. 2017. [2019-04-26]. Technology use among seniors https://www.pewinternet.org/2017/05/17/technology-use-among-seniors/
  • 44.Sandhu AT, Goldhaber-Fiebert JD, Owens DK, Turakhia MP, Kaiser DW, Heidenreich PA. Cost-Effectiveness of Implantable Pulmonary Artery Pressure Monitoring in Chronic Heart Failure. JACC Heart Fail. 2016;4(5):368–375. 10.1016/j.jchf.2015.12.015 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Robert Ehrman

5 Aug 2020

PONE-D-20-21712

Integrating Remote Monitoring into Heart Failure Patients' Care Regimen: A Pilot Study

PLOS ONE

Dear Dr. Sohn,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Thank you for your submission on this important but understudied topic. There is clearly a need to improve at-home treatment monitoring and compliance in order to reduce HF readmissions. The pilot results presented herein provide a great deal of interesting information in this area, but substantial revisions are required in order to make the results more digestible for readers such that this paper can serve as a starting point for additional study.

In addition to the concerns raised by the reviewers, please consider addressing the following issues as well:

-the lack of an a priori definition of "pragmatic feasibility" is a significant limitation

-the statistical analysis section does not provide enough detail for the reader to fully understand exactly what was done. for example, line 239 mentions "regression analyses", but it is not clear if these are univariate or multivariable. also, what is the definition of "adherence rate"?

-given the numerous comparisons/hypotheses tested, i highly suspect alpha-inflation and a type 1 error. i would suggest correcting for multiple comparisons or simply reporting measures of uncertainty (eg, 95%CIs) rather than p-values, especially given the small sample size and thus low power.

-please include explicit details on the amount of missing data. the reference (35) for the imputation method reports that reliability declines when missingness is >20%--do the instruments with imputed variables meet this requirement?  ref 35 also refers to imputation of Likert scale data--do all of the surveys use these?

-with so many surveys to complete, is there any concern about accuracy for individual responses? what was the schedule on which they were asked to complete them (eg, all on the same day?). were participants compensated?

Please submit your revised manuscript by Sep 19 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Robert Ehrman, MD, MS

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Competing Interests section:

"This work was supported by the National Institutes of Health (https://www.nih.gov) and the National Heart, Lung, and Blood Institute (https://www.nhlbi.nih.gov) under grant R56HL135425 awarded to C.W.A., PhD. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for the opportunity to review this pilot study. I have a few specific observations and questions for the authors to consider:

- Weight gain/loss is a complicated, and potentially problematic, endpoint for assessing efficacy of chronic heart failure care. This is particularly true if the overall goal, as appears to be the case in this study, is to prevent episodes of acute heart failure (AHF) requiring hospitalization. The understanding of the role volume overload plays in AHF has changed dramatically in recent years, and it's now recognized that volume overload (and its surrogate of weight gain) is wholly insufficient to explain many AHF presentations. I would refer the authors to carefully review an article from 2017 by Fudim et al (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5586477/) which serves as an excellent introduction to the new paradigm of understanding Acute on chronic presentations of heart failure. A few particularly pertinent points to consider, however, are listed here: 1. Weight gain is poorly correlated with need for AHF hospitalization in patients with chronic heart failure, and roughly 50% of patients gain an insignificant amount of weight before hospitalization overall. In fact, the sensitivity for weight gain in predicting hospitalization has been estimated as low as 9% (though with specificity as high as 98%). 2. Up to a third of AHF patients are normovolemic or hypovolemic on total body water analysis. 3. In the timeline of central circulation congestion, weight change is a late phenomenon, often occurring days to weeks after the development of increased cardiac filling pressures (see figure 1 in that Fudim article). With all that said, the heavy focus on weight change for the assessment of chronic HF self care is problematic in the current study. While weight change doesn't need to be ignored (after all, as mentioned the specificity's 98%), it is insufficient to be the centerpiece of a HF self-care program. 
I would recommend the authors address this more in their discussion, and also consider other components (see later comments) that would be important for building a self-care program for HF patients that more comprehensively addresses the current understanding of the reasons for AHF episodes among those with chronic HF.

- Medication non-adherence is also frequently overestimated by clinicians as the reason for a particular HF hospitalization (Hollenberg 2019, ACC Expert Consensus Decision Pathway on Risk Assessment, Management, and Clinical Trajectory of Patients Hospitalized With Heart Failure). As a standalone cause of AHF episodes, medication or diet non-adherence represents a minority of cases (see Dr. Fonarow's report on AHF precipitants in OPTIMIZE-HF from 2008 in JAMA Internal Medicine). Most patients have other precipitants or a mix of causes for their hospitalizations (also see the results of the international GREAT registry, Arrigo et al., which recently replicated Dr. Fonarow's work on AHF precipitants in an international AHF population). The authors should comment on this in their discussion and limitations.

- With the above bullet point acknowledged, it is ALSO true that non-adherence to guideline-directed medical therapy (GDMT) for chronic HF is established in numerous studies to portend adverse outcomes including hospitalization. This is, however, a more nuanced issue than the current study examines. First, it is specific medications for specific patients that matter. A patient with HFrEF not taking their beta blocker or ACEi, for example, has a significantly different impact on outcomes than a patient with HFpEF (for whom no significant benefit from these medications has been shown). From what I can tell, the authors did not account for this or comment on this in the manuscript. Second, it is unclear to me how medications were distributed with the smart pill bottles - e.g. did patients get a smart pill bottle for every medication they took? Cardiovascular meds only? HF GDMT meds only? How many medications was each patient taking, and how did that relate to the observed low rate of compliance with the smart pill bottle (i.e. were patients taking >10 meds less likely to use the bottle?)?

- Could the authors comment in their discussion on how a monitoring program such as the one they pilot here could contrast with implantable pulmonary artery monitors, given the latter's recent growth? Pulmonary artery monitors are used in many ways to accomplish the same goal as the program here (to identify changes in a patient's chronic HF that may indicate increasing congestion and decompensation) and have arguably been relatively effective at this. Moreover, they do not require as much patient burden in reporting compliance as the monitoring program here - it would seem that the piloted mobile interventions here ask a lot of the patient. Of course, the tradeoff is that such monitors are invasive, but a discussion of the ways in which non-invasive mobile interventions would (and would not) be an improvement over this alternative strategy is likely warranted in the discussion.

- The authors compiled a staggering number of different surveys into a single repeated measures survey assessment. The number of questionnaires included versus the number of survey items in which a significant difference was found raises questions about signal vs. noise - i.e. did patient scores for the SAQ and the confidence items on the SCHFI (but not the other components of the SCHFI) differ because the intervention's value was specifically reflected in the targets of these survey items, or was the difference in these items (vs. others where no difference was noted) observed simply because such a large number of comparisons were undertaken? This is a key methodological limitation, even in a pilot study, and needs further discussion.

- The authors also should detail more clearly the repeated measures regression methodology used - saying "regression analysis" here is insufficient. Additionally, as far as I can tell, the methods as described would mean that at least 4 separate repeated measures regressions (one for each adherence variable) were performed. If all of the "PRO" measures and patient characteristics were used for each of these regressions, this would mean a staggering number of covariates were included in each model (>15?) despite a sample size of 20. This would mean a high chance of problems such as collinearity and overfitting. Conversely, if each regression included a single PRO measure and all the patient characteristics, this would still be a large number of covariates for the sample size, and would mean that a few dozen regressions were performed. In this latter case, overfitting and/or collinearity are still possible problems (and need commenting on), and additionally the more general problem of a massive number of comparisons becomes even greater.

- Pursuant to the previous point, the analysis and results are overall generally hard to follow, in no small part due to how many analyses are reported or inferred and how little methodological information is given. Granted that pilot research is exploratory by nature, with a sample of 20 patients it is truly difficult to draw helpful conclusions for next steps/future work from the way the methods and results section is currently written. The authors should consider paring things down significantly and focus on just a few relationships which are felt to signal an important basis for future study.

- I am struggling with the authors' conclusions. The primary conclusion seems to be that the monitoring program is feasible. However, I do not think I can draw the same conclusion given that only 20 of 150 patients were able to be enrolled over 1 year, follow-up was only 50% successful, and a major component of the monitoring (the pill bottle) had very low adherence and was poorly received. Additionally, the amalgamation of numerous survey instruments, the choices/rationale for each of which I cannot seem to discern beyond all being generally related to HF, makes it difficult for me to say this is a feasible program. The observation that patients did not do well with the pill bottles, likely due to the complex nature of medication adherence (as the authors do mention), is interesting but again limited by the lack of description of how this program was administered (as detailed above) and the small sample size. Overall, the results as currently presented would generally lead me to draw the opposite conclusion - that this monitoring program struggled with feasibility and was somewhat unwieldy. Could the authors perhaps address this further, specifically adding more justification for the conclusion that the results show feasibility of the program when such a small proportion of screened patients were enrolled and follow-up rates among those enrolled were so low?

- It is noted that the senior author's NHLBI grant R56HL135425 is a larger study with some similar aims, but focused on developing a predictive algorithm from some of the monitoring methods used here. In the current manuscript I cannot tell - was the presently reported work the pilot study for the senior author's R56? If so, please comment on this in the manuscript and consider revising the discussion and conclusions to more specifically assess how the pilot results informed the larger NHLBI study. If not, are the results presented here actually the main results of that R56? If this is the case, I again would question that the current report shows the feasibility of such a mobile health program since the publicly available project description for the R56 says enrollment was planned to be 500 patients. Either way, a discussion of how these pilot results affect future research efforts is needed, as currently there is little such reflection, and meeting a 500 patient target with the 20 patient enrollment seen here raises several questions about feasibility and applicability.

Overall, I applaud the authors for taking on an important general aim (identifying better ways to track outpatient HF care and prevent AHF rehospitalizations). With that said, the small sample, low response rate, and relative disorganization of the results presented here make the report very difficult to interpret. I would like to see, in particular, a more focused discussion on what this pilot study means for the next steps in research, though the small sample size may preclude such a discussion.

To my eyes the results would generally suggest the opposite of the authors' conclusion (that the mobile health program is feasible) but I do struggle with this assessment given the many sources of uncertainty in the text discussed above. Further justification and clarification of the manuscript towards this end would be helpful and, in my view, necessary for publication.

Finally, I would add that if the general conclusion of the study is the converse of how I have interpreted the authors' discussion - i.e. that this screening program was not feasible - this in and of itself would still be an interesting and useful finding possibly worthy of publication (especially in the context of why feasibility may have suffered). However, the discussion seems to focus more on the results as a stepping stone to future research (yet it is unclear what that research would/should/could be).

Reviewer #2: Thank you for allowing me the opportunity to review this interesting manuscript on a critically important, though sorely understudied, area in heart failure research. The paper has some very important insights and observations about a unique monitoring modality for ambulatory heart failure patients post discharge from the hospital. The strengths of the paper include the fairly lengthy period of study for the patients (180 days), the longitudinal design, and the use of a Fitbit by the patients. One of the major weaknesses, in my opinion, is the enormous amount of data on multiple devices and questionnaires that is all packaged together in a single manuscript. The desire to simply get all of this potentially important data into publication most likely led to this decision to package everything together, but to me this seems like 3 distinct interventions that are each worthy of study independently. Below please find some recommendations should you choose to revise the current manuscript:

1. The objective statement talks about examining the "pragmatic feasibility of the approach", but the methods of the paper do not specifically outline the way in which feasibility will be studied or established. Some of the components of feasibility are approached in the design of the study (acceptability, practicality, limited-efficacy testing), but no explicit mention of the components of feasibility is apparent in the design of the study. The other areas of feasibility, such as implementation, adaptation, and expansion, are not a part of the design whatsoever. I would suggest revising the objective statement to more closely align with the research that was conducted. Ref: Am J Prev Med. 2009 May;36(5):452–457. doi:10.1016/j.amepre.2009.02.002.

2. Limitations should emphasize the large number of patients that were eligible for enrollment but were excluded for a variety of reasons (especially the 42 patients that were not enrolled because they did not have a smartphone) as well as the lack of a comparison group. A pure control group would not have been practical, but one interesting approach for a comparison would be to look at the outcomes of the study cohort and compare them to the outcomes of the cohort that was unable to be enrolled because of the lack of a smartphone. Or even to compare them to a more general HF cohort from the institution.

3. I am not an expert on survey methodology but am unclear on the appropriateness of using averages of other questions from a questionnaire to fill in the blanks for unanswered questions. Would recommend having this specific technique examined further.

4. Focusing on the Fitbit data in this manuscript seems to me to be a more focused approach that would remove a lot of the data clutter the paper suffers from because of the multiple devices and myriad analyses that were examined and performed. A revised objective statement could focus more strongly on the Fitbit data.

5. The statements in the Discussion section about the negative correlations between the SCHFI and SAQ scores suggesting that subjects who became less confident in their self-care or began to experience worsening symptoms used the activity tracker more (and the bathroom scale) seem to be a bit of a reach based on the data presented. The intervals of the surveys were sufficiently far apart that it is hard to account for the daily variability in how a chronic HF patient might be feeling based on the data that were collected.

6. Lines 451-4 seem like a bit of a reach given the infrequent intervals of questionnaire completion and the lack of adherence with the questionnaire protocol overall.

7. I agree 100% with the final sentence.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Nicholas Harrison

Reviewer #2: No



PLoS One. 2020 Nov 19;15(11):e0242210. doi: 10.1371/journal.pone.0242210.r002

Author response to Decision Letter 0


5 Oct 2020

Editor: Robert Ehrman, MD, MS

1. The lack of an a priori definition of "pragmatic feasibility" is a significant limitation.

Thank you for the feedback. We understand that we may have caused confusion and created a limitation by not presenting a definition of pragmatic feasibility. There were two main factors we wished to learn more about in our pilot: 1) is it feasible to enroll patients, and 2) once enrolled, how adherent would patients be. We have clarified this in the manuscript and removed mention of “pragmatic feasibility,” which could be subjective by context, varying across different individuals and organizations. Our intention with this paper is to report on the results of our pilot, rather than make a claim that our approach is generally “pragmatically feasible.”

2. The statistical analysis section does not provide enough detail for the reader to fully understand exactly what was done. For example, line 239 mentions "regression analyses", but it is not clear if these are univariate or multivariable. Also, what is the definition of "adherence rate"?

We apologize for the confusion. The analyses we performed were correlation analyses, not regression analyses as stated in our initial submission. In regard to adherence rate, there are two adherence metrics listed for the activity tracker (HR-hour and HR-minute), along with one for each of the bathroom scale and smart pill bottle. HR-hour is the number of hours in the study in which each participant had a heart rate recording divided by the total number of hours in the study. Similarly, HR-minute is the number of minutes in the study in which each participant had a heart rate recording divided by the total number of minutes in the study. For both the bathroom scale and smart pill bottle, adherence rate is the number of days in the study on which each participant had data recorded on the device divided by the total number of days in the study. We have revised the section “Statistical analysis” to clarify these points.
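The adherence-rate definitions above amount to counting the distinct time units (hours, minutes, or days) that contain at least one device recording and dividing by the total number of units in the study window. As an illustration only (the study's actual data pipeline is not described; the function and variable names here are hypothetical), the computation can be sketched in Python:

```python
from datetime import datetime

def adherence_rate(recorded_timestamps, study_start, study_end, unit_seconds):
    """Fraction of study time units (hour, minute, or day) that contain
    at least one device recording."""
    total_units = int((study_end - study_start).total_seconds() // unit_seconds)
    # Each recording is bucketed into the time unit it falls in;
    # duplicates within a unit count only once.
    observed = {
        int((t - study_start).total_seconds() // unit_seconds)
        for t in recorded_timestamps
        if study_start <= t < study_end
    }
    return len(observed) / total_units

# Example: HR-hour adherence over a 2-day window with one reading
# in each hour of the first day only -> 24 of 48 hours = 0.5
start = datetime(2020, 1, 1)
end = datetime(2020, 1, 3)
readings = [datetime(2020, 1, 1, h, 30) for h in range(24)]
rate = adherence_rate(readings, start, end, unit_seconds=3600)
```

The same function covers HR-minute (`unit_seconds=60`) and the daily scale/pill-bottle rates (`unit_seconds=86400`).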

3. Given the numerous comparisons/hypotheses tested, I highly suspect alpha inflation and a type 1 error. I would suggest correcting for multiple comparisons or simply reporting measures of uncertainty (e.g., 95% CIs) rather than p-values, especially given the small sample size and thus low power.

Thank you for the suggestion. In response to this comment, and in consideration of the small sample size, we have removed p-values and replaced them with 95% CIs.
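For context, one standard way to obtain a 95% CI for a Pearson correlation is the Fisher z transformation. The manuscript does not state which method the authors used, so the sketch below is only an illustrative assumption:

```python
import math

def pearson_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a Pearson correlation r from n pairs,
    via the Fisher z transformation (assumes bivariate normality)."""
    z = math.atanh(r)                    # Fisher z of the observed r
    se = 1.0 / math.sqrt(n - 3)          # standard error on the z scale
    lo_z, hi_z = z - z_crit * se, z + z_crit * se
    return math.tanh(lo_z), math.tanh(hi_z)  # back-transform to r scale

# With n = 20 (as in this pilot), intervals are wide: for r = 0.5
# the 95% CI is roughly (0.07, 0.77)
lo, hi = pearson_ci(r=0.5, n=20)
```

The width of such an interval at n = 20 illustrates the reviewer's point about low power.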

4. Please include explicit details on the amount of missing data. The reference (35) for the imputation method reports that reliability declines when missingness is >20%--do the instruments with imputed variables meet this requirement? Ref 35 also refers to imputation of Likert scale data--do all of the surveys use these?

We thank the reviewer for this comment. Only questionnaires that used Likert scales followed this method of imputation, and questionnaires missing >20% of the items were omitted from analysis. This discussion has been added to the section “Statistical analysis.”
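As a concrete reading of this rule (person-mean imputation for Likert items, with questionnaires missing more than 20% of items omitted from analysis), here is a minimal sketch; the function name and return conventions are illustrative, not the authors' actual code:

```python
def impute_likert(responses, max_missing=0.2):
    """Person-mean imputation: fill each missing Likert item (None) with
    the respondent's mean over answered items. Return None when more than
    `max_missing` of the items are missing (questionnaire is omitted)."""
    n_missing = sum(1 for v in responses if v is None)
    if n_missing / len(responses) > max_missing:
        return None  # too much missingness; drop this questionnaire
    answered = [v for v in responses if v is not None]
    mean = sum(answered) / len(answered)
    return [mean if v is None else v for v in responses]

# One missing item out of ten (10%) is imputed with the person mean
filled = impute_likert([4, 5, None, 3, 4, 4, 5, 3, 4, 4])
# Two missing of five (40%) exceeds the threshold -> questionnaire omitted
dropped = impute_likert([4, None, None, 3, 4])
```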

5. With so many surveys to complete, is there any concern about accuracy for individual responses? What was the schedule on which they were asked to complete them (e.g., all on the same day)? Were participants compensated?

Each of the surveys was administered by a study coordinator. Participants were asked to complete all questionnaires on each survey in one sitting (baseline) or call (30-day, 90-day, and 180-day). There was no direct compensation for completing follow-up surveys and no penalty for not taking them; study participants were not paid but were allowed to keep the study devices, which they were given at the beginning of the study after completing the baseline survey in person with a study coordinator.

We do not have a robust method to evaluate the accuracy of responses. Even though subjects did not have an incentive to give false answers, it is possible that an individual became fatigued and answered questions without giving them real consideration. Having study coordinators personally administer these surveys should reduce this risk, but it is still a possibility. This could be considered a limitation of our study.

Reviewer 1: Nicholas Harrison

1. Weight gain/loss is a complicated, and potentially problematic, endpoint for assessing efficacy of chronic heart failure care. This is particularly true if the overall goal, as appears to be the case in this study, is to prevent episodes of acute heart failure (AHF) requiring hospitalization. The understanding of the role volume overload plays in AHF has changed dramatically in recent years, and it's now recognized that volume overload (and its surrogate of weight gain) is wholly insufficient to explain many AHF presentations. I would refer the authors to carefully review an article from 2017 by Fudim et al (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5586477/), which serves as an excellent introduction to the new paradigm of understanding acute-on-chronic presentations of heart failure. A few particularly pertinent points to consider, however, are listed here: 1. Weight gain is poorly correlated with the need for AHF hospitalization in patients with chronic heart failure, and roughly 50% of patients gain an insignificant amount of weight before hospitalization overall. In fact, the sensitivity of weight gain for predicting hospitalization has been estimated as low as 9% (though with specificity as high as 98%). 2. Up to a third of AHF patients are normovolemic or hypovolemic on total body water analysis. 3. In the timeline of central circulation congestion, weight change is a late phenomenon, often occurring days to weeks after the development of increased cardiac filling pressures (see figure 1 in that Fudim article). With all that said, the heavy focus on weight change for the assessment of chronic HF self-care is problematic in the current study. While weight change doesn't need to be ignored (after all, as mentioned, the specificity is 98%), it is insufficient to be the centerpiece of an HF self-care program.
I would recommend the authors address this more in their discussion, and also consider other components (see later comments) that would be important for building a self-care program for HF patients that more comprehensively addresses the current understanding of the reasons for AHF episodes among those with chronic HF.

We agree that monitoring weight alone or making weight change the centerpiece of a heart failure self-care program would not be comprehensive. We did not intend for weight monitoring to be the centerpiece of this remote monitoring protocol, but rather a component of a multimodal system including heart rate, activity, and medication adherence. While weight change may be a poor predictor of rehospitalization on its own, it could be a valuable feature for some patients and may offer signal when used in conjunction with other data sources (which is supported by the high specificity in the study the reviewer mentioned).

Instead of addressing this in the discussion, we think it may be better addressed in the background section, especially since this knowledge influenced our study design and our selection of devices that could track other aspects of self-care (i.e., the activity tracker and smart pill bottle). In response to this comment, we have added a statement that “weight change or medication non-adherence alone, for example, may not correlate with hospitalizations” and cited the article referenced by the reviewer. Furthermore, to clarify our overall goal and the purpose of including a bathroom scale in our study, we have revised the objective section. It now indicates that we intended to demonstrate the potential of different remote monitoring devices that can provide information, which could be combined in a predictive algorithm.

2. Medication non-adherence is also frequently overestimated by clinicians as the reason for a particular HF hospitalization (Hollenberg 2019, ACC Expert Consensus Decision Pathway on Risk Assessment, Management, and Clinical Trajectory of Patients Hospitalized With Heart Failure). As a standalone cause of AHF episodes, medication or diet non-adherence represents a minority of cases (see Dr. Fonarow's report on AHF precipitants in OPTIMIZE-HF from 2008 in JAMA Internal Medicine). Most patients have other precipitants or a mix of causes for their hospitalizations (also see the results of the international GREAT registry, Arrigo et al., which recently replicated Dr. Fonarow's work on AHF precipitants in an international AHF population). The authors should comment on this in their discussion and limitations.

We thank the reviewer for this insight as well and agree that simply tracking medication non-adherence is insufficient to prevent hospitalization. We would like to note that this knowledge was considered while designing the study. Related to the above response, we understand that medication non-adherence may not be a good predictor in isolation, but it could be helpful as a feature in a multimodal remote monitoring system. As the reviewer states, most patients have a mix of causes for hospitalization, so we aimed to combine information from as many sources as possible, which is why we are studying several monitoring devices. In response to this comment, we have cited the article referenced by the reviewer and added this discussion to the sections “Background” and “Objective” under “Introduction,” since we think it may be better addressed in those sections.

3. With the above bullet point acknowledged, it is ALSO true that non-adherence to guideline-directed medical therapy (GDMT) for chronic HF is established in numerous studies to portend adverse outcomes including hospitalization. This is, however, a more nuanced issue than the current study examines. First, it is specific medications for specific patients that matter. A patient with HFrEF not taking their beta blocker or ACEi, for example, has a significantly different impact on outcomes than a patient with HFpEF (for whom no significant benefit from these medications has been shown). From what I can tell, the authors did not account for this or comment on this in the manuscript. Second, it is unclear to me how medications were distributed with the smart pill bottles - e.g. did patients get a smart pill bottle for every medication they took? Cardiovascular meds only? HF GDMT meds only? How many medications was each patient taking, and how did that relate to the observed low rate of compliance with the smart pill bottle (i.e. were patients taking >10 meds less likely to use the bottle?)?

Thank you for the feedback. We realize that not accounting for the specific medications patients were taking throughout their participation in the study is a limitation. The goal here was to simplify things as much as possible to see if patients would use a smart pill bottle in the simplest case (one bottle, regardless of medication type). If we were able to measure adherence with this method, it could be further stratified by number of medications or medication type. However, usage of this device was low, which prevented further analysis. In response to this comment, we have revised the section “Limitations and future directions” to indicate that we did not account for the number of medications or specific medications each patient was taking. By addressing this in the limitations, we think it will serve as a bridge to the conclusions section, where we suggest, “for populations with complex medication regimens, monitoring medication usage is challenging and likely cannot be accomplished with a single pill bottle.”

4. Could the authors comment in their discussion on how a monitoring program such as the one they pilot here could contrast with implantable pulmonary artery monitors, given the latter's recent growth? Pulmonary artery monitors are used in many ways to accomplish the same goal as the program here (to identify changes in a patient's chronic HF that may indicate increasing congestion and decompensation) and have arguably been relatively effective at this. Moreover, they do not require as much patient burden in reporting compliance as the monitoring program here - it would seem that the piloted mobile interventions here ask a lot of the patient. Of course, the tradeoff is that such monitors are invasive, but a discussion of the ways in which non-invasive mobile interventions would (and would not) be an improvement over this alternative strategy is likely warranted in the discussion.

Thank you for this suggestion. The motivation for this remote monitoring regimen is that it is non-invasive, less costly, and more widely available than implantable pulmonary artery monitors. Additionally, this approach could promote self-care by allowing heart failure patients to monitor their own status, in addition to health professionals remotely monitoring them. On the other hand, pulmonary artery monitors may be preferable for some patients based on the relative predictive accuracy of the two approaches. A larger follow-up study is necessary to demonstrate the predictive value of these remote monitoring devices. This discussion has been added to the conclusions section.

5. The authors compiled a staggering number of different surveys into a single repeated measures survey assessment. The number of questionnaires included versus the number of survey items in which a significant difference was found raises questions about signal vs. noise - i.e. did patient scores for the SAQ and the confidence items on the SCHFI (but not the other components of the SCHFI) differ because the intervention's value was specifically reflected in the targets of these survey items, or was the difference in these items (vs. others where no difference was noted) observed simply because such a large number of comparisons were undertaken? This is a key methodological limitation, even in a pilot study, and needs further discussion.

We thank the reviewer for this comment and agree that the statistical analysis of the survey responses was inadequate. While many surveys were administered, many of these questionnaires had responses that were highly correlated. In our initial submission, we attempted to give examples that were representative of the trends we saw, but we realize that too many were included, and their presentation was poorly organized and confusing. We have simplified the presentation of the survey results to focus on the most important findings, which hopefully makes things clearer. As mentioned in our response to the editor, we have also removed p-values and instead used confidence intervals because they are a more appropriate analysis of these results given the small sample size and large number of survey questions administered.

6. The authors also should detail more clearly the repeated measures regression methodology used - saying "regression analysis" here is insufficient. Additionally, as far as I can tell, the methods as described would mean that at least 4 separate repeated measures regressions (one for each adherence variable) were performed. If all of the "PRO" measures and patient characteristics were used for each of these regressions, this would mean a staggering number of covariates were included in each model (>15?) despite a sample size of 20. This would mean a high chance of problems such as collinearity and overfitting. Conversely, if each regression included a single PRO measure and all the patient characteristics, this would still be a large number of covariates for the sample size, and would mean that a few dozen regressions were performed. In this latter case, overfitting and/or collinearity are still possible problems (and need commenting on), and additionally the more general problem of a massive number of comparisons becomes even greater.

We apologize for the confusion. The analyses we performed were correlation analyses, not regression analyses. We have revised the section “Statistical analysis” to clarify this. We also agree that multivariate analyses would lead to a large number of covariates. Due to this fact and the small sample size, we did not perform multivariate regression analyses. We think the results of correlation analyses still offer insight that may be useful for future study.

7. Pursuant to the previous point, the analysis and results are overall generally hard to follow, in no small part due to how many analyses are reported or inferred and how little methodological information is given. Granted that pilot research is exploratory by nature, with a sample of 20 patients it is truly difficult to draw helpful conclusions for next steps/future work from the way the methods and results section is currently written. The authors should consider paring things down significantly and focus on just a few relationships which are felt to signal an important basis for future study.

Thank you for the feedback. In response to this comment, we have considerably pared down the results to focus on a few relationships that we think are important for future study. We also reorganized the results into five sections: Demographics, Access to technology, Patient-reported outcomes, Remote patient monitoring, and Correlation.

8. I am struggling with the authors' conclusions. The primary conclusion seems to be that the monitoring program is feasible. However, I do not think I can draw the same conclusion given that only 20 of 150 patients were able to be enrolled over 1 year, follow-up was only 50% successful, and a major component of the monitoring (the pill bottle) had very low adherence and was poorly received. Additionally, the amalgamation of numerous survey instruments, the choices/rationale for each of which I cannot seem to discern beyond all being generally related to HF, makes it difficult for me to say this is a feasible program. The observation that patients did not do well with the pill bottles, likely due to the complex nature of medication adherence (as the authors do mention), is interesting but again limited by the lack of description of how this program was administered (as detailed above) and the small sample size. Overall, the results as currently presented would generally lead me to draw the opposite conclusion - that this monitoring program struggled with feasibility and was somewhat unwieldy. Could the authors perhaps address this further, specifically adding more justification for the conclusion that the results show feasibility of the program when such a small proportion of screened patients were enrolled and follow-up rates among those enrolled were so low?

Thank you for the feedback. We understand it is reasonable to think that the remote monitoring regimen is potentially not feasible from the points that the reviewer mentions above. We think the low enrollment rate is in large part due to the patients being discharged before they could be approached (15.3%) and the requirements of smartphone ownership and internet access, since at least 28.0% of the potential subjects did not own a smartphone. We realize these are limitations but believe future enrollment may improve as smartphone ownership is increasing among older adults [1]. Additionally, recruiting patients outside of the hospital (e.g., at outpatient appointments or by phone) and relaxing these requirements may be possibilities for future studies. The requirement that the patient owns a smartphone may be removed by allowing one's caretaker to sync the Fitbit with his/her smartphone. Finally, we believe that there is no "one-size-fits-all" intervention for the complex spectrum of heart failure patients, and that the proposed regimen is possibly a good option for some patients, but certainly not all. This discussion has been added to the conclusions section.

In our initial submission, we had a general statement that the remote monitoring is feasible as the reviewer noted. We have revised this statement in the conclusion section to say that usage of two of the devices may be feasible (activity tracker and bathroom scale), while the other two are not in their current form (smart pill bottle and follow-up surveys). We elaborated on our discussion of the smart pill bottle’s limitation for populations with complex medication regimens in the limitations and conclusions sections. For the follow-up surveys, administering abbreviated versions of the surveys more regularly and through an easier interface, such as a smartphone app, may improve their completion rate. We have added this discussion to the section “Limitations and future directions.”

10. It is noted that the senior author's NHLBI grant R56HL135425 is a larger study with some similar aims, but focused on developing a predictive algorithm from some of the monitoring methods used here. In the current manuscript I cannot tell - was the presently reported work the pilot study for the senior author's R56? If so, please comment on this in the manuscript and consider revising the discussion and conclusions to more specifically assess how the pilot results informed the larger NHLBI study. If not, are the results presented here actually the main results of that R56? If this is the case, I again would question that the current report shows the feasibility of such a mobile health program since the publicly available project description for the R56 says enrollment was planned to be 500 patients. Either way, a discussion of how these pilot results affect future research efforts is needed, as currently there is little such reflection, and meeting a 500 patient target with the 20 patient enrollment seen here raises several questions about feasibility and applicability.

The R56 supported the pilot study we present in the manuscript and was a bridge to our R01 award that will recruit 500 patients. NIH RePORTER likely lists the aims of the R01 for the R56 given one feature of the R56 mechanism is to allow an investigator to further develop a research plan by acquiring pilot results. The overall goal of the R56 pilot study was to determine the feasibility of conducting a study that uses device data to predict hospital readmissions by demonstrating that subjects would use the devices and that the data could be applied in that way. This discussion has been added to the objective section. Given the study objective, we think usage of the activity tracker and bathroom scale may be feasible, whereas the smart pill bottle and follow-up surveys are not in their current forms. In response to this comment, as well as the reviewer’s ninth comment above, we have revised the conclusions section to explain that we think usage of the two aforementioned devices may be feasible and the limitations and conclusions section to describe the issues and future directions of the smart pill bottle and follow-up surveys.

11. Overall, I applaud the authors for taking on an important general aim (identifying better ways to track outpatient HF care and prevent AHF rehospitalizations). With that said, the small sample, low response rate, and relative disorganization of the results presented here make the report very difficult to interpret. I would like to see, in particular, a more focused discussion on what this pilot study means for the next steps in research. With that said, the small sample size may preclude such a discussion.

To my eyes the results would generally suggest the opposite of the authors' conclusion (that the mobile health program is feasible) but I do struggle with this assessment given the many sources of uncertainty in the text discussed above. Further justification and clarification of the manuscript towards this end would be helpful and, in my view, necessary for publication.

Finally, I would add that if the general conclusion of the study is the converse of how I have interpreted the authors' discussion - i.e. that this screening program was not feasible - this in and of itself would still be an interesting and useful finding possibly worthy of publication (especially in the context of why feasibility may have suffered). However, the discussion seems to focus more on the results as a stepping stone to future research (yet are unclear as to what that research would/should/could be).

Thank you for the feedback. We understand the reviewer’s concerns about the small sample and low response rate. However, despite the small sample and low response rate, we believe the results offer important insight into which aspects of the remote monitoring regimen (activity tracker and bathroom scale) may be feasible and which aspects (smart pill bottle and follow-up surveys) are not in their current forms. We revised the limitations and conclusions sections to specifically mention the feasibility of the activity tracker and bathroom scale to be used in future studies, in addition to the shortcomings of the smart pill bottle and follow-up surveys. We think that revising these sections to discuss each aspect of the remote monitoring regimen individually rather than as a whole will clarify and justify our conclusions about their feasibility and future directions. We also have revised the presentation of the survey results to focus on the most important findings, hopefully making the results easier to follow.

Reviewer 2

1. The objective statement talks about examining the "pragmatic feasibility of the approach", but the methods of the paper do not specifically outline the way in which feasibility will be studied or established. Some of the components of feasibility are approached in the design of the study (acceptability, practicality, limited-efficacy testing) but no explicit mention of the components of feasibility is apparent in the design of the study. The other areas of feasibility such as implementation, adaptation and expansion are not a part of the design whatsoever. I would suggest revising the objective statement to more closely align with the research that was conducted. Ref: Am J Prev Med. 2009 May;36(5):452-457. doi:10.1016/j.amepre.2009.02.002.

Thank you for the suggestion. We sought to investigate two main factors in our pilot: 1) feasibility of enrolling patients, and 2) their adherence to a remote monitoring regimen. We have clarified this in the manuscript and removed mention of “pragmatic feasibility,” which is a somewhat subjective notion that will vary across individuals and organizations. Our intention with this paper is to report on the results of our pilot, rather than make a claim that our approach is generally “pragmatically feasible.”

2. Limitations should emphasize the large number of patients that were eligible for enrollment but were excluded for a variety of reasons (especially the 42 patients that were not enrolled because they did not have a smartphone) as well as the lack of a comparison group. A pure control group would not have been practical, but one interesting approach for a comparison would be to look at the outcomes of the study cohort and compare them to the outcomes of the cohort that was unable to be enrolled because of a lack of a smartphone. Or even to compare them to a more general HF cohort from the institution.

We thank the reviewer for the suggestion and agree that the limitations should mention the large number of patients who were excluded for various reasons, as well as those who were discharged or declined to participate. This discussion has been added to the section “Limitations and future directions.” In regard to the comment about a control group for comparison, we agree that it would be interesting but believe that our sample size is inadequate to draw significant conclusions. However, it is something we may explore in a larger future study, since the subjects’ usage of and feedback regarding the activity tracker and bathroom scale were promising.

3. I am not an expert on survey methodology but am unclear on the appropriateness of using averages of other questions from a questionnaire to fill in the blanks for unanswered questions. Would recommend having this specific technique examined further.

Thank you for the feedback. We have revised the statistical analysis section to explain that only questionnaires that used Likert scales followed this method of imputation and that questionnaires missing >20% of the items were not subject to this method, since reliability declines when missingness is >20% [2].
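The person-mean imputation rule described here can be sketched as follows. This is a minimal illustration, not the authors' actual code: the function name, NaN encoding of missing items, and return convention are assumptions for the example; only the rule itself (replace missing Likert items with the respondent's mean of answered items, and exclude questionnaires missing >20% of items) comes from the text.

```python
import numpy as np

def impute_likert(responses, max_missing=0.2):
    """Person-mean imputation for one respondent's Likert-scale questionnaire.

    responses: sequence of item scores, with missing answers coded as NaN.
    Returns the imputed array, or None when more than `max_missing` of the
    items are missing, since reliability declines beyond that point.
    """
    responses = np.asarray(responses, dtype=float)
    missing = np.isnan(responses)
    if missing.sum() / responses.size > max_missing:
        return None  # too much missingness; exclude this questionnaire
    # Replace each missing item with the mean of the items that were answered.
    return np.where(missing, np.nanmean(responses), responses)

# A 10-item survey with one unanswered item (10% missing): imputed.
scores = [4, 5, np.nan, 3, 4, 4, 5, 3, 4, 4]
imputed = impute_likert(scores)
```

In this example the missing third item is filled with the mean of the nine answered items, while a questionnaire missing, say, 3 of 10 items (30%) would be returned as `None` and excluded.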

4. Focusing on the Fitbit data in this manuscript seems to me to be a more focused approach that would remove a lot of the data clutter the paper suffers from because of the multiple devices and myriad analyses that were examined and performed. A revised objective statement could focus more strongly on the Fitbit data.

Thank you for the suggestion. We apologize for the confusion resulting from the high number of comparisons and analyses included in the paper. We have considerably pared down the results section to focus on a few relationships that we think are important for future study. We also reorganized the results into five sections: Demographics, Access to technology, Patient-reported outcomes, Remote patient monitoring, and Correlations. Because the objective of this manuscript was to suggest the feasibility of the remote monitoring regimen by detailing the preliminary results, we decided not to focus more on one device over the others. Instead, we have restructured the results to more clearly demonstrate the value of each of the modalities used. We hope this addresses the reviewer’s concern.

5. The statements in the Discussion section about the negative correlations between the SCHFI and SAW scores suggesting that subjects who became less confident in their self-care or began to experience worsening symptoms used the activity tracker (and the bathroom scale) more seem to be a bit of a reach based on the data presented. The intervals of the surveys were sufficiently far apart that it is hard to account for the daily variability in how a chronic HF patient might be feeling based on the data that was collected.

We thank the reviewer for the feedback. We understand that there may be daily variability in how heart failure affects each individual, and the wording in our initial submission was stronger than we intended it to be. Instead of suggesting a general trend, we have revised the statements to restrict them to our observations, because the sample size is not sufficient to make conclusions about the general population. Additionally, we have revised the section “Limitations and future directions” to discuss the aforementioned limitation of the questionnaires.

6. Lines 451-4 seem like a bit of a reach given the infrequent intervals of questionnaire completion and lack of adherence to the questionnaire protocol overall.

Thank you for the feedback. We understand that there are limitations from the survey frequency and completion rates. In response to this comment, as well as the one above, we have revised the limitations section and those lines to stress that further research is necessary to study this observation.

References

[1] Anderson M, Perrin A. Technology use among seniors. Pew Research Center: Internet & Technology. 2017 May 17 [cited 2019 Apr 26]. https://www.pewinternet.org/2017/05/17/technology-use-among-seniors/

[2] Downey RG, King C. Missing data in Likert ratings: a comparison of replacement methods. J Gen Psychol. 1998;125(2):175–91.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Robert Ehrman

29 Oct 2020

Integrating Remote Monitoring into Heart Failure Patients' Care Regimen: A Pilot Study

PONE-D-20-21712R1

Dear Dr. Sohn,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Robert Ehrman, MD, MS

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: Thank you for the opportunity to review the manuscript. All of the issues that I raised with the original version of the manuscript have been adequately addressed.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Mark Favot

Acceptance letter

Robert Ehrman

10 Nov 2020

PONE-D-20-21712R1

Integrating remote monitoring into heart failure patients’ care regimen: A pilot study

Dear Dr. Sohn:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Robert Ehrman

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix

    (DOCX)

    S1 Fig. Average changes in patient-reported outcomes 30 days after discharge.

    For non-PROMIS questionnaires, a positive change indicates improvement in health status. A positive change also signifies improvement in health status for the following PROMIS questionnaires: Global Physical Health, Global Mental Health, and Physical Function. Conversely, a negative change is indicative of improvement in health status for the following PROMIS questionnaires: Fatigue, Anxiety, Depression, Sleep Disturbance, and Social Isolation.

    (TIF)

    S2 Fig. Average changes in patient-reported outcomes 90 and 180 days after discharge.

    For non-PROMIS questionnaires, a positive change indicates improvement in health status. A positive change also signifies improvement in health status for the following PROMIS questionnaires: Global Physical Health, Global Mental Health, and Physical Function. Conversely, a negative change is indicative of improvement in health status for the following PROMIS questionnaires: Fatigue, Anxiety, Depression, Sleep Disturbance, and Social Isolation.

    (TIF)

    S1 Dataset

    (XLSX)

    S2 Dataset

    (XLSX)

    S3 Dataset

    (XLSX)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.
