Journal of the American Medical Informatics Association: JAMIA
. 2021 Jan 19;28(4):704–712. doi: 10.1093/jamia/ocaa321

Evaluation of electronic health record-integrated digital health tools to engage hospitalized patients in discharge preparation

Anuj K Dalal 1,2, Nicholas Piniella 1, Theresa E Fuller 1, Denise Pong 1, Michael Pardo 1, Nathaniel Bessa 1, Catherine Yoon 1, Stuart Lipsitz 1,2, Jeffrey L Schnipper 1,2
PMCID: PMC7973476  PMID: 33463681

Abstract

Objective

To evaluate the effect of electronic health record (EHR)-integrated digital health tools comprised of a checklist and video on transitions-of-care outcomes for patients preparing for discharge.

Materials and Methods

English-speaking, general medicine patients (>18 years) hospitalized at least 24 hours at an academic medical center in Boston, MA were enrolled before and after implementation. A structured checklist and video were administered on a mobile device via a patient portal or web-based survey at least 24 hours prior to anticipated discharge. Checklist responses were available for clinicians to review in real time via an EHR-integrated safety dashboard. The primary outcome was patient activation at discharge, assessed by the patient activation measure (PAM)-13. Secondary outcomes included postdischarge patient activation; hospital operational metrics; healthcare resource utilization, assessed by 30-day follow-up calls and administrative data; and change in patient activation from discharge to 30 days postdischarge.

Results

Of 673 patients approached, 484 (71.9%) enrolled. The proportion of activated patients (PAM level 3 or 4) at discharge was nonsignificantly higher for the 234 postimplementation compared with the 245 preimplementation participants (59.8% vs 56.7%, adjusted OR 1.23 [0.38, 3.96], P = .73). Postimplementation participants reported a mean (SD) of 3.75 (3.02) concerns via the checklist. Mean length of stay was significantly higher for postimplementation compared with preimplementation participants (10.13 vs 6.21 days, P < .01). While there was no effect on postdischarge outcomes, there was a nonsignificant decrease in change in patient activation within participants from pre- to postimplementation (adjusted difference-in-difference of −16.1% (9.6), P = .09).

Conclusions

EHR-integrated digital health tools to prepare patients for discharge did not significantly increase patient activation and were associated with a longer length of stay. While issues uncovered by the checklist may have encouraged patients to inquire about their discharge preparedness, other factors associated with patient activation and length of stay may explain our observations. We offer insights for using PAM-13 in the context of real-world health-IT implementations.

Trial Registration

NIH US National Library of Medicine, NCT03116074, clinicaltrials.gov

Keywords: digital health; discharge preparation; patient activation; safety; checklists

INTRODUCTION

Approximately 19%–28% of hospitalized patients experience preventable adverse events after discharge due to suboptimal monitoring of conditions, medication errors or nonadherence, and failure to execute the recovery plan.1–4 Despite the 35 million discharges from US hospitals each year,5 most attempts at standardizing the discharge process have focused on clinicians.6–8 Patient-centric strategies that utilize checklists to assess patients’ perspective of their discharge readiness exist, but few have been integrated into practice.9–11 While use of digital health apps represents one promising approach to engage patients, uptake has been limited due to access and usability issues, unclear expectations for use, lack of evidence of efficacy, and suboptimal integration with the electronic health record (EHR).12–17 With the onset of the COVID-19 pandemic, use of digital health tools to support telehealth has risen rapidly and is expected to be sustained given the need to monitor patients remotely, minimize exposure risks, and facilitate virtual care.18,19 Thus, understanding the effectiveness of EHR-integrated digital health tools, as well as potential pitfalls in assessing their impact, is crucial to drive improvement in quality and safety during transitions.

Patient activation, defined as having the knowledge, skills, and confidence to self-manage, can be effectively measured by the validated patient activation measure (PAM)-13 instrument.20,21 Given its association with key outcomes, patient activation is increasingly being used as a standardized, intermediary measure to assess the effect of patient-facing digital health interventions, including those deployed during hospitalization.20,22–24 When integrated into clinical workflow, EHR-integrated digital health tools that facilitate self-identification of safety concerns via checklists have potential to “activate” patients (ie, improve their knowledge, skills, and confidence) preparing for discharge while simultaneously minimizing errors in the discharge process.9,11,16,24–28 For example, Greysen et al. demonstrated that a tablet-based intervention led to increased review of educational materials, medications, follow-up appointments, and other health-related tasks compared to usual care.29,30 Recently, we demonstrated significant improvement in patient activation associated with an acute care patient portal.31 In theory, higher patient activation at discharge should lead to increased ability to self-manage, thereby improving postdischarge outcomes;24 and higher clinician awareness of patients’ discharge preparedness can lead to changes in, and communication of, the recovery plan.32 However, studies that have evaluated the effect of portals on patient activation at discharge have demonstrated inconsistent results.33,34 O’Leary et al demonstrated no effect on patient activation.33 In a randomized controlled trial, Masterson et al was not able to demonstrate significant improvement in patient activation from admission to discharge attributed to their patient portal intervention.34 Neither study included tools to self-assess discharge preparedness using a structured checklist approach.

As part of an Agency for Healthcare Research and Quality (AHRQ) funded project, we implemented an interactive patient-centered discharge toolkit (PDTK), a suite of EHR-integrated tools that includes a structured checklist and video to coach patients in self-assessing discharge preparedness.32 Patients reported approximately 4 concerns, most commonly about medications and follow-up. Unlike previous studies, our intervention enabled clinicians to view these patient-reported concerns in real time as actionable insights via an EHR-integrated safety dashboard upon submission of the checklist from a hospital-issued patient portal or personal web-enabled device.32 In this manuscript, we report the impact of this intervention on patient activation at discharge and 30 days after discharge, hospital operational metrics, healthcare utilization, and change in patient activation during the transition period.

MATERIALS AND METHODS

Study design, setting, and participants

This prepost study was conducted at Brigham and Women’s Hospital (BWH), an academic medical center affiliated with Mass General Brigham (MGB), an integrated healthcare system in Boston, MA. All adult patients (>18 years) admitted to 1 of 3 general medicine units for at least 24 hours during the preimplementation (1/2017–10/2017) and postimplementation (1/2018–7/2018) periods were eligible. English-speaking patients who demonstrated decision-making capacity (determined by a clinician) or had a legally designated proxy (who spoke English and was available to participate) were eligible.

The PDTK study was conducted independently but in parallel to the patient safety learning lab (PSLL) study reported elsewhere.31,35 The PSLL study provided the EHR-integrated technical infrastructure (Figure 1: acute care patient portal, safety dashboard) which we enhanced (see below) to facilitate discharge preparation during hospitalization.32 While eligible patients were screened independently of the PSLL study, they included patients concurrently enrolled in the patient portal as part of the PSLL study. All clinician participants were trained to access the safety dashboard as part of the PSLL study. Thus, while patient and clinician participants (Table 1) were exposed to the patient portal and safety dashboard, respectively, throughout the PDTK study period, the discharge preparation tools were only available to patient and clinician participants during the postimplementation period of the PDTK study. Furthermore, while the focus of the PSLL study was to evaluate the impact of a suite of digital health tools on preventable harms in the hospital (eg, rates of falls),35 the focus of the PDTK study was to evaluate the effect of enhancements to the PSLL infrastructure on activating patients preparing for discharge, thereby improving safety during transitions from the hospital. The PSLL and PDTK studies were both approved by our institutional review board, the MGB Human Research Committee.

Figure 1. Digital health tools to engage patients preparing for discharge.

Table 1.

Timeline, study period, and participants

Timeline (year): 2016 | 2017 | 2018
Study period
  PSLL: Preimplementation | Postimplementation | Postimplementation
  PDTK: N/A | Preimplementation (a) | Postimplementation
Participants
  Patients: No tools | Patient portal | Patient portal + discharge module (b); discharge mobile app prototype (c)
  Clinicians: No tools | EHR-integrated safety dashboard | EHR-integrated safety dashboard + discharge column

Abbreviations: PDTK = patient-centered discharge toolkit; PSLL = patient safety learning laboratory.

(a) During the PDTK preimplementation period, patient and clinician participants were exposed to the patient portal and safety dashboard, respectively, but did not have access to discharge preparation tools.

(b) Patient portal discharge module = discharge preparation checklist and video.

(c) Discharge mobile app prototype = web-based REDCap survey configured as a discharge preparation checklist and video.

Intervention

The individual components of the PDTK, including the discharge preparation checklist, have been previously reported and met the data privacy and security requirements mandated by our institution.32 The PDTK was comprised of a 16-item checklist and companion video (Figure 1, left) which were incorporated into a discharge module in the patient portal and available via a web-based REDCap (Research Electronic Data Capture, Nashville, TN) survey—used as a functional prototype of a mobile app for patients—and a discharge column added to the EHR-integrated safety dashboard (Figure 1, center) for clinicians. Thus, the checklist could be completed electronically via the patient portal or the mobile app. Once submitted, clinicians could view the status of each checklist item on the EHR-integrated safety dashboard and take appropriate action for unsatisfied items (eg, arrange follow-up, address medication concerns, etc.).

In contrast to the current workflow in which the nurse reviews discharge instructions with the patient at the time of discharge, the intent of this structured, digital health intervention was to encourage patients to self-assess discharge preparedness and report concerns at least 24 hours prior to their expected discharge date (EDD) such that clinicians could proactively address any concerns in context of other relevant EHR data (eg, discharge destination, medical and nonmedical discharge barriers, etc.) viewable from the safety dashboard. Of note, while the dashboard was being implemented across all general medicine units as part of the PSLL study, clinicians did not receive notifications when checklist responses became available; rather they were required to access the dashboard to view this information. Also, while the category of checklist responses was viewable from the dashboard display, the full survey question was only available in a “hover-over” (Figure 1, center).

Patient recruitment and enrollment

Both pre- and postimplementation patient subjects provided informed consent to participate in data collection activities, including the 30-day postdischarge follow-up call. At the time of discharge, a trained research assistant (RA) identified and randomly approached patients on study units with a plan to be discharged that day based on the EDD entered in the EHR and by confirming with the unit clerk. During the postimplementation period, RAs identified and randomly approached patients on study units who had been coached to complete the discharge checklist via the patient portal or REDCap survey on a mobile device within the prior 24–48 hours.32 The RA checked with the bedside nurse to determine whether the patient was appropriate (not considered a behavioral risk) to approach. For patients who lacked capacity (not alert and oriented) or were non-English speaking, the RA approached the healthcare proxy to participate on their behalf.25

Measures and data collection

Enrolled patients (or caregivers) were asked to complete the PAM-13 or CAM-13 (Insignia Health, Inc.) survey at discharge, respectively.21 We elected to use the PAM-13 as it had been determined to be a reliable and valid measure for use in the hospital setting and had been used in prior rigorous evaluations of digital health interventions.22,33,34 Participants were also asked to rate 2 statements about discharge safety on a 5-point Likert scale: 1) “I felt prepared by my clinical team to be safely discharged from the hospital”; 2) “I am confident that my clinical team communicated with each other about my discharge plan.” For postimplementation participants, we tracked the number of checklist items to which a “no” or “unsure” response was submitted as well as the number of days from checklist submission to discharge.

Enrolled patients (or caregivers) were called approximately 30 days after discharge during which they were asked to retake the PAM-13 or CAM-13 and complete a survey about utilization of healthcare services (eg, ambulatory and emergency department visits, readmissions) following discharge. We used administrative sources to obtain demographic information and utilization of healthcare resources within the MGB network during the 30-day period after discharge. We used patient self-report via the 30-day follow-up call for utilization outside the MGB network.

Primary and secondary outcomes

Our primary outcome was patient activation at discharge, measured as the proportion of patients with PAM scores greater than 55 (Level 3 or 4) and mean PAM scores. Secondary outcomes included patient activation at 30 days after discharge; postdischarge healthcare resource utilization during the 30 days, measured as a dichotomous variable (presence of 1 or more emergency room visits and/or hospital readmissions); EDD accuracy (percent of final EDD entries equal to the actual discharge date), discharges before noon and discharge destination (dichotomous variables); and length of stay (continuous variable). We assessed patients’ perceptions of discharge safety as the proportion of patients who agreed or strongly agreed with each statement. Finally, we measured the change in the proportion of activated patients (PAM Level 3 or 4) and mean PAM scores from discharge to 30 days postdischarge.
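The dichotomization of the primary outcome can be sketched as a small helper. This is an illustrative sketch based only on the cutoff stated above (PAM score greater than 55 corresponding to Level 3 or 4); it is not the proprietary Insignia PAM-13 scoring algorithm, and the example scores are hypothetical.

```python
# Sketch of the primary-outcome definition: a PAM score above 55 is
# treated as "activated" (Level 3 or 4), per the cutoff stated in the
# text. This is NOT the proprietary PAM-13 scoring code.
def is_activated(pam_score: float) -> bool:
    """Return True if the PAM score falls in Level 3 or 4 (> 55)."""
    return pam_score > 55.0

def proportion_activated(scores) -> float:
    """Proportion of patients classified as activated."""
    return sum(is_activated(s) for s in scores) / len(scores)

# Hypothetical example scores for illustration only
example_scores = [63.3, 48.0, 66.4, 55.0]
prop = proportion_activated(example_scores)
```

A score of exactly 55 falls in Level 2 under this strict-inequality reading of the text.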

Power and sample size

We estimated a sample size of 416 based on 696 patients admitted to general medicine units during a 12-month study period and a 60% participation rate. We assumed an intra-class correlation coefficient (ρ) of 0.001 for patients within each unit (6 units with an average cluster size of 116 patients per unit); thus, our effective sample size was reduced to 372 (186 pre and 186 post). Finally, we assumed that 72% of patients would be classified as PAM Level 3 or 4 based on the literature.24 Using generalized estimating equations (GEE) to account for clustering of patients within unit, we estimated 80% power to detect a 12% absolute difference in the primary outcome from 72% to 84% with an alpha of 0.05.
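The reduction from 416 enrolled to an effective sample of roughly 372 follows the standard design-effect formula for clustered data, DEFF = 1 + (m − 1)ρ. A minimal sketch using the assumptions stated above (ρ = 0.001, average cluster size 116):

```python
# Effective sample size under clustering, using the assumptions in the
# text: ICC rho = 0.001, average cluster size m = 116, n = 416.
def design_effect(m: int, rho: float) -> float:
    """Design effect for cluster sampling: DEFF = 1 + (m - 1) * rho."""
    return 1 + (m - 1) * rho

n_enrolled = 416
deff = design_effect(116, 0.001)    # 1.115
n_effective = n_enrolled / deff     # ~373, close to the 372 reported
```

Rounding conventions likely account for the small difference between this figure and the reported 372 (186 per arm).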

Statistical analysis

For the primary outcome, we compared the proportion of patients with PAM scores greater than 55 (Level 3 or 4) and mean PAM scores post- versus preimplementation using a Chi-squared test and a Student’s t-test, respectively. We used multivariable regression to adjust for age, sex, race, median income by zip code, Elixhauser comorbidity score, and month of admission. All analyses (unadjusted and adjusted) used GEE to account for clustering by unit.
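As a rough illustration of the unadjusted primary-outcome comparison, the odds ratio and Pearson chi-squared statistic for the 2x2 table can be computed directly from the counts reported in Table 3 (139/245 activated preimplementation vs 140/234 postimplementation). The GEE adjustment for clustering by unit and the multivariable adjustment are not reproduced in this sketch.

```python
# Unadjusted 2x2 comparison sketch using the Table 3 counts.
# The paper's GEE adjustment for clustering by unit is not reproduced.
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio for the 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

def pearson_chi2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-squared statistic, no continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# post: 140 activated, 94 not; pre: 139 activated, 106 not
or_unadjusted = odds_ratio(140, 94, 139, 106)   # ~1.14 (1.13 reported)
chi2_stat = pearson_chi2(140, 94, 139, 106)     # consistent with P = .49
```

The small discrepancy from the reported OR of 1.13 plausibly reflects the GEE model rather than this crude cross-product estimate.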

We compared and analyzed secondary outcomes post- versus preimplementation similar to the primary outcome. For patient activation at 30 days, we used repeated measures analysis to account for scores at discharge, as well as GEE to account for repeated measures on the same patient (discharge and 30 days after discharge) and clustering by unit. Postdischarge healthcare resource utilization events (emergency room visits, readmissions) were enumerated, dichotomized (presence of 1 or more events), compared, and analyzed using logistic regression for outcomes of varying severity (readmissions, readmissions plus emergency room visits, etc). Mean length of stay was compared using Student’s t-test and analyzed using linear regression. Responses to statements of discharge safety were dichotomized (agree or strongly agree) and analyzed using logistic regression.

To assess effect modification, we analyzed the following subgroups identified a priori: patient portal use (yes/no), elderly (>65), Elixhauser comorbidity score (>6), caregiver (yes/no), socioeconomic status (median income by zip code > $63K), and HOSPITAL readmission risk score.36,37 Each variable was used as an interaction term in subgroup analyses (eg, prepost*portal [y/n]). To evaluate change in patient activation within participants during the transition period, we compared the change in the proportion of activated patients (PAM Level 3 or 4) and mean PAM scores from discharge to 30 days postdischarge in the post- minus the preimplementation period using a Chi-squared test and Student’s t-test, respectively.
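The difference-in-difference quantity described above reduces to simple arithmetic on the group-level changes. A sketch using the unadjusted percentages reported in Table 4 (the adjusted model itself is not reproduced):

```python
# Difference-in-difference arithmetic using the Table 4 percentages:
# change in % activated (PAM Level 3 or 4) from discharge to 30 days,
# post- minus preimplementation. A sketch, not the adjusted GEE model.
pre_change = 60.6 - 58.6      # +2.0 points: pre group, discharge -> 30 days
post_change = 44.3 - 58.4     # -14.1 points: post group, discharge -> 30 days
did = post_change - pre_change    # -16.1 points, as reported
```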

RESULTS

Of 673 patients approached (Figure 2), 484 (71.9%) enrolled; of these, 479 (98.9%) were included in the main analysis (245 pre, 234 post). Of the 479 patient participants, 215 (44.9%) were available for a 30-day follow-up call (99 pre, 116 post). Demographic variables (Table 2) were evenly balanced between pre- and postimplementation periods with notable exceptions: postimplementation participants were more often non-Hispanic and spoke English as their primary language. During the postimplementation period, an average (SD) of 3.75 (3.02) concerns were reported via the checklist, and the mean (SD) number of days from checklist submission to discharge was 4.58 (5.73) days. Nonresponders (Supplementary Appendix A) to the 30-day follow-up call had more comorbidities and were more often discharged to facilities.

Figure 2. CONSORT flow diagram.

Table 2.

Demographics of participants for patient admissions

Characteristic | Pre, n = 245 | Post, n = 234 | P Value
No. unique patients 242 227
Caregiver (healthcare proxy) enrolled–no. (%) 12 (4.9) 8 (3.4) .50c
Age in years–mean (SD) 59.4 (18.5) 59.0 (17.4) .75b
Female gender–no. (%) 147 (60.0) 129 (55.1) .28a
Race–no. (%)
 Caucasian 167 (68.2) 160 (68.4) .83a
  Non-Caucasian 72 (29.4) 72 (30.8)
 Missing 6 (2.4) 2 (0.9)
Ethnicity–no. (%)
 Non-Hispanic 212 (86.5) 218 (93.2) .01a
 Hispanic 29 (11.8) 13 (5.6)
 Missing 4 (1.6) 3 (1.3)
 Primary language English–no. (%) 227 (92.7) 228 (97.4) .02a
Socioeconomic status (median income by zip code)–no. (%)
 Less than or equal to $47 000 54 (22.0) 49 (20.9) .97a
 $47 001–$63 000 57 (23.3) 55 (23.5)
 Greater than $63 000 131 (53.5) 126 (53.8)
 Missing 3 (1.2) 4 (1.7)
Insurance status–no. (%)
 Private 81 (33.1) 85 (36.3) .68a
  Public (Medicaid, Medicare) 159 (64.9) 143 (61.1)
 Other 5 (2.0) 6 (2.6)
Admission unit–no. (%)
 Unit 1 78 (31.8) 80 (34.2) .64a
 Unit 2 106 (43.3) 104 (44.4)
 Unit 3 61 (24.9) 50 (21.4)
Primary care physician–no. (%)
 Network 120 (49.0) 104 (44.4) .24a
 Non-network 121 (49.4) 130 (55.6)
 Missing 4 (1.6) 0
Enrolled in ambulatory patient portal 41 (16.7) 33 (14.1) .43a
Enrolled in acute care patient portal 76 (31.0) 91 (38.9) .07a
Charlson comorbidity index–mean (SD) 2.58 (2.76) 2.97 (2.98) .14b
Elixhauser index- no. (%)
 Less than or equal to 0 60 (24.5) 45 (19.2) .51a
  1 to 5 47 (19.2) 43 (18.4)
  6 to 10 45 (18.4) 46 (19.7)
 11 or more 93 (38.0) 100 (42.7)
Elixhauser comorbidities–mean (SD) 4.13 (2.37) 4.37 (2.44) .29b

Abbreviation: SD, standard deviation.

(a) Chi-squared test, (b) Wilcoxon rank sum test, and (c) Fisher's exact test, for ordinal, continuous, and dichotomous variables, respectively.

The primary outcome (Table 3), the proportion of activated patients (PAM Level 3, 4) at discharge, was higher post- compared with preimplementation, but this was nonsignificant in unadjusted and adjusted analyses (59.8% vs 56.7%, adjusted OR 1.23 [0.38, 3.96], P = .73). Regarding secondary outcomes (Table 3), the proportion of activated patients (PAM Levels 3, 4) at 30 days was significantly lower post- compared with preimplementation in unadjusted analyses but not in adjusted analyses (45.7% vs 59.6%, adjusted OR 0.56 [0.10, 3.19], P = .51). There were no significant changes post- compared with preimplementation for postdischarge healthcare resource utilization and 30-day readmissions. Regarding hospital operational metrics, EDD accuracy, discharges before noon, and discharge with home services or to facilities were nonsignificantly higher, and mean length of stay was significantly higher post- compared with preimplementation.

Table 3.

Main outcomes

Measure | Pre, n = 245 | Post, n = 234 | Unadjusted Effect Size [95% CI] | P Value | Adjusted Effect Size (a) [95% CI] | P Value
Patient activation at discharge (primary) b
PAM level 3 or 4–no. (%) 139 (56.7) 140 (59.8) 1.13 [0.79, 1.64]e .49 1.23 [0.38, 3.96]e .73
PAM score–mean (SD) 63.3 (15.9) 66.4 (18.4) 3.12 [0.05, 6.19]d .05 5.46 [−4.19, 15.11]d .27
Patient activation at 30 days (secondary) b
PAM level 3 or 4–no. (%) 59 (59.6) 53 (45.7) 0.57 [0.33, 0.98]e .04 0.56 [0.10, 3.19]e .51
PAM score–mean (SD) 66.4 (20.2) 64.0 (17.7) −2.24 [−5.38, 0.90]d .16 −4.74 [−18.44, 8.96]d .50
Lost to follow-up 146 118
Healthcare resources utilization within 30 days postdischarge (secondary)
ED visitsc 57 58
ED visit alone–no. (%) 22 (9.0) 22 (9.4) 1.05 [0.56, 1.95]e .88 0.32 [0.04, 2.54]e .28
ED visit and readmission–no. (%) 35 (14.3) 36 (15.4) 1.08 [0.65, 1.80]e .75 0.72 [0.13, 3.95]e .71
Readmissions (30-d)c–no. (%) 39 (15.9) 41 (17.5) 1.11 [0.69, 1.80]e .66 0.70 [0.14, 3.48]e .67
Early readmissions (7-d)–no. (%) 9 (3.7) 11 (4.7) 1.29 [0.50, 3.33]e .60 9.15 [0.47, 176.70]e .14
Hospital operational metrics (secondary)
EDD accuracy–no. (%) 141 (57.6) 148 (63.3) 1.27 [0.88, 1.84]e .20 1.40 [0.43, 4.60]e .58
Discharge before noon–no. (%) 6 (2.4) 14 (6.0) 2.51 [0.95, 6.62]e .06 0.30 [0.01, 6.28]e .44
Discharge with home services or to facility–no. (%) 109 (44.5) 117 (50.0) 0.80 [0.66, 0.97]d .27 0.62 [0.18, 2.10]d .44
Length of stay–mean (SD) 6.21 (5.89) 10.13 (9.16) 3.92 [2.56, 5.30]d <.01 8.12 [3.95, 12.89]d <.01

Abbreviations: CI, confidence interval; ED, emergency department; EDD, expected discharge date; SD, standard deviation.

(a) Adjusted for age, sex, race, median income by zip code, Elixhauser score, unit, and time.

(b) CAM-13 was used to generate an equivalent PAM score and PAM level for caregiver participants.

(c) ED visits and readmissions within the MGB network, plus out-of-network events reported by the patient during the 30-day phone call.

(d) Effect size measured as difference in means.

(e) Effect size measured as odds ratio.

The majority of patients agreed or strongly agreed that they were prepared by their clinical team to be safely discharged (statement 1), and were confident that their clinical team communicated about the discharge plan (statement 2); however, the overall percentages of agreement were nonsignificantly lower post- compared with preimplementation for both statements: 87.9% vs 90.9%, OR 0.72 [0.43, 1.22], P = .22; and 88.8% vs 91.3%, OR 0.34 [0.41, 1.36], P = .34, respectively. We did not observe effect modification of the intervention on the primary outcome in a priori subgroup analyses (Supplementary Appendix B).

Regarding changes in patient activation from discharge to 30 days postdischarge (Table 4), we observed a nonsignificant decrease in the difference in mean (SE) proportion of activated patients (PAM Level 3 or 4) and mean (SE) PAM scores in the 116 postimplementation (−14.1% (1.2); −2.0 (1.5)) minus the 99 preimplementation (2.0% (5.5); 2.5 (2.7)) participants in adjusted analyses (difference-in-difference of −16.1% (9.6), P = .09; −4.5 (3.6), P = .21, respectively).

Table 4.

Change in PAM scores from discharge to 30 days postdischarge within patients

Measure | Discharge | 30 Days Postdischarge | Change: Discharge to 30 Days Postdischarge | Adjusted P Value
Pre, n = 99
% PAM Level 3,4–mean (SE) 58.6 (1.9) 60.6 (7.1) +2.0 (5.5) .72
PAM score–mean (SE) 65.3 (1.7) 67.8 (1.1) +2.5 (2.7) .35
Post, n = 116
% PAM Level 3,4–mean (SE) 58.4 (3.1) 44.3 (4.2) −14.1 (1.2) <.01
PAM score–mean (SE) 64.6 (0.8) 62.6 (0.9) −2.0 (1.5) .16
Change: Post- minus Preimplementation
% PAM Level 3,4–mean (SE) −0.2 (4.9) −16.3 (11.4) −16.1 (9.6) .09
PAM score–mean (SE) −0.7 (2.5) −5.2 (1.9) −4.5 (3.6) .21

Abbreviations: PAM, patient activation measure; SE, standard error.

DISCUSSION

In this prepost implementation study, we evaluated the effect of an EHR-integrated digital health intervention composed of a structured checklist and video that was administered to hospitalized patients at least 24 hours prior to anticipated discharge based on the current EDD entered in the EHR. While we did not observe a significant increase in patient activation at discharge as measured by PAM-13, postimplementation participants had significantly longer lengths of stay, equivalent in magnitude to the time from checklist submission to actual discharge. We observed no effect on postdischarge healthcare utilization, including 30-day readmissions. Finally, we observed a nonsignificant decrease in patient activation from discharge to 30 days postdischarge from the pre- to postimplementation period.

Our observations have several explanations. It is possible that the intervention was transiently activating (ie, from admission to discharge) but that postimplementation patients were less activated at baseline as reflected by their ambulatory PAM scores measured during the 30-day follow-up call (ie, they returned to their lower baseline scores). Specifically, for the 116 postimplementation enrollees who participated in follow-up, the proportion of activated patients (PAM Level 3 or 4) at discharge and 30 days afterward was 58.4% and 44.3%, respectively. In contrast, for the 99 preimplementation enrollees who participated in follow-up, the proportion of activated patients at discharge and 30 days afterward was 58.6% and 60.6%, respectively (adjusted difference-in-difference of −16.1%, P = .09). Thus, if postimplementation participants had lower intrinsic activation (as reflected by ambulatory PAM levels) than preimplementation participants, then it is possible that the effect of the intervention on patient activation was not fully appreciated in our analysis.

Alternatively, our intervention may not have been sufficiently activating or may have lowered intrinsic activation if the expectations of patients who watched the video and completed the checklist (and became aware of key items to prepare for discharge) were not met by the clinicians caring for them. In this case, lower ambulatory PAM scores at 30-day follow-up for postimplementation participants could be reflective of lack of confidence, knowledge, or skills necessary to successfully navigate postdischarge care which were uncovered by the checklist and video.38 As we previously reported, though use of patient-facing components was high, use of clinician-facing components was not: nursing use of the dashboard was modest, but physicians infrequently used the dashboard to view checklist responses submitted by patients because they were not automatically notified.32 Therefore, if postimplementation participants had perceived that their care team did not address their concerns reported via the checklist, this may have adversely affected activation, as a high level of patient–clinician interaction is likely necessary to achieve meaningful improvements in outcomes.25,34,39,40

Regarding the effect on length of stay, it is possible that postimplementation participants who completed the checklist and watched the video 24 hours prior to expected discharge independently raised questions about their discharge preparedness to their clinical team. On average, participants reported 3.75 concerns per checklist submitted, which may have led to the evaluation of new or unaddressed symptoms or conditions, additional education and counseling, and/or arrangement of home services. While we did administer the checklist at least 24 hours prior to the EDD, the average time from checklist submission to actual discharge was 4.6 days, and somewhat longer for the 38 postimplementation participants who did not agree compared to the 197 participants who did agree that they were prepared for discharge (5.17 vs 4.47 days, P = .49). However, other factors (eg, patient complexity, subsequent clinical deterioration, weekend vs nonweekend discharge date, availability of skilled nursing facility beds, etc.) may have also contributed to increased length of stay.

To date, there have been few rigorously conducted studies of the effect of EHR-integrated digital health interventions that specifically engage patients in discharge preparation during hospitalization on key outcomes in real-world clinical settings. Despite the negative results of our analysis, our study offers several instructive lessons.41 First, demonstrating impact of digital health interventions on patient activation, hospital operational metrics, and postdischarge outcomes is challenging: multiple rigorously conducted studies (Table 5) evaluating the impact of patient portals during hospitalization have yet to clearly demonstrate significant improvement in patient activation as measured by PAM-13 as well as other transitions-of-care outcomes.31,34,42 Second, studies that have demonstrated significant improvements in patient activation (including several conducted by these authors31,43) are more likely explained by the types of patients who enrolled and the extent to which these patients used digital health tools to participate in their care during hospitalization. For example, whereas Schnock et al demonstrated a statistically significant improvement in PAM scores for patients offered an inpatient portal,31 Masterson et al reported that PAM scores did not significantly increase in patients randomized to an inpatient portal.34 Third, while addressing implementation barriers and aligning with institutional priorities are clearly necessary to facilitate more robust adoption, workflow transformation alone may not be sufficient to increase patient activation.32,35,43 Fourth, increasing patient activation using technological interventions may be particularly challenging for the elderly, minorities, and those with poor health literacy and multiple comorbidities, all of whom are commonly encountered in the hospital.44–46

Table 5.

Selected studies evaluating the impact of hospital-based digital health interventions on patient activation using the patient activation measure (PAM)

O’Leary et al33: Patient portal. Prospective cohort study, n = 202. Mean PAM scores: 64.1 (I) vs 62.7 (C), P = .46.
Interpretation and limitations: Small to medium sample size may have limited the ability to demonstrate improvement in patient activation attributed to the patient portal for patients admitted to intervention compared with control units. Without administering PAM-13 at multiple time points during hospitalization, it is not possible to demonstrate change in patient activation (ie, from admission to discharge) within participants for either intervention or control units.

Masterson et al34: Patient portal. RCT, n = 426. Change in mean PAM scores: +4.4 (I) vs +4.3 (C), P = .42.
Interpretation and limitations: Moderate sample size, but unable to demonstrate significant change in mean PAM scores for patients randomized to patient portal vs usual care from admission to discharge. Within the patient portal group, higher usage correlated with higher patient activation scores, suggesting some degree of confounding by type of user.

Schnock et al31: Patient portal, bedside display, EHR-integrated safety dashboard. Prepost study, n = 1637. Mean PAM scores: 71.4 (I) vs 61.3 (C), P < .01 (neurology); 64.7 (I) vs 60.6 (C), P = .14 (oncology); 65.5 (I) vs 61.8 (C), P = .01 (medicine).
Interpretation and limitations: Nonrandomized study in which a large number of patients were randomly approached and offered a patient portal for use during hospitalization, potentially confounded by the type of user who enrolled in the intervention (ie, highly activated patients were more likely to enroll in the portal). Compared mean PAM scores in different groups; however, it is unclear how the intervention affected activation within participants from admission to discharge.

Abbreviations: EHR, electronic health record; PAM, patient activation measure; RCT, randomized controlled trial; I, intervention; C, control.

In short, our collective experience underscores the complexity of using PAM-13 to attribute increases in patient activation to patient-facing digital health interventions in the context of real-world implementation efforts, in which randomized study designs are often not feasible.25,26,31,32,35,43,47 To more clearly demonstrate the effect of digital health interventions on patient activation over the acute episode of care, a large number of participants and administration of PAM-13 at specific time intervals (admission, discharge, 14 days postdischarge) would likely be required. Alternatively, while patient activation measured in the hospital might not be an outcome that can be readily influenced by patient-facing digital health interventions, the PAM-13 may help identify activated patients who might benefit from certain transitional care interventions, as originally suggested by Hibbard.21,22,38,44
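As an illustration of the outcome definition used here, the mapping from a PAM-13 activation score (0–100 scale) to an activation level can be sketched as below. The cutoff values are the commonly cited thresholds published by the instrument's developers, not values taken from this article, and are an assumption for illustration; patients at level 3 or 4 are considered "activated."

```python
def pam_level(score: float) -> int:
    """Map a PAM-13 activation score (0-100 scale) to an activation level.

    Cutoffs are the commonly cited published thresholds (assumed here
    for illustration): <=47.0 -> 1; 47.1-55.1 -> 2; 55.2-67.0 -> 3; >=67.1 -> 4.
    """
    if score <= 47.0:
        return 1
    elif score <= 55.1:
        return 2
    elif score <= 67.0:
        return 3
    else:
        return 4


def is_activated(score: float) -> bool:
    # "Activated" in analyses like this study's means PAM level 3 or 4
    return pam_level(score) >= 3


# A discharge score of 62.5 falls in level 3, ie, "activated"
print(pam_level(62.5), is_activated(62.5))
```

A binary activated/not-activated outcome like this is what drives the reported proportions (eg, 59.8% vs 56.7% at discharge) and the adjusted odds ratio.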

Our study has several limitations. First, as a prepost implementation study conducted in the context of a larger health IT implementation effort, it is subject to confounding, as suggested above. Nonetheless, the insights we offer should be helpful to investigators and institutions considering how to evaluate efforts at engaging patients during hospitalization and the transition period afterward using EHR-integrated digital health tools, which is becoming increasingly common with the availability of APIs and standards-based data exchange. Second, we did not measure PAM scores upon admission, which may limit our ability to infer the full effects of our intervention. Still, it remains unclear to what extent patient-facing interventions can influence PAM scores during hospitalization; and, as suggested above, PAM-13 might be better suited to stratify hospitalized patients who preferentially benefit from digital health interventions.31,34 Third, loss to follow-up may have biased our analysis of 30-day postdischarge survey results. However, healthcare utilization was assessed using both postdischarge surveys and EHR data, providing more complete results. Finally, though the benefits of digital health interventions on patient activation may be limited to certain individuals (eg, younger, fewer comorbidities),42 we could not demonstrate effect modification in subgroup analyses (Supplementary Appendix B), perhaps due to limited sample size.
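The within-participant change analysis in a prepost design of this kind can be sketched as a simple difference-in-differences on activation rates. The discharge proportions below are taken from this study's results; the 30-day values are hypothetical placeholders, and the study reports an adjusted version of this quantity rather than the unadjusted calculation shown.

```python
# Proportions of activated patients (PAM level 3-4) at discharge and
# 30 days postdischarge. Discharge values are from the study; the
# 30-day values are hypothetical, for illustration only.
pre = {"discharge": 0.567, "day30": 0.60}    # preimplementation cohort
post = {"discharge": 0.598, "day30": 0.55}   # postimplementation cohort

# Change within each cohort from discharge to 30 days postdischarge
change_pre = pre["day30"] - pre["discharge"]
change_post = post["day30"] - post["discharge"]

# Unadjusted difference-in-differences: how much the within-cohort
# change differed between the two cohorts
did = change_post - change_pre
print(f"{did:+.3f}")  # prints -0.081
```

A negative value, as in the study's adjusted estimate, indicates that activation changed less favorably over the postdischarge period in the postimplementation cohort than in the preimplementation cohort.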

In summary, we evaluated the effect of an EHR-integrated digital health intervention on patient activation during discharge preparation but were unable to demonstrate improvement, which is likely related to previously described implementation factors, limitations of the outcome assessment instrument in hospitalized patients (as suggested above), and study design considerations in the context of real-world health IT implementation efforts.32,43 To meaningfully affect outcomes, we believe that patient-reported data from EHR-integrated digital health applications must be tightly incorporated into clinical workflow such that nurses and physicians can efficiently review them in parallel with other EHR data and risk stratification tools (eg, EDD, medical and nonmedical barriers, readmission risk score, etc.) to address patients’ concerns regarding discharge preparedness in real time. Future randomized studies should stratify hospitalized patients by PAM level and determine whether those with high activation preferentially benefit from digital health interventions and whether those with low activation would benefit from more coaching, caregiver support, and traditional “high-touch” transitional interventions during recovery.

FUNDING

This work was supported by a grant from AHRQ (R21-HS024751). AHRQ had no role in the design or conduct of the study; collection, analysis, or interpretation of data; or preparation or review of the manuscript. The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of AHRQ.

AUTHOR CONTRIBUTIONS

All authors have contributed sufficiently and meaningfully to the conception, design, and conduct of the study; data acquisition, analysis, and interpretation; and/or drafting, editing, and revising the manuscript.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

Supplementary Material

ocaa321_Supplementary_Data

ACKNOWLEDGMENTS

None.

DISCLOSURES

None.

CONFLICT OF INTEREST STATEMENT

None declared.

DATA AVAILABILITY

The data underlying this article are available in the article and in its online supplementary material.

REFERENCES

1. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med 2003; 138 (3): 161–7.
2. Tsilimingras D, Schnipper J, Duke A, et al. Post-discharge adverse events among urban and rural patients of an urban community hospital: a prospective cohort study. J Gen Intern Med 2015; 30 (8): 1164–71.
3. Kanaan AO, Donovan JL, Duchin NP, et al. Adverse drug events after hospital discharge in older adults: types, severity, and involvement of Beers Criteria medications. J Am Geriatr Soc 2013; 61 (11): 1894–9.
4. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. Adverse drug events occurring following hospital discharge. J Gen Intern Med 2005; 20 (4): 317–23.
5. Centers for Disease Control and Prevention. Hospital utilization (in non-federal short-stay hospitals). 2018. https://www.cdc.gov/nchs/fastats/hospital.htm Accessed April 16, 2020.
6. Halasyamani L, Kripalani S, Coleman E, et al. Transition of care for hospitalized elderly patients: development of a discharge checklist for hospitalists. J Hosp Med 2006; 1 (6): 354–60.
7. Soong C, Daub S, Lee J, et al. Development of a checklist of safe discharge practices for hospital patients. J Hosp Med 2013; 8 (8): 444–9.
8. Mohta N, Vaishnava P, Liang C, et al. The effects of a ‘discharge time-out’ on the quality of hospital discharge summaries. BMJ Qual Saf 2012; 21 (10): 885–90.
9. Gao MC, Martin PB, Motal J, et al. A multidisciplinary discharge timeout checklist improves patient education and captures discharge process errors. Qual Manag Health Care 2018; 27 (2): 63–8.
10. Howard-Anderson J, Busuttil A, Lonowski S, Vangala S, Afsar-Manesh N. From discharge to readmission: understanding the process from the patient perspective. J Hosp Med 2016; 11 (6): 407–12.
11. Coleman E. The care transitions program. https://caretransitions.org Accessed January 5, 2021.
12. Crotty BH, Somai M. Digital engagement: how serious are hospitals? J Gen Intern Med 2020; 35 (4): 992–3.
13. Greysen SR, Magan Y, Rosenthal J, Jacolbia R, Auerbach AD, Harrison JD. Patient recommendations to improve the implementation of and engagement with portals in acute care: hospital-based qualitative study. J Med Internet Res 2020; 22 (1): e13337.
14. Lee JL, Williams CE, Baird S, Matthias MS, Weiner M. Too many don’ts and not enough do’s? A survey of hospitals about their portal instructions for patients. J Gen Intern Med 2020; 35 (4): 1029–34.
15. Dumitrascu AG, Burton MC, Dawson NL, et al. Patient portal use and hospital outcomes. J Am Med Inform Assoc 2018; 25 (4): 447–53.
16. Dalal AK, Bates DW, Collins S. Opportunities and challenges for improving the patient experience in the acute and postacute care setting using patient portals: the patient's perspective. J Hosp Med 2017; 12 (12): 1012–6.
17. Collins S, Dykes P, Bates DW, et al. An informatics research agenda to support patient and family empowerment and engagement in care and recovery during and after hospitalization. J Am Med Inform Assoc 2018; 25 (2): 206–9.
18. Wosik J, Fudim M, Cameron B, et al. Telehealth transformation: COVID-19 and the rise of virtual care. J Am Med Inform Assoc 2020; 27 (6): 957–62.
19. Hollander JE, Carr BG. Virtually perfect? Telemedicine for Covid-19. N Engl J Med 2020; 382 (18): 1679–81.
20. Greene J, Hibbard JH. Why does patient activation matter? An examination of the relationships between patient activation and health-related outcomes. J Gen Intern Med 2012; 27 (5): 520–6.
21. Hibbard JH, Mahoney ER, Stockard J, Tusler M. Development and testing of a short form of the patient activation measure. Health Serv Res 2005; 40 (6 Pt 1): 1918–30.
22. Prey JE, Qian M, Restaino S, et al. Reliability and validity of the patient activation measure in hospitalized patients. Patient Educ Couns 2016; 99 (12): 2026–33.
23. Hibbard JH, Greene J. What the evidence shows about patient activation: better health outcomes and care experiences; fewer data on costs. Health Aff 2013; 32 (2): 207–14.
24. Mitchell SE, Gardiner PM, Sadikova E, et al. Patient activation and 30-day post-discharge hospital utilization. J Gen Intern Med 2014; 29 (2): 349–55.
25. Dalal AK, Dykes PC, Collins S, et al. A web-based, patient-centered toolkit to engage patients and caregivers in the acute care setting: a preliminary evaluation. J Am Med Inform Assoc 2016; 23 (1): 80–7.
26. Dalal AK, Dykes P, Samal L, et al. Potential of an electronic health record-integrated patient portal for improving care plan concordance during acute care. Appl Clin Inform 2019; 10 (3): 358–66.
27. Huerta T, Fareed N, Hefner JL, et al. Patient engagement as measured by inpatient portal use: methodology for log file analysis. J Med Internet Res 2019; 21 (3): e10957.
28. Haldar S, Mishra SR, Pollack AH, Pratt W. Informatics opportunities to involve patients in hospital safety: a conceptual model. J Am Med Inform Assoc 2020; 27 (2): 202–11.
29. Greysen SR, Khanna RR, Jacolbia R, Lee HM, Auerbach AD. Tablet computers for hospitalized patients: a pilot study to improve inpatient engagement. J Hosp Med 2014; 9 (6): 396–9.
30. Greysen SR, Harrison JD, Rareshide C, et al. A randomized controlled trial to improve engagement of hospitalized patients with their patient portals. J Am Med Inform Assoc 2018; 25 (12): 1626–33.
31. Schnock KO, Snyder JE, Fuller TE, et al. Acute care patient portal intervention: portal use and patient activation. J Med Internet Res 2019; 21 (7): e13336.
32. Fuller TE, Pong DD, Piniella N, et al. Interactive digital health tools to engage patients and caregivers in discharge preparation: implementation study. J Med Internet Res 2020; 22 (4): e15573.
33. O’Leary KJ, Lohman ME, Culver E, Killarney A, Randy Smith G, Liebovitz DM. The effect of tablet computers with a mobile patient portal application on hospitalized patients' knowledge and activation. J Am Med Inform Assoc 2016; 23 (1): 159–65.
34. Masterson Creber RM, Grossman LV, Ryan B, et al. Engaging hospitalized patients with personalized health information: a randomized trial of an inpatient portal. J Am Med Inform Assoc 2019; 26 (2): 115–23.
35. Dalal AK, Fuller T, Garabedian P, et al. Systems engineering and human factors support of a system of novel EHR-integrated tools to prevent harm in the hospital. J Am Med Inform Assoc 2019; 26 (6): 553–60.
36. Donze J, Aujesky D, Williams D, Schnipper JL. Potentially avoidable 30-day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med 2013; 173 (8): 632–8.
37. Donze JD, Williams MV, Robinson EJ, et al. International validity of the HOSPITAL score to predict 30-day potentially avoidable hospital readmissions. JAMA Intern Med 2016; 176 (4): 496–502.
38. Dixon A, Hibbard J, Tusler M. How do people with different levels of activation self-manage their chronic conditions? Patient 2009; 2 (4): 257–68.
39. Riippa I, Linna M, Ronkko I. The effect of a patient portal with electronic messaging on patient activation among chronically ill patients: controlled before-and-after study. J Med Internet Res 2014; 16 (11): e257.
40. Rudin RS, Fanta CH, Qureshi N, et al. A clinically integrated mHealth app and practice model for collecting patient-reported outcomes between visits for asthma patients: implementation and feasibility. Appl Clin Inform 2019; 10 (5): 783–93.
41. Huerta TR, McAlearney AS, Rizer MK. Introducing a patient portal and electronic tablets to inpatient care. Ann Intern Med 2017; 167 (11): 816–7.
42. Grossman LV, Masterson Creber RM, Ancker JS, et al. Technology access, technical assistance, and disparities in inpatient portal use. Appl Clin Inform 2019; 10 (1): 40–50.
43. Businger AC, Fuller TE, Schnipper JL, et al. Lessons learned implementing a complex and innovative patient safety learning laboratory project in a large academic medical center. J Am Med Inform Assoc 2020; 27 (2): 301–7.
44. Hibbard JH, Mahoney ER, Stock R, Tusler M. Do increases in patient activation result in improved self-management behaviors? Health Serv Res 2007; 42 (4): 1443–63.
45. Blakemore A, Hann M, Howells K, et al. Patient activation in older people with long-term conditions and multimorbidity: correlates and change in a cohort study in the United Kingdom. BMC Health Serv Res 2016; 16 (1): 582.
46. Cunningham PJ, Hibbard J, Gibbons CB. Raising low ‘Patient Activation’ rates among Hispanic immigrants may equal expanded coverage in reducing access disparities. Health Aff 2011; 30 (10): 1888–94.
47. Dykes PC, Rozenblum R, Dalal A, et al. Prospective evaluation of a multifaceted intervention to improve outcomes in intensive care: the promoting respect and ongoing safety through patient engagement communication and technology study. Crit Care Med 2017; 45 (8): e806–e13.



Articles from Journal of the American Medical Informatics Association : JAMIA are provided here courtesy of Oxford University Press
