Author manuscript; available in PMC: 2024 May 1.
Published in final edited form as: Contemp Clin Trials. 2023 Mar 28;128:107171. doi: 10.1016/j.cct.2023.107171

Implementation and Evaluation of an Expanded Electronic Health Record-Integrated Bilingual Electronic Symptom Management Program across a Multi-Site Comprehensive Cancer Center: The NU IMPACT Protocol

David Cella 1,2, Sofia F Garcia 1,2, September Cahue 1, Justin D Smith 3, Betina Yanez 1, Denise Scholtens 4, Nicola Lancki 4, Michael Bass 1, Sheetal Kircher 5, Ann Marie Flores 1,2,6, Roxanne E Jensen 7, Ashley Wilder Smith 7, Frank J Penedo 8
PMCID: PMC10164083  NIHMSID: NIHMS1890089  PMID: 36990275

Abstract

Background:

People with cancer experience symptoms that adversely affect quality of life. Despite existing interventions and clinical guidelines, timely symptom management remains uneven in oncology care. We describe a study to implement and evaluate an electronic health record (EHR)-integrated symptom monitoring and management program in adult outpatient cancer care.

Methods:

Our cancer patient-reported outcomes (cPRO) symptom monitoring and management program is a customized EHR-integrated installation. We will implement cPRO across all Northwestern Memorial Healthcare (NMHC) hematology/oncology clinics. We will conduct a cluster randomized modified stepped-wedge trial to evaluate patient and clinician engagement with cPRO. Further, we will embed a patient-level randomized clinical trial to evaluate the impact of enhanced care (EC; cPRO plus a web-based symptom self-management intervention) relative to usual care (UC; cPRO alone). The project uses a Type 2 hybrid effectiveness-implementation approach. The intervention will be implemented across seven regional clusters within the healthcare system, comprising 32 clinic sites. A 6-month prospective pre-implementation enrollment period will be followed by a post-implementation enrollment period, during which newly enrolled, consenting patients will be randomly assigned (1:1) to EC or UC. We will follow patients for 12 months post-enrollment. Patients randomized to EC will receive evidence-based symptom-management content on cancer-related concerns and approaches to enhance quality of life via a web-based tool ("MyNM Care Corner"). This design allows for within- and between-site evaluation of implementation, plus a group-based comparison to demonstrate effectiveness on patient-level outcomes.

Discussion:

The project has potential to guide implementation of future healthcare system-level cancer symptom management programs.

ClinicalTrials.gov identifier: NCT03988543

Keywords: cancer, patient-reported outcomes, symptom management, electronic health record, implementation science, patient engagement

BACKGROUND

Cancer and associated treatments can lead to multiple symptoms that can adversely affect health-related quality of life (HRQoL).1,2 Previous research has extensively documented the nature, severity, course, and impact of cancer-related symptoms, and considerable progress has been made on their prevention, assessment, management, and reduction.2 Technology-enabled symptom monitoring is an effective way to identify and manage many challenges that cancer patients face during and after treatment. Recent work has demonstrated the efficacy of such monitoring in improving patient-reported and clinical outcomes;3-6 Basch et al. implemented a web-based symptom monitoring program that yielded improved HRQoL, fewer emergency department visits, and longer survival.6,7

Despite the emphasis on routine symptom management in treatment guidelines, timely delivery of interventions remains limited in most healthcare systems.8 There are many reasons for this, including patient factors (e.g., low engagement; hesitancy to communicate symptoms), clinician factors (e.g., competing demands for time and attention; lack of knowledge regarding supportive care best practices), and systemic/technical factors (e.g., fragmented care transitions; lack of integration of symptom reports in the electronic health record [EHR]).9 Additionally, uneven symptom management implementation in oncology care disproportionately affects vulnerable populations, widening health disparities.9,10

Barriers to widespread implementation of symptom management interventions persist, including: (a) limited scalability and generalizability; (b) limited sensitivity and specificity in measurement; (c) lack of evaluation of impact on clinic workflows or system-level outcomes; and (d) lack of implementation evaluation (e.g., acceptability, adoption, reach, sustainability). To date, there is limited evidence that large-scale, system-wide, evidence-based symptom management systems are effective. Therefore, we propose to implement and test a patient-centered, EHR-integrated oncology symptom assessment and management system.

Northwestern University is one of three research centers that form the "Improving the Management of Symptoms during and Following Cancer Treatment" (IMPACT) Consortium. Funded by the National Cancer Institute (NCI) Cancer Moonshot Initiative, the IMPACT Consortium's goal is to improve symptom control for patients with cancer. Research centers are testing systems that engage patients in systematic symptom reporting and guideline-based clinical management and evaluating the effects on patient-centered outcomes.11 Our research initiative at Northwestern University (NU IMPACT) seeks to implement and evaluate an EHR-integrated oncology symptom assessment and management program across a multi-site healthcare delivery system: Northwestern Memorial Healthcare (NMHC). We will investigate the effectiveness of an e-health approach to actively engage and motivate patients in monitoring and managing their cancer-related symptoms. We hypothesize that the implementation of a rigorous EHR-integrated symptom monitoring and self-management system will reduce symptom burden, optimize healthcare use, and improve satisfaction with care. Our primary research questions are: 1) What is the acceptability, appropriateness, and perceived sustainability of our EHR-integrated cancer patient-reported outcome (cPRO) initiative? (implementation outcome); and 2) Can a web-based self-management intervention added to cPRO further reduce symptom burden as compared with EHR-integrated cPRO symptom monitoring alone? (effectiveness outcome).

METHODS

The NU IMPACT study utilizes a hybrid type 2 effectiveness-implementation design with two complementary features to address implementation and effectiveness research questions independently. A stepped-wedge cluster randomized trial will address implementation research questions. A patient-randomized clinical trial embedded within the post-implementation phase of the stepped-wedge trial will address effectiveness research questions.

Ethics Approval

The study procedures described in this protocol were approved by the Northwestern University Institutional Review Board.

Participants:

Outpatients who are in treatment with curative or non-curative intent, or in post-treatment survivorship, will be eligible to participate in the study. Eligibility criteria apply to the participants included for analysis in both the stepped-wedge trial and the participant-level randomized trial: 1) ≥ 18 years of age; 2) EHR-confirmed diagnosis of a solid or hematological malignancy within the last 10 years; 3) medical provider-confirmed planned cancer treatment or follow-up within NMHC; 4) able to read English or Spanish; and 5) a valid email address.

Study Design

This design allows for evaluation of the implementation strategy on healthcare system-level outcomes, along with a sufficiently powered group-based comparison to demonstrate effectiveness on patient-level outcomes. The implementation strategy includes education and support of clinical staff on the interpretation of scores from patient-reported measures of pain, fatigue, depression, anxiety, and physical function, and messages to staff alerting them when patient scores exceed a clinical threshold suggesting a possible change in management. Embedded within the implementation strategy is a randomized controlled trial that examines the impact of enhancing the patient's care experience with an app that encourages symptom self-management and communication with clinical staff.

First, to evaluate implementation outcomes, we will use a modified stepped-wedge trial design randomizing seven clusters that comprise 32 clinic sites (range: 1-9 sites per cluster; see Figure 1). Clusters are composed of existing administrative and organizational units of the healthcare system. This will allow within- and between-site evaluation of system-level changes (e.g., adoption and reach rates) due to the prospective introduction of implementation strategies aimed at increasing engagement in cPRO for both clinicians and patients with each cluster's crossover from pre- to post-implementation. The stepped-wedge trial involves an observational group of patients completing symptom assessments under usual care (UC) in both pre- and post-implementation phases using an open cohort design. All eligible patients (as described above) in both pre- and post-implementation phases will be invited to participate in a study assessment.

Figure 1. Consented Patient Flow by Clinical Cluster and Implementation Phase

*UC = Usual Care; EC = Enhanced Care

The subsample of consented patients will complete a monthly study assessment asking about symptoms and other related quality of life measures (see Supplemental Table 1) and will be followed for 12 months. Each cluster will have a 6-month pre-implementation period to enroll patients before crossing over to post-implementation. This modified stepped-wedge feature of the study design allows for testing the effect of the strategies on cluster- and system-level implementation outcomes (described in the measures section). After the vanguard clinical cluster starts the first prospective 6-month observational pre-implementation group of the stepped wedge, the remaining 6 clusters will be assigned to launch on a quarterly basis using a randomized rollout sequence. All consented patients in both the observational implementation group and the patient-level randomized trial will be followed for 12 months with monthly assessments via REDCap.12
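The rollout described above (a fixed vanguard cluster followed by six clusters launched quarterly in randomized order) can be sketched in a few lines. This is an illustrative sketch only; the cluster names, seed, and start date are placeholder assumptions, not the study's actual schedule.

```python
import random
from datetime import date

def rollout_schedule(clusters, vanguard, start, seed=0):
    """Assign quarterly launch dates: vanguard first, then a randomized
    sequence of the remaining clusters. `start` is a (year, month) tuple."""
    remaining = [c for c in clusters if c != vanguard]
    rng = random.Random(seed)
    rng.shuffle(remaining)  # randomized rollout sequence for the 6 non-vanguard clusters
    schedule = {}
    year, month = start
    for i, cluster in enumerate([vanguard] + remaining):
        m = month + 3 * i  # quarterly launches
        schedule[cluster] = date(year + (m - 1) // 12, (m - 1) % 12 + 1, 1)
    return schedule

# Hypothetical example: 7 clusters, vanguard launching January 2021.
clusters = [f"Cluster {i}" for i in range(1, 8)]
sched = rollout_schedule(clusters, vanguard="Cluster 1", start=(2021, 1))
```

With a fixed seed the sequence is reproducible, which mirrors how a statistician would pre-generate the rollout order before the trial begins.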

The cPRO program builds upon our previous work,25,26 and resides within the EHR patient portal (Epic MyChart, called MyNM) and Epic Hyperspace (for in-clinic assessments). It assesses patient-reported symptoms and needs using Patient-Reported Outcomes Measurement Information System® (PROMIS®) measures27-31 and two supportive oncology care needs checklist items. It is available in English and Spanish. Assessments are scored in the EHR in real time. Elevated symptom scores (above established thresholds) and endorsed needs trigger clinical alerts via EHR in-basket messages to designated oncology clinicians. The stepped-wedge, cluster randomized trial will allow us to evaluate the impact of implementation strategies (e.g., smart phrases for clinician ease of access to cPRO results, patient-facing education print materials, in-clinic assistance with cPRO) to enhance engagement of patients and adoption by clinicians.
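The threshold-based alerting described above can be illustrated with a small sketch. The cutoff values below are placeholders, not the study's actual clinical thresholds; PROMIS measures are reported as T-scores (population mean 50, SD 10), and for symptom domains such as pain and anxiety, higher scores indicate more of the symptom.

```python
# Placeholder thresholds for illustration only; the protocol's "established
# thresholds" are not specified here.
THRESHOLDS = {"pain": 60, "fatigue": 60, "depression": 55, "anxiety": 55}

def alerts(tscores, thresholds=THRESHOLDS):
    """Return the symptom domains whose T-scores meet or exceed their
    alert threshold (these would trigger EHR in-basket messages)."""
    return [d for d, s in tscores.items() if s >= thresholds.get(d, float("inf"))]

# Hypothetical patient assessment: elevated pain and anxiety, normal fatigue.
flagged = alerts({"pain": 63, "fatigue": 48, "anxiety": 57})
```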

During the post-implementation phase, an enhanced care (EC) intervention becomes available through the randomized clinical trial. To test the effectiveness of EC on patient outcomes compared to UC, eligible consenting patients who are new to care will be randomized using a 1:1 randomization scheme (EC:UC). Those patients who began care during the pre-implementation phase who continue in care are ineligible for the embedded clinical trial that begins in the post-implementation phase. Finally, we will identify patient- and system-level facilitators and barriers to implementation to disseminate to other health systems.

For implementation evaluation of our cPRO enhanced-engagement intervention, we will examine health system data gathered through the EHR. We expect to include at least 12,000 patients overall, inclusive of consented and unconsented patients. The projected minimum enrollment for the embedded patient-level randomized trial comparing EC v. UC in the post-implementation phase is 1,000 patients. This unique design allows for cluster-level data collection for pre- and post-implementation comparison of system-level strategies within and across clinical clusters and affords assessment of contamination effects within each clinical cluster during the embedded patient-level randomized trial. We expect roughly half of the total study sample (i.e., 6,000 patients) will be included during pre-implementation. The other half, consisting of patients who consent to the randomized trial and those who do not, will be included in the post-implementation phase. Participants in both phases who consent to additional assessment will be assessed and followed for the duration of the study or the duration of their care in the healthcare system, whichever is shorter.

The patient-randomized embedded trial aims to consent at least 1,000 patients to be randomized to UC or EC at a 1:1 ratio, stratified by clinical unit, gender, language preference (English vs Spanish), and cancer continuum phase: curative intent; non-curative intent; post-treatment survivorship (Figure 2). Patient outcomes will be evaluated in the embedded trial and include common cancer symptoms of pain, fatigue, depression, and anxiety, plus physical function and concerns related to nutrition and practical needs. A subset of patients in the pre-implementation phase will be asked, with their consent, to complete the same assessments planned for the subsequent post-implementation period (the contemporaneous randomized clinical trial). Depending on the specified hypothesis, we will examine clinical outcomes, healthcare utilization, and treatment delivery outcomes (Table 1).
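One standard way to realize the stratified 1:1 allocation described above is permuted-block randomization within each stratum. The sketch below is a minimal illustration under assumed parameters (block size 4, placeholder stratum labels); it is not the study's actual randomization system.

```python
import random

class StratifiedRandomizer:
    """Permuted-block 1:1 allocation within strata (illustrative sketch).
    A stratum here is any hashable tuple, e.g. (clinical unit, gender,
    language preference, continuum phase) as named in the protocol."""

    def __init__(self, arms=("UC", "EC"), block_size=4, seed=0):
        assert block_size % len(arms) == 0  # blocks must balance the arms
        self.arms, self.block_size = arms, block_size
        self.rng = random.Random(seed)
        self.blocks = {}  # stratum -> remaining assignments in current block

    def assign(self, stratum):
        block = self.blocks.get(stratum)
        if not block:  # start a fresh permuted block for this stratum
            block = list(self.arms) * (self.block_size // len(self.arms))
            self.rng.shuffle(block)
            self.blocks[stratum] = block
        return self.blocks[stratum].pop()

# Hypothetical stratum; eight consecutive assignments fill two blocks.
r = StratifiedRandomizer(seed=42)
stratum = ("Unit A", "female", "Spanish", "curative intent")
arms = [r.assign(stratum) for _ in range(8)]
```

Because every completed block contains equal numbers of UC and EC assignments, allocation stays balanced within each stratum as enrollment proceeds.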

Figure 2. NU IMPACT CONSORT Diagram Template (Target overall eligible n = 12,000 [6,000 pre-implementation; 6,000 post-implementation])

Table 1.

Overview of planned analysis of research questions

Research Questions | Planned analyses/outcomes | Answered by | Sample Size | Source of Measures
Cluster Randomized cPRO Enhanced Engagement Trial – Implementation Outcomes
What is the acceptability, appropriateness, and perceived sustainability of cPRO? (Primary implementation outcome) Descriptive summaries of survey-based evaluations over time (Post-Implementation only) Implementation 100 stakeholders (physicians, nurses, clinic administrators) – by survey REDCap Implementation survey
Does the reach of cPRO improve for patients in the Post-Implementation period compared to Pre-Implementation? (Secondary implementation outcome) Differences in measures of cPRO reach for Post-Implementation versus Pre-Implementation using mixed effects models
  • % patients approached for cPRO completion

Non-Consented, Observational Implementation Group Full sample* EHR/Epic
Does adoption of cPRO improve for patients in the Post-Implementation period compared to Pre-Implementation? (Secondary implementation outcome) Differences in measures of cPRO adoption for Post-Implementation versus Pre-Implementation using mixed effects models
  • % of approached patients who complete cPRO

Non-Consented, Observational Implementation Group Full sample* (the subset who complete at least 1 cPRO) EHR/Epic
Does follow up by clinicians with patients who trigger alerts improve in the Post-Implementation period? (Secondary implementation) Differences in clinician follow-up for Post-Implementation versus Pre-Implementation using mixed effects models
  • % of alerts followed up with using cPRO dot phrase

Non-Consented, Observational Implementation Group Full Sample* (the subset who trigger alert) EHR/Epic
Do referrals by clinicians to appropriate services for patients who trigger alerts improve in the Post-Implementation period? (Secondary implementation) Differences in appropriate service referrals for Post-Implementation versus Pre-Implementation using mixed effects models
  • Appropriate service referral counts

Non-Consented, Observational Implementation Group Full sample* (the subset who trigger alert) EHR/Epic
Does healthcare utilization and cancer treatment delivery improve? Healthcare utilization and two measures of cancer treatment delivery data will be available for the consented and non-consented patients Non-Consented, Observational Implementation Group Full sample*–with secondary/exploratory adoption trigger subsets EHR/Epic
What are the barriers and facilitators to implementation of cPRO? What are patient and stakeholder experiences? (Secondary implementation) Descriptive summaries of focus group qualitative findings. Qualitative data management and analysis will be performed using NVivo, Version 11.53 Implementation Subset of stakeholders (n=90; 30 clinicians, 10 administrators, 50 patients) collected from focus groups Focus group discussion
Embedded Individual Patient-Randomized Trial - Effectiveness Outcomes
Does the EC self-management system reduce symptom burden compared to UC of EHR-integrated symptom monitoring? (Primary effectiveness outcome) Average differences in longitudinal PROs from enrollment to completion of follow-up across groups using mixed effects models (EC vs UC; Pre-Implementation to assess contamination) Randomized Controlled Trial (RCT)/Consented Consented sub-sample includes: Consented Pre-Implementation Sub-sample** and Consented Post-Implementation Sub-sample*** Monthly PROMIS measures, FACT G7 - REDCap surveys
Does the EC self-management system optimize healthcare use compared to UC of EHR-integrated symptom monitoring? (Secondary effectiveness outcome) Average differences in utilization from enrollment to completion of follow-up across groups using generalized linear models (EC vs UC; Pre-Implementation to assess contamination)
  • ER/urgent care visits

  • Hospital admissions/LOS

  • Unscheduled visits

  • Supportive care visits

RCT/Consented Consented sub-sample includes: Consented Pre-Implementation Sub-sample**
Consented Post-Implementation Sub-sample***
EHR/Epic for NM, survey on Healthcare Utilization outside of NM as REDCap survey
Does the EC system improve satisfaction with care (and reduce treatment breaks/changes in care and response time to alerts) compared to UC of EHR-integrated symptom monitoring? (Secondary effectiveness outcome) Average differences in cancer treatment delivery outcomes from enrollment to completion of follow-up across groups using mixed effects models and generalized linear models (EC vs UC; Pre-Implementation to assess contamination)
  • Treatment satisfaction

  • Number of treatment breaks

  • Number of changes in regimen

RCT/Consented Consented sub-sample includes: Consented Pre-Implementation Sub-sample**
Consented Post-Implementation Sub-sample***
CAHPS, Collaborate, REDCap surveys, EHR/Epic Beacon treatment data, EHR/Epic dot phrase data
Does the EC self-management improve self-management measures compared to UC of EHR-integrated symptom monitoring? (Secondary effectiveness outcome)…if yes, does this impact the relationship between EC approach and the other patient, healthcare utilization, and cancer treatment delivery outcomes? (Exploratory) Average differences in self-management outcomes from enrollment to completion of follow-up across groups (EC vs UC; Pre-Implementation to assess contamination), as well as exploratory mediation analyses:
  • Self-efficacy

  • Patient/physician interactions and communication

  • Coping skills

RCT/Consented Consented sub-sample includes: Consented Pre-Implementation Sub-sample**
Consented Post-Implementation Sub-sample***
Self-efficacy manage meds/treatment, CASE, Support REDCap surveys
Does the EC self-management system improve clinical outcomes compared to UC of EHR-integrated symptom monitoring? (Exploratory effectiveness outcome) Differences in exploratory clinical outcomes from enrollment through follow-up across groups (EC vs UC; Pre-Implementation to assess contamination) by continuum using time-to-event and generalized linear models Curative
  • Recurrence

  • Secondary cancer(s)

  • Disease free / overall survival

Non-curative
  • Secondary cancer(s)

  • Progression free and overall survival

Survivorship
  • Recurrence

RCT/Consented Consented sub-sample includes: Consented Pre-Implementation Sub-sample**
Consented Post-Implementation Sub-sample***
Epic diagnoses/death info
* Full sample includes anyone who is eligible to have completed a cPRO during the study period/implementation phase for each clinical unit.

** Consented Pre-Implementation Sub-sample = 2,500

*** Consented Post-Implementation Sub-sample = 1,000

The EC condition will provide symptom self-management materials in addition to the cPRO symptom monitoring available in UC. Additional EC tools will include tabular and graphic feedback on reported symptoms in MyNM patient portal, and targeted, web-based self-management materials with patient-centered information about cancer-related concerns, self-management and communications skills (via MyNM Care Corner). Only post-implementation consented patients randomized to the EC condition will receive these additional self-management supports (i.e., symptom feedback and MyNM Care Corner). Assessments for UC and EC, and web-based intervention materials for EC will be provided in English and Spanish.

Detail on pre-implementation and post-implementation assessment can be found in the supplemental appendix.

During the post-implementation period, consenting patients will be randomized into UC or EC (Figure 2). The UC condition continues with cPRO conducted as in the pre-implementation period. The EC arm will receive additional self-management information through a patient-centered website and smartphone application called MyNM Care Corner (available in English and Spanish). After each cPRO assessment is completed, EC-assigned patients will be sent a study email and text message via REDCap with instructions and a secure URL link for accessing MyNM Care Corner.

The EC intervention, available via a website known as MyNM Care Corner, will include patient-facing materials targeting common symptoms related to patients' cancer experiences. There will be an audio option available in English and Spanish for all material presented on the website. A patient video on the homepage will educate patients on the type of information available and navigation options. EC patients will receive information and management tips based on their cPRO results. In addition to a personalized dashboard, EC patients will be able to access other information such as a symptom library and patient resources. Additional patient resources will include information on improving emotional well-being, enhancing diet and nutrition, psychosocial supportive services within NMHC, financial and practical matters, and symptom management resources such as palliative care within NMHC.

To sustain patient engagement, EC patients will receive automated reinforcement emails, calls, or text messages encouraging them to access MyNM Care Corner, and personalized content based on a patient's cPRO results. All website activity will be tracked to capture data on patient engagement with the symptom management content. The links to the website will contain an encrypted, unique, non-identifiable study token and an encrypted list of the patient's symptoms as collected through cPRO. The EC condition website does not contain any medical information. However, EC patients can review their cPRO values in MyNM (their EHR patient portal) through a cPRO summary table. The Spanish-language option will be delivered the same way.

Health System-Level Data: Implementation Metrics and Evaluation Approach:

Our evaluation plan is guided by the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) implementation evaluation framework.32 In addition to reach, adoption, and sustainability, the implementation domain will include acceptability, appropriateness, feasibility, and fidelity, as shown in Figure 3. We will evaluate implementation outcomes specific to the cPRO rollouts across the seven clinical clusters in NMHC. We will use a mixed methods approach for evaluating implementation effects,33,35 a preferred approach when there is a small number of clinical clusters. Mixed methods will also be employed to test the efficacy of the EC intervention.36

Figure 3. Conceptual framework for a multi-level, multi-method evaluation plan for (a) implementation, (b) scalability and spread, and (c) patient and health system level outcomes

Consistent with the recommendations of Curran et al.36 and Santana and Feeny's model for evaluating ePRO applications,37 we will conduct a system-level process evaluation of implementation to inform the spread of NU IMPACT outside of NMHC. Our conceptual framework for evaluating facilitators and barriers, the implementation plan, and patient- and system-level data is depicted in Figure 3.32 The de-identified EHR data can be used to determine adoption (use/intent to use) and reach (proportion of patients reached) of the program in each clinic. Adoption is important at the patient and clinician levels. Reach is determined by computing proportions at two levels: (1) the proportion of patients enrolled and completing assessments; and (2) the proportion of patients referred for appropriate services from among those who trigger an alert. We will calculate the second level of reach using a 3-month sampling period.38 To more closely examine health equity factors in implementation and focus on sustainment, we will apply a RE-AIM extension, integrating recent conceptualizations of sustainability with a focus on long-term sustainability by addressing dynamic context and promoting health equity within the RE-AIM framework.39 We will also conduct a descriptive evaluation using multiple data sources to describe characteristics of the clinic sites and clinicians (i.e., size of the clinic [FTE clinicians, monthly patient encounters], clinician demographics, patient mix, etc.).
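The two-level reach computation described above reduces to two simple proportions. The sketch below computes them from hypothetical de-identified records; the field names (`completed_cpro`, `alert_triggered`, `referred`) are illustrative assumptions, not the study's actual EHR schema.

```python
def reach_metrics(patients):
    """Two reach proportions: (1) share of eligible patients completing a
    cPRO assessment; (2) share of alert-triggering patients referred for
    appropriate services. Field names are hypothetical."""
    eligible = len(patients)
    completed = sum(p["completed_cpro"] for p in patients)
    triggered = [p for p in patients if p["alert_triggered"]]
    referred = sum(p["referred"] for p in triggered)
    return {
        "reach_completion": completed / eligible,                            # level 1
        "reach_referral": referred / len(triggered) if triggered else None,  # level 2
    }

# Tiny illustrative cohort: 4 eligible patients, 3 completions, 2 alerts, 1 referral.
records = [
    {"completed_cpro": True,  "alert_triggered": True,  "referred": True},
    {"completed_cpro": True,  "alert_triggered": True,  "referred": False},
    {"completed_cpro": True,  "alert_triggered": False, "referred": False},
    {"completed_cpro": False, "alert_triggered": False, "referred": False},
]
m = reach_metrics(records)
```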

Study Sites:

This study will be conducted at seven outpatient oncology clinical clusters across three NMHC geographic regions in the Chicago Metropolitan Area.

Sample Size:

This study involves an estimated 12,000 patients for implementation evaluation across the health system in a modified cluster-randomized stepped-wedge trial, and participation of an anticipated 1,000 consented English- and Spanish-speaking oncology patients in the embedded patient-level randomized trial comparing EC v. UC in the post-implementation phase.

We will study implementation of cPRO for all adult patients being treated in all three phases of the cancer treatment continuum. Although there is some cross-cluster variability in the proportion of patients in each treatment phase, we estimate that, on average, each cluster's population will comprise 30% patients being treated for cure, 30% being treated for life extension, and 40% disease-free survivors.25,26 We expect 10% dropout (20% in the palliative setting).

Based on the demographic characteristics of ambulatory oncology patients across the seven target clusters, we anticipate that 13% of our patients will be Hispanic/Latinx (hereinafter referred to as Hispanic), 13% Black/African-American, 4% Asian, and about 70% non-Hispanic White. We anticipate that our sex distribution will be 45% male and 55% female, and that about 40% of our Hispanic patients will either be Spanish monolingual or Spanish preferred.

Analyses:

NU IMPACT study design allows joint evaluation of system-level implementation and patient-level effectiveness metrics (Table 1). An extensive array of longitudinal EHR and PRO survey data will be integrated and deployed for statistical analysis.

Analysis of System-Level Implementation Outcomes:

The modified stepped wedge cluster randomized design was developed to answer questions related to the effects of the implementation strategies on cluster- and systems-level implementation outcomes. These will be analyzed by fitting a generalized linear mixed model or using generalized estimating equations with time as a fixed effect for each step and a random effect for each cluster.45 Additionally, a random effect for the serial dependence of observations within each cluster will be added, as month-to-month reach and adoption rates are not independent. We will test for the confounding effect of time and learning (i.e., sequence effects) on the results. We will also explore effect heterogeneity between clusters, using within-cluster comparisons of exposed and unexposed periods, both within the primary analytic model (which will provide an estimate of the aggregate effect) and using interrupted time-series analysis46 of each cluster separately. Given that each IMPACT Consortium project is unable to examine the effectiveness of individual implementation strategies, data from this trial will be pooled with the other IMPACT projects to begin to assess implementation strategy effectiveness across projects. We will examine focus group data using a directed content analysis approach47 that involves use of existing implementation research theory and empirical frameworks (e.g., Consolidated Framework for Implementation Research) to develop a coding scheme and hypothesize the relationship between codes. Approximately two focus group transcripts will be used to develop this codebook and establish agreement. Twenty percent of the remaining focus group transcripts will be double-coded to calculate reliability.48 Disagreements will be resolved via expert consensus.
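A mixed model of the kind described above (step as a fixed effect, cluster as a random intercept, and a post-implementation indicator) can be sketched with statsmodels. The data below are simulated under assumed effect sizes, not study data, and the simplified specification omits the serial-dependence random effect the protocol plans to include.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a 7-cluster, 8-step stepped wedge: cluster c crosses to
# post-implementation at step c + 1 (a vanguard followed by staggered launches).
rng = np.random.default_rng(0)
steps, n_clusters = 8, 7
rows = []
for c in range(n_clusters):
    cluster_intercept = rng.normal(0, 0.03)  # cluster-level random effect
    crossover = c + 1
    for t in range(steps):
        post = int(t >= crossover)
        # Assumed true implementation effect of +0.15 on an adoption-rate outcome.
        adoption = 0.40 + 0.15 * post + cluster_intercept + rng.normal(0, 0.05)
        rows.append({"cluster": c, "step": t, "post": post, "adoption": adoption})
df = pd.DataFrame(rows)

# Fixed effects: categorical step (time) and the post indicator;
# random intercept per cluster via the `groups` argument.
model = smf.mixedlm("adoption ~ post + C(step)", df, groups=df["cluster"])
fit = model.fit()
effect = float(fit.params["post"])  # estimated implementation effect
```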

Results of the implementation surveys, such as those evaluating appropriateness, acceptability, feasibility, and sustainability, will be summarized as recommended by the survey developer and analyzed for change over time and for group differences (e.g., comparing pre-implementation with the post-implementation UC group). Survey summary scores can be compared within- and between-site using simple mean difference tests, such as t-tests and analysis of variance (ANOVA), at individual waves (e.g., baseline). We will analyze data using repeated measures ANOVA for measures collected repeatedly, to evaluate significant change. For measures with three time points and approximately 10 respondents per cluster, we have sufficient power to detect a medium effect (f ≥ .30; odds ratio ≥ 1.4). Given the sample size expected for survey completion by clinicians, adoption data collected from the EHR at the patient and clinician levels will have power to detect small effects even with conservative estimates of retention and response rates (f ≥ .10; odds ratio ≥ 1.12). Adoption is measured in patients (who complete a screener) and clinicians (who follow up on trigger alerts), while penetration is determined by computing proportions of patients enrolling in and completing cPRO assessments (all patients across all study periods and conditions) out of the total number approached to participate, and of patients who are referred for appropriate services from among those who trigger an alert.
However, if descriptive data suggest significant variation in trajectories overall or as a function of introducing the EC intervention, methods for small-sample repeated measures studies can be used.49 Using SAS Proc Mixed, we can use case-specific coefficients to control for variation in pre- and post-implementation effects across and within clusters to address potentially biasing assumptions of multilevel modeling.50 Best fit of an unconditional two-level growth model testing linear, quadratic, cubic, and polynomial terms to account for possible trends in the data will be determined by the lowest Bayesian Information Criterion value. Time of implementation and other parameters (e.g., autocorrelation) and covariates (e.g., size of clinic, type of practice) can then be added to the model and tested for relation to the model’s growth parameters. New developments in multilevel model specification demonstrate applicability to small sample sizes (N ≤ 10 clusters).49,50

Given the study design and nature of the NM healthcare delivery system, analyses will also account for hierarchical data structures (e.g., clusters in the rollout, respondents within clusters). When intraclass correlation coefficients indicate that the amount of variance attributable to nesting or clustering is significant and cannot be ignored, hierarchical and multilevel models will be used to analyze outcomes.51 For example, should clinical clusters show significant variation in rate of adoption or acceptability, this variation will be accounted for when evaluating patient-level clinical outcomes, as these are conceptually and empirically linked.52 Results of the quantitative data analysis will inform the foci of qualitative data collection via focus groups.35
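The ICC check that drives this modeling decision can be illustrated with the one-way random-effects ICC(1) computed from between- and within-cluster mean squares. This is a generic textbook formula, not the study's analysis code, and the cluster data below are made up:

```python
import numpy as np

def icc1(groups):
    """One-way random-effects ICC(1) for equal-size cluster samples:
    (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = len(groups), len(groups[0])
    grand = np.mean([x for g in groups for x in g])
    msb = k * sum((np.mean(g) - grand) ** 2 for g in groups) / (n - 1)
    msw = sum(sum((x - np.mean(g)) ** 2 for x in g) for g in groups) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Made-up clusters with strong between-cluster separation -> ICC near 1,
# signaling that clustering cannot be ignored and multilevel models apply.
high = icc1([[1.0, 1.1, 0.9], [5.0, 5.1, 4.9], [9.0, 9.1, 8.9]])
```

An ICC near zero (or negative, which occurs when between-cluster mean square falls below within-cluster mean square) would instead support single-level analyses.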

The patient-level RCT effectiveness analysis plan can be found in the supplemental appendix.

DISCUSSION

This project aims to lessen symptom burden and improve healthcare satisfaction among people receiving treatment for cancer. Investigators of the NU IMPACT study seek to shift the current, often reactive supportive oncology paradigm toward a novel, proactive combination of systematic assessments, evaluations, and interventions aimed at more fully integrating supportive care with active cancer treatment and post-treatment surveillance. We will do this through a hybrid pragmatic randomized rollout implementation trial design with embedded patient-level randomization to enhanced care aided by patient-centered technology. Over the past several decades, there have been unprecedented advances in measurement science, in the implementation of technology-based electronic and mobile (e/mHealth) interventions, and in the ability to integrate PROs and experiences into the patient portal. Despite these advances, limited work has developed scalable programs and interventions that can be broadly applied in major health systems or evaluated the efficacy and effectiveness of such programs on patient- and system-level outcomes.

Our study provides the opportunity to test a customized EHR-integrated installation that combines symptom monitoring with symptom self-management training across a multi-practice healthcare delivery system. Using brief and clinically actionable ePROs to assess priority symptoms and supportive care needs in cancer care, we aim to determine whether systematic monitoring of oncology symptoms and triaging of a subset of patients into an e-health self-management intervention significantly reduces symptom burden and influences system-level healthcare utilization. Our findings, if favorable, will inform the development and implementation of additional programs that provide patients with accessible, evidence-based tools to aid in symptom monitoring and management while simultaneously reducing health system burden. More specifically, our implementation evaluation may provide valuable information on effective implementation strategies for EHR-integrated e-health interventions.

The NU IMPACT study has several notable strengths. First, our symptom monitoring approach deploys state-of-the-art assessment of several PROMIS domains, enabling accurate individual assessment of core symptoms (pain interference, fatigue, anxiety, depression) and physical function in a cancer clinical setting. These assessments can be done using computer adaptive tests (CATs) or custom short forms that retain scores on a common metric. The software, scoring protocols, and procedures to implement PROMIS assessments are accessible to other researchers and clinicians, further enabling scalability. A major feature of our program is its use of tailored, web-delivered, evidence-based recommendations for symptom management. Giving patients access to tailored symptom management information provides personalized feedback on symptoms that may sustain user engagement within MyNM Care Corner. Compared with in-person symptom management, the web-based nature of MyNM Care Corner may be more widely and easily accessible to patients, especially those who have difficulty scheduling in-person visits. Additionally, all assessments and symptom management information within our trial are available in Spanish, which enhances the generalizability of our study findings. Finally, our hybrid implementation-effectiveness trial design allows us to comprehensively evaluate our intervention on patient- and system-level outcomes during expanded implementation.

Despite the many strengths of the NU IMPACT program, there are some challenges. A barrier to widespread adoption is that many patients in the health system do not register for the Epic patient-facing portal. Such patients do not automatically receive cPRO assessments and therefore may not be reached effectively, potentially biasing the enrolled sample. To avoid introducing bias, we plan to develop several system-wide patient engagement strategies to improve uptake of the portal in our patient population. While the symptom self-management intervention and cPRO are available in Spanish, our MyChart portal is only available in English, which can pose challenges for Hispanic patient enrollment and engagement. Additionally, MyNM Care Corner does not provide direct communication with clinicians. Finally, our implementation findings might not generalize to community-based clinics or other healthcare systems.

The results of this trial may help expand options for evidence-based cancer symptom management and have the potential to inform the implementation of other, non-cancer symptom management interventions linked to patient-reported outcomes. Our trial may provide valuable data on both the effectiveness of our e-health symptom management intervention and the scalability of e-health interventions within healthcare systems.

Protocol Modifications and Current Status.

The following protocol modifications have been made since the trial began:

  • Conduct recruitment by electronic invitation only, due to restrictions imposed by the COVID-19 pandemic, which reduced consented patient enrollment targets for the pre-implementation and post-implementation periods

  • Send invitation to consent for participation in the post-implementation randomized trial to eligible pre-implementation patients who did not respond to the initial invitation for consent during the pre-implementation phase

  • Change the target N for the post-implementation clinical trial to focus on patient-level effectiveness outcomes in the randomized trial

Data Sharing Statement:

The final trial dataset will be de-identified for sharing with the research community at large to advance science and health. We will remove or code any personal information that could identify participants before files are shared with other researchers to ensure that, by current scientific standards and known methods, no one will be able to identify these participants from the information we share.

Supplementary Material

1

Acknowledgement of Funding:

The Improving the Management of symPtoms during And following Cancer Treatment (IMPACT) Consortium is a Cancer Moonshot Research Initiative under the authorization of the 2016 United States 21st Century Cures Act. Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number UM1CA233035 to D. Cella. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

The article was prepared as part of one of the author's (REJ) official duties as an employee of the US Federal Government. The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the National Cancer Institute.

Acknowledgement of REDCap:

REDCap is supported by the Northwestern University Clinical and Translational Science (NUCATS) Institute. Research reported in this publication was supported, in part, by the National Institutes of Health's National Center for Advancing Translational Sciences, Grant Number UL1TR001422. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Declaration of interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Competing Interests: None declared

Ethics Approval: This study has been approved by the Social and Behavioral Research Panel of Northwestern University’s Institutional Review Board (IRB; study number STU00208413).

References

  • 1.Harrington CB, Hansen JA, Moskowitz M, Todd BL, Feuerstein M. It's not over when it's over: long-term symptoms in cancer survivors--a systematic review. Int J Psychiatry Med. 2010;40(2):163–81. doi: 10.2190/PM.40.2.c [DOI] [PubMed] [Google Scholar]
  • 2.Stein KD, Syrjala KL, Andrykowski MA. Physical and psychological long-term and late effects of cancer. Cancer. Jun 1 2008;112(11 Suppl):2577–92. doi: 10.1002/cncr.23448 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Cella D, Choi S, Garcia S, et al. Setting standards for severity of common symptoms in oncology using the PROMIS item banks and expert judgment. Qual Life Res. Dec 2014;23(10):2651–61. doi: 10.1007/s11136-014-0732-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Wagner LI, Schink J, Bass M, et al. Bringing PROMIS to practice: brief and precise symptom screening in ambulatory cancer care. Cancer. Mar 15 2015;121(6):927–34. doi: 10.1002/cncr.29104 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Yost KJ, Eton DT, Garcia SF, Cella D. Minimally important differences were estimated for six Patient-Reported Outcomes Measurement Information System-Cancer scales in advanced-stage cancer patients. J Clin Epidemiol. May 2011;64(5):507–16. doi: 10.1016/j.jclinepi.2010.11.018 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Basch E, Deal AM, Dueck AC, et al. Overall Survival Results of a Trial Assessing Patient-Reported Outcomes for Symptom Monitoring During Routine Cancer Treatment. JAMA. Jul 11 2017;318(2):197–198. doi: 10.1001/jama.2017.7156 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Basch E, Deal AM, Kris MG, et al. Symptom Monitoring With Patient-Reported Outcomes During Routine Cancer Treatment: A Randomized Controlled Trial. J Clin Oncol. Feb 20 2016;34(6):557–65. doi: 10.1200/jco.2015.63.0830 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Zebrack B, Kayser K, Bybee D, et al. A Practice-Based Evaluation of Distress Screening Protocol Adherence and Medical Service Utilization. J Natl Compr Canc Netw. Jul 2017;15(7):903–912. doi: 10.6004/jnccn.2017.0120 [DOI] [PubMed] [Google Scholar]
  • 9.Anthony DL, Campos-Castillo C, Lim PS. Who Isn't Using Patient Portals And Why? Evidence And Implications From A National Sample Of US Adults. Health Aff (Millwood). Dec 2018;37(12):1948–1954. doi: 10.1377/hlthaff.2018.05117 [DOI] [PubMed] [Google Scholar]
  • 10.McCleary NJ, Greenberg TL, Barysauskas CM, et al. Oncology Patient Portal Enrollment at a Comprehensive Cancer Center: A Quality Improvement Initiative. J Oncol Pract. Aug 2018;14(8):e451–e461. doi: 10.1200/jop.17.00008 [DOI] [PubMed] [Google Scholar]
  • 11.IMPACT. Description of IMPACT for body of manuscripts. Accessed August 5, 2021.
  • 12.Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. Apr 2009;42(2):377–81. doi: 10.1016/j.jbi.2008.08.010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Kluetz PG, Chingos DT, Basch EM, Mitchell SA. Patient-Reported Outcomes in Cancer Clinical Trials: Measuring Symptomatic Adverse Events With the National Cancer Institute's Patient-Reported Outcomes Version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE). Am Soc Clin Oncol Educ Book. 2016;35:67–73. doi: 10.1200/edbk_159514 [DOI] [PubMed] [Google Scholar]
  • 14.Dueck AC, Mendoza TR, Mitchell SA, et al. Validity and Reliability of the US National Cancer Institute's Patient-Reported Outcomes Version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE). JAMA Oncol. Nov 2015;1(8):1051–9. doi: 10.1001/jamaoncol.2015.2639 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Amireault S, Godin G. The Godin-Shephard leisure-time physical activity questionnaire: validity evidence supporting its use for classifying healthy adults into active and insufficiently active categories. Percept Mot Skills. Apr 2015;120(2):604–22. doi: 10.2466/03.27.PMS.120v19x7 [DOI] [PubMed] [Google Scholar]
  • 16.Amireault S, Godin G, Lacombe J, Sabiston CM. Validation of the Godin-Shephard Leisure-Time Physical Activity Questionnaire classification coding system using accelerometer assessment among breast cancer survivors. J Cancer Surviv. Sep 2015;9(3):532–40. doi: 10.1007/s11764-015-0430-6 [DOI] [PubMed] [Google Scholar]
  • 17.CAHPS Cancer Care Survey Measures. Accessed August 12, 2021. https://www.ahrq.gov/cahps/surveys-guidance/cancer/survey-measures.html
  • 18.Wolf MS, Chang CH, Davis T, Makoul G. Development and validation of the Communication and Attitudinal Self-Efficacy scale for cancer (CASE-cancer). Patient Educ Couns. Jun 2005;57(3):333–41. doi: 10.1016/j.pec.2004.09.005 [DOI] [PubMed] [Google Scholar]
  • 19.Yanez B, Pearman T, Lis CG, Beaumont JL, Cella D. The FACT-G7: a rapid version of the functional assessment of cancer therapy-general (FACT-G) for monitoring symptoms and concerns in oncology practice and research. Ann Oncol. Apr 2013;24(4):1073–8. doi: 10.1093/annonc/mds539 [DOI] [PubMed] [Google Scholar]
  • 20.Morris NS, MacLean CD, Chew LD, Littenberg B. The Single Item Literacy Screener: evaluation of a brief instrument to identify limited reading ability. BMC Fam Pract. Mar 24 2006;7:21. doi: 10.1186/1471-2296-7-21 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.de Souza JA, Yap BJ, Hlubocky FJ, et al. The development of a financial toxicity patient-reported outcome in cancer: The COST measure. Cancer. Oct 15 2014;120(20):3245–53. doi: 10.1002/cncr.28814 [DOI] [PubMed] [Google Scholar]
  • 22.Barr PJ, Thompson R, Walsh T, Grande SW, Ozanne EM, Elwyn G. The psychometric properties of CollaboRATE: a fast and frugal patient-reported measure of the shared decision-making process. J Med Internet Res. Jan 3 2014;16(1):e2. doi: 10.2196/jmir.3085 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Salsman JM, Schalet BD, Merluzzi TV, et al. Calibration and initial validation of a general self-efficacy item bank and short form for the NIH PROMIS(®). Qual Life Res. Sep 2019;28(9):2513–2523. doi: 10.1007/s11136-019-02198-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Gruber-Baldini AL, Velozo C, Romero S, Shulman LM. Validation of the PROMIS(®) measures of self-efficacy for managing chronic conditions. Qual Life Res. Jul 2017;26(7):1915–1924. doi: 10.1007/s11136-017-1527-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Pearman T, Garcia S, Penedo F, Yanez B, Wagner L, Cella D. Implementation of distress screening in an oncology setting. J Community Support Oncol. Dec 2015;13(12):423–8. doi: 10.12788/jcso.0198 [DOI] [PubMed] [Google Scholar]
  • 26.Garcia SF, Wortman K, Cella D, et al. Implementing electronic health record-integrated screening of patient-reported symptoms and supportive care needs in a comprehensive cancer center. Cancer. Nov 15 2019;125(22):4059–4068. doi: 10.1002/cncr.32172 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Pilkonis PA, Choi SW, Reise SP, Stover AM, Riley WT, Cella D. Item banks for measuring emotional distress from the Patient-Reported Outcomes Measurement Information System (PROMIS®): depression, anxiety, and anger. Assessment. Sep 2011;18(3):263–83. doi: 10.1177/1073191111411667 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Amtmann D, Cook KF, Jensen MP, et al. Development of a PROMIS item bank to measure pain interference. Pain. Jul 2010;150(1):173–182. doi: 10.1016/j.pain.2010.04.025 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Cella D, Lai JS, Jensen SE, et al. PROMIS Fatigue Item Bank had Clinical Validity across Diverse Chronic Conditions. J Clin Epidemiol. May 2016;73:128–34. doi: 10.1016/j.jclinepi.2015.08.037 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Cella D, Riley W, Stone A, et al. The Patient-Reported Outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks: 2005-2008. J Clin Epidemiol. Nov 2010;63(11):1179–94. doi: 10.1016/j.jclinepi.2010.04.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Northwestern University. HealthMeasures: PROMIS Measure Development & Research. 2017. http://www.healthmeasures.net/explore-measurement-systems/promis/measure-development-research
  • 32.Glasgow RE, Harden SM, Gaglio B, et al. RE-AIM Planning and Evaluation Framework: Adapting to New Science and Practice With a 20-Year Review. Front Public Health. 2019;7:64. doi: 10.3389/fpubh.2019.00064 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Adm Policy Ment Health. Jan 2011;38(1):44–53. doi: 10.1007/s10488-010-0314-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. Jan 2011;38(1):4–23. doi: 10.1007/s10488-010-0327-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Palinkas LA. Mixed methods evaluation in dissemination and implementation science. In: Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and Implementation Research in Health: Translating Science to Practice. Oxford University Press; 2017. [Google Scholar]
  • 36.Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. Mar 2012;50(3):217–26. doi: 10.1097/MLR.0b013e3182408812 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Santana MJ, Feeny D. Framework to assess the effects of using patient-reported outcome measures in chronic care management. Qual Life Res. Jun 2014;23(5):1505–13. doi: 10.1007/s11136-013-0596-1 [DOI] [PubMed] [Google Scholar]
  • 38.Stiles PG, Boothroyd RA, Snyder K, Zong X. Service penetration by persons with severe mental illness: how should it be measured? J Behav Health Serv Res. May 2002;29(2):198–207. doi: 10.1007/bf02287706 [DOI] [PubMed] [Google Scholar]
  • 39.Shelton RC, Chambers DA, Glasgow RE. An Extension of RE-AIM to Enhance Sustainability: Addressing Dynamic Context and Promoting Health Equity Over Time. Front Public Health. 2020;8:134. doi: 10.3389/fpubh.2020.00134 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Aarons GA, Ehrhart MG, Farahnak LR. The Implementation Leadership Scale (ILS): development of a brief measure of unit level implementation leadership. Implement Sci. Apr 14 2014;9(1):45. doi: 10.1186/1748-5908-9-45 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Finch TL, Girling M, May CR, et al. Improving the normalization of complex interventions: part 2 - validation of the NoMAD instrument for assessing implementation work based on normalization process theory (NPT). BMC Med Res Methodol. Nov 15 2018;18(1):135. doi: 10.1186/s12874-018-0591-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Armenakis AA, Bernerth JB, Pitts JP, Walker HJ. Organizational Change Recipients' Beliefs Scale: Development of an Assessment Instrument. The Journal of applied behavioral science. 2007;43(4):481–505. doi: 10.1177/0021886307303654 [DOI] [Google Scholar]
  • 43.Malone S, Prewitt K, Hackett R, et al. The Clinical Sustainability Assessment Tool: measuring organizational capacity to promote sustainability in healthcare. Implementation science communications. 2021;2(1):77–77. doi: 10.1186/s43058-021-00181-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Gamma E, Helm R, Johnson R, Vlissides J. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley; 2005. [Google Scholar]
  • 45.Hemming K, Haines TP, Chilton PJ, Girling AJ, Lilford RJ. The stepped wedge cluster randomised trial: rationale, design, analysis, and reporting. BMJ. Feb 6 2015;350:h391. doi: 10.1136/bmj.h391 [DOI] [PubMed] [Google Scholar]
  • 46.Bernal JL, Cummins S, Gasparrini A. Interrupted time series regression for the evaluation of public health interventions: a tutorial. Int J Epidemiol. Feb 1 2017;46(1):348–355. doi: 10.1093/ije/dyw098 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Strasberg HR, Del Fiol G, Cimino JJ. Terminology challenges implementing the HL7 context-aware knowledge retrieval ('Infobutton') standard. J Am Med Inform Assoc. Mar-Apr 2013;20(2):218–23. doi: 10.1136/amiajnl-2012-001251 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Health Level Seven® International. HL7 Version 3 Standard: Context Aware Knowledge Retrieval Application (“Infobutton”), Knowledge Request, Release 2. 2017. Accessed January 10, 2018. http://www.hl7.org/implement/standards/product_brief.cfm?product_id=208
  • 49.Shadish WR, Kyse EN, Rindskopf DM. Analyzing data from single-case designs using multilevel models: new applications and some agenda items for future research. Psychol Methods. Sep 2013;18(3):385–405. doi: 10.1037/a0032964 [DOI] [PubMed] [Google Scholar]
  • 50.Ferron JM, Bell BA, Hess MR, Rendina-Gobioff G, Hibbard ST. Making treatment effect inferences from multiple-baseline data: the utility of multilevel modeling approaches. Behav Res Methods. May 2009;41(2):372–84. doi: 10.3758/brm.41.2.372 [DOI] [PubMed] [Google Scholar]
  • 51.Rowe AK, Lama M, Onikpo F, Deming MS. Design effects and intraclass correlation coefficients from a health facility cluster survey in Benin. Int J Qual Health Care. Dec 2002;14(6):521–3. doi: 10.1093/intqhc/14.6.521 [DOI] [PubMed] [Google Scholar]
  • 52.Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. Mar 2011;38(2):65–76. doi: 10.1007/s10488-010-0319-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.NVivo qualitative data analysis software [computer program]. Version V. 2017.
  • 54.CAHPS Cancer Care Survey Measures. Accessed August 12, 2021. https://www.ahrq.gov/cahps/surveys-guidance/cancer/survey-measures.html
  • 55.Vandyk AD, Harrison MB, Macartney G, Ross-White A, Stacey D. Emergency department visits for symptoms experienced by oncology patients: a systematic review. Support Care Cancer. Aug 2012;20(8):1589–99. doi: 10.1007/s00520-012-1459-y [DOI] [PubMed] [Google Scholar]
  • 56.Enright K, Grunfeld E, Yun L, et al. Population-based assessment of emergency room visits and hospitalizations among women receiving adjuvant chemotherapy for early breast cancer. J Oncol Pract. Mar 2015;11(2):126–32. doi: 10.1200/jop.2014.001073 [DOI] [PubMed] [Google Scholar]
  • 57.Kuderer NM, Dale DC, Crawford J, Cosler LE, Lyman GH. Mortality, morbidity, and cost associated with febrile neutropenia in adult cancer patients. Cancer. May 15 2006;106(10):2258–66. doi: 10.1002/cncr.21847 [DOI] [PubMed] [Google Scholar]
  • 58.NIH Common Data Element (CDE) Resource Portal. PROMIS - Patient Reported Outcomes Measurement Information System (NIH CDE Initiative Summary). Accessed January 3, 2018. https://www.nlm.nih.gov/cde/summaries.html#PROMIS
