Cancer. 2022 Jul 5;128(17):3217–3223. doi: 10.1002/cncr.34376

Responsiveness and interpretation of the PROMIS Cancer Function Brief 3D Profile

Sean R Smith 1, Mary Vargo 2, David S Zucker 3, Samman Shahpar 4, Lynn H Gerber 5,6, Maryanne Henderson 7, Gina Jay 1, Andrea L Cheville 8
PMCID: PMC9541445  PMID: 35788990

Abstract

Background

Measuring function with valid and responsive tools in patients with cancer is essential for driving clinical decision‐making and for the end points of clinical trials. Current patient‐reported outcome measurements of function fall short for many reasons. This study evaluates the responsiveness of the Patient‐Reported Outcomes Measurement Information System (PROMIS) Cancer Function Brief 3D Profile, a novel measure of function across multiple domains.

Methods

Two hundred nine participants across five geographically distinct tertiary care centers completed the assessment and pain rating at two outpatient cancer rehabilitation clinic visits. Patients and providers completed a global rating of change measure at the second visit to indicate whether the patient was improving or worsening in function. Multiple response indices and linear models measured whether the measure was responsive to self‐reported and clinician‐rated changes over time. Correlations between changes in function and changes in anchors (pain rating and performance status) were also calculated.

Results

Function as measured by the PROMIS Cancer Function Brief 3D Profile changed appropriately as both patients and clinicians rated change. Small to moderate effect sizes supported the tool's responsiveness. Function was moderately correlated with pain and more strongly correlated with performance status, and changes in function corresponded with changes in anchor variables. No floor/ceiling effect was found.

Conclusions

The PROMIS Cancer Function Brief 3D Profile is sensitive to changes over time in patients with cancer. The measure may be useful in clinical practice and as an end point in clinical trials.

Lay summary

  • We gave patients a questionnaire by which they told their physicians how well they were functioning, including how fatigued they were.

  • This study tested that questionnaire to see whether the scores would change if patients got better or worse.

Keywords: cancer fatigue, cancer rehabilitation, function, outcome measurement, patient‐reported outcome measures, Patient‐Reported Outcomes Measurement Information System (PROMIS), rehabilitation outcomes

Short Abstract

The PROMIS Cancer Function Brief 3D Profile was found to be responsive to change in function in patients with cancer. Changes in its scores correlated with changes in performance status, pain, and both patient‐ and physician‐reported global ratings of change.

INTRODUCTION

Measuring function with valid patient‐reported outcome measures (PROMs) is essential to cancer and rehabilitation research. PROMs help us to assess the effectiveness of interventions, understand the extent of disability that a patient is experiencing, and track the long‐term trajectory of function. Valid measures that respond to changes in patient status are especially crucial in guiding clinical decision‐making and informing research outcomes. 1 , 2 , 3

It is critically important to accurately measure function in cancer care for numerous reasons. First, the decision to prescribe potentially toxic antineoplastic therapy relies heavily on a patient's physical and cognitive function to ensure that the patient can tolerate treatment. Second, multiple guidelines have called for ways to measure function to direct treatment decision‐making, to evaluate patients' impairments, and for end points of clinical trials. 4 , 5 , 6 , 7 , 8 Finally, measuring function in patients with a history of cancer helps to inform clinical decision‐making for rehabilitation and resource utilization. Understanding the responsiveness of a measure, including the use of anchors to indicate change, is essential to ensure that change is detected.

Unfortunately, function is inconsistently measured in patients with cancer, and PROMs often measure health‐related quality of life and not actual physical and/or cognitive function. 9 Measures that purport to measure function often include items that are not directly linked to function and are not obviously modifiable with rehabilitation, including a patient's religious beliefs, his or her family's acceptance of the diagnosis, nausea, and more. 10 , 11

To improve the evaluation of function in patients with cancer, the Cancer Rehabilitation Medicine Metrics Consortium developed the Patient‐Reported Outcomes Measurement Information System (PROMIS) Cancer Function Brief 3D Profile, a composite of three short forms that evaluate the primary domains of gross and upper extremity motor function, fatigue, and social participation as well as subdomains, including cognition, fine motor skills, and more. 12 Scores generated by the PROMIS Cancer Function Brief 3D Profile are representative of patients with cancer across all tumor types and stages and multiple trait ranges, including disease severity and the presence of active disease versus no evidence of disease. 13

Although this PROM represents a true measure of cancer patient function using item response theory–calibrated items, additional analyses of the comparative responsiveness of the measure are needed to further support its use in informing clinical decision‐making, research outcomes, and longitudinal studies. This article reports the results of a test of the measure's responsiveness to change, longitudinal construct validity, and relation to anchor variables.

MATERIALS AND METHODS

Patients treated in outpatient cancer rehabilitation clinics at five tertiary academic medical centers were recruited via a convenience sample to test the validity of the PROMIS Cancer Function Brief 3D Profile (Table S1). There were no restrictions in terms of demographic characteristics, tumor type or stage, or whether the patients were actively receiving or had previously received antineoplastic treatment. To be included, patients had to be 18 years old or older and had to either have sufficient English proficiency to complete the questionnaire (written at an approximately eighth‐grade reading level) or have a caregiver present who could help them to complete the items.

Patients who presented for follow‐up visits and filled out the PROM at both time points were included in this analysis to evaluate changes between visits 1 and 2. Additional follow‐up visits were not included in the analysis because of decreasing numbers of patients who were evaluated three or more times. There were no required time points for the follow‐up assessment, as subsequent visits were scheduled on the basis of clinical need as determined by the rehabilitation physician. Responses were recorded on either paper or a tablet according to the resources available at the study performance site, and data were entered into a REDCap database (copyright Vanderbilt University). Incomplete data were excluded from analysis. Internal ethical review board approval was obtained at each performance site. The University of Michigan was the coordinating center and was responsible for data management.

Patient‐reported assessments

Patients who consented to the study completed the PROMIS Cancer Function Brief 3D Profile, a 0–10 numeric pain rating scale (NRS), and a global rating of change scale (GRS) at follow‐up visits. Specifically, they responded to "Since my last visit here, I am…" on a 5‐point Likert scale with the options "a lot worse," "a little worse," "no change," "a little better," and "a lot better." One of the six performance sites for the initial validation study of the measure did not record the GRS, so no data from that site were included in this study.

Clinician‐reported assessments

The cancer rehabilitation physicians treating the patients also completed a GRS, responding to "Since this patient's last visit here, he/she is…" with the same response options as the patient GRS. For each patient, the same physician completed the assessment at both visits to ensure the validity of the clinician‐perceived global rating of change. Physicians also input clinical information about the patient at each visit; the Karnofsky Performance Scale (KPS) was assessed by the rehabilitation physicians at the time of the visit. 14

Statistical analysis

Longitudinal construct validity

Multiple hypotheses were tested for longitudinal construct validity; at least 75% of the hypotheses had to be confirmed to establish sufficient construct validity and responsiveness (Table 1). 15

TABLE 1.

Responsiveness and Longitudinal Construct Validity Hypotheses

Function will change in line with the patient‐reported global rating of change.
Function will change in line with the clinician‐reported global rating of change.
There will be a negligible effect size for patients who report no change.
Function will improve as the performance status improves.
Function will improve as pain decreases.

Pearson correlation coefficients were calculated to determine the relationships of the KPS and the NRS with PROMIS Cancer Function Brief 3D Profile scores in each of the three domains. A threshold of |R| > 0.3 indicated a moderate correlation, and |R| > 0.5 indicated a strong correlation.
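As an illustration only (this is not the study's analysis code), the correlation step might be computed as follows; the paired change scores and variable names are hypothetical, and SciPy is assumed to be available.

```python
# Minimal sketch (hypothetical data): correlate the change in an anchor (e.g., KPS)
# with the change in a PROMIS domain T score, then label the strength using the
# |R| > 0.3 (moderate) and |R| > 0.5 (strong) thresholds described in the text.
from scipy.stats import pearsonr

kps_change = [10, 0, -10, 5, 0, -5, 10, 0]                        # visit 2 minus visit 1
physical_function_change = [2.1, 0.3, -1.8, 1.0, -0.2, -0.9, 2.5, 0.1]

r, p_value = pearsonr(kps_change, physical_function_change)

if abs(r) > 0.5:
    strength = "strong"
elif abs(r) > 0.3:
    strength = "moderate"
else:
    strength = "weak"

print(f"Pearson R = {r:.2f} (p = {p_value:.3f}); {strength} correlation")
```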

Responsiveness

There is no consensus on a single approach to assessing a PROM's responsiveness, but many psychometricians recommend combining anchor‐based and distribution‐based methods, an approach borne out in numerous studies, including studies of PROMs that evaluate patients with cancer and are based on item response theory. 16 , 17 , 18 , 19 , 20 , 21 , 22 These studies used patient‐reported global ratings of change as an anchor as well as clinical factors that may change over time (e.g., performance status and pain). Additionally, there are several possible statistical approaches to evaluating responsiveness, with no gold standard. 2

In this study, responsiveness was measured in multiple ways. Mean changes in scores and standard deviations (SDs) were calculated for patient and clinician change ratings on each of the three short forms. For example, the mean change and SD were calculated for every patient who completed the physical function short form and indicated that they were "a lot worse" since their last visit. Because not every patient completed all three short forms, the number of responses differed slightly among the three domains. Changes in function were analyzed against the clinician‐rated and patient‐reported GRS and against changes in the KPS and NRS recorded at both visits. A change of 2 points on the NRS was the cutoff for defining a change in pain; this cutoff was based on multiple prior studies finding that the minimal clinically important difference on the NRS ranged from 1.5 to 2.2. 23 , 24 , 25

Furthermore, effect size was measured in two ways. Guyatt's responsiveness statistic (RS) was calculated by dividing the mean change in each group by the SD of the group that indicated no change. 26 The standardized response mean (SRM), another index used to gauge the responsiveness of scales to clinical change, was calculated by dividing the mean change by the SD of the change. Two responsiveness indices were used to ensure the accuracy of the results. Both indices were calculated for the patient‐ and clinician‐reported global rating of change groups. Scores of 0.2, 0.5, and 0.8 were used as cut points for small, moderate, and large responsiveness, respectively, for the RS index. The SRM effect size was deemed small from 0.20 to 0.49, moderate from 0.50 to 0.79, and large above 0.80. 27 , 28
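Both indices reduce to simple ratios of mean change to an SD. The sketch below, using invented change scores grouped by the patient‐reported GRS, also illustrates the per‐group mean and SD calculation described in the preceding paragraph; it is a minimal sketch, not the study's analysis code.

```python
# Sketch of the effect-size indices described above, on hypothetical change scores
# grouped by GRS rating. SRM = mean change / SD of change within a group;
# Guyatt's RS = mean change / SD of change in the "no change" group.
import numpy as np

changes_by_grs = {                                   # hypothetical T-score changes
    "a lot worse": np.array([-4.0, -2.5, -3.1, -1.9]),
    "no change": np.array([0.5, -0.8, 1.1, -0.3, 0.2]),
    "a lot better": np.array([2.2, 0.9, 3.4, 1.6]),
}

sd_no_change = changes_by_grs["no change"].std(ddof=1)   # denominator for RS

for label, changes in changes_by_grs.items():
    mean_change = changes.mean()
    srm = mean_change / changes.std(ddof=1)
    rs = mean_change / sd_no_change
    print(f"{label:>14}: mean = {mean_change:+.2f}, SRM = {srm:+.2f}, RS = {rs:+.2f}")
```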

Interpretability and measurement error

To augment the responsiveness data and to make the PROM more usable, multiple approaches were used to evaluate the interpretability of the PROMIS Cancer Function Brief 3D Profile. First, the minimal important change, defined as the smallest change perceived to be important, was calculated as the mean change among participants who rated their function as "a little worse" or "a little better." Next, detectable change, defined as the amount of change not attributable to measurement error within a 95% confidence interval, was calculated by multiplying the standard error of measurement (SEM) by 1.96 and by the square root of 2. 29 This formula has been used in prior studies evaluating the responsiveness of similar PROMs. 30 , 31

The SEM was calculated to estimate how far an observed score may fall from a person's true score in each of the three domains, using the following formula:

SEM = √[(SD₁² + SD₂²) / n], where n is the number of responses. 32
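The SEM and detectable‐change calculations can be illustrated with a short sketch. The visit SDs and sample size below are hypothetical; the only published values used are the physical function SEM and DC95 from Table 5, as an approximate cross‐check.

```python
# Sketch of the SEM and DC95 calculations quoted above. SD1 and SD2 are the score
# SDs at visits 1 and 2, and n is the number of paired responses (values below are
# hypothetical).
import math

def sem(sd1: float, sd2: float, n: int) -> float:
    """Standard error of measurement: sqrt((SD1^2 + SD2^2) / n)."""
    return math.sqrt((sd1 ** 2 + sd2 ** 2) / n)

def dc95(sem_value: float) -> float:
    """Detectable change with 95% confidence: SEM * 1.96 * sqrt(2)."""
    return sem_value * 1.96 * math.sqrt(2)

example_sem = sem(sd1=8.1, sd2=8.4, n=52)
print(f"SEM = {example_sem:.2f}, DC95 = {dc95(example_sem):.2f}")

# Approximate cross-check against Table 5 (physical function): the published SEM of
# 2.30 gives 2.30 * 1.96 * sqrt(2) ≈ 6.4, close to the reported DC95 of 6.39; the
# small gap reflects rounding of the published SEM.
print(f"DC95 from published SEM 2.30 ≈ {dc95(2.30):.2f}")
```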

Finally, calculations were performed to determine whether a floor/ceiling effect was present by assessing how many patients scored the highest (ceiling) or lowest (floor) possible score in each domain during either visit. A cutoff of ≥15% was used to determine whether a floor or ceiling effect was present. 15
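The floor/ceiling check is a simple proportion count; a minimal sketch follows, with hypothetical scores and hypothetical minimum/maximum values (the actual score bounds of each short form are not restated here).

```python
# Sketch of the floor/ceiling check described above: flag an effect if >= 15% of
# patients hit the lowest (floor) or highest (ceiling) possible score at either visit.
# Scores and score bounds below are hypothetical.
def floor_ceiling(scores, min_score, max_score, threshold=0.15):
    n = len(scores)
    floor_prop = sum(s == min_score for s in scores) / n
    ceiling_prop = sum(s == max_score for s in scores) / n
    return {
        "floor_%": round(100 * floor_prop, 1),
        "ceiling_%": round(100 * ceiling_prop, 1),
        "floor_effect": floor_prop >= threshold,
        "ceiling_effect": ceiling_prop >= threshold,
    }

scores_either_visit = [34, 41, 55, 62, 47, 21, 38, 62, 50, 45]
print(floor_ceiling(scores_either_visit, min_score=21, max_score=62))
```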

RESULTS

Patient characteristics

Demographic and clinical data for the cohort, including disease group, age, and gender, are included in Table S2, which presents a comparison with the original validation study of the PROMIS Cancer Function Brief 3D Profile. 13 Two hundred nine patients had a follow‐up assessment across the five performance sites during the study duration.

Longitudinal construct validity and responsiveness

No hypotheses regarding longitudinal construct validity were rejected (Table 1). Both patient and clinician perceptions of change, as measured on the five‐point GRS, aligned with changes in the PROMIS Cancer Function Brief 3D Profile physical function, fatigue, and social participation scores (Table 2). When the GRS was collapsed into three categories (worse, same, and better), the correspondence was even stronger (Table 3). Patients and clinicians tended to perceive improvement between visits, with more than 50% selecting either "a little better" or "a lot better" relative to the previous visit.

TABLE 2.

Score Changes Between Assessments 1 and 2 by Patient and Physician Change Ratings (five levels)

Lot worse Little worse No change Little better Lot better
Physical function
Patient N 12 27 45 71 28
Mean −1.08 −1.97 −0.53 0.85 0.97
SD 2.48 2.04 2.27 2.67 3.31
CI 1.40 0.77 0.66 0.62 1.23
SRM −0.44 −0.97 −0.23 0.32 0.29
RS −0.48 −0.87 −0.23 0.37 0.42
Physician N 8 29 50 70 26
Mean −2.88 −0.93 0.12 0.71 1.42
SD 3.23 3.17 2.73 3.96 3.43
CI 2.24 1.15 0.76 0.93 1.23
SRM −0.89 −0.29 0.04 0.18 0.41
RS −1.05 −0.34 0.04 0.26 0.52
Fatigue
Patient N 13 28 46 71 29
Mean 1.23 0.75 0.35 −0.55 −1.28
SD 2.39 2.05 2.37 2.67 3.26
CI 1.30 0.76 0.68 0.62 1.19
SRM 0.51 0.37 0.15 −0.21 −0.39
RS 0.51 0.32 0.15 −0.23 −0.54
Physician N 10 29 50 71 27
Mean 0.80 1.59 0.22 −0.66 −1.52
SD 1.23 2.82 2.42 2.27 3.27
CI 0.76 1.03 0.67 0.53 1.23
SRM 0.65 0.56 0.09 −0.29 −0.46
RS 0.33 0.66 0.09 −0.27 −0.63
Social participation
Patient N 12 27 42 68 28
Mean −1.25 −0.19 0.10 0.60 1.93
SD 1.14 2.24 2.02 1.97 2.80
CI 0.65 0.84 0.61 0.47 1.04
SRM −1.10 −0.08 0.05 0.30 0.46
RS −0.62 −0.09 0.05 0.30 0.96
Physician N 10 28 48 66 25
Mean −0.20 −1.00 0.52 0.64 1.68
SD 1.48 1.94 1.95 2.30 2.50
CI 0.92 0.72 0.55 0.55 0.98
SRM −0.14 −0.52 0.27 0.28 0.67
RS −0.10 −0.51 0.27 0.33 0.86

Note: Columns indicate global rating of change scores, which reflect a patient's overall perception of change (not specific to each domain).

Abbreviations: CI, 95% confidence interval of the mean; RS, Guyatt's responsiveness statistic; SD, standard deviation; SRM, standardized response mean.

TABLE 3.

Score Changes Between Assessments 1 and 2 by Patient and Physician Change Ratings (three levels)

Worse No change Better
Physical function
Patient N 39 45 99
Mean −1.69 −0.53 0.96
SD 3.61 2.54 2.96
CI 1.13 0.66 0.58
SRM −0.47 −0.21 0.32
RS −0.67 −0.21 0.38
Physician N 37 50 96
Mean −1.35 0.12 0.91
SD 3.24 2.73 3.33
CI 1.02 0.76 0.67
SRM −0.42 0.04 0.27
RS −0.49 0.04 0.33
Fatigue
Patient N 41 46 100
Mean 0.90 0.35 −0.76
SD 2.14 2.37 2.85
CI 0.66 0.68 0.59
SRM 0.42 0.15 −0.27
RS 0.38 0.15 −0.32
Physician N 39 50 98
Mean 1.38 0.22 −0.90
SD 2.52 2.42 2.60
CI 0.79 0.67 0.51
SRM 0.55 0.09 −0.35
RS 0.57 0.09 −0.37
Social participation
Patient N 39 42 96
Mean −0.51 0.10 0.99
SD 2.01 2.02 2.31
CI 0.63 0.61 0.46
SRM −0.25 0.05 0.43
RS −0.25 0.05 0.49
Physician N 38 48 91
Mean −0.79 0.52 0.92
SD 1.85 1.95 2.39
CI 0.59 0.55 0.49
SRM −0.43 0.27 0.38
RS −0.41 0.27 0.47

Note: Columns indicate global rating of change scores, which reflect a patient's overall perception of change (not specific to each domain).

Abbreviations: CI, 95% confidence interval of the mean; RS, Guyatt's responsiveness statistic; SD, standard deviation; SRM, standardized response mean.

Effect sizes, determined by RS and SRM, were small to moderate. Larger perceived changes (higher or lower function) had larger effect sizes than “a little” change as rated by both patients and clinicians. Similarly, mean changes in function were generally greater in magnitude when patients and clinicians selected “a lot better” or “a lot worse” for the GRS scales (Table 2). The mean change in score was higher when the GRS was condensed into three categories (worse, same, and better), but the effect sizes remained small to moderate (Table 3).

Scores on the PROMIS Cancer Function Brief 3D Profile responded appropriately as KPS scores changed: As the performance status declined or improved, so too did the mean score of the assessment in each of the three domains. Approximately half of the patients had no change in the KPS between visits, and those whose KPS scores declined or improved were split roughly evenly. The absolute values of the Pearson correlation coefficients were all greater than 0.3, with physical function having the strongest correlation with the KPS (Table 4).

TABLE 4.

Score Changes by the Magnitude of KPS and Pain Scale Changes Between Assessments 1 and 2

Karnofsky change Pearson R
<−5 Between −5 and 5 >5
Physical function 0.68
N 44 93 46
Mean −0.20 0.23 1.80
SD 3.21 3.26 3.40
Fatigue −0.32
N 40 99 48
Mean 0.80 −0.12 −1.14
SD 2.57 2.68 2.87
Social participation 0.49
N 40 94 43
Mean −0.10 0.45 1.56
SD 2.38 2.26 2.41
NRS change Pearson R
>1 Between −1 and 1 <−1
Physical function −0.32
N 22 115 44
Mean −1.95 0.23 1.95
SD 3.66 3.26 2.93
Fatigue 0.44
N 22 114 45
Mean 1.14 0.05 −1.08
SD 1.81 2.55 2.94
Social participation −0.36
N 19 109 45
Mean −0.89 0.69 1.09
SD 2.18 2.06 2.30

Abbreviations: NRS, numeric pain rating scale; KPS, Karnofsky Performance Scale; SD, standard deviation.

Changes in function also responded to the NRS anchor; as patients reported higher levels of pain, function declined across all three domains (Table 4). Conversely, reduced pain was associated with improved mean function T scores (including reduced fatigue). Patients whose pain scores changed by 0–1 points (not reaching the threshold of clinical significance), who represented more than half of the sample, had relatively unchanged function scores across all three domains. The Pearson correlation coefficients all showed an association between pain and function, although it was not as strong as the association between the KPS and function.

Interpretability

Values for the minimal important change ranged from −1.91 to 0.85 and are included in Table 5. Detectable change with 95% confidence ranged from 6.31 to 7.42 across the three domains. No floor or ceiling effect was found in any domain, as the proportion of maximum and minimum scores was well below 15% in each domain. Values for the SEM ranged from 1.61 to 2.30 across the domains.

TABLE 5.

MIC Scores, Detectable Changes, and Floor/Ceiling Effect Measurements

Domain MIC, self‐reported decline (SD) MIC, self‐reported improvement (SD) DC95 SEM Floor/ceiling, %
Physical function −1.91 (2.04) 0.85 (2.67) 6.39 2.30 0.82/2.46
Fatigue 0.75 (2.05) −0.55 (2.67) 7.42 1.89 2.67/8.29
Social participation −0.19 (2.24) 0.60 (1.97) 6.31 1.61 7.06/4.24

Abbreviations: DC95, reliable change score with 95% confidence; MIC, minimal important change; SD, standard deviation; SEM, standard error of measurement.

DISCUSSION

Using patient and provider global rating of change data, we found that the PROMIS Cancer Function Brief 3D Profile was responsive to changes in function across all domains in patients with cancer. As expected, function changed in line with both the patient‐ and clinician‐reported GRS, and negligible effect sizes were seen in patients who reported no change. Additionally, scores changed appropriately when the anchors of performance status and pain rating changed. The results confirmed the authors' hypotheses regarding responsiveness.

Effect sizes were typically moderate or small, consistent with other studies evaluating PROMs in patients with cancer. 21 , 33 Furthermore, our results are consistent with prior studies finding that patients with cancer tend to require less of a change to report improvement than to report worsening. 30 , 34 These results were not surprising in light of the numerous factors that contribute to functional decline in patients with cancer; for example, rehabilitation interventions may not improve function in a patient who experiences disease progression between assessment time points. However, large effect sizes were seen in patients who reported more significant change, and this further validated the PROM as a measure of function. The larger effect sizes with higher perceived rates of change potentially suggest that patients without disease‐related factors may improve significantly on this PROM, whereas patients with advancing disease and/or new treatments may not respond as well to rehabilitation interventions. The lack of strict follow‐up time points and of a controlled clinical trial environment contributed to wide patient variability; this may have reduced the effect sizes but also makes the assessment more applicable to real‐world use.

Both the KPS and the NRS served as suitable anchor variables for the measure, with the KPS understandably having a stronger association with function than the NRS. The stronger relationship between function and performance status, in comparison with pain, is likely due to the fact that the KPS is a clinician‐rated assessment of physical and cognitive health, which is closely linked to function. Additionally, not every patient needing rehabilitation has significant pain; for example, a hemiparetic patient with a glioblastoma may report no pain but have severe limitations in function. It is likely that patients with pain have reduced function (and vice versa), and this link should be explored in more depth. These findings support using this PROM to complement clinician‐rated assessments of function. Used together, they may lead to a better understanding of the effectiveness of interventions or a patient's ability to tolerate cancer treatments.

Although the findings support the use of the PROMIS Cancer Function Brief 3D Profile as a measure of function in patients with cancer, there are limitations. First, the lack of consistent assessment intervals precluded reliability testing; further work is needed in this regard. It is worth noting, however, that this was a real‐world assessment of patients, as they were followed up on the basis of clinical need rather than arbitrary research time points. Second, further testing of construct validity, including against legacy PROMs or a symptom assessment tool, would bolster the case for using this instrument. Comparing this assessment with clinical measures such as balance tests or the 6‐min walk test may also provide insight into the best ways to record function in this population. Third, the study design did not blind physician assessors to patient GRS scores, so it is possible that physicians sometimes saw patient‐reported scores before they input their own. Physicians who used tablets to enter their responses, however, could not see patient scores, and this represented well over half of the study sample. Finally, the nature of cancer is such that many disease‐related factors may have contributed to changes in function, especially worsening. In this sample, it is not clear how much these factors contributed to PROM score changes; however, the measure reflected changes in function appropriately.

In conclusion, the PROMIS Cancer Function Brief 3D Profile has sufficient longitudinal construct validity and responsiveness. This, coupled with prior validity testing, supports its use as a measure of function in patients with cancer. Investigation into its reliability is an area of future study.

AUTHOR CONTRIBUTIONS

Sean R. Smith: Conceptualization, methodology, formal analysis, investigation, writing, visualization, supervision, and funding acquisition. Mary Vargo: Conceptualization, methodology, investigation, and writing. David S. Zucker: Conceptualization, methodology, investigation, and writing. Samman Shahpar: Conceptualization, methodology, investigation, and writing. Lynn H. Gerber: Conceptualization, methodology, and writing. Maryanne Henderson: Conceptualization, methodology, investigation, and writing. Gina Jay: Validation, data curation, writing, and project administration. Andrea L. Cheville: Conceptualization, methodology, formal analysis, writing, and visualization.

Funding information

Foundation for Physical Medicine and Rehabilitation.

CONFLICTS OF INTEREST

Mary Vargo reports that her institution has been paid for her expert testimony, but she has not personally been paid for it; she also serves on an advisory board for a randomized controlled trial of massage therapy for breast cancer survivors and serves as a cochair of Cancer Rehabilitation Medicine for the American Academy of Physical Medicine and Rehabilitation. The other authors made no disclosures.

Supporting information

Appendix XXX

ACKNOWLEDGMENT

This study was funded in part by a grant from the Foundation for Physical Medicine and Rehabilitation.

See editorial on pages 3155‐3157, this issue.

REFERENCES

  • 1. Weiss DJ, Wang C, Cheville AL, Basford JR, DeWeese J. Adaptive measurement of change: a novel method to reduce respondent burden and detect significant individual-level change in patient-reported outcome measures. Arch Phys Med Rehabil. 2022;103(5 suppl):S43-S52.
  • 2. Liang MH. Longitudinal construct validity: establishment of clinical meaning in patient evaluative instruments. Med Care. 2000;38(9 suppl II):II-84-II-90.
  • 3. Revicki DA, Cella D, Hays RD, Sloan JA, Lenderking WR, Aaronson NK. Responsiveness and minimal important differences for patient reported outcomes. Health Qual Life Outcomes. 2006;4(1):1-5.
  • 4. Simoff MJ, Lally B, Slade MG, et al. Symptom management in patients with lung cancer: diagnosis and management of lung cancer: American College of Chest Physicians evidence-based clinical practice guidelines. Chest. 2013;143(5):e455S-e497S.
  • 5. Swarm RA, Paice JA, Anghelescu DL, et al. Adult cancer pain, version 3.2019, NCCN Clinical Practice Guidelines in Oncology. J Natl Compr Canc Netw. 2019;17(8):977-1007.
  • 6. Denlinger CS, Sanft T, Baker KS, et al. Survivorship, version 2.2018, NCCN Clinical Practice Guidelines in Oncology. J Natl Compr Canc Netw. 2018;16(10):1216-1247.
  • 7. Stout NL, Santa Mina D, Lyons KD, Robb K, Silver JK. A systematic review of rehabilitation and exercise recommendations in oncology guidelines. CA Cancer J Clin. 2021;71(2):149-175.
  • 8. Commission on Cancer. Optimal Resources for Cancer Care: 2020 Standards. American College of Surgeons; published January 2020. Accessed December 14, 2021. https://www.facs.org/quality-programs/cancer/coc/standards/2020
  • 9. Harrington SE, Stout NL, Hile E, et al. Cancer rehabilitation publications (2008-2018) with a focus on physical function: a scoping review. Phys Ther. 2020;100(3):363-415.
  • 10. Brucker PS, Yost K, Cashy J, Webster K, Cella D. General population and cancer patient norms for the Functional Assessment of Cancer Therapy-General (FACT-G). Eval Health Prof. 2005;28(2):192-211.
  • 11. Maldonado E, Thalla N, Nepaul S, Wisotzky E. Outcome measures in cancer rehabilitation: pain, function, and symptom assessment. Front Pain Res (Lausanne). 2021;2:692237.
  • 12. Smith SR, Vargo M, Zucker DS, et al. The Cancer Rehabilitation Medicine Metrics Consortium: a path to enhanced, multi-site outcome assessment to enhance care and demonstrate value. Front Oncol. 2020;10:625700.
  • 13. Smith SR, Vargo M, Zucker D, Shahpar S, Gerber L, Henderson M, Jay G, Lee M, Cheville A. Psychometric characteristics and validity of the PROMIS Cancer Function Brief 3D Profile. Arch Phys Med Rehabil. 2022;103(5 suppl):S146-S161.
  • 14. Schag CC, Heinrich RL, Ganz PA. Karnofsky performance status revisited: reliability, validity, and guidelines. J Clin Oncol. 1984;2(3):187-193.
  • 15. Terwee CB, Bot SD, de Boer MR, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60(1):34-42.
  • 16. Krebs EE, Bair MJ, Damush TM, Tu W, Wu J, Kroenke K. Comparative responsiveness of pain outcome measures among primary care patients with musculoskeletal pain. Med Care. 2010;48(11):1007-1014.
  • 17. Dworkin RH, Turk DC, Wyrwich KW, et al. Interpreting the clinical importance of treatment outcomes in chronic pain clinical trials: IMMPACT recommendations. J Pain. 2008;9(2):105-121.
  • 18. Revicki D, Hays RD, Cella D, Sloan J. Recommended methods for determining responsiveness and minimally important differences for patient-reported outcomes. J Clin Epidemiol. 2008;61(2):102-109.
  • 19. Lehman LA, Velozo CA. Ability to detect change in patient function: responsiveness designs and methods of calculation. J Hand Ther. 2010;23(4):361-371.
  • 20. Jensen RE, Moinpour CM, Potosky AL, et al. Responsiveness of 8 Patient-Reported Outcomes Measurement Information System (PROMIS) measures in a large, community-based cancer study cohort. Cancer. 2017;123(2):327-335.
  • 21. Hays RD, Spritzer KL, Fries JF, Krishnan E. Responsiveness and minimally important difference for the Patient-Reported Outcomes Measurement Information System (PROMIS) 20-item physical functioning short form in a prospective observational study of rheumatoid arthritis. Ann Rheum Dis. 2015;74(1):104-107.
  • 22. Yost KJ, Eton DT, Garcia SF, Cella D. Minimally important differences were estimated for six Patient-Reported Outcomes Measurement Information System-Cancer scales in advanced-stage cancer patients. J Clin Epidemiol. 2011;64(5):507-516.
  • 23. Michener LA, Snyder AR, Leggin BG. Responsiveness of the numeric pain rating scale in patients with shoulder pain and the effect of surgical status. J Sport Rehabil. 2011;20(1):115-128.
  • 24. Childs JD, Piva SR, Fritz JM. Responsiveness of the numeric pain rating scale in patients with low back pain. Spine. 2005;30(11):1331-1334.
  • 25. Young IA, Dunning J, Butts R, Mourad F, Cleland JA. Reliability, construct validity, and responsiveness of the neck disability index and numeric pain rating scale in patients with mechanical neck pain without upper extremity symptoms. Physiother Theory Pract. 2019;35(12):1328-1335.
  • 26. Guyatt G, Walter S, Norman G. Measuring change over time: assessing the usefulness of evaluative instruments. J Chronic Dis. 1987;40(2):171-178.
  • 27. Stucki G, Liang MH, Fossel AH, Katz JN. Relative responsiveness of condition-specific and generic health status measures in degenerative lumbar spinal stenosis. J Clin Epidemiol. 1995;48:1369-1378.
  • 28. Husted JA, Cook RJ, Farewell VT, Gladman DD. Methods for assessing responsiveness: a critical review and recommendations. J Clin Epidemiol. 2000;53:459-468.
  • 29. Beaton DE, Bombardier C, Katz JN, et al. Looking for important change/differences in studies of responsiveness. OMERACT MCID Working Group. Outcome Measures in Rheumatology. Minimal clinically important difference. J Rheumatol. 2001;28(2):400-405.
  • 30. Carlozzi NE, Boileau NR, Chou KL, et al. HDQLIFE and Neuro-QoL physical function measures: responsiveness in persons with Huntington's disease. Mov Disord. 2020;35(2):326-336.
  • 31. Smit EB, Bouwstra H, Roorda LD, et al. A Patient-Reported Outcomes Measurement Information System short form for measuring physical function during geriatric rehabilitation: test-retest reliability, construct validity, responsiveness, and interpretability. J Am Med Dir Assoc. 2021;22(8):1627-1632.e1.
  • 32. Wyrwich KW, Tierney WM, Wolinsky FD. Further evidence supporting an SEM-based criterion for identifying meaningful intra-individual changes in health-related quality of life. J Clin Epidemiol. 1999;52(9):861-873.
  • 33. Cella D, Hahn EA, Dineen K. Meaningful change in cancer-specific quality of life scores: differences between improvement and worsening. Qual Life Res. 2002;11(3):207-221.
  • 34. Cheville AL, Yost KJ, Larson DR, et al. Performance of an item response theory-based computer adaptive test in identifying functional decline. Arch Phys Med Rehabil. 2012;93(7):1153-1160.
