Dear Editor
We would like to thank Johnson et al. for their thoughtful comments. We consider the use of the Patient’s Global Impression (PGI) as an anchor to be a major strength of this study.[1] By asking patients “How is your symptom over the last 24 hours compared to your last visit?”, with response options of “better” (ranging from “much better” to “a little better”), “about the same”, or “worse” (ranging from “much worse” to “a little worse”), we identified a patient-reported change that has clinical relevance for the individuals being treated. Validation studies have found that the PGI is sensitive to change and correlates with patient satisfaction.[2] It is therefore widely accepted as an anchor for determining the minimal clinically important difference (MCID).[3–5]
We selected the sensitivity-specificity approach as our primary method of analysis because it provides the sensitivities and specificities associated with each cutoff, giving us a better understanding of a cutoff’s performance. By definition, this approach requires a pre-specified binary cutoff for the gold standard (e.g. any PGI rating of at least “a little better” was considered improvement). Binary cutoffs are justifiable provided that they are clinically relevant, which was the case in our study.[6] Reassuringly, the MCIDs estimated by a different anchor-based method (i.e. within-patient changes) were generally consistent with the sensitivity-specificity analyses.
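For readers interested in the mechanics of the sensitivity-specificity approach, the sketch below illustrates how a symptom-score change cutoff can be evaluated against a dichotomized PGI anchor. It is illustrative only: the data are hypothetical, and the use of an ROC curve with the Youden index is a common convention rather than a description of our published analysis.

```python
# Illustrative sketch only (hypothetical data; not the authors' dataset or code).
# Candidate cutoffs for symptom-score change are evaluated against the
# dichotomized PGI anchor via an ROC curve; the Youden index is one common
# way to balance sensitivity and specificity when selecting a cutoff.
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical change scores (follow-up minus baseline, 0-10 scale) and the
# dichotomized anchor (1 = PGI at least "a little better", 0 = otherwise).
score_change = np.array([-3, -2, -1, -1, 0, 0, 1, -2, 0, -1, 1, -4])
pgi_improved = np.array([ 1,  1,  1,  0, 0, 0, 0,  1, 1,  1, 0,  1])

# Negate the change so that larger values correspond to improvement.
fpr, tpr, thresholds = roc_curve(pgi_improved, -score_change)

youden = tpr - fpr                     # sensitivity + specificity - 1
best = int(np.argmax(youden))
print(f"optimal cutoff: decrease of >= {thresholds[best]:.0f} point(s); "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```

With these made-up values the Youden-optimal cutoff happens to be a 1-point decrease, in line with the MCID reported in the study, but the numbers serve only to show the mechanics of trading off sensitivity against specificity.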
The subtle difference between “minimal clinically important difference” and “minimal clinically detectable difference” lies in the method of analysis: anchor-based approaches that employ clinically meaningful gold standards address the former, whereas distribution-based approaches address the latter. As nicely stated by McGlothlin and Lewis, “distribution-based methods… can only identify a minimal detectable effect: that is, an effect that is unlikely to be attributable to random measurement error.”[7] Thus, our primary analysis to identify the MCID for the ESAS was based on a well-accepted anchor-based approach.[6] We also employed several distribution-based approaches, including the standard error of measurement, as part of our sensitivity analyses. We were also encouraged that other investigators, such as Johnson et al., working with different patient populations have reported consistent cutoffs.
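For reference, one standard formulation of the distribution-based quantities mentioned above is (the choice of reliability estimate, e.g. test-retest correlation or Cronbach’s alpha, is an assumption here rather than a detail of our analyses):

$$\mathrm{SEM} = SD_{\text{baseline}}\sqrt{1 - r}, \qquad \mathrm{MDC}_{95} = 1.96\,\sqrt{2}\,\mathrm{SEM},$$

where $r$ is the reliability coefficient. The MDC95 is the smallest change unlikely to be attributable to measurement error, which is precisely the “minimal detectable” quantity distinguished from the MCID above.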
Importantly, our study represents one of the few prospective studies specifically powered to examine the MCID. Because we examined 0–10 numeric rating scales for 10 different symptoms, our findings are simple yet powerful: a change of 1 point is clinically significant for both improvement and deterioration and is applicable to all 10 symptoms.
It should be noted that MCIDs have limitations: any cutoff represents a delicate balance between sensitivity and specificity, and there will always be false positives and false negatives. Thus, MCIDs remain most useful in the research setting for power calculations and response determination, rather than in the clinical setting for deciding whether an individual has improved or deteriorated and whether the treatment plan should change. To overcome these challenges, we recently examined the concept of the “Personalized Symptom Goal”, which asks patients “At what level would you feel comfortable with this symptom?” using the same 0–10 numeric rating scale.[8, 9] The use of personalized symptom goals allows us to define a treatment response tailored to the individual patient. Further research is needed to characterize the utility of this novel concept.
Acknowledgments
Funding: This research is supported by the Sister Institution Network Fund. EB is supported in part by National Institutes of Health grants R01NR010162-01A1, R01CA122292-01, and R01CA124481-01. DH is supported in part by an American Cancer Society Mentored Research Scholar Grant in Applied and Clinical Research (MRSG-14-1418-01-CCE) and a National Institutes of Health grant (R21CA186000-01A1).
Footnotes
Disclosure: The authors have declared no conflicts of interest.
References
1. Hui D, Bruera E. Minimal clinically important differences in the Edmonton Symptom Assessment System: the anchor is key. J Pain Symptom Manage. 2013;45:e4–5. doi: 10.1016/j.jpainsymman.2012.12.003.
2. Fischer D, Stewart AL, Bloch DA, et al. Capturing the patient’s view of change as a clinical outcome measure. JAMA. 1999;282:1157–62. doi: 10.1001/jama.282.12.1157.
3. Jaeschke R, Singer J, Guyatt GH. Measurement of health status. Ascertaining the minimal clinically important difference. Control Clin Trials. 1989;10:407–15. doi: 10.1016/0197-2456(89)90005-6.
4. Wyrwich KW, Wolinsky FD. Identifying meaningful intra-individual change standards for health-related quality of life measures. J Eval Clin Pract. 2000;6:39–49. doi: 10.1046/j.1365-2753.2000.00238.x.
5. Turner D, Schunemann HJ, Griffith LE, et al. The minimal detectable change cannot reliably replace the minimal important difference. J Clin Epidemiol. 2010;63:28–36. doi: 10.1016/j.jclinepi.2009.01.024.
6. Hui D, Shamieh O, Paiva C, et al. Minimal clinically important differences in the Edmonton Symptom Assessment Scale in cancer patients: a prospective study. Cancer. 2015 (in press). doi: 10.1002/cncr.29437.
7. McGlothlin AE, Lewis RJ. Minimal clinically important difference: defining what really matters to patients. JAMA. 2014;312:1342–3. doi: 10.1001/jama.2014.13128.
8. Dalal S, Hui D, Nguyen L, et al. Achievement of personalized pain goal in cancer patients referred to a supportive care clinic at a comprehensive cancer center. Cancer. 2012;118:3869–77. doi: 10.1002/cncr.26694.
9. Hui D, Park M, Shamieh O, et al. Personalized Symptom Goals (PSG) in symptom assessment: “At what level would you feel comfortable?”. ASCO Palliative Care in Oncology Symposium; 2015 (accepted).