Hand (New York, N.Y.). 2018 Nov 21;15(3):360–364. doi: 10.1177/1558944718812180

Interpreting Patient-Reported Outcome Results: Is One Minimum Clinically Important Difference Really Enough?

Dylan L McCreary 1, Benjamin C Sandberg 1,2, Debra C Bohn 1,3, Harsh R Parikh 1,2, Brian P Cunningham 1,2,3
PMCID: PMC7225877  PMID: 30461316

Abstract

Background: Patient-reported outcomes (PROs) are the gold standard for reporting clinical outcomes in research. A crucial component of interpreting PROs is the minimum clinically important difference (MCID). The Patient-Rated Wrist Evaluation (PRWE) is a disease-specific PRO tool developed for use in distal radius fractures. The purpose of this study was to determine the influence of injury characteristics, treatment modality, and calculation methodology on the PRWE MCID in distal radius fractures. We hypothesized that the MCID would be significantly influenced by each of these factors. Methods: From 2014 to 2016, 197 patients with a distal radius fracture were treated at a single level I trauma center. Each patient was asked to complete a PRWE survey at preoperative baseline and at 6 and 12 weeks postoperatively. The MCID was derived using 2 distinct strategies, anchor and distribution. The anchor questions comprised an overall health anchor and a mental and emotional health anchor. Patient variables regarding demographics, injury characteristics, and treatment modality were collected. Results: The MCID differed between analytical methods at all time points. The distribution MCID was consistent across the assessed variables, whereas the anchor MCID varied by AO/OTA fracture classification, treatment modality, and time points. Conclusions: Our study found that the MCID was heavily influenced by assessment time points, analytical method, treatment modality, and fracture classification. These results suggest that, to accurately interpret PRO data in clinical trials, an anchor question should be included so that the MCID can be determined for the specific patient population included in the study.

Keywords: outcomes, distal radius fracture, patient-reported outcomes, patient-rated wrist evaluation (PRWE), minimum clinically important difference (MCID)

Introduction

Over the past 25 years, patient-reported outcomes (PROs) have replaced clinical outcome measures as the gold standard for clinical research outcome metrics.1,2 Patient-reported outcomes provide a patient-centric measure of a patient’s general quality of life or outcome regarding a specific disease or anatomic location.3 While PRO scores are commonly analyzed for statistical differences, this type of analysis provides no information about the clinical significance of those differences.3,4 To address the need for a measure of clinical significance of PRO results, the term “minimum clinically important difference” (MCID) was used to define the smallest change in a PRO measure that a patient perceives as beneficial.4

There is currently no standard method for determining the MCID for a PRO tool. Current recommendations are to look for convergence among multiple methods when establishing an MCID.5,6 This approach aligns with the traditional view that the MCID is a single value inherent to each PRO tool.4 However, recent studies have shown that the MCID may vary widely according to patient characteristics, clinical characteristics, or analytical method.7,8 To allow better interpretation of the MCID, and therefore of PRO results, it is necessary to determine which variables affect the determination of an MCID.

The Patient-Rated Wrist Evaluation (PRWE) is a disease-specific PRO tool that was designed for use in patients with distal radius fracture and has been commonly reported in clinical trials.9-13 The PRWE contains questions regarding function and pain; the Function and Pain sections are equally weighted and added together for a total score ranging from 0 to 100, with higher scores indicating greater pain and disability. The MCID for the PRWE in patients with distal radius fracture has been previously defined. However, that study had methodological limitations, such as pooling patients from 3 different prospective trials, which led to large variation in patient follow-up.14 The purpose of this study was to define the MCID for the PRWE using multiple analytical methods to determine whether a single value can be reported. The secondary purpose of this study was to determine whether the time points utilized, patient injury characteristics, treatment modality, analytical method, or anchor questions affect the MCID.

Materials and Methods

Patients with a distal radius fracture treated from 2014 to 2016 were identified from a prospective registry at a single level I trauma center. The registry contained demographics, AO/OTA fracture classification, treatment modality, operative information, PRWE scores, and anchor questions. Inclusion criteria were isolated distal radius fracture, age 18 years or older, and completed PRWE surveys at preoperative baseline and at 6 and 12 weeks postoperatively. Two anchor questions were completed in tandem with the PRWE survey. The first asked patients to rank their overall health (OHA) on a Likert-type 5-point scale, ranging from “poor” to “excellent.” The second asked them to rate their mental and emotional health (MEHA) on the same 5-point scale. The Pearson r correlation was used to determine the relationship between the anchors and the PRWE scores.5
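For illustration, the anchor-adequacy check can be sketched as follows (a minimal sketch with hypothetical data, not the study's analysis code); the convention applied here, consistent with the Results, is that an absolute correlation of at least 0.3 between the anchor and the PRO score supports anchor-based analysis.

```python
# Minimal sketch of the anchor-adequacy check: Pearson r between anchor
# responses and PRWE totals. All data below are hypothetical.
# statistics.correlation requires Python 3.10+.
from statistics import correlation

prwe_scores = [61, 48, 72, 35, 55, 80, 40, 66]   # hypothetical PRWE totals (0-100; higher = worse)
oha_ratings = [2, 3, 1, 4, 3, 1, 4, 2]           # hypothetical 5-point overall health ratings

r = correlation(oha_ratings, prwe_scores)        # expected to be negative: better health, lower PRWE
print(f"Pearson r = {r:.2f}")
if abs(r) >= 0.3:
    print("Anchor is adequately correlated with the PRWE for anchor-based MCID analysis.")
else:
    print("Anchor correlation is weak; a different anchor question may be needed.")
```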

Two derivation strategies were used to determine the MCID. The anchor method derived the average change in PRWE between 2 time points among only those patients who recorded an improvement of at least 1 point on the respective anchor question, OHA or MEHA.15,16 The anchor method was then stratified by the different combinations of the 3 time points. For the distribution method, the standard deviation of all preoperative scores was halved to define the MCID. In the subgroup analyses, the same methods were applied to determine the MCID separately for patients treated operatively, for those treated nonoperatively, and for each AO/OTA fracture classification.
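For illustration, the 2 derivation strategies can be sketched as follows (a minimal sketch with hypothetical patient records and field names, not the study's analysis code):

```python
# Minimal sketch of the 2 MCID derivation strategies described above.
# Anchor method: mean PRWE change between 2 time points among patients whose
# anchor rating improved by >= 1 point. Distribution method: half the standard
# deviation of all baseline (preoperative) scores. Data are hypothetical.
import statistics

patients = [
    {"prwe_baseline": 70, "prwe_6wk": 35, "anchor_baseline": 2, "anchor_6wk": 4},
    {"prwe_baseline": 55, "prwe_6wk": 40, "anchor_baseline": 3, "anchor_6wk": 3},
    {"prwe_baseline": 80, "prwe_6wk": 30, "anchor_baseline": 1, "anchor_6wk": 3},
    {"prwe_baseline": 60, "prwe_6wk": 50, "anchor_baseline": 2, "anchor_6wk": 3},
]

def anchor_mcid(records, score_t1, score_t2, anchor_t1, anchor_t2):
    """Mean PRWE improvement (t1 minus t2, since lower PRWE is better) among
    patients whose anchor rating improved by at least 1 point."""
    changes = [r[score_t1] - r[score_t2]
               for r in records
               if r[anchor_t2] - r[anchor_t1] >= 1]
    return statistics.mean(changes) if changes else None

def distribution_mcid(records, baseline_key):
    """Half the standard deviation of the baseline scores."""
    return statistics.stdev(r[baseline_key] for r in records) / 2

print("Anchor MCID (baseline to 6 wk):",
      anchor_mcid(patients, "prwe_baseline", "prwe_6wk", "anchor_baseline", "anchor_6wk"))
print("Distribution MCID:", distribution_mcid(patients, "prwe_baseline"))
```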

Results

Review of PRWE Surveys

A total of 197 patients were included in the study, of which 149 (76%) were women and 112 (57%) were treated operatively. The average age was 57 ± 17 years. There were 100 patients with AO/OTA classification 23 A fractures (51%), 30 patients with 23 B (15%), and 62 patients with 23 C (31%). No injury films were available for 5 patients (3%). Overall, 91 patients had a complete data set with scores for the preoperative, 6-week follow-up, and 12-week follow-up time points. A total of 67 patients were missing the 12-week follow-up and 39 patients were missing the 6-week follow-up. The average number of questions answered was 14.2 + 0.8/–1.3 (range = 9-15). As laid out in the scoring section of the PRWE User Manual, the score for all missing questions was substituted with the average score of the other questions in that subsection of the PRWE.17 The average PRWE score was 61.6 ± 21.8 preoperatively, 38.2 ± 22.3 at the 6-week follow-up, and 25.6 ± 22.6 at the 12-week follow-up.
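For illustration, this missing-item handling can be sketched as follows (hypothetical responses; the standard PRWE structure of 5 pain items and 10 function items, each scored 0 to 10 with the function sum halved, is assumed here rather than taken from the text):

```python
# Minimal sketch of the missing-item handling described above: any unanswered
# question is replaced by the mean of the answered questions in the same PRWE
# subsection before the subsection is scored. Responses are hypothetical.
def impute_subsection(responses):
    """Replace None entries with the mean of the answered items in this subsection."""
    answered = [v for v in responses if v is not None]
    if not answered:
        raise ValueError("No answered items in subsection; score cannot be computed.")
    mean_answered = sum(answered) / len(answered)
    return [v if v is not None else mean_answered for v in responses]

# Hypothetical responses: one pain item and one function item left blank.
pain_items = [6, 7, None, 5, 8]                    # 5 pain items, each 0-10
function_items = [4, 5, 6, None, 7, 3, 5, 6, 4, 5]  # 10 function items, each 0-10

pain_score = sum(impute_subsection(pain_items))               # pain subscale: 0-50
function_score = sum(impute_subsection(function_items)) / 2   # function subscale: 0-50
total_prwe = pain_score + function_score                      # total: 0-100

print(f"Pain = {pain_score:.1f}, Function = {function_score:.1f}, Total PRWE = {total_prwe:.1f}")
```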

MCID Calculations

Both anchor questions had a correlation coefficient of 0.3 with the overall PRWE score, indicating the anchors were appropriate.5 The MCID did not differ substantially between anchor questions (26.8 ± 24.7 for OHA vs 28.1 ± 22.4 for MEHA using the baseline to 6-week time points). The MCID varied across time points: 26.8 ± 24.7 for baseline to 6 weeks, 42.6 ± 23.2 for baseline to 12 weeks, and 14.6 ± 21.3 for 6 to 12 weeks using the OHA method. The distribution method MCID (10.9) was lower than the anchor method MCID for most anchor and time point combinations (Table 1). The MCID for patients with AO/OTA 23 B fractures (43.7 ± 27.9) was greater than that for patients sustaining either 23 A (17.6 ± 21.2) or 23 C fractures (29.5 ± 21.0), using the OHA method and the baseline to 6-week time points (Table 2). The MCID was larger for patients in the operative group than for those in the nonoperative group (31.8 ± 19.9 vs 20.0 ± 29.4, respectively) when using the OHA between the baseline and 6-week time points (Table 3). Combining all MCID values, the average MCID was 26.0 ± 15.8 (range = 5.5-63.7).

Table 1.

Summary of MCID Values Based Upon Different MCID Calculation Methods, Time Points, and Anchor Questions.

Method | Time points | Distribution | OHA | MEHA
Distribution | Baseline | 10.9 | |
Anchor | Baseline to 6 wk | | 26.8 ± 24.7 | 28.1 ± 22.4
Anchor | Baseline to 12 wk | | 42.6 ± 23.2 | 48.0 ± 22.6
Anchor | 6-12 wk | | 14.6 ± 21.3 | 6.5 ± 26.9

Note. MCID = minimum clinically important difference; OHA = overall health anchor; MEHA = mental and emotional health anchor.

Table 2.

Summary of MCID Values Based Upon Different MCID Calculation Methods and Fracture Patterns.

Method | Time points | 23 A | 23 B | 23 C
Distribution | Baseline | 10.6 | 11.9 | 10.9
OHA | Baseline to 6 wk | 17.6 ± 21.2 | 43.7 ± 27.9 | 29.5 ± 21.0
OHA | Baseline to 12 wk | 39.1 ± 13.2 | 54.1 ± 28.7 | 39.9 ± 27.6
OHA | 6-12 wk | 15.2 ± 18.4 | 13.0 ± N/A | 13.4 ± 36.1
MEHA | Baseline to 6 wk | 23.1 ± 18.5 | 36.9 ± 24.3 | 23.1 ± 22.9
MEHA | Baseline to 12 wk | 44.3 ± 24.2 | 63.7 ± 21.9 | 46.6 ± 20.3
MEHA | 6-12 wk | 15.7 ± 25.1 | 9.7 ± 16.3 | 6.4 ± 28.6

Note. MCID = minimum clinically important difference; OHA = overall health anchor; MEHA = mental and emotional health anchor; N/A = not applicable.

Table 3.

Summary of MCID Values Based Upon Different MCID Calculation Methods, Time Points, and Treatment Methods.

Method | Time points | Operative | Nonoperative
Distribution | Baseline | 10.2 | 11.3
OHA | Baseline to 6 wk | 31.8 ± 19.9 | 20.0 ± 29.4
OHA | Baseline to 12 wk | 46.3 ± 23.7 | 35.2 ± 21.7
OHA | 6-12 wk | 10.7 ± 22.9 | 22.3 ± 18.0
MEHA | Baseline to 6 wk | 28.9 ± 21.1 | 26.3 ± 26.5
MEHA | Baseline to 12 wk | 53.8 ± 22.6 | 37.6 ± 19.4
MEHA | 6-12 wk | 7.4 ± 26.3 | 5.5 ± 29.6

Note. MCID = minimum clinically important difference; OHA = overall health anchor; MEHA = mental and emotional health anchor.

Discussion

The purpose of this study was to utilize multiple analytical methods to determine whether a single MCID could be reported for the PRWE in distal radius fractures. The secondary purpose was to determine whether the time points utilized, patient injury characteristics, treatment modality, analytical method, or anchor question affect the MCID. Our study found that a single MCID cannot be reported for the PRWE because of the large variability across analytical methods, follow-up time periods, treatment modalities, and fracture classifications (Tables 1-3). The MCID was similar for both anchor questions in the anchor method when the same time points were used. Combining all MCID values, the average MCID was 26.0 ± 15.8 (range = 5.5-63.7).

A previous study defining the MCID for the PRWE in patients with distal radius fracture concluded that the MCID was 11.5.14 This value is lower than the average MCID found in our study (26.0) but falls within the range we observed (5.5-63.7). That study had several methodological limitations. The authors used only one method to determine their MCID, despite multiple other studies showing variation in the MCID based on analytical method7,8 and current recommendations calling for convergence of multiple methods.5,6 In addition, the patient population was composed mainly of patients from 2 randomized clinical trials, one enrolling patients with extra-articular fractures treated nonoperatively and one enrolling patients with intra-articular fractures treated surgically. The patient population in that study therefore does not necessarily reflect a random sample of patients with distal radius fracture, reducing the generalizability of the results. Finally, there was large variation in follow-up, with the time from trauma to initial measurement ranging from 6 to 13 weeks and the time between measurements ranging from 6 to 39 weeks. This makes it difficult to compare those results with other studies, as the follow-up time period has previously been shown to influence the MCID.7,8

Choice of analytical method has previously been shown to affect the MCID within a patient population.7,8 Our study found inconsistencies in the MCID between the distribution and anchor methods. A previous study showed a difference in MCID between anchor and distribution methods.8 An additional study found a difference in MCID between a mean change anchor method and a receiver operating characteristic (ROC) curve method.7 The distribution, mean change, and ROC curve methods each have benefits and limitations4,5,18,19; however, no method has been recommended as the gold standard.6,18 The distribution method was developed from an analysis of a trend found across multiple studies utilizing the anchor method and, as such, should likely be used only as a secondary method to confirm results or when the other methods are not possible.18 It would be beneficial to define a single method as the gold standard for MCID analysis to address the variation in MCID that arises within a population when different methods are used. This would improve the comparability of results between studies and allow easier detection of the other factors that affect the MCID. For our part, we favored the mean change anchor method because it allows the MCID to be calculated for different combinations of time points.

The follow-up period has also previously been shown to substantially influence the MCID.7,8 Our results showed an increasing MCID as the time between measurements increased (26.8 ± 24.7 for baseline to 6 weeks, 42.6 ± 23.2 for baseline to 12 weeks), in agreement with a previous study.7 This may reflect changes in a patient's expectations over the course of treatment, as a larger change in PRWE score is required for a patient to feel minimally improved when there is a longer interval between measurements. A separate study showed a difference in MCID with different follow-up periods but no discernible pattern.8 Our results also indicated a smaller MCID the farther the measurement interval was from baseline when the interval length was similar (26.8 ± 24.7 for baseline to 6 weeks vs 14.6 ± 21.3 for 6 to 12 weeks using the OHA method). It is possible that not only the length of time between measurements but also their timing relative to the injury influences the MCID. Our results suggest that patients expect greater change in their health status at the beginning of their recovery and may require smaller changes in their health status to feel they have improved as recovery progresses.

Prior results on the effect of the anchor question on the MCID conflict: one study showed that the choice of anchor question substantially affects the MCID,7 whereas a separate study showed no effect.15 It is possible that a difference exists between the 2 types of anchors, clinically centered and patient centered.18 Clinically centered anchors use a clinical milestone rather than a patient-reported question. Both our study and the study showing no effect of anchor choice15 used only patient-centered anchors, whereas the study showing an effect7 used both patient-centered and clinically centered anchors. Further research is needed to establish the effect that clinically centered and patient-centered anchor questions have on the MCID.

Previous studies have suggested that patient and clinical characteristics, such as disease severity and treatment, affect the MCID.5,7,20 In our study, even when the follow-up time period, anchor question, and analytical method were held constant, fracture pattern (43.7 ± 27.9 for AO/OTA 23 B fractures, 17.6 ± 21.2 for 23 A fractures, and 29.5 ± 21.0 for 23 C fractures) and treatment modality (31.8 ± 19.9 for operative vs 20.0 ± 29.4 for nonoperative) substantially influenced the MCID. This variability, even when methodological variables are controlled, suggests that the concept of one MCID for each PRO, or even one MCID for each injury or disease state within a given PRO, may not be appropriate. This result contradicts the traditional presentation of one MCID for each PRO4 and instead suggests that multiple MCIDs should be reported for a PRO based on disease state, disease severity, and treatment modality. The information presented in this study gives clinicians additional insight for interpreting the PRWE in distal radius fractures. To accurately interpret PRO data in future clinical trials, we suggest that anchor questions be included so that MCID values can be determined for the specific patient population included in the study. By determining these study population–specific MCID values, PROs can continue to provide the gold standard patient-centric measure of clinical outcome.

This study had multiple strengths and weaknesses. One strength is that it utilized multiple analytical methods and a large patient population to analyze the MCID. In addition, the data were taken from a prospective registry of all patients with distal radius fracture treated at our institution, increasing the generalizability of the study. A weakness is that not all analytical methods or types of anchor questions were assessed. Furthermore, the recommendations made in this study do not address the variability reported among different analytical methods; a thorough review of the literature combined with expert opinion may be required to determine which analytical method should be used as the standard. Future work should analyze the effect of the anchor question on the MCID and establish a standard method for determining the MCID. Eliminating the variability in MCID between studies that is due to methodological differences would allow better comparison of MCIDs across studies to determine which patient and clinical variables affect the MCID.

Conclusion

Our study found the MCID of the PRWE in patients with distal radius fracture to be heavily influenced by analytical method, follow-up time period, injury severity, and treatment modality. These results do not indicate that the information provided by the PRWE is meaningless. Rather, this study supports the conclusions of recent studies and represents a shift from the traditional notion of one MCID per PRO tool toward an MCID for each distinct patient population within a disease state. We suggest that, to accurately interpret PRO data in clinical trials, anchor questions be included in the study so that the MCID can be determined for the specific patient population included in the study.

Footnotes

Ethical Approval: This study was approved by our institutional review board.

Statement of Human and Animal Rights: All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2008.

Statement of Informed Consent: Informed consent was obtained from all patients and participants who were included within this study.

Declaration of Conflicting Interests: The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: DCB reports financial interests with Bristol-Myers Squibb, Eli Lilly, and Pfizer outside the submitted work. DCB also serves on the Board of the American Academy of Orthopaedic Surgeons, Ruth Jackson Orthopaedic Society, and TRIA Orthopaedic Center for Research & Education. BPC reports that his spouse is the CEO and Founder of CODE Technology. DLM, BCS, and HRP have no conflicts to report.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

1. Poolman RW, Swiontkowski MF, Fairbank JCT, Schemitsch EH, Sprague S, de Vet HCW. Outcome instruments: rationale for their use. J Bone Joint Surg Am. 2009;91(suppl 3):41-49. doi:10.2106/JBJS.H.01551.
2. Swiontkowski MF, Buckwalter JA, Keller RB, Haralson R. The outcomes movement in orthopaedic surgery: where we are and where we should go. J Bone Joint Surg Am. 1999;81(5):732-740. http://www.ncbi.nlm.nih.gov/pubmed/10360703.
3. Jackowski D, Guyatt G. A guide to health measurement. Clin Orthop Relat Res. 2003;413:80-89. doi:10.1097/01.blo.0000079771.06654.13.
4. Jaeschke R, Singer J, Guyatt GH. Measurement of health status: ascertaining the minimal clinically important difference. Control Clin Trials. 1989;10(4):407-415. http://www.ncbi.nlm.nih.gov/pubmed/2691207.
5. Revicki D, Hays RD, Cella D, Sloan J. Recommended methods for determining responsiveness and minimally important differences for patient-reported outcomes. J Clin Epidemiol. 2008;61(2):102-109. doi:10.1016/j.jclinepi.2007.03.012.
6. Wyrwich KW, Bullinger M, Aaronson N, et al. Estimating clinically significant differences in quality of life outcomes. Qual Life Res. 2005;14(2):285-295. http://www.ncbi.nlm.nih.gov/pubmed/15892420.
7. Mills KAG, Naylor JM, Eyles JP, Roos EM, Hunter DJ. Examining the minimal important difference of patient-reported outcome measures for individuals with knee osteoarthritis: a model using the knee injury and osteoarthritis outcome score. J Rheumatol. 2016;43(2):395-404. doi:10.3899/jrheum.150398.
8. de Vet HCW, Ostelo RWJG, Terwee CB, et al. Minimally important change determined by a visual method integrating an anchor-based and a distribution-based approach. Qual Life Res. 2007;16(1):131-142. doi:10.1007/s11136-006-9109-9.
9. Costa ML, Achten J, Parsons NR, et al. Percutaneous fixation with Kirschner wires versus volar locking plate fixation in adults with dorsally displaced fracture of distal radius: randomised controlled trial. BMJ. 2014;349:g4807. http://www.bmj.com/content/349/bmj.g4807.
10. MacDermid JC, Richards RS, Donner A, Bellamy N, Roth JH. Responsiveness of the short form-36, disability of the arm, shoulder, and hand questionnaire, patient-rated wrist evaluation, and physical impairment measurements in evaluating recovery after a distal radius fracture. J Hand Surg Am. 2000;25(2):330-340. doi:10.1053/jhsu.2000.jhsu25a0330.
11. MacDermid JC, Turgeon T, Richards RS, Beadle M, Roth JH. Patient rating of wrist pain and disability: a reliable and valid measurement tool. J Orthop Trauma. 1998;12(8):577-586. http://www.ncbi.nlm.nih.gov/pubmed/9840793.
12. Mehta SP, MacDermid JC, Richardson J, MacIntyre NJ, Grewal R. A systematic review of the measurement properties of the patient-rated wrist evaluation. J Orthop Sports Phys Ther. 2015;45(4):289-298. doi:10.2519/jospt.2015.5236.
13. Ring D, Roberge C, Morgan T, Jupiter JB. Osteotomy for malunited fractures of the distal radius: a comparison of structural and nonstructural autogenous bone grafts. J Hand Surg Am. 2002;27(2):216-222. http://www.ncbi.nlm.nih.gov/pubmed/11901380.
14. Walenkamp MMJ, de Muinck Keizer R-J, Goslings JC, Vos LM, Rosenwasser MP, Schep NWL. The minimum clinically important difference of the patient-rated wrist evaluation score for patients with distal radius fractures. Clin Orthop Relat Res. 2015;473(10):3235-3241. doi:10.1007/s11999-015-4376-9.
15. Tubach F, Ravaud P, Baron G, et al. Evaluation of clinically relevant changes in patient reported outcomes in knee and hip osteoarthritis: the minimal clinically important improvement. Ann Rheum Dis. 2005;64(1):29-33. doi:10.1136/ard.2004.022905.
16. Tubach F, Wells GA, Ravaud P, Dougados M. Minimal clinically important difference, low disease activity state, and patient acceptable symptom state: methodological issues. J Rheumatol. 2005;32(10):2025-2029. http://www.ncbi.nlm.nih.gov/pubmed/16206363.
17. MacDermid JC, Wessel J, Humphrey R, Ross D, Roth JH. Validity of self-report measures of pain and disability for persons who have undergone arthroplasty for osteoarthritis of the carpometacarpal joint of the hand. Osteoarthr Cartil. 2007;15(5):524-530. doi:10.1016/j.joca.2006.10.018.
18. Norman GR, Sloan JA, Wyrwich KW. Interpretation of changes in health-related quality of life. Med Care. 2003;41(5):582-592. doi:10.1097/01.MLR.0000062554.74615.4C.
19. Turner D, Schünemann HJ, Griffith LE, et al. The minimal detectable change cannot reliably replace the minimal important difference. J Clin Epidemiol. 2010;63(1):28-36. doi:10.1016/j.jclinepi.2009.01.024.
20. Terwee CB, Roorda LD, Dekker J, et al. Mind the MIC: large variation among populations and methods. J Clin Epidemiol. 2010;63(5):524-534. doi:10.1016/j.jclinepi.2009.08.010.
