We have come a long way in reporting about what really matters in clinical research. In the 1950s, Neviaser summarized his results of surgery for large rotator cuff tears rather simply: “My own experience has shown that … massive ruptures with marked separation and retraction of the cuff do not do well by operative repair” [1]. He reported nothing regarding patients’ pain or function after surgery. This was common at the time.
More recently, an appropriate focus has been placed on patient-centered outcomes tools. Instead of qualitative comments like Neviaser’s, and in lieu of lists of physical findings or radiographic signs — ROM, lucent lines — that may or may not correlate with patient satisfaction, we now use numerous surveys which are “validated” to greater or lesser degrees. This is a clear improvement, but the problem is far from solved.
In clinical research, if one gathers enough patients or measures enough x-rays, small differences can emerge as statistically significant. How can we know whether those modest differences are clinically important? Does a 1° difference in tibial varus matter, even if the difference is statistically significant? What about a 1-cm difference on a 10-cm visual analog scale (VAS) for pain? Even if “real,” would a patient even notice?
It turns out that there is a way to get that answer, although as a specialty, we have not put nearly as much effort into working it out as we should. In “Comparative Responsiveness and Minimal Clinically Important Differences for Idiopathic Ulnar Impaction Syndrome,” Drs. Kim and Park take a step in the right direction in their study of a concept that is unfamiliar to many surgeons and clinician-scientists: the minimal clinically important difference (MCID).
The MCID is the smallest difference in a score that a patient would identify as important. In principle, we should know the MCID for every outcomes tool that we use. We do not.
To begin to chip away at this important gap in our collective knowledge, Kim and Park calculated MCIDs for two common upper extremity instruments by testing them in a population of patients undergoing ulnar shortening osteotomy.
If you are not a hand surgeon, the actual finding of this particular study (the MCIDs for two upper extremity instruments, the Patient-Rated Wrist Evaluation [PRWE] and the Disabilities of the Arm, Shoulder and Hand [DASH] questionnaire, are each about 15 points on a 100-point scale) may not grab you. But if you read clinical research of any sort, and certainly if you perform clinical research, the study by Drs. Kim and Park should be required reading. The methods are interesting, generalizable, and worth understanding; the topic itself, as mentioned, is critically important.
Going forward, patients, payers, and government entities will demand that we show that the work we do actually improves health. Knowing our MCIDs, and showing that the treatments we advocate exceed them, will help us do that.
We have come a long way since Neviaser’s work in terms of outcomes reporting, but given that we do not know the MCIDs for most of the outcomes instruments we use, we still have a long way to go.
Take 5 with Jae Kwang Kim MD, PhD
Lead author of “Comparative Responsiveness and Minimal Clinically Important Differences for Idiopathic Ulnar Impaction Syndrome”
Seth S. Leopold MD: What sparked your interest in MCIDs? It is a somewhat unusual area of inquiry.
Jae Kwang Kim MD, PhD: There are two reasons for my interest in MCIDs. First, several years ago, I submitted a report about a new treatment method that produced a statistically significant improvement in clinical outcome for a certain disease. The reviewer asked whether my result was clinically important, even though it showed a statistically significant improvement. So I searched for the MCID for that disease, but no studies of the MCID for that disease had been reported. Second, the MCID is an important determinant in calculating the sample size for prospective randomized trials. I had trouble planning a clinical trial because few studies of MCIDs in this area of orthopaedic research had been published; this further motivated my interest.
Dr. Leopold: What should every reader of clinical research know about the concept of MCIDs?
Dr. Kim: In general, we have used statistical methods to show whether there are improvements in outcomes after treatment, or to compare our results with outcomes of other treatments. But statistically significant differences are largely driven by sample size. In other words, a clinically small improvement might show up as a statistically significant difference if a large enough population is analyzed. In the realm of clinical medicine and surgery, though, a small difference, even if statistically significant, may be of little or no importance to the health or the quality of life of patients. The concept of the MCID evolved as a way to overcome this shortcoming of statistical methods in clinical research. The MCID represents a change that would be considered meaningful and worthwhile by the patients studied, and provides a threshold value for that clinically relevant change in a patient’s health.
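To make Dr. Kim’s point concrete, here is a minimal sketch, not drawn from the paper: the 1-point difference and the 15-point standard deviation are assumed, illustrative values for a 0–100 instrument. It shows how a fixed, clinically trivial effect crosses the conventional p < 0.05 threshold once enough patients are enrolled.

```python
# Minimal sketch: a fixed 1-point difference on a 0-100 instrument
# (assumed SD of 15 points) becomes "statistically significant" once
# the groups are large enough, even though patients would be unlikely
# to notice it. All numbers are illustrative assumptions.
import numpy as np
from scipy.stats import norm

true_difference = 1.0   # assumed difference, in points on a 0-100 scale
sd = 15.0               # assumed between-patient standard deviation

for n_per_group in (50, 500, 5000):
    se = sd * np.sqrt(2.0 / n_per_group)   # standard error of the group difference
    z = true_difference / se               # two-sample z statistic
    p = 2 * norm.sf(z)                     # two-sided p value
    print(f"n per group = {n_per_group:5d}  p = {p:.4f}")
```

The effect never changes; only the sample size does, yet under these assumptions the p value slips below 0.05 somewhere around 1700 to 1800 patients per group.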
Dr. Leopold: MCIDs for many of our commonly used orthopaedic outcomes tools are not known. Is this a problem, and if so, is it one shared by other medical specialties? How does orthopaedic surgery compare with other specialties in this regard?
Dr. Kim: Although the concept of MCID was first described in 1989, this is not a familiar concept for many orthopaedic surgeons, as you mentioned. MCID studies usually were performed by internists, statisticians, or epidemiologists, often in departments of epidemiology or preventive medicine. These publications were not widely seen by surgeons. However, I feel that there is an increasing demand for the study of MCID among orthopaedic surgeons, because so many patient-reported outcome measurements are being used in our area.
Dr. Leopold: Do we know whether the MCID is driven by the outcomes instrument used, the surgical intervention studied, the diagnosis treated, or all of those criteria? In other words, should we surmise that the MCID for the DASH questionnaire (one of the tools you studied) would be the same in a population of patients who underwent wrist fusion instead of ulnar shortening osteotomy? If not, is this line of research practical?
Dr. Kim: The MCID varies according to diseases and outcome instruments, but it does not depend on treatment methods. Therefore, we can compare the ulnar shortening osteotomy and wafer procedure for ulnar impaction syndrome using the PRWE or DASH, because the MCIDs of the PRWE and the DASH for ulnar impaction syndrome were reported in our paper.
Dr. Leopold: I hope that your study inspires other investigators to try to determine MCIDs for other commonly used outcomes tools. What advice do you have for clinician scientists who want to begin to look at these questions? What pitfalls might you avoid in your next study given all you learned from doing this one?
Dr. Kim: The MCID can be established in two ways. One is an anchor-based method that compares changes in scores on the instrument with an anchor, where the patient indicates whether he or she is better than at baseline (the anchor). The other is a distribution-based method that evaluates the minimal difference in excess of that expected from random sample variation or from measurement error in the instrument. Several methods to calculate an anchor-based MCID have been reported. However, only sensitivity- and specificity-based methods, using receiver operating characteristic curves (the approach we used in our study), can show that the calculated MCID is able to distinguish improved from unchanged patients. Importantly, though, this method is not suitable for conditions in which most patients will improve and few will remain unchanged, such as routine fracture care.
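For readers unfamiliar with the receiver operating characteristic (ROC) approach Dr. Kim describes, the sketch below shows the general idea with invented data; the group sizes, the score changes, and the Youden-index rule for choosing the cutoff are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of an anchor-based MCID estimate via an ROC curve.
# Change scores and anchor labels are invented for illustration.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Pre-to-post change on a 0-100 instrument, coded so that a larger
# positive change means more improvement (hypothetical values).
improved = rng.normal(loc=25, scale=10, size=60)    # anchor: "better"
unchanged = rng.normal(loc=5, scale=10, size=30)    # anchor: "about the same"

changes = np.concatenate([improved, unchanged])
anchor = np.concatenate([np.ones(60), np.zeros(30)])  # 1 = improved, 0 = unchanged

# Each candidate cutoff on the change score yields a sensitivity/specificity
# pair for classifying patients who call themselves improved.
fpr, tpr, thresholds = roc_curve(anchor, changes)

# Youden's J (sensitivity + specificity - 1) picks the cutoff that best
# separates improved from unchanged patients; that cutoff is taken as the MCID.
j = tpr - fpr
mcid = thresholds[np.argmax(j)]
print(f"Estimated MCID: {mcid:.1f} points")
```

The design choice Dr. Kim highlights is visible here: the method only works if the sample contains a meaningful group of unchanged patients to sit on the other side of the cutoff.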
Footnotes
Note from the Editor-in-Chief: In “Editor’s Spotlight,” one of our editors provides brief commentary on a paper we believe is especially important and worthy of general interest. Following the explanation of our choice, we present “Take Five,” in which the editor goes behind the discovery with a one-on-one interview with an author of the article featured in “Editor’s Spotlight.”
The author certifies that he or a member of his immediate family has no funding or commercial associations (eg, legal, consultancies, stock ownership, equity interest, patent/licensing arrangements, etc) that might pose a conflict of interest in connection with the submitted article.
All ICMJE Conflict of Interest Forms for authors and Clinical Orthopaedics and Related Research® editors and board members are on file with the publication and can be viewed on request.
The opinions expressed are those of the writers, and do not reflect the opinion or policy of CORR® or the Association of Bone and Joint Surgeons®.
Reference
1. Neviaser JS. Ruptures of the rotator cuff. Clin Orthop. 1954;3:92–98.