Clinical Orthopaedics and Related Research
2023 Feb 2;481(4):715–716. doi: 10.1097/CORR.0000000000002582

CORR Insights®: Discordance Abounds in Minimum Clinically Important Differences in THA: A Systematic Review

Kim Madden
PMCID: PMC10013653  PMID: 36735583

Where Are We Now?

Minimum clinically important differences (MCIDs) are undervalued and underused in orthopaedic surgery research. There is growing recognition that p values alone are not sufficient to report results in studies evaluating differences between groups. A difference between groups may be large enough to be statistically significant, but a statistically significant difference is not necessarily clinically meaningful, or even large enough for a patient to notice.

This is where the MCID comes in: It is a threshold for the size of a treatment effect that tells readers whether an observed difference between intervention groups, or over time, is large enough to matter to a patient.

The authors of this study in Clinical Orthopaedics and Related Research® [2] conducted a systematic review of MCIDs reported in THA studies. This well-conducted methodologic review identified 242 eligible THA studies that calculated an MCID for any patient-reported outcome measure (PROM). The authors recommend using an MCID of 9 points on the Oxford Hip Score (OHS), 33 points on the Hip Disability and Osteoarthritis Outcome Score (HOOS) pain subscale, and 25 points on the HOOS quality of life subscale. The authors noted that anchor-based MCIDs were larger than distribution-based MCIDs in all PROMs for which there was a direct comparison between the two approaches, indicating that the distribution-based method may underestimate the MCID. I suggest that readers of clinical research “bookmark” this CORR study [2] and keep it handy. Given how often these particular outcome tools are used in research and clinical quality measurement efforts, these MCIDs will prevent clinicians, clinician-scientists, and policymakers from making decisions based on p values, and instead inform their choices with the use of effect size thresholds that patients consider important.

Where Do We Need To Go?

The authors rightly point out that anchor-based MCIDs should be used wherever possible because they are based on patient values rather than the statistical properties of the sample (usually the variance; for example, one-half of the standard deviation) [3]. The MCID is supposed to reflect patient values and preferences, but the distribution-based approach does not consider the patient’s perspective. Deckey et al. [2] report that in all cases in which both anchor-based and distribution-based MCIDs were available for a PROM, the median distribution-based MCID was smaller than the median anchor-based MCID. This means that the distribution-based approach might substantially underestimate the difference that is important to patients. For these reasons, researchers in orthopaedic surgery should focus on generating credible anchor-based MCIDs for various PROMs, clinical populations, and treatments. Distribution-based MCIDs should not be used unless there is no other information available to inform the MCID.
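The contrast above can be made concrete. A distribution-based MCID is purely a function of the sample's spread; the common half-SD rule can be sketched in a few lines. This is a minimal illustration, not any author's implementation, and the baseline scores below are invented for demonstration only.

```python
import statistics

def distribution_based_mcid(scores, fraction=0.5):
    """Distribution-based MCID as a fraction (commonly one-half)
    of the sample standard deviation. Note that this reflects only
    the statistical spread of the sample, not patient judgments of
    what change is important."""
    return fraction * statistics.stdev(scores)

# Hypothetical baseline Oxford Hip Scores (illustrative values only)
baseline_ohs = [20, 24, 18, 30, 26, 22, 28, 19, 25, 27]
print(round(distribution_based_mcid(baseline_ohs), 2))
```

Nothing in this calculation involves a patient's opinion, which is exactly the commentary's point: the same data could yield a "meaningful" threshold far below what patients, asked directly through an anchor question, would consider important.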

MCIDs also can and should be used in orthopaedic research to calculate sample sizes. Many investigators base their sample size calculations on the difference a previous study or pilot study found, but the MCID is conceptually and methodologically sounder for this purpose. For example, if a previous study showed a particular difference between groups, there is no reason to believe this difference is clinically meaningful unless patients specifically help us make that determination, and an anchor-based MCID is the best tool we have for this job [6]. Pilot trials (and small trials in general) are known to have spuriously high or low point estimates with wide confidence intervals that can be misleading; in fact, previous orthopaedic studies have shown that pilot trials may even show the opposite result of a larger, more-definitive trial [7, 8]. Therefore, differences found in pilot studies are unreliable estimates on which to base a sample size calculation for a larger trial. Credible MCID estimates help to overcome these issues. We therefore need studies to provide anchor-based MCIDs for all the common nonoperative interventions and surgical procedures surgeons perform, and to verify that these MCIDs don’t differ too widely in different study populations.
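To show how an MCID slots into a sample size calculation, here is a sketch of the standard two-sample comparison of means, powered to detect the MCID rather than a pilot-study estimate. The 9-point OHS threshold comes from the review under discussion [2]; the standard deviation of 10 is an assumption chosen purely for illustration.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(mcid, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means,
    using the MCID as the target difference. Formula:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / mcid)^2"""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / mcid) ** 2)

# Illustrative: 9-point OHS MCID [2] with an assumed SD of 10 points
print(n_per_group(mcid=9, sd=10))
```

Because the target difference enters the formula squared in the denominator, an MCID that is too small (as distribution-based estimates may be) inflates the required sample size, while one that is too large risks a trial unable to detect changes patients care about.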

How Do We Get There?

MCIDs used in orthopaedics need to be credible. Devji et al. [3] developed and validated an instrument to measure the credibility of MCIDs. Their criteria are that the anchor be rated by the patient, the anchor be relevant, the estimate be precise, the anchor and the outcome of interest be correlated, and the selected threshold be small but important. These criteria innately favor anchor-based MCIDs over distribution-based MCIDs because they consider the patient’s perspective. Journal editors, peer reviewers, and authors should be aware of these credibility criteria and use the most credible MCID available. When investigators generate new MCIDs, they should be mindful of these criteria so that the resulting MCIDs are credible. When several different MCIDs have been published for a particular PROM and clinical context, Wang et al. [9] recommend choosing the most credible one. I agree.

Methodologic studies of research generating MCIDs identify that reporting of MCIDs tends to be poor [4]. Carrasco-Labra et al. [1] provide guidance on reporting how MCIDs are calculated, which includes reporting the anchor used, the number of participants in the study, the analytical approach, variance in the sample, the clinical context, the description of the threshold used to determine MCID, and other items. Another systematic review found that only 21% of surgical trials explicitly used an MCID in their sample size calculation [5]. This is a problem that should be easy to remedy and would cost nothing to fix: Trial investigators should be aware that MCIDs can be used in sample size calculations, institutional review boards and ethics boards should insist that they be used a priori, and journal editors should provide clarity to their readers on this important point.

Footnotes

This CORR Insights® is a commentary on the article “Discordance Abounds in Minimum Clinically Important Differences in THA: A Systematic Review” by Deckey and colleagues available at: DOI: 10.1097/CORR.0000000000002434.

The author certifies that there are no funding or commercial associations (consultancies, stock ownership, equity interest, patent/licensing arrangements, etc.) that might pose a conflict of interest in connection with the submitted article related to the author or any immediate family members.

All ICMJE Conflict of Interest Forms for authors and Clinical Orthopaedics and Related Research® editors and board members are on file with the publication and can be viewed on request.

The opinions expressed are those of the writer, and do not reflect the opinion or policy of CORR® or The Association of Bone and Joint Surgeons®.

References

1. Carrasco-Labra A, Devji T, Qasim A, et al. Serious reporting deficiencies exist in minimal important difference studies: current state and suggestions for improvement. J Clin Epidemiol. 2022;150:25-32.
2. Deckey DG, Verhey JT, Christopher ZK, et al. Discordance abounds in minimum clinically important differences in THA: a systematic review. Clin Orthop Relat Res. 2023;481:702-714.
3. Devji T, Carrasco-Labra A, Qasim A, et al. Evaluating the credibility of anchor based estimates of minimal important differences for patient reported outcomes: instrument development and reliability study. BMJ. 2020;369:m1714.
4. Devji T, Carrasco-Labra A, Guyatt G. Mind the methods of determining minimal important differences: three critical issues to consider. Evid Based Ment Health. 2021;24:77-81.
5. Kashani I, Hall JL, Hall JC. Reporting of minimum clinically important differences in surgical trials. ANZ J Surg. 2009;79:301-304.
6. Kraemer HC, Mintz J, Noda A, Tinklenberg J, Yesavage JA. Caution regarding the use of pilot studies to guide power calculations for study proposals. Arch Gen Psychiatry. 2006;63:484-489.
7. Sprague S, Tornetta P 3rd, Slobogean GP, et al. Are large clinical trials in orthopaedic trauma justified? BMC Musculoskelet Disord. 2018;19:124.
8. SPRINT Investigators; Bhandari M, Tornetta P 3rd, et al. (Sample) size matters! An examination of sample size from the SPRINT trial study to prospectively evaluate reamed intramedullary nails in patients with tibial fractures. J Orthop Trauma. 2013;27:183-188.
9. Wang Y, Devji T, Qasim A, et al. A systematic survey identified methodological issues in studies estimating anchor-based minimal important differences in patient-reported outcomes. J Clin Epidemiol. 2022;142:144-151.
