The Journal of Manual & Manipulative Therapy. 2012 Aug;20(3):160–166. doi: 10.1179/2042618612Y.0000000001

Clinimetrics corner: a closer look at the minimal clinically important difference (MCID)

Alexis Wright 1, Joseph Hannon 2, Eric J Hegedus 1, Alicia Emerson Kavchak 3
PMCID: PMC3419574  PMID: 23904756

Abstract

Minimal clinically important difference (MCID) scores are commonly used by clinicians when determining patient response to treatment and to guide clinical decision-making during the course of treatment. For research purposes, the MCID score is often used in sample size calculations for adequate powering of a study to minimize the false-positives (type 1 errors) and the false-negatives (type 2 errors). For clinicians and researchers alike, it is critical that the MCID score is a valid and stable measure. A low MCID value may result in overestimating the positive effects of treatment, whereas a high MCID value may incorrectly classify patients as failing to respond to treatment when in fact the treatment was beneficial. The wide range of methodologies for calculating the MCID score results in varied outcomes, which leads to difficulties with interpretation and application. This clinimetrics corner outlines key factors influencing MCID estimates and discusses limitations with the use of the MCID in both clinical and research practice settings.

Keywords: MCID, Responsiveness, Outcome measures, Psychometric properties

Introduction

Escalating health care costs mandate accountability for providing informed and effective health care interventions. Increasingly, health outcomes are being utilized in a standardized attempt to observe change in an often complex clinical presentation.1 Outcome measures provide context for evidence-based clinical decision-making regarding patient management strategies and serve as a mechanism to monitor treatment effectiveness and to predict which patients will benefit most from a particular intervention.2 This ability to stratify patients becomes especially important as, for example, the prevalence of chronic diseases continues to increase; these conditions typically present with intermittent flare-ups that require further medical management over time.3 Because a ‘cure’ is not possible in these chronic conditions, understanding what constitutes clinically important change to the patient becomes more relevant. Routine utilization of outcome measures can facilitate detection of functional limitations that might not be discovered during the subjective interview,4 allowing for improved communication between the patient and the physical therapist. Further, outcome measures allow for improved documentation and a more direct dialogue with providers, including more appropriate referral to other medical specialties.4

For an outcome measure to be clinically useful, the measure must first reflect sound psychometric properties of reliability and validity. Beyond this, the outcome tool must demonstrate an ability to accurately detect change, otherwise known as responsiveness.2 Measures of responsiveness have traditionally been reported as statistically significant change scores; that is, change beyond measurement error. For both the clinician and the researcher, a threshold that detects change beyond random error improves confidence that the difference observed in the outcome measure from initial visit to follow-up is indeed a true change. The more precise the responsiveness of the outcome measure, the more accurately the change can be categorized.5

Current outcome measure development has evolved to include not only the statistical analysis of reliability and validity, but also an understanding of the patient’s perspective of change. The minimal clinically important difference (MCID) score is defined as the minimal amount of change that is important to the patient.6 The ability to define a stable, universal MCID score for a particular instrument is an attractive concept for several reasons. Clinically, the MCID score can be utilized to establish a therapeutic threshold via an outcome measure. For example, a clinician may establish a goal for a patient to attain the MCID of the 6-minute walk test over the episode of care. This serves as an objective, measurable goal that is patient-centered. Another example with clinical impact is that third party payers may try to utilize the MCID score when determining continued payment for care. If a patient with chronic low back pain demonstrates a stable score on the Oswestry disability index, further authorization for physical therapy visits may be rejected. For the clinical researcher, the MCID score can be used in a priori power and sample size calculations and to ascertain the therapeutic threshold that signals an intervention’s effectiveness.7
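As a brief illustration of this research use, the sketch below (Python) shows how an assumed MCID and baseline standard deviation feed into a conventional two-group sample size calculation. The MCID of 10 points and standard deviation of 18 points are hypothetical values chosen for illustration, not estimates drawn from the literature.

```python
# Minimal sketch: a priori sample size for a two-arm parallel trial,
# powered to detect a between-group difference equal to the MCID.
# All numbers are hypothetical and for illustration only.
from math import ceil
from scipy.stats import norm

def sample_size_per_group(mcid, sd, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided test with equal variances."""
    z_alpha = norm.ppf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # e.g. 0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) * sd / mcid) ** 2
    return ceil(n)

# Hypothetical example: assumed MCID = 10 points, assumed SD = 18 points
print(sample_size_per_group(mcid=10, sd=18))  # roughly 51 patients per group
```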

Though the MCID score is an alluring concept, clinicians must be cautious in accepting an MCID score at face value given the wide variability of established MCID scores available for a single outcome scale (Table 1). Understanding how these scores were established can facilitate a more nuanced interpretation and application of the MCID score. The purposes of this clinimetrics corner are to inform the reader about the methodologies used in establishing the MCID score, to discuss the variety of factors that can influence the MCID score for any given outcome instrument, and to promote clinician critical appraisal of the literature when establishing goals for care, developing research trials, and communicating with referral sources and payer sources.

Table 1. Selective sample of reported MCID scores among commonly used outcome measures.

| Reference | Population description | Sample size | Method of calculation | Cutoffs used | MCID score |
|---|---|---|---|---|---|
| Reported MCID scores for the numeric pain rating scale | | | | | |
| Maughan and Lewis19 | Chronic low back pain >3 months, with and without radicular symptoms | 63 patients | Anchor: ROC curve approach; Distribution: SEM | GPE: +2 | ROC curve approach: 4 points; SEM: 2·4 points |
| Pool et al.20 | Non-specific neck pain >2 weeks | 183 patients | Anchor: ROC curve approach; Distribution: SEM | GPE: +2 | ROC curve approach: 2·5 points; SEM: 4·3 points |
| Mintken et al.12 | Patients presenting to PT with a primary complaint of shoulder pain | 101 patients | Anchor: ROC curve approach | GRCS: +3 | ROC curve approach: 1·1 points |
| Salaffi et al.21 | Chronic musculoskeletal pain | 825 patients: knee OA 223, hip OA 86, hand OA 133, RA 290, AS 83 | Anchor: ROC curve approach | GPE: +3 | ROC curve approach: 1 point |
| Reported MCID scores for the visual analog scale for pain | | | | | |
| Beurskens et al.22 | Non-specific low back pain >6 weeks | 81 patients | Anchor: ROC curve approach | GPE: +2 | ROC curve approach: 10–18 mm |
| Hagg et al.23 | Chronic low back pain >2 years, surgical and non-surgical | 289 patients | Anchor: between-patients score change; Distribution: SEM | Patient global assessment of treatment effect questionnaire: ‘better’ | Between-patients score change: 18–19 mm; SEM: 15 mm |
| Tashjian et al.24 | Non-operative rotator cuff disease | 81 patients | Anchor: between-patients score change | 4-item improvement questionnaire: ‘good: satisfactory effect with occasional episode of pain or stiffness’ | Between-patients score change: 13·7 mm |
| Tubach et al.25 | Hip/knee pain | 1224 patients: hip OA 310, knee OA 914 | Anchor: within-patients score change | 5-point Likert scale: ‘good: satisfactory effect with occasional episode of pain or stiffness’ | Within-patients score change: hip 15·4 mm; knee 19·9 mm |
| Reported MCID scores for the lower extremity functional scale | | | | | |
| Binkley et al.26 | Lower extremity musculoskeletal dysfunction, defined as ‘any condition of the joints, muscle, or other soft tissues of the lower extremity’ | 107 patients | Anchor: ROC curve approach | 7-point rating of change score: +2 | ROC curve approach: 9 points |
| Wang et al.15 | Orthopedic knee impairments | 6651 patients | Anchor: ROC curve approach | GRCS: +3 | ROC curve approach: 12 points |
| Reported MCID scores for the patient-specific functional scale | | | | | |
| Maughan and Lewis19 | Chronic low back pain >3 months, with and without radicular symptoms | 63 patients | Anchor: ROC curve approach; Distribution: SEM | GPE: +2 | ROC curve approach: 2 points; SEM: 1·4 points |
| Reported MCID scores for the neck disability index | | | | | |
| Carreon et al.27 | Patients post-cervical fusion due to degenerative conditions | 505 patients | Anchor: ROC curve approach | HTI: ‘somewhat better’ | ROC curve approach: 7·5 points |
| Pool et al.20 | Non-specific neck pain >2 weeks | 183 patients | Anchor: ROC curve approach; Distribution: SEM | GPE: +2 | ROC curve approach: 3·5 points; SEM: 10·5 points |
| Reported MCID scores for the Oswestry disability index | | | | | |
| Beurskens et al.22 | Non-specific low back pain >6 weeks | 81 patients | Anchor: ROC curve approach | GPE: +2 | ROC curve approach: 4–6% (on a 100-point scale) |
| Hagg et al.23 | Chronic low back pain >2 years, surgical and non-surgical | 289 patients | Anchor: between-patients score change; Distribution: SEM | Patient global assessment of treatment effect questionnaire: ‘better’ | Between-patients score change: 10%; SEM: 10% (on a 100-point scale) |
| Maughan and Lewis19 | Chronic low back pain >3 months, with and without radicular symptoms | 63 patients | Anchor: ROC curve approach; Distribution: SEM | GPE: +2 | ROC curve approach: 8%; SEM: 16·7% (on a 100-point scale) |

Note: Abbreviations: ROC, receiver operating characteristic curve; SEM, standard error of measurement; GPE, global perceived effect; MCID, minimal clinically important difference; GRCS, global rating of change score; OA, osteoarthritis; RA, rheumatoid arthritis; AS, ankylosing spondylitis; HTI, health transition index of the SF-36.

Methodological Approaches to Determining the MCID

At least nine methodological approaches have been reported for calculating the MCID (Table 2). In general, these approaches can be classified into two broad groups: anchor-based and distribution-based. Previous authors have outlined the strengths and weaknesses associated with these two groups of methods; however, a single standardized methodology for calculating the MCID has yet to be determined.3,8

Table 2. Methods used for calculating the MCID.3

| Method | Description | Cutoffs used |
|---|---|---|
| Anchor-based approaches | | |
| Within-patients score change | The MCID is the mean Δscore of patients who improved | Selection of the cut-point used on the anchor is arbitrary and can correspond to small, moderate, or large changes |
| Between-patients score change | The MCID is the mean Δscore of patients who improved minus the mean Δscore of those who did not | Selection of the cut-point on the anchor is arbitrary |
| ROC curve approach | The MCID is the Δscore that corresponds to the point closest to the top-left corner of the curve, i.e. the change in score associated with the smallest amount of misclassification | Different cut-points may be used on the anchor to dichotomize patients into those who improved and those who did not |
| Social comparison approach | The MCID is the difference in scores of patients who rate themselves as ‘a little better’ or ‘a little worse’ instead of ‘about the same’ as compared with another patient | Groups can be compared by level of disease severity, severity of condition, external non-disease-related criteria, or hypothetical health states |
| Distribution-based approaches (MDC) | | |
| SEM | MCID = X × SDbaseline × √(1 − r) | X is set at 1 for a small effect, 1·96 for a moderate effect, or 2·77 for a large effect |
| Reliable change index | Standard error of the measurement difference: MCID = X × SDbaseline × √[2 × (1 − r)] | X is usually set at 1·96 |
| 0·5 SD | MCID = 0·5 × SD of the Δscore | 0·5 SD is equal to an effect size of 0·5, generally considered a moderate effect |
| Effect size | Pre-test standard deviation: MCID = X × SDbaseline | For the effect size, values of 0·2, 0·5, and 0·8 define small, moderate, and large changes, respectively |
| Paired t-statistic | The MCID is the difference between pre-test and post-test mean scores divided by the standard error of the change: MCID = (x̄1 − x̄0) / √[Σ(di − d̄)² / n(n − 1)] | |
| Growth curve analysis | Standard error of the slope: MCID = B / √V | |
| Standardized response mean | Standard deviation of change: MCID = (x̄1 − x̄0) / √[Σ(di − d̄)² / (n − 1)] | |
| 95% limits of agreement | Defines the boundaries of error around the change score: the MCID is the mean Δscore ± 1·96 SD of the Δscore of stable patients | |
| Responsiveness statistic | Standard deviation of change in a stable group: MCID = (x̄1 − x̄0) / √[Σ(di,stable − d̄stable)² / (n − 1)] | |

Notes: Abbreviations: MCID, minimal clinically important difference; ROC, receiver operating characteristic; SEM, standard error of measurement; MDC, minimal detectable change; SD, standard deviation; RCI, reliable change index.

For the SEM and RCI, the test–retest intraclass correlation coefficient of stable patients is used for r.
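To make the distribution-based rows of Table 2 concrete, the following Python sketch computes the SEM-based estimate, the reliable change index, the 0·5 SD rule, and an effect-size benchmark. The scores and the test–retest reliability are hypothetical and serve only to illustrate the formulas, not any published dataset.

```python
# Minimal sketch of several distribution-based indices from Table 2,
# using hypothetical baseline/follow-up scores and an assumed reliability.
import numpy as np

baseline = np.array([42.0, 35.0, 50.0, 47.0, 38.0, 44.0, 41.0, 36.0])  # hypothetical
followup = np.array([45.0, 36.0, 49.0, 52.0, 41.0, 44.0, 45.0, 39.0])  # hypothetical
change = followup - baseline

r = 0.90   # assumed test-retest ICC of stable patients (see note above)
x = 1.0    # multiplier: 1 (small), 1.96 (moderate), 2.77 (large effect)

sd_baseline = baseline.std(ddof=1)
sd_change = change.std(ddof=1)

sem_based = x * sd_baseline * np.sqrt(1 - r)        # SEM row of Table 2
rci_based = x * sd_baseline * np.sqrt(2 * (1 - r))  # reliable change index row
half_sd = 0.5 * sd_change                           # 0.5 SD of the change score
effect_size = 0.5 * sd_baseline                     # moderate effect-size benchmark

print(f"SEM-based: {sem_based:.2f}  RCI: {rci_based:.2f}  "
      f"0.5 SD: {half_sd:.2f}  0.5 effect size: {effect_size:.2f}")
```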

Anchor-based approach

The advantage of the anchor-based approach is that change in the outcome measure score is linked to a meaningful external anchor that accounts for the patient’s perspective.3 Most commonly, the global rating of change score is used to define the MCID when attempting to identify change within a patient.3,8 This is relevant for the clinician who wants to know whether the intervention was effective over an episode of care. Anchor-based methods have been criticized for the effect of recall bias on long-term responsiveness.9 Recall bias occurs when a patient remembers best what has happened most recently and has a less clear memory of the more distant past. Patient reports of change have been found to strongly reflect the patient’s current health status rather than the amount of change from baseline.9,10 Anchor-based methods using global ratings have also been criticized for their inability to take into account the measurement precision of the global instrument.8 This becomes important when comparing worsening of a condition with improvement. Often, the minimal amount of change needed to register deterioration is less than that required to register a positive change in the condition. If the anchor lacks precision, the patient’s true response will be masked.
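A minimal sketch of the anchor-based ROC curve approach described in Table 2 is shown below. The change scores, the global rating of change (GROC) values, the anchor cut-point of +3, and the use of scikit-learn are all illustrative assumptions rather than a prescribed implementation.

```python
# Minimal sketch: anchor-based MCID via the ROC curve approach.
# Change scores and global ratings of change (GROC) are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve

change = np.array([0, 1, 1, 2, 2, 3, 3, 4, 5, 5, 6, 7, 8, 2, 1, 0, 4, 3, 6, 2])
groc   = np.array([0, 0, 1, 1, 2, 3, 2, 4, 5, 3, 4, 5, 6, 0, 1, -1, 3, 1, 5, 2])

# Dichotomize on the anchor; the cut-point (+3 here) is a choice, not a rule.
improved = (groc >= 3).astype(int)

fpr, tpr, thresholds = roc_curve(improved, change)

# Pick the change score closest to the top-left corner (0, 1) of the curve,
# i.e. the cut-point with the least misclassification.
distances = np.sqrt(fpr**2 + (1 - tpr)**2)
mcid_estimate = thresholds[np.argmin(distances)]
print(f"Anchor-based (ROC) MCID estimate: {mcid_estimate} points")
```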

Distribution-based approach

Distribution-based approaches to determining MCID scores are based on the statistical characteristics of the sample and, in turn, define change relative to the probability that it occurred by chance. One advantage of distribution-based methods is the ability to account for change beyond some level of random variation.3 Conversely, a weakness of distribution-based methods is that there are few agreed-upon benchmarks for establishing clinically significant improvement. Perhaps more importantly, distribution-based methods do not address the patient’s perspective of clinically important change, which is distinctly different from statistical significance.3,8

Distribution-based methods are also sample-specific. For example, in a study with a large sample size and wide distribution, an MCID score can still be extracted by statistical analysis alone even if actual change has not occurred. One of the more common distribution-based change indices is the minimal detectable change (MDC), which is considered the minimal amount of change that is not likely to be due to random variation in measurement.11 However, the MDC alone does not provide information regarding the clinical significance of this change.11 Mintken et al.12 reported an MCID of 8 percentage points and an MDC of 11·2 percentage points for the QuickDASH, demonstrating that the MCID can be smaller or larger than the MDC, given that one is calculated as a statistical threshold (MDC) while the other is based on a patient response-anchored method (MCID).
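The distinction between the two thresholds can be illustrated with a short calculation. The sketch below computes an MDC95 from an assumed baseline standard deviation and test–retest reliability and compares it with an assumed anchor-based MCID; the numbers are hypothetical and are not a re-analysis of the QuickDASH data cited above.

```python
# Minimal sketch contrasting the MDC (a statistical threshold) with an
# anchor-based MCID. All values are assumed for illustration only.
import math

sd_baseline = 16.0   # assumed baseline SD of the outcome measure
icc = 0.90           # assumed test-retest reliability in stable patients

sem = sd_baseline * math.sqrt(1 - icc)   # standard error of measurement
mdc95 = 1.96 * math.sqrt(2) * sem        # minimal detectable change (95% confidence)

mcid_anchor = 8.0    # assumed anchor-based MCID for the same measure

print(f"MDC95 = {mdc95:.1f} points, anchor-based MCID = {mcid_anchor:.1f} points")
# Here the 'clinically important' change falls within measurement error; the
# reverse can also occur, because the two indices answer different questions.
```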

Other Factors Influencing the Variability of MCID Score

While interpreting the MCID score as a standardized single score of therapeutic threshold is a tantalizing concept for both researchers and clinicians, the stability of a single MCID score has not been demonstrated in the literature (Table 1). Several factors influence the variability of the MCID score. First, MCID scores reported as a single point estimate based upon the ‘average score’ of the group are difficult to interpret as they lack associated confidence intervals representative of the wide distribution of actual change score values.10 This lack of a confidence interval contrasts with typical statistical practice, in which group means are reported with 95% confidence intervals to identify the range of values to be expected. At the individual level, reported MCID values may misclassify people below the mean as not having experienced a clinically important change when in fact they have.7
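One way to make the missing uncertainty visible is to resample the change scores that underlie an anchor-based point estimate. The sketch below bootstraps a 95% confidence interval around a within-patients mean-change MCID, using hypothetical change scores from patients who rated themselves improved on the anchor.

```python
# Minimal sketch: bootstrap a confidence interval around a mean-change
# ('within-patients score change') MCID point estimate. Data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
# hypothetical change scores of patients rated as improved on the anchor
improved_changes = np.array([1, 2, 2, 3, 3, 4, 5, 5, 6, 7, 8, 2, 3, 4, 6])

point_estimate = improved_changes.mean()

boot_means = [rng.choice(improved_changes, size=improved_changes.size,
                         replace=True).mean()
              for _ in range(5000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])

print(f"MCID point estimate: {point_estimate:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```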

A major limitation of the MCID is the wide range of available calculation methods, which produce a wide range of MCID scores for a single outcome scale (Tables 1 and 2). This myriad of methods leads to problems of interpretation and conflict when deciding which of the reported MCID values is most appropriate. Terwee et al.13 demonstrated this situation perfectly when they described the large variation in reported MCID scores for a single outcome scale, in one patient group, in one study, at one follow-up, when using multiple methods. In this case, Terwee et al.13 reported five different MCID scores, ranging from −4·2 to 18·9 points (with 95% of the values lying between −14·9 and 13·8), for the WOMAC physical function subscale when using five different MCID methods. The authors13 further state that it is unlikely that the variation could be explained by differences in disease severity, disease group, or length of follow-up since the same cohort of patients was analyzed. One possible explanation is that different MCID methods result in different MCID scores because of the multiple reported conceptual and methodological differences. Ostelo et al.14 supported this premise when they reported a wide range of MCID scores on commonly used back pain outcome measures. In that study, the MCID score for the Oswestry disability index ranged from 2·0 to 8·6 points on a 100-point scale.14 This wide variability becomes important when considering the paradigm shift from fee-for-service to pay-for-performance. Certainly, the clinician or clinic using 2·0 points as the MCID will appear to demonstrate better results than the clinician or clinic using 8·6 points as the MCID. Caution is needed if we are to adopt a punitive system in which reimbursement is withheld when certain quality indicators, including the MCID score, are not met for a particular patient. Given the inherent limitations in the MCID score, little confidence should be placed in it as a single quality indicator for reimbursement.

A third factor contributing to the variability in reported MCID scores is the study population. In particular, patient demographics and patient baseline status can significantly influence the MCID score.15 Note in Table 1 that, even within the same methodology, MCID scores for the Oswestry can vary from 10 to 17%. Terwee et al.13 found large variation in MCID scores when using a single method across several studies in different practice settings, and suggested that possible explanations for this variation include population characteristics such as age, disease group, disease severity, treatment, and period of follow-up. Wang et al.15 confirmed Terwee’s hypothesis, finding that MCID scores are context-specific and depend upon patient baseline and demographic characteristics. In studying the lower extremity functional scale, Wang et al.15 found that patients who were female, who presented with a high baseline functional status score, and who had sub-acute symptoms required lower change scores to report meaningful change. This highlights that the MCID score is not a fixed attribute and will fluctuate based on what is interpreted as important to the patient.15 Failure to acknowledge this limitation runs the risk of misclassifying patients below a pre-selected MCID as non-responders when in fact they have improved. Conversely, we also run the risk of overestimating the number of responders in patient groups with more acute symptoms or greater disease severity.

Discussion

The use of the MCID score alone when determining treatment effects may be limited due to the inherent limitations of the methodology and the baseline dependency of the sample. In trying to establish the responsiveness of an outcome measure using the MCID score, the question becomes which method is most appropriate. At this point, no single method appears to be sufficient. Previous authors have suggested comparing methodologies and reporting a range of values associated with the MCID score.13,16,17 However, when this has been attempted, the MCID score continues to show large variability rather than a convergence of values. This large variability suggests that the recommendation to use multiple methods and then triangulate the results into one value, or report a small range of values, is nearly impossible to implement given the lack of homogeneity between studies.3,13

Crosby and colleagues make an important point that ‘it may be time for the field to re-evaluate the terminology that is used to describe clinically meaningful change’,3 and have suggested the terms ‘criterion-referenced change’ and ‘precision-referenced change’ to describe estimates based on anchor-based methods and distribution-based measurement precision, respectively.3 The strengths and weaknesses of both anchor-based and distribution-based methods are acknowledged. However, given Jaeschke’s original definition of clinically meaningful change as ‘the smallest difference in score in the domain of interest which patients perceive as beneficial…’, the authors agree with others that MCID estimates should be based upon patient perspective using anchor-based methods.3,17,18 Methods failing to acknowledge the patient’s perspective of clinical significance (distribution-based methods) raise a separate question unrelated to the definition of MCID and are based purely on sample distribution alone.

Recommendations for the Clinician

The MCID score is an elusive concept that requires continued rigorous examination prior to application in a clinical or research setting. Understanding the current methods of calculation facilitates more informed interpretation and application of the MCID score. The clinician should understand the factors affecting MCID scores, which are specific to the population being studied and are not transferable across patient groups. This becomes important for an open dialogue with referring providers and third party payers. Cautious application of the MCID score, both in the clinical setting and in research, is prudent until a consensus can be reached on a method of calculation that addresses the limitations related to methodology, population, and baseline status.

Key Points

MCID strengths

Clinically, the MCID may be used as a threshold to detect change beyond that of random error, signaling patient response to treatment.

When using an anchor-based approach, the MCID is designed to bring the patient’s perspective to prominence to help guide clinical decision-making during the course of treatment.

For the clinical researcher, the MCID is often used in sample size calculations needed to adequately power studies of treatment effectiveness.

MCID weaknesses

The MCID is not a universal fixed attribute and cannot be transferred across patient populations or disease-specific states.

Lack of a universally accepted methodology to determine MCID results in a wide range of reported MCID values for a single outcome measure based on the methodology chosen.

A single, universally accepted MCID value for a specific outcome measure does not exist as currently reported MCID values will vary based on the population studied and the methodology chosen to derive the reported MCID value.

MCID scores reported as a single point estimate based upon the average score of the group lack associated confidence intervals representative of the wide distribution of actual change score values. Use of a single point estimate runs the risk of misclassifying people below the mean as not improved when, in fact, they have.

References

1. AIHW & Commonwealth Department of Health and Family Services. Report of the National Health Information Management Group Working Party on Health Outcomes Activities and Priorities, September 1996. First report on the national health priority areas, full report. Vol. 1. Canberra: AIHW; 1996.
2. Roach KE. Measurement of health outcomes: reliability, validity, and responsiveness. J Prosthet Orthot. 2006;18(1S):8–12.
3. Crosby RD, Kolotkin RL, Williams GR. Defining clinically meaningful change in health-related quality of life. J Clin Epidemiol. 2003;56:395–407.
4. Greenhalgh J, Meadows K. The effectiveness of the use of patient-based measures of health in routine practice in improving the process and outcomes of patient care: a literature review. J Eval Clin Pract. 1999;4:401–16.
5. Fitzpatrick R, Davey C, Buxton MJ, Jones DR. Evaluating patient-based outcome measures for use in clinical trials. Health Technol Assess. 1998;2(14):1–74.
6. Jaeschke R, Guyatt GH, Sackett DL, for the Evidence-Based Medicine Working Group. Users’ guides to the medical literature, III: how to use an article about a diagnostic test, B: what are the results and will they help me in caring for my patients? JAMA. 1994;271:703–7.
7. Beaton DE, Boers M, Wells GA. Many faces of the minimal clinically important difference (MCID): a literature review and directions for future research. Curr Opin Rheumatol. 2002;14:109–14.
8. Copay AG, Subach BR, Glassman SD, Polly DW, Schuler TC. Understanding the minimum clinically important difference: a review of concepts and methods. Spine J. 2007;7:541–6.
9. Norman GR, Stratford PW, Regehr G. Methodological problems in the retrospective computation of responsiveness to change: the lesson of Cronbach. J Clin Epidemiol. 1997;50(8):869–79.
10. Cook CE. Clinimetrics corner: the minimal clinically important change score (MCID): a necessary pretense. J Man Manip Ther. 2009;16(4):E82–3.
11. Haley SM, Fragala-Pinkham MA. Interpreting change scores of tests and measures used in physical therapy. Phys Ther. 2006;86:735–43.
12. Mintken PE, Glynn P, Cleland JA. Psychometric properties of the shortened disabilities of the Arm, Shoulder, and Hand Questionnaire (QuickDASH) and Numeric Pain Rating Scale in patients with shoulder pain. J Shoulder Elbow Surg. 2009;18(6):920–6.
13. Terwee CB, Roorda LD, Dekker J, Bierma-Zeinstra SM, Peat G, Jordan KP, et al. Mind the MIC: large variation among populations and methods. J Clin Epidemiol. 2010;63:524–34.
14. Ostelo RW, Deyo RA, Stratford P, Waddell G, Croft P, Von Korff M, et al. Interpreting change scores for pain and functional status in low back pain. Spine (Phila Pa 1976). 2008;33(1):90–4.
15. Wang YC, Hart DL, Stratford PW, Mioduski JE. Baseline dependency of minimal clinically important improvement. Phys Ther. 2011;91(5):675–88.
16. Wright AA, Cook CE, Baxter GD, Dockerty JD, Abbott JH. Approaches to defining major clinically important improvement of 4 performance measures in patients with hip osteoarthritis. J Orthop Sports Phys Ther. 2011;41(5):319–27.
17. Turner D, Schunemann HJ, Griffith LE, Beaton DE, Griffiths AM, Critch JN, et al. The minimal detectable change cannot reliably replace the minimal important difference. J Clin Epidemiol. 2010;63:28–36.
18. Jaeschke R, Singer J, Guyatt GH. Measurement of health status: ascertaining the minimal clinically important difference. Control Clin Trials. 1989;10:407–15.
19. Maughan EF, Lewis JS. Outcome measures in chronic low back pain. Eur Spine J. 2010;19(9):1484–94.
20. Pool JJ, Ostelo RW, Hoving JL, Bouter LM, de Vet HC. Minimal clinically important change of the Neck Disability Index and the Numerical Rating Scale for patients with neck pain. Spine (Phila Pa 1976). 2007;32(26):3047–51.
21. Salaffi F, Stancatti A, Silvestri CA, Ciapetti A, Grassi W. Minimal clinically important changes in chronic musculoskeletal pain intensity measured on a numeric rating scale. Eur J Pain. 2004;8(4):283–91.
22. Beurskens AJ, de Vet HC, Koke AJ. Responsiveness of functional status in low back pain: a comparison of different instruments. Pain. 1996;65(1):71–6.
23. Hagg O, Fritzell P, Nordwall A; Swedish Lumbar Spine Study Group. The clinical importance of changes in outcome scores after treatment for chronic low back pain. Eur Spine J. 2003;12(1):12–20.
24. Tashjian RZ, Deloach J, Porucznik CA, Powell AP. Minimal clinically important differences (MCID) and patient acceptable symptomatic state (PASS) for visual analog scales (VAS) measuring pain in patients treated for rotator cuff disease. J Shoulder Elbow Surg. 2009;18(6):927–32.
25. Tubach F, Ravaud P, Baron G, Falissard B, Logeart I, Bellamy N, et al. Evaluation of clinically relevant states in patient reported outcomes in knee and hip osteoarthritis: the patient acceptable symptom state. Ann Rheum Dis. 2005;64(1):34–7.
26. Binkley JM, Stratford PW, Lott SA, Riddle DL. The Lower Extremity Functional Scale (LEFS): scale development, measurement properties, and clinical application. North American Orthopaedic Rehabilitation Research Network. Phys Ther. 1999;79:371–83.
27. Carreon LY, Glassman SD, Campbell MJ, Anderson PA. Neck Disability Index, Short Form-36 Physical Component Summary, and pain scales for neck and arm pain: the minimum clinically important difference and substantial clinical benefit after cervical spine fusion. Spine J. 2010;10(6):469–74.
