Journal of Patient Experience. 2021 Nov 26;8:23743735211060811. doi: 10.1177/23743735211060811

Validation of the SDM Process Scale to Evaluate Shared Decision-Making at Clinical Sites

Floyd J Fowler Jr, Karen R Sepucha, Vickie Stringfellow, KD Valentine
PMCID: PMC8640277  PMID: 34869847

Abstract

The Shared Decision-Making (SDM) Process scale (scored 0-4) uses 4 questions about decision-making behaviors: discussion of options, pros, cons, and preferences. We use data from mail surveys of patients who made surgical decisions at 9 clinical sites and from a national web survey to evaluate the reliability and validity of the measure for assessing shared decision-making at clinical sites. Patients at sites using decision aids to promote shared decision-making for hip, knee, back, or breast cancer surgery had significantly higher scores than national cross-section samples of surgical patients for 3 of 4 comparisons and significantly higher scores for both comparisons with “usual care” sites. Reliability was supported by an intra-class correlation at the clinical site level of 0.93 and an average correlation of 0.56 between SDM scores for knee and hip surgery patients treated at the same sites. The results document the reliability and validity of the measure for assessing the degree of shared decision-making for surgical decisions at clinical sites.

Keywords: shared decision-making, quality measures, patient-centered care

Introduction

Shared decision-making (SDM) is consistently listed as one essential part of good quality medical care. The Patient-Centered Primary Care Collaborative, for example, lists SDM as a key element of the Patient-Centered Medical Home (PCMH) (1). The National Quality Forum (NQF) touts informing patients so they can actively participate in decision-making as a high priority and has worked to develop standards for decision aids to inform patients (2). Legal scholars King and Moulton argue that shared decision-making is essential to moving toward truly informed consent (3). Most concretely, the U.S. National Learning Consortium promotes shared decision-making as an essential component of quality medical care (4).

The Shared Decision-Making (SDM) Process scale is designed to assess the extent to which there was an interaction between provider and patient that would meet standards for shared decision-making (5-8). Based on an extensive body of evaluation, this measure has been endorsed by the National Quality Forum (NQF) to be used to assess the quality of interactions between patients and providers when surgical decisions are made (9).

One set of analyses on which the endorsement was based looked at its reliability. For example, breast cancer patients’ answers to the SDM Process items were compared with responses from independent staff who observed the interactions between patients and physicians, and there was evidence of good agreement (10). Short term (∼4 weeks) test–retest reliability with a sample of hip and knee osteoarthritis patients was 0.64 (11).

Other published studies have found consistent evidence of its construct validity. Higher SDM Process scores have been associated with high decision quality, as measured by informed patients who received treatment consistent with their goals and concerns (11), less reported regret (12), and lower decision dissonance (where dissonance reflected lack of alignment with patients’ goals and concerns) (13).

In addition to evidence of reliability and validity at the individual patient level, there is the further question of how well the measure works to assess the performance of clinical sites in providing shared decision-making to their patients facing surgical decisions (14). Because one important use of a measure like this is to evaluate clinical site quality, it is crucial to document that the measure produces reliable and valid information about decision-making at a site. The main goal of this paper is to present evidence about how well this measure performs when used to evaluate the extent to which clinical practices are practicing shared decision-making.

Methods

Data Sources

A fundamental premise of our analysis is that sites that give high-quality decision aids to patients when decisions are made will do more shared decision-making than sites that do not routinely use decision aids.

The analyses presented in this paper are derived from the following sources, all of which collected data from patients using the SDM Process questions.

  1. SDM Demonstration sites (SDM Demo sites): A number of clinical sites across the country were part of an experiment in integrating decision aids into routine clinical care. Two of those sites surveyed patients who received a decision aid. Decisions for which they have data include knee and hip replacement and surgery for lower back pain. The surveys of patients from the SDM Demo sites were conducted by mail, with at least one follow-up mailing and often a reminder phone call. The surveys were done within 6 months of the decision. The data collection was carried out between 2009 and 2014 (15).

  2. Decision Quality Field Tests were set up to establish the reliability and validity of decision quality measures. Two field tests provide data for the current analyses.
    1. The first field test was a retrospective, mailed survey in 2009 among adults 40 years of age and older with hip or knee osteoarthritis who made a decision about total hip or knee replacement with their physician in the prior 2 years (11). Briefly, participants were enrolled from 3 academic hospitals and self-identified via newspaper advertisements. At one hospital, patients were recruited from a sample that had received a video decision aid for treatment for knee or hip osteoarthritis prior to meeting with a surgeon. This group was categorized as a “DQ (Decision Quality) SDM site.” The other respondents were categorized as a “DQ usual care site.”
    2. The second field test was a prospective, longitudinal survey of breast cancer patients and was conducted at 4 cancer centers from February 2010 through February 2011. One of the centers had a formal decision support system that provided patient decision aids and decision coaching and was categorized as a “DQ SDM site.” The others did not and were categorized as “DQ usual care sites.” Data collection, by mail and phone, with a small gift incentive, took place about 4 weeks after a treatment decision was made.
  3. MOORE SDM Study: Surveyed patients in 2014 to 2015 at a single site who met with a specialist to discuss treatment for hip or knee osteoarthritis. Some patients received a decision aid as part of their care. The assessment of the decision process was done by mail approximately 6 months after the visit or 6 months after surgery for those who had surgery (11).

  4. DECIDE OA (osteoarthritis) STUDY: This 3-site study in 2016 to 2017 randomly assigned patients to 1 of 2 decision aids before meeting with a surgeon to discuss total knee or hip replacement. Data were collected from patients by mail survey about 6 months after their visit or 6 months after surgery (for those who had surgery) (16).

  5. The TRENDS survey data: This 2012 survey used an online Internet panel maintained by Knowledge Networks. Panel members were recruited from a national probability sample of households that were provided with Internet equipment and service if they lacked it. A cross-section sample of adults 40 or older was asked to participate in the TRENDS survey, first by reporting whether or not they had made one or more of 10 common decisions in the preceding 2 years and then answering questions about their interactions with providers around those decisions (17). These data gave us another set of estimates of SDM Process scores for cross-sections of patients for 3 surgical decisions: knee and hip replacement and surgery for low back pain. The overall response rate was 59%.

A summary of the various data sources used in the analyses is included in Table 1.

Table 1.

Summary of Data Sources Used in the Validity and Reliability Analyses.

Data source Clinical areas included Timing of survey # Responses Validity analysis: SDM sites compared with “Usual Care” Used in clinic-level reliability analysis?
1. SDM Demo Sites Knee and Hip Osteoarthritis; Low back pain ≤6 months after decision 397 2 of 2 sites had SDM; used in validity analysis No
2. DQ Field Tests Knee and Hip Osteoarthritis ≤2 years after visit with surgeon 318 1 of 3 sites had SDM; used in validity analysis Yes
Breast Cancer About 4 weeks after surgery 248 1 of 4 sites had SDM; used in validity analysis No
3. MOORE SDM Study Knee and Hip Osteoarthritis About 6 months after visit with surgeon or after surgery 637 1 site, subset of patients had SDM; not used in validity analysis Yes
4. DECIDE OA Study Knee and Hip Osteoarthritis About 6 months after visit with surgeon or after surgery 944 3 of 3 sites had SDM; not used in validity analysis Yes
5. TRENDS Knee and Hip Osteoarthritis; Low back pain ≤2 years after visit 370 No SDM; used in validity analysis No

Measures

The SDM Process score consists of 4 questions that can be tailored to a specific intervention or treatment:

  1. How much did a doctor (or health care provider) talk with you about the reasons you might want to (HAVE INTERVENTION)? with responses of a lot, some, a little, or not at all.

  2. How much did a doctor (or other health care provider) talk with you about reasons you might not want to (HAVE INTERVENTION)? with responses of a lot, some, a little or not at all.

  3. Did any of your doctors (or health care providers) ask you if you wanted to (HAVE INTERVENTION)? with responses of yes or no.

  4a. Did any of your doctors (or health care providers) explain that you could choose whether or not to (HAVE INTERVENTION)? with responses of yes or no.

OR

  4b. Did any of your doctors (or health care providers) explain that there were choices in what you could do to treat your [condition]? with responses of yes or no.

OR

  4c. Did any of your doctors (or health care providers) talk about (ALTERNATIVE INTERVENTION) as an option for you? with responses of yes or no.

The surveys in the various settings from which data were drawn used the first 3 questions nearly exactly as worded. An example of a slight variation on question 3 was asking whether patients wanted to have a “lumpectomy or mastectomy.” With respect to the issue of whether patients were told there were options or alternatives, the exact questions asked have varied more across the various settings in which this series has been used (see items 4a, 4b, and 4c).

Analysis

SDM process score

Responses of “a lot” to items 1 and 2 receive 1 point; answers of “some” to items 1 and 2 receive 0.5 points; and answers of “yes” to items 3 and 4 receive 1 point. All other responses receive 0 points. A total score is calculated by summing up the points, with a range from 0 to 4. Note this scoring approach is slightly different from some previously published studies, based on a recent analysis showing that using 0.5 instead of 1 for a “some” response has some psychometric advantages (18). A total score is only calculated when all questions are answered.
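The scoring rule can be written out as a short function (an illustrative sketch, not code from the study; the function name and response strings are our own):

```python
def sdm_process_score(q1, q2, q3, q4):
    """Compute the SDM Process total score (range 0-4).

    q1, q2: "a lot" / "some" / "a little" / "not at all"
            (the pros and cons items)
    q3, q4: "yes" / "no" (the preference and options items)
    Returns None if any item is unanswered, since a total score
    is only calculated when all questions are answered.
    """
    if None in (q1, q2, q3, q4):
        return None
    amount_points = {"a lot": 1.0, "some": 0.5,
                     "a little": 0.0, "not at all": 0.0}
    yes_no_points = {"yes": 1.0, "no": 0.0}
    return (amount_points[q1] + amount_points[q2]
            + yes_no_points[q3] + yes_no_points[q4])
```

For example, a patient answering “a lot,” “some,” “yes,” and “no” would score 2.5.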

Reliability at clinical site level

For this analysis, we used data from 7 sites in the various DQ Field tests. All the patients had undergone either knee or hip replacements. Patients at 4 of the sites had all been offered decision aids, patients at 2 sites had not, and at the seventh site, patients had been randomized to seeing a decision aid or not. In the last site, the patients who had and had not been offered decision aids were treated as having had care at different sites (because the decision protocols were clearly different), producing an effective number of 8 sites.

Within each of the sites, random groups of patients who were making the same decision were created, with a minimum size of 25 patients. In all, 54 patient groups were created, based on answers from 1701 patients, and the mean SDM process score was calculated for each group.

The reliability of the measure of patient decision-making experience was calculated in 2 ways. First, using perhaps the most common approach to reliability, we calculated the intra-class correlation coefficient by dividing the between-site variance by the total variance (19). Second, a less commonly used approach is to assess how consistent the estimates are from different groups of patients who received care from the same site for the same decision. Each group from a site was paired with another group from that same site that had made the same decision (a total of 27 pairs), and the Pearson correlation coefficient of the average scores from the patient group pairs across all the sites was calculated (20). This process of randomly assigning participants to groups and calculating the correlation was repeated 10,000 times so that a mean and 95% confidence interval (CI) could be calculated.

Validity at clinical site level

We assessed whether clinical sites that were making a special effort to implement shared decision-making (SDM Demo sites and DQ Field Test sites that routinely had patients review decision aids prior to making a surgical decision) had higher SDM Process scores than sites practicing “usual care” (DQ Usual care) or than cross-sections of patients (TRENDS survey) who faced the same decision. We used t tests to compare mean SDM Process scores from different settings, applying a Welch's correction when needed. We also calculated Cohen's d effect sizes for all comparisons. This effect size expresses the difference between group means in standard deviation units, where a d of 0.2 indicates a small effect, a d of 0.5 a medium effect, and a d of 0.8 a large effect.
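For reference, these comparison statistics have simple closed forms; the sketch below uses the standard formulas (pooled-SD Cohen's d) and is not the study's analysis code:

```python
import math
import statistics


def welch_t(x, y):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    se = math.sqrt(vx / len(x) + vy / len(y))
    return (statistics.mean(x) - statistics.mean(y)) / se


def cohens_d(x, y):
    """Cohen's d: mean difference in units of the pooled standard
    deviation (roughly 0.2 small, 0.5 medium, 0.8 large)."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (statistics.mean(x) - statistics.mean(y)) / pooled_sd
```

Unlike the classic pooled t test, Welch's statistic keeps separate variance terms for each group, which is why it is the appropriate choice when the homogeneity of variance assumption fails, as in several of the comparisons below.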

All the data collection protocols for results reported in this paper were overseen by the appropriate Institutional Review Boards. None of the data used in this paper are in publicly available repositories, but they may be available on specific request to the authors or their collaborators.

Results

Reliability at the Clinical Site Level

The more traditional measure of reliability, the intra-class correlation coefficient (proportion of the total variance accounted for by the between-site variance), was 0.93. When we calculated the correlations of the average SDM Process scores of 27 random pairs of patient groups that had made the same decisions at the same sites across 8 clinical sites, the average Pearson correlation coefficient from 10,000 samples was 0.56 (95% CI 0.56, 0.56; P < .001).

Validity at the Clinical Site Level

In Table 2, for osteoarthritis of the knee and hip, patients in the SDM Demo sites where decision aids were used reported significantly better decision processes than a cross-section sample of adults who faced the same decisions (2.92 vs. 2.47, P < .001, d = 0.49 and 2.93 vs. 2.12, P < .001, d = 0.84 respectively). The difference in SDM Process Scores for patients making decisions about lower back pain (2.98 vs. 2.75, P = .12, d = 0.22) was in the expected direction but was not large enough to reach statistical significance.

Table 2.

Mean Shared Decision-Making Process Scores at SDM Demo Sites and From a National Sample of Patients (TRENDS) for 3 Orthopedic Procedures.

Data source Decision topic N Mean decision process score SD decision process score t comparing SDM demo sites with TRENDSa P d
TRENDS Surgery: Knee Pain 160 2.47 1.11
SDM Demo Sites Knee Osteoarthritis 224 2.92 0.783 4.46 <.001 0.49
TRENDS Surgery: Hip Pain 56 2.12 1.276
SDM Demo Sites Hip Osteoarthritis 123 2.93 0.786 4.39 <.001 0.84
TRENDS Surgery: Low Back Pain 154 2.75 1.101
SDM Demo Sites Herniated Disc + Spinal Stenosis 50 2.98 0.802 1.58 .118 0.22
a

All analyses used a Welch's 2-sample t test as the homogeneity of variance assumption was not met.

Table 3 presents the data from the Decision Quality Field Tests comparing the SDM Process scores for patients who were treated at the site encouraging shared decision-making with patients from the 3 usual care sites where knee and hip surgery decisions were made. We also have comparable TRENDS data for comparison. The DQ SDM site had significantly better SDM Process scores than usual care sites with no formal shared decision-making (2.80 vs 2.43, P < .001, d = 0.42). The DQ SDM site scores were better than the national TRENDS sample as well (2.80 vs. 2.38, P < .001, d = 0.41).

Table 3.

Mean Shared Decision-Making Process Scores From DQ SDM Site, DQ Usual Care Sites, and a Cross-Section Sample of Adults (TRENDS) Who Made Decisions for How to Treat Osteoarthritis of the Hip or Knee.

Data source N Mean decision process score SD decision process score t comparing DQ SDM site with usual care sites and TRENDSa P d
DQ SDM site 147 2.8 0.81
DQ Usual Care Sites 171 2.43 0.95 3.7 <.001 0.42
TRENDS National survey of adults who made decisions about knee or hip replacementa 216 2.38 1.16 4.12 <.001 0.41
a

Analysis used a Welch's 2-sample t test as the homogeneity of variance assumption was not met.

 Table 4 shows that the site-level SDM Process Scores from breast cancer patients surveyed shortly after the decision in the Decision Quality Field Tests were significantly better than scores from clinical sites where there was no intervention to promote shared decision-making (2.74 vs 2.32, P < .05, d = 0.47).

Table 4.

Mean Shared Decision-Making Process Scores From DQ Field Test With One SDM Site and 3 “Usual Care” Sites for Decisions About Surgical Treatment for Early Stage Breast Cancer.

Data source N Mean decision process score SD decision process score t comparing DQ SDM Site with usual care sites P d
DQ SDM Site 33 2.74 0.78
DQ Usual Care Sites 227 2.32 0.92 2.5 0.013 0.47

Sample characteristics for each data source used in Tables 2 to 4 are provided in Tables A to C in the Appendix.

Discussion

These findings are an important addition to the published literature on measurement because of the growing interest in the use of shared decision-making as one way to assess the quality of care at clinical sites (1,4). For decisions to have hip and knee surgery, the ICC showed excellent reliability of the SDM Process scale (0.93) (22). A perhaps less familiar reliability measure, the Pearson correlation of paired patient-group means, showed that randomly paired groups of patients getting their care at the same sites had reasonably consistent average scores (0.56) (19). Further, there was good evidence of construct validity: practices making a special effort to do shared decision-making, in the form of providing patient decision aids, had higher scores in all 6 of our comparisons, 5 clearly statistically significant and the sixth in the expected direction, though falling short of statistical significance (P = .12).

We make 2 recommendations about how to use this measure to evaluate clinical practice: First, there is marked variation in decision-making processes across different types of decisions (17,21). Although one could argue that shared decision-making standards should be the same regardless of the decision, we think fair comparisons among clinical sites should be for the same procedures or set of options. Second, the questions apply equally well to patients who do and do not decide to have the intervention. However, in clinical practice, we have found that it is very difficult to reliably and consistently identify patients who discuss a decision with a doctor but do not have the intervention. For comparing sites, we recommend sampling only those who actually received the interventions, based on medical records.

There are limits to the data we currently have. The reliability data come from only 2 decisions. Larger samples and testing across more decisions and more sites would provide a stronger basis for assessing reliability.

Another potential concern is that the period of time between the time the decision was made and the questions were answered varied, from shortly after the decision was made up to 2 years. While one study has shown that the lag time does not affect results (22), we hope future studies can strengthen the data.

Finally, the TRENDS data were collected on the web and the other surveys were done using paper. Since both approaches are self-administered, effects of mode on answers are likely minimal, but we did not have a way to test for mode effects.

A number of other measures have been used to attempt to measure shared decision-making. A recent review looked at psychometric evidence for the validity of 40 different measures (23); however, none stood out as having a particularly strong psychometric profile. Another study compared this SDM Process scale with the SURE (24) and CollaboRATE (25) scales, 2 measures that have been widely used to assess shared decision-making. Although all are used to measure the decision-making process, SURE measures how prepared patients feel to make a decision, and CollaboRATE is a sum of 3 patient ratings of the physician's effort to involve the patient. Despite the differences in approach, the 3 measures were found to produce similar substantive results (26). The SURE and SDM Process measures, in particular, showed strong evidence of validity (26). The SDM Process Scale has demonstrated strong psychometric properties, as good as or stronger than other available measures.

In summary, some key strengths of this measure are that it is short, easy to complete, and easy to tailor to almost any decision context or medical intervention. It focuses on patient descriptions of their interactions, rather than their ratings, which our testing suggests is an asset in obtaining meaningful patient reports about their interactions with their providers. It has been translated into Spanish and was shown to work well with Spanish speakers (12). We have presented a generic version of the questions, but slight wording changes to fit different situations do not appear to affect the validity so long as the basic target of the questions is maintained. Moreover, because the questions focus on behaviors, the results provide a clear direction for clinicians interested in improving their practice. It is available for use at no charge upon request to the authors.

Conclusions

This paper presents evidence documenting the reliability and validity of the SDM Process score, a short, patient-reported survey that can be used to assess the extent to which shared decision-making is being used in clinical sites across a range of health care decisions. This NQF-endorsed measure can make an important contribution to evaluating the extent to which clinical care is meeting standards for patient-centered care.

Appendix: Demographic Characteristics of Samples

Table A.

Demographic Characteristics of Respondents in Table 2.

Demographic characteristics Demo site, knee osteoarthritis TRENDS knee pain decision Demo site, hip osteoarthritis TRENDS hip pain decision Demo site, back disc or stenosis decision TRENDS low back pain decision
Percent male 39% 48% 46% 57% 48% 52%
Age <40 2% 0 2% 0 28% 0
Age 40-64 52% 52% 52% 48% 44% 63%
Age 65+ 45% 48% 46% 52% 28% 37%
Education HS graduate or less 28% 43% 23% 34% 40% 43%
Education some college 27% 19% 25% 25% 26% 23%
Education college graduate or more 45% 38% 52% 41% 34% 34%
Ethnicity, white non-Hispanic 97% 80% 99% 79% 92% 82%

Table B.

Demographic Characteristics of Respondents in Table 3.

Demographic characteristics DQ SDM site, osteoarthritis hip or knee DQ usual care site, osteoarthritis, hip or knee TRENDS, knee or hip replacement decision
Percent male 40% 50% 50%
Age <40 0 1 0
Age 40-64 58% 61% 51%
Age 65+ 42% 38% 49%
Education HS graduate or less 20% 18% 41%
Education some college 23% 25% 20%
Education college graduate or more 57% 57% 39%
Ethnicity, white non-Hispanic 99% 92% 80%

Table C.

Demographic Characteristics of Respondents in Table 4.

Demographic characteristics DQ SDM Site, early stage breast cancer DQ Usual Care site, early stage breast cancer
Percent female 100% 100%
Age <40 6% 7%
Age 40-64 79% 65%
Age 65+ 15% 27%
Education HS graduate or less 16% 11%
Education some college 28% 31%
Education college graduate or more 56% 58%
Ethnicity, white non-Hispanic 78% 92%

Footnotes

Authors’ Note: All of the data used in this paper were reviewed and the protocols approved by the appropriate Institutional Review Boards (IRB). The New England IRB oversaw the protocols for the TRENDS survey and deemed the project exempt because respondents could not be identified, either directly or by links, by the investigators. The Partners Human Research Committee at Massachusetts General Hospital (now Massachusetts General Brigham) reviewed and approved the protocols for Demo Site data (2006P002025), the Decision Quality Field Tests (2008P02488), the Moore SDM Study (2013P001794), and the DECIDE OA Study (2016P00229). Except for the TRENDS study, which was declared “exempt,” all the subjects whose survey responses are reported in this study were provided with information that was reviewed and approved by the appropriate Institutional Review Board. In all cases, because participating in the surveys was deemed minimal risk, the IRBs ruled that the act of completing the self-administered survey, on paper or on the web, implied giving informed consent.

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Agency for Healthcare Research and Quality (grant number 1R01HS025718, 1503-28799, NA). The TRENDS survey, Demo Site studies and the Decision Quality Field tests were funded by the former Informed Medical Decisions Foundation, a not-for-profit organization devoted to informing and involving patients in medical decisions about their health care. The Foundation was dissolved in April 2014. The Moore SDM Study was funded by a grant from the Gordon and Betty Moore Foundation. Funding for the DECIDE OA study was provided by an award from the Patient-Centered Outcomes Research Institute (PCORI) (CDR#1503-28799). Some of KRS's, KDV's and FJF's time for the analysis was supported by a grant from the Agency for Healthcare Research and Quality (1R01HS025718-01A1). None of the funding organizations played any role in the analysis of data or preparation of this manuscript. We explicitly note that the statements presented in this publication are solely the responsibility of the authors and do not necessarily represent the views of PCORI, its Board of Governors or Methodology Committee.

ORCID iDs: Floyd J. Fowler https://orcid.org/0000-0001-5889-6362

Vickie Stringfellow https://orcid.org/0000-0002-9474-0100

References

  • 1.Patient-centered Primary Care Collaboration. Joint principles of the patient-centered medical home. 2007. Available from: https://www.pcpcc.org/metrics/shared-decision-making. Accessed March 27, 2021.
  • 2.National Quality Forum. National standards for the certification of decision aids. 2016. Available at: https://www.qualityforum.org/Publications/2016/12/National_Standards_for_the_Certification_of_Patient_Decision_Aids.aspx. Accessed March 27, 2021.
  • 3.King JS, Moulton BW. Rethinking informed consent: the case for shared medical decision-making. Am J Law Med 2006;32(4):429-501. [DOI] [PubMed] [Google Scholar]
  • 4.National Learning Consortium. Shared decision-making fact sheet. 2013. https://www.healthit.gov/sites/default/files/nlc_shared_decision_making_fact_sheet.pdf. Accessed March 27, 2021.
  • 5.Charles C, Gafni A, Whelan T. Decision-making in the physician-patient encounter: revisiting the shared treatment decision-making model. Soc Sci Med. 1999;49(5):651-61. [DOI] [PubMed] [Google Scholar]
  • 6.Makoul G, Clayman ML. An integrative model of shared decision making in medical encounters. Patient Educ Couns. 2006;60:301-12. [DOI] [PubMed] [Google Scholar]
  • 7.Sepucha K, Mulley AG. A perspective on the patients’ role in treatment decisions. Med Care Res Rev. 2009;66(1):53S-74S. [DOI] [PubMed] [Google Scholar]
  • 8.National Quality Partners Playbook™: Shared decision making in healthcare. 2018. https://store.qualityforum.org/products/national-quality-partners-playbook™-shared-decision-making. Accessed March 27, 2021.
  • 9.National Quality Forum. Shared decision making process (#2962) Measures, Reports and Tools. Available at: https://www.qualityforum.org/Measures_Reports_Tools.aspx. Accessed April 22, 2021.
  • 10.Pass M, Belkora J, Moore D, Volz, S, Sepucha, K. Patient and observer ratings of physician shared decision making behaviors in breast cancer consultations. Patient Educ Couns. 2012;88(1):93-9. doi: 10.1002/pon.3093 [DOI] [PubMed] [Google Scholar]
  • 11.Sepucha KR, Feibelmann S, Chang Y, Clay CF, Kearing SA, Tomek I, et al. Factors associated with the quality of patients’ surgical decisions for treatment of hip and knee osteoarthritis. J Am Coll Surg. 2013;217(4):694-701. [DOI] [PubMed] [Google Scholar]
  • 12.Sepucha K, Feibelmann S, Chang Y, Hewitt HS, Ziogas A Measuring the quality of surgical decisions for Latina breast cancer patients. Health Expect. 2015;18(6):2389-400. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Fowler FJ, Gallagher PM, Drake KM, Sepucha K. Decision dissonance: evaluating an approach to measuring the quality of surgical decisions. Jt Comm J Qual Patient Saf 2013;39(3):136-44. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Dyer N, Sorra JS, Cleary P, Hays R. Psychometric properties of the consumer assessment of healthcare providers and systems (CAHPS) clinician and group adult visit survey. Med Care. 2012;50(Suppl):S28-34. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Sepucha KR, Feibelman S, Abdu WA, Clay CF, Cosenza C, Kearing S, et al. Psychometric evaluation of a decision quality instrument for treatment of lumbar herniated disc. Spine. 2012;17(18):1602-16. [DOI] [PubMed] [Google Scholar]
  • 16.Mangla M, Bedair H, Chang Y, Daggett S, Dwyer MK, Freiberg, AA et al. Protocol for a randomised trial evaluating the comparative effectiveness of strategies to promote shared decision making for hip and knee osteoarthritis (DECIDE-OA study). BMJ Open 2019;9(2):e024906. doi: 10.1136/bmjopen-2018-024906. PMID: 30804032; PMCID: PMC6443066. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Fowler FJ, Gerstein BS, Barry MJ. How patient centered are medical decisions? JAMA Intern Med. 2013;173(13):1215-21. [DOI] [PubMed] [Google Scholar]
  • 18.Valentine KD, Vo H, Fowler FJ, Brodney S, Barry MJ, Sepucha KR. Development and evaluation of the shared decision- making process scale: a short patient-reported measure. Med Decis Making. 2021;41(2):108-19. [DOI] [PubMed] [Google Scholar]
  • 19.Fleiss J. The Design and Analysis of Clinical Experiments. Hoboken, New Jersey: Wiley and Sons; 1999. [Google Scholar]
  • 20.Rousson V, Gasser T, Seifert B. Assessing the intra rater, interrater and test retest reliability of continuous measurements. Stat Med. 2002;21(22):3431-46. [DOI] [PubMed] [Google Scholar]
  • 21.Zikmund-Fisher B, Couper M, Singer E, Ubel PA, Ziniel, S, Fowler FJ, et al. Deficits and variations in patients’ experience with making 9 common medical decisions: the DECISIONS survey. Med Decis Making 2010;30:85S-95S. [DOI] [PubMed] [Google Scholar]
  • 22.Sepucha KR, Langford AT, Belkora JK, Chang Y, Moy B, Partridge AH, et al. Impact of timing on measurement of decision quality and shared decision making: longitudinal cohort study of breast cancer patients. Med Decis Making. 2019;39(6):642-50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Gärtner FR, Bomhof-Roordink H, Smith IP, Scholl I, Stiggelbout A, Pieterse AH. The quality of instruments to assess the process of shared decision making: a systematic review. PLoS One 2018;13(2):e0191747. 10.1371/journal.pone.0191747 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Legare F, Kearing S, Clay K, Gagnon S, D'Amours D, Rousseau M, et al. Are you SURE? Assessing patient decisional conflict with a 4-item screening test. Can Fam Physician 2010;56(8):e308-14. [PMC free article] [PubMed] [Google Scholar]
  • 25.Barr PJ, Thompson R, Walsh T, Grande SW, Ozanne E, Elwyn G. The psychometric properties of CollaboRATE: a fast and frugal patient-reported measure of the shared decision-making process. J Med Internet Res. 2014;16(1):e2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Brodney S, Fowler FJ, Barry MJ, Chang Y, Sepucha K. Comparison of three measures of shared decision making: SDM Process_4, CollaborRATE, and SURE scales. Med Decis Making. 2019;39(6):673–680. doi: 10.1177/0272989X19855951 [DOI] [PMC free article] [PubMed] [Google Scholar]
