. Author manuscript; available in PMC: 2022 Jan 1.
Published in final edited form as: Med Decis Making. 2021 Aug 3;42(1):105–113. doi: 10.1177/0272989X211029267

Using standardized videos to examine the validity of the Shared Decision Making Process Scale: Results of a randomized online experiment

KD Valentine 1,2, Brittney Mancini 1, Ha Vo 1, Suzanne Brodney 1, Carol Cosenza 3, Michael J Barry 1,2, Karen R Sepucha 1,2
PMCID: PMC8633028  NIHMSID: NIHMS1715129  PMID: 34344233

Abstract

Background:

The Shared Decision Making (SDM) Process scale is a brief, patient-reported measure of SDM with demonstrated validity in studies of surgical decision making. Herein, we examine the validity of its scores for assessing SDM in cancer screening and medication decisions using standardized videos of good-quality and poor-quality SDM consultations.

Method:

An online sample was randomized to a clinical decision (colon cancer screening or high cholesterol) and a viewing order (good-quality video first or poor-quality video first). Participants watched both videos, completing a survey after each video. Surveys included the SDM Process scale and the 9-item Shared Decision Making Questionnaire (SDM-Q-9); higher scores indicate greater SDM. Multilevel linear regressions tested whether video, order, or their interaction predicted SDM Process scores. The area under the receiver operating characteristic curve (AUC) was calculated to assess how well SDM Process scores classified the videos. The correlation between SDM Process and SDM-Q-9 scores assessed construct validity. Heterogeneity analyses were also conducted.

Result:

In the sample of 388 participants (68% white, 70% female, average age 45), good-quality videos received higher SDM Process scores than poor-quality videos (ps<0.001), and those who viewed the good-quality high cholesterol video first tended to rate the videos higher. SDM Process scores were related to SDM-Q-9 scores (rs>0.58; ps<0.001). The AUC was poor for the high cholesterol model (0.69) and fair for the colorectal cancer model (0.79). Heterogeneity analyses suggested that individual differences were predictive of SDM Process scores.

Conclusion:

The SDM Process scale showed good evidence of validity in a hypothetical scenario but was limited in its ability to classify good-quality and poor-quality videos accurately. Considerable heterogeneity in scoring existed, suggesting that individual differences played a role in evaluating good- and poor-quality SDM conversations.

Introduction

The SDM Process scale is a short, patient-reported measure that has been used extensively to examine the amount of shared decision making (SDM) patients experience.1–5 The measure focuses on core aspects of SDM, including the discussion of options, pros and cons, and the patient’s preferences. Two national studies in the United States used items from the SDM Process scale to examine the quality of common medical decisions. The results showed very little evidence of SDM for common decisions such as cancer screening and medications for high cholesterol or high blood pressure.6,7

The ability of an SDM measure to discriminate between interactions in which clinicians meaningfully engage patients and those in which they do not is an important measurement property. Although several measures of SDM exist, a recent systematic review of 40 such measures indicated that good-quality evidence of validity was largely lacking for these instruments.8 One measure discussed in that review was an early (unpublished) version of the SDM Process scale. Since then, the SDM Process scale has been tested extensively; this work found the scale to be robust to scoring variation and provided evidence of its validity in surgical decision contexts.9 The SDM Process scale has also demonstrated the ability to distinguish between those who had received formal decision support and those who had not in surgical decisions.4 However, evidence of this type of validity has not yet been demonstrated for cancer screening or medication decisions. The very limited use of SDM and patient decision aids in cancer screening and medication settings, and the relatively low prevalence of these decisions for any one primary care doctor, make it difficult to design a study to collect this type of validity data in routine clinical care settings.

The purpose of this study is to examine whether the SDM Process scale can distinguish between a good-quality and a poor-quality SDM interaction and classify the good-quality interaction as better than the poor-quality one (discriminant validity), as well as to gather evidence of convergent validity in non-surgical decision contexts.

Methods

Design

This project consisted of two studies: one focused on decision making for colorectal cancer screening and the other on decision making about taking medication for high cholesterol. In each study, participants were randomly assigned to view videos of both good and poor quality SDM conversations between a physician and a patient actor in a specified order. Those randomized to the good quality first group viewed the good quality SDM video followed by the poor quality SDM video, while those randomized to the poor quality first group viewed the poor quality SDM video followed by the good quality SDM video. Participants completed surveys after watching each video to rate the amount of SDM in the conversation. The study was powered to test our primary hypothesis: that mean SDM Process scores would be higher for the good quality videos than for the poor quality videos. With 400 respondents (200 in each clinical condition), we would have 80% power to detect a difference of 0.33 SD between the ratings of the good and poor quality videos for each clinical condition.
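As a rough illustration of this kind of power calculation, the sketch below uses statsmodels and assumes a paired (within-participant) t-test at a two-sided alpha of 0.05. These assumptions are illustrative only; the article does not state the exact test, alpha, or assumed correlation between the two ratings behind the reported 0.33 SD figure, so this sketch will not necessarily reproduce that number.

```python
# Minimal power-calculation sketch. The assumptions (paired t-test of the two
# video ratings, two-sided alpha = 0.05) are illustrative; the article does not
# specify the exact calculation behind the reported 0.33 SD figure.
from statsmodels.stats.power import TTestPower

paired_power = TTestPower()  # one-sample / paired-differences t-test power

# Smallest standardized effect detectable with 80% power given 200 respondents
# per clinical condition, each of whom rates both videos.
detectable_d = paired_power.solve_power(nobs=200, alpha=0.05, power=0.80,
                                        alternative="two-sided")
print(f"Detectable standardized difference with n=200: d = {detectable_d:.2f}")
```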

Participants and Procedure

The sample was obtained from Marketing Systems Group (MSG) from March 2020 to April 2020 and consisted of participants who were part of an online non-probability panel. MSG sent invitation emails introducing the survey, including its approximate length and the available incentive (panel participation points), to potential respondents for each of the two studies: colorectal cancer screening and medication for high cholesterol. Invitations were sent to a sample of adults aged 18 to 75 who spoke English and lived in the United States. The sample was selected to have an approximately equal age distribution above and below 40 years old (i.e., 18–39 and 40–75). Eligible respondents were those without a prior diagnosis of colorectal cancer (if assigned to the colorectal cancer videos) or a history of heart attack or stroke (if assigned to the high cholesterol videos).

In each study, participants were screened for eligibility at the beginning of the survey and eligible participants were divided into two age groups (18–39 and 40–75). Then, respondents were randomly assigned to one of two groups: the good quality SDM video first (and the poor quality SDM video second) or the poor quality SDM video first (and the good quality SDM video second). Participants accessed a link to the videos electronically and completed a survey after the first video and again after the second video. After the initial mailing, additional invitations were sent to MSG panel participants in age groups that were underrepresented. Cells were closed when the quotas were reached.

The study was approved by the Institutional Review Board at Mass General Brigham (protocol# 2019P001434) and was registered at ClinicalTrials.gov (ID# NCT04317274).

SDM Videos

The videos were taken from two SDM training courses that were created for clinicians and researchers (https://mghdecisionsciences.org/tools-training/clinician-training/). Scripts for the good and poor quality videos were developed by a team with expertise in SDM and the clinical areas (primary care physicians and gastroenterologists) as well as patient partners. In the “poor” quality cases, the physician focuses on one option, makes a strong recommendation for it, and tries to persuade the patient to follow it. In the “good” quality cases, the physician describes multiple options, presents both benefits and harms, and elicits the patient’s preferences. Although the scripts aimed to keep conversation length similar for both videos, the “good” quality SDM interactions were longer than the “poor” quality SDM interactions (Table 1 displays information about each of the four videos). The colorectal cancer screening videos depicted a female patient meeting with a male physician; the high cholesterol videos depicted a male patient meeting with a female clinician. The videos can be accessed at the Massachusetts General Hospital Health Decision Sciences Center website (https://mghdecisionsciences.org/sdmvideos/).

Table 1.

Descriptions of Shared Decision Making Videos

                         Colorectal Cancer Screening        High Cholesterol
                         Good Quality    Poor Quality       Good Quality    Poor Quality
Duration in Minutes      4:41            1:37               7:08            5:26
Braddock Elements*       5               1                  7               3

Note: SDM=Shared Decision Making; * a maximum of 9 Braddock elements is possible.

Prior to use in these studies, all four videos were rated by four trained, independent raters to assess the level of SDM in each encounter using the 9 Braddock Elements for Informed Decision Making. The 9 elements are: 1) discussion of the patient’s role in decision making, 2) discussion of the clinical issue/nature of the decision, 3) discussion of alternatives, 4) discussion of the pros and cons of the alternatives, 5) discussion of uncertainties associated with the decision, 6) assessment of the patient’s understanding, 7) exploration of the patient’s preferences, 8) opportunity to involve trusted others, and 9) impact of the decision on daily life.10 The raters reached consensus on their ratings and found a higher level of SDM in the good quality videos than in the poor quality videos. Ratings also differed by clinical condition, with the colorectal cancer screening conversations having fewer elements of a good decision process than the high cholesterol conversations (see Table 1). Overall, the two high cholesterol videos were much closer in duration and contained more Braddock elements than the videos created for colorectal cancer screening.

Measures:

Online participants completed the following measures after each video:

  • Shared Decision Making (SDM) Process scale. The SDM Process scale measures the amount of SDM that occurs in an interaction.11 The scale has shown validity in the context of surgical decision making.11,12 For this project, the items were adapted to be completed by an observer rather than a participant in the interaction. Scores range from 0 to 4; larger values indicate that greater SDM occurred. The adapted questions were pilot tested with six respondents who watched the videos and then completed the survey about each one (Table 2).

  • 9-Item Shared Decision Making Questionnaire (SDM-Q-9). The SDM-Q-9 is a widely used patient-reported measure of SDM that focuses on the decisional process in a medical encounter. The scale asks patients about their experience with their providers using a series of 9 items, which are summed and then transformed to create a score that ranges from 0 to 100; higher scores indicate more SDM occurred.13 The 9 items were adapted so that participants rated the SDM videos instead of their own experiences. The SDM-Q-9 has demonstrated good internal consistency14,15, high acceptance rates14, and strong concurrent validity.16 However, the SDM-Q-9 has previously been shown to have weak convergent validity.17

  • Demographic and Individual Difference Measures. Participants reported their demographics, including age, race, ethnicity, gender, income, and education. Participants also reported their healthcare utilization (a single item asking how many visits they had made to their healthcare provider in the past year; scores were truncated at 20 visits), physical health (1=excellent, 5=poor),18 and mental health (1=excellent, 5=poor),18 and responded to the Medical Maximizer-Minimizer single-item scale (MM1), which measures the extent to which patients prefer to maximize or minimize their use of medical interventions.19 Individuals who watched the colorectal cancer screening videos were asked if they had ever been screened for colorectal cancer; similarly, individuals who watched the high cholesterol videos were asked whether they had high cholesterol and whether they had ever taken a medication for it.

Table 2.

Items of Shared Decision Making Process Scale

  1. How much did the patient and the doctor talk about the reasons [she/he] might want to [have a colonoscopy/take medicine] to [screen for colon cancer/manage his high cholesterol]?

    Response options: A lot; Some; A little; Not at all

  2. How much did the patient and the doctor talk about the reasons [she/he] might not want to [have a colonoscopy/take medicine] to [screen for colon cancer/manage his high cholesterol]?

    Response options: A lot; Some; A little; Not at all

  3. Did the doctor talk about [having a stool-based test/ways other than taking medicine] as an option to [screen for colon cancer/ manage his high cholesterol]?

    Response options: Yes; No

  4. Did the doctor ask the patient [what she wanted to do to screen for colon cancer/whether or not he wanted to take medicine to manage his high cholesterol]?

    Response options: Yes; No
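For readers implementing these surveys, the sketch below shows one possible way to turn the Table 2 responses into a 0–4 total and to rescale a summed SDM-Q-9 score to 0–100. The equal item weights and the linear mapping of the four-level items onto 0–1 are illustrative assumptions and may not match the published SDM Process scoring rules; the SDM-Q-9 rescaling follows the commonly reported convention of multiplying the raw sum of nine 0–5 items (0–45) by 20/9.

```python
# Illustrative scoring sketch only. The SDM Process item weights below are
# assumptions (equal weighting; four-level items mapped linearly onto 0-1) and
# may not match the published scoring rules.

AMOUNT_MAP = {"Not at all": 0.0, "A little": 1 / 3, "Some": 2 / 3, "A lot": 1.0}
YES_NO_MAP = {"No": 0.0, "Yes": 1.0}

def sdm_process_score(reasons_for: str, reasons_against: str,
                      alternatives: str, asked_preference: str) -> float:
    """Hypothetical 0-4 total for the four items shown in Table 2."""
    return (AMOUNT_MAP[reasons_for] + AMOUNT_MAP[reasons_against]
            + YES_NO_MAP[alternatives] + YES_NO_MAP[asked_preference])

def sdm_q9_score(item_responses: list[int]) -> float:
    """Rescale nine 0-5 SDM-Q-9 item responses (raw sum 0-45) to 0-100."""
    assert len(item_responses) == 9 and all(0 <= r <= 5 for r in item_responses)
    return sum(item_responses) * 20 / 9

# Example: one respondent's ratings of a single video.
print(sdm_process_score("A lot", "Some", "Yes", "Yes"))   # ~3.67 on the 0-4 scale
print(sdm_q9_score([4, 5, 3, 4, 4, 5, 3, 4, 4]))          # 80.0 on the 0-100 scale
```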

Analyses

First, we examined descriptive statistics of the SDM Process items for the two clinical conditions and orders. We examined rates of missing data to assess acceptability, and we examined descriptive results to determine whether the scores spanned the range of possible values, were normally distributed, and showed evidence of floor or ceiling effects. All analyses were conducted according to originally assigned groups.
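For concreteness, these descriptive checks (missing data, range coverage, and skewness-based floor/ceiling flags) could be computed as in the following sketch. The data frame and column names are hypothetical rather than the authors' dataset, and the ±1.0 skewness thresholds are those reported later in the Results.

```python
# Hypothetical descriptive checks; the data frame and column names are
# illustrative, not the authors' dataset.
import pandas as pd
from scipy.stats import skew

def describe_sdm_scores(scores: pd.Series) -> dict:
    """Missingness, observed range, and skewness-based floor/ceiling flags."""
    sk = skew(scores.dropna())
    return {
        "pct_missing": scores.isna().mean() * 100,
        "observed_range": (scores.min(), scores.max()),  # possible range is 0 to 4
        "skewness": sk,
        "ceiling_effect": sk < -1.0,  # thresholds reported in the Results
        "floor_effect": sk > 1.0,
    }

# Example usage, e.g. for the good quality video within one study:
# describe_sdm_scores(df.loc[(df.study == "CRC") & (df.video == "good"), "sdm_process"])
```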

We examined the following hypotheses:

  1. The good quality SDM video will have higher mean SDM Process scores than the poor quality SDM video in both colorectal cancer screening and high cholesterol studies, indicating that the scale has the ability to distinguish between a good quality SDM video and a poor quality SDM video and has discriminant validity.

  2. The SDM Process scale will be able to classify a good quality SDM video as better than a poor quality SDM video (i.e., AUC ≥ .70, indicating fair classification,20 and >50% of participants giving the good quality video a higher SDM Process score than the poor quality video).

  3. The SDM Process and SDM-Q-9 scores will be at least moderately correlated (i.e. correlations ≥.5) indicating convergent validity.

The analysis plan called for repeated measures ANOVAs to test the effects of order (good quality first v. poor quality first), video (good quality v. poor quality), and the order by video interaction; however, the assumption of homogeneity of variance was not met, so multilevel models were used instead to test these effects on SDM Process scores while accounting for the nesting of repeated video ratings within participants.

Area under the ROC curve (AUC) and 95% confidence intervals (CI) were calculated separately for the high cholesterol and colorectal cancer screening studies to assess how well the SDM Process scale distinguished ratings of the good quality and poor quality SDM videos. Each individual’s two scores were then subtracted to identify in more detail how often each individual rated the good quality SDM video higher than the poor quality SDM video (and how frequently individuals rated both videos the same or rated the good quality video lower than the poor quality video); chi-square analyses tested whether this classification differed by order. Pearson correlations were used to test for convergent validity between the SDM-Q-9 and SDM Process scores.

Exploratory multilevel models examined the relationship between order, video, and their interaction alongside other covariates, including mental and physical health status, healthcare utilization (number of visits to a healthcare provider in the past year), previous experience with screening (colorectal cancer screening study only), previous diagnosis of high cholesterol (high cholesterol study only), the Medical Maximizer-Minimizer single-item scale, education, race/ethnicity (White non-Hispanic compared with all others), income, age, and gender. In these exploratory models, all variables except gender, race/ethnicity, and previous screening experience or high cholesterol diagnosis were treated as continuous, and all continuous variables were centered (centering did not change the conclusions). Analogous models for hypotheses 1 and 2 and the exploratory analyses were also run with SDM-Q-9 scores as the dependent variable; because these were not a primary outcome of interest for this project, they are not discussed herein but are included in the supplemental materials.
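The following sketch outlines how the primary analyses described above could be specified in Python. The long-format data frame, its column names (participant_id, order, video, sdm_process, sdm_q9), and the synthetic data are hypothetical; this is not the authors' analysis code.

```python
# Sketch of the primary analyses using hypothetical column names and synthetic
# data; not the authors' analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score
from scipy.stats import pearsonr

# Synthetic long-format data for illustration: two ratings per participant.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "participant_id": np.repeat(np.arange(n), 2),
    "order": np.repeat(rng.choice(["good_first", "poor_first"], n), 2),
    "video": np.tile(["good", "poor"], n),
})
df["sdm_process"] = np.clip(
    np.where(df["video"] == "good", 3.2, 2.0) + rng.normal(0, 0.8, len(df)), 0, 4)
df["sdm_q9"] = np.clip(df["sdm_process"] * 25 + rng.normal(0, 10, len(df)), 0, 100)

# 1) Multilevel model: order, video, and their interaction, with a random
#    intercept per participant to account for the two nested ratings.
mlm = smf.mixedlm("sdm_process ~ order * video", data=df,
                  groups=df["participant_id"]).fit()
print(mlm.summary())

# 2) AUC: how well SDM Process scores separate good from poor quality videos.
auc = roc_auc_score((df["video"] == "good").astype(int), df["sdm_process"])

# 3) Convergent validity: correlation between SDM Process and SDM-Q-9 scores.
r, p = pearsonr(df["sdm_process"], df["sdm_q9"])
print(f"AUC = {auc:.2f}, r = {r:.2f}")
```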

Funding

This project was funded under grant number R01HS025718 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The opinions expressed in this document are those of the authors and do not reflect the official position of AHRQ or the U.S. Department of Health and Human Services.

Results:

A total of 401 participants viewed the SDM videos: 201 individuals in the colorectal cancer screening study (Good Quality Video First Group, N=97; Poor Quality Video First Group, N=103) and 200 in the high cholesterol study (Good Quality Video First Group, N=94; Poor Quality Video First Group, N=107). Thirteen participants (3%) skipped one or more SDM Process items and were removed from the analytic dataset. The vast majority of participants (97%; 388/401) completed all SDM Process items for both videos across the colorectal cancer screening and high cholesterol studies, indicating that the scale was acceptable to participants. Table 3 presents the characteristics for each group, which were well balanced across the studies.

Table 3.

Respondent Characteristics Across Conditions for Colon Cancer Screening and High Cholesterol Studies

Variable                                          Poor Quality      Good Quality      CRC        Poor Quality      Good Quality      HC
                                                  CRC Video First   CRC Video First   p-value    HC Video First    HC Video First    p-value
Sample Size                                       103               91                           99                95
Age, M (SD)                                       43.9 (15.5)       44.9 (16.2)       0.353      44.5 (15.8)       45.1 (15.8)       0.101
Percent Female (N)*                               65% (67)          70% (64)          0.749      74% (72)          70% (66)          0.254
Percent HS or Less Education (N)                  29% (30)          24% (22)          0.977      33% (33)          30% (28)          0.226
Percent White, Non-Hispanic (N)                   68% (70)          70% (64)          0.985      68% (67)          67% (64)          0.874
Percent Income <=$50k (N)*                        55% (54)          39% (35)          0.839      50% (47)          47% (45)          0.286
Percent Excellent Physical Health (N)*            13% (13)          13% (16)          0.547      14% (14)          14% (13)          0.26
Percent Excellent Mental Health (N)*              24% (24)          24% (16)          0.209      29% (29)          22% (21)          0.269
Visits to HCP, Median (1st and 3rd quartile)*     2 (1–4)           2 (1–4)           0.408      2 (1–3.5)         2 (1–4)           0.857
Percent Screened for CRC/Diagnosed with HC (N)    44% (45)          35% (32)          0.116      38% (38)          31% (29)          0.127
Medical Minimizer Maximizer Score, M (SD)*        3.9 (1.7)         3.5 (1.8)         0.296      3.8 (1.6)         4.1 (1.6)         0.214

Note: CRC=colorectal cancer screening; HC=high cholesterol; HCP=health care provider; NA=not asked; HS=high school.

*

indicates that some data are unavailable: for physical health for 2 CRC participants, mental health for 1 CRC and 1 HC participant, visits for 4 CRC participants, MM1 for 2 HC participants, gender for 2 HC participants, and income for 6 CRC participants and 5 HC participants.

Table 4 describes the SDM Process scores across studies and videos (poor and good quality). The poor quality colorectal cancer screening video had the lowest average SDM Process score, and the good quality high cholesterol video had the highest. Distributions tended to be normal, with only one video exhibiting a ceiling effect (i.e., skewness < −1.00; the good quality high cholesterol video skew was −1.22) and no videos exhibiting floor effects (i.e., skewness > 1.0). Ratings for all videos except the good quality colorectal cancer screening video covered the entire range of possible scores (i.e., 0 to 4).

Table 4.

Description of Shared Decision Making Process scores for Colorectal Cancer Screening and High Cholesterol Studies

Study                          Video           Range    Mean (SD)     Skew
Colorectal Cancer Screening    Poor Quality    0–4      1.95 (1.2)    0.13
                               Good Quality    1–4      3.19 (0.71)   −0.79
High Cholesterol               Poor Quality    0–4      2.58 (1.02)   −0.36
                               Good Quality    0–4      3.23 (0.71)   −1.22

Note: SD=standard deviation

Hypothesis 1: The good quality SDM video will have higher mean SDM Process scores than the poor quality SDM video in both colorectal cancer screening and high cholesterol studies.

Table 5 presents the statistics for the models. Video was a significant predictor (p<.001) in both the colorectal cancer screening and high cholesterol models (see Table 5). SDM Process scores were significantly lower for the poor quality videos than for the good quality videos (colorectal cancer screening: good M=3.3, SE=0.1 vs. poor M=2.1, SE=0.1, p<.001; high cholesterol: good M=3.1, SE=0.1 vs. poor M=2.6, SE=0.1, p<.001). The order in which the videos were presented was significant in the high cholesterol model, indicating that those who viewed the good quality high cholesterol video first had higher SDM Process scores than those who viewed the poor quality video first; this relationship was not significant in the colorectal cancer screening model (p=0.13). The interaction between order (good quality v. poor quality first) and video (good quality v. poor quality) was not a significant predictor of SDM Process scores in either model (ps>.21).

Table 5.

Statistics for Models Predicting Shared Decision Making Process Score with Order, Video, and Their Interaction for the High Cholesterol and Colorectal Cancer Screening Studies

                              High Cholesterol                     Colorectal Cancer Screening
                              b        95% CI            p         b        95% CI            p
Intercept                     3.1      (2.85, 3.19)      <.001     3.29     (3.00, 3.37)      <.001
Order (Good Quality First)    0.26     (0.00, 0.47)      0.04      −0.21    (−0.46, 0.07)     0.13
Video (Poor Quality)          −0.56    (−0.63, −0.25)    <.001     −1.23    (−1.24, −0.79)    <.001
Order*Video                   −0.19    (−0.45, 0.10)     0.21      −0.03    (−0.48, 0.18)     0.87

Note: b=unstandardized slope estimates; CI=confidence interval

Hypothesis 2: The SDM Process scale will be able to classify a good quality video as better than a poor quality video.

The AUC for the high cholesterol model was poor, AUC=0.69, 95% CI (0.64, 0.74), while the AUC for the colorectal cancer screening model was fair, AUC=0.79, 95% CI (0.75, 0.84). When exploring the data at the level of the individual, more than half of participants rated the good quality video higher on the SDM Process scale than the poor quality video in both the colorectal cancer screening (76%, 147/194) and high cholesterol studies (55%, 107/194), indicating that the scale generally classified the good quality video as having more SDM than the poor quality video. However, the classification rate was far lower in the high cholesterol study, which we believe is due to both videos being perceived as relatively high in SDM. This can be seen in the rate of ties (i.e., the participant’s SDM Process scores were the same for both videos), which was more than twice as large in the high cholesterol study (29%, 56/194) as in the colorectal cancer screening study (13%, 26/194). A minority of participants scored the poor quality video higher than the good quality video (colorectal cancer screening: 11%, 21/194; high cholesterol: 16%, 31/194). There was no difference in classification by order (ps>0.48).
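The individual-level classification described above can be sketched as follows, reusing the hypothetical synthetic df from the earlier Analyses sketch: each participant's good-minus-poor difference is classified as higher, tied, or lower, and a chi-square test compares the classification across viewing orders.

```python
# Individual-level classification sketch; reuses the hypothetical `df` defined
# in the Analyses sketch above.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

wide = df.pivot(index="participant_id", columns="video", values="sdm_process")
order = df.groupby("participant_id")["order"].first()

diff = wide["good"] - wide["poor"]
classification = pd.Series(
    np.select([diff > 0, diff < 0], ["good_higher", "poor_higher"], default="tie"),
    index=wide.index, name="classification")

print(classification.value_counts(normalize=True))   # share in each category
chi2, p, dof, expected = chi2_contingency(pd.crosstab(order, classification))
print(f"Chi-square p-value for classification by order: {p:.2f}")
```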

Hypothesis 3: The SDM Process and SDM-Q-9 scores will be moderately correlated.

The SDM Process and SDM-Q-9 scores were highly correlated for both the colorectal cancer screening (r=0.71, p<.001) and high cholesterol (r=0.58, p<.001) videos. The relationship between the SDM Process and SDM-Q-9 scores was similar when looking only at the poor quality video scores in both the colorectal cancer screening (r=0.65, p<.001) and high cholesterol (r=0.55, p<.001) studies. The relationship was somewhat weaker when looking only at the good quality video scores (colorectal cancer screening: r=0.41, p<.001; high cholesterol: r=0.48, p<.001). This difference is likely due to the negatively skewed distribution of the good quality video scores in both the colorectal cancer screening and high cholesterol studies (skew −0.79 and −1.22, respectively), which is dampened when the good quality and poor quality video scores are combined (skew −0.57 and −0.81, respectively).

Heterogeneity Analyses:

Table 6 shows the direction of the association for respondent characteristics that are predictive of SDM Process scores for the good and poor quality videos in the contexts of colorectal cancer screening and high cholesterol. Full model statistics are available in the supplemental materials.

Table 6.

Heterogeneity of Shared Decision Making Process Scores for Colorectal Cancer Screening and High Cholesterol Studies

                               Good Quality Video                  Poor Quality Video
                               CRC              HC                 CRC              HC
Order-Good Quality first                        +
Age                                             +                  −
Excellent Physical Health                                          +
Excellent Mental Health
Visits to HCP
Screened for CRC/Have HC       +
MM1                                                                +                +
Gender-Female                                                      −
Education                                                          −
White
Income

Note: CRC=colorectal cancer screening; HC=high cholesterol; HCP=health care provider. + indicates that as the variable increases or changes, values of the SDM scale increase; − indicates that as the variable increases or changes, values of the SDM scale decrease. MM1 is the single-item measure of medical maximizing-minimizing, where larger values indicate a greater desire to take action when it comes to health/medicine. Education combined the values of 8th grade or less and some high school into a lowest-valued “less than high school” category but was treated as continuous in this model; income was treated as continuous; visits was treated as continuous using the raw data (sensitivity analyses showed that recoding this variable did not change the results).

In the high cholesterol context, both order (b=0.23, suggesting again that those who viewed the good quality video first rated SDM Process as higher) and age (b=0.01, suggesting that as age increases so too do SDM Process scores) were significant predictors of SDM Process scores for the good quality video; the only significant predictor for the poor quality video was medical maximizer/minimizer tendency (b=0.10, those who were more maximizing tended to have higher SDM Process scores).

For colorectal cancer screening, prior colorectal cancer screening was the only significant predictor of SDM Process scores for the good quality video (b=0.35; those who had been screened before reported higher SDM Process scores than those who had never been screened). However, five factors were significant predictors of SDM Process scores for the poor quality colorectal cancer screening video: age (b=−0.01, suggesting that as age increases SDM Process scores decrease), physical health (b=0.95; those in excellent physical health scored the video higher), medical maximizer/minimizer tendency (b=0.19; those who were more maximizing scored the video higher), gender (b=−0.35; women scored the video lower than men), and education (b=−0.19; those with more education tended to score the video lower).

Discussion

This study examined the validity of the SDM Process scale in the contexts of screening for colon cancer and taking medication for high cholesterol. SDM Process scores were significantly higher for good quality videos than for poor quality videos in both clinical contexts. The SDM Process scale was also able to correctly classify the good and poor quality videos more than half the time in both studies. Further, the SDM Process scale was well correlated with the SDM-Q-9 in both clinical contexts. Together, these data provide strong evidence of the validity of the measure.

The order in which the videos were presented did have an impact on scores in the high cholesterol study, as those who saw the good quality video first scored the videos higher than those who saw the poor quality video first. This finding is congruent with work on anchoring and adjustment, which suggests that an individual’s initial evaluation becomes an “anchor” and that subsequent evaluations tend to be adjusted relative to it, biasing later judgments toward that initial anchor.21

The videos developed for the training courses were meant to be realistic rather than perfect examples. As demonstrated by the independent scoring using the Braddock Informed Decision Making criteria, the good quality videos did not meet all possible criteria and the poor quality videos did meet some criteria. The overall mean scores from the online samples reflected the expected ranking: the poor quality colorectal cancer screening video received the lowest scores, followed by the poor quality high cholesterol video, the good quality colorectal cancer screening video, and then the good quality high cholesterol video, which received the highest scores. The scale was fair at classifying the colorectal cancer screening videos but poor at classifying the high cholesterol videos, for which the poor quality video had fairly high SDM Process scores. This suggests that a weakness of the SDM Process score may be differentiating between interactions that have similarly high levels of SDM.

To provide further evidence of validity in these two clinical contexts, this study included another well-established scale (the SDM-Q-9) and examined whether the scores were positively correlated. Indeed, the results showed that the SDM Process scale was well correlated with the SDM-Q-9 in both the cancer screening and high cholesterol contexts. These relationships exceed the thresholds for positive associations of validity suggested in a recent systematic review of SDM measures.8 Other studies have found that the SDM Process scale was significantly associated with lower decisional conflict (as measured by the SURE scale) and less regret.9 Although we have yet to extend these tests of convergent validity to medication and screening decisions, future work should examine whether these psychometric properties hold for the SDM Process scale in other decisional contexts.

Even though the SDM Process scale is designed to focus on events or behaviors rather than ratings or evaluations, our exploratory heterogeneity analyses suggest that multiple individual participant differences may have played a role in evaluating the good and poor quality SDM video conversations. However, we did not find consistent relationships for most variables across contexts or types of videos. Some factors related to how participants rated the poor quality interactions (including age and general health) may suggest that the level of experience individuals have with the healthcare system affects their ratings of interactions. Overall, more factors appeared to influence scores for the poorest quality interaction than for the good quality interactions. The one relationship that was consistent across both poor quality videos was that respondents who were more favorably disposed to medical interventions tended to give higher ratings to the interactions that were less patient centered and instead were driven by the physician attempting to convince the patient to screen or take medication (i.e., the poor quality videos).

Though not originally an aim of our project, we found that the SDM-Q-9 showed similar discriminant validity and a similar ability to classify the good and poor quality SDM videos (see supplemental materials). Exploratory heterogeneity analyses also suggest that similar individual participant differences may have played a role in evaluations of the video content. Similar variables were predictive of SDM-Q-9 scores, but in many cases these variables were predictive in the SDM-Q-9 models when they had not been predictive in the SDM Process models, and in some cases (e.g., order of video viewing) the relationships were in the opposite direction in the SDM-Q-9 models compared with the SDM Process models. Taken together, this suggests that while these two measures are clearly related and valid, more work is needed to better understand how individual differences can affect the perception and reporting of SDM in real clinical encounters.

There are some limitations to this study. First, the internet sample was not representative of the population of individuals making these decisions in clinical practice. Our sample consisted mostly of younger individuals and women, and participants were highly educated. This may limit the generalizability of the results to the individuals who actually face these decisions, who tend to be more evenly balanced in terms of gender and education and to be somewhat older. Second, the sample was asked to rate videos rather than their own healthcare interactions. This is particularly important because participants gave fairly high scores to the poor quality videos, particularly the poor quality colorectal cancer screening video. Compared with our prior validation work in surgical decisions, the mean scores for the poor quality high cholesterol, good quality high cholesterol, and poor quality colorectal cancer screening videos all exceeded the highest mean scores identified previously.9 This may be because participants were rating scripted videos designed to depict SDM rather than real interactions with their own healthcare providers. Further, these measures were adaptations of the original scales. It is therefore possible that these results may not generalize to the use of the SDM Process scale in populations of patients reporting on their own decision making. Additional work is needed to determine whether the scale is valid in patients making actual medication and screening decisions. Third, the test-retest reliability of this measure has not yet been established. If this measure is to be trusted, we must better understand how consistent an individual’s responses are when separated by a short window of time. Further studies are needed to determine whether the SDM Process scale can reliably measure SDM in these, and other, contexts.

Conclusion

The SDM Process scale is a short, patient-reported measure of SDM that showed evidence of validity. Considerable heterogeneity of scoring existed, suggesting that individual differences played a variable role in evaluating good or poor quality SDM conversations.

Supplementary Material

1

Declaration of Conflicting Interests

With regard to potential conflicts of interest for this study, Dr. Valentine and Mss. Cosenza, Mancini, and Vo have no conflicts of interest to disclose. In 2017–2018, Dr. Sepucha received salary support from a grant through Massachusetts General Hospital from Healthwise, a nonprofit, outside the submitted work. Drs. Barry and Brodney both receive a grant through Massachusetts General Hospital from Healthwise, a nonprofit, outside the submitted work. Massachusetts General Hospital is the measure steward for the Shared Decision Making Process Measure that is based on the scale analyzed in this study (measure #2958, endorsed by the National Quality Forum).

Financial support for this study was provided entirely by a grant from the Agency for Healthcare Research and Quality. The funding agreement ensured the authors’ independence in designing the study, interpreting the data, writing, and publishing the report.

The authors would like to acknowledge the sage wisdom and input of Dr. Floyd J. Fowler, Jr on this project.

References

  • 1. Brodney S, Fowler FJ, Barry MJ, Chang Y, Sepucha K. Comparison of Three Measures of Shared Decision Making: SDM Process_4, CollaboRATE, and SURE Scales. Med Decis Mak. 2019;39(6):673–680. doi:10.1177/0272989X19855951
  • 2. Sepucha KR, Belkora JK, Chang Y, et al. Measuring decision quality: Psychometric evaluation of a new instrument for breast cancer surgery. BMC Med Inform Decis Mak. 2012;12(1):1. doi:10.1186/1472-6947-12-51
  • 3. Fagerlin A, Sepucha KR, Couper MP, Levin CA, Singer E, Zikmund-Fisher BJ. Patients’ knowledge about 9 common health conditions: the DECISIONS survey. Med Decis Mak. 2010;30(5_suppl):35–52.
  • 4. Sepucha KR, Langford AT, Belkora JK, et al. Impact of Timing on Measurement of Decision Quality and Shared Decision Making: Longitudinal Cohort Study of Breast Cancer Patients. Med Decis Mak. 2019;39(6):642–650. doi:10.1177/0272989X19862545
  • 5. Sepucha KR, Feibelmann S, Cosenza C, Levin CA, Pignone M. Development and evaluation of a new survey instrument to measure the quality of colorectal cancer screening decisions. BMC Med Inform Decis Mak. 2014;14(1):1–9. doi:10.1186/1472-6947-14-72
  • 6. Floyd JF, Gerstein BS, Barry MJ. How patient centered are medical decisions? Results of a national survey. JAMA Intern Med. 2013;173(13):1215–1221. doi:10.1001/jamainternmed.2013.6172
  • 7. Zikmund-Fisher BJ, Couper MP, Singer E, et al. Deficits and variations in patients’ experience with making 9 common medical decisions: The DECISIONS survey. Med Decis Mak. 2010;30(5):85–95. doi:10.1177/0272989X10380466
  • 8. Gärtner FR, Bomhof-Roordink H, Smith IP, Scholl I, Stiggelbout AM, Pieterse AH. The quality of instruments to assess the process of shared decision making: A systematic review. PLoS One. 2018;13:e0191747. doi:10.1371/journal.pone.0191747
  • 9. Valentine KD, Vo H, Fowler FJ, Brodney S, Barry M, Sepucha K. Development and Evaluation of the Shared Decision Making Process Scale: A Short, Patient-Reported Measure. Med Decis Mak.
  • 10. Braddock CH, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice: Time to get back to basics. J Am Med Assoc. 1999;282(24):2313–2320. doi:10.1001/jama.282.24.2313
  • 11. Valentine KD, Vo H, Fowler FJ, Brodney S, Barry MJ, Sepucha KR. Development and Evaluation of the Shared Decision Making Process Scale: A Short, Patient-Reported Measure. Med Decis Mak.
  • 12. Brodney S, Fowler F, Barry M, Chang Y, Sepucha K. Comparison of Three Measures of Shared Decision-Making: SDM Process_4, CollaboRATE and SURE Scales. Med Decis Mak. 2019;39(6):673–680.
  • 13. Doherr H, Christalle E, Kriston L, Härter M, Scholl I. Use of the 9-item Shared Decision Making Questionnaire (SDM-Q-9 and SDM-Q-Doc) in intervention studies: A systematic review. PLoS One. 2017;12(3):1–16. doi:10.1371/journal.pone.0173904
  • 14. Kriston L, Scholl I, Hölzel L, Simon D, Loh A, Härter M. The 9-item Shared Decision Making Questionnaire (SDM-Q-9). Development and psychometric properties in a primary care sample. Patient Educ Couns. 2010;80(1):94–99. doi:10.1016/j.pec.2009.09.034
  • 15. Alvarez K, Wang Y, Alegria M, et al. Psychometrics of shared decision making and communication as patient centered measures for two language groups. Psychol Assess. 2016;28(9):1074–1086. doi:10.1037/pas0000344
  • 16. Barr PJ, Thompson R, Walsh T, Grande SW, Ozanne EM, Elwyn G. The psychometric properties of CollaboRATE: A fast and frugal patient-reported measure of the shared decision-making process. J Med Internet Res. 2014;16(1):1–14. doi:10.2196/jmir.3085
  • 17. Scholl I, Kriston L, Dirmaier J, Härter M. Comparing the nine-item Shared Decision-Making Questionnaire to the OPTION Scale: an attempt to establish convergent validity. Health Expect. 18(1):137–150. doi:10.1111/hex.12022
  • 18. Hays RD, Schalet BD, Spritzer KL, Cella D. Two-item PROMIS® global physical and mental health scales. J Patient-Reported Outcomes. 2017;1:3–7. doi:10.1186/s41687-017-0003-8
  • 19. Scherer LD, Zikmund-Fisher BJ. Eliciting Medical Maximizing-Minimizing Preferences with a Single Question: Development and Validation of the MM1.
  • 20. Hosmer DJ, Lemeshow S, Sturdivant R. Applied Logistic Regression. John Wiley & Sons; 2013.
  • 21. Tversky A, Kahneman D. Judgment under uncertainty: Heuristics and biases. Science. 1974;185(4157):1124–1131.
