Ann Fam Med. 2005 Sep;3(5):436–442. doi: 10.1370/afm.305

Rochester Participatory Decision-Making Scale (RPAD): Reliability and Validity

Cleveland G Shields 1,2, Peter Franks 3, Kevin Fiscella 1,4, Sean Meldrum 1, Ronald M Epstein 1,2
PMCID: PMC1466919  PMID: 16189060

Abstract

PURPOSE We wanted to develop a reliable and valid objective measure of patient-physician collaborative decision making, the Rochester Participatory Decision-Making Scale (RPAD).

METHODS Based on an informed decision-making model, the RPAD assesses physician behaviors that encourage patient participation in decision making. Data were from a study of physician-patient communication involving 100 primary care physicians. Each physician’s encounters with 2 standardized patients were audio recorded, resulting in 193 usable recordings. Transcribed recordings were coded with both the RPAD and the Measure of Patient-Centered Communication (MPCC), which includes a related construct, Finding Common Ground. Two sets of dependent variables were derived from (1) surveys of the standardized patients and (2) surveys of 50 patients of each physician, who assessed their perceptions of the physician-patient relationship.

RESULTS The RPAD was coded reliably (intraclass correlation coefficient [ICC] = 0.72). The RPAD correlated with Finding Common Ground (r = 0.19, P <.01) and with the survey measures of standardized patients’ perceptions of the physician-patient relationship (r = 0.32–0.36, P <.005), but less strongly with the patient survey measures (r = 0.06–0.07, P <.005). Multivariate, hierarchical analyses suggested that the RPAD made a more robust contribution to explaining variance in standardized patient perceptions than did MPCC Finding Common Ground.

CONCLUSIONS The RPAD shows promise as a reliable, valid, and easy-to-code objective measure of participatory decision making.

Keywords: Physician-patient relations, medical decision making, informatics

INTRODUCTION

Participatory decision making has been reported to affect health outcomes, including control of chronic disease1 and functional outcomes.2 Based on those early results and more recent studies that show a lack of patient involvement in decisions,3 physicians have been encouraged to adopt a more participatory style. Some consider participatory decision making a moral imperative in medicine regardless of its impact on outcomes.4 The outcomes of efforts to improve participatory decision making have been mixed; although effects on consultation style and satisfaction have been reported,5,6 effects on control of chronic disease have not been replicated.7 These studies have often relied on patient surveys to assess participatory decision making; a validated observational instrument would provide a more objective description of behaviors and reduce the likelihood of confounding that arises when measures of participatory decision making and reported outcomes come from the same patient survey.

Participatory decision making emerged in the 1970s as an alternative to a more traditional paternalistic model in which physicians made decisions for their patients.8–12 Initially it was influenced by consumerist models of care, which suggest that patients have the right to information and self-determination.13,14 A contractual model elaborated on the consumerist model by emphasizing the importance of taking patients’ stated values into account to arrive at decisions.15 Participatory decision making is probably most closely related to a deliberative model in which physicians elicit and respect patients’ values but also offer expertise and recommendations, sometimes using persuasion to encourage healthier options if there is no initial consensus.13 Thus, participatory decision making consists of 2 processes: expert problem solving and decision making.16 Problem solving is the province of physicians, whose expertise informs their judgment in determining treatment options. Decision making involves patients working with the physician to determine which treatment options best satisfy the patient’s preferences.

Measurement of the process of participatory decision making has been elusive. Patient surveys may not capture the level of detail needed to inform physician training interventions. Current interaction analysis systems, such as the Measure of Patient-Centered Communication (MPCC)17 and the Roter Interaction Analysis System (RIAS),18 capture some key behaviors that may be indicators of participatory decision making (eg, patient question-asking), but not others.19 Braddock et al developed an instrument derived from a consensually derived set of behavioral criteria for “informed” decision making.3,20 Using their criteria, informed decision making occurred in only 9% of primary care office visits, raising concerns that physicians need to develop better skills in involving patients in their care.3 Despite its usefulness as a descriptive measure to define the conceptual domains of informed decision making, this instrument has some limitations: there is no overall scale score, and criterion validity has not been reported.

Many of the models described above focus on information sought, offered, and received. But participatory decision making also includes the responsiveness of physicians to a richer range of patient participation in decisions beyond assuring that patients have been informed. Using the Braddock et al scale as a starting point,3 we sought to develop a reliable and valid objective measure of physician behaviors that encourage participatory decision making. We developed new items and a simple method of scoring the scale to construct the Rochester Participatory Decision Making Scale (RPAD). While it is clear that patients also bring attitudes and behaviors that contribute to participatory decision making, our scale was developed to evaluate physician communication behavior and to be used for physician training purposes, rather than as a purely descriptive measure of conversational process. For this reason, we used unannounced and covert standardized patients to reduce patient variability so that we could observe the differences in physician participatory decision-making behavior when confronted with a nearly identical stimulus.

METHODS

The RPAD was developed as part of a larger study that examined the relationship between physicians’ communication behaviors and health care costs. The larger study involved audio recording and coding standardized patient visits to physicians, surveys of standardized patients (measuring their perceptions of the encounter), physician surveys (personality and demographics), patient surveys (measures of the patient-physician relationship, satisfaction, demographics, illness morbidity, physical and mental functioning), and claims data from a large managed care organization.

Research Participants

We had 3 sets of participants in this study: primary care physicians, standardized patients, and real patients. One hundred primary care physicians (internists and family physicians) who were members of the independent practice association of a managed care organization were recruited and enrolled in the study. Standardized patients made 2 unannounced, covert, audio-recorded visits to physicians. The first standardized patient role was constructed to mimic typical patients in primary care with straightforward symptoms of gastroesophageal reflux (GERD case). The second role was designed to simulate patients with medically unexplained symptoms so we could explore how physicians handle situations that involve potential disagreements about the meaning of symptoms, the diagnosis, and its treatment (ambiguous case). Two male and 3 female standardized patients were used. All visits were audio recorded with recorders hidden in purses and backpacks.

The order of standardized patient visits (male or female, role) was randomized for each physician. In the treatment and planning phase of the office visit, standardized patients were instructed to respond to physicians’ questions and to ask clarifying questions, but they were not to challenge directly the physician’s assessment. At one point during each visit, however, standardized patients were instructed to ask whether their symptoms could represent something serious so they could communicate to the physician a moderate level of anxiety. Thus, we sought to create typical patients in current primary care practice. Standardized patients participated in a pilot test to assure they were realistic, and we sought feedback from pilot physicians on whether the standardized patients seemed typical and ordinary.

Physicians completed questionnaires, and 50 visiting patients from each physician’s office were also recruited to complete questionnaires. We approached 4,963 eligible patients; 4,746 (95.6%) completed the questionnaire. The reasons for refusal were as follows: 185 patients stated that they disliked questionnaires, 109 refused because of illness, and 52 felt rushed. Demographic information on the physician and patient samples is contained in Tables 1 and 2.

Table 1.

Characteristics of Physicians in Sample

Characteristic Mean (SD) or No. (%)
Age, years 45 (8.2)
Sex
    Female 23 (23.0)
    Male 77 (77.0)
Family practitioner
    Yes 47 (47.0)
    No 53 (53.0)
Solo practitioner
    Yes 24 (24.0)
    No 76 (76.0)
Rural practice
    Yes 32 (32.0)
    No 68 (68.0)
Total 100 (100.0)

Table 2.

Characteristics of Patients Surveyed

Characteristic Number Percent
Sex
    Female 2,955 62.3
    Male 1,750 36.9
    Missing 41 0.9
Patient race/ethnicity
    African American 499 10.5
    Hispanic 109 2.3
    Other 110 2.3
    White 3,994 84.2
    Missing 34 0.7
Length of patient-physician relationship
    <1 year 360 7.6
    1–3 years 1,035 21.8
    3–5 years 814 17.2
    >5 years 2,525 53.2
    Missing 12 0.3
Patient education
    <12 years 337 7.1
    12th grade 1,370 28.9
    1–3 years college 1,490 31.4
    4 years college 828 17.4
    Graduate school 700 14.7
    Missing 21 0.4

Two days after the visit, a fax was sent to the physician to determine whether, when prompted, the physician could identify the standardized patient. The fax notified the physician that a standardized patient had visited in the past few days; the physicians were asked whether they suspected they had seen a standardized patient and, if so, to describe the patient and indicate how realistic the portrayal was. Forty percent of physicians identified the standardized patients from this prompted recall.

Analysis of Audio-Recorded Encounters

Each standardized patient visit was recorded using a digital audio disk recorder with a high-quality microphone. Visit length was calculated (in minutes), excluding waiting time in the examining room before the visit and any period of more than 1 minute during which the physician left the room.

RPAD Scale Development

The RPAD was developed by incorporating items suggested by Braddock et al3 as indicative of physician behaviors that encourage patient participation in decision making. In developing the RPAD, we observed that some physician behaviors were performed fully, whereas others were completed only partially. This finding led us to create a coding scheme for each item that gave a score of 0 for no evidence of the behavior, ½ for partial presence of the behavior, and 1 for the full presence of the behavior (Table 3). We developed a coding manual with descriptions and examples for each 0, ½, and 1 score to guide raters (available from the first author).

Table 3.

Rochester Participatory Decision-Making Scale (RPAD)

Items Score
1 Explain the clinical issue or nature of the decision*
    0 No evidence ______
    ½ Physician gives a cursory, hurried, or unclear explanation, or a long, confusing lecture
    1 Physician clearly explains his/her view of the medical/clinical problem
2 Discussion of the uncertainties associated with the situation*
    0 No evidence ______
    ½ Physician acknowledges uncertainties but does not explain them thoroughly, or does so only with active patient prompting
    1 Physician thoroughly explains uncertainties in the problem or treatment
3 Clarification of agreement
    0 No evidence ______
    ½ Patient expresses passive assent
    1 Physician actively asks for patient agreement and tries to obtain a commitment from the patient to the treatment plan
4 Examine barriers to follow-through with treatment plan
    0 No evidence ______
    ½ Patient discloses concerns or problems with following through with treatment
    1 Physician actively examines patient’s concerns or problems with following through with treatment
5 Physician gives patient opportunity to ask questions and checks patient’s understanding of the treatment plan*
    0 No opportunity for patient to ask questions ______
    ½ Patient has opportunity to ask questions
    1 Physician asks patients for their understanding of problem or plans
6 Physician’s medical language matches patient’s level of understanding
    −½ Clear mismatch between the technicality of physician’s and patient’s language ______
    ½ Level of technicality or detail of the physician’s and patient’s language matches most of the time.
    1 Level of technicality or detail of the physician’s and patient’s language clearly matches.
7 Physician asks, “Any questions?”
    0 No evidence ______
    ½ Yes, but no discussion ensues
    1 Yes, and physician engages in a discussion with patient about the questions
8 Physician asks open-ended questions.
    0 No evidence ______
    ½ Yes, but no discussion ensues
    1 Yes, and physician engages in a discussion with patient about the question
9 Physician checks his/her understanding of patient’s point of view*
    0 No evidence ______
    ½ Yes, but no discussion ensues
    1 Yes, and physician engages in a discussion with patient about the physician’s perceptions of the patient’s point of view
Sum ______
Discarded items
Discussion of the patient’s role in decision making*
    0 No evidence
    ½ Yes, but no discussion ensues
    1 Yes, and physician engages in a discussion with patient about the patient’s role
Discussion of the alternatives*
    0 No evidence
    ½ Yes, but no discussion ensues
    1 Yes, and physician engages in a discussion with patient about the alternative treatments available
Discussion of the pros (potential benefits) and cons (risks) of the alternatives*
    0 No evidence
    ½ Yes, but no discussion ensues
    1 Yes, and physician engages in a discussion with patient about the pros and cons of the alternative treatments

* Indicates modified Braddock items.

We pilot tested the scale on 10 audio-recorded visits. We discontinued items that never received a code. We were left with 4 items; we then developed 5 more items and scoring criteria for each and pilot tested them. The final coding system is shown in Table 3. The 10 visits we used to develop the scale were recoded after all other tapes had been coded and used as data in the analysis. We have included the discarded items in the Supplemental Appendix, available online only at http://www.annfammed.org/cgi/content/full/3/5/436/DC1.

Coders first listened to the entire audio recording and then listened again to code the instances of physician behaviors listed on the RPAD coding sheet. Each time they found an example, they stopped the tape and listened again to that section to determine whether the behavior deserved a 0, ½, or 1 full-point score.
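
To make the scoring concrete, the following is a minimal illustrative sketch in Python, not the authors’ software: the item keys and helper function are hypothetical, but the 0, ½, and 1 scoring rules and the −½ option for item 6 follow Table 3.

```python
# Minimal illustrative sketch, not the authors' software. Item keys and the
# helper are hypothetical; the scoring rules follow Table 3.
ITEM_SCORES = {0, 0.5, 1}            # items 1-5 and 7-9
LANGUAGE_SCORES = {-0.5, 0.5, 1}     # item 6 allows a -1/2 penalty for a clear mismatch

def rpad_total(item_scores: dict) -> float:
    """Sum the 9 item scores assigned to one audio-recorded visit."""
    for item, score in item_scores.items():
        allowed = LANGUAGE_SCORES if item == "language_match" else ITEM_SCORES
        if score not in allowed:
            raise ValueError(f"invalid score {score} for item {item}")
    return sum(item_scores.values())

# Hypothetical scores for one visit:
visit = {
    "explain_issue": 1, "discuss_uncertainties": 0.5, "clarify_agreement": 0.5,
    "examine_barriers": 0, "opportunity_to_ask": 0.5, "language_match": 1,
    "asks_any_questions": 0.5, "open_ended_questions": 0, "checks_understanding": 0,
}
total = rpad_total(visit)  # 4.0 for this visit

# A physician-level score averages the totals from that physician's 2 standardized
# patient visits, which is why item-level values of 1/4 and 3/4 appear in Table 4.
```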

The MPCC

We also coded using the MPCC,17 a measure of physician responsiveness to patient concerns, including participation in care. See the Supplemental Appendix for information about the MPCC.

Patient Survey

Patient questionnaires that were administered to 50 patients of each physician included 4 scales: the 5-item Health Care Climate Questionnaire (HCCQ),21 the Primary Care Assessment Survey (PCAS) knowledge and trust subscales,22,23 and a single-item satisfaction scale. Details can be found in the Supplemental Appendix.

Patient data for covariate adjustment were also collected, including demographics (age, sex, race/ethnicity, and educational level), health status (mental and physical component summary scores of the SF-12 Health Survey [MCS-12 and PCS-12]),24 the Symptom Checklist-90 (SCL-90) somatization score,25 11 patient-reported morbidities, and the length of the physician-patient relationship.

Standardized Patient Survey

The standardized patients also completed questionnaires after their visits with physicians. The HCCQ21 and the PCAS trust subscale were completed by both patients and standardized patients.22,23,26

Statistical Analysis

We examined the coding reliability of the RPAD by calculating the intraclass correlation coefficient (ICC). We also examined the case-to-case reliability of the RPAD coding of the 2 standardized patient cases as a measure of physician style using the Spearman-Brown prophecy formula, α = n × r / (1 + (n − 1) × r), where n is the number of standardized patient cases and r is the average correlation between cases. This formula treats the 2 cases as items in a scale assessing the physician’s style and calculates a coefficient of reliability. We then examined the relationship of the RPAD with the MPCC total score and its components. We expected the measures to be moderately related, but our primary hypothesis was that the RPAD would correlate with Component 3 (Finding Common Ground), because that component measures physician-patient interaction around the delivery of the diagnosis and treatment plan. Finally, we examined criterion validity by relating the RPAD to patients’ and standardized patients’ perceptions of their relationships with their physicians using multivariate methods. We were particularly interested in the contribution the RPAD made to patient and standardized patient perceptions independent of the other objective measure of physician-patient interaction (the MPCC). The multivariate analysis methods and results are included in the online Supplemental Appendix.
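
As a worked illustration of the case-to-case reliability calculation, the sketch below (hypothetical code; the function name is ours) applies the Spearman-Brown formula. The example correlation is simply back-calculated from the reliability of 0.53 reported in the Results.

```python
# Minimal sketch of the Spearman-Brown calculation described above; the helper
# name is ours, and the example r is back-calculated from the reported 0.53.
def spearman_brown(r: float, n: int) -> float:
    """Reliability of an n-item composite given the average inter-item correlation r."""
    return n * r / (1 + (n - 1) * r)

# Treating the 2 standardized patient cases as items in a "physician style" scale,
# an average between-case correlation of roughly 0.36 yields the reported reliability:
print(spearman_brown(r=0.36, n=2))  # ~0.53
```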

RESULTS

We analyzed 193 audio recordings of standardized patient encounters with the 100 physicians. Seven recordings were not available: 3 were lost because of equipment failure, and 4 were missed because physicians moved their practices before completion of the study. We averaged 49.4 (SD = 6) patient questionnaires from each physician’s office. Patients reported an average of 1.25 illnesses from a list of 13 commonly treated primary care conditions. (Detailed information on patient illnesses and health status is included in Supplemental Table 1, available online only at http://www.annfammed.org/cgi/content/full/3/5/436/DC1.)

Reliability of the RPAD

The ICC for the RPAD was 0.72. Reliability of the RPAD as a measure of physician style, using the Spearman-Brown prophecy formula based on the 2 standardized patient encounters, was 0.53. Audio-recorded encounters took approximately 50 minutes to code: about 20 minutes to listen to the roughly 20-minute recording in full and another 30 minutes to code it.

RPAD Distribution and Scoring

Table 4 shows the distribution of scores on the RPAD. Each item was scored 0, ½, or 1, but when averaged over 2 cases, the scores also included ¼ and ¾. Almost 70% of the physicians gave a clear description of the clinical problem, though 53% did not discuss uncertainties in any way. Almost all the physicians attempted to clarify agreement on the diagnosis and treatment plan; 98% had a score of ½ or higher. Most physicians (93%) did not discuss barriers to carrying out the treatment plan. The bulk of patients (92%) were given some opportunity to ask questions. Most of the time, the physician’s language matched the patient’s. More than 25% of the time, physicians asked whether patients had any questions. A small percentage of physicians used open-ended questions, and a similarly small percentage checked patients’ understanding.

Table 4.

Rochester Participatory Decision-Making Scale (RPAD) Descriptive Statistics

Frequency and Percentage* by Score†
Item Mean SD 0 ¼ ½ ¾ 1
1. Explain the clinical issue 0.89 0.18 0 1 11 19 69
2. Discuss uncertainties 0.20 0.25 53 22 19 5 1
3. Clarify agreement 0.57 0.14 0 2 72 23 3
4. Examine barriers 0.02 0.09 93 5 2 0 0
5. Patients asked questions 0.49 0.11 2 6 89 2 1
6. Physician’s medical language 0.55 0.15 3 0 75 20 2
7. Physician asks, “Any questions?” 0.25 0.29 46 27 12 12 3
8. Physician asks open-ended questions 0.07 0.18 84 7 6 3 0
9. Physician checks understanding 0.10 0.21 77 10 9 3 1

* In this table, the frequency is per 100 cases, so percentage is equal to frequency.

† Items for the RPAD were scored 0, ½, and 1, averaged over 2 cases.

Correlations of RPAD with MPCC, Physician Characteristics, and Patient and Standardized Patient Surveys

Table 5 shows the Pearson correlations between the RPAD and the MPCC total score and components. As hypothesized, RPAD correlated with Finding Common Ground (MPCC Component 3). RPAD also correlated with the MPCC total score and with Exploring Disease and Illness (Component 1), but not with Understanding the Whole Person (Component 2). RPAD was not correlated with physician age, sex, or years in practice. RPAD was correlated with standardized patient survey findings on the HCCQ and the PCAS trust subscale. RPAD, treated as a physician style measure, was significantly correlated with patient survey findings, though the correlations were much smaller than those of the more proximal standardized patient surveys. We also found that RPAD scores were higher for the straightforward GERD case (6.8, SD = 2.5) than for the ambiguous case (5.7, SD = 2.3) (t = 3.19, P = .002). We found no difference, however, between the RPAD scores of internists (6.4, SD = 2.4) and family physicians (6.2, SD = 2.5) (t = 0.59, P = .55).

Table 5.

Correlation of RPAD Score With Self-Report Measures

Measure RPAD Total
Coding of audiotapes (n = 193)
    Total MPCC score 0.24*
    C1 - Exploring the Disease and Illness 0.18
    C2 - Whole Person 0.08
    C3 - Diagnosis and Treatment 0.19†
Physician characteristics (n = 193)
    Age 0.06
    Female 0.07
    Years in practice 0.02
    Solo practice −0.02
    Number of partners 0.14
SP survey (n = 193)
    Health care climate 0.36*
    Trust in physician 0.32*
Patient survey (n = 4,746)
    Health care climate 0.07*
    Knowledge of patient 0.06*
    Trust in physician 0.06*
    Patient satisfaction 0.06*

RPAD = Rochester Participatory Decision-Making Scale; SP = standardized patient; MD-SP = physician-standardized patient; MPCC = Measure of Patient-Centered Communication; C1, C2, C3 = Components 1, 2, 3.

* P ≤.005.

† P ≤.01.

Regression of RPAD on Patient Surveys and Standardized Patient Surveys

We conducted multilevel regression analyses examining the regression of the patient survey perception measures on the RPAD and MPCC components. The optimal models for all 4 patient perception measures, based on the Akaike and Bayesian information criteria and on reduction of the physician variance component,27 were the models including the RPAD and MPCC Components 1 and 2, but not Component 3 (Supplemental Table 2, available online only at http://www.annfammed.org/cgi/content/full/3/5/436/DC1).
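
The model comparison described above could be sketched as follows. This is illustrative code under assumed variable names (trust, rpad, mpcc_c1, physician_id, and so on), not the authors’ analysis scripts: it fits random-intercept models with patients nested within physicians and compares information criteria computed from maximum likelihood fits.

```python
# Illustrative sketch only: column names (trust, rpad, mpcc_c1, ..., physician_id)
# are hypothetical, and this is not the authors' analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_and_score(formula: str, df: pd.DataFrame):
    """Fit a random-intercept model (patients nested within physicians) by ML,
    then compute AIC/BIC from the log-likelihood so candidate models can be compared."""
    res = smf.mixedlm(formula, df, groups=df["physician_id"]).fit(reml=False)
    k = len(res.fe_params) + res.cov_re.values.size + 1  # fixed effects + RE variance + residual
    aic = -2 * res.llf + 2 * k
    bic = -2 * res.llf + np.log(len(df)) * k
    return res, aic, bic

# Candidate fixed-effect specifications, as in the text:
candidates = [
    "trust ~ rpad",
    "trust ~ rpad + mpcc_c1 + mpcc_c2",
    "trust ~ rpad + mpcc_c1 + mpcc_c2 + mpcc_c3",
]
# for formula in candidates:
#     res, aic, bic = fit_and_score(formula, patient_df)
#     print(formula, round(aic, 1), round(bic, 1))
```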

We conducted a similar series of regression analyses of the standardized patient survey measures on the RPAD and MPCC components. Again, the optimal models for each of the survey measures included the RPAD and MPCC Components 1 and 2, but not Component 3 (Supplemental Table 3, available online only at http://www.annfammed.org/cgi/content/full/3/5/436/DC1).

Consistent with the univariate Pearson correlations, the parameter estimates for the standardized patient survey measures were much larger than those for the patient measures when expressed in standard deviation units on the scales examined. For the standardized patient measures, a 1 SD difference in RPAD was associated with a 30.3% SD difference in the HCCQ and a 25.6% SD difference in satisfaction, whereas for the patient perception measures, a 1 SD difference in RPAD was associated with only a 4.8%–6.1% SD difference in measures of patient perceptions of autonomy support, physician knowledge of the patient, trust, and satisfaction.
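
As a worked example of expressing such a coefficient in standard deviation units (all numbers below are hypothetical; only the form of the calculation, b × SD of the predictor / SD of the outcome, is taken from the text):

```python
# Hypothetical illustration: express an unstandardized coefficient in SD units,
# i.e., standardized effect = b * SD(predictor) / SD(outcome).
b_rpad = 0.45                    # hypothetical coefficient for RPAD on an outcome scale
sd_rpad, sd_outcome = 2.4, 3.6   # hypothetical sample standard deviations
effect_sd_units = b_rpad * sd_rpad / sd_outcome
print(f"a 1 SD difference in RPAD ~ {100 * effect_sd_units:.1f}% of an SD in the outcome")
```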

DISCUSSION

We report exploratory data on a new quantitative objective measure of participatory decision making. The RPAD can be coded reliably, correlates with standardized-patient and real-patient measures of constructs related to participatory decision making, and takes only 50 minutes to code 20-minute office visits. Based on the Braddock et al scale and other literature on participatory decision making, the scale items have face validity.28,29 The scale items address behaviors that physicians use to encourage patient participation in decision making. A difference between our scale and the Braddock et al scale is that we set out to capture physician behaviors that might encourage patient participation, whereas the Braddock et al scale focuses on behaviors that should have occurred during informed decision making. Although we developed the measure in conjunction with our use of the MPCC, we think that the RPAD could be used independently of the MPCC.

The use of standardized patients is both a strength and a weakness of the study. We do not know how the RPAD might work with real patients; however, by using standardized patients, we focused on the physician as an agent encouraging participatory decision making rather than on measuring patient participation in decision making. Future studies should examine using RPAD with real patients.

Because there are no reliable measures of participatory decision making, it was challenging to establish construct validity of the scale. The closest we came to evidence of construct validity was the correlation of MPCC Finding Common Ground with the RPAD. It is difficult to determine whether the modest correlation reflects poor reliability of the MPCC Finding Common Ground subscale or that the 2 scales share variance but measure somewhat different constructs.

Interestingly, RPAD correlated with the MPCC Exploring the Disease and Illness Experience subscale. This finding suggests either that the RPAD taps into other communication processes that are important to patient-centered care or that exploring the disease and illness experience is a necessary precursor to participatory decision making. The RPAD includes items that measure physicians’ active encouragement of patients to express their ideas and thoughts about the treatment plan. Thus, it includes domains that may not be captured by the MPCC Finding Common Ground subscale, which focuses more on patient question asking but does not address whether the physician actively encouraged the patient’s participation.

RPAD significantly contributed to the model explaining variance in the degree to which the standardized patients believed that their autonomy was supported by physicians, lending support to its convergent validity. Because no similar relationship was found for the MPCC Finding Common Ground subscale, the RPAD may capture the construct of patient-perceived participatory decision making at least as well as other available objective instruments. Not surprisingly, RPAD did not account for as much variance in the patient surveys as it did in the standardized patient surveys. Patients’ tendency to accommodate to their physician’s communication style may have caused them to judge their physicians less critically than standardized patients did, thus muting the association between communication style and patient perceptions of their physicians. In addition, the standardized patients were reporting their perception of the same encounter that was coded using the RPAD, whereas the patients were reporting their perceptions of their ongoing relationship with the physician. Finally, patients’ perceptions were correlated with a measure of physician style assessed from the physician’s interaction with standardized patients.

It is possible that correlations with real patients’ perceptions of their physicians would have been stronger had the coded interactions been with those patients. These preliminary findings suggest that the RPAD offers promise as a reliable, valid, and easy-to-code objective measure of participatory decision making.

Conflicts of interest: none reported

Funding support: This project was supported by grant No. R01HS10610 from the Agency for Healthcare Research and Quality (Dr. Epstein).

REFERENCES

1. Kaplan SH, Greenfield S, Ware JE Jr. Assessing the effects of physician-patient interactions on the outcomes of chronic disease [published erratum appears in Med Care. 1989;27:679]. Med Care. 1989;27(3 Suppl):S110–S127.
2. Greenfield S, Kaplan S, Ware JE Jr. Expanding patient involvement in care. Effects on patient outcomes. Ann Intern Med. 1985;102:520–528.
3. Braddock CH III, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice: time to get back to basics. JAMA. 1999;282:2313–2320.
4. Guadagnoli E, Ward P. Patient participation in decision-making. Soc Sci Med. 1998;47:329–339.
5. Griffin SJ, Kinmonth AL, Veltman MWM, Gillard S, Grant J, Stewart M. Effect on health-related outcomes of interventions to alter the interaction between patients and practitioners: a systematic review of trials. Ann Fam Med. 2004;2:595–608.
6. Post DM, Cegala DJ, Miser WF. The other half of the whole: teaching patients to communicate with physicians. Fam Med. 2002;34:344–352.
7. Williams GC, McGregor H, Zeldman A, Freedman ZR, Deci EL, Elder D. Promoting glycemic control through diabetes self-management: evaluating a patient activation intervention. Patient Educ Couns. 2005;56:28–34.
8. Szasz TS, Hollender MH. The basic models of the doctor-patient relationship. Arch Intern Med. 1956;97:585–592.
9. Deber RB. Physicians in health care management: 8. The patient-physician partnership: decision making, problem solving and the desire to participate. CMAJ. 1994;151:423–427.
10. McKinstry B. Paternalism and the doctor-patient relationship in general practice. Br J Gen Pract. 1992;42:340–342.
11. Neighbour R. Paternalism or autonomy? Practitioner. 1992;236:860–864.
12. Stewart M, Brown JB, Weston WW, McWhinney IR, McWilliam CL, Freeman TR. Patient-Centered Medicine: Transforming the Clinical Method. Thousand Oaks, Calif: Sage Publications; 1995.
13. Emanuel EJ, Emanuel LL. Four models of the physician-patient relationship. JAMA. 1992;267:2221–2226.
14. Lazare A, Eisenthal S, Wasserman L. The customer approach to patienthood. Attending to patient requests in a walk-in clinic. Arch Gen Psychiatry. 1975;32:553–558.
15. Quill TE. Partnerships in patient care: a contractual approach. Ann Intern Med. 1983;98:228–234.
16. Deber RB, Kraetschmer N, Irvine J. What role do patients wish to play in treatment decision making? Arch Intern Med. 1996;156:1414–1420.
17. Brown JB, Stewart M, Tessier S. Assessing Communication Between Patients and Doctors: A Manual for Scoring Patient-Centred Communication. Working Paper Series #95-2. London, Ontario: Centre for Studies in Family Medicine and Thames Valley Family Practice Research Unit; 1995.
18. Roter D, Larson S. The Roter interaction analysis system (RIAS): utility and flexibility for analysis of medical interactions. Patient Educ Couns. 2002;46:243–251.
19. Roter DL. Patient participation in the patient-provider interaction: the effects of patient question asking on the quality of interaction, satisfaction and compliance. Health Educ Monogr. 1977;5:281–315.
20. Braddock CH, Fihn SD, Levinson W, Jonsen AR, Pearlman RA. How doctors and patients discuss routine clinical decisions. Informed decision making in the outpatient setting. J Gen Intern Med. 1997;12:339–345.
21. Williams GC, Freedman ZR, Deci EL. Supporting autonomy to motivate patients with diabetes for glucose control. Diabetes Care. 1998;21:1644–1651.
22. Safran DG, Taira DA, Rogers WH, Kosinski M, Ware JE, Tarlov AR. Linking primary care performance to outcomes of care. J Fam Pract. 1998;47:213–220.
23. Safran DG, Kosinski M, Tarlov AR, et al. The Primary Care Assessment Survey: tests of data quality and measurement performance. Med Care. 1998;36:728–739.
24. Ware J Jr, Kosinski M, Keller SD. A 12-item Short-Form Health Survey: construction of scales and preliminary tests of reliability and validity. Med Care. 1996;34:220–233.
25. Derogatis LR, Lipman RS, Covi L. SCL-90: an outpatient psychiatric rating scale--preliminary report. Psychopharmacol Bull. 1973;9:13–28.
26. Safran DG, Montgomery JE, Chang H, Murphy J, Rogers WH. Switching doctors: predictors of voluntary disenrollment from a primary physician’s practice. J Fam Pract. 2001;50:130–136.
27. Snijders TAB, Bosker RJ. Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling. London, UK: Sage Publications; 1999.
28. Braddock CH, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice: time to get back to basics. JAMA. 1999;282:2313–2320.
29. Kaplan SH, Greenfield S, Gandek B, Rogers WH, Ware JE Jr. Characteristics of physicians with participatory decision-making styles. Ann Intern Med. 1996;124:497–504.
