Health Expectations: An International Journal of Public Participation in Health Care and Health Policy

2010 Jul 9; 14(1): 29–37. doi: 10.1111/j.1369-7625.2010.00613.x

Sharing decisions in breast cancer care: Development of the Decision Analysis System for Oncology (DAS‐O) to identify shared decision making during treatment consultations

Richard F Brown 1, Phyllis N Butow 2, Ilona Juraskova 2, Karin Ribi 3, Daniela Gerber 3, Jurg Bernhard 3,4, Martin HN Tattersall 2,5
PMCID: PMC5060562  PMID: 20629766

Abstract

Background  Shared decision making (SDM) is widely accepted as the preferred method for reaching treatment decisions in the oncology setting, including decisions about clinical trial participation; however, researchers disagree about the components of SDM. Specific standardized coding systems are needed to help overcome this difficulty.

Objective  The first objective was to describe the development of an oncology-specific SDM coding system, the DAS-O. The second was to provide reliability and validity data supporting the DAS-O.

Setting and participants  Consultation data were available from tertiary cancer center outpatient oncology clinics in Australia and New Zealand (ANZ) and in Switzerland, Germany and Austria (SGA). Patients were women with a confirmed diagnosis of early stage breast cancer. Reliability data came from 18 randomly selected coded transcripts drawn from the ANZ and SGA samples; concurrent validity data came from 55 ANZ consultations.

Measurement  Inter- and intra-rater reliability were evaluated using Kappa statistics and correlation coefficients. Correlation coefficients were also used to assess the concurrent validity of the DAS-O against two other SDM coding systems, OPTION and DSAT.

Results  Inter- and intra-rater reliability for the system were high, with average Kappas of 0.58 and 0.65 respectively. The correlation coefficient between the DAS-O and OPTION was 0.73, and correlations with the DSAT exceeded 0.5.

Conclusions  We have developed a reliable and valid coding system for identifying and rating the quality of SDM in breast cancer consultations.

Keywords: analysis system, oncology, shared decision making

Introduction

In contemporary Western cultures, decision-making models that emphasize the role of patients in reaching treatment decisions have now largely supplanted the traditional paternalistic model of decision-making. Ethicists, health professionals and patient advocacy groups have generally viewed this shift to patient participation in decision-making favorably. 1 Participatory models acknowledge that doctors and patients may have different values and preferences that affect the course of clinical actions.

While there is general acceptance of the principle of shared decision-making (SDM), there is dispute over which factors contribute to an optimal shared decision. Moreover, research evidence suggests that patients vary greatly in the degree to which they prefer to be involved in reaching their treatment decisions. The degrees of patient involvement in, and doctor facilitation of, decision-making are the factors that largely differentiate participatory models. 2, 3, 4

Doctor-focused interventions to influence SDM require clear criteria for judging pre- and post-intervention behavior. Although the criteria for evaluating the adequacy of doctor behavior in promoting SDM are still evolving, 5 the literature suggests that the major criteria for judging the adequacy of shared decisions, from both the doctor and patient perspectives, are broadly: (i) patient understanding of information and of the evidence underpinning the treatment choice; (ii) doctor tailoring of information and involvement to the needs of the patient, and facilitation of patient decision making by balancing different options and clarifying values; and (iii) patient adjustment to, and satisfaction with, various aspects of the decision-making process and the ultimate decision. 2, 5, 6, 7, 8 As theories of communication suggest, 9 effective SDM requires both the transmission of information essential to decision making and the relational skill to ensure that information is tailored to patient needs.

Observational studies that have attempted to identify these criteria have been limited by a lack of standardized coding systems that can be applied to consultation recordings. Validated coding systems are needed for a number of purposes: to audit current practice, to evaluate the outcomes of SDM and to benchmark health professionals. Recently, however, several systems for coding SDM have been published, including the Decisional Support Analysis Tool (DSAT) 10 and the OPTION scale. 11 These coding systems meet the following criteria: (i) they are based on audio-taped interactions between clinicians and their patients, (ii) they include items relevant to the core components of SDM, and (iii) they have confirmed content validity and inter-rater reliability. However, they do not address consultations in which clinical trials are discussed, a setting in which SDM has particular importance.

More recently, Albrecht et al. 9 developed the Karmanos Accrual Analysis System (KAAS), which enables assessment of communication in oncology consultations in which clinical trials are discussed. This excellent system provides data about the physician–patient alliance and aspects of clinical trial information content, but it does not operationalize specific aspects of SDM or focus on decisions in which patients must weigh up the advantages and disadvantages of standard versus clinical trial treatments. 9

As part of an ongoing program of research, 12 we have developed a coding system, the Decision Analysis System for Oncology (DAS-O), which addresses both SDM and the information important to clinical trial discussions. Our goal was to identify and assess the quality of key aspects of shared decision making during oncology consultations in which treatment options, including clinical trials, are discussed.

The DAS‐O was developed in the oncology setting specifically and its applicability to non‐cancer settings is unknown. While clinical trials are discussed in other clinical contexts, few diseases combine a life‐threatening illness with multi‐modal treatment options and an uncertain outcome. Thus the complexity of SDM is highlighted in this setting. Nevertheless, many of the issues identified may well be applicable in other settings, and this should be explored in future research.

The coding system identifies items of content considered essential for cancer patients to make treatment decisions and assesses relational aspects of the SDM process. Based on rigorous qualitative methodology, the coding system captures some novel informational and relational aspects of shared decision-making during cancer consultations. 2, 5, 6, 7, 8 Examples include: (i) identifying a sequence of information giving that promotes patient understanding and an increased ability to weigh up the benefits and costs of various treatment choices in collaboration with the physician, (ii) identifying clinician word choices that foster a sense of active patient participation in the decision-making process, and (iii) identifying linguistic strategies that may coerce patients into joining cancer clinical trials (see Appendix S1).

The primary aim of this manuscript is to describe the early evaluation of the DAS-O in a sample of breast cancer patients; the secondary aim is to describe the extent to which SDM behaviors are evident in discussions about breast cancer clinical trials.

Method

Development of items

We conducted a qualitative analysis of audio-recordings of 16 general oncology consultations in which treatment options, including clinical trials, were discussed between nine oncologists and their patients. This analysis provided data about the way decisions were being made. The transcripts were analyzed using the constant comparative method 13 by an expert panel from diverse disciplines, including ethics, cancer medicine and psycho-oncology. Once this analysis was completed, a further set of ten consultations was audio-recorded, transcribed and analyzed to ensure theoretical saturation. These transcripts were subjected to an identical analytic procedure. A subset of seven of the 26 transcripts was also analyzed by expert linguists using a systemic functional linguistic approach. 14, 15 This process has been described in full elsewhere. 12

These analyses identified a range of issues, including a set of strategies to assist oncologists in facilitating shared treatment decision making with their patients. In addition, the strategies provided guidance about the essential items of clinical and ethical information necessary for SDM about standard treatments and clinical trials. A detailed description of the development of these items has been provided elsewhere. 12, 16 The strategies were operationalized as a set of discrete items, each listed with an explanation and a rationale for inclusion, together with characteristic examples extracted from the transcripts. These were collated in tabular form in a document with an accompanying explanation of the developmental process.

Face validity

In order to assess face validity, this document was presented to a consensus workshop consisting of a multidisciplinary panel of experts convened by a peak Australian cancer research agency. This form of validity, expert panel validity, is ‘face validity performed by a group of experts’. 17 The Delphi technique 18 was used to guide the conduct of the workshop and to ensure that consensus was reached among the 27 workshop participants, who included expert oncology clinicians, linguists, ethicists, psychologists, research nurses, cancer survivors, patient advocates and lawyers specializing in medico-legal litigation. Participants’ views about the adequacy and completeness of the items were sought during workshop discussion groups that were audio-recorded. These recordings were then transcribed and content analyzed. Suggestions for changes were distributed to all participants, who agreed unanimously to a revised set of items.

Once consensus was reached about the items, their explanations and their examples, a formal coding sheet and manual were developed. The items generated were collectively called the DAS-O coding system and were grouped into two themes: (i) establishing a shared decision making framework (22 items) and (ii) providing clear and unbiased information about standard treatments and clinical trials (48 items). These two themes are further divided into five subscales: (i) establishing the physician–patient team, (ii) following a consultation pathway, (iii) providing information about standard treatments and clinical trials, (iv) promoting clarity and (v) avoiding coercion. Total scores were calculated by summing the scores for the two themes. The items are presented in Appendix S1.

To use the coding system, the transcript of an oncology consultation is read in its entirety while listening to the audio recording, and each item is rated as present or absent, with present items further coded as basic or extended.
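
As a concrete illustration of this rating and scoring scheme, the sketch below shows one way the absent/basic/extended codes could be represented and aggregated. The item identifiers and subscale groupings are illustrative placeholders only; the actual DAS-O items appear in Appendix S1 and Tables 1–2.

```python
from typing import Dict, List

# Each DAS-O item is coded as absent (0) or present, with present items
# further coded as basic (1) or extended (2).
ABSENT, BASIC, EXTENDED = 0, 1, 2

# Hypothetical subscale membership; the real item numbers are listed in
# Appendix S1 and Tables 1-2.
SUBSCALES: Dict[str, List[str]] = {
    "establishing_team": ["1.1", "1.2", "1.3"],
    "consultation_pathway": ["2.1", "2.2"],
}

def subscale_score(ratings: Dict[str, int], items: List[str]) -> int:
    """Sum the 0/1/2 codes for the items belonging to one subscale."""
    return sum(ratings.get(item, ABSENT) for item in items)

def total_score(ratings: Dict[str, int]) -> int:
    """Total score: the sum of all subscale scores."""
    return sum(subscale_score(ratings, items) for items in SUBSCALES.values())

# Codes assigned by one coder to one consultation transcript.
ratings = {"1.1": EXTENDED, "1.2": BASIC, "2.1": ABSENT, "2.2": BASIC}
print(total_score(ratings))  # -> 4
```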

Inter- and intra-rater reliability

In order to assess inter- and intra-rater reliability, we applied the DAS-O coding system to transcripts of audio recordings of initial oncologist–patient consultations with women who had been diagnosed with early stage breast cancer. This sample was drawn from a large international randomized controlled trial (IBCSG 33-03) exploring an intervention to assist oncologists to communicate more effectively with their patients and make shared treatment decisions. As the intervention would not affect the inter- and intra-rater reliability calculations, transcripts were drawn from both the experimental and control arms of the study.

Participants

Twenty-one surgical, radiation and medical oncologists from Australian and New Zealand (ANZ) centers and 25 medical, surgical and gynecological oncologists from Swiss, German and Austrian (SGA) centers, all participating in International Breast Cancer Study Group clinical trials, were invited to participate in the communication study.

Eligible patients (i) were over the age of 18, (ii) had been recently diagnosed with early stage breast cancer, (iii) had adequate native language skills to complete questionnaires and (iv) were mentally and physically capable of participating in the study.

Procedure

Eligible patients were identified by the doctor or the research nurse, who obtained written informed consent from agreeable patients. The research nurse then administered a short questionnaire prior to the consultation in which treatment options were discussed, gathering demographic data (age, marital status, education, occupation and prior medical training). Four consultations per doctor in which treatment options were presented were audio-taped and transcribed verbatim. Coders were trained on five transcripts from another study with an independent experienced coder until a high level of agreement was established. Separate inter- and intra-rater reliability checks were conducted in the ANZ and SGA samples using slightly different methods; however, summary data are available. To establish inter-rater reliability, two coders in the ANZ sample coded six randomly selected transcripts pairwise, and two coders in the SGA sample coded 12 randomly selected transcripts pairwise. Each coder coded each transcript, and the data were entered into a statistical program for reliability analyses. To establish intra-rater reliability, coders in the ANZ sample re-coded five randomly selected transcripts of those they had already coded, while in the SGA sample one coder re-coded 12 randomly selected transcripts a median of 2 months (range: 10 days to 2.5 months) after first coding them.

It was noted that in some circumstances coders had difficulty coding one item in Subscale 1, ‘offering choice between treatments’; for example, when a joint tumor board had decided on the preferred treatment option and it was left to the clinician to communicate that preference. This occurred in three of the cases used for the inter/intra-rater reliability analyses. We calculated the reliability statistics both including and excluding these three cases; excluding them did not change the reliability statistics.

Approval to conduct the study was obtained from the relevant institutional review boards in all participating countries.

Analysis plan

Inter- and intra-rater reliability was calculated for individual items, subscales and the total scale. For items, agreement was calculated for (i) the presence or absence of the item and (ii) the quality (basic or extended) of items present. Items were coded ‘0 – absent’, ‘1 – basic’ or ‘2 – extended’: an item that was not observed was assigned 0, an item that was mentioned was coded 1, and an item that was mentioned with an explanation and rationale was coded 2. For some items, Kappas could not be calculated, either because all ratings were constant within and across raters or because of incomplete contingency tables. Average Kappas for items within each subscale were calculated, in addition to correlation coefficients that assessed agreement on the subscales. Because of the limited number of consultations in which a clinical trial was discussed, coding items assessing discussions of trial participation were not included in the reliability analysis.
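
As an illustration of this analysis plan, the following sketch computes per-item Kappas between two raters and a subscale-level correlation. The authors used SPSS (see below); this Python version, with invented ratings and item names, simply shows the shape of the calculation, including the need to skip items whose ratings are constant.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# rater_a[item] and rater_b[item] hold one 0/1/2 code per transcript.
rater_a = {"1.1": [0, 1, 2, 1, 0, 2], "1.2": [1, 1, 0, 2, 1, 0]}
rater_b = {"1.1": [0, 1, 2, 2, 0, 2], "1.2": [1, 0, 0, 2, 1, 1]}

# Per-item Kappa, skipping items where either rater's codes are constant
# (Kappa is undefined there; the paper reports percent agreement instead).
kappas = []
for item in rater_a:
    a, b = rater_a[item], rater_b[item]
    if len(set(a)) > 1 and len(set(b)) > 1:
        kappas.append(cohen_kappa_score(a, b))
print("average kappa:", np.mean(kappas))

# Subscale-level agreement: correlate the raters' summed scores across
# transcripts.
totals_a = np.sum([codes for codes in rater_a.values()], axis=0)
totals_b = np.sum([codes for codes in rater_b.values()], axis=0)
r, p = pearsonr(totals_a, totals_b)
print(f"subscale correlation: r={r:.3f}, p={p:.3f}")
```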

The data were analyzed using SPSS (Statistical Package for the Social Sciences) for Windows, Release 10.0.0, 1999 (SPSS Inc., Chicago, IL, USA).

Concurrent validity

To establish concurrent validity, the DAS-O and two other coding systems designed to assess SDM, the OPTION scale 11 and the Decision Support Analysis Tool (DSAT), 10 were applied to the same data set, and correlations were calculated between the coding systems. Because the OPTION and DSAT tools were developed in the primary health care setting (although they have been applied in the cancer setting), we expected moderate rather than high correlations with the DAS-O.

The OPTION scale was designed to assess shared decision making in the primary care setting. 11 OPTION consists of 12 items that independently assess key competencies of shared decision-making on a five-point scale, ranging from ‘the behavior is not observed’ (0) to ‘the behavior is exhibited to a very high standard’ (4). 11 Scores for the OPTION scale range from 0 to 48, with higher scores indicating more extended performance of the competencies.

The Decision Support Analysis Tool (DSAT) was designed to evaluate practitioner knowledge of decision support skills and interventions for patients facing value-sensitive health decisions. 10 DSAT assesses the presence or absence of 22 behaviors in six domains (checking decision-making status, providing information, clarifying values, discussing others’ involvement in the decision, clarifying the next steps and tailoring the discussion to the individual patient). Scores range from 0 to 12, with some elements given marks if at least some behaviors are present.
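
A minimal sketch of the two instruments’ scoring as summarized above: the OPTION total is simply the sum of twelve 0–4 ratings, while the DSAT’s 0–12 domain-based marking is only outlined in this paper and is approximated here by a hypothetical one-mark-per-domain rule.

```python
from typing import Dict, List

def option_total(item_ratings: List[int]) -> int:
    """OPTION: 12 items rated 0-4, so totals range from 0 to 48."""
    assert len(item_ratings) == 12 and all(0 <= r <= 4 for r in item_ratings)
    return sum(item_ratings)

def dsat_total(domains: Dict[str, List[bool]]) -> int:
    """Hypothetical DSAT-style mark: one point per domain in which at
    least one behavior was observed (the published tool's 0-12 scoring
    is more detailed than this)."""
    return sum(1 for behaviors in domains.values() if any(behaviors))

print(option_total([2, 3, 1, 0, 4, 2, 2, 1, 3, 0, 1, 2]))  # -> 21
print(dsat_total({"providing_information": [True, False],
                  "clarifying_values": [False]}))  # -> 1
```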

All three of the coding systems included in this study focus on the health practitioner’s behavior, because they were primarily designed to allow evaluation of the health practitioner in studies and clinical practice. Patient behavior is an essential component of understanding how shared decisions are made and should also be coded. However, given the lack of instruments that evaluate SDM and also capture patient data, 19 these instruments were felt to be the best available for this analysis in the oncology setting.

Participants and procedure

In the concurrent validity analysis, data were derived from the sample described above but were limited to transcripts from the 21 participating ANZ physicians and 70 of their patients. The procedure was identical to that described above. After removing consultations with technical (i.e. recording) difficulties, the validity analysis was based on 55 consecutive audio-taped consultations. Coders were each provided with a published manual for the coding system they were to apply. Two coders then coded five incomplete transcripts (unusable for the main analysis) alongside an independent coder, an expert in the use of the three coding systems, until a high level of agreement was established. The independent coder was not involved in further coding. Coders read the hard copy of each transcript while listening to the audio-tape.

Analysis plan

Concurrent validity was tested by exploring correlation coefficients between the three coding systems (i.e. DAS‐O, OPTION and DSAT) applied to the set of transcripts.
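
In practice, this amounts to pairwise correlations between the total scores each system assigns to the same consultations. A sketch with made-up placeholder scores (the study applied this to the 55 ANZ transcripts):

```python
from itertools import combinations
from scipy.stats import pearsonr

# Placeholder totals for six consultations; one entry per coding system.
scores = {
    "DAS-O":  [34, 28, 41, 22, 37, 30],
    "OPTION": [21, 18, 29, 14, 25, 20],
    "DSAT":   [6, 5, 9, 4, 7, 5],
}

for a, b in combinations(scores, 2):
    r, p = pearsonr(scores[a], scores[b])
    print(f"{a} vs {b}: r={r:.2f} (p={p:.3f})")
```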

The agencies that funded this research did not have a role in the design, analysis or any aspect of the conduct of this study.

Results

Inter- and intra-rater reliability

Inter‐rater reliability

The average Kappa for the total scale was 0.584, which, using Landis and Koch’s 20 benchmarks for strength of agreement, represents substantial agreement. For those items for which Kappas could not be calculated, average percentage agreement was 83% for recognition of individual items and 98% for quality ratings, representing substantial to almost perfect agreement. 20 The correlation coefficient for the total scale was 0.884.

Inter‐rater agreement: subscales

Inter-rater agreement on subscales, assessed by correlations, was generally strong (see Table 1). The correlation coefficient and average Kappa for one subscale, ‘Following a consultation pathway – standard treatment’, were slightly lower than those of the other subscales, although the correlation coefficients and all Kappa calculations showed significant associations between raters.

Table 1. Inter-rater agreement for subscales 1–5 and total

Subscale | Item numbers | Average Kappa* | Correlation coefficient
1. Establishing physician–patient team | 1.1–1.16 | 0.644 | 0.841
2. Following a consultation pathway – standard treatment | 2.1–2.6 | 0.486 | 0.695
3. Providing information about standard treatment | 2.1.1–2.1.3 + 2.3.1a–2.3.1c | 0.541 | 0.855
4. Promoting clarity | 3.1–3.7 | 0.541 | 0.802
5. Avoiding coercion | 4.2.1–4.2.5 | 0.594 | 0.773
Total | | 0.584 | 0.844

*40/45 Kappa calculations reached significance, P < 0.05.

Intra‐rater reliability

The average Kappa for the recognition of individual items was 0.648, reflecting substantial agreement. 20 For those items for which Kappas could not be calculated, average percentage agreement was 83% for recognition of individual items and 98% for quality ratings, representing substantial to almost perfect agreement. 20 The correlation coefficient for the total scale was 0.945.
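
The Landis and Koch benchmarks used throughout these results map Kappa values onto verbal strength-of-agreement labels; the small helper below makes that interpretation explicit, assuming the conventional cut-points from their 1977 paper.

```python
def landis_koch(kappa: float) -> str:
    """Landis & Koch (1977) strength-of-agreement label for a Kappa value."""
    if kappa < 0.00:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

print(landis_koch(0.648))  # -> "substantial"
```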

Intra‐rater agreement: subscales

Intra-rater agreement on subscales, assessed by correlations, was generally very strong (see Table 2). All but five of the correlations were highly significant.

Table 2. Intra-rater agreement for subscales 1–5 and total

Subscale | Item numbers | Average Kappa* | Correlation coefficient
1. Establishing physician–patient team | 1.1–1.16 | 0.634 | 0.941
2. Following a consultation pathway – standard treatment | 2.1–2.6 | 0.744 | 0.950
3. Providing information about standard treatment | 2.1.1–2.1.3 + 2.3.1a–2.3.1c | 0.766 | 0.946
4. Promoting clarity | 3.1–3.7 | 0.493 | 0.596
5. Avoiding coercion | 4.2.1–4.2.5 | 0.669 | 0.882
Total | | 0.648 | 0.945

*40/45 Kappa calculations reached significance, P < 0.05.

Concurrent validity

Inspection of the correlation coefficients confirmed a significant positive relationship between total scores on the three coding systems. The DAS-O was strongly correlated with OPTION (0.73); the DSAT was moderately correlated with both the DAS-O (0.58) and OPTION (0.59).

Discussion

The twofold aims of this paper were to describe the development of the DAS‐O and to present reliability and validity data supporting the ongoing use of this coding system to identify and rate the adequacy of shared decision‐making in oncology consultations.

We have demonstrated the face validity of the DAS-O through an expert panel review process whose participants were selected to represent a wide range of views and expertise in the theoretical and clinical aspects of treatment decision-making, including decisions about clinical trial participation.

Inter-rater reliability scores for the individual items were generally high, with an average Kappa coefficient of 0.584 and an average correlation coefficient of 0.844. Percent agreement was similarly high for the individual items for which it was not possible to calculate a Kappa statistic. Inter-rater reliability correlation coefficients were higher for the individual subscales, ranging between 0.695 and 0.855. Intra-rater reliability scores were similarly high, with Kappa coefficients and correlation coefficients exceeding those for inter-rater reliability. Together, these scores suggest that coders can be trained to achieve consistent agreement in both identifying and rating items.

Achieving high reliability scores for item quality ratings is difficult. In a pilot study that used a modified version of the DAS-O, Kappa correlations for quality ratings were 0.62 (inter-rater) and 0.54 (intra-rater), indicating acceptable levels of agreement. 16 The substantial increases in Kappa coefficients observed in this study are likely due to improvements in the description of the rating parameters and a reduction in the number of possible rating responses. Coders reported difficulty assigning quality ratings to items that required a judgment about the clarity of communication about treatment options. In future, we will provide more detailed examples in the coding manual.

Coefficients for Subscale 2, ‘Following a consultation pathway’ (see Appendix S1), were lower and did not reach statistical significance, although a trend towards significance was noted. Coders reported difficulty distinguishing between some of the pathway items, particularly Patient Voice, Dr Recommendation and Patient Decision. This is likely due to the conversational nature of these steps: clinicians did not make explicit transitions between them, which made it difficult to code these elements as discrete items. In future, coders will be trained to identify more subtle transitions, and examples of these will be added to the coding manual.

As part of our ongoing program of research, we have explored the correlations between different coding systems applied to the same data set to capture shared decision-making behaviors in the medical context. Our results show a high level of concordance between the DAS-O and OPTION, with a strong inter-correlation of 0.73; the two systems thus seem to measure similar concepts. The DSAT also showed moderate correlations (>0.5) with the other coding systems. Because the DSAT was developed to evaluate a nurse-delivered telephone decisional support intervention 21 and practitioner knowledge of decision support skills, 10 its items are somewhat different from the DAS-O and OPTION items, which were developed to evaluate SDM in actual medical consultations.

One limitation of this analysis was our inability to apply the DAS-O items related to clinical trial discussions, owing to the infrequency of such discussions in the sample. This may reflect oncologists’ reluctance to consider clinical trial participation as a treatment option. Seeking informed consent to cancer clinical trials presents a very challenging area of communication for oncologists and their patients, particularly when discussing complex issues such as randomization. 22, 23 Promoting shared decision making is especially important in this context, as cancer patients face complex information and difficult decisions at a time of heightened anxiety. Further research is warranted to explore the utility of the DAS-O in complicated trial discussions.

These findings suggest that we have developed a reliable and valid coding system that captures the content and quality of shared decision-making in oncology consultations. Skills derived from the subscales of the DAS-O are currently being used as part of an assessment system to evaluate the efficacy of communication skills training for oncologists at a major U.S. comprehensive cancer center. 24 Video-recordings of oncology consultations are coded to identify changes in participants’ communication skills, including shared decision-making behavior, and these data provide the basis for pre- and post-training feedback to participants. A computerized coding system has been developed and successfully used by trained coders to identify pre/post skills uptake; the program allows inter-rater reliability calculations on every consultation recording. These data are being used to extend the DAS-O codes to video recordings. Additional analyses are planned to develop new codes that capture non-verbal behaviors promoting shared decision-making.

This study also provides preliminary evidence that the DAS-O may be applied in different cultural settings. Further research is needed to explore the utility of the coding system in a broader range of cultural contexts. Furthermore, shared decision-making occurs in many and varied health care and illness settings. Our sample included only female patients, many of whom were already cured but were considering the benefits and risks of additional adjuvant treatments. Future plans include using the DAS-O to capture communication in cancer consultations in which patients with metastatic disease face treatment decisions at a time of worsening prognosis. Finally, we intend to conduct validation studies of the Decision Analysis System in other illness areas.

Acknowledgement

This research was supported by a grant from the National Breast Cancer Foundation of Australia and a grant from Oncosuisse/Swiss Cancer League.

Supporting information

Appendix S1. DAS‐O subscale items.


References

1. Llewellyn-Thomas HA, McGreal MJ, Thiel EC. Cancer patients’ decision making and trial entry preferences: the effects of “framing” information about short term toxicity and long term survival. Medical Decision Making, 1995; 15: 4–12.
2. Charles C, Gafni A, Whelan T. Shared decision-making in the medical encounter: what does it mean? (Or it takes at least two to tango). Social Science and Medicine, 1997; 44: 681–692.
3. Evans RG. Strained Mercy: The Economics of Canadian Health Care. Toronto: Butterworths, 1984.
4. Gattellari M, Butow PN, Tattersall MH. Sharing decisions in cancer care. Social Science and Medicine, 2001; 52: 1865–1878.
5. Makoul G, Clayman ML. An integrative model of shared decision making in medical encounters. Patient Education and Counseling, 2006; 60: 301–312.
6. Charles C, Redko C, Whelan T, Gafni A, Reyno L. Doing nothing is no choice: lay constructions of treatment decision-making among women with early-stage breast cancer. Sociology of Health & Illness, 1998; 20: 71–95.
7. Entwistle VA, Sowden AJ, Watt IS. Evaluating interventions to promote patient involvement in decision making: by what criteria should effectiveness be judged? Journal of Health Services Research & Policy, 1998; 3: 100–107.
8. Towle A, Godolphin W. Framework for teaching and learning informed shared decision making. British Medical Journal, 1999; 319: 766–769.
9. Albrecht TL, Eggly SS, Gleeson MEJ et al. Influence of clinical communication on patients’ decision making on participation in clinical trials. Journal of Clinical Oncology, 2008; 26: 2666–2673.
10. Guimond P, Bunn H, O’Connor A et al. Validation of a tool to assess health practitioners’ decision support and communication skills. Patient Education and Counseling, 2003; 50: 235–245.
11. Elwyn G, Edwards A, Wensing M, Hood K, Atwell C, Grol R. Shared decision making: developing the OPTION scale for measuring patient involvement. Quality and Safety in Health Care, 2003; 12: 93–99.
12. Brown RF, Butow PN, Butt DG, Moore AR, Tattersall MHN. Developing ethical strategies to assist oncologists in seeking informed consent to cancer clinical trials. Social Science and Medicine, 2004; 58: 379–390.
13. Glaser BG, Strauss AL. The Discovery of Grounded Theory: Strategies for Qualitative Research. New York: Aldine Publishing Company, 1967.
14. Eggins S. An Introduction to Systemic Functional Linguistics. London: Pinter Publishers, 1994.
15. Halliday MAK. An Introduction to Functional Grammar, 2nd edn. London: Edward Arnold, 1994.
16. Brown RF, Butow PN, Ellis P, Boyle F, Tattersall MHN. Seeking informed consent to cancer clinical trials: describing current practice. Social Science and Medicine, 2004; 58: 2445–2457.
17. Baxter LA, Babbie E. The Basics of Communication Research. Belmont, CA: Wadsworth, 2004, p. 126.
18. Bowles N. The Delphi technique. Nursing Standard, 1999; 13: 32–36.
19. Braddock CH, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice: time to get back to basics. Journal of the American Medical Association, 1999; 282: 2313–2320.
20. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics, 1977; 33: 159–174.
21. Stacey D, O’Connor AM, Graham ID, Pomey MP. Randomized controlled trial of the effectiveness of an intervention to implement evidence-based patient decision support in a nursing call center. Journal of Telemedicine and Telecare, 2006; 12: 410–415.
22. Brown RF, Butow P, Boyle F, Tattersall MHN. Seeking informed consent to cancer clinical trials: evaluating the efficacy of communication skills training. Psycho-Oncology, 2007; 16: 507–516.
23. Fallowfield L, Jenkins V, Brennan C, Sawtell M, Moynihan C, Souhami A. Attitudes of patients to randomised trials of cancer therapy. European Journal of Cancer, 1998; 34: 1544–1559.
24. Brown RF, Bylund CL. Communication skills training: describing a new conceptual model. Academic Medicine, 2008; 83: 37–44.
