Abstract
To facilitate treatment decision‐making, one aims to provide information, to present it in a way that makes it as easy as possible to understand, and to help the decision‐maker through the cognitive processes that result in a treatment decision. Decision aids aim to accomplish just these goals, and this paper identifies practical issues that we encountered in creating a decision aid for men with early stage prostate cancer. We highlight the results of studies we carried out to provide an empirical basis for the decision aid that we were developing. Several of the studies were designed to identify what information key players (health professionals, patients and family members) thought was important for the decision‐making process. Another investigated methodological considerations in identifying important information. The final study focused on presentation issues. The studies designed to explore what information was considered important found great variability both among the health care professionals involved in treating patients with prostate cancer (urologists, radiation oncologists, nurses in cancer clinics, and radiation technologists) and among the patients themselves. The studies also showed that not all information contained within a typical category is of equal importance. A methodological study showed that the information patients deem important to their decision depends on whether they are rating information that could be provided or questions that could be answered. Finally, presentation studies showed that the various formats used to present quantitative information are processed with differing degrees of accuracy and ease. Each of these results has implications for those creating decision aids; the implications are highlighted.
Keywords: decision aids, informed consent, shared decision‐making
Introduction
In the setting of curable prostate cancer, patients frequently have three treatment options: ‘watchful waiting’, surgery, or radiation. Although there have been no clinical trials providing unequivocal evidence, it is thought that active treatment confers some survival benefit 1 but often causes side‐effects which have the potential to impair quality of life and which differ between radiation therapy and surgery. Thus, there is a large amount of information relevant to the disease and its potential treatments that may be important to the treatment decision.
Evidence suggests that the vast majority of patients with prostate cancer in Canada want to participate, at least to some extent, in their treatment decision. 2 The time of treatment decision‐making, however, is often a time of great stress, caused both by the trauma of the recent diagnosis and by the pressure to make decisions and receive treatment quickly. Such stress is known to impair the higher cognitive processing required for decision‐making. 3 Thus, the treatment decision‐making period in this setting is one requiring large processing capacity from people whose normal processing capacity has been reduced.
We believed that we could assist patients with this challenge in a number of ways. First, we could try to identify the questions that important stakeholders think should be addressed, and try to provide their answers. Second, we could ensure that we present the information in a manner that makes it as easy as possible to process. And finally, we could try to facilitate directly the specific cognitive processes that are intrinsic to making such a complex decision. We therefore decided to develop a decision aid to provide this assistance.
This paper describes some of the practical issues that we encountered as we developed the decision aid. The aid is an interview with a third party that is integrated into the cancer clinic practice between the initial consultation with the cancer specialist and a subsequent treatment decision‐making consultation that occurs about one week later. The aid includes three components: the structured presentation of information, exercises designed to help the patient figure out what factors are important to his decision and, finally, exercises designed to help clarify the value of each of those factors. Development of the aid involved several studies that were designed to provide an empirical basis for the information that we would provide and for how we would provide it. Though the studies were focused on early stage prostate cancer, the issues that we encountered are probably relevant to many other cancer and non‐cancer settings. For each of the issues that we have encountered, we have identified what we consider its most salient implication for the development of the decision aid.
Issue 1: Doctors vary in what information they consider important to the decision
We first surveyed the four major professional groups involved in the care of patients with prostate cancer in Ontario to find out what questions they thought should be addressed with the early stage patients before treatment decisions are made. The survey included all Ontario radiation oncologists, urologists, nurses in cancer clinics and radiation therapists.
Through discussions with health care professionals, patients, researchers and lay people, we had assembled a comprehensive list of 78 questions that might be important to discuss. We used these questions as a basis for the survey, asking respondents in each professional group to judge the importance of addressing each of the questions with a case‐scenario patient. The degree of importance was indicated on a four‐alternative response scale, in which each alternative was given a functional definition. An ‘essential’ question was defined as ‘a question that you would want to address in almost all circumstances.’ The other three categories were ‘important’, ‘no opinion or no strong opinion’, and ‘avoid’. More detail about the study is available elsewhere. 4 , 5 Responses were received from 26 oncologists (67%), 97 urologists (54%), 34 nurses (68%) and 126 radiation technologists (50%). Pearson correlation coefficients were used to compare the percentages of ‘essential’ responses to each of the questions between pairs of groups. The correlations showed that questions deemed essential by most respondents in one group were similar to those deemed essential by most respondents in each of the other groups; for example, the correlation between the two physician groups was r(76) = 0.87, P < 0.001. Although there was general agreement between groups, the results also demonstrated considerable disagreement within each group. We used a measure that we called an Index of Agreement to evaluate attitudes toward individual questions. The measure is the percentage of the group that reflects the majority opinion about whether the question is essential to address or not: Index = 50 + |50 − % essential| (i.e. 50 plus the absolute value of [50 minus the percentage of ‘essential’ responses]). Figure 1 shows the Indices of Agreement of the oncologists and of the urologists. For an Index threshold of 67%, arbitrarily chosen to define when there is ‘enough’ agreement, Fig. 1a shows that the oncologists ‘agree’ that 11 questions are essential to address and that 37 questions are not essential to address, but they disagree on the status of the remaining 30 questions. Similarly, Fig. 1b shows that the urologists agree that four questions are essential to address and that 41 are not essential, but they disagree on the status of the remaining 33. An example of a question that both groups agreed is essential is: ‘Will the treatment make me impotent? If so, for how long?’, while a question that generated near maximum disagreement in both groups is ‘Why is it sometimes said that patients don’t live any longer with early diagnosis?’.
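For example, a question rated essential by 85% of a group has an Index of 50 + |50 − 85| = 85 (agreement that it is essential), whereas a question rated essential by exactly 50% has the minimum Index of 50 (maximum disagreement). A minimal sketch of the calculation and of the 67% threshold classification follows; it is not part of the original analysis, and the question labels and percentages are hypothetical.

```python
def index_of_agreement(pct_essential):
    """Index of Agreement = 50 + |50 - % essential|.

    Ranges from 50 (maximum disagreement: half the group rates the
    question essential, half does not) to 100 (unanimous agreement).
    """
    return 50 + abs(50 - pct_essential)

THRESHOLD = 67  # arbitrarily chosen cut-off for 'enough' agreement

# Hypothetical percentages of 'essential' responses for three questions.
questions = {
    "Will the treatment make me impotent?": 85,
    "Why no longer survival with early diagnosis?": 50,
    "Question with little support": 10,
}

for question, pct in questions.items():
    idx = index_of_agreement(pct)
    if idx >= THRESHOLD:
        status = "agree: essential" if pct >= 50 else "agree: not essential"
    else:
        status = "no agreement"
    print(f"{question}: {pct}% essential -> Index {idx} ({status})")
```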
Figure 1 The Indices of Agreement of the oncologists (Fig. 1a) and of the urologists (Fig. 1b) show the extent of agreement on each question within each group. The Index reflects the percentage of the group that agrees with the majority opinion on whether the question is essential to address or not. Given a threshold Index of 67%, the black bars identify questions on which there is agreement (either as essential or as not essential) and the grey bars identify questions on which there is not enough agreement.
The disagreement extended to which questions should be avoided: most doctor respondents indicated that few, if any, questions should be avoided (median of 2 for oncologists, 0 for urologists), but others thought that as many as 24 should be avoided. Most important, each group disagreed on whether or not it is essential to address approximately one‐half of the questions that we asked about.
Our demonstration of widespread variation in health care professionals’ attitudes toward specific questions indicates that they disagree on which issues should be discussed with patients before their treatment decisions are made. We have argued that such variation among doctors’ opinions should be resolved (or circumvented) because, to the extent that doctors drive the process of informing patients, the variation means that some patients are at risk of not getting information relevant to their decision. 5 The same is true of the variation in attitudes among the nurses and the radiation technologists.
Implication
The major implication of such widespread variation in information priorities among health care professionals is that a systematic informing process, such as that included in a decision aid, is required. The variation in our results does not mean that an individual professional would necessarily object to their patients receiving information that they, themselves, did not consider essential. Because doctors are ultimately responsible for informed consent, 6 decision aid developers are advised to ensure that the doctors of the targeted patients do not object to the information presented in the aid. When an objection does occur, the information can be removed from the aid and patients can be directed to alternative sources of information. 7
Issue 2: Patients vary in what information they consider important to the decision
Following the survey of professionals, we also surveyed patients recently diagnosed with early stage prostate cancer to gather their judgements on the importance of addressing each question. In our survey of professionals, we had asked respondents to add any questions that we had overlooked, and their suggestions were incorporated into an updated list for the patients. The updated list included 93 questions. We also asked the patients to identify why they wanted questions addressed, in order to gain some insight into how they might want to use the information provided. 8
Responses were received from 38 (68%) of the 56 patients surveyed. The results suggest widespread variation among the patients, both in the number of questions that they considered essential to address and in exactly which questions those were. The Index of Agreement for each of the 93 questions is shown in Fig. 2a. Again, using an Index threshold of 67% to identify when there was ‘enough’ agreement, the figure shows that there were 23 questions that patients agreed were essential to address and 12 that they agreed were not essential to address, but they were divided on whether it was essential to address each of the remaining 58 questions.
Figure 2 Figure 2a shows the Index of Agreement for each question as judged by patients. Given a threshold Index of 67%, the black bars identify questions on which there is agreement (either as essential or as not essential) and the grey bars identify questions on which there is not enough agreement. Figure 2b shows the number of questions that each patient thought were essential to address and, of those, the number that were essential for the purpose of making the treatment decision.
The results also suggest that, for most patients, only a portion of the questions that they think should be addressed would be used to help make the treatment decision. Figure 2b shows the number of questions that each respondent thought were essential for any reason, and of those, the number that were essential for making the decision. As the figure shows, there was wide variation in that number. Although a larger study would be required to determine, with greater confidence, which items are most often considered essential, this study was large enough to demonstrate substantial heterogeneity among patients, both in the items essential for any reason and in those essential for decision‐making.
Implication
Such widespread variation in attitudes among patients means that, as decision aid developers, we needed to build flexibility into our information provision in order to accommodate the variation. For example, in our decision aid, we routinely provide the information identified by at least 67% of our survey respondents as being necessary for their decision. During the interview, however, the patient is encouraged to identify issues important to his decision that are not included in the routine presentation. An initial test of the strategy with surrogate decision‐makers found that 49% (of 69 study participants) added items that were then addressed within the context of the other information provided. 9 Because treatment decisions are not actually made during the interview (the goal is to help the patient become clearer about his options and ultimate choice for the decision‐making consultation with the doctor), patients who want more information than can be provided in the interview can be directed to other sources.
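As an illustration of the selection logic that this strategy implies (a minimal sketch only, not the aid's actual implementation; the item names and percentages are invented), the routinely presented set can be assembled from the survey results and then augmented with whatever the individual patient raises during the interview:

```python
# Invented survey results: percentage of patients who said each item was
# necessary for their treatment decision.
survey_pct_necessary = {
    "effect on bladder control": 74,
    "effect on sexual function": 80,
    "chance of cure with each option": 90,
    "effect on ability to work": 40,
}

ROUTINE_THRESHOLD = 67  # items endorsed by at least 67% are always presented

def items_to_present(survey_pct, patient_additions):
    """Routine items (>= threshold) plus any items this patient asks to add."""
    routine = [item for item, pct in survey_pct.items() if pct >= ROUTINE_THRESHOLD]
    extras = [item for item in patient_additions if item not in routine]
    return routine + extras

# A patient adds an issue that is not part of the routine presentation.
print(items_to_present(survey_pct_necessary, ["effect on ability to work"]))
```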
Issue 3: Information within a typical category is not all equally important
People have a limited capacity to process information at any one time; 10 thus, there is a need to focus on the subset of information that is most immediately relevant. Focusing is desirable both to reduce the effort required to make an informed decision and to limit the likelihood that the patient’s processing capacity will be further reduced by the stress of being overwhelmed by the amount of information. Thus, many studies have tried to identify the subset of information that patients feel they need.
Attempts to identify the information needs of patients often focus on categories of information. The results of our patient survey discussed earlier 8 suggest that we should not assume that all information within any one category is necessarily equally important. For example, ‘treatment side‐effects’ is a category often deemed relevant to many patients facing a treatment decision, but the salience of each item within the category has important implications for decision support. Both bladder and bowel control, for instance, would fall within this category. In our patient survey, however, 74% of respondents thought that the question ‘Will the treatment affect my bladder control?’ was essential to address (for any reason), but only 39% thought the question ‘Will the treatment cause diarrhea?’ was essential. Focusing more specifically on making a treatment decision, 26% of all respondents wanted the bladder question addressed because they thought it would affect their decision, while only 11% thought the same of the diarrhea question.
Implication
The difference in attitudes toward details that are included in a typical category means that, as decision aid developers, we needed to determine the specific details that are important to the decision rather than focusing on broad categories of information.
Issue 4: The information that patients deem important to their decision depends on whether they are rating information or unanswered questions
In trying to decide what information is important to the treatment decision, we were concerned that patients identifying important questions were, in fact, choosing them on the basis of what they expected the answers to be. If that was so, and if the answers that they were considering were wrong in some respect, we would be misled about what items are important to the decision. Thus, we tested the concern directly. 11 We selected all the questions from our earlier patient survey that were categorized by at least 10% of our patient respondents as essential for their decisions. Four local physicians (radiation oncologists and urologists) agreed on answers to the questions. We then surveyed a second group of patients recently diagnosed with early stage prostate cancer for items they considered necessary for their decision; participants were randomly assigned such that half identified the questions that they considered necessary and half identified the answers (information).
We received 54 (78%) responses, 30 in the question‐format group and 24 in the answer‐format group. Figure 3 shows the percentage of respondents that deemed an item necessary for the decision as a question as compared to the percentage that deemed it necessary as an answer. Although the percentages were correlated (r(57) = 0.69, P < 0.001), questions were rated ‘necessary’ significantly more often than answers (Kruskal–Wallis chi square = 4.14, P < 0.05).
Figure 3 The percentage of respondents deeming each item necessary for the decision when the items were presented as answers (information), plotted against the percentage deeming the item necessary when the items were presented as unanswered questions.
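A minimal sketch of this kind of comparison is shown below; it uses generic SciPy routines rather than the original analysis code, and the percentages are placeholders rather than the study data. For each item, the percentage of respondents deeming it necessary under the question format is correlated with the percentage under the answer format, and the two sets of percentages are compared with a Kruskal–Wallis test.

```python
import numpy as np
from scipy.stats import pearsonr, kruskal

# Placeholder data: for each item, the percentage of respondents who deemed it
# necessary for the decision under each presentation format.
pct_question_format = np.array([60, 45, 80, 30, 55, 70, 25, 65])
pct_answer_format = np.array([50, 30, 75, 20, 40, 60, 15, 55])

# Are the two sets of percentages related across items?
r, p_corr = pearsonr(pct_question_format, pct_answer_format)

# Are items endorsed more often in one format than in the other?
h, p_kw = kruskal(pct_question_format, pct_answer_format)

print(f"Pearson r = {r:.2f} (P = {p_corr:.3f})")
print(f"Kruskal-Wallis chi square = {h:.2f} (P = {p_kw:.3f})")
```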
Implication
The differences between the two methods in the information that patients identified as necessary for their decision suggest that some responses in the question format may have been based on misconceptions. Such misconceptions mean that decision aid developers need to find out which facts affect decisions, rather than which questions patients think are important. The possibility that patients hold misconceptions also led us to decide, in our prostate cancer decision aid, to present all of the information that is routinely provided (see Issue 2) and then ask the patient if he wants further items added.
Issue 5: The information required is not always of good quality or even in the literature at all
Our attempts to determine the latest information available in the literature relating to early stage prostate cancer highlighted the problem of the quality of the available evidence. A system for stratifying the reliability of different types of clinical evidence has been proposed by Sackett. 12 , 13 The best type of evidence according to the system, commonly referred to as Level I evidence, comes from randomized trials with low false positive and/or low false negative errors. The least reliable evidence, Level V evidence, comes from case series without controls.
In the setting of early stage prostate cancer, there are no large randomized trials comparing any of the treatment options. There are several large case series, which limits the best available evidence for many of the treatment and disease‐related outcomes to Level V quality. Because people learn new information most effectively when the information is organized with the relationship between different items evident, 14 we had to assimilate evidence from very disparate studies in order to produce a coherent picture of the potential risks and benefits of treatments for early stage prostate cancer. For issues like ‘the number of patients like me who have chosen each of the treatments’, an issue identified by 61% of our patient respondents as being necessary for their decision, there is no evidence available at all.
Implication
The state of the evidence requires that, as developers of decision aids, we be clear about the quality of the evidence, search for the best evidence available and promote research to generate Level I evidence where appropriate. When such evidence does not exist, the best that we can do is acknowledge to the patient that the information is not available and, when the evidence is conflicting, disclose the resulting uncertainty to the patient. 15
Issue 6: Presentation formats differ in their ease and accuracy of processing
After deciding what information should be included in our decision aid, we wanted to present it in a manner that would make it easy for patients to process with a high degree of accuracy. Because people often have difficulty understanding quantitative information, we chose to study presentation formats for quantitative information specifically. 16
In a series of four experiments, we compared the accuracy and efficiency of perceiving six different formats that had been used to present quantitative information to patients: pie charts, vertical bars, horizontal bars, numbers, systematic ovals (a 10 × 10 matrix of ovals, coloured in systematically from a bottom corner) and random ovals (the same 10 × 10 matrix of ovals coloured in at random). The six formats are shown in Fig. 4. On each trial, the participant was presented with two quantities in one of the formats, side by side on a computer screen, and was asked to identify either the larger or the smaller and to indicate the response by pressing one of two response keys. We recorded response time as well as whether the response was correct. The participant also estimated the absolute difference between the two quantities. Each participant completed a series of trials in each of the six formats.
Figure 4 The six formats compared in a test of perception that focused both on accuracy and on speed of processing. Each format shows a quantity of 25 out of 100.
Across the series of experiments, a total of 36 nonpatients and 96 patients with various cancers participated. The results, consistent across both participant groups, suggest that pie charts and random ovals produce the least accurate perception when deciding which quantity is larger or smaller, and take the most processing capacity. For estimating the difference between the quantities, however, numbers resulted in the most accurate estimates, followed by systematic ovals.
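To illustrate how trial‐level records of the kind collected in these experiments (format, correctness of the larger/smaller judgement, response time, and estimated versus true difference) could be summarized per format, the following sketch uses invented records rather than the study data:

```python
from collections import defaultdict
from statistics import mean

# Invented trial records: (format, correct, response_time_ms, estimated_diff, true_diff)
trials = [
    ("pie chart", False, 1450, 30, 20),
    ("pie chart", True, 1300, 25, 20),
    ("numbers", True, 700, 20, 20),
    ("numbers", True, 650, 18, 20),
    ("systematic ovals", True, 900, 21, 20),
    ("random ovals", False, 1600, 35, 20),
]

by_format = defaultdict(list)
for fmt, correct, rt, estimated, true in trials:
    by_format[fmt].append((correct, rt, abs(estimated - true)))

for fmt, records in by_format.items():
    accuracy = mean(c for c, _, _ in records)    # proportion of correct judgements
    mean_rt = mean(rt for _, rt, _ in records)   # speed of processing
    est_error = mean(e for _, _, e in records)   # accuracy of the difference estimates
    print(f"{fmt}: accuracy={accuracy:.2f}, mean RT={mean_rt:.0f} ms, "
          f"mean estimation error={est_error:.1f}")
```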
Implication
The differences in accuracy and efficiency caused by the different formats mean that, as developers of decision aids, we need to determine the format for presenting information that is processed as easily and as accurately as possible.
Conclusion
We decided that one way to facilitate shared decision‐making with men who have early stage prostate cancer was to create a decision aid designed to inform them and to help them identify what is important to their decisions. Our experience developing the decision aid raised several issues. Although we focused on early stage prostate cancer, the issues are likely applicable to many other settings, both within and beyond cancer. In describing our experience, we have made no attempt to review the other relevant literature; interested readers can refer to the full reports of our studies for some of that insight 4 , 5 , 8 , 16 and to relevant systematic reviews. 7 , 17 The implications of the issues that we encountered are that we need to:
1 ensure that the information we present is acceptable to the physicians of the patients we are informing.
2 build in flexibility in our information provision to accommodate patient variation in information needs.
3 determine which specific details are important by identifying what information affects patients’ decisions.
4 ensure that our methods for determining what information to provide in the aid do not require patients to guess at the content.
5 determine how to present information so that it is processed by the patient as easily and as accurately as possible.
Acknowledgements
We are grateful for the helpful comments of the anonymous reviewers. We would also like to acknowledge support through operating grants from the National Cancer Institute of Canada and the Canadian Cancer Society, and through a Career Scientist Award from the Ministry of Health, Canada.
References
- 1. Fleming C, Wasson JH, Albertsen PC, Barry MJ, Wennberg JE. A decision analysis of alternative treatment strategies for clinically localized prostate cancer. Journal of the American Medical Association, 1993; 269: 2650–2658.
- 2. Davison J, Degner LF, Morgan TR. Information and decision‐making preferences of men with prostate cancer. Oncology Nursing Forum, 1995; 22: 1401–1408.
- 3. Shanteau J, Dino GA. Environmental stressor effects on creativity and decision making. In: Svenson O, Maule AJ (eds) Time Pressure and Stress in Human Judgement and Decision Making. New York: Plenum Press, 1993: 293–308.
- 4. Feldman‐Stewart D, Brundage MD, Hayter C, Davidson JR, Groome P, Nickel JC. What the prostate patient should know: variation in urologists’ opinions. Canadian Journal of Urology, 1997; 4: 438–444.
- 5. Feldman‐Stewart D, Brundage MD, Hayter C et al. What prostate cancer patients should know: variation in professionals’ opinions. Radiotherapy and Oncology, 1998; 49: 111–123.
- 6. Ontario Legislature. Bill 109. Statutes of Ontario, Chapter 13, 1992.
- 7. Entwistle V, Watt I, Davis H, Dickson R, Pickard D, Rosser J. Developing information materials to present the findings of technology assessments to consumers: the experience of the NHS Centre for Reviews and Dissemination. International Journal of Technology Assessment in Health Care, 1998; 14: 47–70.
- 8. Feldman‐Stewart D, Brundage MD, Hayter C et al. What questions do patients with curable prostate cancer want answered? Medical Decision Making, 2000; in press.
- 9. Feldman‐Stewart D, Brundage M, Van Manen L, Hayter CR, Nickel JC, Mackillop W. Surrogate patients test a decision aid for men with early‐stage prostate cancer (Abstract). Clinical and Investigative Medicine, 1999; 22: 298.
- 10. Norman D. Memory and Attention: an Introduction to Human Information Processing. New York: John Wiley & Sons, 1976.
- 11. Feldman‐Stewart D, Brundage MD, McConnell B, Cosby R, Hayter CR, Mackillop WJ. What information do patients need to participate in treatment decisions? It depends on how you ask (Abstract). Medical Decision Making, 1998; 18: 474.
- 12. Sackett DL. Rules of evidence and clinical recommendations on the use of antithrombotic agents. Chest, 1989; 95: 2S–4S.
- 13. Sackett DL. Rules of evidence and clinical recommendations. Canadian Journal of Cardiology, 1993; 9: 487–489.
- 14. Mayer RE. Can you repeat that? Qualitative effects of repetition and advance organizers on learning from science prose. Journal of Educational Psychology, 1983; 75: 40–49.
- 15. Coulter A, Entwistle V, Gilbert D. Education and debate. British Medical Journal, 1999; 318: 318–322.
- 16. Feldman‐Stewart D, McConnell BA, Kocovski N, Brundage MD, Mackillop WJ. Perception of quantitative information for treatment decisions. Medical Decision Making, 2000; in press.
- 17. O’Connor AM, Fiset V, De Grasse C et al. Decision aids for patients considering options affecting cancer outcomes: evidence of efficacy and policy implications. Journal of the National Cancer Institute, 2000; in press.