Medical Decision Making. 2018 Mar 29;38(6):746–755. doi: 10.1177/0272989X18765185

Individual Value Clarification Methods Based on Conjoint Analysis: A Systematic Review of Common Practice in Task Design, Statistical Analysis, and Presentation of Results

Marieke GM Weernink 1,, Janine A van Til 2, Holly O Witteman 3,4, Liana Fraenkel 5, Maarten J IJzerman 6
PMCID: PMC6587358  PMID: 29592585

Abstract

Background. Value clarification exercises are increasingly used in decision aids that aim to improve shared decision making. Our objective was to systematically review the extent to which conjoint analysis (CA) is used to elicit individual preferences for clinical decision support. We aimed to identify common practices in the selection of attributes and levels, the design of choice tasks, and the instrument used to clarify values. Methods. We searched Scopus, PubMed, PsycINFO, and Web of Science to identify studies that developed a CA exercise to elicit individual patients’ preferences related to medical decisions. We extracted data on the above-mentioned items. Results. Eight studies were identified. Studies included a fixed set of 4–8 attributes, predetermined by interviews, focus groups, or literature review. All studies used adaptive conjoint analysis (ACA) for their choice task design. Furthermore, all studies provided patients with their preference results in real time, although the type of outcome presented to patients differed (attribute importance or treatment scores). Across studies, patients were positive about the ACA exercise, whereas the time and effort required from clinicians to facilitate the ACA exercise were identified as the main barriers to implementation. Discussion. There is only limited published use of CA exercises in shared decision making. Most studies resembled each other in the design choices made, but patients received different feedback across studies. Further research should focus on the feedback patients want to receive and how the CA results fit within the patient–physician dialogue.

Keywords: conjoint analysis, adaptive conjoint analysis, clinical decision making, shared decision making, values clarification methods, values clarification exercises

Introduction

In shared decision making, patient decision aids are often used to support patients’ understanding of the process of care and the subsequent evidence-based outcomes.1 Values clarification methods are used in decision aids to help patients evaluate the desirability of attributes or options, with the aim that the choice of treatment reflects personal preferences and values.2 A review by Witteman et al. (2016) showed that value clarification methods such as rating scales or listing pros and cons are the most commonly used to elicit individual preferences. Only 38% of the articles described a value clarification method explicitly or implicitly based on any theory, framework, model, or mechanism.3

Based on its theoretical axioms,4 conjoint analysis (CA) may be an effective value clarification method. CA has a long history in marketing and has gained widespread use as a tool to elicit patient preferences for health care services.5,6 CA allows patients to think through complex treatment decisions by letting them evaluate scenarios in rating, ranking, or choice tasks, enabling a mathematical model to algorithmically derive the relative value of treatment characteristics and estimate preferences for the available treatment options.7,8 CA is, however, mostly known for the elicitation of preferences at the population level to support organizational and regulatory decision making.9 As Kaltoft et al. (2015) argued, the results of such studies have limited clinical relevance to individual patients in decision making.10 What is important to one patient may not be what is important to others.

For individual patients to benefit from value elicitation exercises, CA needs to generate part-worth utilities at the individual patient level. In practice, there is considerable controversy regarding the “best” experimental design for eliciting individual preferences with CA. Adhering to criteria such as level balance and orthogonality may lead to a large number of questions, whereas violating these criteria may compromise the reliability of the estimated preferences. Other issues on which we seek guidance are whether attributes were selected at the individual level, which statistical analyses were used to estimate individual part-worth utilities, and how and in what format the outcomes were presented to patients.

Therefore, the aim of this study is to systematically review the extent to which CA is used to elicit individual preferences for clinical decision support. Second, we aim to learn from previous research by identifying the common practices in the selection of attributes and levels, the design of choice tasks, and the instrument used to clarify values. Finally, we present how the use of CA in clinical decision support was evaluated in these studies.

Methods

Search Strategy and Screening Process

We conducted a systematic literature search in Scopus, PubMed, PsycINFO, and Web of Science in May 2016 to identify studies describing the use of CA to elicit individual preferences for clinical decision support. No publication date restrictions were imposed. Specific exclusion criteria were 1) use of another type of stated preference method and 2) non-English-language literature. Gray literature was searched using online search engines (Google Scholar) and in conference proceedings (Society for Medical Decision Making and International Society for Pharmacoeconomics and Outcomes Research). Studies were included if they contained sufficient information for data extraction on the relevant criteria detailed below. Reference lists of included articles were reviewed for additional relevant studies. Supplementary Appendix 1 contains a full description of the search strategy. Next, 2 authors (MW and JT) independently screened all articles. Primary exclusion was based on title, abstract, and keywords. Subsequently, potentially relevant articles were reviewed in full. Discrepancies were resolved by discussion between MW and JT until consensus was reached.

Extraction of Relevant Data

For each included study, 2 independent researchers (MW and JT) systematically extracted data using a standardized abstraction form. The extracted data included 5 categories of information: 1) general study information, 2) selection of attributes and levels, 3) choice task design, 4) instrument design, and 5) study outcomes and evaluation.

  1. We derived general information from the included studies, such as each study’s aim, type, and clinical condition.

  2. We reported design decisions regarding the selection of attributes and levels: methods used, number of attributes or levels, possibility for the patient to add extra attributes, and the type of attributes selected, such as health outcomes, nonhealth outcomes (e.g., autonomy), and process attributes (e.g., treatment modality).11,12

  3. Next, choice task design elements were derived from the studies. Depending on the CA method used, some of these components apply more to one study than another: type of choice questions (rate, rank, or choice), experimental design, and number of choice tasks.

  4. We then described the instrument used to clarify values: the medium and setting used, the preference estimation procedure (analytical method, or use of software packages), and whether and how outcomes were presented to patients (form and setting).

  5. Lastly, we examined the study outcomes and the evaluation of the CA exercise: we reported positive and negative outcomes with respect to the difficulty of the CA exercise, the time to complete it, and the reliability of its outcomes; behavioral outcomes (actions and adherence); and affective and cognitive outcomes (knowledge, decisional conflict, and satisfaction).

Included studies had to present sufficient data on criteria 1–3, but not necessarily on 4 and 5 (e.g., clinical trial protocols). We derived information about design choices from the main text, the appendix, or the decision tool itself if it was accessible on a Web site. If information about the design characteristics was lacking or insufficiently described, we asked the authors to send a copy of the tool used. Descriptive summaries and graphs were used for all categories except point 5, for which narrative synthesis methods were used. Results are reported in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement and checklist.13

Results

Search Results

In total, 1057 articles were screened; 994 articles were excluded by title and abstract review. The remaining 63 articles were subjected to full-text review. This led to exclusion of an additional 55 articles, mainly because the focus of the article was elicitation of preferences at the population level. In total, 8 articles described the use of CA to support preference elicitation in clinical decisions in individual patients (Figure 1).

Figure 1. Selection of studies.

Characteristics of Included Studies

The included articles discussed the application of CA in curative treatment decisions (e.g., breast cancer and prostate cancer) and in treatment decisions for chronic conditions (e.g., osteoporosis). The focus of all studies was to examine the use of CA to support preference elicitation in a clinical setting. One article reported only a clinical trial protocol, not results.14 More specific research questions were to compare the additional value of an interactive CA exercise with that of a printed booklet and a video booklet in reducing decisional conflict15 and to examine whether the mode of administration (Web-based or local computer) influenced importance scores (Pieterse et al., 2010).16 We concluded that the studies of Fraenkel (2010)17 and Rochon et al. (2012)18 described the development of the same value clarification exercise; they are therefore discussed as one study. Supplementary Appendix 2 shows the complete evidence matrix for all 5 categories of extracted data.

Selection of Attributes and Levels

In our review, we found a range of practices to identify attributes, including literature review,16,19 consulting clinicians,15 and more in-depth qualitative research techniques such as interviews and/or focus groups with patients and/or clinicians.14,20,21 In all studies, the final selection of attributes was made by the researchers or through a facilitated discussion with the project team. Only one study specifically stated that patients had to rank and rate the attributes, from which the top attributes were determined.21 None of the studies gave individual patients the possibility to add (or remove) attributes prior to the actual start of the exercise.

Each study included attributes concerning health outcomes such as benefits, harms, risks, or adverse events (Figure 2). All studies except 1 also included a measure of the process of care, for example, treatment modality,19 costs,17 or days in hospital.15 The studies selected between 4 and 8 attributes for inclusion in the CA exercise. Attribute levels reflect the range of actual variation within attributes. Across all studies, 7 of 34 attributes were 2-level attributes, 18 were 3-level attributes, and 9 were 4-level attributes. Three studies used the same number of levels for all attributes,15,18,21 2 studies used a combination of 2- and 3-level attributes,16,20 and for 2 studies the number of levels was unknown.

Figure 2. Type of attributes selected in the studies.

Three articles explicitly reported that pilot tests were conducted to test the feasibility of the exercise. Patient understanding was enhanced by using frequencies (in graphs) to describe risks (e.g., 1 in 10 or 1 in 1000)15,16,18,20 or by visual pictographs of attributes or icon arrays.19,20

Choice Task Design

All included studies used adaptive conjoint analysis (ACA). ACA is an interactive, computerized form of CA that enables patients to construct their preferences through self-explicated rating and ranking tasks followed by pairwise comparisons. ACA asks several sets of questions. In the first set, patients rank the levels of those attributes that do not have a natural ordering; in the reviewed studies, these were primarily process attributes. In the second set, patients rate, on a Likert-type scale, the importance of moving from the most preferred to the least preferred level of each attribute. Three studies, however, used a modified (simplified) version of ACA developed by Fraenkel (2010): patients first choose the attribute that is most important to them and then rate the remaining attributes relative to the one indicated as most important.17

Both the original and the modified ACA are followed by a series of pairwise comparisons in which patients use a 9-point Likert-type scale to indicate to what extent one scenario is preferred over the other. ACA is interactive: after each pairwise comparison, the utility estimates are updated through regression analysis, and a new pair of scenarios estimated to be of equal utility is selected. The number of paired comparisons in the ACA studies ranged from 12 to 18 and is determined prior to the preference elicitation (when using Sawtooth Software). The formula used to determine the minimum number of paired comparisons is 3 × (N − n − 1) − N, where N is the total number of levels across all attributes and n is the number of attributes.16 The number of attributes shown in each subset can also be chosen by the investigator. Pieterse et al. (2010) included 2 of the 4 attributes in pairwise comparisons 1–4, 3 attributes in pairs 6–10, and all attributes only in the final pairs 11 and 12.
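
As an illustration of this formula, the following minimal sketch (in Python, with a hypothetical attribute configuration that is not taken from any of the included studies) computes the minimum number of paired comparisons:

```python
def min_paired_comparisons(levels_per_attribute):
    """Minimum number of ACA paired comparisons, 3 * (N - n - 1) - N,
    where N is the total number of levels across all attributes and
    n is the number of attributes (formula as reported in the review)."""
    n = len(levels_per_attribute)   # number of attributes
    N = sum(levels_per_attribute)   # total number of levels
    return 3 * (N - n - 1) - N

# Hypothetical design: 5 attributes with 3 levels each -> N = 15, n = 5
print(min_paired_comparisons([3, 3, 3, 3, 3]))  # 3 * (15 - 5 - 1) - 15 = 12
```

With 5 three-level attributes, the formula yields 12 pairs, at the lower end of the 12–18 range observed in the reviewed studies.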

ACA allows patients to handle a large number of attributes: the highest number found in this review was 8 attributes, each with 4 levels.15 ACA may simplify the task by presenting fewer attributes in each pairwise comparison than traditional choice-based CA, and it narrows the comparisons down to the attributes that are most important to each patient (a tailored design). Whether ACA also reduces cognitive burden, however, should be tested empirically in studies comparing ACA with choice-based CA.
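
The adaptive pair selection described above can be sketched conceptually as follows. This is a simplified illustration of the principle, not Sawtooth Software’s actual algorithm, and the attributes, levels, and part-worth values are hypothetical:

```python
import itertools

# Hypothetical part-worth estimates after some pairwise comparisons
# (reference levels fixed at 0).
partworths = {
    "efficacy": {"low": 0.0, "high": 1.2},
    "side effects": {"severe": 0.0, "mild": 0.8},
}

def utility(profile):
    """Estimated utility of a scenario = sum of its levels' part-worths."""
    return sum(partworths[attr][level] for attr, level in profile.items())

def next_pair(candidates):
    """Pick the two candidate scenarios whose current utility estimates are
    closest, so the next comparison forces a genuine trade-off."""
    return min(
        itertools.combinations(candidates, 2),
        key=lambda pair: abs(utility(pair[0]) - utility(pair[1])),
    )

candidates = [
    {"efficacy": "high", "side effects": "severe"},  # utility 1.2
    {"efficacy": "low", "side effects": "mild"},     # utility 0.8
    {"efficacy": "high", "side effects": "mild"},    # utility 2.0
]
print(next_pair(candidates))  # the first two scenarios: closest in utility
```

Presenting pairs of near-equal estimated utility is what makes each subsequent comparison informative: the patient cannot answer without trading one attribute off against another.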

Instrument Design

Except for 1 study,21 all studies used Sawtooth Software to collect and analyze the data. All ACA studies were conducted via a computer, because the pairwise comparisons had to be adapted to the patient’s previous answers. In 4 studies, it was explicitly specified that the tool was used by patients prior to the consultation or prior to making their final treatment decision.14,18,19,21 Four studies used computer-assisted personal interviewing (CAPI)-based interviews (computers not connected to the Internet) to elicit patient preferences; these patients had to visit the hospital or the research facility to complete the exercise. In 1 case, patients received a prior oral explanation,20 and in another, a brief training session.15 In a few cases, a research assistant was present during the exercise to answer procedural questions.15,19,20 Three ACA studies were Web-based, and patients received a link to a Web site containing the ACA21 or a direct link to the exercise.14,16 Patients could choose to complete the exercise at home or in the clinic.14,21 Pieterse et al. (2010) specifically compared CAPI-based and Internet-based ACA exercises and found that the mode of administration did not influence importance scores.16

Based on the self-explicated rating and ranking tasks and the pairwise comparisons, a final set of part-worth utilities is estimated using ordinary least-squares regression analysis. All studies provided patients with the output in real time, immediately after the exercise was completed. Three slightly different formats for feedback on attribute importance were found (Figure 3; reconstructions based on the information in the articles). Figures 3A and 3B are comparable in format: a longer bar indicates higher importance of an attribute; the difference lies in the description and the title of the x-axis. Figure 3C displays the format used in the study of Pieterse et al. (2010), which is not a standard format in the commercial software but the result of custom programming with Sawtooth Software.16 In addition to presenting attribute importance, 3 studies converted the part-worth utilities into overall preference scores for specific treatment options using Sawtooth Software Market Research Tools (SMRT).19 Based on the assumption that patients prefer the option that provides them with the highest utility, these studies made full use of the capability of CA exercises to predict shares of preference. The studies used different formats to present the outcomes. For instance, in the study of Hawley et al. (2015), patients received a statement regarding which treatment would best fit their values (Figure 4A).21 In the other 2 studies (Figures 4B and 4C), a vertical bar displayed the relative ranking of options against the worst and best (hypothetical) treatment options; only the layout and format of the vertical bar differ slightly.

Figure 3. Result presentation to patients: attribute importance.

Figure 4. Result presentation to patients: overall preference scores.

In the studies of de Achaval et al. (2012) and Fraenkel et al. (2007), a research assistant explicitly explained the results to patients. In addition, 4 studies explicitly mentioned that patients received a handout or a printed sheet with their results to discuss them with their health care provider.14,15,19,21
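
To make the pipeline from responses to feedback concrete, here is a minimal sketch of the estimation and feedback steps described above. It is not the Sawtooth Software routine: the dummy coding, ratings, attribute names, and treatment profile are all hypothetical, and a real ACA exercise also incorporates the self-explicated priors:

```python
import numpy as np

# Hypothetical example: 2 attributes, each dummy-coded against a reference
# level ("efficacy: high" vs. low; "side effects: mild" vs. severe).
# Each row codes one pairwise comparison as the difference between the
# left and right scenario; y is the centered 9-point preference rating
# (> 0 means the left scenario was preferred).
X = np.array([
    [ 1.0, -1.0],  # left: high efficacy, severe side effects; right: low, mild
    [ 1.0,  1.0],  # left: high efficacy, mild side effects; right: low, severe
    [-1.0,  1.0],  # left: low efficacy, mild side effects; right: high, severe
    [ 1.0,  0.0],  # both scenarios share the same side-effect level
])
y = np.array([2.0, 4.0, -1.0, 3.0])

# Part-worth utilities via ordinary least squares (reference levels fixed at 0).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
partworth = {"efficacy: high": beta[0], "side effects: mild": beta[1]}

# Attribute importance = an attribute's part-worth range as a share of the
# summed ranges; with one non-reference level per attribute, the range is
# simply |beta|. This mirrors the bar-chart feedback of Figure 3.
ranges = {name: abs(w) for name, w in partworth.items()}
importance = {name: 100 * r / sum(ranges.values()) for name, r in ranges.items()}

# Overall score of a treatment option = sum of the part-worths of its levels,
# the basis for the option rankings of Figure 4.
score_option_a = partworth["efficacy: high"] + partworth["side effects: mild"]

print(importance)
print(round(float(score_option_a), 2))
```

The importance shares correspond to the attribute-importance feedback of Figure 3, and the summed treatment score to the option rankings of Figure 4.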

Evaluation of ACA in Individual Decision Support

Five studies used self-administered questionnaires or surveys to evaluate the ACA exercises, whereas the studies of Rochon et al. (2012) and Abraham et al. (2015) used qualitative in-depth interviews and/or focus groups. Across studies, most patients reported that the ACA exercise was easy to do, clear, and interesting.16,19 The study of Pieterse et al. also showed that as educational level increases, the median score on self-reported difficulty of the task (measured on 5-point Likert-type scales) decreases significantly (P = 0.006).16 Furthermore, Rochon et al. reported that patients’ unfamiliarity with computers was the main reason for encountering problems with the ACA exercise (focus group results).18 Some patients also thought that they were being manipulated, because they assumed they had answered the same question more than once.18 In 2 studies, patients commented on the preselection of attributes and on the lack of a possibility to add or exclude attributes or treatment options.18,21 Completion time was not consistently specified and reported across studies. For example, Abraham et al. (2015) reported a 45- to 60-min face-to-face interview, whereas de Achaval et al. (2012) and Pieterse et al. (2010) reported mean durations of the ACA exercise of 15 and 16 min, respectively (A.H. Pieterse, personal communication, 2017). Moreover, completion time also depends on the number of attributes and levels included in the ACA.

With regard to the reliability of preferences, Pieterse et al. (2010) asked patients to complete a retest after 7–10 days and found that preferences were unstable in one-third of the sample. Hawley et al. (2015) found a concordance of 96% between predicted and revealed preferences (the treatment actually received or planned). Rochon et al. (2012) found that 66% of patients agreed with the prediction of their preferred treatment, and in the study of Fraenkel et al. (2007), 68% thought the feedback “very much” reflected their values. Pieterse et al. (2010) reported that patients were satisfied with the feedback received on attribute importance.

In general, patients were positive about the usefulness of the results. Pieterse et al. (2010) reported that most patients would discuss the results with their doctor and that 62% thought the ACA exercise would be helpful in deciding about treatment.16 Fraenkel et al. (2007) and Abraham et al. (2015) also reported increased patient activation and increased patient engagement with clinicians after use of the ACA exercise.19,20 In the study of Fraenkel et al. (2007), the ACA group reported greater self-confidence, self-efficacy, and preparedness for decision making than the pamphlet-only control group (all statistically significant).19 Hawley et al. (2015) found higher scores on several knowledge and decision appraisal statements for the ACA group, but only the score for the statement “I feel like I’ve made an informed choice” was significantly higher than in the control group.21 Abraham et al. (2015) studied the effect of the ACA exercise on adherence and found that patients who used the ACA exercise and were prescribed a therapy concordant with their preferences showed a 15% increase in adherence to their antithrombotic therapy.20 Lastly, de Achaval et al. (2012) showed that a video booklet decision aid produced the largest reduction in decisional conflict compared with a group receiving an education booklet and a group receiving the video booklet plus the ACA exercise.

Discussion

The aim of this study was to systematically review the extent to which CA is used to elicit individual preferences for clinical decision support. We conclude that there is limited published use of CA exercises in shared decision making. Only 8 ACA studies met our inclusion criteria. Second, we have found that most studies resembled each other in the choices made regarding the design of the choice task and instrument. The principal findings will be discussed in light of future use of CA in clinical decision support.

Selection of Attributes and Levels

Each study included between 4 and 8 attributes and struck a balance between benefits (efficacy) and harms or process characteristics. All studies worked from a fixed set of attributes determined by focus groups, interviews, literature reviews, or expert consultations. None of the included studies allowed individual patients to determine the relevant attributes prior to the start of the ACA exercise (either from a predefined set or by proposing attributes themselves). Predetermining attributes and levels might omit characteristics of treatment that are important to individual patients. CA exercises cannot, however, be easily adapted to include additional attributes for each patient at the time of decision making.

Choice Task Design

In this review, all included studies used ACA as their preference elicitation method. Patients received between 12 and 18 paired comparisons, a number determined prior to the preference elicitation (based on the total number of attributes and levels). Other CA methods, such as discrete choice experiments and full-profile conjoint analysis, were not observed in our review. This may be because commercial software that can generate reliable preference data from the responses of a single patient in real time is available only for ACA and maximum-difference scaling (best-worst scaling).22 Furthermore, ACA (like maximum-difference scaling) does not require presenting all attributes in one choice task, which can be useful for complex medical decisions. Another plausible explanation might be the unstudied effect of using less efficient designs (not fully balanced and orthogonal) on the reliability of individual preference estimations. In hindsight, however, ACA designs usually have good statistical efficiency, although they are often not strictly orthogonal.23 Nevertheless, it might be interesting for researchers to have greater flexibility in CA methods to construct decision support tools in health care settings. Fraenkel (2013) has already described some pilot work with the best-worst scaling method in patients with rheumatoid arthritis.22 An article describing this work in detail has not yet been published, however, and could not be included in this review.

Instrument Design

All studies were conducted via a computer, and 3 were also Web-based. Web-based ACA exercises offer patients with Internet access the flexibility to complete the exercise at home, at their own pace, and thus reduce time demands on clinicians. As the study results have shown, however, people who are older or have lower levels of education may need additional support.16,18 All studies provided patients with their preference results in real time, although the type of outcome presented to patients differed. Only 3 studies presented patients with a ranking of treatment options or a suggestion of which treatment would best fit their values. Physicians might feel hesitant if the treatment with the highest score is not the best treatment option according to the physician’s experience or the available evidence.22 Shared decision making has historically shied away from making any type of treatment recommendation.24,25 Providing the patient with results on attribute importance leaves more room for interpretation and discussion during shared decision making; it is arguable, however, that helping patients better understand how each treatment option aligns, or does not align, with what matters to them is a critical, and too often overlooked, step in supporting evidence-informed, values-congruent decisions.8 The question remains whether this should be done by a computer or by the doctor.

Evaluation of ACA

Across studies, patients had a positive attitude about the need to actively think about the relevant trade-offs, and they reported the ACA exercises to be useful and informative. One of the main barriers we came across, however, was the implementation of ACA exercises within the clinical workflow. For these tools to succeed, it is of paramount importance that thought be given to the appropriate time to elicit preferences, the amount of effort expected from physicians, and, most importantly, how the results of the CA exercise fit within the patient–physician dialogue.16,20,21 None of the included studies, however, went into depth on any of these issues. Only Pieterse et al. (2010) questioned whether patients’ values should be incorporated into the medical record to increase the likelihood that they are addressed during the clinical encounter.16

Limitations of This Study

To the best of our knowledge, this is the first review of individual preference estimation based on CA methods. Only 8 ACA studies were identified, however, making generalization difficult. For the quantifiable criteria (e.g., mean duration), no meta-analysis could be performed because of the heterogeneity of measurements. In addition, some studies lacked complete information on instrument design, making it difficult to obtain an unambiguous picture. We approached authors for clarification, but few responded to our questions. Except for 1, all studies used Sawtooth Software, and thus our results are largely specific to this software package. Other CA software packages are available (e.g., 1000minds Ltd.); however, to the best of our knowledge, no studies using such packages met our inclusion criteria.

Some studies discussed both the design and the evaluation of an ACA exercise, which mostly resulted in less detail about the design phase. For reasons of transparency, and to offer the possibility to learn from previous research, we recommend that authors include more detailed information about the development phase in an appendix.

Furthermore, there might have been selection bias in our review. The studies of Cheung et al. (2010) and Imaeda et al. (2010) addressed the theme of individual preference estimation, but they were excluded because their ranking-conjoint and maximum-difference-scaling experiments were designed to elicit population preferences.26,27 Moreover, one study used a discrete choice experiment to improve patient knowledge and adjust high patient expectations of treatment outcomes.28 That study was excluded because the discrete choice experiment was not analyzed, nor did patients receive results.

Conclusion

There is only limited published use of CA exercises in shared decision making. All included studies used ACA to estimate individual preferences. The studies resembled each other in the design choices made, but patients received different feedback across studies. Furthermore, patients had a positive attitude about the need to actively think about the relevant trade-offs, and they reported the ACA exercises to be useful and informative. For these tools to truly succeed, however, further research should first focus on a more flexible set of attributes and levels, on the feedback patients want to and should receive, and on how the results fit within the patient–physician dialogue. It might also be interesting for researchers to have greater flexibility in the choice of CA methods for decision support tools, because the optimal approach will vary with the needs of the clinical setting and the capabilities of patients.

Supplemental Material

Appendix_1_online_supp – Supplemental material for Individual Value Clarification Methods Based on Conjoint Analysis: A Systematic Review of Common Practice in Task Design, Statistical Analysis, and Presentation of Results

Appendix_2_online_supp – Supplemental material for Individual Value Clarification Methods Based on Conjoint Analysis: A Systematic Review of Common Practice in Task Design, Statistical Analysis, and Presentation of Results

Footnotes

Work was performed at the following institutions: Department of Health Technology and Services Research, University of Twente; Department of Family and Emergency Medicine, Office of Education and Professional Development, Laval University; Population Health and Optimal Health Practices Research Unit, CHU de Québec; and School of Medicine, Yale University. This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. However, Liana Fraenkel is supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases, part of the National Institutes of Health, under award no. AR060231-01 (Fraenkel). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. In addition, Holly Witteman is supported by a Research Scholar Junior 1 career award from the Fonds de recherche du Québec – Santé.

The funding agreement ensured the authors’ independence in designing the study, interpreting the data, and writing and publishing the report.

Supplementary Material: Supplementary material for this article is available on the Medical Decision Making Web site at http://journals.sagepub.com/home/mdm.

Contributor Information

Marieke G.M. Weernink, Department of Health Technology and Services Research, MIRA—Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, the Netherlands.

Janine A. van Til, Department of Health Technology and Services Research, MIRA—Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, the Netherlands.

Holly O. Witteman, Department of Family and Emergency Medicine, Office of Education and Professional Development, Laval University, Quebec City, QC, Canada; Population Health and Optimal Health Practices Research Unit, CHU de Québec, Quebec City, QC, Canada.

Liana Fraenkel, School of Medicine, Yale University, New Haven, CT, USA.

Maarten J. IJzerman, Department of Health Technology and Services Research, MIRA—Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, the Netherlands.

References

1. O’Connor AM, Rostom A, Fiset V, Tetroe J, Entwistle V, Llewellyn-Thomas H, et al. Decision aids for patients facing health treatment or screening decisions: systematic review. BMJ. 1999;319(7212):731–4.
2. Fagerlin A, Pignone M, Abhyankar P, Col N, Feldman-Stewart D, Gavaruzzi T, et al. Clarifying values: an updated review. BMC Med Inform Decis Making. 2013;13(2):1–7. doi: 10.1186/1472-6947-13-s2-s8.
3. Witteman HO, Scherer LD, Gavaruzzi T, Pieterse AH, Fuhrel-Forbis A, Chipenda Dansokho S, et al. Design features of explicit values clarification methods: a systematic review. Med Decis Making. 2016. doi: 10.1177/0272989x15626397.
4. Lancaster KJ. A new approach to consumer theory. J Pol Econ. 1966;74(2):132–57.
5. Bridges JF, Hauber AB, Marshall D, Lloyd A, Prosser LA, Regier DA, et al. Conjoint analysis applications in health—a checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health. 2011;14(4):403–13. doi: 10.1016/j.jval.2010.11.013.
6. Cattin P, Wittink DR. Commercial use of conjoint analysis: a survey. J Market. 1982;46(3):44–53. doi: 10.2307/1251701.
7. Ryan M, Farrar S. Using conjoint analysis to elicit preferences for health care. BMJ. 2000;320(7248):1530–3.
8. Witteman HO, Gavaruzzi T, Scherer LD, Pieterse AH, Fuhrel-Forbis A, Chipenda Dansokho S, et al. Effects of design features of explicit values clarification methods: a systematic review. Med Decis Making. 2016;36(6):760–76. doi: 10.1177/0272989x16634085.
9. Weernink MGM, Janus SLM, van Til JA, Raisch DW, van Manen JG, IJzerman MJ. A systematic review to identify the use of preference elicitation methods in healthcare decision making. Pharm Med. 2014;28(4):175–85. doi: 10.1007/s40290-014-0059-1.
10. Kaltoft MK, Neilsen JB, Salkeld G, Dowie J. Can a discrete choice experiment contribute to person-centred healthcare? Euro J Person Cent Healthcare. 2015;3(4):431–7.
11. de Bekker-Grob EW. Discrete Choice Experiments in Health Care: Theory and Applications. PhD thesis. Rotterdam: Erasmus Universiteit Rotterdam; 2009.
12. Ryan M, Skåtun D, Major K. Using discrete choice experiments to go beyond clinical outcomes when evaluating clinical practice. In: Ryan M, Gerard K, Amaya-Amaya M, eds. Using Discrete Choice Experiments to Value Health and Health Care. Dordrecht: Springer Netherlands; 2008. p 101–16.
13. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535. doi: 10.1136/bmj.b2535.
14. Jayadevappa R, Chhatre S, Gallo JJ, Wittink M, Morales KH, Bruce Malkowicz S, et al. Treatment preference and patient centered prostate cancer care: design and rationale. Contemp Clin Trials. 2015;45(Pt B):296–301. doi: 10.1016/j.cct.2015.09.024.
15. de Achaval S, Fraenkel L, Volk RJ, Cox V, Suarez-Almazor ME. Impact of educational and patient decision aids on decisional conflict associated with total knee arthroplasty. Arth Care Res. 2012;64(2):229–37. doi: 10.1002/acr.20646.
16. Pieterse AH, Berkers F, Baas-Thijssen MC, Marijnen CA, Stiggelbout AM. Adaptive conjoint analysis as individual preference assessment tool: feasibility through the internet and reliability of preferences. Patient Ed Couns. 2010;78(2):224–33. doi: 10.1016/j.pec.2009.05.020.
17. Fraenkel L. Feasibility of using modified adaptive conjoint analysis importance questions. Patient. 2010;3(4):209–15. doi: 10.2165/11318820-000000000-00000.
18. Rochon D, Eberth JM, Fraenkel L, Volk RJ, Whitney SN. Elderly patients’ experiences using adaptive conjoint analysis software as a decision aid for osteoarthritis of the knee. Health Expect. 2014;17(6):840–51. doi: 10.1111/j.1369-7625.2012.00811.x.
19. Fraenkel L, Rabidou N, Wittink D, Fried T. Improving informed decision-making for patients with knee pain. J Rheumatol. 2007;34(9):1894–8.
20. Abraham NS, Naik AD, Street RL Jr, Castillo DL, Deswal A, Richardson PA, et al. Complex antithrombotic therapy: determinants of patient preference and impact on medication adherence. Patient Prefer Adher. 2015;9:1657–68. doi: 10.2147/ppa.s91553.
21. Hawley S, Newman L, Griggs J, Kosir M, Katz S. Evaluating a decision aid for improving decision making in patients with early-stage breast cancer. Patient. 2015:1–9. doi: 10.1007/s40271-015-0135-y.
22. Fraenkel L. Incorporating patients’ preferences into medical decision making. Med Care Res Rev. 2013;70(1 Suppl):80S–93S. doi: 10.1177/1077558712461283.
23. Sawtooth Software. ACA 6.0 Technical Paper. Sun Valley: Sawtooth Software; 2007.
24. Frongillo M, Feibelmann S, Belkora J, Lee C, Sepucha K. Is there shared decision making when the provider makes a recommendation? Patient Ed Couns. 2013;90(1):69–73. doi: 10.1016/j.pec.2012.08.016.
25. Fried TR. Shared decision making: finding the sweet spot. New Engl J Med. 2016;374(2):104–6. doi: 10.1056/NEJMp1510020.
26. Imaeda A, Bender D, Fraenkel L. What is most important to patients when deciding about colorectal screening? J Gen Intern Med. 2010;25(7):688–93. doi: 10.1007/s11606-010-1318-9.
27. Cheung SW, Aranda D, Driscoll CL, Parsa AT. Mapping clinical outcomes expectations to treatment decisions: an application to vestibular schwannoma management. Otol Neurotol. 2010;31(2):284–93. doi: 10.1097/MAO.0b013e3181cc06cb.
28. Dowsey MM, Scott A, Nelson EA, Li J, Sundararajan V, Nikpour M, et al. Using discrete choice experiments as a decision aid in total knee arthroplasty: study protocol for a randomised controlled trial. Trials. 2016;17(1):1–10. doi: 10.1186/s13063-016-1536-5.


