Health Services Research. 2002 Dec;37(6):1659–1679. doi: 10.1111/1475-6773.01116

Measuring What People Value: A Comparison of “Attitude” and “Preference” Surveys

Kathryn A Phillips, F Reed Johnson, Tara Maddala
PMCID: PMC1464045  PMID: 12546291

Abstract

Objective

To compare and contrast methods and findings from two approaches to valuation used in the same survey: measurement of “attitudes” using simple rankings and ratings versus measurement of “preferences” using conjoint analysis. Conjoint analysis, a stated preference method, involves comparing scenarios composed of attribute descriptions by ranking, rating, or choosing scenarios. We explore possible explanations for our findings using focus groups conducted after the quantitative survey.

Methods

A self-administered survey, measuring attitudes and preferences for HIV tests, was conducted at HIV testing sites in San Francisco in 1999–2000 (n = 365, response rate=96 percent). Attitudes were measured and analyzed using standard approaches. Conjoint analysis scenarios were developed using a fractional factorial design and results analyzed using random effects probit models. We examined how the results using the two approaches were both similar and different.

Results

We found that “attitudes” and “preferences” were generally consistent, but there were some important differences. Although rankings based on the attitude and conjoint analysis surveys were similar, closer examination revealed important differences in how respondents valued price and attributes with “halo” effects, variation in how attribute levels were valued, and apparent differences in decision-making processes.

Conclusions

To our knowledge, this is the first study to compare attitude surveys and conjoint analysis surveys and to explore the meaning of the results using post-hoc focus groups. Although the overall findings for attitudes and preferences were similar, the two approaches resulted in some different conclusions. Health researchers should consider the advantages and limitations of both methods when determining how to measure what people value.

Keywords: Conjoint analysis, discrete choice experiment, patient preferences, patient acceptance of health care, attitudes, methods


Measuring individuals' value for health care goods, services, and interventions is a significant challenge for health care researchers. Such valuations are used for many different purposes, including setting of health priorities and policies. The purpose of this paper is to compare results from a survey on HIV testing methods that used two approaches to valuation: measurement of “attitudes” using ranking and rating questions versus measurement of “preferences” using conjoint analysis methods. We also explore possible explanations for our findings using focus groups conducted after the quantitative survey.

The most commonly used approaches to valuation are “attitude” surveys, which ask respondents to rank or rate their opinion about discrete items. Another approach is the measurement of “preferences.” Although the term “preference” is often used informally to mean “attitude,” the economic concept of “preference” assumes adherence to economic theory. We focus on “stated preference” surveys, which are useful when observed behavior (“revealed preference”) is not relevant, such as when markets do not exist. Stated-preference surveys include methods to develop health-state utility weights for use in quality adjusted life years (QALYs) (e.g., rating scales, standard gamble, and time trade-off), willingness to pay as measured by contingent valuation surveys, and “conjoint analysis.” Conjoint analysis surveys involve comparing hypothetical scenarios by ranking, rating, or choosing scenarios. For example, respondents may be asked to choose from “Option A” and “Option B,” where each option is described using a combination of attributes (e.g., a test at a doctor's office costing $100 or a test at a public clinic costing $25) (Appendix 1). Although conjoint analysis has been widely used in several fields of economics as well as marketing research, it has only recently become more widely used in health care research (e.g., Bryan et al. 1998; Bryan et al. 2000; Farrar and Ryan 1999; Johnson and Lievense 2000; Ratcliffe 2000; Ratcliffe and Buxton 1999; Ryan 1999; Ryan and Farrar 2000; Ryan and Hughes 1997; Ryan, McIntosh and Shackley 1998; Singh et al. 1998; Vick and Scott 1998).

This is the first study, to our knowledge, to compare health care “attitudes” and “preferences” using data from the same study, along with focus groups to explore the meaning of such results. Ratcliffe (2000) briefly compared responses from an attitude survey and a conjoint analysis survey and found “some concordance,” but those results were not the focus of the paper and were not analyzed or reported in detail. Several studies have used post-hoc focus groups to examine decision-making processes in contingent valuation studies, but to our knowledge, focus groups have not been used to examine such processes in conjoint analysis.

Our study adds to the literature both from a practical and conceptual perspective. From a practical perspective, this study provides guidance to researchers on the strengths and weaknesses of each approach and when each approach may be most useful. More fundamentally, however, this study addresses two disparate perspectives on the nature of decision making. We examine the theoretical basis for these perspectives and possible explanations for differences in the results produced. This study thus provides a link to the large literature on psychological and economic models of decision making, which is a significant body of literature but one that may be less familiar to health services researchers (Phillips and Rosenblatt 1992). There has been a long debate on the relevance of psychological models in contrast to economic models of how people make decisions, beginning with early work on “bounded rationality” (Simon 1955), to the development of “prospect theory” (Kahneman and Tversky 1979), to recent work on the implications of this debate for health policy (Rice 1997; Rice 1998). For example, work by Kahneman et al. has argued that economic “preferences” as typically measured are simply expressions of attitudes measured on a dollar scale, rather than true preferences as dictated by economic theory (Kahneman et al. 1993; Kahneman, Ritov and Schkade 1999). The implications of the outcome of this debate for health services researchers are substantial, since many of the approaches used to develop and evaluate health policies are based on the fundamental premise that economic preferences are a valid measure of social welfare (Rice 1997; Rice 1998).

Conceptual Framework

We begin by briefly describing the theoretical basis and methods used in attitude and preference surveys and how they differ. We then discuss the role of focus groups in understanding decision-making processes. In this article we are able to provide only a brief overview of the extensive literature on these topics (for further reading see, e.g., Kahneman, Ritov and Schkade 1999).

Attitude Surveys

The concept of an “attitude” comes from social psychology, with an “attitude” defined as “a psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor” (Eagly and Chaiken 1996). The theoretical literature on attitudes is broad, and approaches to measuring attitudes range from simple approaches that use straightforward ranking and rating questions, to more complex approaches that distinguish attitudes, perceptions, values, and beliefs (Aday 1996; Sudman and Bradburn 1982; Tanur 1992). In this study, we adopted the commonly used approach in health care surveys of measuring attitudes with ranking and rating questions; for example, we asked respondents to rate the importance of test location on a six-point scale (see Appendix).

Preference Surveys (Conjoint Analysis)

The concept of preferences comes from economic theory, with preferences defined as individuals’ “utility” for consuming health care goods and services. Briefly, welfare economics is based on the assumption that individuals maximize a preference (or utility) function. Economic theory further argues that utilities can be scaled in dollar-equivalence terms. That is, the most useful measure of the strength of preferences is the monetary payment that would leave people indifferent between having a given utility change and not having the change (i.e., willingness to pay or willingness to accept). Monetary measures of utility changes require various assumptions about preferences, including completeness, monotonicity, and transitivity. These assumptions arise out of commonly held notions of consistency and rationality. Monotonicity, for example, simply assumes that people prefer more of a good to less of a good.

Conjoint analysis elicits respondents' preferences by asking them to evaluate alternatives consisting of different combinations of attributes; for example, in our study respondents chose hypothetical Test A or B based on attribute levels such as whether the test was conducted at a clinic, doctor's office, or at home (see Appendix). The technique is based on three concepts derived from economic theory:

  1. Each good or service is a bundle of potential attributes;

  2. Each individual has a set of unique relative utility weights for attribute levels;

  3. Combining the utilities for different attributes provides an individual's overall relative utility (Singh et al. 1998).

When price is included as an attribute, the money equivalence of a utility difference can be calculated from the rate at which respondents are willing to trade attribute levels against price. This rate indicates the dollar value of a unit change on the utility scale. It thus provides a standardized metric for comparing the utility of different attribute levels and for calculating benefits for use in cost-benefit analyses.
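Expressed as a formula (our notation; this is the standard conjoint analysis relationship rather than an equation reproduced from the article), the marginal willingness to pay for moving an attribute from level j to level k is the difference in the estimated level utilities scaled by the absolute value of the price coefficient:

```latex
\mathrm{WTP}_{j \to k} \;=\; \frac{\beta_k - \beta_j}{\lvert \beta_{\text{price}} \rvert}
```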

Comparing Attitude and Conjoint Analysis Surveys

Attitude and conjoint analysis surveys are thus different in their theoretical frameworks and in the methods used to elicit valuations. Two differences are particularly germane to this study. First, conjoint analysis assumes that individuals make trade-offs within a resource constraint and that these decision-making processes conform to the assumptions of economic theory and rational choice. In contrast, the most commonly used approaches to measuring attitudes require minimal assumptions about the underlying theory and model. Respondents are typically not asked to make choices within a resource constraint; for example, with a series of Likert-type questions, respondents can rate all attributes of a commodity as equally important.

Second, conjoint analysis always uses a decomposed approach, where respondents evaluate scenarios composed of attribute levels, with each level explicitly stated (e.g., attribute levels for test location are clinic, doctor's office, or home). Thus, the method allows estimating utility for each attribute level and estimating utility for any combination of levels, including those respondents do not directly evaluate. In contrast, attitude surveys more typically use a holistic approach, where respondents evaluate attributes as a discrete whole, as we did in this study (e.g., evaluating the importance of test location as a whole rather than separate questions on the importance of a private, public, or home location).

Focus Groups

The vast majority of research done on stated preference methods has focused on quantitative methods (Chilton and Hutchinson 1999). However, qualitative methods, particularly focus groups, have been found useful in exploring the meaning of quantitative results in stated preference surveys (Chilton and Hutchinson 1999; Desvousges and Smith 1988; Devers, Sofaer, and Rundall 1998). Focus groups are informal discussions in which a moderator probes people's attitudes on a specific topic (Desvousges and Smith 1988). The most common use of focus groups is to develop and pilot quantitative surveys. However, in this study, focus groups were conducted after the quantitative survey in order to explore how respondents approached the valuation exercise and the reasons for discrepancies between findings from the attitude and conjoint analysis components.

Methods

Survey Procedures

We developed a survey to examine both attitudes and preferences about HIV tests. A previous article (Skolnik et al. 2001) reported on the results from the attitude component as well as details of the survey procedures and sample population, and an accompanying article in this issue (Phillips, Maddala and Johnson 2002) reports on the conjoint analysis component. Surveys were fielded at four publicly funded HIV testing locations in San Francisco, California, between November 1999 and February 2000. Of the 380 HIV testers approached, 365 agreed to complete the survey (96 percent response rate). The 10–15-minute, self-administered survey was completed while respondents waited to be tested. Respondents were paid $5 upon completion.

The attitude and conjoint analysis components were designed to measure the same concepts. However, because of differences in the two methods, they are not exactly comparable (discussed further below). The attitude component consisted of two sections. The first section asked respondents to rank four HIV “test scenarios” depicting typical testing situations (public clinic, doctor's office, one-week home collection test, and instant home test). The second section asked respondents to rate eight testing attributes (see Table 1) in terms of their importance using Likert-type questions.

Table 1.

Attributes and Levels

Attribute | Levels (choices)
Location | (Public clinic), doctor's office, home
Price | ($0), $10, $50, $100
Sample collection | (Draw blood), swab mouth/oral fluids, urine sample, prick finger/fingerstick
Timeliness/accuracy | (Results in 1–2 weeks, almost always accurate), immediate results almost always accurate, immediate results less accurate
Privacy/anonymity of test results | (Results given in person—not linked to name), “only you know that you are tested”—results not linked, results given by phone—not linked, results given by phone—linked, results given in person—linked
Counseling | (Talk to a counselor), read brochure then talk to counselor

We used five steps in creating the conjoint analysis component (Louviere, Hensher and Swait 2000):

  1. Defining attributes

  2. Assigning attribute levels

  3. Creating scenarios

  4. Determining choice sets and obtaining preference data

  5. Model estimation.

Details are provided in the accompanying article in this issue (Phillips, Maddala and Johnson 2002) and its appendix. To summarize, after attributes were chosen we assigned levels, which are realistic ranges over which an attribute may vary (Table 1). Next, hypothetical scenarios, each a combination of these attribute levels, were created using a fractional factorial design. Lastly, scenarios were paired into choice sets (questions). We used a discrete choice approach, where respondents were asked to choose between two groups of attribute levels (“Test A” versus “Test B”). Results were analyzed using random effects probit models.
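To make the estimation step concrete, the following is a minimal sketch, not the authors' code. It effects-codes the categorical attributes and fits a pooled binary probit on the Test A versus Test B choices using statsmodels; the paper's model also includes respondent-level random effects, which this simplified version omits. The input file, column names, and base levels are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

def effects_code(series, base_level):
    """Effects-code one categorical attribute: one column per non-base level
    (1 = that level, 0 = another non-base level, -1 = the base level)."""
    levels = [lvl for lvl in series.dropna().unique() if lvl != base_level]
    coded = pd.DataFrame(
        {f"{series.name}_{lvl}": (series == lvl).astype(float) for lvl in levels},
        index=series.index,
    )
    coded.loc[series == base_level, :] = -1.0
    return coded

# Long format: one row per respondent x question x alternative (Test A or Test B).
df = pd.read_csv("choice_data.csv")  # hypothetical file and columns

attributes = {                      # attribute column -> (hypothetical) base level
    "location": "home",
    "collection": "fingerprick",
    "timeliness": "immediate_less_accurate",
    "privacy": "phone_linked",
    "counseling": "brochure",
}
X = pd.concat(
    [effects_code(df[col], base) for col, base in attributes.items()] + [df[["price"]]],
    axis=1,
)

# Standard binary-choice setup: regress "chose Test A" on the A-minus-B
# differences in the coded attributes within each question.
is_a = df["alternative"] == "A"
X_diff = (X[is_a].set_index(df.loc[is_a, "question_id"])
          - X[~is_a].set_index(df.loc[~is_a, "question_id"]))
chose_a = df.loc[is_a].set_index("question_id")["chosen"]

probit = sm.Probit(chose_a, sm.add_constant(X_diff)).fit()
print(probit.summary())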

Focus Group Procedures

Three focus groups were conducted between October and November 2000 by a trained facilitator. The first group (n = 7) consisted mainly of respondents to the original survey who had indicated that they would be willing to participate in such groups. The second group (n = 8) consisted of people recruited at the same public HIV test sites who had not participated in the original survey. The third group (n = 10) consisted of individuals in a drug treatment center. Respondents were paid $30. Standard focus group procedures were followed: an interview guide was prepared, groups were taped, and data were transcribed and coded into themes (details available on request).

Research Questions and Methods

We examined how the results were similar and different for the two components and explored possible explanations for those differences. On the one hand, findings for “attitudes” and “preferences” should be consistent if the underlying valuations are similar. On the other hand, these findings may be different because of the different elicitation methods used and differences in the underlying decision-making processes. We begin with a comparison of the relative ranking of attributes and scenarios, which provides an overview of the consistency of findings. We then explore in more detail possible differences in the findings, as suggested by differences in the rankings and the theoretical and measurement differences discussed previously.

1 How are the results similar?

a. Are the relative rankings of attributes consistent in the attitude and conjoint analysis components?

Methods: In the attitude component, respondents directly rated each attribute using Likert-type questions, and we therefore ranked attributes using mean scores. In the conjoint analysis component, by contrast, respondents evaluated levels of attributes (embedded in scenarios) rather than attributes as a whole. We therefore estimated the relative ranking of attributes by averaging the absolute values of the regression coefficients across all relevant levels of each attribute (Tables 2 and 4; see the accompanying article in this issue for details [Phillips, Maddala and Johnson 2002]). We then compared the relative attribute rankings between the attitude and conjoint analysis components.
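The following is a small sketch of that averaging step, using the level coefficients reported in Table 4. The grouping of levels into attributes is ours, as is the treatment of price at the $5 value noted in Table 2; small differences from the published ratings (e.g., for counseling) may reflect rounding or the exact set of levels averaged.

```python
# Rank attributes by the average absolute value of their level coefficients
# (coefficient values taken from Table 4; attribute groupings are ours).
coefficients = {
    "privacy/linking":     [0.225, 0.217, 0.076, -0.193, -0.326],
    "timeliness/accuracy": [-0.074, 0.244, -0.170],
    "sample collection":   [-0.139, 0.116, 0.088, -0.065],
    "location":            [0.110, -0.082, -0.029],
    "counseling":          [0.024, -0.024],
}
importance = {attr: sum(abs(b) for b in betas) / len(betas)
              for attr, betas in coefficients.items()}

# Price is continuous, so Table 2 reports it at an assumed $5: |-0.009| * 5 = 0.045.
importance["price (at $5)"] = abs(-0.009) * 5

for attr, score in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {score:.2f}")
```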

Table 2.

Attribute Ratings

Attribute | Attitude rating: rank (mean score on 1–5 scale, 5 = very important) | Conjoint analysis rating: rank (absolute average of coefficients across attribute levels)
Accuracy/timeliness | 1 (4.39) | 2 (0.16)
Privacy | 2 (4.33) | 1# (0.21)
Linking of results | 3 (4.26) | 1# (0.21)
In-person counseling | 4 (3.84) | 6 (0.01)
Price | 5 (3.61) | 5 (0.05)@
Location | 6 (3.34) | 4 (0.07)
Method of sample collection | 7 (2.78) | 3 (0.10)

Notes: Within the attitude component, price, location, and method were not significantly different (based on Wilcoxon signed rank tests with a Bonferroni adjustment for multiple tests). Within the conjoint analysis component, location and method were not significantly different (based on 95% confidence intervals).

# Attributes were combined in the conjoint analysis component.

@ Assumes $5 price.

Table 4.

Results from the Random Effects Probit Model (Based on Effects Coding #)

Variable Coefficient Std. Err.
Testing location
Public 0.110*** 0.023
Doctor's office −0.082** 0.024
Home −0.029 0.021
Sample collection method
Draw blood −0.139*** 0.030
Swab/oral fluids 0.116*** 0.030
Urine 0.088** 0.030
Finger prick −0.065* 0.031
Timeliness/accuracy
Results in 1–2 weeks, accurate −0.074*** 0.022
Immediate results, accurate 0.244*** 0.024
Immediate results, less accurate −0.170*** 0.023
Privacy/anonymity of test results
Only you know 0.225*** 0.036
Results in person—not linked to name 0.217*** 0.034
Results by phone—not linked to name 0.076* 0.037
Results in person—linked to name −0.193*** 0.037
Results by phone—linked to name −0.326*** 0.036
Availability of counseling
In-person counseling 0.024 0.030
Brochure −0.024 0.030
Test price −0.009*** 0.000
Constant 0.019 0.025
Predicted utility .138
Number of observations 3,366
Number of respondents 339
Log-likelihood −1,975.4
Chi-square 739.18 (p<.001)
*p < .05; **p < .01; ***p < .001

Notes: # Effects coding computes an effect size for each attribute level. See the accompanying paper (Phillips, Maddala, and Johnson 2002) for details.

b. Are the relative ranking of scenarios (i.e., groups of attributes) consistent in the attitude and conjoint analysis components?

Methods: In the attitude component, respondents directly ranked four scenarios composed of attributes that were chosen to reflect typical testing situations (Appendix). In conjoint analysis, by contrast, the grouping of attribute levels into scenarios is generated by the experimental design rather than determined a priori. Therefore, we estimated the ranking of scenarios by calculating predicted utilities for scenarios that included the same attributes as the scenarios in the attitude component (see the accompanying article in this issue [Phillips, Maddala and Johnson 2002]).
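A simplified sketch of that scoring step follows: add the constant, the effects-coded coefficients for the scenario's attribute levels, and price times the price coefficient, all taken from Table 4. This illustrates the mechanics only; the scenario shown is our own illustration, and the published predicted utilities in Table 3 come from the full model and are not necessarily reproduced by this additive shortcut.

```python
# Predicted utility of a hypothetical scenario from the Table 4 coefficients.
TABLE4 = {
    "constant": 0.019, "public_clinic": 0.110, "draw_blood": -0.139,
    "results_1_2_weeks_accurate": -0.074, "in_person_not_linked": 0.217,
    "in_person_counseling": 0.024, "price_per_dollar": -0.009,
}

def predicted_utility(levels, price):
    return (TABLE4["constant"]
            + sum(TABLE4[level] for level in levels)
            + price * TABLE4["price_per_dollar"])

u = predicted_utility(
    ["public_clinic", "draw_blood", "results_1_2_weeks_accurate",
     "in_person_not_linked", "in_person_counseling"],
    price=0,
)
print(round(u, 3))  # additive score for a free public-clinic test with unlinked, in-person results
```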

2 How are the results different?

a. Are there differences in the valuation of the price attribute?

Methods: Problem context and form of evaluation task can influence subjects’ appraisal of the importance of various concepts or features (Huber 1997; Huber et al. 1993). Conjoint analysis and attitude measurement differ in part because they frame the evaluation task differently. Since choice-based questions may be perceived as more immediate and real, attributes that are perceived as having more immediate and direct implications—in this case, price—may acquire more salience in the conjoint analysis component. Therefore, we compared the results for the price attribute in the attitude and conjoint analysis components. We also examined the extent to which respondents in the conjoint analysis chose based on price rather than other attributes (see the accompanying paper [Phillips, Maddala and Johnson 2002]).

b. Are there differences in the valuation of attributes with “halo” effects?

Methods: We explored whether respondents assigned higher ratings in the attitude component to attributes that they perceived as “markers” for a constellation of attributes that may elicit a “halo” or “warm glow” effect (i.e., when evaluations of one attribute spill over to evaluations of other attributes [Redelmeier, Shafir and Aujla 2001]). As discussed previously, the conjoint analysis approach “breaks apart” the natural groupings of attributes and therefore allows the valuation of each attribute separately. Therefore, we examined incongruent results between the attitude and conjoint analysis components that could be a result of “halo” effects.

c. Are there differences in valuations between attributes as a whole versus attribute levels?

Methods: The valuation of attributes as a whole may be different than the valuation of attribute levels. For example, respondents may rate test location overall as relatively unimportant; yet, they may rate specific locations as highly desirable (e.g., public clinic) and others as undesirable (e.g., doctor's office). In this study, only the conjoint analysis measured valuations for attribute levels. Although it would have been possible to ask respondents about all attribute levels in the attitude component, doing so would have increased the number of Likert-type questions from 8 to 21. We therefore compared the valuation of attributes as a whole (based on the attitude and conjoint analysis components) to attribute levels (based on the conjoint analysis component only).

d. Are there apparent differences in the decision-making processes used?

Methods: Since conjoint analysis tasks are likely to be cognitively more complex, respondents may (1) exhibit inconsistent responses, and (2) show evidence of attempting to simplify the choice task by focusing only on key attributes rather than all attributes simultaneously. We measured inconsistency using two approaches. (Note: we also examined inconsistency within the attitude component; however, since the measures of inconsistency in the two components are not comparable, we do not report those results here.) First, we included a “dominant pair” comparison in which all attributes of one scenario were the same as those of the other except price. We expected that subjects would prefer the lower-priced test, all other attributes held equal. Second, we repeated a question (early and late in the survey instrument). We expected subjects to make the same choice both times the question was offered. We measured whether respondents focused on key attributes by examining the percentage of respondents who always chose a specific level of an attribute when it was offered in the conjoint analysis component. Respondents who always chose an attribute level when it was offered (assuming that it was offered 5+ times and the respondent chose it 5+ times) were defined as having “dominant” preferences (see the accompanying article [Phillips, Maddala and Johnson 2002]).
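A rough sketch of these two screens is shown below; it is ours, not the authors', and assumes a long-format table with one row per respondent, question, and alternative, with hypothetical column names ("respondent_id", "question_id", "alternative", "chosen", plus one column per attribute giving the level shown).

```python
import pandas as pd

df = pd.read_csv("choice_data.csv")  # hypothetical file and columns

def dominant_respondents(df, attribute):
    """Respondents who always chose the alternative containing a given level of
    `attribute`, provided that level was offered to them at least five times."""
    counts = df.groupby(["respondent_id", attribute])["chosen"].agg(["count", "sum"])
    always = counts[(counts["count"] >= 5) & (counts["sum"] == counts["count"])]
    return always.reset_index()["respondent_id"].unique()

def inconsistent_on_repeat(df, first_qid, repeat_qid):
    """Respondents whose answer to the repeated question differed between the
    early and late presentations."""
    answers = (df[df["chosen"] == 1]
               .pivot(index="respondent_id", columns="question_id", values="alternative"))
    return answers.index[answers[first_qid] != answers[repeat_qid]]

print(len(dominant_respondents(df, "price")))
print(len(inconsistent_on_repeat(df, first_qid=3, repeat_qid=11)))  # hypothetical question ids
```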

Results

We found that the findings from the attitude and conjoint analysis components generally were consistent, but there also were some key differences. First, in both components, the top-ranked attributes were accuracy/timeliness, privacy, and linking of results. However, note that accuracy/timeliness was the highest ranked attribute in the attitude component, while it was second in the conjoint analysis component (Table 2; see also Table 4 for details).

Second, scenarios were generally consistently ranked. In both components, respondents rated public clinics most highly, followed by instant home tests (Table 3). We also found that overall rankings for scenarios in which price=$0 were the same as those in the original scenarios (not shown).

Table 3.

Scenario Rankings

Representative Scenario | Attitude Ranking | Conjoint Analysis Ranking (Predicted Utility)
Public Clinic ($0, draw blood, accurate results in 1–2 weeks, in-person linked or unlinked results, in-person counseling) | 1 | 1 (−.193)
Instant Home Test ($50, finger prick, immediate but less accurate results, only you know whether tested, brochure/phone counseling) | 2 | 2 (−.651)
Doctor's Office Test ($5 or $50, draw blood, accurate results in 1–2 weeks, in-person or phone linked or unlinked results, in-person counseling) | 3 | 4 (−1.13)
One-week Home Test ($50, finger prick, accurate results in 1–2 weeks, phone unlinked results, brochure/phone counseling) | 4 | 3 (−.704)

Notes: Representative scenarios were estimated in the conjoint analysis component so that they matched the scenarios presented in the attitude component. Predicted utility for the “baseline” scenario of public clinic testing with unlinked results = .144 (Table 4). Thus, these results indicate that respondents prefer the baseline scenario to the representative scenarios shown above.

Although the findings were generally consistent, some results from the attitude and conjoint analysis components differed. First, price had a higher valuation in the conjoint analysis component. Based only on the attitude survey results, one might conclude that price was relatively unimportant because (1) price was among the four lowest-ranking attributes, and (2) the overall rankings for scenarios did not change when the price was assumed to be $0. However, two findings from the conjoint analysis indicate that price actually was a relatively important factor in determining choice (although still less important than many other attributes): (1) price was a significant predictor of choice (p < .001, Table 4), and (2) price was the attribute that was most often “dominant”; that is, 8 percent of respondents chose based on price. We also explored this issue in our focus groups, which confirmed that respondents make decisions based on price more than might be apparent from the attitude survey alone. For example, one respondent noted that he didn't want to appear “cheap” by rating price highly in the attitude survey, but the choice context of the conjoint analysis exercises made him realize that he did indeed make decisions based on price.

Second, the valuation of attributes with a “halo” effect was higher in the attitude component than the conjoint analysis component. Specifically, in-person counseling was the fourth-ranked attribute in the attitude component, while in the conjoint analysis component it was the least important attribute and it was not a significant predictor of choice (p = .48, Table 4). Participants in the focus groups reported that they assigned a higher value in the attitude survey to in-person counseling, an attribute that was linked for them to the public clinic testing experience, which they valued and with which they were familiar. They further reported that it was only when they were encouraged to evaluate trade-offs in the conjoint analysis component that this “halo effect” disappeared.

Third, the valuation of attributes as a whole was different than the valuation of attribute levels (Table 4). For example, the attribute levels for timeliness/accuracy, the highest rated attribute as a whole, varied significantly. The level with the highest utility was immediate/highly accurate results, while respondents had strong disutility for immediate/less accurate results. Similarly, there was wide variation among levels for privacy. Respondents had high utility for being the only person to know their results, while they had strong disutility for results by phone that were linked by name.

Lastly, we found evidence of inconsistent responses and of attempts to simplify the conjoint analysis tasks. One-third (32 percent) of respondents had inconsistent responses in the conjoint analysis component (25 percent answered the repeated question inconsistently and 10 percent answered the dominant-pair question incorrectly). More than one-quarter of respondents (28 percent) appear to have chosen based on a key attribute, particularly price. In the focus groups, one explanation given for inconsistent responses was respondents' frustration at having to make difficult trade-offs. Participants also often stated that they focused on key attributes (either most preferred or most disliked) and used a “threshold” approach in making choices; for example, price was most important, but if prices were low for both options they would focus on accuracy. They noted that the more complex the questions were, the more they used such simplifying rules.

Discussion

We found that “attitudes” and “preferences” were generally consistent, but there were some important differences. Although rankings using the two methods were similar, closer examination revealed important differences in how respondents valued price and attributes with “halo” effects, variation in how different attribute levels were valued, and apparent differences in decision-making processes. To our knowledge, this is the first health care study to compare results using three methods: attitude surveys, conjoint analysis, and focus groups. By triangulating the results from all three approaches, we were able to obtain greater insights (Huber et al. 1993). Our study adds to the literature by: (1) demonstrating how attitudes and preferences are both similar and different and how preferences can be measured using the relatively less well-known method of conjoint analysis, (2) using detailed empirical comparisons to compare and contrast findings from three approaches, and (3) exploring how the results are relevant to the important issue of how individuals value health care interventions.

The three approaches used are derived from different theoretical traditions and use different methods, and they provide different information and conclusions. Therefore, researchers should consider the strengths and limitations of each approach, and under what circumstances each approach is most relevant. Attitude surveys will undoubtedly continue to be useful in health care research. However, our study illustrates several strengths of measuring preferences using conjoint analysis. The first advantage is that conjoint analysis provided more in-depth understanding of valuations, which may better reflect underlying preferences and thus actual decision making. The conjoint analysis survey uncovered differences in how respondents valued price, attributes with “halo” effects, and variation in preferences for different levels of attributes. These results may be at least partly explained by the nature and methods of conjoint analysis: conjoint analysis is based on a well-formulated theory, it uses a decomposed approach, and it requires respondents to make choices and to trade off attributes against price. Focus group participants reported that they found the conjoint analysis tasks to be useful in forcing them to think more deeply, and they felt that the results better reflected how they would actually behave.

A second advantage is that conjoint analysis allows the estimation of overall preferences for any combination of attributes, including combinations that represent goods or services that are not currently available and thus for which respondents may not have well-formed preferences. In addition to estimating preferences for representative testing scenarios, we also estimated preferences for additional types of tests, including ones that are currently unavailable (see accompanying article in this issue [Phillips, Maddala and Johnson 2002]).

Last, conjoint analysis allows the estimation of willingness to pay, which provides a standardized approach to quantifying preferences for use in economic evaluations. For example, we found that respondents are willing to pay an additional $35 to take a test with immediate, highly accurate results, which has important implications not just for test manufacturers but also for health policy.
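As a rough arithmetic check (ours, and assuming the $35 figure reflects moving from the standard 1–2-week accurate result to the immediate, highly accurate result), the Table 4 coefficients imply approximately:

```latex
\mathrm{WTP} \;\approx\; \frac{0.244 - (-0.074)}{0.009} \;\approx\; \$35
```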

These advantages, however, must be balanced against the associated disadvantages. Conjoint analysis is a complex method, requiring more cognitive effort from respondents and greater effort by researchers to properly design and analyze such surveys. In focus groups, respondents reported that the tasks were difficult and “irritating.” In addition, as noted above, there was a substantial percentage of inconsistent responses. In another study, we reported on an experiment to simplify conjoint analysis surveys (Maddala, Phillips and Johnson 2001), and future research should continue to explore approaches to making conjoint analysis easier for both respondents and researchers.

We also found focus groups to be useful adjuncts to quantitative surveys. Although most focus groups are used for survey development, we found them to be extremely valuable when conducted after a quantitative survey in order to better understand our results.

Our study also touches on the broader debate over how individuals make decisions and whether “attitudes” and “preferences” are essentially similar or different. Although our study cannot answer this very complex question, our findings suggest that attitude and preference surveys—at least as used in this study—do not produce identical findings. We also found evidence—as have many other studies (e.g., Fischhoff 1991; Payne, Bettman and Johnson 1992; Schkade and Payne 1994; Slovic 1995)—that respondents' decision-making processes appear to violate assumptions of economic theory; that is, some respondents were inconsistent in their preferences and appeared to have used cognitive heuristics to simplify decision making. However, we did not find evidence that these violations precluded obtaining useful results. Our analysis points out that any method of measuring “value” has trade-offs that must be considered. Future research should examine this issue as well as how respondents interpret questions on “price,” “cost,” and “value” (Ratcliffe 2000; Schkade and Payne 1994). Research is also needed on the convergence of results using conjoint analysis and other preference measurement approaches, and the use of willingness-to-pay (WTP) measures in cost-effectiveness as well as cost-benefit analyses (Johnson and Lievense 2000; Neumann, Goldie and Weinstein 2000).

One inherent limitation of the study is that we could not always directly compare the attitude and conjoint analysis components. By definition, these methods are based on different underlying concepts of what people value and how these values should be measured. However, we were able to generate comparable analyses that addressed the key questions. Further research is needed to examine how to calculate utilities for attributes in aggregate. Our study was also limited by the generalizability of the sample. Our respondents are likely to have had more well-defined preferences for testing than the general population, and thus the correlation found between methods may be higher than it would be in a general population. We would expect similar findings in other populations and in studies of other commodities, but this needs to be examined in future research.

In conclusion, the measurement of the “value” of health care will continue to be a critical issue. Health researchers should consider the advantages and limitations of both attitude and preference surveys when determining how to measure what people value. Using multiple methods may provide the most relevant and compelling answers.

Appendix:  Examples of Attitude and Conjoint Analysis Questions

Example of Attitude Question on Test Scenarios (Using Ranking Scale)

Please read the following descriptions about four different ways a person might be tested for HIV.

A. You can get an HIV test at a public clinic (for example, a community test site such as this one) for no cost. A blood sample is taken from your arm, and sent to a lab. You return to the clinic for your test results in 1–2 weeks, and the results are almost always accurate. You talk to a counselor before your test, and you get your results in person from a counselor.
B. You can also get an HIV test at a doctor's office. If you have health insurance, your cost is about $5. (If you don't have health insurance, the test will cost about $50 or more.) A blood sample is taken from your arm, and sent to a lab. You get the results in 1–2 weeks, and the results are almost always accurate. You talk with your doctor before being tested. You get the results by phone if your test is negative, and in-person if your test is positive.
C. You can also purchase a one-week home test by mail or in a drug store for about $50. You take your own blood sample by pricking your finger. The sample is mailed to a lab. You get your results in about a week, by calling a 1-800 number. The results are almost always accurate. If your test is negative, you will get a recording that tells you so. If your test is positive, you will speak to a counselor on the phone. If you need further counseling or referrals, there are phone numbers that you can call.
D. In the future, you may be able to buy an instant home test, by mail or in a drug store. It will cost about $50. You take your own blood sample by pricking your finger, and test it yourself right away. You get your test results in about 5 minutes. The test will tell you if you do not have HIV. However, if the test tells you that you might have HIV, you will need to go to a clinic or doctor for another test that will almost always be accurate. If you need counseling or referrals, there are phone numbers that you can call. Only you know your test results.

Assume that at some point in the future, you decided to get another HIV test. Please rank the tests in order of your personal preference. Place a “1” by the test that would be your first choice, a “2” by your second choice, a “3” by your third choice, and a “4” by your fourth choice.

A. public clinic test ——
B. doctor's office test ——
C. one-week home test ——
D. instant home test ——

Example of Attitude Question on Test Attributes (Using Rating Scale)

Now we are going to ask about specific aspects of the HIV test, one at a time. Please place an X in the column that indicates how important each characteristic is to you in general when you are choosing an HIV test.

Response options: Not at all important | Not very important | Somewhat important | Very important | Extremely important | Don’t know
Example item: Location (clinic, doctor's office, or home)

Example of Conjoint Analysis Question

TEST A: Test at doctor's office. Test costs $100. A cotton pad is used to take a sample from your mouth. You get your results in 1–2 weeks. The test is almost always accurate. You get your results in person, so the person you see knows your test results. Your name is not linked to your results. You talk in person with a counselor or doctor before your test.

OR

TEST B: Test at public clinic. Test costs $10. You give a sample of your urine. You get your results in 5 minutes. The test will tell you if you do not have HIV. However, if your test tells you that you might have HIV, you will need to go to a clinic or doctor for another test. The second test will almost always be accurate. Only you know your test results. Your name is not linked with your results. You get a brochure about HIV. You can get phone counseling if you want it.

Do you prefer: Test A —— Test B ——

Footnotes

This work was supported by an NIH R01 grant to Dr. Phillips from the National Institute of Allergy and Infectious Diseases (AI43744).

References

  1. Aday LA. Designing and Conducting Health Surveys. San Francisco: Jossey-Bass; 1996.
  2. Bryan S, Buxton M, Sheldon R, Grant A. “Magnetic Resonance Imaging for the Investigation of Knee Injuries: An Investigation of Preferences.” Health Economics. 1998;7(7):595–603. doi: 10.1002/(sici)1099-1050(1998110)7:7<595::aid-hec381>3.0.co;2-e.
  3. Bryan S, Gold L, Sheldon R, Buxton M. “Preference Measurement Using Conjoint Methods: An Empirical Investigation of Reliability.” Health Economics. 2000;9(5):385–95. doi: 10.1002/1099-1050(200007)9:5<385::aid-hec533>3.0.co;2-w.
  4. Chilton SM, Hutchinson WG. “Do Focus Groups Contribute Anything to the Contingent Valuation Process?” Journal of Economic Psychology. 1999;20(4):465–83.
  5. Desvousges WH, Smith K. “Focus Groups and Risk Communication: The ‘Science’ of Listening to Data.” Risk Analysis. 1988;8(4):479–84.
  6. Eagly A, Chaiken S. “Attitude Structure and Function.” In: Gilbert D, Fiske S, Lindzey G, editors. The Handbook of Social Psychology. New York: McGraw-Hill; 1996.
  7. Farrar S, Ryan M. “Response-Ordering Effects: A Methodological Issue in Conjoint Analysis.” Health Economics. 1999;8(1):75–9. doi: 10.1002/(sici)1099-1050(199902)8:1<75::aid-hec400>3.0.co;2-5.
  8. Fischhoff B. “Value Elicitation: Is There Anything in There?” American Psychologist. 1991;46(8):835–47.
  9. Huber J. “What We Have Learned from 20 Years of Conjoint Research: When to Use Self-Explicated, Graded Pairs, Full Profiles or Choice Experiments.” Paper presented at the Sawtooth Software Conference; 1997.
  10. Huber J, Wittink DR, Fiedler JA, Miller R. “The Effectiveness of Alternative Preference Elicitation Procedures in Predicting Choice.” Journal of Marketing Research. 1993;30(1):105–14.
  11. Johnson FR, Lievense K. Stated-Preference Indirect Utility and Quality-Adjusted Life Years. Durham, NC: Triangle Economic Research; 2000.
  12. Kahneman D, Ritov I, Jacowitz K, Grant P. “Stated Willingness to Pay for Public Goods: A Psychological Perspective.” Psychological Science. 1993;4(5):310–5.
  13. Kahneman D, Ritov I, Schkade D. “Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues.” Journal of Risk and Uncertainty. 1999;19(1–3):203–35.
  14. Kahneman D, Tversky A. “Prospect Theory: An Analysis of Decision under Risk.” Econometrica. 1979;47(2):263–91.
  15. Louviere JJ, Hensher D, Swait JD. Stated Choice Methods: Analysis and Application. Cambridge: Cambridge University Press; 2000.
  16. Maddala T, Phillips KA, Johnson FR. “An Experiment on Simplifying Conjoint Analysis Designs for Measuring Preferences.” Working paper; 2001.
  17. Neumann PJ, Goldie SJ, Weinstein MC. “Preference-Based Measures in Economic Evaluation in Health Care.” Annual Review of Public Health. 2000;21:587–611. doi: 10.1146/annurev.publhealth.21.1.587.
  18. Payne J, Bettman J, Johnson E. “Behavioral Decision Research: A Constructive Processing Perspective.” Annual Review of Psychology. 1992;43:87–131.
  19. Phillips KA, Maddala T, Johnson FR. “Measuring Preferences for Health Care Interventions Using Conjoint Analysis: An Application to HIV Testing.” Health Services Research. 2002;37(6):1681–1705. doi: 10.1111/1475-6773.01115.
  20. Phillips KA, Rosenblatt AB. “Speaking in Tongues: Integrating Psychology and Economics into Health and Mental Health Services Outcomes Research.” Medical Care Review. 1992;49(2):191–231. doi: 10.1177/002570879204900204.
  21. Ratcliffe J. “The Use of Conjoint Analysis to Elicit Willingness-to-Pay Values.” International Journal of Technology Assessment in Health Care. 2000;16(1):270–90. doi: 10.1017/s0266462300161227.
  22. Ratcliffe J, Buxton M. “Patients' Preferences Regarding the Process and Outcomes of Life-Saving Technology: An Application of Conjoint Analysis to Liver Transplantation.” International Journal of Technology Assessment in Health Care. 1999;15(2):340–51.
  23. Redelmeier DA, Shafir E, Aujla PS. “The Beguiling Pursuit of More Information.” Medical Decision Making. 2001;21(5):376–81. doi: 10.1177/0272989X0102100504.
  24. Rice T. “Can Markets Give Us the Health System We Want?” Journal of Health Politics, Policy and Law. 1997;22(2):383–426. doi: 10.1215/03616878-22-2-383.
  25. Rice TH. The Economics of Health Reconsidered. Chicago: Health Administration Press; 1998.
  26. Ryan M. “A Role for Conjoint Analysis in Technology Assessment in Health Care?” International Journal of Technology Assessment in Health Care. 1999;15(3):443–57.
  27. Ryan M, Farrar S. “Using Conjoint Analysis to Elicit Preferences for Health Care.” British Medical Journal. 2000;320(7248):1530–3. doi: 10.1136/bmj.320.7248.1530.
  28. Ryan M, Hughes J. “Using Conjoint Analysis to Assess Women's Preferences for Miscarriage Management.” Health Economics. 1997;6(3):261–73. doi: 10.1002/(sici)1099-1050(199705)6:3<261::aid-hec262>3.0.co;2-n.
  29. Ryan M, McIntosh E, Shackley P. “Methodological Issues in the Application of Conjoint Analysis in Health Care.” Health Economics. 1998;7(4):373–8. doi: 10.1002/(sici)1099-1050(199806)7:4<373::aid-hec348>3.0.co;2-j.
  30. Schkade DA, Payne JW. “How People Respond to Contingent Valuation Questions: A Verbal Protocol Analysis of Willingness to Pay for an Environmental Regulation.” Journal of Environmental Economics and Management. 1994;26(1):88–109.
  31. Simon HA. “A Behavioral Model of Rational Choice.” Quarterly Journal of Economics. 1955;69(1):99–118.
  32. Singh J, Cuttler L, Shin M, Silvers JB, Neuhauser D. “Medical Decision-Making and the Patient: Understanding Preference Patterns for Growth Hormone Therapy Using Conjoint Analysis.” Medical Care. 1998;36(8, supplement):AS31–45. doi: 10.1097/00005650-199808001-00005.
  33. Skolnik HS, Phillips KA, Binson D, Dilley JW. “Deciding Where and How to Be Tested for HIV: What Matters Most?” Journal of Acquired Immune Deficiency Syndromes. 2001;27(3):292–300. doi: 10.1097/00126334-200107010-00013.
  34. Slovic P. “The Construction of Preference.” American Psychologist. 1995;50(5):364–71.
  35. Sudman S, Bradburn NM. Asking Questions. San Francisco: Jossey-Bass; 1982.
  36. Tanur JM. Questions about Questions: Inquiries into the Cognitive Bases of Surveys. New York: Russell Sage Foundation; 1992.
  37. Vick S, Scott A. “Agency in Health Care: Examining Patients' Preferences for Attributes of the Doctor–Patient Relationship.” Journal of Health Economics. 1998;17(1):587–605. doi: 10.1016/s0167-6296(97)00035-0.
