Are choice experiments reliable? Evidence from the lab
Introduction
Choice Experiments (CEs) are a popular stated preference method applied in economics to value public or publicly provided goods. CEs are based on Lancaster’s theory of value and describe goods as composites of several attributes or characteristics (Lancaster, 1966). For example, health care goods are described by their survival benefits, quality-of-life improvements, or side-effects (de Bekker-Grob et al., 2012), and environmental policies by the land area protected, the number of animals saved, etc. (Kanninen, 2006). CE practitioners infer respondents’ valuations of the attributes and calculate welfare estimates, such as willingness to pay (WTP), by observing choices between priced alternatives presented in questionnaires.
The methodological debate about CEs has focussed on the gap between choices with and without monetary incentives, i.e. hypothetical bias (Lusk and Schroeder, 2004, Johansson-Stenman and Svedsäter, 2008) and how to mitigate it (Carlsson et al., 2005), on how to select the goods presented in the CE questionnaire (Carlsson and Martinsson, 2003), and on the estimation of appropriate statistical models (see, e.g., Hole, 2011, in this journal). These studies use indirect tests because the researchers do not control individuals’ preferences, and therefore do not know how the attributes, and hence the goods, are valued.
We provide a complementary, direct stress test of CE reliability. We bring CEs into the lab and use financial rewards to induce individuals’ preferences for the goods being evaluated, instead of relying on homegrown preferences (Smith, 1976). We test the reliability of CEs when choices are hypothetical and when they are real, i.e. rewarded with monetary incentives (see Harrison (2006) for a discussion of incentive compatibility in CEs). Our experimental results suggest that CEs fail to elicit individuals’ values in a simple induced value private good setting.
Section snippets
Experimental design
The induced value experiment we use mimics the salient features of a CE. Subjects’ preferences are induced for a multi-attribute laboratory good, which we refer to as a token. A token has four attributes, each with three levels: colour (red, yellow, blue); shape (circle, triangle, square); size (small, medium, large); and cost (see the third column of Table 1). The value of each token to a subject depends on the token’s attributes. The total reward, or payoff, that subjects receive from a
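The induced-value logic above can be sketched in a few lines: a token’s payoff to a subject is the sum of the values attached to its attribute levels, minus its cost. The per-level values below are purely illustrative placeholders, not those of Table 1 in the paper.

```python
# Minimal sketch of an induced-value token (assumed values, NOT Table 1).
from dataclasses import dataclass

# Illustrative induced value of each attribute level, in experimental currency.
VALUES = {
    "colour": {"red": 2.0, "yellow": 4.0, "blue": 6.0},
    "shape": {"circle": 1.0, "triangle": 3.0, "square": 5.0},
    "size": {"small": 0.5, "medium": 1.5, "large": 2.5},
}

@dataclass(frozen=True)
class Token:
    colour: str
    shape: str
    size: str
    cost: float  # the fourth attribute: cost, subtracted from the value

    def payoff(self) -> float:
        """Induced payoff = sum of attribute-level values minus cost."""
        return (VALUES["colour"][self.colour]
                + VALUES["shape"][self.shape]
                + VALUES["size"][self.size]
                - self.cost)

a = Token("red", "square", "large", 3.0)   # 2 + 5 + 2.5 - 3 = 6.5
b = Token("blue", "circle", "small", 4.0)  # 6 + 1 + 0.5 - 4 = 3.5
best = max([a, b], key=Token.payoff)       # the payoff-maximizing token
```

A payoff-maximizing subject facing the choice set {a, b} should pick token `a`, since its induced payoff (6.5) exceeds that of `b` (3.5).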
Results
The payoff maximizing token in each choice set can be identified from the induced values. Table 2 shows the subjects’ payoffs from each token in the nine choice sets, the payoff difference between the tokens, and the proportion of subjects who chose the payoff maximizing token.
At the aggregate level, the results are twofold. First, only two-thirds of choices are payoff maximizing. Second, the proportion of payoff maximizing choices varies across choice sets. In response to choice set , 34.0%
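The aggregate statistic reported above — the share of payoff-maximizing choices per choice set — can be computed as follows. The choice data here are invented for illustration, not the paper’s Table 2.

```python
# Illustrative tally (assumed data): for each choice set, compare each
# subject's chosen token payoff against the set's maximum payoff and
# report the proportion of payoff-maximizing choices.

# choices[s] = list of (chosen_payoff, max_payoff) pairs for choice set s
choices = {
    1: [(6.5, 6.5), (3.5, 6.5), (6.5, 6.5)],  # 2 of 3 subjects maximize
    2: [(4.0, 4.0), (4.0, 4.0)],              # all subjects maximize
}

def maximizing_share(pairs):
    """Fraction of subjects whose chosen payoff equals the set maximum."""
    hits = sum(1 for chosen, best in pairs if chosen == best)
    return hits / len(pairs)

shares = {s: maximizing_share(p) for s, p in choices.items()}
```

With the assumed data, `shares[1]` is 2/3 and `shares[2]` is 1.0, mirroring the kind of variation across choice sets that the paper documents.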
Comparison of estimated WTP and induced values
In this section, we consider the implications for welfare estimates of mistakenly including non-payoff maximizing choices in econometric estimations. To do this, we analyse our data as though they came from a CE survey, using a conditional logit model.
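In a conditional logit analysis, the marginal WTP for an attribute level is typically recovered as the ratio of its coefficient to (minus) the cost coefficient, which can then be compared against the induced value. The coefficients below are assumed for illustration, not estimates from the paper.

```python
# Sketch of the welfare calculation implied by a conditional logit model.
# The coefficients are assumed placeholders, not the paper's estimates.
import math

beta = {"blue": 1.2, "square": 0.9, "large": 0.5, "cost": -0.3}

def wtp(attr: str) -> float:
    """Marginal WTP for an attribute level: -beta_attr / beta_cost."""
    return -beta[attr] / beta["cost"]

def choice_prob(utils):
    """Conditional logit choice probabilities over one choice set."""
    exps = [math.exp(u) for u in utils]
    z = sum(exps)
    return [e / z for e in exps]

print(round(wtp("blue"), 2))  # 4.0 under the assumed coefficients
```

In an induced-value design, this estimated WTP can be checked directly against the known induced value of the attribute level; a discrepancy would reflect non-payoff-maximizing choices contaminating the estimation.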
Conclusion
We find that CEs fail to elicit payoff maximizing choices. There is little evidence that increasing the salience of attribute levels, engaging subjects with monetary incentives, or both increases the proportion of payoff maximizing choices. Overall, payoff maximizing choices range from only 56.3% to 85.2% of choices, depending on the treatment considered. Problematic choice sets, in our setting, are those in which token A has a significant positive payoff, but still a lower payoff than that
Acknowledgements
We thank Nick Feltovich, Jordan Louviere, Mandy Ryan, Rainer Schulz, Joseph Swierzbinski, seminar participants at the University of Oxford, Imperial College London, and the University of Aberdeen, and conference participants at the European Conference in Health Economics, Helsinki, and the joint CES/HESG meeting, Aix-en-Provence, for their helpful comments and discussions. Any errors remain the responsibility of the authors. The Health Economics Research Unit received funding from the Chief Scientist
References (13)
- Carlsson et al. Using cheap talk as a test of validity in choice experiments. Econom. Lett. (2005)
- Hole. A discrete choice model with endogenous attribute attendance. Econom. Lett. (2011)
- et al. Induced value tests of the referendum voting mechanism. Econom. Lett. (2001)
- Carlsson and Martinsson. Design techniques for choice experiments in health economics. Health Econ. (2003)
- de Bekker-Grob et al. Discrete choice experiments in health economics: a review of the literature. Health Econ. (2012)
- Fischbacher. Zurich toolbox for ready-made economic experiments. Exp. Econ. (2007)
Cited by (21)
Hypothetical bias in stated choice experiments: Part I. Macro-scale analysis of literature and integrative synthesis of empirical evidence from applied economics, experimental psychology and neuroimaging
2021, Journal of Choice Modelling
Citation Excerpt: A closer look at the details of these experiments (see Appendix B) demonstrates that the inclusion of such an option does not guarantee bias-free estimates. In fact, in 8 out of the 11 experiments in this category that detected a significant HB, a status-quo or opt-out option had been included (Alemu and Olsen, 2018; Ding et al., 2005; Liebe et al., 2019; Luchini and Watson, 2014; Moser et al., 2013; Sanjuán-López and Resano-Ezcaray, 2020; Wlömert and Eggers, 2016; Wuepper et al., 2019). Another technique to reduce HB is making the choice setting as tangible and relatable as possible for the respondents.
The determinants of common bean variety selection and diversification in Colombia
2021, Ecological Economics
Citation Excerpt: Its most important disadvantage in agricultural settings is that experiments are usually applied to varieties that are not in the market yet, which impedes the experimenter to utilise the market value of the options offered. Consequently, these studies usually rely on hypothetical economic values and rewards to elicit behaviour, which may have important consequences on the consistency of the answers (Kuhberger et al., 2002; Locey et al., 2011; Luchini and Watson, 2014). Revealed-preference or market methods are used as an alternative approach to stated-preference experiments (Louviere et al., 2000).
Strategic bias in discrete choice experiments
2021, Journal of Environmental Economics and Management
Citation Excerpt: Overall, there was little evidence of deviation from induced preferences across all treatments. These findings differ from Luchini and Watson (2014) who, in a simple posted-price provision rule (where respondents receive the option they selected in the binding set), find only 2/3 of respondents make choices that are consistent with their induced values. This may be due to the cognitive effort needed in Luchini and Watson (2014) being more difficult; their study had four attributes including a cost attribute, while Collins and Vossler (2009) had only one attribute and one cost attribute.
Product availability in discrete choice experiments with private goods: Product availability in DCE
2020, Journal of Choice Modelling
Citation Excerpt: Participants are randomly assigned to one of four treatment conditions: hypothetical control, partial availability - low, partial availability - high, and full availability. The experimental design uses the shape/value concept developed in the IV experiment of Luchini and Watson (2014). Each participant is shown 12 choice sets with two alternatives and an opt-out option.
Choice certainty and deliberative thinking in discrete choice experiments. A theoretical and empirical investigation
2019, Journal of Economic Behavior and Organization

Comparing experimental auctions and real choice experiments in food choice: a homegrown and induced value analysis
2023, European Review of Agricultural Economics