Are choice experiments reliable? Evidence from the lab

https://doi.org/10.1016/j.econlet.2014.04.005

Highlights

  • We investigate whether choice experiments reliably measure individuals’ values.

  • We measure reliability using an induced value experiment.

  • Choice experiments do not reliably measure individuals’ values.

  • Neither task salience nor monetary incentives increase reliability.

Abstract

This study investigates whether a popular stated preference method, the choice experiment (CE), reliably measures individuals’ values for a good. We address this question using an induced value experiment. Our results indicate that CEs fail to elicit payoff maximizing choices. We find little evidence that increasing the salience of the choices or adding monetary incentives increases the proportion of payoff maximizing choices. This calls into question the increasing use of CEs to value non-market goods for policy making.

Introduction

Choice Experiments (CEs) are a popular stated preference method that is applied in economics to value public or publicly-provided goods. CEs are based on Lancaster’s theory of value and describe goods as a composite of several attributes or characteristics (Lancaster, 1966). For example, health care goods are described by their survival benefits, quality of life improvements or side-effects (de Bekker-Grob et al., 2012), or environmental policies by the land area protected, number of animals saved, etc. (Kanninen, 2006). CE practitioners infer respondents’ values of attributes and calculate welfare estimates, such as willingness to pay (WTP), by observing choices between priced alternatives presented in questionnaires.

The methodological debate about CEs has focussed on the gap between choices with and without monetary incentives, i.e. hypothetical bias (Lusk and Schroeder, 2004, Johansson-Stenman and Svedsäter, 2008) and how to mitigate it (Carlsson et al., 2005), how to select the goods presented in the CE questionnaire (Carlsson and Martinsson, 2003), and the estimation of appropriate statistical models (see, e.g., Hole, 2011, in this journal). These studies use indirect tests because the researchers do not control for individuals’ preferences, and therefore do not know how the attributes, and goods, are valued.

We provide a complementary, direct stress test of CE reliability. We bring CEs into the lab and use financial rewards to create individuals’ preferences for the goods being evaluated instead of using homegrown preferences (Smith, 1976). We test the reliability of CEs when choices are hypothetical and real, i.e. rewarded with monetary incentives (see Harrison (2006) for a discussion of incentive compatibility in CEs). Our experimental results suggest that CEs fail to elicit individuals’ values in a simple induced value private good setting.

Section snippets

Experimental design

The induced value experiment we use mimics the salient features of a CE. Subjects’ preferences are induced for a multi-attribute laboratory good, which we refer to as a token. A token has four attributes and each attribute has three levels: colour (red, yellow, blue); shape (circle, triangle, square); size (small, medium, large); and cost (see third column of Table 1). The value to subjects of each token depends on the token’s attributes. The total reward, or payoff, that subjects receive from a
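The payoff logic described above can be sketched in a few lines of Python. The attribute values and costs below are hypothetical placeholders (the paper's actual induced values appear in its Table 1 and are not reproduced here), but the structure is as described: each token's payoff is the sum of its induced attribute values minus its cost, and the payoff maximizing token in a choice set is simply the one with the highest payoff.

```python
# Hypothetical induced-value schedule, for illustration only.
# The real values are those in Table 1 of the paper.
ATTRIBUTE_VALUES = {
    "colour": {"red": 2.0, "yellow": 4.0, "blue": 6.0},
    "shape":  {"circle": 1.0, "triangle": 3.0, "square": 5.0},
    "size":   {"small": 0.5, "medium": 1.5, "large": 2.5},
}

def payoff(token):
    """Payoff = sum of induced attribute values minus the token's cost."""
    value = sum(ATTRIBUTE_VALUES[attr][level]
                for attr, level in token.items() if attr != "cost")
    return value - token["cost"]

def payoff_maximizing(choice_set):
    """Index of the token with the highest payoff in a choice set."""
    return max(range(len(choice_set)), key=lambda i: payoff(choice_set[i]))

# A two-token choice set (again, illustrative values).
choice_set = [
    {"colour": "red",  "shape": "square", "size": "large", "cost": 3.0},
    {"colour": "blue", "shape": "circle", "size": "small", "cost": 2.0},
]
best = payoff_maximizing(choice_set)  # token 0: 2+5+2.5-3 vs token 1: 6+1+0.5-2
```

A payoff maximizing subject would always pick `choice_set[best]`; the paper's reliability question is how often real subjects do so.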

Results

The payoff maximizing token in each choice set can be identified from the induced values. Table 2 shows the subjects’ payoffs from each token in the nine choice sets, the payoff difference between the tokens, and the proportion of subjects who chose the payoff maximizing token.

At the aggregate level, the results are two-fold. First, only two thirds of choices are payoff maximizing. Second, the proportion of payoff maximizing choices varies across choice sets. In response to choice set C, 34.0%

Comparison of estimated WTP and induced values

In this section, we consider the implications for the welfare estimates of mistakenly including non-payoff maximizing choices in econometric estimations. To do this, we analyse our data as though it were from a CE survey using a conditional logit model.
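As a sketch of that estimation approach, the following minimal conditional logit is fit by maximum likelihood on synthetic data (not the paper's data), then converted into marginal WTP estimates by dividing each attribute coefficient by the negative of the cost coefficient. All dimensions, coefficient values, and names here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: N choice sets, J alternatives, K attributes (last = cost).
N, J, K = 500, 3, 3
X = rng.normal(size=(N, J, K))
true_beta = np.array([1.0, 0.5, -2.0])            # negative cost coefficient

# Gumbel errors make argmax choices follow a conditional logit exactly.
utility = X @ true_beta + rng.gumbel(size=(N, J))
choice = utility.argmax(axis=1)

def neg_log_lik(beta):
    v = X @ beta                                   # (N, J) systematic utilities
    v = v - v.max(axis=1, keepdims=True)           # numerical stability
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_p[np.arange(N), choice].sum()

res = minimize(neg_log_lik, np.zeros(K), method="BFGS")
beta_hat = res.x

# Marginal WTP for attribute k: -beta_k / beta_cost.
wtp = -beta_hat[:-1] / beta_hat[-1]
```

With payoff maximizing choices, `beta_hat` recovers `true_beta` up to sampling error; the section's point is that non-payoff maximizing choices contaminate exactly this estimation and hence the WTP ratios derived from it.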

Conclusion

We find that CEs fail to elicit payoff maximizing choices. There is little evidence that increasing the salience of attribute levels, engaging subjects with monetary incentives, or both increases the proportion of payoff maximizing choices. Overall, payoff maximizing choices range from only 56.3% to 85.2% of choices, depending on the treatment considered. Problematic choice sets, in our setting, are those in which token A has a significant positive payoff, but still a lower payoff than that

Acknowledgements

We thank Nick Feltovich, Jordan Louviere, Mandy Ryan, Rainer Schulz, Joseph Swierzbinski, seminar participants at University of Oxford, Imperial College London, and University of Aberdeen, and conference participants at the European Conference in Health Economics, Helsinki and the joint CES/HESG meeting, Aix-en-Provence for their helpful comments and discussions. Any errors remain the responsibility of the authors. The Health Economics Research Unit received funding from the Chief Scientist


Cited by (21)

  • Hypothetical bias in stated choice experiments: Part I. Macro-scale analysis of literature and integrative synthesis of empirical evidence from applied economics, experimental psychology and neuroimaging

    2021, Journal of Choice Modelling
    Citation Excerpt:

    A closer look at the details of these experiments (see Appendix B) demonstrates that the inclusion of such an option does not guarantee bias-free estimates. In fact, in 8 out of the 11 experiments in this category that detected a significant HB, a status-quo or opt-out option had been included (Alemu and Olsen, 2018; Ding et al., 2005; Liebe et al., 2019; Luchini and Watson, 2014; Moser et al., 2013; Sanjuán-López and Resano-Ezcaray, 2020; Wlömert and Eggers, 2016; Wuepper et al., 2019). Another technique to reduce HB is making the choice setting as tangible and relatable as possible for the respondents.

  • The determinants of common bean variety selection and diversification in Colombia

    2021, Ecological Economics
    Citation Excerpt:

    Its most important disadvantage in agricultural settings is that experiments are usually applied to varieties that are not yet in the market, which prevents the experimenter from utilising the market value of the options offered. Consequently, these studies usually rely on hypothetical economic values and rewards to elicit behaviour, which may have important consequences for the consistency of the answers (Kuhberger et al., 2002; Locey et al., 2011; Luchini and Watson, 2014). Revealed-preference or market methods are used as an alternative approach to stated-preference experiments (Louviere et al., 2000).

  • Strategic bias in discrete choice experiments

    2021, Journal of Environmental Economics and Management
    Citation Excerpt:

    Overall, there was little evidence of deviation from induced preferences across all treatments. These findings differ from Luchini and Watson (2014) who, in a simple posted-price provision rule (where respondents receive the option they selected in the binding set), find only 2/3 of respondents make choices that are consistent with their induced values. This may be because the cognitive effort required in Luchini and Watson (2014) was greater; their study had four attributes including a cost attribute, while Collins and Vossler (2009) had only one attribute and one cost attribute.

  • Product availability in discrete choice experiments with private goods: Product availability in DCE

    2020, Journal of Choice Modelling
    Citation Excerpt:

    Participants are randomly assigned to one of four treatment conditions: hypothetical control, partial availability - low, partial availability - high, and full availability. The experimental design uses the shape/value concept developed in the IV experiment of Luchini and Watson (2014). Each participant is shown 12 choice sets with two alternatives and an opt-out option.

