Short Communication
Randomization of CATA attributes: Should attribute lists be allocated to assessors or to samples?

https://doi.org/10.1016/j.foodqual.2015.09.014

Highlights

  • Operational power for different attribute list order allocation schemes in CATA studies investigated.

  • Options: assign attribute list order to assessors or to individual evaluations.

  • Constant attribute list order across evaluations of a panelist found to be superior.

  • Theoretical considerations support this finding.

Abstract

For the check-all-that-apply (CATA) question format, it is good practice to vary the attribute list order between evaluations to account for possible (and likely) position bias in the data. If attribute lists are to be randomized, the question is how to allocate these attribute list orders. Some authors recommend a “to samples” allocation, randomizing the attribute list order for each sample presented. Other authors recommend a “to assessors” allocation, randomizing the attribute list order for each assessor, such that the list order is stable across samples for each assessor (if replication is used, assessors are given a new attribute list order per replication). In this study, consumers (n = 93) performed CATA evaluations on 6 breads twice. The evaluation was done once using the “to assessors” CATA list order allocation scheme and once using the “to samples” scheme, with the order of allocation schemes balanced by experimental design. Results suggest higher operational power when using the “to assessors” CATA attribute list order allocation scheme. This conclusion is supported by theoretical considerations.

Introduction

Check-all-that-apply (CATA) questions are increasingly used in consumer testing to enable product characterization by the same assessors who provide hedonic responses to the products. As with the application of any sensory methodology, methodological aspects must be considered (Meyners & Castura, 2014) and appropriate data analysis used (Meyners, Castura, & Carr, 2013). For example, it is well documented that the position of attributes in a CATA question biases responses (Ares & Jaeger, 2013). As positional biases cannot be eliminated, they are balanced across products via experimental designs, ensuring that each attribute appears with equal frequency in each position for each product. But what is the best way to allocate attribute list orders?
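As an illustrative sketch (not part of the original study; the attribute names below are hypothetical), cyclic rotations of an attribute list give one simple way to obtain such position-balanced orders, since across the rotations every attribute occupies every list position exactly once:

def balanced_orders(attributes):
    """Return one cyclic rotation of the attribute list per starting position."""
    k = len(attributes)
    return [attributes[i:] + attributes[:i] for i in range(k)]

# Hypothetical CATA terms, for illustration only.
terms = ["crusty", "soft", "yeasty", "sweet", "dense", "airy"]
for order in balanced_orders(terms):
    print(order)

Spreading these orders evenly over the evaluations of each product then ensures that each attribute appears with equal frequency in each position for each product, as described above.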

Under the “to assessor” scheme, attribute list orders are allocated to assessors: each assessor keeps a fixed attribute list order for a given replicate, but different assessors receive different list orders. Under the “to sample” scheme, attribute list orders are allocated to samples, so that a given assessor receives a different attribute list order for each sample evaluation. The “to samples” allocation scheme has been recommended by Ares et al. (2014) on the basis of an eye-tracking experiment involving consumer CATA data collection. Their recommendation rests mainly on the greater visual attention given to the CATA terms when the attribute list order changed with each sample presentation, and on the deeper cognitive engagement that this additional visual attention might imply.
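As a minimal sketch of the two allocation schemes (our illustration, not the authors' procedure; the assessor and sample counts are arbitrary), the same pool of position-balanced orders can be handed out either once per assessor or once per sample evaluation:

def allocate_to_assessors(orders, n_assessors, n_samples):
    """Each assessor keeps one list order for all of their sample evaluations."""
    return {a: [orders[a % len(orders)]] * n_samples for a in range(n_assessors)}

def allocate_to_samples(orders, n_assessors, n_samples):
    """Each sample evaluation receives its own list order, varying within assessor."""
    return {a: [orders[(a + s) % len(orders)] for s in range(n_samples)]
            for a in range(n_assessors)}

Under the first function the list order is stable within an assessor (changing only between assessors or replicates); under the second it changes with every sample presentation.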

The intent of this paper is to evaluate whether either of these allocation schemes is superior to the other with regard to operational power, with the hypothesis (based on theoretical considerations) that the “to assessor” scheme should be superior to the complete randomization of list orders across evaluations implied by the “to sample” scheme. We discuss the theoretical reasoning that gives rise to this hypothesis, and present data from a consumer bread study that was conducted as an empirical test of operational power differences between these two schemes.

Section snippets

Modeling attribute list order effects

Ideally, the probability of an attribute being checked (or not checked) would depend primarily on the product; however, we know that the checked state will also depend on the assessor and include some error variation. Furthermore, differences due to the attribute list order are anticipated (Ares & Jaeger, 2013; Meyners & Castura, 2014). Consequently, where X = 1 indicates the selection of a given attribute, a reasonable model for the selection probability of the attribute can be expressed as P(
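One plausible formulation of such a selection-probability model, written here as a sketch in our own notation and consistent with the effects named above (product, assessor, attribute list order, residual variation); the full article may use a different parameterization:

% Sketch of a selection-probability model; symbols are ours, not necessarily the authors'.
\[
  P(X_{ijk} = 1) = f\bigl(\mu + \pi_i + \alpha_j + \lambda_k + \varepsilon_{ijk}\bigr),
\]
% where $X_{ijk}$ indicates selection of the attribute by assessor $j$ for product $i$
% under attribute list order $k$, $\mu$ is an overall level, $\pi_i$ a product effect,
% $\alpha_j$ an assessor effect, $\lambda_k$ a list order effect, $\varepsilon_{ijk}$
% residual variation, and $f$ a link function (e.g., the inverse logit) that keeps the
% probability within $[0, 1]$.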

Comparing CATA list order allocation schemes in practice

The “to assessor” and “to sample” CATA list order allocation schemes were evaluated in a consumer bread study (n = 93) conducted at Compusense Inc. in March 2015. The ballot included a CATA question with 32 sensory attributes. Consumers were allocated randomly to one of two groups. Group 1 consumers evaluated 6 breads in a Williams modified Latin square design (followed by an ideal product), all with the “to assessor” allocation for the CATA list orders, then after an inter-session delay,
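As an illustrative sketch of how such an operational power comparison could be carried out (this is not the analysis code of the study; the data layout, test choice and significance level are assumptions), Cochran's Q can be computed per attribute within each allocation scheme and the attributes showing a significant product effect counted:

import numpy as np
from scipy.stats import chi2

def cochrans_q(x):
    """Cochran's Q test for a binary matrix x of shape (assessors, products)."""
    x = np.asarray(x)
    k = x.shape[1]                     # number of products
    col = x.sum(axis=0)                # citations per product
    row = x.sum(axis=1)                # citations per assessor
    g = x.sum()                        # total citations
    denom = k * g - (row ** 2).sum()
    if denom == 0:                     # no discriminating information in this attribute
        return 0.0, 1.0
    q = (k - 1) * (k * (col ** 2).sum() - g ** 2) / denom
    return q, chi2.sf(q, k - 1)

def count_significant(data, alpha=0.05):
    """data maps attribute name -> binary (assessors x products) array for one scheme."""
    return sum(1 for x in data.values() if cochrans_q(x)[1] < alpha)

The scheme for which count_significant() returns the larger value recovers more significant product differences on the same attributes, which is one simple operational reading of power in this context.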

Discussion

In this experiment, data from the “to assessor” allocation and the “to sample” allocation produced very similar sensory profiles. The allocations were similar with respect to the total number of citations, a finding also reported earlier by Ares et al. (2013). However, the “to assessor” allocation had superior operational power, and is hence our CATA list order allocation scheme of choice. In contrast, Ares et al. (2014) present eye-tracking data that suggest a benefit from changing the

