An Experimental Validation Method for Questioning Techniques That Assess Sensitive Issues
Abstract
Studies addressing sensitive issues often yield distorted prevalence estimates due to socially desirable responding. Several techniques have been proposed to reduce this bias, including indirect questioning, psychophysiological lie detection, and bogus pipeline procedures. However, the additional resources these techniques require are warranted only if they deliver a substantial increase in validity compared to direct questions. A convincing demonstration of superior validity requires a criterion reflecting the "true" prevalence of a sensitive attribute. Unfortunately, such criteria are notoriously difficult to obtain, which is why validation studies often proceed indirectly by simply comparing estimates obtained with different methods. Comparative validation studies, however, provide only weak evidence, since the exact increase in validity (if any) remains unknown. To remedy this problem, we propose a simple method for measuring the "true" prevalence of a sensitive behavior experimentally. The basic idea is to elicit normatively problematic behavior in a way that ensures conclusive knowledge of its prevalence rate. In a second step, this prevalence measure can then serve as an external validation criterion. An empirical demonstration of the method is provided.
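The validation logic described above can be illustrated with a small simulation. This is a hypothetical sketch, not the authors' procedure: it assumes Warner's (1965) randomized-response design as the questioning technique under test, fixes the true prevalence by construction (as an experimentally elicited behavior would), and checks how closely the estimator recovers that known criterion.

```python
import random

def simulate_warner(true_prevalence, p=0.7, n=100_000, seed=42):
    """Simulate Warner's randomized-response design for n respondents whose
    true prevalence of the sensitive attribute is known by design, then
    return the estimated prevalence for comparison against that criterion."""
    rng = random.Random(seed)
    yes_count = 0
    for _ in range(n):
        carrier = rng.random() < true_prevalence   # sensitive attribute (known by design)
        direct = rng.random() < p                  # randomization: which question is drawn
        # Respondent answers "yes" if asked the direct question and a carrier,
        # or asked the reversed question ("Are you NOT a carrier?") and not a carrier.
        if (direct and carrier) or (not direct and not carrier):
            yes_count += 1
    lam = yes_count / n                            # observed proportion of "yes" answers
    # Warner's estimator: invert P(yes) = p*pi + (1-p)*(1-pi)
    return (lam - (1 - p)) / (2 * p - 1)

# Because the true prevalence (here 0.20) is fixed experimentally, the
# estimate can be validated against an external criterion rather than
# merely compared to estimates from other questioning techniques.
estimate = simulate_warner(0.20)
```

With compliant (simulated) respondents the estimate converges on the designed prevalence; in a real validation study, the gap between estimate and criterion quantifies the technique's validity directly.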