Published by De Gruyter August 12, 2017

Less Supervision, More Satisficing? Comparing Completely Self-Administered Web-Surveys and Interviews Under Controlled Conditions

Monika Mühlböck, Nadia Steiber and Bernhard Kittel

Abstract

Although online surveys are becoming increasingly prominent, the quality of the resulting data is still contested. One potential drawback of web surveys is the absence of an interviewer who controls the interview situation, can motivate respondents, and can prevent them from satisficing, i.e., answering questions with minimal cognitive effort. While there is evidence for differences between data gathered in interviewer-administered surveys and data from self-administered questionnaires, it has not yet been studied whether the mere presence of an interviewer affects data quality. The present article addresses this research gap. Based on a recent panel study of young unemployed adults, we compare the results from a completely self-administered web survey with those from interviews that were self-administered but conducted in the presence of an interviewer. In particular, we look for differences concerning drop-out, speed, item non-response, and item non-differentiation. While we find significant differences in drop-out rates, we find no evidence that interviewer absence leads to less diligence in filling in the questionnaire. We thus conclude that the presence of an interviewer does not enhance data quality for self-administered questionnaires, but it does positively affect completion rates.
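The satisficing indicators compared in the study (the share of "don't know" answers, item non-response, and straightlining, i.e. item non-differentiation on rating grids) can be operationalized quite simply. The following is an illustrative sketch, not the authors' code; the function names, the "DK" code, and the flat list-of-answers data layout are assumptions made for this example.

```python
def dont_know_ratio(answers, dk_code="DK"):
    """Share of items answered with the 'don't know' code."""
    return sum(a == dk_code for a in answers) / len(answers)

def item_nonresponse_ratio(answers, missing=None):
    """Share of items left unanswered (coded as `missing`)."""
    return sum(a is missing for a in answers) / len(answers)

def straightlining(grid_answers):
    """Crude item non-differentiation flag for one rating grid:
    1.0 if the respondent gave the identical rating to every item,
    else 0.0. Finer-grained versions use the spread of the ratings."""
    return 1.0 if len(set(grid_answers)) == 1 else 0.0
```

A respondent who answers `["DK", 3, 4, "DK"]` on four items would get a don't-know ratio of 0.5; a grid response of `[3, 3, 3, 3]` would be flagged as straightlined.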

Acknowledgments

The study is based on data from a panel survey which has been funded by the Austrian Federal Ministry of Labour, Social Affairs and Consumer Protection as part of the JuSAW project. Work on the paper has been carried out as part of the CUPESSE project, which received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 613257.

Appendix

Table A1: Balance between treatment and control group in the raw and matched data.

| Variable | Treated (raw) | Control (raw) | Diff. (raw) | Treated (matched) | Control (matched) | Diff. (matched) | Balance improvement (%) |
|---|---|---|---|---|---|---|---|
| Distance | 0.21 | 0.15 | 0.06 | 0.21 | 0.21 | 0.00 | 96.76 |
| Female | 0.50 | 0.45 | 0.04 | 0.50 | 0.48 | 0.02 | 55.10 |
| Age: 18–20 | 0.31 | 0.34 | −0.03 | 0.31 | 0.32 | −0.01 | 67.86 |
| Age: 21–24 | 0.40 | 0.35 | 0.05 | 0.40 | 0.39 | 0.01 | 78.93 |
| Age: 25–28 | 0.30 | 0.31 | −0.02 | 0.30 | 0.30 | 0.00 | 100.00 |
| Education: ISCED 0–2 | 0.34 | 0.33 | 0.00 | 0.34 | 0.36 | −0.02 | −641.26^a |
| Education: ISCED 3B | 0.25 | 0.28 | −0.03 | 0.25 | 0.24 | 0.01 | 68.79 |
| Education: ISCED 3A–4 | 0.23 | 0.22 | 0.00 | 0.23 | 0.22 | 0.01 | −209.94^a |
| Education: ISCED 5–6 | 0.19 | 0.16 | 0.03 | 0.19 | 0.19 | 0.00 | 100.00 |
| Migration background: none | 0.52 | 0.54 | −0.01 | 0.52 | 0.53 | −0.01 | 23.74 |
| Migration background: 2nd generation | 0.18 | 0.22 | −0.04 | 0.18 | 0.18 | 0.00 | 100.00 |
| Migration background: 1st generation | 0.30 | 0.25 | 0.05 | 0.30 | 0.29 | 0.01 | 80.87 |
| Diligence (self-reported) | 3.95 | 4.02 | −0.07 | 3.95 | 4.00 | −0.05 | 29.54 |
| Interview month t0 | 7.19 | 6.97 | 0.22 | 7.19 | 7.16 | 0.03 | 86.39 |
| Participation motivation | 0.86 | 0.96 | −0.10 | 0.86 | 0.86 | 0.00 | 100.00 |
| Had a job at t1 | 0.53 | 0.41 | 0.12 | 0.53 | 0.54 | −0.01 | 91.85 |
| Not living in Vienna at t1 | 0.04 | 0.01 | 0.03 | 0.04 | 0.04 | 0.00 | 100.00 |
| Interview duration t0 | 37.83 | 37.11 | 0.72 | 37.83 | 36.83 | 1.00 | −38.66 |
| Ratio don’t know t0 | 0.06 | 0.07 | −0.01 | 0.06 | 0.05 | 0.00 | 50.28 |
| Inconsistency t0 | 0.64 | 0.83 | −0.19 | 0.64 | 0.63 | 0.01 | 94.69 |
| Straightlining t0 | 0.56 | 0.59 | −0.02 | 0.56 | 0.56 | 0.00 | 83.30 |
Note: Balance statistics calculated as described in Ho et al. (2011). ^a The strong percentage decrease in balance for these two Education categories arises only because the difference between the means of the treatment and the control group in the raw data is extremely close to zero; even a minor increase (here from 0.00 to 0.02) strongly affects the percentage in the balance-improvement column.
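The balance-improvement column in Table A1 reports the percentage reduction in the absolute mean difference achieved by matching. A minimal sketch of that computation, assuming the standard definition 100 · (|raw diff| − |matched diff|) / |raw diff| (the function name is ours; MatchIt computes this internally from unrounded values, so the table's figures will not reproduce exactly from the rounded entries shown), also illustrates why near-zero raw differences produce the extreme percentages flagged in note a:

```python
def balance_improvement(diff_raw, diff_matched):
    """Percentage reduction in the absolute mean difference between
    treatment and control group achieved by matching:
    100 * (|raw| - |matched|) / |raw|."""
    if diff_raw == 0:
        # Perfect raw balance: any matched imbalance is an infinite
        # relative deterioration.
        return 100.0 if diff_matched == 0 else float("-inf")
    return 100.0 * (abs(diff_raw) - abs(diff_matched)) / abs(diff_raw)
```

With a raw difference of 0.003 (which rounds to 0.00 in the table) growing to 0.02 after matching, the statistic is roughly −567% — a large negative number driven entirely by the tiny denominator, exactly the artifact note a describes.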

Table A2: Regressions on different indicators for satisficing.

| Variable | Drop-out | Interview duration | Ratio ‘don’t know’ | Non-response open question | Inconsistency | Straightlining |
|---|---|---|---|---|---|---|
| No interviewer | 18.93 (1.05)*** | 0.89 (0.82) | 0.00 (0.02) | −0.35 (0.44) | −0.36 (0.12)** | 0.03 (0.02) |
| Age: 18–20 | ref. | ref. | ref. | ref. | ref. | ref. |
| Age: 21–24 | −0.81 (0.73) | 1.01 (1.22) | −0.03 (0.03) | 0.53 (0.57) | −0.13 (0.18) | 0.03 (0.03) |
| Age: 25–28 | −0.82 (1.20) | 0.30 (1.33) | −0.02 (0.03) | 0.45 (0.73) | −0.18 (0.22) | −0.02 (0.03) |
| Female | 0.31 (0.62) | 1.69 (0.83)* | 0.01 (0.02) | 0.24 (0.45) | −0.06 (0.13) | −0.01 (0.03) |
| Education: ISCED 0–2 | ref. | ref. | ref. | ref. | ref. | ref. |
| Education: ISCED 3B | 0.26 (0.77) | 2.03 (1.06) | −0.04 (0.03) | 0.34 (0.65) | −0.34 (0.19) | 0.03 (0.04) |
| Education: ISCED 3A–4 | −2.24 (0.92)* | 1.30 (1.18) | −0.03 (0.03) | 0.83 (0.62) | −0.04 (0.20) | 0.04 (0.04) |
| Education: ISCED 5–6 | −1.04 (1.17) | 1.36 (1.41) | −0.01 (0.03) | 0.53 (0.73) | −0.44 (0.23) | 0.01 (0.04) |
| Migration background: none | ref. | ref. | ref. | ref. | ref. | ref. |
| Migration background: 2nd generation | 2.46 (0.88)** | 1.13 (1.28) | 0.01 (0.03) | −0.06 (0.70) | −0.43 (0.20)* | 0.09 (0.04)* |
| Migration background: 1st generation | 1.89 (0.92)* | 1.45 (1.00) | 0.03 (0.03) | 0.24 (0.63) | −0.12 (0.14) | 0.06 (0.03) |
| Diligence (self-reported) | −0.46 (0.46) | −0.16 (0.72) | −0.01 (0.02) | −0.67 (0.34)* | −0.03 (0.10) | 0.01 (0.02) |
| Interview month t0 | −0.27 (0.27) | −0.67 (0.45) | 0.00 (0.01) | 0.27 (0.25) | −0.10 (0.05) | −0.01 (0.01) |
| Participation motivation | 2.12 (1.24) | −0.09 (1.36) | −0.05 (0.04) | 0.49 (0.75) | −0.24 (0.18) | 0.03 (0.04) |
| Had a job at t1 | 0.39 (0.75) | 1.55 (0.93) | −0.06 (0.02)** | −0.49 (0.44) | 0.00 (0.12) | −0.01 (0.03) |
| Not living in Vienna at t1 | 3.12 (1.25)* | −1.64 (2.61) | −0.04 (0.04) | 0.90 (1.20) | 0.70 (0.59) | −0.02 (0.07) |
| Interview duration t0 | 0.01 (0.03) | 0.58 (0.05)*** | −0.00 (0.00) | −0.04 (0.03) | 0.01 (0.01) | −0.00 (0.00) |
| Ratio don’t know t0 | −9.54 (3.92)* | 2.75 (4.69) | 0.34 (0.13)* | 2.62 (2.36) | 1.10 (0.89) | −0.08 (0.13) |
| Inconsistency t0 | 0.54 (0.34) | 0.37 (0.44) | −0.01 (0.01) | −0.07 (0.28) | 0.08 (0.08) | −0.00 (0.01) |
| Straightlining t0 | 3.76 (1.88)* | 2.36 (2.52) | 0.12 (0.08) | 2.19 (1.77) | 0.24 (0.42) | 0.45 (0.08)*** |
| Constant | −22.20 (3.97)*** | 11.14 (4.28)* | 0.15 (0.13) | −1.72 (2.37) | 1.84 (0.66)** | 0.24 (0.12)* |
| N | 192 | 172 | 180 | 172 | 167 | 177 |
| adj. R² | | 0.521 | 0.099 | | 0.102 | 0.192 |
| AIC | 104 | 1084 | −172 | 178 | 419 | −133 |
Note: Standard errors in parentheses; *p<0.05, **p<0.01, ***p<0.001. Regressions were run on the matched dataset using the weights created by the genetic matching procedure, including not only the treatment variable (No interviewer) but also all variables on which the samples were matched, following Ho et al. (2007). For binary dependent variables (drop-out and non-response to the open question) we fitted logistic regression models; for the other dependent variables, OLS regression models.
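The note above describes weighted estimation on the matched sample. The authors' actual models were logistic and OLS regressions in R with the full covariate set; as a minimal, dependency-free sketch of the core idea — each observation's contribution scaled by its matching weight — here is weighted least squares for a single predictor (the function name and data are ours, for illustration only):

```python
def weighted_ols(x, y, w):
    """Weighted least squares for one predictor: returns (intercept,
    slope) minimizing sum_i w_i * (y_i - a - b*x_i)**2, where w_i is
    the observation's (matching) weight."""
    sw = sum(w)
    # Weighted means of x and y.
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    # Weighted covariance and variance around those means.
    cov = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    var = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    b = cov / var
    a = ybar - b * xbar
    return a, b
```

With equal weights this reduces to ordinary OLS; giving an observation weight 0 removes it from the fit entirely, which is how matching weights down-weight poorly matched cases.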

Figure A1:

Distribution of propensity scores for the raw and the matched data.

Note: The figure was produced using the MatchIt package in R (Ho et al. 2011).

References

Atkeson, L. R., A. N. Adams and R. M. Alvarez (2014) “Nonresponse and Mode Effects in Self- and Interviewer-Administered Surveys,” Political Analysis, 22(3):304–320. doi:10.1093/pan/mpt049.

Barge, S. and H. Gehlbach (2012) “Using the Theory of Satisficing to Evaluate the Quality of Survey Data,” Research in Higher Education, 53(2):182–200. doi:10.1007/s11162-011-9251-2.

Callegaro, M., K. L. Manfreda and V. Vehovar (2015) Web Survey Methodology. London: SAGE. doi:10.4135/9781529799651.

Cannell, C. F., P. V. Miller and L. Oksenberg (1981) “Research on Interviewing Techniques,” Sociological Methodology, 12:389–437. doi:10.2307/270748.

Chang, L. and J. A. Krosnick (2009) “National Surveys Via RDD Telephone Interviewing Versus the Internet: Comparing Sample Representativeness and Response Quality,” Public Opinion Quarterly, 73(4):641–678. doi:10.1093/poq/nfp075.

Chang, L. and J. A. Krosnick (2010) “Comparing Oral Interviewing with Self-Administered Computerized Questionnaires: An Experiment,” Public Opinion Quarterly, 74(1):154–167. doi:10.1093/poq/nfp090.

Couper, M. P. (2011) “The Future of Modes of Data Collection,” Public Opinion Quarterly, 75(5):889–908. doi:10.1093/poq/nfr046.

Fang, J., C. Wen and V. Prybutok (2013) “The Equivalence of Internet Versus Paper-Based Surveys in IT/IS Adoption Research in Collectivistic Cultures: The Impact of Satisficing,” Behaviour & Information Technology, 32(5):480–490. doi:10.1080/0144929X.2012.751621.

Fang, J., C. Wen and V. Prybutok (2014) “An Assessment of Equivalence Between Paper and Social Media Surveys: The Role of Social Desirability and Satisficing,” Computers in Human Behavior, 30:335–343. doi:10.1016/j.chb.2013.09.019.

Fricker, S., M. Galesic, R. Tourangeau and T. Yan (2005) “An Experimental Comparison of Web and Telephone Surveys,” Public Opinion Quarterly, 69(3):370–392. doi:10.1093/poq/nfi027.

Ganassali, S. (2008) “The Influence of the Design of Web Survey Questionnaires on the Quality of Responses,” Survey Research Methods, 2(1). Retrieved from https://ojs.ub.uni-konstanz.de/srm/article/view/598.

Gittelmann, S. H. and E. Trimarchi (2012) “The War Against Unengaged Online Respondents,” Quirk’s Marketing Research Review, December 2012. Retrieved from https://www.quirks.com/articles/the-war-against-unengaged-online-respondents.

Groves, R. M., F. J. Fowler Jr., M. P. Couper, J. M. Lepkowski, E. Singer and R. Tourangeau (2009) Survey Methodology. New York, NY: John Wiley & Sons.

Heerwegh, D. and G. Loosveldt (2008) “Face-to-Face versus Web Surveying in a High-Internet-Coverage Population: Differences in Response Quality,” Public Opinion Quarterly, 72(5):836–846. doi:10.1093/poq/nfn045.

Ho, D., K. Imai, G. King and E. A. Stuart (2007) “Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference,” Political Analysis, 15(3):199–236. doi:10.1093/pan/mpl013.

Ho, D., K. Imai, G. King and E. A. Stuart (2011) “MatchIt: Nonparametric Preprocessing for Parametric Causal Inference,” Journal of Statistical Software, 42(8):1–28. doi:10.18637/jss.v042.i08.

Holbrook, A. L., M. C. Green and J. A. Krosnick (2003) “Telephone Versus Face-to-Face Interviewing of National Probability Samples with Long Questionnaires: Comparisons of Respondent Satisficing and Social Desirability Response Bias,” Public Opinion Quarterly, 67(1):79–125. doi:10.1086/346010.

Joinson, A. N., A. Woodley and U.-D. Reips (2007) “Personalization, Authentication and Self-Disclosure in Self-Administered Internet Surveys,” Computers in Human Behavior, 23(1):275–285. doi:10.1016/j.chb.2004.10.012.

Keusch, F. and T. Yan (2016) “Web Versus Mobile Web,” Social Science Computer Review, 1–19. doi:10.1177/0894439316675566.

Klein, J. D., C. G. Havens and R. K. Thomas (2009) “Comparing Adolescent Response Bias Between Internet and Telephone Surveys,” Journal of Adolescent Health, 44(2):S36. doi:10.1016/j.jadohealth.2008.10.100.

Kreuter, F., S. Presser and R. Tourangeau (2008) “Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity,” Public Opinion Quarterly, 72(5):847–865. doi:10.1093/poq/nfn063.

Krosnick, J. A. (1991) “Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys,” Applied Cognitive Psychology, 5(3):213–236. doi:10.1002/acp.2350050305.

Lindhjem, H. and S. Navrud (2011) “Are Internet Surveys an Alternative to Face-to-Face Interviews in Contingent Valuation?” Ecological Economics, 70(9):1628–1637. doi:10.1016/j.ecolecon.2011.04.002.

Lozar Manfreda, K., M. Bošnjak, J. Berzelak, I. Haas and V. Vehovar (2008) “Web Surveys Versus Other Survey Modes: A Meta-Analysis Comparing Response Rates,” International Journal of Market Research, 50(1):79–104. doi:10.1177/147078530805000107.

Malakhoff, L. A. and M. Jans (2013) “Towards Usage of Avatar Interviewers in Web Surveys,” Survey Practice, 4(3). doi:10.29115/SP-2011-0015.

Malhotra, N. (2008) “Completion Time and Response Order Effects in Web Surveys,” Public Opinion Quarterly, 72(5):914–934. doi:10.1093/poq/nfn050.

Mavletova, A. (2013) “Data Quality in PC and Mobile Web Surveys,” Social Science Computer Review, 31(6):725–743. doi:10.1177/0894439313485201.

Postoaca, A. (2006) The Anonymous Elect: Market Research Through Online Access Panels. New York: Springer Science & Business Media.

Sekhon, J. S. (2011) “Multivariate and Propensity Score Matching Software with Automated Balance Optimization: The Matching Package for R,” Journal of Statistical Software, 42(7):1–52. doi:10.18637/jss.v042.i07.

Simon, H. A. (1957) Models of Man: Social and Rational. Mathematical Essays on Rational Human Behavior in a Social Setting. Hoboken: Wiley.

Sommer, J., B. Diedenhofen and J. Musch (2016) “Not to Be Considered Harmful: Mobile-Device Users Do Not Spoil Data Quality in Web Surveys,” Social Science Computer Review, 35(3):378–387. doi:10.1177/0894439316633452.

Thomas, R. K. and F. M. Barlas (2014) Respondents Playing Fast and Loose? Antecedents and Consequences of Respondent Speed of Completion. Paper presented at the American Association for Public Opinion Research (AAPOR) 69th Annual Conference, 2014, Anaheim, California.

Tourangeau, R. and T. W. Smith (1996) “Asking Sensitive Questions: The Impact of Data Collection Mode, Question Format, and Question Context,” Public Opinion Quarterly, 60(2):275–304. doi:10.1086/297751.

Tourangeau, R., M. P. Couper and D. M. Steiger (2003) “Humanizing Self-Administered Surveys: Experiments on Social Presence in Web and IVR Surveys,” Computers in Human Behavior, 19(1):1–24. doi:10.1016/S0747-5632(02)00032-8.

Tourangeau, R., L. J. Rips and K. A. Rasinski (2009) The Psychology of Survey Response (10th printing). Cambridge: Cambridge University Press.

Ward, M. K. and S. B. Pond III (2015) “Using Virtual Presence and Survey Instructions to Minimize Careless Responding on Internet-Based Surveys,” Computers in Human Behavior, 48:554–568. doi:10.1016/j.chb.2015.01.070.

Zhang, C. and F. Conrad (2014) “Speeding in Web Surveys: The Tendency to Answer Very Fast and Its Association with Straightlining,” Survey Research Methods, 8(2):127–135.

Published Online: 2017-8-12
Published in Print: 2017-10-26

©2017 Walter de Gruyter GmbH, Berlin/Boston
