Impact of raising awareness of respondents on the measurement quality in a web survey

Abstract

Web surveys are attractive because they make it possible to collect a large amount of data in a very short time, but they also suffer from several problems (Reips 2000). In particular, in web panels where incentives are used to encourage participation, some respondents answer the questions so fast that it seems impossible that they have carefully read them or thought about the best answers. This paper presents the results of two experiments, conducted with the online fieldwork provider Netquest, that try to reduce such speeding behaviours and to improve the quality of answers by raising respondents’ awareness of the importance of completing a survey thoughtfully. The results show no effect for respondents who were given either an introductory reminder alone or a single commitment statement about the importance of their answers. However, when the introduction is combined with the commitment statement, a small effect on some respondents’ behaviours is found: mainly respondents who already put some effort into answering, but not the maximum effort, were affected. Participants who gave low quality answers continued to give low quality answers even when they had committed to do their best. From these results, we may conclude that more radical solutions than raising awareness may be necessary to ensure that respondents in web surveys read the questions carefully and answer them as well as they can.

Notes

  1. Note that reading on the Internet is not the same as reading on paper: people are much more used to scanning Internet pages for keywords and reading “diagonally” than to reading everything in detail. Still, experimenting with a few subjects, we found that less than 15 s seems too short to read the text of the developed introduction.

  2. Except for the introduction page, Qareas, Q550, Q750, Q950 and Q606, for which we excluded the highest 5 %; otherwise, we still had some clearly impossible times. We also tried other computations, using thresholds other than 1 or 5 %, but the overall results were similar. A minimal sketch of this trimming is given after these notes.

  3. Note that if the treatment groups took the filter question more seriously, they may, on average, have reported more known brands. However, the proportions of respondents saying that they knew three or more brands were 45.8 % in the control group, 48.2 % in the “intro+sign” group and 45.3 % in the “intro only” group. The differences between the control and the treatment groups were not significant at the 5 % level, so the treatments did not lead to more respondents reporting that they knew three or more brands.

  4. They were excluded from the sample in the substantive survey carried out by the Reputation Institute.

  5. The first battery Q550 included one IMC in addition to the 26 items; in the repetitions for brands 2 (Q750) and 3 (Q950), the IMC was not repeated. Thus, the total number of items was 26 and not 27.

  6. Others, like random answers, could not be detected.
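
The trimming described in note 2 can be made concrete with a short sketch. The following Python fragment is only an illustration under assumed inputs (the function name, the example times and the data format are hypothetical); it drops the slowest 1 % or 5 % of recorded times before averaging, as the note describes.

    import numpy as np

    def trimmed_mean_time(times, top_pct=1.0):
        # Mean response time after dropping the slowest top_pct percent of
        # recorded times, which are treated as clearly impossible values.
        times = np.asarray(times, dtype=float)
        cutoff = np.percentile(times, 100 - top_pct)
        return times[times <= cutoff].mean()

    # Illustrative usage with made-up times in seconds; the last value is
    # clearly impossible and is excluded by the 5 % trim used for questions
    # such as Q550, Q750 and Q950.
    example_times = [12.4, 15.1, 9.8, 14.2, 3600.0]
    print(trimmed_mean_time(example_times, top_pct=5))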

References

  • Aust, F., Diedenhofen, B., Ullrich, S., Musch, J.: Seriousness checks are useful to improve data validity in online research. Behav. Res. Methods 45(2), 527–535 (2013). doi:10.3758/s13428-012-0265-2

  • Baker, R., Blumberg, S., Brick, J., Couper, M., Courtright, M., Dennis, J., Zahs, D.: American association of public opinion researchers report on online panels. Public Opin. Q. 74, 711–781 (2010)

  • Bethlehem, J.: Selection bias in web surveys. Int. Stat. Rev. 78(2), 161–188 (2010). doi:10.1111/j.1751-5823.2010.00112.x

  • Bosnjak, M., Tuten, T.L.: Prepaid and promised incentives in web surveys. An experiment. Soc. Sci. Comput. Rev. 21(2), 208–217 (2003). doi:10.1177/0894439303021002006

  • Cannell, C.F., Oksenberg, L., Converse, J.M.: Striving for response accuracy: experiments in new interviewing techniques. J. Mark. Res. 14, 306–315 (1977)

  • Cobanoglu, C., Cobanoglu, N.: The effect of incentives in web surveys: application and ethical considerations. Int. J. Mark. Res. 45(4), 475–488 (2003)

  • Conrad, F.G., Couper, M.P., Tourangeau, R., Galesic, M.: Interactive feedback can improve the quality of responses in web surveys. Paper presented at the conference of the American Association for Public Opinion Research, Miami Beach, Florida. AAPOR—ASA section on survey research methods. https://www.amstat.org/Sections/Srms/Proceedings/y2005/Files/JSM2005-000938.pdf (2005)

  • Cook, C., Heath, F., Thompson, R.L.: A meta-analysis of response rates in web- or internet-based surveys. Educ. Psychol. Meas. 60(6), 821–836 (2000). doi:10.1177/00131640021970934

  • Couper, M.P.: Web surveys: a review of issues and approaches. Public Opin. Q. 64, 464–494. http://www.jstor.org/stable/3078739 (2000)

  • Couper, M.P., Tourangeau, R., Kenyon, K.: Picture this! Exploring visual effects in web surveys. Public Opin. Q. 68(2), 255–266 (2004). doi:10.1093/poq/nfh013

  • Couper, M.P., Traugott, M.W., Lamias, M.J.: Web survey design and administration. Public Opin. Q. 65(2), 230–253 (2001). doi:10.1086/322199

  • Dillman, D.A., Tortora, R.D., Bowker, D.: Principles for constructing web surveys. SESRC, Washington. http://www.isurveys.com.au/resources/ppr.pdf (1999)

  • Dillman, D.A., Tortora, R.D., Conradt, J., Bowker, D.: Influence of plain versus fancy design on response rates for web surveys. In: Proceedings of the American Statistical Associations Survey Methods Research Section, Washington DC (1998)

  • Göritz, A.S.: Incentives in web studies: methodological issues and a review. Int. J. Internet Sci. 1(1), 58–70 (2006)

  • Kapelner, A., Chandler, D.: Preventing satisficing in online surveys: a “Kapcha” to ensure higher quality data. In: Proceedings of Crowd Conference 2010, San Francisco, CA. http://www.danachandler.com/files/kapcha.pdf (2010). Accessed 04 Oct 2010

  • Krosnick, J.A.: Response strategies for coping with the cognitive demands of attitude measures in surveys. Appl. Cogn. Psychol. 5, 213–236 (1991). doi:10.1002/acp.2350050305

  • Matthijsse, S., Leo, E., Hox, J.: Professional respondents in online panels: a threat to data quality? In: Proceedings of ESOMAR Panel Research, Orlando, FL (2006)

  • Muñoz-Leiva, F., Sánchez-Fernández, J., Montoro-Ríos, F., Ibánez-Zapata, J.A.: Improving the response rate and quality in web-based surveys through the personalisation and frequency of reminder mailings. Qual. Quant. 44, 1037–1052 (2010)

  • Oppenheimer, D.M., Meyvis, T., Davidenko, N.: Instructional manipulation checks: detecting satisficing to increase statistical power. J. Exp. Soc. Psychol. 45, 867–872 (2009). doi:10.1016/j.jesp.2009.03.009

  • Reips, U.-D.: The Web Experiment Method: Advantages, Disadvantages, and Solutions. In: Birnbaum, M. H. (ed.) Psychology Experiments on the Internet, pp. 89–117. Academic Press, San Diego (2000). https://www.casra.ch/fileadmin/files/Media/About_us/Team/Teaching/2001_Summer/UZH_1132/Course/Reips2000.pdf

  • Reips, U.-D.: Internet experiments: methods, guidelines, meta-data. Hum. Vision Electron. Imaging XIV Proc. SPIE 7240, 724008 (2009)

  • Revilla, M., Ochoa, C.: What are the links in a web survey among response time, quality, and auto-evaluation of the efforts done? Soc. Sci. Comput. Rev. 33(1), 97–114 (2015). doi:10.1177/0894439314531214. (First published online on 14 May 2014)

  • Saris, W.E., Gallhofer, I.: Design, Evaluation, and Analysis of Questionnaires for Survey Research. Wiley, New York (2007)

  • Toepoel, V., Das, M., Van Soest, A.: Effects of design in web surveys: comparing trained and fresh respondents. Public Opin. Q. 72, 985–1007 (2008)

  • Tourangeau, R., Rips, L.J., Rasinski, K.: The Psychology of Survey Response. Cambridge University Press, Cambridge (2000)

  • Van Selm, M., Jankowski, N.W.: Conducting online surveys. Qual. Quant. 40(3), 435–456 (2006)

Acknowledgments

I am very grateful for the permission to use the questionnaire of the Reputation Institute in Spain (http://www.reputationinstitute.com) for one of our experiments and a questionnaire of Provokers in Mexico (http://www.provokersite.com) for the other. I also very much appreciate the careful way in which Netquest performed the data collection and all the help given by several members of the team, in particular Carlos Ochoa and Salvador Masdeu.

Author information

Corresponding author

Correspondence to Melanie Revilla.

Appendix

1.1 Text of the introduction

1.1.1 Group “intro only”

1.1.2 Group “sign only”

1.1.3 Group “intro+sign”

The respondents in this group received both the introduction and the commitment statement.

1.2 The score of bad quality

In Experiment 1, the score of bad quality was computed as follows:

  • 2 points if a respondent failed the IMC

  • 2.5 points if a respondent straight-lined on the 27 items of Q550

  • 1.5 points if a respondent straight-lined on 26 out of the 27 items of Q550

  • 2 points if a respondent straight-lined on the 26 items of Q750 or Q950

  • 1 point if a respondent straight-lined on 25 out of the 26 items of Q750 or Q950

  • 2 points if a respondent gave a nonsensical answer to the open narrative question ADO6

  • 1 point if a respondent wrote something that did not answer ADO6

  • 0.5 point if a respondent answered “don’t know” or “nothing” in ADO6

In Experiment 2, it was computed as follows:

  • 2 points if a respondent failed the IMC

  • 2 points if a respondent straight-lined on 19 or 18 items of Q5

  • 1.5 points if a respondent straight-lined on 16 or 17 items of Q5

  • 2.5 points if a respondent straight-lined on 24 items of Q29

  • 2 points if a respondent straight-lined on 21 to 23 items of Q29

  • 2 points if a respondent gave a non-substantive answer to the open narrative questions Q19, Q20, Q32 or ADO6

  • 0.5 point if a respondent answered “don’t know” or “nothing” in Q19, Q20, Q32 or ADO6

The number of points added for each undesirable behaviour was defined according to how damaging that behaviour was judged to be for the quality of the answers.
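
To make the scoring rule concrete, the following Python sketch assembles the Experiment 1 score from per-respondent flags. Only the point values are taken from the list above; the flag names and the dictionary format are hypothetical.

    def bad_quality_score_exp1(r):
        # Sum the penalty points for one respondent, given a dict of
        # boolean flags describing the undesirable behaviours.
        score = 0.0
        if r.get("failed_imc"):
            score += 2.0   # failed the instructional manipulation check
        if r.get("straightlined_27_q550"):
            score += 2.5   # straight-lined all 27 items of Q550
        elif r.get("straightlined_26_q550"):
            score += 1.5   # straight-lined 26 out of the 27 items of Q550
        for brand in ("q750", "q950"):
            if r.get("straightlined_26_" + brand):
                score += 2.0   # straight-lined all 26 items of the repetition
            elif r.get("straightlined_25_" + brand):
                score += 1.0   # straight-lined 25 out of the 26 items
        if r.get("ado6_nonsense"):
            score += 2.0   # nonsensical answer to the open question ADO6
        elif r.get("ado6_off_topic"):
            score += 1.0   # answer that does not address ADO6
        elif r.get("ado6_dont_know"):
            score += 0.5   # "don't know" or "nothing" in ADO6
        return score

    # Illustrative usage: a respondent who failed the IMC and straight-lined
    # 26 of the 27 items of Q550 gets 2 + 1.5 = 3.5 points.
    print(bad_quality_score_exp1({"failed_imc": True, "straightlined_26_q550": True}))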

About this article

Cite this article

Revilla, M. Impact of raising awareness of respondents on the measurement quality in a web survey. Qual Quant 50, 1469–1486 (2016). https://doi.org/10.1007/s11135-015-0216-y
