
The effect of a voucher incentive on a survey response rate in the clinical setting: a quasi-randomized controlled trial

Abstract

Background

Financial rewards have been shown to be an important motivator for including normal healthy volunteers in trials, whereas less emphasis has been placed on non-healthy volunteers. No previous study has investigated the impact of a voucher incentive for participants in a cross-sectional study in a clinical setting. The objective of this study was to examine the impact of a small voucher incentive on the response rate of a survey administered at the point of care in a clinical setting, using a quasi-randomized controlled trial (q-RCT).

Methods

This was an ancillary study to a survey investigating physician-patient communication, administered to patients subsequent to their appointment with a physician. We quasi-randomized participants to receive or not receive a voucher for a coffee (cost: €1) enclosed in the survey package, alternating groups on a weekly basis. The exact Chi-square test was used to compare response rates between study arms.

Results

In total, 472 participants received the survey package. Among them, 249 participants were quasi-randomized to the voucher arm and 223 to the control group. The total response rate was 46%. The response rates were 48% in the voucher arm and 44% in the control group. The corresponding risk ratio was 1.09 (95% CI: 0.89, 1.32).

Conclusions

A small voucher incentive was unlikely to have an impact on the response rate of a survey investigating physician-patient communication. The voucher may have been too small to generate an effect; this should be investigated further in future real-world studies.


Background

Recruitment can be defined as “the dialogue which takes place between an investigator and a potential participant prior to the initiation of the consent process” [1]. Recruitment of study participants poses a challenge to research studies. Poor recruitment can result in underpowered studies. Low response rates can introduce bias and reduce certainty in the study results [2,3,4]. In the worst case, studies might be stopped or abandoned. One study found that only 31% of trials achieved their original recruitment target, while 53% needed to be extended [5].

Problems with recruitment are dependent on the study type. In general, recruitment problems are more likely in clinical trials and cohort studies. Recruitment in cross-sectional studies with only one time point of data collection should generally be less difficult. However, there is a paucity of literature on this topic, making it difficult to underpin this statement with empirical results. Therefore, it is not surprising that the majority of studies investigating strategies to increase recruitment have focused on clinical trials [6].

Among other methods, incentive-based approaches such as small gifts, vouchers or financial incentives have been investigated [6, 7]. These approaches are often used to facilitate study participation among persons who might otherwise not participate [8]. Financial rewards were also shown to be an important motivator for including normal healthy volunteers in trials [9]. The character and magnitude of the voucher can also have an impact on its effectiveness [10].

To the best of our knowledge, no previous study has investigated the impact of a voucher incentive for participants in a cross-sectional study in a clinical setting. Although hypothetical studies of recruitment strategies have their place in this field of research, researchers are encouraged to embed evaluations of recruitment interventions within their own studies [6]. Therefore, we conducted a quasi-randomized controlled trial (q-RCT) to examine the impact of a small voucher incentive on a survey response rate in a clinical setting at the point of care.

Methods

We conducted the q-RCT as an ancillary study to a survey of patients subsequent to their appointment with a physician [11]. All appointments were held during surgery hours. The aim of the survey was to investigate physician-patient communication.

Trial design

We allocated patients, according to their week of appointment at the clinic, to one of two groups: a) the voucher arm, which received a survey package with an enclosed voucher for a coffee (cost: €1) redeemable at one of the three cafeterias of the hospital; b) the control arm, which received the survey package without a voucher. Groups were alternated on a weekly basis (i.e. patients attending the clinic in odd weeks received the voucher, while patients attending the clinic in even weeks did not). The survey package consisted of a cover letter, an information leaflet, the questionnaire and a franked return envelope for sending back the questionnaire. The voucher was affixed to the top of the questionnaire, and study assistants referred to the voucher when handing out the survey package.
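To make the allocation rule concrete, the sketch below (in Python; the `assign_arm` helper is hypothetical and not taken from the study materials) assigns an arm from the ISO calendar week of the appointment date, mirroring the odd/even alternation described above:

```python
from datetime import date

def assign_arm(appointment_date: date) -> str:
    """Quasi-randomized allocation by week parity (illustrative sketch only).

    Patients seen in odd ISO calendar weeks receive the coffee voucher;
    patients seen in even weeks serve as controls.
    """
    week = appointment_date.isocalendar()[1]  # ISO week number (1-53)
    return "voucher" if week % 2 == 1 else "control"

# Example: two appointments in consecutive weeks of the 2015 study period
print(assign_arm(date(2015, 6, 8)))   # ISO week 24 -> "control"
print(assign_arm(date(2015, 6, 15)))  # ISO week 25 -> "voucher"
```

Because the rule is deterministic, anyone who knows the appointment week can predict the assignment; this is the selection-bias concern addressed in the limitations of the Discussion.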

Study population

The study population consisted of outpatients attending surgery hours at a university-affiliated hospital in Cologne, Germany. Participating surgeons had to give informed consent; eleven surgeons from four departments agreed to participate. Only persons attending an appointment with a participating surgeon were eligible for inclusion in the study. There were no other inclusion criteria for participants.

The ethical review board of Witten/Herdecke University approved the study and waived the requirement for a signed consent form. Participants received an information leaflet about the survey’s aim containing the information usually included in a consent form.

Survey design

The Individual Clinician Feedback (ICF) instrument developed by PICKER Europe in 2012 was used for the survey. The aim of the ICF is to collect feedback on physicians’ communication skills based on patients’ experience during their appointment. The questionnaire was translated into German and culturally adapted using established methods [12]. The German ICF questionnaire consists of 38 items, each rated on a Likert scale from 0 to 10, with higher values indicating higher satisfaction.

Study assistants were present during the surgery hours of thoracic surgery, general surgery, orthopedics and trauma surgery, and plastic surgery. Potential participants were approached by study assistants immediately after the appointment. Reminders could not be sent, as no contact information was collected. Study assistants were not blinded (i.e. they were aware of whether the survey package contained the voucher). A franked return envelope was included in the survey package to maximize the response rate. The survey took place from June to August 2015 (13 weeks). As this was a pilot study and we were also interested in investigating patient volume, data collection was restricted to this period and no a priori sample size calculation was performed.

Statistical analysis

The primary outcome was the response rate, defined as the number of completed questionnaires divided by the number of delivered questionnaires. The exact Chi-square test was used to compare response rates between study arms, and the risk ratio (RR) with its 95% confidence interval was calculated. We used two-sided p values with a significance threshold of 0.05.
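The paper does not state which software or exact procedure was used, so the following Python sketch is only an illustration of how the comparison could be run on the reported counts: it computes the per-arm response rates, a Fisher exact test on the 2×2 table (used here as a stand-in for the exact Chi-square test), and a log-scale Wald 95% confidence interval for the risk ratio. Because the authors' implementation is not reported, its output will not necessarily reproduce the published p value and confidence limits exactly.

```python
import math
from scipy.stats import fisher_exact  # exact test on a 2x2 table

# Reported counts: responders / delivered questionnaires per arm
resp_voucher, n_voucher = 120, 249
resp_control, n_control = 98, 223

rate_voucher = resp_voucher / n_voucher   # ~0.48
rate_control = resp_control / n_control   # ~0.44

# Exact test on the 2x2 table of responders vs non-responders
table = [[resp_voucher, n_voucher - resp_voucher],
         [resp_control, n_control - resp_control]]
_, p_value = fisher_exact(table)

# Risk ratio with a log-scale Wald 95% confidence interval
rr = rate_voucher / rate_control
se_log_rr = math.sqrt(1 / resp_voucher - 1 / n_voucher
                      + 1 / resp_control - 1 / n_control)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f}), p = {p_value:.3f}")
# The RR comes out near 1.10 with a CI close to the reported 0.89-1.32;
# the p value may differ from the published 0.623 because the exact
# test implementation used in the paper is not specified.
```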

Results

In total, 472 participants received the survey package. Among them, 249 participants were quasi-randomized to the voucher arm and 223 to the control group. The total response rate was 46%. The response rates were 48% (120/249) in the voucher arm and 44% (98/223) in the control group. The difference of 4 percentage points was not statistically significant (p = 0.623). The corresponding risk ratio was 1.09 (95% CI: 0.89, 1.32). The risk ratios ranged from 0.99 to 1.80 across the four different surgery hours, none of which was statistically significant.

Discussion

We conducted a q-RCT to investigate the impact of a small voucher incentive on the survey response rate of patients after their appointment with a physician. We found that this voucher incentive was unlikely to have had an impact on the response rate.

Our total response rate of 46% is lower than in other studies dealing with communication. Previous studies have reported response rates between 74% and 83% in different settings, such as dentistry [13], hospital care [14], cancer care [15], and cardiovascular risk management [16]. However, a very similar but large-scale study found a response rate of 51% for a mailed questionnaire on patients’ experience of face-to-face consultations with general practitioners in the UK [17]. An older review found a mean response rate of approximately 60% among mail surveys published in medical journals [3].

There are few studies with which we can compare our results, as most focus on effective strategies for recruitment to trials. Trials differ from our study in that they require follow-up of patients, whereas we collected data at one time point only and did not follow up the included patients. Furthermore, patients included in trials usually suffer from the disease or health care problem under study, with the exception of prevention trials. Therefore, in the absence of comparable studies, we can probably best compare our results with studies on the recruitment of normal healthy volunteers. Our study sample did not consist of healthy volunteers only, as patients were attending surgery hours for an underlying health care problem. However, our study focused on physician-patient communication rather than on an underlying health care problem.

Ill patients might seek relief, a cure or a better understanding of their condition, which might promote their participation in a trial or a research study [18]. The benefits for healthy volunteers remain less clear [19], and it has been debated whether they can be motivated by financial or voucher incentives given that their motivation might arise from something different [20]. However, a systematic review found financial incentives to be a strong motivator for the participation of healthy volunteers in trials [9]. Similar results have also been found for surveys. A recent study found that a £2.50 shop voucher significantly improved response rates compared to no voucher (43% vs. 38%) [21]. This study is of particular interest because a further study group received a £5.00 shop voucher, which did not have a significant effect compared to the £2.50 voucher group (42% vs. 43%). This provides evidence that even small voucher incentives might be able to increase response rates. A £5.00 gift voucher, redeemable at a range of shops, was investigated as an incentive to improve the response rate of a postal questionnaire in an RCT [22]; more questionnaires were returned in the incentive arm than in the no-incentive group (risk ratio 1.10; 95% CI 1.05, 1.16). A similar voucher resulted in an 11.7% (95% CI 4.7 to 18.6%) improvement in the response rate of mothers of seven-year-old children, where the children themselves were to be assessed [23]. However, the incentive used in our study was smaller than in any of the studies mentioned above and might therefore have been too small to have an impact.

Another difference in our study is that questionnaires were not mailed but handed out personally by study personnel. This is a rather uncommon strategy, as it requires more resources to conduct the study. Personalized questionnaires and letters, as well as contacting participants before sending questionnaires, were shown to increase response rates in a meta-analysis [24].

It has been stressed before that studies dealing with health communication research might differ with respect to recruitment of patients, among other aspects [25]. The reason for this is the context (i.e. the clinic) within which communication takes place. When studying physician-patient communication, it is also important to include the clinical staff. Shue [25] has pointed out three main issues when recruiting participants in the clinical setting: researcher availability, clinical staff knowledge, and clinic scheduling constraints. All of these were considered when planning and conducting the study.

We were unable to allocate vouchers in a truly random fashion and performed a quasi-randomized trial instead. Alternation has the drawback that future assignments can be anticipated or are simply known, as in our study. Thus, selection bias might have occurred due to the selective enrolment and non-enrolment of participants [26]. We chose alternation because we could not rule out the possibility that patients in the waiting room would see another patient being included in the study and allocated either to the voucher arm or to the no-incentive arm, which might have decreased the willingness to participate among patients who did not receive the voucher. As patients were asked to participate immediately after leaving the doctor’s examination room, it was not possible for practical reasons to randomize patients in a separate room; only such a procedure would have made it possible to conceal the allocation without interfering with other patients in the waiting room.

A limitation of our study is the relatively small sample size, which was due to its nature as a pilot study. Similarly, as we included a wide range of patients with different conditions, the generalizability of our results might be limited. A strength of our study is that we report real-world results, whereas many studies investigating strategies for increasing participation rates have been criticized as hypothetical [6].

Conclusions

A small voucher incentive was unlikely to have an impact on the response rate of a survey investigating physician-patient communication. The voucher may have been too small to generate an effect; this should be investigated further in future real-world studies, taking different study designs, objectives and contexts (e.g. health communication) into consideration.

Abbreviations

q-RCT: quasi-randomized controlled trial
RCT: randomized controlled trial
RR: risk ratio

References

  1. Patel MX, Doku V, Tennakoon L. Challenges in recruitment of research participants. Adv Psychiatr Treat. 2003;9(3):229–38.
  2. Curtis EA, Redmond RA. Survey postal questionnaire: optimising response and dealing with non-response. Nurse Res. 2009;16(2):76–88.
  3. Asch DA, Jedrziewski MK, Christakis NA. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997;50(10):1129–36.
  4. Barriball KL, While AE. Non-response in survey research: a methodological discussion and development of an explanatory model. J Adv Nurs. 1999;30(3):677–86.
  5. McDonald AM, Knight RC, Campbell MK, Entwistle VA, Grant AM, Cook JA, et al. What influences recruitment to randomised controlled trials? A review of trials funded by two UK funding agencies. Trials. 2006;7:9.
  6. Treweek S, Lockhart P, Pitkethly M, Cook JA, Kjeldstrøm M, Johansen M, et al. Methods to improve recruitment to randomised controlled trials: Cochrane systematic review and meta-analysis. BMJ Open. 2013;3(2).
  7. Agarwal A, Raad D, Kairouz V, Fudyma J, Curtis AB, Schunemann HJ, et al. The effect of a monetary incentive for administrative assistants on the survey response rate: a randomized controlled trial. BMC Med Res Methodol. 2016;16:94.
  8. Singer E, Couper MP. Do incentives exert undue influence on survey participation? Experimental evidence. J Empir Res Hum Res Ethics. 2008;3(3):49–56.
  9. Tishler CL, Bartholomae S. The recruitment of normal healthy volunteers: a review of the literature on the use of financial incentives. J Clin Pharmacol. 2002;42(4):365–75.
  10. Businelle MS, Rash CJ, Burke RS, Parker JD. Using vouchers to increase continuing care participation in veterans: does magnitude matter? Am J Addict. 2009;18(2):122–9.
  11. Prengel P, Kotte N, Antoine SL, Eikermann M, Neugebauer EAM, Stahl K, et al. [Patients are satisfied with the physician-patient communication: a pilot study applying the "Individual Clinician Feedback" questionnaire] (in German). Gesundheitswesen. 2017.
  12. Harkness J. Guidelines for best practice in cross-cultural surveys. Survey Research Center, Institute for Social Research, University of Michigan; 2011.
  13. Olausson M, Esfahani N, Ostlin J, Hagglin C. Native-born versus foreign-born patients' perception of communication and care in Swedish dental service. Swed Dent J. 2016;40(1):91–100.
  14. Vucemilo L, Curkovic M, Milosevic M, Mustajbegovic J, Borovecki A. Are physician-patient communication practices slowly changing in Croatia? A cross-sectional questionnaire study. Croat Med J. 2013;54(2):185–91.
  15. Johansson M, Ryden A, Finizia C. Self evaluation of communication experiences after laryngeal cancer: a longitudinal questionnaire study in patients with laryngeal cancer. BMC Cancer. 2008;8:80.
  16. Hobbs FD, Erhardt LR, Rycroft C. The From The Heart study: a global survey of patient understanding of cholesterol management and cardiovascular risk, and physician-patient communication. Curr Med Res Opin. 2008;24(5):1267–78.
  17. Roberts MJ, Campbell JL, Abel GA, Davey AF, Elmore NL, Maramba I, et al. Understanding high and low patient experience scores in primary care: analysis of patients’ survey data for general practices and individual doctors. BMJ. 2014;349.
  18. Ayd FJ Jr, Calabresi P. Motivations and rewards for volunteering to be an experimental subject. Clin Pharmacol Ther. 1972;13(5):771–81.
  19. Vrhovac R, Francetic I, Rotim K. Drug trials on healthy volunteers in Yugoslavia. Int J Clin Pharmacol Ther Toxicol. 1990;28(9):375–9.
  20. Newton LH. Inducement, due and otherwise. IRB. 1982;4(3):4–6.
  21. Robb KA, Gatting L, Wardle J. What impact do questionnaire length and monetary incentives have on mailed health psychology survey response? Br J Health Psychol. 2017.
  22. Gates S, Williams MA, Withers E, Williamson E, Mt-Isa S, Lamb SE. Does a monetary incentive improve the response to a postal questionnaire in a randomised controlled trial? The MINT incentive study. Trials. 2009;10:44.
  23. Kenyon S, Pike K, Jones D, Taylor D, Salt A, Marlow N, et al. The effect of a monetary incentive on return of a postal health and development questionnaire: a randomised trial [ISRCTN53994660]. BMC Health Serv Res. 2005;5:55.
  24. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Increasing response rates to postal questionnaires: systematic review. BMJ. 2002;324(7347):1183.
  25. Shue CK. Factors that promote and prohibit access to participants in the clinical setting: a review of response rates from a health communication intervention study. Patient Educ Couns. 2011;82(3):488–94.
  26. Higgins JPT, Altman DG. Assessing risk of bias in included studies. In: Cochrane handbook for systematic reviews of interventions. John Wiley & Sons, Ltd; 2008. p. 187–241.


Acknowledgments

We would like to thank Ms. Anne Trauthwein, Ms. Zemzem Cinar, and Ms. Pamela Lenti for assistance with data collection, and Dr. Käthe Goossen for proofreading the manuscript.

Availability of data and materials

Data supporting the findings reported in this manuscript can be found in the original paper (Prengel P, Kotte N, Antoine SL, Eikermann M, Neugebauer EAM, Stahl K, et al. [Patients are Satisfied with the Physician-Patient Communication: A Pilot Study Applying the “Individual Clinician Feedback” Questionnaire]. Gesundheitswesen. 2017. Epub 2017/06/07).

Additional data supporting the findings is available upon request.

Author information


Contributions

DP, NK, and PO conceived the q-RCT and participated in its design and coordination. NK and PO performed the analysis. DP drafted the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Dawid Pieper.

Ethics declarations

Ethics approval and consent to participate

The Institutional Review Board of Witten/Herdecke University approved the study and waived the requirement for a signed consent form. Instead, participants were given a study information sheet that included all information typically included in a consent form. Return of a partially or fully completed survey was considered to imply agreement to participate in the study and consent to the use of the data.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Pieper, D., Kotte, N. & Ober, P. The effect of a voucher incentive on a survey response rate in the clinical setting: a quasi-randomized controlled trial. BMC Med Res Methodol 18, 86 (2018). https://doi.org/10.1186/s12874-018-0544-4


