When does social desirability become a problem? Detection and reduction of social desirability bias in information systems research

https://doi.org/10.1016/j.im.2021.103500

Abstract

Social desirability (SD) bias occurs in self-report surveys when subjects give socially desirable responses by over- or underreporting their behavior. Despite knowledge of SD as a potential threat to the validity of information systems (IS) research, little has been done to systematically assess its extent. Furthermore, we are uncertain of how to recover reliable estimates of the relationships between research variables contaminated by SD bias. We sought in this study to assess the extent of SD bias in causal inferences when independent and/or dependent variables are contaminated. We also evaluated whether an SD scale in conjunction with partial correlation could effectively and efficiently correct SD bias when it is found. To achieve these purposes, we designed a survey study and collected data from Amazon's Mechanical Turk in the context of mobile loafing, which refers to employees’ personal use of the mobile Internet during business hours. Using various detection methods, we found that SD bias existed in the context of mobile loafing. From the results of the variance reduction rate and a covariate technique, we found that SD bias becomes problematic when both the independent and dependent variables are susceptible to SD bias. Overall, our study contributes significantly to the IS literature by revealing the extent of SD bias and the magnitude of the possible correction for it in IS research.

Introduction

Surveys play a critical role in Information Systems (IS) research. A key assumption of self-report surveys is that respondents accurately recall the relevant information and respond honestly [1]. Social desirability (SD) bias, one of the most frequently raised concerns with self-report surveys, is the tendency of subjects to give socially desirable responses by overreporting behaviors that make them look good (e.g., knowledge contribution) and underreporting behaviors that make them look bad (e.g., cyberbullying) [2–4]. This bias is problematic because it distorts the means and variances of research variables and threatens the validity of survey-based causal inferences in IS research [5,6]. For these reasons, conclusions drawn from analyses of potentially biased self-report data can misguide managerial and policy decisions [6,7]. Thus, it is crucial for researchers and practitioners to assess the extent of SD bias and, if necessary, control for it when drawing inferences from their survey data.

Recent IS studies have attempted to assess the extent of SD bias when researchers examine sensitive topics such as addiction, software piracy, unethical programming, and cyberbullying [5,6,8–18]. Notable methods to assess and/or reduce SD bias in prior IS research are (1) indirect questioning, (2) the randomized response technique (RRT), (3) implicit constructs and measures, and (4) SD scales with covariate techniques. First, indirect questioning is a projective technique in which subjects are asked to report the thoughts or actions of others similar to themselves [19]. Second, the RRT seeks to prevent SD bias during data collection by protecting privacy through a randomization device with a known probability distribution (e.g., coins, dice) [6,20]. Third, implicit constructs can reduce SD bias because they are measured only indirectly rather than through explicit, direct items. Fourth, researchers can include an SD scale in a survey questionnaire and then assess SD bias by examining the correlations between the SD scale and the research variables [9,21]. If significant correlations are found, a covariate technique using partial correlation can be used to reduce SD bias [22–24].
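To make the fourth approach concrete, the covariate technique can be sketched as follows. This is an illustrative simulation with invented data, not the authors' code: two variables that are unrelated except for a shared social desirability component show an inflated raw correlation, and partialing out an SD scale score recovers an estimate near the true (null) relationship.

```python
import numpy as np

def partial_corr(x, y, covariate):
    """Correlation between x and y after partialing out a covariate
    (here, an SD scale score) via OLS residuals."""
    z = np.column_stack([np.ones_like(covariate), covariate])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 1000
sd_score = rng.normal(size=n)                # simulated SD trait score
x = rng.normal(size=n) + 0.6 * sd_score      # "independent" variable, contaminated
y = rng.normal(size=n) + 0.6 * sd_score      # "dependent" variable, contaminated
r_raw = np.corrcoef(x, y)[0, 1]              # inflated by the shared SD component
r_partial = partial_corr(x, y, sd_score)     # near zero, the true relationship
```

In practice, the SD score would come from a validated instrument such as the Marlowe–Crowne scale or an impression management subscale rather than being simulated.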

Previous IS researchers have relied on one or another of these approaches to assess the extent of SD bias, but their conclusions have been contradictory. For example, some research, based on the relatively low correlations between an SD scale and sensitive variables, concluded that SD bias was not a problem [8,13–15,17]. In contrast, Kwan et al. [6] found that the structural paths of respondents answering direct questions were significantly higher than those of respondents answering via the RRT and thus concluded that SD bias is a real threat in IS research. Similarly, in social psychology, some researchers have considered SD bias significant, whereas others believe it to be trivial [25]. These contradictory conclusions generally hinge on the context and the contingencies of admitting to undesirable behavior. For example, anger measures from parolees are significantly susceptible to SD bias [26], whereas admitting anger among college students is not [27].

To resolve these contradictions, Paunonen and LeBel [25] used a Monte Carlo simulation to evaluate the effects of SD bias in an independent variable on its relationship with an unbiased dependent variable. They concluded that SD bias is not seriously problematic, at least not when only one variable is biased. Nevertheless, despite their findings, the extent of SD bias remains uncertain when both the independent and dependent variables are susceptible to it. Although various methods to detect this bias have been proposed, prior IS research has relied heavily on correlation analysis using a short form of the Marlowe–Crowne SD scale. Furthermore, we remain unsure how to effectively control for SD bias, especially when the RRT is not viable. It is especially important to understand how effective SD scales can be in recovering reliable estimates of the relationships between contaminated research variables.
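The difference between one and two contaminated variables can be illustrated with a small Monte Carlo in the spirit of, but not reproducing, Paunonen and LeBel [25]. All parameters below (a true correlation of 0.3, an SD loading of 0.5) are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, true_r, lam = 500, 200, 0.3, 0.5

def observed_r(bias_x, bias_y):
    """One draw: latent X and Y correlate at true_r; an SD trait with
    loading lam contaminates the indicated variable(s)."""
    x, y = rng.multivariate_normal([0, 0], [[1, true_r], [true_r, 1]], size=n).T
    sd = rng.normal(size=n)
    if bias_x:
        x = x + lam * sd
    if bias_y:
        y = y + lam * sd
    return np.corrcoef(x, y)[0, 1]

# Average observed correlation under each contamination condition
one_biased = np.mean([observed_r(True, False) for _ in range(reps)])
two_biased = np.mean([observed_r(True, True) for _ in range(reps)])
# one_biased stays close to true_r (mild attenuation); two_biased is
# inflated because both variables now share SD variance.
```

The simulation shows why the one-biased-variable case looks benign while the two-biased-variable case distorts the estimated relationship.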

Our objective in this study was twofold. First, we sought to assess the extent of SD bias in the estimation of causal relationships when it contaminates independent and/or dependent variables. Second, we evaluated the effectiveness of SD scales with partial correlations in controlling SD bias and acquiring reliable estimates of causal relationships. To achieve these goals, we designed a survey study and collected data from Amazon's Mechanical Turk (MTurk) in the context of mobile loafing, or “nonwork-related mobile computing,” which refers to employees’ personal use of the mobile Internet during business hours [28–31]. The survey was designed to examine how the estimation of causal relationships varies across different conditions of SD bias in the independent and dependent variables. We used indirect questioning, correlation analysis, and a factor mixture model (FMM) to assess the extent of SD bias. To correct for SD bias, and to understand how useful a covariate technique is in recovering true causal relationships free of it, we used the variance reduction rate (VRR) and a covariate technique.

This study contributes to IS research in several ways. First, by venturing beyond the one-biased-variable condition examined in Paunonen and LeBel [25], we contribute to the literature by examining a two-biased-variable condition. We find that, although the level of distortion in causal inferences is minor in a one-biased-variable condition, consistent with the prior literature, a two-biased-variable condition may cause a problematic degree of change in causal inferences. Second, our results can reconcile the differing conclusions about the effects of SD bias in previous IS research: a one-biased-variable condition supports the argument that SD bias is not problematic (e.g., [13,15]), whereas a two-biased-variable condition supports the conclusion that SD bias may threaten the validity of self-report research (e.g., [6]). Third, unlike prior IS research that relied on a single method to assess SD bias, our study used, and evaluated, multiple methods. Among them, we found SD scales more useful than the other techniques because they can both assess the degree of SD bias and control for it through partial correlations. Finally, we found that the impression management scale is efficient and effective for measuring and controlling SD bias in contexts of negative use of information technology (IT), such as addictive IT use, cyberbullying, and digital piracy.

This article is organized as follows. In the next section, we review SD bias and techniques for reducing it. Then, we present an empirical model for assessing and controlling SD bias and a discussion of our methods. Next, we examine four methods to detect SD bias and a covariate technique to reduce it. Finally, we conclude with theoretical, methodological, and practical contributions, along with limitations of our study and directions for future research.

Section snippets

Social desirability bias

Individuals prone to SD bias respond to questions in ways that make themselves viewed favorably, overreporting socially desirable behaviors or underreporting socially undesirable ones. SD has arguably been viewed as either substance (a personality trait) or style (a characteristic of survey items) [7,32]. In other words, SD can be seen as an individual difference variable or as a property of survey items [33]. Most of the existing SD scales are aimed at capturing substance rather than style [33,

Mobile loafing as an empirical setting

To evaluate the extent of SD bias under different conditions, we collected survey data in the context of mobile loafing [28–31]. The widespread diffusion of personal mobile devices (e.g., smartphones, tablets) has multiplied the situations and locations that permit individuals to access the Internet. As of January 2021, the number of global mobile Internet users was 4.32 billion, which is more than the number of desktop Internet users [53]. Employees’ use of the mobile Internet is

Measures

To ensure construct validity, all measures were adapted from previously validated scales. The scales used in our study are included in the Appendix. The survey included direct questions, indirect questions, SD scales, and demographic questions. Mobile Internet addiction was adapted from Kuem et al. [82]. The scales for perceived usefulness were adapted from Davis [66]. We measured neutralization and mobile-loafing intention by using scales adapted from Khansa et al. [29].

All the scales of the

Detection of social desirability bias

To assess SD bias, we used three methods: (1) comparisons between indirect and direct questioning [19], (2) correlations between study constructs and SD scales [3,4], and (3) an FMM [89].
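As an illustration of the first detection method, a comparison of direct and indirect questioning can be reduced to a paired mean test. The data below are simulated under an assumed underreporting effect; they are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 205  # matches the study's reported sample size; responses are simulated

# Hypothetical 7-point responses: indirect ("people like me") questions vs.
# direct self-reports, with an assumed underreporting shift of 0.5 points.
indirect = rng.normal(4.0, 1.0, size=n)
direct = indirect - 0.5 + rng.normal(0.0, 0.5, size=n)

# Paired t-statistic for the mean difference (direct minus indirect)
diff = direct - indirect
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
# A large negative t indicates systematic underreporting under direct
# questioning, one signal of SD bias.
```

The second method reduces to ordinary correlations between each construct and the SD scale score, significant correlations flagging susceptible constructs.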

Variance reduction rate

Using partial correlations, we examined the VRR of four relationships under three conditions: (1) biased independent variable and unbiased dependent variable, (2) unbiased independent variable and biased dependent variable, and (3) biased independent variable and biased dependent variable. Table 7 shows results of the VRR. Although the VRR is not a tool for assessing the extent of SD bias, it is useful to determine how many pairs of variables have a correlation that can be attributed to an
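Assuming the VRR is computed as the proportional reduction in shared variance when the SD score is partialed out (a common formulation, not necessarily the exact one used in the study), a minimal sketch on simulated data is:

```python
import numpy as np

def vrr(x, y, sd_score):
    """Variance reduction rate: share of the x-y common variance removed
    when the SD score is partialed out: (r^2 - r_partial^2) / r^2."""
    r = np.corrcoef(x, y)[0, 1]
    z = np.column_stack([np.ones_like(sd_score), sd_score])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    r_partial = np.corrcoef(rx, ry)[0, 1]
    return (r**2 - r_partial**2) / r**2

rng = np.random.default_rng(3)
n = 1000
sd_score = rng.normal(size=n)
x = rng.normal(size=n) + 0.5 * sd_score            # biased independent variable
y = 0.4 * x + rng.normal(size=n) + 0.5 * sd_score  # biased dependent variable
reduction = vrr(x, y, sd_score)  # here roughly half the shared variance is SD-driven
```

A high VRR for a pair of variables suggests that much of their observed correlation is attributable to the SD component rather than the substantive relationship.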

Summary of the study

The objective of this study was to assess the extent of SD bias in IS research and to find an effective way to control such bias where it exists. We conducted a survey study to explore how seriously SD bias could distort our inferences and to examine the efficacy of partial correlations in reliably estimating causal relationships. In particular, we studied mobile loafing based on data collected from 205 survey participants. The survey results showed that even when SD bias contaminates either an

CRediT authorship contribution statement

Dong-Heon (Austin) Kwak: Conceptualization, Data curation, Formal analysis, Writing – review & editing. Xiao Ma: Formal analysis, Funding acquisition. Sumin Kim: Writing – review & editing.

Declaration of Competing Interest

None.

Acknowledgments

We would like to thank the associate editor and two reviewers for their insightful comments and suggestions on the paper.

Dong-Heon (Austin) Kwak is an associate professor of information systems at Kent State University. He received his PhD in management information systems from the University of Wisconsin-Milwaukee in 2014. His research focuses on online donations, website design, persuasion, information processing, gamification, and IT training. He has published in Journal of the Association for Information Systems, Computers in Human Behavior, and Computers & Education, among other outlets.

References (99)

  • M. Silic et al.

    A new perspective on neutralization and deterrence: predicting shadow IT usage

    Inf. Manag.

    (2017)
  • M. Siponen et al.

    New insights into the problem of software piracy: the effects of neutralization, shame, and moral beliefs

    Inf. Manag.

    (2012)
  • R.A. Davis

    A cognitive-behavioral model of pathological internet use

    Comput. Hum. Behav.

    (2001)
  • J.P. Charlton et al.

    Distinguishing addiction and high engagement in the context of online game playing

    Comput. Hum. Behav.

    (2007)
  • S. Sharma

    I want it my way: using consumerism and neutralization theory to understand students’ cyberslacking behavior

    Int. J. Inf. Manage.

    (2020)
  • S.A. McIntire et al.

    Foundations of Psychological Testing

    (2000)
  • H.J. Arnold et al.

    The role of social-desirability response bias in turnover research

    Acad. Manag. J.

    (1985)
  • C.M. Hart et al.

    The balanced inventory of desirable responding short form

    Sage Open

    (2015)
  • M. Gergely et al.

    Social desirability bias in software piracy research: evidence from pilot studies

  • S.S. Kwan et al.

    Applying the randomized response technique to elicit truthful responses to sensitive questions in IS research: the case of software piracy behavior

    Inf. Syst. Res.

    (2010)
  • J.-B.E.M. Steenkamp et al.

    Socially desirable response tendencies in survey research

    J. Mark. Res.

    (2010)
  • R.Y.K. Chan et al.

    Does ethical ideology affect software piracy attitude and behaviour? An empirical investigation of computer users in China

    Eur. J. Inf. Syst.

    (2011)
  • T.K.H. Chan et al.

    Cyberbullying on social networking sites: the crime opportunity and affordance perspectives

    J. Manag. Inf. Syst.

    (2019)
  • M. Gergely et al.

    Social desirability bias in software piracy research

  • D.-H. Kwak et al.

    Measuring and controlling social desirability bias: applications in information systems research

    J. Assoc. Inf. Syst.

    (2019)
  • M. Sojer et al.

    Understanding the drivers of unethical programming behavior: the inappropriate reuse of internet-accessible code

    J. Manag. Inf. Syst.

    (2014)
  • A.A. Soror et al.

    Good habits gone bad: explaining negative consequences associated with the use of mobile phones from a dual-systems perspective

    Inf. Syst. J.

    (2015)
  • O. Turel et al.

    The benefits and dangers of enjoyment with social networking websites

    Eur. J. Inf. Syst.

    (2012)
  • O. Turel et al.

    Integrating technology addiction and use: an empirical investigation of online auction users

    (2011)
  • A. Vance et al.

    Increasing accountability through the user interface design artifacts: a new approach to addressing the problem of access-policy violations

    MIS Q.

    (2015)
  • V. Venkatesh et al.

    Children's internet addiction, family-to-work conflict, and job outcomes: a study of parent-child dyads

    MIS Q.

    (2019)
  • Y. Wang et al.

    Individual virtual competence and its influence on work outcomes

    J. Manag. Inf. Syst.

    (2011)
  • R.J. Fisher

    Social desirability bias and the validity of indirect questioning

    J. Consum. Res.

    (1993)
  • M.G. De Jong et al.

    Reducing social desirability bias through item randomized response: an application to measure underreported desires

    J. Mark. Res.

    (2010)
  • R.P. Bagozzi

    Measurement and meaning in information systems and organizational research: methodological and philosophical foundations

    (2011)
  • L. Lazuras et al.

    Predictors of doping intentions in elite-level athletes: a social cognition approach

    J. Sport Exerc. Psychol.

    (2010)
  • M.M. Linehan et al.

    Social desirability: its relevance to the measurement of hopelessness and suicidal behavior

    J. Consult. Clin. Psychol.

    (1983)
  • S.V. Paunonen et al.

    Socially desirable responding and its elusive effects on the validity of personality assessments

    J. Pers. Soc. Psychol.

    (2012)
  • E. Fernandez et al.

    Anger parameters in parolees undergoing psychoeducation: temporal stability, social desirability bias, and comparison with non-offenders

    Crim. Behav. Ment. Health

    (2017)
  • E. Fernandez et al.

    Social desirability bias against admitting anger: bias in the test-taker or bias in the test?

    J. Pers. Assess.

    (2019)
  • G.-W. Bock et al.

    Why employees do non-work-related computing in the workplace

    J. Comput. Inf. Syst.

    (2010)
  • L. Khansa et al.

    To cyberloaf or not to cyberloaf: the impact of the announcement of formal organizational controls

    J. Manag. Inf. Syst.

    (2017)
  • V.K.G. Lim

    The IT way of loafing on the job: cyberloafing, neutralizing and organizational justice

    J. Organ. Behav.

    (2002)
  • B.S. Connelly et al.

    A meta-analytic multitrait multirater separation of substance and style in social desirability scales

    J. Pers.

    (2016)
  • R.R. McCrae et al.

    Social desirability scales: more substance than style

    J. Consult. Clin. Psychol.

    (1983)
  • E. Perinelli et al.

    Use of social desirability scales in clinical psychology: a systematic review

    J. Clin. Psychol.

    (2016)
  • D.L. Paulhus

    Socially desirable responding: the evolution of a construct

  • A.L. Edwards

    The Social Desirability Variable in Personality Assessment and Research

    (1957)
  • J.S. Wiggins

    Interrelationships among MMPI measures of dissimulation under standard and social desirability instruction

    J. Consult. Psychol.

    (1959)

Xiao Ma is an assistant professor of Business Analytics in the C. T. Bauer College of Business at the University of Houston. He graduated with a PhD in business from the University of Wisconsin, Madison, concentrating on information systems and management. His previous and recent research focuses on identifying problematic online gambling behavior and proper interventions, behavior analytics in online labor and knowledge communities, and the economics of IS. His latest research focuses on healthcare analytics, natural experiments of digital system design change, and artificial intelligence and deep-learning algorithms. His research has appeared in premier information systems journals, including Information Systems Research, Journal of Management Information Systems, Decision Sciences, and Journal of the Association for Information Systems. Xiao is a two-time Best Associate Editor (AE) Award recipient of ICIS.

Sumin Kim is a doctoral student in Management & Information Systems at Mississippi State University. She received her master's degree in Business Administration at Kyungpook National University. Her research interests are in the areas of information security, computer-mediated communication, and virtual teams.

    This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
