Breaking the vicious cycle of algorithmic management: A virtue ethics approach to people analytics

https://doi.org/10.1016/j.infoandorg.2020.100301

Highlights

  • More businesses are using People Analytics (PA) to manage their workforce

  • We describe three ethical consequences of PA: opacity, datafication of the workplace, and nudging

  • We examine the adverse effects of these challenges on members' ability to develop their virtue

  • We offer three ways to mitigate these challenges: reframing PA, adopting new roles, and using alternative design principles

Abstract

The increasing use of People Analytics to manage people in organizations ushers in an era of algorithmic management. People analytics are said to allow decision-makers to make evidence-based, bias-free, and objective decisions, and expand workers' opportunities for personal and professional growth. Drawing on a virtue ethics approach, we argue that the use of people analytics in organizations can create a vicious cycle of ethical challenges - algorithmic opacity, datafication, and nudging - which limit people's ability to cultivate their virtue and flourish. We propose that organizations can mitigate these challenges and help workers develop their virtue by reframing people analytics as a fallible companion technology, introducing new organizational roles and practices, and adopting alternative technology design principles. We discuss the implications of this approach for organizations and for the design of people analytics, and propose directions for future research.

Introduction

In recent years, a growing number of organizations have started using People Analytics (PA) to manage their workforce. PA refer to computational techniques that leverage digital data from multiple organizational areas to reflect different facets of members' behavior. Utilizing algorithmic technologies, PA analyze these data for patterns and present decision makers with more granular views of organizational resources, processes, people, and their performance (Huselid, 2018). This can help decision-makers to expand their visibility into the functioning of the business, and consequently make more informed and objective decisions (Barrett & Oborn, 2013; Zarsky, 2016).

Like other AI-based tools, PA are based on algorithmic technologies that rely on large datasets for their operation. Their application raises ethical questions that have been discussed in relation to algorithmic technologies in different fields, such as the potential of algorithms to promote bias in medicine (Joy & Clement, 2016), unduly influence political discourse (Mittelstadt, 2016), and engender racial discrimination in predictive policing practices (Jefferson, 2018).

Unlike other applications of algorithmic technologies (e.g., for financial trading, data security, weather forecasting, or identifying terrorists), PA are aimed at developing the behavior and character of people (Isson & Harriott, 2016) through a quantitative analysis of their conduct and psychological makeup. Their underlying logic can be traced back to principles that were originally elaborated through work conducted within the human relations movement (Bodie, Cherry, McCormick, & Tang, 2017); their application is intended to improve the work experience of organizational members, reduce their stress levels, increase their job satisfaction, and expand their opportunities for personal and professional growth and development (Guenole, Ferrar, & Feinzig, 2017; Isson & Harriott, 2016; Levenson, 2015; Marr, 2018).1

Because of their focus on developing people's behavior and wellbeing, the application of PA in organizations is inherently related to human virtue, which refers to the cultivation of personal excellence that moves an individual towards the accomplishment of a good life (Aristotle, 1987; MacIntyre, 1967); that is, “a morally admirable life [characterized by] the development and exercise of our natural capacities, and especially those which characterize us as humans” (Hughes, 2003, p. 183); a life “worth seeking, choosing, building, and enjoying” (Vallor, 2016, p. 12). This understanding of virtue is elaborated in the moral philosophy of virtue ethics (MacIntyre, 1967), which focuses on people's character and emphasizes their capacity to exhibit good behavior in challenging situations. It places morality in embedded dispositions that individuals can acquire over time through involved participation in social practices and immersion in their community, and in individuals' voluntary and reflective effort to cultivate their moral character and fulfil their human potential (Aristotle, 1987). Thus, when people are hindered in developing their virtue, they are also hindered in pursuing a fulfilling life that is worth seeking.

Among the many ethical challenges arising from the application of algorithmic technologies, there are three, in particular, that may inhibit people from developing their virtue and are, thus, relevant in the context of PA: algorithmic opacity (Burrell, 2016), datafication of the workplace (Tsoukas, 1997), and the use of nudging to incentivize certain behaviors (Mateescu & Nguyen, 2019). Opacity, datafication, and nudging have been explored in the literature. For example, Pasquale (2015), O'Neil (2016) and Mittelstadt, Allo, Taddeo, Wachter, and Floridi (2016) identified the use of opaque algorithms as a concern which may create information asymmetries and inhibit oversight. Zwitter (2014), Baack (2015), and Mai (2016) discussed the adverse impacts of datafication on privacy. Sunstein (2015), Tufekci (2015), and Tene and Polonetsky (2017) described the harmful effects of nudging when used to manipulate people's behavior without their consent, for illicit purposes, or against their interests.

While these works make useful contributions in highlighting prominent issues arising from opacity, datafication, and nudging, we know little about their potential effects on people's ability to develop their character. The identified harmful aspects (e.g., information asymmetries and behavior manipulation) point to potential barriers to people's ability to cultivate their virtue, but these links have not been systematically explored. This is particularly problematic because, although PA usage is intended to aid workers' growth and wellbeing, practices stemming from the widespread application of PA that may adversely affect people's virtue are on the rise; for example, the use of algorithmic decision-making tools in organizations that curtail workers' capacity for voluntary action (Beer, 2017) and undermine their personal integrity (e.g., Leicht-Deobald et al., 2019).

To address this problem, we examine the use of PA in organizations from a virtue ethics approach and observe the effects of algorithmic opacity, datafication, and nudging on people's ability to cultivate their virtue. We find that the use of PA can create a vicious cycle of ethical challenges that adversely impact people's efforts to develop their virtue by pursuing internal goods, acquiring practical wisdom, and acting voluntarily. We maintain that these effects are not a necessary consequence of using PA. Rather, they are likely to manifest when organizations enact a set of frames that cast PA as a technology that is epistemologically superior to its human counterparts. The framing of a technology can shape people's understanding of its nature, what it can be used for, and the likely outcomes of its utilization. It can therefore influence organizational action regarding the design and implementation of the technology (Orlikowski & Gash, 1994). Accordingly, we propose that organizations can mitigate the adverse effects of PA and help workers develop their virtue by reframing PA as a fallible companion technology, which in turn can give rise to new organizational roles and practices, and alternative technology design principles.

The rest of the paper is organized as follows. First, we describe the lens of virtue ethics. Then we examine how the ethical challenges of opacity, datafication, and nudging hinder organizational members' ability to develop their virtue. Next, we discuss a view of PA as a fallible companion technology that can inform a more ethical use of this technology. We finish by proposing avenues for future research.

Section snippets

An introduction to virtue ethics

In order to detail and understand the ethical consequences of PA, we draw on a virtue ethics approach, which was developed by Aristotle. Virtue ethics highlight the role of personal characteristics in determining the ethical nature of individuals and their actions (Aristotle, 1987). Virtue ethics focus on the virtuous agent rather than on right actions or on what anyone should do. They are therefore different from utilitarian ethics, which emphasize the consequences of actions – specifically their capacity

An ethical consideration of PA: a virtue ethics perspective

As we described at the outset of the paper, the application of PA is intended to help workers achieve personal and professional growth. Therefore, it engenders ethical issues that are particularly pertinent to people's ability to cultivate their character and exhibit virtue: algorithmic opacity, datafication of the workplace, and the use of nudging. Despite the extant discussion of these issues in the literature, their significance has not been examined either in the context of PA or in

Mitigating the adverse effects of PA and fostering an ethical approach to their utilization

Above we examined the possible adverse effects of PA from a virtue ethics perspective. Our analysis of the ethical challenges associated with PA should not be misconstrued as an outright repudiation of this technology or as a deterministic statement that using PA will always have detrimental effects. We maintain that the ethical challenges described above are neither a necessary outcome of the use of PA nor a true manifestation of their ‘spirit’ (DeSanctis & Poole, 1994). Rather, they are likely

Conclusions and further research

In this paper, we have examined the ethical consequences of PA from a virtue ethics approach. Because virtue ethics focus on people's capacity to develop their character and flourish, they offer a uniquely pertinent perspective from which to ethically evaluate PA, whose implementation and use are commonly justified by the need to expand workers' opportunities for personal and professional growth (Guenole et al., 2017; Isson & Harriott, 2016; Levenson, 2015; Marr, 2018). We have identified the

CRediT authorship contribution statement

Uri Gal: Conceptualization, Writing - original draft, Writing - review & editing. Tina Blegind Jensen: Conceptualization, Writing - original draft, Writing - review & editing. Mari-Klara Stein: Conceptualization, Writing - original draft, Writing - review & editing.

References (98)

  • Aristotle (1987). The Nicomachean ethics.
  • Asadi-Someh, I., et al. Ethical implications of big data analytics.
  • Baack, S. (2015). Datafication and empowerment: How the open data movement re-articulates notions of democracy, participation, and journalism. Big Data & Society.
  • Bambauer, J., et al. (2018). The algorithm game. Notre Dame Law Review.
  • Barrett, M. (1999). Challenges of EDI adoption for electronic trading in the London insurance market. European Journal of Information Systems.
  • Beabout, G.R. (2012). Management as a domain-relative practice that requires and develops practical wisdom. Business Ethics Quarterly.
  • Beer, D. (2017). The social power of algorithms. Information, Communication & Society.
  • Beer, D. (2018). Envisioning the power of data analytics. Information, Communication & Society.
  • Bersin, J., et al. People analytics gaining speed.
  • Biundo, S., et al. (2017). Companion technology: A paradigm shift in human-technology interaction.
  • Bodie, M.T., et al. (2017). The law and policy of people analytics.
  • Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology.
  • Bruns, H., Kantorowicz-Reznichenko, E., Klement, K., Luistro Jonsson, M., & Rahali, B. (2018). Can nudges be...
  • Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society.
  • Butler, B.S., et al. (2006). Reliability, mindfulness, and information systems. MIS Quarterly.
  • Cascio, W.F., et al. (2008). Investing in people: Financial impact of human resource initiatives.
  • Clark, J. Design in the era of the algorithm.
  • Constantiou, D.I., et al. (2015). New games, new rules: Big data and the changing context of strategy. Journal of Information Technology.
  • Cronqvist, H., et al. (2018). When nudges are forever: Inertia in the Swedish premium pension plan. AEA Papers and Proceedings.
  • Cukier, K., et al. (2013). Big data: A revolution that will transform how we work, think and live.
  • Davenport, T. (2009). Make better decisions. Harvard Business Review.
  • Davidson, E. (2002). Technology frames and framing: A socio-cognitive analysis of requirements determination. MIS Quarterly.
  • DeSanctis, G., et al. (1994). Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science.
  • D’Ignazio, C., et al. (2015). Approaches to building big data literacy.
  • Dignum, V. (2017). Responsible artificial intelligence: Designing AI for human values. ITU Journal, ICT Discoveries.
  • Etzioni, A., et al. (2016). Designing AI systems that obey our laws and values. Communications of the ACM.
  • Fecheyr-Lippens, B., et al. (2015). Power to the new people analytics. McKinsey Quarterly.
  • Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality.
  • Fox, J. From “economic Man” to behavioral economics.
  • Gal, U., et al. People analytics in the age of big data: An agenda for IS research.
  • Gershman, S.J., et al. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science.
  • Gigerenzer, G. Fast and frugal heuristics: The tools of bounded rationality.
  • Green, B., et al. Algorithmic realism: Expanding the boundaries of algorithmic thought.
  • Guenole, N., et al. (2017). The power of people: Learn how successful organizations use workforce analytics to improve performance.
  • Haraway, D. (2003). Cyborgs to companion species: Reconfiguring kinship in technoscience. In Chasing technoscience:...
  • Hong, S.-h. (2016). Data’s intimacy: Machinic sensibility and the quantified self. Communication.
  • Hughes, G.J. (2003). Aristotle on ethics.
  • Hursthouse, R. (1999). On virtue ethics.
  • Huselid, M.A. (2018). The science and practice of workforce analytics: Introduction to the HRM special issue. Human Resource Management.