Breaking the vicious cycle of algorithmic management: A virtue ethics approach to people analytics
Introduction
In recent years, a growing number of organizations have started using People Analytics (PA) to manage their workforce. PA refer to computational techniques that leverage digital data from multiple organizational areas to reflect different facets of members' behavior. Utilizing algorithmic technologies, PA analyze these data for patterns and present decision makers with more granular views of organizational resources, processes, people, and their performance (Huselid, 2018). This can help decision makers expand their visibility into the functioning of the business, and consequently make more informed and objective decisions (Barrett & Oborn, 2013; Zarsky, 2016).
Like other AI-based tools, PA are based on algorithmic technologies that rely on large datasets for their operation. Their application raises ethical questions that have been discussed in relation to algorithmic technologies in different fields, such as the potential of algorithms to promote bias in medicine (Joy & Clement, 2016), unduly influence political discourse (Mittelstadt, 2016), and engender racial discrimination in predictive policing practices (Jefferson, 2018).
Unlike other applications of algorithmic technologies (e.g., for financial trading, data security, weather forecasting, or identifying terrorists), PA are aimed at developing the behavior and character of people (Isson & Harriott, 2016) through a quantitative analysis of their conduct and psychological makeup. Their underlying logic can be traced back to principles that were originally elaborated through work conducted within the human relations movement (Bodie, Cherry, McCormick, & Tang, 2017); their application is intended to improve the work experience of organizational members, reduce their stress levels, increase their job satisfaction, and expand their opportunities for personal and professional growth and development (Guenole, Ferrar, & Feinzig, 2017; Isson & Harriott, 2016; Levenson, 2015; Marr, 2018).
Because of their focus on developing people's behavior and wellbeing, the application of PA in organizations is inherently related to human virtue, which refers to the cultivation of personal excellence that moves an individual towards the accomplishment of a good life (Aristotle, 1987; MacIntyre, 1967); that is, “a morally admirable life [characterized by] the development and exercise of our natural capacities, and especially those which characterize us as humans” (Hughes, 2003, p. 183); a life “worth seeking, choosing, building, and enjoying” (Vallor, 2016, p. 12). This understanding of virtue is elaborated in the moral philosophy of virtue ethics (MacIntyre, 1967), which focuses on people's character and emphasizes their capacity to exhibit good behavior in challenging situations. It places morality in embedded dispositions that individuals can acquire over time through involved participation in social practices and immersion in their community, and in individuals' voluntary and reflective effort to cultivate their moral character and fulfil their human potential (Aristotle, 1987). Thus, when people are hindered in developing their virtue, they are also hindered in pursuing a fulfilling life that is worth seeking.
Among the many ethical challenges arising from the application of algorithmic technologies, there are three, in particular, that may inhibit people from developing their virtue and are, thus, relevant in the context of PA: algorithmic opacity (Burrell, 2016), datafication of the workplace (Tsoukas, 1997), and the use of nudging to incentivize certain behaviors (Mateescu & Nguyen, 2019). Opacity, datafication, and nudging have been explored in the literature. For example, Pasquale (2015), O'Neil (2016) and Mittelstadt, Allo, Taddeo, Wachter, and Floridi (2016) identified the use of opaque algorithms as a concern which may create information asymmetries and inhibit oversight. Zwitter (2014), Baack (2015), and Mai (2016) discussed the adverse impacts of datafication on privacy. Sunstein (2015), Tufekci (2015), and Tene and Polonetsky (2017) described the harmful effects of nudging when used to manipulate people's behavior without their consent, for illicit purposes, or against their interests.
While these works make useful contributions in highlighting prominent issues arising from opacity, datafication, and nudging, we know little about their potential effects on people's ability to develop their character. The identified harmful aspects (e.g., information asymmetries and behavior manipulation) point to potential barriers to people's ability to cultivate their virtue, but these links have not been systematically explored. This is particularly problematic because, while PA usage is intended to aid workers' growth and wellbeing, practices stemming from the widespread application of PA that may adversely affect people's virtue are on the rise; for example, the use of algorithmic decision-making tools in organizations that curtail workers' capacity for voluntary action (Beer, 2017) and undermine their personal integrity (e.g., Leicht-Deobald et al., 2019).
To address this problem, we examine the use of PA in organizations from a virtue ethics approach and observe the effects of algorithmic opacity, datafication, and nudging on people's ability to cultivate their virtue. We find that the use of PA can create a vicious cycle of ethical challenges that adversely impact people's efforts to develop their virtue by pursuing internal goods, acquiring practical wisdom, and acting voluntarily. We maintain that these effects are not a necessary consequence of using PA. Rather, they are likely to manifest when organizations enact a set of frames that cast PA as a technology that is epistemologically superior to its human counterparts. The framing of technology can shape people's understanding of the nature of the technology, what it can be used for, and the likely outcomes of its utilization. Therefore, it can influence organizational action regarding the design and implementation of the technology (Orlikowski & Gash, 1994). Accordingly, we propose that organizations can mitigate the adverse effects of PA and help workers develop their virtue by reframing PA as a fallible companion technology, which in turn can give rise to new organizational roles and practices, and alternative technology design practices.
The rest of the paper is organized as follows. First, we describe the lens of virtue ethics. Then we examine how the ethical challenges of opacity, datafication, and nudging hinder organizational members' ability to develop their virtue. Next, we discuss a view of PA as a fallible companion technology that can inform a more ethical use of this technology. We finish by proposing avenues for future research.
Section snippets
An introduction to virtue ethics
In order to detail and understand the ethical consequences of PA, we draw on a virtue ethics approach, which was developed by Aristotle. Virtue ethics highlight personal characteristics in determining the ethical nature of individuals and their actions (Aristotle, 1987). Virtue ethics focus on the virtuous agent rather than on right actions or on what anyone should do. They are therefore different from utilitarian ethics, which emphasize the consequences of actions – specifically their capacity
An ethical consideration of PA: a virtue ethics perspective
As we described at the outset of the paper, the application of PA is intended to help workers achieve personal and professional growth. Therefore, it engenders ethical issues that are particularly pertinent to people's ability to cultivate their character and exhibit virtue: algorithmic opacity, datafication of the workplace, and the use of nudging. Despite the extant discussion of these issues in the literature, their significance has not been examined either in the context of PA or in
Mitigating the adverse effects of PA and fostering an ethical approach to their utilization
Above we examined the possible adverse effects of PA from a virtue ethics perspective. Our analysis of the ethical challenges associated with PA should not be misconstrued as an outright repudiation of this technology or as a deterministic statement that using PA will always have detrimental effects. We maintain that the ethical challenges described above are neither a necessary outcome of the use of PA nor a true manifestation of their ‘spirit’ (DeSanctis & Poole, 1994). Rather, they are likely
Conclusions and further research
In this paper, we have examined the ethical consequences of PA from a virtue ethics approach. Because virtue ethics focus on people's capacity to develop their character and flourish, they offer a uniquely pertinent perspective from which to ethically evaluate PA, whose implementation and use are commonly justified by the need to expand workers' opportunities for personal and professional growth (Guenole et al., 2017; Isson & Harriott, 2016; Levenson, 2015; Marr, 2018). We have identified the
CRediT authorship contribution statement
Uri Gal: Conceptualization, Writing - original draft, Writing - review & editing. Tina Blegind Jensen: Conceptualization, Writing - original draft, Writing - review & editing. Mari-Klara Stein: Conceptualization, Writing - original draft, Writing - review & editing.
References (98)
- et al. Collective mindfulness in post-implementation IS adaptation processes. Information and Organization (2016).
- et al. Envisioning E-HRM and strategic HR: Taking seriously identity, innovation practice, and service. Journal of Strategic Information Systems (2013).
- et al. Lost in translation? An actor-network approach to HRIS implementation. Journal of Strategic Information Systems (2013).
- et al. Working and organizing in the age of the learning algorithm. Information and Organization (2018).
- The limits of privacy in automated profiling and data mining. Computer Law & Security Review (2011).
- The tyranny of light: The temptations and the paradoxes of the information society. Futures (1997).
- et al. People analytics - a scoping review of conceptual boundaries and value propositions. International Journal of Information Management (2018).
- Misplacing privacy. Journal of Information Ethics (2001).
- Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values (2016).
- et al. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society (2016).
- The Nicomachean ethics.
- Ethical implications of big data analytics.
- Datafication and empowerment: How the open data movement re-articulates notions of democracy, participation, and journalism. Big Data & Society.
- The algorithm game. Notre Dame Law Review.
- Challenges of EDI adoption for electronic trading in the London insurance market. European Journal of Information Systems.
- Management as a domain-relative practice that requires and develops practical wisdom. Business Ethics Quarterly.
- The social power of algorithms. Information, Communication & Society.
- Envisioning the power of data analytics. Information, Communication & Society.
- People analytics gaining speed.
- Companion technology: A paradigm shift in human-technology interaction.
- The law and policy of people analytics.
- Bias in algorithmic filtering and personalization. Ethics and Information Technology.
- How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society.
- Reliability, mindfulness, and information systems. MIS Quarterly.
- Investing in people: Financial impact of human resource initiatives.
- Design in the era of the algorithm.
- New games, new rules: Big data and the changing context of strategy. Journal of Information Technology.
- When nudges are forever: Inertia in the Swedish premium pension plan. AEA Papers and Proceedings.
- Big data: A revolution that will transform how we work, think and live.
- Make better decisions. Harvard Business Review.
- Technology frames and framing: A socio-cognitive analysis of requirements determination. MIS Quarterly.
- Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science.
- Approaches to building big data literacy.
- Responsible artificial intelligence: Designing AI for human values. ITU Journal, ICT Discoveries.
- Designing AI systems that obey our laws and values. Communications of the ACM.
- Power to the new people analytics. McKinsey Quarterly.
- The fourth revolution: How the infosphere is reshaping human reality.
- From “economic Man” to behavioral economics.
- People analytics in the age of big data: An agenda for IS research.
- Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science.
- Fast and frugal heuristics: The tools of bounded rationality.
- Algorithmic realism: Expanding the boundaries of algorithmic thought.
- The power of people: Learn how successful organizations use workforce analytics to improve performance.
- Data’s intimacy: Machinic sensibility and the quantified self. Communication.
- Aristotle on ethics.
- On virtue ethics.
- The science and practice of workforce analytics: Introduction to the HRM special issue. Human Resource Management.
Cited by (130)
- Toward a better digital future: Balancing the utopic and dystopic ramifications of digitalization. Journal of Strategic Information Systems (2024).
- Algorithmic management of crowdworkers: Implications for workers’ identity, belonging, and meaningfulness of work. Computers in Human Behavior (2024).
- Employees’ acceptance of AI-based emotion analytics from speech on a group level in virtual meetings. Technology in Society (2024).
- Determinants of effective HR analytics Implementation: An In-Depth review and a dynamic framework for future research. Journal of Business Research (2024).