1 Introduction

Western feminism, women’s liberation movements, and today’s gender equity debates are undeniably linked to the notion of humanism. Although the role of women in society was debated among some of the Enlightenment’s contemporary thinkers, its guiding narrative of liberty and progress served as an ally for advancing women’s rights (Ramazanoğlu & Holland, 2002; Lettow, 2017). Throughout history, the inclusion of women in the concept of the human and in human rights debates nevertheless had to be fought for. One of the most prominent documents of this struggle is the “Declaration of the Rights of Woman and of the [Female] Citizen” by Olympe de Gouges, dating back to 1791 in France. Among other progressive demands, her manifesto declared women to be equals of men and worthy of the same rights. De Gouges demanded equality in citizenship status—something the famous document of the French Revolution, the “Declaration of the Rights of Man and of the [Male] Citizen” from 1789, did not do (Cokely, 2018; Gouges & Fraisse, 2021). Scholars point out that the term man in the latter declaration refers to Western society’s Christian white man, a rather narrow meaning that contradicts the universal character of the declaration (Taylor, 1999).

Hence, it was crucial for the first wave of feminism to interrogate critically whom the use of the terms man and human in debates on universal human rights actually included. Consequently, first-wave feminists sought to expand the terminology to include women.Footnote 1 Prominently, Simone de Beauvoir interrogated the relation between man, woman, and human in The Second Sex, which was first published in 1949 (de Beauvoir, 1949) and later became foundational for second-wave feminism. For de Beauvoir, to be considered first and foremost human is liberating for women. The title of the two book volumes points to a central thesis of her work—women are defined and materialized as the other of man. One of the most quoted sentences of her work is “one is not born, but becomes a woman” (de Beauvoir, 2010, p. 283). This perspective, that the category of woman is constructed along sociocultural contexts, makes social change possible. Furthermore, it establishes solidarity between women based on the shared experience of subjugation and ultimately makes the case for feminism as humanism (Johnson, 1993). Donna Haraway has prominently stated that, even considering their diversity, all modern, Western concepts of feminism are rooted in de Beauvoir’s formative sentence (Haraway, 1991).

The relationship between the gender order, feminism, and humanism has never been straightforward, however. Just as the use of man for all humans is problematic in historical retrospection, so is woman as a unifying term and basis for political action. As early as 1851, Sojourner Truth delivered her powerful speech “Ain’t I a woman?” at the Women’s Rights Convention in Ohio. The speech challenged the exclusion of African American men and women from debates about legal rights—debates that continued through and after the US-American Civil War. Moreover, and historically significant, Truth brought together women’s issues, the rights of Black women in particular, and the fight against slavery (Truth, 1851). This challenged social justice movements that focused on either women’s rights or racial justice.

Fast-forwarding to third (and fourth) wave feminism—the concept of the universal human or woman has been criticized by political activists, artists, and scholars alike. Black, Indigenous, and People of Color, notably Women of Color, have brought attention to whose identity and lived experience is included or excluded in political struggles and in academic knowledge production alike (hooks, 1981; Hill Collins, 1990; Combahee River Collective, 2001; Green, 2007). Furthermore, the failure to recognize other social categories, such as class (Acker, 2006) or disability (McRuer, 2006; Jenks, 2019), in relation to gender has been pointed out by various thinkers, as has the critique of the binary, heteronormative concept of gender itself (Stryker & Blackston, 2022; Muñoz, 1999).

This complicated history, the ambivalent legacy of the Enlightenment for liberation movements, is important to keep in mind when we speak about Digital Humanism today. In the following, we make the tension between humanism and feminism productive and explore how feminist theory and gender research can enrich debates on digital humanism. In addition, as we will see further on, who is accounted for when talking about the human is highly topical in debates on digital transformation and has concrete impact on the development, use, and effects of digital technology.

2 Intersectional Gender Research, Feminist Theory, and Digital Technology

2.1 Intersectional Gender Research

Gender studies is an interdisciplinary, broad research field with diverse roots in the social sciences and the humanities. Its common ground is an analytical approach to how gender as a social category is constructed and unfolds in interaction with societal, cultural, and political contexts. The question of how power is distributed, materialized, and mediated through gender, race, sexuality, class, citizenship, age, and ability is central here. To address multiple forms of belonging and to understand how these may result in differing forms of oppression, the concept of intersectionality was coined. Intersectionality is informed by Black, Indigenous, and people of color (feminist) scholarship, activism, literature, and art (Lorde, 2001; hooks, 1981; Hill Collins, 1990; Snyder, 2014). It interrogates the universal concept of the human (and woman) by asking who is really included and by examining the limits and drawbacks of social categorization. Notably, the American legal scholar and civil rights activist Kimberlé Crenshaw showed that existing anti-discrimination laws did not work for Black women since the laws did not recognize multiple causes of discrimination. Crenshaw used the image of a traffic intersection to illustrate that social categories, such as gender and race, do not exist separately but rather are interdependent in the way a person is socially positioned (Crenshaw, 1989).

In Sect. 2.3, we will come back to explore how gender and digital technology interact and how useful an intersectional perspective is for understanding the relation between structural inequalities and digital transformation.

2.2 Feminist Theory and Epistemologies

Gender studies is informed by heterogeneous strands of feminist and critical theory. Importantly, feminist and critical race studies have analyzed the history of science, demonstrating its failure to include marginalized people and perspectives. These fields show how scientific knowledge and the dynamics of intersectional gender relations intertwine, and they provide examples of how individuals deemed the other may suffer harm (Schiebinger, 1989; Gowder, 2015; Zuberi & Silva, 2008).

Marginalization of perspectives and people may mean exclusion from scientific inquiry, denial of epistemic authority, production of harmful theories, stereotyping of the marginalized group, or a lack of acknowledgment of the structural inequalities that affect the group (cf. Anderson, 2020). Ultimately, this can lead to biased knowledge and artifact production and hinder scientific and product innovation.

Feminist epistemologies allow us to analyze the role intersectional gender conceptions play in our ways of knowing. According to feminist theory, how and through which means knowledge is formed, and what counts as knowledge, is always situated and context-dependent.Footnote 2 In this regard, situated knowledges is a central concept of feminist epistemologies; it allows one to reflect upon the position from which and by whom knowledge is formed and to acknowledge that all knowledge is partial and forms of knowledge are manifold (Haraway, 1988). Two aspects are noteworthy here. First, the partial perspective questions universal knowledge claims and instead offers what Haraway calls “objectivity as positioned rationality” (ibid., p. 590). Building on this, feminist standpoint theory has developed the concept of “strong objectivity” (Harding, 1986, 1992): Sandra Harding questions the proclaimed neutrality of scientific knowledge production and instead introduces reflexivity about the researcher’s standpoint to address and counter possible social bias (ibid.). Second, it is this positioning that makes scientific and technological knowledge and artifact production accountable in the first place. In debates on digital humanism, calls for the accountability of technology have gained new urgency. Realizing accountability is indispensable for addressing ethical, legal, and social implications in information technology (IT) and artificial intelligence (AI) research and development (Larsson et al., 2019).

2.3 How Gender and Technology Interact

Today, digital technology impacts all life domains, and therefore we need to take a closer look at what this means for social equity. The relationship between the intersectional gender order and technology is complex and multifaceted. We can identify three main perspectives on how to approach the topic: first, unequal participation in technology research, development, and distribution; second, technology’s impact on how gender is shaped, lived, and experienced; and third, how technology itself is gendered, racialized, classed, etc. These perspectives are not independent but impact each other as we will see in the examples provided.

Unequal participation in the technological field is often the first issue that comes to mind when gender and technology are mentioned together. For Western countries and the Global North, gender and BIPOCFootnote 3 inequality in IT research and development is a persistent challenge (Kapor Center and ASU CGEST, 2018; Charleston et al., 2014; Stoet & Geary, 2018). Access to digital technology is also unfairly distributed—globally, but also locally, reflecting and amplifying social inequalities (Goedhart et al., 2019; Choi et al., 2022).

By taking up science and technology studies’ (STS) understanding of the co-construction of society and technology (Bijker et al., 1987), feminist scholars have analyzed what this means for the gender order (Wajcman, 2000). Related to the topic of unequal participation, one important strand of work is making marginalized people, perspectives, and experiences visible. For example, the role of Black women scientists in computing has only lately received attention, prominently through the book (and film) Hidden Figures (Lee Shetterly, 2016).Footnote 4 Furthermore, the hardware manufacturing labor that makes digital transformation possible, performed largely by people (women) of color under often problematic working conditions, remains hidden from users of technology (Nakamura, 2014).

Second, digital products and services in use strongly impact people’s well-being. One example is menstrual cycle tracking apps, which form a contested zone between gender, politics, healthcare, and the data economy. Notably, menstruation tracking was not implemented in early health monitoring technology.Footnote 5 Today, there are integrated as well as stand-alone apps that could be used to promote research into menstrual health, provide a form of empowerment, and promote agency and self-determination over a person’s health status. On the other hand, self-tracking shapes the experience of menstruation as a process that needs to be monitored and controlled and should meet normalized patterns (Hohmann-Marriott, 2021). In addition, popular apps are built on business models that rely on extracting data, which leads to a lack of privacy and transparency and leaves lay users with little possibility to intervene in or control their data: “To perform their explicit functions, menstrual apps collect massive amounts of highly personal data. This data creates vulnerability; for example, data can reveal nonconforming menstruators (i.e., transgender or with health conditions), or information can be used to flag suspected pregnancy or termination of pregnancy” (ibid.). Depending on the political context, this extraction and exploitation of sensitive health data can be dangerous and deeply affect people’s lives based on their gender, sexual identity, orientation, and choices.Footnote 6 Hence, calls for health app development that takes sociopolitical context, unequal power relations, and values such as non-discrimination and self-determination into account are important, as are policy regulations (Fox & Epstein, 2020).

The outlined issues reach beyond the given examples. Shoshana Zuboff has prominently stated that the interplay between data-driven digital technology, big corporations, predominant business models, and lack of regulatory power has led to “the age of surveillance capitalism” (Zuboff, 2018).

Third, as we have noted before, questioning the neutrality of technology is central to science and technology studies. Feminist and postcolonial STS scholars have analyzed the role gender, race, age, and class play in technology design and found that services and products can promote inequalities (but could also serve to alleviate them) (Harding, 2011). In the 1990s, studies on domestic technology—such as the microwave oven, vacuum cleaning systems, or washing machines—showed how appliances are informed by gender roles and reinforce the gendered division of labor at home and in manufacturing (Cockburn & Ormrod, 1993; Cockburn & Fürst-Dilić, 1994). Today, smart home technology design should anticipate and counter misuse in the context of domestic abuse and partner violence (Leitão, 2019).

In recent years, worrisome examples have brought broader attention to ethical aspects of IT/AI design—particularly to machine learning technology, a subfield of AI. These data-driven systems can mirror social bias and lead to an amplification of social inequalities in domains such as the job sector, health and social services, and the justice system, among others (Eubanks, 2017; Wachter-Boettcher, 2017). Current AI systems therefore vividly demonstrate how interwoven society and technology are. In recent years, researchers from technical and social disciplines have increasingly made an effort to address questions of fairness and social justice in AI (Binns, 2018; Mehrabi et al., 2021; Draude et al., 2022). Social bias leading to problematic gendered, racialized, and classed effects of technology has been linked to multiple causes: the quality of the training data, constraints and limitations in algorithms and modeling, and emergent bias arising from the context of use (Friedman & Nissenbaum, 1996; Draude et al., 2020).Footnote 7

In simplified terms, machine learning technology automatically produces algorithms by training statistical models on existing datasets. As more data becomes available, such systems may even adapt their behavior. Machine learning systems are utilized to analyze vast amounts of data and predict future outcomes. This also means that these systems can inherit biases from the datasets they are trained on.Footnote 8
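A minimal sketch can make this inheritance concrete. The following example uses entirely synthetic data and invented feature names (gender, skill): a classifier trained on historically biased hiring decisions reproduces the historical penalty for new, equally qualified applicants.

```python
# Minimal illustration (synthetic data): a classifier trained on biased
# historical decisions reproduces that bias for new applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)      # skill is equally distributed in both groups
# Historical hiring: skill mattered, but group B was systematically penalized.
hired = (skill - 0.8 * gender + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill, differing only in group membership:
applicants = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(applicants)[:, 1])
# The model assigns a markedly lower hiring probability to group B:
# the historical penalty has been learned and is reproduced.
```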

In a much-noted study, Bolukbasi et al. show how word embeddings may reinforce gender stereotypes.Footnote 9 Furthermore, the authors provide a methodology for removing gender stereotypes while staying true to word meanings and associations (Bolukbasi et al., 2016). If algorithms are trained on datasets in which gender-stereotypical attributions predominate, with proximity between terms such as woman and nurse but man and doctor, the software learns these attributions and reproduces them in the future. For the study, the authors analyzed an artificial neural network trained by Google, which used over 3 million words from Google News articles as its database. The aim was to derive language patterns that can be represented mathematically (as vectors in a vector space). Some of the attributions placed as extremes with respect to feminine pronouns are “homemaker, nurse, receptionist, librarian, socialite, hairdresser, nanny, bookkeeper, stylist, housekeeper”; those with respect to masculine pronouns are “maestro, skipper, protégé, philosopher, captain, architect, financier, warrior, broadcaster, magician” (ibid., p. 4357). Following this, Bolukbasi et al. presented automatically generated analogies between different terms to Amazon Mechanical Turk crowdworkers for review. For each word embedding, the workers were asked to decide whether it constituted a gender stereotype or a gender-appropriate analogy. Gender-stereotypical she-he analogies included “sewing-carpentry, nurse-surgeon, blond-burly, cupcakes-pizza, lovely-brilliant, softball-baseball,” etc., while gender-appropriate she-he analogies were found in “queen-king, waitress-waiter, sister-brother, mother-father,” etc. (ibid., p. 4357).
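To make the mechanism tangible, here is a sketch with invented three-dimensional toy vectors (real embeddings such as the Google News word2vec model have 300 dimensions and are learned from data; the numbers here are made up to mimic the effect the study measures). Projecting occupation words onto a “he minus she” direction reveals the gender association:

```python
# Toy illustration of gender associations in word embeddings.
import numpy as np

emb = {
    "he":     np.array([ 1.0, 0.1, 0.2]),
    "she":    np.array([-1.0, 0.1, 0.2]),
    "doctor": np.array([ 0.4, 0.8, 0.1]),
    "nurse":  np.array([-0.5, 0.7, 0.1]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The "gender direction" is approximated by the difference he - she.
g = emb["he"] - emb["she"]

# Positive values lean toward "he", negative toward "she".
for word in ("doctor", "nurse"):
    print(word, round(cos(emb[word], g), 2))
# doctor 0.44, nurse -0.58: the occupations carry a gender association.
```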

This example of word embeddings not only shows how technical development can perpetuate discrimination—the study also makes prevalent social bias visible in the first place and offers methods for debiasing. Because gender bias is represented as a mathematical model, mathematical methods can then also be used to alleviate it.Footnote 10 The authors also point to critiques and potential drawbacks of such debiasing methods:

One perspective on bias in word embeddings is that it merely reflects bias in society, and therefore one should attempt to debias society rather than word embeddings. However, by reducing the bias in today’s computer systems (or at least not amplifying the bias), which is increasingly reliant on word embeddings, in a small way debiased word embeddings can hopefully contribute to reducing gender bias in society. At the very least, machine learning should not be used to inadvertently amplify these biases, as we have seen can naturally happen. In specific applications, one might argue that gender biases in the embedding (e.g., computer programmer is closer to he) could capture useful statistics and that, in these special cases, the original biased embeddings could be used. However, given the potential risk of having machine learning algorithms that amplify gender stereotypes and discriminations, we recommend that we should err on the side of neutrality and use the debiased embeddings provided here as much as possible. (ibid, p. 4363)
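In a minimal form, the neutralization step at the core of such debiasing can be sketched as follows. This is a simplification to a single gender direction; the published method identifies a gender subspace and adds an equalization step. The toy vectors continue the illustrative example above:

```python
# Sketch of the neutralization step: remove a word vector's component
# along the gender direction (simplified from Bolukbasi et al.'s method).
import numpy as np

def neutralize(v, g):
    """Remove the component of v that lies along the gender direction g."""
    g = g / np.linalg.norm(g)
    return v - (v @ g) * g

# Toy vectors from the sketch above:
he, she = np.array([1.0, 0.1, 0.2]), np.array([-1.0, 0.1, 0.2])
doctor = np.array([0.4, 0.8, 0.1])

g = he - she
doctor_debiased = neutralize(doctor, g)
print(doctor_debiased @ (g / np.linalg.norm(g)))
# 0.0: 'doctor' no longer has a component along the gender direction.
```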

As we have learned above, gender is one possible factor of social inequality. Further categories, such as race, class, disability, and age, intersect with gender. Already as a computer science student, Joy Buolamwini found that facial recognition technology would not recognize her face. The technology, at the time, did not work for Black women, while a white mask with no human features did (Buolamwini, 2016). This shows how technology can dehumanize a person based on skin tone. In her study “Gender Shades,” Buolamwini, together with Timnit Gebru, further analyzed commercial facial recognition technology. They found that women with dark skin or non-Western-classified facial features are misidentified most often. However, men with dark skin or non-Western-classified facial features are also identified more poorly than women with light skin (Buolamwini & Gebru, 2018). Other studies have shown that visual data used in AI systems perpetuates cultural and ethnic stereotypes (Zou & Schiebinger, 2018).
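The methodological core of such an audit is disaggregated evaluation: error rates are reported per intersectional subgroup rather than as a single aggregate. A minimal sketch with synthetic numbers (invented here, loosely echoing the pattern “Gender Shades” reports) shows why aggregates mislead:

```python
# Sketch of a disaggregated evaluation: overall accuracy can hide large
# error gaps between intersectional subgroups. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-group error rates echoing the pattern Gender Shades found:
error_rate = {"light-male": 0.01, "light-female": 0.07,
              "dark-male": 0.12, "dark-female": 0.35}

n = 500  # samples per group
for group, err in error_rate.items():
    correct = rng.random(n) > err
    print(f"{group:13s} accuracy: {correct.mean():.2f}")

# Reporting only the aggregate accuracy (about 0.86 here) would mask the
# much higher failure rate for dark-skinned women.
```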

These examples illustrate how the three perspectives we mentioned at the start of this section intertwine. It comes as no surprise that discriminatory effects of IT/AI have often been brought to our attention through studies by Black women, people of color, and members of marginalized groups in general. Unequal participation in the technical field can mean that problematic effects of digital technology only get noticed after deployment, and technology’s impact on our gendered, racialized, and classed realities is becoming ever greater as a result of digital transformation. The rise of AI demonstrates how inequity might become automated and amplified if no countermeasures are undertaken. In the following concluding part, we sum up our findings and furthermore learn about some strategies and approaches toward more equitable IT/AI design.

3 Conclusions

Against this historical background, we have learned that it is important to reflect upon the category of the human and to interrogate who is included and who is not. For a more just and equitable (digital) future, we can turn toward the rich scholarship of critical theory and methodology that centers marginalized perspectives, which allows us to enrich (Digital) Humanism. Elsewhere, we have made the claim that in IT/AI systems development, marginalized perspectives mostly get accounted for when we design for specific user groups, such as the elderly, people in care homes, or people with disabilities (Dankwa & Draude, 2021). A more inclusive digital transformation would mean always centering intersectional, diverse perspectives, people, and contexts and furthermore advancing systemic and sociotechnical approaches to IT/AI development.

Returning to the examples from Sect. 2.3, it would not be enough to counter bias in IT/AI systems merely through increased data extraction or better mathematical models. Even if we develop facial recognition technology that, from a technical perspective, functions for all people, its use may still heavily impact vulnerable groups. In particular, the heavy reliance on data creates a field of tension for social equity. On the one hand, biased representation or non-representation in datasets is problematic: reliable data is needed to make discrimination visible, e.g., as grounds for affirmative action, but also for IT/AI development. In many domains, lack of data leads to non-usable, inaccessible, and even dangerous services and products (Criado-Perez, 2019). On the other hand, increased data collection can be highly problematic depending on the sociopolitical context. Visibility may expose vulnerable people or make them vulnerable in the first place. Categorization runs the risk of solidifying stereotypical assumptions about certain groups of people, and classification systems themselves have problematic historical backgrounds (Bowker & Star, 1999).

In conclusion, we can sum up the steps needed for a more just digital transformation. The first step is awareness that questions of power, inequality, and the affordances of diverse social groups and contexts matter throughout all phases of digital development and later usage. Furthermore, the societal challenges that come with pervasive digital technology can only be met through interdisciplinary exchange; in particular, fields with expertise on discrimination should be involved. The second step is making the decision to actively design for social good. Various long-standing approaches exist that foster democratic values, participation, and self-determination in and through IT, such as participatory design (Bødker et al., 2021), value sensitive design (Friedman & Hendry, 2019), and socio-technical design (Mumford, 2006). Social justice, however, must first be acknowledged as an important value and actively pursued, and the corresponding expertise must be consulted. Design frameworks that already integrate social justice as a core value include anti-oppressive design (Smyth & Dimond, 2014) and design justice (Costanza-Chock, 2020). Furthermore, AI technology—automated decision-making, recommendations, filtering, content generation—brings new challenges to fields such as human-computer interaction and information systems design. The third step concerns regulatory practices and policy making, which are instrumental in making steps one and two possible as well as socio-technically sustainable (Palmiotto, 2023; European Commission, 2021).

Discussion Questions for Students and Their Teachers

Relate the following aspects to digital transformation in your field of research, work, or study.

  1.

    Identity and intersectionality

    When you talk about the human, who is considered, and who is not?

    How could an intersectional perspective broaden your view?

  2.

    Knowledge production and methodology

    Can you identify marginalized perspectives? Think about the in/visibility of people, areas of work, and non-human actors.

    Do your methods, approaches, and tools need to change to be more inclusive?

  3.

    Power and hierarchies

    How do power dynamics materialize in your field, e.g., hierarchies between tech developers and lay users or between experts and non-experts, but also structural inequalities in society?

Furthermore, which of the steps outlined in the conclusion (awareness raising, decision to design for social good, policy making) is most needed in your field? Find examples to illustrate your answers!

Learning Resources for Students

  1.

    Bardzell, S. (2010) ‘Feminist HCI: Taking Stock and Outlining an Agenda for Design’, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA, Association for Computing Machinery, pp. 1301–1310.

    Bardzell introduces feminist theory and explores its meaning for interaction design. The paper contains examples from industrial design, architecture, and game design.

  2.

    Irani, L., Vertesi, J., Dourish, P., Philip, K. and Grinter, R. E. (2010) ‘Postcolonial computing’, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA, ACM, pp. 1311–1320.

    This paper brings together human-computer interaction, science and technology studies, and postcolonial thinking to address theory and design issues in so-called designing for development debates in global contexts.

  3.

    Spiel, K. (2021) ‘“Why Are They All Obsessed with Gender?”— (Non)Binary Navigations through Technological Infrastructures,’ Designing Interactive Systems Conference 2021. New York, NY, USA, Association for Computing Machinery, pp. 478–494.

    Excellent study on how gender is encoded in technological infrastructures. The paper explains gender theory and the co-construction of gender, interaction technology, and infrastructures.

  4.

    Draude, C., Klumbyte, G., Lücking, P. and Treusch, P. (2020) ‘Situated algorithms: a sociotechnical systemic approach to bias’, Online Information Review, vol. 44, no. 2, pp. 325–342.

    This paper provides a deeper insight into the relation of algorithms, social bias, and sociotechnical systems design. It accounts for social inequalities in systems design through a proposed methodology.

  5.

    Draude, C., Hornung, G. and Klumbytė, G. (2022) ‘Mapping Data Justice as a Multidimensional Concept Through Feminist and Legal Perspectives’, in Hepp, A., Jarke, J. and Kramp, L. (eds) New Perspectives in Critical Data Studies, Cham, Springer International Publishing, pp. 187–216.

    This interdisciplinary paper interrogates data justice through the lenses of feminist and legal studies to reconfigure data justice as a multidimensional, interdisciplinary practice in IT design.

  6.

    Draude, C. and Maaß, S. (2018) ‘Making IT Work: Integrating Gender Research in Computing Through a Process Model’, Gender & IT ’18 Conference Proceedings, 14–15 May 2018, Heilbronn, Germany. New York, NY, USA, Association for Computing Machinery, pp. 43–50. Website: www.gerd-model.com

    The GERD model is a process model that supports working with intersectional gender knowledge in IT systems design, development, and research.