Introduction

Discussions about research ethics have previously focussed mainly on research misconduct. Recently, however, the focus has widened to include concerns about integrity and responsible research (Horbach & Halffman 2017; Owen et al. 2012; Shaw 2019; Steneck 2006)1. By shifting attention from merely avoiding misconduct to ensuring integrity, ethics has become a constitutive element or guarantor of overall good science. As the use of agent-based social simulation grows and the method is increasingly recognised as an effective approach for social research, practitioners may benefit from engaging in collective critical reflection about the way ethics permeates their everyday practices and how disciplinary agreements on ethical compliance could help with the further maturation and consolidation of agent-based social simulation.

Previous literature has mostly considered the ethical challenges practitioners face with their simulations (e.g., Bak 2022; David 2021; Shults & Wildman 2019; Sobkowicz 2019; Wildman 2019). This article takes a complementary approach. We enquire about the ethics that agent-based social simulation faces as a field of study and suggest that, from this perspective, ethical challenges arise from both its practice and its organisation. Our goal is twofold: First, to outline how challenges in each domain manifest and, second, to explore alternatives for disciplinary ethical standardisation.

Overall, there are many reasons for beginning to discuss the ethics of agent-based social simulation, most notably: i) as mentioned, ethics is gradually becoming a constitutive element of good science, ii) some form of ethical standardisation could be seen as a natural next step in the progressive disciplinarisation of agent-based social simulation (Anzola 2021a; Freidson 2007), iii) the ethics of digital technologies are gaining increasing relevance (Floridi 2018; Sandler 2014; Tsamados et al. 2022) and practitioners of agent-based social simulation have much to contribute to the discussion, iv) ethical challenges increase as the modelling process becomes more elaborate and diversified and novel phenomena are explored (Calder et al. 2018; Woermann & Cilliers 2016), and v) ethics could become a resource that promotes and facilitates interaction with the stakeholders who are now involved with simulation models and who use them for a multiplicity of purposes, perhaps most importantly, for decision-making (Gilbert et al. 2018; Shults & Wildman 2019).

This article is an initial step in the process of reflecting on the ethics of agent-based social simulation. First, we outline the motivations to think ethically about agent-based social simulation, both from a research and from a policy and practice point of view. Next, we break down the modelling process to identify when and how ethical considerations arise in the ABM workflow. We then consider how diverse organisational dynamics and structures can become sources of ethical challenges in social simulation. Finally, we ask how we can improve ethical practices in agent-based modelling, and list some actions and decisions that need to be considered moving forward. In the Appendix, we present a draft code of ethics. We intend for this draft to be further developed by the community before being adopted by individuals and groups within the field informally or formally.

Why Think Ethically about Agent-Based Social Simulation?

From a research point of view

Thinking ethically about research promotes values, attitudes, and behaviours that are fundamental for the success of the scientific endeavour. There are, in principle, four main reasons to justify this claim (Resnick 2013). First, the pursuit of true knowledge, arguably the defining feature of science, as well as the satisfaction of epistemic goals such as explanation, prediction, intervention, or control, might not be attainable without widespread commitment to values such as honesty and objectivity. Second, ethics is fundamental to guaranteeing the stability of the social and cognitive organisation of science, which depends on collaborative work. Behaviours such as respect for intellectual property, among others, permit the emergence of trustful interactions that make strong and longstanding cooperation possible. Third, ethics helps to build public support for scientific research. By adopting behaviours that conform to socially accepted norms and customs, such as the stewardship of public funding for science, scientists foster the public’s confidence in their professional actions and in the overall results of scientific research. Finally, as a social institution, science needs to align itself with the moral values and expectations of society at large. Several instances of research misconduct, such as disrespect for peers’ or participants’ rights, are negatively valued not only because of their scientific impact, but also because of the societal moral inclination towards fairness, justice, and equality.

Although most researchers will probably acknowledge the need to think ethically about research, it is unlikely that they will all agree on what should be a subject of ethical concern. Significant differences worldwide have been found in perceptions of research misconduct, the development of ethical standards, and the institutionalisation of the ethical regulation of research (Ana et al. 2013). These differences arise because science, as a social institution, is context-bound, and also because of the ever-changing nature of ethical concerns. Aspects such as the under-representation of certain groups, which have only recently been brought to the foreground, were for a long time considered peripheral to the practice of science, or not a subject of ethical concern at all.

Disagreement among researchers is not limited to the delimitation of what constitutes an ethical research issue, but also extends to the way in which appropriate ethical behaviour can be guaranteed (Davies 2019; Fleischmann et al. 2010; Shaw & Satalkar 2018). Ethical codes both prescribe and proscribe behaviour. Yet, there are no predefined standards regarding how a specific issue should be addressed and to what extent. Whereas there will likely be widespread consensus about the need, for example, to explicitly proscribe data manipulation in a code of ethics, other concerns, such as the fair and equal treatment and inclusion of underrepresented groups, might be more controversial. Disagreements may be due to the nature of the behaviour that is subject to ethical judgement, the centrality of the concern for the everyday practices of science, and beliefs about the proper balance between the promotion of ethical behaviour, monitoring and vigilance, and sanctions.

From a policy and practice point of view

Agent-based modelling is increasingly being used in practical settings (Gilbert et al. 2018). That is, it is not just used by researchers to address applied research questions, but also by decision-making organisations (e.g., government, business, and NGOs) to inform decisions. Often, organisations will commission consultants or researchers to develop an agent-based model of a topic; less often, they will develop one themselves in-house. Agent-based models may also be used as components of larger models that bring together different methodological approaches to represent physical, biological, ecological, technical, and environmental as well as socio-economic systems. Agent-based models are also part of a wider push for the use of ‘complexity-appropriate’ analysis in applied settings (Barbrook-Johnson et al. 2021), along with other complexity and systems science-inspired methods.

While the use of agent-based models is still not widespread, their growing influence necessitates a reassessment of the ethics of agent-based modelling from a practice point of view. It raises multiple questions for applied settings, such as: should the deployment of agent-based models be more formalised and standardised? Should the method follow an agreed set of technical and ethical standards? As use grows, developing a clearer ethics of agent-based modelling may increase the chances of it being used more in decision-making processes, and in turn improve the quality of those processes, and the final outcomes they deliver.

When and How Ethical Considerations Arise in the Simulation Workflow

The ethics of computer simulation is a surprisingly underdeveloped topic, especially considering the increasing focus on the ethics of digital technologies (e.g., Floridi & Taddeo 2016; Tsamados et al. 2022). Several aspects of computer simulation with potentially relevant ethical implications have been identified in the literature. In most cases, however, they are discussed without elaborating on these implications. For example, while the simplifying nature of computational modelling has some distinctive ethical implications (Brey 2014), the literature frequently fails to acknowledge this.

In the general computer simulation literature, ethics has been accounted for in a multiplicity of ways. Some texts approach computer simulation relying on particular ethical accounts. Palmer (2017), for example, explicitly reframes the model evaluation process within consequentialist ethics. Others incorporate ethics more narrowly through specific morally relevant concepts e.g., trust (Williamson 2010). There are, as well, discussions of ethics and standardisation at the professional or disciplinary level (e.g., Durán 2018; Ören et al. 2002; Tolk & Ören 2017). Regularly, however, ethics is discussed from the perspective of how values influence the modelling process, particularly, the evaluation stage. Prior research has most prominently considered: (i) different types of values that are relevant in the context of computer modelling (Hirsch Hadorn & Baumberger 2019; Intemann 2015), (ii) the connection between values, uncertainty, and subjectivity (Morrison 2014; Parker 2014), and (iii) the contextual determinants and nature of computational evidence (Cassini 2022; Parker & Winsberg 2018).

The agent-based social simulation literature is somewhat different. In most cases, the reference to ethics is more explicit and elaborated. There is also a narrower focus on the ethical implications of the multiple uses of computer simulation, motivated both by the increasing interest in aiding decision-making and by the progressive popularisation of empirically calibrated models in social simulation. While there is a common motivation, the discussions address a variety of interrelated topics, e.g., potential intentional misuses of simulation models and results (Sobkowicz 2019), ethical challenges arising from decision-making in contexts where negative social consequences are unavoidable, e.g., in the distribution of scarce resources (Bak 2022), the imperative for practitioners to purposefully seek the betterment of society through their labour (Wildman 2019), the ethical responsibility of modelling morally sensitive topics (or models with morally sensitive implications) (Shults & Wildman 2019), and how any given implementation amounts to taking a moral stance (David 2021).

A meta-ethical framework has also been put forward (Shults & Wildman 2019) and exemplified with models (Shults et al. 2018; Tolk et al. 2021). This framework invites practitioners to reflect about the ethics of modelling on three levels: philosophical meta-ethics (i.e., considerations about what is ‘good’ and ‘right’ in different modelling and simulation activities), scientific meta-ethics (i.e., considerations about how a model captures and justifies salient moral features that are inherent to social dynamics) and the practical import of meta-ethics (i.e., considerations about the criteria used to justify ethical judgements). Even though this framework brings the ethical aspects of the modelling process to the forefront, it is not meant for the identification and resolution of everyday ethical challenges during the simulation lifecycle. As the authors explain, “[r]ather than a guide for resolving specific ethical dilemmas, that framework is meant to provide a way of thinking about the ethics of simulation” (Shults et al. 2018 p. 4069, emphasis in the original).

Still lacking, then, is a systematic exploration of the many ethical challenges that practitioners face in everyday instances of modelling. Table 1 lists a series of questions that could help identify potential ethical issues during the simulation workflow and, simultaneously, kick-start a more detailed and more transversal exploration of the source and nature of ethical challenges in the practice of agent-based social simulation. These questions are organised following a common separation of the modelling process into distinct stages or subprocesses: conceptualisation, implementation, execution, analysis, and dissemination of the computational model (Galán et al. 2016; Gilbert 2008; Railsback & Grimm 2012; Squazzoni 2012; Wilensky & Rand 2015).

Table 1: Common ethical issues encountered during the simulation life cycle.
Stage of modelling and potential ethical issues arising:

Establishing purpose, scope, and research questions
  • How is the topic selected and how are the research questions formulated?
  • Who has input into this stage?
  • Do disempowered groups, or those that are the subject of analysis, or who may be affected by decisions arising from it, have a say?
Develop a conceptual model
  • A key stage for making large simplifications, abstractions, and assumptions: are these well justified and assessed for ethical implications? Are they reported clearly and honestly?
Acquire information, data, and theory on the target system
  • What data and theory are prioritised over others?
  • What data or theory are absent?
  • Was the data collected in an ethical way?
Develop full model
  • Is the model code transparent (intelligible for an external observer) and documented?
  • Are the elements identified in the conceptual model sufficiently and adequately incorporated into the computational model?
Verification, experimentation, and validation
  • How are quality and rigour ensured through the experimentation, verification, and validation methods selected?
Analyse and interpret outputs
  • What outputs are chosen for analysis?
  • Who has a say in interpreting results and deciding what is noteworthy and worthy of further exploration?
  • Is the analysis and interpretation of the outputs transparent?
Refine and update model or any of its inputs (at any stage)
  • Reiterate all of the above, as needed
Reporting and publication
  • How are results (visual, numerical, and discursive) tailored for reporting?
  • Are results published in a fair and unbiased way?
  • Is the reporting open access?
  • Is the model code available and usable?
  • Are authorship and attribution of contributions handled fairly?
  • Will the results be communicated to an appropriate larger audience?
Post reporting and publication
  • Is the code updated so it remains usable (after software changes)?

There are some elements worth mentioning about the challenges listed: (i) while the agent-based social simulation literature has so far centred on ethical concerns associated with how models are used, every stage of the simulation lifecycle is worthy of ethical consideration, (ii) ethical concerns that emerge during the simulation workflow can be one-off, repeated, or transversal, (iii) some major ethical challenges do not depend entirely on the model itself, and (iv) the alternatives that the different questions leave open do not necessarily all have the same moral standing. For instance, the question about ethical data collection implies a separation between ethical and unethical alternatives. Other questions, such as the one about the outputs chosen for analysis, address, instead, issues of value conflicts and trade-offs.
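To make the kind of reflection prompted by Table 1 more tangible, some of its transparency and reporting questions can be partially operationalised in code. The following is a minimal, hypothetical sketch in Python of a machine-readable provenance record that could be archived alongside a model’s source code; the field names and example values are illustrative assumptions, not an established community standard.

# A hypothetical provenance record operationalising some of the
# transparency and reporting questions in Table 1. Field names are
# illustrative, not a community standard.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelProvenance:
    purpose: str                                          # why the model was built, and for whom
    commissioned_by: str                                  # funder or commissioning organisation
    key_assumptions: list = field(default_factory=list)   # simplifications needing justification
    data_sources: list = field(default_factory=list)      # provenance of empirical inputs
    validation_methods: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

record = ModelProvenance(
    purpose="Explore segregation dynamics under housing policy scenarios",
    commissioned_by="Publicly funded research council grant",
    key_assumptions=["Agents relocate only within the modelled city",
                     "Income distribution is static over a run"],
    data_sources=["Anonymised census microdata"],
    validation_methods=["Sensitivity analysis", "Pattern-oriented validation"],
    known_limitations=["No rental market; owner-occupiers only"],
)

# Archiving the record next to the model lets external reviewers audit
# the answers given to the workflow questions above.
with open("model_provenance.json", "w") as f:
    json.dump(asdict(record), f, indent=2)

A record of this kind does not settle the underlying value judgements, but it makes them inspectable, which is precisely what several of the questions in Table 1 ask for.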

The acknowledgement that not all ethical challenges have the same moral standing raises some interesting questions from the wider perspective of scientific integrity. Addressing value conflicts and trade-offs, for example, rather than avoiding misconduct, has to do with critically selecting the alternative that is subjectively believed to best fit the modelling goals and resources, being mindful of these choices and their implications throughout the entire modelling process, and being transparent about these choices in the reporting. It has been shown, however, that some researchers and institutions have a narrow understanding of integrity as the mere avoidance of misconduct (Anderson 2018; Shaw & Satalkar 2018). Collectively agreeing on the ethical challenges emerging during the simulation workflow in agent-based social simulation seems, then, to call for an institutional reflection on the meta-ethical elements that Shults & Wildman (2019) include in their framework. It is important, however, to determine the dimension and scope of diverse ethical judgements about the modelling process. There is an interesting discussion in the literature on computer simulation ethics about whether the most relevant criterion for any decision in which values and subjectivity are involved is by default ethical. Depending on the values involved and how subjectivity intervenes, some (e.g., Cassini 2022; Morrison 2014) suggest that the best criteria might be methodological or epistemological, rather than ethical.

Organisational Sources of Ethical Challenges in Social Simulation

Reflecting on the appropriate scope and dimension of ethical reflection in social simulation also raises some issues of interpretation for the questions listed in Table 1. For instance, the first questions, about topic selection, can be reinterpreted more generally as asking whether the full spectrum of relevant possible topics is currently being covered by the agent-based social simulation literature. While, for example, heterogeneity among individuals has inspired models addressing the dynamics of inequality, discrimination, and segregation, among others, there is a knowledge gap about the instantiation of these dynamics in particular populations. Given, for instance, the contemporary interest in advancing the promotion and recognition of LGBTQIA+ rights, it could be argued that agent-based social simulation, as a community or area of study, has a moral responsibility to deliberately engage with such understudied topics, especially when they figure prominently in the public sphere.

It would not be reasonable to ask practitioners to always consider this additional interpretation of the first question in particular instances of modelling. It is, however, an important ethical question that should be addressed. It becomes evident, then, that some ethical challenges should be tackled collectively by the community, for they pertain more widely to the organisation of agent-based social simulation as an area of research. A few of the most relevant organisational ethical challenges may be identified through comparison with other well-established disciplinary areas, for they are shared (Iverson et al. 2003). There are, however, some ethical issues that arise from the distinctive organisational features of agent-based social simulation of which practitioners should be mindful. This section centres on two dimensions of the organisation of social simulation, its interdisciplinary and its technology-dependent nature, to exemplify the type of ethical challenges that are common at the organisational level and to show how they differ from challenges to the practice of social simulation.

Interdisciplinarity

Following Galison (1996), computer simulation is often referred to as a ‘trading zone’ i.e., “an arena in which radically different activities could be locally, but not globally, coordinated” (p. 119). The idea is that, contrary to typical disciplinary work, where all collaborating researchers share a paradigm, in computer simulation there is collaboration among multiple expertise communities that locally contribute to the activity without merging or renouncing the paradigms they are affiliated with. Because of the different forms of expertise involved, computer simulation displays high levels of opaque epistemic dependence (Wagenknecht 2016). This dependence implies, on the one hand, that intellectual authority over a domain or expertise is asymmetrically distributed among members and, on the other hand, that the fields of expertise do not necessarily overlap, so there are difficulties in judging other members’ expertise. For example, a social scientist will often not only defer to a computer scientist on technical decisions about implementing the computer model, but will also lack sufficient expertise to judge whether it is the most efficient and effective implementation of the model.

This context of opaque epistemic dependence and increasing scientific collaboration is relevant from an ethical point of view because of its effects on aspects such as accountability and epistemic trust. In traditional research and disciplinary collaborations, trust is partially built upon certification (Wagenknecht 2016). Yet, most practitioners of agent-based social simulation are not trained or certified as such. Thus, alternative mechanisms for trust-building need to be employed, not only to judge the competence of others, but also one’s own. Accountability also becomes more difficult because there might not be overlapping expertise, researchers might be working in dissimilar institutional and normative contexts, and research practices are increasingly becoming cognitively, financially, and socially decentralised (Winsberg et al. 2014). It is not clear, for example, to what extent agent-based social simulation is affected when research is privately funded, or whether cognitive asymmetries become more relevant when dealing with stakeholders in domains such as policy-making.

Ethical concerns arising from the interdisciplinary nature of agent-based social simulation are not limited to epistemic dependence. Even in instances of overlapping expertise, there might be ethical issues that need to be reviewed either because they are not uniformly covered by the disciplinary traditions or because there are conflicting ethical principles and commitments. Social scientists, for example, have a range of different attitudes to deception in empirical research. In economics, deception is often proscribed, to the point where some journals will refuse to publish manuscripts based on research in which participants have been deceived. Conversely, in sociology and psychology, deception is often not only acceptable, but considered an important methodological resource (Barrera & Simpson 2012; Krasnow et al. 2020). While the reason to permit or proscribe deception might not necessarily be ethical in nature (Barrera & Simpson 2012), the potential professional and personal consequences for researchers who include deception in their methodological designs do raise some ethical concerns. Given the increasing popularity of empirical research, and particularly of experiments, in agent-based social simulation, it would be expected that the community should address the topic in a discussion about the ethics of their research.

Differences between types of computer simulation also have interesting ethical implications. In comparison with the discrete-event simulation community, for example, the agent-based simulation community tends to work predominantly in academia, rely more often on the use of theory, and use models for experimentation and explanation, rather than prediction (Padilla et al. 2018). While the object of study and technical skills required in each case may not significantly differ, the methodological particularities of each method and the physical, social, and cognitive organisation underlying practices of computer simulation lead to disciplinary dynamics with distinct ethical ramifications. For example, there are noticeable differences in the way the agent-based social simulation community and the larger social simulation community perceived their role in the COVID-19 outbreak (compare the editorial articles published in JASSS, Squazzoni et al. 2020 and the Journal of Simulation, Currie et al. 2020), which, in part, can be attributed to disciplinary idiosyncrasies, such as the somewhat ambiguous relationship that the agent-based social simulation community has historically had with prediction (see e.g., the ‘Prediction’: https://rofasss.org/tag/prediction-thread/ and ‘JASSS-COVID’: https://rofasss.org/tag/jasss-covid19-thread/ threads in the Review of Artificial Societies and Social Simulation).

Technology dependence

There are some distinctive ethical concerns that arise from agent-based social simulation’s strong reliance on computer technology. These concerns can be classified into four groups. The first pertains to the moral standing of computational social science. There are distinctive dynamics in technology-intensive social research for which traditional social science research ethics are insufficient. How should, among other things, the morality of ‘virtual’ experiments or ‘artificial’ agents be approached? Dignum et al. (2018), for example, argue that the practice of simulation might become more ethical by making artificial agents capable of reflecting morally about their actions and decisions.
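As a deliberately simplified illustration of this idea (a sketch only, not a reconstruction of Dignum et al.’s actual architecture), an artificial agent might ‘reflect morally’ by screening candidate actions against explicit, inspectable ethical constraints before acting. All names, parameters, and the fairness rule below are hypothetical.

# A toy moral-screening loop: candidate actions are checked against
# explicit ethical constraints before one is selected. Everything here
# (the fairness floor, the action format) is an illustrative assumption.
def violates_fairness(action, state):
    # Hypothetical constraint: reject actions that push any group's share
    # of a resource below an agreed floor.
    shares = dict(state["shares"])
    shares[action["target_group"]] -= action["cost"]
    return min(shares.values()) < state["fairness_floor"]

ETHICAL_CONSTRAINTS = [violates_fairness]

def choose_action(candidates, state):
    """Return the highest-utility action that violates no constraint."""
    permissible = [a for a in candidates
                   if not any(check(a, state) for check in ETHICAL_CONSTRAINTS)]
    if not permissible:
        return None  # the agent abstains rather than act unethically
    return max(permissible, key=lambda a: a["utility"])

state = {"shares": {"group_a": 0.6, "group_b": 0.4}, "fairness_floor": 0.2}
actions = [{"target_group": "group_b", "cost": 0.3, "utility": 5},
           {"target_group": "group_a", "cost": 0.1, "utility": 3}]
# The higher-utility action would breach the fairness floor, so the
# second action is chosen instead.
print(choose_action(actions, state))

Even a mechanism this crude makes an agent’s ethical commitments explicit and auditable, which is part of what proposals for morally reflective agents aim at.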

The second group comprises considerations about the ethics of computational modelling. These considerations may be technical or conceptual, and pertain to computer simulation in general or just agent-based social simulation. Any computer simulation, for example, faces problems of trustworthiness linked to epistemic opacity of the computation (Durán & Formanek 2018). At the same time, practitioners of agent-based social simulation face heightened risks with the intelligibility, transparency, and commensurability of representation, given the unformalised and multiparadigmatic nature of social theory and the syntactic and semantic flexibility of computer languages (Anzola 2021b; Poile & Safayeni 2016).

The next group of ethical considerations includes a large and diverse set of issues related to the development and governance of the general body of knowledge in social simulation. In comparison to other forms of disciplinary knowledge, the specificity of programming languages means that everyday practices could be disrupted if, for instance, current software stops being supported or developed (even though some popular software is free and open source). Knowledge from models that are not converted might be lost and some technical skills depreciated. This unique risk puts pressure on practitioners to develop adequate practices of knowledge curation, e.g., model documentation, updating, and preservation (Calder et al. 2018). In turn, as with most information technologies, agent-based social simulation has adopted a distributed structure of knowledge governance that facilitates current practices but, at the same time, creates additional risks and ethical concerns. The possibility of third-party independent use of models, algorithms, data, and frameworks leads to questions about whether computer simulation should be classified as dual-use research (i.e., research with the potential for both benevolent and malevolent applications) (Sobkowicz 2019). It might also require the development of a more elaborate account of authorship. Similarly, this decentralised governance might hinder collaboration when dealing with stakeholders in other domains, where individuals might feel more comfortable with, or are required to adopt, centralised models of knowledge governance.
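One such curation practice can be sketched concretely. The snippet below, a minimal illustration rather than any community standard, records the software environment alongside archived model outputs so that runs remain interpretable after languages and libraries change; the metadata fields are assumptions made for the example.

# Record the execution environment next to archived model results so they
# remain interpretable once libraries or languages move on. Fields are
# illustrative only.
import json
import platform
import sys
from datetime import datetime, timezone

environment_snapshot = {
    "archived_at": datetime.now(timezone.utc).isoformat(),
    "python_version": sys.version,
    "platform": platform.platform(),
    # In practice, one would also pin exact library versions (e.g., the
    # output of `pip freeze`) and the model's version-control revision.
    "model_revision": "<git commit hash here>",
}

with open("run_environment.json", "w") as f:
    json.dump(environment_snapshot, f, indent=2)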

Finally, in the last group, there are ethical concerns associated with the social organisation of agent-based social simulation. As with any other technology, agent-based social simulation must deal with the fact that technological infrastructure is unevenly distributed. If modelling intricate socio-ecological or socio-technical systems requires access to High Performance Computing (HPC), it is likely that these phenomena will mostly be modelled by researchers in developed countries, or that the opinion of these researchers will dominate the discussion. In the same way, the general domain of modelling and simulation seems to reproduce disparities that are common in other STEM areas, e.g., regarding gender and ethnicity (Padilla et al. 2018), reinforcing conditions of underrepresentation. There is an ethical challenge for the field to guarantee that access to technology does not become a source of ‘epistemic injustice’ (Fricker 2007), where the experiences and knowledge of those social groups with uneven access to and command of technological and technical resources, both inside and outside academia, carry less value. Practitioners, particularly those with privileged access to more advanced technological infrastructure, should also be mindful of the potential risk of a technology-based form of scientific imperialism or determinism. That is certainly a potential source of tension and conflict when engaging in socially asymmetric relationships, particularly those enabled by digital technologies (Origgi & Ciranna 2017; Wyatt 2008).

The social organisation of agent-based social simulation is equally important when discussing the ethical relationship of the field with science as an institution and society in general. In recent years, the traditional model of science has been challenged to become more open. This challenge manifests in actions such as the coordination of several scientists refusing to submit, review or serve as editors for a closed-access journal (Statement on Nature Machine Intelligence), the resignation of an entire editorial board of a prestigious journal to create an open-access alternative (Singh 2019) or the request to publish publicly funded research in open-access journals (Enserink 2018). In this push for open science, technology-intensive areas of research have played a major role, for they tend to display approaches to work organisation and overall politics (e.g., Free Libre Open Source Software (FLOSS)) that are more compatible with a collaborative and decentralised governance and also more critical of the traditional model of science (Coleman 2009; von Hippel & von Krogh 2003). These new forms of scientific work organisation, however, raise additional ethical concerns that are worth addressing, such as the ethical implications of crowdsourcing data (Gleibs 2017), which move past typical issues of privacy, recruitment, and consent in online research.

Overall, the two examples analysed offer a nuanced and multifaceted picture of scientific integrity. When the focus is on practice, the discussion tends to centre on individual compliance i.e., how researchers, working alone or as part of a team, on the one hand, avoid misconduct and, on the other hand, consciously consider aspects such as a model’s potential uses and implications in their work. Alternatively, when the focus is on organisation, ensuring scientific integrity depends on a variety of contextual determinants. In the context of interdisciplinarity, for example, it requires bringing to the foreground issues of accountability, epistemic trust, and expertise. In the context of technology, conversely, considerations about integrity can range from issues about the morality of artificial agents to issues about technology governance. Ultimately, an organisational approach to scientific integrity means inquiring into how agent-based social simulation can institutionally moderate the multiple determinants and dimensions of integrity, acknowledging the diversity of agents and systems, and the interactions between the two, that influence the everyday practice of science.

How Ethical Behaviour in Agent-Based Modelling Can Be Enabled

Developing standards

Because of the multiplicity of determinants and dimensions of scientific integrity and the frequent differences in individual perceptions, attitudes, and behaviours about the matter (Ana et al. 2013; Davies 2019; Shaw & Satalkar 2018), conscious and deliberate reflection about ethics in a research area or disciplinary field often results in standardisation through different mechanisms of normalisation: training, shared methods and procedures, social norms, and codification (e.g., principles, guidelines, conventions, and laws) (Frankel 2000; Freidson 2007; Israel 2020). The outputs of these mechanisms of normalisation contribute to scientific integrity by creating consensus and making explicit norms for conduct that can guide researchers in the moral assessment of their behaviour and that of their peers. Standards also foster the establishment of a system of mutual regulation of expectations and accountability that, in the most formalised instances, includes mechanisms and procedures for sanctioning and exclusion from the social group. From the perspective of external groups, standards allow for public recognition, as well as the external evaluation and accountability of scientific practices (Fox & Braxton 1994; Resnick 2013).

Even though the potential benefits of ethically regulating agent-based social simulation are hard to deny, one might question whether there is a need for the community to engage in the development of its own ethical standards. Practitioners could, alternatively, voluntarily seek to achieve ethical self-regulation, commit to abide by the regulations of the institutions in which they work, or adopt any of the already existing codes of ethics. Although these options could each contribute to guaranteeing the ethical integrity of social simulation, a deliberate effort to account for the idiosyncratic elements involved in the practice and organisation of agent-based social simulation is likely to be needed.

Normative self-regulation, interestingly, has been a popular topic in the agent-based social simulation literature (Conte et al. 2013; Elsenbroich & Gilbert 2014; Hollander & Wu 2011; Morris-Martin et al. 2019; Neumann 2008). Previous research has shown that both norm emergence and compliance are possible bottom-up outcomes of adaptive, uncoordinated interaction at the micro-level. In the case of scientific integrity, there is empirical evidence that supports these results. Throughout history, self-regulation has proven effective in generating several widespread standards that contribute to the ethical integrity of science. Standardisation pertains in some cases to the emergence and diffusion of guiding principles or ideals, such as ‘the scientific method’, and, in others, to the application of specific mechanisms of accountability and self-correction, e.g., the peer review process or paper retraction.
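The basic mechanism behind these results can be conveyed with a minimal sketch: agents repeatedly interact in pairs and, upon miscoordination, one imitates the other, so a shared convention eventually dominates without any central regulator. The update rule and parameters below are illustrative and are not drawn from any specific cited model.

# A minimal, illustrative norm-emergence model: pairwise imitation is
# enough for one convention to come to dominate the population.
import random

N_AGENTS, N_ROUNDS = 100, 20000
behaviours = [random.choice(["A", "B"]) for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    i, j = random.sample(range(N_AGENTS), 2)
    if behaviours[i] != behaviours[j]:
        # On miscoordination, a randomly chosen member of the pair conforms
        # to the other: uncoordinated micro-level adaptation from which a
        # shared norm emerges bottom-up.
        follower, leader = random.sample([i, j], 2)
        behaviours[follower] = behaviours[leader]

majority = max(behaviours.count("A"), behaviours.count("B")) / N_AGENTS
print(f"Share of agents following the majority convention: {majority:.2f}")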

While there is theoretical and empirical evidence of its effectiveness, self-regulation cannot, by itself, fully ensure scientific integrity. First, the current institutional setting in science not only offers negative incentives for ethical behaviour, e.g., pressure to publish more and faster (Alberts et al. 2015; Davies 2019), but also allows for the emergence of normative and institutional setups that favour misconduct, e.g., the predatory publishing system. In turn, the effective application of self-correction may be limited by variations in individual perceptions of and attitudes towards research and scientific integrity, ethics, and misconduct (Davies 2019; Horbach & Halffman 2017), and by the moral status of self-correcting mechanisms themselves (Koepsell 2010; Satalkar & Shaw 2018). Finally, self-correction, while effective in the long run, might be too slow (Ioannidis 2012). Some of the most pressing ethical challenges for contemporary science have been acknowledged for decades. Yet, in most cases, no satisfactory progress has been made2.

Given the increasing demands for holding science accountable to society (motivated, in part, by high-profile cases of research misconduct), science may not be able to afford the time needed for self-correction. This standardisation alternative, then, might need to be coupled with an institutional effort that provides means and mechanisms, initially, for the collective moderation of individual values, beliefs, and expectations (Fox & Braxton 1994; Freidson 2007; Iverson et al. 2003) and, later, for oversight and accountability (Salloch 2018; Short & Toffel 2010; Taylor 2009). Since the second half of the twentieth century, several countries have advanced in the articulation of institutional ethics procedures that oversee and regulate behaviour based on the derivation of sets of rules from a series of universal principles, e.g., justice or beneficence, with which any good research practice would be expected to comply (Israel 2020). Currently, there are institutional regulations that work at the international (e.g., the European Framework Programmes), national (e.g., the United States’ Common Rule), and local (e.g., universities’ research ethics committees) levels. In most cases, these procedures depend on processes of collective (peer) review and deliberation over a research proposal’s ethical compliance. If deemed appropriate, the research might be funded or allowed to continue.

Institutional ethics procedures can, indeed, promote consensus-forming around basic ethical principles and help prevent questionable research from being funded or carried out. Yet, their adequacy as mechanisms of ethical regulation has been extensively questioned in the literature. In general, the problem is that compliance at the level of individuals and institutions might be established following different guidelines and principles and have distinct, and sometimes conflicting, conditions of fulfilment. There are significant organisational and national asymmetries in the way institutional ethical frameworks are developed (Ana et al. 2013). In addition, it has been shown that not all research committees follow the same procedures or enforce ethical standards in the same way (Hoecht 2011) and that some institutional standards are too limited to guarantee individual ethical behaviour (Elliott 2008). Finally, and most importantly, institutional ethics procedures might not be conducive to better research practices when the interests of individuals and institutions conflict. University research committees, for example, have sometimes been used to protect the reputation of the host institution or as a mechanism for internal discipline, rather than as a guarantor of ethical behaviour (Hedgecoe 2016)3.

Codes of ethics offer an interesting middle ground between self-regulation and institutional ethics procedures. They are usually developed autonomously by a community, increasing the likelihood of accounting for practical and organisational needs. They are often institutionalised, making it easier to use them to moderate the behaviour of a social group. If the agent-based social simulation community were to adopt an already existing code, it would avoid spending resources on the drafting and socialisation of its own code. This resource saving, however, risks coming at the expense of effectiveness. Previous literature on codes of ethics offers insights into standardisation and regulation that are highly relevant for the present discussion. It evidences, first, that diverging beliefs and perceptions about the object and subject of ethical regulation might lead to entirely different processes of standardisation and, second, that a social group willing to autonomously regulate its ethical behaviour must pay attention both to the means and mechanisms of regulation and to the disciplinary dynamics targeted by the standardisation4.

Concerning their means and mechanisms of regulation, codes might not be drafted with ethical motivations, or might include provisions that are ethically questionable or that exacerbate ethical conflict (Farrell et al. 2002; Jamal & Bowie 1995; Schwartz 2002), as we have seen is the case for institutional ethics procedures. When considering already existing codes, it might not be easy to identify their limitations or to estimate the extent to which they will adequately regulate ethical behaviour in a new context. In addition, it has been shown that the existence of a code alone is not enough to guarantee ethical behaviour (Freidson 2007; Iverson et al. 2003; Singh 2011). A diverse array of supporting activities and structures, targeted primarily at its promotion, administration, and enforcement, is necessary (Lere & Gaumnitz 2007; Rosenberg 1998; Schwartz 2004; Webley & Werner 2008).

Similarly, the supporting activities that lead to the articulation of a code have also been shown to affect its effectiveness. Adherence to a code of ethics, for instance, is more likely when it is found contextually relevant by those governed by it (Hardy 2016; Kaptein & Schwartz 2007). There might be, as well, a moral reason to include practitioners in the discussion about ethical regulation. Since the standardisation of ethical practices in a code of ethics, or any other type of formal procedure, creates new obligations for individuals and is used to judge their behaviour, it is morally appropriate that they have the opportunity to be involved in the standardisation process (Schwartz 2002).

From the perspective of the target of standardisation, it is clear that previous codes of ethics do not sufficiently address all the ethical issues involved in the practice of agent-based social simulation. Although some ethical principles or ideals, e.g., respect, integrity, and fair treatment, are usually included, codes of ethics are often developed to cover specific disciplinary or professional practices that warrant ethical regulation. For instance, psychology and sociology are two closely connected disciplinary areas. Yet, the codes of the American associations for sociology and psychology differ notably in content (see Table 2), since ethical concerns are not completely shared. The former, for example, does not include standards pertaining to assessment and therapy, for sociologists do not have that professional competence.

In turn, while the output of the field of agent-based social simulation could be categorised within the general social sciences, its digitalised nature, as mentioned in the previous sections, raises some ethical issues related to the intensive use of digital technologies, and to the implementation, processing, analysis, and dissemination of computational models, that render codes of ethics in social science insufficient. At the same time, codes in more technical areas of research, such as those developed by ACM and IEEE (see Table 3), do not acknowledge ethical issues that are typical of social sciences e.g., ‘Informed Consent’ or ‘Record Keeping and Fees’ (included in the codes for psychology and sociology (see Table 2)), for they tend to emphasise ethical concerns surrounding efficiency and proficiency in the use of technology.

Table 2: Issues covered by the ethical codes of the American Sociological Association (2018) and the American Psychological Association (2017).
ASA

General principles
- Professional Competence
- Integrity
- Professional and Scientific Responsibility
- Respect for people’s rights, dignity and diversity
- Social Responsibility
- Human Rights

Ethical Standards
- Competence
- Representation and Misuse of Expertise
- Delegation and Supervision
- Discrimination
- Exploitation
- Harassment
- Employment Decisions
- Conflicts of Interest and Commitment
- Public Communications
- Informed Consent
- Research Planning, Implementation, and Dissemination
- Plagiarism
- Authorship
- Publication Process
- Responsibilities of Reviewers
- Education, Teaching, and Training
- Contractual and Consulting Services
- Adherence to the Code of Ethics
- Resolving Ethical Issues

APA

General principles
- Beneficence and Nonmaleficence
- Fidelity and Responsibility
- Integrity
- Justice
- Respect for People’s Rights and Dignity

Ethical Standards
- Competence
- Human Relations
- Privacy and Confidentiality
- Advertising and Other Public Statements
- Record Keeping and Fees
- Education and Training
- Research and Publication
- Assessment
- Therapy

Table 3: Example codes of ethics for technical societies and research areas. Adapted from ACM (2018), IEEE (2020), and IEEE Computer Society (1999).

ACM

General Ethical Principles
- Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing
- Avoid harm
- Be honest and trustworthy
- Be fair and take action not to discriminate
- Respect the work required to produce new ideas, inventions, creative works, and computing artifacts
- Respect privacy
- Honor confidentiality

Professional Responsibilities
- Strive to achieve high quality in both the processes and products of professional work
- Maintain high standards of professional competence, conduct, and ethical practice
- Know and respect existing rules pertaining to professional work
- Accept and provide appropriate professional review
- Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks
- Perform work only in areas of competence
- Foster public awareness and understanding of computing, related technologies, and their consequences
- Access computing and communication resources only when authorized or when compelled by the public good
- Design and implement systems that are robustly and usably secure

Professional Leadership Principles
- Ensure that the public good is the central concern during all professional computing work
- Articulate, encourage acceptance of, and evaluate fulfillment of social responsibilities by members of the organization or group
- Manage personnel and resources to enhance the quality of working life
- Articulate, apply, and support policies and processes that reflect the principles of the Code
- Create opportunities for members of the organization or group to grow as professionals
- Use care when modifying or retiring systems
- Recognize and take special care of systems that become integrated into the infrastructure of society

Compliance with the Code
- Uphold, promote, and respect the principles of the Code
- Treat violations of the Code as inconsistent with membership in the ACM

IEEE

To uphold the highest standards of integrity, responsible behavior, and ethical conduct in professional activities
- To hold paramount the safety, health, and welfare of the public, to strive to comply with ethical design and sustainable development practices, to protect the privacy of others, and to disclose promptly factors that might endanger the public or the environment
- To improve the understanding by individuals and society of the capabilities and societal implications of conventional and emerging technologies, including intelligent systems
- To avoid real or perceived conflicts of interest whenever possible, and to disclose them to affected parties when they do exist
- To avoid unlawful conduct in professional activities, and to reject bribery in all its forms
- To seek, accept, and offer honest criticism of technical work, to acknowledge and correct errors, to be honest and realistic in stating claims or estimates based on available data, and to credit properly the contributions of others
- To maintain and improve our technical competence and to undertake technological tasks for others only if qualified by training or experience, or after full disclosure of pertinent limitations

To treat all persons fairly and with respect, to not engage in harassment or discrimination, and to avoid injuring others
- To treat all persons fairly and with respect, and to not engage in discrimination based on characteristics such as race, religion, gender, disability, age, national origin, sexual orientation, gender identity, or gender expression
- To not engage in harassment of any kind, including sexual harassment or bullying behavior
- To avoid injuring others, their property, reputation, or employment by false or malicious actions, rumors or any other verbal or physical abuses

To strive to ensure this code is upheld by colleagues and co-workers
- To support colleagues and co-workers in following this code of ethics, to strive to ensure the code is upheld, and to not retaliate against individuals reporting a violation

IEEE Computer Society

- Public: Software engineers shall act consistently with the public interest.
- Client and employer: Software engineers shall act in a manner that is in the best interests of their client and employer consistent with the public interest.
- Product: Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.
- Judgment: Software engineers shall maintain integrity and independence in their professional judgment.
- Management: Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.
- Profession: Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.
- Colleagues: Software engineers shall be fair to and supportive of their colleagues.
- Self: Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.

Most codes centred on technology also fail to sufficiently cover ethical issues arising from disciplinary dynamics that are important for agent-based social simulation from an organisational point of view. Simulation is referred to in the literature both as a discipline and as a profession (Anzola 2021a; Diallo et al. 2015; Padilla et al. 2018; Silvert 2001; Tolk & Ören 2017). While the difference is not clear-cut, professions are generally associated with the generation of a jurisdiction in the labour market, and the eventual exclusive control of that jurisdiction, based on recognised expertise over some specialised knowledge (Freidson 2007; Young & Muller 2014). Conversely, disciplines are about the structuration and consolidation of bodies of knowledge through the delimitation of aspects such as a specialised object of research, a foundational narrative, a particular research agenda, and specialised theoretical-methodological frameworks (Becher & Trowler 2001; Krishnan 2009). When it comes to ethics, it seems that most technical codes, including the ‘Code of Professional Ethics for Simulationists’ (Table 4), approach computer modelling more as a profession than as a discipline. They neglect aspects that are often found in disciplinary codes of ethics, such as those associated with training, research organisation, and publication processes, because the emphasis is on occupational elements of the everyday practice of simulation.

This narrow understanding of the object of ethical regulation in the professional approach to computer simulation is not limited to standardisation, but is also present in some ethical compliance strategies. In recent years, ‘design’ approaches to ethics have become popular in computer science and engineering, especially for AI applications in social settings (Donia & Shaw 2021; European Commission 2021; IBM 2019). These methods and frameworks seek to prevent ethical issues either by purposefully intervening in the technological application itself (e.g., embedding morality in artificial agents; Dignum et al. 2018) or in the production process (e.g., by clarifying the normative dimensions of the process or by explicitly incorporating ethics-oriented principles, activities, or subprocesses; Donia & Shaw 2021). While useful, design approaches are limited when considering the ethics of a disciplinary field or area of research, for they are product-centred. Computer simulation, however, is not all there is to agent-based social simulation. Additional key scientific outputs and activities, e.g., training, interaction with stakeholders, events, and hiring/promotion, are relevant for everyday activities within a discipline and are also worthy of ethical regulation. A disciplinary area, in comparison to a simulation, does not readily lend itself to being ‘designed’.

Table 4: Adapted from Ören et al. (2002).
Code of Professional Ethics for Simulationists
Personal Development and the Profession
As a simulationist I will:
  • Acquire and maintain professional competence and attitude
  • Treat fairly employees, clients, users, colleagues and employers
  • Encourage and support new entrants to the profession
  • Support fellow practitioners and members of other professions who are engaged in modelling and simulation
  • Assist colleagues to achieve reliable results
  • Promote the reliable and credible use of modelling and simulation
  • Promote the modelling and simulation profession; e.g., advance public knowledge and appreciation of modelling and simulation and clarify and counter false or misleading statements
Professional Competence
As a simulationist I will:
  • Assure product and/or service quality by the use of proper methodologies and technologies
  • Seek, utilize, and provide critical professional review
  • Recommend and stipulate proper and achievable goals for any project
  • Document simulation studies and/or systems comprehensibly and accurately to authorized parties
  • Provide full disclosure of system design assumptions and known limitations and problems to authorized parties
  • Be explicit and unequivocal about the conditions of applicability of specific models and associated simulation results
  • Caution against acceptance of modelling and simulation results when there is insufficient evidence of thorough validation and verification
  • Assure thorough and unbiased interpretations and evaluations of the results of modelling and simulation studies
Trustworthiness
As a simulationist I will:
  • Be honest about any circumstances that might lead to conflict of interest
  • Honor contracts, agreements, and assigned responsibilities and accountabilities
  • Help develop an organizational environment that is supportive of ethical behavior
  • Support studies which will not harm humans (current and future generations) or the environment
Property Rights and Due Credit
As a simulationist I will:
  • Give full acknowledgement to the contributions of others
  • Give proper credit for intellectual property
  • Honor property rights including copyrights and patents
  • Honor privacy rights of individuals and organizations as well as confidentiality of the relevant data and knowledge
Compliance with the Code
As a simulationist I will:
  • Adhere to this code and encourage other simulationists to adhere to it
  • Treat violations of this code as inconsistent with being a simulationist
  • Seek advice from professional colleagues when faced with an ethical dilemma in modelling and simulation activities
  • Advise any professional society which supports this code of desirable updates

First steps

Collective ethical compliance first requires the community to become aware of the increasing importance of scientific integrity and the multiple ways in which behaviour could be ethically considered. While practitioners might already be behaving in a way that is ethically compliant, a conscious effort to analyse critically the different sources of ethical concern should yield a more complex understanding of individual situations. Practitioners with training or supervision responsibilities, for example, should be mindful of an ethical dimension that is not as relevant for those with only research responsibilities. Moreover, as mentioned above, ethical expectations are gradually changing to cover a multiplicity of elements beyond what was traditionally considered central to the practice of science e.g., issues of under-representation. There must be, then, a willingness to continuously engage in ethical reflection and moderate behaviours accordingly.

In the current absence of standards that the community can generally agree upon and collectively employ, it is important for practitioners to strive for self-regulation. In some instances, the overlap with general social norms, customs, and conventions may help to identify the requirements for ethically compliant behaviour. Similarly, institutional ethics procedures should provide guidance about research-specific issues. For aspects related to simulation proper, multiple resources, including the questions listed in Table 1 and some of the resources mentioned in Section 2, especially Shults & Wildman’s (2019) meta-ethical framework, should prove useful.

Ultimately, however, it may be more reasonable for the community to explicitly address the ethics of agent-based social simulation collectively, in the form of a code of ethics and a corresponding set of supporting activities. This approach could help agent-based social simulation deal with: (i) the increasing complexity of the issues associated with scientific integrity, (ii) the limitations of self-regulation and institutional ethics procedures, (iii) the potentially higher costs of managing ethical compliance as individual researchers, (iv) the observed differences in individual and social perceptions of and attitudes towards scientific integrity, (v) the need to raise the profile of the field and improve relationships with external stakeholders, and (vi) the interest in further consolidating the practice of agent-based social simulation.

Some General Recommendations Moving Forward

In order to advance a standardisation process that brings scientific integrity to the fore in agent-based social simulation, an institutional setup is needed that accommodates and provides resources for a range of standardisation activities. Below, we briefly outline some key actions and decisions that might guide the articulation of such a setup.

Actions pertaining to ethical standardisation (and the potential development of a code)

  • Raise awareness: differences in ethical compliance are associated not only with perceptions of and attitudes towards integrity and misconduct, but also with what standardisation entails and its potential effectiveness (Davies 2019; Fleischmann et al. 2010; Shaw & Satalkar 2018). It is therefore necessary to carry out activities (e.g., special events, conference tracks, special issues, dedicated workshops and training) that raise awareness about scientific integrity and its potential standardisation and that foster an initial moderation of perceptions, attitudes and knowledge about the ethics of social simulation (Frankel & Bird 2003).
  • Participation and recruitment: for ethical standardisation to be successful, a reasonable proportion of members should willingly engage with the ethicalisation process, i.e., participate in different roles in the various activities carried out (Frankel & Bird 2003; Freidson 2007; Romani & Szkudlarek 2014). Should the community proceed with the development of a code, as discussed below, recruitment and participation will be crucial for three decisions pertaining to its drafting: who will draft it, whether the drafting process will be open to any member of the community at any time, and who is responsible for approving the code (Messikomer & Cirka 2010; Webley & Werner 2008).
  • Establishing a governance structure: standardisation usually requires some form of institutionalisation (Becher & Trowler 2001; Frankel & Bird 2003; Freidson 2007). The articulation of a governance structure with detailed roles and functions will therefore greatly aid the process. Depending on the scope of the standardisation process, some positions might be necessary or be given more relevance. For example, if an institutional space for conflict resolution is desired, there should be an ombudsperson who operates separately from those in charge of administrative issues (e.g., an ethics committee). Similarly, if a code is developed, a dedicated structure for drafting, implementing and managing the code might be required (Mcdonald 2009; Messikomer & Cirka 2010; Rosenberg 1998).
  • Training: formal training is one of the most powerful standardisation mechanisms in contemporary societies. Ideally, options should be available that account for differences in simulation expertise and knowledge of ethics. Training programmes and scenarios should also be designed to cover different goals. There is a significant difference, for example, between training that is meant to be simply informative and training designed to foster the skills needed to recognise ethical challenges in practice (Fleischmann 2010; Frankel & Bird 2003; Guillemin & Gillam 2004; Israel & Hay 2006). If the community moves forward with the development of a code, training will be fundamental, for practitioners must be trained in the provisions included in the code if they are to be expected to comply with it (Mcdonald 2009; Schwartz 2004; Webley & Werner 2008).
  • Reporting: for ethics to become part of everyday practice, a reporting structure should be developed, probably linked to the governance structure and the training infrastructure, that keeps the community informed about scientific integrity and advances in standardisation (including, if needed, the administration and application of the code), among other matters (Frankel & Bird 2003; Singh 2011; Webley & Werner 2008).

This list is not exhaustive, but it shows the several fronts on which the discussion about ethical standardisation in agent-based social simulation can be advanced. We hope that other practitioners feel motivated by this article to share their experiences and contribute to activities seeking to position ethics and scientific integrity centrally within everyday practice.

Should the agent-based social simulation community decide to move forward with the development of its own code of ethics, there are a few key elements about the design, implementation and management of a code that must be considered. The actions carried out regarding participation and governance would be fundamental in assigning responsibility for these decisions.

Decisions pertaining to the code

  • What type of code?: the literature usually distinguishes two types of ethical code: aspirational and prescriptive (Farrell et al. 2002; Mcdonald 2009; Schwartz 2004). The former, as the name suggests, centres on moral ideals that are believed worthy of being professionally pursued by the community; the latter, in comparison, provides a more elaborate description of expected behaviour in specific situations. Each type fosters a different approach to behaviour regulation. Prescriptive codes, for example, because of their narrower scope, tend to emphasise proscribed rather than virtuous behaviour.
  • Who is it for?: the code might be intended exclusively for the community or deliberately involve external stakeholders. Explicitly involving additional stakeholders might help legitimise the code, which could be useful given the increasing popularity of stakeholder engagement. Doing so, however, poses additional challenges for the conceptualisation of the code, for the roles, expectations, and interests of additional stakeholders will need to be deliberately accounted for (Messikomer & Cirka 2010; Singh 2011; Webley & Werner 2008). In turn, the ‘who’ is particularly important in agent-based social simulation, given that the sense of belonging to the community is rarely defined through training or affiliation, but through practice.
  • Who should draft it?: the drafting process could be open or closed. In the former case, anyone can participate in the drafting at any point; in the latter, the drafting is carried out entirely by a predefined group (e.g., an ad hoc drafting committee or an already appointed ethics committee). There are, naturally, possible combinations, for example, open for the conceptualisation stage but closed for the drafting itself. The options differ in the type and amount of resources employed (e.g., open processes typically require more resources for the drafting, but fewer for dissemination), as well as in the decisions and activities required for the code's approval, implementation, dissemination, and enforcement. As mentioned, opening the drafting process can increase compliance and is also morally responsible, given the obligations a code creates. It may, in addition, strengthen the ethical culture and identity of the community beyond the specific efforts of standardisation and code provision (Becher & Trowler 2001; Romani & Szkudlarek 2014).
  • Who will enforce it?: there is no single way to enforce the code. In most cases, implementation, oversight, and enforcement are included among the functions of the governance structure mentioned above. Whichever option is chosen, it should be made clear and explicit to members of the community (Mcdonald 2009; Schwartz 2004; Singh 2011). Enforcement should not be understood only as dealing with misconduct. Integrity could be promoted, for example, by publishing regular reports on topics that directly pertain to ethical practices and organisation.

These decisions are specific to the code. Yet, as mentioned above, their success ultimately depends on how they are integrated with a diverse set of supporting activities, derived, in part, from the actions listed above. In addition, decisions about code enforcement should be made mindful of the need to guarantee the continuing ethical relevance of the code. Whoever is responsible for ethical oversight must ensure that the code remains contextually adequate (e.g., through updates), that it contributes to scientific integrity (e.g., through ethically compliant decision-making) and that it keeps the different stakeholders engaged (e.g., through constant, open, and transparent communication).

Conclusion

This article sought to raise awareness of the need for practitioners of social simulation to engage in a collective discussion about ethics. We have argued that now is a good time to start this discussion, first, because, despite the increasing popularity of scientific integrity, agent-based social simulation lacks widespread and consensual standards on ethical compliance and, second, because developing models with real-life implications requires being especially mindful of the interests and needs of different stakeholders.

We analysed the two main sources of ethical issues in social simulation from a disciplinary point of view. The first is the modelling workflow: ethical challenges arising during the modelling process were presented and linked to its different stages. The second is the organisation of agent-based social simulation. Two examples were used to show how ethics is differently linked to the practice and organisation of agent-based social simulation. We suggested that the dissimilar and uneven disciplinary expertise of practitioners and the possibility of conflicting disciplinary moral commitments are potential sources of ethical tension. We also claimed that the combination of social science with technology places agent-based social simulation in an ethical context that differs from that of both traditional social disciplines and other technological domains, such as engineering and computer science. Overall, we argued that the ethical challenges linked to the organisation of social simulation bring to the foreground the need for differentiated ethical standardisation.

We then addressed the question of how to enable collective ethical behaviour. We claimed that there are three major options for ethical regulation and standardisation, each with its own advantages and disadvantages. We suggested that, given the distinctive organisational features of social simulation, a code of ethics might be the best long-term strategy.

The article closes with a brief discussion of key actions and decisions pertaining to the standardisation of ethics in agent-based social simulation. A first version of the code, incorporating some of the major conclusions of this article as provisions, is presented in the Appendix. This draft is intended as a contribution to the discussion among stakeholders that needs to happen around the ethics of agent-based social simulation. Subsequent reflections, additions and criticisms are encouraged and welcome. It is our intention to use a variety of institutional spaces, such as social simulation conferences, to advance this discussion.

Discussing the ethics of simulation, we believe, could also encourage further conversation and cross-fertilisation with other types of digitalised scientific research. Several ethical challenges are not exclusive to agent-based models, but pertain more generally to the operation of diverse information technologies. In the domain of artificial intelligence, for instance, the concern with epistemic opacity mentioned above has led to the popularisation of explainable artificial intelligence (XAI), a set of methods that seek to make artificial intelligence models, particularly machine learning, easier to understand for a human subject (Adadi & Berrada 2018). Similarly, while the conditions are not the same, practitioners of agent-based social simulation might also learn from past experiences of researchers in other fields. A decade ago, for example, research on the ethics of algorithms (Kraemer et al. 2011; Mittelstadt et al. 2016) tackled the problem of the value-ladenness of model implementation, an issue that is worth discussing further from a moral standpoint in agent-based social simulation (David 2021).

Acknowledgements

This work was supported in part by the Economic and Social Research Council, UK, under grant ES/S007024/1, Centre for the Evaluation of Complexity Across the Nexus.

Notes

  1. While a conceptual agreement about research integrity is lacking and some distinctions may be worth making between ‘research’ and ‘scientific’ integrity (Shaw 2019), this text uses ‘scientific integrity’ to highlight, first, the role of ethics beyond preventing misconduct and, second, the additional dimensions of integrity in scientific practices and organisation beyond research.↩︎

  2. In social simulation, for example, Axtell et al. (1996) called in their now seminal article for an institutional setup that rewards practices such as replication and docking, two increasingly popular concerns in contemporary science with interesting ethical implications. Even though this call was made almost 30 years ago, these activities remain peripheral and very resource-demanding, both from the perspective of framing the research and of validating the results (Anzola 2021b).↩︎

  3. The ethical status of these procedures has also been questioned. Some authors (e.g., Haggerty 2004; Hammersley 2009) argue, first, that principles to regulate behaviour are themselves unethical, for they impinge on research autonomy and, second, that they might be conducive to worse research outputs and overall research ethics, especially in social science, for they do not acknowledge key idiosyncratic features of social research.↩︎

  4. Findings reported in this literature should nevertheless be approached critically. A significant portion of the literature on codes of ethics centres on organisations, and some key discussions, such as those on code drafting and code effectiveness, rely significantly on the analysis of organisational codes of ethics. There are, however, some key differences between organisations and entire disciplinary or professional areas that are worth being mindful of. For example, unlike professional and disciplinary codes, organisational codes centre more often on proscribing behaviour and protecting the company from the employees, in part because hierarchies and power asymmetries are more pronounced (Khaled & Gond 2020; Komić et al. 2015; Mcdonald 2009; Valkenburg et al. 2021). Similarly, because disciplines are larger and more intricate social systems, ethical standardisation is less uniform (e.g., it may vary according to sociodemographic factors, institutional setting or disciplinary sub-specialisation; Ana et al. 2013; Freidson 2007; Israel 2020; Schwartz 2004) and might, overall, require more time for its effects to be adequately assessed (see e.g., Baker et al. 1999 for a discussion of the long-term implications of the American Medical Association's code of ethics).↩︎

Appendix: A First Proposal

Table 5 presents a draft intended as a first step in what needs to be a collective effort involving not only practitioners, but all other stakeholders. Researchers, analysts, or modellers conducting agent-based modelling in the social, economic, and policy sciences could equally benefit from and contribute to the process of ethical standardisation. We hope the code can be further developed at conferences and meetings within the community. It would not be right, nor do we wish, to own or dominate the development of a code of ethics for ABM, so we hope and encourage others to improve this draft and/or develop their own. Following the table, we include a brief, purely illustrative sketch of how some of the provisions might be operationalised in practice.

Table 5: Proposed draft of a code of ethics for agent-based social simulation
The Code
Institutional considerations
  1. Professional competence and training: Have in place plans for training for new and experienced modellers, not only on technical aspects of modelling, but also on model use and interpretation, and interdisciplinary working.
  2. Interdisciplinary working: Have in place plans and common understandings of how the challenges of interdisciplinary working will be managed (e.g., different assumptions about the aims and value of methods and projects, knowledge asymmetries, different ethical standards).
Individual project considerations
  1. Project management, transparency, and quality: Plan and maintain project management processes to ensure the documentation, quality, and reproducibility of all model stages.
  2. Narrative and positions: Be honest and open about your underlying beliefs about the system you are modelling. Are there contested understandings, results, or interventions that you expect your model to support or contradict?
  3. Model use: Develop a plan for how a model will be used and for unplanned use of your model or its results.
  4. Model inputs: Be conscious of bias in the data and theory you use to inform a model.
  5. Model assumptions: Be honest and transparent about the assumptions in your model and their implications.
  6. Stakeholders: Involve or engage with any relevant stakeholders, including the people or organisations you are modelling.
  7. Representation: Consider how people and/or groups are represented in your model; ensure no one or group is represented in an inappropriate or inequitable way.
  8. Open science: Make all model code and analysis open and provide documentation.
  9. Dissemination: Consider the appropriate use of standards in model reporting.
  10. Publication: Publish in an open and honest way, acknowledging failures and false starts.
  11. Authorship: Ensure authorship and contributions are fairly attributed.
  12. Public communication: Consider preparing documentation which presents the model, results, and their interpretation for lay or non-modeller audiences.
  13. Maintenance: Make appropriate arrangements for long-term model and documentation maintenance.
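
To make the draft less abstract, we close with a minimal, hypothetical sketch of how provisions such as 1 (project management, transparency, and quality) and 8 (open science) might be operationalised in everyday practice: recording, alongside openly released code, the seed, parameters, and environment needed to reproduce a stochastic run. The sketch is written in Python; all names in it (run_model, run_with_provenance, run_record.json) are illustrative assumptions rather than part of the proposed code, and existing toolkits may already offer richer alternatives.

# A minimal sketch, assuming a hypothetical model entry point; illustrative only.
import json
import platform
import random
import time

def run_model(seed, params):
    """Stand-in for an actual agent-based model; returns summary outputs."""
    rng = random.Random(seed)  # fixing the seed makes the stochastic run repeatable
    return {"mean_outcome": sum(rng.random() for _ in range(1000)) / 1000}

def run_with_provenance(seed, params, out_path="run_record.json"):
    """Run the model and archive the metadata needed to reproduce the run."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "seed": seed,                      # required to replay stochastic behaviour
        "parameters": params,              # the full parameter set, not only those varied
        "platform": platform.platform(),   # environment details aid replication
        "python_version": platform.python_version(),
        "results": run_model(seed, params),
    }
    with open(out_path, "w") as f:         # archive alongside the openly released code
        json.dump(record, f, indent=2)
    return record

if __name__ == "__main__":
    print(run_with_provenance(seed=42, params={"n_agents": 100, "p_interact": 0.1}))

Under this assumed convention, archiving run_record.json together with the model code would allow a third party to re-run the model with the same seed and parameters and check the reported outputs, which is the practical core of the transparency and open science provisions.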

References

ACM. (2018). ACM Code of Ethics and Professional Conduct. Available at: https://www.acm.org/code-of-ethics.

ADADI, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–52160. [doi:10.1109/access.2018.2870052]

ALBERTS, B., Cicerone, R., Fienberg, S., Kamb, A., McNutt, M., Nerem, R., Schekman, R., Shiffrin, R., Stodden, V., Suresh, S., Zuber, M. T., Pope, B. K., & Jamieson, K. H. (2015). Self-correction in science at work. Science, 348(6242), 1420–1422. [doi:10.1126/science.aab3847]

AMERICAN Psychological Association. (2017). Ethical principles of psychologists and code of conduct. Available at: https://www.apa.org/ethics/code.

AMERICAN Sociological Association. (2018). Code of ethics. Available at: https://www.asanet.org/sites/default/files/asa_code_of_ethics-june2018a.pdf.

ANA, J., Koehlmoos, T., Smith, R., & Yan, L. (2013). Research misconduct in low- and middle-income countries. PLoS Medicine, 10(3), e1001315. [doi:10.1371/journal.pmed.1001315]

ANDERSON, M. (2018). Shifting perspectives on research integrity. Journal of Empirical Research on Human Research Ethics, 13(5), 459–460. [doi:10.1177/1556264618813737]

ANZOLA, D. (2021a). Disagreement in discipline-building processes. Synthese, 198, 6201–6224. [doi:10.1007/s11229-019-02438-9]

ANZOLA, D. (2021b). Social epistemology and validation in agent-based social simulation. Philosophy & Technology, 34, 1333–1361. [doi:10.1007/s13347-021-00461-8]

AXTELL, R., Axelrod, R., Epstein, J., & Cohen, M. (1996). Aligning simulation models: A case study and results. Computational & Mathematical Organization Theory, 1(2), 123–141. [doi:10.1007/bf01299065]

BAK, M. (2022). Computing fairness: Ethics of modeling and simulation in public health. Simulation: Transactions of the Society for Modeling and Simulation International, 98(2), 103–111. [doi:10.1177/0037549720932656]

BAKER, R., Caplan, A., Emanuel, L., & Latham, S. (1999). The American Medical Ethics Revolution. Baltimore, MD: Johns Hopkins University Press.

BARBROOK-JOHNSON, P., Castellani, B., Hills, D., Penn, A., & Gilbert, N. (2021). Policy evaluation for a complex world: Practical methods and reflections from the UK Centre for the Evaluation of Complexity across the Nexus. Evaluation, 27(1), 4–17. [doi:10.1177/1356389020976491]

BARRERA, D., & Simpson, B. (2012). Much ado about deception: Consequences of deceiving research participants in the social sciences. Sociological Methods & Research, 41(3), 383–413. [doi:10.1177/0049124112452526]

BECHER, T., & Trowler, P. (2001). Academic Tribes and Territories. London: Open University Press.

BREY, P. (2014). 'Virtual reality and computer simulation.' In R. Sandler (Ed.), Ethics and Emerging Technologies. London: Palgrave Macmillan.

CALDER, M., Craig, C., Culley, D., de Cani, R., Donnelly, C., Douglas, R., Edmonds, B., Gascoigne, J., Gilbert, N., Hargrove, C., Hinds, D., Lane, D., Mitchell, D., Pavey, G., Robertson, D., Rosewell, B., Sherwin, S., Walport, M., & Wilson, A. (2018). Computational modelling for decision-making: Where, why, what, who and how. Royal Society Open Science, 5(6), 172096. [doi:10.1098/rsos.172096]

CASSINI, A. (2022). Simulation models and probabilities: A Bayesian defense of the value-free ideal. Simulation, 98(2), 113–125. [doi:10.1177/00375497211028815]

COLEMAN, G. (2009). CODE IS SPEECH: Legal tinkering, expertise, and protest among free and open source software developers. Cultural Anthropology, 24(3), 420–454. [doi:10.1111/j.1548-1360.2009.01036.x]

CONTE, R., Andrighetto, G., & Campennì, M. (Eds.). (2013). Minding Norms. Oxford: Oxford University Press.

CURRIE, C., Fowler, J., Kotiadis, K., Monks, T., Onggo, B., Robertson, D., & Tako, A. (2020). How simulation modelling can help reduce the impact of COVID-19. Journal of Simulation, 14(2), 83–97. [doi:10.1080/17477778.2020.1751570]

DAVID, N. (2021). Implementations, interpretative malleability, value-ladenness and the moral significance of agent-based social simulations. AI & SOCIETY. [doi:10.1007/s00146-021-01304-y]

DAVIES, S. (2019). An ethics of the system: Talking to scientists about research integrity. Science and Engineering Ethics, 25(4), 1235–1253. [doi:10.1007/s11948-018-0064-y]

DIALLO, S., Gore, R., Padilla, J., & Lynch, C. (2015). An overview of modeling and simulation using content analysis. Scientometrics, 103(3), 977–1002. [doi:10.1007/s11192-015-1578-6]

DIGNUM, V., Baldoni, M., Baroglio, C., Caon, M., Chatila, R., Dennis, L., Génova, G., Haim, G., Kließ, M., Lopez-Sanchez, M., Micalizio, R., Pavón, J., Slavkovik, M., Smakman, M., van Steenbergen, M., Tedeschi, S., van der Torre, L., Villata, S., & de Wildt, T. (2018). Ethics by design. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, ACM, New York. [doi:10.1145/3278721.3278745]

DONIA, J., & Shaw, J. (2021). Ethics and values in design: A structured review and theoretical critique. Science and Engineering Ethics, 27(5), 57. [doi:10.1007/s11948-021-00329-2]

DURÁN, J. (2018). Computer Simulations in Science and Engineering. Berlin Heidelberg: Springer.

DURÁN, J., & Formanek, N. (2018). Grounds for trust: Essential epistemic opacity and computational reliabilism. Minds and Machines, 28(4), 645–666.

ELLIOTT, K. (2008). Scientific judgment and the limits of conflict-of-interest policies. Accountability in Research, 15(1), 1–29. [doi:10.1080/08989620701783725]

ELSENBROICH, C., & Gilbert, N. (2014). Modelling Norms. Berlin Heidelberg: Springer.

ENSERINK, M. (2018). European science funders ban grantees from publishing in paywalled journals. Available at: https://www.science.org/content/article/european-science-funders-ban-grantees-publishing-paywalled-journals. [doi:10.1126/science.aav3093]

EUROPEAN Commission. (2021). Ethics by design and ethics of use approaches for Artificial Intelligence. Available at: https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethics-by-design-and-ethics-of-use-approaches-for-artificial-intelligence_he_en.pdf.

FARRELL, B., Cobbin, D., & FARRELL, H. (2002). Codes of ethics: Their evolution, development and other controversies. Journal of Management Development, 21(2), 152–163. [doi:10.1108/02621710210417448]

FLEISCHMANN, K. (2010). Preaching what we practice: Teaching ethical decision-making to computer security professionals. In R. Sion, R. Curtmola, S. Dietrich, A. Kiayias, J. Miret, K. Sako, & F. Sebé (Eds.), Financial Cryptography and Data Security (pp. 197–202). Berlin Heidelberg: Springer. [doi:10.1007/978-3-642-14992-4_18]

FLEISCHMANN, K., Wallace, W., & Grimes, J. (2010). The values of computational modelers and professional codes of ethics: Results from a field study. IEEE, 2010 43rd Hawaii International Conference on System Sciences. [doi:10.1109/hicss.2010.400]

FLORIDI, L. (2018). Soft ethics and the governance of the digital. Philosophy & Technology, 31(1), 1–8.

FLORIDI, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. [doi:10.1098/rsta.2016.0360]

FOX, M., & Braxton, J. (1994). Misconduct and social control in science: Issues, problems, solutions. The Journal of Higher Education, 65(3), 373. [doi:10.2307/2943973]

FRANKEL, M. (2000). Scientific societies as sentinels of responsible research conduct. Proceedings of the Society for Experimental Biology and Medicine, 224(4), 216–219. [doi:10.1046/j.1525-1373.2000.22424.x]

FRANKEL, M., & Bird, S. (Eds.) (2003). The role of scientific societies in promoting research integrity [Special Issue]. Science and Engineering Ethics, 9(2). [doi:10.1007/s11948-003-0001-5]

FREIDSON, E. (2007). Professionalism. Cambridge: Polity Press.

FRICKER, M. (2007). Epistemic Injustice. Oxford: Oxford University Press.

GALÁN, J., Izquierdo, L., Izquierdo, S., Santos, J., Olmo, R. del, & López-Paredes, A. (2016). 'Checking simulations: Detecting and avoiding errors and artefacts.' In B. Edmonds & R. Meyer (Eds.), Simulating Social Complexity. Berlin Heidelberg: Springer.

GALISON, P. (1996). 'Computer simulations and the trading zone.' In P. Galison & D. Stump (Eds.), The Disunity of Science: Boundaries, Contexts, and Power. Stanford, CA: Stanford University Press.

GILBERT, N. (2008). Agent-Based Models. London: Sage.

GILBERT, N., Ahrweiler, P., Barbrook-Johnson, P., Narasimhan, K., & Wilkinson, H. (2018). Computational modelling of public policy: Reflections on practice. Journal of Artificial Societies and Social Simulation, 21(1), 14: https://www.jasss.org/21/1/14.html. [doi:10.18564/jasss.3669]

GLEIBS, I. (2017). Are all “Research Fields” equal? Rethinking practice for the use of data from crowdsourcing market places. Behavior Research Methods, 49(4), 1333–1342. [doi:10.3758/s13428-016-0789-y]

GUILLEMIN, M., & Gillam, L. (2004). Ethics, reflexivity, and “Ethically Important Moments” in research. Qualitative Inquiry, 10(2), 261–280. [doi:10.1177/1077800403262360]

HAGGERTY, K. (2004). Ethics creep: Governing social science research in the name of ethics. Qualitative Sociology, 27(4), 391–414. [doi:10.1023/b:quas.0000049239.15922.a3]

HAMMERSLEY, M. (2009). Against the ethicists: On the evils of ethical regulation. International Journal of Social Research Methodology, 12(3), 211–225. [doi:10.1080/13645570802170288]

HARDY, M. (2016). Drafting an effective ethical code of conduct for professional societies: A practical guide. Administrative Sciences, 6(4), 16. [doi:10.3390/admsci6040016]

HEDGECOE, A. (2016). Reputational risk, academic freedom and research ethics review. Sociology, 50(3), 486–501. [doi:10.1177/0038038515590756]

HIRSCH Hadorn, G., & Baumberger, C. (2019). 'What types of values enter simulation validation and what are their roles?' In C. Beisbart & N. J. Saam (Eds.), Computer Simulation Validation. Berlin Heidelberg: Springer. [doi:10.1007/978-3-319-70766-2_40]

HOECHT, A. (2011). Whose ethics, whose accountability? A debate about university research ethics committees. Ethics and Education, 6(3), 253–266. [doi:10.1080/17449642.2011.632719]

HOLLANDER, C., & Wu, A. (2011). The current state of normative agent-based systems. Journal of Artificial Societies and Social Simulation, 14(2), 6: https://www.jasss.org/14/2/6.html. [doi:10.18564/jasss.1750]

HORBACH, S., & Halffman, W. (2017). Promoting virtue or punishing fraud: Mapping contrasts in the language of ‘scientific integrity’. Science and Engineering Ethics, 23(6), 1461–1485. [doi:10.1007/s11948-016-9858-y]

IBM. (2019). Everyday ethics for Artificial Intelligence. Available at: https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf.

IEEE. (2020). IEEE Code of Ethics. Available at: https://www.ieee.org/about/corporate/governance/p7-8.html.

IEEE Computer Society. (1999). Code of Ethics. Available at: https://www.computer.org/education/code-of-ethics.

INTEMANN, K. (2015). Distinguishing between legitimate and illegitimate values in climate modeling. European Journal for Philosophy of Science, 5(2), 217–232. [doi:10.1007/s13194-014-0105-6]

IOANNIDIS, J. (2012). Why science is not necessarily self-correcting. Perspectives on Psychological Science, 7(6), 645–654. [doi:10.1177/1745691612464056]

ISRAEL, M. (2020). 'Organizing and contesting research ethics.' In R. Iphofen (Ed.), Handbook of Research Ethics and Scientific Integrity. Berlin Heidelberg: Springer. [doi:10.1007/978-3-030-16759-2_63]

ISRAEL, M., & Hay, I. (2006). Research Ethics for Social Scientists. Thousand Oaks, CA: Sage.

IVERSON, M., Frankel, M., & Siang, S. (2003). Scientific societies and research integrity: What are they doing and how well are they doing it? Science and Engineering Ethics, 9(2), 141–158. [doi:10.1007/s11948-003-0002-4]

JAMAL, K., & Bowie, N. (1995). Theoretical considerations for a meaningful code of professional ethics. Journal of Business Ethics, 14(9), 703–714. [doi:10.1007/bf00872324]

KAPTEIN, M., & Schwartz, M. (2007). The effectiveness of business codes: A critical examination of existing studies and the development of an integrated research model. Journal of Business Ethics, 77(2), 111–127. [doi:10.1007/s10551-006-9305-0]

KHALED, W., & Gond, J.-P. (2020). How do external regulations shape the design of ethical tools in organisations? An open polity and sociology of compliance perspective. Human Relations, 73(5), 653–681.

KOEPSELL, D. (2010). On genies and bottles: Scientists’ moral responsibility and dangerous technology R&D. Science and Engineering Ethics, 16(1), 119–133. [doi:10.1007/s11948-009-9158-x]

KOMIĆ, D., Marušić, S. L., & Marušić, A. (2015). Research integrity and research ethics in professional codes of ethics: Survey of terminology used by professional organizations across research disciplines. PLoS ONE, 10(7), e0133662.

KRAEMER, F., Overveld, K. van, & Peterson, M. (2011). Is there an ethics of algorithms? Ethics and Information Technology, 13(3), 251–260. [doi:10.1007/s10676-010-9233-7]

KRASNOW, M., Howard, R., & Eisenbruch, A. (2020). The importance of being honest? Evidence that deception may not pollute social science subject pools after all. Behavior Research Methods, 52(3), 1175–1188. [doi:10.3758/s13428-019-01309-y]

KRISHNAN, A. (2009). What are academic disciplines? Some observations on the disciplinarity vs. interdisciplinarity debate. NCRM Working Paper Series, ESRC National Centre for Research Methods. Available at: http://eprints.ncrm.ac.uk/783/1/what_are_academic_disciplines.pdf.

LERE, J., & Gaumnitz, B. (2007). Changing behavior by improving codes of ethics. American Journal of Business, 22(2), 7–18. [doi:10.1108/19355181200700006]

MCDONALD, G. (2009). An anthology of codes of ethics. European Business Review, 21(4), 344–372. [doi:10.1108/09555340910970445]

MESSIKOMER, C., & Cirka, C. (2010). Constructing a code of ethics: An experiential case of a national professional organization. Journal of Business Ethics, 95(1), 55–71. [doi:10.1007/s10551-009-0347-y]

MITTELSTADT, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. [doi:10.1177/2053951716679679]

MORRIS-MARTIN, A., De Vos, M., & Padget, J. (2019). Norm emergence in multiagent systems: A viewpoint paper. Autonomous Agents and Multi-Agent Systems, 33(6), 706–749. [doi:10.1007/s10458-019-09422-0]

MORRISON, M. (2014). Values and uncertainty in simulation models. Erkenntnis, 79(S5), 939–959. [doi:10.1007/s10670-013-9537-1]

NEUMANN, M. (2008). Homo socionicus: A case study of simulation models of norms. Journal of Artificial Societies and Social Simulation, 11(4), 6: https://www.jasss.org/11/4/6.html.

ORIGGI, G., & Ciranna, S. (2017). Epistemic injustice. The case of digital environments. In I. Kidd, J. Medina, & G. Pohlhaus (Eds.), The Routledge Handbook of Epistemic Injustice. London: Routledge. [doi:10.4324/9781315212043-1]

OWEN, R., Macnaghten, P., & Stilgoe, J. (2012). Responsible research and innovation: From science in society to science for society, with society. Science and Public Policy, 39(6), 751–760. [doi:10.1093/scipol/scs093]

ÖREN, T., Elzas, M., Smit, I., & Birta, L. (2002). Code of professional ethics for simulationists. The Proceedings of the 2002 Summer Computer Simulation Conference, San Diego, CA.

PADILLA, J., Diallo, S., Lynch, C., & Gore, R. (2018). Observations on the practice and profession of modeling and simulation: A survey approach. Simulation, 94(6), 493–506. [doi:10.1177/0037549717737159]

PALMER, E. (2017). Beyond proximity: Consequentialist ethics and system dynamics. Etikk i praksis - Nordic Journal of Applied Ethics, 1, 89–105. [doi:10.5324/eip.v11i1.1978]

PARKER, W. (2014). Values and uncertainties in climate prediction, revisited. Studies in History and Philosophy of Science Part A, 46, 24–30. [doi:10.1016/j.shpsa.2013.11.003]

PARKER, W., & Winsberg, E. (2018). Values and evidence: How models make a difference. European Journal for Philosophy of Science, 8(1), 125–142. [doi:10.1007/s13194-017-0180-6]

POILE, C., & Safayeni, F. (2016). Using computational modeling for building theory: A double edged sword. Journal of Artificial Societies and Social Simulation, 19(3), 8: https://www.jasss.org/19/3/8.html. [doi:10.18564/jasss.3137]

RAILSBACK, S., & Grimm, V. (2012). Agent-Based and Individual-Based Modeling. Princeton, NJ: Princeton University Press.

RESNICK, D. (2013). Ethics of science. In S. Psillos & M. Curd (Eds.), The Routledge Companion to Philosophy of Science. London: Routledge.

ROMANI, L., & Szkudlarek, B. (2014). The struggles of the interculturalists: Professional ethical identity and early stages of codes of ethics development. Journal of Business Ethics, 119(2), 173–191. [doi:10.1007/s10551-012-1610-1]

ROSENBERG, R. (1998). Beyond the code of ethics: The responsibility of professional societies. ACM SIGCAS Computers and Society, 28(2), 18–25. [doi:10.1145/276758.276768]

SALLOCH, S. (2018). The dual use of research ethics committees: Why professional self-governance falls short in preserving biosecurity. BMC Medical Ethics, 19(1), 53. [doi:10.1186/s12910-018-0295-0]

SANDLER, R. (Ed.). (2014). Ethics and Emerging Technologies. New York, NY: Palgrave Macmillan.

SATALKAR, P., & Shaw, D. (2018). Is failure to raise concerns about misconduct a breach of integrity? Researchers’ reflections on reporting misconduct. Accountability in Research, 25(6), 311–339. [doi:10.1080/08989621.2018.1493577]

SCHWARTZ, M. (2002). A code of ethics for corporate code of ethics. Journal of Business Ethics, 41, 27–43.

SCHWARTZ, M. (2004). Effective corporate codes of ethics: Perceptions of code users. Journal of Business Ethics, 55(4), 321–341. [doi:10.1007/s10551-004-2169-2]

SHAW, D. (2019). The quest for clarity in research integrity: A conceptual schema. Science and Engineering Ethics, 25(4), 1085–1093. [doi:10.1007/s11948-018-0052-2]

SHAW, D., & Satalkar, P. (2018). Researchers’ interpretations of research integrity: A qualitative study. Accountability in Research, 25(2), 79–93. [doi:10.1080/08989621.2017.1413940]

SHORT, J. L., & Toffel, M. (2010). Making self-regulation more than merely symbolic: The critical role of the legal environment. Administrative Science Quarterly, 55(3), 361–396. [doi:10.2189/asqu.2010.55.3.361]

SHULTS, L., & Wildman, W. (2019). 'Ethics, computer simulation, and the future of humanity.' In S. Diallo, W. Wildman, L. Shults, & A. Tolk (Eds.), Human Simulation: Perspectives, Insights, and Applications. Berlin Heidelberg: Springer. [doi:10.1007/978-3-030-17090-5_2]

SHULTS, L., Wildman, W., & Dignum, V. (2018). The ethics of computer modeling and simulation. In M. Rabe, A. Juan, N. Mustafee, A. Skoogh, S. Jain, & B. Johansson (Eds.), 2018 Winter Simulation Conference (WSC) (pp. 4069–4083). IEEE. [doi:10.1109/wsc.2018.8632517]

SILVERT, W. (2001). Modelling as a discipline. International Journal of General Systems, 30(3), 261–282. [doi:10.1080/03081070108960709]

SINGH, D. (2019). Open-access row prompts editorial board of Elsevier journal to resign. Nature. Available at: https://www.nature.com/articles/d41586-019-00135-8.

SINGH, J. (2011). Determinants of the effectiveness of corporate codes of ethics: An empirical study. Journal of Business Ethics, 101(3), 385–395. [doi:10.1007/s10551-010-0727-3]

SOBKOWICZ, P. (2019). Social simulation models at the ethical crossroads. Science and Engineering Ethics, 25(1), 143–157. [doi:10.1007/s11948-017-9993-0]

SQUAZZONI, F. (2012). Agent-Based Computational Sociology. Hoboken, NJ: Wiley.

SQUAZZONI, F., Polhill, G., Edmonds, B., Ahrweiler, P., Antosz, P., Scholz, G., Chappin, É., Borit, M., Verhagen, H., Giardini, F., & Gilbert, N. (2020). Computational models that matter during a global pandemic outbreak: A call to action. Journal of Artificial Societies and Social Simulation, 23(2), 10: https://www.jasss.org/23/2/10.html. [doi:10.18564/jasss.4298]

STENECK, N. (2006). Fostering integrity in research: Definitions, current knowledge, and future directions. Science and Engineering Ethics, 12(1), 53–74. [doi:10.1007/s11948-006-0006-y]

TAYLOR, P. (2009). Scientific self-regulation — So good, how can it fail? Science and Engineering Ethics, 15(3), 395–406. [doi:10.1007/s11948-009-9123-8]

TOLK, A., Lane, J., Shults, L., & Wildman, W. (2021). 'Panel on ethical constraints on validation, verification, and application of simulation.' In S. Kim, B. Feng, K. Smith, S. Masoud, Z. Zheng, C. Szabo, & M. Loper (Eds.), 2021 Winter Simulation Conference (WSC) (pp. 1–15). IEEE. [doi:10.1109/wsc52266.2021.9715431]

TOLK, A., & Ören, T. (Eds.). (2017). The Profession of Modeling and Simulation. Hoboken, NJ: Wiley.

TSAMADOS, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2022). The ethics of algorithms: Key problems and solutions. AI & SOCIETY, 37(1), 215–230. [doi:10.1007/s00146-021-01154-8]

VALKENBURG, G., Dix, G., Tijdink, J., & Rijcke, S. de. (2021). Expanding research integrity: A cultural-practice perspective. Science and Engineering Ethics, 27(1), 10. [doi:10.1007/s11948-021-00291-z]

VON Hippel, E., & von Krogh, G. (2003). Open source software and the “private-collective” innovation model: Issues for organization science. Organization Science, 14(2), 209–223. [doi:10.1287/orsc.14.2.209.14992]

WAGENKNECHT, S. (2016). A Social Epistemology of Research Groups. London: Palgrave Macmillan.

WEBLEY, S., & Werner, A. (2008). Corporate codes of ethics: Necessary but not sufficient. Business Ethics: A European Review, 17(4), 405–415. [doi:10.1111/j.1467-8608.2008.00543.x]

WILDMAN, W. (2019). Modeling ethics: Reflections at the intersection of ethics and simulation. 2019 Spring Simulation Conference (SpringSim). [doi:10.23919/springsim.2019.8732924]

WILENSKY, U., & Rand, W. (2015). An Introduction to Agent-Based Modeling. Cambridge, MA: MIT Press.

WILLIAMSON, T. (2010). Predicting building performance: The ethics of computer simulation. Building Research & Information, 38(4), 401–410. [doi:10.1080/09613218.2010.481204]

WINSBERG, E., Huebner, B., & Kukla, R. (2014). Accountability and values in radically collaborative research. Studies in History and Philosophy of Science Part A, 46, 16–23. [doi:10.1016/j.shpsa.2013.11.007]

WOERMANN, M., & Cilliers, P. (2016). 'The ethics of complexity and the complexity of ethics.' In R. Preiser (Ed.), Critical Complexity (pp. 265–284). Berlin: De Gruyter. [doi:10.1515/9781501502590-021]

WYATT, S. (2008). 'Technological determinism is dead; Long live technological determinism.' In E. Hackett, O. Amsterdamska, M. Lynch, & J. Wajcman (Eds.), The Handbook of Science and Technology Studies. Cambridge, MA: MIT Press.

YOUNG, M., & Muller, J. (2014). From the Sociology of Professions to the Sociology of Professional Knowledge. In M. Young & J. Muller (Eds.), Knowledge, Expertise and the Professions. London: Routledge.