Review

Crossing Boundaries: The Ethics of AI and Geographic Information Technologies

Department of Philosophy, Faculty of Behavioural, Management and Social Sciences, University of Twente, 7522 NB Enschede, The Netherlands
ISPRS Int. J. Geo-Inf. 2024, 13(3), 87; https://doi.org/10.3390/ijgi13030087
Submission received: 20 December 2023 / Revised: 28 February 2024 / Accepted: 29 February 2024 / Published: 9 March 2024
(This article belongs to the Special Issue Trustful and Ethical Use of Geospatial Data)

Abstract

Over the past two decades, there has been increasing research on the use of artificial intelligence (AI) and geographic information technologies for monitoring and mapping varying phenomena on the Earth’s surface. At the same time, there has been growing attention given to the ethical challenges that these technologies present (both individually and collectively in fields such as critical cartography, ethics of AI and GeoAI). This attention has produced a growing number of critical commentaries and articles as well as guidelines (by academic, governmental, and private institutions) that have been drafted to raise these ethical challenges and suggest potential solutions. This paper presents a review of 16 ethical guidelines of AI and 8 guidelines of geographic information technologies, analysing how these guidelines define and employ a number of ethical values and principles (e.g., autonomy, bias, privacy, and consent). One of the key findings from this review is the asymmetrical mentioning of certain values and principles within the guidelines. The AI guidelines make very clear the potential of AI to negatively impact social and environmental justice, autonomy, fairness and dignity, while far less attention is given to these impacts in the geographic information guidelines. This points to a need for the geo-information guidelines to be more attentive to the role geographic information can play in disempowering individuals and groups.

1. Introduction

Artificial intelligence (AI) and geographic information technologies (e.g., satellites, drones and GPS devices) have become increasingly relied on in various fields ranging from commercial use, military planning and epidemiological research to public administration. Artificial intelligence refers to a range of computer systems that can (a) perform specific tasks (e.g., facial recognition systems or chatbots), (b) exhibit behaviour or processes of human-level intelligence (referred to as artificial general intelligence), or (c) exhibit behaviour or processes that may be beyond human-level intelligence (referred to as artificial super intelligence) [1]. Geographic information, according to Bishop and Grubesic [2] and Goodchild and Longley [3], refers to facts and data that relate to events, activities and phenomena located on the Earth’s surface. This information pertains to human as well as non-human activities, which can be both real-time (e.g., weather forecasting) and historical (e.g., mapping Chinese administrative units from 222 BCE to 1911 CE using the database of the China Historical Geographic Information System).
While there are many positive opportunities that both AI and geographic information technologies create, there has also been increasing attention directed towards the social, political and ethical dangers that can arise from these technologies. This attention has led to critical research drawing attention to these dangers, along with ethical guidelines written by political organisations, technology companies and research institutions. This paper will assess 16 ethical guidelines concerning the use of AI and 8 ethical guidelines concerning the use of geographic information (geo-information hereon) technologies. This assessment will take the form of a review of these guidelines to highlight the central ethical values and principles identified as important to consider when looking at the development and use of these technologies.
Section 2 will begin with a case example of the combination of AI and geo-information technologies to highlight some of the ethical challenges these technologies present. This example will be the humanitarian mapping of ‘slums’ and informal settlements. Using the classification drawn from UN Habitat [4], a ‘slum’ is usually characterised by five key indicators: lack of security of tenure, lack of access to water, lack of sanitation, overcrowding and inadequate structural quality of housing [4]. Informal settlements, by contrast, are defined by inhabitants having no security of tenure (or ranging between squatting and informal rental properties), a lack of formal basic services and housing that is not compliant with building regulations or is situated close to hazardous areas [4]. But Thomson et al. [5], Kuffer et al. [6] and Abascal et al. [7] highlight the need to look at the scale of deprivation in the surroundings of these areas. This means looking at the exposure of communities to the social, economic, environmental and ecological risks faced by those living in these areas. For this reason, ‘slums’ and informal settlements will be referred to as deprived urban areas (DUAs) hereafter. The mapping of these DUAs stems from the rapid growth of populations in these areas being met with a lack of adequate resources and consistent data on their living conditions. Moreover, AI has been relied on to support the mapping of these areas due to technical opportunities (e.g., increased spatial data processing speed, accuracy of classification and the scalability to map entire regions rather than just individual neighbourhoods). While there are opportunities in the use of AI-assisted mapping of these areas, there are also ethical, social and political risks that arise in these mapping efforts. This section presents a number of these challenges and emphasises the critical attention warranted by the integration of AI and geo-information technologies.
Section 3 presents the methodology for the review of the AI and geo-information guidelines. This review will take the form of a literature analysis of the guidelines. This analysis is based upon a keyword search of 19 terms (e.g., accountability, bias, consent and explainability) framed as ethical values and principles that need to be respected and embedded in the development and deployment of AI and geo-information technologies. Section 4 illustrates the results of this analysis, showing the distribution of these terms across the 24 combined guidelines. Section 5 discusses how these guidelines either overlap or show important differences in how the chosen ethical values and principles are presented. This section will explore how the guidelines shed insight on the role of researchers, data controllers and private companies as well as governmental bodies in the ethical assessment of these technologies and their impacts, as well as pointing out how certain values are given more attention in one set of guidelines than in the other. For instance, values such as explainability, fairness, autonomy, dignity and justice are given more attention in the AI guidelines than in the geo-information guidelines. This points towards a need for the geo-information guidelines to be more attentive and/or to borrow the ethical language more commonly used for analysing AI technologies in order to broaden the assessment of the potential negative impact of geo-information technologies.

2. AI-Assisted Humanitarian Mapping of Vulnerable Communities

Over the past two decades, there has been increasing development and deployment of both AI and geo-information technologies in the domain of humanitarian intervention. One example of the use of these technologies is producing maps of urban morphology and monitoring the living conditions of communities in DUAs. As Mahabir et al. [8] contend, the growing prevalence of DUAs has “regional and global implications, impacting areas such as education, health and child mortality, and political and social exclusion”. Geo-information technologies are relied on to “accurately and routinely monitor and map the dynamic development of these urban settlements at high spatial and temporal resolutions” [9]. The mapping of DUAs is vital for better understanding their sociodemographic characteristics [10,11] as well as the capacity-building potential in the face of human and environmental crises [12]. These maps are instrumental in initiatives such as its4land (which works on creating geo-information tools to improve land tenure security in East Africa) and multinational organisations such as Slum Dwellers International.
The data used to produce maps and statistics on the growth of DUAs are gathered from censuses and national household surveys at the municipal level as well as from non-governmental organisations, such as the WHO and UNICEF Joint Monitoring Programme on Water Supply and Sanitation (JMP) [13]. These mapping initiatives have been supplemented by deploying AI, specifically machine learning (ML) based methods such as random forests, support vector machines and deep learning [14,15,16,17]. These ML methods transform spatial data “into different layers of abstraction” that can become useful for making predictions as well as data representations [18]. For instance, using convolutional neural networks (CNNs) enables researchers to analyse satellite imagery at the level of pixels based on semantic classes (e.g., the classes of ‘vegetation’, ‘urban’ or ‘slum’).
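To make the pixel-level classification step concrete, the following is a minimal sketch of a small fully convolutional network applied to a multispectral image patch. It assumes PyTorch; the class names, band count and layer sizes are illustrative stand-ins rather than the architectures used in the studies cited above.

```python
# Minimal sketch of pixel-wise semantic classification of satellite imagery
# with a small fully convolutional network. Class names, tensor shapes and
# hyperparameters are illustrative, not taken from the cited studies.
import torch
import torch.nn as nn

CLASSES = ["vegetation", "urban", "slum"]  # hypothetical semantic classes

class TinySegmenter(nn.Module):
    """A toy fully convolutional network: each pixel gets a class score."""
    def __init__(self, in_bands: int = 4, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))  # (batch, n_classes, H, W)

# One multispectral image patch (batch=1, 4 bands, 64x64 pixels).
patch = torch.rand(1, 4, 64, 64)
logits = TinySegmenter()(patch)
pred = logits.argmax(dim=1)   # per-pixel class index
print(pred.shape)             # torch.Size([1, 64, 64])
```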
But the accuracy of these classifications is affected by the level of bias and uncertainty within the data during the training of algorithms and models [19]. Elmes et al. [20] extensively elaborate on this, revealing the sources of training data errors in ML-based map design and construction. Among the sources of training data errors, the authors list the following: map reference data (observations used to assess map accuracy), design-related errors (e.g., temporal or spatial mismatch between outdated image sources and field-collected data), and collection-related errors (e.g., levels of agreement or disagreement between image interpreters) [20]. These sources of error can negatively bias the mapping result, so it is important to consider how the training data for mapping DUAs are collected as well as what validation processes are involved to ensure that biases arising from training data errors are minimised. For instance, errors in training data can lead to overfitting and class imbalance (e.g., predicting the ‘slum’ class in areas where it should not appear), while collection-related errors can negatively impact map quality [20].
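One common mitigation for such class imbalance is to weight the training loss inversely to class frequency, so that a rare ‘slum’ class is not drowned out by dominant classes. The sketch below assumes PyTorch and uses illustrative pixel counts rather than figures from the cited work.

```python
# Sketch of class-weighted loss as one mitigation for class imbalance in
# training data. The pixel counts below are hypothetical.
import torch
import torch.nn as nn

pixel_counts = torch.tensor([5_000_000., 3_000_000., 200_000.])  # vegetation, urban, slum
weights = pixel_counts.sum() / (len(pixel_counts) * pixel_counts)  # inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=weights)

# logits: (batch, n_classes, H, W); labels: (batch, H, W) with class indices
logits = torch.randn(1, 3, 64, 64)
labels = torch.randint(0, 3, (1, 64, 64))
loss = criterion(logits, labels)  # rare-class errors now count for more
print(float(loss))
```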
Beyond being a technical challenge, the classification of these areas also has societal effects. Communities living in DUAs are often statistically underrepresented due to outdated census data as they grow faster than municipalities can keep track of them. But they are also democratically underrepresented because in many cases, they are not considered as full members of the cities they are found in [11], partly as a product of the negative connotation that surrounds those living within these areas. For instance, areas being inaccurately classified and represented as DUAs can contribute to the greater stigmatisation of the communities within them [21]. Geo-information that can be used to represent these communities can therefore be a positive means of improving their visibility in contexts where municipal authorities may fail to acknowledge them. But this increased visibility can also be potentially negative if in the hands of malicious actors who may use the spatial data and representations to further discriminate against these communities. As these technologies have impacts on the lives of the communities being mapped, there is a growing need to give critical attention to more than the technical concerns of mapping these areas.
Some work has already been conducted on presenting the unique ethical challenges that geo-information technologies present in humanitarian mapping. Cinnamon [22] provides a list of ethical principles that are important in the design and use of geo-information technologies including autonomy (the right of individuals or groups to demonstrate agency), confidentiality (e.g., protection of personal privacy), beneficence (the requirement to do good), nonmaleficence (the requirement to do no harm) and justice (treating all persons and groups fairly and equitably). Micheli et al. [23] present the importance of including AI ethics and data governance in the use of digital earth applications. Gevaert [24] provides a review of explainable machine learning and AI methods, assessing the societal and regulatory opportunities and challenges arising from applying these methods in earth observation and remote sensing applications. Kochupillai et al. [25] present scientists engaged in AI for earth observation (AI4EO) research with a toolkit to better identify ethical challenges in their work. Similarly, Gevaert et al. [26] contend that researchers working in the FAccT (fairness, accountability and transparency) community need to be more ethically mindful, specifically when using AI in the domain of disaster risk management. The authors go on to mention that there “is an underrepresentation of ethical research by LMICs [Low-Middle-Income Countries] in the development of international frameworks and guidelines for ethical AI” [26].
This review builds on these works by analysing how the chosen guidelines frame ethical values and principles in the development and deployment of AI and geo-information technologies. The aim is to assess the extent to which these two sets of guidelines overlap or differ, as well as to show how the guidelines pay particular attention to the data that is gathered, stored and represented, along with the technologies and actors involved (i.e., researchers, organisations and the public).

3. Methodology

The first step of the review was selecting a list of ethical values and principles to focus on. An initial literature search was performed on ScienceDirect with the terms “critical cartography ethics” (698 results), “data ethics” (814,191 results), and “ethics of AI” (27,130 results). These three fields of study were chosen due to the insights they shed, respectively, on how geographic information, data (from individuals and groups), and AI are embedded in social and political relations and consequently carry ethical implications. Refining the search to studies based on social sciences and open access publications that are either review articles or research articles then returned 59 results, 11,380 results and 644 results, respectively. The choice to focus on social sciences was to find research on the societal impact of these technologies rather than purely technical publications. Following this, ethical values and principles referring to the use of geo-information and/or AI were selected from the first 25 results for each search term. Table 1 below lists the 19 ethical values and principles (and a generalised description of what they refer to) along with the publications they were pooled from after the literature search.
Some values, such as explainability/interpretability and harm/non-maleficence, were referred to interchangeably, which is why I have included them together. Some of the publications refer to these ethical concerns as principles, while others refer to them as values. For example, Raab [37] mentions that “it is not easy to apply principles that mandate, for example, consent, transparency, beneficence, respect for human dignity or autonomy, and other important ‘headline’ values”. This review will therefore use both terms, referring to these ethical concerns as either values or principles depending on how they are framed in the rest of this review.
Following this, a secondary literature search was performed to find appropriate guidelines that present ethical values and principles for AI and geo-information technologies. An initial search of the terms “AI guidelines”, “AI ethics guidelines”, “ethics of AI guidelines”, “geographic information guidelines”, “ethics of geographic information” and “ethical guidelines of geographic information” was performed in Google Scholar, Scopus and Web of Science. However, these searches returned only review articles or articles that highlighted ethical concerns raised by these technologies, rather than guidelines themselves. A Google search using the same terms produced far more relevant results; Table 2 lists the 16 AI guidelines and Table 3 lists the 8 geo-information guidelines selected.
This review includes guidelines that have been published within the past five years in order to get the most up-to-date publications. The guidelines chosen are from private as well as governmental organisations that represent national as well as international parties (e.g., Google, the European Commission and UNICEF). This was important in order for the guidelines to be representative of the views and interests of a wide range of actors, along with showing the ambition of these guidelines to be used by an international audience.
However, there are a number of limitations in the selection. Firstly, while the guidelines chosen cover the USA, the UK, Africa, the EU and some international organisations/companies, there are not many guidelines representative of Latin America, the Middle East or Asia (besides the guidelines from Japan). Secondly, all the guidelines chosen are in English, and so any potentially insightful guidelines that may exist in other languages are not considered here. Thirdly, there may appear to be a lack of depth in the choice of guidelines (e.g., this review does not include guidelines for AI used in robotics or autonomous vehicles/weapons). Following the work of Hagendorff [71], the selection was based not on depth “but the discernible intention of a comprehensive mapping and categorization of normative claims” in the fields of AI and geo-information. Specifically, this review aims to assess how the two sets of guidelines frame the ethical values and principles that they present. For instance, how is the inclusion of marginalised communities advocated for within AI development? Or how do the geo-information guidelines consider the adequacy of informed consent in the case of requiring spatial data during emergencies?
The review of the chosen guidelines was performed in ATLAS.ti, which was used to run word searches through each document for the 19 ethical terms; the results of these searches were used to create codes that could then be tabulated.
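The counting step itself is straightforward to reproduce outside ATLAS.ti. The sketch below is an illustrative equivalent, assuming plain-text exports of the guideline documents in a hypothetical guidelines/ folder and using only a fragment of the 19 terms.

```python
# Minimal sketch of the keyword-occurrence count behind Tables 4 and 5,
# assuming plain-text versions of the guideline documents; the term list
# here is only a fragment of the 19 values/principles used in the review.
import re
from pathlib import Path

TERMS = ["accountability", "bias", "consent", "privacy", "transparency"]

def count_terms(doc_text: str) -> dict[str, int]:
    """Count whole-word, case-insensitive occurrences of each term."""
    return {t: len(re.findall(rf"\b{re.escape(t)}\b", doc_text, flags=re.IGNORECASE))
            for t in TERMS}

results = {}
for path in Path("guidelines").glob("*.txt"):   # hypothetical folder of guideline texts
    results[path.stem] = count_terms(path.read_text(encoding="utf-8"))

# Tabulate: in how many guidelines each term appears at least once.
for term in TERMS:
    n_guidelines = sum(1 for counts in results.values() if counts[term] > 0)
    print(f"{term}: mentioned in {n_guidelines} guidelines")
```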

4. Results

4.1. Values and Principles in AI Guidelines

Figure 1 below illustrates the distribution of the search terms across the AI guidelines, showing clearly overlapping concerns about the ethical challenges presented by AI technologies. Table 4 below shows the number of occurrences from the word search across the guidelines. Accountability, fairness, privacy and transparency were mentioned in all 16 guidelines, showing the importance given to these values/principles. Trust (15), rights (15), bias (14), discrimination (15), access (14), risk (14), harm (14), responsibility (14), autonomy (13) and inclusion (13) also appeared in many of the guidelines. Beneficence (3) was mentioned the least. While going through the guidelines, a number of the terms appeared in conjunction, such as statements about how AI developers need to be accountable for how algorithms perform, which gives them greater responsibility for data collection and for communicating decisions derived from the data. Similarly, transparency and explainability are both invoked to improve clarity and intelligibility, and the guidelines correlate the levels of bias in both AI and geo-information technologies with discriminatory decisions that affect marginalised communities. For this reason, a number of the terms were paired or joined in triads, as they are discussed in the same sense or follow from each other in how the guidelines express them.

4.1.1. Accountability and Responsibility

Two core values that the guidelines focus on are the accountability and responsibility of those involved in designing and deploying AI technologies. As the Association for Computing Machinery’s (ACM) [49] code of ethics states, computing professionals must be accountable to all stakeholders and also understand “that the public good is the paramount consideration”. Similarly, Leslie [60] states that all AI systems “must be designed to facilitate end-to-end answerability and auditability”, which “requires both responsible humans-in-the-loop across the entire design and implementation chain and activity monitoring protocols”. This answerability is framed in terms of a principle of accountability, which “demands that the onus of justifying algorithmically supported decisions be placed on the shoulders of the human creators and users” of AI systems. Likewise, IBM [56] states that it is the responsibility of each team working with AI “to keep users empowered with control over their interactions and data”. But the demand for such answerability, when it comes from auditing mechanisms, can present a potential danger. As identified by Kak and Myers [48], if the “algorithmic accountability space” becomes “mainstreamed into a procedural audit requirement”, this can lead to regulation being sidelined as the auditing is performed within companies themselves. For instance, Microsoft [58] states that accountability is broken down into definitive outcomes “such as ensuring AI systems are subject to impact assessments, data governance, and human oversight”. The question here is whether the governance of the data is performed within the company or by legislative bodies. Also, are the “impact assessments” based on local or global impacts of AI systems? These two questions are important in understanding what the public good of these systems is, and which public specifically is being affected.
For instance, a number of the guidelines highlight the need for AI norms and standards to reflect regional interests as well as global interests. Looking at the regional level, Gaffley et al. [50] make clear that ethical principles and guidelines should be “formulated based on African values and standards [that] may generate responsible AI perspectives … to address the specific AI-related challenges experienced on the [African] Continent”. This is further emphasised by the AUDA-NEPAD [47] stating that the African Union “should encourage African governments to pursue a deliberate and proactive approach, to implement supportive regulation, policies and initiatives”. Likewise, UNESCO [62] states that member states should “develop, review and adapt, as appropriate, regulatory frameworks to achieve accountability and responsibility for the content and outcomes of AI systems at the different phases of their life cycle”. This call for a proactive approach reflects the need for the greater involvement of nation states in the governance, and therefore the assignment, of accountability in AI development and deployment. As the Japanese Cabinet Secretariat [57] states, “international coordination for governance is important, and in addition to governance in each individual country, an international system of cooperation for dealing with cross-border problems should be established”. Similarly, the European Commission [51] asserts that the EU will promote “ambitious global rules and standards, including strengthening cooperation with like-minded countries and the broader multi-stakeholder community … to support a human-centric and rules-based approach to AI”. Such international collaboration will help ensure that the auditing mechanisms of AI systems do not remain solely under the control of companies, strengthening the efficacy of regulatory pressure.

4.1.2. Privacy and Risks

Just like accountability and responsibility, concerns over privacy and risks were mentioned in most of the guidelines. The European Commission [51] states that the “EU’s approach should be human-centric, risk-based, proportionate and dynamic”. The OECD [59] states that “AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems”. Likewise, Google [54] states that they “use a risk-based approach that focuses reviews in the areas that are most needed at any given time” and through this process, “assemble a diverse set of stakeholders to ensure [they] consider a variety of perspectives and effectively manage risks”. But Kak and Myers [48] argue that there is a “burgeoning audit economy with companies ordering audits-as-a-service despite no clarity on the standards and methodologies for algorithmic auditing, nor consensus on the definitions of risk and harm”, which can have the consequence that “audits will devolve into a superficial ‘checkbox’ exercise”.
An example of an auditing exercise that is mentioned by the UK’s House of Lords Select Committee [55] is data protection impact assessments (DPIA), following the General Data Protection Regulation (GDPR). DPIAs must be carried out “when using new technologies and the processing [of personal data] is likely to result in a high risk to the rights and freedoms of individuals” [55]. The DPIA is meant to include the following: a description of the processing, purposes and interests of data controllers; assessment of the necessity and proportionality of the processing; assessing the risks to individuals; and measures that mitigate the risk of data collection and processing [60].
One of the most pressing risks referenced in the guidelines is the infringement on privacy by AI technologies. The ACM [49] asserts that the “responsibility of respecting privacy applies to computing professionals in a particularly profound way”, as AI technology “enables the collection, monitoring and exchange of personal information quickly, inexpensively, and often without the knowledge of the people affected”. This invasiveness of AI is similarly presented by the Japanese Cabinet Secretariat [57], stating that “it is still possible to gauge each person’s political position, economic situation, personal hobbies, personal preferences, and so forth with great accuracy”. The European Commission [51] states that part of its human-centric approach to AI relies on the protection of EU values and fundamental rights “such as non-discrimination, privacy and data protection”. Similarly, the House of Lords [55] highlights that “legal and technical mechanisms for strengthening personal control over data, and preserving privacy, will become increasingly important as AI becomes more widespread through society”. IBM [56] echoes these statements, noting that organisations (including themselves) “should be fully compliant with the applicable portions of EU’s GDPR and any comparable regulations in other countries, to make sure users understand that AI is working in their best interests” and use AI to amplify privacy “rather than undermine it”.

4.1.3. Explainability and Transparency

Many of the guidelines also show a high concern over the levels of explainability and transparency in AI systems. Microsoft [58] points out that “the development of intelligible AI systems” is fundamental to making AI trustworthy in society, with intelligibility tied to “technical transparency and explainability”. In 2022, Google [54] launched a course introducing the concept of explainability to “user experience designers, product managers and ML developers”. IBM [56] makes the point that “we don’t blindly trust those who can’t explain their reasoning”, and so as an “AI system increases in capacities and achieves a greater range of impact, its decision-making process should be explainable in terms people can understand”. This is echoed by Leslie [60], who states that AI projects should be justifiable by ensuring there is transparency in how models are “designed and implemented, and the transparency and interpretability of its decisions and behaviours”. The WHO [61] states that ensuring intelligibility of AI can be achieved by improving transparency and explainability, where transparency requires that there is sufficient information “published or documented before the design and deployment of AI technology”. And the OECD [59] asserts that the “transparency and explainability of AI systems are often preconditions to ensure the respect, protection and promotion of human rights, fundamental freedoms and ethical principles”.
Elaborating further on explainability, Leslie [60] suggests four approaches to interpretability; because this term is used interchangeably with explainability in many of the guidelines, I have inserted these approaches here. These approaches are as follows:
(i)
internal explanations aimed at shedding “descriptive and inferential light” on the operations of a system;
(ii)
external or post-hoc explanations that “seek out evidence for the reasoning behind a given opaque model’s prediction … by utilising maximally interpretable techniques like linear regression, decision trees, rule lists or case-based reasoning”;
(iii)
supplementary explanatory infrastructure such as using two kinds of neural networks (e.g., first, a convolutional network and next, a recurrent neural net) to extract features from an image recognition system and translate them into a natural language representation;
(iv)
counterfactual explanations, which “offer succinct computational reckonings of how specific factors that influenced an algorithmic decision can be changed so that better alternatives can be realised by the subject of that decision” [60].
Echoing the need for counterfactual explanations in a similar way, the UNESCO guideline [62] states that those “who might request or require an explanation should be well informed, and the educational information must be tailored to each population, including, for example, marginalised populations”. These statements show that explainability depends on the efforts of those responsible for algorithmic training and decision-making to improve transparency, so that those affected by AI systems are able to understand the explanations of algorithmic decisions.
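As a concrete illustration of the second (post-hoc) approach above, a surrogate explanation fits an interpretable model to the predictions of an opaque one. The sketch below assumes scikit-learn and uses synthetic data; it is not drawn from any of the cited guidelines.

```python
# Sketch of a post-hoc surrogate explanation: a shallow decision tree is
# trained to mimic an opaque model's predictions, so its rules approximate
# how the opaque model behaves. Data and models are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

opaque_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
opaque_preds = opaque_model.predict(X)

# The surrogate is trained on the opaque model's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, opaque_preds)
print("fidelity to opaque model:", surrogate.score(X, opaque_preds))
print(export_text(surrogate))  # human-readable rules standing in for the opaque model
```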

4.1.4. Fairness, Inclusion and Harm

Improving the impacts of AI is also linked to improving fairness in the design and deployment of AI. Fairness is tied to notions of improving equity, reducing potential harm and ensuring greater inclusivity. This is echoed by the Japanese Cabinet Secretariat [57] stating that it is “necessary to ensure fairness and transparency in decision-making, appropriate accountability for the results, and trust in the technology” to make sure no one suffers from undue discrimination or unfair treatment that negatively impacts their dignity. Likewise, the House of Lords [55] states that AI “should operate on principles of intelligibility and fairness”. Kak and Myers [48] probe with questions such as the following: “How might the implementation of your AI system adversely affect each stakeholder’s fair and equal treatment under the law? Are there aspects of the project that expose vulnerable communities to possible discriminatory harm?” The ACM [49] similarly points out that avoiding “harm begins with careful consideration of potential impacts on all those affected by decisions” and further states that “where misuse or harm are predictable or unavoidable, the best option may be to not implement the system”.
As these statements show, fairness further depends on how inclusive or exclusive the development, as well as the use, of these AI systems is. For instance, according to the AUDA-NEPAD [47], there is a need to respect diversity within AI development, especially through the proposal of a diversity-aware principle. This principle will facilitate “teaming up of African nations so that the opportunities of AI for Africa’s socio-cultural and economic development can be maximally exploited, while the challenges can be dealt with collectively” [47]. In a similar vein, the European Commission [51] encourages its member states to “take measures and exchange best practices to increase inclusion and diversity” by facilitating “balanced” AI teams and attracting academic talent in developing AI technologies. Microsoft [58] notes that teams working on AI should “consider the complex cultural, political, and societal factors of AI as they show up in different deployment contexts”, reflecting how “diversity and inclusion are critical” to Microsoft’s responsible AI commitment. Google [54] also asserts that “the diversity of our testers is critical to ensuring models are assessed across a wide spectrum of use-cases, scenarios and values”. These references to diversity, inclusion and socio-cultural-political factors reflect how location, context and circumstance influence the impacts of AI technologies and how to ensure these impacts are well understood by all actors that can be affected.

4.1.5. Justice and Beneficence

Many of the guidelines also stress the need for AI to be developed and used in a manner that upholds justice for all. The OECD [59] states that “AI actors should respect the rule of law, human rights, and democratic values, throughout the AI system lifecycle” and also mentions the need for AI actors to keep in mind the impacts of AI on “dignity and autonomy, privacy and data protection, non-discrimination and equality” [59]. In similar terms, the ACM [49] proposes the principle “Be fair and take action not to discriminate”, asserting that this principle should be governed by “values of equality, tolerance, respect for others, and justice”. The need for such principles and values stems from the awareness of the risks that AI poses; for example, the EU’s AI HLEG [53] pointed towards dangers such as the undermining of standards of distributive justice. This is echoed by UNESCO [62], who assert there is a need to recognise that “AI technologies can deepen existing divides and inequalities in the world, within and between countries, and that justice, trust and fairness must be upheld”. These statements show that the potential dangers of AI are not localised to individual regions, as these guidelines make clear the global impact that AI has on various strata of the world (e.g., the law, the economy and the political sphere). As the AUDA-NEPAD [47] points out, structural bias in AI applications can embed stereotypes that affect the determination of financial risk (to individuals or groups) as well as tools used in predictive justice. Despite this global reach, standards of justice within regions also need to be respected by those developing these technologies. As the CEPEJ [52] notes, while machine learning can improve judicial systems, this deployment should “be carried out with responsibility, and due regard for the fundamental rights of individuals” as stipulated in legislation such as the European Convention on Human Rights and the Convention on the Protection of Personal Data.
These calls for focusing on justice are also, in a smaller number of guidelines, linked to the value of beneficence, or the need for AI to be a force for good that improves standards of wellbeing. For instance, the UK’s House of Lords [55] mentioned the need to focus on a “wide-ranging set of ethical considerations, including the preservation of human autonomy, beneficence, non-maleficence, and justice”. It is therefore imperative for AI developers to be concerned not only with the potential technical or economic incentives to develop AI tools but also with ensuring that these tools do not diminish the wellbeing of those who may use or be represented by them. A similar focus is stated by the WHO [61], calling for AI in healthcare to promote the well-being of others and minimise the risks as much as possible. Gaffley et al. [50] note that for Africa to better embed AI, there needs to be “Africa-centric normative frameworks for human-centred AI design that consider respect, beneficence, non-maleficence, harmony, explicability and ethnic neutrality as alternatives to dominant themes of fairness, transparency and accountability”.

4.1.6. Rights and Consent

In addition, the potential negative impacts of AI systems were tied to whether AI infringed upon the rights of individuals and groups, and their capacity for consenting or removing consent from AI technologies. As UNESCO [62] states, data protection frameworks and governance mechanisms need to be created by “a multi-stakeholder approach at the national or international level, protected by judicial systems, and ensured throughout the life cycle of AI systems”, with these frameworks focusing on the rights of data subjects and their informed consent. Similarly, Leslie [60] highlights how “AI systems that target, profile, or nudge data subjects without their knowledge or consent could in some circumstances be interpreted as infringing” on the personal lives of individuals, which can “consequently harm a person’s more basic right to pursue their goals and life plans free from unchosen influence”. The correlation here between consent, rights and harm is also made clear by the European Commission’s AI HLEG [53], stating that a “proportionate use of control techniques in AI is needed to uphold the autonomy of European citizens” through clearly defining “if, when and how AI can be used for automated identification of individuals”. There should be a clear differentiation between identification, tracking, targeted surveillance and mass surveillance [53]. Kak and Myers [48] also consider the use of data minimisation principles as taking “the burden away from individuals having to make decisions or proactively exercise their data rights, and onto firms to demonstrate their compliance with these principles in the interests of users”.
In many of the guidelines, the issue of consent is therefore framed in terms of data rights and more broadly, human rights. For instance, the OECD [59] states that the normative framework for AI and its social implications “finds its basis in international and national legal frameworks, human rights and fundamental freedoms”, which involve access to data. Likewise, the WHO [61] asserts that the rights of patients such as “dignity, privacy, confidentiality and informed consent, might be dramatically redefined or undermined as digital technologies take hold and expand”. IBM [56] promotes the need to recognize “and adhere to applicable national and international rights laws when designing for an AI system’s acceptable user data access permissions”. Leslie [60] points out that in a situation where injuries or negative consequences occur, there needs to be necessary accountability processes, otherwise the autonomy and rights of individuals will be violated.

4.1.7. Bias and Discrimination

Many of the guidelines also include an assessment of the levels of bias and discrimination within the data and algorithms used in AI applications and the negative implications that can arise. The UK’s House of Lords [55] points out that alongside “questions of data bias, researchers and developers need to consider biases embedded in the algorithms themselves”. IBM [56] goes further, listing 18 different types of unconscious biases including the following:
  • Availability bias (overestimating events that have more recency);
  • Congruence bias (testing hypotheses exclusively through direct testing alone);
  • Bandwagon bias (believing things based on groupthink);
  • Ingroup/outgroup bias (favouring one’s ingroup over outgroup members);
  • Confirmation bias (interpreting or focusing on information that confirms one’s preconceptions);
  • Bias blind spot (seeing oneself as less biased than others).
Understanding how these biases may play a role in the design and decisions of AI is necessary to ensure that individuals and groups do not become misrepresented or mistreated. As the WHO [61] states, for AI to be used effectively, “existing biases in healthcare services and systems based on race, ethnicity, age and gender that are encoded in data used to train algorithms, must be overcome”. And this is echoed by the European Commission for the Efficiency of Justice (CEPEJ) [52], asserting that reasoning biases within AI carry inherent risks that can “transcend the act of judging and affect essential functional elements of the rule of law and judicial systems”.
Alongside bias, the capacity of AI systems to discriminate against certain groups was also mentioned by many of the guidelines. As the ACM [49] states, the use of “information and technology may cause new, or enhance existing, inequities”. Kak and Myers [48] likewise point out that the “growing consensus around recognized harms from AI systems (particularly inaccuracies, bias, and discrimination) has led to a flurry of policy movement”. Gaffley et al. [50] assert that important “human rights risk factors” include “human rights violations resulting from the use of inadequate training data, particularly in relation to the stigmatisation and discrimination against individuals and communities”. As processing methods can reveal existing discrimination through aggregating data “related to individuals or groups of individuals, public and private stakeholders must ensure that the [processing] methods do not reproduce or aggravate such discrimination” [52]. This is similarly pointed out by the WHO [61], highlighting that bias in traditional databases and machine learning datasets can “lead to allocation of resources that discriminate against, for example, people of colour” and that “decisions related to gender, ethnicity or socioeconomic status might similarly be biassed”.

4.1.8. Access

Additionally, the guidelines present concerns over the specific users that can make use of AI technologies, along with the need for greater access to data to improve the performance of these technologies. For instance, organisations “that have access to the widest and deepest swath of behavioural data insights through surveillance will have an edge in the creation of consumer AI products” [48]. Meanwhile, IBM [56] states that users should “always maintain control over what data is being used and in what context”. Moreover, these users “can deny access to personal data that they may find compromising or unfit for an AI system to know or use” [48]. Leslie [60] highlights that as AI technologies will “organically improve with the enlargement of access to data and the growth of computing power”, it will not be “long before AI technologies become gatekeepers for the advancement of vital public interests and sustainable human development”. These guidelines show a clear awareness that as AI systems are developed, their potential opportunities (e.g., in business analytics) and challenges (e.g., more invasive surveillance or endangering public interests) both need greater scrutiny. But as mentioned above, the issue of access also contributes to determining which individuals and groups may end up being excluded from enjoying the benefits of AI and who may end up facing greater discrimination under the increased scrutiny of AI-based surveillance.

4.1.9. Autonomy and Dignity

Moreover, the impact that AI can have on the autonomy and dignity of individuals and groups was also mentioned in the guidelines (and was linked to issues of accountability, justice, discrimination, rights, inclusion and explainability). As IBM [56] notes in relation to healthcare, “If empowered wisely, AI has the potential to empower patients and communities to assume control of their own health care”, but if “we do not take appropriate measures, AI could lead to situations where decisions that should be made by providers and patients are transferred to machines”. Similarly, UNESCO [62] points out that as AI systems can impact healthcare, education, politics and the environment, these systems can “challenge humans’ special sense of experience and agency, raising additional concerns about, inter alia, human-self-understanding, social, cultural and environmental interaction, autonomy, agency, worth and dignity”. The connection between autonomy and dignity was also present in other guidelines. The OECD [59] asserts that “AI actors” should respect rights and values including “dignity and autonomy”. Leslie [60] states that respecting the dignity of individuals is part of ensuring “their abilities to make free and informed decisions” along with safeguarding “their autonomy, their power to express themselves, and their right to be heard”. Gaffley et al. [50] also highlight the “negative effect [of AI] on deeply held societal values, human dignity and wellbeing” as part of understanding how to better design, develop and implement AI systems in Africa.

4.1.10. Trust

At the root of many of the guidelines is the concern for improving the levels of trust that individual users as well as the general public have in the face of the increased use of AI. The apparent lack of trust in AI is partially due to fear of the many risks already mentioned. For instance, the AUDA-NEPAD [47] points out that “there are legitimate concerns that the incorporation of AI into the maintenance of public safety and security could become too pervasive and invasive”. AI technologies can capture sensitive data from individuals as well as groups through tools such as facial recognition. Such data can be useful in applications such as predictive policing and help in lowering rates of crime, but these surveillance tools can also infringe upon the rights of the innocent. This is observed by the WHO [61] when pointing out that several AI applications raise many ethical concerns including “infringement on the rights of privacy and autonomy, health and social inequity and the conditions necessary for trust and legitimate uses of data-intensive applications”. Greater oversight of these tools is therefore called for. As UNESCO [62] asserts, the “trustworthiness and integrity of the life cycle of AI systems is essential to ensure that AI technologies will work for the good of humanity, individuals, societies and the environment and ecosystems”. Similarly, the Cabinet Secretariat of Japan [57] states it is necessary “to ensure fairness and transparency in decision-making, appropriate accountability for the results, and trust in the technology” so that users of AI do not face undue discrimination. IBM [56] asserts that the company has ambitions to build trustworthy AI around three components, stating that AI should be lawful, ethical and robust (both technically and societally). They also put further emphasis on several key requirements: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, environmental and societal well-being, and accountability [50].

4.2. Values and Principles in Geographic Information Guidelines

Moving on to the ethical guidelines for geo-information technologies, Figure 2 and Table 5 below show the distribution of the terms across the guidelines and the number of occurrences for each term, respectively. Access, consent and privacy were mentioned in all eight guidelines. Bias (7), discrimination (7), harm (7), risks (7), rights (7), inclusion (7), accountability (6), responsibility (6), transparency (6) and trust (5) were mentioned in most of the guidelines. Explainability, dignity, fairness and autonomy were mentioned the least. Beneficence was the only value not mentioned.

4.2.1. Access

Throughout all the guidelines, the importance of increased access to geo-information technologies was mentioned, along with both the positive and negative consequences of this. Benchmark and EthicalGEO [64] note that “the number, variety and accessibility of digital mapping has created risks and opportunities that are new in kind and scale”, and so users “should have help to understand potential harms from their activity”. Similarly, Goodchild et al. [65] state that we have now reached a point “where a largely unregulated mix of government, nonprofit, and corporate agencies have access” to geo-information on a large proportion of the world, which may be beneficial, but “too often they are or can become intrusive or serve oppressive purposes”. Echoing this sentiment, the Ordnance Survey [67] states that controls and frameworks may need to be drawn up to supervise data collection and use. The Geospatial Commission [66] asserts that creating such frameworks and controls would fall on policymakers to “understand and mitigate any current barriers to effective data access, including legislative barriers, and find mechanisms to encourage the use of location data to support public benefits”.
At the same time, such barriers to effective data access may already be falling due to the availability of “open source GIS software [that] can be used to reduce overhead costs and ensure all relevant agencies can effectively engage with coordination efforts and/or reap the benefits of GIS systems” [69]. The availability of this software is especially necessary in contexts that may rely on funding from external agencies. As the AAAS [63] asserts, local partners should have “a defined mechanism for raising concerns or making complaints about the data collection effort”, and this mechanism needs to “remain accessible even after funding for the response effort has ended”. The importance of this mechanism is also brought up by the World Wide Web Consortium (W3C) [70] in terms of a right to access. This will give individuals “the right to obtain a copy of the personal information held about them” along with supplementary information including the following: the purpose of processing, the categories of personal data being processed, the recipients of the processing, the right to object together with instructions on how to object, and whether there is automated decision-taking [70].

4.2.2. Rights and Consent

The guidelines also stress the need for the public’s rights (as briefly noted by W3C’s right to access) to be better protected and for methods of acquiring consent to be made more integral to the use of geo-information technologies. As Benchmark and EthicalGEO [64] clearly state, users of location data have the responsibility “to understand the potential effects of their uses of data, including knowing who (individuals and groups) and what could be affected, and how”. Goodchild et al. [65] extend the target of where responsibility lies, stating that the “regulatory parameters for geospatial data collection and use must be applicable to all aspects of supply-chain management, including companies that contract to provide products and services to multinational corporations”. Likewise, the Geospatial Commission [66] asserts that those “working with location data (across all sectors of the UK economy and the public sector) share responsibility for the public’s trust in the location data ecosystem”. This demand for greater responsibility in the use of geo-information stems from the need to ensure the upholding of the rights of individuals and communities. As Berman et al. [68] note, geo-information organisations should consider whether “there are clear conditions evidencing respect for individuals’ rights relating to their data” such as through “consent arrangements for non-operational use of data, notification of potential sharing of data” or “right to removal of personal data from data sets”. The AAAS [63] similarly notes that local communities and organisations have the “rights to rectify false, inaccurate, or incomplete data collected about them, to remove themselves and their associated data from the data collection systems at any time, and to have input regarding what will happen to that data after the investigation”.
Another significant concern related to the protection of rights is consent. For instance, there is a need for particular attention to the ethical issues “associated with obtaining consent in the midst of an emergency or ongoing conflict, recognizing that the nature and form of consent” depends on “the role and relationship of the person or group of individuals with regard to the data” [63]. Beyond the case of emergencies, Goodchild et al. [65] stress that any collection, analysis or sale of geospatial data requires informed consent. Such informed consent involves putting “individuals on notice concerning the purpose, scope and use of the geospatial data to be collected, along with providing individuals with the ex ante choice of consenting” or declining to consent after the fact [65]. The Geospatial Commission [66] also suggests that individuals and communities “could be given a platform to review the location data gathered over a period of time to enable them to consent to whether all of their data or just some” can be used, restricted and anonymised, or deleted.
But it may not always be possible for consent to be given by individuals and communities. One reason for this is the aggregation of information, such as data captured from phone calls, including call location and frequency. This example is given by Berman et al. [69], who highlight that the aggregation of such information can mean that individuals and communities may be unable to provide any consent, although providing “publicly available information regarding its use would be a means to respect those whose data is used” [68]. Another obstacle to providing consent arises in the case of emergencies, as mentioned by the AAAS [63] above, where there may be far less contact between organisations and communities. Thus, it becomes problematic to ensure that consent for the reuse of information, or requests to delete the information, can actually be communicated.

4.2.3. Privacy and Risks

In the same vein as concerns over consent and rights is the issue of ensuring privacy is protected and risks mitigated. As Berman et al. [69] note, comprehensive “privacy, data protection and storage standards may be largely non-existent in many countries where geospatial data is being collected” and this is especially concerning “in development and humanitarian contexts where data is frequently shared between agencies, donors and NGOs”. This lack of comprehensive guides or standards is also worsened by the potential lack of engagement between those collecting geo-information and those represented within it. As remarked by the Ordnance Survey [67], in order to “address risks arising from lack of power of local populations and low awareness or understanding of the work undertaken including potential loss of privacy, trust and harm”, engagement with local populations is a crucial step. This engagement is heavily tied, as mentioned above, to the level of inclusion (or exclusion) of a broad and diverse range of actors in the collection and distribution of geo-information, along with focus being given to communicating the benefit (economic or otherwise) of using these technologies over the risks. As Goodchild et al. [65] comment, the development, marketing, and implementation of these technologies “typically focus on productivity, convenience, entertainment and control, without concern for individual or collective privacy, or power imbalances, and basic human rights”. The Geospatial Commission [66] further points out that currently, “accountability processes may be insufficient to maximise individuals’ confidence that their location data is used responsibly and held securely” due to a lack of clear communication from organisations, which hinders the trust the public has in these organisations.
One way in which this confidence can be improved is through processes of data minimisation or a principle of proportionality. Goodchild et al. [65] state that the “anticipated value [of geo-information] would need to be weighed against the possible harm that could result from increasing the amount or precision of data”. Thus, they suggest a need for organisations to consider the trade-off between the accuracy or precision of the collection and representation of data and how such precision can negatively affect communities. Principle 7 of the Locus Charter [64], on minimising data, advocates “using only the necessary personal data that is adequate, relevant and limited to the objective, including abstracting location data to the least invasive scale feasible for the application”. The Ordnance Survey [67] also suggests capturing and producing “strictly only what is required by the project and to anonymise any possibly personal information that remains” by having “upfront and ongoing consideration of how necessary data are before deciding to capture them”.
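As a concrete example of abstracting location data to a coarser, less invasive scale, the sketch below snaps a precise coordinate to the centre of a grid cell before it is stored. The roughly 1 km cell size and the example coordinate are illustrative choices, not recommendations from the guidelines.

```python
# Minimal sketch of one data-minimisation tactic: coarsening location data
# before storage. The 0.01-degree cell (~1 km) is an illustrative choice.
import math

def coarsen(lat: float, lon: float, cell_deg: float = 0.01) -> tuple[float, float]:
    """Replace a coordinate with the centre of the grid cell it falls in."""
    centre = lambda v: (math.floor(v / cell_deg) + 0.5) * cell_deg
    return round(centre(lat), 6), round(centre(lon), 6)

precise_point = (52.2215, 6.8937)       # hypothetical GPS fix
stored_point = coarsen(*precise_point)  # what is actually retained
print(precise_point, "->", stored_point)
```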
The AAAS [63] asserts that it is necessary for data collectors to be provided “training to ensure that they are aware of the risks and responsibilities (both to themselves and the research subjects) associated with the task” before data is even collected. This special care is argued for because of the way spatial data can be used to identify individuals and groups. For instance, there is “potential for an individual’s identity to be exposed in the release of a georeferenced data set” such as through linking “health and other administrative data back to individuals using geographical location” [69]. While aggregation and anonymisation may reduce the likelihood of such disclosure, de-anonymisation of these data remains possible. The AAAS [63] goes a step further, suggesting the creation of a “risk matrix” that “plots the probability of the harm taking place against the potential outcome’s severity, with the risk represented by the product of these two factors”.
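Because the AAAS describes risk as the product of two factors, the calculation itself is simple. The following minimal sketch assumes 1–5 scales for probability and severity and uses invented example harms purely for illustration; neither the scales nor the harms are taken from the guideline:

```python
# Minimal sketch of an AAAS-style risk matrix: risk = probability of the
# harm occurring x severity of the outcome. The 1-5 scales and the example
# harms are assumptions made for illustration only.

harms = {
    # harm: (probability on a 1-5 scale, severity on a 1-5 scale)
    "re-identification of individuals in a released georeferenced dataset": (2, 5),
    "loss of a field device holding raw GPS tracks": (3, 4),
    "publication of coarse, already-aggregated maps": (4, 1),
}

# Rank harms by their risk score so mitigation effort can be prioritised.
for harm, (probability, severity) in sorted(
    harms.items(), key=lambda item: item[1][0] * item[1][1], reverse=True
):
    print(f"risk={probability * severity:>2}  {harm}")
```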

4.2.4. Bias and Discrimination

Similar to the AI guidelines, the coupled concerns over bias and discrimination were also raised in the geo-information guidelines. The Geospatial Commission [66] highlights the fact that organisations need to be aware of different kinds of bias that include “the replication or exacerbation of existing biases, the transfer of biases from elsewhere, and the masking of bias or compromise of oversight processes due to a perception” of the data having no inherent biases. These biases can be both consciously and unconsciously propagated within geo-information datasets, and each of the guidelines highlights the need to be aware of this. As further remarked on by Benchmark and EthicalGEO [64], bias “in the collection, use, and combination of location datasets can either remove affected groups from mapping that conveys rights or services, or amplify negative impacts of inclusion in a dataset”. This is also commented on by the AAAS guidelines [63], which point to the fact that such biases can be amplified during crisis situations. For instance, “if connectivity is known to have been degraded in certain neighbourhoods of a city, reports coming out of that zone might be given more statistical weight” in contrast to areas where the infrastructure remains intact [63]. The overvaluing of certain areas resulting from bias in the data may therefore lead to some areas receiving more resources than others (or, inversely, to some areas not being considered to be in need of resources at all). As similarly highlighted by Berman et al. [69], data modelling performed by “persons who do not take into account the limitations of the data and/or do not understand and take into account the social, political and environment contexts” of data collection can lead to flawed findings and decision-making.
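The AAAS example of giving more statistical weight to reports from areas with degraded connectivity can be illustrated with a small, purely hypothetical calculation (all figures are invented). It also shows why such corrections depend on the contextual knowledge Berman et al. [69] call for, since the adjustment is only as good as the estimated reporting rate:

```python
# Purely hypothetical illustration of re-weighting crisis reports when
# connectivity is degraded: raw report counts under-represent need in areas
# where the reporting rate has collapsed, so counts can be divided by an
# estimated reporting rate. All figures are invented; in practice the
# estimate itself depends on contextual knowledge of the affected area.

areas = {
    # area: (reports received, estimated reporting rate)
    "district_a": (120, 0.9),  # infrastructure largely intact
    "district_b": (30, 0.2),   # badly degraded connectivity
}

for area, (reports, rate) in areas.items():
    estimated_incidents = reports / rate  # inverse-probability weighting
    print(f"{area}: raw reports={reports}, weighted estimate={estimated_incidents:.0f}")
```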
The guidelines also highlight that it is necessary to be careful about the potential for discrimination that may arise from biased datasets. Berman et al. [69] further point out that discrimination can be “consciously or unconsciously built into algorithms without the final user’s knowledge”, and this may result in “trends and predictive models that discriminate against certain persons and populations” through police profiling, judicial decisions or even insurance applications. This is because geo-information can “inherently implicate and identify potentially vulnerable individuals, groups, organisations and resources”, which can lead to “data generated in good faith being used maliciously” if data collectors are not aware of the contexts specific to where the geo-information is being gathered [69]. In similar terms, Benchmark and EthicalGEO [64] state that it is “now clear that data-driven applications can come with specific kinds of risks, including undue manipulation, discrimination, opacity, and undermining personal privacy”. There is therefore a need to understand which conscious or unconscious biases can become embedded and lead to discriminatory decision-making. As the Geospatial Commission [66] argues, understanding the “potential bias and discrimination in the use of all data is fundamental” to unlocking the potential of location data projects to create more positive outcomes for the public.

4.2.5. Explainability and Transparency

The guidelines also highlight the need to improve the level of transparency in communication about geo-information. The Geospatial Commission [66] states that “transparency means improved access for the data subject, which depends on organisations explaining and engaging with the public”. This “improved access” is meant to be made possible through “simpler, more granular and less intrusive or complex ways to understand and consent to, or have control over, how their location data is collected and used” [66]. In similar terms, the UKSA [68] states that transparency involves clear communication throughout the stages of collection, use, analysis, storage and distribution of spatial data. Greater clarity in explanations regarding spatial data is especially needed where AI is involved, as the Ordnance Survey [67] notes that “AI presents very particular complexities because processes and models can lack transparency and happen at scales beyond the capacity of humans to monitor”. Moreover, transparency goes beyond clearer and simpler communication and can be a matter of organisations acting in good faith. As Berman et al. [69] state, in the case of using unmanned aerial vehicles (UAVs), launching and landing should be “from the location to be surveyed rather than remotely to highlight transparency in use and allow for an opportunity for appropriate explanations and dialogue with the community in advance”.
Although mentioned in only two of the guidelines, explainability is also part of improving the communication between data controllers and the public. For example, the Ordnance Survey [67] points out that geo-information models should include procedures for “data and model explainability/observability to ensure that decisions are transparent and taken within the scope of the domain of knowledge”. Goodchild et al. [65] also assert that ensuring explainability necessitates “a mandated and enforceable legal regulatory regime, which requires informed consent and restrictions that enable fairness and non-discrimination”. These mandates would put more pressure on organisations as well as researchers collecting geo-information to ensure that they are as transparent as possible about the purpose behind the data they are collecting.

4.2.6. Accountability and Responsibility

Furthermore, the calls for transparency reflect the importance the guidelines place on assigning accountability and responsibility in the use of spatial data. As the Geospatial Commission [66] states, “accountability is a key factor in deciding whether a data subject supports the use of location data relating to their movements”, but this support is hindered because “the public feel unable to hold data controllers to account”. Likewise, Benchmark and EthicalGEO [64] point out that the individuals and communities represented in location data “should be able to interrogate how it is collected and used in relation to them and their interests, and appeal those uses proportionate to levels of detail and potential for harms”. This ability to interrogate organisations depends on whether there is communication between the public and these organisations. A lack of communication or clarity on what data is collected “prevents any legitimate queries, restitution or disputation of decision making, thereby precluding accountability in decision making” [69]. Ensuring that such queries can be made and accountability assigned in the collection and use of spatial data reflects the importance of defining “norms within the public and private sectors to collect data and build systems that are in the public interest” [64]. The Ordnance Survey [67], in turn, specifies that such norms require the creation of an “ethical culture” based on “internal processes for staff to raise issues, and stakeholder engagement in organisational activities”.

4.2.7. Inclusion and Harm

The creation of such an ethical culture (or lack thereof) is also brought up in the guidelines through the need for greater inclusion in developing these technologies. Goodchild et al. [65] acknowledge the fact that “individuals, communities, and groups have been and continue to be excluded from structures of power, decision making, and self-determination” through experiencing barriers to resources, participation, justice and equity. Berman et al. [69], meanwhile, mention that the limitations of spatial data can also include the exclusion of certain demographics due to their lack of access or to datasets being outdated. The Geospatial Commission [66] also argues for improving direct “lines of accountability between the data subject and the data controller” in order to “build confidence in an organisations’ internal governance and oversight mechanisms”. These direct lines of accountability would be important in reducing the marginalisation of certain communities, especially as these communities are “rarely included in creating these [geo-information] technologies, are less likely to be informed about the risks and consequences, and typically have fewer remedies available to combat injustices” [64].
The need for these oversight mechanisms is tied to the levels of harm that geo-information can produce. For example, two of the principles within the Locus Charter [64] are Principle 3 (Do no harm) and Principle 4 (Protect the vulnerable). In similar terms, the UKSA [68] provides an ethics checklist, whose second requirement, “Do no harm”, asks the following: “Will this analysis do more good than harm? Will it be a positive influence in the world?” Meanwhile, the Ordnance Survey [67] lists 15 “high-level harms” associated with geo-information. These include breach of privacy, disclosure of vulnerable individuals or communities, use of data to damage the landscape or environment, increase in social inequality resulting from how the data are used, and contributing to a regime’s propaganda by bestowing an air of respectability [67]. These points highlight the awareness that geo-information can negatively impact individuals in multiple ways, both personally and at a wider collective level, if the spatial data is used by malicious actors.

4.2.8. Trust

Due to the impact that geo-information can have (both negative and positive), a number of the guidelines point towards the importance of increasing trust in the spatial data as well as in the technologies used by researchers and companies. The Geospatial Commission [66] states that users of location data “must be transparent, and the benefits must be clearly stated and adhered to” for the sake of “building long term trust and support from the UK public”. This call for generating greater trust in location data is founded upon the Geospatial Commission’s ABCs (i.e., improving Accountability, mitigating Bias and having greater Clarity), which are “three shared values to safeguard and build the public’s trust and confidence in the use, sharing and reuse of location data” [66]. This is also echoed by the UKSA [68], which states that “taking a considered approach to ethics in every project” ensures retaining “public trust in the use of data for research and in statistics, both in geospatial contexts and more generally”. Likewise, the Ordnance Survey [67] points out that “ethically managed location data is critical to maintaining public, customer and government trust, and demonstrating” the accountability of organisations. The W3C [70], by contrast, prompts those using location data to have certain expectations that must be met before trusting developers and service providers. The guideline asserts that users should ask: what location data is included, how is it used and what can it tell others? [70]. Moreover, responses from developers and service providers “should include comprehensive justification for spatial data use and practical steps that users can take to mitigate risks and keep themselves safe without sensationalising the dangers” [70]. There is a notable difference between how the Geospatial Commission and the W3C each refer to users: the Geospatial Commission refers to those who have already gathered location data (i.e., either organisations or researchers), while the W3C refers to members of the public who may volunteer their location data to these organisations or researchers. The AAAS [63] points out that greater levels of “disclosure, autonomy, and access to data may be allowed for highly trusted data recipients with strong data security, audit and access control processes”, whose goals for using the data align with the purposes stated for collecting this location data.

5. Discussion

The 24 guidelines present the ethical values and principles within them with mostly similar descriptions and stress their importance for how AI and geo-information technologies are developed and used. Firstly, many of the guidelines point out the need for these technologies to be developed based on human-centric and risk-based approaches. “Human-centric” in the guidelines refers to ensuring that these technologies are developed according to clearly defined values (e.g., accountability, responsibility, transparency, rights, justice and consent) that need to be embedded in their development. One point of concern that received only light consideration is the question of whose values, and from which regions, are most important to embed. For example, Gaffley et al. [50] and the AUDA-NEPAD [47] make repeatedly clear that African values should be integrated in the creation of responsible AI just as much as Western values are. The European Commission [51] also points towards strengthening collaborations on AI with “like-minded” countries. But this raises the question of what this means for countries that are not like-minded, an issue that is not brought up in the guidelines. The risk-based approach, meanwhile, is evident in the attention given to privacy, rights, risks, discrimination and bias in most of the guidelines. This focus highlights the dangers these technologies present to individuals as well as groups, given their invasive potential.
Both the human-centric and risk-based approaches place heavy stress on the role of developers and organisations that collect, store and use data from individuals and groups. The guidelines from the ACM [49], WHO [61], W3C [70] and AAAS [63], for instance, make very clear the need for those collecting data to clearly communicate the purposes behind the use of the data they gather. However, this is not always straightforward: as the AAAS [63] notes, in emergencies or crisis situations, gaining both clarity and consent on the purposes of data collection may not be simple. More generally, the need for this clarity is linked to how these technologies can adversely affect the lives of the public. For example, Berman et al. [69] point out how health information can be linked to individuals by their geographic location despite efforts to anonymise the data.
At the same time, the distribution of certain values was not symmetrical across the guidelines. For example, there were far more mentions of the term justice in the set of AI guidelines than in the geo-information guidelines. This may be because AI is used in many domains that can be personally harmful to individuals (e.g., in granting loans, access to jobs, identification of insurance risk, medical diagnoses and criminal sentencing), which present far clearer legal dangers (and clearer causes for seeking restitution). These applications (and their negative consequences) are far more likely to endanger standards of justice. The harms of geo-information, by contrast, are not perceived to be as injurious in legal terms, and so the risks of these technologies are not assessed in terms of promoting or infringing upon standards of justice.
However, this may not be entirely true, since the numerous mentions of privacy, consent, bias, discrimination, trust and rights reflect the fact that the geo-information guidelines make very clear the need for legal mechanisms to be developed to protect the public from the misuse of their spatial data. These mechanisms should be in place whenever geo-information is collected (whether in ordinary situations or in emergency situations). For instance, Berman et al. [69] state the need for consent agreements to ensure individuals can be made aware of when their spatial data is being shared, along with the right to remove their data from datasets. Likewise, the AAAS [63] asserts that local communities as well as organisations should have the right to change any false or inaccurate data that is collected about them, along with being involved in what happens to their data after collection. As Goodchild et al. [65] also point out, explainability should not end with just communicating how data is processed but also requires enforceable legal measures and restrictions to protect people where their data is concerned. These statements show that there is a need to make it possible for the public to know more about the data that is collected about them and to open opportunities for them to more directly address any past injustices that they may have suffered.
Moreover, while the majority of the guidelines do well in presenting the risks and harms from both AI and geo-information technologies, there were not many mentions of the capacity for these technologies to improve the wellbeing of individuals and groups. Out of all 24 guidelines, only three of the AI guidelines mentioned beneficence. This may indicate that the guidelines mainly aimed to stress the potential misuse of these technologies, which is understandably important given their widespread use and the risks they pose. But it could also be because the potential positive outcomes for improving the lives of the public were not given as much consideration, something that should be weighed equally in analysing the impact of these technologies. This may be partially due to the selection of guidelines, as a wider selection including analyses of other specific technologies, such as robotics or drones used in healthcare or emergencies, may have produced different results (e.g., the use of robots in elderly care or drones in the delivery of medical supplies). It may also be because the review relied on word searches: the framing of a value or principle may still be present even where the word search returns nothing. For instance, the ACM [49] contains the principle “Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing” and further states that this principle concerns “the quality of life of all people, affirms an obligation of computing professionals… to use their skills for the benefit of society, its members, and the environment surrounding them”. Relying on specific word searches alone can therefore be considered a limitation in this case.
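As an illustration of this limitation, the following minimal sketch shows the kind of keyword count such a word-search review relies on, assuming (hypothetically) that each guideline has been saved as a plain-text file; a document that expresses a value without using the search term would simply not be counted:

```python
# Minimal sketch of the kind of word search used in a guideline review,
# assuming each guideline is stored as a plain-text file in a local folder.
# The folder name and keyword list are illustrative assumptions. A document
# can express a value (e.g., beneficence) without ever containing the term,
# and this count would miss it.

from pathlib import Path

KEYWORDS = ["privacy", "consent", "bias", "beneficence"]

def guidelines_mentioning(keyword: str, folder: str = "guidelines") -> int:
    """Count how many text files in `folder` contain `keyword` at least once."""
    count = 0
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        if keyword.lower() in text:
            count += 1
    return count

if __name__ == "__main__":
    for word in KEYWORDS:
        print(word, guidelines_mentioning(word))
```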
Another asymmetry is present in the geo-information guidelines: while many mention the importance of transparency, only two mention the term explainability. This may be in part because the guidelines from Goodchild et al. [65] and the Ordnance Survey [67] are the only ones assessing the integration of AI and geo-information technologies; the term explainability remains largely confined to the discourse on the ethics of AI, while transparency is more generally associated with any data-intensive technology. This is also the case with the terms fairness, autonomy and dignity, which were mentioned in numerous AI guidelines but were each only mentioned once in the geo-information guidelines. This may be due to the fact that not much critical attention is given to the potential for geo-information to limit the agency of individuals. Most recently, this was exemplified during the COVID-19 pandemic, where geo-information from wearables and mobile phones was used to track and limit the movement of individuals. This was for the purpose of maintaining public health, but it also shows the ethical challenges that holding vast geo-information on where people are and where they are going can pose for fairness and autonomy [12]. Geo-information can impact dignity in situations where certain places are classified in negative terms, so that people spatially represented as belonging to these places can be considered less worthy of respect or dignity (e.g., the stigma attached to those living in DUAs as explored briefly in Section 2). In this case, dignity can be diminished as a consequence of stigmatisation or discrimination arising from the use of geo-information. Attention to such specific cases in the guidelines may have increased the mentioning of how geo-information technologies can affect autonomy and dignity.
Furthermore, the dangers these technologies present are reflected in the way that many of the guidelines call for increasing public trust. The need for trust is framed by the guidelines as necessary for the long-term adoption of these technologies as well as for reducing the anxieties of the public given the risks these technologies pose. This is exemplified in the efforts to create trustworthy AI along with the fields of responsible AI or ethical AI more broadly. But such trust is also dependent on improving the levels of inclusion, which the Geospatial Commission [66], for example, makes clear is especially an issue given the fact that marginalised demographics are often not included in conversations surrounding the deployment of geo-information technologies and are not informed about the risks from these technologies. This is also illustrated by Principle 4 of the Locus Charter [64]—“Protect the vulnerable”. However, such inclusion would still depend upon the levels of explainability or transparency in the data collection, processing and representation. Even if certain demographics are brought into the development of these technologies, there still needs to be greater clarity, especially if they are from non-technical backgrounds.
Another principle present in the guidelines (as briefly mentioned above) is the need to consider region-specific and culture-specific values. Consequently, increasing public trust will depend upon which public is being considered and where this public is located. This adds another layer of critical concern, as it requires greater communication between regional and international actors: while universal principles may be important (e.g., “do no harm” or the right to privacy), there needs to be space for establishing more culturally specific principles as well. Moreover, part of this inclusivity and diversity is also dependent on the levels of access to these technologies. Many of the guidelines point out how companies with access to greater amounts of data can reap the greatest benefits in terms of insights but also have the greatest potential to cause harm if they misuse this data. At the same time, they also mention the need for individuals and communities to have greater access to the data collected about them (e.g., in terms of respecting consent or privacy rights). For instance, the W3C [70] asserts that individuals should have a “right to access”, which gives them the right to obtain a copy of any personal geographical information that may be held about them. Likewise, the Ordnance Survey [67] points out that there may need to be controls and frameworks created for the supervision of geographical data collection and usage, which would affect the levels of access that organisations can have to the geo-information of the public.
Lastly, in response to the potential risks and harms from these technologies, both sets of guidelines also highlight the need for better auditing and oversight mechanisms in the development of both AI and geo-information technologies. These mechanisms—for instance, the Data Protection Impact Assessments referred to by the UK’s House of Lords [55]—are meant to ensure that any data from the public is collected in a manner that does not negatively impact those the data represents. Likewise, principles such as the Locus Charter’s [64] Principle 7 on minimising data propose that those collecting data do so in as minimally invasive a manner as possible, or at least proportionally to the purposes of the data collection. But these mechanisms raise the question of whether they should be implemented from within technology companies themselves or be mandated through legislation such as the EU’s GDPR. Leaving these mechanisms in the hands of companies may lead to what is called “ethics washing”, where technology companies use ethics or ethical principles as “an acceptable facade that justifies deregulation, self-regulation or market driven governance”, which creates the “adoption of appearances of ethical behaviour” [72]. This facade makes it appear as though the guidelines produced by companies (e.g., from Microsoft, Google and IBM in this review) mean that they are doing the dutiful work of ensuring their technologies are developed to be as ethically mindful as possible, when in practice this may only be a mask. To ensure this is not the case, these auditing processes would be better deployed by government regulatory bodies empowered to enforce laws that protect the public (e.g., the EU’s AI Act, which is currently under consideration).

6. Conclusions

This review presents the growing attention that is being given to the social, political and ethical challenges facing the development and use of AI and geo-information technologies. The two sets of ethical guidelines make very clear that these two technologies present a number of similar concerns. These similarities are evidenced by the overlap of many of the ethical concerns they bring up when discussing the risks posed by AI and geo-information technologies, whether individually or together (e.g., ranging from discrimination against marginalised communities due to biased datasets to issues of ensuring cultural and regional values are integrated in the design of these technologies). Moreover, both sets of guidelines also show that a good deal of scrutiny should be placed on the shoulders of the researchers and organisations that gather the data used by AI and geo-information technologies, as many of the ethical concerns raised relate to the responsibility these actors have towards minimising the dangers these technologies present to the public. Through this greater scrutiny, as well as through more stringent auditing and legal mechanisms (as many of the guidelines suggest), these technologies can be used and developed in a manner that fosters more trust than fear. At the same time, this review reveals that there is an asymmetry in the mentioning of certain values and principles (such as autonomy, beneficence, fairness, dignity and justice) between the two sets of guidelines. On the one hand, this asymmetry partly reflects a limitation of the review methodology itself: a word may not be mentioned (and therefore not found in a word search) even though the ethical concern associated with it is present (as in the case of beneficence). On the other hand, this asymmetry shows that certain central issues, such as protecting social and environmental justice, should be far more central and more clearly framed in the geo-information guidelines (as they are in the AI guidelines). Many guidelines call for setting up the measures necessary for individuals and groups to seek restitution when they suffer injustices from the malicious collection and use of their geo-information. Such calls are clear in issues surrounding consent, privacy and rights, which show an underlying concern in the geo-information guidelines for how the public ought to be protected. These measures will help ensure that these technologies are more trustworthy and that the data they gather is used in a far more responsible manner.

Funding

This research received funding from the University of Twente as part of the FRAME-PRO project (work order number: 14535000-10).

Data Availability Statement

No new data was created in this review.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Saghiri, A.M.; Vahidipour, S.M.; Jabbarpour, M.R.; Sookhak, M.; Forestiero, A. A survey of Artificial Intelligence Challenges: Analyzing the definitions, relationships, and evolutions. Appl. Sci. 2022, 12, 4054. [Google Scholar] [CrossRef]
  2. Bishop, W.; Grubesic, T.H. Geographic Information: Organisation, Access and Use; Springer International Publishing AG: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  3. Goodchild, M.F.; Longley, P.A. Geographic Information Science. In Handbook of Regional Science; Fischer, M.M., Nijkamp, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1597–1614. [Google Scholar] [CrossRef]
  4. UN-Habitat. Metadata on SDGs Indicator 11.1.1 Indicator Category: Tier 1; UN-Habitat: Nairobi, Kenya, 2018. [Google Scholar]
  5. Thomson, D.R.; Kuffer, M.; Boo, G.; Hati, B.; Grippa, T.; Elsey, H.; Linard, C.; Mahabir, R.; Kyobutungi, C.; Maviti, J.; et al. Need for an integrated deprived area “slum” mapping system (ideamaps) in low- and middle-income countries (lmics). Soc. Sci. 2020, 9, 80. [Google Scholar] [CrossRef]
  6. Kuffer, M.; Wang, J.; Thomson, D.R.; Georganos, S.; Abascal, A.; Owusu, M.; Vanhuysse, S. Spatial information gaps on deprived urban areas (slums) in low-and-middle-income-countries: A user-centered approach. Urban Sci. 2021, 5, 72. [Google Scholar] [CrossRef]
  7. Abascal, A.; Rothwell, N.; Shonowo, A.; Thomson, D.R.; Elias, P.; Elsey, H.; Yeboah, G.; Kuffer, M. “Domains of deprivation framework” for mapping slums, informal settlements, and other deprived areas in LMICs to improve urban planning and policy: A scoping review. Comput. Environ. Urban Syst. 2022, 93, 101770. [Google Scholar] [CrossRef]
  8. Mahabir, R.; Crooks, A.; Croitoru, A.; Agouris, P. The study of slums as social and physical constructs: Challenges and emerging research opportunities. Reg. Stud. Reg. Sci. 2016, 3, 399–419. [Google Scholar] [CrossRef]
  9. Fisher, T.; Gibson, H.; Liu, Y.; Abdar, M.; Posa, M.; Salimi-Khorshidi, G.; Hassaine, A.; Cai, Y.; Rahimi, K.; Mamouei, M. Uncertainty-aware interpretable deep learning for slum mapping and monitoring. Remote Sens. 2022, 14, 3072. [Google Scholar] [CrossRef]
  10. Leonita, G.; Kuffer, M.; Sliuzas, R.; Persello, C. Machine learning-based slum mapping in support of slum upgrading programs: The case of Bandung City, Indonesia. Remote Sens. 2018, 10, 1522. [Google Scholar] [CrossRef]
  11. Brito, P.L.; Kuffer, M.; Koeva, M.; Pedrassoli, J.C.; Wang, J.; Costa, F.; de Freitas, A.D. The spatial dimension of COVID-19: The potential of earth observation data in support of slum communities with evidence from Brazil. ISPRS Int. J. Geo-Inf. 2020, 9, 557. [Google Scholar] [CrossRef]
  12. Oluoch, I.O. Managing risk, governmentality and geoinformation: Vectors of vulnerability in the mapping of COVID-19. J. Contingencies Crisis Manag. 2022, 30, 41–49. [Google Scholar] [CrossRef]
  13. Sliuzas, R.; Mboup, G.; de Sherbinin, A. Expert Group Meeting on Slum Identification and Mapping; Report; University of Twente: Enschede, The Netherlands, 2008. [Google Scholar]
  14. Oluoch, I.; Kuffer, M.; Nagenborg, M. In-between the lines and pixels: Cartography’s transition from tool of the state to humanitarian mapping of deprived urban areas. Digit. Soc. 2022, 1, 5. [Google Scholar] [CrossRef]
  15. Mboga, N.; Georganos, S.; Grippa, T.; Lennert, M.; Vanhuysse, S.; Wolff, E. Fully convolutional networks and geographic object-based image analysis for the classification of VHR imagery. Remote Sens. 2019, 11, 597. [Google Scholar] [CrossRef]
  16. Ajami, A.; Kuffer, M.; Persello, C.; Pfeffer, K. Identifying a slums’ degree of deprivation from VHR images using convolutional neural networks. Remote Sens. 2019, 11, 1282. [Google Scholar] [CrossRef]
  17. Wang, W. Digital collectivism in a global state of emergency. In Data Justice and COVID-19: Global Perspectives; Taylor, L., Sharma, G., Martin, G., Jameson, S., Eds.; Meatspace Press: Manchester, UK, 2020. [Google Scholar]
  18. Verma, D.; Jana, A.; Ramamritham, K. Transfer learning approach to map urban slums using high and medium resolution satellite imagery. Habitat Int. 2019, 88, 101981. [Google Scholar] [CrossRef]
  19. Kohli, D.; Stein, A.; Sliuzas, R. Uncertainty analysis for image interpretations of urban slums. Comput. Environ. Urban Syst. 2016, 60, 37–49. [Google Scholar] [CrossRef]
  20. Elmes, A.; Alemohammad, H.; Avery, R.; Caylor, K.; Eastman, J.R.; Fishgold, L.; Friedl, M.A.; Jain, M.; Kohli, D.; Bayas, J.C.L.; et al. Accounting for training data error in machine learning applied to Earth Observations. Remote Sens. 2020, 12, 1034. [Google Scholar] [CrossRef]
  21. Owusu, M.; Kuffer, M.; Belgiu, M.; Grippa, T.; Lennert, M.; Georganos, S.; Vanhuysse, S. Towards user-driven Earth observation-based slum mapping. Comput. Environ. Urban Syst. 2021, 89, 101681. [Google Scholar] [CrossRef]
  22. Cinnamon, J. Geographic Information Systems; Ethics. Int. Encycl. Hum. Geogr. 2020, 6, 57–62. [Google Scholar] [CrossRef]
  23. Micheli, M.; Gevaert, C.M.; Carman, M.; Craglia, M.; Daemen, E.; Ibrahim, R.E.; Kotsev, A.; Mohamed-Ghouse, Z.; Schade, S.; Schneider, I.; et al. AI ethics and data governance in the geospatial domain of Digital Earth. Big Data Soc. 2022, 9, 20539517221138767. [Google Scholar] [CrossRef]
  24. Gevaert, C.M. Explainable AI for earth observation: A review including societal and Regulatory Perspectives. Int. J. Appl. Earth Obs. Geoinform. 2022, 112, 102869. [Google Scholar] [CrossRef]
  25. Kochupillai, M.; Kahl, M.; Schmitt, M.; Taubenbock, H.; Zhu, X.X. Earth observation and Artificial Intelligence: Understanding emerging ethical issues and opportunities. IEEE Geosci. Remote Sens. Mag. 2022, 10, 90–124. [Google Scholar] [CrossRef]
  26. Gevaert, C.M.; Carman, M.; Rosman, B.; Georgiadou, Y.; Soden, R. Fairness and accountability of AI in disaster risk management: Opportunities and challenges. Patterns 2021, 2, 1–9. [Google Scholar] [CrossRef] [PubMed]
  27. Benson, E.; Brigg, M.; Hu, K.; Maddison, S.; Makras, A.; Moodie, N.; Strakosch, E. Mapping the spatial politics of Australian settler colonialism. Polit. Geogr. 2023, 102, 102855. [Google Scholar] [CrossRef]
  28. Maciej, G. Human–place–technology relations in the digital placemaking process. Geoforum 2024, 149, 103950. [Google Scholar] [CrossRef]
  29. Sheikh, H.; Foth, M.; Mitchell, P. From legislation to obligation: Re-thinking smart urban governance for multispecies justice. Urban Gov. 2023, 3, 259–268. [Google Scholar] [CrossRef]
  30. Stahl, B.C.; Eke, D. The ethics of CHATGPT—Exploring the ethical issues of an emerging technology. Int. J. Inf. Manag. 2024, 74, 102700. [Google Scholar] [CrossRef]
  31. Farhi, F.; Jeljeli, R.; Aburezeq, I.; Dweikat, F.F.; Al-Shami, S.A.; Slamene, R. Analyzing the students’ views, concerns, and perceived ethics about chat GPT usage. Comput. Educ. Artif. Intell. 2023, 5, 100180. [Google Scholar] [CrossRef]
  32. Memarian, B.; Doleck, T. Fairness, accountability, transparency, and ethics (fate) in Artificial Intelligence (AI) and Higher Education: A systematic review. Comput. Educ. Artif. Intell. 2023, 5, 100152. [Google Scholar] [CrossRef]
  33. Carlsson, V.; Rönnblom, M. From politics to ethics: Transformations in EU policies on digital technology. Technol. Soc. 2022, 71, 102145. [Google Scholar] [CrossRef]
  34. Malmio, I. Ethics as an enabler and a constraint—Narratives on technology development and Artificial Intelligence in military affairs through the case of Project Maven. Technol. Soc. 2023, 72, 102193. [Google Scholar] [CrossRef]
  35. Delacroix, S.; Wagner, B. Constructing a mutually supportive interface between ethics and regulation. Comput. Law Secur. Rev. 2021, 40, 105520. [Google Scholar] [CrossRef]
  36. Aitken, M.; Ng, M.; Horsfall, D.; Coopamootoo, K.P.; van Moorsel, A.; Elliott, K. In pursuit of socially-minded data-intensive innovation in banking: A focus group study of public expectations of digital innovation in banking. Technol. Soc. 2021, 66, 101666. [Google Scholar] [CrossRef]
  37. Raab, C.D. Information privacy, impact assessment, and the place of ethics. Comput. Law Secur. Rev. 2020, 37, 105404. [Google Scholar] [CrossRef]
  38. Bingley, W.J.; Curtis, C.; Lockey, S.; Bialkowski, A.; Gillespie, N.; Haslam, S.A.; Ko, R.K.; Steffens, N.; Wiles, J.; Worthy, P. Where is the human in human-centered AI? Insights from developer priorities and user experiences. Comput. Hum. Behav. 2023, 141, 107617. [Google Scholar] [CrossRef]
  39. Francisco, M.; Linnér, B.-O. AI and the governance of Sustainable Development. An idea analysis of the European Union, the United Nations, and the World Economic Forum. Environ. Sci. Policy 2023, 150, 103590. [Google Scholar] [CrossRef]
  40. Robinson, S.C. Trust, transparency, and openness: How inclusion of cultural values shapes nordic national public policy strategies for artificial intelligence (AI). Technol. Soc. 2020, 63, 101421. [Google Scholar] [CrossRef]
  41. Kannelønning, M.S. Navigating uncertainties of introducing artificial intelligence (AI) in Healthcare: The role of a Norwegian network of professionals. Technol. Soc. 2024, 76, 102432. [Google Scholar] [CrossRef]
  42. Vetter, M.A.; Lucia, B.; Jiang, J.; Othman, M. Towards a framework for local interrogation of AI Ethics: A case study on text generators, academic integrity, and composing with chatgpt. Comput. Compos. 2024, 71, 102831. [Google Scholar] [CrossRef]
  43. Saastamoinen, U.; Vikström, S.; Helminen, V.; Lyytimäki, J.; Nurmio, K.; Nyberg, E.; Rantala, S. The Limits of Spatial Data? Sense-making within the development and different uses of Finnish urban-rural classification. Land Use Policy 2022, 120, 106231. [Google Scholar] [CrossRef]
  44. Awad, E.; Levine, S.; Anderson, M.; Anderson, S.L.; Conitzer, V.; Crockett, M.; Everett, J.A.; Evgeniou, T.; Gopnik, A.; Jamison, J.C.; et al. Computational ethics. Trends Cogn. Sci. 2022, 26, 388–405. [Google Scholar] [CrossRef]
  45. Alderman, D.H.; Inwood, J.F.; Bottone, E. The mapping behind the movement: On recovering the critical cartographies of the African American Freedom Struggle. Geoforum 2021, 120, 67–78. [Google Scholar] [CrossRef]
  46. Mortaheb, R.; Jankowski, P. Smart city re-imagined: City Planning and Geoai in the age of big data. J. Urban Manag. 2023, 12, 4–15. [Google Scholar] [CrossRef]
  47. AUDA-NEPAD. African Union High Level Panel on Emerging Technologies (APET) Technology Report on Artificial Intelligence for Africa: Harnessing Artificial Intelligence for Africa’s Socio-Economic; African Union Development Agency: Johannesburg, South Africa, 2021. [Google Scholar]
  48. Kak, A.; Myers, S.W. “AI Now 2023 Landscape: Confronting Tech Power”, AI Now Institute. Available online: https://ainowinstitute.org/2023-landscape (accessed on 11 April 2023).
  49. Association for Computing Machinery. ACM Code of Ethics and Professional Conduct: Affirming Our Obligation to Use Our Skills to Benefit Society; Association for Computing Machinery: New York, NY, USA, 2018. [Google Scholar]
  50. Gaffley, M.; Adams, R.; Shyllon, O. Artificial Intelligence. African Insight. A Research Summary of the Ethical and Human Rights Implications of AI in Africa; HSRC & Meta AI and Ethics Human Rights Research Project for Africa—Synthesis Report. 2022. Available online: https://africanaiethics.com/wp-content/uploads/2022/02/Artificial-Intelligence-African-Insight-Report.pdf (accessed on 10 November 2023).
  51. European Commission. Annexes to the Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and the Social Committee and the Committee of the Regions: Fostering a European approach to Artificial Intelligence; European Commission: Brussels, Belgium, 2021. [Google Scholar]
  52. European Commission for the Efficiency of Justice (CEPJ). European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment; CEPJ: Strasbourg, France, 2018. [Google Scholar]
  53. European Commission. Ethical Guidelines for Trustworthy AI. High-Level Expert Group on AI (AI HLEG); European Commission: Brussels, Belgium, 2019. [Google Scholar]
  54. Google. 2022 AI Principles Progress Update; Google: Menlo Park, CA, USA, 2022. [Google Scholar]
  55. House of Lords Select Committee on Artificial Intelligence. AI in the UK: Ready, Willing and Able? Report of Session 2017–2019; House of Lords Select Committee on Artificial Intelligence: London, UK, 2020. [Google Scholar]
  56. IBM. Everyday Ethics for Artificial Intelligence. 2022. Available online: https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf (accessed on 12 December 2023).
  57. Cabinet Secretariat, Japan. Social Principles of Human-Centric AI. Tokyo. 2019. Available online: https://www.cas.go.jp/jp/seisaku/jinkouchinou/pdf/humancentricai.pdf (accessed on 10 November 2023).
  58. Microsoft. Governing AI: A Blueprint for the Future; Microsoft: Albuquerque, NM, USA, 2023. [Google Scholar]
  59. OECD. Recommendations of the Council on Artificial Intelligence, OECD/LEGAL/0449; OECD: Paris, France, 2023. [Google Scholar]
  60. Leslie, D. Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. arXiv 2019, arXiv:1906.05684. [Google Scholar] [CrossRef]
  61. WHO. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance; WHO: Geneva, Switzerland, 2021. [Google Scholar]
  62. UNESCO. Recommendation on the Ethics of Artificial Intelligence; UNESCO: Paris, France, 2022. [Google Scholar]
  63. Wyndham, J.; Platts, E.; Drake, J. AAAS Scientific Responsibility, Human Rights and Law Program. Location-Based Data in Crisis Situations: Principles and Guidelines; AAAS: Washington, DC, USA, 2019. [Google Scholar]
  64. Benchmark; EthicalGeo. Locus Charter. 2021. Available online: https://ethicalgeo.org/wp-content/uploads/2021/03/Locus_Charter_March21.pdf (accessed on 20 November 2023).
  65. Goodchild, M.F.; Appelbaum, R.; Crampton, J.; Herbert, W.A.; Janowicz, K.; Kwan, M.-P.; Michael, K.; Alvarez León, L.F.; Bennett, M.; Cole, D.G.; et al. A White Paper on Locational Information and the Public Interest. Am. Assoc. Geogr. 2022. [Google Scholar] [CrossRef]
  66. Geospatial Commission. Building Public Confidence in Location Data: The ABC of Ethical Use; Geospatial Commission: London, UK, 2022. [Google Scholar]
  67. Ordnance Survey. Practical GeoAI Ethics; Workshop Output Report; Ordnance Survey: Southampton, UK, 2022. [Google Scholar]
  68. UK Statistics Authority. Ethical Considerations in the Use of Geospatial Data for Research and Statistics. Available online: https://uksa.statisticsauthority.gov.uk/publication/ethical-considerations-in-the-use-of-geospatial-data-for-research-and-statistics/pages/2/ (accessed on 6 September 2021).
  69. Berman, G.; Rosa, S.; Accone, T. Ethical Considerations When Using Geospatial Technologies for Evidence Generation; UNICEF Official Research—Innocenti Discussion Paper; DP-2018-02; UNICEF: New York, NY, USA, 2021. [Google Scholar]
  70. World Wide Web Consortium (W3C). The Responsible Use of Spatial Data. Available online: https://w3c.github.io/sdw/responsible-use/ (accessed on 11 September 2023).
  71. Hagendorff, T. The Ethics of AI Ethics: An evaluation of guidelines. Minds Mach. 2020, 30, 99–120. [Google Scholar] [CrossRef]
  72. Bietti, E. From ethics washing to ethics bashing: A view on tech ethics from within Moral Philosophy. SSRN Electron. J. 2021, 210–219. [Google Scholar] [CrossRef]
Figure 1. Co-occurrence of ethical values mentioned in AI guidelines.
Figure 2. Co-occurrence of ethical values mentioned in geo-information guidelines.
Table 1. Values and the author/publication they are derived from.

Values | Author/Publication
Access to data, algorithms and geo-information technologies | Benson et al. [27], Maciej [28], Sheikh et al. [29], Stahl & Eke [30], Farhi et al. [31], Memarian & Doleck [32], Carlsson & Ronnblom [33], Malmio [34], Delacroix and Wagner [35], Aitken et al. [36], Raab [37], Bingley et al. [38], Francisco & Linner [39], Robinson [40], Kannelonning [41], and Vetter et al. [42]
Accountability for decisions and impacts of AI and geo-information technologies | Maciej [28], Saastamoinen et al. [43], Stahl & Eke [30], Farhi et al. [31], Awad et al. [44], Memarian & Doleck [32], Carlsson & Ronnblom [33], Malmio [34], Delacroix and Wagner [35], Aitken et al. [36], Raab [37], Bingley et al. [38], Robinson [40], and Vetter et al. [42]
Autonomy of individuals and groups to make decisions for themselves should not be taken away by AI and geo-information technologies | Benson et al. [27], Saastamoinen et al. [43], Stahl & Eke [30], Awad et al. [44], Memarian & Doleck [32], Malmio [34], Delacroix and Wagner [35], Bingley et al. [38], and Robinson [40]
Beneficence or improving the wellbeing of individuals and groups by using AI and geo-information | Stahl & Eke [30], Memarian & Doleck [32], Raab [37], and Bingley et al. [38]
Bias within data, algorithms and decision-making of AI and geo-information technologies | Maciej [28], Saastamoinen et al. [43], Stahl & Eke [30], Farhi et al. [31], Awad et al. [44], Memarian & Doleck [32], Carlsson & Ronnblom [33], Malmio [34], Delacroix and Wagner [35], Aitken et al. [36], Raab [37], Bingley et al. [38], Francisco & Linner [39], Robinson [40], Kannelonning [41], and Vetter et al. [42]
Consent given (or revoked) by individuals or groups during data collection, retention and distribution | Benson et al. [27], Stahl & Eke [30], Memarian & Doleck [32], Raab [37], Francisco & Linner [39], Robinson [40], and Vetter et al. [42]
Dignity of individuals and groups should be respected and not reduced by AI and geo-information technologies | Stahl & Eke [30], Delacroix and Wagner [35], and Raab [37]
Discrimination against individuals and groups (e.g., based on race, location, or gender) being worsened by AI or geo-information technologies | Alderman et al. [45], Stahl & Eke [30], Farhi et al. [31], Awad et al. [44], Memarian & Doleck [32], Carlsson & Ronnblom [33], Raab [37], Francisco & Linner [39], and Robinson [40]
Explainability/Interpretability of data and algorithms when decisions are made (e.g., what variables are given greater importance in the training of a model) | Maciej [28], Memarian & Doleck [32], Raab [37], Bingley et al. [38], Francisco & Linner [39], Robinson [40], and Kannelonning [41]
Fairness in access to AI and geo-information technologies | Alderman et al. [45], Mortaheb and Jankowski [46], Stahl & Eke [30], Awad et al. [44], Memarian & Doleck [32], Carlsson & Ronnblom [33], Delacroix and Wagner [35], Aitken et al. [36], Raab [37], Bingley et al. [38], Francisco & Linner [39], Robinson [40], and Vetter et al. [42]
Justice in the distribution of benefits and opportunities from AI and geo-information technologies | Memarian & Doleck [32], Carlsson & Ronnblom [33], Aitken et al. [36], Raab [37], Bingley et al. [38], Francisco & Linner [39], and Vetter et al. [42]
Harm/Non-maleficence suffered by individuals and groups from decisions made using AI and geo-information technologies | Sheikh et al. [29], Stahl & Eke [30], Awad et al. [44], Memarian & Doleck [32], Carlsson & Ronnblom [33], Malmio [34], Delacroix and Wagner [35], Aitken et al. [36], Bingley et al. [38], Kannelonning [41], and Vetter et al. [42]
Inclusion of marginalised individuals and groups in the design, development and use of AI and geo-information technologies | Alderman et al. [45], Saastamoinen et al. [43], Sheikh et al. [29], Stahl & Eke [30], Memarian & Doleck [32], Delacroix and Wagner [35], Bingley et al. [38], Francisco & Linner [39], and Robinson [40]
Privacy of individuals and groups being endangered by a greater level of detail captured from data (e.g., behaviour analytics or locations of marginalised groups) | Maciej [28], Mortaheb and Jankowski [46], Stahl & Eke [30], Memarian & Doleck [32], Carlsson & Ronnblom [33], Delacroix and Wagner [35], Raab [37], Bingley et al. [38], Francisco & Linner [39], Robinson [40], and Vetter et al. [42]
Responsibility of those collecting, storing and making use of data in AI and geo-information technologies (related to accountability) | Alderman et al. [45], Maciej [28], Sheikh et al. [29], Stahl & Eke [30], Farhi et al. [31], Memarian & Doleck [32], Carlsson & Ronnblom [33], Malmio [34], Delacroix and Wagner [35], Aitken et al. [36], Raab [37], Bingley et al. [38], Francisco & Linner [39], Robinson [40], Kannelonning [41], and Vetter et al. [42]
Rights of individuals and groups needing to be protected (e.g., right to privacy or right to withdraw consent) | Alderman et al. [45], Benson et al. [27], Mortaheb and Jankowski [46], Sheikh et al. [29], Stahl & Eke [30], Awad et al. [44], Memarian & Doleck [32], Carlsson & Ronnblom [33], Delacroix and Wagner [35], Raab [37], Francisco & Linner [39], and Robinson [40]
Risks that could arise from data used in AI and geo-information systems (e.g., exposure of certain groups to greater discriminatory actions or data protection weaknesses) | Alderman et al. [45], Benson et al. [27], Saastamoinen et al. [43], Mortaheb and Jankowski [46], Sheikh et al. [29], Stahl & Eke [30], Farhi et al. [31], Memarian & Doleck [32], Carlsson & Ronnblom [33], Aitken et al. [36], Raab [37], Bingley et al. [38], Francisco & Linner [39], Robinson [40], and Vetter et al. [42]
Transparency of data used in AI and geo-information technologies | Maciej [28], Saastamoinen et al. [43], Mortaheb and Jankowski [46], Sheikh et al. [29], Stahl & Eke [30], Farhi et al. [31], Awad et al. [44], Memarian & Doleck [32], Carlsson & Ronnblom [33], Malmio [34], Delacroix and Wagner [35], Raab [37], Bingley et al. [38], Francisco & Linner [39], and Robinson [40]
Trust in the sharing, decisions and representations of data used in AI and geo-information technologies | Maciej [28], Stahl & Eke [30], Awad et al. [44], Memarian & Doleck [32], Carlsson & Ronnblom [33], Malmio [34], Delacroix and Wagner [35], Aitken et al. [36], Raab [37], Bingley et al. [38], Francisco & Linner [39], Robinson [40], Kannelonning [41], and Vetter et al. [42]
Table 2. List of AI guidelines.

Author/Institution | Citation | Region | Year of Publication
African Union Development Agency (AUDA-NEPAD) | AUDA-NEPAD. (2021). African Union High Level Panel on Emerging Technologies (APET) Technology Report on Artificial Intelligence for Africa: Harnessing Artificial Intelligence for Africa’s Socio-economic [47]. | Africa | 2021
Kak, A. and Myers, S. (AI Now Institute) | Kak, A. and Myers, S. W. “AI Now 2023 Landscape: Confronting Tech Power”, AI Now Institute, 11 April 2023 [48]. | USA | 2023
Association for Computing Machinery (ACM) | Association for Computing Machinery. (2018). ACM Code of Ethics and Professional Conduct: Affirming our Obligation to use our Skills to Benefit Society [49]. | International | 2018
Gaffley et al. (Human Sciences Research Council and Meta) | Gaffley, M., Adams, R. & Shyllon, O. “Artificial Intelligence. African Insight. A Research Summary of the Ethical and Human Rights Implications of AI in Africa” (2022) HSRC & Meta AI and Ethics Human Rights Research Project for Africa—Synthesis Report [50]. | Africa | 2022
European Commission (EC) | European Commission (2021). Annexes to the Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and the Social Committee and the Committee of the Regions: Fostering a European Approach to Artificial Intelligence [51]. | EU | 2021
European Commission for the Efficiency of Justice (CEPEJ) | European Commission for the Efficiency of Justice (CEPJ). (2018). European Ethical Charter on the use of Artificial Intelligence in Judicial Systems and their Environment [52]. | EU | 2018
AI High-Level Expert Group (AI HLEG) | European Commission. (2019). Ethical Guidelines for Trustworthy AI. High-Level Expert Group on AI (AI HLEG) [53]. | EU | 2019
Google | Google. (2022). 2022 AI Principles Progress Update [54]. | USA | 2022
Select Committee on Artificial Intelligence | House of Lords Select Committee on Artificial Intelligence. (2020). AI in the UK: Ready, Willing and Able? Report of Session 2017–19 [55]. | UK | 2018
IBM | IBM. (2022). Everyday Ethics for Artificial Intelligence [56]. | International | 2022
Cabinet Secretariat Japan | Cabinet Secretariat, Japan. (2019). Social Principles of Human-Centric AI. Tokyo [57]. | Japan | 2019
Microsoft | Microsoft. (2023). Governing AI: A Blueprint for the Future [58]. | International | 2023
Organisation for Economic Co-Operation and Development (OECD) | OECD. (2023). Recommendations of the Council on Artificial Intelligence, OECD/LEGAL/0449 [59]. | International | 2023
Leslie, D. (The Alan Turing Institute) | Leslie, D. (2019). Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector. The Alan Turing Institute [60]. | UK | 2019
World Health Organisation (WHO) | WHO. (2021). Ethics and Governance of Artificial Intelligence for Health: WHO Guidance [61]. | International | 2021
United Nations Educational, Scientific and Cultural Organization (UNESCO) | UNESCO. (2022). Recommendation on the Ethics of Artificial Intelligence [62]. | International | 2022
Table 3. List of geo-information guidelines.

Author/Institution | Citation | Region | Year of Publication
American Association for the Advancement of Science (AAAS) | AAAS Scientific Responsibility, Human Rights and Law Program. (2019). “Location-Based Data in Crisis Situations: Principles and Guidelines” (Report prepared by Jessica Wyndham, Ellen Platts and Jonathan Drake) [63]. | International | 2019
EthicalGEO & Benchmark | Benchmark & EthicalGeo. (2021). Locus Charter [64]. | International | 2021
Goodchild et al. (AAG) | Goodchild et al. (2022). A White Paper on Locational Information and the Public Interest. American Association of Geographers [65]. | USA | 2023
Geospatial Commission | Geospatial Commission. (2022). Building Public Confidence in Location Data: The ABC of Ethical Use [66]. | UK | 2022
Ordnance Survey | Ordnance Survey. (2022). Practical GeoAI Ethics. Workshop Output Report 2022 [67]. | UK/International | 2022
United Kingdom Statistics Authority (UKSA) | UK Statistics Authority. (2021). Ethical Considerations in the use of Geospatial Data for Research and Statistics [68]. | UK | 2021
Berman, G., Rosa, S., and Accone, T. (UNICEF) | Berman, G., Rosa, S., and Accone, T. (2021). Ethical Considerations When Using Geospatial Technologies for Evidence Generation. UNICEF Official Research—Innocenti Discussion Paper. DP-2018-02 [69]. | International | 2021
World Wide Web Consortium (W3C) | World Wide Web Consortium (W3C). (11 September 2023). The Responsible Use of Spatial Data [70]. | International | 2023
Table 4. Number of occurrences from word searches across AI guidelines.

Values | No. of Guidelines Mentioned in
Accountability | 16
Privacy | 16
Transparency | 16
Fairness | 16
Rights | 15
Discrimination | 15
Bias | 15
Trust | 15
Access | 14
Risk | 14
Responsibility | 14
Harm | 14
Autonomy | 13
Inclusion | 13
Justice | 12
Dignity | 12
Consent | 12
Explainability | 10
Beneficence | 3
Table 5. Number of occurrences from word search in geo-information guidelines.

Values | No. of Occurrences
Access | 8
Consent | 8
Privacy | 8
Bias | 7
Discrimination | 7
Harm | 7
Risk | 7
Rights | 7
Accountability | 6
Responsibility | 6
Transparency | 6
Trust | 5
Inclusion | 5
Explainability | 2
Justice | 1
Autonomy | 1
Dignity | 1
Fairness | 1
Beneficence | 0