
Spatial and temporal resolution of geographic information: an observation-based theory

Abstract

After a review of previous work on resolution in geographic information science (GIScience), this article presents a theory of spatial and temporal resolution of sensor observations. Resolution of single observations is computed based on the characteristics of the receptors involved in the observation process, and resolution of observation collections is assessed based on the portion of the study area (or study period) that has been observed by the observations in the collection. The theory is formalized using Haskell. The concepts suggested for the description of the resolution of observations and observation collections are turned into ontology design patterns, which can be used for the annotation of current observations with their spatial and temporal resolution.

Introduction

Resolution is a key notion in the field of geographic information science (GIScience): it is critical in determining a data set’s fitness for a given use (see [1]), and influences the patterns that can be observed during an analysis process (see [2]). In addition, as Goodchild [3] pointed out, resolution determines the volume of data which is generated and therefore the processing costs and storage requirements. Finally, resolution is necessarily present in any data collection process because the world is too complex to be studied in its full detail (see for instance [1] and [4]).

The literature on geographic information science and related fields contains various definitions and understandings of resolution. In an attempt to provide conceptual clarity, Degbelo and Kuhn [5] discussed some of these notions, and presented a framework to reconcile various connotations of the term. The framework consists of definitions of resolution, proxy measures for resolution, and notions related to resolution. Definitions of resolution refer to possible ways of defining the term; proxy measures for resolution denote different measures that can be used to characterize resolution; and related notions are notions closely related to, but in fact different from, resolution. Examples of related notions include scale, granularity and accuracy, while examples of proxy measures include the step size of a sensor, and the mean spacing of samples. In line with [5], resolution is defined in this article as the amount of detail (or level of detail, or degree of detail) in a representation. Resolution applies to data (i.e., representations), whereas granularity applies to conceptual models (see [4, 5]). Resolution is only one of many components of scale, with other components including extent, grain, lag, support and cartographic ratio (see [6, 7]).

The transition to the digital age, and the rise of Volunteered Geographic Information (VGI, [8]) call for a rethinking of traditional criteria to describe the resolution of data in GIScience. The current work explores the idea of an observation-based characterization of resolution. At least four reasons motivate this.

First, observations are key to the geo-sciences. For example, Frank [9] asserts that “all we know about the world is based on observation”. Janowicz [10] indicates that observations have been proposed as the foundation of geo-ontologies. Adams and Janowicz [11] point out that the geosciences rely on observations, models, and simulations to answer complex scientific questions such as the impact of global change. Stasch et al. [12] point out that observations form the basis of empirical and physical sciences. The information-based ontological system outlined in [13] has, at its core, the notion of observation.

Second, observations are a central concept of the digital age (which relies on information), and of VGI (where humans act as sensors to produce geographic information). Describing resolution in the era of VGI is a complex, underexplored, and important issue. As Goodchild [14] indicated, metrics of spatial resolution are strongly affected by the analog-to-digital transition. In addition (and as pointed out in [15]), mechanisms to describe the quality of human observations are needed but currently missing. Describing the quality of these observations, in turn, is important to effectively assess their suitability for a given task.

Third, Frank [16] indicated that “Data quality research needs a quantitative, theory based approach. The theory must relate to the physical characteristics of the observation process, where the imperfections in the data originate” (emphasis added). An ontology design pattern for the spatial data quality of observations was proposed in [17]. Yet, more specific, observation-based treatments of how spatial data quality components (e.g., accuracy, resolution, completeness and lineage) may be accounted for are still needed. The ultimate usefulness of an observation-based characterization of resolution is the provision of a conceptual apparatus which helps to understand semantic differences with respect to resolution between two geographic datasets.

Fourth, a full ‘science of scale’ (as envisioned in [18]) requires progress in the understanding of resolution. As described in [18], a science of scale needs to tackle five main issues: invariants of scale, the ability to change scale, measures of the impact of scale, scale as a parameter in process models, and the implementation of multiscale approaches. Since resolution is one component of scale, measuring the impact of scale on spatial analysis requires a better understanding of resolution. Frank [19] pointed out that scale (and hence resolution) is introduced in spatial datasets by observation processes. Therefore, a first step towards measuring the impact of resolution on spatial analysis is a greater understanding of how resolution is introduced in observation processes. The current article attempts to provide an answer to this question, by elaborating an observation-based characterization of resolution. The main contributions of this article are as follows:

  • a brief review of previous work on resolution in GIScience;

  • a formal theory of spatial and temporal resolution of observations underlying geographic information. The theory has a dual importance: (i) at the theoretical level, it is to be taken as a small and necessary piece of the science of scale; (ii) at the practical level, the axioms of the theory (or parts of them) can be implemented, and serve the purposes of reasoning over datasets at different spatial and temporal resolutions;

  • a critical analysis of existing criteria for the description of the spatial and temporal resolution of observation collections;

  • ontology design patterns extending the SSN Ontology [20]. These ontology design patterns can be used for the annotation of observations, and observation collections of the Sensor Web with their spatial and temporal resolution.

Since GIScience has investigated resolution of geographic information for many years, Section Resolution in GIScience: a brief review briefly reviews previous work, pointing out what is still missing. Ontology is used as a method to elaborate the theory, and Section Method introduces the different steps followed during the development. An observation-based theory of resolution for single observations is expounded in Section Spatial and temporal resolution of a single observation, and Section Spatial and temporal resolution of an observation collection discusses the resolution of observation collections. Since the spatial and temporal dimensions of geographic information are currently better understood than the thematic dimension, the theory focuses on these two as a first step (deferring a theory of observation resolution applicable to all three dimensions of geographic information to future work). Section Applications presents some examples of use of the ideas discussed. Section Comparison with previous work discusses the current work in relation to previous work. Section Limitations points out limitations before Section Conclusion and future work concludes the paper.

Resolution in GIScience: a brief review

Despite many discussions of the broader notion of ‘scale’ from different viewpoints (see for example a discussion from a hydrology perspective in [21], a discussion from a geostatistics perspective in [6, 22], and a discussion from a GIS perspective in [23]), discussions of the more specific notion of ‘resolution’ have been very few. Progress on resolution has been made over the years, but the ideas are scattered throughout articles. This section presents some of the previous work - in GIScience - addressing four areas: the optimum resolution, the influence of resolution on other variables, integration of multi-resolution features and multi-resolution databases, and previous formal accounts of resolution.

The optimum resolution: Lam and Quattrochi [24] commented on the issues of scale, resolution, and fractal analysis in the mapping sciences and pointed out one important research question in this context, namely ‘what is the optimum resolution for a study or does an optimum really exist?’. On that subject, Marceau et al. [25] proposed and tested a method to identify the optimal resolution for a study. They concluded that (i) the concept of optimal spatial resolution is relevant and meaningful for the field of remote sensing, and (ii) there is a need to select the appropriate resolution in any study involving the manipulation of geographical data. Though the study was conducted in the field of remote sensing, its results are included in this literature review because they are relevant to GIScience.

Influence of resolution on other variables: Gao [26] explored the correlations between spatial resolution and root mean square error (RMSE), spatial resolution and accuracy, as well as spatial resolution and mean gradient in the context of digital elevation models (DEMs). He concluded that (i) the RMSE of a gridded DEM increases linearly with its spatial resolution from 10 m to 60 m, (ii) the accuracy of representing a terrain with a gridded DEM decreases as the resolution decreases from 10 m to 60 m, and (iii) resolution has a minimal impact on mean gradient. Deng et al. [27] used correlation and regression analysis to assess the effect of DEM resolution on calculated terrain attributes such as slope, plan curvature, profile curvature, north–south slope orientation, east–west slope orientation, and topographic wetness index. Their work indicated that terrain attributes respond to resolution change in different ways. Among the different terrain attributes studied, plan and profile curvatures were found to be the most sensitive attributes, and slope was the least sensitive attribute to change in resolution. The findings are valid only for landscapes found in the Santa Monica Mountains. The experiments reported in [28] revealed that there is a logarithmic relationship between DEM resolution and mean slope. Jantz and Goetz [29] examined the ability of the urban land-use-change model SLEUTH (slope, land use, exclusion, urban extent, transportation, hillshade) to capture urban growth patterns across varying spatial resolutions (i.e., cell sizes). The authors reported that, during their experiments, the amount of growth that could be produced through spontaneous growth at a resolution of 360 m was more than five times the amount at a resolution of 45 m. That is, the resolution of the input data impacts the overall performance of an urban land-use-change model. A similar conclusion was reached by Kim [30], whose study indicated that variations in spatial and temporal resolution can generate substantial differences in the outcomes of a land-use change simulation. Pontius Jr and Cheuck [31] proposed a method which helps to examine the sensitivity of statistical results to changes in resolution. The method was designed to facilitate multi-resolution analysis during the comparison of maps that display a shared categorical variable. Csillag et al. [32] studied the impact of spatial resolution on the classification of areas into taxonomic attributes. ‘Classification’ here means that a measurement is made at a point in space, and based on the measurement value, one would like to assign a (predefined) class to the point at which the measurement is made. Csillag et al. [32] used two examples during their study: (i) vegetation is sampled at given locations and classified according to species and/or associations; (ii) soil properties are measured at given locations, and soil types are assigned to the locations based on the value of the measured property. They pointed out that changes in spatial resolution lead to changes in the accuracy in terms of class identification, and concluded that there may not be a single best resolution for environmental data. Finally, Lechner and Rhodes [33] recently presented a review of the effects of spatial and thematic resolution on ecological analysis. They indicated that spatial resolution affects statistical analysis outcomes such as inference about population mean, variation and statistical significance.
In addition, changing spatial and thematic resolution affects the characterisation of landscapes and ecological analyses (e.g., measuring land cover proportions, landscape metrics and change detection). Lechner and Rhodes [33] also pointed out that spatial and thematic resolution not only affect ecological variables, but also mutually influence one another.

Integration of multi-resolution features and multi-resolution databases: Du et al. [34] suggested an approach to check directional consistency between representations of features at different resolutions. Examples of direction relations include east (of), west (of), south (of), north (of), southeast (of), southwest (of), northeast (of), northwest (of), and directional consistency is evaluated by checking whether direction relations between pairs of spatial regions at different resolutions are similar. Balley et al. [35] proposed an approach to build a unified database from source databases. The source databases are databases which contain the same feature represented at different levels of spatial and thematic detail.

Formalisms for resolution: A formal framework for multi-resolution spatial data handling was suggested in [36]. The framework has five main components: map, map space, granularity lattice, stratified map space, and sheaf of stratified map spaces. It can be used to assess the correctness of generalization algorithms and the integration of geometrically and semantically heterogeneous spatial datasets. Skogan [37] suggested another framework to deal with multi-resolution objects and multi-resolution databases. The framework consists of four components: the federated multi-resolution database management system, the resolution space, the multi-resolution type and methods for aggregating resolution. Worboys [38] dealt with multi-resolution geographic spaces and proposed a formal account for multi-resolution geographic spaces using ideas related to fuzzy logic and rough set theory. Other formalisms for resolution, focusing on sensor observations and processes, can be found in [16] and [39] respectively. Frank [16] suggested modelling (formally) the effect of resolution on the final sensor observation using a convolution with a Gaussian kernel. Weiser and Frank [39] proposed a formalism to represent multiple levels of detail (i.e., resolution) in discrete processes (e.g., a train ride). Finally, Bruegger [40] suggested a theory for the integration of spatial data presenting differences in spatial resolution and representation format (i.e., raster and vector).

Summary: In sum, there is a need to select the appropriate resolution in any study involving the manipulation of geographical data. The literature has also documented correlations between resolution and other parameters (e.g., error, accuracy, slope). This stresses the importance of choosing the appropriate resolution, and of documenting the resolution at which inferences are made during an analysis. In addition, different formalisms have been suggested to model the resolution of geographic data. Yet, there is no observation-based theory of resolution. The aim of the next section is to outline one. The theory is proposed as an ontology, and this has two main benefits: (i) conceptual clarification, and (ii) implementability and processability by machines (when encoded in ontology languages such as the Web Ontology Language). The latter benefit (i.e., processability by machines) is one of the advantages of the new theory over previous formalisms for resolution (and what makes the theory applicable to the Sensor Web).

Method

The steps followed in this work involve a design stage and an implementation stage. In line with [41], the design stage includes the identification of a motivating scenario, the identification of terms useful to describe the resolution of datasets, and the formal specification of these terms. The design stage results in a logical theory. The implementation stage derives a computational artifact (from the design stage), which can be used for practical tasks such as query disambiguation and query expansion. Both the motivating scenario and the terms used to describe the observation process are presented in the following subsections. The terms to describe the resolution of datasets and the computational artifacts derived from the design stage (i.e., Ontology Design Patterns) are introduced in Sections Spatial and temporal resolution of a single observation and Spatial and temporal resolution of an observation collection. Applications of the ontology design patterns are discussed in Section Applications. In keeping with [42], different languages were used for the different phases of the theory development. Haskell was used for the design phase (presented in Sections Spatial and temporal resolution of a single observation and Spatial and temporal resolution of an observation collection), while the Web Ontology Language was used during the implementation phase (whose results are introduced in Section Applications). The use of different languages at different stages helps to better accommodate the requirements of each of the phases of ontology development. As Bittner et al. [43] put it: “[o]nce one has developed a highly expressive theory, less expressive logics with better computational properties can be used to implement certain portions of the full theory for specific purposes”.

Motivating scenario

A collection of sensors has been deployed in a city to measure the concentration of carbon monoxide (CO) in the air. The concentration of CO is taken at different moments of the day, by different carbon monoxide analyzers (COAs) placed at different locations in the city. A group of scientists is interested in analyzing the quality of the air in the city. Using the Semantic Sensor Observation Service (SemSOS), the group is able to develop a software application which retrieves data generated by the COAs, so that differences between sensors and observations regarding measurement procedures and measurement units are harmonized. The group is now interested in extending the semantic capabilities of the application so that the resolution of the observations is made explicit, and retrieval at different resolutions, with minimal human intervention, is made possible. In particular, the group would like to know the spatial and temporal resolution of one observation (Q1), and the spatial and temporal resolution of the observation collection produced by the COAs (Q2). Making a software application understand what ‘resolution’ of an observation (or an observation collection) means is only possible through a formal characterization of the concept.

The scenario above presupposes the use of in-situ COAs, but remote COAs such as the MOPITT instrument introduced in [44] might also be used for data collection purposes. The theory proposed in this article takes into account both in-situ and remote sensors. For an introduction to SemSOS, see [45]. Q1 and Q2 are competency questions in the sense of [46].

Reuse of terms from existing observation ontologies

Observations have been analyzed from a variety of perspectives, yielding observation ontologies in [20, 47–50]. The relevance of these analyses for the Sensor Web is at least twofold: to provide a shared conceptual basis for scientific discourse; and to provide (practical) means of representing observations generated by sensors in an information system. This section aims at selecting one of these observation ontologies as a starting point for the development of the ontology of resolution. Three criteria are used to guide this choice:

  • Remain neutral with respect to the distinction between field and object (C1): as mentioned in [51], the most widely accepted conceptual model for GIScience considers that geographic reality is represented either as fully definable entities (objects) or smooth, continuous spatial variation (fields). An ontology of resolution which remains neutral with respect to the field vs object distinction is therefore highly desirable, to ensure a wide applicability of the suggested terms in GIScience and the Sensor Web.

  • Take into account humans as sensors (C2): Goodchild [8] defined Volunteered Geographic Information as the widespread engagement of private citizens in the creation of geographic information, and pointed out some valuable aspects of the information produced by volunteers: (i) the information can be timely; (ii) it is far cheaper than any alternative; and (iii) information produced by volunteers can tell about local activities in various geographic locations that go unnoticed by the world’s media. Humans acting as sensors are at the heart of VGI; the ontology of resolution should therefore be developed using a notion of sensor encompassing both instruments and humans, to be usable for observations generated by humans as well as by technical devices. Only ontologies capable of accommodating both types of observations can help to take advantage of VGI’s potential, namely, “the potential to be a significant source of geographers’ understanding of the surface of the Earth” [8].

  • Take into account observation as a result and observation as a process (C3): there have been two uses of ‘observation’ in the literature: observation as a process and observation as a result. An observation process is “an act associated with a discrete time instant or period through which a number, term or other symbol is assigned to a phenomenon” [52]. An observation result (or observation for short) is the outcome of an observation process. The ontology of resolution should be developed in such a way that justice is done to these two senses of ‘observation’.

Table 1 presents the results of the application of these three criteria to the observation ontologies mentioned at the outset of this section. A detailed explanation of the results is provided in [53]. The table shows that the functional ontology of observation and measurement (or FOOM for short) is the only one which fulfills the three criteria outlined above. According to FOOM, four main entities are involved in the observation process: the particular (i.e., entity to be observed), the stimulus (i.e., detectable change in the environment), the observer or sensor (i.e., someone or something that provides a symbol for a property of the particular) and the observation result (i.e., a value). FOOM was formally specified using Haskell, and aligned to the foundational ontology DOLCE (Descriptive Ontology for Linguistic and Cognitive Engineering, see [54]). For this reason, both Haskell and DOLCE are also used while extending FOOM with concepts of spatial and temporal resolution in the next sections. Figure 1 illustrates the observation process. Terms useful to specify the spatial and temporal resolution of sensor observations are highlighted in bold in the next sections.

Fig. 1 Observation process (reprinted from [55] with permission)

Table 1 Criteria C1, C2 and C3 applied to the observation ontologies

Results

The theories of observation-based resolution are expounded in this section. Section Spatial and temporal resolution of a single observation discusses the resolution of single observations, and Section Spatial and temporal resolution of an observation collection discusses observation collections.

Spatial and temporal resolution of a single observation

The first competency question (Q1, see Section Motivating scenario) is the focus of this section. As mentioned in Section Introduction, resolution is a property of a representation. On that account, two terms are introduced: spatial resolution, and temporal resolution. The spatial resolution is the amount of spatial detail in an observation, and the temporal resolution is the amount of temporal detail in an observation. Previous work has proposed to model the spatial and temporal resolution of an observation using one of two approaches: a stimulus-centric approach and a property-centric approach. A stimulus-centric approach constrains spatial/temporal resolution using the spatial/temporal extent of the stimulus participating in the observation process. It suffers from vagueness issues regarding the determination of the spatial extent of the stimulus, and strongly depends on one’s adopted view (i.e., stimulus as process or an event) for the determination of the temporal extent of the stimulus. A property-centric approach specifies resolution based on the spatial/temporal region over which the property of interest is considered homogeneous. It avoids vagueness issues, but needs to accommodate arbitrariness since there might be various reasons for which a data provider considers the property of interest homogeneous for his/her data collection purposes. To cope with both issues, our work introduces a receptor-centric approach where the spatial and temporal resolution of a single sensor observation are specified based on the physical properties of the observer. The three approaches are discussed in detail next.

The stimulus-centric approach

Stasch et al. [55] suggested constraining the spatial and temporal resolution of an observation by the spatial and temporal extent of the stimulus. A drawback of this approach is that there is no single way of defining the spatial and/or temporal extent of the stimulus involved in an observation process. For instance, in the case of a thermometer placed in a room of area 20 m² and measuring the temperature, the stimulus is the heat flow of the amount of air in the room. It can be stated that the spatial extent of the stimulus is equal to the spatial footprint of the amount of air in the room (e.g., 20 m²), but there is no logical basis for preferring the value 20 m² over smaller values of the amount of air in the room such as 15 m², 10 m² or 1 m². In fact, every size of the amount of air in the room falling within the interval ]0, 20] has an equal right to be called the spatial extent of the stimulus participating in the observation process. Said another way, vagueness issues arise as to the determination of the spatial extent of the stimulus. As regards the temporal extent of the stimulus, its characterization is not straightforward because, as [48] pointed out, a detectable change can be viewed as a process (periodic or continuous) or an event (intermittent). The duration of the stimulus is therefore perspective-dependent.

The property-centric approach

Frank [16] indicates that a sensor always measures over an extended area and time (called ε), and reports a point-observation (i.e., average value for an attribute) for this extended area and time. The extended area or time was termed the support of the sensor. Frank [16] ascribes support to the sensor, but support has also been attributed in the literature to the observation. For instance, Atkinson and Tate [22] define support as “[t]he size, geometry, and orientation of the space on which the observation is defined” (emphasis added). Modelling support as an attribute of the observation rather than of the sensor is the standpoint adopted in this work, because ε need not be related to the characteristics of the sensing device. As Burrough and McDonnell ([56], page 101) pointed out, support is the technical name used in geostatistics for the area or volume of the physical sample on which the measurement is made. Measuring soil pH over a physical soil sample of 10 cm · 10 cm would imply a support of 100 cm². That is, the support is determined independently of the sensor (i.e., the instrument measuring and reporting a value for the pH of the soil at a location).

A general definition of support is “the largest time interval [T], area [L²] or volume [L³] for which the property of interest is considered homogeneous” [57]. The spatial resolution of an observation can be equated with its spatial support, and its temporal resolution with its temporal support. The downside of this approach is that no precision is given regarding the way of estimating the area, volume or time interval for which the property of interest is considered homogeneous. The soil pH example above mentions only a size, but additional attributes such as shape and orientation are also defining characteristics of the support. Deciding whether the shape of the support should be rectangular, circular or irregular involves a certain degree of arbitrariness. Using support as a criterion to characterize the resolution of the observation therefore implies a certain degree of arbitrariness in the resolution value. The next subsection attempts to improve this situation by proposing a method to characterize the resolution of the observation based on the physical characteristics of the observer.

The receptor-centric approach

From the previous two subsections, existing criteria for observation resolution are wanting in some respects. Moreover, Frank [16] pointed out that “quantitative descriptors of data quality must be justified by the properties of the observation process” (emphasis added). That is, in the context of resolution, quantitative descriptors should be traceable to the physical properties of the observation process. The introduction of a new criterion for observation resolution here aims at making progress towards fulfilling this desideratum.

In line with [48], the observation process is conceptualized as consisting of four steps (the first two steps are required only once, to determine the observed phenomenon):

Step 1: choose an observable,

Step 2: find one or more stimuli that are causally linked to the observable,

Step 3 (also called ‘impression’): detect the stimuli producing analog signals,

Step 4 (also called ‘expression’): convert the signals to observation values.

The entity which produces the analog signal upon detection of the stimulus (Step 3) is called here the receptor. Receptors are similar to the threshold devices introduced in [58], in that the production of the output (analog signal) does not happen immediately upon activation of the input (stimulus), but only after a short delay. However (and contrary to [59]), receptors are not considered as the interface between the external world and the observer. In other words, receptors do not need to be located at the surface of the observer. It is suggested here to use the spatial region containing all the receptors stimulated during the observation process as a criterion to characterize the spatial resolution of the observation. The short delay required by the receptors to produce analog signals (upon detection of the stimulus) can be used as a criterion to specify the temporal resolution of the observation.

Two new terms borrowed and adapted from neuroscience (see [6062]) are also introduced at this point: the spatial receptive field (of the observer) and the temporal receptive window (of the observer). The spatial receptive field (SRF) is the spatial region of the observer which is stimulated during the observation process. This spatial region can be seen as two-dimensional (e.g., the palm of the hand) or three-dimensional (e.g., the whole hand) depending on the type of receptors participating in the observation process, and hence the word ‘field’ in SRF to reflect this fact. The temporal receptive window (TRW) is the smallest interval of time required by the observer’s receptors in order to produce analog signals.

The definition of SRF above is compatible with that of a receptive field in neuroscience as a “specific region of sensory space in which an appropriate stimulus can drive an electrical response in a sensory neuron” [60]. The definition of TRW paraphrases and generalizes to all sensor devices the definition proposed in [61, 62]. The spatial resolution of an observation can be approximated by the spatial receptive field of the observer, and its temporal resolution can be equated with the temporal receptive window of the observer participating in the observation process. There might be a chaining of different types of receptors in an observation process. In such cases, the relevant receptors for the computation of the spatial and temporal resolution are those that are stimulated by external stimuli. Figure 2 illustrates this point.

Fig. 2 Observer with several receptors. Note: Only receptor R1 is relevant to the estimation of the spatial and temporal resolution of the observation because it is directly stimulated by external stimuli. An example of an observation process where several receptors are chained is the hearing process as described in [93]. The process can be summarized as follows: eardrums (R1) collect sound waves and vibrate; after them, hair cells (R2) convert the mechanical vibrations to electrical signals. These electrical signals are then carried to the auditory cortex, i.e., the part of the brain involved in perceiving sound. In the auditory cortex, there are specialist neurons (R3) which respond to different combinations of tone (e.g., some are sensitive to pure tones, such as those produced by a flute, and some to complex sounds like those made by a violin). Finally, there are other neurons (R4) which can combine information from the specialist neurons to recognize a word or an instrument

Examples of SRF and TRW for a single observation

With the approach introduced in Section The receptor-centric approach, the computation of the spatial and temporal resolution of a single sensor observation involves three steps:

Step 1: identify the type of receptor involved in the observation process;

Step 2: find the duration needed for the production of the analog signal upon detection of the stimulus (relevant to the estimation of the TRW);

Step 3: find the size of the receptors and the number of receptors stimulated during the observation process (relevant to the estimation of the SRF).

The approach hinges on the availability of information about the receptors which participate in an observation process. This information can be found in technical documentation (for sensor devices), and in research outcomes of the field of neuroscience (for human observers). The next paragraphs provide some examples of receptor, spatial receptive field and temporal receptive window for human and technical observers. As said in Section The receptor-centric approach, the production of an observation involves two stages: impression and expression. Strictly speaking, the TRW is the time interval required for the impression operation. However, most information about sensors (or observations) currently available provides only hints about the duration of the whole observation process (i.e., impression + expression). More work will be needed in the future to tease the impression’s duration and the expression’s duration apart. For the time being, the examples of temporal receptive window that follow are based on the assumption that the time needed for the expression operation is negligible compared to the time needed for the impression operation. That is, for now, TRW is approximated using the duration of the whole observation process.

EXAMPLE 1: A Carbon Monoxide Analyzer of type GM901 (see [63]) returns the concentration of carbon monoxide (Observation) in a gas. The receptor of this sensing device is the measuring probe. The spatial receptive field is equal to the size of the opening of the measuring probe, and the temporal receptive window is equal to the response time. The value of the temporal receptive window lies between 5 and 360 seconds. The diameter of the opening of the measuring probe varies between 300 and 500 millimeters and this suggests a spatial receptive field between 707 and 1963 square centimeters.
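As an illustrative arithmetic check (a sketch, not taken from the GM901 documentation), the SRF range above follows from treating the opening of the measuring probe as a circle of diameter d, whose area is π(d/2)²:

-- Area of a circular probe opening from its diameter.
srfFromDiameter :: Double -> Double    -- diameter in cm -> area in cm^2
srfFromDiameter d = pi * (d / 2) ^ (2 :: Int)

main :: IO ()
main = mapM_ (print . srfFromDiameter) [30, 50]
-- ~706.9 cm^2 for a 300 mm opening, ~1963.5 cm^2 for a 500 mm opening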

EXAMPLE 2: A digital camera returns an image (Observation), with a spatial receptive field equal to the size of the aperture and a temporal receptive window equal to the shutter speed. The aperture is “the size of the adjustable opening inside the lens, which determines how much light passes through the lens to strike the image sensor” [64], and the shutter speed is “the amount of time the digital camera’s shutter remains open when capturing a photograph” [65]. The receptor of the camera is the image sensor, but the size of the aperture determines the actual portion of the image sensor that is stimulated during the production of an image. The shutter speed determines the duration of the image sensor’s exposure to light. It is acknowledged here that resolution has been defined in the literature (for example, [66]) as a function of imaging aperture and the wavelength of the light. This is the optical resolution (which has sometimes also been called spatial resolution). It is inversely correlated with aperture, and measures the shortest distance between two points on an image that can still be distinguished by the observer. However, in line with previous work [5, 67], the term ‘discrimination’ is reserved for the shortest distance between two points on an image that can still be distinguished by the observer (the optical resolution). Approximating spatial resolution (as defined in this work, see Section Introduction) with the spatial receptive field is intended to inform about a different notion, namely the portion of the observer which was stimulated while producing the observation. It also reflects intuition, namely: the larger the aperture, the smaller the optical resolution (smallest details the lens can resolve), and consequently the larger the amount of spatial detail in the final image (spatial resolution). Further examples of receptors for technical observers include thermistors (for medical digital thermometers), bulbs (for clinical mercury thermometers), telescopes (for laser altimeters), aneroid capsules (for pressure altimeters), and bulbs (for psychrometers), to name a few.

EXAMPLE 3: A human observer reports on a scene at a temporal receptive window of about 14 milliseconds (ms) using the sentence ‘there is an apple here’ (Observation). The value of TRW is assigned based on the results from [68], where the authors investigated the mechanisms involved in object recognition by monkeys’ and humans’ visual systems. Keysers et al. [68] studied visual responses to very rapid image sequences composed of “color photographs of faces, everyday objects familiar and unfamiliar to the subjects, and naturalistic images taken from image archives” and reported a rate of 14 ms per image for human perception and memory.

EXAMPLE 4: The previous example is illustrative of the temporal receptive window of an observation sentence as defined in [59, 69], in that the observer assigns unreflectively on the spot a value to external stimuli. Lederman [70] indicates that, in the context of purposive exploration of the world, it typically takes 1 to 2 seconds to identify common objects such as a spoon. Therefore, the temporal receptive window for the observation ‘spoon’ in the context of a purposive exploration task using human hands (of blind subjects) varies between 1 and 2 seconds. The temporal receptive windows of observations produced by human observers will depend on the observer, the type of task, and the stimulus.

EXAMPLE 5: The spatial receptive field of human observations is equal to the size of the surface stimulated during the observation process. This surface might be calculated using the product N · S, where N is the number of receptors which have participated in the observation process, and S is the size of one receptor (if the receptors overlap, the size of the overlap should be subtracted from the product). As a starting point for the computation, the knowledge presented in Table 2 can be used. The exact knowledge of the receptors which have participated in an observation process will become available as neuroscience evolves. For example, Krulwich [71] pointed out that it was only in 2002 that a fifth taste (umami) became accepted, in addition to the four admitted for many centuries (bitter, salty, sour, sweet). This fifth taste is detected by a specific type of receptor (receptors for L-glutamate on the tongue).
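A minimal Haskell sketch of the N · S estimate follows; the function name and the numbers in the usage comment are illustrative placeholders, not values taken from Table 2:

-- SRF of a human observation: n stimulated receptors of size s each,
-- minus the total overlap between them (zero if the receptors are disjoint).
srfHuman :: Int -> Double -> Double -> Double
srfHuman n s overlap = fromIntegral n * s - overlap

-- e.g. 100 stimulated receptors of 0.001 cm^2 each, no overlap: 0.1 cm^2
exampleSRF :: Double
exampleSRF = srfHuman 100 0.001 0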

Table 2 Examples of receptors for a human observer

As this section illustrates, a receptor-centric approach to characterize the spatial and temporal resolution of a sensor observation is applicable to both in-situ (e.g., tongue) and remote (e.g., eye) sensors, and to both human and technical observers. Information given about the Carbon Monoxide Analyzer of type GM901 (technical observer) was extracted from the technical documentation of the product (see [63]). Table 2 illustrates that the (neuroscience) literature is a useful source to gather the information necessary to estimate the spatial resolution of observations produced by human observers. [68], cited previously, is an example showing that the literature on neuroscience is also a useful source to collect information for the computation of the temporal resolution of observations generated by human observers.

Alignment to DOLCE: resolution of an observation

‘Spatial resolution’, ‘temporal resolution’, ‘spatial receptive field’, and ‘temporal receptive window’ as characteristics of an entity (the observation or the observer) correspond to the notion of quality in DOLCE. A quality can be defined as “any aspect of an entity (but not a part of it), which cannot exist without that entity” [72]. Spatial receptive field and temporal receptive window inhere in the observer, and are therefore physical qualities. Spatial receptive field and temporal receptive window are also examples of referential qualities, i.e., “qualities of an entity taken with reference to other entities” [73]. Both SRF and TRW are qualities of the observer taken with reference to the stimulus. Spatial resolution and temporal resolution inhere in the observation (i.e., a social object), and hence belong to DOLCE’s class abstract quality. Finally, DOLCE proposes a general distinction between agentive physical objects (i.e., endurants with unity to which we ascribe intentions, beliefs and desires), and non-agentive physical objects (i.e., endurants which constitute these agentive physical objects). The receptor, being an element of the observer, is a non-agentive physical object.

Formal specification: resolution of a sensor observation

The case of a carbon monoxide analyzer (COA) of type GM901 reporting a value of the concentration of carbon monoxide (CO) (see Example 1, Section Examples of SRF and TRW for a single observation) is taken as the running example for the formal specification presented in this section. The section walks the reader through the definition of the concepts involved in an observation, as well as a step-by-step account of how spatial and temporal resolution are introduced in observation processes. The specification of resolution presented next builds upon the specification for observations provided at https://git.io/f3TuI (last accessed: June 19, 2018), and described in [48].

Listing 1 introduces three relevant datatypes for the scenario: Magnitude (to represent the magnitude of a quality), Quale (entity evoked in a cognitive agent’s mind when observing a quality), and ObsValue (to represent observation values). For a detailed discussion of these notions, see [74].
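Since Listing 1 is not reproduced here, the following illustrative sketch shows one plausible shape of these three datatypes (the concrete representations are assumptions; the full specification is available at the DOI given at the end of this section):

type Magnitude = Double                  -- magnitude of a quality
type Quale     = Double                  -- entity evoked in the observer's mind
data ObsValue  = ObsValue Double String  -- numeric value plus measurement unit
  deriving (Show)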

The amount of air surrounding the COA is modelled as containing a certain amount (i.e., magnitude) of carbon monoxide.

A receptor has an id, a size, a processing time for incoming stimuli, and a certain role. The receptor involved in the observation of the CO concentration in the city is the measuring probe (see Section Examples of SRF and TRW for a single observation). It has a size and a processing time set provisionally to 1500 cm² and 60 seconds respectively, and the role of detecting CO molecules. The size of the receptor is set here to the size of the opening of the measuring probe (the opening of the measuring probe determines the actual portion of the measuring probe that is stimulated by external stimuli). The receptor’s role is modelled here as a description in natural language.

An observer has an id and a number of receptors of a certain type. It carries a quale and an observation value. The measurement unit used below for observation values is “ppm”, standing for parts per million. For simplicity, it is assumed here that all receptors (with a similar function) have the same size, and that there is no malfunction during the observation process (i.e., either all the receptors detecting the stimulus are stimulated or none of them). The assumption that all receptors have the same size is in line with Quine [59], who states: “The subject’s sensory receptors are fixed in position, limited in number, and substantially alike”. A COA has one measuring probe.
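For illustration, the entities described in the preceding paragraphs might be encoded as follows (a sketch with assumed field names and an assumed CO magnitude; the probe size, processing time and role are the provisional values given above):

type Magnitude = Double

-- The amount of air surrounding the COA, carrying a magnitude of CO
-- (the numeric magnitude is a placeholder).
amountOfAir :: (String, Magnitude)
amountOfAir = ("air around the COA", 4.5)

data Receptor = Receptor
  { receptorId     :: String
  , receptorSize   :: Double   -- cm^2
  , processingTime :: Double   -- seconds
  , receptorRole   :: String   -- role as a natural-language description
  } deriving (Show)

measuringProbe :: Receptor
measuringProbe = Receptor "probe-1" 1500 60 "detect carbon monoxide molecules"

data Observer = Observer
  { observerId :: String
  , receptors  :: (Int, Receptor)          -- number of receptors of one type
  , quale      :: Maybe Double             -- quale carried by the observer
  , obsValue   :: Maybe (Double, String)   -- observation value and unit ("ppm")
  } deriving (Show)

-- A carbon monoxide analyzer has exactly one measuring probe.
coa :: Observer
coa = Observer "GM901-1" (1, measuringProbe) Nothing Nothing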

Listing 2 presents the alignment of the terms ‘observer’ and ‘receptor’ to DOLCE.

During the perception of the observed quality (i.e., the carbon monoxide of the amount of air), the observer produces a quale. The perception of the observed quality inherently involves a loss of spatial and temporal detail, and this leads to a spatial and temporal resolution for the quale. The spatial resolution of the quale is modelled in the current work as being equal to the spatial receptive field of the observer involved in the perception operation. The temporal resolution of the quale is equal to the temporal receptive window of the observer which participated in the perception of the observed quality. The function magnitudeToQuale establishes a mapping from a certain magnitude to the corresponding quale, and more details about it are provided below.

Based on the quale, the observer produces an observation value. The function qualeToMeasure introduced below establishes a mapping between a quale and an observation value (resulting from a measurement process).

The spatial resolution and the temporal resolution of the observation value are now equated with the spatial resolution and temporal resolution of the quale, respectively (Footnote 1).

Spatial receptive field is now specified as the size of the spatial region containing all receptors stimulated during the observation process. Temporal receptive window is the processing time of the receptors stimulated during the observation process.

The last stage of this formal specification is the definition of the functions magnitudeToQuale and qualeToMeasure. These two functions are introduced to reflect the idea (already present in [74]) that an observation process is the approximation of the absolute magnitude of a certain quality. Probst [74] indicated two types of approximations: qualia approximate absolute magnitude (this happens during the perception or impression process), and observation values approximate qualia (this happens during the expression process). As a general requirement, the composition of magnitudeToQuale and qualeToMeasure is a monotonic function. In the context of the current scenario, these two functions will be given a simple definition, assuming an approximation factor of the magnitude amounting to 0.9 during the mapping magnitudeToQuale, and another approximation factor of 0.9 during the mapping qualeToMeasure.
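To make the walkthrough above concrete, the following condensed, self-contained sketch illustrates the pipeline (names differ from those of the full specification linked below; the 0.9 approximation factors are the ones assumed in the scenario, and the receptors are assumed to be equal-sized and non-overlapping):

data Receptor = Receptor { rSize :: Double, rTime :: Double }  -- cm^2, seconds
data Observer = Observer { nReceptors :: Int, receptorType :: Receptor }

-- Spatial receptive field: size of the region containing all stimulated receptors.
srf :: Observer -> Double
srf o = fromIntegral (nReceptors o) * rSize (receptorType o)

-- Temporal receptive window: processing time of the stimulated receptors.
trw :: Observer -> Double
trw = rTime . receptorType

-- Impression: a quale approximates the absolute magnitude (factor 0.9).
magnitudeToQuale :: Double -> Double
magnitudeToQuale m = 0.9 * m

-- Expression: an observation value approximates the quale (factor 0.9, in ppm).
qualeToMeasure :: Double -> (Double, String)
qualeToMeasure q = (0.9 * q, "ppm")

-- An observation value together with its spatial and temporal resolution,
-- both inherited unchanged from the quale (i.e., from the observer's SRF/TRW).
observe :: Observer -> Double -> ((Double, String), Double, Double)
observe o magnitude = (qualeToMeasure (magnitudeToQuale magnitude), srf o, trw o)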

The Haskell specification presented above is available at https://doi.org/10.5281/zenodo.1293285. As argued in Winter and Nittel [75], a running Haskell specification guarantees the consistency (i.e., internal consistency between the concepts in the specification), correctness (i.e., the developer has said what he intended to say), and completeness (i.e., appropriate coverage of questions within a domain) of the specification. The theory expounded above is thus consistent, correct, and complete with respect to the question Q1 (how to specify the resolution of an observation using the characteristics of the observed entity, the stimulus, and the sensor?). All concepts suggested to model the resolution of single observations have also been implemented as an ontology design pattern (ODP) in the Web Ontology Language (OWL). The ODP is an extension to the SSN Ontology [20] and offers concepts needed to annotate single observations with their resolution. The ODP for the resolution of single observations is shown in Fig. 3, and can be downloaded at: https://doi.org/10.5281/zenodo.1293285.

Fig. 3 ODP for the resolution of a single observation

Spatial and temporal resolution of an observation collection

The second competency question (Q2, see Section Motivating scenario) is put under scrutiny in this section. ‘Spatial resolution’ and ‘temporal resolution’ denote the amount of spatial detail in the observation collection, and the amount of temporal detail in the observation collection respectively. An observation collection is a collection of single observations (or ‘observations’ for short). Wood and Galton [76] presented a review of existing ontologies (including DOLCE and the Basic Formal Ontology) for the representation of collectives (‘collective’ from [76] is equivalent to ‘collection’ in this article), and proposed a taxonomy allowing the classification of around 1800 distinct types of collectives. Adapting their reflections to the specific case of collections of observations leads to the following statements:

  • An observation collection is a concrete particular, not a type, nor an abstract entity;

  • An observation collection is a continuant, that is, it is to be thought of as enduring over a period of time, existing as a whole at each moment during that period, and possibly undergoing various types of change over that period;

  • An observation collection has multiple observations (and only observations) as members. In line with [77], the member-collection relationship is a more specific kind of part-of relation. Winston et al. [77] also point out that membership in a collection is determined based on one of two factors: spatial proximity or social connection. As regards observation collections, membership in an observation collection is determined based on social connection (not spatial proximity).

In addition, the current work adopts the standpoint that an entity is either a single observation or an observation collection. It cannot be both. Put differently, an observation collection has n members, where n is a natural number greater than one. An observation collection with only one observation is a single observation. In that sense, one remote sensing image is not an observation collection, but two consecutive pictures of an area (are already enough to) form an observation collection.
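The membership constraint above can be captured with a simple smart constructor (a sketch; Observation is a placeholder type here):

type Observation = (Double, String)   -- placeholder: a value and its unit

newtype ObservationCollection = ObservationCollection [Observation]

mkCollection :: [Observation] -> Maybe ObservationCollection
mkCollection obs
  | length obs >= 2 = Just (ObservationCollection obs)
  | otherwise       = Nothing   -- a single observation is not a collection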

Observation collections and observations are social objects in the sense of the DOLCE Ultra Light (DUL) upper ontology (Footnote 2). There is however one important difference between the two which relates to their process of generation: an observation is generated by observing the physical reality (Footnote 3); an observation collection is produced by gathering other social objects (i.e., observations). In terms of DUL, an observation collection can be viewed as a DUL:Configuration (‘A collection whose members are organized according to a certain schema that can be represented by a Description’) while an observation may be regarded as a DUL:Situation (‘A relational context created by an observer on the basis of a Description’).

Figure 4 shows four examples of observation collections. Two criteria suggested in previous work - spacing and coverage - can be used to characterize the spatial resolution of the observation collections. These two criteria are critically discussed in the next two paragraphs. The arguments brought forward for the spatial resolution hold, mutatis mutandis, for the temporal resolution of the observation collections.

Fig. 4 Some examples of observation collections. Note: Each point in the figure represents the spatial location of a single observation; the dotted box in black represents the spatial extent of the study area for which the observations have been generated. (a) shows two collections with different amounts of spatial detail, but similar total spacing; (b) shows two collections with distinct amounts of spatial detail, but similar mean spacing. The observed study area (i.e., the portion of the study area that has been observed) reflects differences in the amount of spatial detail where both total and mean spacing fail

Spacing: Goodchild and Proctor [1] mentioned the spacing of the points (i.e., observations) as a criterion to characterize the spatial resolution of observation collections. The estimation of spacing necessitates some information about the spatial location of each observation. Spacing can be calculated in (at least) four ways: the maximum spacing, the minimum spacing, the total spacing and the mean spacing. All four have some disadvantages. For example, the maximum spacing and the minimum spacing say nothing about how spatially detailed the observation collection is. Rather, they tell that, within the current observation collection, the closest locations are within a distance equal to the minimum spacing, and the farthest within a distance equal to the maximum spacing. Regarding the total spacing, one disadvantage is the need to specify a spatial ordering for the observation collection. As discussed in [53], this choice might involve some arbitrariness. The ultimate implication of the use of total spacing as a criterion is that a decision-maker will be provided with different values of spatial resolution for an observation collection, with no means to decide which one to choose for his or her purpose. In addition, there are cases such as the one from Fig. 4a where the total spacing fails to capture the fact that two observation collections have different amounts of spatial detail. It is indeed arguable that (under the assumption that the size of the points is negligible) the two observation collections from Fig. 4a have the same spacing S. The use of the mean spacing has the advantage that it is no longer necessary to define which observation is the first, and which is the next. However, a serious drawback of this criterion is that, when applied to the observation collections from Fig. 4b, it gives the same value. In other words, this criterion fails to capture the fact that, as far as Fig. 4b is concerned, the observation collection on the right is spatially more detailed than the observation collection on the left.

Coverage: coverage, proposed in [78], is another criterion that can be used to characterize the spatial resolution of observation collections. The value C of this criterion for the observation collections presented in Fig. 4 is:

$$C =\frac{N \cdot A}{E}$$

where N is the number of observations, A is the area covered by each observation, and E is the extent of the study area. This criterion will yield different values for the spatial resolutions of the observation collections from Fig. 4a-b, capturing the fact that these observation collections have different amounts of spatial detail. There is also no need to face the arbitrariness which comes with the specification of a spatial ordering for an observation collection, and C gives an immediate impression of the portion of the study area which has been observed. A drawback of this criterion is that it leads to a dimensionless value, and this fails to account for the intuition (reflected in expressions such as ‘10 meters resolution’, ‘20 meters resolution’, and so on) that resolution is a property to which humans associate a dimension of length.
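For reference, the coverage criterion is a one-line computation (a sketch; N, A and E as defined above, with A and E expressed in the same areal unit so that C is dimensionless):

coverage :: Int -> Double -> Double -> Double
coverage n a e = fromIntegral n * a / e
-- e.g., coverage 50 4 10000 == 0.02 for 50 observations of 4 m^2 each
-- over a study area of 10,000 m^2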

From the previous paragraphs, spacing and coverage as criteria to characterize the spatial/temporal resolution of observation collections are wanting in some respects. As a general requirement for the Sensor Web and GIScience, proxy measures for the spatial/temporal resolution of observation collections should: (i) avoid the arbitrariness that comes with the need to define a spatial ordering for the observation collection; (ii) have the dimension of length/time (Footnote 4); and (iii) mirror the fact that a perfect sampling strategy covers the whole study area/period. This motivates the introduction of the following two terms for the description of the resolution of observation collections: observed study area and observed study period. The observed study area is the portion of the study area that has been observed. The observed study period is the portion of the study period that has been observed. The study area is the spatial extent of the analysis and the study period is the temporal extent of the analysis.

Observed study area and observed study period

The observed study area of an observation collection can be obtained by summing up the observed areas of each of the observations in the collection. The observed area of an observation is the spatial region of the phenomenon of interest that has been observed. Let RSum be defined after [79] as the sum of two spatial regions. RSum is similar to the union operator of set theory in that the RSum of two regions A and B is a region C such that all the elements belonging to C belong to A, to B, or to both. The RSum of two regions is itself a region (see [79] for the formalization). The following equation holds:

$$ObservedStudyArea = RSum_{i=1}^{n} \left[a_{i}\right] $$

where \(a_i\) denotes the observed area of each observation, and n is the number of observations in the observation collection.

Likewise, the observed study period of an observation collection can be obtained by summing up the observed periods of each of the observations in the observation collection. The observed period of a single observation is the temporal region of the phenomenon of interest that has been observed.

$$ObservedStudyPeriod = RSum_{i=1}^{n} \left[w_{i}\right] $$

where \(w_i\) designates the observed period of each observation, and n is the number of observations in the observation collection.
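The following Haskell sketch illustrates these two equations under a simplifying assumption that is not made in [79]: regions are approximated as finite sets of cells (for space) or instants (for time), so that RSum behaves like a set union. It is a sketch of the idea, not the article's own formalization:

```haskell
import qualified Data.Set as Set

-- Regions approximated as finite sets of cells (space) or instants (time),
-- so that the region sum RSum of [79] behaves like a set union.
type Region a = Set.Set a

rsum :: Ord a => [Region a] -> Region a
rsum = Set.unions

-- Observed study area: RSum of the observed areas a_i of the observations.
observedStudyArea :: Ord cell => [Region cell] -> Region cell
observedStudyArea = rsum

-- Observed study period: RSum of the observed periods w_i.
observedStudyPeriod :: Ord instant => [Region instant] -> Region instant
observedStudyPeriod = rsum

-- With this representation, overlapping observed areas (or periods) are
-- not counted twice, which matches the intuition behind RSum.
```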

Modelling the resolution of an observation collection

The spatial resolution and temporal resolution of the observation collection (Q2) can be equated with the observed study area and the observed study period respectively. The observed study area provides the decision-maker with a value which reflects how much of the study area has effectively been observed (or sampled). Its value is independent of the ordering of the observations, and also independent of the type of sampling strategy (i.e., regular vs irregular). The observed areas of the individual observations in the collection need not be alike (some might be greater or smaller than others). The observed study area has a dimension of length squared, but a linear measure can be obtained by taking its square root. For a given study area, the observed study area computed as \(ObservedStudyArea = RSum_{i=1}^{n} [a_{i}]\) approaches the study area as n tends to infinity (under the sufficient condition that the \(a_i\) are disjoint).

The observed study area and the observed study period are more suitable than spacing and coverage to characterize the spatial and temporal resolution of observation collections. They fulfill the three requirements for proxy measures for resolution listed above, thereby addressing shortcomings of criteria suggested in previous work. In addition, decision-makers are free to compute the proportion of the study area/study period that has effectively been observed through the ratios \(\frac {ObservedStudyArea}{StudyArea}\) or \(\frac {ObservedStudyPeriod}{StudyPeriod}\).
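A small sketch of these derived quantities, assuming the observed study area/period are available as numeric magnitudes (e.g., in square metres and seconds) rather than as regions:

```haskell
-- Linear measure of spatial resolution: square root of the observed study area.
linearResolution :: Double -> Double
linearResolution observedStudyArea = sqrt observedStudyArea

-- Proportion of the study area that has effectively been observed.
observedAreaProportion :: Double -> Double -> Double
observedAreaProportion observedStudyArea studyArea =
  observedStudyArea / studyArea

-- Proportion of the study period that has effectively been observed.
observedPeriodProportion :: Double -> Double -> Double
observedPeriodProportion observedStudyPeriod studyPeriod =
  observedStudyPeriod / studyPeriod
```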

If the observed area is defined as the spatial receptive field, and the observed period as the temporal receptive window, a computation of the observed study area and the observed study period based on the receptors involved in the observation process becomes possible. An example of such a computation based on the spatial receptive field and the temporal receptive window is provided in [53] (Chapter 5). Specifying the observed area and the observed period based on the spatial receptive field and the temporal receptive window respectively leads to a definition of the observed study area and the observed study period based on the properties of the observation process, as expressed by Frank’s [16] desideratum (see Section The receptor-centric approach). In the absence of information about the spatial receptive field and temporal receptive window, the spatial and temporal supports may be used as criteria to characterize the observed area and observed period and to compute the observed study area and observed study period (though one should be aware of the drawbacks of supports discussed in Section The property-centric approach).

A minor drawback of these two criteria is that their full significance only unfolds when the extent of the whole study area/period is known. For example, stating that the observed study period of an observation collection is one hour says nothing about the actual quality of the observation collection, unless the whole temporal extent under consideration (e.g., one day or one month) is also made explicit. The extent of the study area/period will also be required for a meaningful comparison of two observation collections with respect to their spatial and temporal resolution. Even so, this drawback is not intrinsic to the criteria suggested. It is rather a consequence of the general fact that values need some context if their significance is to be assessed. Another minor drawback of these two criteria is that they are new, and thus yet to be adopted by the practice of metadata documentation. However, the lack of criteria in the literature which fulfill the three requirements mentioned in Section Spatial and temporal resolution of an observation collection suggests that the Sensor Web and GIScience should come up with new criteria to describe the resolution of observation collections (rather than conforming to existing ones).

Alignment to DOLCE: resolution of an observation collection

In line with [80], an ‘observation collection’ is viewed as a social object. A social object is an object that exists only within a process of social communication, in which at least one PhysicalObject participates. ‘Spatial resolution’, ‘temporal resolution’, ‘observed study area’, ‘observed study period’, ‘observed area’ and ‘observed period’ are all qualities that inhere in a social object, and therefore abstract qualities. Figure 5 shows the ODP for the resolution of observation collections which summarizes all terms introduced in this section. The ODP can be downloaded at https://doi.org/10.5281/zenodo.1293285.

Fig. 5: ODP for the resolution of an observation collection

Applications

This section demonstrates the practical usefulness of some of the ideas presented in this work. As discussed in [41], practical usefulness is an evaluation criterion of the implementation stage of ontology development, and is demonstrated through one or more applications which use the ontology. The ontology design patterns introduced earlier in Sections Formal specification: resolution of a sensor observation and Alignment to DOLCE: resolution of an observation collection are particularly relevant in this context. As the name suggests, they are relevant for the design stage of ontology development. If, in addition, they are encoded in an ontology implementation language (e.g., OWL), they become useful for practical tasks (e.g., information retrieval). In short, ODPs act here as a bridge between the design stage and the implementation stage of ontology development, and provide a nexus between the theoretical investigations and their practical complements.

Section Resolution of single observations: Retrieval of Flickr data at a certain temporal resolution shows how the ODP for the resolution of a single sensor observation can be used to annotate and retrieve Flickr data with their temporal resolution. The purpose of the section is to illustrate how information from a real dataset could be accommodated through the ODP. Section Resolution of single observations: Expressing resolution qualitatively demonstrates how translation rules in SPARQL can be specified to account for qualitative values of resolution in the context of query expansion. The two sections thus illustrate the use of the ODP for information retrieval and query expansion respectively. Since the principles are similar, information retrieval and query expansion using the ODP for observation collections are not further presented. Instead, Section Resolution of observation collections: Cross-comparison of average values for air quality in Europe focuses on demonstrating the practical usefulness of the concepts of observed study area/period for policy making (in particular, the cross-comparison of average values for air quality in Europe). The implementation described here was done in the Java programming language, using Eclipse as the development environment. The software code can be accessed at https://doi.org/10.5281/zenodo.1293285.

Resolution of single observations: Retrieval of Flickr data at a certain temporal resolution

This subsection illustrates how the ODP presented above to characterize the resolution of single observations can be used to retrieve Flickr data satisfying some (temporal) resolution constraints. Flickr is an online platform for the sharing of photographs. Flickr photographs are associated with a great variety of themes, but they can be organized into albums or galleries with a limited thematic scope. The Lava shots gallery (Footnote 5), for example, groups photos capturing “volcanic activity and areas, featuring Sicily’s Mt. Etna and Hawaii’s national parks”. The ODP for the resolution of single observations can be used to annotate and infer the temporal resolutions of these images, based on the physical properties (i.e., the shutter speeds) of the cameras which produced them. Figure 6 shows the IDs of the photographs from the Lava shots gallery which have a temporal resolution of at most 0.4 seconds. The steps followed to get the results displayed are:

Fig. 6: Photographs of the Lava shots gallery (Flickr) with a temporal resolution less than or equal to 0.4 seconds

Step 1: Retrieve the pictures contained in the Lava shots gallery using the method flickr.galleries.getPhotos from the Flickr API;

Step 2: Get the Exif (Exchangeable Image File Format) data about each picture, as well as the shutter speed (if available) of the camera which produced the picture, through the flickr.photos.getExif method of the Flickr API;

Step 3: Populate the ODP with pictures (for which the shutter speed has been explicitly documented) using the OWL API [81, 82];

Step 4: Infer the temporal resolution of these pictures using the Pellet Reasoner [83, 84];

Step 5: Retrieve pictures at a given temporal resolution using SPARQL.
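The pipeline itself relies on the Flickr API, the OWL API, Pellet and SPARQL (in Java). The Haskell sketch below only mirrors the inference behind Steps 3-5, under the assumption that a photograph's temporal resolution can be equated with its documented shutter speed; record and function names are illustrative:

```haskell
-- Sketch of the inference behind Steps 3-5 (the actual implementation
-- uses the OWL API, Pellet and SPARQL): a photograph's temporal
-- resolution is equated with its documented shutter speed.
data Photo = Photo
  { photoId      :: String
  , shutterSpeed :: Maybe Double  -- exposure time in seconds, if documented
  }

temporalResolution :: Photo -> Maybe Double
temporalResolution = shutterSpeed

-- Photographs whose temporal resolution is at most the given threshold
-- (e.g., 0.4 seconds, as in Fig. 6).
atMostTres :: Double -> [Photo] -> [Photo]
atMostTres t = filter (\p -> maybe False (<= t) (temporalResolution p))
```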

Resolution of single observations: Expressing resolution qualitatively

The examples introduced so far in this article have assigned only quantitative values to the spatial and temporal resolution of observations (or observation collections). However, spatial and temporal resolution can also be expressed qualitatively. One could envision the following information needs where resolution is expressed qualitatively:

  • retrieve all the remote sensing images (observations) in the knowledge base which have a high spatial resolution

  • return the census data (observation collection) from last year, at the county level

  • provide daily data (observation collection) about the level of the Danube river

  • retrieve the air quality observations in the database which have a low temporal resolution

To account for such queries, one must specify translation rules establishing correspondences between quantitative and qualitative values of resolution. As an example illustrating how the translation could be done, Listing 3 presents a SPARQL query to retrieve the Flickr photographs from the Lava shots gallery with both their qualitative and quantitative temporal resolution. The translation rule is specified in the query through “BIND(IF(?quantitativeTres ≤ 0.4, ‘high’, ‘low’) AS ?qualitativeTres)” which states that pictures with a temporal resolution less than or equal to 0.4 seconds have a ‘high’ temporal resolution, and those with a temporal resolution greater than 0.4 seconds have a ‘low’ temporal resolution. Figure 7 displays the results of the query.

Fig. 7: Results of Q3
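For illustration, the translation rule embodied in the BIND expression can be restated as a simple function. The Haskell sketch below assumes the same 0.4-second threshold and is not part of the implementation described above:

```haskell
-- The translation rule of Listing 3, restated as a Haskell function:
-- temporal resolutions of at most 0.4 s are labelled 'high', others 'low'.
data QualitativeTres = High | Low deriving (Show, Eq)

toQualitative :: Double -> QualitativeTres
toQualitative quantitativeTres
  | quantitativeTres <= 0.4 = High
  | otherwise               = Low
```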

Resolution of observation collections: Cross-comparison of average values for air quality in Europe

In 2008, the European Parliament and the Council of the European Union adopted Directive 2008/50/EC on ambient air quality and cleaner air for Europe. The following quote is taken from this directive:

“In order to ensure that the information collected on air pollution is sufficiently representative and comparable across the Community, it is important that standardised measurement techniques and common criteria for the number and location of measuring stations are used for the assessment of ambient air quality” [85].

It is argued here that the observed study area and the observed study period of observation collections should be taken into consideration if average values are to be “sufficiently representative and comparable across the Community” as Directive 2008/50/EC requires. To give an example, Table 3 shows three European Member States with their respective numbers of monitoring stations measuring ozone levels. The numbers of monitoring stations are taken from [86], a recent report on air pollution by ozone across Europe. It is assumed, for the purposes of the illustration, that each of the monitoring stations in these countries has an observed area of 100 m² (the report did not provide information from which the observed area could be derived, and the validity of the arguments presented in the next paragraph does not depend on the value chosen for the observed area: 100 m² or any other).

Table 3 Number of monitoring stations for the ozone level in three European countries

Only average values from France and Germany over an observed study area of 8,300 m² can be used for a consistent comparison of the average ozone levels in France, Germany and the United Kingdom. Likewise, only average values from France over an observed study area of 26,000 m² are pertinent for an adequate comparison of the average ozone levels in France and Germany. The report presented in [86] remained silent about this aspect. For instance, the occurrence of exceedances in each European country (henceforth called ‘occurrences per country’) was defined as “the average number of exceedances observed per station in a country” (emphasis added), and the report gave the occurrences per country (see page 11 of the report). The occurrences per country were later summed up and averaged to give an average value of occurrences in Europe of 1.5, without any mention of the spatial areas for which the occurrences per country are valid. This approach bears the risk of producing meaningless results. Indeed, average values over 83 stations cannot be compared with average values over 260 stations, in the same way as average values over a day cannot be compared with average values over a month (observed areas and observed periods being equal). A similar observed study area or observed study period is a prerequisite for an appropriate comparison of average values of observations belonging to different observation collections. In the absence of this information in the report, the meaningfulness of the values provided for a cross-comparison of occurrences per country in Europe may be questioned.
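The arithmetic behind this argument can be sketched as follows. The station counts for the United Kingdom (83) and Germany (260) are those implied by the numbers above, the observed area of 100 m² per station is the assumption made for the illustration, and the comparability check is a deliberately strict sketch:

```haskell
-- Sketch of the comparability argument: averages over observation
-- collections are only compared when their observed study areas match.
-- Station counts for the UK (83) and Germany (260) follow the text;
-- 100 m^2 per station is the assumption made above.
observedStudyAreaOf :: Int -> Double -> Double
observedStudyAreaOf nStations areaPerStation =
  fromIntegral nStations * areaPerStation

comparable :: Double -> Double -> Bool
comparable = (==)

ukArea, deArea :: Double
ukArea = observedStudyAreaOf 83  100   --  8,300 m^2
deArea = observedStudyAreaOf 260 100   -- 26,000 m^2

-- comparable ukArea deArea == False: directly averaging ozone exceedances
-- over these two collections would not be meaningful.
```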

In sum, the observed study area and observed study period should always be documented when manipulating average values. The general rule that a comparison of average values requires similar observed study areas/periods is an axiom which can be used to check the consistency of information stored in (sensor web or) geographic information systems. This work has provided the basis for assessing the observed study area and observed study period of observation collections. Both criteria are derived from the observed areas and observed periods respectively (see Section Spatial and temporal resolution of an observation collection). The observed areas and observed periods can be estimated using the spatial receptive fields and temporal receptive windows of the observers (in this case, the monitoring stations in Europe) which produced the observations.

Comparison with previous work

With regard to previous work, the discussion from [16] is most closely related to the work presented here. The main difference between the two is in the nature of the investigation: Frank [16] essentially discussed the effects of observations’ limited resolution on the size of the objects that could be formed based on these observations; this work analyzed the relationship between the characteristics of entities participating in an observation process, and the resolution of the final observations. Table 4 recapitulates the similarities and differences between the two works (‘previous work’ denotes the work done in [16] and ‘current work’ refers to this article).

Table 4 Comparison with previous work

In addition, the work has shown that a receptor-centric approach is applicable to both technical and human sensors. The receptor-based approach thus appears to be a promising way to cope with resolution in the VGI age. Since none of the previous formalisms reviewed in Section Resolution in GIScience: a brief review has explicitly considered VGI as a possible use case, the work contributes to advancing the state of the art in that respect. It is worth mentioning that there are two cases of VGI to which the suggested theory would apply, namely: humans going around using sensors to collect values (e.g., about noise) and reporting them; and humans directly reporting qualitative values about the environment (e.g., saying via Twitter: “there is an apple here” or “it’s now very cold in Las Vegas”). In the first case, the resolution of the VGI can be traced back to the properties of the instruments used by people during the data collection activity. As to the second case, the work has proposed that the resolution of VGI should be traced back to the properties of human sensing (which are unveiled by research from neuroscience). In both cases, resolution could be specified using the spatial/temporal receptive field, or the observed study area/period, depending on whether one is talking about one VGI observation or a collection of VGI observations. A case of VGI not yet supported by the theory is the case where a group of people collaboratively creates a map of an affected region after a disaster (as in, e.g., [87]). The rationale for this is that the scope of applicability of the work is limited to tiers 0 and 1 of [9, 88] (see Table 4). For this latter case of VGI, approaching the resolution problem as the specification of the resolution of a vector dataset seems logical (though specifying the resolution of vector data in the digital age is still an open issue, see [23]).

Limitations

The motivation for this work has been to explore whether a specification of resolution based on physical properties of the observation process is feasible at all (given the current dearth of approaches looking at this). As mentioned earlier, “data quality research needs a quantitative, theory-based approach” [16]. It appears that spatial receptive fields and temporal receptive windows are good candidates for such a quantitative, theory-based approach with respect to the spatial data quality criterion ‘resolution’. The receptor-based approach can also cope with both technical and human observers, and this is a major advance compared to current approaches to modelling resolution, which mainly address technical sensors (Footnote 6). Some examples were provided in Section Applications to illustrate the practical usefulness of the theory. Nonetheless, much still needs to be done to make it readily usable for annotating datasets with their resolution. There are two main obstacles (already mentioned, but briefly recapped here): documentation practice (getting the sensor industry to provide metadata from which both the spatial receptive field and the temporal receptive window can be computed), and the pace at which neuroscience advances the understanding of the human observation process. Both obstacles are acknowledged here, but it is also argued that they are not insurmountable on the road towards more quantitative, objective approaches to observation resolution. The merit of this work has been to lay down a foundation upon which future work can build, as better knowledge about human sensing processes and better sensor documentation practices become available.

Finally, geographic information has three components: space, time and theme (or attribute). Though the interdependence of these three dimensions is acknowledged, the work has deliberately chosen to focus on the spatial and temporal dimensions of geographic information. The main rationale for this is that space and time are more specific and better understood than the (more varied) attribute dimension. It appeared reasonable to start with these two to make the investigation more manageable, as theories pertaining to the thematic dimension have proven challenging to establish. For instance, though spatial and temporal reference systems have been around (and used) for years, the semantic reference systems suggested 15 years ago [89] are yet to be produced. The ideas proposed in this article can be used as a starting point for the formulation of a more general, receptor-based theory of resolution which applies to all three dimensions of geographic information. Regarding thematic resolution, Veregin [90] suggested a distinction between two types: thematic resolution for quantitative data and thematic resolution for categorical data. The former refers to the degree to which small differences in a quantitative attribute can be discerned (e.g., 10.03 mA and 10.0251 mA (Footnote 7) indicate two different thematic resolutions for an observation reporting the amount of electric current in an electrical circuit); the latter denotes the fineness of category definition (e.g., a classification of entities as either ‘anthropogenic’ or ‘natural’, as opposed to a classification of the same entities into the classes ‘Agriculture’, ‘Grass and Riparian and Dense Urban vegetation’, ‘Desert’ or ‘Urban’ (Footnote 8)). The best setting for reuse of the ideas presented in this work is a theory of thematic resolution for quantitative data. In particular, interesting questions to investigate are the extent to which a receptor-based approach is applicable to the thematic resolution of observations, and the interplay between the thematic resolution of an observation (say an image) and the discrimination of the sensor (e.g., a satellite) which produced the observation. These questions have not been discussed in this work and could be taken up by future studies.

Conclusion and future work

Resolution is one of several components of scale, and a science of scale requires progress in its understanding. Observations are central to Geographic Information Science (GIScience) because “all we know about the world is based on observation” [9]. Previous work has proposed different formalisms for the resolution of geographic data, yet offered no observation-based theory of resolution. This paper has expounded one, and suggested modelling the resolution of observations based on the receptors of the observer which participated in the observation process. The theory was specified using Haskell, and the concepts suggested were implemented as ontology design patterns, which can be used while annotating sensor observations with their resolution. The article also discussed criteria proposed in the literature to characterize the resolution of observation collections, pointed out their limitations, and suggested that the resolution of an observation collection may be better described by its observed study area and observed study period.

Both the transition to the digital age and the rise of Volunteered Geographic Information (VGI) call for a rethinking of traditional criteria to describe the resolution of data in GIScience. The ideas presented in this paper have shown one way to redefine the resolution of observations and observation collections in order to accommodate both technical and human sensors. The article gave examples illustrating the applicability of a receptor-centric approach to the description of observation resolution. An immediate direction for future work is to extend the theory’s applicability to account for the thematic resolution of observations and observation collections. In addition, since current metadata documentation practices limit themselves to the documentation of the characteristics of observation values, further tests of the applicability of the theory can only be done as the practice of metadata documentation evolves towards a more explicit documentation of the quale’s contribution to the observation process. Finally, it became clear during the course of this work that a better understanding of the notion of quale (and especially its relationship with the observation value) would help advance observation ontology.

Notes

  1. In fact, the following equations hold: spatialResolution(observation) ≤ spatialResolution(quale); temporalResolution(observation) ≤ temporalResolution(quale); thematicResolution(observation) ≤ thematicResolution(quale), since the transformation of the quale into an observation value (through the expression operation mentioned in Section The receptor-centric approach) might involve another loss of spatial/temporal/thematic detail. The example introduced here assumes no loss of spatial/temporal detail during the expression operation, and equates the spatial/temporal resolution of the observation with the spatial/temporal resolution of the quale. A thorough investigation of the interplay between resolution of quale and resolution of observation value (for the spatial, temporal and thematic dimensions) is deferred to future work.

  2. http://www.ontologydesignpatterns.org/ont/dul/DUL.owl (last accessed: December 05, 2017). A DUL:SocialObject is an object that is created in the process of social communication.

  3. This idea can be found in [10, 20, 47, 48, 55].

  4. Length/time is mentioned here in opposition to dimensionless values. Area/time or volume/time are also suitable dimensions for proxy measures of resolution. This requirement of a length dimension for values of resolution was already brought forth in [1].

  5. https://www.flickr.com/photos/flickr/galleries/72157645265344193/, last accessed: December 05, 2017.

  6. An exception is Scheider and Stasch [91] who recently suggested the use of attention as a metaphor to interpret sensor observations, proposing time/location of the focus of measurement as a proxy measure for resolution. Nonetheless, exploring the translation of this idea into a quantitative, computational resolution theory (as in this work) is still ongoing work.

  7. mA is an abbreviation for milliampere.

  8. This second example is based on the illustration of map reclassification rules from [92].

Abbreviations

CO: Carbon Monoxide

COA: Carbon Monoxide Analyzer

DOLCE: Descriptive Ontology for Linguistic and Cognitive Engineering

DUL: DOLCE Ultra Light

FOOM: Functional Ontology of Observation and Measurement

GIScience: Geographic Information Science

SemSOS: Semantic Sensor Observation Service

ODP: Ontology Design Pattern

OWL: Web Ontology Language

VGI: Volunteered Geographic Information

References

  1. Goodchild M, Proctor J. Scale in a digital geographic world. Geogr Environ Model. 1997; 1(1):5–23.


  2. Gibson CC, Ostrom E, Ahn TK. The concept of scale and the human dimensions of global change: a survey. Ecol Econ. 2000; 32(2):217–39. https://doi.org/10.1016/S0921-8009(99)00092-0.


  3. Goodchild M. Accuracy and spatial resolution: critical dimensions for geoprocessing In: Douglas DH, Boyle AR, editors. Cartography and Geographic Information Processing: Hope and Realism. Ottawa: Canadian Cartographic Association: 1982. p. 87–90.


  4. Degbelo A, Kuhn W. Five general properties of resolution In: Krzysztof J, Adams B, McKenzie G, Kauppinen T, editors. CEUR Workshop Proceedings. Vienna: CEUR-WS.org: 2014. p. 40–7.


  5. Degbelo A, Kuhn W. A conceptual analysis of resolution In: Bogorny V, Namikawa L, editors. XIII Brazilian Symposium on Geoinformatics. Campos do Jordão: MCT/INPE: 2012. p. 11–22. https://doi.org/ISSN2179-4847.


  6. Dungan JL, Perry JN, Dale MRT, Legendre P, Citron-Pousty S, Fortin MJ, Jakomulska A, Miriti M, Rosenberg MS. A balanced view of scale in spatial statistical analysis. Ecography. 2002:626–40. https://doi.org/10.1034/j.1600-0587.2002.250510.x.

  7. Wu J, Li H. Concepts of scale and scaling In: Wu J, Jones B, Li H, Loucks O, editors. Scaling and Uncertainty Analysis in Ecology: Methods and Applications. Dordrecht: Springer: 2006. p. 3–16. https://doi.org/10.1007/1-4020-4663-4_1.


  8. Goodchild M. Citizens as sensors: the world of volunteered geography. GeoJournal. 2007; 69(4):211–221. https://doi.org/10.1007/s10708-007-9111-y.


  9. Frank A. Ontology for spatio-temporal databases In: Sellis T, Koubarakis M, Frank AU, Grumbach S, Güting RH, Jensen CS, Lorentzos N, Manolopoulos Y, Nardelli E, Pernici B, Theodoulidis B, Tryfona N, Schek H, Scholl M, editors. Spatio-Temporal Databases: The CHOROCHRONOS Approach. Berlin Heidelberg: Springer: 2003. p. 9–77. Chap. 2. https://doi.org/10.1007/978-3-540-45081-8_2.


  10. Janowicz K. Observation-driven geo-ontology engineering. Trans GIS. 2012; 16(3):351–74. https://doi.org/10.1111/j.1467-9671.2012.01342.x.


  11. Adams B, Janowicz K. Constructing geo-ontologies by reification of observation data In: Agrawal D, Cruz I, Jensen C, Ofek E, Tanin E, editors. Proceedings of the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. Chicago: ACM: 2011. p. 309–18. https://doi.org/10.1145/2093973.2094015.


  12. Stasch C, Scheider S, Pebesma E, Kuhn W. Meaningful spatial prediction and aggregation. Environ Model Softw. 2014; 51:149–65. https://doi.org/10.1016/j.envsoft.2013.09.006.


  13. Couclelis H. Ontologies of geographic information. Int J Geogr Inf Sci. 2010; 24(12):1785–809. https://doi.org/10.1080/13658816.2010.484392.


  14. Goodchild M. Scales of cybergeography In: Sheppard E, McMaster RB, editors. Scale and Geographic Inquiry: Nature, Society, and Method. Malden: Blackwell Publishing Ltd: 2004. p. 154–169. Chap. 7.


  15. Resch B, Blaschke T. Fusing human and technical sensor data: concepts and challenges. SIGSPATIAL Spec. 2015; 7(2):29–35. https://doi.org/10.1145/2826686.2826692.


  16. Frank A. Why is scale an effective descriptor for data quality? The physical and ontological rationale for imprecision and level of detail In: Cartwright W, Gartner G, Meng L, Peterson MP, editors. Research Trends in Geographic Information Science. Lecture Notes in Geoinformation and Cartography. Berlin Heidelberg: Springer: 2009. p. 39–61. Chap. 4. https://doi.org/10.1007/978-3-540-88244-2_4.


  17. Degbelo A. An ontology design pattern for spatial data quality characterization in the semantic sensor web In: Henson C, Taylor K, Corcho O, editors. The 5th International Workshop on Semantic Sensor Networks. Boston, Massachusetts: CEUR-WS.org: 2012. p. 103–8.


  18. Goodchild M, Quattrochi D. Introduction: scale, multiscaling, remote sensing, and GIS In: Quattrochi D, Goodchild M, editors. Scale in Remote Sensing and GIS. Boca Raton: Lewis Publishers: 1997. p. 1–11.


  19. Frank A. Scale is introduced in spatial datasets by observation processes In: Devillers R, Goodchild H, editors. Spatial Data Quality: From Process to Decisions. St. John’s, Newfoundland and Labrador: CRC Press: 2009. p. 17–29.


  20. Compton M, Barnaghi P, Bermudez L, García-Castro R, Corcho O, Cox S, Graybeal J, Hauswirth M, Henson C, Herzog A, Huang V, Janowicz K, Kelsey WD, Le Phuoc D, Lefort L, Leggieri M, Neuhaus H, Nikolov A, Page K, Passant A, Sheth A, Taylor K. The SSN ontology of the W3C semantic sensor network incubator group. Web Semant Sci Serv Agents World Wide Web. 2012. https://doi.org/10.1016/j.websem.2012.05.003.

  21. Blöschl G, Sivapalan M. Scale issues in hydrological modelling: a review. Hydrol Process. 1995; 9(3-4):251–90. https://doi.org/10.1002/hyp.3360090305.


  22. Atkinson PM, Tate NJ. Spatial scale problems and geostatistical solutions: a review. Prof Geogr. 2000; 52(4):607–23. https://doi.org/10.1111/0033-0124.00250.


  23. Goodchild M. Scale in GIS: an overview. Geomorphology. 2011; 130(1-2):5–9. https://doi.org/10.1016/j.geomorph.2010.10.004.


  24. Lam NSN, Quattrochi DA. On the Issues of Scale, Resolution, and Fractal Analysis in the Mapping Sciences*. Prof Geogr. 1992; 44(1):88–98. https://doi.org/10.1111/j.0033-0124.1992.00088.x.


  25. Marceau DJ, Gratton DJ, Fournier RA, Fortin JP. Remote sensing and the measurement of geographical entities in a forested environment. 2. The optimal spatial resolution. Remote Sens Environ. 1994; 49(2):105–17. https://doi.org/10.1016/0034-4257(94)90046-9.


  26. Gao J. Resolution and accuracy of terrain representation by grid DEMs at a micro-scale. Int J Geogr Inf Sci. 1997; 11(2):199–212. https://doi.org/10.1080/136588197242464.


  27. Deng Y, Wilson JP, Bauer BO. DEM resolution dependencies of terrain attributes across a landscape. Int J Geogr Inf Sci. 2007; 21(2):187–213. https://doi.org/10.1080/13658810600894364.


  28. Chow TE, Hodgson ME. Effects of lidar post-spacing and DEM resolution to mean slope estimation. Int J Geogr Inf Sci. 2009; 23(10):1277–95. https://doi.org/10.1080/13658810802344127.


  29. Jantz CA, Goetz SJ. Analysis of scale dependencies in an urban land-use-change model. Int J Geogr Inf Sci. 2005; 19(2):217–41. https://doi.org/10.1080/13658810410001713425.


  30. Kim JH. Spatiotemporal scale dependency and other sensitivities in dynamic land-use change simulations. Int J Geogr Inf Sci. 2013; 27(9):1782–803. https://doi.org/10.1080/13658816.2013.787145.


  31. Pontius Jr RG, Cheuk ML. A generalized cross-tabulation matrix to compare soft-classified maps at multiple resolutions. Int J Geogr Inf Sci. 2006; 20(1):1–30. https://doi.org/10.1080/13658810500391024.


  32. Csillag F, Kummert A, Kertész M. Resolution, accuracy and attributes: approaches for environmental geographical information systems. Comput Environ Urban Syst. 1992; 16(4):289–97. https://doi.org/10.1016/0198-9715(92)90010-O.


  33. Lechner AM, Rhodes JR. Recent progress on spatial and thematic resolution in Landscape Ecology. Curr Landsc Ecol Rep. 2016; 1(2):98–105. https://doi.org/10.1007/s40823-016-0011-z.


  34. Du S, Guo L, Wang Q. A scale-explicit model for checking directional consistency in multi-resolution spatial data. Int J Geogr Inf Sci. 2010; 24(3):465–85. https://doi.org/10.1080/13658810802629360.


  35. Balley S, Parent C, Spaccapietra S. Modelling geographic data with multiple representations. Int J Geogr Inf Sci. 2004; 18(4):327–52. https://doi.org/10.1080/13658810410001672881.


  36. Stell J, Worboys M. Stratified map spaces: a formal basis for multi-resolution spatial databases In: Poiker T, Chrisman N, editors. SDH’98 - Proceedings 8th International Symposium on Spatial Data Handling. Vancouver: 1998. p. 180–9.

  37. Skogan D. Managing resolution in multi-resolution databases In: Bjørke JT, Tveite H, editors. ScanGIS’2001 - The 8th Scandinavian Research Conference on Geographical Information Science. Ås, Norway: 2001. p. 99–113.

  38. Worboys M. Imprecision in finite resolution spatial data. GeoInformatica. 1998; 2(3):257–79. https://doi.org/10.1023/A:1009769705164.


  39. Weiser P, Frank A. Modeling discrete processes over multiple levels of detail using partial function application In: Degbelo A, Brink J, Stasch C, Chipofya M, Gerkensmeyer T, Humayun MI, Wang J, Broelemann K, Wang D, Eppe M, Lee JH, editors. GI Zeitgeist 2012 - Proceedings of the Young Researchers Forum on Geographic Information Science. Muenster: AKA, Heidelberg, Germany: 2012. p. 93–7.


  40. Bruegger B. Theory for the integration of scale and representation formats: major concepts and practical implications In: Frank AU, Kuhn W, editors. Spatial Information Theory: a Theoretical Basis for GIS. Semmering: Springer: 1995. p. 297–310. https://doi.org/10.1007/3-540-60392-1_19.


  41. Degbelo A. A snapshot of ontology evaluation criteria and strategies In: Hoestra R, Faron-Zucker C, Pellegrini T, de Boer V, editors. Proceedings of the 13th International Conference on Semantic Systems - SEMANTICS 2017. Amsterdam: ACM Press: 2017. https://doi.org/10.1145/3132218.3132219.


  42. Kuhn W. Modeling vs encoding for the Semantic Web. Semant Web. 2010; 1(1):11–5. https://doi.org/10.3233/SW-2010-0012.


  43. Bittner T, Donnelly M, Smith B. A spatio-temporal ontology for geographic information integration. Int J Geogr Inf Sci. 2009; 23(6):765–98. https://doi.org/10.1080/13658810701776767.


  44. Drummond JR, Mand GS. The measurements of pollution in the troposphere (MOPITT) instrument: overall performance and calibration requirements. J Atmos Ocean Technol. 1996; 13(2):314–20. https://doi.org/10.1175/1520-0426(1996)013<0314:TMOPIT>2.0.CO;2.


  45. Henson CA, Pschorr JK, Sheth AP, Thirunarayan K. SemSOS: semantic sensor observation service In: McQuay W, Smari W, editors. International Symposium on Collaborative Technologies and Systems (CTS 2009). Baltimore: IEEE: 2009. p. 44–53. https://doi.org/10.1109/CTS.2009.5067461.


  46. Grüninger M, Fox MS. Methodology for the design and evaluation of ontologies. In: Proceedings of the IJCAI Workshop on Basic Ontological Issues in Knowledge Sharing. Montreal, Quebec: 1995.

  47. Janowicz K, Compton M. The Stimulus-Sensor-Observation ontology design pattern and its integration into the semantic sensor network ontology In: Taylor K, Ayyagari A, De Roure D, editors. The 3rd International Workshop on Semantic Sensor Networks. Shanghai: CEUR-WS.org: 2010.


  48. Kuhn W. A functional ontology of observation and measurement In: Janowicz K, Raubal M, Levashkin S, editors. GeoSpatial Semantics: Third International Conference. Mexico City, Mexico: Springer: 2009. p. 26–43. https://doi.org/10.1007/978-3-642-10436-7_3.


  49. Madin J, Bowers S, Schildhauer M, Krivov S, Pennington D, Villa F. An ontology for describing and synthesizing ecological observation data. Ecol Informat. 2007; 2(3):279–96. https://doi.org/10.1016/j.ecoinf.2007.05.004.


  50. Probst F. Ontological analysis of observations and measurements In: Raubal M, Miller H, Frank A, Goodchild M, editors. Geographic Information Science: Fourth International Conference. Münster, Germany: Springer: 2006. p. 304–20. https://doi.org/10.1007/11863939_20.


  51. Fonseca F, Davis C, Câmara G. Bridging ontologies and conceptual schemas in geographic information integration. Geoinformatica. 2003; 7(4):355–78. https://doi.org/10.1023/A:1025573406389.


  52. Percivall G. OGC Reference Model. OpenGIS® Implementation Specification (version 2.0), OGC 08-062r4. Technical report, Open Geospatial Consortium. 2008.

  53. Degbelo A. Spatial and Temporal Resolution of Sensor Observations. IOS Press: Dissertations in Geographic Information Science; 2015, p. 206.


  54. Masolo C, Borgo S, Gangemi A, Guarino N, Oltramari A. WonderWeb Deliverable D18. Technical report. 2003.

  55. Stasch C, Janowicz K, Bröring A, Reis I, Kuhn W. A stimulus-centric algebraic approach to sensors and observations In: Trigoni N, Markham A, Nawaz S, editors. GeoSensor Networks: Third International Conference. Oxford: Springer: 2009. p. 169–79. https://doi.org/10.1007/978-3-642-02903-5_17.


  56. Burrough PA, McDonnell RA. Principles of Geographical Information Systems, vol. 333. New York: Oxford University Press; 1998, p. 333.


  57. Finke PA, Bierkens MFP, de Willigen P. Choosing appropriate upscaling and downscaling methods for environmental research In: Steenvoorden J, Claessen F, Willems J, editors. Proceedings of the International Conference on Agricultural Effects on Ground and Surface Waters. Wageningen: IAHS: 2002. p. 405–9.


  58. Braitenberg V. Vehicles: Experiments in Synthetic Psychology. Cambridge: MIT press; 1984, p. 152.


  59. Quine WV. In praise of observation sentences. The Journal of Philosophy. 1993; 90(3):107–16. https://doi.org/10.2307/2940954.


  60. Alonso J, Chen Y. Receptive field. Scholarpedia. 2009; 4(1):5393. https://doi.org/10.4249/scholarpedia.5393.


  61. Hasson U, Yang E, Vallines I, Heeger DJ, Rubin N. A hierarchy of temporal receptive windows in human cortex. J Neurosci. 2008; 28(10):2539–50. https://doi.org/10.1523/JNEUROSCI.5487-07.2008.


  62. Lerner Y, Honey CJ, Silbert LJ, Hasson U. Topographic mapping of a hierarchy of temporal receptive windows using a narrated story. J Neurosci. 2011; 31(8):2906–15. https://doi.org/10.1523/JNEUROSCI.3684-10.2011.


  63. SICK. Product information GM901. 2015. Available online from https://www.sick.com/media/dox/3/73/473/Product_information_GM901_Carbon_Monoxide_Gas_Analyzers_en_IM0011473.PDF. Accessed 04 Aug 2016.

  64. Schurman K. Aperture. 2013. http://cameras.about.com/od/digitalcameraglossary/g/aperture.htm. Accessed 04 Aug 2016.

  65. Schurman K. Shutter Speed. 2013. http://cameras.about.com/od/digitalcameraglossary/g/shutter_speed.htm. Accessed 04 Aug 2016.

  66. den Dekker AJ, van den Bos A. Resolution: a survey. J Opt Soc Am A. 1997; 14(3):547. https://doi.org/10.1364/JOSAA.14.000547.


  67. Sydenham PH. Static and dynamic characteristics of instrumentation In: Webster JG, editor. The Measurement, Instrumentation, and Sensors Handbook. Boca Raton: CRC Press LLC: 1999. Chap. 3.


  68. Keysers C, Xiao D-K, Földiák P, Perrett DI. The speed of sight. J Cogn Neurosci. 2001; 13(1):90–101. https://doi.org/10.1162/089892901564199.


  69. Quine WV. From Stimulus to Science. Cambridge, Massachusetts, USA: Harvard University Press; 1995, p. 114.


  70. Lederman SJ. Skin and touch In: Dulbecco R, editor. Encyclopedia of Human Biology. vol. 8, 2nd edn. San Diego: Academic Press: 1997. p. 49–61.


  71. Krulwich R. Sweet, sour, salty, bitter... and umami. 2007. http://www.npr.org/templates/story/story.php?storyId=15819485. Accessed 22 Jan 2013.

  72. Gangemi A. DOLCE+DnS Ultralite 3.31. 2010. Available from http://www.ontologydesignpatterns.org/ont/dul/DUL.owl. Accessed 04 Aug 2016.

  73. Ortmann J, Daniel D. An ontology design pattern for referential qualities In: Aroyo L, Welty C, Alani H, Taylor J, Bernstein A, Kagal L, Noy N, Blomqvist E, editors. The Semantic Web - ISWC 2011: 10th International Semantic Web Conference. Bonn: Springer: 2011. p. 537–552. https://doi.org/10.1007/978-3-642-25073-6_34.


  74. Probst F. Observations, measurements and semantic reference spaces. Appl Ontol. 2008; 3(1):63–89. https://doi.org/10.3233/AO-2008-0046.


  75. Winter S, Nittel S. Formal information modelling for standardisation in the spatial domain. Int J Geogr Inf Sci. 2003; 17(8):721–41. https://doi.org/10.1080/13658810310001596067.


  76. Wood Z, Galton A. A taxonomy of collective phenomena. Appl Ontol. 2009; 4(3):267–92. https://doi.org/10.3233/AO-2009-0071.


  77. Winston ME, Chaffin R, Herrmann D. A taxonomy of part-whole relations. Cogn Sci. 1987; 11(4):417–44. https://doi.org/10.1207/s15516709cog1104_2.


  78. Degbelo A, Stasch C. Level of detail of observations in space and time In: Egenhofer MJ, Giudice N, Moratz R, Worboys M, editors. Poster Session at Conference on Spatial Information Theory: COSIT’11. Belfast, Maine: 2011.

  79. Casati R, Varzi AC. The structure of spatial localization. Philos Stud. 1996; 82(2):205–39. https://doi.org/10.1007/BF00364776.


  80. Bottazzi E, Catenacci C, Gangemi A, Lehmann J. From collective intentionality to intentional collectives: an ontological perspective. Cogn Syst Res. 2006; 7(2):192–208. https://doi.org/10.1016/j.cogsys.2005.11.009.


  81. Horridge M, Bechhofer S. The OWL API: a Java API for working with OWL 2 ontologies In: Hoekstra R, Patel-Schneider PF, editors. Proceedings of the 6th International Workshop on OWL: Experiences and Directions (OWLED 2009). Chantilly: CEUR-WS.org: 2009.


  82. Horridge M, Bechhofer S. The OWL API: a Java API for OWL ontologies. Semant Web. 2011; 2(1):11–21. https://doi.org/10.3233/SW-2011-0025.


  83. Parsia B, Sirin E. Pellet: an OWL DL reasoner. In: Poster Track at the Third International Semantic Web Conference (ISWC2004). Hiroshima: 2004.

  84. Sirin E, Parsia B, Cuenca Grau B, Kalyanpur A, Katz Y. Pellet: a practical OWL-DL reasoner. Web Semant Sci Serv Agents World Wide Web. 2007; 5(2):51–3. https://doi.org/10.1016/j.websem.2007.03.004.


  85. European Commission. Directive 2008/50/EC of the European Parliament and of the Council of 21 May 2008 on ambient air quality and cleaner air for Europe. Off J Eur Union. 2008; 51(L152).

  86. EEA. Air pollution by ozone across Europe during summer 2012: overview of exceedances of EC ozone threshold values for April-September 2012. Technical report, European Environment Agency. 2013.

  87. Zook M, Graham M, Shelton T, Gorman S. Volunteered geographic information and crowdsourcing disaster relief: a case study of the Haitian earthquake. World Med Health Policy. 2010; 2(2):2. https://doi.org/10.2202/1948-4682.1069.


  88. Frank A. Tiers of ontology and consistency constraints in geographical information systems. Int J Geogr Inf Sci. 2001; 15(7):667–78. https://doi.org/10.1080/13658810110061144.


  89. Kuhn W. Semantic reference systems. Int J Geogr Inf Sci. 2003; 17(5):405–9. https://doi.org/10.1080/1365881031000114116.


  90. Veregin H. Data quality measurement and assessment: NCGIA Core Curriculum in Geographic Information Science; 1998, pp. 1–10.

  91. Scheider S, Stasch C. The semantics of sensor observations based on attention In: Marchetti G, Benedetti G, Alharbi A, editors. Attention and Meaning: The Attentional Basis of Meaning. Nova Science Pub Inc: 2015. p. 319–343.


  92. Buyantuyev A, Wu J. Effects of thematic resolution on landscape pattern analysis. Landsc Ecol. 2007; 22(1):7–13. https://doi.org/10.1007/s10980-006-9010-5.


  93. Society for Neuroscience. Brain Facts : a Primer on the Brain and Nervous System. 7th edn. Washington, DC: Society for Neuroscience; 2012, p. 92.


  94. Britannica.com. Tympanic membrane. Encyclopædia Britannica Online. 2013. https://www.britannica.com/science/tympanic-membrane. Accessed 04 Aug 2016.

  95. Chudler EH. Brain facts and figures. 2013. http://faculty.washington.edu/chudler/facts.html. Accessed 04 Aug 2016.

  96. Kolb H. Facts and figures concerning the human retina In: Kolb H, Fernandez E, Nelson R, editors. Webvision: The Organization of the Retina and Visual System. Salt Lake City (UT): University of Utah Health Sciences Center: 2005. Available From: http://www.ncbi.nlm.nih.gov/books/NBK11556/. Accessed 04 Aug 2016.


  97. Optipedia. Photoreceptors: Optipedia. Free optics information from SPIE. 2013. http://spie.org/x32354.xml?pf=true. Accessed 04 Aug 2016.

  98. Jenkins PM, McEwen DP, Martens JR. Olfactory cilia: linking sensory cilia function and human disease. Chem Senses. 2009; 34(5):451–64. https://doi.org/10.1093/chemse/bjp020.


  99. Leffingwell JC. Olfaction. Technical report, Leffingwell & Associates. 2001.

  100. Britannica.com. Taste bud. Encyclopædia Britannica Online. 2013. http://www.britannica.com/EBchecked/topic/584034/taste-bud. Accessed 04 Aug 2016.

  101. Meyerhof W. Human taste receptors In: Blank I, Wüst M, Yeretzian C, editors. Expression of Multidisciplinary Flavour Science - Proceedings of the 12th Weurman Symposium. Interlaken: Zürcher Hochschule für Angewandte Wissenschaften (ZHAW): 2008. p. 3–12.



Acknowledgments

We would like to thank the anonymous reviewers for their many insightful comments and suggestions.

Funding

The work has been partially funded by the German Academic Exchange Service (DAAD A/10/98506) and the European Union (FP7-249170), and was conducted within the International Research Training Group on Semantic Integration of Geospatial Information (DFG GRK 1498). We also acknowledge support by the Open Access Fund of the University of Muenster.

Availability of data and materials

The Haskell code (“Results” section), the OWL files for the Ontology Design Patterns for Resolution (“Results” section), and the software described in the article (“Applications” section) are all available for download at https://doi.org/10.5281/zenodo.1293285.

Author information


Contributions

The idea of an observation-based theory of resolution was jointly developed by AD and WK. AD implemented the software and the ontology design patterns. WK contributed to getting the Haskell formalization sound. AD primarily wrote Resolution in GIScience: a review, Methods, Results, Applications, Comparison with previous work, and Limitations. WK and AD jointly wrote the Introduction and the Conclusion. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Auriol Degbelo.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Degbelo, A., Kuhn, W. Spatial and temporal resolution of geographic information: an observation-based theory. Open geospatial data, softw. stand. 3, 12 (2018). https://doi.org/10.1186/s40965-018-0053-8


  • DOI: https://doi.org/10.1186/s40965-018-0053-8

Keywords