Article

Evaluation of Automatic Building Detection Approaches Combining High Resolution Images and LiDAR Data

Geo-Environmental Cartography and Remote Sensing Research Group, Universidad Politécnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Remote Sens. 2011, 3(6), 1188-1210; https://doi.org/10.3390/rs3061188
Submission received: 24 March 2011 / Revised: 5 May 2011 / Accepted: 1 June 2011 / Published: 14 June 2011
(This article belongs to the Special Issue Urban Remote Sensing)

Abstract

In this paper, two main approaches for automatic building detection and localization using high spatial resolution imagery and LiDAR data are compared and evaluated: thresholding-based and object-based classification. The thresholding-based approach is founded on the establishment of two threshold values: one refers to the minimum height to be considered as a building, defined using the LiDAR data, and the other refers to the presence of vegetation, which is defined according to the spectral response. The other approach follows the standard scheme of object-based image classification: segmentation, feature extraction and selection, and classification, here performed using decision trees. In addition, the effect of including contextual relations with the shadows in the building detection process is evaluated. Quality assessment is performed at two different levels: area and object. The area level evaluates the building delineation performance, whereas the object level assesses the accuracy in the spatial location of individual buildings. The results obtained show that the evaluated methods are highly effective for building detection, in particular the thresholding-based approach, when the parameters are properly adjusted and adapted to the type of urban landscape considered.

1. Introduction

The evolution and dynamism of the landscape and land uses is particularly noticeable in urban and peri-urban environments, due to the construction of new infrastructures, houses and buildings. Maintenance of urban cartographic databases requires a reliable and systematic methodology for building detection. Traditionally, this specific task has been carried out by photointerpretation, which is expensive and time consuming. The abundance of high resolution multi-spectral imagery and the increasing availability of LiDAR data have encouraged the combination of these data sources and the development of digital image processing methods for automatic building detection and delineation. Several approaches have been reported in a large number of studies, yielding different degrees of accuracy. The main problematic aspects of early building detection methodologies were analyzed in depth by Mayer [1], who identified several relevant factors that need to be considered when designing an object detection model. After examining a large number of methodologies, the main problems pointed out by this author were: the lack of a thorough analysis of the contextual relations, an inadequate treatment of data variability, and the absence of a critical statistical evaluation of the results obtained. Depending on the data source employed, building detection techniques can be classified into three groups: (i) using airborne or satellite imagery; (ii) using three-dimensional information; and (iii) combining both data sources.
Aerial images have traditionally been used to extract buildings for mapping applications. With the successive launching of high spatial resolution commercial satellites, high-resolution satellite imagery has become a cost-effective alternative to aerial photography for several applications [2]. Automatic building detection by means of high resolution imagery presents difficulties for geometric reasons, since buildings can constitute complex structures that create abrupt height discontinuities, occlusions and shadows [3], and for radiometric reasons, since different material combinations can create a variety of intensity values in the spectral bands employed [4]. Two main groups of building detection techniques using high resolution imagery can be considered [3]: low-level and high-level vision techniques. Low-level vision techniques are mainly based on edge detection and extraction from images, followed by the definition of rules and hypotheses in order to identify the buildings. High-level vision techniques try to imitate the human cognition process and decision-making skills based on the analysis of the information. Pattern and object recognition, and image classification, are common high-level vision techniques.
Since buildings grouped in high-density urban or industrial areas usually have regular patterns and directionality, rather than a random distribution, low-level vision techniques were initially considered more adequate and straightforward to address the problem of building detection [5]. However, many of the low-level vision techniques are strongly restricted, making a priori assumptions such as that buildings have rectangular shapes, flat roofs or specific spectral responses, or that they are arranged following a particular orientation. Previous research defined a series of rules that buildings should fulfill [6]. Similar approaches were used by [3,7,8,9], who detected edges and analyzed their mutual relationships to define building existence hypotheses. Some authors proposed the use of transforms between image representation spaces, such as Fourier [5] or Hough [10,11]. In general, low-level vision techniques have the advantage of a relatively simple design and low computational cost, but they lack robustness due to their inherent methodological restrictions.
A group of interim approaches can be placed between low and high-level vision techniques, such as those based on active contour models. Thus, Mayunga et al. [12] present a semiautomatic system that requires the manual definition of a seed point inside each building. Theng [13] fully automates that system by adding an automatic corner detector that initializes the active contour model. Ahmady et al. [14] and Ge et al. [15] initialize the contours using regularly distributed circumferences that progressively merge after successive iterations. Ok [16] applies the active contours technique specifically to simplify and merge the detected edges.
Most high-level vision techniques for building detection are based on image classification. Although some methodologies have been proposed to detect buildings using pixel-based classifications [17,18,19,20], in the majority of the studies the buildings are considered as objects, and automatic segmentation methods based on image homogeneity are used to create the image-objects. Regarding the classification process, image-objects are mainly characterized using descriptive features based on the spectral response, the image texture, or the shape of the objects [2,21,22,23,24,25,26,27,28], or even using features derived from the wavelet transform [29,30]. Some authors establish contextual relationships between objects and sub-objects by using multi-scale segmentation techniques [31,32]. Others include ancillary data such as road maps [21]. In many cases, standard morphological filtering techniques, or even the Hough transform, are applied at the end of the process to generalize and smooth the shape of those objects classified as buildings [2,33,34,35]. Other high-level vision techniques have been used to detect buildings, but they generally offer less satisfactory results than those based on classification, being usually focused on solving uncommon or particular cases. Thus, L’Homme et al. [36] propose a simple method based on the extraction of the central positions of the buildings using the variance of the Grey Level Co-occurrence Matrix (GLCM). Similarly, the methodology presented by [37,38,39,40] assumes that buildings are regularly arranged following a unique orientation, having higher intensity values than the background.
Building detection techniques based on images are generally improved by adding some contextual information, such as shadows. According to [38], height is a common feature of buildings, and the projected shadows allow defining a context for building detection. In general, authors condition the existence of a building to the proximity or contiguity of shadows, following the direction defined by the solar incidence angle [2,6,21,25,37,38,39,40,41]. In spite of the fact that shadows may have different origins and properties, which makes their detection more difficult [42], it is frequently easier to extract them than to directly analyze the buildings that cast them [6]. In practice, most authors use thresholding methods with very positive results, using the panchromatic band [6,26,43,44] or the intensity channel [25]. Some attempts have been made to automatically define the optimum threshold value. Thus, Scanlan et al. [42] established the threshold value by dividing the image into tiles and comparing the mean value of each tile with the median value of the whole image. Tsai [45] recursively divided the image until each tile presented a bi-modal histogram of grey levels. Other more complex approaches have been applied, based on pixel classification [31], the differential morphological profile method [37,38,39,40], color space transformations [46] or digital model analysis [35].
The problem of automated building detection using only three-dimensional data sources has been the focus of a large number of studies. Two main data sources have usually been employed: photogrammetric restitution and, more recently, airborne laser scanning (ALS). In general, the methodologies proposed are less complex than when only images are used, and very often they are not statistically evaluated, since they constitute a transitional step for the representation of virtual 3D environments. The methodological approaches depend on the type of data source used. Thus, most of the studies that use digital surface models (DSM) obtained from photogrammetric techniques are based on thresholding the normalized DSM (nDSM) to differentiate between buildings and vegetation [47,48,49,50,51,52]. However, when the source of data is ALS, automatic segmentation techniques are normally used, and the resulting objects are classified according to their size, shape, height or rugosity [53,54,55,56,57]. Many of the errors produced using this approach are due to the misclassification of vegetation as buildings, and are partially corrected by the application of morphological filters and height or size thresholds. A similar approach is used by [58,59,60]. Other methodologies can be found in [61,62,63,64].
A different group of building detection and location techniques are those based on the combination of imagery and three-dimensional data. The use of two complementary sources of data usually improves the results. Most of these studies are based on two main approaches: object-based classification and thresholding-based detection. In the first approach, image-objects are created using automatic segmentation techniques. Then, the objects are characterized by means of spectral, shape and height features [65,66,67,68,69,70,71,72]. Approaches based on thresholding consist of the application of a threshold value to the nDSM to discriminate buildings and vegetation, combined with a threshold applied to the normalized difference vegetation index (NDVI) image in order to mask the vegetation. Then, buildings can be accepted or rejected according to different conditions, such as size and shape [73,74,75,76], spectral values [77] or texture features [78,79,80]. In addition to these two main approaches, some other specific methods have been reported. Cheng et al. [81] propose a single threshold applied to the nDSM, and then a correction of the shape of the detected building based on alignments extracted from the imagery. Paparoditis [82] offered a low-level vision solution that effectively integrates bi-dimensional and three-dimensional information by detecting and analyzing edges in both data sources. Guo and Yasuoka [83] used height data to initialize active contours, and spectral information to extract the buildings.
In this paper, two adapted versions of the two main approaches to detect and localize buildings based on images and 3D LiDAR data are compared and evaluated. The main objectives of this study are: to critically compare the performance of two building detection approaches, those based on thresholding and those based on object-based classification; and to evaluate the effect of the inclusion of some contextual relations with the shadows on building detection. The evaluation tests are carried out over different urban scenarios, and not only detection, but also location of buildings is considered.

2. Data and Study Area

Three study areas were defined, all in the province of Valencia (Spain), corresponding to the municipalities of Moncada, Oliva and Sagunto (see locations in Figure 1). Moncada is characterized by an urban center surrounded by large areas of suburban neighborhoods occupied by detached and semi-detached houses, together with some industrial areas. Oliva is a coastal tourist town presenting a residential strip with high apartment buildings, detached and semi-detached houses, and traditional village houses. The Sagunto area is divided into two main dense urban centers and several large industrial areas.
Figure 1. Location of the province of Valencia in Spain (left) and distribution of the three study areas (right).
Table 1 shows some technical specifications of the data used in the tests. QuickBird satellite images were used for Moncada, and aerial orthophotographs acquired as part of the Spanish Programme of Aerial Orthophotography (PNOA) were used for Oliva and Sagunto. LiDAR data were collected using different flights and sensors, and they have a variety of nominal density values ranging from 0.5 to 2 points/m2.
Table 1. Data technical details.
Study area | Imagery | LiDAR
Moncada | QuickBird. Acquired: February 2004. Spatial resolution: 0.6 m/pixel. Spectral bands: IR, R, G, B | Sensor: Optech ALTM 2033. Acquired: December 2003. Density: 1 point/m2
Oliva | Aerial orthophoto. Acquired: June 2006. Spatial resolution: 0.5 m/pixel. Spectral bands: IR, R, G | Sensor: Optech ALTM 3025. Acquired: September 2009. Density: 2 points/m2
Sagunto | Aerial orthophoto. Acquired: June 2006. Spatial resolution: 0.5 m/pixel. Spectral bands: IR, R, G | Sensor: RIEGL LMS-Q680. Acquired: August 2009. Density: 0.5 points/m2
A digital terrain model (DTM) was created from the LiDAR data by eliminating points belonging to aboveground objects, such as vegetation or buildings. An iterative algorithm was used, consisting of selecting minimum height points in a series of progressively smaller windows. First, a coarse DTM is computed using a large window size. Afterwards, it is refined by adding new minimum height points chosen using smaller windows. These are compared to the initial DTM, and a height threshold allows removing the non-ground points in each iteration. The methodology used is fully described in [84]. In addition, a DSM was calculated considering the highest points. The difference between DSM and DTM produces the nDSM, which describes the height of the objects aboveground.
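The iterative minimum-window filtering described above can be sketched on a raster of lowest-return heights. The window sizes and the 1 m height threshold below are illustrative assumptions only, not the values used in [84]:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def ground_surface(dsm, window_sizes=(64, 32, 16, 8), height_threshold=1.0):
    """Iterative DTM generation by minimum filtering with shrinking windows."""
    # Coarse DTM: minimum heights found within a large window.
    dtm = minimum_filter(dsm, size=window_sizes[0])
    for size in window_sizes[1:]:
        # New minimum-height candidates from a smaller window.
        candidate = minimum_filter(dsm, size=size)
        # Accept a refinement only where it stays within the height
        # threshold of the current DTM, rejecting roof and canopy points.
        dtm = np.where(candidate - dtm <= height_threshold, candidate, dtm)
    return dtm

# Subtracting the DTM from the DSM then yields the nDSM used later on:
# ndsm = dsm - ground_surface(dsm)
```

With a small window, the minimum inside a large roof no longer reaches the ground, so the height-threshold test is what keeps such points out of the terrain model.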

3. Methodology

This section describes two adapted approaches for building detection and location when imagery and three-dimensional information are available: thresholding-based and object-based classification. In addition, a shadow detection methodology followed by the definition of contextual relationships is described. This method is used to improve the building detection process. At the end of this section, the evaluation metrics employed are described. Quality assessment has been performed at two different levels: area and object. Area-level evaluation assesses the building delineation performance; object-level evaluation assesses the accuracy in the spatial location of individual buildings.

3.1. Thresholding-Based Approach

The adapted thresholding-based building detection approach is founded on the establishment of two threshold values: one refers to the minimum height to be considered as a building, applied over the nDSM, and the other refers to the presence of vegetation, defined as a minimum value of NDVI obtained from the image bands (Figure 2). The NDVI threshold value can be determined by trial and error, or in a semi-automatic manner by collecting samples of two classes: vegetation and non-vegetation. Both classes are parameterized using their mean and standard deviation values, assuming a normal distribution of their histograms. The threshold value is defined as the point where the normal curves of both classes intersect. The binary images produced in the thresholding step are smoothed by means of morphological opening and closing filters, and small objects are eliminated in order to remove noise. As a last step, the vegetation and height binary images are intersected to produce a mask of detected buildings. Several variations of the thresholding-based approach have been presented in the literature as a last step for building definition, including spectral or texture analysis techniques. For comparison purposes, only the size is considered here as the final condition for building definition.
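A minimal sketch of this approach follows. The Gaussian-intersection rule for the semi-automatic threshold comes from equating the two class-conditional normal densities, which gives a quadratic in the intensity value; the default height and NDVI thresholds are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing, label

def gaussian_intersection(m1, s1, m2, s2):
    """Threshold where two class-conditional normal curves cross.

    Equating N(x; m1, s1) and N(x; m2, s2) yields a*x^2 + b*x + c = 0;
    the root lying between the two class means is returned.
    """
    a = 1 / (2 * s1 ** 2) - 1 / (2 * s2 ** 2)
    b = m2 / s2 ** 2 - m1 / s1 ** 2
    c = m1 ** 2 / (2 * s1 ** 2) - m2 ** 2 / (2 * s2 ** 2) + np.log(s1 / s2)
    lo, hi = sorted((m1, m2))
    return next(r.real for r in np.roots([a, b, c]) if lo <= r.real <= hi)

def building_mask(ndsm, ndvi, h_min=2.5, ndvi_thr=0.3, min_size_px=0):
    """Two-threshold detector: tall enough AND not vegetated."""
    mask = (ndsm >= h_min) & (ndvi < ndvi_thr)
    # Morphological opening and closing smooth the binary image.
    mask = binary_closing(binary_opening(mask))
    if min_size_px:
        # Remove connected components smaller than min_size_px pixels.
        lbl, _ = label(mask)
        sizes = np.bincount(lbl.ravel())
        mask = sizes[lbl] >= min_size_px
        mask[lbl == 0] = False
    return mask
```

For two equally wide classes the intersection reduces to the midpoint of the means, which is a quick sanity check on the quadratic.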
Figure 2. Scheme of the thresholding-based building detection approach.

3.2. Object-Based Classification Approach

The object-based classification approach follows the standard scheme of object-based image classification (Figure 3): segmentation, feature extraction and selection, and classification. In order to create meaningful image objects for building detection purposes, a region growing segmentation method is performed over the nDSM. As a building can present different spectral responses, adding spectral information to the segmentation would produce an over-segmentation of the image and make the classification difficult, so the spectral bands were not used in the segmentation.
Figure 3. Scheme of the object-based classification building detection approach.
The produced objects were characterized for classification using four different groups of descriptive features: spectral, texture, shape and height, computed by means of the object-based image analysis software FETEX 2.0 [85]. Spectral features provide information about the intensity values of the objects in the different spectral bands. Texture features provide information about the spatial distribution of the intensity values in the object analyzed. Histogram-based features (kurtosis and skewness) and texture features from the grey level co-occurrence matrix (GLCM) [86] were extracted, as well as the mean and the standard deviation of the edgeness factor [87], which represents the density of edges in the neighborhood of a pixel. Shape features provide information about the dimensions of the objects and the complexity of their contours, described by means of the area, perimeter, compactness, shape index and fractal dimension features. Height features provide three-dimensional information about the parcel: the average, maximum and standard deviation values of the height, derived from the nDSM, were computed. Due to the large number of features defined (27), some may be redundant in terms of efficient description of the objects, and their inclusion can introduce noise in the classification. Therefore, as a first step, studying the relationships between the features and their contribution to the classification accuracy is essential. A pre-selection was made to discard correlated variables, by analyzing the correlation values and by grouping the variables using principal components analysis, which helps to understand the links between variables in terms of redundancies. Then, stepwise linear discriminant analysis was used to determine the significance of the remaining features for the particular classification problem. This was done independently for each study area.
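As an illustration, a few of the histogram and GLCM descriptors named above can be computed with plain NumPy. The feature subset and the single 1-pixel horizontal offset are simplifications of the full FETEX 2.0 feature set, which also includes shape, height and edgeness features:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def glcm_horizontal(band, levels=256):
    """Normalized grey level co-occurrence matrix, 1-pixel horizontal offset."""
    left, right = band[:, :-1].ravel(), band[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (left, right), 1)
    return glcm / glcm.sum()

def object_features(band):
    """Small subset of the descriptive features used in the text.

    band: 2D integer intensity array clipped to one image-object.
    """
    pixels = band.ravel().astype(float)
    glcm = glcm_horizontal(band)
    i, j = np.indices(glcm.shape)
    return {
        "mean": pixels.mean(),
        "std": pixels.std(),
        "skewness": skew(pixels),                 # histogram-based
        "kurtosis": kurtosis(pixels),             # histogram-based
        "glcm_contrast": ((i - j) ** 2 * glcm).sum(),
        "glcm_homogeneity": (glcm / (1.0 + np.abs(i - j))).sum(),
    }
```

Each object's feature dictionary would then form one row of the table fed to the feature selection and classification steps.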
Classification was performed using decision trees built with the C5.0 algorithm [88] and the boosting multi-classifier method [89]. This algorithm searches for the features that best separate one class from the others by dividing the data using mutually exclusive conditions, until the newly generated subgroups are homogeneous, i.e., all the elements in a subgroup belong to the same class, or a stopping condition is satisfied. Different classes were defined in each study area, considering their respective variety in building typologies. Then, contiguous objects classified as buildings were merged and very small objects were eliminated.
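The C5.0 implementation is proprietary, but the same idea, boosted decision trees over per-object features, can be sketched with scikit-learn, using AdaBoost over CART trees as a stand-in. The three-column feature table below (height, NDVI, area) is a hypothetical example, not the 27-feature set of the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

# Hypothetical feature matrix: rows are image-objects, columns are
# [mean height (m), mean NDVI, area (m^2)]; labels: 1 = building, 0 = other.
rng = np.random.default_rng(0)
n = 200
height = np.concatenate([rng.uniform(3, 15, n), rng.uniform(0, 2.5, n)])
ndvi = np.concatenate([rng.uniform(-0.2, 0.2, n), rng.uniform(0.3, 0.8, n)])
area = rng.uniform(20, 500, 2 * n)
X = np.column_stack([height, ndvi, area])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Boosted ensemble of shallow CART trees, standing in for boosted C5.0.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=25)
clf.fit(X, y)
train_acc = clf.score(X, y)
```

Overall accuracy would, as in the paper, be estimated by cross-validation rather than on the training data used here.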

3.3. Shadow Detection and Contextual Relationships Definition

A common characteristic of urban areas in images is the presence of shadows, which depend on the height of the buildings and on the illumination conditions. This fact, which is initially a limitation for the spectral characterization of buildings, can become an additional source of information if the shadows are properly detected and related to their respective buildings. Many authors assume a bimodal behavior of the histogram of images containing shadows, and consequently try to automatically find the valley (the lowest histogram frequency values) that divides the shadowed areas from the illuminated areas. The amount of shadow in a high resolution image depends on the azimuth and solar elevation angles and the sensor observation angle [37], but also on the proportion of objects able to project shadows. However, even when some shadows are present in the image, the histogram may not be clearly bimodal, making their detection quite difficult from this frequency graph alone. Consequently, we used a semi-automatic methodology that requires manual sample selection. It consists of defining a threshold value on the first principal component of the original bands, by selecting samples of shadowed and illuminated areas. Both intensity distributions are then modeled by Gaussian curves, and the threshold value is determined as the intersection of both curves. The binary image obtained is cleaned and smoothed by means of morphological filters. Shadows are used to contextualize the adjacent buildings in the direction of the solar incidence angle on each image. This is used as a restrictive condition, so that every object detected as a building but with no shadow projected in the solar incidence direction will not be considered as a building, i.e., it will be excluded.
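The restrictive condition can be sketched as follows. The pixel offset, the azimuth convention (clockwise from image north) and the single-offset overlap test are simplifying assumptions; the paper only specifies that the shadow must lie in the solar incidence direction:

```python
import numpy as np
from scipy.ndimage import label, shift as nd_shift

def filter_by_shadow(buildings, shadows, sun_azimuth_deg, offset_px=3):
    """Keep only building candidates with a shadow on their sun-opposite side.

    buildings, shadows: boolean masks. sun_azimuth_deg is assumed to be
    measured clockwise from image north (up); offset_px is an illustrative
    search distance, not a value from the paper.
    """
    az = np.deg2rad(sun_azimuth_deg)
    # Shadows are cast away from the sun: displacement in image coordinates.
    dr = offset_px * np.cos(az)    # +rows point down (south)
    dc = -offset_px * np.sin(az)   # +cols point right (east)
    # Align the shadow mask back onto the buildings that would cast it.
    aligned = nd_shift(shadows.astype(float), (-dr, -dc), order=0) > 0.5
    lbl, n = label(buildings)
    keep = np.zeros_like(buildings)
    for i in range(1, n + 1):
        region = lbl == i
        if (region & aligned).any():  # shadow found at the expected offset
            keep |= region
    return keep
```

A candidate with no shadow at the expected offset, for example a low paved area misdetected as a building, is discarded as a whole connected component.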

3.4. Quality Assessment

The evaluation of the described approaches for building detection was carried out on two levels: area or pixel level; and object level. Reference buildings were manually delineated by direct photointerpretation of the images. Due to the different acquisition dates of LiDAR data and imagery, the selection of the reference areas for evaluation purposes was done ensuring that those areas were not affected by changes.
In urban areas, diverse types of buildings with different uses and characteristics coexist. In order to evaluate the performance of the building detection approaches in different building scenarios, three building types were defined for the study areas of Moncada and Sagunto: urban, suburban and industrial; and two for the study area of Oliva: urban and suburban. Urban buildings (Figure 4, left) are located in city and town centers and are designed for residential and commercial purposes. These are diverse structures located in blocks and surrounded by urban elements such as roads, artificial green areas or car parks. Suburban areas (Figure 4, center) are normally composed of abundant vegetation zones and detached and semi-detached buildings that accommodate single families. Industrial buildings (Figure 4, right) are characterized by their large dimensions, and they are used to manufacture, transform, repair, store and distribute products.
Figure 4. Examples of the building types defined: urban (left), suburban (center), and industrial (right). All the images belong to the study area of Sagunto.
The evaluation at area level has been performed using a series of statistical parameters defined by McGlone and Shufelt [90] that have been repeatedly referred to in the literature [8,10,19,25,27,38,40,69,91,92]. Detected and reference buildings are spatially compared, and areas are categorized into four cases (see Figure 5): true positive (TP), true negative (TN), false positive (FP) and false negative (FN). TP represents those areas containing both detected and reference buildings. TN represents areas without reference or detected buildings. FP represents areas containing detected buildings but without reference buildings. FN represents undetected building areas.
Figure 5. Cases for evaluation of building detection at area level.
Using these cases, the following area level quality metrics are defined: The branching factor (Equation (1)) is a measure of the degree to which a system over-detects as buildings non-built areas. The more accurate the detection, the closer the value is to zero. The miss factor (Equation (2)) indicates the omission error committed on the detection of built areas. These quality metrics are closely related to the boundary delineation performance of the building extraction system. The building detection percentage (Equation (3)) represents the percentage of reference built areas that is automatically detected. The quality percentage (Equation (4)) measures the absolute quality of the detection model by combining aspects of both boundary delineation accuracy and building detection rate to summarize the system performance.
Branching factor = FP / TP (1)
Miss factor = FN / TP (2)
Detection percentage = 100 · TP / (TP + FN) (3)
Quality percentage = 100 · TP / (TP + FP + FN) (4)
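Given boolean detection and reference masks, the four cases and Equations (1) to (4) can be computed directly; this is a straightforward transcription of the formulas, with no assumed parameters:

```python
import numpy as np

def confusion_areas(detected, reference):
    """Pixel counts for the four cases of Figure 5, from boolean masks."""
    tp = int(np.sum(detected & reference))
    fp = int(np.sum(detected & ~reference))
    fn = int(np.sum(~detected & reference))
    tn = int(np.sum(~detected & ~reference))
    return tp, fp, fn, tn

def area_level_metrics(tp, fp, fn):
    """Area-level quality metrics of Equations (1)-(4)."""
    return {
        "branching_factor": fp / tp,                       # Equation (1)
        "miss_factor": fn / tp,                            # Equation (2)
        "detection_percentage": 100 * tp / (tp + fn),      # Equation (3)
        "quality_percentage": 100 * tp / (tp + fp + fn),   # Equation (4)
    }
```

Note that TN does not enter any of the four metrics: the large non-built background would otherwise dominate the scores.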
The object-level assessment estimates the spatial correspondence between reference and detected buildings, and has been employed in several studies [17,26,54,70,93]. Building detection methodologies with low performance at pixel level and high performance at object level are inappropriate for automatic building delineation. However, these procedures may be suitable for the identification and spatial location of buildings, considering a subsequent manual delineation by means of photointerpretation techniques. Two metrics were considered for object-level evaluation: correctness and completeness, which reveal the errors of commission and omission, respectively. The correctness value indicates the percentage of detected building objects that at least partially overlap the reference buildings. The completeness value refers to the percentage of reference buildings overlapping the detected buildings.
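A sketch of the object-level metrics, using connected-component labeling and the partial-overlap matching criterion described above (any overlap counts as a match, taken literally from the text):

```python
import numpy as np
from scipy.ndimage import label

def object_level_metrics(detected, reference):
    """Correctness and completeness (%) by partial-overlap matching."""
    det_lbl, n_det = label(detected)
    ref_lbl, n_ref = label(reference)
    # A detected object is correct if it overlaps any reference building.
    det_hit = np.unique(det_lbl[reference & (det_lbl > 0)])
    # A reference building is complete if any detected object overlaps it.
    ref_hit = np.unique(ref_lbl[detected & (ref_lbl > 0)])
    correctness = 100 * len(det_hit) / n_det
    completeness = 100 * len(ref_hit) / n_ref
    return correctness, completeness
```

Because a single overlapping pixel counts as a match, a tiny spurious detection hurts correctness at object level far more than it hurts the area-level metrics, which is exactly the behavior discussed in the results.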

4. Results and Discussion

4.1. Evaluation of Building Detection Methods

The results of the statistical assessment of the building detection using the thresholding-based approach are shown in Table 2. In general, the results show a high performance for the different building types analyzed.
Detection results obtained for the industrial building type show very low branching and miss factors, meaning that buildings are precisely delineated. Quality percentage values are higher than 90% in both study areas. At object level, completeness values show that all reference buildings are overlapped by detected buildings. On the other hand, correctness values are lower, showing that several small objects have been erroneously detected as buildings. Their very small size barely affects the area-level metrics, but it acts as noise in the object-level parameters. Figure 6(a) shows an image detail illustrating the detection results for industrial type buildings. Generally, these buildings appear properly defined and delimited, while small objects are erroneously detected as buildings due to the presence of elements characteristic of industrial areas, such as trucks or pallets, during the LiDAR data acquisition.
Table 2. Evaluation results for the thresholding-based approach.
                        Industrial        Urban                     Suburban
                        Moncada  Sagunto  Moncada  Oliva  Sagunto   Moncada  Oliva  Sagunto
Area-level
  Branching factor      0.06     0.09     0.12     0.08   0.11      0.37     0.11   0.15
  Miss factor           0.02     0.02     0.04     0.03   0.02      0.14     0.12   0.16
  Detection percentage  97.7     98.2     96.5     97.2   98.1      87.8     89.1   86.3
  Quality percentage    92       90.3     86.6     90.6   88.3      66.1     81.1   76.6
Object-level
  Correctness           72.4     73.3     74.3     71.9   87.2      89.8     100    99.2
  Completeness          100      100      100      100    100       98.4     99.3   95
Figure 6. Detection examples of industrial buildings in Sagunto (a), urban buildings in Oliva (b) and suburban buildings in Moncada (c) using the thresholding-based approach.
Regarding the urban building type, the branching factor values are slightly higher than those obtained for industrial buildings, and the miss factor values are very close to zero. As a result, the quality percentage presents high values for the three studied areas. At object-level, the quality assessment shows similar results to the industrial building type, obtaining values for correctness markedly lower than for completeness. Figure 6(b) also shows some small objects erroneously detected as buildings due to the slight positional differences between LiDAR and image data.
Detection of suburban buildings presents more difficulties, which is reflected in the branching and miss factor results obtained for this urban type. In general, quality percentages vary from 66% to 78%, comparable to results reported by other authors in similar scenarios using LiDAR data and multispectral imagery [69]. Correctness and completeness values are more balanced than those obtained for the industrial and urban building types. As shown in the example of Figure 6(c), most of the buildings are individually detected, but their shape is strongly conditioned by the effect of tree occlusions.
Table 3 shows the overall classification accuracy values for the three study areas, obtained by the cross-validation method. Since the main objective is to evaluate building detection, only two classes were considered: buildings and non-built areas.
Table 3. Overall classification accuracies obtained in the three study areas.
Study area   Overall accuracy
Moncada      97%
Oliva        95%
Sagunto      98%
Table 4 shows the results of the statistical assessment of building detection using the object-based classification approach. Area-level assessment results obtained for the industrial and urban types present low branching and miss factors, and quality percentage values ranging from 80% to 90%, suggesting that this method is suitable for building delineation. At object level, the completeness factor is very high, but the correctness values range from 50% to 60%. These results reveal high commission errors due to the incorrect detection of many small objects, especially vegetation with low infrared response, and other urban elements, as shown in Figure 7(a,b). However, at area level the object-based classification method performs poorly for the suburban building type. Although the results are adequate according to the miss factor, the branching factor values are significantly higher, which means that this method produces an important over-detection of small buildings. At object level, assessment results are more balanced than those obtained for industrial and urban buildings.
Table 4. Results of building detection applying the object-based classification method.
                                    Industrial         Urban                      Suburban
                                    Moncada  Sagunto   Moncada  Oliva   Sagunto   Moncada  Oliva   Sagunto
Area-level    Branching factor      0.08     0.11      0.17     0.06    0.15      0.70     0.20    0.49
              Miss factor           0.04     0.04      0.05     0.10    0.03      0.17     0.16    0.17
              Detection percentage  96.5     96.1      95.1     90.9    97.5      85.6     86.3    85.5
              Quality percentage    89.6     86.8      81.7     86.0    85.5      53.6     73.9    60.4
Object-level  Correctness           59.5     50.0      59.5     51.6    53.8      86.6     93.1    84.1
              Completeness          100      96.4      96.3     100     100       93.0     99.3    95.7
Figure 7. Detection examples of industrial buildings in Sagunto (a), urban buildings in Oliva (b) and suburban buildings in Moncada (c) using the object-based classification approach.
The comparison of the performance of the object-based classification and thresholding-based approaches is presented in Table 5, showing the mean and standard deviation of the accuracy metrics grouped by building type. At area-level, branching and miss factor mean values are similar for industrial and urban buildings. However, the mean quality percentage values are slightly higher when using the thresholding-based approach. The branching factor mean and standard deviation values obtained with the object-based classification approach for suburban buildings double those obtained using the thresholding-based approach. Objects detected with the object-based classification approach frequently contain several actual buildings, due to the difficulty of independently framing suburban buildings in the segmentation step. The quality percentage is also significantly higher in the case of the thresholding-based approach. Object-level metrics show a similar completeness value for both methods, but a higher correctness mean value, with a lower standard deviation, for the thresholding-based method. The better mean values indicate that the thresholding-based approach is more suitable for building detection in all the analyzed scenarios, while the lower standard deviation values suggest a better robustness of this approach.
Table 5. Comparison of mean (µ) and standard deviation (σ) of building detection metrics for the thresholding-based (Thresh) and object-based classification (Object).
                                    Industrial              Urban                   Suburban
                                    Thresh.     Object      Thresh.     Object      Thresh.     Object
                                    µ     σ     µ     σ     µ     σ     µ     σ     µ     σ     µ     σ
Area-level    Branching factor      0.08  0.02  0.10  0.02  0.10  0.02  0.13  0.06  0.21  0.13  0.46  0.30
              Miss factor           0.02  0     0.04  0     0.03  0.01  0.06  0.04  0.14  0.06  0.17  0.08
              Detection percentage  98    0.4   96.3  0.3   97.2  0.8   94.5  3.3   87.7  5.2   85.8  6.7
              Quality percentage    91.2  1.2   88.2  2     88.5  2     84.4  2.4   74.6  11.1  62.6  16.1
Object-level  Correctness           72.9  0.6   54.8  6.7   77.8  8.2   55.0  4.1   96.3  5.6   87.9  21.1
              Completeness          100   0     98.2  2.5   100   0     98.8  2.1   97.6  2.3   96.0  3.9
Table 6. Comparison of mean (µ) and standard deviation (σ) of building detection assessment metrics with and without using shadow contextual information for the thresholding-based approach.
                                    Industrial                      Urban                           Suburban
                                    Without shadows  With shadows   Without shadows  With shadows   Without shadows  With shadows
                                    µ     σ          µ     σ        µ     σ          µ     σ        µ     σ          µ     σ
Area-level    Branching factor      0.08  0.02       0.07  0.01     0.10  0.02       0.10  0.02     0.21  0.13       0.20  0.14
              Miss factor           0.02  0          0.02  0        0.03  0.01       0.03  0.01     0.14  0.06       0.30  0.13
              Detection percentage  98    0.4        98    0.4      97.2  0.8        96.8  0.3      87.7  5.2        77.7  8.3
              Quality percentage    91.2  1.2        91.4  0.9      88.5  2          88.4  1.4      74.6  11.1       66.8  2.1
Object-level  Correctness           72.9  0.6        76.8  2.5      77.8  8.2        83.4  11.2     96.3  5.6        97.7  3.2
              Completeness          100   0          100   0        100   0          99.1  1.5      97.6  2.3        82.5  11
Table 7. Comparison of mean (µ) and standard deviation (σ) of building detection assessment metrics with and without using shadow contextual information for the object-oriented classification approach.
                                    Industrial                      Urban                           Suburban
                                    Without shadows  With shadows   Without shadows  With shadows   Without shadows  With shadows
                                    µ     σ          µ     σ        µ     σ          µ     σ        µ     σ          µ     σ
Area-level    Branching factor      0.10  0.02       0.09  0.01     0.13  0.06       0.12  0.06     0.46  0.30       0.45  0.31
              Miss factor           0.04  0          0.04  0.00     0.06  0.04       0.06  0.04     0.17  0.08       0.20  0.08
              Detection percentage  96.3  0.3        96.3  0.3      94.5  3.3        94.5  3.3      85.8  6.7        83.2  7.1
              Quality percentage    88.2  2          88.7  1.3      84.4  2.4        84.6  2.5      62.6  16.1       61.5  16.2
Object-level  Correctness           54.8  6.7        57.3  3.2      55.0  4.1        59.3  1.0      87.9  21.1       92.7  21.0
              Completeness          98.2  2.5        98.2  2.5      98.8  2.1        98.8  2.1      96.0  3.9        91.7  3.6

4.2. Assessment of Contextual Information Derived from Shadows

The use of contextual information derived from shadows in the building detection process has a restrictive effect, especially remarkable in the case of suburban buildings. The presence of small buildings and trees makes the accurate detection of shadows difficult and, consequently, actual buildings are erroneously removed after being detected. As a result, the miss factor values generally increase, and the detection, quality, and completeness percentages are reduced, which is more significant for the thresholding-based approach (Table 6 and Table 7) (Figure 8, center). Industrial and urban buildings barely show variations in the area-level metrics when shadow contextual relationships are included, due to the typically large size of these types of buildings. The object-level correctness values usually increase for all building types, due to the removal of small objects that are incorrectly detected as buildings and act as noise (see Figure 8, left and right). Therefore, the use of shadows involves a trade-off between the reduction of false positives in urban and industrial areas and the increase of false negatives in suburban areas. In general, the results show that, even though shadows have frequently and successfully been employed to improve building detection in images, their positive effect becomes limited when three-dimensional information is also used.
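The restrictive effect of the shadow rule can be illustrated with a simplified sketch (not the authors' exact implementation): a detected building is kept only when a shadow region lies roughly on its sun-opposite side within a search distance. All names and thresholds below are illustrative assumptions:

```python
import math

def filter_by_shadow(buildings, shadows, sun_azimuth_deg, max_dist):
    """buildings, shadows: lists of (x, y) centroids in map units
    (x east, y north). sun_azimuth_deg: solar azimuth, clockwise
    from north. A building is kept if some shadow centroid lies
    within max_dist, roughly in the direction its shadow is cast."""
    az = math.radians(sun_azimuth_deg)
    # Unit vector pointing away from the sun (shadow direction).
    dx, dy = -math.sin(az), -math.cos(az)
    kept = []
    for bx, by in buildings:
        for sx, sy in shadows:
            vx, vy = sx - bx, sy - by
            dist = math.hypot(vx, vy)
            # The shadow must be close and its direction must agree
            # (cosine of the angle above an arbitrary 0.7 threshold).
            if 0 < dist <= max_dist and (vx * dx + vy * dy) / dist > 0.7:
                kept.append((bx, by))
                break
    return kept
```

A small building whose shadow is hidden by an adjacent tree fails this test and is removed even though it was correctly detected, which is precisely the suburban false-negative behavior described above.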
Figure 8. Detection examples of urban buildings in Sagunto (left) and suburban buildings in Moncada (center) and Oliva (right), showing differences due to the use of shadow contextual information.

5. Conclusions

This paper presents a quality assessment of two main approaches for building detection based on the use of high resolution imagery and low density airborne laser scanner data: thresholding-based and object-based classification. Methodological parameters of both approaches have been adapted to the study areas, in this case three different scenarios: urban, suburban and industrial buildings. The evaluation results have been obtained according to two criteria: area level and object level.
In general, results show that the thresholding-based building detection approach performs better in the different scenarios analyzed, especially in peri-urban areas. This method produces a more accurate building delineation and object detection than the object-based classification method. Moreover, the object-based classification method presents a higher procedural complexity, due to the larger number of parameters to be defined and steps to be carried out: type of data used in the segmentation, definition of classes, selection of training samples, extraction and selection of descriptive features, and selection and application of the classification algorithm. The thresholding-based approach only requires the definition of two threshold values: the minimum height from the LiDAR data and the maximum NDVI from the spectral data. The building type is a key factor in building detection performance. Thus, urban and industrial areas show better detection metrics than suburban areas, due to the small size of suburban constructions combined with the prominent presence of trees, which makes building detection more difficult. The contextual relations between buildings and shadows improve object-level detection, removing small objects erroneously detected as buildings that negatively affect the correctness index. However, at area-level, their use generates building under-detection, especially in suburban areas.
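The simplicity of the thresholding-based approach can be seen by sketching its per-pixel rule: tall enough according to the LiDAR-derived normalized DSM, and not vegetated according to NDVI. A minimal version assuming NumPy arrays for the inputs; the threshold values shown are illustrative, since in practice they are tuned per study area:

```python
import numpy as np

def detect_buildings(ndsm, nir, red, min_height=2.5, max_ndvi=0.1):
    """Per-pixel building mask from the two-threshold rule:
    ndsm = normalized DSM (height above ground, from LiDAR),
    nir, red = near-infrared and red reflectance bands."""
    ndvi = (nir - red) / (nir + red + 1e-9)   # avoid division by zero
    return (ndsm >= min_height) & (ndvi <= max_ndvi)

# Three pixels: a roof, a tall tree, and bare ground.
mask = detect_buildings(
    ndsm=np.array([5.0, 6.0, 0.2]),
    nir=np.array([0.50, 0.90, 0.40]),
    red=np.array([0.45, 0.20, 0.38]),
)
```

The tree pixel is tall but rejected by the NDVI test, and the ground pixel is rejected by the height test; only the roof pixel survives both.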
The results obtained show the high efficiency of the evaluated building detection methods, in particular the thresholding-based approach, when the parameters are properly adjusted and adapted to the type of urban landscape under consideration. These methodologies can be applied to the detection of new buildings for updating urban databases, as well as to the inclusion of this information in urban object classification and the updating of geospatial databases.

Acknowledgements

The authors appreciate the financial support provided by the Spanish Ministry of Science and Innovation and FEDER in the framework of the projects CGL2009-14220 and CGL2010-19591/BTE, and the support of the Spanish Instituto Geográfico Nacional (IGN).

References

  1. Mayer, H. Automatic object extraction from aerial imagery—A survey focusing on buildings. Comput. Vis. Image Understand. 1999, 74, 138–149. [Google Scholar] [CrossRef]
  2. Wei, Y.; Zhao, Z.; Song, J. Urban Building Extraction from High-Resolution Satellite Panchromatic Image Using Clustering and Edge Detection. In Proceedings of IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 20–24 September 2004; Volume 7, pp. 2008–2010.
  3. Kim, T.; Muller, J. Development of a graph based approach for building detection. Image Vis. Comput. 1999, 17, 3–14. [Google Scholar] [CrossRef]
  4. Donnay, J.P.; Barnsley, M.J.; Longley, P.A. Remote sensing and urban analysis. In Remote Sensing and Urban Analysis; Donnay, J.P., Barnsley, M.J., Longley, P.A., Eds.; Taylor and Francis: London, UK, 2001; pp. 3–18. [Google Scholar]
  5. Sohn, G.; Dowman, I. Extraction of buildings from high resolution satellite data. In Automated Extraction of Man-Made Object from Aerial and Space Images (III); Baltsavias, E., Gruen, A., Van Gool, L., Eds.; Balkema Publishers: Lisse, The Netherlands, 2001; pp. 345–355. [Google Scholar]
  6. Irvin, R.B.; McKeown, D.M., Jr. Methods for exploiting the relationship between buildings and their shadows in aerial imagery. IEEE Trans. Syst. Man Cybern. 1989, 19, 1564–1575. [Google Scholar] [CrossRef]
  7. Lin, C.; Huertas, A.; Nevatia, R. Detection of Buildings Using Perceptual Grouping and Shadows. In Proceedings of 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 62–69.
  8. Lin, C.; Nevatia, R. Building detection and description from a single intensity image. Comput. Vis. Image Understand. 1998, 72, 101–121. [Google Scholar] [CrossRef]
  9. Katartzis, A.; Sahli, H. A stochastic framework for the identification of building rooftops using a single remote sensing image. IEEE Trans. Geosci. Remote Sens. 2008, 46, 259–271. [Google Scholar] [CrossRef]
  10. Kim, T.; Lee, T.Y.; Lim, Y.J.; Kim, K.O. The Use of Voting Strategy for Building Extraction from High Resolution Satellite Images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 25–29 July 2005; pp. 1269–1272.
  11. Cui, S.Y.; Yam, Q.; Liu, Z.J.; Li, M. Building Detection and Recognition from High Resolution Remotely Sensed Imagery. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B3b. pp. 411–416.
  12. Mayunga, S.D.; Zhang, Y.; Coleman, D.J. Semi-Automatic Building Extraction Utilizing Quickbird Imagery. In Proceedings of the ISPRS Working Groups III/4-5 and IV/3: “CMRT 2005”, Vienna, Austria, 29–30 August 2005; Volume XXXVI, Part 3/W24. pp. 131–136.
  13. Theng, L.B. Automatic building extraction from satellite. Eng. Lett. 2006, 13, EL_13_3_5. [Google Scholar]
  14. Ahmady, S.; Ebadi, H.; Zouj, M.J.V.; Moghaddam, H.A. Automatic Building Extraction from High Resolution Satellite Images Using Active Contour MDE. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B3b. pp. 453–456.
  15. Ge, Q.Z.; Li, Q.; Ling, Z.C.; Hui, X.X.; Zhang, G. Stochastic Motion and the Level Set Method in Semi-Automatic Building Detection. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B3b. pp. 431–434.
  16. Ok, A.O. Robust Detection of Buildings from a Single Color Aerial Image. In Proceedings of GEOBIA 2008, Calgary, AB, Canada, 5–8 August 2008; Volume XXXVII, Part 4/C1. p. 6.
  17. Olsen, B.P. Automatic Change Detection for Validation of Digital Map Databases. In Proceedings of the XXth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume XXXV, Part B2. pp. 569–574.
  18. Shan, J.; Lee, D.S. Generalization of Building Polygons Extracted from IKONOS Imagery. In Proceedings of the ISPRS Commission IV Symposium “Geospatial Theory, Processing and Applications”, Ottawa, ON, Canada, 9–12 July 2002; Volume XXXIV, Part 4. pp. 286–290.
  19. Lee, D.S.; Shan, J.; Bethel, J. Class-guided building extraction from IKONOS imagery. Photogramm. Eng. Remote Sensing 2003, 69, 143–150. [Google Scholar] [CrossRef]
  20. Lefévre, S.; Weber, J.; Sheeren, D. Automatic Building Extraction in VHR Images using Advanced Morphological Operators. In Proceedings of the Urban Remote Sensing Joint Event, Paris, France, 11–13 April 2007; pp. 1–5.
  21. Stassopoulou, A.; Caelli, T. Building detection using Bayesian networks. Int. J. Pattern Recog. Artif. Intell. 2000, 14, 715–733. [Google Scholar] [CrossRef]
  22. Kundsen, T.; Nielsen, A.A. Detection of Buildings Through Multivariate Analysis of Spectral, Textural, and Shape Based Features. In Proceedings of IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 20–24 September 2004; Volume 7, pp. 2830–2833.
  23. Duan, J.; Prinet, V.; Lu, H. Building Extraction in Urban Areas from Satellite Images Using GIS Data as Prior Information. In Proceedings of IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 20–24 September 2004; Volume 7, pp. 4762–4764.
  24. Marangoz, A.M.; Oruç, M.; Büyüksalih, G. Object-Oriented Image Analysis and Semantic Network for Extracting the Roads and Buildings From Ikonos Pan-sharpened Images. In Proceedings of the XXth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume XXXV, Part B3. p. 4.
  25. Müller, S.; Zaum, D.W. Robust Building Detection in Aerial Images. In Proceedings of the ISPRS Working Group III/4-5 and IV/3: “CMRT 2005”, Vienna, Austria, 29–30 August 2005; Volume XXXVI, Part 3/W24. pp. 143–148.
  26. Liu, W.; Prinet, V. Building Detection from High-Resolution Satellite Image Using Probability Model. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 25–29 July 2005; pp. 3888–3891.
  27. Lari, Z.; Ebadi, H. Automatic Extraction of Building Features from High Resolution Satellite Images Using Artificial Neural Networks. In Proceedings of the ISPRS Working Group I/5 and IV/3: “High Resolution Earth Imaging for Geospatial Information”, Hannover, Germany, 29 May–1 June 2007; Volume XXXVI, Part 1/W51. p. 6.
  28. Sumer, E.; Turker, M. Building Detection from High-Resolution Satellite Imagery Using Adaptive Fuzzy-Genetic Approach. In Proceedings of GEOBIA 2008, Calgary, AB, Canada, 5–8 August 2008; Volume XXXVIII, Part 4/C1. p. 6.
  29. Bellman, C.; Shortis, M. Building recognition using wavelet analysis and support vector machines. In Spatial Knowledge without Boundaries; Lees, B., Ed.; Spatial Sciences Institute: Canberra, Australia, 2003. [Google Scholar]
  30. Lin, Y.; Zhang, S.; Xie, F.; Chen, Y. The Construction of Anti-Symmetrical Wavelet and Building Extraction from Remote Sensing Imagery. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B3b. pp. 467–472.
  31. Vu, T.T.; Matsuoka, M.; Yamazaki, F. Shadow Analysis in Assisting Damage Detection due to Earthquakes from Quickbird Imagery. In Proceedings of the XXth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume XXXV, pp. 607–610.
  32. Hofmann, P.; Strobl, J.; Blaschke, T.; Kux, H. Detecting Informal Settlements from QuickBird Data in Rio de Janeiro Using an Object Based Approach. In Proceedings of the Conference on Object-Based Image Analysis (OBIA06), Salzburg, Austria, 4–5 July 2006; Volume XXXVI, Part 4/C42. p. 6.
  33. Tian, J.; Wang, J.; Shi, P. Urban Building Boundary Extraction from IKONOS Imagery. In Proceedings of The 25th Canadian Symposium on Remote Sensing, Montreal, QC, Canada, 14–16 October 2003; p. 6.
  34. Liu, Z.J.; Wang, J.; Liu, W.P. Building Extraction from High Resolution Imagery Based on Multi-Scale Object Oriented Classification and Probabilistic Hough Transform. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 25–29 July 2005; pp. 2250–2253.
  35. Li, H.Y.; Wang, H.Q.; Ding, C.B. A New Solution of Automatic Building Extraction in Remote Sensing Images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Denver, CO, USA, 31 July–4 August 2006; pp. 3790–3793.
  36. L’Homme, S.; Weber, C.; He, D.C.; Morin, D.; Puissant, A. Building Extraction from Very High Spatial Resolution Image. In Proceedings of the XXth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume XXXV, Part B3. pp. 921–925.
  37. Shackelford, A.K.; Davis, C.H.; Wang, X. Automated 2-D Building Footprint Extraction from High-Resolution Satellite Multispectral Imagery. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 20–24 September 2004; Volume 7, pp. 1996–1999.
  38. Jin, X.; Davis, C.H. Automated building extraction from high-resolution satellite imagery in urban areas using structural, contextual, and spectral information. EURASIP J. Appl. Signal Process. 2005, 14, 2196–2206. [Google Scholar] [CrossRef]
  39. Klaric, M.; Scott, G.; Shyu, C.R.; Davis, C.H. Automated Object Extraction through Simplification of the Differential Morphological Profile for High-Resolution Satellite Imagery. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 25–29 July 2005; pp. 1265–1268.
  40. Shyu, R.; Scott, G.; Klaric, M.; Davis, C.H.; Palaniappan, K. Automatic Object Extraction from Full Differential Morphological Profile in Urban Imagery for Efficient Object Indexing and Retrievals. In Proceedings of the 3rd International Symposium Remote Sensing and Data Fusion Over Urban Areas, Tempe, AZ, USA, 14–16 March 2005; p. 6.
  41. Kim, Z.W.; Nevatia, R. Uncertain reasoning and learning for feature grouping. Comput. Vis. Image Understand. 1999, 73, 278–288. [Google Scholar] [CrossRef]
  42. Scanlan, J.M.; Chabries, D.M.; Christiansen, R.W. A shadow detection and removal algorithm for 2-D images. IEEE Trans. Acoust. Speech Signal Process. 1990, 4, 2057–2060. [Google Scholar]
  43. Madhavan, B.B.; Tachibana, K.; Sasagawa, T.; Okada, H.; Shimozuma, Y. Automatic Extraction of Shadow Regions in High-resolution Ads40 Images. In Proceedings of the XXth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume XXXV, Part B3. pp. 808–810.
  44. Dare, P.M. Shadow analysis in high-resolution satellite imagery of urban areas. Photogramm. Eng. Remote Sensing 2005, 71, 169–177. [Google Scholar] [CrossRef]
  45. Tsai, J.D. Automatic Shadow Detection and Radiometric Restoration on Digital Aerial Images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Toulouse, France, 21–25 July 2003; pp. 732–733.
  46. Ambrosio, G.; Arévalo, V.; González, J.; Valdés, J. Detección de Sombras en Imágenes de Satélite de Alta Resolución QuickBird. In Proceedings of the XXVII Jornadas de Automática, Almería, Spain, 6–9 September 2006; pp. 1306–1313.
  47. Weidner, U.; Förstner, W. Towards automatic building extraction from high-resolution digital elevation models. ISPRS J. Photogramm. Remote Sens. 1995, 50, 38–49. [Google Scholar] [CrossRef]
  48. Cord, M.; Declercq, D. Three-dimensional building detection and modeling using a statistical approach. IEEE Trans. Image Process. 2001, 10, 715–723. [Google Scholar] [CrossRef] [PubMed]
  49. Vestri, C.; Devernay, F. Using Robust Methods for Automatic Extraction of Buildings. In Proceedings of the International Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; pp. 208–213.
  50. Jordan, M.; Cord, M.; Belli, T. Building Detection from High Resolution Digital Elevation Models in Urban Areas. In Proceedings of the ISPRS Commission III Symposium: “Photogrammetric Computer Vision”, Graz, Austria, 9–13 September 2002; Volume XXXIV, Part 3B. pp. 96–99.
  51. Champion, N. 2D Building Change Detection from High Resolution Aerial Images and Correlation Digital Surface Models. In Proceedings of PIA07-Photogrammetric Image Analysis, Munich, Germany, 19–21 September 2007; Volume XXXVI, Part 3/W49A. pp. 197–202.
  52. Karsli, F.; Kahya, O. Building Extraction from Laser Scanning Data. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B3b. pp. 289–294.
  53. Tan, G.; Shibasaki, R. A Research for the Extraction of 3D Urban Building by Using Airborne Laser Scanner Data. In Proceedings of the 23rd Asian Conference on Remote Sensing, Kathmandu, Nepal, 25–29 November 2002; p. 5.
  54. Hofmann, A.D.; Maas, H.G.; Streilein, A. Knowledge-Based Building Detection Based on Laser Scanner Data and Topographic Map Information. In Proceedings of the ISPRS Commission III Symposium: “Photogrammetric Computer Vision”, Graz, Austria, 9–13 September 2002; Volume XXXIV, Part 3A. pp. 169–174.
  55. Cho, W.; Jwa, Y.S.; Chang, H.J.; Lee, S.H. Pseudo-Grid Based Building Extraction Using Airborne LIDAR Data. In Proceedings of the XXth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume XXXV, PartB3. pp. 378–381.
  56. Ma, R. DEM generation and building detection from lidar data. Photogramm. Eng. Remote Sensing 2005, 71, 847–854. [Google Scholar] [CrossRef]
  57. Miliaresis, G.; Kokkas, N. Segmentation and object-based classification for the extraction of the building class from LIDAR DEMs. Comput. Geosci. 2007, 33, 1076–1087. [Google Scholar]
  58. Vosselman, G.; Gorte, B.G.H.; Sithole, G. Change Detection for Updating Medium Scale Maps Using Laser Altimetry. In Proceedings of the XXth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume XXXV, Part B3. pp. 207–212.
  59. Ekhtari, N.; Sahebi, M.R.; Valadan Zoej, M.J.; Mohammadzadeh, A. Automatic Building Detection from LIDAR Point Cloud Data. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B4. pp. 473–477.
  60. Lai, X.; Wan, Y.; Wei, W. A Flow to Generate DEM and Segment Building in Urban Areas from LiDAR Data. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B3b. pp. 149–152.
  61. Zhang, K.; Yan, J.; Chen, S.C. Automatic construction of buildings footprints from airborne LIDAR data. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2523–2533. [Google Scholar] [CrossRef]
  62. Tse, R.O.C.; Gold, C.M.; Kidner, D.B. Building Reconstruction Using LIDAR Data. In Proceedings of the ISPRS Working Group IV/1: “Dynamic and Multi-dimensional GIS”, Urumchi, China, 28–29 August 2007; Volume XXXVI, Part 4-W54. pp. 121–126.
  63. Wan, C.K.; Hsu, P.H. Building Detection and Structure Line Extraction from Airborne LiDAR Data. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B3b. pp. 267–272.
  64. Lafarge, F.; Descombes, X.; Zerubia, J.; Pierrot-Deseilligny, M. Automatic building extraction from DEMs using an object approach and application to the 3D-city modelling. ISPRS J. Photogramm. Remote Sens. 2008, 63, 365–381. [Google Scholar] [CrossRef] [Green Version]
  65. Kokubu, K.; Kohiyama, M.; Umemura, F.; Yamazaki, F. Automatic Detection of Building Properties from Aerial Photographs Using Color and 3D Configuration. In Proceedings of the 22nd Asian Conference on Remote Sensing, Singapore, 5–9 November 2001; p. 6.
  66. Chen, L.C.; Teo, T.A.; Shao, Y.C.; Lai, Y.C.; Rau, J.Y. Fusion of LIDAR Data and Optical Imagery for Building Modeling. In Proceedings of the XXth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume XXXV, PartB4. pp. 732–737.
  67. Matikainen, L.; Hyyppä, J.; Kaartinen, H. Automatic Detection of Changes from Laser Scanner and Aerial Image Data for Updating Building Maps. In Proceedings of the XXth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume XXXV, PartB2. pp. 434–439.
  68. Teo, T.A.; Chen, L.C. Object-Based Building Detection from LiDAR Data and High Resolution Satellite Imagery. In Proceedings of the 25th Asian Conference on Remote Sensing, Chiang Mai, Thailand, 22–26 November 2004; p. 6.
  69. Koc San, D.; Turker, M. Automatic Building Detection and Delineation from High Resolution Space Images Using Model Based Approach. In Proceedings of the ISPRS Working Groups I/5 and I/6I Workshop: “Topographic Mapping from Space”, Ankara, Turkey, 14–16 February 2006; Volume XXXVI, Part 1/W41. p. 6.
  70. Matikainen, L.; Kaartinen, H.; Hyyppä, J. Classification Tree Based Building Detection from Laser Scanner and Aerial Image Data. In Proceedings of the ISPRS Working Groups III/3, III/4, V/3 and VIII/11: “Laser Scanning 2007 and SilviLaser 2007”, Espoo, Finland, 12–14 September 2007; Volume XXXVI, Part 3/W52. pp. 280–287.
  71. Li, H.; Gu, H.; Han, Y.; Yang, J. Fusion of High-Resolution Aerial Imagery and LIDAR Data for Object-Oriented Urban Land-Cover Classification Based on SVM. In Proceedings of the ISPRS Working Group IV/1: “Dynamic and Multi-dimensional GIS”, Urumchi, China, 28–29 August 2007; Volume XXXVI, Part 4/W54. pp. 179–184.
  72. Oczipka, M.; Bucher, T.; Trosset, A. Mapping and Updating Maps in Dense Urban Regions Using High Resolution Digital Airborne Imagery, Surface Models and Object-Based Classification. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B7. pp. 551–556.
  73. Vögtle, T.; Steinle, E. 3D Modelling of Buildings Using Laser Scanning and Spectral Information. In Proceedings of the XIXth ISPRS Congress, Amsterdam, The Netherlands, 16–22 July 2000; Volume XXXIII, Part B3. pp. 927–934.
  74. Ferreira, J.; Bernardino, A. Acquisition of 3D Regular Prismatic Models in Urban Environments from DSM and Orthoimages. In Proceedings of the Symposium on Computational Modeling of Objects Represented in Images: Fundamentals, Methods and Applications, Coimbra, Portugal, 20–21 October 2006; p. 6.
  75. Yu, B.; Liu, H.; Zhang, L.; Wu, J. An Object-Based Two-Stage Method for a Detailed Classification of Urban Landscape Components by Integrating Airborne LiDAR and Color Infrared Image Data: A Case Study of Downtown Houston. In Proceedings of the Joint Urban Remote Sensing Event, Shanghai, China, 20–22 May 2009; p. 6.
  76. Yu, B.; Liu, H.; Wu, J.; Hu, Y.; Zhang, L. Automated derivation of urban building density information using airborne LiDAR data and object-based method. Landscape Urban Plan 2010, 98, 210–219. [Google Scholar] [CrossRef]
  77. Vu, T.T.; Matsuoka, M.; Yamazaki, F. LiDAR Signature to Update Japanese Building Inventory Database. In Proceedings of the 25th Asian Conference on Remote Sensing, Chiang Mai, Thailand, 22–26 November 2004; pp. 624–629.
  78. Rottensteiner, F.; Trinder, J.; Clode, S.; Kubik, K. Building Detection Using LIDAR Data and Multispectral Images. In Proceedings of Digital Image Computing: Techniques and Applications, Sydney, Australia, 10–12 December 2003; pp. 673–682.
  79. Rottensteiner, F.; Trinder, J.; Clode, S.; Kubik, K. Building Detection by Dempster-Shafer Fusion of LIDAR Data and Multispectral Aerial Imagery. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; pp. 339–342.
  80. Vozikis, G.; Jansa, J. Advantages and Disadvantages of the Hough Transformation in the Frame of Automated Building Extraction. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B3b. pp. 719–724.
  81. Cheng, L.; Gong, J.; Chen, X.; Han, P. Building Boundary Extraction from High Resolution Imagery and LiDAR Data. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B3b. pp. 693–698.
  82. Paparoditis, N. Building detection and reconstruction from mid- and high-resolution aerial imagery. Comput. Vis. Image Understand. 1998, 72, 122–142. [Google Scholar] [CrossRef]
  83. Guo, T.; Yasuoka, Y. Snake-Based Approach for Building Extraction from High-Resolution Satellite Images and Height Data in Urban Areas. In Proceedings of the 23rd Asian Conference on Remote Sensing, Kathmandu, Nepal, 25–29 November 2002; p. 6.
  84. Estornell, J.; Ruiz, L.A.; Velázquez-Martí, B.; Hermosilla, T. Analysis of the factors affecting LiDAR DTM accuracy in a steep shrub area. Int. J. Digit. Earth 2010. [Google Scholar] [CrossRef]
  85. Ruiz, L.A.; Recio, J.A.; Fernández-Sarría, A.; Hermosilla, T. A feature extraction software tool for agricultural object-based image analysis. Comput. Electron. Agric. 2011, 76, 284–296. [Google Scholar] [CrossRef]
  86. Haralick, R.M.; Shanmugan, K.; Dinstein, I. Texture features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621. [Google Scholar] [CrossRef]
  87. Sutton, R.N.; Hall, E.L. Texture measures for automatic classification of pulmonary disease. IEEE Trans. Comput. 1972, 21, 667–676. [Google Scholar] [CrossRef]
  88. Quinlan, J.R. C4.5: Programs for Machine Learning; Morgan Kaufmann: San Mateo, CA, USA, 1993. [Google Scholar]
  89. Freund, Y. Boosting a weak learning algorithm for majority. Inf. Comput. 1995, 121, 256–285. [Google Scholar] [CrossRef]
  90. McGlone, J.C.; Shufelt, J.A. Projective and Object Space Geometry for Monocular Building Extraction. In Proceedings of Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 54–61.
  91. Shufelt, J.A. Performance evaluation and analysis of monocular building extraction from aerial imagery. IEEE Trans. Pattern Anal. 1999, 21, 311–326. [Google Scholar] [CrossRef]
  92. Shan, J.; Lee, D.S. Quality of building extraction from IKONOS imagery. J. Survey. Eng. 2005, 131, 27–32. [Google Scholar] [CrossRef]
  93. Champion, N.; Matikainen, L.; Rottensteiner, F.; Liang, X.; Hyyppä, J. A Test of 2D Building Change Detection Methods: Comparison, Evaluation and Perspectives. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; Volume XXXVII, Part B4. pp. 297–303.

Share and Cite

MDPI and ACS Style

Hermosilla, T.; Ruiz, L.A.; Recio, J.A.; Estornell, J. Evaluation of Automatic Building Detection Approaches Combining High Resolution Images and LiDAR Data. Remote Sens. 2011, 3, 1188-1210. https://doi.org/10.3390/rs3061188
