Review

UAV-Based Structural Damage Mapping: A Review

1 Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, 7500 AE Enschede, The Netherlands
2 Technische Universität Braunschweig, Institut für Geodäsie und Photogrammetrie, Bienroder Weg 81, 38106 Braunschweig, Germany
3 Department of Mathematics, University of Coimbra, Apartado 3008 EC Santa Cruz, 3001-501 Coimbra, Portugal
4 Institute for Systems Engineering and Computers, University of Coimbra, Rua Sílvio Lima, Pólo II, 3030-290 Coimbra, Portugal
5 Experian Singapore Pte. Ltd., 10 Kallang Ave #14-18 Aperia Tower 2, Singapore 339510, Singapore
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2020, 9(1), 14; https://doi.org/10.3390/ijgi9010014
Submission received: 22 November 2019 / Revised: 16 December 2019 / Accepted: 23 December 2019 / Published: 26 December 2019
(This article belongs to the Special Issue GI for Disaster Management)

Abstract

Structural disaster damage detection and characterization is one of the oldest remote sensing challenges, and the utility of virtually every type of active and passive sensor deployed on various air- and spaceborne platforms has been assessed. The proliferation and growing sophistication of unmanned aerial vehicles (UAVs) in recent years has opened up many new opportunities for damage mapping, due to the high spatial resolution, the resulting stereo images and derivatives, and the flexibility of the platform. This study provides a comprehensive review of how UAV-based damage mapping has evolved from providing simple descriptive overviews of a disaster scene, to more sophisticated texture- and segmentation-based approaches, and finally to studies using advanced deep learning approaches, as well as multi-temporal and multi-perspective imagery, to provide comprehensive damage descriptions. The paper further reviews studies on the utility of the developed mapping strategies and image processing pipelines for first responders, focusing especially on outcomes of two recent European research projects, RECONASS (Reconstruction and Recovery Planning: Rapid and Continuously Updated Construction Damage, and Related Needs Assessment) and INACHUS (Technological and Methodological Solutions for Integrated Wide Area Situation Awareness and Survivor Localization to Support Search and Rescue Teams). Finally, recent and emerging developments are reviewed, such as improvements in machine learning, increasing mapping autonomy, damage mapping in interior, GPS-denied environments, the utility of UAVs for infrastructure mapping and maintenance, as well as the emergence of UAVs with robotic abilities.

1. Introduction

1.1. Structural Damage Mapping with Remote Sensing

The first documented systematic post-disaster damage assessment attempt with remote sensing technology dates back to 1906, when parts of earthquake-affected San Francisco were mapped with a 20 kg camera that was raised on a series of kites some 800 m above the disaster scene [1]. This makes damage mapping one of the oldest applications in the remote sensing domain, but also one of the few that continues to elude robust operational solutions, and which remains a subject of active research. Since the early pioneering days, nearly every type of active and passive sensor has been mounted on airborne platforms that range from tethered to autonomous or piloted, as well as satellites operating in different orbital or network configurations, to attempt increasingly automated damage detection [2,3]. However, despite more than a century of research and tremendous technological developments both on the hardware and the computing side, operational image-based damage mapping, such as through the International Charter “Space and Major Disasters” or the Copernicus Emergency Management Service (EMS), continues to be a largely manual exercise (e.g., [4,5]).
Charter and EMS activations center on a particularly challenging type of damage mapping. Both need to respond to a wide range of natural and anthropogenic disaster types, and the first maps are expected to be available within hours of image acquisition, while the particular damage patterns and their recognition are subject to a number of variables. Building typologies, spatial configurations, and construction materials differ, and recognizable damage indicators are strongly dependent on the type of hazard and its magnitude. Image types, in terms of spatial and spectral characteristics as well as incidence angle, but also environmental conditions such as haze or cloud cover, differ enormously, further challenging the development of generic and widely applicable damage detection algorithms. Satellite-based damage mapping has the additional disadvantage that damage that may be quite variably expressed on each of a building's façades, its roof, as well as its interior, is largely reduced to a single dimension, the quasi-vertical perspective that centers on the roof. Damage detection in reality is then supported by the use of proxies, such as evidence of nearby debris or damage clues associated with particular shadow signatures [6,7]. There have been some notable successes in satellite-based damage mapping, especially related to cases where radar data have an advantage, in particular interferometric [8] and polarimetric synthetic aperture radar [9]. Where damage patterns are structurally characteristic, such as foundation walls remaining after the 2011 Tohoku (Japan) tsunami, simple backscatter intensity has also been used to detect damage [10]. Increasingly advanced machine learning algorithms, including convolutional neural networks (CNN), are used to detect different forms of building damage with radar data [11].
Efforts to process optical satellite data for rapid damage mapping are also moving in the machine learning direction. This includes methods based on artificial neural networks [12], and increasingly also CNN [13,14,15,16]. Studies vary in terms of mapping ambition, with many only aiming at a binary classification (damage/no damage; [12]), and there is no evidence yet of emerging methodologies being used operationally. However, the recently released xBD satellite dataset containing more than 700,000 building damage labels and corresponding to 8 different disaster types [17] will help in developing and benchmarking novel methodologies.

1.2. Scope of the Review

Automated satellite-based damage mapping has thus shown limited progress, at least in terms of versatile methodologies that can readily map structural damage caused by different event types in diverse environments. At the same time, the proliferation and rapidly growing maturity of unmanned aerial vehicles (UAVs/drones) in recent years has created vast new prospects for rapid and detailed structural damage assessment, which are the focus of this review. We do not consider historical, mainly military systems, such as unpiloted reconnaissance aircraft that date back to World War II. Rather, we focus on the suite of platforms that evolved from remote-controlled (mainly hobbyist) planes and helicopters, with the first documented scientific studies on UAV-based disaster response dating back to about 2005 [18]. The review also does not include non-structural damage assessment, such as studies on crop or forest damage. Nor does it cover issues of UAV communication (e.g., the use of UAVs to create ad hoc communication networks over disaster areas), or studies on drone network and scheduling optimization; good reviews already exist for both topics (e.g., [19,20]).
The review includes peer-reviewed publications indexed in Scopus and Web of Science, focusing on research on automated damage detection rather than the provision of data for visual assessment, and is not meant to be exhaustive. While the topic is a niche within the remote sensing domain and the number of studies remains relatively small, a number of application papers without significant novelty exist, which are excluded here. The article builds on a recent conference contribution [21], though the focus of that paper on the results of two European research projects is expanded here to a comprehensive review study. In addition to tracing relevant technical and methodological developments in damage detection, we synthesize the current state of the art and evaluate current and emerging research directions. We also assess the actual usability and practical value of emerging methods for operational damage mapping, including for local mapping by first responders. In the following section, relevant publications on the use of UAVs for structural damage mapping are reviewed, sorted by increasing technical sophistication, and a summary is provided in Table 1.

2. UAV-Based Damage Mapping

2.1. Scene Reconnaissance and Simple Imaging

The principal advantage of a UAV in a disaster situation is its vantage point, a flexible position that can provide both synoptic and detailed views of a potentially complex scene, as well as overcome access limitations. Early studies thus focused on scene imaging, aiding disaster responders by supplying a relatively low-cost aerial perspective [22]. Taking advantage of increasingly efficient structure from motion (SfM) and 3D reconstruction concepts emerging at the time (e.g., [23]), some early studies already processed the data to derive georeferenced images [24], terrain information/digital elevation models (DEM) [18], or orthophotos [25]/orthomosaics [26]. In cases without a full processing pipeline and where no suitable DEM data existed, pseudo-orthorectified images (assuming constant terrain height) were created. In some cases, video data rather than still images were transmitted in real time to allow visual damage inspection [27].
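To make the pseudo-orthorectification idea concrete, the following is a minimal sketch, assuming a simple pinhole camera with known intrinsics K and world-to-camera pose (R, t) and a flat ground plane at constant height z0; the function and all parameters are illustrative assumptions, not taken from the cited studies.

```python
# Pseudo-orthorectification sketch: project a flat ground plane (Z = z0)
# through a pinhole camera and resample the image onto a north-up grid.
import numpy as np

def pseudo_orthorectify(image, K, R, t, z0, extent, gsd):
    """image: HxW(x3) array; K: 3x3 intrinsics; R, t: world-to-camera pose;
    z0: assumed constant terrain height; extent: (xmin, xmax, ymin, ymax)
    in world units; gsd: ground sampling distance of the output grid."""
    xmin, xmax, ymin, ymax = extent
    xs = np.arange(xmin, xmax, gsd)
    ys = np.arange(ymax, ymin, -gsd)                    # north-up rows
    X, Y = np.meshgrid(xs, ys)
    # Homography from the Z = z0 plane into the image: H = K [r1 r2 r3*z0+t]
    H = K @ np.column_stack((R[:, 0], R[:, 1], R[:, 2] * z0 + t))
    pts = H @ np.stack((X.ravel(), Y.ravel(), np.ones(X.size)))
    u = (pts[0] / pts[2]).round().astype(int)           # image columns
    v = (pts[1] / pts[2]).round().astype(int)           # image rows
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    ortho = np.zeros(X.shape + image.shape[2:], dtype=image.dtype)
    ortho.reshape(-1, *image.shape[2:])[valid] = image[v[valid], u[valid]]
    return ortho
```

Any terrain relief or building height violates the flat-plane assumption, which is exactly why such products were only pseudo-orthorectified.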
In the years following the initial studies, little methodological progress in damage mapping was made, despite advances in off-the-shelf UAV systems, the emergence of ArduPilot in 2007 for improved UAV flight stability, and of Pix4D (Pix4D, Switzerland) in 2011 for easier photogrammetric image processing. A range of studies appeared that essentially still focused on image provision or simple photogrammetric processing, using remote-controlled helicopter systems [28], multi-copters [29,30,31,32], or fixed-wing UAVs [33,34,35].

2.2. Texture- and Segmentation-Based Methods

Initial attempts to extract damage information automatically from UAV data were based on segmentation- and texture-based approaches, using mono-temporal imagery. Fernandez Galarreta et al. [36] processed UAV imagery of a 2012 Emilia Romagna (Italy) earthquake site into detailed 3D models. The work adapted and expanded earlier approaches developed for the airborne (piloted) Pictometry system, which yields similarly oblique, overlapping, and multi-perspective imagery. Those images, too, had been photogrammetrically processed [37] and used for structural damage assessment [38,39]. The analysis of [36] focused on geometric damage indicators such as slanted walls or deformed roofs, as well as the presence of debris piles (Figure 1). In addition, object-based image analysis (OBIA) was carried out on the images to extract damage features such as cracks or holes, and to identify those damage features intersecting with apparent load-carrying structural elements. A similar OBIA strategy was used by [40] to identify damage in Mianzhu city, affected by the 2008 Wenchuan earthquake.

2.3. Conventional Classifiers

The work of Fernandez Galarreta et al. demonstrated the significance of geometric information in damage detection, in particular of openings in roofs and façades. Vetrivel et al. [41] advanced the work by developing a method to isolate individual buildings from a detailed image-derived point cloud covering a neighborhood of Mirabello (Italy) comprising nearly 100 buildings. Each of those was then subjected to a search for openings attributable to seismic damage, such as partial roof collapses or holes in the façades, a focus similar to [42]. The gaps were identified based on Gabor wavelets as well as histogram of oriented gradients (HoG) features. Two basic machine learning algorithms, Support Vector Machine (SVM) and Random Forest (RF), were used to identify damaged regions based on the radiometric descriptors, with a success rate of approximately 95%. However, the work also illustrated how the segmentation of point clouds is frequently hindered by artefacts and data gaps. In [43], an approach was developed to overcome this problem: after projecting the initial point cloud-derived 3D segments into image space, a subsequent segmentation using both geometric and radiometric features yielded more accurate and complete building segments.
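A hedged sketch of this type of pipeline, with handcrafted Gabor and HoG descriptors feeding an SVM or RF classifier, is given below; patch size, filter bank, and classifier settings are illustrative assumptions rather than the settings used in [41].

```python
# Patch-level damage classification with handcrafted features (sketch).
import numpy as np
from skimage.feature import hog
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def patch_features(patch):
    """patch: 2D grayscale array (e.g., 64 x 64, values in [0, 1])."""
    feats = [hog(patch, orientations=9, pixels_per_cell=(16, 16),
                 cells_per_block=(2, 2))]
    for freq in (0.1, 0.2, 0.4):              # small illustrative Gabor bank
        for theta in np.linspace(0, np.pi, 4, endpoint=False):
            real, _ = gabor(patch, frequency=freq, theta=theta)
            feats.append([real.mean(), real.var()])   # response statistics
    return np.concatenate([np.ravel(f) for f in feats])

# X: patches as feature vectors, y: 1 = damaged region, 0 = intact
# X = np.stack([patch_features(p) for p in patches])
# clf = SVC(kernel="rbf").fit(X, y)           # or:
# clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```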
Table 1. Summary of the relevant technical papers reviewed in Section 2, organized by level of technical sophistication.
Processing level: Scene reconnaissance/simple imaging
Multi-copter: Whang et al., 2007 [22]; Adams et al., 2014 [29]; Dominici et al., 2012 [30]; Mavroulis et al., 2019 [31]
Helicopter: Nakanishi and Inoue, 2005 [18]; Murphy et al., 2008 [27]; Kochersberger et al., 2014 [28]
Fixed-wing: Lewis, 2007 [26]; Suzuki et al., 2008 [24]; Bendea et al., 2008 [25]; Hein et al., 2019 [33]; Xu et al., 2014 [34]; Gowravaram et al., 2018 [35]
Notes: Studies focusing on visual image analysis, simple 3D terrain reconstruction, as well as creation of orthophotos or orthomosaics; includes platforms with specialized abilities, such as deployment of spectrometers for gas detection [28].

Processing level: Texture- and segmentation-based methods; change detection
Multi-copter: Fernandez Galarreta et al., 2015 [36]; Dorafshan et al., 2018 [44]; Chen et al., 2019 [45]; Akbar et al., 2019 [46]
Helicopter: Zeng et al., 2013 [40]
Fixed-wing: Kakooei and Baleghi, 2017 [47]
Pictometry 2: Grenzdorffer et al., 2008 [37]; Gerke and Kerle, 2011a [38]; Gerke and Kerle, 2011b [39]; Zeng et al., 2013 [40]; Vetrivel et al., 2016 [47]; Tu et al., 2017 [49] 1
Notes: Includes approaches based on handcrafted texture features (e.g., Gabor wavelets, histogram of gradients); change detection based on 3D features only (e.g., [48]); fusion of pre- and post-event satellite imagery with both manned airborne and UAV data [47].

Processing level: Conventional classifiers
Multi-copter: Li et al., 2015 [42]; Vetrivel et al., 2015b [43]
Pictometry: Vetrivel et al., 2015a [41]
-- 3: Lucks et al., 2019 [50]
Notes: Includes methods based on classifiers such as Support Vector Machine (SVM) and Random Forest, as well as image fusion; combination of image and 3D structural features; use of boosting (e.g., AdaBoost).

Processing level: Advanced machine learning/CNN/generative adversarial networks (GAN)
Multi-copter: Duarte et al., 2017 [51]; Dorafshan et al., 2018a [52]; Dorafshan et al., 2018b [53]; Xu et al., 2018 [54]; Vetrivel et al., 2018 [55]; Cusicanqui et al., 2018 [71]; Duarte et al., 2018a [56]; Duarte et al., 2018b [57]; Kerle et al., 2019 [58]; Nex et al., 2019a [59]; Tsai and Wei, 2019 [60]
Fixed-wing: Xu et al., 2018 [54]
Pictometry: Vetrivel et al., 2018 [55]; Duarte et al., in press [61]
-- 2: Li et al., 2018 [62]; Li et al., 2019 [63]
Ground-based: Liang et al., 2019 [64]
Multiple platforms: Nex et al., 2019b [65]
Manned airborne: Song et al., 2019 [66]; Huang et al., 2019 [67]; Duarte et al., 2018 [56]
Notes: Use of active learning methods, convolutional neural networks (CNN), as well as convolutional autoencoders (CAE); use of boosting (e.g., XGBoost); combination with conventional classifiers; use of the single-shot multi-box detector algorithm; use of UAV data to classify satellite imagery [56]; multi-resolution CNN; semantic segmentation; transfer learning; morphological filtering; Bayesian optimization; emerging use of generative adversarial networks (GAN) [60]; review of the performance of state-of-the-art CNN; combination of deep learning and SLIC superpixels, as well as multi-scale analysis.

1 Manned system with 5 cameras comparable to Pictometry. 2 Though not a UAV category, some relevant Pictometry-based studies are included in the review. 3 Use of generic high-resolution airborne data, with no specific platform being indicated.
The work in [41] also showed the limitations of HoG and Gabor filters in the classification of complex scenes, and of global feature representations in general. The latter cause problems when scene and image characteristics vary, which is typically the case between different disaster areas or in multi-temporal assessments. The work described in [68] moved towards descriptors that are more generalizable and invariant to image characteristics. The method was built on the Visual Bag of Words approach and focused on the detection of rubble, debris piles, and severe spalling. It performed well on individual UAV and Pictometry datasets of Mirabello (Italy) and Port-au-Prince (Haiti), respectively, but also on a dataset that combined the two airborne datasets with terrestrial street-level images. The limitation of the method is that it is grid-based and can only identify general damage patches, i.e., grid cells affected by one or more of the damage types considered, a limitation also evident in the study of [50], who used RF on superpixels. A detailed localization and characterization (size, shape, etc.) of damage of a specific type would be preferable, though this will come at the cost of increased processing time.
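The Visual Bag of Words idea can be summarized in a few lines; the sketch below assumes local descriptors have already been extracted per grid cell, and the vocabulary size and classifier are illustrative choices, not those of [68].

```python
# Visual Bag of Words sketch: cluster local descriptors into "visual words",
# describe each grid cell by its word histogram, and classify the histogram.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bow_histogram(descriptors, vocabulary):
    words = vocabulary.predict(descriptors)        # assign words to features
    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    return hist / max(hist.sum(), 1)               # normalized histogram

# 1. Build the vocabulary from descriptors pooled over training cells:
# vocabulary = KMeans(n_clusters=100).fit(np.vstack(train_descriptors))
# 2. Represent and classify each grid cell (y: damage present or not):
# X = np.stack([bow_histogram(d, vocabulary) for d in cell_descriptors])
# clf = LinearSVC().fit(X, y)
```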

2.4. Advanced Machine Learning and the Emergence of CNN

Image classification used for damage mapping increasingly made use of machine learning, in particular SVM and RF [41,54,69] or different boosting algorithms, such as AdaBoost [38] or XGBoost [58], and moved towards more advanced scene understanding and semantic processing. However, the features used were typically hand-crafted (such as HoG or Gabor, or other point feature descriptors related to spectral, textural, and geometrical properties [54]), and emerging work had shown that in deep learning approaches CNN could actually learn features and their representation directly from the image pixel values [70]. The damage detection work thus proceeded in this direction, hypothesizing that image classification would benefit from the incorporation of 3D point cloud features. The work described in [55] applied a multiple-kernel-learning framework on several sets of diverse aerial images, and showed that combining the radiometric and geometric information yields higher classification accuracies. The processing was based on Simple Linear Iterative Clustering (SLIC) superpixels, meaning that damage was again only identified in patches, though those were labelled with specific prediction scores. Song et al. [66] also worked with SLIC superpixels, though unlike in [55], where they had formed the basis for the ML analysis, here a CNN-based damage detection was first carried out directly on the image, and the SLIC segments were then used in combination with mathematical morphology to refine the results. In [67] a similar approach was taken, except that instead of SLIC a multi-resolution segmentation was carried out, to allow features naturally occurring at different spatial scales to be used effectively. The CNN approach developed in [55] was also used by Cusicanqui et al. [71], who reasoned that video data are often available before suitable still photographs (e.g., acquired by police or the media). The study thus tested whether 3D reconstructions based on video data could offer similar support, and it was indeed shown that a binary damage classification based on deep learning applied to SLIC superpixels and the 3D models led to results comparable to those based on still photographs.
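A schematic sketch of such superpixel-based labelling follows; cnn_predict is a placeholder for whichever per-patch classifier is used (it is not the model of [55]), and the SLIC parameters are assumptions.

```python
# Superpixel-wise damage labelling (sketch): segment the image with SLIC,
# classify a patch around each superpixel, and write back the label.
import numpy as np
from skimage.segmentation import slic

segments = slic(image, n_segments=2000, compactness=10)   # image: HxWx3 RGB
damage_mask = np.zeros(segments.shape, dtype=bool)
for label in np.unique(segments):
    ys, xs = np.nonzero(segments == label)
    patch = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    score = cnn_predict(patch)            # placeholder per-patch classifier
    damage_mask[segments == label] = score > 0.5
```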
The particular significance of the work in [55] for disaster response and search and rescue was that the method demonstrated significant transferability, which has become a frequent focus in recent literature. A model trained with a sufficient number of samples (e.g., trained before an actual event) performed well when applied to a new disaster scene, supporting a rapid analysis without the need for extensive retraining. This approach can help to overcome the traditional limitation of CNN, i.e., their need for a large amount of labelled training data. A different approach was taken by Li et al. [62], who used a convolutional autoencoder (CAE) that was trained on unlabelled post-disaster imagery based on SLIC superpixels, with results being finetuned by a CNN classifier. In follow-up work [63] the authors additionally employed a range of data augmentation methods, such as blurring or rotation, to enlarge the number of samples. The resulting pre-training improved the overall damage detection accuracy by 10%.
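As an illustration of augmentation along the lines of [63] (blurring, rotation, and similar perturbations), a minimal torchvision sketch is shown below; the specific transforms and parameters are assumptions, not the published configuration.

```python
# Illustrative augmentation pipeline for enlarging a small damage dataset.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),          # rotated variants
    transforms.RandomHorizontalFlip(),              # mirrored variants
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # blurring
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
# Applying `augment` to each labelled patch at every epoch yields a much
# larger effective training set for the CNN classifier.
```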
Disaster scenarios are frequently characterized by imperfect image data availability, and a rapid response effort has to make do with what exists. In this respect it is valuable to be able to incorporate images of different types and scales into the training model. Duarte et al. [56] trained a CNN with different types of aerial imagery to classify post-disaster satellite data of Port-au-Prince. Although information coming from the different image resolutions evidently improved the model and classification accuracy, the approach still failed to capture smaller damage features. The work also focused on determining the effect of multi-scale information on the CNN activation layers as a proxy for improved damage recognition, but it did not allow a detailed assessment of where the classification improvement originated in terms of false positives and negatives, or specific damage types. Later work focused on multi-resolution feature fusion and its effect on building damage classification [57]. It showed that such a fusion is useful and can improve the overall accuracy, though it still failed to show which specific damage types are identified, and how well they are captured.
Earlier work had shown how highly variable the expression of structural damage is in vertical and oblique data [6]. The former essentially only considers the damage expressed in the roof, and in addition makes use of proxies such as debris piles or specific shadow configurations [7]. Significant additional information is also encoded in the façades, as already explained in Section 2.1. However, the OBIA-based approach used for example in [36] tends towards overfitting and lacks the efficiency and transferability of deep learning. While a focus on façades is appealing, their actual delineation in imagery poses its own challenges, especially when considering aspects such as occlusion or environmental effects such as shadows (Figure 2). The work described in [51] thus focused on developing an efficient method to extract façades that were subsequently assessed for damage using CNN. The approach made use of a point cloud calculated from vertical imagery acquired in an initial UAV survey. From the sparse point cloud, the building roofs were segmented and the building façades hypothesized, which in turn were used to extract the actual façades from oblique UAV images. The patch-based damage classification had an overall accuracy of approximately 80%, though the work also demonstrated the significant challenge of damage identification on façades, due to architectural complexities and associated diverse shadow patterns, but also occlusion (by external features such as vegetation, or internal ones such as balconies).
It stands to reason that some ambiguities can be resolved by analysing multi-perspective data (views of a given façade from different angles that go beyond regular stereoscopic overlap), but also by incorporating multi-temporal data where available. The majority of the studies described above only used post-disaster imagery. However, in the last few years the availability of high spatial resolution pre-event reference imagery has been growing rapidly. This has led to additional methodological developments that built on the segmentation- and texture-based damage detection described above, extending them into a multi-temporal framework. Vetrivel et al. [47] used pre- and post-earthquake data of L'Aquila (Italy) and focused on the identification of 3D segments missing in the post-disaster data as an indicator of damage. Both voxel- and segment-based approaches were tested, and finally a composite segmentation method that subjects an integrated pre- and post-event point cloud to plane-based segmentation was chosen. Although working with conventional airborne data, in [61] those assumptions were also tested in a CNN framework, where 6 different multi-temporal approaches were compared against 3 mono-temporal ones. It was concluded that a multi-temporal approach with 3 views at each of the pre- and post-event epochs performed best. Here, too, smaller damage features eluded detection. However, the authors expect better results with UAV data, given that the problem of occlusion can be reduced through more flexible image acquisition.
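The voxel-based variant of this pre-/post-event comparison is easily sketched: co-registered point clouds are discretized into a common voxel grid, and voxels occupied before but empty after the event flag candidate damage. The voxel size and input arrays below are assumptions.

```python
# Voxel-based change detection between co-registered point clouds (sketch).
import numpy as np

def occupied_voxels(points, voxel_size=0.5):
    """points: Nx3 array in a common coordinate frame (units: metres)."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

pre_vox = occupied_voxels(pre_points)     # pre-event point cloud (Nx3)
post_vox = occupied_voxels(post_points)   # post-event point cloud (Mx3)
missing = pre_vox - post_vox              # volume gone after the event:
                                          # candidate collapsed structure
```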

2.5. Levels of Disaster Damage Mapping

Early efforts in disaster response with satellite imagery identified damaged areas more generally, while airborne data were used to detect specific damage proxies, usually debris piles (e.g., [72]). Especially in more recent years, overall classification accuracy and f-scores have been the most commonly used metrics to assess the efficacy of a given damage mapping method, and to judge progress within the discipline. However, this focus neglects an inherent incomparability of many of the studies produced to date, and the absence of a generally agreed upon damage scale. The introduction of the European Macroseismic Scale 1998 (EMS-98) led to a broad homogenization and alignment of efforts, by grouping structural building damage into 5 categories, D1 (negligible/slight damage)–D5 (destruction) [73]. Building on its common use in satellite-based damage detection (e.g., [74,75,76]), its utility for UAV-based damage mapping was later explored. For example, [38] classified building damage according to EMS-98, though, recognizing the diversity and ambiguity of the observed damage patterns, the study did not aim at automatic damage classification, except in cases where the 3D model clearly showed complete collapse (D5). Studies [36,48,77] also used this scale as a basis, with [31] even adding a 6th damage level.
One consequence of the continuing challenge of image-based damage mapping is that, while D1 and D5 are comparatively easy to determine, intermediate damage stages are not; many studies have therefore departed from the 5-level classification scheme. The work in [50] opted instead for a 4-class approach (intact, light, medium, and heavy damage), while several studies grouped damage into 3 classes. However, even within one such category damage levels/class names vary, limiting comparability. For example, Zeng et al. [40] mapped intact, damaged, and destroyed buildings, while Vetrivel et al. [47] termed the classes undamaged, lower levels of damage, and highly damaged/collapsed, and Song et al. [66] distinguished intact, semi-collapsed, and collapsed buildings, with differences in class definition going beyond semantics. The majority of recent studies, however, opted for a simple binary classification, either explicitly mapping both damaged and undamaged structures (e.g., [12,49,55,65,67]), or only mapping damage in general in a single class [51,61]. In addition, there are studies that focused on the identification of specific damage types, such as holes in the roof [41,42], or dislocated roof tiles and cracks along walls [36]. Others mixed damage and proxy classes, such as [62], who mapped damaged and undamaged structures, but also debris as a separate class. Creative choices of class names further hinder comparison between different studies. Li et al. [63] used the classes mildly damaged and ruins, while Xu et al. [54] mapped categories including roof, ground, debris, and small objects. The difficulty of image-based damage mapping has led to a focus on severe damage classes (D4-5), making studies such as [42] that expressly focus on lesser damage (D2-3) an exception. Approaches based on deep learning are particularly suited for binary classification, which is another reason why, in the interest of automation, only a single damage class is now frequently considered.
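To illustrate the incomparability concretely, the snippet below aligns a few of the reported schemes against EMS-98 grades; the grade assignments are our own loose reading, not mappings defined by the cited studies.

```python
# Purely illustrative alignment of divergent class schemes with EMS-98.
EMS98 = ["D1", "D2", "D3", "D4", "D5"]   # negligible ... destruction

schemes = {
    "binary [51,61]":   {"undamaged": ["D1"],
                         "damaged": ["D2", "D3", "D4", "D5"]},
    "three-class [40]": {"intact": ["D1"], "damaged": ["D2", "D3", "D4"],
                         "destroyed": ["D5"]},
    "four-class [50]":  {"intact": ["D1"], "light": ["D2"],
                         "medium": ["D3"], "heavy": ["D4", "D5"]},
}
# Identical accuracy figures computed over such different class definitions
# are not directly comparable.
```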

2.6. The Special Case of Infrastructure Damage Mapping

The focus of this review is on structural building damage. However, one of the fastest growing UAV application areas in recent years is infrastructure monitoring and detection of damage indicators related to wear and degradation, such as of roads, bridges, or tunnels. The lines between disciplines have blurred, with studies such as by Dominici et al. [32] addressing both regular structures and infrastructure. Furthermore, from a methodological perspective studies focusing on crack or spalling assessment along bridges or tunnels are also relevant for the disaster damage mapping community, and damage to infrastructure caused by disaster events naturally also falls under the scope of this review. For this reason, papers marking key developments in infrastructure monitoring and damage mapping are briefly reviewed here.
A recent review by Dorafshan and Maguire [52] provides an overview of the specific challenges of bridge inspection and maintenance, and of how UAVs, with both active and passive sensors, are starting to become a commonly used tool. In an early study by Whang et al. [22], a UAV with two coaxial rotors was developed to perform somewhat autonomous bridge inspection, within limits even in GPS-denied areas beneath the bridge. In addition, the system was able to place a small autonomous rover on the bridge using ultrasonic localization, which provided images for damage inspection. However, few details about the actual methods and system performance are provided in the paper. The authors of [44] focused on the detection of small fatigue cracks on bridges, assessing the value of active illumination, and carrying out controlled laboratory experiments to determine detection limits and optimal mapping approaches.
Increasingly, the focus has been on image- or laser-based 3D reconstruction of the bridge or tunnel in question, as a basis for visual or automated damage identification. In [78] the accuracy and thus utility of such 3D models was assessed, and [45] examined how well complex bridge structures can be reconstructed with SfM methods, in addition attempting 3D volume calculations of major spalling instances. The work of [79] expressly focused on seismic damage detection on bridges, also using UAV-based 3D reconstructions, though here starting with pre-event Building Information Modelling (BIM) data that were updated with the detected damage. Akbar et al. [46] addressed structural health monitoring (SHM) of tall structures, focusing on comprehensive 3D model creation through speeded up robust features (SURF), and on the detection of simulated damage features on large concrete slabs, though providing little detail on the actual damage detection algorithm.
Deep learning with CNN is also being used in SHM. In [53] an AlexNet network was trained to detect small cracks in concrete walls, reporting accuracies of nearly 95%, and also testing network transferability. Comparable accuracies were reported by Liang [64], who also tested GoogleNet and VGG-16 networks to detect earthquake damage on a bridge.

3. Damage Product and System Usability

Post-disaster damage mapping serves a specific purpose, i.e., providing timely, accurate, and actionable information to a range of stakeholders. Those include civil protection agencies planning emergency response actions, but also incident commanders and first responders operating at the actual disaster site. One of the consequences of the growing availability of UAV technology is a declining need to rely on formal protocols such as the Charter or EMS, allowing instead actual site-based damage mapping. It is thus surprising that the usability of data acquisition pipelines (including planning tools, hardware components, and data processing routines), but also of the resulting damage mapping products, has scarcely been considered in the literature reviewed in this paper. This section briefly introduces two recent research projects with a strong focus on UAV-based structural damage assessment, from which a number of publications reviewed in this paper emerged. A range of different end users also participated in these projects, and their evaluation of the developed damage mapping procedures is summarized as well.

3.1. Damage Detection in Two European Research Projects

RECONASS (Reconstruction and Recovery Planning: Rapid and Continuously Updated Construction Damage, and Related Needs Assessment; www.reconass.eu) and INACHUS (Technological and Methodological Solutions for Integrated Wide Area Situation Awareness and Survivor Localization to Support Search and Rescue Teams; www.inachus.eu) were research projects funded through the 7th Framework Programme of the European Union, which ran with some overlap from 2013 until the end of 2018. The focus of RECONASS was to create a monitoring and damage assessment system for individual high-value buildings, based on a range of internally installed sensors that included accelerometers, inclinometers, and position tags, with the data processed in a finite element structural stability model to determine damage caused by seismic activity or by either interior or exterior explosions. UAV-based 3D reconstruction of the building exterior and detailed damage mapping were carried out to patch data gaps caused by failed sensor nodes, as well as to validate model outputs. The progressively developed methods were tested in a series of experiments, culminating in a pilot where a 3-story reinforced concrete building was first subjected to an explosion of 400 kg of TNT placed 13 m away, and later to a 15 kg charge detonated within the structure itself. End users, including the German Federal Agency for Technical Relief (THW), were present to assess the utility of the system.
The purpose of INACHUS was to assist disaster response and urban search and rescue forces by providing early and increasingly detailed information on damage hotspots and the likely location of survivors. Different UAV platforms, but also ground-based and portable laser scanning instruments, were used to map a damaged structure. One research focus was on scene reconstruction and damage mapping based on optical imagery from a low-cost UAV. The French remote sensing lab ONERA also deployed various larger UAVs that carried different laser scanners, in part with proprietary data processing solutions. The major pilots were also assessed by a group of end users.

3.2. Tests with End Users in Two European Research Projects

Both RECONASS and INACHUS included a number of pilot experiments, where first individual components or sets thereof, and later the entire systems, were tested under relatively realistic conditions. For the explosion experiments in Sweden, data were acquired using an Aibot X6 hexacopter carrying a Canon D600 camera with a Voigtländer 20 mm lens. In addition to reference data, images were acquired after both the exterior and the interior blasts, with a ground sampling distance (GSD) of approximately 1.5 cm. From those images, detailed 3D point clouds were calculated and analyzed. The data proved suitable to identify damage-related openings, such as infill walls damaged or blown out by the blasts, as well as cracks and debris. Additionally, subtle façade deformations could be detected and quantified (Figure 3), both using only the post-detonation point cloud and in a comparison with pre-event reference data. It was also shown how a BIM model of the structure could be automatically updated, both to visualize and to catalogue detailed damage information. THW deployed a LEICA TM30 total station to survey the structure from 4 reference points, using 16 prisms mounted on the structure. While the total station has the advantage that a structure can be continuously monitored for minute deformations, which is critical when rescue personnel operate near or within weakened structures, the UAV-derived data provided damage information of comparable quality, with greater flexibility and lower cost, including coverage of the roof that ground-based surveys cannot see, and potentially operated from a safer distance. The building was further surveyed by a Riegl VZ400 terrestrial laser scanner (TLS), which also confirmed the high quality of the UAV-derived 3D models.
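Deformation quantification of this kind can be sketched as a simple cloud-to-cloud comparison, assuming the pre- and post-event point clouds are expressed in the same coordinate frame and in metres; the 2 cm threshold is purely illustrative.

```python
# Cloud-to-cloud distance sketch for deformation detection.
import numpy as np
from scipy.spatial import cKDTree

tree = cKDTree(pre_points)             # pre-event reference cloud (Nx3)
dist, _ = tree.query(post_points)      # nearest-neighbor distance per point
deformed = post_points[dist > 0.02]    # points displaced by > 2 cm
```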
Four INACHUS pilot experiments were conducted at 4 different sites in France and Germany, and included buildings in the process of being demolished, as well as an urban search and rescue training site (Training Base Weeze in Germany). In response to criticism by end users in RECONASS regarding the high cost of the Aibot UAV (ca. 40,000 Euro), low-cost DJI drones (Phantom 4 and Mavic Pro) were used in INACHUS. Following the research directions described in Section 2.3 and Section 2.4, the work focused less on simple scene reconstruction and more on integration with other spatial data, as well as advanced data analysis, including with CNN. For each of the pilots, the building in question was also surveyed by ONERA using different UAV-borne laser instruments, as well as with a TLS, to determine the respective strengths of the individual systems. The initial experiments with UAV-based laser scanners failed. First a Riegl VZ-1000 instrument (weight of about 10 kg) was deployed on a Yamaha RMAX helicopter (weight > 60 kg), though the acquired data suffered from artefacts and were not useful. Data acquired with a Velodyne HDL32 (weight of only 1.3 kg) deployed on a VARIO BENZIN helicopter (weight just under 10 kg) also proved unusable for damage detection, owing to the very unstable platform. For the final pilot, a high quality Riegl VUX-1 was mounted on a stable DJI Matrice 600 hexacopter platform. The data were excellent, though the combined system is also very costly (>80,000 EUR) and requires expert knowledge for flight planning and execution, as well as data processing. The mapping with optical data focused on using data acquired with the built-in cameras of the Phantom 4 and Mavic Pro (costs of <2000 Euro), and advanced along the computer vision and machine learning trajectory described earlier. The 3D data obtained from the optical imagery were of comparable quality to the VUX-1 data, while also providing native color information, better spatial detail, and full coverage of façades as well (Figure 4). The expectation that the airborne laser data would patch the one principal weakness of photogrammetry, the inability to map dark interior spaces through openings (as a means of possibly locating trapped survivors), was also not met. The data on openings and connected interior spaces were primarily delivered by the tripod-mounted ground-based laser scanner, though here the limited flexibility and occlusion by the building's structural elements also prevented a complete mapping of openings.
While commercial UAVs by DJI and other makers have clearly reached high levels of cost-benefit, stability, and reliability, most are not designed to be survey-grade instruments working in real time. For rapid search and rescue support it is vital to provide usable information quickly. For that reason, a procedure was developed in INACHUS to process the data with minimal delay. Working with the ability of the Mavic Pro to stream images during flight, a procedure was built that (i) downloads images right after acquisition, (ii) builds a progressively extended sparse 3D model of the scene using established SfM methods, (iii) applies CNN to detect damage, and (iv) orthorectifies the images using the 3D model. By the time the UAV lands after a maximum flight duration of about 25 min, all processing is done and the damage map is available. A smartphone app was also built that allows this procedure to be executed together with a standard laptop (Figure 5). Details about the app and data processing workflow can be found in [59]; the optimized CNN itself has been made available on GitHub.
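Schematically, the streamed pipeline can be summarized as below; every function name is a placeholder to show the control flow, not the actual INACHUS implementation described in [59].

```python
# Near-real-time mapping loop (schematic sketch, all names assumed).
def process_stream(uav):
    model = SparseSfMModel()                    # incrementally growing scene
    while uav.is_flying():
        img = uav.download_latest_image()       # (i) fetch during flight
        model.register_and_triangulate(img)     # (ii) incremental SfM update
        mask = cnn_damage_map(img)              # (iii) per-image damage CNN
        ortho = orthorectify(img, mask, model)  # (iv) rectify via 3D model
        publish(ortho)                          # damage map ready at landing
```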

3.3. Validation

At every pilot, different end users were present and undertook a detailed assessment of every tool produced and tested. The RECONASS system was evaluated by THW at the pilot site, and more extensively in a dedicated workshop at ISCRAM 2017 by a total of 11 specialist end users, representing both governmental and non-governmental emergency response organizations, as well as organizations involved in the creation of damage maps. It was concluded that the UAV-based element met all previously established user requirements, principally the detection of all externally expressed damage types and their annotation on imagery as well as on a 3D model and a BIM, plus the provision of 3D volume calculations, all in GIS-ready format. The final system received a maximum score of 10/10.
At the final INACHUS pilot that took place in Roquebillière, France, in November 2018 a total of 25 end users from 8 countries participated, representing USAR teams and other civil protection organizations. They followed individual demonstrations of all technical tools developed and graded them. Of all hard- and software or procedure solutions developed in INACHUS, the 3D mapping and damage detection with a light-weight commercial UAV scored highest (overall 4.5 out of 5). The high score does not so much represent a high level of technical sophistication, but rather the simplicity, both in terms of off-the-shelf hardware and an automated flight planning and damage mapping routine. The end users especially appreciated the simple, low-cost approach that provided accurate and useful information in near-real time, without the need for a highly specialized operator.

3.4. Limitations

Despite the positive evaluations, the end user assessment also revealed limitations of the developed damage mapping solution. Legal restrictions on drone deployment continue to pose challenges, though the problems are less severe for lighter platforms, and first responder and civil protection organizations additionally tend to operate under different legal frameworks. A clear disadvantage of small multi-copter UAV platforms is their comparatively small operating range and flight duration. The limited spatial scope of RECONASS and INACHUS matched their abilities well, but damage assessment over larger affected areas requires different solutions. Off-the-shelf UAVs come equipped with high quality optical cameras, though the computer vision processing to generate 3D point clouds fails for dark image patches such as shadows or smaller building openings. For this reason, openings and possible survival spaces in the pilot structures could not be mapped, and here active sensors have a clear advantage. Commercial UAVs also tend to be closed and largely proprietary systems, meaning that it is not easily possible, if at all, to exchange or add sensors, or to install processing units such as a DJI Manifold (China) or NVIDIA Jetson TX2 (USA) to push more autonomy in onboard image processing or dynamic flight path adjustment onto the drone. Several of these limitations are the focus of ongoing research, as explained in the following section.

4. Outlook and New Developments

The literature reviewed in this paper mirrors a rapidly developing discipline that in only a few years moved from largely descriptive imaging of disaster scenes to fully automated analysis procedures that build on state-of-the-art methods originating, in particular, in the computer science domain. At the same time, limits persist in hard- and software, in operational damage mapping procedures, but also in the conceptual basis of how images can be related to the actual meaning and significance of damage, which are addressed in this section.

4.1. Improvements in Machine Learning

For all the sophistication of machine learning approaches to recognize patterns and features, some open questions persist. The black box nature of deep learning approaches means that the specific effect of certain training labels remains unclear, challenging efforts to optimize the training efficiency for specific damage features. Training to map only specific indicators such as cracks or object dislocations is thus challenging, compounded by the scarcity of large training samples for individual damage features. Also, solutions developed to date still tend to be patch/grid-based, highlighting damage in general, but not specific features. This, however, is highly scale dependent, with high resolution image data, for example, also yielding small superpixels that allow precise damage identification [50].
Work such as in [56,57] tends to focus on activation layers that indicate the presence and approximate position of damage (Figure 6), rather than the creation of actual damage maps. From a user perspective, more clarity on the specific damage type, but also more precise location, shape, and size, would be preferable. In addition, the nature of CNN-based studies prevents insights into how specifically a network with superior overall accuracy performs in terms of reducing false positives or negatives.
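A minimal sketch of how such activation maps are typically inspected is given below, assuming a torchvision-style ResNet; the layer choice and tensor names are assumptions, not the setup of [56,57].

```python
# Coarse damage localization from CNN activations (sketch, PyTorch).
import torch

acts = {}
def hook(module, inputs, output):
    acts["feat"] = output.detach()              # keep last conv activations

model.layer4.register_forward_hook(hook)        # last conv block of a ResNet
with torch.no_grad():
    _ = model(img_tensor.unsqueeze(0))          # img_tensor: 3xHxW patch
cam = acts["feat"].mean(dim=1)[0]               # channel-averaged heat map
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
# High values indicate where the network "looked", i.e., the approximate
# position of damage, but not its type, shape, or size.
```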
To overcome the problem of the large number of training samples needed in CNN analysis, recent work has shown how Generative Adversarial Networks (GAN) can effectively enlarge sample databases, which has already been shown to benefit the identification of damage to road furniture [60]. GAN seem to be particularly useful in anomaly detection [80], where training does not focus on a potentially large number of specific damage features or indicators, but rather builds a comprehensive understanding of normal, undamaged scenes, based on which anomalies such as damage are identified. GAN have mainly been used in applications with smaller variabilities than are typical for urban scenes (i.e., indoor environments with fixed cameras). Their use in urban scenes is, therefore, an additional challenge, which may only be offset by using very large and comprehensive datasets of undamaged scenes to prevent the generation of many false positives.

4.2. Mapping Autonomy

Traditional UAV surveys were based on pre-defined flight plans or manual piloting supported by video streams from the instrument, with the data processed after landing of the aircraft, or through pipelines such as the one described in [59]. A more ideal scenario would be for the UAV to carry out an initial, for example vertical, survey over a pre-defined area, identify hotspots and damage candidate areas based on limited real-time processing, and follow up with a more detailed and multi-perspective survey of those marked areas. The work in [51] showed how data from an initial coarse vertical survey can be used to guide a more local assessment. Such a procedure can be implemented based on streamed data that are processed in near-real time, with adjusted flight path instructions uploaded to the platform. Alternatively, data can be processed on the UAV itself. Work described in [81,82] showed how even microdrones can perform analysis based on deep neural networks to facilitate autonomous navigation. UAVs with greater payloads have been fitted with more powerful computing units, such as the NVIDIA Jetson TX2, which are capable of facilitating advanced real-time object tracking [83] or image segmentation [84].
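The two-stage adaptive survey idea reads naturally as a short control loop; the sketch below is hypothetical (all platform calls and thresholds are assumed) and only illustrates the logic.

```python
# Hypothetical two-stage adaptive survey loop (all APIs assumed).
def adaptive_survey(uav, area):
    coarse_tiles = uav.fly_grid(area, altitude=100)      # initial vertical pass
    hotspots = [t.bounds for t in coarse_tiles
                if damage_score(t) > 0.5]                # real-time screening
    for region in hotspots:                              # detailed follow-up
        uav.fly_orbit(region, altitude=30, oblique=True) # multi-perspective
```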
In follow-up work to INACHUS, the H2020 project PANOPTIS (Development of a Decision Support System for increasing the Resilience of Transportation Infrastructure; www.panoptis.eu) focuses on road surface and road corridor damage assessment, to detect signs of gradual wear and decay, as well as to respond rapidly to a disaster situation. This is done with a hybrid UAV platform (DeltaQuad from Vertical Solutions) that combines the corridor-mapping ability of a fixed-wing platform with hovering for detailed mapping. Here, too, a Jetson TX2 will be used to advance data processing on the drone itself, for both navigation and damage detection.

4.3. Indoor Mapping

UAVs have brought structural damage mapping within touching distance of buildings. Nevertheless, critical damage evidence is frequently hidden from sight, e.g., where internal load-carrying structures are compromised. In addition, damage assessment, such as that defined in INACHUS, also includes support for first responders in the search for victims or survivors trapped internally, though the different cavity mapping strategies had only limited success. Even with a TLS, interior cavities with connections to the outside could only be detected to a limited extent (Figure 7).
Recent work has demonstrated UAVs operating increasingly autonomously and effectively in interior, largely GPS-denied spaces [85]. There has been a surge in research on UAV-based indoor mapping, both with single platforms and with swarms. Most make use of visual SLAM to map their GPS-denied environment (e.g., [86,87]), or focus on mapping continuity when transiting between outdoor and indoor spaces [88]. Others have experimented with localization via sensors such as ultrasound [89], and the works cited in Section 4.2 on autonomous navigation and mapping are also relevant here. One element of improved indoor 3D reconstruction and damage mapping will be a more effective use of artificial lighting, which, for example, improved the detection of small cracks in [44]. Another line of research has focused on the engineering of UAV platforms that can change shape to facilitate entering and operating in tight spaces [90].
The damage detection work of INACHUS will also be advanced in indoor environments with H2020 project INGENIOUS (The First Responder of the Future: A Next Generation Integrated Toolkit for Collaborative Response, increasing protection and augmenting operational capacity; www.ingenious-firstresponders.eu). The focus will be on the use of drone swarms for indoor mapping to support first responders in unknown and potentially dark, smoke-filled, and hazardous indoor settings, using UAV platforms of different sizes and with different sensor load and ability, with focus on collaboration and optimization.

4.4. The Age of Drones with Robotic Abilities

UAVs tend to be fragile, susceptible to wind, and, owing to inexpensive GPS and IMU components, subject to positional inaccuracies, and are thus best operated away from structures. However, better platform control, collision avoidance through the use of sensors or depth sensing, as well as progress in robotics and mechatronics, have resulted in novel research directions. For example, with much infrastructure aging, efforts have been extending towards UAV-supported maintenance. This implies a number of challenges. Infrastructure is diverse and includes complicated indoor spaces such as chimneys [91], but also roads, tunnels, and bridges. Solutions are emerging to carry out day-to-day monitoring to detect defects or signs of decay, but also damage after a disaster event or accident (e.g., [92]). Such works increasingly extend into another emerging line of development, blending UAV-based abilities with robotics and mechatronics solutions. Here, UAVs are not only used to map and model infrastructure spaces, but also to carry actuator arms to place sensors for in-situ measurements [93,94], to interact with objects [95,96], to perform physical tests [97], or to carry out limited repairs.

5. Conclusions

Structural damage mapping with remote sensing has been a continuous research problem for decades, and for rapid operational disaster response, such as through the Charter or Copernicus EMS, reliable automated methods continue to be lacking. However, substantial progress has been made in the last decade, resulting primarily from rapid developments in UAV technology, computer vision, and advanced image data processing with machine learning, in particular deep learning with CNN, all of which were assessed in this review. This includes a detailed analysis of the progress in image-based damage mapping, which has moved from providing largely descriptive overview imagery to automated scene mapping with advanced machine learning.
The paper has shown how image-derived 3D point clouds allow a highly detailed and accurate scene reconstruction, and how the coupling of the geometric information with the original image information allows very advanced feature recognition. Classifier training is also starting to overcome the principal challenge of CNN-based methods in particular, namely their need for very large numbers of training samples. The development of unsupervised CNN approaches (such as autoencoders) or Generative Adversarial Networks (GAN) could represent a step forward in this direction. Newer approaches are improving the efficiency, but also the transferability, of classifiers, which is critical to be able to respond quickly to a disaster event. Comprehensive tests with first responders and urban search and rescue personnel showed that, in particular, solutions with light-weight off-the-shelf drones strike a very good compromise between high information quality and ready usability.
Developments continue at a rapid pace, with significant research efforts now being focused on UAV-based mapping in indoor settings and on UAVs equipped with mechatronic abilities to allow the deployment of additional sensors or to carry out repairs, while newer network architectures also allow more sophisticated and robust deep learning solutions. Nevertheless, more effort is needed to better understand the actual meaning and significance of specific damage evidence. In addition, UAVs need to become more autonomous to increase the efficiency of damage mapping operations. Finally, progress in the processing of UAV-based imagery, in particular through advanced machine learning, must eventually lead to fully automated and accurate damage mapping with optical satellite imagery.

Author Contributions

Conceptualization, N.K.; Methodology, N.K.; Formal Analysis, N.K., F.N., M.G., D.D., A.V.; Investigation, N.K., F.N., M.G., D.D., A.V.; Writing—Original Draft Preparation, N.K.; Writing—Review & Editing, N.K., F.N., M.G., D.D., A.V.; Visualization, N.K., F.N., M.G., D.D., A.V.; Supervision, N.K.; Funding Acquisition, N.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the EU-FP7 projects RECONASS (grant no. 312718) and INACHUS (grant no. 607522), as well as H2020 project PANOPTIS (grant no. 769129).

Acknowledgments

We thank Pictometry, Inc. for providing the Haiti and Italy imagery used in this study, and the DigitalGlobe Foundation (www.digitalglobefoundation.com) for providing satellite images on Italy and Ecuador. We also appreciate the comments made by three anonymous reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baker, S. San francisco in ruins: The 1906 aerial photographs of george r. Lawrence. Landscape 1989, 30, 9–14. [Google Scholar]
  2. Kerle, N. Disasters: Risk assessment, management, and post-disaster studies using remote sensing. In Remote Sensing of Water Resources, Disasters, and Urban Studies (Remote Sensing Handbook, 3); Thenkabail, P.S., Ed.; CRC Press: Boca Raton, FL, USA, 2015; pp. 455–481. [Google Scholar]
  3. Dong, L.G.; Shan, J. A comprehensive review of earthquake-induced building damage detection with remote sensing techniques. ISPRS-J. Photogramm. Remote Sens. 2013, 84, 85–99. [Google Scholar] [CrossRef]
  4. Belabid, N.; Zhao, F.; Brocca, L.; Huang, Y.B.; Tan, Y.M. Near-real-time flood forecasting based on satellite precipitation products. Remote Sens. 2019, 11, 252. [Google Scholar] [CrossRef] [Green Version]
  5. Novikov, G.; Trekin, A.; Potapov, G.; Ignatiev, V.; Burnaev, E. Satellite imagery analysis for operational damage assessment in emergency situations. In International Conference on Business Information Systems, Berlin, Germany, 2018; Springer International Publishing: Berlin, Germany, 2008; pp. 347–358. [Google Scholar]
  6. Kerle, N.; Hoffman, R.R. Collaborative damage mapping for emergency response: The role of cognitive systems engineering. Nat. Hazards Earth Syst. Sci. 2013, 13, 97–113. [Google Scholar] [CrossRef] [Green Version]
  7. Ghaffarian, S.; Kerle, N.; Filatova, T. Remote sensing-based proxies for urban disaster risk management and resilience: A review. Remote Sens. 2018, 10, 1760. [Google Scholar] [CrossRef] [Green Version]
  8. Lu, C.H.; Ni, C.F.; Chang, C.P.; Yen, J.Y.; Chuang, R.Y. Coherence difference analysis of sentinel-1 sar interferogram to identify earthquake-induced disasters in urban areas. Remote Sens. 2018, 10, 1318. [Google Scholar] [CrossRef] [Green Version]
  9. Li, L.L.; Liu, X.G.; Chen, Q.H.; Yang, S. Building damage assessment from polsar data using texture parameters of statistical model. Comput. Geosci. 2018, 113, 115–126. [Google Scholar] [CrossRef]
  10. Gokon, H.; Post, J.; Stein, E.; Martinis, S.; Twele, A.; Muck, M.; Geiss, C.; Koshimura, S.; Matsuoka, M. A method for detecting buildings destroyed by the 2011 tohoku earthquake and tsunami using multitemporal terrasar-x data. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1277–1281. [Google Scholar] [CrossRef]
  11. Bai, Y.B.; Gao, C.; Singh, S.; Koch, M.; Adriano, B.; Mas, E.; Koshimura, S. A framework of rapid regional tsunami damage recognition from post-event terrasar-x imagery using deep neural networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 43–47. [Google Scholar] [CrossRef] [Green Version]
  12. Cooner, A.J.; Shao, Y.; Campbell, J.B. Detection of urban damage using remote sensing and machine learning algorithms: Revisiting the 2010 haiti earthquake. Remote Sens. 2016, 8, 868. [Google Scholar] [CrossRef] [Green Version]
  13. Ji, M.; Liu, L.; Buchroithner, M. Identifying collapsed buildings using post-earthquake satellite imagery and convolutional neural networks: A case study of the 2010 haiti earthquake. Remote Sens. 2018, 10, 1689. [Google Scholar] [CrossRef] [Green Version]
  14. Xu, J.Z.; Lu, W.; Li, Z.; Khaitan, P.; Zaytseva, V. Building damage detection in satellite imagery using convolutional neural networks. arXiv 2019, arXiv:1910.06444. [Google Scholar]
  15. Sublime, J.; Kalinicheva, E. Automatic post-disaster damage mapping using deep-learning techniques for change detection: Case study of the tohoku tsunami. Remote Sens. 2019, 11, 1123. [Google Scholar] [CrossRef] [Green Version]
  16. Ji, M.; Liu, L.; Du, R.; Buchroithner, M.F. A comparative study of texture and convolutional neural network features for detecting collapsed buildings after earthquakes using pre- and post-event satellite imagery. Remote Sens. 2019, 11, 1202. [Google Scholar] [CrossRef] [Green Version]
  17. Gupta, R.; Goodman, B.; Patel, N.; Hosfelt, R.; Sajeev, S.; Heim, E.; Doshi, J.; Lucas, K.; Choset, H.; Gaston, M.E. Creating xbd: A dataset for assessing building damage from satellite imagery. In Proceedings of the CVPR Workshops, Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
  18. Nakanishi, H.; Inoue, K. Study on Intelligent Aero-Robot for Disaster Response; Science Press Beijing: Beijing, China, 2005; Volume 5, pp. 1730–1734. [Google Scholar]
  19. Miyano, K.; Shinkuma, R.; Mandayam, N.B.; Sato, T.; Oki, E. Utility based scheduling for multi-uav search systems in disaster-hit areas. IEEE Access 2019, 7, 26810–26820. [Google Scholar] [CrossRef]
  20. Ejaz, W.; Azam, M.A.; Saadat, S.; Iqbal, F.; Hanan, A. Unmanned aerial vehicles enabled iot platform for disaster management. Energies 2019, 12, 2706. [Google Scholar] [CrossRef] [Green Version]
  21. Kerle, N.; Nex, F.; Duarte, D.; Vetrivel, A. Uav-based structural damage mapping—Results from 6 years of research in two european projects. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, XLII-3/W8, 187–194. [Google Scholar] [CrossRef] [Green Version]
  22. Whang, S.H.; Kim, D.H.; Kang, M.S.; Cho, K.; Park, S.; Son, W.H. Development of a Flying Robot System for Visual Inspection of Bridges; Ishmii-Int Soc Structural Health Monitoring Intelligent Infrastructure: Winnipeg, MB, Canada, 2007. [Google Scholar]
  23. Pollefeys, M.; Van Gool, L.; Vergauwen, M.; Verbiest, F.; Cornelis, K.; Tops, J.; Koch, R. Visual modeling with a hand-held camera. Int. J. Comput. Vision 2004, 59, 207–232. [Google Scholar] [CrossRef]
  24. Suzuki, T.; Miyoshi, D.; Meguro, J.; Amano, Y.; Hashizume, T.; Sato, K.; Takiguchi, J. Real-time hazard map generation using small unmanned aerial vehicle. In Proceedings of the Sice Annual Conference, Tokyo, Japan, 20–22 August 2008; IEEE: New York, NY, USA, 2008; Volumes 1–7, p. 4. [Google Scholar]
  25. Bendea, H.; Boccardo, P.; Dequal, S.; Giulio Tonolo, F.M.; Marenchino, D.; Piras, M. Low cost uav for post-disaster assessment. In XXIst ISPRS Congress, Beijing, China; ISPRS: Beijing, China, 2008; pp. 1373–1380. [Google Scholar]
  26. Lewis, G. Evaluating the use of a low-cost unmanned aerial vehicle platform in acquiring digital imagery for emergency response. In Geomatics Solutions for Disaster Management; Li, J., Zlatanova, S., Fabbri, A.G., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 117–133. [Google Scholar]
  27. Murphy, R.R.; Steimle, E.; Griffin, C.; Cullins, C.; Hall, M.; Pratt, K. Cooperative use of unmanned sea surface and micro aerial vehicles at hurricane wilma. J. Field Robot. 2008, 25, 164–180. [Google Scholar] [CrossRef]
  28. Kochersberger, K.; Kroeger, K.; Krawiec, B.; Brewer, E.; Weber, T. Post-disaster remote sensing and sampling via an autonomous helicopter. J. Field Robot. 2014, 31, 510–521. [Google Scholar] [CrossRef]
  29. Adams, S.M.; Levitan, M.L.; Friedland, C.J. High resolution imagery collection for post-disaster studies utilizing unmanned aircraft systems (uas). Photogramm. Eng. Remote Sens. 2014, 80, 1161–1168. [Google Scholar] [CrossRef] [Green Version]
  30. Dominici, D.; Baiocchi, V.; Zavino, A.; Alicandro, M.; Elaiopoulos, M. Micro uav for post seismic hazards surveying in old city center of l’aquila. In FIG Working Week; FIG: Rome, Italy, 2012; p. 15. [Google Scholar]
  31. Mavroulis, S.; Andreadakis, E.; Spyrou, N.I.; Antoniou, V.; Skourtsos, E.; Papadimitriou, P.; Kassaras, I.; Kaviris, G.; Tselentis, G.A.; Voulgaris, N.; et al. Uav and gis based rapid earthquake-induced building damage assessment and methodology for ems-98 isoseismal map drawing: The june 12, 2017 mw 6.3 lesvos (northeastern aegean, greece) earthquake. Int. J. Disaster Risk Reduct. 2019, 37, 20. [Google Scholar] [CrossRef]
  32. Dominici, D.; Alicandro, M.; Massimi, V. Uav photogrammetry in the post-earthquake scenario: Case studies in l’aquila. Geomat. Nat. Hazards Risk 2017, 8, 87–103. [Google Scholar] [CrossRef] [Green Version]
  33. Hein, D.; Kraft, T.; Brauchle, J.; Berger, R. Integrated uav-based real-time mapping for security applications. ISPRS Int. Geo-Inf. 2019, 8, 219. [Google Scholar] [CrossRef] [Green Version]
  34. Xu, Z.Q.; Yang, J.S.; Peng, C.Y.; Wu, Y.; Jiang, X.D.; Li, R.; Zheng, Y.; Gao, Y.; Liu, S.; Tian, B.F. Development of an uas for post-earthquake disaster surveying and its application in ms7.0 lushan earthquake, sichuan, china. Comput. Geosci. 2014, 68, 22–30. [Google Scholar] [CrossRef]
  35. Gowravaram, S.; Tian, P.Z.; Flanagan, H.; Goyer, J.; Chao, H.Y. Uas-based multispectral remote sensing and ndvi calculation for post disaster assessment. In Proceedings of the 2018 International Conference on Unmanned Aircraft Systems, Dallas, TX, USA, 12–15 June 2018; IEEE: New York, NY, USA, 2018; pp. 684–691. [Google Scholar]
  36. Fernandez Galarreta, J.; Kerle, N.; Gerke, M. Uav-based urban structural damage assessment using object-based image analysis and semantic reasoning. Nat. Hazards Earth Syst. Sci. 2015, 15, 1087–1101. [Google Scholar] [CrossRef] [Green Version]
  37. Grenzdorffer, G.J.; Guretzki, M.; Friedlander, I. Photogrammetric image acquisition and image analysis of oblique imagery. Photogramm. Record 2008, 23, 372–386. [Google Scholar] [CrossRef]
  38. Gerke, M.; Kerle, N. Automatic structural seismic damage assessment with airborne oblique pictometry © imagery. Photogramm. Eng. Remote Sens. 2011, 77, 885–898. [Google Scholar] [CrossRef]
  39. Gerke, M.; Kerle, N. Graph matching in 3d space for structural seismic damage assessment. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011. [Google Scholar]
  40. Zeng, T.; Yang, W.N.; Li, X.D. Seismic damage information extent about the buildings based on low-altitude remote sensing images of mianzu quake-stricken areas. Appl. Mech. Mater. 2012, 105–107, 1889–1893. [Google Scholar] [CrossRef]
  41. Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G. Identification of damage in buildings based on gaps in 3d point clouds from very high resolution oblique airborne images. ISPRS-J. Photogramm. Remote Sens. 2015, 105, 61–78. [Google Scholar] [CrossRef]
  42. Li, S.; Tang, H.; He, S.; Shu, Y.; Mao, T.; Li, J.; Xu, Z. Unsupervised detection of earthquake-triggered roof-holes from uav images using joint color and shape features. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1823–1827. [Google Scholar]
  43. Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G. Segmentation of uav-based images incorporating 3d point cloud information. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, II-3/W4, 261–268. [Google Scholar] [CrossRef] [Green Version]
  44. Dorafshan, S.; Thomas, R.J.; Maguire, M. Fatigue crack detection using unmanned aerial systems in fracture critical inspection of steel bridges. J. Bridge Eng. 2018, 23, 15. [Google Scholar] [CrossRef]
  45. Chen, S.Y.; Laefer, D.F.; Mangina, E.; Zolanvari, S.M.I.; Byrne, J. Uav bridge inspection through evaluated 3d reconstructions. J. Bridge Eng. 2019, 24, 15. [Google Scholar] [CrossRef] [Green Version]
  46. Akbar, M.A.; Qidwai, U.; Jahanshahi, M.R. An evaluation of image-based structural health monitoring using integrated unmanned aerial vehicle platform. Struct. Control. Health Monit. 2019, 26, 20. [Google Scholar] [CrossRef] [Green Version]
  47. Kakooei, M.; Baleghi, Y. Fusion of satellite, aircraft, and uav data for automatic disaster damage assessment. Int. J. Remote Sens. 2017, 38, 2511–2534. [Google Scholar] [CrossRef]
  48. Vetrivel, A.; Duarte, D.; Nex, F.; Gerke, M.; Kerle, N.; Vosselman, G. Potential of multi-temporal oblique airborne imagery for structural damage assessment. In Proceedings of the XXIII ISPRS Congress, Commission III, International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016; Halounova, L., Schindler, K., Limpouch, A., Pajdla, T., Safar, V., Mayer, H., Elberink, S.O., Mallet, C., Rottensteiner, F., Bredif, M., et al., Eds.; International Society for Photogrammetry and Remote Sensing: Prague, Czech Republic, 2016; Volume 3, pp. 355–362. [Google Scholar]
  49. Tu, J.H.; Sui, H.G.; Feng, W.Q.; Jia, Q. Detecting facade damage on moderate damaged type from high-resolution oblique aerial images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 5598–5607. [Google Scholar] [CrossRef]
  50. Lucks, L.; Bulatov, D.; Thönnessen, U.; Böge, M. Superpixel-wise assessment of building damage from aerial images. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2019, Prague, Czech Republic, 25–27 February 2019; pp. 211–220. [Google Scholar]
  51. Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G. Towards a more efficient detection of earthquake induced façade damages using oblique uav imagery. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, XLII-2/W6, 93–100. [Google Scholar] [CrossRef] [Green Version]
  52. Dorafshan, S.; Maguire, M. Bridge inspection: Human performance, unmanned aerial systems and automation. J. Civ. Struct. Health Monit. 2018, 8, 443–476. [Google Scholar] [CrossRef] [Green Version]
  53. Dorafshan, S.; Coopmans, C.; Thomas, R.J.; Maguire, M. Deep learning neural networks for suas-assisted structural inspections: Feasibility and application. In Proceedings of the 2018 International Conference on Unmanned Aircraft Systems, Dallas, TX, USA, 12–15 June 2018; IEEE: New York, NY, USA, 2018; pp. 874–882. [Google Scholar]
  54. Xu, Z.H.; Wu, L.X.; Zhang, Z.X. Use of active learning for earthquake damage mapping from uav photogrammetric point clouds. Int. J. Remote Sens. 2018, 39, 5568–5595. [Google Scholar] [CrossRef]
  55. Vetrivel, A.; Gerke, M.; Kerle, N.; Nex, F.; Vosselman, G. Disaster damage detection through synergistic use of deep learning and 3d point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning. ISPRS-J. Photogramm. Remote Sens. 2018, 140, 45–59. [Google Scholar] [CrossRef]
  56. Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G. Satellite image classification of building damages using airborne and satellite image samples in a deep learning approach. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. 2018, IV-2, 89–96. [Google Scholar] [CrossRef] [Green Version]
  57. Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G. Multi-resolution feature fusion for image classification of building damages with convolutional neural networks. Remote Sens. 2018, 10, 1636. [Google Scholar] [CrossRef] [Green Version]
  58. Kerle, N.; Ghaffarian, S.; Nawrotzki, R.; Leppert, G.; Lech, M. Evaluating resilience-centered development interventions with remote sensing. Remote Sens. 2019, 11, 2511. [Google Scholar] [CrossRef] [Green Version]
  59. Nex, F.; Duarte, D.; Steenbeek, A.; Kerle, N. Towards real-time building damage mapping with low-cost uav solutions. Remote Sens. 2019, 11, 287. [Google Scholar] [CrossRef] [Green Version]
  60. Tsai, Y.C.; Wei, C.C. Accelerated Disaster Reconnaissance Using Automatic Traffic Sign Detection with UAV and AI; Amer Soc Civil Engineers: New York, NY, USA, 2019; pp. 405–411. [Google Scholar]
  61. Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G. Detection of seismic façade damages with multi-temporal oblique aerial imagery. GISci. Remote Sens. In press.
  62. Li, Y.D.; Ye, S.; Bartoli, I. Semisupervised classification of hurricane damage from postevent aerial imagery using deep learning. J. Appl. Remote Sens. 2018, 12, 045008. [Google Scholar] [CrossRef]
  63. Li, Y.D.; Hu, W.; Dong, H.; Zhang, X.Y. Building damage detection from post-event aerial imagery using single shot multibox selector. Appl. Sci. 2019, 9, 1128. [Google Scholar] [CrossRef] [Green Version]
  64. Liang, X. Image-based post-disaster inspection of reinforced concrete bridge systems using deep learning with bayesian optimization. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 415–430. [Google Scholar] [CrossRef]
  65. Nex, F.; Duarte, D.; Tonolo, F.G.; Kerle, N. Structural building damage detection with deep learning: Assessment of a state-of-the-art cnn in operational conditions. Remote Sens. 2019, 11, 2765. [Google Scholar] [CrossRef] [Green Version]
  66. Song, D.M.; Tan, X.; Wang, B.; Zhang, L.; Shan, X.J.; Cui, J.Y. Integration of super-pixel segmentation and deep-learning methods for evaluating earthquake-damaged buildings using single-phase remote sensing imagery. Int. J. Remote Sens. 2019, 41, 1040–1066. [Google Scholar] [CrossRef]
  67. Huang, H.; Sun, G.Y.; Zhang, X.M.; Hao, Y.L.; Zhang, A.Z.; Ren, J.C.; Ma, H.Z. Combined multiscale segmentation convolutional neural network for rapid damage mapping from postearthquake very high-resolution images. J. Appl. Remote Sens. 2019, 13, 022007. [Google Scholar] [CrossRef]
  68. Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G. Identification of structurally damaged areas in airborne oblique images using a visual-bag-of-words approach. Remote Sens. 2016, 8, 231. [Google Scholar] [CrossRef] [Green Version]
  69. Gong, L.X.; Wang, C.; Wu, F.; Zhang, J.F.; Zhang, H.; Li, Q. Earthquake-induced building damage detection with post-event sub-meter vhr terrasar-x staring spotlight imagery. Remote Sens. 2016, 8, 887. [Google Scholar] [CrossRef] [Green Version]
  70. Szegedy, C.; Liu, W.; Jia, Y.Q.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; IEEE: New York, NY, USA, 2015; pp. 1–9. [Google Scholar]
  71. Cusicanqui, J.; Kerle, N.; Nex, F. Usability of aerial video footage for 3-d scene reconstruction and structural damage assessment. Nat. Hazards Earth Syst. Sci. 2018, 18, 1583–1598. [Google Scholar] [CrossRef] [Green Version]
  72. Mitomi, H.; Yamazaki, F.; Matsuoka, M. Automated detection of building damage due to recent earthquakes using aerial television image. In 21st Asian Conference on Remote Sensing, Taipei, Taiwan, 2000; GIS Development: Taipei, Taiwan, 2000; pp. 401–406. [Google Scholar]
  73. Grünthal, G. European Macroseismic Scale 1998 (EMS-98); Cahiers du Centre Européen de Géodynamique et de Séismologie, Centre Européen de Géodynamique et de Séismologie: Walferdange, Luxembourg, 1998; Volume 15, p. 99. [Google Scholar]
  74. Yamazaki, F.; Yano, Y.; Matsuoka, M. Visual damage interpretation of buildings in bam city using quickbird images following the 2003 bam, iran, earthquake. Earthq. Spectra 2005, 21, S328–S336. [Google Scholar] [CrossRef]
  75. Corbane, C.; Saito, K.; Dell’Oro, L.; Gill, S.P.D.; Piard, B.E.; Huyck, C.K.; Kemper, T.; Lemoine, G.; Spence, R.J.S.; Shankar, R.; et al. A comprehensive analysis of building damage in the 12 january 2010 mw7 haiti earthquake using high resolution satellite and aerial imagery. Photogramm. Eng. Remote Sens. 2011, 77, 997–1009. [Google Scholar] [CrossRef]
  76. Dubois, D.; Lepage, R. Fast and efficient evaluation of building damage from very high resolution optical satellite images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4167–4176. [Google Scholar] [CrossRef]
  77. Li, W.; Shuai, X.; Liu, Q. Building damage characteristics analysis based on the three-dimensional image from oblique photography. J. Nat. Disasters 2016, 25, 152–158. [Google Scholar]
  78. Lattanzi, D.; Miller, G.R. 3d scene reconstruction for robotic bridge inspection. J. Infrastruct. Syst. 2015, 21, 12. [Google Scholar] [CrossRef]
  79. Zou, Y.; Gonzalez, V.; Lim, J.; Amor, R.; Guo, B.; Babaeian Jelodar, M. Systematic framework for post-earthquake bridge inspection through uav and 3d bim reconstruction. In Proceedings of the CIB World Building Congress, Hong Kong, China, 17–21 June 2019; p. 9. [Google Scholar]
  80. Akcay, S.; Atapour Abarghouei, A.; Breckon, T. Skip-ganomaly: Skip connected and adversarially trained encoder-decoder anomaly detection. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
  81. Loquercio, A.; Kaufmann, E.; Ranftl, R.; Dosovitskiy, A.; Koltun, V.; Scaramuzza, D. Deep drone racing: From simulation to reality with domain randomization. IEEE Trans. Robot. 2019, 1–14. [Google Scholar] [CrossRef] [Green Version]
  82. Palossi, D.; Loquercio, A.; Conti, F.; Flamand, E.; Scaramuzza, D.; Benini, L. A 64mw dnn-based visual navigation engine for autonomous nano-drones. IEEE Internet Things J. 2019, 6, 8357–8371. [Google Scholar] [CrossRef] [Green Version]
  83. Wu, H.H.; Zhou, Z.; Feng, M.; Yan, Y.; Xu, H.; Qian, L. Real-time single object detection on the uav. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems, ICUAS 2019, Atlanta, GA, USA, 11–14 June 2019; pp. 1013–1022. [Google Scholar]
  84. Siam, M.; Eikerdawy, S.; Gamal, M.; Abdel-Razek, M.; Jagersand, M.; Zhang, H. Real-time segmentation with appearance, motion and geometry. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 5793–5800. [Google Scholar]
  85. Delmerico, J.; Mintchev, S.; Giusti, A.; Gromov, B.; Melo, K.; Horvat, T.; Cadena, C.; Hutter, M.; Ijspeert, A.; Floreano, D.; et al. The current state and future outlook of rescue robotics. J. Field Robot. 2019, 36, 1171–1191. [Google Scholar] [CrossRef]
  86. Trujillo, J.C.; Munguia, R.; Guerra, E.; Grau, A. Visual-based slam configurations for cooperative multi-uav systems with a lead agent: An observability-based approach. Sensors 2018, 18, 4243. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. Bavle, H.; Sanchez-Lopez, J.L.; de la Puente, P.; Rodriguez-Ramos, A.; Sampedro, C.; Campoy, P. Fast and robust flight altitude estimation of multirotor uavs in dynamic unstructured environments using 3d point cloud sensors. Aerospace 2018, 5, 94. [Google Scholar] [CrossRef] [Green Version]
  88. Zhang, X.; Xian, B.; Zhao, B.; Zhang, Y. Autonomous flight control of a nano quadrotor helicopter in a gps-denied environment using on-board vision. IEEE Trans. Ind. Electron. 2015, 62, 6392–6403. [Google Scholar] [CrossRef]
  89. Paredes, J.A.; Alvarez, F.J.; Aguilera, T.; Villadangos, J.M. 3d indoor positioning of uavs with spread spectrum ultrasound and time-of-flight cameras. Sensors 2018, 18, 89. [Google Scholar] [CrossRef] [Green Version]
  90. Falanga, D.; Kleber, K.; Mintchev, S.; Floreano, D.; Scaramuzza, D. The foldable drone: A morphing quadrotor that can squeeze and fly. IEEE Robot. Autom. Lett. 2018, 4, 209–216. [Google Scholar] [CrossRef] [Green Version]
  91. Quenzel, J.; Nieuwenhuisen, M.; Droeschel, D.; Beul, M.; Houben, S.; Behnke, S. Autonomous mav-based indoor chimney inspection with 3d laser localization and textured surface reconstruction. J. Intell. Robot. Syst. 2019, 93, 317–335. [Google Scholar] [CrossRef]
  92. Schweizer, E.A.; Stow, D.A.; Coulter, L.L. Automating near real-time, post-hazard detection of crack damage to critical infrastructure. Photogramm. Eng. Remote Sens. 2018, 84, 76–87. [Google Scholar] [CrossRef]
  93. Sanchez-Cuevas, P.J.; Ramon-Soria, P.; Arrue, B.; Ollero, A.; Heredia, G. Robotic system for inspection by contact of bridge beams using uavs. Sensors 2019, 19, 305. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  94. Jimenez-Cano, A.E.; Heredia, G.; Ollero, A. Aerial manipulator with a compliant arm for bridge inspection. In Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 1217–1222. [Google Scholar]
  95. Lin, L.; Yang, Y.; Cheng, H.; Chen, X. Autonomous vision-based aerial grasping for rotorcraft unmanned aerial vehicles. Sensors 2019, 19, 3410. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  96. Ruggiero, F.; Lippiello, V.; Ollero, A. Introduction to the special issue on aerial manipulation. IEEE Robot. Autom. Lett. 2018, 3, 2734–2737. [Google Scholar] [CrossRef]
  97. Salaan, C.J.; Tadakuma, K.; Okada, Y.; Ohno, K.; Tadokoro, S. Uav with two passive rotating hemispherical shells and horizontal rotor for hammering inspection of infrastructure. In Proceedings of the 2017 IEEE/SICE International Symposium on System Integration, Taipei, Taiwan, 11–14 December 2017; pp. 769–774. [Google Scholar]
Figure 1. Damages identified from unmanned aerial vehicle (UAV)-derived point clouds and from object-based image analysis (OBIA) processing. (a) Inclination in walls, (b) openings (turquoise), cracks (magenta), and damage crossing beams, (c,d) detailed point cloud and segment orientation angles (adapted from [36]).
Figure 2. Typical problems for image processing posed by shadow and occlusion [51].
Figure 3. UAV-derived point clouds of reinforced concrete structure with brick in-fill walls subjected to exterior and interior detonations. Openings, cracks, and debris piles, as well as subtle deformation in the façades were automatically detected.
Figure 4. Point cloud representation of an INACHUS pilot structure in Lyon, France, calculated from optical imagery acquired with a low-cost commercial drone (Phantom 4, DJI), showing damage detected through machine learning (red).
Figure 5. Workflow of the app developed for near real-time damage mapping. Images are streamed to a laptop computer and processed immediately after acquisition. A convolutional neural network (CNN)-based damage detection algorithm is applied, and a progressively built sparse 3D model is used to orthorectify them. By the time the UAV lands, an orthomosaic displaying the damage is finished (adapted from [59]).
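To make the Figure 5 workflow concrete, the following minimal sketch mimics its producer/consumer structure: streamed images are queued, each is passed through damage detection, and the orthomosaic grows during flight. The DamageDetector and SparseModel classes are empty stand-ins introduced here only for illustration; the actual app in [59] uses its own CNN and photogrammetric components.

import queue
import threading

class DamageDetector:
    def detect(self, image):
        # Placeholder for the CNN-based damage detection step.
        return {"image": image, "damage_regions": []}

class SparseModel:
    # Placeholder for the progressively built sparse 3D model and orthomosaic.
    def __init__(self):
        self.mosaic = []

    def add_image(self, image):
        pass  # extend the sparse reconstruction with the new view

    def orthorectify(self, detections):
        return detections  # project detections into map geometry

def processing_loop(images, detector, model):
    # Consume streamed images as they arrive; the mosaic is ready at landing.
    while True:
        image = images.get()
        if image is None:  # sentinel value: UAV has landed, stream closed
            break
        detections = detector.detect(image)  # per-image damage detection
        model.add_image(image)               # update the sparse 3D model
        model.mosaic.append(model.orthorectify(detections))

images = queue.Queue()
worker = threading.Thread(
    target=processing_loop, args=(images, DamageDetector(), SparseModel())
)
worker.start()
for frame in ["img_001.jpg", "img_002.jpg"]:  # placeholder for the image stream
    images.put(frame)
images.put(None)
worker.join()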
Figure 6. Examples of building damage detection via CNN activation layers, using aerial pre- and post-earthquake façade images. Bright activation colors show damage hotspots (adapted from [61]).
Figure 7. Voids within the photogrammetric model shown in Figure 4, obtained with a terrestrial laser scanning system. (a) Estimated size of open spaces observed through openings, (b) distance of voids to the edge of the building.
