Article

A Novel Active Contours Model for Environmental Change Detection from Multitemporal Synthetic Aperture Radar Images

1 Department of Civil Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj 6617715175, Iran
2 Centre Eau Terre Environnement, Institut National de la Recherche Scientifique, Quebec, QC G1K 9A9, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(11), 1746; https://doi.org/10.3390/rs12111746
Submission received: 16 April 2020 / Revised: 20 May 2020 / Accepted: 22 May 2020 / Published: 29 May 2020
(This article belongs to the Special Issue Multi-temporal Synthetic Aperture Radar)

Abstract

In this paper, we propose a novel approach based on the active contours model for change detection from synthetic aperture radar (SAR) images. In order to increase the accuracy of the proposed approach, a new operator was introduced to generate a difference image from the before and after change images. Then, a new model of active contours was developed for accurately detecting changed regions from the difference image. The proposed model extracts the changed areas as a target feature from the difference image based on training data from changed and unchanged regions. In this research, we used the Otsu histogram thresholding method to produce the training data automatically. In addition, the training data were updated in the process of minimizing the energy function of the model. To evaluate the accuracy of the model, we applied the proposed method to three benchmark SAR data sets. The proposed model obtains 84.65%, 87.07%, and 96.26% of the Kappa coefficient for Yellow River Estuary, Bern, and Ottawa sample data sets, respectively. These results demonstrated the effectiveness of the proposed approach compared to other methods. Another advantage of the proposed model is its high speed in comparison to the conventional methods.

Graphical Abstract

1. Introduction

Change detection is the process in which two remote sensing images of a region, acquired at different times, are used to extract the areas that have changed during the interval between the acquisitions. In recent decades, optical and radar remote sensing imagery has become a vital resource for change detection applications thanks to its high spatial and temporal resolution, useful spectral or polarimetric characteristics, extensive spatial coverage, and cost-effectiveness. Synthetic aperture radar (SAR) images in particular, owing to their ability to image in all weather conditions (e.g., rain, dust, and darkness) and their suitable spatial resolution, have been widely exploited in change detection applications for assessing the impacts of natural hazards [1,2,3], monitoring and mapping the environment and natural resources [4,5], tracking urban development [6,7,8], etc.
Change detection algorithms can be categorized into supervised, semi-supervised, and unsupervised classes, depending on the availability of ground truth information. Supervised approaches generally achieve higher accuracy, but they need sample data of changed and unchanged areas to train the classifier model [9,10,11]. These models are less frequently used because prior information is rarely available in real applications. Semi-supervised change detection algorithms exploit labeled and unlabeled samples simultaneously [12,13]: the labeled samples act as training data, while the remaining pixels serve as unlabeled samples. In general, supervised and semi-supervised algorithms are more effective than unsupervised methods. However, because they do not require training data, unsupervised approaches are more prevalent in change detection applications [14].
Unsupervised change detection methods can be grouped into two classes: threshold-based and classification-based methods. In threshold-based algorithms, the goal is to find a threshold that correctly separates the difference image into changed and unchanged classes [15,16]. Gong et al. introduced a neighborhood-based ratio operator to generate a difference image and extracted the changed regions by applying a threshold to it [17]. Sumaiya and Kumari calculated a threshold based on the mean of the difference image and the logarithm of the two input SAR images to extract the changed areas from the background [18]. In another study, they proposed a change detection algorithm based on the Gabor filter and the Kittler–Illingworth thresholding algorithm [19]. Although threshold-based methods are simple and fast, they are less accurate than other methods.
Owing to their suitable accuracy and simplicity, clustering-based classification models have been widely used for change detection from temporal SAR images. Classification-based methods are usually combined with conventional image processing and optimization algorithms to increase their accuracy and reduce their computational cost [20,21]. Shang et al. utilized an artificial immune multi-objective clustering algorithm based on the intensity and texture of the difference image to separate changed regions from unchanged ones [22]. Zheng et al. used a K-means clustering method to detect changed pixels using a linear combination of subtraction and log-ratio difference images [23]. Li et al. proposed a multi-objective fuzzy clustering algorithm in which change detection is modeled as a multi-objective optimization problem with two objective functions, one preserving image details and the other removing noise [24]. Zheng et al. used an unsupervised saliency-guided method and K-means clustering to detect changed areas in a difference image obtained from a logarithmic ratio operator [14]. In addition, Tian et al. developed an edge-based fuzzy clustering algorithm to detect changes in SAR images [25].
Artificial neural networks have also been utilized as classifiers in change detection applications. Although they can achieve better performance, they suffer from high complexity and low speed. Convolutional neural networks (CNNs), sparse auto-encoders, and unsupervised clustering algorithms have been applied to detect changes in SAR images [26]. Li et al. used a deep learning network named PCANet (Principal Component Analysis Network) together with a saliency method for SAR change detection [27]. Recently, Chen et al. introduced a fast unsupervised deep neural network to generate a difference image for change detection of SAR images [28].
Furthermore, other classification models, such as graph cuts, have been used as change detectors for temporal SAR images. Carincotte et al. developed a fuzzy hidden Markov chain model that combines fuzzy and statistical points of view to identify changes in SAR images [29]. Ma et al. fused wavelet coefficients for low- and high-frequency bands using fusion rules based on weight averaging and the minimum standard deviation, respectively, to extract changed regions from temporal SAR images [30]. Gong et al. presented a change detection method based on texture and intensity information and a multivariate generalized Gaussian graph cuts model [31].
In this paper, we develop an innovative model based on active contours for change detection from SAR images. The proposed approach consists of three stages: difference image generation, change detection by active contours, and accuracy assessment. First, a new difference image operator is introduced to increase the accuracy of change detection. Next, a novel change detection algorithm based on active contours, named the desired feature local active contour (DFLAC) model, is presented. Finally, the accuracy and speed of the model are evaluated and compared with other studies.

2. Materials and Methods

In all change detection methods, a difference image is first produced from two images of the same region acquired before and after the change, using one of the two standard operators, subtraction or log-ratio. A model is then developed to separate the changed from the unchanged areas. In this paper, we first introduce a new difference image generation operator to increase the accuracy of the change detection method. We then develop an innovative active contour model to extract the changed regions from the difference image precisely.
Based on the active contours model proposed by Chunming Li [32], we introduce a new model that detects changes in the presence of inhomogeneity and noise in SAR images. We named the proposed model the desired feature local active contour (DFLAC) model because it extracts the desired target feature (the changed regions) using local image information. The DFLAC model needs training data from changed and unchanged areas, which are produced automatically by the Otsu thresholding method.

2.1. New Difference Image Operator

The first step of the change detection algorithm is to generate the difference image from two images before and after the change. Subtraction and log-ratio are two frequently used operators employed in many change detection research works [14]:
$I_{d1} = |I_b - I_a|,$
$I_{d2} = \left| \log_{10}\!\left( \dfrac{I_b + 1}{I_a + 1} \right) \right|,$
where I_d, I_b, and I_a are the difference image and the before- and after-change images, respectively.
Although the subtraction operator can detect small changes, the log-ratio operator performs better for SAR images, which are affected by multiplicative noise [14]. We propose another operator for generating the difference image, namely the root multiplication log-ratio and normal difference (RMLND) operator. This operator fuses the log-ratio operator and the normal difference operator (a normalized form of subtraction) and inherits the benefits of both: it can detect small changes while being less sensitive to SAR image noise (Figure 3):
$I_{d3} = \sqrt{I_{d2} \times I_{d4}},$
where I_d2 is the log-ratio operator and I_d4 is the normal difference operator, defined as follows:
$I_{d4} = \left| \dfrac{I_b - I_a}{I_b + I_a + \eta} \right|,$
where the parameter η is a small constant that avoids incorrect results at pixels where I_a and I_b are both equal to zero.
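For illustration, the following NumPy sketch computes the RMLND difference image from two co-registered SAR intensity images. The value of η, the final min–max rescaling, and the function name are assumptions for this sketch, since the paper does not specify them.

```python
import numpy as np

def rmlnd_difference(I_b, I_a, eta=1e-6):
    """Sketch of the RMLND operator (Equations (2)-(4)): the square root of the
    product of the log-ratio and normal-difference images."""
    I_b = I_b.astype(np.float64)
    I_a = I_a.astype(np.float64)

    log_ratio = np.abs(np.log10((I_b + 1.0) / (I_a + 1.0)))      # Equation (2)
    normal_diff = np.abs((I_b - I_a) / (I_b + I_a + eta))        # Equation (4)

    d = np.sqrt(log_ratio * normal_diff)                         # Equation (3)
    # rescale to [0, 1] (assumed normalisation)
    return (d - d.min()) / (d.max() - d.min() + 1e-12)
```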

2.2. DFLAC Model

The proposed DFLAC model was developed based on Chunming Li's model [32] and detects changed regions in the difference image using training data. In this model, the contour C separates the difference image (I_d) domain R into the changed and unchanged regions R_c and R_u, so that R = R_c ∪ R_u. The energy function of the model is the sum of the differences between the pixel values inside the curve C and the training data. This function is minimized in an iterative process, and the curve C moves towards the border between the changed and unchanged regions. Inspired by Li's model, the energy function of the proposed model in a local region S_x in the neighborhood of pixel x is defined as follows:
$F(x) = \sum_{i=1}^{2} \int_{A} \left( I_d(y) - t_i \right)^2 dy,$
where I_d(y) represents the difference image in S_x, and t_i (i = 1, 2) represents the training data of the changed and unchanged regions, respectively. Additionally, A is defined as:
$A = \begin{cases} S_x \cap R_c, & i = 1 \\ S_x \cap R_u, & i = 2. \end{cases}$
When more than one training sample per class is introduced to the model, the minimum difference between the image intensity and the training data is used in the integral of Equation (5). Therefore, this equation can be rewritten as follows:
$F(x) = \sum_{i=1}^{2} \int_{A} \min_j \left( I_d(y) - t_i^j \right)^2 dy,$
where t_i^j denotes the jth training sample of the ith class of the image (changed or unchanged regions).
The above equation computes the difference between the intensity of each pixel inside the curve C and the most similar training sample. Accordingly, after minimizing the energy function of the model, the curve C extracts the changed regions from the unchanged area. Equation (7) is improved by utilizing a kernel function K, with K(x − y) = 0 for y outside S_x, as a non-negative window function that separates S_x from the rest of the image domain and brings the local intensity into the energy function [32]:
$F(x) = \sum_{i=1}^{2} \int_{S_x} K(x - y) \min_j \left( I_d(y) - t_i^j \right)^2 dy.$
Due to the nature of the SAR images, the difference image I d is very heterogeneous and noisy. Therefore, we used the Li model to decompose the difference image into the true image, bias field (a parameter to formulate the intensity inhomogeneity in each pixel of the image), and noise:
$I_d(y) = b(x) \cdot J(y) + n(y),$
where I_d(y) depicts the difference image, J(y) represents the true image, b represents the bias field, and n represents the image noise [32].
In the region S_x, the true image J(y), and consequently the training data, take approximately constant values for the changed and unchanged regions, so that we can write:
$t_i^j = b(x)\, p_i^j(y) + n(y),$
where p represents the true value of the training data. Therefore, based on Equation (10), Equation (8) can be rewritten as follows:
$F(x) = \sum_{i=1}^{2} \int_{S_x} K(x - y) \min_j \left( I_d(y) - b(x)\, p_i^j(y) - n(y) \right)^2 dy.$
F(x) is the energy at pixel x; to obtain the energy function of the whole image domain, we integrate F(x) over the domain Ω:
$F = \int_{\Omega} \sum_{i=1}^{2} \int_{S_x} K(x - y) \min_j \left( I_d(y) - b(x)\, p_i^j(y) - n(y) \right)^2 dy\, dx.$
After exchanging the order of the integration, Equation (12) is written as follows:
$F = \int_{\Omega} \sum_{i=1}^{2} \left[ \int K(x - y) \min_j \left( I_d(y) - b(x)\, p_i^j(y) - n(y) \right)^2 dx \right] dy.$
The implicit form of curve C based on level set theory is used to minimize the energy function of the DFLAC model [33]:
$F = \int_{\Omega} \sum_{i=1}^{2} \left[ \int K(x - y) \min_j \left( I_d(y) - b(x)\, p_i^j(y) - n(y) \right)^2 dx \right] W_i(\varphi)\, dy.$
In the above equation, the changed and unchanged regions are represented by their membership functions, W_1(φ) = H(φ) and W_2(φ) = 1 − H(φ), respectively, where H is the Heaviside function and φ is a signed distance function that describes the curve C implicitly [33].
The above energy function F is the image (data) term of the DFLAC model. To regularize the model, two additional terms, the length and distance regularization terms introduced by Li and colleagues [32], are added to the energy function:
$F_{\mathrm{Final}} = \alpha F_{\mathrm{image}} + \beta F_{\mathrm{length}} + \gamma F_{\mathrm{distance}},$
where α, β, and γ are constant parameters that set the weight of each term in the energy function.
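The exact forms of the window function K and of the regularized Heaviside function are not specified above; following Li's formulation [32], a truncated Gaussian kernel and smoothed Heaviside/Dirac approximations are standard choices. The sketch below assumes those standard forms, and the parameter values are illustrative.

```python
import numpy as np

def gaussian_kernel(sigma=3.0, truncate=2.0):
    """Truncated Gaussian window K(x - y) that localises the energy in S_x
    (sigma is an assumed value; the paper does not report it)."""
    r = int(truncate * sigma)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    K = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return K / K.sum()

def heaviside(phi, eps=1.0):
    """Smoothed Heaviside H(phi), so that W1 = H(phi) and W2 = 1 - H(phi)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac(phi, eps=1.0):
    """Smoothed Dirac delta, the derivative of the smoothed Heaviside."""
    return (eps / np.pi) / (eps ** 2 + phi ** 2)
```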

2.3. Minimization of the Energy Function

The level set function φ is an implicit representation of the curve C: φ > 0 inside the curve, φ < 0 outside the curve, and φ = 0 on the curve C itself. Consequently, by evolving φ over time while minimizing F_Final with respect to φ using the standard gradient descent method, the curve C moves towards the border between the changed and unchanged regions of the image [34]:
$\dfrac{\partial \varphi}{\partial t} = -\dfrac{\partial F_{\mathrm{Final}}}{\partial \varphi}.$
Therefore, the evolution equation of the level set function φ over time is represented as follows:
$\varphi^{k+1} = \varphi^{k} + \dfrac{\partial \varphi^{k}}{\partial t}\, \Delta t = \varphi^{k} - \dfrac{\partial F_{\mathrm{Final}}}{\partial \varphi}\, \Delta t,$
where Δt is the time step and φ^k is the level set function at iteration k. The derivative ∂F_Final/∂φ is calculated, and the corresponding gradient flow equation is:
$\dfrac{\partial \varphi}{\partial t} = \alpha\, \delta(\varphi) \sum_{i=1}^{2} f_i + \beta\, \delta(\varphi)\, \mathrm{div}\!\left( \dfrac{\nabla \varphi}{|\nabla \varphi|} \right) + \gamma\, \mathrm{div}\!\left( d_p(|\nabla \varphi|)\, \nabla \varphi \right),$
where δ(φ) = ∂H(φ)/∂φ, div is the divergence operator, ∇ is the gradient operator, and d_p is defined, following [32], as:
$d_p(x) = \dfrac{p'(x)}{x}, \qquad p(x) = \dfrac{(x - 1)^2}{2}.$
Additionally, the term f_i can be obtained using the following expression:
$f_i = \int K(x - y) \min_j \left( I_d(y) - b(x)\, p_i^j(y) - n(y) \right)^2 dx.$
Moreover, the above integrals can be described using the convolution function so that:
$f_i = \min_j \left( I_d^2\, K_u - 2 p_i^j I_d (b * K) - 2 I_d\, n\, K_u + (p_i^j)^2 (b^2 * K) + 2 p_i^j n (b * K) + n^2 K_u \right),$
where * denotes the convolution operator and $K_u = \int K(y - x)\, dx$, with K_u = 1 everywhere except near the boundary of the image domain Ω [32].
Furthermore, in order to calculate the parameter b, we keep the parameters φ and t fixed and minimize F_Final with respect to b, which gives:
$b = \dfrac{\left( (I_d - n) \sum_{i=1}^{2} P_i\, W_i(\varphi) \right) * K}{\left( \sum_{i=1}^{2} P_i^2 \right) * K},$
where P_1 and P_2 are two matrices of the same size as the intensity matrix. Each element of these matrices is the training value of the changed or unchanged class, respectively, that minimizes the function E_i for the given pixel:
$E_i = \left( I_d - b\, t - n \right)^2 W_i(\varphi).$
In the same way, the other unknown parameters n and p are obtained as follows:
$n = \dfrac{(I_d * K) - b \sum_{i=1}^{2} \left( P_i\, W_i(\varphi) \right)}{K_u}.$
Furthermore, the true value of training data can be computed in each iteration as follows:
$p_i^j = \dfrac{\int (b * K)\,(I_d - n)\, B_i^j\, W_i(\varphi)\, dx}{\int (b^2 * K)\, B_i^j\, W_i(\varphi)\, dx},$
where B_i^j is a binary array that indicates the pixels for which p_i^j minimizes the function E_i defined above. In the first iteration, the training data t_i^j are used instead of p_i^j.
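To make the evolution concrete, the sketch below performs one gradient-descent step of Equation (17). For readability, the data force uses a reduced two-value fit (one representative training value per class, without the bias field and noise decomposition) rather than the full convolution expression of Equation (21), so it is a simplified illustration of the update, not the authors' exact implementation; all parameter values are assumptions.

```python
import numpy as np

def dirac(phi, eps=1.0):
    """Smoothed Dirac delta (same approximation as in Section 2.2)."""
    return (eps / np.pi) / (eps ** 2 + phi ** 2)

def curvature(phi, tiny=1e-8):
    """div(grad(phi) / |grad(phi)|), used by the length term."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + tiny
    nxx = np.gradient(gx / norm)[1]      # d/dx of the x-component
    nyy = np.gradient(gy / norm)[0]      # d/dy of the y-component
    return nxx + nyy

def distance_reg(phi, tiny=1e-8):
    """div(d_p(|grad(phi)|) grad(phi)) with p(s) = (s - 1)^2 / 2 (Equation (19))."""
    gy, gx = np.gradient(phi)
    s = np.sqrt(gx ** 2 + gy ** 2) + tiny
    dps = (s - 1.0) / s                  # d_p(s) = p'(s) / s
    return np.gradient(dps * gx)[1] + np.gradient(dps * gy)[0]

def evolve_step(phi, I_d, t_changed, t_unchanged,
                dt=0.1, alpha=1.0, beta=0.11, gamma=0.4):
    """One evolution step: phi^(k+1) = phi^k + (d phi / d t) * dt."""
    d = dirac(phi)
    # simplified data force: move the curve towards pixels that fit the
    # changed training value better than the unchanged one
    data = -alpha * d * ((I_d - t_changed) ** 2 - (I_d - t_unchanged) ** 2)
    length = beta * d * curvature(phi)
    dist = gamma * distance_reg(phi)
    return phi + dt * (data + length + dist)
```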

2.4. Training Data Sampling

The DFLAC model separates the difference image domain into two classes, changed and unchanged areas, based on the training data of those classes. In the proposed model, the training data are not sampled from the difference image manually but are generated automatically. First, a threshold T is obtained from the difference image using the Otsu algorithm [35]. T is a normalized value in the range [0, 1] that can be used to classify an image into two classes; therefore, we use the Otsu threshold T to generate the training data of the model from the difference image automatically. Second, k_1 numbers in the range [T, 1] and k_2 numbers in the range [0, T], taken with equal steps, are selected as training data, where k_1 and k_2 are the numbers of training values of the changed and unchanged classes, respectively. For example, if T = 0.6, k_1 = 2, and k_2 = 4, then the numbers 0.8 and 1 are the training data of the changed class, and the numbers 0, 0.15, 0.3, and 0.45 are the training data of the unchanged class. Third, the produced training data can be projected into other ranges, such as [0, 255]. It should be noted that each difference image operator has its own specific threshold value T and training data.
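A minimal sketch of this sampling rule, assuming the difference image has already been scaled to [0, 1] and using scikit-image's Otsu implementation; the function name is hypothetical.

```python
import numpy as np
from skimage.filters import threshold_otsu

def sample_training_data(diff_image, k1=4, k2=2):
    """Automatically generate training values from the difference image.

    k1, k2: numbers of training values for the changed and unchanged classes
    (four and two were found to be optimal in Section 4.1.3).
    """
    T = threshold_otsu(diff_image)                           # Otsu threshold in [0, 1]
    changed = T + (1.0 - T) / k1 * np.arange(1, k1 + 1)      # equal steps in (T, 1]
    unchanged = T / k2 * np.arange(k2)                       # equal steps in [0, T)
    return changed, unchanged

# Example from Section 2.4: T = 0.6, k1 = 2, k2 = 4 gives
# changed = [0.8, 1.0] and unchanged = [0.0, 0.15, 0.3, 0.45].
```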

2.5. Evaluation Indices

In order to evaluate the accuracy of the proposed model, three indices of the percentage correct classification (PCC), overall error (OE), and Kappa were used. PCC is the percentage of pixels that are correctly classified [36]:
$\mathrm{PCC} = \dfrac{TP + TN}{N},$
where TP and TN denote the numbers of changed and unchanged pixels that are correctly classified, respectively, and N is the total number of pixels in the image. OE is the proportion of changed and unchanged pixels that are incorrectly classified [36]:
$\mathrm{OE} = \dfrac{FP + FN}{N},$
where FP and FN are the numbers of unchanged and changed pixels, respectively, that are falsely detected. Finally, the Kappa coefficient is a parameter to indicate the accuracy of the classification model according to the difference between the observed accuracy and the chance agreement [36]:
$\mathrm{Kappa} = \dfrac{\mathrm{PCC} - \mathrm{PRE}}{1 - \mathrm{PRE}},$
where:
$\mathrm{PRE} = \dfrac{(TP + FP) \cdot N_c + (TN + FN) \cdot N_u}{N^2},$
where N_c and N_u are the total numbers of pixels that belong to the changed and unchanged classes, respectively [36].
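The three indices can be computed directly from the confusion-matrix counts; a short sketch (the function name is hypothetical):

```python
import numpy as np

def change_detection_metrics(output, reference):
    """PCC, OE, and Kappa (Equations (26)-(28)) for binary change maps,
    where True marks a changed pixel."""
    output = np.asarray(output, dtype=bool)
    reference = np.asarray(reference, dtype=bool)

    TP = np.sum(output & reference)       # changed pixels correctly detected
    TN = np.sum(~output & ~reference)     # unchanged pixels correctly detected
    FP = np.sum(output & ~reference)      # false alarms
    FN = np.sum(~output & reference)      # missed changes
    N = output.size
    Nc = np.sum(reference)                # truly changed pixels
    Nu = N - Nc                           # truly unchanged pixels

    pcc = (TP + TN) / N
    oe = (FP + FN) / N
    pre = ((TP + FP) * Nc + (TN + FN) * Nu) / N ** 2
    kappa = (pcc - pre) / (1.0 - pre)
    return pcc, oe, kappa
```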

3. Implementation Results

3.1. Algorithm’s Workflow

Figure 1 shows the flowchart of the proposed model. The proposed model has four main steps, including difference image generation, training data production, implementation of the DFLAC model, and finally, accuracy assessment. Furthermore, each step has several sub-steps that are described in the following.
  • In the first step, difference image generation, the two SAR images of the data set (before and after the change) are introduced to the model, and the difference image is produced based on one of Equations (1) to (4). Second, in the training data sampling step, a threshold T is estimated using Otsu's method, and the training data of the changed and unchanged classes are then selected based on this threshold. In the third step, the DFLAC model is implemented. The model starts by defining the initial curve implicitly (φ^0) based on level set theory, using a simple shape such as a square or circle. The curve of the DFLAC model is then evolved over time using Equation (17), and the parameters b, n, and p_i^j are estimated according to Equations (22), (24), and (25). These steps are repeated until the curve reaches stability and no longer changes (i.e., |φ^n − φ^(n−1)| < ε). Finally, the output of the model is generated by separating the changed regions (pixels inside the curve, where φ ≥ 0) from the unchanged areas (pixels outside the curve, where φ < 0).
  • The accuracy assessment was the last step of the workflow, in which the error image was computed by subtracting the output image from the reference image as follows:
    error image = |Reference image − output image|.
  • Finally, the accuracy of the model was assessed using the error map, and accuracy criteria such as PCC, OE, and Kappa were estimated based on Equations (26)–(28).
The main stage of the proposed approach is the implementation of the DFLAC model, and it also takes most of the running time, mainly because the evolution of the active contour is an iterative process. In this regard, a larger separation between the values of the changed and unchanged pixels in the difference image leads to higher accuracy and a faster evolution of the active contour. In addition, the time step parameter, i.e., Δt in Equation (17), regulates the rate of the active contour evolution and has a considerable effect on the performance and speed of the model. Selecting a large value for the time step may cause the model to step over the minimum of the energy function, whereas a very small value reduces the model's speed. Accordingly, assigning an optimal value to the time step has a crucial impact on the efficiency of the model.
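As a usage example, the driver below strings the earlier sketches together in the order of Figure 1 (difference image, automatic training data, level-set evolution, binary change map). The initial level set, the time step, and the convergence tolerance ε are assumed values, and a single representative training value per class is used to match the simplified evolve_step sketch.

```python
import numpy as np

def detect_changes(I_before, I_after, max_iter=200, dt=0.1, eps_tol=1e-3):
    """End-to-end sketch of the workflow in Figure 1, reusing the
    rmlnd_difference, sample_training_data, and evolve_step sketches."""
    I_d = rmlnd_difference(I_before, I_after)              # Section 2.1
    t_changed, t_unchanged = sample_training_data(I_d)     # Section 2.4

    # initial level set: a square region in the image centre (assumed shape)
    phi = -2.0 * np.ones_like(I_d)
    h, w = I_d.shape
    phi[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 2.0

    for _ in range(max_iter):
        phi_new = evolve_step(phi, I_d,
                              t_changed.mean(), t_unchanged.mean(), dt=dt)
        converged = np.max(np.abs(phi_new - phi)) < eps_tol   # |phi^n - phi^(n-1)| < eps
        phi = phi_new
        if converged:
            break

    return phi >= 0   # pixels inside the curve are labelled as changed
```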

3.2. SAR Datasets

We used three sample data sets to evaluate the DFLAC model in change detection from SAR images. Brief information about the data sets is presented in Table 1.
The first data set is a part of the Yellow River Estuary SAR data at Dongying, Shandong province, China. This data set shows a block of farmland that is landlocked. It should be noted that the first image of the Yellow River Estuary data set is four-look data, but the second image is single-look data, which means that the two images have different levels of speckle.
The second data set, the Bern data, was acquired by the European Remote Sensing 2 satellite SAR sensor over a region near the city of Bern, Switzerland. The River Aare flooded parts of the cities of Thun and Bern and the airport of Bern completely; therefore, the Aare valley between Bern and Thun was chosen for extracting the flooded regions. The Ottawa data set is the third sample data set, acquired over the city of Ottawa by the Radarsat SAR sensor; it illustrates regions that were flooded. Moreover, all data sets have a reference image as ground truth, which indicates the changed regions precisely, and we used these references to evaluate our model. Figure 2 illustrates the sample data sets and their reference images.

3.3. Difference Image

In this section, the difference images were generated using the four operators given in Equations (1)–(4). The produced difference images lie in the range [0, 1], but for better processing, we normalized them to the range [0, 255]. Figure 3 shows the difference images of the selected data sets.

3.4. Model Implementation

The constant parameters of the DFLAC model, i.e., α, β, and γ, were determined by trial and error as 1, 0.11, and 0.4, respectively. In order to define the training data of the changed and unchanged regions, we applied the Otsu thresholding algorithm to the four difference image operators. Then, four numbers as training data of the changed class and two numbers as training data of the unchanged class were determined for each difference image operator in the range of 0 to 255 (Section 2.4). The training data of each data set for the RMLND difference image operator are shown in Table 2.
Then, the difference image, constant parameters, and training data of each sample data set were introduced to the DFLAC model, and after 20 iterations, the curve of the model (C) extracted the changed regions. The final output of the DFLAC model based on the RMLND operator is represented in Figure 4.

3.5. Accuracy of the Proposed Model

To evaluate the accuracy of the DFLAC model, we calculated the error image by subtracting the output of the model from the reference image. The accuracy parameters Kappa, PCC, and OE were then calculated and compared with the results of three well-known models for change detection of SAR images: saliency-guided K-means (SGK) [14], neighborhood-based ratio (NR) [17], and log-normal generalized Kittler and Illingworth thresholding (LN-GKIT) [37]. In addition, two active contour models, Chan and Vese (CV) [34] and distance regularized level set evolution (DRLSE) [38], were used for the evaluation of the proposed model. Table 3 compares the accuracy parameters of the DFLAC model with those of the other algorithms for the selected data sets, using the RMLND difference image operator as the best operator (Table 4).
As shown in Figure 4, due to the different levels of speckle in the two images of the Yellow River Estuary data set, the difference between the two images is relatively high in some unchanged regions. Therefore, the proposed model detects some unchanged areas as changed regions, which decreases its accuracy.

4. Discussion

In this section, we discuss the parameters that affect the accuracy of the model. Additionally, to evaluate the efficiency of our model, the speed of the model is compared with the SGK approach [14].

4.1. Accuracy Assessment

The accuracy and performance of the proposed model depend on three factors: the fixed parameters α, β, and γ in Equation (18), the type of difference image operator, and the number of training data. The impact of each is discussed below.

4.1.1. The Constant Parameters

The constant parameters α, β, and γ regulate the influence of the image, length, and distance terms in the energy function of the model (Equation (15)). Changing these parameters changes the evolution of the curve and the results of the model and, consequently, its accuracy. To determine the role of these parameters, we fixed two of them and varied the third in steps of 0.1, and then calculated the average accuracy over the three data sets. The variation of the Kappa coefficient with the constant parameters α, β, and γ is depicted in Figure 5, Figure 6 and Figure 7, respectively.
Based on Figure 5, in all data sets, Kappa increases as α rises from zero to 1; therefore, 1 is the best value for α. Additionally, as shown in Figure 6, the variation of Kappa with respect to β is small for all data sets, because the impact of the length term in the energy function of active contour models is slight. Figure 7 shows that the Yellow River Estuary data set reaches its maximum Kappa at γ = 0.2, in contrast to the other data sets and the average of all data sets, which peak at γ = 0.4.
Consequently, the maximum Kappa occurs when α, β, and γ are 1, 0.11, and 0.4, respectively, and we therefore selected these values as the optimal constant parameters for implementing the model. It can be noted that the Yellow River Estuary and Bern data sets have the minimum and maximum sensitivity to the variation of the constant parameters, respectively.

4.1.2. The Difference Image Operator Type

We assessed the four difference image operators, subtraction, log-ratio, normal difference, and RMLND, in terms of the Kappa parameter. The results of these operators on the sample data sets are presented in Table 4 and Figure 8.
It can be seen that the RMLND operator has the best performance compared to the other operators; therefore, the RMLND difference image was chosen as the principal operator for implementing our proposed model. The normal difference operator has the next best efficiency after RMLND, followed by the log-ratio and subtraction operators, respectively. The reason for the low performance of the subtraction operator is that it detects spurious small changes caused by speckle noise; this defect is most significant at the borders of the changed regions.
According to Figure 8, the normal difference operator achieved the best result for the Yellow River Estuary data set, followed by the RMLND, log-ratio, and subtraction operators, respectively. The subtraction operator obtained the worst result for the Bern data set (32.44%), with the accuracy increasing for the log-ratio, normal difference, and RMLND operators, respectively. The RMLND operator has the best result for the Ottawa image, followed by the log-ratio, normal difference, and subtraction operators. In addition, according to the mean accuracy over the three data sets, the best operator is RMLND, followed by the normal difference and log-ratio operators, while the subtraction operator achieved the lowest accuracy.

4.1.3. Numbers of Training Data

The numbers of training data of the changed and unchanged regions are effective parameters for the accuracy of the proposed model. In order to evaluate their effect and determine the optimal numbers, we fixed the number of unchanged training data and calculated the mean Kappa of the data sets for different numbers of training data of the changed areas. Similarly, the number 2 was obtained as the best number of training data for the unchanged regions. Figure 9 and Figure 10 show the variation of Kappa with respect to the numbers of training data of the changed and unchanged areas.
Based on Figure 9, the Kappa parameter increases with the number of training data of the changed regions in all data sets, reaches its maximum value at four, and then decreases gradually. According to Figure 10, the Kappa of all data sets and their average rises sharply as the number of training data of the unchanged regions increases from 1 to 2, and then declines slightly. Therefore, the maximum Kappa corresponds to the optimal numbers of training data of the changed and unchanged regions, which are four and two, respectively.

4.2. Running Time Comparison

One of the efficiency measures of a model is its running time. Since the proposed model has five steps, the running time of each step was estimated separately (Table 5). We then compared the running time of the proposed model with that of the SGK model [14] on the three sample data sets. As seen in Table 5, our model is approximately 10 times faster than the SGK model on the three sample data sets.

5. Conclusions

In this paper, we proposed a novel active contour model for change detection of SAR images. The model was designed to extract a target feature from a digital image using training data of the target feature and the image background; here, the changed and unchanged regions were considered as the target feature and the image background, respectively. The training data were generated automatically from the difference image using the Otsu thresholding method. Furthermore, we introduced a new difference image operator to attain higher accuracy than the existing operators. For the accuracy assessment, the model was applied to three temporal SAR data sets, and the outputs were compared to their corresponding reference images (ground truth). The accuracy of the proposed model depends on three constant parameters, which were determined by trial and error. Additionally, the numbers of training data of the changed and unchanged regions, which were determined empirically, affect the accuracy of the model. The results demonstrate the higher accuracy of the proposed model compared with five well-known change detection models for SAR images. It should be mentioned that the proposed model was implemented entirely in the MATLAB (R2015b) software environment.

Author Contributions

Conceptualization, methodology, and implementation of the programing: S.A.; Original draft preparation: S.H. and S.A.; Result evaluation: S.H.; Review and editing: S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank Kamran Kazemi, associate professor of the Electrical Engineering Department, Shiraz University, Shiraz, Iran, for providing the data sets used in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Giustarini, L.; Hostache, R.; Matgen, P.; Schumann, G.J.-P.; Bates, P.D.; Mason, D.C. A change detection approach to flood mapping in urban areas using TerraSAR-X. IEEE Trans. Geosci. Remote Sens. 2012, 51, 2417–2430.
2. Tzeng, Y.-C.; Chiu, S.-H.; Chen, D.; Chen, K.-S. Change detections from SAR images for damage estimation based on a spatial chaotic model. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 1926–1930.
3. Li, N.; Wang, R.; Deng, Y.; Chen, J.; Liu, Y.; Du, K.; Lu, P.; Zhang, Z.; Zhao, F. Waterline mapping and change detection of Tangjiashan Dammed Lake after Wenchuan Earthquake from multitemporal high-resolution airborne SAR imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3200–3209.
4. Jin, S.; Yang, L.; Danielson, P.; Homer, C.; Fry, J.; Xian, G. A comprehensive change detection method for updating the National Land Cover Database to circa 2011. Remote Sens. Environ. 2013, 132, 159–175.
5. Quegan, S.; Le Toan, T.; Yu, J.J.; Ribbes, F.; Floury, N. Multitemporal ERS SAR analysis applied to forest mapping. IEEE Trans. Geosci. Remote Sens. 2000, 38, 741–753.
6. Nonaka, T.; Shibayama, T.; Umakawa, H.; Uratsuka, S. A comparison of the methods for the urban land cover change detection by high-resolution SAR data. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 3470–3473.
7. Ban, Y.; Yousif, O.A. Multitemporal spaceborne SAR data for urban change detection in China. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 1087–1094.
8. Gong, M.; Zhang, P.; Su, L.; Liu, J. Coupled dictionary learning for change detection from multisource data. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7077–7091.
9. Samaniego, L.; Bárdossy, A.; Schulz, K. Supervised classification of remotely sensed imagery using a modified k-NN technique. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2112–2125.
10. Frate, F.D.; Schiavon, G.; Solimini, C. Application of neural networks algorithms to QuickBird imagery for classification and change detection of urban areas. In Proceedings of the IGARSS 2004, 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004; Volume 2, pp. 1091–1094.
11. Fernàndez-Prieto, D.; Marconcini, M. A novel partially supervised approach to targeted change detection. IEEE Trans. Geosci. Remote Sens. 2011, 49, 5016–5038.
12. Roy, M.; Ghosh, S.; Ghosh, A. A novel approach for change detection of remotely sensed images using semi-supervised multiple classifier system. Inf. Sci. 2014, 269, 35–47.
13. Yuan, Y.; Lv, H.; Lu, X. Semi-supervised change detection method for multi-temporal hyperspectral images. Neurocomputing 2015, 148, 363–375.
14. Zheng, Y.; Jiao, L.; Liu, H.; Zhang, X.; Hou, B.; Wang, S. Unsupervised saliency-guided SAR image change detection. Pattern Recognit. 2017, 61, 309–326.
15. Bazi, Y.; Bruzzone, L.; Melgani, F. An unsupervised approach based on the generalized Gaussian model to automatic change detection in multitemporal SAR images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 874–887.
16. Bazi, Y.; Bruzzone, L.; Melgani, F. Automatic identification of the number and values of decision thresholds in the log-ratio image for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2006, 3, 349–353.
17. Gong, M.; Cao, Y.; Wu, Q. A neighborhood-based ratio approach for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2011, 9, 307–311.
18. Sumaiya, M.N.; Kumari, R.S.S. Logarithmic mean-based thresholding for SAR image change detection. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1726–1728.
19. Sumaiya, M.N.; Kumari, R.S.S. Gabor filter based change detection in SAR images by KI thresholding. Opt. Int. J. Light Electron Opt. 2017, 130, 114–122.
20. Bazi, Y.; Melgani, F.; Bruzzone, L.; Vernazza, G. A genetic expectation-maximization method for unsupervised change detection in multitemporal SAR imagery. Int. J. Remote Sens. 2009, 30, 6591–6610.
21. Belghith, A.; Collet, C.; Armspach, J.P. Change detection based on a support vector data description that treats dependency. Pattern Recognit. Lett. 2013, 34, 275–282.
22. Shang, R.; Qi, L.; Jiao, L.; Stolkin, R.; Li, Y. Change detection in SAR images by artificial immune multi-objective clustering. Eng. Appl. Artif. Intell. 2014, 31, 53–67.
23. Zheng, Y.; Zhang, X.; Hou, B.; Liu, G. Using combined difference image and k-means clustering for SAR image change detection. IEEE Geosci. Remote Sens. Lett. 2014, 11, 691–695.
24. Li, H.; Gong, M.; Wang, Q.; Liu, J.; Su, L. A multi-objective fuzzy clustering method for change detection in SAR images. Appl. Soft Comput. 2016, 46, 767–777.
25. Tian, D.; Gong, M. A novel edge-weight based fuzzy clustering method for change detection in SAR images. Inf. Sci. 2018, 467, 415–430.
26. Gong, M.; Yang, H.; Zhang, P. Feature learning and change feature classification based on deep learning for ternary change detection in SAR images. ISPRS J. Photogramm. Remote Sens. 2017, 129, 212–225.
27. Li, M.; Li, M.; Zhang, P.; Wu, Y.; Song, W.; An, L. SAR image change detection using PCANet guided by saliency detection. IEEE Geosci. Remote Sens. Lett. 2019, 16, 402–406.
28. Chen, H.; Jiao, L.; Liang, M.; Liu, F.; Yang, S.; Hou, B. Fast unsupervised deep fusion network for change detection of multitemporal SAR images. Neurocomputing 2019, 332, 56–70.
29. Carincotte, C.; Derrode, S.; Bourennane, S. Unsupervised change detection on SAR images using fuzzy hidden Markov chains. IEEE Trans. Geosci. Remote Sens. 2006, 44, 432–441.
30. Ma, J.; Gong, M.; Zhou, Z. Wavelet fusion on ratio images for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2012, 9, 1122–1126.
31. Gong, M.; Li, Y.; Jiao, L.; Jia, M.; Su, L. SAR change detection based on intensity and texture changes. ISPRS J. Photogramm. Remote Sens. 2014, 93, 123–135.
32. Li, C.; Huang, R.; Ding, Z.; Gatenby, J.C.; Metaxas, D.N.; Gore, J.C. A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Trans. Image Process. 2011, 20, 2007–2016.
33. Ahmadi, S.; Zoej, M.J.V.; Ebadi, H.; Moghaddam, H.A.; Mohammadzadeh, A. Automatic urban building boundary extraction from high resolution aerial images using an innovative model of active contours. Int. J. Appl. Earth Obs. Geoinf. 2010, 12, 150–157.
34. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277.
35. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
36. Gong, M.; Su, L.; Jia, M.; Chen, W. Fuzzy clustering with a modified MRF energy function for change detection in synthetic aperture radar images. IEEE Trans. Fuzzy Syst. 2013, 22, 98–109.
37. Moser, G.; Serpico, S.B. Generalized minimum-error thresholding for unsupervised change detection from SAR amplitude imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2972–2982.
38. Li, C.; Xu, C.; Gui, C.; Fox, M.D. Distance regularized level set evolution and its application to image segmentation. IEEE Trans. Image Process. 2010, 19, 3243–3254.
Figure 1. The flowchart of our proposed model.
Figure 2. The SAR data sets include before change image, after change image, and reference change maps: (Row 1) Yellow River Estuary; (Row 2) Bern data set; (Row 3) Ottawa data set.
Figure 3. The difference images generated from four operators. (a) Subtraction; (b) Log-ratio; (c) normal difference; (d) RMLND.
Figure 4. The results of the DFLAC model and the reference image: (Row 1) Results; (Row 2) Reference Images.
Figure 5. The variation of the Kappa according to the α parameter.
Figure 6. The variation of the Kappa according to the β parameter.
Figure 7. The variation of the Kappa according to the γ parameter.
Figure 8. The variation of the Kappa for all operators in each data set.
Figure 9. The Kappa change rate with respect to the number of training data of the changed region.
Figure 10. The Kappa change rate with respect to the number of training data of the unchanged region.
Table 1. Characteristics of the SAR datasets.

Data Set | Size (Pixels) | Resolution (m) | Date of the First Image | Date of the Second Image | Location | Sensor
Yellow River Estuary | 289 × 257 | 8 | June 2008 | June 2009 | Dongying, Shandong Province of China | Radarsat 2
Bern | 301 × 301 | 30 | April 1999 | May 1999 | A region near the city of Bern | European Remote Sensing 2 satellite
Ottawa | 350 × 290 | 10 | July 1997 | August 1997 | Ottawa City | Radarsat 2

Table 2. Training data of the sample data sets based on the Otsu threshold in the range [0, 255].

Dataset | Changed Regions | Unchanged Regions
Yellow River Estuary | 127.00, 169.67, 212.33, 255 | 0, 42.167
Bern | 112.40, 159.94, 207.47, 255 | 0, 32.44
Ottawa | 127.00, 169.67, 212.33, 255 | 0, 42.17
Table 3. The accuracy of the proposed model compared to other models.

Dataset | Method | PCC % | OE % | Kappa %
Yellow River Estuary | SGK | 98.06 | 1.92 | 85.24
Yellow River Estuary | NR | 88.33 |  | 79.99
Yellow River Estuary | LN-GKIT | 69.60 | 30.82 | 33.78
Yellow River Estuary | CV | 95.37 | 4.63 | 84.38
Yellow River Estuary | DRLSE | 90.84 | 9.16 | 69.47
Yellow River Estuary | DFLAC | 95.49 | 4.51 | 84.65
Bern | SGK | 99.68 | 0.32 | 87.05
Bern | NR | 99.66 | 0.34 | 85.90
Bern | LN-GKIT | 99.90 | 0.35 | 85.37
Bern | CV | 99.61 | 0.39 | 85.32
Bern | DRLSE | 98.70 | 1.30 | 63.32
Bern | DFLAC | 99.68 | 0.32 | 87.07
Ottawa | SGK | 98.95 | 1.05 | 95.98
Ottawa | NR | 97.91 | 2.09 | 92.20
Ottawa | LN-GKIT | 98.35 | 2.22 | 91.87
Ottawa | CV | 97.06 | 2.93 | 88.92
Ottawa | DRLSE | 95.44 | 4.56 | 81.37
Ottawa | DFLAC | 99.00 | 1.00 | 96.26
Table 4. The comparison of the Kappa of the difference image operators.

Data Set | Subtraction | Log-Ratio | Normal Difference | RMLND
Yellow River Estuary | 75.22 | 83.97 | 84.71 | 84.65
Bern | 32.44 | 81.26 | 83.14 | 87.07
Ottawa | 83.81 | 95.26 | 94.92 | 96.26
Average | 63.82 | 86.83 | 87.59 | 89.33
Table 5. The comparison of the models' running times in seconds.

Data Set | DFLAC Step 1 | Step 2 | Step 3 | Step 4 | Step 5 | Total Time | SGK | SGK/DFLAC
Yellow River Estuary | 0.01 | 0.04 | 0.01 | 8.90 | 0.01 | 8.97 | 60.53 | 6.75
Bern | 0.02 | 0.05 | 0.01 | 8.31 | 0.01 | 8.40 | 86.60 | 10.31
Ottawa | 0.02 | 0.05 | 0.02 | 9.34 | 0.01 | 9.44 | 101.17 | 10.72
Average | 0.02 | 0.05 | 0.01 | 8.85 | 0.01 | 8.94 | 82.77 | 9.25
