Article

Unblurring ISAR Imaging for Maneuvering Target Based on UFGAN

1 Information and Navigation College, Air Force Engineering University, Xi’an 710077, China
2 Collaborative Innovation Center of Information Sensing and Understanding, Air Force Engineering University, Xi’an 710077, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(20), 5270; https://doi.org/10.3390/rs14205270
Submission received: 21 September 2022 / Revised: 17 October 2022 / Accepted: 18 October 2022 / Published: 21 October 2022
(This article belongs to the Special Issue SAR Images Processing and Analysis)

Abstract: Inverse synthetic aperture radar (ISAR) imaging of maneuvering targets suffers from time-varying Doppler frequencies, which leave the ISAR images blurred in the azimuth direction. Given that traditional imaging methods have poor imaging performance or low efficiency, and that existing deep learning imaging methods cannot effectively reconstruct deblurred ISAR images that retain rich details and textures, an unblurring ISAR imaging method for maneuvering targets based on an advanced Transformer structure is proposed. We first present a pseudo-measured data generation method based on the DeepLabv3+ network and the Diamond-Square algorithm to acquire an ISAR training dataset that generalizes well to measured data. Next, adopting the locally-enhanced window Transformer block to strengthen the ability to capture local context as well as global dependencies, we construct a novel Uformer-based GAN (UFGAN) to restore deblurred ISAR images with rich details and textures from blurred imaging results. Simulated and measured experiments show that the proposed method can achieve fast and high-quality imaging of maneuvering targets under low signal-to-noise ratio (SNR) and sparse-aperture conditions.


1. Introduction

Inverse synthetic aperture radar (ISAR) imaging technology is an effective approach to achieving high-resolution imaging of non-cooperative targets and has the merits of all-weather, all-day, and long-range operation [1]. The classic range-Doppler (RD) algorithm is effective for smoothly moving targets under the small-angle assumption, and it simplifies the imaging process by transforming two-dimensional ISAR signal processing into two one-dimensional FFT operations. However, for targets with strong maneuverability, the RD algorithm is no longer applicable [2]. The rotational component during the coherent processing interval (CPI) is no longer equivalent to uniform rotation, which results in time-varying Doppler frequencies and leaves the ISAR image blurred along the azimuth direction, bringing a serious challenge for subsequent target classification and identification.
Traditional imaging methods for maneuvering targets mainly include range-instantaneous Doppler (RID) methods and parameter estimation methods [3,4]. The RID methods replace the traditional Fourier transform with time-frequency analysis (TFA) tools. The short-time Fourier transform (STFT), Wigner–Ville distribution (WVD), and smoothed pseudo WVD (SPWVD) are several common time-frequency analysis tools used in ISAR imaging [5,6]. These methods can reduce the blur of ISAR images of maneuvering targets to some extent, but they inevitably suffer from a tradeoff between the ability to suppress cross-terms and the time-frequency resolution. Adaptive chirplet decomposition, the chirp Fourier transform (CFT), and the Radon–Wigner transform (RWT) are several common parameter estimation methods [7,8,9]. By estimating the parameters of the echo, these methods can produce high-resolution images of maneuvering targets without cross-terms, but they place strict requirements on the scatterer distribution and are sensitive to noise. To improve the estimation accuracy, parameter estimation methods come at the cost of high computational complexity, so they cannot meet the demands of real-time ISAR imaging, especially in low signal-to-noise ratio (SNR) scenarios.
In recent years, deep learning (DL) has been successfully applied to target detection [10,11,12], image classification [13], signal recovery, and other tasks. Specially designed deep neural networks have also been introduced to the radar imaging field. Providing unprecedented performance gains in resolution and imaging efficiency, these networks have overcome the main limitations of traditional methods. At present, ISAR imaging methods based on DL can be classified into two main categories, i.e., model-driven methods and data-driven methods.
Model-driven DL methods [14] unfold the traditional iterative optimization algorithm into a multi-layer deep network such as the CIST network [15], ADMMN [16], and AF-AMPNet [17]. They are also called deep unfolding. By setting the adjustable parameters, the network could be designed and trained in accordance with the physical models.
Deep-unfolding networks show excellent reconstruction performance while maintaining high computational efficiency, and they have strong interpretability [18]. However, these methods are not well suited to imaging maneuvering targets. The essence of deep unfolding is the deep implementation of compressed sensing (CS) and iterative optimization algorithms [19], so the imaging performance cannot exceed the upper limit of the traditional methods. Therefore, without the assistance of traditional methods, model-driven methods cannot avoid the blurring of ISAR images when imaging maneuvering targets.
Typical data-driven DL methods include fully convolutional neural network (FCNN) [20], Unet [21], GAN [22,23], etc. Data-driven methods directly learn the complicated nonlinear mapping from the input low-resolution ISAR images to the super-resolution output images by designing and training the deep networks [24]. By replacing on-line calculation with off-line network training, data-driven methods can reconstruct ISAR images efficiently and have strong robustness to various noise levels. However, there are still two challenges.
Firstly, although the existing data-driven methods show excellent performance in super-resolution and denoising, they have a weak ability to restore image details and texture information. When imaging maneuvering targets, these methods are not able to recover deblurred ISAR images using neural networks alone. For example, ref. [25] first uses the keystone transform to compensate for the main phase error in the echo caused by the maneuver and then uses a U-Net to improve the resolution. The STFT-Unet in [26] serves to enhance the resolution of the time-frequency spectrum. These data-driven methods have to first use traditional methods to remove most of the image blur before applying deep neural networks, resulting in cumbersome imaging processes that are unfavorable for real-time imaging.
Secondly, the imaging performance and the generalization capability of data-driven methods rely heavily on the datasets [24]. The performance of a network trained on simulated data may degrade when applied to measured data because the scattering distributions of simulated data and measured data are quite different. The ISAR image of measured data is usually a combination of block regions with different shapes [27]. However, most of the existing literature uses randomly distributed scattering points to construct simulation datasets [28], which cannot simulate the complex scattering distributions of measured block targets.
To cope with the above challenges, we first propose a pseudo-measured data generation method based on the DeepLabv3+ network [29] and the Diamond-Square algorithm [30]. The generated random block targets can simulate the complicated scattering distribution of measured ISAR data. We then construct a Uformer-based GAN, dubbed UFGAN, to present a novel unblurring ISAR imaging method for maneuvering targets. Uformer, recently proposed at CVPR 2022, has been shown to deliver superior performance in several image restoration tasks [31]. In this paper, we adopt LeWin Transformer blocks to design a generator capable of capturing texture features as well as global information. Moreover, the global GAN and PatchGAN [32] are combined to build a novel Transformer-based discriminator, which fuses local details and global features to comprehensively discriminate the generated images. The loss function is a combination of the Charbonnier loss, perceptual loss [33], and adversarial loss, so that both global similarity and perceptual features are considered.
The main contributions of this paper include:
  • A pseudo-measured data generation method is proposed. We construct an aircraft ISAR imaging dataset for network training following this method. It provides a practical solution to the predicament that the imaging performance of existing data-driven DL imaging methods is severely restricted by the scarcity of publicly available datasets when imaging measured data.
  • Uformer, a state-of-the-art Transformer structure, is used to construct a novel UFGAN for the restoration of deblurred ISAR images of maneuvering targets. As far as we know, this is the first attempt to apply a Transformer to ISAR imaging. The constructed network far surpasses traditional imaging methods for maneuvering targets in both imaging performance and efficiency, and compared with existing data-driven methods, the UFGAN-based method shows better performance in restoring the details and texture features of ISAR images.
The remainder of this paper is organized as follows. Section 2 presents the signal model of ISAR imaging for a maneuvering target. Section 3 describes the architecture of the proposed UFGAN in detail. Section 4 presents the data acquisition process in detail. In Section 5, simulated and measured experiments are presented to prove the effectiveness of the proposed method. Section 6 concludes the paper.

2. ISAR Imaging Signal Model of a Maneuvering Target

Assuming that the translational compensation [34] has been finished, a two-dimensional ISAR imaging geometric model for a maneuvering target is presented in Figure 1.
With the Y axis along the radar line of sight (LOS), a Cartesian coordinate system XOY is established on the target, and the center of the revolving stage $O_c$ is taken as the origin of the coordinate system. The distance from the origin to the radar is $R_0$, and the distance from the scattering point P to the origin is $r_P$. Suppose P rotates through an angle $\theta(t)$ during the CPI; the distance from P to the radar can then be calculated by:
$$R_P(t) = \sqrt{R_0^2 + r_P^2 - 2R_0r_P\cos\left[\theta(t)+\frac{\pi}{2}\right]} \approx R_0 + x_P\sin\theta(t) + y_P\cos\theta(t)\tag{1}$$
Under the assumption of a small imaging rotation angle for maneuvering targets, higher-order motion components can be ignored, so the rotation of P in the imaging plane can be approximated as an accelerated rotation with a constant angular jerk. Suppose the initial angular velocity is $\omega_0$, the angular acceleration is $\beta$, and the angular jerk is $\gamma$; the rotation angle can then be written as:
$$\theta(t) = \omega_0t + \frac{1}{2}\beta t^2 + \frac{1}{6}\gamma t^3\tag{2}$$
The range shift of P caused by the rotation can be calculated as:
$$R_\Delta(t) = R_P(t) - R_0 = x_p\sin\theta(t) + y_p\cos\theta(t)\tag{3}$$
Furthermore, for a small angle we have $\sin\theta(t)\approx\theta(t)$ and $\cos\theta(t)\approx 1$, and Equation (3) can be rewritten as:
$$R_\Delta(t) = y_p + x_p\omega_0t + \frac{1}{2}x_p\beta t^2 + \frac{1}{6}x_p\gamma t^3\tag{4}$$
Suppose the radar transmits a linear frequency modulation (LFM) signal:
$$s(t_k, t_m) = \operatorname{rect}\left(\frac{t_k}{T_p}\right)\exp\left[j2\pi\left(f_ct + \frac{1}{2}\mu t_k^2\right)\right]\tag{5}$$
where $\operatorname{rect}(t_k/T_p)=1$ for $|t_k|\le T_p/2$ and $0$ for $|t_k|>T_p/2$ is the rectangular window function, $t = t_k + t_m$ is the full time, $t_k$ is the fast time, indicating the elapsed time from the transmission to the reception of a pulse, and $t_m$ is the slow time, indicating the transmission moment of each pulse. $T_p$ represents the pulse width, $f_c$ is the carrier frequency, and $\mu$ indicates the chirp rate.
For the point P, suppose the propagation speed of the electromagnetic wave is c. The time delay of the radar signal from transmission to reception is $\tau = 2R_P(t_m)/c$, and the received echo signal can be written as:
$$s_r(t_k, t_m) = \sigma_P\operatorname{rect}\left(\frac{t_k-\tau}{T_p}\right)\exp\left[j2\pi f_c(t_k-\tau)\right]\exp\left[j\pi\mu(t_k-\tau)^2\right]\tag{6}$$
where $\sigma_P$ is the scattering coefficient of the point P.
To simplify the calculation, the center of the revolving stage is selected as the reference point. Similarly, the time delay at the reference point is $\tau_{\mathrm{ref}} = 2R_0/c$, and the reference signal can be written as:
$$s_{\mathrm{ref}}(t_k, t_m) = \operatorname{rect}\left(\frac{t_k-\tau_{\mathrm{ref}}}{T_{\mathrm{ref}}}\right)\exp\left[j2\pi f_c(t_k-\tau_{\mathrm{ref}})\right]\exp\left[j\pi\mu(t_k-\tau_{\mathrm{ref}})^2\right]\tag{7}$$
The range-compressed signal can be obtained by “dechirp” processing as follows:
$$s(t_k, t_m) = s_r(t_k, t_m)\,s_{\mathrm{ref}}^{*}(t_k, t_m) = \sigma_P\operatorname{rect}\left(\frac{t_k-2R_P(t_m)/c}{T_p}\right)\exp\left[-j\frac{4\pi\mu}{c}\left(t_k-\frac{2R_{\mathrm{ref}}(t_m)}{c}\right)R_\Delta(t_m)\right]\exp\left[-j\frac{4\pi f_c}{c}R_\Delta(t_m)\right]\exp\left[j\frac{4\pi\mu}{c^2}R_\Delta^2(t_m)\right]\tag{8}$$
After a Fourier transform of Equation (8) over the fast time, the high-resolution range profile (HRRP) of the target can be obtained as:
$$S(f_k, t_m) = \sigma_P T_p\operatorname{sinc}\left[T_p\left(f_k+\frac{2\mu}{c}R_\Delta(t_m)\right)\right]\exp\left[-j\frac{4\pi f_c}{c}R_\Delta(t_m)\right]\exp\left[j\frac{4\pi\mu}{c^2}R_\Delta^2(t_m)\right]\exp\left[-j\frac{4\pi f_k}{c}R_\Delta(t_m)\right]\tag{9}$$
The last two phase terms are the residual video phase (RVP) and the envelope skew term, respectively, and need to be compensated for. After phase compensation, the range-compressed signal can be written as:
$$S(f_k, t_m) = \sigma_P T_p\operatorname{sinc}\left[T_p\left(f_k+\frac{2\mu}{c}R_\Delta(t_m)\right)\right]\exp\left[-j\frac{4\pi f_c}{c}R_\Delta(t_m)\right]\tag{10}$$
Substituting Equation (4) into Equation (10), we have:
$$S(f_k, t_m) = S(f_k)\exp\left(-j2\pi f_c\frac{2x_p\omega_0t_m}{c}\right)\exp\left(-j2\pi f_c\frac{x_p\beta t_m^2}{c}\right)\exp\left(-j2\pi f_c\frac{x_p\gamma t_m^3}{3c}\right)\exp\left(-j\frac{4\pi f_c}{c}y_p\right)\tag{11}$$
where $S(f_k) = T_p\operatorname{sinc}\left[T_p\left(f_k+\frac{2\mu}{c}\left(y_p + x_p\omega_0t_m + \frac{1}{2}x_p\beta t_m^2 + \frac{1}{6}x_p\gamma t_m^3\right)\right)\right]$.
Suppose there are M scattering points, including P, within the considered range unit. The azimuth echo signal can be obtained by summing the sub-echoes of the scattering points in the range unit:
$$s_p(t_m) = \sum_{i=1}^{M}\sigma_i\exp\left(-j2\pi f_c\frac{2x_i\omega_0t_m}{c}\right)\exp\left(-j2\pi f_c\frac{x_i\beta t_m^2}{c}\right)\exp\left(-j2\pi f_c\frac{x_i\gamma t_m^3}{3c}\right)\exp\left(-j\frac{4\pi f_c}{c}y_i\right) = \sum_{i=1}^{M}\sigma_i\exp\left[-j2\pi\left(f_it_m+\frac{1}{2}g_it_m^2+\frac{1}{6}k_it_m^3\right)\right]\exp\left(-j\frac{4\pi f_c}{c}y_i\right)\tag{12}$$
where $f_i = \dfrac{2x_i\omega_0f_c}{c} = \dfrac{2x_i\omega_0}{\lambda}$, $g_i = \dfrac{2x_i\beta f_c}{c} = \dfrac{2x_i\beta}{\lambda}$, and $k_i = \dfrac{2x_i\gamma f_c}{c} = \dfrac{2x_i\gamma}{\lambda}$.
Equation (12) demonstrates that the azimuth echo signal has the form of a multicomponent amplitude-modulation quadratic-frequency-modulation (AM-QFM) signal [35], which more accurately describes the imaging characteristics of maneuvering targets. The quadratic and cubic phase terms of the AM-QFM signal lead to the blurring of the ISAR image. The center frequency $f_i$, chirp rate $g_i$, and derivative of the chirp rate $k_i$ of the AM-QFM signal are determined by the initial angular velocity $\omega_0$, angular acceleration $\beta$, and angular jerk $\gamma$ of the turntable model, respectively.
The last phase term indicates the migration through range cells (MTRC) of the ISAR image. It shows that the MTRC of the maneuvering target has nothing to do with the motion parameters.
According to Equation (12), we can further give the following relationships:
$$K_{\beta\omega} = \frac{g_i}{f_i} = \frac{2x_i\beta/\lambda}{2x_i\omega_0/\lambda} = \frac{\beta}{\omega_0}\tag{13}$$
$$K_{\gamma\beta} = \frac{k_i}{g_i} = \frac{2x_i\gamma/\lambda}{2x_i\beta/\lambda} = \frac{\gamma}{\beta}\tag{14}$$
where $K_{\beta\omega}$ and $K_{\gamma\beta}$ are called the relative acceleration ratio and the relative jerk ratio in this paper, respectively. Equation (12) can be rewritten as:
$$s_p(t_m) = \sum_{i=1}^{M}\sigma_i\exp\left[-j\frac{4\pi\omega_0}{\lambda}x_i\left(1+\frac{1}{2}K_{\beta\omega}t_m+\frac{1}{6}K_{\beta\omega}K_{\gamma\beta}t_m^2\right)t_m\right]\exp\left(-j\frac{4\pi f_c}{c}y_i\right)\tag{15}$$
According to Equation (15), blurring only occurs along the azimuth direction of the ISAR image for the maneuvering target, and the blurring degree of the image is only related to the azimuth coordinates of the scattering point once the motion parameters are determined.
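To make the relationship in Equation (15) concrete, the following minimal NumPy sketch (not the authors' code; all radar and motion parameters are illustrative assumptions) simulates the azimuth echo of a few scatterers in one range cell and shows how a single azimuth FFT, i.e., RD processing, spreads each scatterer over several Doppler cells.

```python
# Minimal sketch: azimuth echo of a maneuvering target per Equation (15).
# Radar and motion parameters below are illustrative assumptions only.
import numpy as np

c = 3e8
fc = 9e9                      # carrier frequency (Hz), assumed
lam = c / fc                  # wavelength
prf = 1000.0                  # pulse repetition frequency (Hz), assumed
N = 256                       # number of azimuth samples
tm = np.arange(N) / prf       # slow-time axis

omega0 = 0.05                 # initial angular velocity (rad/s), assumed
K_bw, K_gb = 3.0, 2.0         # relative acceleration / jerk ratios, assumed
beta = K_bw * omega0
gamma = K_gb * beta

# a few scatterers in one range cell: (cross-range coordinate x_i, amplitude)
scatterers = [(-4.0, 1.0), (0.0, 0.8), (5.0, 1.2)]

echo = np.zeros(N, dtype=complex)
for x_i, sigma_i in scatterers:
    f_i = 2 * x_i * omega0 / lam          # Doppler centroid
    g_i = 2 * x_i * beta / lam            # chirp rate
    k_i = 2 * x_i * gamma / lam           # derivative of chirp rate
    phase = 2 * np.pi * (f_i * tm + 0.5 * g_i * tm**2 + k_i * tm**3 / 6.0)
    echo += sigma_i * np.exp(-1j * phase)

# RD processing in azimuth is a single FFT; the quadratic/cubic phase terms
# spread each scatterer over several Doppler cells, i.e. the azimuth blur.
rd_profile = np.fft.fftshift(np.abs(np.fft.fft(echo)))
print("peak-to-mean ratio of the blurred Doppler profile:",
      rd_profile.max() / rd_profile.mean())
```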

3. Proposed UFGAN-Based ISAR Imaging for Maneuvering Target

Profiting from the excellent simulation capability for arbitrary data distributions, GANs receive widespread use in the imaging field. GAN consists of two key components, i.e., generators and discriminators. The adversarial relationship between the above two gives GAN the ability to generate simulated images that are similar to real images. To obtain a high-quality deblurred ISAR image that looks real, we construct the UFGAN and propose a novel UFGAN-based ISAR Imaging method for maneuvering targets. In this section, we will present the imaging framework, the network architecture, and the loss function in detail.

3.1. Imaging Framework Based on UFGAN

The overall imaging framework for the maneuvering target is shown in Figure 2. By setting different motion parameters for the simulated scattering points, ISAR echoes with motion errors can be obtained. As analyzed in Section 2, the ISAR images produced by the RD algorithm are blurred in the azimuth direction due to the phase error introduced by the target maneuver. The generator transforms the blurred image into a deblurred image, and the discriminator determines whether the generated image is real or fake. The generator and the discriminator compete with each other until the discriminator can barely distinguish between real and fake images.
Some ISAR images obtained from measured data have small sizes and low resolution due to the small number of range or azimuth samples. Existing data-driven DL imaging methods generally input these small, low-resolution images directly into neural networks, so the restoration effect is limited by the small number of pixels in the images. The network cannot learn enough features to recover the details and textures of the images and, therefore, cannot produce high-quality ISAR images.
In this paper, we add a “resize” operation in the training stage and testing stage to increase the size of small images by bicubic interpolation before they are input to the network, as sketched below. Moreover, delicate label images with higher resolution and finer details are also provided. Ideal ISAR images are obtained by convolving the coordinates of the simulated scattering points with the PSF, so it is easy to obtain delicate label images for the simulated training data. This operation has two benefits. Firstly, it helps the network learn more hidden-layer features and obtain high-quality ISAR images. Secondly, because the image is resized before being input to the network, the network keeps the input and output images at the same resolution, which avoids adjusting the network parameters and retraining when the input and output images have different sizes.
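The “resize” operation can be sketched as follows; this is a minimal illustration assuming PyTorch tensors in (batch, channel, height, width) layout, and the target size of 256 is an assumption rather than a value fixed by the paper.

```python
# Minimal sketch of the bicubic "resize" step (illustrative sizes).
import torch
import torch.nn.functional as F

def resize_bicubic(img: torch.Tensor, size: int = 256) -> torch.Tensor:
    """Up-sample a small blurred ISAR image with bicubic interpolation so that
    the network always sees a fixed input resolution."""
    return F.interpolate(img, size=(size, size), mode="bicubic", align_corners=False)

small = torch.rand(1, 3, 64, 64)      # e.g. a 64 x 64 blurred ISAR image
resized = resize_bicubic(small)       # 1 x 3 x 256 x 256, fed to the generator
print(resized.shape)
```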
In the testing stage, because publicly available measured echo data of maneuvering targets are rare, we use the measured ISAR echo data of smooth targets to equivalently generate the echoes of maneuvering targets by means of the Fourier interpolation method. The details of the method are given in Section 4.

3.2. Design of the Proposed UFGAN

In our design of UFGAN, the adversarial mode of GAN is adopted to make the deblurred images generated by the network closer to the ideal high-quality ISAR images. The locally-enhanced window (LeWin) Transformer blocks and the learnable multi-scale restoration modulators are used to build a novel generator to restore more image details. Global GAN and PatchGAN are combined to construct a new Transformer-based discriminator to improve the discrimination criteria of generated images by comprehensively evaluating global information and texture features. The Charbonnier loss, perceptual loss, and adversarial loss are combined to construct a comprehensive loss function to match the design of the network.

3.2.1. Generator

The overall architecture of the proposed generator is a symmetric hierarchical structure following the spirit of U-Net, as shown in Figure 3. The generator consists of an encoder, a bottleneck, a decoder, and several multi-scale restoration modulators. The input is the blurred ISAR image $I_b \in \mathbb{R}^{H\times W\times 3}$, with an image size of $H\times W$ and 3 channels. Firstly, a 3 × 3 convolution with LeakyReLU is adopted to extract the shallow features $X_0 \in \mathbb{R}^{H\times W\times C}$. Then $X_0$ is fed into four consecutive encoder levels. Each level includes several LeWin Transformer blocks connected in series and a down-sampling operation implemented by a 4 × 4 convolution with stride two. After each level, the height and width of the feature maps are halved while the number of feature channels is doubled. Next, LeWin Transformer blocks connected in series serve as the bottleneck layer to capture longer dependencies.
The decoder has a symmetric structure with the encoder. Similarly, each decoder level consists of an up-sampling operation of 2 × 2 transposed convolution with stride two and a group of LeWin Transform blocks. Each level doubles the height and width of the feature maps while halving the feature channels. Owing to the design of the skip connection, the feature maps fed into the next decoder level are the concatenation of the output of the up-sampling layer and the features from the corresponding encoder level.
The multi-scale restoration modulators are denoted as learnable tensors with the shape of M × M × C , where M indicates the size of the window. The modulators are attached to all non-overlapping windows separately and act as a shared bias to calibrate features, which improves the adaptability of the network and promotes recovering more detail.
At last, the feature maps output from the decoder are reshaped to two-dimensional feature maps and sent to the output projection layer with a 3 × 3 convolution. The residual image $I_r \in \mathbb{R}^{H\times W\times 3}$ is then produced and superimposed on the input image to generate a deblurred high-resolution ISAR image $I_o = I_b + I_r$ ($I_o \in \mathbb{R}^{H\times W\times 3}$).

3.2.2. Discriminator

The traditional global discriminator aims to distinguish between generated and real images by considering them holistically, without focusing on whether the patches are well matched to the global image. In contrast, PatchGAN [32] slides a window over the input image and obtains a scoring matrix to judge whether each patch is real or fake, which is more effective in revealing local details and capturing high-resolution structure.
In the design of our proposed discriminator, as shown in Figure 4, the global GAN and PatchGAN are fused. Firstly, a shared layer consisting of LeWin Transformer blocks and down-sampling layers, with a structure similar to the encoder in the generator, is used to extract shallow features. After three levels, the network is divided into two paths. In one path, two dense layers with 512 and 1 channels, following an encoding layer, are used to extract the global features. The other path employs a 3 × 3 convolutional layer to output a feature matrix containing all patch-level features for evaluating the local texture details.
By incorporating the two paths of global GAN and PatchGAN, the overall architecture integrates the local context and global information and provides a comprehensive evaluation of the image as a whole, as well as the consistency in local details.
The performance of the generator and discriminator is constantly improved as they work against each other, and the network eventually outputs deblurred images close to the real ISAR images.
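The dual-path idea can be sketched as follows. This is a simplified illustration rather than the exact architecture of the paper: the shared trunk here uses plain strided convolutions instead of LeWin Transformer blocks, and all channel counts are assumptions.

```python
# Simplified dual-path discriminator sketch: shared trunk, global head (scalar
# score) and PatchGAN head (score map). Channel counts are illustrative.
import torch
import torch.nn as nn

class DualPathDiscriminator(nn.Module):
    def __init__(self, in_ch: int = 3, base: int = 32):
        super().__init__()
        self.shared = nn.Sequential(                      # three down-sampling levels
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
        )
        self.global_head = nn.Sequential(                 # global realism score
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(base * 4, 512), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 1),
        )
        self.patch_head = nn.Conv2d(base * 4, 1, 3, 1, 1) # per-patch score map

    def forward(self, x):
        feat = self.shared(x)
        return self.global_head(feat), self.patch_head(feat)

d = DualPathDiscriminator()
global_score, patch_scores = d(torch.rand(1, 3, 256, 256))
print(global_score.shape, patch_scores.shape)   # (1, 1) and (1, 1, 32, 32)
```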

3.2.3. LeWin Transformer Block

The standard Transformer structure has two disadvantages in image restoration. Firstly, it performs global self-attention on the feature maps, leading to a computational cost that is quadratic in the size of the feature maps. Secondly, the Transformer has a limited capability of leveraging local context, which is important for restoring deblurred ISAR images with high resolution.
Unlike the standard Transformer, the LeWin Transformer block performs window-based multi-head self-attention (W-MSA) on non-overlapping local windows to reduce the computational cost, as shown in Figure 5. Moreover, the traditional feed-forward network is improved by adding a depth-wise convolutional layer to enhance its local expression ability, yielding the locally-enhanced feed-forward network (LeFF).
Figure 6 illustrates how the feature map is divided into non-overlapping windows.
Suppose the feature map $X_{\mathrm{input}} \in \mathbb{R}^{H\times W\times C}$ is partitioned into non-overlapping windows of size $M\times M$; the feature map in the i-th window $X_{\mathrm{input}}^{i} \in \mathbb{R}^{M\times M\times C}$ is flattened and transposed to $X^{i} \in \mathbb{R}^{M^2\times C}$, where $i \in \{1, 2, \ldots, N\}$ and $N = HW/M^2$. Next, to apply self-attention, $X^{i}$ is projected to the query, key, and value matrices $Q^{i}$, $K^{i}$, and $V^{i} \in \mathbb{R}^{M^2\times C}$, respectively:
$$Q^{i} = X^{i}W^{Q},\quad K^{i} = X^{i}W^{K},\quad V^{i} = X^{i}W^{V}\tag{16}$$
where $W^{Q}$, $W^{K}$, and $W^{V}$ are the projection matrices. Next, $Q^{i}$, $K^{i}$, and $V^{i}$ are each divided into k heads:
$$Q^{i} = \{Q_{1}^{i},\ldots,Q_{k}^{i}\},\quad K^{i} = \{K_{1}^{i},\ldots,K_{k}^{i}\},\quad V^{i} = \{V_{1}^{i},\ldots,V_{k}^{i}\}\tag{17}$$
The self-attention (SA) for the j-th head can be written as:
$$\mathrm{SA}\left(Q_{j}^{i},K_{j}^{i},V_{j}^{i}\right) = \operatorname{softmax}\left(\frac{Q_{j}^{i}\left(K_{j}^{i}\right)^{\mathsf T}}{\sqrt{C/k}}\right)V_{j}^{i}\tag{18}$$
The output feature map $X_{\mathrm{output}}^{i} \in \mathbb{R}^{M\times M\times C}$ of the i-th window is obtained by concatenating the attention values of all heads and reshaping:
$$X_{\mathrm{output}}^{i} = \operatorname{Reshape}\left(\operatorname{Concat}_{j=1}^{k}\left(\mathrm{SA}\left(Q_{j}^{i},K_{j}^{i},V_{j}^{i}\right)\right)W^{O} + P\right)\tag{19}$$
where $\operatorname{Reshape}(\cdot)$ denotes the reshaping operation, $\operatorname{Concat}(\cdot)$ denotes concatenation, $W^{O} \in \mathbb{R}^{C\times C}$ denotes learnable parameters, and $P \in \mathbb{R}^{M^2\times C}$ denotes the position embedding. At last, the output feature maps of all windows $\{X_{\mathrm{output}}^{1}, X_{\mathrm{output}}^{2},\ldots,X_{\mathrm{output}}^{N}\}$ are combined to obtain the overall feature map of the entire image $X_{\mathrm{output}} \in \mathbb{R}^{H\times W\times C}$.
Adjacent pixels are the essential references for image restoration, but the present Feed-Forward Network (FFN) in the standard Transformer shows a limitation in extracting local context information. The design of LeFF overcomes this drawback by adding a 3 × 3 depth-wise convolutional layer to the Feed-Forward Network.
The design of the LeWin Transformer block obviously reduces the computational cost. Given the feature map $X_{\mathrm{input}} \in \mathbb{R}^{H\times W\times C}$, the computational complexity of the standard Transformer is $O(H^2W^2C)$, while that of the LeWin Transformer block is $O\left(\frac{HW}{M^2}M^4C\right) = O(M^2HWC)$.
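The window-based multi-head self-attention of Equations (16)-(19) can be sketched with plain PyTorch operations as follows; the learnable position bias P and the LeFF layer of the full LeWin block are omitted for brevity, and the window size and head count are illustrative assumptions.

```python
# Minimal sketch of window-based multi-head self-attention (W-MSA).
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    def __init__(self, dim: int, window: int = 8, heads: int = 4):
        super().__init__()
        self.M, self.h, self.d = window, heads, dim // heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)   # W^Q, W^K, W^V
        self.proj = nn.Linear(dim, dim)                      # W^O

    def forward(self, x):                       # x: (B, H, W, C)
        B, H, W, C = x.shape
        M = self.M
        # partition into non-overlapping M x M windows and flatten each window
        xw = x.view(B, H // M, M, W // M, M, C).permute(0, 1, 3, 2, 4, 5)
        xw = xw.reshape(-1, M * M, C)                        # (B*N, M^2, C)
        q, k, v = self.to_qkv(xw).chunk(3, dim=-1)
        # split channels into heads: (B*N, heads, M^2, C/heads)
        q, k, v = (t.view(-1, M * M, self.h, self.d).transpose(1, 2) for t in (q, k, v))
        attn = (q @ k.transpose(-2, -1)) / self.d ** 0.5     # scaled dot-product
        out = attn.softmax(dim=-1) @ v                       # (B*N, heads, M^2, d)
        out = out.transpose(1, 2).reshape(-1, M * M, C)
        out = self.proj(out)
        # merge the windows back into the full feature map
        out = out.view(B, H // M, W // M, M, M, C).permute(0, 1, 3, 2, 4, 5)
        return out.reshape(B, H, W, C)

y = WindowAttention(dim=32)(torch.rand(2, 64, 64, 32))
print(y.shape)    # torch.Size([2, 64, 64, 32])
```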

3.3. Loss Function

The role of the loss function is to optimize the network in the expected direction during training. Different designs of the loss function will improve the performance of the output images in different aspects. Using a combination of several kinds of loss functions can improve the overall performance of the output images.

3.3.1. The Charbonnier Loss

Exploiting the mean square error (MSE) as the loss function promotes a high peak signal-to-noise ratio (PSNR) of the reconstruction results. However, MSE easily loses high-frequency information and produces over-smooth textures, which makes some weak scatterers disappear in ISAR images. To overcome this issue, the Charbonnier loss function is adopted:
$$L_{\mathrm{char}}(I_o, I) = \sqrt{\left\|I_o - I\right\|^2 + \varepsilon^2}\tag{20}$$
where $I_o$ is the output deblurred ISAR image, $I$ is the ideal unblurred ISAR image, and $\varepsilon = 10^{-3}$ [31,36,37] is a constant that stabilizes the value in experiments.
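A minimal sketch of the Charbonnier loss follows; it uses the per-pixel form averaged over all pixels, which is the common implementation, with ε taken from the value quoted above.

```python
# Minimal sketch of the Charbonnier loss of Equation (20), per-pixel form.
import torch

def charbonnier_loss(output: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-3) -> torch.Tensor:
    # sqrt((I_o - I)^2 + eps^2), averaged over all pixels for a stable scale
    return torch.sqrt((output - target) ** 2 + eps ** 2).mean()

loss = charbonnier_loss(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
print(float(loss))
```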

3.3.2. The Perceptual Loss

In order to achieve high-quality ISAR imaging while removing the blur, a perceptual loss focusing on image texture and edge features is used. Instead of directly calculating the loss between the output image and the ideal image, the key idea of the perceptual loss is to compare the feature maps of the real image and the generated image, enhancing their similarity in feature space. The perceptual loss can be formulated as:
$$L_{\mathrm{perc}}(I_o, I) = \frac{1}{H_iW_iC_i}\left\|\varphi_i(I_o) - \varphi_i(I)\right\|_2^2\tag{21}$$
where $H_i$, $W_i$, and $C_i$ represent the height, width, and number of channels of the feature map at the i-th layer, and $\varphi_i(\cdot)$ is the function that extracts the feature map of an image at that layer. We select the fourth layer to calculate the perceptual loss.
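A sketch of the perceptual loss is given below. The feature extractor is an assumption: the text does not name the backbone, so a pre-trained VGG16 (the usual choice for the perceptual loss of [33]) is used, and the layer cut-off standing in for the "fourth layer" is likewise assumed.

```python
# Sketch of the perceptual loss of Equation (21); VGG16 backbone and layer
# choice are assumptions, and ImageNet normalization is omitted for brevity.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class PerceptualLoss(nn.Module):
    def __init__(self, layer_idx: int = 23):           # up to relu4_3 (assumed)
        super().__init__()
        self.features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:layer_idx].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)                     # the extractor is frozen

    def forward(self, output: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        fo, ft = self.features(output), self.features(target)
        # mean squared difference of the feature maps, i.e. Equation (21)
        # averaged over the batch as well
        return ((fo - ft) ** 2).mean()

loss_fn = PerceptualLoss()
loss = loss_fn(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
```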

3.3.3. The Adversarial Loss

The classic adversarial loss of GAN suffers from training difficulties, unstable gradients, mode collapse, etc. In order to train the network stably, the adversarial loss function of the Wasserstein GAN with gradient penalty (WGAN-GP) proposed by Gulrajani et al. is used [38]. WGAN-GP builds on the Earth-Mover (EM) distance, and the objective function can be written as:
$$\min_G\max_D V(G,D) = \mathbb{E}_{I\sim P(I)}\left[D(I)\right] - \mathbb{E}_{I_O\sim P(I_O)}\left[D(G(I_O))\right] - \lambda\,\mathbb{E}_{I_{\mathrm{Pena}}\sim P(I_{\mathrm{Pena}})}\left[\left(\left\|\nabla D(I_{\mathrm{Pena}})\right\|_2 - 1\right)^2\right]\tag{22}$$
where $\mathbb{E}$ denotes the expectation, $I_{\mathrm{Pena}}$ represents the sample image on which the penalty is imposed, $P(\cdot)$ indicates the distribution of the image, $G(\cdot)$ and $D(\cdot)$ indicate the outputs of the generator and the discriminator, respectively, $\nabla$ indicates the gradient, $\lambda$ is the gradient penalty coefficient, and the last term is the additional gradient penalty that constrains network training. The discriminator loss and generator loss with gradient penalty can be written as:
$$L_{\mathrm{adv}}^{D} = \mathbb{E}_{I_O\sim P(I_O)}\left[D(G(I_O))\right] - \mathbb{E}_{I\sim P(I)}\left[D(I)\right] + \lambda\,\mathbb{E}_{I_{\mathrm{Pena}}\sim P(I_{\mathrm{Pena}})}\left[\left(\left\|\nabla D(I_{\mathrm{Pena}})\right\|_2 - 1\right)^2\right]\tag{23}$$
$$L_{\mathrm{adv}}^{G} = -\mathbb{E}_{I_O\sim P(I_O)}\left[D(G(I_O))\right]\tag{24}$$
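The WGAN-GP losses of Equations (23) and (24) can be sketched as follows; the gradient penalty is computed on random interpolates between real and generated images, as in the original WGAN-GP formulation, and `critic` stands for any discriminator head that returns one score per image.

```python
# Minimal sketch of the WGAN-GP discriminator and generator losses.
import torch

def gradient_penalty(critic, real, fake, lam: float = 10.0) -> torch.Tensor:
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_pen = (eps * real + (1.0 - eps) * fake).requires_grad_(True)   # I_Pena
    score = critic(x_pen).sum()
    grads, = torch.autograd.grad(score, x_pen, create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()

def d_loss(critic, real, fake) -> torch.Tensor:
    fake = fake.detach()                      # generator is fixed for this step
    # E[D(G(I_O))] - E[D(I)] + lambda * gradient penalty (Equation (23))
    return critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)

def g_loss(critic, fake) -> torch.Tensor:
    # -E[D(G(I_O))] (Equation (24))
    return -critic(fake).mean()
```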

3.3.4. The Overall Loss Function

Finally, the overall loss function is obtained as the weighted sum of the above three loss terms:
$$L = L_{\mathrm{char}}(I_o, I) + \lambda_1 L_{\mathrm{perc}}(I_o, I) + \lambda_2\left(L_{\text{G-adv}}^{D} + L_{\text{G-adv}}^{G}\right) + \lambda_3\left(L_{\text{P-adv}}^{D} + L_{\text{P-adv}}^{G}\right)\tag{25}$$
where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are three tradeoff parameters that balance the combination of loss terms. Specifically, the generator parameters are updated with the overall loss $L$, while the global GAN path and the PatchGAN path are trained with $\lambda_2(L_{\text{G-adv}}^{D} + L_{\text{G-adv}}^{G})$ and $\lambda_3(L_{\text{P-adv}}^{D} + L_{\text{P-adv}}^{G})$, respectively. The generator and discriminator are trained alternately: during each mini-batch, the discriminator is fixed when training the generator, and then the generator is fixed when training the discriminator.

4. Data Generation

4.1. Generation of Simulated Targets

In practice, the scattering points on a target do not always emerge individually but exist in the form of regions or blocks. According to the scattering distribution characteristics exhibited in ISAR images, we divide the imaging targets into two categories, i.e., point targets and block targets. Point targets are composed of individual scattering points and can be easily simulated by setting up randomly distributed scattering points. However, for a block target, the spectrum of the ISAR image is mixed and superimposed due to the aggregation of the scattering points, leading to rich image details and texture information. Block targets simulated with simple shapes are quite different from the real data.
Existing data-driven DL methods directly use networks trained on simulated point targets to image measured block targets. However, this approach only preserves the pixels with large magnitudes as individual scattering points, ignoring the weak scatterers around them and losing a lot of image detail. In this paper, we propose a pseudo-measured data generation method to generate a variety of block targets with scattering distributions similar to those of real measured data. Because our focus is on imaging aircraft targets, the generation of various pseudo-measured aircraft block targets is taken as an example, as shown in Figure 7.
The processing steps of the method can be presented as follows:
The first step is to acquire a variety of aircraft geometric outlines. We use the images under the “aeroplane” category in the PASCAL VOC2012 Augmented Dataset [39] to train the DeepLabv3+ network to segment geometric outlines from images containing aircraft targets. Next, by inputting the 2602 images under the “airliner” and “warplane” categories of the ImageNet2012 dataset [40] into the trained network model, 2602 images of aircraft geometric outlines are finally obtained.
The second step is gridding. Each aircraft geometric outline is meshed and mapped to a plane Cartesian coordinate with the size of 40 m × 40 m.
The third step is to generate random blocks within the gridded aircraft geometric outlines. The Diamond-Square algorithm is a random terrain generation algorithm that can randomly generate terrains with various shapes, such as mountains, hills, and oceans, in the grid of virtual scenes. Inspired by this, we refer to the Diamond-Square algorithm to randomly generate continuous scattering blocks within the aircraft’s geometric outlines.
Starting from the initial conditions, the scattering coefficient grid is continuously refined and calculated through the Diamond step and the Square step. The Diamond step is to calculate the scattering coefficient of the intersection of the square diagonals by a 2D random midpoint displacement algorithm, and the Square step is to calculate the scattering coefficients of the midpoint of each side of the square with the same random offset as the Diamond step. The detailed algorithm steps can be found in Appendix A.
Through the iterative calculation of the two steps, the scattering coefficients of all grid points can be obtained. Some examples of the generated block targets are shown in Figure 8.

4.2. Acquisition of Blurred ISAR Images

For simulated targets, the blurred ISAR images of maneuvering targets can be easily obtained according to Equation (15). However, for real data, most of the publicly available measured ISAR data are collected from smoothly moving targets. To address the problem that measured ISAR data of real maneuvering targets are scarce, the Fourier interpolating re-sampling method is used to generate the equivalent ISAR echo data of maneuvering targets based on the existing measured data of smooth targets.
As indicated in Section 2, for a maneuvering target, a uniform motion or uniformly accelerated motion model is not sufficient to accurately describe the motion state of the target. By retaining the third-order term of the angular displacement with respect to time, the motion of the target is modeled as a variably accelerated rotation:
$$\omega = \omega_0 + \beta t + \frac{1}{2}\gamma t^2\tag{26}$$
where $\omega_0$, $\beta$, and $\gamma$ represent the initial angular velocity, the angular acceleration, and the angular jerk of the target, respectively.
Assuming that the slow-time sampling interval is $T_l$ and the number of azimuth samples is $N_l$ when the target rotates at a uniform angular velocity $\omega_l$, the total angle the target rotates through during the CPI can be calculated by:
$$\Omega = \omega_l(N_l - 1)T_l\tag{27}$$
When the target rotates through the same angle $\Omega$ with the variable acceleration of Equation (26), assuming that the angular velocity increment caused by the angular acceleration is $P\omega_0$, the angular acceleration increment caused by the angular jerk $\gamma$ is $Q\beta$, the radar slow-time sampling interval is $T$, and the number of samples is $N$, we can obtain:
$$\begin{cases}\omega_l(N_l-1)T_l = \omega_0(N-1)T + \dfrac{1}{2}\beta\left[(N-1)T\right]^2 + \dfrac{1}{6}\gamma\left[(N-1)T\right]^3\\[1mm] P\omega_0 = \beta(N-1)T + \dfrac{1}{2}\gamma\left[(N-1)T\right]^2\\[1mm] Q\beta = \gamma(N-1)T\end{cases}\tag{28}$$
The above system of equations can be solved as:
$$\begin{cases}T = \dfrac{N_l-1}{(N-1)\left(\dfrac{3+Q}{3Q+6}P+1\right)\omega_0}\\[3mm]\beta = \dfrac{\left(\dfrac{3+Q}{3Q+6}P+1\right)P\omega_0^2}{(N_l-1)(0.5Q+1)}\\[3mm]\gamma = \dfrac{\left(\dfrac{3+Q}{3Q+6}P+1\right)^2PQ\omega_0^3}{(N_l-1)^2(0.5Q+1)}\end{cases}\tag{29}$$
When observing the variably accelerating target, the rotation angle at the n-th ($n\in[0, N-1]$) sample point can be calculated as:
$$\Omega_n = \omega_0Tn + \frac{1}{2}\beta n^2T^2 + \frac{1}{6}\gamma n^3T^3 = \frac{N_l-1}{(N-1)\xi(P,Q)}n + \frac{(N_l-1)P}{(N-1)^2(Q+2)\xi(P,Q)}n^2 + \frac{(N_l-1)PQ}{(N-1)^3(3Q+6)\xi(P,Q)}n^3\tag{30}$$
where $\xi(P,Q) = \dfrac{3+Q}{3Q+6}P + 1$.
Since the radar pulse repetition interval is fixed, the uniformly moving target is sampled at equal intervals, while the variable-speed moving target is sampled at unequal intervals. Therefore, the slow-time sampling signal of the variably accelerating target can be obtained by interpolating and re-sampling the echo of the uniformly moving target according to the displacement rule indicated by Equation (30).
Suppose the slow-time sampling signal of the uniformly moving target is $s(\Omega_n)$; by converting the distance axis of the signal into the time axis, the slow-time sampling sequence $s(n)$ can be obtained. The discrete Fourier transform of $s(n)$ is:
$$S(k) = \sum_{n=0}^{N_l-1}s(n)\exp\left(-j\frac{2\pi kn}{N_l}\right),\quad k\in\left[-\frac{N_l-1}{2},\frac{N_l-1}{2}\right]\tag{31}$$
Assuming that the slow-time sampling signal of the variably accelerating target is $s(\Omega_n)$, the interpolated slow-time sampling sequence of the variably accelerating target $s'(n)$ can be obtained by the inverse Fourier transform as:
$$s'(n) = s(\Omega_n) = s\!\left(\frac{N_l-1}{\xi(P,Q)}\left[\frac{n}{N-1}+\frac{Pn^2}{(N-1)^2(Q+2)}+\frac{PQn^3}{(N-1)^3(3Q+6)}\right]\right) = \frac{1}{N_l}\sum_{k=-(N_l-1)/2}^{(N_l-1)/2}S(k)\exp\left\{j\frac{2\pi k}{N_l}\cdot\frac{N_l-1}{\xi(P,Q)}\left[\frac{n}{N-1}+\frac{Pn^2}{(N-1)^2(Q+2)}+\frac{PQn^3}{(N-1)^3(3Q+6)}\right]\right\},\quad n\in[0,N-1]\tag{32}$$
According to Equation (32), once the velocity increment P and the acceleration increment Q of the variably accelerated motion during the CPI are known, interpolating re-sampling can be performed on the slow-time sampling sequence of the uniform motion covering the same total rotation angle to obtain the equivalent slow-time sampling sequence of the maneuvering target.
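A minimal NumPy sketch of this interpolating re-sampling is given below; it operates on one range cell of the slow-time sequence, and the values of P, Q, and N are illustrative assumptions.

```python
# Minimal sketch of Fourier interpolating re-sampling per Equation (32):
# evaluate the uniform-motion slow-time sequence at the non-uniform sample
# positions Omega_n of the equivalent maneuvering target.
import numpy as np

def resample_maneuvering(s_uniform: np.ndarray, N: int, P: float, Q: float) -> np.ndarray:
    Nl = len(s_uniform)
    # integer frequency indices paired consistently with the shifted spectrum
    k = np.fft.fftshift(np.fft.fftfreq(Nl)) * Nl
    S = np.fft.fftshift(np.fft.fft(s_uniform))
    xi = (3.0 + Q) / (3.0 * Q + 6.0) * P + 1.0          # xi(P, Q)
    n = np.arange(N)
    # equivalent (fractional) sample positions Omega_n of Equation (30)
    omega_n = (Nl - 1) / xi * (n / (N - 1)
                               + P * n**2 / ((N - 1) ** 2 * (Q + 2.0))
                               + P * Q * n**3 / ((N - 1) ** 3 * (3.0 * Q + 6.0)))
    # inverse DFT evaluated at the fractional positions (Equation (32))
    return np.exp(1j * 2.0 * np.pi * np.outer(omega_n, k) / Nl) @ S / Nl

s_uniform = np.exp(1j * 2 * np.pi * 0.1 * np.arange(256))   # toy slow-time signal
s_maneuver = resample_maneuvering(s_uniform, N=256, P=0.5, Q=0.3)
```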

4.3. Acquisition of Label ISAR Images

As shown in Figure 1, assuming the total rotation angle during the CPI is $\Omega$, the echo signal of the target can be regarded as the sum of the backscattered fields of the scattering points:
$$E^S(k,\Omega) = \sum_{i=1}^{N}\sigma_i\exp\left[-j2\left(k\cos\Omega\cdot x_i + k\sin\Omega\cdot y_i\right)\right]\tag{33}$$
where N represents the total number of scattering points on the target, $\sigma_i$ is the scattering coefficient, $(x_i, y_i)$ is the coordinate location of the i-th scattering point, and k represents the wave number. Under the small-angle condition, k can be approximated as $k\approx k_c = 2\pi f_c/c$, where $f_c$ is the center frequency and $k_c$ is the wave number at the center frequency. Therefore, Equation (33) can be simplified as:
$$E^S(k,\Omega) = \sum_{i=1}^{N}A_i\exp\left(-j\frac{4\pi f_c}{c}x_i\right)\exp\left(-j\frac{4\pi\Omega}{\lambda}y_i\right)\tag{34}$$
For ISAR imaging, the two-dimensional point spread response (PSR) in the range and azimuth directions can be expressed as:
$$h(x,y) = \left[\frac{2B}{c}\operatorname{sinc}\left(\frac{2B}{c}x\right)\exp\left(-j\frac{4\pi f_c}{c}x\right)\right]\cdot\left[\frac{2\Omega}{\lambda}\operatorname{sinc}\left(\frac{2\Omega}{\lambda}y\right)\exp\left(-j\frac{4\pi\Omega_c}{\lambda}y\right)\right]\tag{35}$$
where B represents the bandwidth of the transmitted signal and $\Omega_c$ represents the central value of the coherent accumulation angle.
The ideal ISAR image of all scattering points on the target can be obtained by the two-dimensional inverse Fourier transform (2D-IFFT) of the echo signal $E^S(k,\Omega)$ as:
$$\mathrm{ISAR}(x,y) = \int_{k_{\min}}^{k_{\max}}\int_{\Omega_{\min}}^{\Omega_{\max}}E^S(k,\Omega)\exp\left[j2k\left(x\cos\Omega + y\sin\Omega\right)\right]dk\,d\Omega = \sum_{i=1}^{N}A_i\,\delta\left(x - x_i,\,y - y_i\right)\otimes h(x,y)\tag{36}$$
where $k_{\min}$ and $k_{\max}$ represent the minimum and maximum values of the spatial wave number, $\Omega_{\min}$ and $\Omega_{\max}$ represent the initial and final look angles, $\delta(x,y)$ is the impulse response, and $\otimes$ denotes two-dimensional convolution.
It can be seen from Equation (36) that the 2D ISAR imaging result is simply the convolution of the position coordinates of all scattering points on the target with the 2D PSR.
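A minimal NumPy sketch of how an ideal label image can be formed is given below: the small-angle backscattered field of Equation (34) is evaluated on a frequency/angle grid and a 2D inverse FFT is applied, which is equivalent to convolving the scatterer positions with the 2D PSR. The radar parameters and scatterer list are illustrative, and the range phase is evaluated at every transmitted frequency sample rather than only at f_c, as is usual in a discrete implementation.

```python
# Minimal sketch: ideal (unblurred) label ISAR image from the scatterer model.
import numpy as np

c = 3e8
fc, B = 5.52e9, 400e6                    # carrier frequency and bandwidth, assumed
lam = c / fc
Nr, Na = 128, 128                        # range / azimuth samples
f = fc + np.linspace(-B / 2, B / 2, Nr)  # transmitted frequency samples
omega = np.linspace(-0.02, 0.02, Na)     # look angles over the CPI (rad), assumed

# scatterers: (x_i in range, y_i in cross-range, amplitude A_i)
scatterers = [(2.0, -3.0, 1.0), (-1.5, 4.0, 0.8), (0.0, 0.0, 1.2)]

F, W = np.meshgrid(f, omega, indexing="ij")          # (Nr, Na) grids
E = np.zeros((Nr, Na), dtype=complex)
for x_i, y_i, a_i in scatterers:
    E += a_i * np.exp(-1j * 4 * np.pi * F / c * x_i) * \
         np.exp(-1j * 4 * np.pi * W / lam * y_i)

label = np.abs(np.fft.fftshift(np.fft.ifft2(E)))     # ideal ISAR label image
print(label.shape, label.max())
```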

5. Experiments

In this section, we conduct simulated and measured experiments on ISAR imaging of maneuvering point targets and block targets based on the proposed UFGAN. The imaging results and imaging times are compared with the classical RD algorithm, which is designed for smooth targets, and with the STFT, WVD, SPWVD, and RWT algorithms, which are used for maneuvering targets. The image entropy (IE), structural similarity (SSIM), target-to-clutter ratio (TCR), and imaging time are used as evaluation metrics to quantitatively evaluate the performance of the different methods.

5.1. ISAR Imaging for Maneuvering Point Targets

The publicly available Boeing-727 data from V.C. Chen [41] are typical echo data of a point target. The numbers of range cells and azimuth cells are 64 and 256, respectively. The radar transmits a signal with a bandwidth of 150 MHz and a PRF of 20 kHz. The carrier frequency is 9 GHz. In order to ensure the best performance of the proposed method on the Boeing-727 data, the same radar parameters are set for all simulated scattering point targets. The training set consists of 500 point targets, which are composed of randomly distributed scattering points.
As analyzed in Section 2, the blurring level of the image depends on the parameters $K_{\beta\omega}$ and $K_{\gamma\beta}$. Considering the achievability of the maneuver in reality, we first restrict the values of $K_{\beta\omega}$ and $K_{\gamma\beta}$ to be randomly taken between 0 and 5. Then the initial angular velocity $\omega_0$ is set to be randomly taken in the range of 0.01–0.1 rad/s. The angular acceleration $\beta$ and the jerk $\gamma$ are then determined by the above three parameters. In other words, $\omega_0$, $K_{\beta\omega}$, and $K_{\gamma\beta}$ completely determine the motion state of the target. In order to improve the imaging performance of the network under sparse aperture and low SNR, additive white Gaussian noise (AWGN) with SNR = −10 dB to 10 dB is added to the echo, which is also down-sampled at a sampling ratio of 20–80% in the azimuth direction. In this paper, the sampling ratio indicates the ratio of the number of retained samples after down-sampling to the total number of samples. The training stage went through 250 epochs and took 3 h in total.
Figure 9 shows a training sample. Figure 9a presents the coordinate distribution of the scattering points of the point target. The motion parameters are $\omega_0 = 0.024$ rad/s, $K_{\beta\omega} = 3.5$, and $K_{\gamma\beta} = 4.5$; thus, the angular acceleration and jerk can be calculated as $\beta = \omega_0K_{\beta\omega} = 0.084$ rad/s² and $\gamma = \beta K_{\gamma\beta} = 0.378$ rad/s³. Figure 9b presents the blurred imaging result of the RD method under a 30% sampling ratio with SNR = 0 dB. The ideal ISAR image without any phase error is generated according to Equation (36), as shown in Figure 9c.

5.1.1. Simulated Experiments

A simulated aircraft model of 74 points, as shown in Figure 10, is used to test the trained UFGAN network.
Assuming that the motion compensation has been completed, the target equivalently rotates at a variable acceleration. We set different motion parameters and conducted three experiments with different noise levels and sampling ratios. The motion parameters and imaging conditions are presented in Table 1.
The imaging results of the three experiments are presented in Figure 11. We can see that the high-resolution deblurred ISAR images can always be recovered with different motion parameters and signal conditions.
According to Table 1 and Figure 11, several conclusions drawn from Equation (15) can be verified: (i) the degree of blurring of the ISAR image of a maneuvering target obtained by the RD algorithm does not depend on the magnitude of its angular velocity, angular acceleration, or angular jerk, but on their proportional relationship, i.e., the values of the relative acceleration ratio $K_{\beta\omega}$ and the relative jerk ratio $K_{\gamma\beta}$; (ii) the blurring of the target gradually becomes more serious as $K_{\beta\omega}$ and $K_{\gamma\beta}$ increase. In the case of a short imaging time, $K_{\gamma\beta}$ has less effect on the degree of image blurring than $K_{\beta\omega}$.

5.1.2. Imaging Experiments of Public BOEING-727 Data

Boeing-727 data from Victor C. Chen are used to verify the effectiveness of the proposed method on real ISAR data. In order to demonstrate the superiority of the proposed method, we select several traditional methods for comparison.
Figure 12 presents the imaging results of different methods under full aperture without adding any noise. It can be seen that the imaging result by RD method is heavily blurred in the azimuth direction. STFT method has eliminated the main blurring of the image but suffers a low time-frequency resolution. ISAR images by WVD method have a better resolution, but cross-terms appear, degrading the quality of the imaging result. SPWVD method succeeds in suppressing the cross-terms but reduces the frequency resolution and loses some weak scattering points. RWT method effectively improves the azimuth focusing of the image, but there still exists some blurring and smearing. Compared with the above traditional methods, the proposed method achieves high-resolution restoration of the ISAR images without any blurring.
The second experiment verifies the imaging performance of the proposed method under low SNR conditions by adding noise to the Boeing-727 echo. Figure 13 presents the imaging results of the different methods under SNR = −10 dB. It can be seen that the ISAR images obtained by the traditional methods are seriously degraded, such that the target is barely distinguishable due to the interference of the noise. By contrast, Figure 13f shows that the UFGAN network can effectively restore the blurred image even in a scenario with strong noise. Although several hot pixels exist in the background, the target subject is still reconstructed with high quality.
Table 2 gives the values of the evaluation indicators of the traditional algorithms and the proposed method for Figure 13. For the Boeing-727 data, the label ISAR image is unavailable, so the SSIM could not be used as an evaluation indicator in this experiment. The RD algorithm is far superior to the other traditional algorithms in imaging time, but its image quality is the worst according to the values of IE and TCR. The proposed method outperforms the other traditional algorithms considerably in both imaging quality and imaging time.
The third experiment verifies the imaging performance of the proposed method under sparse aperture. Figure 14a gives the down-sampled echo signal with a sampling ratio of 40%. Figure 14b–f presents the imaging results of the different methods under sparse aperture. It can be seen that the target suffers different degrees of spectral occlusion with the traditional algorithms. By contrast, a high-resolution image with a clean background is obtained by the proposed method.
Table 3 gives the values of the evaluation indicators of the traditional algorithms and the proposed method for Figure 14. It can be seen that the proposed method achieves superior performance compared with the traditional methods, and its imaging time is shorter than that of all traditional methods except the RD algorithm.

5.2. ISAR Imaging for Maneuvering Block Targets

In this section, we use the pseudo-measured dataset presented in Section 4 to carry out imaging experiments on maneuvering block targets. The radar parameters are set to be the same as those of the measured Yak-42 data: the size of the echo matrix is 256 × 256, the carrier frequency is 5.52 GHz, the bandwidth is 400 MHz, the pulse width is 25.6 µs, and the PRF is 400 Hz.
The motion parameters of the training data are set in the same way as for the point targets in Section 5.1. The blurred ISAR images and the ideal label ISAR images are then generated according to Equations (15) and (36), respectively, which together form the paired pseudo-measured ISAR image sets.
In order to enhance the robustness of the network, sparse aperture and low SNR are considered. Noise with SNR randomly distributed in the range of −10 dB to 10 dB is added to the pseudo-measured echoes, and the sampling ratio is randomly distributed in the range of 20–80%. We allocate 80% of the paired images as the training set and the remaining 20% as the test set to train the UFGAN network. The training stage went through 300 epochs and took 7 h in total.
Figure 15 gives a sample of the training set. Figure 15a presents the coordinate distribution of the scattering points of the block target. The motion parameters of the block target are $\omega_0 = 0.072$ rad/s, $K_{\beta\omega} = 4.0$, and $K_{\gamma\beta} = 1.5$; thus, the angular acceleration and jerk can be calculated as $\beta = \omega_0K_{\beta\omega} = 0.288$ rad/s² and $\gamma = \beta K_{\gamma\beta} = 0.432$ rad/s³. Figure 15b presents the blurred ISAR imaging result of the RD method under a 35% sampling ratio with a noise level of SNR = 0 dB. Figure 15c presents the ideal imaging result.

5.2.1. Simulated Experiments

We chose a block target from the test set to test the effectiveness of the trained UFGAN network. The coordinate distribution and ideal ISAR image of the block target are shown in Figure 16.
We conducted four experiments with different motion parameters and imaging conditions, as presented in Table 4.
Figure 17 shows the imaging results of the four experiments. It can be seen that, under acceptable noise levels and sampling ratios, the blurred ISAR images of the maneuvering block targets can be effectively restored by the network to high-quality images without phase error. Even under more extreme imaging conditions, as shown in Figure 17d,h, the target subject region of the image can still be deblurred, although some hot pixels appear.

5.2.2. Measured Experiments

During the data acquisition of the Yak-42 aircraft by the experimental ISAR radar, the target exhibited little maneuver, so its motion can be approximately regarded as uniform during the CPI. The Yak-42 echo can therefore be imaged by the RD algorithm to obtain the unblurred ISAR image shown in Figure 18.
To verify the performance of the trained UFGAN in maneuvering block target imaging, we adopt the Fourier interpolation re-sampling method given in Section 4 to generate the echo data of a maneuvering target on the basis of the original Yak-42 data. The motion parameters are set as an angular velocity of 0.03 rad/s, an angular acceleration of 0.1 rad/s², and an angular jerk of 0.4 rad/s³. Several time-frequency analysis methods and parameter estimation methods are also compared with the proposed method. Because the cross-term issue of the WVD method has a particularly serious impact on block targets, leading to a poor imaging result, it is no longer used as a comparison algorithm in the Yak-42 imaging experiment.
Figure 19 shows the imaging results without adding extra noise under full aperture. Figure 19a presents the “dechirped” echo after motion compensation. Figure 19b–f presents the imaging results by several traditional algorithms. It can be seen that the spectrum is heavily expanded along azimuth direction by RD algorithm. STFT algorithm alleviates the blurring issue of the image but has a low time-frequency resolution. The SPWVD method suppresses the cross-terms at the expense of stretching the spectrum in the azimuth direction, and the image contrast is severely decreased. RWT method improves the time-frequency resolution but has a weak ability to distinguish weak scattering areas from background noise, leading to a structural loss of the target scatterers. By contrast, the proposed method achieves superior performance, reconstructing a high-resolution ISAR image with rich details and fine textures.
Figure 20 and Figure 21 present the imaging results under different noise levels and sparse aperture. Figure 20a and Figure 21a present the down-sampled echoes at sampling ratios of 50% and 25% with SNR = 0 dB and −10 dB, respectively. Figure 20b–f and Figure 21b–f present the imaging results of the different methods. The motion parameters are set as angular velocities of 0.05 rad/s and 0.08 rad/s, angular accelerations of 0.2 rad/s² and 0.24 rad/s², and angular jerks of 0.8 rad/s³ and 0.72 rad/s³, respectively. It can be seen that, due to the presence of noise and sparse aperture, the quality of the ISAR images obtained by the traditional methods declines sharply. Especially under the condition of strong noise and a low sampling ratio, as shown in Figure 20, the target is almost completely submerged in noise and spectral occlusion and can hardly be distinguished in the degraded images. However, the deblurred high-resolution ISAR image with rich details and fine textures can still be restored by the UFGAN network, although several hot pixel blocks appear in the background.
Table 5 gives the evaluation indicators of the imaging results of the above two experiments by traditional algorithms and the proposed method for Figure 20 and Figure 21. Due to the Yak-42 data being collected from smooth target, the imaging result of the original echo by RD algorithm is unblurred, which can be used as the ideal image to calculate the SSIM. It can be seen that under the condition of sparse aperture and low SNR, the imaging performance of traditional methods seriously deteriorated, while the proposed method achieves fast and high-quality reconstruction of the ISAR images.

5.3. Performance Comparison with Existing Data-Driven Methods

To demonstrate that the proposed UFGAN-based method shows better performance in restoring the details and texture features of ISAR images compared with existing data-driven methods, we selected an ISAR super-resolution imaging method based on an improved GAN recently proposed by Wang H. et al. [22]. It is a typical data-driven method and, like most data-driven methods, uses randomly distributed scattering points as the training set. We conducted an imaging comparison experiment on the measured Yak-42 data. Figure 22 presents the imaging results on the measured Yak-42 data. The first, second, and third rows give the imaging results under ideal imaging conditions, under SNR = 0 dB at a 50% sampling ratio, and under SNR = −10 dB at a 25% sampling ratio, respectively. It can be seen that the imaging result of the RD algorithm is seriously blurred along the azimuth direction. The method in [22] achieves super-resolution, but the resulting ISAR images lose a lot of fine structural information, as shown in Figure 22b,e,h; the outline of the target is stretched along the azimuth, and the shape of the target is distorted due to the maneuverability of the target. In contrast, the proposed method achieves unblurred ISAR imaging with more details and fine textures, reconstructing the geometric shape and structure of the target more accurately.
To verify the robustness of the proposed UFGAN, Figure 23 presents the performance curves of further comparisons between the proposed DL-based method, the traditional methods, and the method in [22] under different SNRs and different sampling ratios. The motion parameters of the Yak-42 data are kept unchanged, the same as in the experiment presented in Figure 19. In order to control variables, the sampling ratio is fixed at 50% in Figure 23a, and the SNR is fixed at 0 dB in Figure 23b. It can be seen that the imaging results of the proposed method have the minimum IE under various SNR and sampling ratio conditions compared with the traditional methods.
It is worth noting that although some evaluation indicators of the method in [22], e.g., the IE, are smaller than those of the proposed method under various SNR and sampling ratio conditions, it loses many weak scattering points in the ISAR image and can hardly restore the details and texture features of the image, as shown in Figure 22, leading to distortion of the ISAR images in detail, while the proposed method performs better in the unblurred reconstruction of high-quality ISAR images, retaining more details and texture features.

6. Conclusions

For ISAR imaging of maneuvering targets, the existing deep learning methods could not avoid the blurring of ISAR images without the assistance of traditional methods such as RID and show a weak ability to restore image details and textures. In this article, a novel unblurring ISAR imaging method for maneuvering targets based on UFGAN is proposed. Firstly, according to the derivation of the azimuth echo signal with the form of a QFM signal, the blurred ISAR images for network training are obtained. To improve the generalization in measured data, we propose a pseudo-measured data generation method based on the DeepLabv3+ network and the Diamond-Square algorithm. Then we use the LeWin blocks and multi-scale restoration modulators to build a novel UFGAN, which can effectively restore the image details. The discriminator is designed by combining the PatchGAN and global GAN to aggregate the local and global information to provide a comprehensive evaluation of the image as a whole as well as the consistency in local details. A comprehensive loss function to consider both perceptual loss and adversarial loss is designed to match the performance of the network. In the test stage, to verify the effectiveness of the network on the measured data, Fourier interpolating re-sampling is used to obtain the equivalent ISAR echo of maneuvering targets. Finally, we conducted simulated and measured experiments and comparisons under sparse aperture and low SNR conditions to verify the effectiveness and efficiency of the proposed method.
It should be noted that the proposed method cannot effectively image multiple maneuvering targets, because the motion parameters of each target are different. To cope with this issue, a recognition module might be needed to distinguish the different targets in one imaging scene according to their degrees of blurring, so that different partitions of the image can be processed separately.

Author Contributions

Conceptualization and methodology, W.L.; software, W.L.; resources, Y.L.; writing—review and editing, W.L., Y.Y., Y.Z. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grants 62131020 and 62001508.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

As shown in Figure A1, the solid circles represent the newly calculated scattering points, and the hollow circles represent the known grid points whose scattering coefficients are updated at the same time.
Figure A1. The schematic diagram of the implementation steps of the Diamond-Square algorithm: (a) The initial points; (b) The generation of the center point; (c) The generation of the midpoint of each side of the square; (d) The generation of the intersection of the diagonals; (e) The generation of the midpoint of each side of the smaller square.
The algorithm is initialized by planting random seeds at the vertices of the square as their scattering coefficient values. The key calculation steps are as follows (a minimal code sketch of both steps is given after the list):
  • Diamond Step: As shown in Figure A1b,d, suppose the coordinates of the lower-left corner of the current square are $(x, y)$; the scattering coefficient at the intersection of the diagonals of the square is calculated as
    $$h\left(x+\frac{s}{2},\; y+\frac{s}{2}\right)=\frac{h(x,y)+h(x+s,y)+h(x,y+s)+h(x+s,y+s)}{4}+r_{mi}$$
    where $s$ is the iteration step, which is initialized as $S$ and halved at each iteration, and
    $$r_{mi}=\sqrt{1-2^{\,2r_h-2}}\,(d_i)^{r_h}\cdot\mathrm{rand}$$
    where $d_i$ represents the segment spacing after subdivision, $r_h$ represents the given roughness, and $\mathrm{rand}$ represents a random number obeying the standard normal distribution.
  • Square Step: As shown in Figure A1c,e, the scattering coefficient at the midpoint of each side of the square is calculated as
    $$h\left(x+\frac{s}{2},\; y\right)=\frac{h(x,y)+h(x+s,y)}{2}+r_{mi},\qquad h\left(x,\; y+\frac{s}{2}\right)=\frac{h(x,y)+h(x,y+s)}{2}+r_{mi}$$
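For reference, the following is a minimal Python sketch of the procedure described above, assuming a grid of side $2^n+1$ with the roughness in $(0, 1)$; the function name and default parameters are illustrative, and the square step follows the two-endpoint average given above, so shared edge midpoints are simply recomputed.

```python
import numpy as np

def diamond_square(n: int = 5, roughness: float = 0.7, seed: int = 0) -> np.ndarray:
    """Generate a (2**n + 1) x (2**n + 1) scattering-coefficient map with
    the Diamond-Square algorithm; roughness r_h is assumed to lie in (0, 1)."""
    rng = np.random.default_rng(seed)
    size = 2 ** n + 1
    h = np.zeros((size, size))
    # Initialization: plant random seeds at the four corner vertices
    h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.standard_normal(4)

    s = size - 1
    scale = np.sqrt(1.0 - 2.0 ** (2.0 * roughness - 2.0))
    while s > 1:
        half = s // 2
        d = s / (size - 1)                      # normalized segment spacing d_i
        amp = scale * d ** roughness            # deterministic part of r_mi
        # Diamond step: intersection of the diagonals of every square
        for x in range(0, size - 1, s):
            for y in range(0, size - 1, s):
                h[x + half, y + half] = (h[x, y] + h[x + s, y]
                                         + h[x, y + s] + h[x + s, y + s]) / 4 \
                                        + amp * rng.standard_normal()
        # Square step: midpoint of each side of every square
        # (shared edge midpoints are recomputed, which is acceptable for a sketch)
        for x in range(0, size - 1, s):
            for y in range(0, size - 1, s):
                h[x + half, y] = (h[x, y] + h[x + s, y]) / 2 + amp * rng.standard_normal()
                h[x, y + half] = (h[x, y] + h[x, y + s]) / 2 + amp * rng.standard_normal()
                h[x + half, y + s] = (h[x, y + s] + h[x + s, y + s]) / 2 + amp * rng.standard_normal()
                h[x + s, y + half] = (h[x + s, y] + h[x + s, y + s]) / 2 + amp * rng.standard_normal()
        s = half
    return h
```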

References

  1. Chen, V.C.; Martorella, M. Inverse Synthetic Aperture Radar Imaging: Principles, Algorithms and Applications; SciTech Publishing: Edison, NJ, USA, 2014; pp. 116–123.
  2. Chen, V.C.; Qian, S. Joint time-frequency transform for radar range Doppler imaging. IEEE Trans. Aerosp. Electron. Syst. 1998, 34, 486–499.
  3. Fu, J.; Xing, M.; Sun, G. Time-Frequency Reversion-Based Spectrum Analysis Method and its Applications in Radar Imaging. Remote Sens. 2021, 13, 600.
  4. Wang, Y.; Huang, X.; Zhang, Q.X. Rotation parameters estimation and cross-range scaling research for range instantaneous Doppler ISAR images. IEEE Sens. J. 2020, 20, 7010–7020.
  5. Chen, V.C.; Miceli, W.J. Time-varying spectral analysis for radar imaging of maneuvering targets. IET Proc. Radar Sonar Navig. 1998, 145, 262–268.
  6. Xing, M.D.; Wu, R.B.; Li, Y.C.; Bao, Z. New ISAR imaging algorithm based on modified Wigner Ville distribution. IET Proc. Radar Sonar Navig. 2009, 3, 70–80.
  7. Lv, Y.; Wang, Y.; Wu, Y.; Wang, H.; Qiu, L.; Zhao, H.; Sun, Y. A Novel Inverse Synthetic Aperture Radar Imaging Method for Maneuvering Targets Based on Modified Chirp Fourier Transform. Appl. Sci. 2018, 8, 2443.
  8. Li, J.; Ling, H. Application of adaptive chirplet representation for ISAR feature extraction from targets with rotating parts. IET Proc. Radar Sonar Navig. 2003, 150, 284–291.
  9. Li, W.C.; Wang, X.S.; Wang, G.Y. Scaled Radon-Wigner transform imaging and scaling of maneuvering target. IEEE Trans. Aerosp. Electron. Syst. 2010, 46, 2043–2051.
  10. Yiğit, E.; Demirci, Ş.; Özdemir, C. Clutter removal in millimeter wave GB-SAR images using OTSU's thresholding method. Int. J. Eng. Sci. 2022, 7, 43–48.
  11. Duysak, H.; Yiğit, E. Investigation of the performance of different wavelet-based fusions of SAR and optical images using Sentinel-1 and Sentinel-2 datasets. Int. J. Eng. Sci. 2022, 7, 81–90.
  12. Bayramoğlu, Z.; Uzar, M. Performance analysis of rule-based classification and deep learning method for automatic road extraction. Int. J. Eng. Sci. 2023, 8, 83–97.
  13. Zhao, S.Y.; Zhang, Z.H.; Zhang, T.; Guo, W.W.; Luo, Y. Transferable SAR Image Classification Crossing Different Satellites under Open Set Condition. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  14. Kang, L.; Sun, T.C.; Luo, Y.; Ni, J.C.; Zhang, Q. SAR Imaging based on Deep Unfolded Network with Approximated Observation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
  15. Wei, S.; Liang, J.; Wang, M.; Zeng, X.; Shi, J.; Zhang, X. CIST: An Improved ISAR Imaging Method Using Convolution Neural Network. Remote Sens. 2020, 12, 2641.
  16. Zhang, S.H.; Liu, Y.X.; Li, X. Computationally Efficient Sparse Aperture ISAR Autofocusing and Imaging Based on Fast ADMM. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8751–8765.
  17. Wei, S.J.; Liang, J.D.; Wang, M.; Shi, J.; Zhang, X.L.; Ran, J.H. AF-AMPNet: A Deep Learning Approach for Sparse Aperture ISAR Imaging and Autofocusing. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5206514.
  18. Liang, J.D.; Wei, S.J.; Wang, M.; Shi, J.; Zhang, X.L. Sparsity-driven ISAR imaging via hierarchical channel-mixed framework. IEEE Sens. J. 2021, 21, 19222–19235.
  19. Wang, M.; Wei, S.; Liang, J.; Zeng, X.; Wang, C.; Shi, J.; Zhang, X. RMIST-Net: Joint Range Migration and Sparse Reconstruction Network for 3-D mmW Imaging. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–17.
  20. Hu, C.; Wang, L.; Li, Z.; Zhu, D. Inverse synthetic aperture radar imaging using a fully convolutional neural network. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1203–1207.
  21. Yang, T.; Shi, H.Y.; Lang, M.Y.; Guo, J.W. ISAR imaging enhancement: Exploiting deep convolutional neural network for signal reconstruction. Int. J. Remote Sens. 2020, 41, 9447–9468.
  22. Wang, H.; Li, K.; Lu, X.; Zhang, Q.; Luo, Y.; Kang, L. ISAR Resolution Enhancement Method Exploiting Generative Adversarial Network. Remote Sens. 2022, 14, 1291.
  23. Yuan, Y.X.; Luo, Y.; Ni, J.C.; Zhang, Q. Inverse Synthetic Aperture Radar Imaging Using an Attention Generative Adversarial Network. Remote Sens. 2022, 14, 3509.
  24. Luo, Y.; Ni, J.C.; Zhang, Q. Synthetic aperture radar learning-imaging method based on data-driven technique and artificial intelligence. J. Radars 2020, 9, 107–122.
  25. Shi, H.Y.; Lin, Y.; Guo, J.W.; Liu, M.X. ISAR autofocus imaging algorithm for maneuvering targets based on deep learning and keystone transform. J. Syst. Eng. Electron. 2020, 31, 1178–1185.
  26. Qian, J.; Huang, S.Y.; Wang, L.; Bi, G.A.; Yang, X.B. Super-resolution ISAR imaging for maneuvering target based on deep-learning-assisted time-frequency analysis. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14.
  27. Eldar, Y.C.; Kuppinger, P.; Bolcskei, H. Block-Sparse Signals: Uncertainty Relations and Efficient Recovery. IEEE Trans. Signal Process. 2010, 58, 3042–3054.
  28. Gao, J.; Deng, B.; Qin, Y.; Wang, H.; Li, X. Enhanced radar imaging using a complex-valued convolutional neural network. IEEE Geosci. Remote Sens. Lett. 2019, 1, 35–39.
  29. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
  30. Miller, G.S.P. The definition and rendering of terrain map. In Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, 9–13 August 1986; pp. 39–48.
  31. Wang, Z.D.; Cun, X.D.; Bao, J.M.; Zhou, W.G.; Liu, J.; Li, H. Uformer: A general u-shaped Transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 19–24 June 2022; pp. 17683–17693.
  32. Isola, P.; Zhu, J.Y.; Zhou, T.H.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
  33. Zhao, M.; Wang, M.; Chen, J.; Rahardja, S. Perceptual Loss-Constrained Adversarial Autoencoder Networks for Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  34. Li, D.; Zhan, M.; Liu, H.; Liao, Y.; Liao, G. A robust translational motion compensation method for ISAR imaging based on keystone transform and fractional Fourier transform under low SNR environment. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 2140–2156.
  35. Li, L.; Yan, L.; Li, D.; Liu, H.Q.; Zhang, C.X. A novel ISAR imaging method for maneuvering target based on AM-QFM model under low SNR environment. IEEE Access 2019, 7, 140499–140512.
  36. Ahuja, N.; Lai, W.S.; Huang, J.B.; Yang, M.H. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5835–5843.
  37. Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Learning enriched features for real image restoration and enhancement. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 492–511.
  38. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A. Improved training of Wasserstein GANs. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 3–9 December 2017; pp. 5769–5779.
  39. The PASCAL VOC2012 Augmented Dataset. Available online: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/ (accessed on 11 August 2022).
  40. The ImageNet Dataset. Available online: https://image-net.org/ (accessed on 15 August 2022).
  41. Wang, Y.; Ling, H.; Chen, V.C. ISAR motion compensation via adaptive joint time–frequency technique. IEEE Trans. Aerosp. Electron. Syst. 1997, 34, 670–677.
Figure 1. The ISAR imaging geometry for maneuvering target.
Figure 2. ISAR imaging framework for maneuvering target.
Figure 3. The architecture of the proposed generator.
Figure 4. The architecture of the proposed discriminator.
Figure 5. Structure of LeWin Transformer blocks in series and the LeFF module.
Figure 6. How the feature map is divided into non-overlapping windows.
Figure 7. The generation process of pseudo-measured aircraft block targets.
Figure 8. Some examples of the pseudo-measured block target generation: (a) The original images in the dataset; (b) The segmented images; (c) The pseudo-measured targets.
Figure 9. An example of the training data: (a) The coordinate distribution of the scattering points; (b) The blurred ISAR imaging result by the RD method; (c) The ideal ISAR image.
Figure 10. The simulated aircraft model: (a) The coordinate distribution of the scattering points; (b) The ideal ISAR image.
Figure 11. Imaging results of the simulated aircraft model with the motion parameters and imaging conditions as listed in Table 1: (a–c) Imaging results by the RD method; (d–f) Imaging results of the proposed method.
Figure 12. Imaging results of Boeing-727 data under full aperture without noise by different methods: (a) RD algorithm; (b) STFT algorithm; (c) WVD algorithm; (d) SPWVD algorithm; (e) RWT algorithm; (f) The proposed method.
Figure 13. Imaging results of Boeing-727 data with SNR = −10 dB by different methods: (a) RD algorithm; (b) STFT algorithm; (c) WVD algorithm; (d) SPWVD algorithm; (e) RWT algorithm; (f) The proposed method.
Figure 14. Imaging results of Boeing-727 data under 40% sampling ratio by different methods: (a) The down-sampled echo signal; (b) RD algorithm; (c) STFT algorithm; (d) SPWVD algorithm; (e) RWT algorithm; (f) The proposed method.
Figure 15. An example of the training data: (a) The coordinate distribution of the scattering points on the block target; (b) The blurred ISAR image as the input of the UFGAN; (c) The ideal image as the label of the UFGAN.
Figure 16. A block target sample from the test set: (a) The coordinate distribution of the scattering points; (b) The ideal ISAR image.
Figure 17. Imaging results of the block target with the motion parameters and imaging conditions as listed in Table 2: (a–d) Imaging results by the RD method; (e–h) Imaging results by the proposed method.
Figure 18. The imaging result of the original echo by the RD algorithm.
Figure 19. Imaging results of Yak-42 measured data under full aperture without noise by different methods: (a) Echo after translational motion compensation; (b) RD algorithm; (c) STFT algorithm; (d) SPWVD algorithm; (e) RWT algorithm; (f) The proposed method.
Figure 20. Imaging results of Yak-42 under 50% sampling ratio with SNR = 0 dB by different methods: (a) Echo after translational motion compensation; (b) RD algorithm; (c) STFT algorithm; (d) SPWVD algorithm; (e) RWT algorithm; (f) The proposed method.
Figure 21. Imaging results of Yak-42 under 25% sampling ratio with SNR = −10 dB by different methods: (a) Echo after translational motion compensation; (b) RD algorithm; (c) STFT algorithm; (d) SPWVD algorithm; (e) RWT algorithm; (f) The proposed method.
Figure 22. Imaging results of Yak-42 by different methods: (a,d,g) RD algorithm; (b,e,h) The method in [22]; (c,f,i) The proposed method.
Figure 23. Performance curves of IE versus SNR and sampling ratio of six different methods: (a) Performance curves of IE versus SNR; (b) Performance curves of IE versus sampling ratio.
Table 1. The motion parameters and imaging conditions of the three experiments.

Parameters and Conditions       Experiment 1   Experiment 2   Experiment 3
angular velocity (rad/s)        0.08           0.06           0.03
angular acceleration (rad/s²)   0.16           0.12           0.12
angular jerk (rad/s³)           0.16           0.48           0.48
relative acceleration ratio     2              2              4
relative jerk ratio             1              4              4
SNR                             —              −3 dB          −7 dB
sampling rate                   100%           70%            30%
Table 2. The evaluation indicators of different methods under SNR = −10 dB.

Method            IE       TCR/dB    Imaging Time/s
RD                5.7428   40.1519   0.0172
STFT              5.5962   48.6892   2.9459
WVD               4.3157   57.6743   10.2687
SPWVD             4.5298   46.4851   41.9245
RWT               3.4477   64.5826   16.7458
Proposed method   0.8452   87.7593   0.1903
Table 3. The evaluation indicators of different methods under sparse aperture.

Method            IE       TCR/dB    Imaging Time/s
RD                4.7843   43.4688   0.0154
STFT              4.8854   51.0240   2.6851
SPWVD             3.3482   57.9647   38.4186
RWT               1.5861   72.4875   15.2943
Proposed method   0.7585   85.4582   0.1884
Table 4. The motion parameters and imaging conditions of the four experiments.

Parameters and Conditions       Exp.1   Exp.2   Exp.3   Exp.4
angular velocity (rad/s)        0.02    0.04    0.06    0.08
angular acceleration (rad/s²)   0.08    0.04    0.12    0.24
angular jerk (rad/s³)           0.08    0.16    0.36    0.48
relative acceleration ratio     4       1       2       3
relative jerk ratio             1       4       3       2
SNR                             —       0 dB    −8 dB   −12 dB
sampling ratio                  100%    45%     25%     15%
Table 5. The evaluation indicators of different methods under sparse aperture.

Experiment                                  Method            IE       SSIM     TCR/dB    Imaging Time/s
Exp.1: 50% sampling ratio, SNR = 0 dB       RD                4.8816   0.1256   56.8569   0.014
                                            STFT              4.6124   0.1921   61.5164   2.8795
                                            RWT               4.1125   0.0845   55.5218   16.2416
                                            SPWVD             3.3162   0.0917   67.5682   38.1458
                                            Proposed method   2.4356   0.8163   91.2041   0.195
Exp.2: 25% sampling ratio, SNR = −10 dB     RD                6.0847   0.0933   41.2958   0.0015
                                            STFT              5.8726   0.1284   44.5291   2.8851
                                            RWT               4.5482   0.0684   47.5126   16.2541
                                            SPWVD             3.7159   0.0761   58.1259   37.1567
                                            Proposed method   2.5214   0.8047   89.4582   0.186