Radar coincidence imaging in the presence of target-motion-induced error
1 March 2014
Dongze Li, Xiang Li, Yongqiang Cheng, Yuliang Qin, Hongqiang Wang
Abstract
Radar coincidence imaging (RCI) is a new instantaneous imaging technique that does not depend on Doppler frequency for resolution. Such an imaging method does not require relative target motion and has an imaging interval even shorter than a pulse width. Its potential advantages in processing both relatively stationary and maneuvering targets make RCI a supplementary imaging approach to conventional range Doppler imaging methods. Simulation experiments have preliminarily demonstrated the feasibility of the RCI technique. However, further investigation shows that an imaging error arises for moving targets and, moreover, that it is closely related to the target scattering map. This paper analyzes the target-motion-induced error and points out that three factors are involved: target velocity, target scattering map, and the time-space independence of the detecting signals. The current image-reconstruction algorithms of RCI, which are based on the least-square (LS) principle, are found to be highly sensitive to the motion-induced error and will be limited in practical imaging scenarios. Accordingly, a compressive sensing (CS) recovery algorithm is employed, which uses a sparsity restriction to diminish the effect of the motion-induced error on image reconstruction. Simulations are designed to illustrate the three factors of the target-motion-induced error. The imaging performance of the LS and CS methods in RCI image recovery is compared as well.

1.

Introduction

Various radar imaging techniques based on the range Doppler (RD) principle derive target images by measuring time delay and Doppler frequency.1 High resolution is generally obtained with a large bandwidth and a large aspect-angle variation. In particular, for high azimuth resolution, there are two typical approaches to increase the aspect-angle integration. One is the use of a real-aperture antenna array, but the number of antennas commensurate with a desired high azimuth resolution is quite large, and the maximum resolution is too complex to realize given hardware and physical-environment limits.2,3 The other is synthetic aperture radar imaging, which achieves aspect-angle integration via the relative motion between targets and radars.1,4 This method requires the relative motion to remain uniform during the imaging time. Unfortunately, this condition can hardly be satisfied in real imaging scenarios where many noncooperative targets and fast maneuvers exist.5 The noncooperative motion introduces a time-varying Doppler frequency into the return signals, which widens the frequency spectrum and can blur the images beyond recognition. Therefore, relative motion between targets and radars indeed provides the desired resolution, but the resultant nonuniform spatial sampling produces blurred images that are difficult to refocus even when various motion-compensation algorithms are applied.5,6

In consideration of the aforementioned reasons, our recent work in Ref. 7 developed an instantaneous imaging technique, radar coincidence imaging (RCI), which was motivated by classical coincidence imaging in optical systems. Classical coincidence imaging is a nonlocal imaging method (images are produced in the channel without objects), where the target pattern is obtained via the spatial fluctuations of the signal on the imaging plane.8–10 The extension of such an imaging formalism to microwave systems has two potential advantages: (1) it does not depend on a Doppler gradient for resolution, so targets that are relatively stationary with respect to the radar can be well imaged, and (2) the imaging interval can be shorter than a pulse width, so the impact of noncooperative target motion is considerably decreased. The key property of RCI is to produce time-space-independent radar signals in the detecting area, which makes targets or scatterers at different locations reflect mutually independent echoes. Therefore, targets or scatterers within a radar beam can be resolved via the independent waveforms of their echoes instead of time-delay and Doppler analysis. Targets in the same range bin remain resolvable even if their Doppler frequencies are equal. RCI presents potential advantages in processing both stationary and maneuvering targets, and could thus be a supplementary imaging approach to conventional range Doppler imaging methods.

The experiments in Ref. 7, using simulated data, have preliminarily shown the feasibility of RCI for imaging both stationary and maneuvering targets, as shown in Figs. 1(a) and 1(b). This experiment, which derives satisfactory results, employs a simple target model consisting of several scattering centers. In our further investigation, however, an obvious contrast arises when we employ a more complex plane target model, as shown in Figs. 1(c) and 1(d). The imagery quality of the plane model remains satisfactory when it is stationary, whereas it is degraded beyond recognition when the target moves. Thus, the performance of the RCI technique is related not only to the target movement but also to the target scattering map. Obviously, the current image-reconstruction algorithms of RCI will be quite limited in practical imaging scenarios, where moving targets and various scattering maps generally exist. In consideration of this problem, this paper investigates the reason for the degraded imagery quality in RCI, analyzes the limitation of the current recovery algorithms, and finally provides an applicable approach for RCI in the presence of motion-induced error.

Fig. 1

Radar coincidence imaging (RCI) example. (a) Imaging result of a four-point target when it is stationary. (b) Imaging result of a four-point target when it moves noncooperatively. (c) Imaging result of a plane target when it is stationary. (d) Imaging result of a plane target when it moves noncooperatively.


The remaining sections are organized as follows. Section 2 gives the fundamental analysis of RCI. Section 3 is devoted to the analysis of the target-motion-induced error and the image reconstruction. Along with simulation results, the three factors of the motion-induced error are illustrated and the imaging performance of the least-square (LS) method and the compressive sensing (CS) algorithm is discussed in Sec. 4. Section 5 concludes this paper.

2.

Basic Analysis of Radar Coincidence Imaging

The essential requirement to perform RCI is to produce time-space-independent detecting signals.7 This means that the detecting signals at different positions or at different time instants should be independent of each other. In classical coincidence imaging of optical systems, this requirement can be easily met with a fully incoherent thermal source that contains numerous particle subsources emitting fields independently and randomly.9 To satisfy such a requirement in microwave radar systems, a multitransmitting configuration is necessary, based on which detecting signals exhibiting spatial diversity can be produced.

Coincidence imaging basically depends on the coincidence processing of two channel signals, i.e., the detecting signal and the received signal. As depicted in Fig. 2, there is an array of N transmitters and one receiver. The transmitters are controlled to emit group-orthogonal and time-independent signals. Then detecting signals with time-space independence are generated in the detecting area. In other words, the wave front presents great spatial incoherence at each time instant. Illuminated by such detecting signals, the scatterers within a radar beam reflect echoes whose waveforms are highly different from each other according to their respective positions. As a result, the coincidence processing can extract the echo component of every scatterer from the received signal by measuring their independent waveforms and can finally reconstruct the target image.

Fig. 2

Geometry of RCI.


Generally, the target location can first be estimated using detection and localization techniques.7,8 Then we define a target area centered at the estimated location. It is assumed that the target area is large enough to cover the entire target or targets within the radar beam. Therefore, target images will be obtained if we can reconstruct the scene of the target area. Herein, the target area or imaging area is denoted as I. A local coordinate system is established at the target-area center, as shown in Fig. 2. The target area is discretized into a grid consisting of L small rectangles of uniform size and shape. Each small rectangle is a grid cell and is approximated by its own center. Thus, the discrete target area is expressed as I = {r_1, r_2, …, r_L}, where r_l is the position vector of the l'th grid-cell center. R_n(t) and R_r(t) denote the position vectors of the n'th transmitter and the receiver, respectively. St_n(t) is the transmitted signal of the n'th transmitter. Radar imaging basically employs the start-stop approximation, which assumes that the target is stationary while a radar pulse illuminates it because individual pulses are so short. Since the imaging interval of RCI is no more than a pulse width, the targets are assumed to be stationary during the data acquisition time. Hence, R_r(t) ≈ R_r(t_ref) = R_r and R_n(t) ≈ R_n(t_ref) = R_n, where t_ref is the reference time of estimating the target center.

Radar signals distributed in the target area are denoted as S_I(r,t), where r is the position vector of an arbitrary point within I. Then, S_I(r,t) is referred to as the detecting signal, and {S_I(r,t), r ∈ I} is the wave front at time t. The detecting signal can be expressed as

Eq. (1)

$$S_I(\mathbf{r},t)=\sum_{n=1}^{N}St_n\!\left(t-\frac{|\mathbf{r}-\mathbf{R}_n|}{c}\right).$$

Then, the autocorrelation of SI(r,t) is given to present the spatial characteristic of the detecting signal.

Eq. (2)

$$\begin{aligned}
R_I(\mathbf{r},\mathbf{r}';\tau,\tau') &= \int S_I(\mathbf{r},t-\tau)\,S_I^{*}(\mathbf{r}',t-\tau')\,dt\\
&=\sum_{m=1}^{N}\sum_{n=1}^{N}\int St_m\!\left(t-\frac{|\mathbf{r}-\mathbf{R}_m|}{c}-\tau\right)St_n^{*}\!\left(t-\frac{|\mathbf{r}'-\mathbf{R}_n|}{c}-\tau'\right)dt\\
&=\sum_{m=1}^{N}\sum_{n=1}^{N}R_{Tran}\!\left(m,n;\,\frac{|\mathbf{r}-\mathbf{R}_m|}{c}+\tau,\;\frac{|\mathbf{r}'-\mathbf{R}_n|}{c}+\tau'\right),
\end{aligned}$$
where R_Tran(m,n;τ_m,τ_n) = ∫ St_m(t−τ_m) St_n^*(t−τ_n) dt is the cross-correlation of the transmitted signals.

Note that the ideal transmitted waveform of the RCI is group-orthogonal and time-independent,7 which is expressed as

Eq. (3)

$$R_{Tran}(m,n;\tau_m,\tau_n)=\delta(\tau_m-\tau_n,\,m-n).$$

Substituting Eq. (3) in Eq. (2), R_I(r,r′;τ,τ′) becomes

Eq. (4)

$$R_I(\mathbf{r},\mathbf{r}';\tau,\tau')=\sum_{n=1}^{N}\delta\!\left[\frac{|\mathbf{r}-\mathbf{R}_n|-|\mathbf{r}'-\mathbf{R}_n|}{c}-(\tau-\tau')\right].$$

Obviously, the maximum of R_I(r,r′;τ,τ′) is N. Reference 7 has demonstrated that, on the condition of N>2, the right side of Eq. (4) reaches its maximum only when r=r′ and τ=τ′. If the antenna number is larger than 2, for example N=6, R_I(r,r′;τ,τ′) reaches its peak of 6 at r=r′, τ=τ′, and decreases sharply in the other cases. Thus, R_I(r,r′;τ,τ′) can approximately be regarded as a delta function.

Eq. (5)

$$R_I(\mathbf{r},\mathbf{r}';\tau,\tau')\approx N\,\delta(\mathbf{r}-\mathbf{r}',\,\tau-\tau').$$

Therefore, the detecting signals present the time-space-independent characteristic, as denoted in Eq. (5), on the condition that the N (N>2) transmitted signals are group-orthogonal and time-independent. Note that the noise signals naturally satisfy such requirements. For instance, the transmitted signal can be produced by imposing zero-mean Gaussian-noise modulation on amplitude, expressed as

Eq. (6)

$$St_n(t)=A_n(t)\exp[j(2\pi f t+\varphi)]\cdot\mathrm{rect}\!\left(\frac{t}{T_p}\right).$$

Herein, {A_n(t), 1 ≤ n ≤ N} denotes mutually independent stochastic processes, whose cross-correlation function is R_A(n,m;τ_1,τ_2) = δ(n−m, τ_1−τ_2). Then, the cross-correlation of the transmitted signals can be derived as follows:

Eq. (7)

$$\begin{aligned}
\lim_{T_p\to\infty}\frac{1}{T_p}\int_{-T_p/2}^{T_p/2} St_m(t-\tau_m)\,St_n^{*}(t-\tau_n)\,dt
&=\lim_{T_p\to\infty}\frac{1}{T_p}\int_{-T_p/2}^{T_p/2} A_m(t-\tau_m)\,A_n^{*}(t-\tau_n)\exp[j2\pi f(\tau_n-\tau_m)]\,dt\\
&=\exp[j2\pi f(\tau_n-\tau_m)]\cdot E[A_m(t-\tau_m)A_n^{*}(t-\tau_n)]\\
&=\exp[j2\pi f(\tau_n-\tau_m)]\cdot R_A(m,n;\tau_m,\tau_n)\\
&=\delta(\tau_m-\tau_n,\,m-n).
\end{aligned}$$

Thus, the transmitted signals denoted in Eq. (6) approximately possess time-independence and orthogonality, and will therefore produce detecting signals with the time-space-independent feature in the target area. Based on Eq. (5), the spatial distribution of the wave front is

Eq. (8)

$$E[S_I(\mathbf{r},t)\,S_I^{*}(\mathbf{r}',t)]=R_I(\mathbf{r},\mathbf{r}';0,0)\approx N\,\delta(\mathbf{r}-\mathbf{r}').$$

It indicates that detecting signals of different locations are independent, and the wave front shows spatial independence at each instant.
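As a rough numerical check of the waveform requirements in Eqs. (6) and (7), the short Python sketch below generates noise-amplitude-modulated pulses for N = 6 transmitters and verifies that they are approximately group-orthogonal and time-independent. The sampling rate, carrier frequency, and pulse width are illustrative assumptions rather than the exact system parameters used in the experiments later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed for this sketch, not the paper's exact setup).
fs = 2e9                 # sampling frequency, Hz
f0 = 9.5e9               # carrier frequency, Hz
Tp = 5e-6                # pulse width, s
N = 6                    # number of transmitters
t = np.arange(0.0, Tp, 1.0 / fs)
K = t.size

# Eq. (6): independent zero-mean Gaussian amplitude modulation per transmitter.
A = rng.standard_normal((N, K))
St = A * np.exp(1j * 2.0 * np.pi * f0 * t)

# Group orthogonality, Eq. (7): the normalized Gram matrix of the N waveforms
# should be close to the identity.
G = St @ St.conj().T
G = G / np.sqrt(np.outer(np.diag(G).real, np.diag(G).real))
print("largest off-diagonal correlation:", round(np.abs(G - np.eye(N)).max(), 3))

# Time-independence: the autocorrelation of one waveform collapses after a
# single sample lag (roughly 1/sqrt(K) for white noise modulation).
lag1 = np.abs(np.vdot(St[0, :-1], St[0, 1:])) / np.vdot(St[0], St[0]).real
print("normalized autocorrelation at a one-sample lag:", round(lag1, 3))
```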

Then, the received signal Sr(t) can be expressed as the superposition of SI(r,t).

Eq. (9)

$$S_r(t)=\sum_{l=1}^{L}\sigma_l\,S_I\!\left(\mathbf{r}_l,\,t-\frac{|\mathbf{r}_l-\mathbf{R}_r|}{c}\right),$$
where σ_l is the scattering coefficient of the l'th grid cell; for cells without target scattering centers, σ_l = 0. For the sake of simplicity, the coincidence imaging formalism needs a reference signal,10 which herein can be simply constructed from S_I(r,t) as follows:

Eq. (10)

$$S(\mathbf{r},t)=S_I\!\left(\mathbf{r},\,t-\frac{|\mathbf{r}-\mathbf{R}_r|}{c}\right).$$

Obviously, the reference signal S(r,t) is simply S_I(r,t) with an additional time delay induced by the propagation to the receiver. Then, Eq. (9) becomes

Eq. (11)

$$S_r(t)=\sum_{l=1}^{L}\sigma_l\,S(\mathbf{r}_l,t).$$

Finally, the scattering coefficient at an arbitrary grid cell r_x can be explicitly obtained via the correlation between the received signal and S(r_x,t).

Eq. (12)

$$\begin{aligned}
\int S_r(t)\,S^{*}(\mathbf{r}_x,t)\,dt
&=\int\left[\sum_{l=1}^{L}\sigma_l\, S_I\!\left(\mathbf{r}_l,\,t-\frac{|\mathbf{r}_l-\mathbf{R}_r|}{c}\right)\right] S_I^{*}\!\left(\mathbf{r}_x,\,t-\frac{|\mathbf{r}_x-\mathbf{R}_r|}{c}\right)dt\\
&=\sum_{l=1}^{L}\sigma_l\, R_I\!\left(\mathbf{r}_l,\mathbf{r}_x;\,\frac{|\mathbf{r}_l-\mathbf{R}_r|}{c},\,\frac{|\mathbf{r}_x-\mathbf{R}_r|}{c}\right)\\
&\approx\sum_{l=1}^{L}\sigma_l\cdot N\,\delta(\mathbf{r}_l-\mathbf{r}_x)=N\cdot\sigma_x.
\end{aligned}$$

That is,

Eq. (13)

$$\sigma_x\approx\frac{1}{N}\int S_r(t)\,S^{*}(\mathbf{r}_x,t)\,dt.$$

Since the detecting signal or S(r,t) can be derived in advance, the scattering coefficient of every imaging cell will be extracted from the received signal as denoted in Eq. (13). Thus, RCI can obtain the target image when the transmitted signals generate the time-space-independent detecting signals by satisfying the condition in Eq. (3).
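To make the correlation processing of Eqs. (10)–(13) concrete, the following sketch builds a crude one-dimensional toy: noise-modulated waveforms, reference signals for a short row of grid cells, a received signal from two point scatterers, and the correlation estimate of Eq. (13). Circular sample shifts stand in for propagation delays, and the geometry and the N·K normalization are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

c, fs, f0 = 3e8, 2e9, 9.5e9
Tp = 2e-6
N = 6                                     # transmitters
t = np.arange(0.0, Tp, 1.0 / fs)
K = t.size

# One-dimensional toy grid along the range axis; cells are 1.5 m apart so the
# reference signals are shifted by several samples from cell to cell.
L = 16
cells = 1000.0 + 1.5 * np.arange(L)       # ranges of the grid-cell centers, m
tx = np.linspace(-1.5, 1.5, N)            # transmitter x-positions, m (receiver at x = 0)
d0 = 2.0 * cells[0]                       # common path length removed for convenience

# Eq. (6): noise-modulated transmitted waveforms.
St = rng.standard_normal((N, K)) * np.exp(1j * 2.0 * np.pi * f0 * t)

def reference(l):
    """Eq. (10): detecting signal of cell l, delayed again to the receiver
    (circular shift used as a crude delay model)."""
    s = np.zeros(K, dtype=complex)
    for n in range(N):
        path = np.hypot(cells[l], tx[n]) + cells[l]          # transmit + receive path
        s += np.roll(St[n], int(round((path - d0) / c * fs)))
    return s

S_ref = np.array([reference(l) for l in range(L)])           # L x K

sigma = np.zeros(L)
sigma[[3, 9]] = 1.0                                          # two point scatterers
Sr = sigma @ S_ref                                           # Eq. (11)

# Eq. (13): correlation (coincidence) recovery; the discrete normalization N*K
# makes a unit scatterer recover to roughly 1.
est = np.abs(S_ref.conj() @ Sr) / (N * K)
print(np.round(est, 2))                                      # peaks at cells 3 and 9
```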

Based on the analysis above, the essence of RCI is straightforward, and its peculiarity can be clarified by comparison with conventional radar imaging methods based on the RD principle. RD imaging resolves targets by extracting the differences in time delay and Doppler gradient of their echoes. With respect to RCI, the superposition of the time-independent and group-orthogonal transmitted signals makes the wave front on a target exhibit such marked spatial incoherence that each scatterer is illuminated by mutually independent signals. As a result, targets or scatterers within a beam reflect echoes with independent waveforms associated with their respective locations. Therefore, target echoes do not differ from each other merely in time delay or Doppler frequency; above all, their waveforms are highly different, which provides alternative information for distinguishing them. Especially, this resolvable characteristic does not require aspect-angle integration and can be achieved within a pulse-width interval. Consequently, due to the very short imaging time, the influence of noncooperative target motion on imagery quality is considerably reduced. In addition, RCI does not require the target to move to cooperate with data acquisition. Therefore, targets can be well imaged whether they are relatively stationary or have noncooperative motions.

The excellent point-to-point relationship in Eq. (13), which means a high resolution, depends on a remarkable time-space independence of the detecting signals. It requires that the transmitted signals have the perfect time-independence presented in Eq. (3). However, complete time-independence is almost impossible in practice, and highly time-independent signals impose particularly demanding requirements on radar systems. Due to limited system conditions, the inadequate time-independence of the transmitted signals will thus degrade the time-space-independent characteristic of the detecting signals. That is to say, the desired point-to-point relationship denoted in Eq. (13) is difficult to realize in a microwave radar system. Consequently, image recovery that simply depends on the correlation between S_r(t) and S(r,t) cannot generate a resolution for RCI as high as that in the classical case [the correlation of Eq. (13) can provide a high resolution for classical coincidence imaging because thermal optical signals, whose natural randomness leads to marked time-independence, produce detecting signals of sharp spatial independence].11

To improve the resolution, the parameterized method is employed for the image reconstruction of RCI, which is less constrained by the signal time-independence. This method structures a coincidence imaging equation based on the relationship between the received signal and the reference signal. According to Eq. (11), S_r(t_k) = Σ_{l=1}^{L} σ_l S(r_l,t_k), where t_k is the time sample. Then, an imaging equation can be given as follows:

Eq. (14)

$$\mathbf{S}_r=\mathbf{S}\cdot\boldsymbol{\sigma}:\qquad
\begin{bmatrix}S_r(t_1)\\ S_r(t_2)\\ \vdots\\ S_r(t_K)\end{bmatrix}=
\begin{bmatrix}
S(\mathbf{r}_1,t_1) & S(\mathbf{r}_2,t_1) & \cdots & S(\mathbf{r}_L,t_1)\\
S(\mathbf{r}_1,t_2) & S(\mathbf{r}_2,t_2) & \cdots & S(\mathbf{r}_L,t_2)\\
\vdots & \vdots & \ddots & \vdots\\
S(\mathbf{r}_1,t_K) & S(\mathbf{r}_2,t_K) & \cdots & S(\mathbf{r}_L,t_K)
\end{bmatrix}\cdot
\begin{bmatrix}\sigma_1\\ \sigma_2\\ \vdots\\ \sigma_L\end{bmatrix},$$
where S is the reference signal matrix, Sr is the vector of the received signal, σ is the unknown scattering coefficient vector, and K is the number of time samples.

Obviously, the imaging equation has a unique solution on the condition that S is nonsingular. Thus, the number of time samples should first be no less than the number of imaging cells, i.e., K ≥ L, which is generally tractable to satisfy. For the sake of simplicity, we let K = L, leading to a square matrix S. As denoted in Eq. (10), the columns and the rows of S fundamentally represent the detecting signals at different positions and at different instants, respectively. Thus, the linear independence of the matrix S corresponds to the time-space independence of the detecting signals. Detecting signals of high independence indicate a perfectly nonsingular S and vice versa. This suggests that the row rank and the column rank of S are determined by the independence degree of the detecting signals in time and space. Therefore, the independence characteristic denoted in Eq. (5) ensures the incoherence property of the matrix S_{L×L}, based on which σ can be uniquely recovered.
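The uniqueness argument can be illustrated with a minimal numerical sketch of the imaging equation (14): a toy square reference matrix S is assembled from shifted copies of one noise waveform (a simplified stand-in for Eq. (10)), received-signal samples are formed from a sparse scene, and σ is recovered by solving the linear system. The construction of S, the shift spacing, and the scene are assumptions chosen only to show that a well-conditioned S gives a unique recovery.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy reference-signal matrix: each grid cell's reference signal is the same
# composite noise waveform, circularly shifted by a cell-dependent delay.
L, K = 64, 4096
composite = rng.standard_normal(K) + 1j * rng.standard_normal(K)
shifts = 8 * np.arange(L)                         # several samples between cells
ref = np.array([np.roll(composite, s) for s in shifts])       # L x K

# Draw L time samples to form the square matrix S (the K = L case discussed above).
picks = rng.choice(K, size=L, replace=False)
S = ref[:, picks].T                               # rows: time samples, columns: cells

# Synthetic sparse scene and its received-signal samples, Eq. (14).
sigma = np.zeros(L, dtype=complex)
sigma[[5, 20, 40]] = [1.0, 0.7, 0.5]
Sr = S @ sigma

print("cond(S) =", round(float(np.linalg.cond(S)), 1))
sigma_hat = np.linalg.solve(S, Sr)                # unique LS/direct solution
print("max recovery error:", float(np.abs(sigma_hat - sigma).max()))   # ~ numerical round-off
```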

An example is given to look into the time-space independence of the detecting signals, and also the incoherence of the matrix S. As mentioned previously, the detecting signals will present the time-space-independent characteristic when more than two transmitters emit orthogonal and time-independent signals. Thus, the example employs the signals given in Eq. (6) and uses an N-transmitter one-receiver linear array, which consists of six antenna elements spaced 0.5 m apart. The grid cell is 0.5 m × 0.5 m. The carrier frequency is 9.5 GHz, the pulse width is 50 μs, the bandwidth is 1 GHz, and the sampling frequency is 2 GHz. Then, the detecting signals of an area 5 km away from the array center are given in Fig. 3. For the sake of comparison, the detecting signals produced by conventional linear frequency modulated (LFM) signals are also illustrated.

Fig. 3

Example of the detecting signals. (a) Wave front produced by LFM signal. (b) Wave front of RCI. (c) Wave front of RCI when the grid cell is reduced. (d) Spatial correlation of the wave front produced by LFM signal. (e) Spatial correlation of the RCI wave front. (f) Spatial correlation of the RCI wave front when the grid cell is reduced.


As depicted in Figs. 3(a) and 3(b), the RCI wave front fluctuates sharply in space, whereas that produced by the LFM signals shows high spatial correlation. It indicates that the RCI technique can produce detecting signals with the time-space-independent characteristic by transmitting group-orthogonal and time-independent signals. Then, we pay attention to the incoherence of the matrix S, which is highly related to these detecting signals. Herein, we measure the matrix incoherence via the condition number, which is defined as cond(S) = ‖S‖·‖S⁻¹‖. cond(S) is lower when the matrix S has better incoherence and vice versa. As shown in Fig. 3, the detecting signals of RCI give a condition number of 542, leading to a nonsingular S, whereas it is 9.87×10^16 for the LFM signals, for which S is singular.

Then, a subsequent question is whether the matrix incoherence is entirely determined by the detecting signals. The matrix incoherence or cond(S) represents the linear independence of the rows/columns of S. Take the first two columns of S, for instance, i.e., s_1 and s_2. Their correlation is s_1^H s_2 = Σ_{k=1}^{K} S^*(r_1,t_k) S(r_2,t_k), which basically is the correlation between the detecting signals at r_1 and r_2. Hence, the incoherence of the matrix S can be changed in two ways. If the detecting signals are fixed according to the transmitted signals, which means S(r,t) is determined, then s_1^H s_2 is only related to the distance between r_1 and r_2. A bigger grid cell results in a longer distance between grid cells, which leads to higher incoherence of S and lower cond(S). On the other side, if the grid or the distance between grid cells is fixed, which means the difference between r_1 and r_2 is fixed, then s_1^H s_2 is only related to the spatial independence of the detecting signals. As mentioned earlier, the time-space-independent characteristic of the detecting signals is mainly related to the time-independence of the transmitted signals. Herein, the transmitted signals are noise signals. Noise with a larger bandwidth generates transmitted signals of higher time-independence, leading to detecting signals with better spatial independence.

Therefore, the incoherence of the matrix S is related to two factors: the grid and the noise bandwidth. Hence, reducing either the grid-cell size or the bandwidth degrades the matrix incoherence and increases cond(S). We give an example to illustrate the wave front of RCI when the grid-cell size is decreased to 5 cm, as shown in Fig. 3. The matrix S remains nonsingular, but cond(S) is increased to 955. Likewise, cond(S) will rise when the noise bandwidth is decreased.
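The trend can be mimicked with a small toy experiment: below, the columns of S are circular shifts of one band-limited noise waveform, and shrinking the shift between adjacent cells (the analog of a smaller grid cell or, equivalently, of a narrower bandwidth relative to the cell spacing) weakens the incoherence and raises the condition number. The construction is an assumption for illustration, not the paper's wave-front computation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Band-limited noise: correlation length of roughly 1/(2*cutoff) samples.
L, K, cutoff = 32, 2048, 0.1
spectrum = np.fft.fft(rng.standard_normal(K)) * (np.abs(np.fft.fftfreq(K)) < cutoff)
noise = np.fft.ifft(spectrum)

picks = rng.choice(K, size=L, replace=False)
for step in (32, 8, 2):                  # delay (in samples) between adjacent cells
    ref = np.array([np.roll(noise, step * l) for l in range(L)])
    S = ref[:, picks].T
    print(f"delay step {step:2d} samples -> cond(S) = {np.linalg.cond(S):.1e}")
```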

In conclusion, target images will be uniquely recovered as long as the detecting signals ensure a full-rank matrix S.

After the analysis above, we now summarize the imaging scheme as follows:

  • Step 1: Estimate the target center.

  • Step 2: Discretize the imaging area to form I = {r_l, l = 1, …, L}.

  • Step 3: Compute the reference signals {S(r_l,t), l = 1, …, L}.

  • Step 4: Draw L samples from {S(r_l,t), l = 1, …, L} in the time domain to form S_{L×L}.

  • Step 5: Draw L samples from S_r(t) at the same sample points to form S_r.

  • Step 6: Solve the equation Sr=S·σ.

3.

Image Reconstruction in the Presence of the Target-Motion-Induced Error

Based on the analysis above, the image reconstruction is accomplished by solving the imaging equation S_r = S·σ. Since the detecting signals generally ensure a nonsingular S, a target image can often be uniquely recovered using the LS method. However, the imaging equation is based on the assumption that the targets are stationary during the data acquisition time. Thus, errors are introduced into the signal model of moving-target imaging because the motion is neglected.

Before analyzing the target-motion-induced errors, we first emphasize a difference between the received signal and the detecting signal. The former is actually derived via the target scattering in practice. The latter is precomputed artificially and, more exactly, should be referred to as the computational detecting signal. In terms of stationary targets, the imaging plane is fixed and its relative positions with respect to the antennas are constant. Thus, the computational detecting signal accords with the actual signals in the imaging area and can exactly express the received signal as S_r(t) = Σ_l σ_l S(r_l,t). In other words, the computational detecting signal matches the received signal in this case. On the other side, for moving targets, the imaging area moves synchronously and the position vectors of the antennas are also time-varying, as shown in Fig. 4. Then, the actual signals in the imaging plane certainly move with the target area and should be computed according to the time-varying positions. Thus, the computational detecting signal, which is obtained by replacing the time-varying R_r(t) and R_n(t) with the constants R_r and R_n, will differ from the actual signal. As a result, the superposition of S_I(r,t) in Eq. (9) cannot accurately express the received signal, i.e., the computational detecting signal mismatches the received signal.

Fig. 4

The imaging area of moving targets.


3.1.

Analysis of the Target-Motion-Induced Error

First, in the signal model without errors, the true reference signal and the received signal should be rewritten as

Eq. (15)

$$S(\mathbf{r},t)=S_I\!\left[\mathbf{r},\,t-\frac{|\mathbf{r}-\mathbf{R}_r(t)|}{c}\right]=\sum_{n=1}^{N}St_n\!\left[t-\frac{|\mathbf{r}-\mathbf{R}_n(t)|+|\mathbf{r}-\mathbf{R}_r(t)|}{c}\right],$$

Eq. (16)

$$S_r(t)=\sum_{l=1}^{L}\sigma_l\,S_I\!\left[\mathbf{r}_l,\,t-\frac{|\mathbf{r}_l-\mathbf{R}_r(t)|}{c}\right]=\sum_{l=1}^{L}\sum_{n=1}^{N}\sigma_l\,St_n\!\left[t-\frac{|\mathbf{r}_l-\mathbf{R}_n(t)|+|\mathbf{r}_l-\mathbf{R}_r(t)|}{c}\right].$$

We decompose the reference signal into two parts.

Eq. (17)

$$S(\mathbf{r},t)=\hat{S}(\mathbf{r},t)+\Delta S(\mathbf{r},t),$$
$$\hat{S}(\mathbf{r},t)=\sum_{n=1}^{N}St_n\!\left[t-\frac{|\mathbf{r}-\mathbf{R}_n(t_{ref})|+|\mathbf{r}-\mathbf{R}_r(t_{ref})|}{c}\right],$$
where ΔS(r,t) = S(r,t) − Ŝ(r,t). Finally, the true coincidence imaging equation can be decomposed as follows:

Eq. (18)

$$\mathbf{S}_r=\mathbf{S}\cdot\boldsymbol{\sigma}=(\hat{\mathbf{S}}+\Delta\mathbf{S})\cdot\boldsymbol{\sigma},$$
where Ŝ = [Ŝ(r_l,t_k)]_{L×L} and ΔS = [ΔS(r_l,t_k)]_{L×L}. Herein, ΔS is the error of the reference matrix caused by target motion. Because of the stationary assumption, S is approximated by Ŝ. Therefore, under this assumption, the coincidence imaging equation actually employed in practice is

Eq. (19)

$$\mathbf{S}_r=\hat{\mathbf{S}}\cdot\hat{\boldsymbol{\sigma}}=(\mathbf{S}-\Delta\mathbf{S})\cdot(\boldsymbol{\sigma}-\boldsymbol{\varepsilon}_\sigma).$$

In contrast with the error-free equation S_r = S·σ, Eq. (19) is the estimated coincidence imaging equation, where σ̂ = σ − ε_σ is the estimated scattering coefficient vector. Thus, ε_σ is the estimation error of the target scattering coefficient vector, i.e., the image recovery error, which is caused by the target motion. Hereafter, ε_σ is referred to as the target motion error.

Substituting Eq. (18) in Eq. (19), we have

Eq. (20)

$$(\mathbf{S}-\Delta\mathbf{S})\cdot\boldsymbol{\varepsilon}_\sigma=\Delta\mathbf{S}\cdot\boldsymbol{\sigma}.$$

Given that S is invertible and that, during the very short imaging time, the difference ΔS is small enough that ‖ΔS‖·‖S⁻¹‖ < 1, the matrix I − S⁻¹·ΔS is invertible and the inequality ‖(I − S⁻¹·ΔS)⁻¹‖ ≤ 1/(1 − ‖S⁻¹·ΔS‖) holds.12 Since both S and I − S⁻¹·ΔS are invertible, Eq. (20) turns into S(I − S⁻¹ΔS)·ε_σ = ΔS·σ. Then, ε_σ can be explicitly given as

Eq. (21)

$$\boldsymbol{\varepsilon}_\sigma=\mathbf{S}^{-1}\cdot(\mathbf{I}-\mathbf{S}^{-1}\cdot\Delta\mathbf{S})^{-1}\cdot\Delta\mathbf{S}\cdot\boldsymbol{\sigma}.$$

Taking norms in Eq. (21), using the consistency of the matrix norm and the inequality above, we have

Eq. (22)

$$\|\boldsymbol{\varepsilon}_\sigma\|\le\frac{\|\mathbf{S}^{-1}\|\cdot\|\Delta\mathbf{S}\|\cdot\|\boldsymbol{\sigma}\|}{1-\|\mathbf{S}^{-1}\cdot\Delta\mathbf{S}\|}\le\frac{\|\mathbf{S}\|\cdot\|\mathbf{S}^{-1}\|\cdot\frac{\|\Delta\mathbf{S}\|}{\|\mathbf{S}\|}}{1-\|\mathbf{S}\|\cdot\|\mathbf{S}^{-1}\|\cdot\frac{\|\Delta\mathbf{S}\|}{\|\mathbf{S}\|}}\cdot\|\boldsymbol{\sigma}\|.$$

Using the expression ‖S‖·‖S⁻¹‖ = cond(S), Eq. (22) is finally written as

Eq. (23)

$$\|\boldsymbol{\varepsilon}_\sigma\|\le\frac{\mathrm{cond}(\mathbf{S})\cdot\frac{\|\Delta\mathbf{S}\|}{\|\mathbf{S}\|}}{1-\mathrm{cond}(\mathbf{S})\cdot\frac{\|\Delta\mathbf{S}\|}{\|\mathbf{S}\|}}\cdot\|\boldsymbol{\sigma}\|.$$

The right side of Eq. (23) is a bound on the target-motion-induced error and serves as an indication for analyzing the influencing factors. Obviously, there are three key factors of the motion-induced error, i.e., ‖ΔS‖/‖S‖, cond(S), and ‖σ‖.

First, ‖ΔS‖/‖S‖ represents the relative difference between Ŝ and S caused by the target motion. Certainly, a lower speed generates a smaller variation of the target position during the imaging time, resulting in a minor difference between Ŝ and S. Second, cond(S) is generally viewed as the amplification factor of ‖ΔS‖/‖S‖. As shown in Eq. (23), a small condition number weakens the impact of ΔS, whereas a large one enhances it. The condition number is a measure of the dependencies within a matrix: the higher the condition number, the weaker the independence of the row/column vectors of the matrix. In the RCI method, cond(S) measures the incoherence of the reference signal matrix S, which essentially represents the time-space independence of the detecting signals. Finally, the target scattering coefficient vector is another key factor for moving-target imaging. If a target consists of just several scattering centers, then only a minority of the elements of σ are nonzero, resulting in a very small ‖σ‖ that reduces ‖ε_σ‖. By contrast, a large ‖σ‖ cannot decrease the effect of the target-motion-induced error.

Therefore, we summarize the three factors of the target-motion-induced error as follows.

  • 1. Target motion velocity: A lower velocity generates a smaller difference between S(r,t) and Ŝ(r,t), resulting in a smaller ΔS or ‖ΔS‖/‖S‖, which decreases the target-motion-induced error.

  • 2. Time-space independence of the detecting signals: Detecting signals with a higher time-space-independence degree lead to a more incoherent matrix S, resulting in a smaller cond(S), which decreases the target-motion-induced error.

  • 3. Target scattering coefficient: A target with a small scattering coefficient vector decreases the target-motion-induced error.
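As a quick numerical check of this analysis, the sketch below perturbs a random reference matrix by a small ΔS, computes the exact recovery error implied by Eqs. (19)–(21), and compares it with the bound of Eq. (23). The matrices and the sparse scene are random stand-ins, and the bound is only meaningful while cond(S)·‖ΔS‖/‖S‖ < 1.

```python
import numpy as np

rng = np.random.default_rng(4)

L = 64
S = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
dS = 1e-4 * (rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L)))
sigma = np.zeros(L, dtype=complex)
sigma[rng.choice(L, 8, replace=False)] = 1.0

Sr = S @ sigma                                # error-free received samples, Eq. (18)
sigma_hat = np.linalg.solve(S - dS, Sr)       # what solving Eq. (19) actually returns
err_actual = np.linalg.norm(sigma - sigma_hat)

rel = np.linalg.norm(dS, 2) / np.linalg.norm(S, 2)
kappa = np.linalg.cond(S)
bound = kappa * rel / (1.0 - kappa * rel) * np.linalg.norm(sigma)   # Eq. (23), needs kappa*rel < 1

print(f"actual error = {err_actual:.3e}, Eq. (23) bound = {bound:.3e}")
```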

3.2.

Image Reconstruction

Based on the LS principle, target images can be uniquely recovered according to the imaging equation S_r = Ŝ·σ̂ (Ŝ is a full-rank square matrix). As is well known, various algorithms based on the LS principle can solve this inverse problem by minimizing the objective function ‖S_r − Ŝ·σ̂‖₂. Since Ŝ is invertible, S_r − Ŝ·σ̂ = 0 is equivalent to σ − σ̂ − ε_σ = 0. Therefore, for simplicity, the σ̂ recovered based on the LS principle can be expressed as

Eq. (24)

$$\hat{\boldsymbol{\sigma}}=\arg\min_{\tilde{\boldsymbol{\sigma}}}\|\boldsymbol{\sigma}-\tilde{\boldsymbol{\sigma}}-\boldsymbol{\varepsilon}_\sigma\|_2.$$

In comparison with the optimal estimation σ_opt = arg min_σ̃ ‖σ − σ̃‖₂, the target-motion-induced error obviously disturbs the optimization process expressed in Eq. (24). However, the LS methods are basically driven by the criterion of minimizing ‖S_r − Ŝ·σ̂‖₂. Once the objective function is perturbed by the motion-induced error, the error is directly presented in the solution. Especially when the motion-induced error reaches a high value, the estimated result might deviate from the true value without control. Unfortunately, targets with a large ‖σ‖ and/or rapid motion generally exist in practice. Additionally, the time-space-independence degree of the detecting signals is also limited by radar system conditions. That is to say, a severe motion-induced error ε_σ is impossible to avoid. Therefore, the sensitivity of the LS method to the motion-induced error makes the image reconstruction unstable and unable to give an effective scattering coefficient vector for moving targets.

Considering the aforementioned reasons, other criteria are expected to be added to the optimization of the recovery. Then the weight of the ‖S_r − Ŝ·σ̂‖₂ minimization will be reduced, and the impact of its target-motion-induced error will be decreased as well. Certainly, criteria based on some prior information would be a reasonable choice. Therefore, the CS reconstruction algorithms are considered here, which utilize not only the coincidence imaging equation but also the information of target sparsity.13,14 The applicability of the CS method to RCI and its potential advantages are discussed in the following paragraphs.

Herein, the CS theory is reviewed briefly. Consider an unknown K-sparse vector x = [x_0, x_1, …, x_{N−1}]^T. We have M < N measurements of x that can be expressed in matrix notation as y = Φx, where y = [y_0, y_1, …, y_{M−1}]^T and Φ is the M×N measurement matrix. Since M < N, recovery of the signal x from the measurement y is ill-posed in general. However, the CS theory demonstrates that when the matrix Φ has the restricted isometry property (RIP),15 it is possible to recover x from M = O[K log(N/K)] measurements of y. The RIP is closely related to an incoherence property of Φ. It has been proved that random or incoherent matrices perform well.15 In other words, an incoherent or random measurement matrix is powerful for CS reconstruction. When the RIP holds, the signal x can be recovered exactly from y by solving an l1 minimization problem.

Eq. (25)

$$\hat{\mathbf{x}}=\arg\min\|\mathbf{x}\|_1\quad\text{subject to}\quad \mathbf{y}=\boldsymbol{\Phi}\mathbf{x}.$$

The utilization of CS reconstruction algorithms for RCI relies on two key observations.17 First, the scattering coefficient vector σ should often be sparse or compressible. Second, the measurement matrix, which corresponds to the reference signal matrix in RCI, is expected to be a random one. Fortunately, both observations are satisfied. For the former requirement, the grid-cell number is generally much larger than the scattering-center number, which means only a minority of grid cells involve target scattering centers. Thus, the scattering coefficient vector actually consists of a great majority of zero elements, resulting in a sparse σ. In terms of the latter requirement, the independence of the row/column vectors of the matrix S has been ensured by the time-space-independent detecting signals. Thus, a random measurement matrix is also available in the coincidence imaging equation. By combining the two observations, we can rebuild the target images using CS principles. Based on the model that takes noise into account,16 the estimate σ_cs recovered using the CS principles can be described as

Eq. (26)

$$\boldsymbol{\sigma}_{cs}=\arg\min_{\tilde{\boldsymbol{\sigma}}}\|\tilde{\boldsymbol{\sigma}}\|_1\quad\text{subject to}\quad\|\mathbf{S}_r'-\hat{\mathbf{S}}'\cdot\tilde{\boldsymbol{\sigma}}\|_2<\varepsilon,$$
where ε is the noise level, and S_r′ and Ŝ′ consist of part of the samples in S_r and Ŝ. Without loss of generality, we use M consecutive samples for simplicity; then S_r′ = [S_r(t_1), S_r(t_2), …, S_r(t_M)]^T and Ŝ′ = [α_1, α_2, …, α_M]^T, where α_k is the k'th row vector of Ŝ. Thus, for a K-sparse scattering coefficient vector, only M = O[K log(L/K)] rather than L samples are utilized.
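The paper does not prescribe a particular CS solver, so the sketch below uses a generic iterative soft-thresholding (ISTA) routine for the l1-regularized least-squares problem as a stand-in for Eq. (26); here A and y play the roles of Ŝ′ and S_r′, and the measurement matrix, sparsity level, regularization weight, and iteration count are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def ista(A, y, lam=0.05, n_iter=800):
    """Iterative soft-thresholding for  min 0.5*||y - A x||_2^2 + lam*||x||_1,
    used here as a generic stand-in for the sparse recovery of Eq. (26)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x + step * (A.conj().T @ (y - A @ x))     # gradient step on the data term
        mag = np.abs(g)
        # complex soft-thresholding (shrink the magnitude, keep the phase)
        x = np.where(mag > step * lam, (1.0 - step * lam / np.maximum(mag, 1e-12)) * g, 0.0)
    return x

# Under-determined toy problem: M < L measurements of a sparse scene.
L, M, Ksp = 256, 96, 8
A = (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(M)
sigma = np.zeros(L, dtype=complex)
sigma[rng.choice(L, Ksp, replace=False)] = 1.0
y = A @ sigma + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

sigma_cs = ista(A, y)
support = set(np.argsort(np.abs(sigma_cs))[-Ksp:])
print("recovered support matches the true scatterers:", support == set(np.flatnonzero(sigma)))
```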

The CS reconstruction algorithms can certainly further decrease the required number of received-signal samples S_r(t_k) compared with the LS method, resulting in a shorter imaging time. Furthermore, the key contribution of employing the CS method is that its optimization process utilizes both the equation S_r = Ŝ·σ̂ and the sparsity of the scattering coefficient vector for image recovery. Therefore, the weight of the ‖S_r − Ŝ·σ̂‖₂ minimization, which has been disturbed by the target-motion-induced error, is balanced with the l1 minimization of σ̂. As a result, the error effect is weakened in the image reconstruction. It can be inferred that the CS method enhances the stability of the image reconstruction in the presence of the target-motion-induced error.

4.

Results and Discussion

In this section, a set of examples is shown to illustrate RCI in the presence of the target-motion-induced errors. The antenna number and signal parameters of RCI are kept the same as in the example of Fig. 3. The target area and antenna arrangement are shown in Fig. 5. The targets in the following examples have translational and rotational motions. Ω and V denote the rotation and translation vectors, respectively, and their orientations are shown in Fig. 5.

Fig. 5

The imaging scene and the linear antenna array.


The target area, which must be large enough to cover the entire region illuminated by the radar beam, generally has a considerably large size. A big target area along with a small grid cell means a large grid-cell number, leading to an imaging equation of quite high dimension. The high dimension increases the computational complexity and might make the equation too complex to solve. As is well known, space targets generally occupy only a fraction of the large beam range. If we can estimate the subareas that actually contain targets, then the size of the target area dealt with in the parameterized method is reduced. Therefore, we consider employing the correlation method to first estimate the subareas where the targets exist. As mentioned earlier, the correlation method has a low resolution but can roughly estimate the target range. This process is stated in detail in the following steps (a code sketch of the subarea selection follows the list):

  • 1. Based on the initial large target area I_0, set a large grid-cell size, according to which I_0 is divided into L_0 grid cells, i.e., I_0 = {r_l^0, l = 1, …, L_0}. The initial target scattering coefficient vector is σ^0 = [σ_{r_1^0}, σ_{r_2^0}, …, σ_{r_{L_0}^0}].

  • 2. Recover σ^0 via the correlation method based on I_0. The initial image is labeled D_0.

  • 3. Based on D_0, select P small subareas that give a distinct response because they include targets, labeled {I_p, p = 1, …, P}.

  • 4. Reset the grid cell to the desired small size, according to which the subareas {I_p, p = 1, …, P} are redivided, i.e., I_p = {r_l^p, l = 1, …, L_p}.

  • 5. Thus, the grid cells that need to be processed in the imaging equation can be denoted as I_sub = {r_1^1, …, r_{L_1}^1, r_1^2, …, r_{L_2}^2, …, r_1^P, …, r_{L_P}^P}. Then the grid-cell number is decreased from L to L_sub = Σ_{p=1}^{P} L_p, and the scattering coefficient vector to be solved becomes σ_sub = [σ_r]_{L_sub×1}, r ∈ I_sub.

  • 6. Recover σ_sub via the parameterized method. Then, obtain the subimages corresponding to {I_p, p = 1, …, P}, which are denoted {D_p, p = 1, …, P}.

  • 7. Obtain the final target image by combining D_0 and {D_p, p = 1, …, P}.
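The subarea selection in steps 1 to 5 can be sketched as follows; the coarse image D_0 is synthetic (three bright cells standing in for the correlation image), and the threshold, subarea size, and cell sizes mirror the example discussed next but are otherwise arbitrary assumptions.

```python
import numpy as np

coarse_cell = 5.0          # m, initial grid cell (step 1)
fine_cell = 0.5            # m, refined grid cell (step 4)
sub_half = 10.0            # m, half-size of each 20 m x 20 m subarea
extent = 1000.0            # m, initial target area is extent x extent

# Synthetic coarse image D0 standing in for the correlation result (step 2):
# weak noise background plus three bright cells.
n0 = int(extent / coarse_cell)
D0 = 0.05 * np.abs(np.random.default_rng(6).standard_normal((n0, n0)))
for cx, cy in [(0.0, 0.0), (300.0, 300.0), (300.0, 250.0)]:
    D0[int((cy + extent / 2) / coarse_cell), int((cx + extent / 2) / coarse_cell)] = 1.0

# Step 3: coarse cells with a distinct response define the subarea centers.
peaks = np.argwhere(D0 > 0.5)
centers = [(ix * coarse_cell - extent / 2 + coarse_cell / 2,
            iy * coarse_cell - extent / 2 + coarse_cell / 2) for iy, ix in peaks]

# Steps 4-5: only the finely gridded subareas enter the imaging equation.
def fine_grid(center):
    ax = np.arange(center[0] - sub_half, center[0] + sub_half, fine_cell) + fine_cell / 2
    ay = np.arange(center[1] - sub_half, center[1] + sub_half, fine_cell) + fine_cell / 2
    return [(x, y) for y in ay for x in ax]

subgrids = [fine_grid(c) for c in centers]
print("subarea centers:", [(float(x), float(y)) for x, y in centers])
print("fine grid cells to solve for:", sum(len(g) for g in subgrids))   # 3 x 1600 = 4800
```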

According to the seven steps above, we give an example where the initial target area is 1000 m × 1000 m. Herein, three targets are dispersed in the initial target area, as shown in Fig. 5: target 1 located at (0, 0), target 2 located at (300, 300), and target 3 located at (300, 250).

At the first step, the grid cell is set to 5 m × 5 m. Figure 6(a) gives the initial image recovered via the correlation method based on the 5 m × 5 m grid cell. There are three distinct pixels in Fig. 6(a). The positions corresponding to the three pixel centers are derived: (0, 0), (300, 300), and (300, 250). This indicates that targets are included in the three square areas of 5 m × 5 m size. To be prudent, we expand the size to 20 m × 20 m. That is, the new subareas are three square areas of 20 m × 20 m, whose centers are (0, 0), (300, 300), and (300, 250), respectively. Then, the grid cell is reset to 0.5 m × 0.5 m, based on which the three new subareas are redivided. Then, the target scattering coefficients can be recovered via the parameterized method with respect to the three subareas, as shown in Figs. 6(b) and 6(c). In this case, the grid-cell number that needs to be considered in the parameterized method is decreased from 4×10^6 to 4800 (herein, 4×10^6 is the grid-cell number for a 1000 m × 1000 m target area with a 0.5 m × 0.5 m grid cell), which greatly reduces the computational burden. For the sake of simplicity, the following examples directly show the imaging results after selecting the target subareas.

Fig. 6

RCI example for a target area with large size. (a) Reconstructed image of the initial target area. (b) Subimage of target 2. (c) Subimage of target 3. (d) Subimage of target 1.


4.1.

Comparison Between the Radar Coincidence Imaging Method and the Range-Doppler Algorithm

This example concerns the comparison between RCI and the RD algorithm (RDA). We employ a simple four-point target model, shown as target 1 in Fig. 5. The positions of the four target scattering centers are (1.5, 0.4), (1.5, −0.4), (−1.5, 0.4), and (−1.5, −0.4), respectively. The example is implemented in three different scenarios: (1) the target is stationary; (2) the target has a uniform rotation, Ω = 2t; (3) the target has a noncooperative rotation, Ω = 2t + 20t² + 20t³ (the rotations are all given in angle measure). Since RCI employs a six-antenna array, the RD imaging is also implemented based on a six-antenna radar array. Moreover, the multitransmitting RD results herein are derived under optimal conditions.3 The imaging results are shown in Fig. 7.

Fig. 7

Comparison between RCI and range-Doppler algorithm (RDA). (a) RDA and scene 1. (b) RDA and scene 2. (c) RDA and scene 3. (d) RCI and scene 1. (e) RCI and scene 2. (f) RCI and scene 3. Herein, “RCI and scene 1” denotes the imaging result of scene 1 derived via the RCI method.


As shown in Fig. 7(a), the scattering centers in the same range bin cannot be resolved for the stationary target. This indicates that the high azimuth resolution of RD imaging cannot be achieved because of the absence of relative rotation between the target and the antennas. By contrast, all scattering centers in Fig. 7(d) can be resolved because RCI does not depend on a Doppler gradient for resolution. Moreover, Figs. 7(c) and 7(f) demonstrate that RCI is superior in processing noncooperative targets because of its quite short imaging time. The comparison between Figs. 7(e) and 7(f) shows that the RCI method is hardly affected by the noncooperative components of target motion.

The target in this example is simple and the rotational velocity is relatively low. In this simple imaging scenario, the RCI technique can obtain target images of high imagery quality. In comparison with the result for the stationary target in Fig. 7(d), however, the imaging blur caused by the target-motion-induced error is clearly visible. The following examples therefore examine the image reconstruction in the presence of the target-motion-induced error in detail.

4.2.

Three Factors of the Target-Motion-Induced Error

The following example concerns how the three factors of the target-motion-induced error affect the imagery quality. The imaging experiment will be performed in the scenarios where targets have different velocities or have different scattering coefficient vectors, or where the detecting signals have different time-space-independent degrees.

The first example is to investigate the impact of target velocity. The rotational velocity and acceleration of the target are labeled as ω and ωa, respectively. Figure 8 shows the imaging results when the target has different rotation velocities.

Fig. 8

Target images for different rotation velocities. (a) Target model. (b) Result for a stationary target. (c) Result for ω = 10 deg/s, ω_a = 20 deg/s². (d) Result for ω = 45 deg/s, ω_a = 20 deg/s². (e) Result for ω = 360 deg/s, ω_a = 20 deg/s². (f) Result for ω = 1440 deg/s, ω_a = 20 deg/s².


As shown in Fig. 8(b), the image of the stationary target has no imaging error. The target remains recognizable in Figs. 8(c) and 8(d) despite the visible imaging blur. However, the target image is difficult to distinguish in Figs. 8(e) and 8(f). This indicates that the reconstruction quality of the LS method worsens as the growing velocity increases the target-motion-induced error.

It should be noticed that the imaging errors mainly occur at several determinate cross-range bins (herein, a cross-range bin is defined as the grid cells that have the same x-axis position) for all the imaging results in Fig. 8. The imaging errors are concentrated at some special grid cells, so it is reasonable to infer that the errors of different grid cells differ considerably; the imaging error is not uniform among the grid cells. Thus, we need to analyze the imaging error of every individual grid cell, i.e., ε_σ(l), so as to explain why the imaging blur concentrates at the special locations in Fig. 8.

According to Eq. (20), we have Ŝ·ε_σ = ΔS·σ. Since ΔS and σ are both unknown, the equation can be simplified as

Eq. (27)

$$\hat{\mathbf{S}}\cdot\boldsymbol{\varepsilon}_\sigma=\tilde{\boldsymbol{\varepsilon}},$$
where ε̃ = ΔS·σ. Given that Ŝ is a full-rank square matrix, Ŝ can be expressed as Ŝ = U^H·diag(λ_1, λ_2, …, λ_L)·V, where λ_l is the l'th singular value of Ŝ, and U and V are unitary matrices. Thus, we have the following expression:

Eq. (28)

$$\boldsymbol{\varepsilon}_\sigma=\mathbf{V}^H\cdot\mathrm{diag}\!\left(\frac{1}{\lambda_1},\frac{1}{\lambda_2},\ldots,\frac{1}{\lambda_L}\right)\cdot\mathbf{U}\cdot\tilde{\boldsymbol{\varepsilon}}
=\mathbf{V}^H\cdot\left[\frac{1}{\lambda_1}\mathbf{u}_1,\frac{1}{\lambda_2}\mathbf{u}_2,\ldots,\frac{1}{\lambda_L}\mathbf{u}_L\right]^T\cdot\tilde{\boldsymbol{\varepsilon}}
=\left[\frac{1}{\lambda_1}\mathbf{V}^H\mathbf{u}_1\tilde{\boldsymbol{\varepsilon}},\,\frac{1}{\lambda_2}\mathbf{V}^H\mathbf{u}_2\tilde{\boldsymbol{\varepsilon}},\,\ldots,\,\frac{1}{\lambda_L}\mathbf{V}^H\mathbf{u}_L\tilde{\boldsymbol{\varepsilon}}\right]^T,$$
where u_l is the row vector of U, i.e., U = [u_1, u_2, …, u_L]^T. Obviously, the imaging error of every individual grid cell is

Eq. (29)

$$\varepsilon_\sigma(l)=\frac{\mathbf{V}^H\mathbf{u}_l\tilde{\boldsymbol{\varepsilon}}}{\lambda_l}.$$

Taking norms in Eq. (29), using the consistency of the matrix norm, we have

Eq. (30)

$$\|\varepsilon_\sigma(l)\|=\left\|\frac{1}{\lambda_l}\mathbf{V}^H\mathbf{u}_l\tilde{\boldsymbol{\varepsilon}}\right\|\le\left|\frac{1}{\lambda_l}\right|\cdot\|\mathbf{V}^H\|\cdot\|\mathbf{u}_l\|\cdot\|\tilde{\boldsymbol{\varepsilon}}\|=\left|\frac{1}{\lambda_l}\right|\cdot\|\mathbf{V}^H\|\cdot\|\tilde{\boldsymbol{\varepsilon}}\|.$$

Thus, the bound on the right side of Eq. (30) can be analyzed as an indication of the imaging error of every individual grid cell. Obviously, for different grid cells, the error bounds ‖ε_σ(l)‖ have equal ‖V^H‖ and equal ‖ε̃‖ but different |1/λ_l|. This indicates that the differences among the ‖ε_σ(l)‖ are highly related to the differences among the |λ_l|. λ_l is the singular value corresponding to the l'th column vector of Ŝ. Here we denote the l'th column vector of Ŝ as ŝ_l, i.e., Ŝ = [ŝ_1, ŝ_2, …, ŝ_L]. A high λ_l indicates high linear independence between ŝ_l and the other columns,19 whereas a low λ_l indicates low linear independence for ŝ_l. In addition, ŝ_l consists of the samples of the reference signal of the l'th grid cell. Hence, the linear independence between ŝ_l and the other column vectors basically represents the spatial independence between S(r_l,t) and the reference signals of the other grid cells. Therefore, if the reference signal of the l'th grid cell shows higher independence from those of the other grid cells, then ŝ_l exhibits better linear independence from the other column vectors, finally resulting in a larger λ_l and a smaller ‖ε_σ(l)‖. We therefore need to look into the spatial independence of the reference signal of every individual grid cell.
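The nonuniformity predicted by Eq. (30) can be seen with a small numeric stand-in: a random matrix plays the role of Ŝ, one column is made nearly dependent on another (a cell whose reference signal has weak spatial independence), and the per-cell bounds then spread by the ratio of the largest to the smallest singular value. The matrix and the perturbation are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

L = 48
S_hat = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
S_hat[:, 1] = S_hat[:, 0] + 0.05 * rng.standard_normal(L)   # a weakly independent cell

eps_tilde = 0.01 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))  # stand-in for dS @ sigma

# Eq. (30): |1/lambda_l| * ||V^H|| * ||eps~||; the unitary factor has spectral
# norm 1, so the per-cell bound varies only through the singular values.
lam = np.linalg.svd(S_hat, compute_uv=False)
bound = np.linalg.norm(eps_tilde) / lam
print("largest / smallest per-cell error bound:", round(float(bound.max() / bound.min()), 1))
```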

For the sake of simplicity, we consider a single radar antenna at position (x_a, y_a), as shown in Fig. 9. We depict a range bin (herein, the range bin is defined as the grid cells that have the same y-axis position). The distance between two adjacent grid cells is Δx. The l'th grid-cell center is at (x_l, y_l); then the range difference between the two adjacent grid cells can be expressed as

Eq. (31)

$$\Delta R=\sqrt{(x_l-x_a)^2+(y_l-y_a)^2}-\sqrt{(x_l-x_a+\Delta x)^2+(y_l-y_a)^2}
=(y_l-y_a)\left[\sqrt{\left(\frac{x_l-x_a}{y_l-y_a}\right)^2+1}-\sqrt{\left(\frac{x_l-x_a+\Delta x}{y_l-y_a}\right)^2+1}\right].$$

Fig. 9

The imaging error is nonuniform in the target area.


Using the first-order Taylor approximation, we have

Eq. (32)

$$\Delta R=\Delta x\cdot\frac{x-x_a}{y-y_a}+O(\Delta x^2).$$

Then, the correlation function of the reference signals between the two grid cells is

Eq. (33)

$$R_{ref}=\int S(\mathbf{r}_l,t)\,S^{*}(\mathbf{r}_{l+1},t)\,dt
=\int St\!\left[t-\frac{2\sqrt{(x_l-x_a)^2+(y_l-y_a)^2}}{c}\right]St^{*}\!\left\{t-\frac{2\left[\sqrt{(x_l-x_a)^2+(y_l-y_a)^2}+\Delta R\right]}{c}\right\}dt
=R_{Tran}\!\left(\frac{2\Delta R}{c}\right).$$

As shown in Eq. (33), the independence of the reference signals between two grid cells is basically determined by the autocorrelation of the transmitted signals and the range difference ΔR. Obviously, a bigger ΔR leads to better independence between S(r_l,t) and S(r_{l+1},t). It should be noticed that ΔR is not equal for all grid cells in the imaging region. As expressed in Eq. (32), for a determinate y within the same range bin, a bigger |x − x_a| produces a larger ΔR. Thus, ΔR_2 > ΔR_1, as shown in Fig. 9. That is, grid cells that are farther from the vertical line of the radar antennas have a bigger range difference. In this example, the x-axis positions of the antennas are from 0 to 5. We label region Q as the red shadow in Fig. 9, which corresponds to the vertical region of the antennas. The grid cells in region Q have smaller ΔR, their reference signals S(r_l,t) accordingly present weaker spatial independence, and the column vectors ŝ_l corresponding to these grid cells have smaller singular values λ_l. As a result, due to the inverse 1/λ_l in Eq. (30), the imaging errors of the grid cells in region Q are more serious than those of the other grid cells.
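A quick check of Eqs. (31) and (32) with illustrative numbers (an antenna at x_a = 2.5 m, in the middle of the 0 to 5 m span mentioned above, and a 5 km range) shows how small the range difference is near the antennas' vertical line and how it grows with |x − x_a|, which is the geometric reason for region Q.

```python
import numpy as np

xa, ya = 2.5, 0.0            # antenna position, m (assumed)
yl = 5000.0                  # range of the cell row, m (assumed)
dx = 0.5                     # grid-cell spacing, m

for xl in (0.0, 50.0, 500.0):
    exact = abs(np.hypot(xl - xa, yl - ya) - np.hypot(xl + dx - xa, yl - ya))   # Eq. (31)
    approx = dx * abs(xl - xa) / (yl - ya)                                       # Eq. (32), magnitude
    print(f"x = {xl:6.1f} m:  |dR| exact = {exact:.2e} m,  first-order = {approx:.2e} m")
```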

In the next example, we examine the imaging quality of the LS method when targets have different scattering coefficient vectors. For the sake of simplicity, we let all of the target scattering coefficients be 1 in the following examples. Thus, ‖σ‖ is determined simply by the scattering-center number. The targets to be imaged have the same motion velocity, i.e., ω = 20 deg/s, ω_a = 20 deg/s². Figure 10 gives the imaging results.

Fig. 10

Target images of different scattering maps. (a) 2-point target model. (b) 4-point target model. (c) 10-point target model. (d) 77-point target model. (e) Result of the 2-point target. (f) Result of the 4-point target. (g) Result of the 10-point target. (h) Result of the 77-point target.


The two-point target is well rebuilt, almost without blur, as shown in Fig. 10(e). The four-point and 10-point targets can be recognized, but their images are blurred to different degrees. However, the image of the 77-point target in Fig. 10(h) is almost totally overwhelmed by the imaging error. This indicates that the reconstruction quality of the LS method degrades as the number of target scattering centers increases. Especially for the 77-point plane model, the imaging result is blurred beyond recognition.

As shown in the previous example of Fig. 10(h), imaging a target with a large ‖σ‖ is a great problem for the LS method. Focusing on the plane model in Fig. 10(d), the following example concerns the recovery performance of the LS method when the detecting signals have different time-space-independence degrees. The target model to be imaged is shown in Fig. 10(d) and has a rotation velocity of ω = 20 deg/s, ω_a = 20 deg/s². The independence degree of the detecting signals is measured by the condition number of S. The imaging results are given in Fig. 11.

Fig. 11

Target images for different detecting signals. (a) cond(S) = 7.10 dB. (b) cond(S) = 6.86 dB. (c) cond(S) = 6.11 dB. (d) cond(S) = 4.69 dB.


From Figs. 11(a) to 11(d), the imaging errors markedly decline as the condition number gets smaller. Detecting signals with a high independence degree can produce quite excellent imaging quality, as shown in Fig. 11(d). However, such a time-space-independence degree is difficult to accomplish, since it mainly depends on the time-independence degree of the transmitted signals. Herein, we measure the time-independence degree of the transmitted signal by the correlation time τ_0, defined by |R_St(τ_0)| ≤ 0.05·max[R_St(τ)], where R_St(τ) is the autocorrelation function of the transmitted signal.18 For instance, the imaging quality shown in Fig. 11(c) requires the detecting signals to have a condition number of 6.11 dB, which is produced by a transmitted signal with a correlation time τ_0 of 3.6 ns. Obviously, this is a rigorous condition for radar systems to realize despite the fine imaging quality it provides.

The imaging results above have shown the impact of the three factors on the recovery performance of the LS method. To quantitatively depict the sensitivity of the LS method to the target-motion-induced error, Fig. 12 illustrates the curves of the actual imaging error against the three factors. Herein, the actual imaging error is defined as follows:

Eq. (34)

$$\varepsilon_a=\frac{1}{X\times Y}\sum_{x=1}^{X}\sum_{y=1}^{Y}\left\{\frac{\exp[d_a(x,y)]-\exp[d(x,y)]}{\exp[d(x,y)]}\right\}^2,$$
where d_a(x,y) is the pixel value of the recovered image at coordinates (x,y), d(x,y) denotes the error-free target image, and X×Y is the image size [exp(·) is used to avoid taking the reciprocal of zero-valued pixels]. For the sake of comparison, Fig. 12 also depicts the two-norm of another two types of errors, i.e., the target-motion-induced error ε_σ denoted in Eq. (21) and its bound denoted in Eq. (23).
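For reference, Eq. (34) translates directly into a short function; the toy images in the usage lines are synthetic assumptions, and in practice the pixel values would come from the recovered and the error-free RCI images.

```python
import numpy as np

def actual_imaging_error(da, d):
    """Eq. (34): mean squared relative error between the recovered image da and
    the error-free image d; exp() keeps zero-valued pixels out of the denominator."""
    ratio = (np.exp(da) - np.exp(d)) / np.exp(d)
    return float(np.mean(ratio ** 2))

# Tiny usage example with synthetic pixel values.
d = np.zeros((64, 64)); d[20, 20] = 1.0            # error-free image
da = d.copy(); da[20, 20] = 0.8; da[30, 40] = 0.1  # blurred recovery
print(actual_imaging_error(da, d))
```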

Fig. 12

Imagery quality and the motion-induced error factors. (a) Errors versus the target rotation velocity. (b) Errors versus the target scattering-center number. (c) Errors versus the condition number.


As shown in Fig. 12, the curves of the target-motion-induced error and of the error bound have a similar varying trend. Both increase when the target velocity gets higher, the scattering-center number grows, or the condition number rises. Furthermore, the actual imaging error of the LS method is almost identical to the target-motion-induced error. This implies that the imaging error of the LS method is determined by the target-motion-induced error, i.e., its imaging quality is sensitive to this error.

4.3.

Comparison Between the Image Reconstruction Using the CS Algorithm and the LS Method

The LS method is sensitive to the target-motion-induced error, as shown in the examples above. Especially for the plane model with a large ‖σ‖ in Fig. 10(d), the motion-induced error degrades the imaging results of the LS method beyond recognition. Thus, with respect to this target model, the following example employs the CS algorithm for the image reconstruction of RCI. The example compares the CS and LS methods in four different scenes, where the detecting signals have different time-space-independence degrees. Herein, the time-space independence of the detecting signals in the four scenes is still measured by the condition number, as shown in Table 1. The target model is given in Fig. 10(d), where the sparsity of the scattering coefficient vector is ς = K/L = 77/4096 ≈ 0.019 (the target scattering-center number is 77, and the grid-cell number is 64×64 = 4096). The rotation velocity and acceleration are ω = 20 deg/s and ω_a = 20 deg/s², respectively. The translational velocity and acceleration are v = 10 m/s and v_a = 10 m/s², respectively. The imaging results are given in Fig. 13, and the imaging errors are provided in Table 1.

Table 1

The condition number and the errors in the image reconstruction.

                                           Scene 1   Scene 2   Scene 3   Scene 4
Condition number (dB)                       7.0998    9.4872   13.6096   14.2642
Target-motion-induced error                 2.4075    4.4517    8.4806    9.4282
Imaging error of least-square (LS)          2.4075    4.4517    8.4806    9.4282
Imaging error of compressive sensing (CS)   2.2963    1.7365    1.4920    0.4064
Position error of LS                        0.17      0.28      0.33      0.68
Position error of CS                        0         0         0         0

Fig. 13

Comparison between least-square (LS) and compressive sensing (CS) recovery. (a) LS and scene 1. (b) LS and scene 2. (c) LS and scene 3. (d) LS and scene 4. (e) CS and scene 1. (f) CS and scene 2. (g) CS and scene 3. (h) CS and scene 4. Here, “LS and scene 1” denotes the imaging result using the LS method in scene 1.


The condition number grows considerably from scene 1 to scene 4, which greatly increases the target-motion-induced error, as shown in Table 1. Both the data in Table 1 and Figs. 13(a) to 13(d) illustrate that the imaging error of the LS method rises markedly along with the increasing motion-induced error. By contrast, the imaging error of the CS algorithm remains at a considerably low level. Figures 13(e) to 13(h) also show that the target images are well rebuilt via the CS method despite the nonideal conditions.

In addition, we should note that the imaging result of the CS method gives almost all the scatterer positions correctly. Actually, the imaging results can provide an effective target definition if the scatterer positions are well estimated, even with wrong scattering coefficients. The recovery of scatterer positions might attract more attention than the scattering intensity. We therefore define an error especially for the scatterer-position estimation so as to measure the imaging results from another side.

Eq. (35)

$$\varepsilon_p=\frac{1}{X\times Y}\sum_{x=1}^{X}\sum_{y=1}^{Y}\left|\delta[d_a(x,y)]-\delta[d(x,y)]\right|.$$

Obviously, δ[d_a(x,y)] normalizes the image so that the scattering coefficient is 1 or 0, which removes the effect of the scattering coefficients. δ[d_a(x,y)] − δ[d(x,y)] is 0 when the imaging result gives a point where a scatterer indeed exists, and is 1 when it gives a point at a wrong position or fails to show a point at the right position. Thus, ε_p only concerns the scatterer positions and ignores the scattering intensity. The position errors of the target images in Fig. 13 are given in Table 1.
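Eq. (35) likewise maps to a few lines of code; reading δ[·] as a binary scatterer/no-scatterer indicator follows the description above, and the optional threshold is an assumption added for noisy recoveries.

```python
import numpy as np

def position_error(da, d, threshold=0.0):
    """Eq. (35): fraction of pixels whose scatterer / no-scatterer decision
    differs between the recovered image da and the error-free image d."""
    return float(np.mean(np.abs((np.abs(da) > threshold).astype(int)
                                - (np.abs(d) > threshold).astype(int))))

# Usage with the same synthetic images as above: one correct scatterer pixel
# and one spurious pixel give a position error of 1/(64*64).
d = np.zeros((64, 64)); d[20, 20] = 1.0
da = d.copy(); da[20, 20] = 0.8; da[30, 40] = 0.1
print(position_error(da, d))
```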

The position errors in Table 1 further indicate that the CS algorithm can recover moving-target images with high quality even when the detecting signals have weak time-space independence. Therefore, this example suggests that the CS algorithm, combining the sparsity restriction and the coincidence imaging equation, can considerably diminish the adverse influence of the target-motion-induced error.

As previously stated, the RIP is a condition of great significance for CS recovery. The imaging qualities are therefore compared when Ŝ satisfies different RIP conditions in the following example. The RIP is highly related to the incoherence of the reference matrix Ŝ, which can be measured by the condition number. Hence, the condition number of Ŝ is employed to represent the RIP condition. The condition number and the imaging performance of the CS recovery varying with the grid-cell size are depicted in Fig. 14. When the grid cell gets smaller, the spatial independence of the detecting signals between adjacent cells decreases. Consequently, the incoherence of Ŝ gets weaker, as shown in Fig. 14(a), resulting in a worse RIP condition. Because the decreasing grid-cell size causes the violation of the RIP condition, the imaging error and the position error increase and the CS recovery is no longer reliable, as shown in Figs. 14(b) and 14(c).

Fig. 14

Imagery quality versus the grid-cell size. (a) The condition number of the matrix S versus the grid-cell size. (b) The imaging error of the CS recovery versus the grid-cell size. (c) The position error of the CS recovery versus the grid-cell size.


The results in Figs. 13 and 14 imply that if the reference signal matrix Ŝ satisfies the RIP condition, the target image can be correctly recovered despite the motion-induced error existing in the imaging equation. However, it can be inferred that the tolerance of the CS method for the motion-induced error is not infinite, even though it is markedly larger than that of the LS method. Therefore, the following example concerns the reconstruction quality of the CS method when the motion-induced error is further increased by a growing velocity. In this example, the target translational velocity v increases from 200 to 600 m/s. The other motion parameters are ω = 30 deg/s, ω_a = 30 deg/s², and v_a = 40 m/s². Figures 15(a) to 15(c) give the imaging results using the CS method for different values of v. The imaging quality remains excellent when v is 200 and 400 m/s, as shown in Figs. 15(a) and 15(b). However, the target image in Fig. 15(c) is degraded when the velocity reaches 600 m/s. This suggests that a high enough velocity will make the motion-induced error reach such a high level that the CS method is no longer applicable to the image reconstruction.

Fig. 15

Target images for different velocities. (a) v = 200 m/s. (b) v = 400 m/s. (c) v = 600 m/s.

A possible approach to this problem is to update the computational detecting signal (or the imaging plane) in accordance with the target motion parameters. That is, the radar position vectors R_n(t) and R_r(t) should be computed from an estimated velocity instead of being regarded as constants; the mismatch between the computational S(r,t) and the true received signal is then reduced. Although this approach requires motion-parameter estimation, the estimation burden is modest. First, only the first-order velocity needs to be estimated, because the CS method is robust to the higher-order motion terms, as shown in Fig. 15(a). Second, the required estimation precision is relatively low: in the experiment of Fig. 15, a high-quality target image can be obtained as long as the velocity-estimation error is below 400 m/s. In summary, the CS algorithms enhance the image recovery of RCI for high-speed targets.
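The following Python sketch outlines this idea: each column of the reference matrix is the computational detecting signal evaluated at a grid cell whose position is propagated along an estimated translational velocity, instead of being held fixed. The delay-sum signal model, the function and variable names, and all numeric values are illustrative assumptions rather than the paper's exact expressions.

import numpy as np

C = 3e8  # propagation speed, m/s

def random_tone(freq, phase):
    # Stand-in for one transmitter's independent stochastic waveform.
    return lambda tau: np.exp(1j * (2 * np.pi * freq * tau + phase))

def reference_column(cell_pos, t, tx_positions, tx_waveforms, v_est=np.zeros(2)):
    # One column of the reference matrix: the computational detecting signal
    # at grid cell `cell_pos` over sample times `t`.  With a nonzero estimated
    # velocity `v_est`, the cell position is propagated in time so the delays
    # track the moving target instead of a stationary grid.
    pos_t = cell_pos + np.outer(t - t[0], v_est)          # cell position at each instant
    col = np.zeros(len(t), dtype=complex)
    for tx_pos, wf in zip(tx_positions, tx_waveforms):
        delay = np.linalg.norm(tx_pos - pos_t, axis=1) / C
        col += wf(t - delay)
    return col

# Hypothetical usage: two transmitters, one grid cell, reference signal with
# and without the velocity update of the imaging plane.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2e-5, 400)
tx_positions = [np.array([0.0, 0.0]), np.array([5.0, 0.0])]
tx_waveforms = [random_tone(rng.uniform(1e6, 2e6), rng.uniform(0.0, 2 * np.pi))
                for _ in range(2)]
cell = np.array([100.0, 50.0])
col_static = reference_column(cell, t, tx_positions, tx_waveforms)
col_updated = reference_column(cell, t, tx_positions, tx_waveforms,
                               v_est=np.array([300.0, 0.0]))
# The gap between these two columns is the model mismatch that the velocity
# update is meant to remove when the target actually moves at roughly 300 m/s.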

The experiments above demonstrate the improvement the CS algorithm brings to RCI in the presence of the target-motion-induced error. An error-free coincidence imaging equation yields the correct target scattering-coefficient vector (provided that S is nonsingular). Once errors exist, the correctness of the recovered image is determined by whether the errors stay below the tolerance level of the reconstruction algorithm chosen to solve the imaging equation. The imaging results above show that the error tolerance of the LS method is lower than that of the CS algorithm: the CS method can recover images of moving targets well even when the scattering-coefficient vector is large or the detecting signals have weak independence.
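This contrast in error tolerance can be reproduced with a small synthetic experiment. In the Python sketch below, data are generated from a perturbed imaging matrix S + E (the perturbation standing in for the motion-induced mismatch), while both a direct LS inversion and a greedy sparse recovery (a basic orthogonal matching pursuit, used here only as a generic CS-style solver, not the paper's algorithm) work with the nominal S. The matrix sizes, sparsity level, and perturbation scales are assumptions.

import numpy as np

rng = np.random.default_rng(2)

def omp(A, y, k):
    # Minimal orthogonal matching pursuit: greedily select k columns of A
    # and least-squares fit on the selected support.
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

n, k = 200, 8                                    # grid cells, scatterers (assumed sizes)
S = rng.standard_normal((n, n)) / np.sqrt(n)      # nominal (error-free) reference matrix
d_true = np.zeros(n)
d_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 3, k)
rel = lambda d: np.linalg.norm(d - d_true) / np.linalg.norm(d_true)

for eps in (0.0, 0.05, 0.2):                      # stand-in for the motion-induced mismatch
    E = eps * rng.standard_normal((n, n)) / np.sqrt(n)
    y = (S + E) @ d_true                          # data produced by the perturbed model
    d_ls = np.linalg.solve(S, y)                  # LS / direct inversion with the nominal S
    d_cs = omp(S, y, k)                           # sparse recovery with the nominal S
    print(f"eps={eps:4.2f}  LS error={rel(d_ls):.2f}  CS error={rel(d_cs):.2f}")

Typically the LS error is the perturbation amplified by the conditioning of S, whereas the sparse solution degrades far more gracefully, mirroring the behavior reported above.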

5.

Conclusions

As an instantaneous imaging method, RCI does not depend on target relative motion. Because the imaging interval is very short, the method assumes that targets are stationary with respect to the radar system during data acquisition. Consequently, a target-motion-induced error arises for moving targets, which may seriously degrade imagery quality. This paper has examined the image recovery of RCI in the presence of such errors.

Three key factors determine the target-motion-induced error: target velocity, target scattering map, and the time-space independence of the detecting signals. In practical scenes, these factors can produce a large motion-induced error, and the LS algorithm is so sensitive to this error that its recovered images are often blurred beyond recognition. We therefore employ the CS method, which uses the sparsity restriction to reduce the effect of the error on image reconstruction. Numerical simulations have shown that the CS recovery algorithms obtain high-quality images even when the target moves at high velocity or has a large scattering-coefficient vector, or when the detecting signals have weak time-space independence.

Several unsolved issues remain worthy of further consideration. First, deriving an explicit resolution expression for RCI is our ongoing work. In addition, it has been demonstrated that CS algorithms can achieve super-resolution in optical coincidence imaging.20 The potential resolution enhancement provided by CS methods in RCI is therefore worth investigating in future work.

Furthermore, the computational complexity of RCI deserves attention. The target area may be large and lead to thousands of grid cells. For instance, if the grid cell is 0.1 m × 0.1 m and the target area is 10 m × 10 m, the grid-cell number is 10⁴; that is, the target image must be recovered by solving a 10⁴-dimensional equation, which is a considerable computational burden. Thus, a fast algorithm design is needed for RCI and will be a particular subject of our further work.
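As a rough, back-of-the-envelope illustration of that burden in Python (assuming about twice as many measurements as unknowns and the standard ~2mn² flop count for a dense least-squares solve; both are assumptions, not figures from the paper):

# Approximate cost of one dense LS recovery for the example above.
cells = int((10 / 0.1) ** 2)       # 10 m x 10 m area, 0.1 m cells -> 10,000 unknowns
samples = 2 * cells                # assumed: roughly twice as many time samples as unknowns
flops = 2 * samples * cells ** 2   # ~2 m n^2 for a dense QR-based least-squares solve
print(cells, samples, f"{flops:.1e} flops")   # 10000 20000 4.0e+12 flops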

Acknowledgements

This work was supported by the National Science Foundation for Distinguished Young Scholars of China under Grant 61025006 and the National Natural Science Foundation for Young Scientists of China under Grant 61101182.

References

1. D. A. Ausherman et al., "Developments in radar imaging," IEEE Trans. Aerosp. Electron. Syst. AES-20(4), 363–400 (1984). http://dx.doi.org/10.1109/TAES.1984.4502060

2. B. D. Steinberg, "Radar imaging from a distorted array: the radio camera algorithm and experiments," IEEE Trans. Antennas Propag. 29(5), 740–748 (1981). http://dx.doi.org/10.1109/TAP.1981.1142652

3. Y. Zhu, Y. Su, and W. Yu, "An ISAR imaging method based on MIMO technique," IEEE Trans. Geosci. Remote Sens. 48(8), 3290–3299 (2010). http://dx.doi.org/10.1109/TGRS.2010.2045230

4. M. Cheney and B. Borden, Fundamentals of Radar Imaging, SIAM, Philadelphia (2009).

5. V. C. Chen and H. Ling, Time Frequency Transforms for Radar Imaging and Signal Analysis, Artech House, Boston (2002).

6. T. Thayaparan et al., "Application of adaptive joint time-frequency algorithm for focusing distorted ISAR images from simulated and measured radar data," IEE Proc.-Radar Sonar Navig. 150(4), 213–220 (2003). http://dx.doi.org/10.1049/ip-rsn:20030670

7. D. Li et al., "Radar coincidence imaging: an instantaneous imaging technique with stochastic signals," IEEE Trans. Geosci. Remote Sens. 52(4), 2261–2277 (2014). http://dx.doi.org/10.1109/TGRS.2013.2258929

8. Y. Shih, "Quantum imaging," IEEE J. Sel. Topics Quantum Electron. 13(4), 1016–1030 (2007). http://dx.doi.org/10.1109/JSTQE.2007.902724

9. Y. Shih, "The physics of ghost imaging," Quantum Inf. Process. 11(4), 949–993 (2012). http://dx.doi.org/10.1007/s11128-012-0396-5

10. A. Gatti et al., "Ghost imaging with thermal light: comparing entanglement and classical correlation," Phys. Rev. Lett. 93(9), 093602 (2004). http://dx.doi.org/10.1103/PhysRevLett.93.093602

11. G. Scarcelli, V. Berardi, and Y. Shih, "Can two-photon correlation of chaotic light be considered as correlation of intensity fluctuations," Phys. Rev. Lett. 96(6), 063602 (2006). http://dx.doi.org/10.1103/PhysRevLett.96.063602

12. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge (1990).

13. E. J. Candès and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Process. Mag. 25(2), 21–30 (2008). http://dx.doi.org/10.1109/MSP.2007.914731

14. D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). http://dx.doi.org/10.1109/TIT.2006.871582

15. R. G. Baraniuk et al., "A simple proof of the restricted isometry property for random matrices (aka the Johnson-Lindenstrauss lemma meets compressed sensing)," Constr. Approx. 28(3), 253–263 (2008). http://dx.doi.org/10.1007/s00365-007-9003-x

16. L. C. Potter and J. T. Parker, "Sparsity and compressed sensing in radar imaging," Proc. IEEE 98(6), 1006–1020 (2010). http://dx.doi.org/10.1109/JPROC.2009.2037526

17. R. Baraniuk and P. Steeghs, "Compressive radar imaging," in IEEE Radar Conf., 128–133 (2007).

18. P. F. Luo and W. M. Zhang, Stochastic Signal Analysis and Processing, Tsinghua University Press, Beijing (2006).

19. X. D. Zhang, Matrix Analysis and Applications, Tsinghua University Press, Beijing (2004).

20. W. Gong and S. Han, "Super-resolution far-field ghost imaging via compressive sampling," http://arxiv.org/abs/0911.4750 (September 2011).

Biography

Dongze Li is currently working toward her PhD degree at the Institute of Space Electronics and Information Technology of National University of Defense Technology, Changsha, China. She received her BS degree in information and communication engineering from National University of Defense Technology. Her research interests lie in the areas of remote sensing, signal processing, and radar imaging.

Xiang Li is currently a professor at National University of Defense Technology, Changsha, China. He received his BS degree from Xidian University in 1989 and his MS and PhD degrees from the National University of Defense Technology in 1995 and 1998, respectively. Since 2003, he has been with the Institute of Space Electronics and Information Technology, where he has focused on target recognition, signal detection, and radar imaging.

Yongqiang Cheng is currently a lecturer at the School of Electronic Science and Engineering, National University of Defense Technology, Changsha, China. He received his BS, MS, and PhD degrees in information and communication engineering from National University of Defense Technology in 2005, 2007, and 2012, respectively. His research interests lie in the areas of statistical signal processing and information geometry.

Yuliang Qin is currently an associate professor at the School of Electronic Science and Engineering, National University of Defense Technology, Changsha, China. He received his BS, MS, and PhD degrees in information and communication engineering from National University of Defense Technology in 2002, 2004, and 2008, respectively. His research interests lie in the areas of SAR imaging and radar signal processing.

Hongqiang Wang is currently a professor at the School of Electronic Science and Engineering, National University of Defense Technology, Changsha, China. He received his BS, MS, and PhD degrees from the National University of Defense Technology in 1993, 1999, and 2002, respectively. His research interests are in automatic target recognition, radar imaging, and target tracking.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Dongze Li, Xiang Li, Yongqiang Cheng, Yuliang Qin, and Hongqiang Wang "Radar coincidence imaging in the presence of target-motion-induced error," Journal of Electronic Imaging 23(2), 023014 (1 March 2014). https://doi.org/10.1117/1.JEI.23.2.023014