1 Introduction

Multiple target tracking can be defined as the processing of multiple measurements obtained from multiple targets in order to maintain estimates of the targets’ current states, see e.g., [1]. In the classical target tracking problem, it is assumed that at most a single measurement is received from a point target at each time step. However, in many cases, high resolution sensors may be able to resolve individual features or measurement sources on an extended target.

Keeping complex dynamic situations at sea, on land, and in the air under surveillance is a challenge which, in times of network-centric warfare, has to be tackled among other things by combining sensor networking and the corresponding sensor data. In this context, many tracking applications consider the targets to be tracked as point sources, i.e., their extension is assumed to be negligible in comparison with the sensor resolution and error. With ever-increasing sensor resolution capabilities, however, this assumption becomes less and less valid, e.g., in short-range applications or in maritime surveillance, where different scattering centers of the observed objects may give rise to several distinct detections that vary from scan to scan in both number and relative origin location. Concerning the fluctuating number of measurements, a similar set of problems arises in the case of closely spaced targets, where, depending on the sensor-to-target geometry, limited sensor resolution prevents successful tracking of (all of) the individual targets [2]. Moreover, hostile fighter aircraft can exploit this surveillance gap by flying in close formation to disguise their actual fighting capacity, where the fluctuating number of measurements and the ambiguities in observation-to-track assignments can entail track instabilities culminating in track loss [2].

Most traditional multi-target tracking approaches apply data association techniques such as Global Nearest Neighbor (GNN), see e.g., [3], Joint Probabilistic Data Association (JPDA), see e.g., [4], and Multiple Hypothesis Tracking (MHT), see e.g., [5]. GNN considers all possible associations within the track gates under the constraint that an observation can only be associated with the most likely hypothesis. The GNN approach has low computational complexity but only works well for sparse targets and little clutter in the track gates, see e.g., [6]. The JPDA method allows a track to be updated by all observations in its track gate; however, it assumes that the number of targets remains constant during tracking, see e.g., [7]. The MHT approach hypothesizes all possible data associations over multiple scans and uses subsequent data to resolve the association uncertainty, but it relies heavily on prior information, see e.g., [8]. All of these approaches share a common drawback: their computational complexity grows rapidly as the number of measurements increases.

Closely related to the extended target is the group target, defined as a cluster of point targets which cannot be tracked individually but has to be treated as a single object that can give rise to multiple measurements. Gilholm and Salmond [9] presented an approach for tracking extended targets under the assumption that the number of received target measurements at each time step is Poisson distributed. They show an example in which they track objects with a 1-D extension (an infinitely thin stick of length 1).

In [10], a measurement model was suggested in which an inhomogeneous Poisson-distributed random number of measurements is generated, distributed around the target. This measurement model can be understood to imply that the extended target is sufficiently far away from the sensor for its measurements to resemble a cluster of points rather than a geometrically structured ensemble.

Using Finite Set Statistics (FISST), Mahler presented a rigorous framework for target tracking employing the Probability Hypothesis Density (PHD) filter [11]. A Random Finite Set (RFS) is a finite collection of elements in which not only each constituent element is random but the number of elements is also random [12]. The RFS approach to multi-target tracking is elegant in that the multiple target states and the number of targets are integrated into a single RFS. More importantly, RFS provides a solid foundation for Bayesian multi-target tracking that is not found in traditional multi-target tracking approaches [13, 14]. In the PHD filter, the targets and measurements are treated as RFSs, which allows the problem of estimating multiple targets in clutter with uncertain associations to be cast in a Bayesian filtering framework [11]. By approximating the PHD with a Gaussian Mixture (GM), a practical implementation of the PHD filter for point targets is obtained, called the Gaussian Mixture PHD (GM-PHD) filter [14].

In this paper, we first describe the GM-PHD filter results of [14, 15] and then give a Gamma-Gaussian mixture implementation of it. The Gamma-Gaussian mixture implementation can be reduced to a Gaussian mixture filter by means of a Variational Bayesian approximation. We extend the work in [10, 11] and present the resulting improved GM-PHD filter for multi-target tracking. To the best of our knowledge, such a filter has not been presented before. The remainder of this paper is organized as follows. In Sect. 2, we present the target tracking problem, where the dynamic and measurement models are both assumed to be linear Gaussian. Sections 3 and 4 present the Bayesian framework and the PHD recursion, respectively. The Gamma-Gaussian mixture implementation is presented in Sect. 5. The performance evaluation metric is presented in Sect. 6, and the results of a simulation study are given in Sect. 7. Concluding remarks are given in Sect. 8, and mathematical details on the Variational Bayesian inference are given in the appendix.

2 Multi-target Tracking/Filtering Background

2.1 Problem Formulation

Let \( y_{k}^{(i)} \) denote the state of the i-th target at time k, and let the set of \( N_{k} \) targets at time k be denoted

$$ Y_{k} = \left\{ {y_{k}^{(i)} } \right\}_{i = 1}^{{N_{k} }} , $$
(1)

where \( N_{k} \) is the unknown, time-varying number of targets present.

The set of target-generated measurements obtained at time k is denoted

$$ Z_{k} = \left\{ {z_{k}^{(j)} } \right\}_{j = 1}^{{M_{k} }} , $$
(2)

where \( M_{k} \) is the number of measurements.

The set of target measurements is distributed according to an i.i.d. cluster process. The corresponding set likelihood is given as

$$ P\left( {Z_{k} \mid y_{k} } \right) = M_{k} !\,P_{z} \left( {M_{k} \mid y_{k} } \right)\prod\limits_{{z_{k} \in Z_{k} }} {p_{z} \left( {z_{k} \mid y_{k} } \right)} , $$
(3)

where \( M_{k} = \left| {Z_{k} } \right| \) is the number of measurements, and \( P_{z} \left( {M_{k} \mid y_{k} } \right) \) and \( p_{z} \left( {z_{k} \mid y_{k} } \right) \) denote, respectively, the probability mass function of the cardinality \( M_{k} \) and the likelihood of a single measurement conditioned on the state \( y_{k} \).

2.2 Target Tracking: Linear Dynamical Model

In this section, we present target tracking with a Gamma-Gaussian linear dynamical model. We use the following notation:

  • \( {\mathbb{R}}^{n} \) is the set of real n-vectors.

  • \( g\left( {\gamma ;\alpha ,\beta } \right) \) denotes a gamma probability density function (pdf) defined over \( \gamma > 0 \), with shape \( \alpha > 0 \) and inverse scale \( \beta > 0 \):

    $$ g\left( {\gamma ;\alpha ,\beta } \right) = \frac{{\beta^{\alpha } }}{{\Gamma \left( \alpha \right)}}\gamma^{\alpha - 1} e^{ - \beta \gamma } . $$
    (4)
  • \( {\mathcal{N}}\left( {x;m,P} \right) \) denotes a multivariate Gaussian pdf defined over the vector \( x \in {\mathbb{R}}^{{n_{x} }} \), with mean \( m \in {\mathbb{R}}^{{n_{x} }} \) and covariance matrix \( P \in S_{ + }^{{n_{x} }} \), where \( S_{ + }^{{n_{x} }} \) is the set of symmetric positive semi-definite \( n_{x} \times n_{x} \) matrices:

    $$ {\mathcal{N}}\left( {x;m,P} \right) = \frac{1}{{\left( {2\pi } \right)^{{n_{x} /2}} \left| P \right|^{1/2} }}e^{{ - \frac{1}{2}\left( {x - m} \right)^{T} P^{ - 1} \left( {x - m} \right)}} . $$
    (5)
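
For concreteness, the two densities (4) and (5) can be evaluated as in the following Python sketch (the function names are ours, not part of any library):

```python
import numpy as np
from scipy.special import gammaln

def gamma_pdf(g, alpha, beta):
    # Eq. (4), inverse-scale parameterization; evaluated in log space for stability
    return np.exp(alpha * np.log(beta) - gammaln(alpha)
                  + (alpha - 1.0) * np.log(g) - beta * g)

def gaussian_pdf(x, m, P):
    # Eq. (5), with the (2*pi)^(n_x/2) |P|^(1/2) normalization
    d = np.asarray(x, dtype=float) - np.asarray(m, dtype=float)
    n = d.size
    quad = d @ np.linalg.solve(P, d)
    return np.exp(-0.5 * quad) / np.sqrt((2.0 * np.pi) ** n * np.linalg.det(P))
```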

The linear dynamical model considered in this work is, as described in e.g., [14, 16], characterized by spontaneous birth; for the sake of simplicity, target spawning is omitted. The following assumptions are made about the number of measurements generated by each target.

Assumption 1.

The number of measurements generated by the target \( y_{k} \) is Poisson distributed with a gamma-distributed rate \( \gamma_{k} \).

Note that Assumption 1 is presented for the specific case considered in this paper, with unknown measurement rates, given a set of M observations \( Z = \left\{ {z_{i} } \right\}_{i = 1}^{M} \) as described in Fig. 1.

Fig. 1.

Graphical model for a Variational Bayesian example, where m, p are the parameters of the Gaussian and a, b the parameters of the gamma; \( Z = \left\{ {z_{i} } \right\}_{i = 1}^{M} \) is a set of M observations.
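
Assumption 1 can be illustrated with a short Monte Carlo sketch: the rate is drawn from the gamma prior (4) and the number of measurements from a Poisson distribution with that rate (the parameter values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_num_measurements(alpha, beta, rng):
    # Assumption 1: gamma-distributed rate, then a Poisson count.
    # NumPy parameterizes the gamma by scale = 1/beta (beta = inverse scale of Eq. (4)).
    rate = rng.gamma(shape=alpha, scale=1.0 / beta)
    return rng.poisson(rate)

counts = [sample_num_measurements(alpha=8.0, beta=1.0, rng=rng) for _ in range(10_000)]
print(np.mean(counts))  # close to the prior mean rate alpha/beta = 8
```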

Assumption 2.

The measurement rate \( \gamma_{k} \) is conditionally independent of target kinematics and positions.

Since it is, in general, unknown how \( \gamma_{k} \) evolves over time conditioned on the kinematics and extension, we adopt this assumption, which also helps us avoid notational inconsistency.

Following Assumptions 1 and 2, and as shown in Fig. 1, the target state \( y_{k} \) is defined as:

$$ y_{k} \triangleq \left( {\gamma_{k} ,x_{k} } \right), $$
(6)

where \( \gamma_{k} > 0 \) is the measurement rate and \( x_{k} \in {\mathbb{R}}^{n} \) collects the position and kinematic states.

We model \( x_{k} \) by the dynamical process

$$ x_{k} = Ax_{k - 1} + Bw_{k} , $$
(7)

where

$$ x_{k} = \left[ {\xi_{k}^{T} ,\phi_{k}^{T} } \right]^{T} , $$
(8)

\( \xi_{k} \) is the position state and \( \phi_{k} \) the kinematic state. A and B are pre-specified matrices defining the linear dynamical process, and \( w_{k} \) is a time-uncorrelated Gaussian random vector with zero mean and covariance I.

Following [17], in the extended target state case, the target state \( y_{k} \), conditioned on \( Z_{k} \), is modeled as Gamma-Gaussian distributed:

$$ P\left( {y_{k} \mid Z_{k} } \right) = P\left( {\gamma_{k} \mid Z_{k} } \right) \cdot P\left( {x_{k} \mid Z_{k} } \right), $$
(9a)
$$ P\left( {y_{k} \mid Z_{k} } \right) = GaG\left( {y_{k} ;\varphi_{k} } \right), $$
(9b)

where

$$ \varphi_{k} = \left\{ {\alpha_{k|k} ,\beta_{k|k} ,m_{k|k} ,p_{k|k} } \right\}. $$
(10)
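
In the implementation sketches that follow, we carry the parameter set (10), together with a mixture weight used from Sect. 5 onward, in a small container. This class is our own illustration, not prescribed by the paper:

```python
from dataclasses import dataclass
import numpy as np

@dataclass(eq=False)  # identity-based equality keeps NumPy array fields unproblematic
class GaGComponent:
    """One Gamma-Gaussian component, phi = {alpha, beta, m, p} of Eq. (10)."""
    w: float          # mixture weight
    alpha: float      # gamma shape of the measurement rate
    beta: float       # gamma inverse scale of the measurement rate
    m: np.ndarray     # Gaussian mean of the kinematic state
    p: np.ndarray     # Gaussian covariance of the kinematic state
```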

2.3 Transition Density and Measurement Likelihood

The state transition density \( p\left( {y_{k + 1} \mid y_{k} } \right) \) describes the time evolution of the target state and measurement rate from time k to time k + 1.

In Bayesian state estimation, the prediction step consists of solving the Chapman-Kolmogorov equation:

$$ p\left( {y_{k + 1} \mid Z_{k} } \right) = \int {p\left( {y_{k + 1} \mid y_{k} } \right)p\left( {y_{k} \mid Z_{k} } \right)dy_{k} } . $$
(11)

The following assumption is made about the transition density.

Assumption 3.

The target state transition density satisfies:

$$ p\left( {y_{k + 1} \mid y_{k} } \right) \approx p^{\gamma } \left( {\gamma_{k + 1} \mid \gamma_{k} } \right)p^{x} \left( {x_{k + 1} \mid x_{k} } \right) $$
(12)

for all \( \xi_{k} ,\xi_{k + 1} ,\phi_{k} \) and \( \phi_{k + 1} \).

For consistency, independent dynamical models are assumed to describe the transition density of each target. The approximation of predicting the measurement rate independently of the kinematic and extension states is inherited from [16].

The following assumptions are made about the measurement likelihood.

Assumption 4.

The individual measurement likelihood, given as \( p_{z} \left( {z_{k} \mid y} \right) \) in (3), does not depend on the measurement rate \( \gamma_{k} \):

$$ p_{z} \left( {z_{k} \mid y} \right) = p_{z} \left( {z_{k} \mid x} \right). $$
(13a)

The reason for this is that the likelihood of the measurement rate enters as a multiplicative term in (3). In summary, the individual likelihood function is defined as:

$$ P\left( {z_{k} \mid y} \right) = p_{\gamma } \left( {z_{k} \mid \gamma } \right)p_{z} \left( {z_{k} \mid x} \right). $$
(13b)

Assumption 5.

Each target’s kinematic and position states follow a linear Gaussian dynamical model.

Assumption 6.

The sensor has a linear Gaussian measurement model. In particular, we use the linear Gaussian model suggested in [14]:

$$ x_{k + 1} = F_{k} x_{k} + v_{k} , $$
(14)
$$ z_{k} = H_{k} x_{k} + w_{k} . $$
(15)

where \( F_{k} \) and \( H_{k} \) are the transition matrix and the observation matrix, respectively. The random sequences \( v_{k} \) and \( w_{k} \) are mutually independent zero-mean white Gaussian noises with covariances \( Q_{k} \) and \( R_{k} \), respectively. Details on \( F_{k} \), \( H_{k} \), \( Q_{k} \) and \( R_{k} \) are given in Sect. 7.
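
A minimal simulation of the model (14)-(15), assuming F, H, Q, R are given (their concrete values appear in Sect. 7), might look as follows:

```python
import numpy as np

def simulate(F, H, Q, R, x0, n_steps, rng):
    # Propagate Eq. (14) and generate one measurement per step via Eq. (15)
    xs, zs = [np.asarray(x0, dtype=float)], []
    for _ in range(n_steps):
        x = F @ xs[-1] + rng.multivariate_normal(np.zeros(F.shape[0]), Q)
        z = H @ x + rng.multivariate_normal(np.zeros(H.shape[0]), R)
        xs.append(x)
        zs.append(z)
    return np.array(xs[1:]), np.array(zs)
```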

3 Bayesian Frameworks

Multi-target tracking can be modeled in a random finite set framework. Comments on the appropriateness of this model can be found in e.g., [13, 14]. Its mathematical formulation will be expanded upon in future work.

We consider a standard state model for a single-target tracking problem. Our goal is to estimate \( y_{k} \) over time. In the sequential Bayesian framework, we assume knowledge of the state transition density and the likelihood function, denoted \( P\left( {y_{k + 1} \mid y_{k} } \right) \) and \( P\left( {z_{k} \mid y_{k} } \right) \) respectively. The number of measurements generated by the target \( y_{k} \) is Poisson distributed with a gamma-distributed rate parameter.

The Bayesian approach considered here is to find the posterior pdf \( P\left( {y_{k} \mid z_{1:k} } \right) \), from which \( y_{k} \) can be estimated. The posterior pdf obeys the following recursion:

$$ P_{k + 1|k} \left( {y_{k + 1} \mid z_{1:k} } \right) = \int {P\left( {y_{k + 1} \mid y} \right)} P\left( {y \mid z_{1:k} } \right)dy, $$
(16)
$$ P_{k + 1} \left( {y_{k + 1} \mid z_{1:k + 1} } \right) = \frac{{P\left( {z_{k + 1} \mid y_{k + 1} } \right)P_{k + 1|k} \left( {y_{k + 1} \mid z_{1:k} } \right)}}{{\int {P\left( {z_{k + 1} \mid y} \right)P_{k + 1|k} \left( {y \mid z_{1:k} } \right)dy} }}. $$
(17)
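
On a discretized state space, the recursion (16)-(17) reduces to matrix-vector operations; the following sketch (our illustration, for a finite grid) makes the two steps explicit:

```python
import numpy as np

def bayes_recursion(prior, transition, likelihood):
    """One prediction-update cycle of Eqs. (16)-(17) on a finite state grid.

    prior:      vector, p(y_k | z_{1:k}) over grid points
    transition: matrix, transition[i, j] = p(y_{k+1} = i | y_k = j)
    likelihood: vector, likelihood[i] = p(z_{k+1} | y_{k+1} = i)
    """
    predicted = transition @ prior          # Chapman-Kolmogorov step, Eq. (16)
    posterior = likelihood * predicted      # Bayes numerator, Eq. (17)
    return posterior / posterior.sum()      # normalization = denominator of (17)
```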

In the RFS Bayesian framework described in [12], we consider determining the posterior pdf \( P_{k + 1} \left( {Y_{k + 1} \mid Z_{1:k + 1} } \right) \), thereby estimating \( Y_{k} \) over time. It satisfies a recursion reminiscent of the prediction and update formulae in [14], given as follows:

$$ P_{k + 1|k} \left( {Y_{k + 1} \mid Z_{1:k} } \right) = \int {P\left( {Y_{k + 1} \mid Y} \right)} P\left( {Y \mid Z_{1:k} } \right)\mu_{S} \left( {dY} \right), $$
(18)
$$ P_{k + 1} \left( {Y_{k + 1} \mid Z_{1:k + 1} } \right) = \frac{{P\left( {Z_{k + 1} \mid Y_{k + 1} } \right)P_{k + 1|k} \left( {Y_{k + 1} \mid Z_{1:k} } \right)}}{{\int {P\left( {Z_{k + 1} \mid Y} \right)P_{k + 1|k} \left( {Y \mid Z_{1:k} } \right)\mu_{S} (dY)} }}. $$
(19)

where \( \mu_{S} \) is an appropriate reference measure on the collection of all finite subsets of the state space.

4 PHD Recursion

Let \( v_{k|k - 1} \) and \( v_{k} \) denote the intensities associated with the multi-target predicted density and the multi-target posterior density in the recursions (20) and (21), respectively. The predicted PHD \( v_{k|k - 1} (y) \) is the sum of the PHD of the predicted existing targets and the birth PHD.

$$ v_{k|k - 1} (y) = \int {p_{S,k} \left( \phi \right)} p\left( {y \mid \phi } \right)v_{k - 1} \left( \phi \right)d\phi + \varGamma_{k} \left( y \right), $$
(20)
$$ v_{k} \left( y \right) = \left[ {1 - p_{D,k} \left( y \right)} \right]v_{k|k - 1} \left( y \right) + \sum\limits_{{z \in Z_{k} }} {\frac{{p_{D,k} \left( y \right)p\left( {z \mid y} \right)v_{k|k - 1} \left( y \right)}}{{K_{k} \left( z \right) + \int {p_{D,k} \left( \xi \right)p\left( {z \mid \xi } \right)v_{k|k - 1} \left( \xi \right)d\xi } }}} . $$
(21)

5 GaGM-PHD Filter

Here, we present a Gamma-Gaussian mixture implementation of the PHD filter for target tracking.

5.1 Assumptions

In order to derive the prediction and update steps for the GaGM-PHD filter, a number of assumptions are made here in addition to the assumptions already described.

Assumption 7.

The current estimated PHD \( v_{k - 1} \left( \cdot \right) \) is an un-normalized mixture of GaG distributions:

$$ v_{k - 1} \left( y \right) = \sum\limits_{i = 1}^{{J_{k - 1} }} {w_{k - 1}^{\left( i \right)} } GaG\left( {y;\varphi_{k - 1}^{\left( i \right)} } \right), $$
(22)

where \( J_{k - 1} \) is the number of components, \( w_{k - 1}^{\left( i \right)} \) is the weight of the i-th component, and \( \varphi_{k - 1}^{\left( i \right)} \) is the density parameter of the i-th component.

Note that Assumption 7 is presented for the estimation of both the measurement rate and the kinematic state.

Assumption 8.

The intensity of the birth RFS is an un-normalized mixture of GaG distributions.

Assumption 9.

Similarly to [14], the survival probability and the detection probability are state independent, i.e.,

$$ p_{S,k} \left( y \right) = p_{S} . $$
(23a)
$$ p_{D,k} \left( y \right) = p_{D} . $$
(23b)

Remark 1.

Note that Assumption 8 models the intensity of target birth based on the measurement rate. The details of this model are given in (33), and further explanations are given in Remark 2.

Assumption 9 is commonly used in many tracking algorithms [1, 8]. For clarity of presentation, we focus only on state-independent \( p_{S} \) and \( p_{D} \).

5.2 Prediction

The PHD corresponding to the prediction of existing targets is given by:

$$ v_{S,k|k - 1} (y) = \int {p_{S,k} \left( \phi \right)} p\left( {y \mid \phi } \right)v_{k - 1} \left( \phi \right)d\phi . $$
(24)

Using (12), (22) and the assumptions above, the integral simplifies to:

$$ \begin{aligned} v_{S,k|k - 1} (y) & = p_{S} \sum\limits_{i = 1}^{{J_{k - 1} }} {w_{k - 1}^{\left( i \right)} } \int {g\left( {\gamma ;\alpha_{k - 1}^{\left( i \right)} ,\beta_{k - 1}^{\left( i \right)} } \right)} p_{k|k - 1}^{\left( \gamma \right)} \left( {\gamma_{k} \mid \gamma_{k - 1} } \right)d\gamma \\ & \quad \times \int {{\mathcal{N}}\left( {x;m_{k - 1}^{\left( i \right)} ,p_{k - 1}^{\left( i \right)} } \right)} p_{k|k - 1}^{x} \left( {x_{k} \mid x_{k - 1} } \right)dx \\ \end{aligned} $$
(25)

We assume that:

$$ \int {g\left( {\gamma ;\alpha_{k - 1}^{\left( i \right)} ,\beta_{k - 1}^{\left( i \right)} } \right)} p_{k|k - 1}^{\left( \gamma \right)} \left( {\gamma_{k} \mid \gamma_{k - 1} } \right)d\gamma = g\left( {\gamma ;\alpha_{k|k - 1}^{\left( i \right)} ,\beta_{k|k - 1}^{\left( i \right)} } \right). $$
(26a)

Using the linear Gaussian model given in (14), we have:

$$ \int {{\mathcal{N}}\left( {x;m_{k - 1}^{\left( i \right)} ,p_{k - 1}^{\left( i \right)} } \right)} p_{k|k - 1}^{x} \left( {x_{k} \mid x_{k - 1} } \right)dx = {\mathcal{N}}\left( {x;m_{S,k|k - 1}^{\left( i \right)} ,p_{S,k|k - 1}^{\left( i \right)} } \right), $$
(26b)

where

$$ m_{S,k|k - 1}^{\left( i \right)} = F_{k - 1} m_{k - 1}^{\left( i \right)} , $$
(27)
$$ p_{S,k|k - 1}^{\left( i \right)} = Q_{k - 1} + F_{k - 1} p_{k - 1}^{\left( i \right)} F_{k - 1}^{T} . $$
(28)

Substituting (26a) and (26b) gives:

$$ v_{S,k|k - 1} (y) = p_{S} \sum\limits_{i = 1}^{{J_{k - 1} }} {w_{k - 1}^{\left( i \right)} } g\left( {\gamma ;\alpha_{k|k - 1}^{\left( i \right)} ,\beta_{k|k - 1}^{\left( i \right)} } \right){\mathcal{N}}\left( {x;m_{S,k|k - 1}^{\left( i \right)} ,p_{S,k|k - 1}^{\left( i \right)} } \right). $$
(29)

Let us define:

$$ \varPi \left( {\gamma ,x} \right) = g\left( {\gamma ;\alpha_{k|k - 1}^{\left( i \right)} ,\beta_{k|k - 1}^{\left( i \right)} } \right){\mathcal{N}}\left( {x;m_{S,k|k - 1}^{\left( i \right)} ,p_{S,k|k - 1}^{\left( i \right)} } \right). $$
(30a)

Through the Variational Bayesian approximation presented in the appendix, we obtain:

$$ \varPi \left( {\gamma ,x} \right) \approx \varPi \left( x \right) = {\mathcal{N}}\left( {x;M_{S,k|k - 1}^{\left( i \right)} ,P_{S,k|k - 1}^{\left( i \right)} } \right), $$
(30b)

where:

$$ M_{S,k|k - 1}^{\left( i \right)} = F_{k - 1} m_{k - 1}^{\left( i \right)} + \frac{{P_{S,k|k - 1}^{\left( i \right)} }}{{p_{S,k|k - 1}^{\left( i \right)} }}, $$
(31)
$$ P_{S,k|k - 1}^{\left( i \right)} = \frac{1}{2}p_{S,k|k - 1}^{\left( i \right)} . $$
(32)

The un-normalized mixture is thus converted into a Gaussian mixture.

The intensity of the birth RFS is given by:

$$ \varGamma_{k} \left( x \right) = \sum\limits_{i = 1}^{{J_{b,k} }} {w_{b,k}^{\left( i \right)} } {\mathcal{N}}\left( {x;M_{b,k}^{\left( i \right)} ,P_{b,k}^{\left( i \right)} } \right). $$
(33)

Summarizing these results, the predicted intensity for time k is a Gaussian mixture of the form

$$ v_{k|k - 1} \left( x \right) = v_{S,k|k - 1} \left( x \right) +\Gamma _{k} \left( x \right), $$
(34a)
$$ v_{k|k - 1} \left( x \right) = \sum\limits_{i = 1}^{{J_{k|k - 1} }} {w_{k|k - 1}^{\left( i \right)} {\mathcal{N}}\left( {x;M_{k|k - 1}^{\left( i \right)} ,P_{k|k - 1}^{\left( i \right)} } \right)} , $$
(34b)

where \( J_{k|k - 1} = J_{k - 1} + J_{b,k} . \)
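
A sketch of the resulting prediction step is given below, using the GaGComponent container sketched after Eq. (10). The gamma prediction stands in for (26a) via an exponential-forgetting factor eta (a common heuristic and our assumption here, which keeps the rate mean alpha/beta fixed while inflating its variance); the VB mean correction of Eq. (31) is omitted, since its matrix form is ambiguous as printed:

```python
import numpy as np

def gagm_phd_predict(components, births, F, Q, p_S, eta=1.25):
    # Survival prediction (27)-(29) plus the birth components, Eq. (34a)
    predicted = []
    for c in components:
        predicted.append(GaGComponent(
            w=p_S * c.w,                 # survival weighting from Eq. (29)
            alpha=c.alpha / eta,         # forgetting: mean alpha/beta unchanged,
            beta=c.beta / eta,           # variance alpha/beta^2 increased
            m=F @ c.m,                   # Eq. (27)
            p=Q + F @ c.p @ F.T))        # Eq. (28)
    return predicted + list(births)      # J_{k|k-1} = J_{k-1} + J_{b,k}
```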

Remark 2.

In (33), \( M_{b,k}^{\left( i \right)} ,i = 1, \ldots ,J_{b,k} \) are the peaks of the spontaneous birth intensity. These points have the highest local concentrations of the expected number of spontaneous births, and represent, for example, airbases or airports where targets are most likely to appear. The covariance matrix \( P_{b,k}^{\left( i \right)} \) determines the spread of the birth intensity in the vicinity of the peak \( M_{b,k}^{\left( i \right)} \). The weight \( w_{b,k}^{\left( i \right)} \) gives the expected number of new targets originating from \( M_{b,k}^{\left( i \right)} \).

5.3 Update

The posterior intensity at time k is also a Gaussian mixture given by:

$$ v_{k} \left( x \right) = \left( {1 - p_{D} } \right)v_{k|k - 1} \left( x \right) + \sum\limits_{{z \in Z_{k} }} {\mu_{D,k} \left( {x;z} \right)} , $$
(35a)

where

$$ \mu_{D,k} \left( {x;z} \right) = \sum\limits_{j = 1}^{{J_{k|k - 1} }} {w_{k}^{\left( j \right)} } \left( z \right){\mathcal{N}}\left( {x;M_{k|k}^{\left( j \right)} ,P_{k|k}^{\left( j \right)} } \right), $$
(35b)
$$ w_{k}^{\left( j \right)} \left( z \right) = \frac{{p_{D} w_{k|k - 1}^{\left( j \right)} \psi_{k}^{\left( j \right)} \left( z \right)}}{{K_{k} \left( z \right) + p_{D} \sum\limits_{l = 1}^{{J_{k|k - 1} }} {w_{k|k - 1}^{\left( l \right)} \psi_{k}^{\left( l \right)} \left( z \right)} }}, $$
(35c)
$$ \psi_{k}^{\left( j \right)} \left( z \right) = {\mathcal{N}}\left( {z;H_{k} M_{k|k - 1}^{\left( j \right)} ,R_{k} + H_{k} P_{k|k - 1}^{\left( j \right)} H_{k}^{T} } \right), $$
(35d)
$$ M_{k|k}^{\left( j \right)} \left( z \right) = M_{k|k - 1}^{\left( j \right)} + C_{k}^{\left( j \right)} \left( {z - H_{k} M_{k|k - 1}^{\left( j \right)} } \right), $$
(35e)
$$ P_{k|k}^{\left( j \right)} = \left[ {I - C_{k}^{\left( j \right)} H_{k} } \right]P_{k|k - 1}^{\left( j \right)} , $$
(35f)
$$ C_{k}^{\left( j \right)} = P_{k|k - 1}^{\left( j \right)} H_{k}^{T} \left[ {H_{k} P_{k|k - 1}^{\left( j \right)} H_{k}^{T} + R_{k} } \right]^{ - 1} , $$
(35g)

and \( K_{k} \left( z \right) \) is the clutter intensity of (21).
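
A sketch of the update step (35a)-(35g) follows, operating on the Gaussian part of the components; in this simplified version the gamma parameters are carried through unchanged, gaussian_pdf is the helper sketched in Sect. 2.2, and clutter_intensity is \( K_{k} (z) \):

```python
import numpy as np

def gagm_phd_update(predicted, Z, H, R, p_D, clutter_intensity):
    # Missed-detection terms: the first summand of Eq. (35a)
    updated = [GaGComponent(w=(1.0 - p_D) * c.w, alpha=c.alpha, beta=c.beta,
                            m=c.m.copy(), p=c.p.copy()) for c in predicted]
    for z in Z:
        new_comps, numerators = [], []
        for c in predicted:
            S = H @ c.p @ H.T + R                      # innovation covariance of Eq. (35d)
            K = c.p @ H.T @ np.linalg.inv(S)           # gain C_k^{(j)}, Eq. (35g)
            psi = gaussian_pdf(z, H @ c.m, S)          # Eq. (35d)
            numerators.append(p_D * c.w * psi)
            new_comps.append(GaGComponent(
                w=0.0,                                 # filled in after normalization
                alpha=c.alpha, beta=c.beta,
                m=c.m + K @ (z - H @ c.m),             # Eq. (35e)
                p=(np.eye(len(c.m)) - K @ H) @ c.p))   # Eq. (35f)
        denom = clutter_intensity + sum(numerators)    # denominator of Eq. (35c)
        for nc, num in zip(new_comps, numerators):
            nc.w = num / denom
        updated.extend(new_comps)
    return updated
```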

5.4 Pruning and Merging

Details on the pruning and merging step of the GM-PHD filter can be found in e.g., [13].
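
For completeness, a sketch of the pruning and merging rule commonly used for the GM-PHD filter [14] follows; the default thresholds match those of Sect. 7, and taking the gamma parameters of a merged component from its strongest member is our simplification:

```python
import numpy as np

def prune_and_merge(components, w_thr=1e-5, merge_thr=4.0, max_comps=100):
    comps = [c for c in components if c.w > w_thr]     # pruning of weak components
    merged = []
    while comps:
        lead = max(comps, key=lambda c: c.w)           # strongest remaining component
        close = [c for c in comps                      # Mahalanobis gating around it
                 if (c.m - lead.m) @ np.linalg.inv(c.p) @ (c.m - lead.m) <= merge_thr]
        w = sum(c.w for c in close)
        m = sum(c.w * c.m for c in close) / w          # moment-matched merged mean
        p = sum(c.w * (c.p + np.outer(m - c.m, m - c.m)) for c in close) / w
        merged.append(GaGComponent(w=w, alpha=lead.alpha, beta=lead.beta, m=m, p=p))
        comps = [c for c in comps if c not in close]   # identity-based removal
    merged.sort(key=lambda c: -c.w)
    return merged[:max_comps]                          # cap the number of components
```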

6 Performance Evaluation

In this section, we discuss the metric used to evaluate the performance of the GaGM-PHD filter converted into the improved GM-PHD filter.

The performance of a multi-target tracking algorithm can be measured by a distance between two finite sets representing the true target states and the corresponding point estimates produced by the algorithm, see e.g., [19, 20].

The OMAT metric suffers from several consistency problems: geometry-dependent behavior, a contrived construction for differing cardinalities (undefined if either cardinality is zero), and incompatibility with the mathematical theory of metrics. To address these issues, Schuhmacher et al. developed the OSPA metric, a performance metric based on Wasserstein’s construction that eliminates most of the aforementioned problems of the OMAT metric, see e.g., [18].

Let the true multi-target state finite set be \( X = \left\{ {x_{1} , \ldots ,x_{n} } \right\} \) and the corresponding set of point estimates be

$$ Y = \left\{ {y_{1} , \ldots ,y_{m} } \right\}\;{\text{with}}\;m,n \in N_{0} = \left\{ {0,1,2, \ldots } \right\}. $$

Let \( \pi_{k} \) denote the set of permutations on \( \left\{ {1,2, \ldots ,k} \right\} \) for any \( k \in \left\{ {1,2, \ldots } \right\} \). The order parameter is constrained as \( 1 \le p < \infty \) and the cut-off parameter as \( c > 0 \). The order p and the cut-off c together control the behavior of the metric: the order determines how severely larger errors are punished (as p increases, outliers are more heavily penalized), and the cut-off determines the maximum error assigned to any single point.

In the context of multi-target performance evaluation, the OSPA distance is an error comprising two components, separately accounting for “localization” and “cardinality” errors.

For \( p < \infty \) and \( m \le n \), the localization and cardinality components are given by (36a) and (36b), and the full OSPA distance combines them as in (36c):

$$ e_{p,loc}^{\left( c \right)} \left( {X,Y} \right) = \left[ {\frac{1}{n}\mathop {\hbox{min} }\limits_{{\pi \in \pi_{n} }} \sum\limits_{i = 1}^{m} {d^{\left( c \right)} \left( {x_{i} ,y_{\pi \left( i \right)} } \right)^{p} } } \right]^{1/p} , $$
(36a)
$$ e_{p,card}^{\left( c \right)} \left( {X,Y} \right) = \left[ {\frac{{c^{p} \left( {n - m} \right)}}{n}} \right]^{1/p} , $$
(36b)
$$ d_{p}^{\left( c \right)} \left( {X,Y} \right) = \left[ {\frac{1}{n}\left( {\mathop {\hbox{min} }\limits_{{\pi \in \pi_{n} }} \sum\limits_{i = 1}^{m} {d^{\left( c \right)} \left( {x_{i} ,y_{\pi \left( i \right)} } \right)^{p} } + c^{p} \left( {n - m} \right)} \right)} \right]^{1/p} , $$
(36c)

where \( d^{\left( c \right)} \left( {x,y} \right) \) denotes the distance between the points x and y, subject to the maximum cut-off c:

$$ d^{\left( c \right)} \left( {x,y} \right) = \hbox{min} \left( {c,d\left( {x,y} \right)} \right),\quad d\left( {x,y} \right) = \sqrt {\left( {x - y} \right)^{T} \left( {x - y} \right)} . $$
(36d)

These expressions hold for \( m \le n \); if \( m > n \), one sets \( d_{p}^{\left( c \right)} \left( {X,Y} \right) = d_{p}^{\left( c \right)} \left( {Y,X} \right) \). Moreover, for \( p = \infty \),

$$ d_{\infty }^{\left( c \right)} \left( {X,Y} \right) = \mathop {\hbox{min} }\limits_{{\pi \in \pi_{n} }} \mathop {\hbox{max} }\limits_{1 \le i \le n} d^{\left( c \right)} \left( {x_{i} ,y_{\pi \left( i \right)} } \right)\quad {\text{if}}\;m = n, $$
(36e)
$$ d_{\infty }^{\left( c \right)} \left( {X,Y} \right) = c\quad {\text{if}}\;m \ne n. $$
(36f)

In either case, the distance is defined to be zero if \( m = n = 0 \). The function \( d_{p}^{\left( c \right)} \) is called the OSPA metric of order p with cut-off c, where OSPA stands for Optimal SubPattern Assignment.
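
For \( p < \infty \), the OSPA distance (36a)-(36d) can be computed exactly with the Hungarian algorithm; a minimal sketch:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, p=1.0, c=100.0):
    # OSPA distance d_p^(c)(X, Y); X and Y are (num_points, dim) arrays
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return c
    if m > n:                                          # enforce m <= n via symmetry
        X, Y, m, n = Y, X, n, m
    D = np.minimum(c, np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2))
    rows, cols = linear_sum_assignment(D ** p)         # optimal sub-pattern assignment
    loc = np.sum(D[rows, cols] ** p)                   # localization part, Eq. (36a)
    return ((loc + c ** p * (n - m)) / n) ** (1.0 / p) # plus cardinality part, Eq. (36b)
```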

7 Simulation Results and Discussion

Two simulation examples are used to test the proposed improved GM-PHD filter. For both simulations, the surveillance area is \( \left[ { - 1000\,{\text{m}},1000\,{\text{m}}} \right] \times \left[ { - 1000\,{\text{m}},1000\,{\text{m}}} \right] \). The probability of survival is set to \( p_{S} = 0.99 \), and the probability of detection is \( p_{D} = 0.98 \). The target state vector \( x_{k} \) at time k comprises the position \( \left[ {p_{x,k} ,p_{y,k} } \right]^{T} \) and the velocity \( \left[ {v_{x,k} ,v_{y,k} } \right]^{T} \), i.e. \( x_{k} = \left[ {p_{x,k} ,p_{y,k} ,v_{x,k} ,v_{y,k} } \right]^{T} \).

Each target follows the linear system model described in (14) and (15) with

$$ F_{k} = \left[ {\begin{array}{*{20}c} {I_{2} } & {TI_{2} } \\ {0_{2} } & {I_{2} } \\ \end{array} } \right],\quad H_{k} = \left[ {\begin{array}{*{20}c} {I_{2} } & {0_{2} } \\ \end{array} } \right] $$

where \( T = 1\,{\text{s}} \) is the sampling period, and \( I_{2} \) and \( 0_{2} \) denote the \( 2 \times 2 \) identity and zero matrices respectively.

The process noise \( v_{k} \) and the observation noise \( w_{k} \) are both zero-mean Gaussian noises with covariances \( Q_{k} \) and \( R_{k} \) respectively, i.e.

$$ Q_{k} = \sigma_{v}^{2} \left[ {\begin{array}{*{20}c} {0.25T^{4} I_{2} } & {0.5T^{3} I_{2} } \\ {0.5T^{3} I_{2} } & {T^{2} I_{2} } \\ \end{array} } \right],\quad R_{k} = \sigma_{ \in }^{2} I_{2} $$

where \( \sigma_{v} = 5\,{\text{m/s}}^{2} \) is the standard deviation of the process noise and \( \sigma_{ \in } = 10\,{\text{m}} \) is the standard deviation of the measurement noise. The detected measurements are corrupted by clutter, modeled as a Poisson RFS with intensity \( K_{k} \left( z \right) = \lambda_{c} Vu(z) \), where \( V = 4 \times 10^{6} \,{\text{m}}^{2} \) is the volume of the surveillance region, \( \lambda_{c} = 12.5 \times 10^{ - 6} \,{\text{m}}^{ - 2} \) is the average number of clutter returns per unit volume, and \( u\left( \cdot \right) \) is the uniform density over the surveillance region. In the pruning and merging step, the weight threshold is \( T_{w.thr} = 10^{ - 5} \) and the merge threshold is \( T_{m.thr} = 4 \). In the state extraction step, the threshold is \( T_{s.thr} = 0.5 \). Estimated targets with a velocity greater than \( v_{\hbox{max} } = 100\,{\text{m/s}} \) are deleted.
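
The parameters of this section translate directly into code; a sketch of the setup, with the values taken from the text above:

```python
import numpy as np

T = 1.0                                   # sampling period [s]
I2, O2 = np.eye(2), np.zeros((2, 2))

F = np.block([[I2, T * I2],               # transition matrix F_k
              [O2, I2]])
H = np.block([I2, O2])                    # observation matrix H_k

sigma_v, sigma_eps = 5.0, 10.0            # process / measurement noise std
Q = sigma_v ** 2 * np.block([[0.25 * T ** 4 * I2, 0.5 * T ** 3 * I2],
                             [0.5 * T ** 3 * I2, T ** 2 * I2]])
R = sigma_eps ** 2 * I2

p_S, p_D = 0.99, 0.98                     # survival / detection probabilities
V, lambda_c = 4e6, 12.5e-6                # surveillance volume [m^2], clutter density [1/m^2]
clutter_intensity = lambda_c * V * (1.0 / V)   # K_k(z) = lambda_c * V * u(z), u(z) = 1/V

T_w_thr, T_m_thr, T_s_thr = 1e-5, 4.0, 0.5     # pruning / merging / extraction thresholds
v_max = 100.0                                  # velocity gate [m/s]
```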

Example 1.

In this first example, we consider three maneuvering targets with closely spaced birth positions. For convenience of description, the targets are numbered 1 to 3. The initial and final positions of the targets are given in Table 1.

Table 1. Coordinates of the initial and final positions of targets in example 1

The spontaneous birth RFS is Poisson with intensity

$$ \varGamma_{k} \left( x \right) = 0.1{\mathcal{N}}\left( {x;m_{\gamma }^{1} ,p_{\gamma } } \right) + 0.1{\mathcal{N}}\left( {x;m_{\gamma }^{2} ,p_{\gamma } } \right) + 0.1{\mathcal{N}}\left( {x;m_{\gamma }^{3} ,p_{\gamma } } \right), $$

where:

$$ m_{\gamma }^{1} = \left[ { - 200,100,0,0} \right]^{T} , $$
$$ m_{\gamma }^{2} = \left[ { - 200, - 200,0,0} \right]^{T} , $$
$$ m_{\gamma }^{3} = \left[ { - 100, - 100,0,0} \right]^{T} $$

and

$$ p_{\gamma } = diag\left( {\left[ {100,100,25,25} \right]^{T} } \right). $$

To illustrate control over the time at which the estimation process starts, we did not display the estimated tracks for the first five time steps.

Figure 2 shows the cluttered measurement set used in the simulation together with the ground truth. Figure 3 plots the x and y estimates of the targets given by the improved GM-PHD filter compared with the ground truth. Figure 3 shows that the improved GM-PHD filter provides good estimates of the targets’ positions and picks up only a few false alarms. Furthermore, it shows that the number of false tracks can be reduced by using the new improved GM-PHD filter.

Fig. 2.

Measurements and true target positions in example 1

Fig. 3.

Position estimates of the improved Gaussian Mixture PHD filter in example 1

Quantitatively, the efficiency of the point state estimates is better measured in terms of the multi-target miss-distance between two finite sets representing the true target positions and their respective estimates. Figure 4 shows the tracking performance of the GM-PHD filter and the improved GM-PHD filter under the OSPA metric. For the improved GM-PHD filter, Fig. 4 shows that the OSPA error is small except at a few time steps where the position estimates do not match the true target positions. The OSPA error is considerably higher for the position estimates given by the GM-PHD filter. From this result, we can conclude that the improved GM-PHD filter is the more efficient of the two when the targets are closely spaced.

Fig. 4.

OSPA error metric for the GM-PHD filter and improved GM-PHD filter in example 1

Example 2.

In this second example, we consider two maneuvering targets with nearly opposite birth positions. The initial and final positions of the targets are given in Table 2.

Table 2. Coordinates of the initial and final positions of targets in example 2

The intensity of the birth Poisson RFS is given by:

$$ \varGamma_{k} \left( x \right) = 0.1{\mathcal{N}}\left( {x;m_{\gamma }^{4} ,p_{\gamma } } \right) + 0.1{\mathcal{N}}\left( {x;m_{\gamma }^{5} ,p_{\gamma } } \right), $$

where:

$$ m_{\gamma }^{4} = \left[ { - 500, - 900,0,0} \right]^{T} , $$
$$ m_{\gamma }^{5} = \left[ {700,500,0,0} \right]^{T} , $$

and

$$ p_{\gamma } = diag\left( {\left[ {100,100,25,25} \right]^{T} } \right). $$

Figure 5 shows the cluttered measurement set used in the simulation and the ground-truth positions. Figure 6 plots the x and y estimates of the targets given by the improved GM-PHD filter compared with the ground truth. The estimates of the target positions produced by the improved GM-PHD filter match the true target tracks closely. Furthermore, Fig. 6 shows that the improved GM-PHD filter is able to initiate birth positions, track locations, and identify death positions of the targets reasonably well. Figure 7 shows the tracking performance of the GM-PHD filter and the improved GM-PHD filter under the OSPA error metric. The OSPA error is considerably higher for the position estimates given by the GM-PHD filter, showing that the improved GM-PHD filter picks up fewer false tracks than the GM-PHD filter.

Fig. 5.

Measurements and true target positions in example 2

Fig. 6.

Position estimates of the improved Gaussian Mixture PHD filter in example 2

Fig. 7.

OSPA error metric for the GM-PHD filter and improved GM-PHD filter in example 2

The estimation results confirm that the performance of the improved GM-PHD filter is not sensitive to the targets’ positions: whether or not the targets are closely spaced, their positions are estimated efficiently, and the comparison presented in Figs. 4 and 7 shows the superior performance of the improved GM-PHD filter. The simulation results in Sect. 7 thus show that the improved GM-PHD filter can provide good track-valued estimates of targets in addition to recursively providing good point estimates of the target states.

8 Conclusion

This paper presents a novel prediction algorithm for the Gaussian Mixture Probability Hypothesis Density filter, evaluated first in a closely spaced target environment and then in a widely separated target environment, to improve multi-target state estimation. The proposed algorithm models the measurement rates as gamma-distributed and time-varying across estimation steps. By means of Variational Bayesian estimation, the Gamma-Gaussian mixture is converted into a Gaussian mixture with new mean and covariance variables. The simulation results show that the proposed algorithm is robust regardless of whether any two targets are close to each other. We can therefore recommend the improved Gaussian Mixture Probability Hypothesis Density filter in such scenarios, given its remarkable robustness in tracking targets.