
1 Introduction

Multiple Target Tracking (MTT) is a key component of perception applications (autonomous vehicles, surveillance, etc.). An MTT system usually relies on two main steps: data association and tracking. The first step associates detected objects in the perceived scene, called targets, with known objects characterized by their predicted tracks. The second step estimates the track states over time, typically with Kalman filters [1] or more advanced state estimation techniques (particle filters, etc.). Nevertheless, bad associations yield wrong track estimates and therefore lead to false perception results.

The data association problem is usually solved within Bayesian theory [1, 2]. Several methods have been proposed, such as the Global Nearest Neighbor (GNN) method, the Probabilistic Data Association Filter (PDAF), and Multiple Hypothesis Tracking (MHT) [3, 12, 22]. However, Bayesian theory does not efficiently handle data imperfection stemming from our limited knowledge of sensor quality, reliability, etc. To circumvent this drawback, Evidential theory [9, 25] appears as an interesting approach because of its ability to model and deal with epistemic uncertainty. It provides a theoretical framework to manage ignorance and data imperfection.

Several evidential data association approaches have been proposed in the framework of belief functions [6, 10, 20, 23]. Rombaut [23] uses Evidential theory to measure the confidence of the association between perceived and known obstacles. To efficiently manage object appearance and disappearance, Gruyer and Cherfaoui [15] propose a bi-directional data association. The first direction concerns target-to-track pairings, which provides a good way to manage the appearance of new tracks. The second direction concerns track-to-target pairings and thus manages the disappearance of tracks. This approach has been extended by Mercier et al. [20] to track vehicles, using a global optimization to make assignment decisions. To reduce the complexity for real-time applications, a local optimization has been used [5, 6]. For all these methods, the data fusion process begins by defining belief masses from sensor information and prior knowledge. These masses represent the belief and ignorance on the assignment hypotheses. Thereafter, the masses are combined in order to provide complete information on the considered problem. Finally, to make a decision, the belief masses are classically approximated by a probability measure through a chosen probabilistic transformation.

For data association applications, the most widely used probabilistic transformation (i.e., approximation) is the pignistic transformation [5, 6, 17, 20]. This transformation is based on a simple mapping from the belief domain to the probability domain. However, several published works criticize the pignistic transformation and propose generalized and/or alternative transformations [7, 8, 11, 19, 21, 30]. To our knowledge, the proposed transformations have been evaluated by their authors only on simulated examples. The main objective of this paper is to compare these transformations on real data in order to determine which one is best suited for assignment problems.

The rest of the paper is structured as follows. Section 2 recalls the basics of belief functions and their use in data association problems. Section 3 presents the most appealing probabilistic transformations, which are then compared on the well-known public KITTI dataset in Sect. 4. Finally, Sect. 5 concludes the paper.

2 Belief Functions for Data Association

To select the “best” associations, the data fusion process consists of four steps: modeling, estimation, combination, and decision-making. This section presents their definitions and principles.

2.1 Basic Fundamentals

Belief Functions (BF) were introduced by Shafer [25] based on Dempster’s research [9]. They offer a theoretical framework for reasoning under uncertainty. Consider a problem with an exhaustive list of mutually exclusive hypotheses \(H_j\). They define a so-called frame of discernment \(\varTheta \):

$$\begin{aligned} \varTheta = \bigcup ^{k}_{j=1} \left\{ H_j \right\} \quad \text {with}\quad H_i \cap H_j = \emptyset ,\ \forall i \ne j \end{aligned}$$
(1)

The power set \(2^\varTheta \) is the set of all subsets of \(\varTheta \), that is:

$$\begin{aligned} \begin{array}{c} 2^\varTheta = \left\{ {\emptyset , H_1, ..., H_k, ..., \left\{ H_1, H_2, H_3\right\} , ..., \varTheta } \right\} \end{array} \end{aligned}$$
(2)

The proposition \(A=\{H_1, H_2, H_3\}\) represents a disjunction, meaning that either \(H_1\), \(H_2\), or \(H_3\) can be the solution to the problem under consideration. In other words, A represents partial ignorance when A is the disjunction of several elements of \(\varTheta \). The union of all hypotheses, \(\varTheta \), represents total ignorance, and \(\emptyset \) is the empty set representing the impossible solution (usually interpreted as conflicting information).

The truthfulness of each proposition \(A\in 2^\varTheta \) provided by source j is modeled by a basic belief assignment (bba) \(m_j^\varTheta (A)\):

$$\begin{aligned} m_j^{\varTheta }: 2^{\varTheta } \rightarrow [0,1], \sum \limits _{A\in {2^{\varTheta }}} {m_j^{\varTheta }(A)} = 1 \end{aligned}$$
(3)

Thereafter, the different bbas (\(m_j^{\varTheta }\)) are combined, which provides global knowledge of the considered problem. Several combination rules have been proposed [29]; the conjunctive operator is at the core of many of them. For instance, Shafer [25] proposed Dempster’s rule of combination below, which is nothing but the normalized version of the conjunctive rule [26]:

$$\begin{aligned} \left\{ \begin{array}{l} m_{DS}^{\varTheta }(A) = \frac{1}{1-K}\displaystyle \sum \limits _{A_1 \cap \ldots \cap A_p = A} \prod \limits _{j=1}^{p} m_{j}^{\varTheta }\left( A_j \right) \\ m_{DS}^{\varTheta }(\emptyset ) = 0, \end{array} \right. \end{aligned}$$
(4)

where K is the normalization coefficient measuring the conflict between the sources:

$$\begin{aligned} K=\sum \limits _{A_1 \cap ... \cap A_p = \emptyset }{\prod \limits _{j=1}^{p} {m_{j}^{\varTheta }\left( A_j \right) }}. \end{aligned}$$
(5)
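To make the mechanics of (4)–(5) concrete, here is a minimal Python sketch (our own illustration, not part of the original method) that combines two bbas represented as dictionaries mapping focal elements (frozensets) to masses:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule (Eqs. 4-5) for two bbas over the same frame.

    Each bba maps focal elements (frozensets of hypothesis labels)
    to masses summing to 1.
    """
    combined = {}
    conflict = 0.0  # K in Eq. (5): total mass of empty intersections
    for (a1, w1), (a2, w2) in product(m1.items(), m2.items()):
        inter = a1 & a2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    # Normalize by 1 - K (Eq. 4); undefined if the sources fully conflict
    return {a: w / (1.0 - conflict) for a, w in combined.items()}

# Toy frame {H1, H2} with two sources
H1, H2 = frozenset({"H1"}), frozenset({"H2"})
theta = H1 | H2
m1 = {H1: 0.6, theta: 0.4}
m2 = {H2: 0.3, theta: 0.7}
print(dempster_combine(m1, m2))
```

Since Dempster’s rule is associative, combining p sources reduces to repeated pairwise combination.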

Finally, in order to make decisions in \(\varTheta \), a probabilistic approximation of the combined bba \(m_{DS}^{\varTheta }\) is usually performed. The lower and upper bounds of the unknown probability P(A) are given by the belief Bel(A) and the plausibility Pl(A) functions, respectively:

$$\begin{aligned} \left\{ \begin{array}{l} Bel(A) = \displaystyle \sum \limits _{B \subseteq A} m_{DS}^{\varTheta } (B)\\ Pl(A) = \displaystyle \sum \limits _{B \cap A \ne \emptyset } m_{DS}^{\varTheta } (B) \end{array} \right. \end{aligned}$$
(6)

2.2 Belief Modeling

The data association problem can be analyzed from two points of view: target-to-track and track-to-target association. Consequently, two frames of discernment are defined, \(\varTheta _{i,.}\) and \(\varTheta _{.,j}\), where \(i=1,...,n\) with n the number of targets, and \(j=1,...,m\) with m the number of tracks:

$$\begin{aligned} \begin{array}{l} \varTheta _{i,.} = \left\{ Y_{(i,1)}, Y_{(i,2)}, \ldots , Y_{(i,m)}, Y_{(i,*)} \right\} \\ \varTheta _{.,j} = \left\{ X_{(1,j)}, X_{(2,j)}, \ldots , X_{(n,j)}, X_{(*,j)} \right\} \end{array} \end{aligned}$$
(7)

where \(\varTheta _{i,.}\) is composed of the m possible target(i)-to-track(j) associations, denoted \(Y_{(i,j)}\). The appearance hypothesis is represented by \(Y_{(i,*)}\). \(\varTheta _{.,j}\) contains the n possible track(j)-to-target(i) associations, denoted \(X_{(i,j)}\), and \(X_{(*,j)}\) represents track disappearance.

2.3 Basic Belief Assignment

For the target-to-track assignment, three bbas are used to answer the question “Is target \(X_{i}\) associated with track \(Y_{j}\)?”:

  • \(m^{\varTheta _{i,.}}_{j}(Y_{(i,j)})\): belief in “\(X_i\) is associated with \(Y_j\)”,

  • \(m^{\varTheta _{i,.}}_{j}(\overline{ Y_{(i,j)}})\): belief in “\(X_i\) is not associated with \(Y_j\)”,

  • \(m^{\varTheta _{i,.}}_{j}(\varTheta _{i,.})\): the degree of ignorance.

A recent benchmark [4] on a large set of real data shows that the best-suited model is the non-antagonist model [14, 23], defined as follows:

$$\begin{aligned} m^{\varTheta _{i,.}}_{j}(Y_{(i,j)})= \left\{ \begin{array}{ll} 0, & I_{i,j} \in \left[ 0,\tau \right] \\ \varPhi _1(I_{i,j}), & I_{i,j} \in \left[ \tau , 1\right] \end{array} \right. \end{aligned}$$
(8)
$$\begin{aligned} m^{\varTheta _{i,.}}_{j}(\overline{ Y_{(i,j)} })= \left\{ \begin{array}{ll} \varPhi _2(I_{i,j}), & I_{i,j} \in \left[ 0,\tau \right] \\ 0, & I_{i,j} \in \left[ \tau , 1\right] \end{array} \right. \end{aligned}$$
(9)
$$\begin{aligned} m^{\varTheta _{i,.}}_{j}(\varTheta _{i,.})= 1- m^{\varTheta _{i,.}}_{j}(Y_{(i,j)}) - m^{\varTheta _{i,.}}_{j}(\overline{ Y_{(i,j)} }), \end{aligned}$$
(10)

where \(0<\tau <1\) represents the impartiality of the association process and \(I_{i,j} \in [0,1]\) is a similarity index between \(X_i\) and \(Y_j\). \(\varPhi _1(\cdot )\) and \(\varPhi _2(\cdot )\) are two cosine-based functions defined by:

$$\begin{aligned} \left\{ \begin{array}{l} \varPhi _1(I_{i,j}) = \frac{\alpha }{2}\left[ 1-\cos (\pi \frac{I_{i,j}-\tau }{\tau })\right] \\ \varPhi _2(I_{i,j}) = \frac{\alpha }{2}\left[ 1+\cos (\pi \frac{I_{i,j}}{\tau })\right] , \end{array} \right. \end{aligned}$$
(11)

where \(0<\alpha <1\) is the reliability factor of the data source. Belief masses for the track-to-target assignment are generated in the same manner.
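As an illustration, the bba model of (8)–(11) can be sketched in a few lines of Python; the function name is ours and the default parameter values are those used later in Sect. 4.1:

```python
import math

def bba_target_track(I, alpha=0.9, tau=0.5):
    """Non-antagonist bba model of Eqs. (8)-(11).

    I     : similarity index I_{i,j} in [0, 1]
    alpha : reliability factor of the source, 0 < alpha < 1
    tau   : impartiality threshold, 0 < tau < 1
    Returns (m(Y), m(not Y), m(Theta)).
    """
    if I >= tau:   # evidence supports the association (Phi_1)
        m_yes = 0.5 * alpha * (1.0 - math.cos(math.pi * (I - tau) / tau))
        m_no = 0.0
    else:          # evidence against the association (Phi_2)
        m_yes = 0.0
        m_no = 0.5 * alpha * (1.0 + math.cos(math.pi * I / tau))
    return m_yes, m_no, 1.0 - m_yes - m_no

print(bba_target_track(0.8))  # strong similarity -> belief in Y_(i,j)
print(bba_target_track(0.1))  # weak similarity  -> belief in not Y_(i,j)
```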

2.4 Belief Combination

Based on Dempster’s rule (4), the combined masses \(m^{\varTheta _{i,.}}\) (resp. \(m^{\varTheta _{.,j}}\)) over \(2^{\varTheta _{i,.}}\) (resp. \(2^{\varTheta _{.,j}}\)) can be computed as follows [24]:

$$\begin{aligned} \begin{array}{l} m^{\varTheta _{i,.}} \left( Y_{(i,j)} \right) = K\cdot m_{j}^{\varTheta _{i,.}} \left( Y_{(i,j)} \right) \displaystyle \prod \limits _{\substack{a = 1 \\ a \ne j}}^m \alpha _{(i,a)} \\ m^{\varTheta _{i,.}} ( \{Y_{(i,j)},\ldots ,Y_{(i,l)}, Y_{(i,*)} \} ) = K\cdot \gamma _{(i,(j,\ldots ,l))}\displaystyle \prod \limits _{\substack{a = 1 \\ a \notin \{j,\ldots ,l\}}}^m \beta _{(i,a)} \\ m^{\varTheta _{i,.}} \left( Y_{(i,*)} \right) = K\cdot \displaystyle \prod \limits _{a = 1}^m \beta _{(i,a)}\\ m^{\varTheta _{i,.}} \left( \varTheta _{i,.} \right) = K\cdot \displaystyle \prod \limits _{a = 1}^m m_{a}^{\varTheta _{i,.}} \left( \varTheta _{i,.} \right) \end{array} \end{aligned}$$
(12)

with:

$$\begin{aligned} \left\{ \begin{array}{l} \alpha _{(i,a)} = 1 - m_{a}^{\varTheta _{i,.}} \left( Y_{(i,a)}\right) \\ \beta _{(i,a)} = m_{a}^{\varTheta _{i,.}}\left( \overline{Y_{(i,a)}} \right) \\ \gamma _{(i, (j,\ldots ,l))} = m_{j}^{\varTheta _{i,.}} \left( \varTheta _{i,.} \right) \cdots m_{l}^{\varTheta _{i,.}} \left( \varTheta _{i,.} \right) \\ K = \left[ \displaystyle \prod \limits _{a = 1}^m \alpha _{(i,a)} + \sum \limits _{a = 1}^m m_{a}^{\varTheta _{i,.}} \left( Y_{(i,a)} \right) \prod \limits _{\substack{b = 1 \\ b \ne a}}^m \alpha _{(i,b)} \right] ^{-1} \end{array} \right. \end{aligned}$$
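The following sketch illustrates how the singleton, appearance, and total-ignorance terms of (12) can be computed from the per-track bbas. For brevity it omits the partial-ignorance focal elements \(\{Y_{(i,j)},\ldots ,Y_{(i,l)}, Y_{(i,*)}\}\), so it is a simplified illustration rather than a full implementation:

```python
def combine_target_masses(bbas):
    """Combined masses over Theta_{i,.} following Eq. (12).

    bbas : list of (m_yes, m_no, m_theta) tuples, one per track a,
           as produced by the bba model of Eqs. (8)-(11).
    Returns the singleton masses m(Y_(i,j)), the appearance mass
    m(Y_(i,*)) and the ignorance mass m(Theta_{i,.}); the
    partial-ignorance terms of Eq. (12) are omitted here.
    """
    m = len(bbas)
    alpha = [1.0 - yes for (yes, _, _) in bbas]      # alpha_(i,a)
    beta = [no for (_, no, _) in bbas]               # beta_(i,a)

    def prod(xs):
        p = 1.0
        for x in xs:
            p *= x
        return p

    # Normalization factor K of Eq. (12)
    K = 1.0 / (prod(alpha) + sum(
        bbas[a][0] * prod(alpha[b] for b in range(m) if b != a)
        for a in range(m)))

    m_assoc = [K * bbas[j][0] * prod(alpha[a] for a in range(m) if a != j)
               for j in range(m)]                    # m(Y_(i,j))
    m_appear = K * prod(beta)                        # m(Y_(i,*))
    m_ignorance = K * prod(t for (_, _, t) in bbas)  # m(Theta_{i,.})
    return m_assoc, m_appear, m_ignorance
```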

2.5 Decision-Making

Finally, the probability matrix \(P_{i,.}\) (resp. \(P_{.,j}\)) is obtained by applying a probabilistic transformation. Table 1 presents the \(P_{i,.}\) matrix, where each row defines the association probabilities of target \(X_i\) with all tracks \(Y_j\); \(P_{i,.}(Y_{(i,*)})\) represents the appearance probability of \(X_i\).

Table 1. Probabilities of target-to-track associations

The association decisions are made using a global or a local optimization strategy. The Joint Pignistic Probability (JPP) [20] selects the associations that maximize the product of probabilities. However, this global optimization is time-consuming and can select doubtful local associations. To cope with these drawbacks, local optimizations such as the Local Pignistic Probability (LPP) have been proposed. Readers interested in a benchmark of these algorithms can refer to [17, 18].

3 Probabilistic Transformations

A probabilistic transformation can be written in the following generalized form:

$$\begin{aligned} P_{i,.} \left( Y_{(i,j)} \right) = m^{\varTheta _{i,.}} \left( Y_{(i,j)} \right) + \displaystyle \sum \limits _{\substack{A \in 2^{\varTheta _{i,.}} \\ Y_{(i,j)} \subset A}} T(Y_{(i,j)}, A)\cdot m^{\varTheta _{i,.}}\left( A \right) , \end{aligned}$$
(13)

where A represents the partial/global ignorance about the association of target \(X_i\), and \(T(Y_{(i,j)}, A)\) represents the proportion of the ignorance mass \(m^{\varTheta _{i,.}}\left( A \right) \) transferred to the singleton \(Y_{(i,j)}\).
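Equation (13) suggests a simple generic scheme: the transformations below only differ in their transfer rate \(T\). A hedged Python sketch (our own helper, representing focal elements as frozensets) could look like this:

```python
def apply_transformation(m, T):
    """Generic probabilistic transformation of Eq. (13).

    m : dict mapping focal elements (frozensets) to combined masses
    T : function T(singleton, A) giving the share of m(A) that is
        transferred to that singleton
    Returns a dict of singleton probabilities.
    """
    singletons = {s for A in m for s in A}
    P = {}
    for s in singletons:
        fs = frozenset({s})
        P[s] = m.get(fs, 0.0) + sum(
            T(fs, A) * w for A, w in m.items()
            if fs < A)  # proper supersets carry the ignorance mass
    return P
```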

Several probabilistic transformations have been proposed in the literature. In this section, only the most interesting ones are presented.

3.1 Pignistic Probability

The pignistic transformation, denoted BetP and proposed by Smets [27, 28], is still widely used in evidential data association applications [6, 14, 16, 20]. It redistributes the ignorance mass equally among the singletons as follows:

$$\begin{aligned} T_{BetP_{i,.}}(Y_{(i,j)}, A) = \frac{1}{\left| A \right| }, \end{aligned}$$
(14)

where |A| is the cardinality of the subset A. The pignistic transformation (14) ignores the bbas of the singletons, which can be considered a crude commitment. On the other hand, BetP is easy to implement and has a low complexity thanks to its simple redistribution process.
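For instance, a BetP transfer rate plugged into the generic scheme above (the mass values are arbitrary toy numbers):

```python
def T_betp(singleton, A):
    """Pignistic transfer rate of Eq. (14): equal split over |A|."""
    return 1.0 / len(A)

# Usage with the generic scheme of Eq. (13):
m = {frozenset({"Y1"}): 0.5, frozenset({"Y2"}): 0.2,
     frozenset({"Y1", "Y2"}): 0.3}
print(apply_transformation(m, T_betp))  # {'Y1': 0.65, 'Y2': 0.35}
```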

3.2 Dezert-Smarandache Probability

Besides the cardinality, the Dezert-Smarandache Probability (DSmP) transformation [11] also considers the mass values when transferring ignorance to the singletons:

$$\begin{aligned} T_{DSmP_{i,.}}(Y_{(i,j)}, A) = \frac{m^{\varTheta _{i,.}} \left( Y_{(i,j)} \right) + \epsilon }{\displaystyle \sum \limits _{Y_{(i,k)} \subset A} m^{\varTheta _{i,.}} \left( Y_{(i,k)} \right) + \epsilon \cdot \left| A \right| } \end{aligned}$$
(15)

The tuning parameter \(\epsilon \ge 0\) adjusts the effect of the focal element’s cardinality in the proportional redistribution, and keeps DSmP defined and computable when zero masses are encountered. Typically, one takes \(\epsilon =0.001\); the smaller \(\epsilon \) is, the better the approximation of the probability measure [11]. DSmP generally yields a higher Probabilistic Information Content (PIC) [31] than BetP because it exploits more information. The PIC indicates the level of knowledge available to make a correct decision; \(PIC=0\) indicates that no knowledge exists to make a correct decision.
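A possible DSmP transfer rate, again expressed against the generic scheme above (the closure over the combined mass m is our own design choice):

```python
def make_T_dsmp(m, eps=0.001):
    """DSmP transfer rate of Eq. (15), closed over the mass m."""
    def T(singleton, A):
        num = m.get(singleton, 0.0) + eps
        den = sum(m.get(frozenset({s}), 0.0) for s in A) + eps * len(A)
        return num / den
    return T

# Reusing the toy mass m from the BetP snippet:
print(apply_transformation(m, make_T_dsmp(m)))
```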

3.3 MultiScale Probability

The MultiScale Probability (MulP) transformation [19] highlights the proportion of each hypothesis in the frame of discernment by using the difference between plausibility and belief:

$$\begin{aligned} T_{MulP_{i,.}}(Y_{(i,j)}, A) = \frac{\left( Pl^{\varTheta _{i,.}} \left( Y_{(i,j)} \right) - Bel^{\varTheta _{i,.}} \left( Y_{(i,j)} \right) \right) ^q }{\displaystyle \sum \limits _{Y_{(i,k)} \subset A} \left( Pl^{\varTheta _{i,.}} \left( Y_{(i,k)} \right) - Bel^{\varTheta _{i,.}} \left( Y_{(i,k)} \right) \right) ^q} , \end{aligned}$$
(16)

where \(q \ge 0\) is a factor used to tune the proportion of the difference \((Pl(\cdot )-Bel(\cdot ))\). However, \(T_{MulP_{i,.}}\) is not defined \((\frac{0}{0})\) when \(m(\cdot )\) is a Bayesian mass (\(Pl(\cdot )=Bel(\cdot )\)).
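A sketch of MulP first needs the Bel and Pl functions of (6); the helpers below are ours and assume singleton arguments:

```python
def bel(m, singleton):
    """Belief of a singleton: only its own mass contributes (Eq. 6)."""
    return m.get(singleton, 0.0)

def pl(m, singleton):
    """Plausibility: mass of every focal element intersecting it (Eq. 6)."""
    return sum(w for A, w in m.items() if singleton & A)

def make_T_mulp(m, q=5):
    """MulP transfer rate of Eq. (16); undefined for Bayesian masses."""
    def T(singleton, A):
        num = (pl(m, singleton) - bel(m, singleton)) ** q
        den = sum((pl(m, frozenset({s})) - bel(m, frozenset({s}))) ** q
                  for s in A)
        return num / den  # ZeroDivisionError when Pl == Bel everywhere
    return T
```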

3.4 Sudano’s Probabilities

Sudano proposes several alternatives to BetP, such as the Proportional Plausibility (PrPl) and the Proportional Belief (PrBel) transformations [11, 30]. They redistribute the ignorance mass according to the normalized plausibility and belief functions, respectively:

$$\begin{aligned} T_{PrPl_{i,.}}(Y_{(i,j)}, A) = \frac{Pl^{\varTheta _{i,.}} \left( Y_{(i,j)} \right) }{\displaystyle \sum \limits _{Y_{(i,k)} \subset A} Pl^{\varTheta _{i,.}} \left( Y_{(i,k)} \right) } \end{aligned}$$
(17)
$$\begin{aligned} T_{PrBel_{i,.}}(Y_{(i,j)}, A) = \frac{Bel^{\varTheta _{i,.}} \left( Y_{(i,j)} \right) }{\displaystyle \sum \limits _{Y_{(i,k)} \subset A} Bel^{\varTheta _{i,.}} \left( Y_{(i,k)} \right) } \end{aligned}$$
(18)

3.5 Pan’s Probabilities

Other proportional transformations have been proposed in [21]. These transformations assume that the bba is proportional to a function \(S(\cdot )\) based on both the belief and the plausibility:

$$\begin{aligned} T_{PrBP_{i,.}}(Y_{(i,j)}, A) = \frac{S(i,j) }{\displaystyle \sum \limits _{Y_{(i,k)} \subset A} S(i,k)}, \end{aligned}$$
(19)

where different definitions of S have been proposed:

$$\begin{aligned} \left\{ \begin{array}{ll} PrBP1_{i,.}: & S(i,j) = Pl^{\varTheta _{i,.}} \left( Y_{(i,j)} \right) \cdot Bel^{\varTheta _{i,.}} \left( Y_{(i,j)} \right) \\ PrBP2_{i,.}: & S(i,j) = Bel^{\varTheta _{i,.}} (Y_{(i,j)}) \cdot (1- Pl^{\varTheta _{i,.}} (Y_{(i,j)}))^{-1}\\ PrBP3_{i,.}: & S(i,j) = Pl^{\varTheta _{i,.}} (Y_{(i,j)}) \cdot (1- Bel^{\varTheta _{i,.}} (Y_{(i,j)}))^{-1} \end{array} \right. \end{aligned}$$
(20)
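Since (17)–(20) all share the proportional form of (19), one factory function covers them; the snippet below reuses the bel/pl helpers and the toy mass m introduced in the previous sketches:

```python
def make_T_proportional(m, S):
    """Proportional transfer of Eqs. (17)-(20) for a chosen S."""
    def T(singleton, A):
        return S(singleton) / sum(S(frozenset({s})) for s in A)
    return T

# S functions of Eqs. (17)-(18) and (20); PrBP2/PrBP3 are undefined
# when Pl = 1 (resp. Bel = 1) for some singleton.
S_prpl = lambda Y: pl(m, Y)
S_prbel = lambda Y: bel(m, Y)
S_prbp1 = lambda Y: pl(m, Y) * bel(m, Y)
S_prbp2 = lambda Y: bel(m, Y) / (1.0 - pl(m, Y))
S_prbp3 = lambda Y: pl(m, Y) / (1.0 - bel(m, Y))

print(apply_transformation(m, make_T_proportional(m, S_prpl)))
```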

4 Results

This section presents a benchmark of the probabilistic transformations within an object association system for autonomous vehicles. The aim is to assign detected objects in the scene (targets) to known ones (tracks). The transformations have been evaluated on real data.

Fig. 1. Examples of images provided by KITTI [4].

The KITTI dataset provides 21 sequences recorded by cameras mounted on a vehicle driving on urban roads [13]. To our knowledge, no comparison of probabilistic transformations has been performed on real data of this scale, with more than 30000 observed associations. These cover different road scenarios, as shown in Fig. 1. In this work, detections are defined only by 2D bounding boxes in the image plane, as presented in Fig. 1.

4.1 Experimental Setting

The assignment information is based on the distance between objects in the image plane. For that, the distance \(d_{i,j}\) is defined as follows:

$$\begin{aligned} d_{i,j} = \frac{1}{2} (d_{i,j}^{~right}+d_{i,j}^{~left}), \end{aligned}$$
(21)

where \(d_{i,j}^{~right}\) (resp. \(d_{i,j}^{~left}\)) is the Euclidean distance between the bottom-right (resp. top-left) corners of the bounding boxes of target \(X_i\) (detected object) and track \(Y_j\) (known object), as presented in Fig. 2.

Fig. 2. Illustration of the distances \(d_{i,j}^{~right}\) and \(d_{i,j}^{~left}\) [4].

The parameters of the bba model (11) are \(\alpha =0.9\) and \(\tau =0.5\). The similarity index is defined as follows:

$$\begin{aligned} I_{i,j}=\left\{ \begin{array}{ll} 1-\frac{d_{i,j}}{D}, & \text {if}~d_{i,j}< D\\ 0, & \text {otherwise}, \end{array} \right. \end{aligned}$$
(22)

where D is the limit association distance, determined heuristically (\(D=210\) in this work).
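For illustration, the distance (21) and the similarity index (22) can be computed from two axis-aligned boxes as follows; the (left, top, right, bottom) box layout is our assumption:

```python
import math

def similarity_index(box_target, box_track, D=210.0):
    """Similarity index I_{i,j} of Eqs. (21)-(22) from 2D bounding boxes.

    Boxes are (left, top, right, bottom) in pixels; D is the
    heuristic association distance limit.
    """
    l1, t1, r1, b1 = box_target
    l2, t2, r2, b2 = box_track
    d_left = math.hypot(l1 - l2, t1 - t2)    # top-left corners
    d_right = math.hypot(r1 - r2, b1 - b2)   # bottom-right corners
    d = 0.5 * (d_left + d_right)             # Eq. (21)
    return 1.0 - d / D if d < D else 0.0     # Eq. (22)

print(similarity_index((100, 50, 180, 120), (110, 55, 185, 128)))
```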

The tuning parameters are \(\epsilon =0.001\) and \(q=5\) for the DSmP and MulP transformations, respectively. The LPP algorithm is used as the optimization strategy in the decision-making step.

4.2 Comparison of Probabilistic Transformations

All the discussed transformations have an equivalent complexity, except the pignistic transformation: BetP is computed directly from the combined masses, which leads to a lower computational time.

To compare the performance of the probabilistic transformations presented above, the object association system is evaluated by the True Association Rate (TAR):

$$\begin{aligned} TAR = \frac{\sum _{t}True~Association_t}{\sum _{t}Ground~ Truth_t}, \end{aligned}$$
(23)

where t is the frame index.

Table 2 compares the association results of the system for the different probabilistic transformations. Only target-to-track association results are presented in Table 2 due to lack of space; the track-to-target association results lead to similar conclusions. The penultimate row of Table 2 shows the weighted average TAR over all sequences, given by:

$$\begin{aligned} TAR_{avg}= \sum _{i=0}^{20} w_i TAR_i \end{aligned}$$
(24)

where \(TAR_i\) is the TAR value of the i-th sequence and the weight \(w_i\) is \(w_i=n_i/ \sum _{i=0}^{20} n_i\), with \(n_i\) the number of associations of the i-th sequence. For instance, \(TAR_{avg}=0.9852\) (i.e., \(98.52\%\)) for the BetP transformation. The last row of Table 2 gives the weighted standard deviation (\(\sigma _w\)) of the association scores, defined as follows:

$$\begin{aligned} \sigma _w = \sqrt{\sum _{i=0}^{20} w_i(TAR_{i}-TAR_{avg})^2} \end{aligned}$$
(25)
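The weighted statistics (24)–(25) amount to a few lines of code; the function below is a sketch of how the last two rows of Table 2 could be reproduced from per-sequence results:

```python
import math

def weighted_tar_stats(tars, counts):
    """Weighted mean and standard deviation of Eqs. (24)-(25).

    tars   : per-sequence TAR values (Eq. 23)
    counts : number of associations n_i per sequence (the weights)
    """
    total = sum(counts)
    w = [n / total for n in counts]
    tar_avg = sum(wi * ti for wi, ti in zip(w, tars))       # Eq. (24)
    sigma_w = math.sqrt(sum(wi * (ti - tar_avg) ** 2
                            for wi, ti in zip(w, tars)))    # Eq. (25)
    return tar_avg, sigma_w
```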
Table 2. Target-to-track association scores (in \(\%\)) obtained by the different probabilistic transformations.

The obtained results show that PrBel, PrBP1, and PrBP2 provide the worst mean association scores (\({\le } 97.40\%\)), with the largest standard deviation \((1.36\%)\) for PrBP2. This can be explained by the fact that these transformations are based on the Bel function, which is a pessimistic measure. The remaining transformations provide correct association rates (i.e., scores) \({>}98.40\%\), which represents a gain of about \({+}1\%\). The best mean score, \({\approx }98.50\%\), is obtained by the BetP, PrPl, and MulP transformations. Based only on the mean score criterion, BetP seems the most interesting because it provides the best score on 15 sequences out of 21, as illustrated in Fig. 3. In addition, BetP relies on a very simple uncertainty transfer process, which makes it a good choice for real-time applications. However, this apparent advantage of BetP must be put into perspective: BetP also yields a rather large standard deviation of \(1.38\%\), which clearly indicates that it is not very precise. PrPl and MulP are also characterized by relatively high standard deviations (\(1.22\%\) and \(1.39\%\)). On the other hand, the lowest standard deviation, \(1.05\%\), is obtained by the DSmP transformation, together with a good association score of \(97.85\%\). This transformation performs well in terms of the PIC criterion, which helps make correct decisions [11]. Consequently, DSmP is an interesting alternative to BetP for the data association process in an autonomous vehicle perception system.

Fig. 3. The number of worst/best scores obtained by each probabilistic transformation over the 21 sequences; e.g., PrBel provides three worst scores (sequences 3, 10, and 17) and only one best score (sequence 12).

5 Conclusion

This paper has presented an evaluation of several probabilistic transformations for evidential data association. These transformations approximate the belief masses by a probability measure in order to make association decisions. The most widely used probabilistic approximation is the pignistic transformation; however, several published studies criticize this choice of approximation and propose generalized transformations.

We compared the performance of these probabilistic transformations on real data in order to determine which one is best suited for assignment problems in the context of autonomous vehicle navigation. The results obtained on the well-known KITTI dataset show that the pignistic transformation provides one of the best scores, but with a rather large standard deviation. In contrast, the DSmP transformation provides the lowest standard deviation together with an association score close to that of BetP. Consequently, DSmP can be a good alternative to BetP for autonomous vehicle perception problems, at the cost of slightly more computational power than BetP.