Introduction

The problem of search and detection is a crucial part of aerial and maritime operations. The importance of this task was accentuated by the tragic mystery of the missing Malaysia Airlines Flight, known as MH3701. The surface search operation that began in the aftermath of the disappearance involved as many as 40 ships and aircraft sweeping an area of over 4,600,000 square kilometers in the Indian Ocean but ended without finding any conclusive traces of the passengers or the plane wreckage2.

There are two significant sources of difficulty in designing search operations on the surface of the ocean. First is the high level of uncertainty associated with the assessment of the search target area. In the case of MH370, the target areas were continuously reassigned due to a stream of incoming information from satellite data analysis, suspicious debris sightings, and detection of signals suspected to be from the underwater locator beacon. This type of uncertainty is further compounded by the complex drift dynamics on the surface of the ocean. The unsteady nature of ocean currents amplifies the errors in the estimation of the splash location over the days that elapse between the splash and the start of the search operation. In addition, given that drift models also have inaccuracies, the final computational estimates for the search targets typically carry a considerable amount of uncertainty. More detailed studies have been published on the complexity and uncertainties of the MH370 path and splash estimation3,4, and on debris drift affected by the stirring of the ocean5,6,7,8,9.

The second source of difficulty is that even when the target area for search is known and the associated uncertainty is well-characterized, e.g. in the form of a probability distribution, the problem of the design of the search agents’ paths generally does not have a known optimal solution. The conventional approach is to divide the target area into subdomains and assign them to different agents which will sweep each subdomain in organized paths, e.g., parallel lines. This so-called “lawnmower” strategy is easy to plan and implement for simple geometries, but it cannot be readily extended to complex geometries that arise from the surface drifts or areas with non-uniform distribution of likelihood for finding targets. The lawnmower technique also lacks the flexibility required for real-world operations: for example, it would require reassignment of search areas and agents in case of an agent going astray.

We propose a multi-agent motion control method called modified Dynamic Spectral Multi-scale Coverage (mDSMC) for search and detection of objects in dynamically evolving environments such as the ocean surface. This algorithm combines the classical theory of optimal search10,11 with concepts from ergodic theory and is capable of accommodating complex geometries, non-uniform distributions and the instantaneous drift of targets during the search. The algorithm is specifically designed to reduce the influence of small-scale spatial features of the target area, which leads to a significant increase in the success rate of the search operation. To show the promise of this algorithm for real-world applications, we compare its performance with conventional search techniques in a computational replication of the surface search for MH370.

Related research

The design of search agents’ paths is a multi-agent control problem whose objective function involves the estimation of a time-evolving probability distribution. Although accounting for the dynamics is a key feature of successful oceanic search, relevant insights can be found in work on both static and dynamic search problems. The following overview of related literature groups the relevant publications into approaches that consider stationary targets and those where targets are dynamic.

Stationary targets

In12, route planning for an Unmanned Aerial Vehicle (UAV) in a stationary-target search mission over a river region is considered. Sub-regions along the river are extracted using a Gaussian mixture model (GMM) and prioritized heuristically with an approximation insertion (AI) approach. The optimal routes are obtained by maximizing stationary-target detection in a given time window. This approach is extended to the search with multiple UAVs in13, where Receding Horizon Control (RHC, also known as Model Predictive Control or MPC) is employed for finding the optimal paths. In14, RHC is employed to solve ergodic exploration of distributed information. It is shown that the optimization-based approach is suitable for both local and global search densities, i.e., whether the information is localized or diffused. The optimization-based path planner presented by Gramajo and Shankar15 optimizes the search subject to given energy consumption and maneuverability constraints.

An advanced lawnmower-type control algorithm presented in16 considers the dynamics of the UAV, the spatial scope and the quality of the sensor. Another lawnmower-based search improvement17 finds paths that approximate the payoff of an optimal solution based on partial detection in the form of a task difficulty map. The algorithm prioritizes search subregions with a mode goodness ratio heuristic based on a Gaussian mixture model.

A reconnaissance mission control presented by Wang et al.18 focuses on particle swarm optimization for UAV swarm path planning. The optimization considers different search strategies and optimization objectives, as well as constraints regarding the mobility and communication of the UAV swarm. A machine learning approach for search and rescue in indoor environments using UAVs has been investigated in19, where reinforcement learning is used to locate the trapped victim by sensing the RF signals emitted from the victim’s smart device. A heat equation driven area coverage (HEDAC) control method20 has been employed for heterogeneous multi-agent search under uncertainty21. An exact probabilistic model and a state-of-the-art control method ensure near-optimal performance for stationary targets. An algorithm called layered search and rescue (LSAR) is employed in22 for multi-agent search and rescue missions in order to minimize the search time while finding the maximum number of victims.

Non-stationary targets

Ant Colony Optimization (ACO) is used in23 to determine paths of multiple UAVs that ensure the minimal time needed to find moving targets. A specialized multi-UAV sea area search map is presented in24, where the target probability map (TPM) was designed to handle uncertainties caused by dynamic targets. The TPM is used as a pheromone map in an improved multi-ant colony algorithm, a derivative of the ACO algorithm. The main drawback of this heuristic approach is that the resulting paths are straight-segmented due to the discrete nature of the method. Another heuristic approach is presented in25, where several algorithms are combined in a multi-UAV search procedure for missing persons with a time-varying distribution of target location probabilities. The population of candidate solutions is evolved with a hybrid evolutionary optimization method, while each UAV path is finely optimized using the Tabu Search method.

A distributed cooperative multi-UAV search based on the prediction of non-stationary targets is proposed in26. The target existence probability is updated using detection information from on-board sensors, while the UAVs are directed using RHC to maximize search performance. Ergodic exploration using RHC, presented in27, plans real-time motion that locally optimizes ergodicity with respect to the dynamic information density. RHC is also utilized in28, where targets are partially aware of the UAVs’ locations and try to escape them. Although this is not the case in the MH370 search, it provides an interesting game-theoretic view of the search problem29.

A mission planner for locating and tracking harmful ocean debris with UAVs is presented in30. Actual weather data and predicted icing conditions, along with their impact on UAV performance, are taken into account in the search simulations. Market-based cooperation strategies for multiple UAVs and evolutionary computation techniques are utilized for conducting low-cost oceanic search missions.

A path planning algorithm31 governs the coordinated search using unmanned aerial vehicles and autonomous underwater vehicles. The proposed control framework is tested on lawnmower-style multi-agent search simulations. A search control for autonomous underwater vehicles is studied in32, where non-stationary targets driven by ocean currents are considered. A search for multiple target locations in an unsteady ocean environment is achieved with a self-organizing map (SOM) neural network. Considering target movement due to ocean currents, an optimal search agent path is found using a velocity synthesis approach.

Yau and Chung33 investigate the application of linear search and discrete myopic search, coupled with surrogate ocean models, for locating a drifting object. A motion and camera control algorithm for multiple UAVs proposed by Perez et al.34 minimizes the expected detection time of a nondeterministically moving target with an uncertain initial position. The method is tested on three real-world-inspired scenarios, including a drifting boat near the coast. A multi-UAV search for moving targets in an unknown environment is established in35 using a reinforcement learning scheme. The technique is applied to a search for moving ships at sea.

Search in dynamic environments

We will consider a rectangular domain \(S\subset {\mathbb {R}}^2\) with boundary \(\partial S\), large enough to contain all potential debris floating in the sea and all agent trajectories for the entire search period. The algorithm that we use here is a new variant of the Spectral Multi-scale Coverage (SMC) algorithm proposed in36. This algorithm was extended to dynamic environments in37 (called DSMC) and its practical feasibility was shown in38. We give a short introduction to the DSMC algorithm and emphasize the alterations that lead to the algorithm introduced in this paper, called modified DSMC (mDSMC).

From a mathematical viewpoint, a search problem is formulated around two central concepts. First is the probability distribution function of the search targets (i.e. survivors, debris, etc.) which denotes the likelihood of finding targets in each subset of the domain. This distribution, denoted here by \(p^t\), is dynamic and evolves in time according to the drift equation in an incompressible flow:

$$\begin{aligned} \frac{\partial p^t}{\partial t} + v\cdot \nabla p^t = 0, \quad \forall x\in S, \end{aligned}$$
(1)

where v is the ocean surface velocity and the initial distribution \(p^{t_0}\) specifies the estimated splash location and the uncertainties therein at the estimated time of the splash \(t_0\).

We denote by \({{\mathscr {T}}}^{t_1,t_2}: {\mathbb {R}}^2 \mapsto {\mathbb {R}}^2\) the flow map, which maps the location of a sample (or a target) at time \({t_1}\) to its location at time \({t_2}\) under the unsteady velocity field v. The flow map \({{\mathscr {T}}}\) describes the evolution of target and sample positions on the ocean surface, which is the basis for the use of Lagrangian methods. Advecting samples placed on the initial splash location through flow map composition yields numerical solutions almost devoid of artificial diffusion and allows for direct parallelization.
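To make the Lagrangian viewpoint concrete, the following sketch (Python/NumPy) approximates the flow map \({{\mathscr {T}}}^{t_1,t_2}\) by integrating sample positions with a fixed-step RK4 scheme; the illustrative gyre velocity field and all parameter values are placeholders standing in for the actual ocean surface velocity data.

```python
import numpy as np

def flow_map(points, t1, t2, velocity, n_steps=200):
    """Approximate the flow map T^{t1,t2} by fixed-step RK4 integration.

    points   : (n, 2) array of positions at time t1
    velocity : callable velocity(t, xy) -> (n, 2) surface velocity
    """
    x = np.array(points, dtype=float)
    dt = (t2 - t1) / n_steps
    t = t1
    for _ in range(n_steps):
        k1 = velocity(t, x)
        k2 = velocity(t + 0.5 * dt, x + 0.5 * dt * k1)
        k3 = velocity(t + 0.5 * dt, x + 0.5 * dt * k2)
        k4 = velocity(t + dt, x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += dt
    return x

# Illustrative unsteady velocity field (a time-dependent gyre), NOT the HYCOM data.
def toy_velocity(t, xy):
    u = -np.sin(np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1]) * (1.0 + 0.3 * np.sin(t))
    v = np.cos(np.pi * xy[:, 0]) * np.sin(np.pi * xy[:, 1]) * (1.0 + 0.3 * np.sin(t))
    return np.column_stack([u, v])

samples = np.random.rand(1000, 2)                    # samples of the splash distribution
drifted = flow_map(samples, 0.0, 5.0, toy_velocity)  # their positions 5 time units later
```

Composing such maps over consecutive intervals realizes \({{\mathscr {T}}}^{t_1,t_2}\) for any pair of times, and the per-sample independence is what allows direct parallelization.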

The second concept is the coverage distribution \(c^t\), which reflects the spatial distribution of the conducted search time up to time t. In other words, regions with higher coverage have been traversed by search agents more often. Assuming N search agents are dispatched at time 0 for a ceaseless search operation, and the path of each agent is denoted by \(z^t_i\) with \(i=1,2, \ldots , N\), the coverage distribution is given by

$$\begin{aligned} c^t =\sum _{i=1}^N \int _0^t \delta ({x} - {{\mathscr {T}}}^{\tau ,t}({z}_i^\tau )) \mathrm {d}\tau , \end{aligned}$$
(2)

where \(\delta \) is the Dirac delta function. The coverage can be thought of as a distribution with support on the points visited by the agents, but evolved forward in time by drift dynamics of the ocean surface. Note that the total search coverage at time t can be computed by integrating the above density over the whole area of search S,

$$\begin{aligned} \int _{S}c^t \mathrm {d}x = N t. \end{aligned}$$
(3)

When considering the concept of coverage, it is useful to model the search process by taking into account the properties of the detection system used in the search. The agent is able to detect targets near its current location, but the detection ability decreases as the distance between the agent and the target increases. A simple model of this detection process and data acquisition can be described using positive, smooth, compactly supported Radial Basis Functions (RBF) \(\phi \) and \(\phi _\sigma \) that satisfy:

$$\begin{aligned} \phi _\sigma ({x})=\sigma ^{-2}\phi \left( \frac{{x}}{\sigma }\right) \; \text { and } \int _{{\mathbb {R}}^2} \phi ({x})\mathrm {d}x = 1, \end{aligned}$$

where \(\sigma \) is a positive scaling factor that controls the visual detection range. The RBF models the measurement density of the visual detection system. To incorporate the RBF model we must also modify the coverage and define a smooth modified coverage function which represents the approximation of total measurement density in the domain up to time t:

$$\begin{aligned} c_\sigma ^t(x) =\phi _\sigma (x)*c^t, \end{aligned}$$
(4)

which also satisfies the condition:

$$\begin{aligned} \int _S c^t_\sigma \mathrm {d}x = N t. \end{aligned}$$
(5)

Furthermore, one can observe that these two definitions are consistent by passing to the limit:

$$\begin{aligned} \lim _{\sigma \rightarrow 0^+}c^t_\sigma (x) = c^t. \end{aligned}$$

If the coverage (4) is integrated over S and part of the support of the RBF falls outside S, one must rescale the coverage to keep the growth of the total coverage linear in time.

This modification of coverage is not an exact model of coverage drift since it does not allow for the stretching and contraction of the coverage support during the drift (i.e. \(\mathrm{supp}(c_\sigma ^t)\subset \mathrm{supp}(c^t)+\mathrm{supp}(\phi _\sigma )\), where \(+\) denotes the Minkowski sum), but it models the search process more faithfully than (2). Furthermore, it makes the implementation straightforward and allows for real-time computation.
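A minimal sketch of how the smoothed coverage (4) can be accumulated on a grid, assuming the agent-path samples have already been drifted to the current time by the flow map; the bump-shaped RBF and the grid-based normalization (which also performs the rescaling discussed above when part of the kernel support falls outside S) are illustrative choices, not the exact implementation used in the paper.

```python
import numpy as np

def rbf_kernel(r, sigma):
    """Smooth bump with compact support of radius sigma (unnormalized)."""
    q = np.clip(r / sigma, 0.0, 1.0)
    w = np.zeros_like(q)
    inside = q < 1.0
    w[inside] = np.exp(-1.0 / (1.0 - q[inside] ** 2))
    return w

def smoothed_coverage(grid_x, grid_y, path_points, dtau, sigma):
    """Accumulate c_sigma^t on a grid from drifted agent-path samples.

    path_points : (m, 2) samples of the agent paths, already mapped to the
                  current time by the flow map (cf. Eq. 2)
    dtau        : search time represented by each path sample
    """
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    cell_area = (grid_x[1] - grid_x[0]) * (grid_y[1] - grid_y[0])
    c = np.zeros_like(X)
    for px, py in path_points:
        k = rbf_kernel(np.hypot(X - px, Y - py), sigma)
        mass = k.sum() * cell_area
        if mass > 0.0:
            # Dividing by the in-domain mass normalizes the kernel and also
            # rescales kernels partly outside S, keeping total coverage = N*t.
            c += dtau * k / mass
    return c
```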

The goal of the search operation planning is to design the paths of the search agents such that the probability of detecting targets is maximized over a given time window. From39, the probability of the target being detected by time t is

$$\begin{aligned} P_d = \int _{S} p^t {{\mathscr {F}}}(c^t_\sigma ({x})) \mathrm {d}x \end{aligned}$$
(6)

where \({{\mathscr {F}}}\) is called the detection function. To be more precise, \({{\mathscr {F}}}(c^t_\sigma ({x}))\) denotes the (conditional) probability of the target being detected given the target is at x. The prevalent choice of detection function, which comes from the assumption of locally random search40,41, is the exponential saturation. Mathematically speaking, this law is stated as

$$\begin{aligned} {{\mathscr {F}}}(c) = 1 - e^{-c}. \end{aligned}$$

This means that if the agent and the target visit the same location at the same time, there is still some probability that the agent misses the target, but this probability decays exponentially with each visit.
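For illustration, the detection probability (6) with the exponential-saturation law can be evaluated on a grid as follows; the gridded inputs are assumptions of this sketch.

```python
import numpy as np

def detection_probability(p, c_sigma, cell_area):
    """Discrete version of Eq. (6) with F(c) = 1 - exp(-c).

    p, c_sigma : 2D arrays of target density and smoothed coverage on one grid
    cell_area  : area of a single grid cell
    """
    return np.sum(p * (1.0 - np.exp(-c_sigma))) * cell_area
```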

The optimal coverage distribution that maximizes the probability in (6), regardless of how the motion of agents should construct such distribution, is given in the seminal paper of Koopman39. Assume for now that the target distribution p is stationary. Koopman showed that the optimal coverage distribution up to time t is

$$\begin{aligned} c^t_{opt}=\max \left[ \ln p-\alpha ^t, 0\right] , \end{aligned}$$
(7)

where \(\alpha ^t\) satisfies

$$\begin{aligned} Nt = \int _{S} \max \left[ \ln p-\alpha ^t, 0\right] \mathrm {d}x. \end{aligned}$$

From (3) and (5) it is clear that the total coverage is a linear function of time. Therefore, the total amount of available coverage in a time interval of size \(\Delta t\) is equal to \(N\Delta t\). Koopman also showed that the optimal solution can be achieved by incremental planning: if, after spending Nt of coverage, the target has not been found but some extra amount of coverage, \(N\Delta t\), becomes available, then we can use the same procedure, now with the posterior probability of targets, to compute the optimal distribution for the next stage of the search, and the probability of detection would be the same as if we had allocated \(N(t+\Delta t)\) from the beginning.
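A minimal sketch of Koopman's optimal coverage (7): the threshold \(\alpha \) is found by bisection so that the clipped log-density integrates to the available coverage budget. The gridded density and the numerical guards are assumptions of this sketch; the same routine applied with the budget of (8) yields the \(\alpha ^t\) that appears in (9) below.

```python
import numpy as np

def koopman_optimal_coverage(p, budget, cell_area, tol=1e-9):
    """Eq. (7): c_opt = max(ln p - alpha, 0), with alpha chosen so that
    the integral of c_opt over S equals the coverage budget (e.g. N*t)."""
    logp = np.log(np.maximum(p, 1e-300))            # guard against p = 0

    def spent(alpha):                               # integral of max(ln p - alpha, 0)
        return np.sum(np.maximum(logp - alpha, 0.0)) * cell_area

    area = p.size * cell_area
    lo = logp.min() - budget / area - 1.0           # spent(lo) >= budget
    hi = logp.max()                                 # spent(hi) == 0
    while hi - lo > tol:                            # bisection on the monotone budget
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if spent(mid) > budget else (lo, mid)
    alpha = 0.5 * (lo + hi)
    return np.maximum(logp - alpha, 0.0), alpha
```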

Although the optimal coverage distribution in (7) can be readily computed, it is not known how to design the search agent paths to achieve this optimal distribution while satisfying physical constraints like the continuity of the paths. In fact, given any special form of coverage distribution, such as (2) or (4), achieving the optimal distribution might be infeasible. On the other hand, direct maximization of (6) over the space of agent paths leads to a nonconvex problem which is highly dependent on the initial guess, and therefore too costly for real-time applications. In order to make this problem feasible, we reformulate the optimal search problem as follows.

Assume that during the time interval [0, t], N agents have searched the area but the search target has not yet been detected. The current coverage distribution is given by \(c^t\), which is not necessarily the optimal coverage. Now, for the next stage of the search over the horizon \([t,t+\Delta t]\), there are \(N_1\) agents available and the amount of available coverage is \( N_1 \Delta t\). If we assume that during the time period \(\Delta t\) the drift of the distribution \(p^t\) is negligible compared with the dynamics of the search, the optimal coverage at time \(t+\Delta t\) according to the Koopman theory of search is given by (7), but this time \(\alpha \) satisfies

$$\begin{aligned} \int _{S} c^t_\sigma \mathrm {d}x + N_1 \Delta t = \int _{S} \max \left[ \ln p^{t} - \alpha ^{t}, 0\right] \mathrm {d}x. \end{aligned}$$
(8)

Given that achieving this optimal coverage may be infeasible, we only strive to minimize the mismatch between the current distribution and the optimal distribution locally in time. Therefore, the mDSMC algorithm uses a mismatch distribution which follows naturally from the Koopman search theory:

$$\begin{aligned} s_\sigma ^t = \max \left[ \ln p^t - \alpha ^t - c_\sigma ^t, 0\right] . \end{aligned}$$
(9)

We then seek the next direction of motion for each agent that minimizes the scalar quantity

$$\begin{aligned} \Phi ^t = \Vert s_\sigma ^t \Vert \end{aligned}$$
(10)

On the other hand, the DSMC algorithm mismatch distribution used in37 has a much simpler form:

$$\begin{aligned} s^t = p^t - \frac{c^t}{Nt}, \end{aligned}$$
(11)

and we seek to minimize \(\Vert s^t \Vert \).

There are some important advantages to using (10) instead of (11). The mDSMC mismatch excludes the “over-searched” areas, i.e. where \(c_\sigma ^t >\ln p^t - \alpha ^t\), which improves the search results significantly and removes some instabilities that are often present in the DSMC algorithm.
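The two mismatch definitions can be contrasted directly; this is a minimal gridded sketch, with \(p\), \(c\), \(c_\sigma \) and \(\alpha \) assumed to be available from the previous steps.

```python
import numpy as np

def mdsmc_mismatch(p, c_sigma, alpha):
    """Eq. (9): clipped mismatch; over-searched regions (c_sigma > ln p - alpha)
    contribute nothing, so they no longer attract the agents."""
    return np.maximum(np.log(np.maximum(p, 1e-300)) - alpha - c_sigma, 0.0)

def dsmc_mismatch(p, c, N, t):
    """Eq. (11): plain difference between the target density and the
    time-normalized coverage used by the original DSMC."""
    return p - c / (N * t)
```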

Spectral multi-scale coverage path planning

The key feature of spectral multi-scale algorithms is that they use a special class of function norms, called the Sobolev-space norm of negative index, to quantify the mismatch of current and optimal coverage. This norm for the mismatch distribution defined above is

$$\begin{aligned} \Phi ^t = \sum _{k \in Z^2} \Lambda _k s^t_k, \end{aligned}$$
(12)

where \(s^t_k\) are the coefficients of two-dimensional Fourier expansion of \(s^t\), given by

$$\begin{aligned} s^t_k = \int _{S}f_k s^t \mathrm {d}x \end{aligned}$$

where \(f_k (x)\) are the Fourier basis functions, and k is the wave-number vector. The norm coefficients in (12) are given by

$$\begin{aligned} \Lambda _k = \left( 1 + \Vert k \Vert ^2 \right) ^\beta , \end{aligned}$$

where \(\beta \) is the index of the norm and takes a negative value. This type of coefficient makes the contribution of components with smaller spatial scale (represented by larger k) proportionally smaller. Therefore, any algorithm that minimizes the mismatch function \(\Phi ^t\) automatically puts more weight on the large-scale spatial features of the target distribution. For the mDSMC algorithm \(\beta =-\frac{1}{2}\) is used, while for DSMC \(\beta \le -\frac{3}{2}\) (in all DSMC calculations we used \(\beta =-3/2\)). The smaller magnitude of the exponent \(\beta \) in the mDSMC algorithm reduces the smoothing of the mismatch function, which becomes important later in the search when smaller spatial-scale features are explored.
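A sketch of the spectral weighting in (12), assuming the mismatch is given on a uniform grid and that a real cosine basis is used for the Fourier expansion (a common choice for rectangular domains; the paper does not specify the basis here). SciPy's dctn supplies the coefficients \(s^t_k\).

```python
import numpy as np
from scipy.fft import dctn

def sobolev_weights(shape, beta):
    """Lambda_k = (1 + ||k||^2)^beta over integer wave-number indices k."""
    kx, ky = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    return (1.0 + kx**2 + ky**2) ** beta

def mismatch_norm(s, beta=-0.5):
    """Eq. (12): weighted sum of the (cosine) expansion coefficients of s."""
    s_k = dctn(s, norm="ortho")          # coefficients s_k on the grid of s
    return np.sum(sobolev_weights(s.shape, beta) * s_k)
```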

The DSMC and mDSMC algorithms minimize the mismatch distribution by implementing instantaneous corrections to the agent paths that result in the fastest descent in the value of \(\Phi \) at any time. Our modified algorithm can be extended to agents with second-order dynamics similar to37, but here we assume that the agents have first-order dynamics with constant speed. As shown in37, we first need to define the potential field

$$\begin{aligned} u^t(x) = \sum _{k \in Z^2} \Lambda _k s^t_k f_k(x). \end{aligned}$$

Then the velocity vector for each agent is given by

$$\begin{aligned} v^t_i = \frac{\nabla u^t(z^t_i)}{\Vert \nabla u^t(z^t_i) \Vert } v_{mag,i} \end{aligned}$$

where \(v_{mag,i}\) is the constant velocity magnitude of the i-th agent. Finally, the movement of the search agents is governed by the first order motion law: \(\frac{\mathrm {d}z^t_i}{\mathrm {d}t} = v^t_i\).
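Continuing the same gridded sketch, the potential field is assembled by re-weighting the coefficients and transforming back, and each agent is steered along the local gradient of u with constant speed; the finite-difference gradient and nearest-grid-point lookup are simplifications for illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn

def sobolev_weights(shape, beta):
    kx, ky = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    return (1.0 + kx**2 + ky**2) ** beta

def potential_field(s, beta=-0.5):
    """u = sum_k Lambda_k s_k f_k, evaluated on the grid of s (cosine basis assumed)."""
    return idctn(sobolev_weights(s.shape, beta) * dctn(s, norm="ortho"), norm="ortho")

def agent_velocities(u, agent_pos, grid_x, grid_y, v_mag):
    """First-order motion law: each agent moves with constant speed along grad u."""
    du_dx, du_dy = np.gradient(u, grid_x, grid_y, edge_order=2)
    vels = []
    for px, py in agent_pos:
        i = np.clip(np.searchsorted(grid_x, px), 1, len(grid_x) - 1)
        j = np.clip(np.searchsorted(grid_y, py), 1, len(grid_y) - 1)
        g = np.array([du_dx[i, j], du_dy[i, j]])
        vels.append(v_mag * g / (np.linalg.norm(g) + 1e-12))
    return np.array(vels)   # advance each agent by z_i <- z_i + vels[i] * dt
```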

To summarize, mDSMC, in contrast to DSMC, uses a new type of objective function, described in (9), and minimizes the Sobolev norm with index \(\beta =-1/2\), which is better suited to search problems. In this specific application of the mDSMC algorithm, the RBF is a compactly supported approximation of a normal distribution centered at the agent location with a standard deviation of \(\sigma = 3~\text {km}\), and a 1-hour window is used to determine the amount of available coverage in (8).

Search for MH370 (Results)

The search and rescue operation for MH370 began on March 9, 2014, one day after the loss of communications between the plane and ground stations. The surface search lasted for 50 days and included areas near the Malay Peninsula and in the Southern Indian Ocean. The estimated splash location was constantly updated due to a stream of incoming information, and the most probable splash area was drifted under the ocean model to obtain the target area for each stage of the search. We consider three probable splash areas:

  • Area A: the drifted image of this area was searched from March 28 to April 1,

  • Area B: the drifted image of this area was searched on April 2 and 3,

  • Area P: this area, in its totality, was not searched in the surface search. However, after the conclusion of the surface search, area P was recognized as the most probable splash area and marked as the priority location for underwater search2.

In the actual search for MH370, the probable splash areas (including A and B above) were drifted, using the knowledge of ocean currents and wind effects, over the time lapse between the splash and the commencement of the search. A precise description of the drift model used is not disclosed; however, the coordinates of the splash areas and some of the drifted images (including images of A and B) are publicly available. In our computational model, we use the surface velocity data obtained from the HYbrid Coordinate Ocean Model (HYCOM) to drift the splash areas. Moreover, we assume that the initial target distribution is uniform over each area.

We simulate the following scenarios:

  1. Lawnmower scenario 1: the reported search areas are searched using the lawnmower algorithm, representing the strategy that occurred in the actual MH370 search.

  2. Lawnmower scenario 2: the splash areas are drifted using our drift model (explained in more detail in the Drift model section), and then the convex hull of the target distribution is searched using the lawnmower algorithm.

  3. DSMC: the splash areas are drifted using our drift model, and then searched using the DSMC algorithm.

  4. mDSMC: the splash areas are drifted using our drift model, and then searched using the mDSMC algorithm.

Note that scenario 1 is a replication of the actual search. Contrasting scenarios 2 and 3 provides a comparison of the search algorithms independent of the drift model, while contrasting scenarios 1 and 3 captures the combined effect of our drift model and search algorithm. Comparing scenarios 3 and 4 shows the advantages gained by modifying DSMC.

Problem Definition of Search for MH370

The search parameters used in this study are based on two main sources: the coordinates of the probable splash areas are extracted from a series of reports on the MH370 search operation by the Australian Transport Safety Bureau (ATSB)2, and the information on the search agents and the searched areas is collected from media releases and daily briefings by the Joint Agency Coordination Centre (JACC) for the MH370 search. The target areas of search by JACC were determined by drifting the splash areas using models of the surface currents complemented with real-time wind and wave data; however, the details of the drift model are not publicly available. The search areas specified by JACC are used in scenario 1. In scenario 2, we have used the splash areas reported by ATSB and drifted them using our model. The search area for the lawnmower was then computed as the smallest convex bounding polygon around the area with a nonzero probability of targets.

Up to 21 aircraft and 19 ships were deployed in the search operation. In our study, we have only used the aircraft as search agents, due to the lack of data on the technical specifications of the ships. The number and type of the deployed aircraft vary with the search day (see Table 1), but we have chosen the speed of every agent to be 380 km/h, which represents the typical loiter speed of the military aircraft involved in the search. We also assume that the scanning of the search areas on each day started at 2:00 pm and ended at 5:00 pm (UTC).

Table 1 The parameters of MH370 search operation.

A simplified stochastic model is used to emulate target detection by the search observers aboard the aircraft. In this model, if the target remains within a 1.5-km radius of an agent for time t, then it will be detected with probability \(P=1-\exp (-t/T)\), where \(T=2\) seconds is the expected detection time. A machine vision system for a low-cost fixed-wing UAV has been investigated in42, where a thermal imaging camera and an onboard processing unit are used to perform real-time detection, classification, and tracking of objects floating on the ocean surface. Bearing in mind that mainly visual detection by human observers was used in the MH370 search, the authors believe that the expected detection time of 2 seconds used in the simulations is a reasonable estimate.
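A minimal sketch of this detection model: at every simulation time step of length \(\Delta t\), each undetected target within the 1.5-km radius of an agent is detected with probability \(1-\exp (-\Delta t/T)\). The vectorized bookkeeping below is an illustrative implementation, not the exact code used in the study.

```python
import numpy as np

def detection_step(agent_pos, target_pos, detected, dt, radius=1500.0, T=2.0, rng=None):
    """One time step of the stochastic detection model.

    agent_pos  : (N, 2) agent positions [m]
    target_pos : (M, 2) target positions [m]
    detected   : (M,) boolean mask of already-detected targets (updated in place)
    dt, T      : time step and expected detection time [s]
    """
    rng = np.random.default_rng() if rng is None else rng
    p_hit = 1.0 - np.exp(-dt / T)                    # per-step detection probability
    for a in agent_pos:
        in_range = np.hypot(*(target_pos - a).T) < radius
        detected |= in_range & (rng.random(len(target_pos)) < p_hit)
    return detected
```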

Drift model

Our drift model is based on the surface velocity data computed by the US Navy using the HYbrid Coordinate Ocean Model (HYCOM). The data consist of 3-hourly longitudinal and latitudinal velocity components with a spatial resolution of 1/25 degree in each direction. The drift model is used to compute the flow map \({{\mathscr {T}}}\), i.e. the paths of the search targets as well as the trace of the search agent paths in (2). The locations of the targets in the splash area are initialized at 0:30 UTC on March 8 using the Halton sequence and then updated through a 4th-order adaptive Runge-Kutta method with linear interpolation of the surface velocity in time and space. The evolution of the target distribution is simulated using a semi-Lagrangian method: we uniformly sample the probable splash areas with a high number of tracers (\(10^5\) tracers per area) and drift those tracers using the above drift model. By computing the local density of tracers, we assemble the distribution at future times. This method does not require a numerical mesh and eliminates the artificial diffusion associated with Eulerian schemes for solving (1).
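The tracer seeding and the semi-Lagrangian density estimate can be sketched as follows, assuming the tracers are subsequently advected with the drift model; SciPy's Halton sampler and a simple histogram-based density are stand-ins for the actual implementation, and the rectangular bounds are placeholders rather than the real splash-area coordinates.

```python
import numpy as np
from scipy.stats import qmc

def seed_tracers(n, lower, upper, seed=0):
    """Quasi-random (Halton) seeding of tracers over a rectangular splash area."""
    sampler = qmc.Halton(d=2, seed=seed)
    return qmc.scale(sampler.random(n), lower, upper)

def tracer_density(tracers, edges_x, edges_y):
    """Semi-Lagrangian estimate of p^t: bin the drifted tracers and normalize
    so that the density integrates to one over the domain."""
    H, _, _ = np.histogram2d(tracers[:, 0], tracers[:, 1], bins=[edges_x, edges_y])
    cell_area = (edges_x[1] - edges_x[0]) * (edges_y[1] - edges_y[0])
    return H / (H.sum() * cell_area)

# Placeholder bounds for illustration only (not the actual splash-area coordinates).
tracers = seed_tracers(100_000, lower=[0.0, 0.0], upper=[200_000.0, 200_000.0])
```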

Figure 1 shows the drift of area A. During the 19-day lapse between the estimated splash time and the start of the search in this area, the probability distribution of the search targets undergoes substantial change, and a potential search target moves several hundred kilometers away from its initial position (purple diamond). The reported areas of the actual search mainly cover the drifted target probability, but it is evident that a lot of search effort is spent on regions where the target probability is zero. For area B, a similar visualization is shown in Fig. 2. At first glance, the conducted search regions seem well placed, but a large part of the target probability in the west is not covered at all. The stretching and folding of the support of the target distribution in Figs. 1 and 2 is typical of unsteady geophysical flows with strong mixing behavior43. Relative dispersion properties in the ocean surface layer further complicate the drift estimation44. Using a lawnmower search algorithm, which is suited to regular shapes, would be inefficient in such situations. In contrast, the paths of the mDSMC naturally adapt to the shape of the drifted area.

Figure 1

The estimated splash area A (purple diamond) and the drifted probability distribution of search targets on the first day of the actual search (March 28). Gray polygons represent the regions of the actual search, simulated in Lawnmower scenario 1, which was conducted from March 28 to April 1.

Figure 2

The estimated splash area B (purple circle) and the drifted probability distribution of search targets on the first day of the actual search (April 2). Gray polygons represent the regions of the actual search, simulated in Lawnmower scenario 1, which was conducted on April 2 and April 3.

The search

The visualization of all four considered scenarios for the area A search is shown in Fig. 3. The lawnmower areas of the actual MH370 search yield the trajectories shown in Fig. 3A, while Fig. 3B shows the trajectories of the lawnmower strategy that relies on our drift model. Both sets of trajectories fail to achieve acceptable results in terms of detection rate (33.7% and 43.8%, respectively), as indicated by the green crosses representing detected targets.

The DSMC and mDSMC search strategies (Fig. 3C,D, respectively) are guided by the drifted target probability distributions and, as a result, the accomplished trajectories pass through the regions of high probability and consequently produce a better target detection rate. The improvements made in mDSMC, in contrast to DSMC, are recognizable in a better local search and in the elimination of wide, unnecessary paths. It is rather interesting to observe how mDSMC covers the thin ribbons of the target probability distribution, especially the one in the north of the search domain.

Figure 3

Visualizations of different search scenarios for MH370 area A at the end of the search. Figures show drifted target distribution (blue), undetected/detected targets (black/green) and search agent paths (red) for: (A) the lawnmower reconstruction of the reported search (scenario 1), (B) lawnmower search on the drifted image (scenario 2), (C) DSMC search on the drifted image (scenario 3) and (D) mDSMC search on the drifted image (scenario 4).

Figure 4 displays the results of the search simulations for area B. In the lawnmower strategies, an unexpected anomaly can be observed: lawnmower scenario 2, shown in Fig. 4B, although taking into account the probability distribution drifted with the same drift model as the targets, is apparently no better than the “naive” lawnmower scenario 1 (Fig. 4A) when the number of detected targets is compared. Even though the lawnmower scenario 1 strategy missed a large portion (approx. 1/4) of the targets in the west part of the domain, due to the density of the lawnmower trajectories in the remaining regions it outperforms scenario 2, which sparsely covers the whole target distribution. Similar to the area A results, both DSMC and mDSMC (Fig. 4C,D, respectively) achieved better results than the lawnmower strategies. Due to the highly indented and complex shape of the target probability distribution, DSMC’s tendency toward global search prevents a more accurate search.

Figure 4

Visualizations of different search scenarios for MH370 area B at the end of the search. Figures show drifted target distribution (blue), undetected/detected targets (black/green) and search agent paths (red) for: (A) the lawnmower reconstruction of the reported search (scenario 1), (B) lawnmower search on the drifted image (scenario 2), (C) DSMC search on the drifted image (scenario 3) and (D) mDSMC search on the drifted image (scenario 4).

To evaluate the success of the different search scenarios, we have performed the following experiment: for each case, we seeded the splash area on the day of the splash with 1000 randomly positioned targets and drifted the targets with the surface flow. On each day of the search, the search algorithms are executed while the targets keep drifting, and the success rate of each algorithm is computed as the fraction of targets detected by the agents. The detection is modeled as a stochastic process with the law of exponential saturation. The described search simulation is repeated 100 times for each scenario using randomly varying initial positions for the targets and agents, and the ensemble average of the success rates is reported here.

Figures 5 and 6 summarize the results of the above experiment for area A and area B, respectively. The execution of the search algorithms is limited to a 3-hour period each day to account for the travel time of the agents between the closest airbase and the search area. The performance of the DSMC and mDSMC algorithms is remarkably different from that of the lawnmower technique: the lawnmower scenarios have uniform daily progress, which is expected since the lawnmower-covered area grows linearly with time. DSMC generates trajectories that roughly follow the target distribution, addressing the broad non-local search, which is improved in mDSMC. When the mDSMC algorithm is used, however, the search agents perform large sweeps of the area first, which results in capturing about 50% of the targets on the first day. In the subsequent two days, the agents perform an increasingly localized search to find the remaining targets. The results for both areas A and B show that the mDSMC algorithm would have significantly improved the chances of finding floating debris from the MH370 airplane. The dynamics and comparison of the search for areas A and B can be found in Video 1 and Video 2 included in the Supplementary Information.

Figure 5

Success rate of MH370 search scenarios for drifting area A: (A) The progress of the search scenarios vs days of the search. (B) Histogram of the success rate for search scenarios in 100 realizations of each scenario with random initial conditions.

Figure 6

Success rate of MH370 search scenarios for drifting area B: (A) The progress of the search scenarios vs days of the search. (B) Histogram of the success rate for search scenarios in 100 realizations of each scenario with random initial conditions.

Effect of search delay on the search success rate

An unfortunate circumstance of the MH370 search was that a reliable estimate of the splash location was not available immediately after the plane’s disappearance. The day after the plane went missing, the search started by focusing on the vicinity of the Malay Peninsula. In the meantime, the analysis of satellite data indicated that the plane had continued to fly for six hours after the last communication, and two preliminary corridors, one pointing north toward Kazakhstan and one south toward the Indian Ocean, were put forward as the probable flight paths. By March 24, a consensus had formed around the hypothesis that the plane had crashed on the ocean surface in the southern Indian Ocean.

An important question that arose in these circumstances is how the dispersion of the debris before the start of the search would have affected the search performance. To answer this question, we have simulated the mDSMC search on area P starting 0, 5 and 10 days after the day of the splash. Figure 7 illustrates the dynamics of ocean surface mixing and the complexity of the drifted target distribution. The initially estimated splash area and its drifted image after 20 days are shown in Fig. 7A. The drifted target probability distribution and the progress of the searches started on the day of the splash, 5 days later and 10 days later are shown in Fig. 7B–D, respectively. As time goes by, the target distribution accumulates in certain parts of the search domain and, since it is aware of the target distribution, the mDSMC control method guides the agents toward the parts with higher target probability. The later the search starts, the more concentrated the trajectories are at the beginning of the search (Fig. 7 shows the searches after one day of searching). We should emphasize here that delaying the search reduces the chances of finding survivors, and in real search situations some of the floating objects may sink before the search starts. Here we consider the situation where a delay in the commencement of the search is inevitable or affordable.

Figure 8A shows the daily progress of the search algorithm starting with a 0-day (immediately after the splash), 5-day and 10-day delay, while Fig. 8B depicts the resulting variations for the search operations starting 5 or 10 days later. The results, surprisingly, indicate that the chances of finding debris in a 5-day search are greater if the search is started later. The consistency of the detection rate over 100 runs indicates the robustness of the mDSMC method.

Figure 7

(A) The estimated splash area P (purple region) and the drifted area P for March 28. (B–D) show the drifted target distribution (blue), the mDSMC-produced search agent paths (red) and undetected/detected targets (black/green) at the end of the first search day, for search simulations starting on the day of the splash, 5 days after and 10 days after, respectively.

Figure 8

Effect of delayed search on the success rate of mDSMC algorithm: (A) The progress of the mDSMC search algorithm on area P. (B) Variations in the success rate of daily search due to a 5-day or 10-day delay in the commencement of search.

The above observation can be explained by inspecting the surface mixing in the search area. To do so, we employ the so-called hypergraph analysis introduced in43. The hypergraphs provide qualitative maps of the finite-time motion of small objects carried by unsteady flows. Based on the determinant \(\det \left( \nabla {{\mathscr {T}}}^{t_1, t_2} / (t_2 - t_1)\right) \), the flow domain is partitioned into two qualitative classes of behavior: mesoelliptic regions, where the motion is dominated by rotation, and mesohyperbolic regions, where the motion is dominated by stretching in one direction and contraction in the other. The hypergraph of area P for mixing over the 12-day interval starting at the splash day is shown in Figure 9, with hyperbolic behavior marked in red and elliptic in blue. The key observation in43 was that hyperbolic regions reveal the convergence zones for floating objects on the surface. As shown in the figure, the majority of the search targets would accumulate around these regions over the first few days after the splash. As a result, the delayed search operations initially encounter a higher concentration of targets in those areas and therefore yield higher success rates at the beginning. This phenomenon was recently studied in45 with similar observations.
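A hedged sketch of this classification, under our reading of the mesohyperbolicity criterion in43: with the finite-time average Lagrangian velocity \(({{\mathscr {T}}}^{t_1,t_2}(x)-x)/(t_2-t_1)\), a point is marked mesohyperbolic when the determinant of the velocity gradient is negative or exceeds \(4/(t_2-t_1)^2\), and mesoelliptic otherwise; the thresholds and the grid layout are assumptions of this sketch.

```python
import numpy as np

def mesohyperbolic_mask(X, Y, Xf, Yf, dt_total):
    """Classify finite-time Lagrangian behavior on a grid (indexing='ij').

    X, Y   : meshes of initial positions; Xf, Yf : their flow-map images.
    Assumed criterion: with v* = (T(x) - x)/(t2 - t1), a point is
    mesohyperbolic if det(grad v*) < 0 or det(grad v*) > 4/(t2 - t1)^2,
    and mesoelliptic otherwise.
    """
    vx = (Xf - X) / dt_total
    vy = (Yf - Y) / dt_total
    dvx_dx, dvx_dy = np.gradient(vx, X[:, 0], Y[0, :])
    dvy_dx, dvy_dy = np.gradient(vy, X[:, 0], Y[0, :])
    det = dvx_dx * dvy_dy - dvx_dy * dvy_dx
    return (det < 0.0) | (det > 4.0 / dt_total**2)   # True = mesohyperbolic (red)
```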

Figure 9

Finite-time mixing analysis of the search area: The colors depict the finite-time Lagrangian behavior in the drifted area P. Dominant rotational behavior is marked in blue and stretching behavior in red. The search targets accumulate around the red ridges, which reduces the effective size of the search area.

The drift uncertainty

The favorable outcome of the proposed search methodology strongly relies on knowing the ocean surface velocity field. In real-world applications, however, whether the surface flow is modeled or measured, some uncertainty is always present in the obtained velocity field. To determine how much the uncertainty of the ocean surface velocity affects the success of the mDSMC search, we perform simulations in which a random error term is added to the drift equation. This “noisy” drift process is represented by the stochastic differential equation

$$\begin{aligned} \mathrm {d}y^t = v^t(y^t)\mathrm {d}t + B\mathrm {d}W \end{aligned}$$
(13)

where y is the position of the target, W is a vector of independent Wiener processes (standard Brownian motion), and B is the intensity of the randomness46.

The Langevin equation (13) is the Lagrangian equivalent of the Eulerian formulation of the advection-diffusion equation:

$$\begin{aligned} \frac{\partial p^t}{\partial t} + v\cdot \nabla p^t = \nabla \cdot \left( D \nabla p^t \right) , \quad \forall x\in S. \end{aligned}$$
(14)

The diffusion tensor D (a scalar in the isotropic case) is directly related to the Brownian motion intensity via \(B=\sqrt{2D}\). Using the advection-diffusion equation (14) instead of the advection-only equation (1) results in the correct dissipation of p according to the estimated uncertainty of the surface flow.

Drifting the probability distribution using the PDE (14) is significantly more computationally demanding and is not performed in this study. Instead, we use (13) to investigate the search performance when disturbances/errors are present in the available surface flow. These disturbances are modeled as random displacements drawn from a normal distribution \({{\mathscr {N}}}(0, v_{err})\). The influence of the velocity error scale on the search success is investigated using different standard deviations \(v_{err}= 0.1, 0.2, 0.5, 1~\text {m/s}\).
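A minimal sketch of the noisy drift (13): the deterministic surface velocity is perturbed at every step by an independent \({{\mathscr {N}}}(0, v_{err})\) velocity error before the explicit update; the per-step scaling of the disturbance is an assumption of this sketch, mirroring the random displacements described above.

```python
import numpy as np

def noisy_drift(points, t0, t1, velocity, v_err, dt=600.0, rng=None):
    """Drift targets with an uncertain velocity field, cf. Eq. (13).

    At each step the deterministic velocity is perturbed by an independent
    N(0, v_err) error (illustrative choice for the per-step scaling)."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.array(points, dtype=float)
    t = t0
    while t < t1:
        h = min(dt, t1 - t)
        noise = rng.normal(0.0, v_err, size=y.shape)   # velocity error [m/s]
        y += (velocity(t, y) + noise) * h              # explicit Euler update
        t += h
    return y
```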

The dissipative effects of the stochastic target drift are evident in Figure 10, where the search is visualized after 10 days of searching area P (starting on the day of the splash). When compared with drifted samples driven by advection alone, the scattered locations of the targets reveal the influence of the stochastic Brownian motion. Due to the introduced drift uncertainty, the target distribution density does not match the target probability density. As a result, some of the targets fall outside the scope of the mDSMC-directed search. Nevertheless, the number of targets severely mismatching the probability density is relatively low, and the search success rate remains considerably high when compared with the alternative methods.

To compare the search performance for different drift uncertainty levels, the same methodology was used as in the previous search cases. For each \(v_{err}\), 100 simulations are conducted and the search success rates are compared in Figure 11. As expected, a higher target drift uncertainty progressively deteriorates the success of the search. However, the mDSMC approach remains a competent search strategy even if the available ocean surface velocity field is not perfectly accurate.

Figure 10

The search paths (red) in area P (grey) after the 10th day of the search, considering the stochastic drift of undetected/detected targets (black/green) to simulate the uncertainty present in the ocean surface velocity. The target drift is governed by a stochastic differential equation in which advection is accompanied by Brownian motion with \(v_{err}= 0.5~\text {m/s}\).

Figure 11

Success rate of search scenarios with different drift uncertainty on area P: (A) The progress of the search scenarios vs days of the search for drifting area P. (B) Histogram of the success rate for search scenarios in 100 realizations of each scenario with random initial conditions.

Conclusion

The mDSMC algorithm is shown to be a viable and efficient algorithm for real-time planning of search operations in a large and dynamic environment such as the ocean surface. This algorithm is based on a feasible formulation of classic optimal search theory and is capable of addressing the complex geometries with large uncertainty that arise from unsteady environmental dynamics. Important features of mDSMC are the awareness of the search agents of the large-scale spatial structure of the target area, the continuous balance of the search load between agents, and the ability to concentrate the search on the most promising areas. mDSMC brings an improvement over DSMC in local search, where highly detailed structures of the probability distribution are explored. The application of this algorithm to the MH370 case showed that these features lead to a several-fold improvement in the success rate of the search compared to the conventional lawnmower method. The presented algorithms were compared using Monte Carlo simulations with random initial positions, including the analysis of the stochastic drift of targets. This suggests that using this algorithm could lead to a critical improvement in survivor rescue operations and/or in finding critical evidence in similar incidents. The mDSMC algorithm can also be employed in other forms of operations that involve mobile agents (planes, drones, land patrols, etc.) and moving targets (individuals, animals, debris, black box, etc.). Examples of such operations include geographical and zoological surveys, underwater sonar mapping and search, the rescue of/from wildlife, contamination removal and spill containment in natural reserves and oceans.

A surprising conclusion of our study was that delaying the search operation in dynamic environments can lead to higher success rates in finding the debris in the initial stages of the search. In the case of MH370, this is caused by the convergence zones within the initial uncertainty distribution. Over the first few days, these zones attract and accumulate the target probability density on the surface. A search operation started at a later time can benefit from this accumulation of density and achieve a higher success rate given the same amount of coverage. On the other hand, in search and rescue missions the goal is to locate debris or survivors as soon as possible, and one should never delay the start of the search to save fuel and resources. The convergence property is typical of a dynamic environment that exhibits regular patterns for the accumulation of targets (e.g. shrinkage of an area, or the existence of attracting regions), and it can be exploited in other problems to increase the efficiency of search and sampling in terms of search time and the number of agents. Uncertainty and a lack of precision in the drift simulations are expected in real-world applications, but the results indicate that the mDSMC method remains efficient even when based on an imperfect flow field.