1 Introduction

Segmentation of an image into its individual objects is an important application of image processing techniques. Segmentation takes two forms: global segmentation, the isolation of all foreground objects in an image from the background, and selective segmentation, the isolation of a subset of the objects in an image from the background. A comprehensive review of selective segmentation can be found in [7, 19], and in [44] for medical image segmentation, where selection refers to the extraction of single organs.

Approaches to image segmentation broadly fall into two classes: region based and edge based. Region-based approaches include region growing [1], watershed algorithms [39], Mumford and Shah [29] and Chan and Vese [11]; the final two of these are variational approaches to the segmentation problem based on partial differential equations (PDEs). Edge-based methods aim to drive an evolving contour towards the edges in an image and normally require an edge detector function [8]. The first edge-based variational approach was the famous snakes model of Kass et al. [22], further developed by Caselles et al. [8], who introduced the geodesic active contour (GAC) model. There are also hybrid models which combine the two classes to exploit the benefits of both, incorporating features of each. The region-based global segmentation models of Mumford and Shah [29] and Chan and Vese [11] are, importantly, non-convex, and hence a minimiser of these models may only be a local, not the global, minimum. Further work by Chan et al. [10] gave rise to a method to find the global minimiser of the Chan–Vese model under certain conditions.

This paper is mainly concerned with selective segmentation of objects in an image, given a set of points near the object or objects to be segmented. Such user input is built into a model via a marker set \( {\mathcal {M}}=\{ (x_{i},y_{i})\in \varOmega , 1\le i\le k\} \), where \(\varOmega \subset {\mathbb {R}}^{2}\) is the image domain [4, 5, 17]. Nguyen et al. [30] considered marker sets \({\mathcal {M}}\) and \({\mathcal {A}}\) which consist of points inside and outside, respectively, the object or objects to be segmented. Gout et al. [17] combined the GAC approach with the geometrical constraint that the contour passes through the points of \({\mathcal {M}}\). This was enforced with a distance function which is zero at \({\mathcal {M}}\) and nonzero elsewhere. Badshah and Chen [4] then combined the Gout et al. model with [11] to incorporate a constraint on the intensity in the selected region, thereby encouraging the contour to segment homogeneous regions. Rada and Chen [35] introduced a selective segmentation method based on two level sets which was shown to be more robust than the Badshah–Chen model. We also refer to [5, 23] for selective segmentation models which include different fitting constraints, using the coefficient of variation and the centroid of \({\mathcal {M}}\), respectively. None of these models restricts the size of the object or objects to be detected and, depending on the initialisation, they can detect more or fewer objects than the user desired. To address this, and to improve on [35], Rada and Chen [36] introduced a model combining the Badshah–Chen model [4] with a constraint on the area of the objects to be segmented; the reference area used to constrain the area within the contour is that of the polygon formed by the markers in \({\mathcal {M}}\). Spencer and Chen [38] introduced a model with the distance fitting penalty as a stand-alone term in the energy functional, decoupling it from the edge detector term of the Gout et al. model.

All of the selective segmentation models discussed above are non-convex, and hence the final result depends on the initialisation. In the same paper, Spencer and Chen [38] reformulated their model into a convex form using convex relaxation and an exact penalty term, as in [10]. Their model uses the Euclidean distance from the marker set \({\mathcal {M}}\) as a distance penalty term; we propose replacing this with the edge-weighted geodesic distance from \({\mathcal {M}}\) (which we call simply the geodesic distance). This distance increases at edges in the image and is more intuitive for selective segmentation. The proposed model is given as a convex relaxed model with an exact penalty term, and we give a general existence and uniqueness proof for the viscosity solution to the PDE given by its Euler–Lagrange equation, which is also applicable to a whole class of PDEs arising in image segmentation. We note that the use of the geodesic distance for segmentation has been considered before [6, 33]; however, those models use the geodesic distance only as the fitting term within the regulariser, so they are liable to make segmentation errors for poor initialisations or complex images. Here, we take a different approach, including the geodesic distance as a stand-alone fitting term, separate from the regulariser, and using intensity fitting terms to ensure robustness.

In this paper, we only consider 2D images; however, for completeness, we remark that 3D segmentation models exist [25, 43] and it is simple to extend the proposed model to 3D. The contributions of this paper can be summarised as follows:

  • We incorporate the geodesic distance as a distance penalty term within the variational framework.

  • We propose a convex selective segmentation model using this penalty term and demonstrate how it can achieve results which cannot be achieved by other models.

  • We improve the geodesic penalty term, focussing on improving robustness to noise and improving segmentation when object edges are blurred.

  • We give an existence and uniqueness proof for the viscosity solution for the PDEs associated with a whole class of segmentation models (both global and selective).

We find that the proposed model gives accurate segmentation results for a wide range of parameters. In particular, when segmenting the same objects from images of the same modality, e.g. segmenting lungs from CT scans, very similar parameters yield accurate results from one image to the next. Therefore, this model may be used to assist the preparation of large training sets for deep learning studies [40, 41] that concern segmentation of particular objects from images.

The paper is structured as follows: in Sect. 2, we review some global and selective segmentation models. In Sect. 3, we discuss the geodesic distance penalty term, propose a new convex model and also address weaknesses in the naïve implementation of the geodesic distance term. In Sect. 4, we discuss the nonstandard AOS scheme, introduced in [38], which we use to solve the model. In Sect. 5, we give an existence and uniqueness proof for a general class of PDEs arising in image segmentation, thereby showing that for a given initialisation, the solution to our model is unique. In Sect. 6, we compare the results of the proposed model to other selective segmentation models and show that the proposed model is less parameter dependent than other models and is more robust to user input. Finally, in Sect. 7, we provide some concluding remarks.

2 Review of Variational Segmentation Models

Although we focus on selective segmentation, it is illuminating to introduce some global segmentation models first. Throughout this paper, we denote the original image by z(x, y) with image domain \(\varOmega \subset {\mathbb {R}}^{2}\).

2.1 Global Segmentation

The model of Mumford and Shah [29] is one of the most famous and important variational models in image segmentation. We will review its two-dimensional piecewise constant variant, commonly known as the Chan–Vese model [11], which takes the form

$$\begin{aligned}&F_{\mathrm{CV}}(\varGamma ,c_{1},c_{2}) \nonumber \\&\quad =\mu \cdot \hbox {length} (\varGamma )+\lambda _{1}\int _{\varOmega _{1}} |z(x,y)-c_{1}|^{2}\,\hbox {d}\varOmega \nonumber \\&\qquad +\lambda _{2}\int _{\varOmega _{2}}|z(x,y)-c_{2}|^{2}\,\hbox {d}\varOmega \end{aligned}$$
(1)

where the foreground \(\varOmega _{1}\) is the subdomain to be segmented, the background is \(\varOmega _{2}=\varOmega \backslash \varOmega _{1}\) and \(\mu ,\lambda _{1},\lambda _{2}\) are fixed nonnegative parameters. The values \(c_{1}\) and \(c_{2}\) are the average intensities of z(x, y) inside \(\varOmega _{1}\) and \(\varOmega _{2}\), respectively. We use a level set function

$$\begin{aligned} \phi (x,y)= {\left\{ \begin{array}{ll} >0,&{} (x,y)\in \varOmega _{1},\\ 0,&{} (x,y)\in \varGamma ,\\ <0, &{} \hbox {otherwise},\\ \end{array}\right. } \end{aligned}$$

to track \(\varGamma = \{ (x,y)\in \varOmega \, |\, \phi (x,y)=0\}\) (an idea popularised by Osher and Sethian [31]) and reformulate (1) as

$$\begin{aligned}&F_{\mathrm{CV}}(\phi ,c_{1},c_{2})\nonumber \\&\quad =\mu \int _{\varOmega }|\nabla H_{\varepsilon }(\phi )|\,\hbox {d}\varOmega +\lambda _{1}\int _{ \varOmega }(z(x,y)-c_{1})^{2}H_{\varepsilon }(\phi )\,\hbox {d}\varOmega \nonumber \\&\quad \quad +\lambda _{2}\int _{\varOmega }(z(x,y)-c_{2})^{2}(1-H_{\varepsilon }(\phi ))\, \hbox {d}\varOmega , \end{aligned}$$
(2)

with \(H_{\varepsilon }(\phi )\) a smoothed Heaviside function such as \(H_{\varepsilon }(\phi ) = \frac{1}{2} + \frac{1}{\pi }\arctan (\frac{\phi }{\varepsilon })\) for some \(\varepsilon \); we set \(\varepsilon = 1\) throughout. We solve this in two stages: first, with \(\phi \) fixed, we minimise \(F_{\mathrm{CV}}\) with respect to \(c_{1}\) and \(c_{2}\), obtaining

$$\begin{aligned}&c_{1}=\frac{\int _{\varOmega }H_{\varepsilon }(\phi )\cdot z(x,y)\,\hbox {d}\varOmega }{\int _{\varOmega }H_{\varepsilon }(\phi ) \,\hbox {d}\varOmega }, \quad \nonumber \\&\quad c_{2}=\frac{\int _{\varOmega }(1-H_{\varepsilon }(\phi ))\cdot z(x,y)\,\hbox {d}\varOmega }{\int _{\varOmega }(1-H_{\varepsilon }(\phi )) \,\hbox {d}\varOmega }, \end{aligned}$$
(3)

and second, with \(c_{1}\) and \(c_{2}\) fixed, we minimise (2) with respect to \(\phi \). This requires the calculation of the associated Euler–Lagrange equations. A drawback of the Chan–Vese energy functional (2) is that it is non-convex: a minimiser may only be a local minimum rather than the global minimum, and the final segmentation result depends on the initialisation. Chan et al. [10] reformulated (2) using an exact penalty term to obtain an equivalent convex model; we use this same technique in Sect. 3 for the Geodesic Model.
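For concreteness, a minimal Python sketch of this first stage (our own illustration; the paper's implementation is in MATLAB), computing \(H_{\varepsilon }\) and the closed-form minimisers (3) on a discrete image:

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Smoothed Heaviside H_eps(phi) = 1/2 + arctan(phi/eps)/pi (eps = 1)."""
    return 0.5 + np.arctan(phi / eps) / np.pi

def region_averages(z, phi, eps=1.0):
    """Closed-form minimisers c1, c2 of (2) for fixed phi, as in (3)."""
    H = heaviside(phi, eps)
    c1 = (H * z).sum() / H.sum()
    c2 = ((1.0 - H) * z).sum() / (1.0 - H).sum()
    return c1, c2
```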

2.2 Selective Segmentation

Selective segmentation models make use of user input, i.e. a marker set \({\mathcal {M}}\) of points near the object or objects to be segmented. Let \( {\mathcal {M}}=\{ (x_{i},y_{i})\in \varOmega , 1\le i\le k\} \) be such a marker set. The aim of selective segmentation is to design an energy functional where the segmentation contour \(\varGamma \) is close to the points of \({\mathcal {M}}\).

Early work An early model by Caselles et al. [8], commonly known as the geodesic active contour (GAC) model, uses an edge detector function to ensure the contour follows edges, and the functional to minimise is given by

$$\begin{aligned} \int _{\varGamma }g(|\nabla z(x,y)|)\hbox {d}\varGamma . \end{aligned}$$

The term \(g(|\nabla z(x,y)|)\) is an edge detector; one example is \(g(s) = 1/(1+\beta s^{2})\) with \(\beta \) a tuning parameter. It is common to smooth the image with a Gaussian filter \(G_{\sigma }\), where \(\sigma \) is the kernel size, i.e. to use \(g(|\nabla \left( G_{\sigma }*z(x,y)\right) |)\) as the edge detector. This mitigates the effect of noise in the image, giving a more accurate edge detector. Gout et al. [25] built upon the GAC model by incorporating a distance term \({\mathcal {D}}(x,y)\) into this integral; i.e. the integrand is \({\mathcal {D}}(x,y)g(|\nabla z|)\). The distance term is a penalty on the distance from \({\mathcal {M}}\), and this model encourages the contour to be near the set \({\mathcal {M}}\) whilst also lying on edges. However, this model struggles when the boundaries between objects and their background are fuzzy or blurred. To address this, Badshah and Chen [4] introduced a new model which adds the intensity fitting terms from the Chan–Vese model (1) to the Gout et al. [25] model. However, their model lacks robustness, with the result depending strongly on the initialisation. To improve on this, Rada and Chen [36] introduced a model which adds an area fitting term to the Badshah–Chen model and is far more robust.
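As an illustration, a short Python sketch of the smoothed edge detector (the form \(g(s)=1/(1+\beta s^{2})\) is from the text; the particular values of \(\beta \) and \(\sigma \) here are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_detector(z, beta=1000.0, sigma=1.0):
    """g(|grad(G_sigma * z)|) with g(s) = 1/(1 + beta*s^2).
    Pre-smoothing with a Gaussian mitigates noise in the edge map."""
    zs = gaussian_filter(z, sigma)
    gy, gx = np.gradient(zs)
    return 1.0 / (1.0 + beta * (gx ** 2 + gy ** 2))
```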

The Rada–Chen model [36] We first briefly introduce this model, defined by

$$\begin{aligned} F_{\text {RC}}(\phi ,c_{1},c_{2})=&\,\mu \int _{\varOmega }{\mathcal {D}}(x,y)g(|\nabla z(x,y)|)|\nabla H_{\varepsilon }(\phi )|\,\hbox {d}\varOmega \nonumber \\&+ \lambda _{1}\int _{\varOmega }(z(x,y)-c_{1})^{2}H_{\varepsilon }(\phi )\,\hbox {d}\varOmega \nonumber \\&+ \lambda _{2}\int _{\varOmega }(z(x,y)-c_{2})^{2}(1-H_{\varepsilon }(\phi ))\,\hbox {d}\varOmega \nonumber \\&+ \gamma \bigg [\left( \int _{\varOmega }H_{\varepsilon }(\phi )\,\hbox {d}\varOmega -A_{1}\right) ^{2}\nonumber \\&+ \left( \int _{\varOmega }(1-H_{\varepsilon }(\phi ))\,\hbox {d}\varOmega -A_{2}\right) ^{2}\bigg ], \end{aligned}$$
(4)

where \(\mu ,\lambda _{1},\lambda _{2},\gamma \) are fixed nonnegative parameters. There is freedom in choosing the distance term \({\mathcal {D}}(x,y)\); see [36] for some examples. \(A_{1}\) is the area of the polygon formed from the points of \({\mathcal {M}}\), and \(A_{2}=|\varOmega |-A_{1}\). The final term of this functional penalises the area inside a contour being very different from \(A_{1}\). One drawback of the Rada–Chen model is that the selective fitting term uses no location information from the marker set \({\mathcal {M}}\). Therefore, the result can be a contour which is separated into small parts scattered over the domain, whose combined area satisfies the area constraint.

Nguyen et al. [30] This model is based on the GAC model and uses likelihood functions as fitting terms; it has the energy functional

$$\begin{aligned} \begin{aligned} F_{\text {NG}}(\phi )=&\mu \int _{\varOmega }g(|\nabla z(x,y)|) |\nabla H_{\varepsilon }(\phi )|\,\hbox {d}\varOmega \\&+ \lambda \int _{\varOmega } \big [ \alpha \left( P_{\text {B}}(x,y) - P_{\text {F}}(x,y)\right) \\&+ \left( 1-\alpha \right) \left( 1-2P(x,y)\right) \big ] \phi \,\hbox {d}\varOmega \end{aligned} \end{aligned}$$

where \(P_{\text {B}}(x,y)\) and \(P_{\text {F}}(x,y)\) are the normalised log-likelihoods that the pixel (x, y) belongs to the background and foreground, respectively, and P(x, y) is the probability that pixel (x, y) belongs to the foreground. Here \(\alpha \in [0,1]\) and the minimisation is constrained to \(\phi \in [0,1]\), so \(F_{\text {NG}}(\phi )\) is convex. This model performs well for many examples, see [30]; however, it fails when the boundary of the object to segment is non-smooth or has fine structures. Also, the final result is sometimes sensitive to the marker sets used.

The Spencer–Chen model [38] The authors introduced the following model:

$$\begin{aligned} F_{\text {SC}}(\phi ,c_{1},c_{2})=&\mu \int _{\varOmega }g(|\nabla z(x,y)|) |\nabla H_{\varepsilon }(\phi )|\,\hbox {d}\varOmega \nonumber \\&+ \lambda _{1}\int _{\varOmega }(z(x,y)-c_{1})^{2}H_{\varepsilon }(\phi )\,\hbox {d}\varOmega \nonumber \\&+ \lambda _{2}\int _{\varOmega }(z(x,y)-c_{2})^{2}(1-H_{\varepsilon }(\phi ))\,\hbox {d}\varOmega \nonumber \\&+\theta \int _{\varOmega }{\mathcal {D}}_{E}(x,y)H_{\varepsilon }(\phi )\,\hbox {d}\varOmega , \end{aligned}$$
(5)

where \(\mu ,\lambda _{1},\lambda _{2},\theta \) are fixed nonnegative parameters. Note that the regulariser of this model differs from the Rada–Chen model (4): the distance function \({\mathcal {D}}(x,y)\) has been separated from the edge detector term and is now a stand-alone penalty term \({\mathcal {D}}_{E}(x,y)\). The authors use the normalised Euclidean distance \({\mathcal {D}}_{E}(x,y)\) from the marker set \({\mathcal {M}}\) as their distance penalty term. We will return to this in Sect. 3, as replacing the Euclidean distance term with a geodesic distance term is one of the key improvements we make to the Spencer–Chen model.

Convex Spencer–Chen model [38] Spencer and Chen use the ideas of [10] to reformulate (5) into a convex minimisation problem. It can be shown that the Euler–Lagrange equations for \(F_{\text {SC}}(\phi ,c_{1},c_{2})\) have the same stationary solutions as for

$$\begin{aligned}&F_{\text {SC1}}(u,c_{1},c_{2})\nonumber \\&\quad =\mu \int _{\varOmega }g(|\nabla z(x,y)|) |\nabla u|\,\hbox {d}\varOmega \nonumber \\&\quad + \int _{\varOmega }\left[ \lambda _{1}(z(x,y)-c_{1})^{2} - \lambda _{2}(z(x,y)-c_{2})^{2} \right] u\,\hbox {d}\varOmega \nonumber \\&\quad \quad + \theta \int _{\varOmega }{\mathcal {D}}_{E}(x,y)u \,\hbox {d}\varOmega , \end{aligned}$$
(6)

with the minimisation constrained to \(u\in [0,1]\). This is a constrained convex minimisation which can be reformulated to an unconstrained minimisation using an exact penalty term \(\nu (u) := \max \{0, 2|u -\frac{1}{ 2}|-1\}\) in the functional, which encourages the minimiser to be in the range [0, 1]. In [38], the authors use a smooth approximation \(\nu _{\varepsilon }(u)\) to \(\nu (u)\) given by

Fig. 1 Comparison of distance measures. a Simple binary image with marker point; b normalised Euclidean distance from marker point; c edge map function f(x, y) for the image; d normalised geodesic distance from marker point

$$\begin{aligned} \nu _{\varepsilon }(u) = H_{\varepsilon }\left( \sqrt{(2u-1)^{2}+\varepsilon }-1\right) \left[ \sqrt{(2u-1)^{2}+\varepsilon }-1 \right] ,\nonumber \\ \end{aligned}$$
(7)

and perform the unconstrained minimisation of

$$\begin{aligned}&F_{\text {SC2}}(u,c_{1},c_{2})\quad \nonumber \\&\quad =\mu \int _{\varOmega }g(|\nabla z(x,y)|) |\nabla u|\,\hbox {d}\varOmega \nonumber \\&\quad \quad + \int _{\varOmega }\left[ \lambda _{1}(z(x,y)-c_{1})^{2} - \lambda _{2}(z(x,y)-c_{2})^{2} \right] u\,\hbox {d}\varOmega \nonumber \\&\quad \quad + \theta \int _{\varOmega }{\mathcal {D}}_{E}(x,y)u \,\hbox {d}\varOmega + \alpha \int _{\varOmega }\nu _{\varepsilon }(u)\,\hbox {d}\varOmega . \end{aligned}$$
(8)

When \( \alpha > \frac{1}{2} \left| \left| \left[ \lambda _{1}(z(x,y)-c_{1})^{2} - \lambda _{2}(z(x,y)-c_{2})^{2} \right] + \theta {\mathcal {D}}_{E}(x,y) \right| \right| _{L^{\infty }}\), the above functional has the same set of stationary solutions as \(F_{\text {SC1}}(u,c_{1},c_{2})\). Due to its convexity, it permits an arbitrary initialisation of u whilst still obtaining the desired selective segmentation result.
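A small Python sketch of the exact penalty \(\nu (u)\) and its smooth approximation (7) (our own illustration; the value of \(\varepsilon \) is taken from Fig. 7c):

```python
import numpy as np

def nu(u):
    """Exact penalty nu(u) = max{0, 2|u - 1/2| - 1}: zero on [0, 1],
    growing linearly outside, so minimisers are pushed into [0, 1]."""
    return np.maximum(0.0, 2.0 * np.abs(u - 0.5) - 1.0)

def nu_eps(u, eps=0.1):
    """Smooth approximation (7) of nu(u), with the arctan-regularised
    Heaviside H_eps from Sect. 2.1 (cf. Fig. 7)."""
    r = np.sqrt((2.0 * u - 1.0) ** 2 + eps) - 1.0
    return (0.5 + np.arctan(r / eps) / np.pi) * r
```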

Convex Liu et al. model [26] Recently, a convex model was introduced by Liu et al. which applies a weighting to the data fitting terms; the functional to minimise is given by

$$\begin{aligned} \begin{aligned} F_{\text {LIU}}(u)=&\mu \int _{\varOmega }|\nabla u|\,\hbox {d}\varOmega + \mu _{2}\int _{\varOmega } |\nabla u|^{2}\,\hbox {d}\varOmega \\&+ \lambda \int _{\varOmega } \omega ^{2}(x,y)\left| z - u\right| ^{2}\,\hbox {d}\varOmega , \end{aligned} \end{aligned}$$
(9)

where \(\mu ,\mu _{2},\lambda \) are nonnegative parameters and \(\omega (x,y) = 1-{\mathcal {D}}(x,y)g(|\nabla z|)\), where \({\mathcal {D}}(x,y)\) is a distance function from the marker set \({\mathcal {M}}\) (see, e.g., [26]).

3 Proposed Convex Geodesic Selective Model

We propose an improved selective model, based on the Spencer–Chen model, which uses the geodesic distance from the marker set \({\mathcal {M}}\) as the distance term, rather than the Euclidean distance. Increasing the distance at edges in the image gives a more accurate reflection of the true dissimilarity between a pixel and the marker set. We propose minimising the convex functional

$$\begin{aligned}&F_{\text {CG}}(u,c_{1},c_{2})\nonumber \\&\quad =\mu \int _{\varOmega }g(|\nabla z(x,y)|) |\nabla u|\,\hbox {d}\varOmega \nonumber \\&\quad \quad + \int _{\varOmega }\left[ \lambda _{1}(z(x,y)-c_{1})^{2} - \lambda _{2}(z(x,y)-c_{2})^{2} \right] u\,\hbox {d}\varOmega \nonumber \\&\quad \quad + \theta \int _{\varOmega }{\mathcal {D}}_{M}(x,y)u \,\hbox {d}\varOmega + \alpha \int _{\varOmega }\nu _{\varepsilon }(u)\,\hbox {d}\varOmega , \end{aligned}$$
(10)

where \({\mathcal {D}}_{M}(x,y)\) is the edge-weighted geodesic distance from the marker set. In Fig. 1, we compare the normalised geodesic distance and the Euclidean distance from the same marker point (i.e. the set \({\mathcal {M}}\) contains a single point); clearly, the former gives a more intuitively correct distance penalty than the latter. We will refer to this proposed model as the Geodesic Model.

3.1 Computing the Geodesic Distance Term \({\mathcal {D}}_{M}(x,y)\)

The geodesic distance from the marker set \({\mathcal {M}}\) is given by \({\mathcal {D}}_{M}(x,y) = 0\) for \((x,y)\in {\mathcal {M}}\) and \({\mathcal {D}}_{M}(x,y) = \frac{{\mathcal {D}}_{M}^{0}(x,y)}{||{\mathcal {D}}_{M}^{0}(x,y)||_{L^{\infty }}}\) for \((x,y)\not \in {\mathcal {M}}\), where \({\mathcal {D}}_{M}^{0}(x,y)\) is the solution of the following PDE:

$$\begin{aligned}&|\nabla {\mathcal {D}}_{M}^{0}(x,y)| = f(x,y), \nonumber \\&{\mathcal {D}}^{0}_{M}(x_{0},y_{0}) = 0, \,(x_{0},y_{0})\in {\mathcal {M}}. \end{aligned}$$
(11)

where f(x, y) is defined below in terms of the image contents.

If \(f(x,y)\equiv 1\) (i.e. \(|\nabla {\mathcal {D}}_{M}^{0}(x,y)|=1\)), then the distance penalty \({\mathcal {D}}_{M}(x,y)\) is simply the normalised Euclidean distance \({\mathcal {D}}_{E}(x,y)\) used in the Spencer–Chen model (5). We have free rein to design f(x, y) as we wish. Looking at the PDE in (11), we see that where f(x, y) is small the distance function has a small gradient and is almost flat, and where f(x, y) is large the distance map has a large gradient. In the case of selective image segmentation, we want small gradients in homogeneous areas of the image and large gradients at edges. If we set

$$\begin{aligned} f(x,y) = \varepsilon _{{\mathcal {D}}} + \beta _{G}|\nabla z(x,y)|^{2}, \end{aligned}$$
(12)

this gives us the desired property that in areas where \(|\nabla z(x,y)|\approx 0\), the distance function increases by some small \(\varepsilon _{{\mathcal {D}}}\); here, the image z(x, y) is scaled to [0, 1]. At edges, \(|\nabla z(x,y)|\) is large and the geodesic distance increases sharply. We set \(\beta _{G} =1000\) and \(\varepsilon _{{\mathcal {D}}} = 10^{-3}\) throughout. In Fig. 1, we see that the geodesic distance plot gives a low distance penalty on the triangle, which the marker indicates we would like segmented. There is a reasonable penalty on the background, and all other objects in the image have a very high distance penalty (as a geodesic to these points must cross two edges). This contrasts with the Euclidean distance, which gives a low distance penalty to some background pixels and the maximum penalty to the pixels furthest away.
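A sketch of how \({\mathcal {D}}_{M}(x,y)\) can be computed. The fast marching step is delegated here to the third-party scikit-fmm package, an assumption of ours (the paper does not prescribe a library; fast marching is discussed in Remark 1):

```python
import numpy as np
import skfmm  # scikit-fmm: fast marching eikonal solver (assumed dependency)

def geodesic_distance(z, marker_mask, eps_D=1e-3, beta_G=1000.0):
    """Normalised geodesic distance D_M from the marker set M, via the
    eikonal equation (11) with right-hand side f from (12).
    z is the image scaled to [0, 1]; marker_mask is True on M."""
    gy, gx = np.gradient(z)
    f = eps_D + beta_G * (gx ** 2 + gy ** 2)        # eq. (12)
    phi = np.where(marker_mask, -1.0, 1.0)          # zero level set around M
    D0 = np.asarray(skfmm.travel_time(phi, speed=1.0 / f))  # |grad D0| = f
    return D0 / D0.max()                            # normalise as in Sect. 3.1
```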

Fig. 2 Examples of images showing the problems discussed and the resulting geodesic distance maps. Column 1 shows the lack of robustness to noise; column 2 shows that outside the patient we have an unreasonably low distance penalty; column 3 shows how the blurred edge under the aorta leads to the distance term being very low throughout the heart

3.2 Comparing Euclidean and Geodesic Distance Terms

We briefly give some advantages of using the geodesic distance as a penalty term rather than the Euclidean distance, and a remark on the computational complexity of both distances.

  1. Parameter robustness. The Geodesic Model is more robust to the choice of the fitting parameter \(\theta \), as the penalty inside the shape we want segmented is consistently small; it is only outside the shape that the penalty is large. With the Euclidean distance term, however, we always have a penalty inside the shape we actually want to segment. This is due to the nature of the Euclidean distance, which does not discriminate on intensity; this penalty can also be quite high if our marker set is small and does not cover the whole object.

  2. Robustness to marker set selection. The geodesic distance term is far more robust to point selection; for example, we can choose just one point inside the object we want to segment and obtain a nearly identical geodesic distance to that from choosing many more points. This is not true of the Euclidean distance term, which is very sensitive to point selection and requires markers to be spread over all areas of the object to be segmented (especially at extrema of the object).

Remark 1

(Computational complexity) The main concern with using the geodesic penalty term, which we obtain by solving PDE (11), would be that it takes significantly more time to compute than the Euclidean distance. However, using the fast marching algorithm of Sethian [37], the complexity of computing \({\mathcal {D}}_{M}(x,y)\) is \({\mathcal {O}}(N\log N)\) for an image with N pixels. This is only marginally more complex than computing the Euclidean distance, which has \({\mathcal {O}}(N)\) complexity [28].

3.3 Improvements to Geodesic Distance Term

We now propose some modifications to the geodesic distance. Although the geodesic distance presents many advantages for selective image segmentation, this fitting term has three key disadvantages from which the Euclidean fitting term does not suffer.

  1. Not robust to noise. The computation of the geodesic distance depends on \(|\nabla z(x,y)|^{2}\) in f(x, y) [see (11)]. So, if an image contains a lot of noise, each noisy pixel appears as an edge and we get a misleading distance term.

  2. Objects far from \({\mathcal {M}}\) with low penalty. As the geodesic distance only uses the marker set \({\mathcal {M}}\) for its initial condition [see (11)], objects far from \({\mathcal {M}}\) can have a low distance penalty, which is clearly not desired.

  3. Blurred edges. If we have two objects separated by a blurry edge and marker points in only one of them, the geodesic distance to the other object will be low, as the edge penalty is weakly enforced at a blurry edge. We would desire a low penalty inside the object with markers and a reasonable penalty in the adjoining object.

In Fig. 2, each column shows an example for each of the problems listed above. We now propose solutions to each of these problems.

Fig. 3 Edge maps and geodesic distance maps. Left to right: the clean image, the image with 10% Gaussian noise, the noisy image with Gaussian convolution applied (\(\sigma =5\)) and the noisy image with 100 iterations of anisotropic TV Gauss–Seidel smoothing. The marker set \({\mathcal {M}}\), shown on the top row, is the same for each image

Problem 1: Noise robustness

A naïve solution to the problem of noisy images would be to apply a Gaussian blur to z(x, y) to remove the effect of the noise, i.e. to change f(x, y) to

$$\begin{aligned} {\tilde{f}}(x,y) = \varepsilon _{{\mathcal {D}}} + \beta _{G}|\nabla G_{\sigma } * z(x,y)|^{2} \end{aligned}$$
(13)

where \(G_{\sigma }\) is a Gaussian convolution with standard deviation \(\sigma \). However, Gaussian convolution also blurs edges in the image, which gives rise to the same issues described under Problem 3 below. We see in Fig. 3, column 3, that the Gaussian convolution reduces the sharpness of edges; this results in the geodesic distance being very similar in adjacent objects and, therefore, more pixels with a high geodesic distance. Our alternative to Gaussian blur is anisotropic TV denoising. We refer the reader to [9, 32] for details of the model; here, we just give the PDE which results from its minimisation:

$$\begin{aligned} \begin{aligned} {\tilde{\mu }}\nabla \cdot \bigg ( g(|\nabla z(x,y)|) \frac{\nabla u}{|\nabla u|_{\varepsilon _{2}}} \bigg ) + \iota (z(x,y) - u) = 0, \end{aligned} \end{aligned}$$
(14)

where \({\tilde{\mu }},\iota \) are nonnegative parameters (we fix \({\tilde{\mu }} = 10^{-3}, \iota = 5\times 10^{-4}\) throughout). We propose to apply a relatively small number of cheap fixed-point Gauss–Seidel iterations (between 100 and 200) to the discretised PDE. We cycle through all pixels (i, j) and update \(u_{i,j}\) as follows:

$$\begin{aligned} u_{i,j} = \frac{A_{i,j}u_{i+1,j} + B_{i,j}u_{i-1,j} + C_{i,j}u_{i,j+1} + D_{i,j}u_{i,j-1} + \iota z_{i,j}}{A_{i,j}+B_{i,j}+C_{i,j}+D_{i,j} + \iota }\nonumber \\ \end{aligned}$$
(15)

where \(A_{i,j} = \frac{{\tilde{\mu }}}{h_{x}^{2}}g(|\nabla z(x,y)|)_{i+1/2,j}, B_{i,j} = \frac{{\tilde{\mu }}}{h_{x}^{2}}g(|\nabla z(x,y)|)_{i-1/2,j}\), \(C_{i,j} = \frac{{\tilde{\mu }}}{h_{y}^{2}}g(|\nabla z(x,y)|)_{i,j+1/2}\) and \(D_{i,j} = \frac{{\tilde{\mu }}}{h_{y}^{2}}g(|\nabla z(x,y)|)_{i,j-1/2}\). We update all pixels once per iteration and solve the PDE in (11) with f(xy) replaced by

$$\begin{aligned} {f}_{1}(x,y) = \varepsilon _{{\mathcal {D}}} + \beta _{G}|\nabla S^{k}(z(x,y))|^{2} \end{aligned}$$
(16)

where S represents the Gauss–Seidel iterative scheme and k is the number of iterations performed (we choose \(k=100\) in our tests). In the final column of Fig. 3, we see that the geodesic distance map more closely resembles that of the clean image than the Gaussian blurred map in column 3, and in Fig. 4, we see that the segmentation results are qualitatively and quantitatively better using the anisotropic smoothing technique.
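A direct (unvectorised) Python sketch of the smoothing step \(S^{k}(z)\), following (14)–(15), under the assumptions of unit grid spacing and the edge detector \(g(s)=1/(1+\beta s^{2})\) used elsewhere in the paper; boundary handling is simplified:

```python
import numpy as np

def gauss_seidel_smooth(z, k=100, mu_t=1e-3, iota=5e-4, beta=1000.0):
    """k Gauss-Seidel sweeps of the discretised PDE (14); returns S^k(z).
    Boundary pixels are kept fixed for brevity (the paper uses Neumann
    conditions). Unit spacing h_x = h_y = 1 is assumed."""
    gy, gx = np.gradient(z)
    g = 1.0 / (1.0 + beta * (gx ** 2 + gy ** 2))  # edge detector on the data
    u = z.copy()
    n, m = z.shape
    for _ in range(k):
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                # half-point diffusivities, e.g. g_{i+1/2,j} = (g_{i+1,j}+g_{i,j})/2
                A = mu_t * 0.5 * (g[i + 1, j] + g[i, j])
                B = mu_t * 0.5 * (g[i - 1, j] + g[i, j])
                C = mu_t * 0.5 * (g[i, j + 1] + g[i, j])
                D = mu_t * 0.5 * (g[i, j - 1] + g[i, j])
                u[i, j] = (A * u[i + 1, j] + B * u[i - 1, j]
                           + C * u[i, j + 1] + D * u[i, j - 1]
                           + iota * z[i, j]) / (A + B + C + D + iota)
    return u
```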

Fig. 4 Segmentation results and Tanimoto coefficients (see Sect. 6) for images with 10%, 20% and 30% Gaussian noise, with and without smoothing; \(\lambda _{1} = \lambda _{2} = 5\), \(\theta = 3\)

Problem 2: Objects far from\({\mathcal {M}}\)with low penalty

In Fig. 2, column 2, we see that the geodesic distance to the outside of the patient is lower than to their ribs. This is because the region outside the body is homogeneous, so there is almost zero distance penalty in this region. Similarly, in Fig. 3, column 4, the distances from the marker set to many surrounding objects are low, even though their Euclidean distance from the marker set is high. We therefore wish to incorporate the Euclidean distance \({\mathcal {D}}_{E}(x,y)\). Our solution is to modify the term \({f}_{1}(x,y)\) from (16) to

$$\begin{aligned} {f}_{2}(x,y) = \varepsilon _{{\mathcal {D}}} + \beta _{G}|\nabla S^{k}(z(x,y))|^{2} + \vartheta {\mathcal {D}}_{E}(x,y). \end{aligned}$$
(17)

In Fig. 5, the effect of this is clear: as \(\vartheta \) increases, the distance function resembles the Euclidean distance more closely. We use \(\vartheta = 10^{-1}\) in all experiments, as it adds a reasonable penalty to pixels far from the marker set.
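A sketch of the modified right-hand side (17), assuming SciPy's Euclidean distance transform for \({\mathcal {D}}_{E}\) and the smoother \(S^{k}\) from the previous sketch:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def f2(z_smooth, marker_mask, eps_D=1e-3, beta_G=1000.0, theta_D=0.1):
    """Right-hand side (17) for the eikonal equation (11).
    z_smooth = S^k(z) is the anisotropically smoothed image; marker_mask
    is True on M; theta_D is the vartheta weight (10^-1 in the paper)."""
    gy, gx = np.gradient(z_smooth)
    D_E = distance_transform_edt(~marker_mask)   # Euclidean distance from M
    D_E = D_E / D_E.max()                        # normalised, as in [38]
    return eps_D + beta_G * (gx ** 2 + gy ** 2) + theta_D * D_E
```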

Fig. 5 Displayed is \({\mathcal {D}}_{M}(x,y)\) using \({f}_{2}(x,y)\) for various \(\vartheta \) values. The marker set is the same as that used in Fig. 3

Fig. 6 Left to right: original image; \({\mathcal {M}}\) (green) and \(\mathcal {AM}\) (pink); segmentation result using the marker set only; \({\mathcal {D}}_{AM}(x,y)\) using anti-markers; segmentation result using anti-markers. Here \(\mu = 1, \lambda _{1}=\lambda _{2}=5, \theta = 25\) (Color figure online)

Problem 3: Blurred edges

If there are blurred edges between objects in an image, the geodesic distance will not increase significantly at such an edge. Therefore, the final segmentation result is liable to include unwanted objects. We address this problem through the use of anti-markers: markers which indicate objects that we do not want to segment, i.e. the opposite of marker points. We denote the set of anti-marker points by \(\mathcal {AM}\). We propose to use a geodesic distance map from the set \(\mathcal {AM}\), denoted by \({\mathcal {D}}_{AM}(x,y)\), which penalises pixels near to the set \(\mathcal {AM}\) and does not add any penalty to those far away. We could naïvely choose \({\mathcal {D}}_{AM}(x,y) = 1 - \tilde{{\mathcal {D}}}_{GAM}(x,y)\), where \(\tilde{{\mathcal {D}}}_{GAM}(x,y)\) is the normalised geodesic distance from \(\mathcal {AM}\). However, this puts a large penalty on pixels inside the object we actually want to segment (as \(\tilde{{\mathcal {D}}}_{GAM}(x,y)\) to those pixels is small). To avoid this problem, we propose the following anti-marker distance term:

$$\begin{aligned} {\mathcal {D}}_{AM}(x,y) = \frac{\exp \left( -{\tilde{\alpha }}\tilde{{\mathcal {D}}}_{GAM}(x,y) \right) -\exp \left( -{\tilde{\alpha }}\right) }{1-\exp \left( -{\tilde{\alpha }}\right) } \end{aligned}$$

where \({\tilde{\alpha }}\) is a tuning parameter. We choose \({\tilde{\alpha }} = 200\) throughout. This distance term ensures rapid decay of the penalty away from the set \(\mathcal {AM}\) but still enforces high penalty around the anti-marker set itself. See Fig. 6 where a segmentation result with and without anti-markers is shown. As \({\mathcal {D}}_{AM}(x,y)\) decays rapidly from \(\mathcal {AM}\), we do require that the anti-marker set be close to the blurred edge and away from the object we desire to segment.
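The anti-marker term is straightforward to compute from the normalised geodesic distance; a minimal sketch:

```python
import numpy as np

def anti_marker_penalty(D_gam, alpha_t=200.0):
    """D_AM from the normalised geodesic distance D_gam from the set AM.
    High penalty near AM, decaying rapidly to zero away from AM."""
    e = np.exp(-alpha_t)
    return (np.exp(-alpha_t * D_gam) - e) / (1.0 - e)
```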

Fig. 7 a Exact penalty function \(\nu (u)\); b \(\nu _{\varepsilon }(u)\) for \(\varepsilon =1\); c \(\nu _{\varepsilon }(u)\) for \(\varepsilon =0.1\)

Fig. 8 a \(\nu '(u)\) (discontinuities shown in red); b \(\nu '_{\varepsilon }(u)\) for \(\varepsilon =1\); c \(\nu '_{\varepsilon }(u)\) for \(\varepsilon =0.1\) (Color figure online)

3.4 The New Model and Its Euler–Lagrange Equation

The proposed Geodesic Model. Putting the above three ingredients together, we propose the model

$$\begin{aligned}&\min _{u,c_{1},c_{2}} \Big \{ F_\mathrm{GEO}(u,c_{1},c_{2})\nonumber \\&\quad = \int _{\varOmega }\left[ \lambda _{1}(z(x,y)-c_{1})^{2} - \lambda _{2}(z(x,y)-c_{2})^{2} \right] u\,\hbox {d}\varOmega \nonumber \\&\qquad + \mu \int _{\varOmega }g(|\nabla z(x,y)|) |\nabla u|\,\hbox {d}\varOmega + \theta \int _{\varOmega }{\mathcal {D}}_{G}(x,y)u \,\hbox {d}\varOmega \nonumber \\&\qquad + \alpha \int _{\varOmega }\nu _{\varepsilon }(u)\,\hbox {d}\varOmega \Big \}, \end{aligned}$$
(18)

where \({\mathcal {D}}_{G}(x,y) = \left( {\mathcal {D}}_{M}(x,y) + {\mathcal {D}}_{AM}(x,y)\right) /2\) and \({\mathcal {D}}_{M}(x,y)\) is the geodesic distance from the marker set \({\mathcal {M}}\), computed using (11) with \(f(x,y) = f_{2}(x,y)\) defined in (17). Using the calculus of variations, solving (18) with respect to \(c_{1}, \ c_{2}\), with u fixed, leads to

$$\begin{aligned}&c_{1}(u)=\frac{\int _{\varOmega }u\cdot z(x,y)\,\hbox {d}\varOmega }{\int _{\varOmega }u \,\hbox {d}\varOmega },\nonumber \\&c_{2}(u)=\frac{\int _{\varOmega }(1-u)\cdot z(x,y)\,\hbox {d}\varOmega }{\int _{\varOmega }(1-u) \,\hbox {d}\varOmega }, \end{aligned}$$
(19)

and the minimisation with respect to u (with \(c_{1}\) and \(c_{2}\) fixed) gives the PDE

$$\begin{aligned}&\mu \nabla \cdot \left( g(|\nabla z(x,y)|) \frac{\nabla u}{|\nabla u|_{\varepsilon _{2}}} \right) \nonumber \\&\quad - \Big [ \lambda _{1} (z(x,y)-c_{1})^{2} - \lambda _{2}(z(x,y) - c_{2})^{2}\Big ]\nonumber \\&\quad - \theta {\mathcal {D}}_{G}(x,y) - \alpha \nu '_{\varepsilon }(u)= 0 \end{aligned}$$
(20)

in \(\varOmega \), where we replace \(|\nabla u|\) with \(|\nabla u|_{\varepsilon _{2}} = \sqrt{u_{x}^{2}+u_{y}^{2}+\varepsilon _{2}}\) to avoid a zero denominator; we choose \(\varepsilon _{2}=10^{-6}\) throughout. We also have Neumann boundary conditions \(\frac{\partial u}{\partial {\varvec{n}}} = 0\) on \(\partial \varOmega \), where \({\varvec{n}}\) is the outward unit normal vector.

Next, we discuss a numerical scheme for solving the PDE (20). It should be remarked that \(c_{1}(u)\) and \(c_{2}(u)\) should be updated as soon as u is updated; in practice, \(c_1, c_2\) converge very quickly since the object intensity \(c_1\) does not change much.

4 An Additive Operator Splitting Algorithm

Additive operator splitting (AOS) is a widely used method [14, 27, 42], as seen in more recent works [2,3,4,5, 36, 38], for diffusion-type equations such as

$$\begin{aligned} \frac{\partial u}{\partial t} = \mu \nabla \cdot (G(u)\nabla u) - f. \end{aligned}$$
(21)

AOS allows us to split the two-dimensional problem into two one-dimensional problems, which we solve and then combine. Each one-dimensional problem gives rise to a tridiagonal system of equations which can be solved efficiently; hence, AOS is a very efficient method for solving diffusion-like equations. AOS is a semi-implicit method and permits far larger time steps than the corresponding explicit schemes; hence, it is more stable than an explicit method [42]. We rewrite the above equation as

$$\begin{aligned} \frac{\partial u}{\partial t} = \mu \bigg ( \partial _{x}\left( G(u)\partial _{x} u\right) +\partial _{y}\left( G(u)\partial _{y} u\right) \bigg ) - f, \end{aligned}$$

and after discretisation, we can rewrite this as [42]

$$\begin{aligned} u^{k+1} = \frac{1}{2}\sum _{\ell =1}^{2}\bigg (I-2\tau \mu A_{\ell }(u^{k})\bigg )^{-1}\left( u^{k}+\tau f\right) \end{aligned}$$

where \(\tau \) is the time step, \(A_{1}(u) = \partial _{x}(G(u)\partial _{x})\) and \(A_{2}(u) = \partial _{y}(G(u)\partial _{y})\). For notational convenience, we write \(G = G(u)\). The matrix \(A_{1}(u)\) can be obtained as follows:

$$\begin{aligned} \begin{aligned} \bigg ( A_{1}(u^{k})u^{k+1}\bigg )_{i,j}&= \bigg ( \partial _{x}\Big (G \partial _{x} u^{k+1}\Big )\bigg )_{i,j} \\&= \Bigg ( \frac{G_{i+\frac{1}{2},j}}{h_{x}^{2}} \Bigg ) u^{k+1}_{i+1,j} + \Bigg ( \frac{G_{i-\frac{1}{2},j}}{h_{x}^{2}} \Bigg ) u^{k+1}_{i-1,j}\\&\quad - \Bigg ( \frac{G_{i+\frac{1}{2},j}+G_{i-\frac{1}{2},j}}{h_{x}^{2}} \Bigg ) u^{k+1}_{i,j} \end{aligned} \end{aligned}$$

and similarly to [36, 38], for the half points in G, we take the average of the surrounding pixels, for example \(G_{i+\frac{1}{2},j} = \frac{G_{i+1,j} +G_{i,j} }{2}\). Therefore, we must solve two tridiagonal systems to obtain \(u^{k+1}\), and the Thomas algorithm allows us to solve each of these efficiently [42]. The AOS method described here assumes that f does not depend on u; however, in our case it depends on \(\nu '_{\varepsilon }(u)\) [see (20)], which has jumps around 0 and 1, so the algorithm has stability issues. This was noted in [38], where the authors adapted the formulation of (20) to offset the changes in f. Here, we repeat their arguments for adapting AOS when the exact penalty term \(\nu '_{\varepsilon }(u)\) is present (we refer to Figs. 7 and 8 for plots of the penalty function and its derivative, respectively).

The main consideration is to extract a linear part out of the nonlinearity in \(f=f(u)\). If we evaluate the Taylor expansion of \(\nu '_{\varepsilon }(u)\) around \(u=0\) and \(u=1\) and group the terms into the constant and linear components in u, we can, respectively, write \(\nu '_{\varepsilon }(u) = a_{0}(\varepsilon ) + b_{0}(\varepsilon ) u + {\mathcal {O}}(u^{2})\) and \(\nu '_{\varepsilon }(u) = a_{1}(\varepsilon ) + b_{1}(\varepsilon ) u + {\mathcal {O}}(u^{2})\). We actually find that \(b_{0}(\varepsilon ) =b_{1}(\varepsilon ) \) and denote the linear term as b from now on. Therefore, for a change in u of \(\delta u\) around \(u=0\) and \(u=1\), we can approximate the change in \(\nu '_{\varepsilon }(u)\) by \(b\cdot \delta u\). To focus on the jumps, define the interval in which \(\nu '_{\varepsilon }(u)\) jumps as

$$\begin{aligned} I_{\zeta } := [0-\zeta ,0+\zeta ]\cup [1-\zeta ,1+\zeta ] \end{aligned}$$

and refine the linear function by

$$\begin{aligned} {\tilde{b}}_{i,j}^{k} = {\left\{ \begin{array}{ll} b, &{} u_{i,j}^{k} \in I_{\zeta }\\ 0, &{} \hbox {else}. \end{array}\right. } \end{aligned}$$

Using these, we can now offset the change in \(\nu '_{\varepsilon }(u^{k})\) by changing the formulation (21) to

$$\begin{aligned} \frac{\partial u}{\partial t} = \mu \nabla \cdot (G(u)\nabla u) - \alpha {\tilde{b}}^{k} u + \big [ \alpha {\tilde{b}}^{k}u - f\big ] \end{aligned}$$

or in AOS form \( u^{k+1} =u^{k}+\tau \mu \nabla \cdot (G(u^{k})\nabla u^{k+1}) - \tau \alpha {\tilde{b}}^{k} u^{k+1} + \big [ \tau \alpha {\tilde{b}}^{k}u^{k} - f^{k} \big ] \) which, following the derivation in [38], can be reformulated as

$$\begin{aligned} u^{k+1}= & {} \frac{1}{2}\sum _{\ell =1}^{2}{\underbrace{\bigg (I+{\tilde{B}}^{k}-2\tau \mu A_{\ell }\left( u^{k}\right) \bigg )}_{Q_{1}}}^{-1}\nonumber \\&\left( \left( I+{\tilde{B}}^{k}\right) u^{k}+\tau f^{k}\right) \end{aligned}$$

where \({\tilde{B}}^{k} = \text {diag}(\tau \alpha {\tilde{b}}^{k})\). We note that \(Q_{1}\) is invertible as it is strictly diagonally dominant. This scheme improves on (21) as changes in \(f^{k}\) are now damped. However, it is found in [38] that although this scheme satisfies most of the discrete scale-space conditions of Weickert [42] (which guarantee convergence of the scheme), it does not satisfy all of them. In particular, the matrix \(Q_{1}\) does not have unit row sum and is not symmetrical. The authors adapt the scheme above to the equivalent

$$\begin{aligned} u^{k+1}= & {} \frac{1}{2}\sum _{\ell =1}^{2}{\underbrace{\bigg (I-2\tau \mu \left( I+{\tilde{B}}^{k}\right) ^{-1}A_{\ell }\left( u^{k} \right) \bigg )}_{Q_{2}}}^{-1}\nonumber \\&\left( u^{k}+\tau \left( I+{\tilde{B}}^{k}\right) ^{-1}f^{k}\right) , \end{aligned}$$
(22)

where the matrix \(Q_{2}\) does have unit row sum; however, it is not always symmetrical. We can guarantee convergence for \(\zeta = 0.5\) (in which case \(Q_{2}\) must be symmetrical), but we desire a small \(\zeta \) to give a small interval \(I_{\zeta }\). We find experimentally that convergence is achieved for any small value of \(\zeta \), owing to the fact that at convergence the solution u is almost binary [10]. Therefore, although initially \(Q_{2}\) is asymmetrical at some pixels, at convergence all pixels have values which fall within \(I_{\zeta }\) and \(I+{\tilde{B}}^{k}\) is a matrix with all diagonal entries \(1+\tau \alpha b\). Hence, at convergence \(Q_{2}\) is symmetrical and the discrete scale-space conditions are all satisfied. In all of our tests, we fix \(\zeta = 0.01\).
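A Python sketch of one step of the adapted AOS scheme (22), assuming unit grid spacing; the tridiagonal systems are solved with SciPy's banded solver rather than a hand-written Thomas algorithm, and f carries the sign convention of (21):

```python
import numpy as np
from scipy.linalg import solve_banded

def aos_step(u, G, f, tau, mu, alpha, b, zeta=0.01):
    """One update u^k -> u^{k+1} of scheme (22). G = g(|grad z|) is the
    diffusivity, f the fitting terms (sign convention of (21)), b the
    linearised slope of nu'_eps near 0 and 1 (Sect. 4)."""
    in_I = (np.abs(u) <= zeta) | (np.abs(u - 1.0) <= zeta)
    Bd = tau * alpha * b * in_I                 # diagonal of B^k
    rhs = u + tau * f / (1.0 + Bd)              # u^k + tau (I+B)^{-1} f^k
    out = np.zeros_like(u)
    for axis in (0, 1):                         # split into two 1D problems
        v, Ga, Ba = (rhs, G, Bd) if axis == 0 else (rhs.T, G.T, Bd.T)
        res = np.empty_like(v)
        for j in range(v.shape[1]):             # one tridiagonal solve per line
            g = Ga[:, j]
            gp = 0.5 * (g[1:] + g[:-1])         # diffusivity at half points
            w = 2.0 * tau * mu / (1.0 + Ba[:, j])
            N = g.size
            upper, diag, lower = np.zeros(N), np.ones(N), np.zeros(N)
            upper[1:] = -w[:-1] * gp            # superdiagonal of Q2
            lower[:-1] = -w[1:] * gp            # subdiagonal of Q2
            diag[:-1] += w[:-1] * gp            # Neumann ends: unit row sums
            diag[1:] += w[1:] * gp
            res[:, j] = solve_banded((1, 1),
                                     np.vstack([upper, diag, lower]), v[:, j])
        out += 0.5 * (res if axis == 0 else res.T)
    return out
```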

Algorithm 1 AOS-based minimisation of the proposed Geodesic Model (18), as described in this section

5 Existence and Uniqueness of the Viscosity Solution

In this section, we use the viscosity solution framework and the work of Ishii and Sato [20] to prove that, for a class of PDEs in image segmentation, the solution exists and is unique. In particular, we prove the existence and uniqueness of the viscosity solution for the PDE which is determined by the Euler–Lagrange equation for the Geodesic Model. Throughout, we will assume \(\varOmega \) is a bounded domain with \(C^{1}\) boundary.

From the work of [12, 20], we have the following theorem for analysing the solution of a partial differential equation of the form \(F(\varvec{x},u,Du,D^{2}u)=0\), where \(F: {\mathbb {R}}^{n}\times {\mathbb {R}}\times {\mathbb {R}}^{n}\times {\mathscr {M}}^{n}\rightarrow {\mathbb {R}}\), \({\mathscr {M}}^{n}\) is the set of \(n\times n\) symmetrical matrices, Du is the gradient of u and \(D^{2}u\) is the Hessian of u. For simplicity, and in a slight abuse of notation, we write \(x := \varvec{x}\) for a general point in \({\mathbb {R}}^{n}\).

Theorem 2

(Theorem 3.1 [20]) Assume that the following conditions (C1)–(C2) and (I1)–(I7) hold. Then, for each \(u_{0}\in C({\overline{\varOmega }})\) there is a unique viscosity solution \(u\in C([0,T)\times {\overline{\varOmega }})\) of (23) and (24) satisfying (25).

$$\begin{aligned}&\frac{\partial u}{\partial t} + F(t,x,u,Du,D^{2}u) = 0 \quad \text {in} \quad Q=(0,T)\times \varOmega ,\nonumber \\ \end{aligned}$$
(23)
$$\begin{aligned}&B(x,Du) = 0 \quad \text {in} \quad S=(0,T)\times \partial \varOmega , \end{aligned}$$
(24)
$$\begin{aligned}&u(0,x) = u_{0}(x) \quad \text {for} \quad x\in {\overline{\varOmega }}. \end{aligned}$$
(25)

Conditions (C1)–(C2)

  (C1) \(F(t,x,u,p,X) \le F(t,x,v,p,X)\) for \(u\le v\).

  (C2) \(F(t,x,u,p,X) \le F(t,x,u,p,Y)\) for \(X,Y\in {\mathscr {M}}^{n}\) and \(Y\le X\).

Conditions (I1)–(I7) Assume \(\varOmega \) is a bounded domain in \({\mathbb {R}}^{n}\) with \(C^{1}\) boundary.

  (I1) \(F \in C \left( [0,T] \times {\overline{\varOmega }} \times {\mathbb {R}} \times \left( {\mathbb {R}}^{n}\backslash \{ 0 \} \right) \times {\mathscr {M}}^{n} \right) \).

  (I2) There exists a constant \(\gamma \in {\mathbb {R}}\) such that for each \((t,x,p,X)\in [0,T]\times {\overline{\varOmega }}\times \left( {\mathbb {R}}^{n}\backslash \{ 0\} \right) \times {\mathscr {M}}^{n}\) the function \(u\mapsto F(t,x,u,p,X) - \gamma u\) is non-decreasing on \({\mathbb {R}}\).

  (I3) F is continuous at \((t,x,u,0,0)\) for any \((t,x,u)\in [0,T]\times {\overline{\varOmega }}\times {\mathbb {R}}\) in the sense that

    $$\begin{aligned} -\infty< F_{*}(t,x,u,0,0) = F^{*}(t,x,u,0,0)<\infty \end{aligned}$$

    holds. Here, \(F^{*}\) and \(F_{*}\) denote, respectively, the upper and lower semi-continuous envelopes of F, which are defined on \([0,T]\times {\overline{\varOmega }}\times {\mathbb {R}}\times {\mathbb {R}}^{n}\times {\mathscr {M}}^{n}\).

  (I4) \(B\in C\left( {\mathbb {R}}^{n} \times {\mathbb {R}}^{n} \right) \cap C^{1,1}\left( {\mathbb {R}}^{n} \times \left( {\mathbb {R}}^{n}\backslash \{ 0 \}\right) \right) \), where \(C^{1,1}\) is the Hölder function space.

  (I5) For each \(x\in {\mathbb {R}}^{n}\), the function \(p\mapsto B(x,p)\) is positively homogeneous of degree one in p, i.e. \(B(x,\lambda p) = \lambda B(x,p)\) for all \(\lambda \ge 0\) and \(p\in {\mathbb {R}}^{n}\backslash \{ 0\}\).

  (I6) There exists a positive constant \(\Theta \) such that \(\langle {\varvec{n}}(x), D_{p} B(x,p)\rangle \ge \Theta \) for all \(x\in \partial \varOmega \) and \(p\in {\mathbb {R}}^{n}\backslash \{ 0\}\). Here, \({\varvec{n}}(x)\) denotes the unit outward normal vector of \(\varOmega \) at \(x\in \partial \varOmega \).

  (I7) For each \(R>0\), there exists a non-decreasing continuous function \(\omega _{R}:[0,\infty )\rightarrow [0,\infty )\) satisfying \(\omega _{R}(0)=0\) such that if \(X,Y\in {\mathscr {M}}^{n}\) and \(\mu _{1},\mu _{2}\in [0,\infty )\) satisfy

    $$\begin{aligned} \begin{bmatrix} X & 0 \\ 0 & Y \end{bmatrix} \le \mu _{1} \begin{bmatrix} I & -I \\ -I & I \end{bmatrix} +\mu _{2} \begin{bmatrix} I & 0\\ 0 & I \end{bmatrix}\end{aligned}$$
    (26)

    then

    $$\begin{aligned} \begin{aligned}&F(t,x,u,p,X) - F(t,y,u,q,-Y) \ge \\&\quad -\omega _{R}\Big ( \mu _{1}\left( |x-y|^{2}+\rho (p,q)^{2} \right) + \mu _{2} + |p-q| \\&\quad + |x-y|\left( \max (|p |, |q|) +1 \right) \Big )\\ \end{aligned} \end{aligned}$$

    for all \(t\in [0,T], x,y\in {\overline{\varOmega }}, u\in {\mathbb {R}}\), with \(|u|\le R\), \(p,q\in {\mathbb {R}}^{n}\backslash \{ 0\}\) and \(\rho (p,q) = \min \left( \frac{|p-q|}{\min (|p|,|q|)},1 \right) \).

5.1 Existence and Uniqueness for the Geodesic Model

We now prove that there exists a unique solution for the PDE (20) resulting from the minimisation of the functional for the Geodesic Model (18).

Remark 3

It is important to note that although the values of \(c_{1}\) and \(c_{2}\) depend on u, they are fixed when we solve the PDE for u and therefore the problem is a local one and Theorem 2 can be applied. Once we update \(c_{1}\) and \(c_{2}\), using the updated u, then we fix them again and apply Theorem 2. In practice, as we near convergence, we find \(c_{1}\) and \(c_{2}\) stabilise so we typically stop updating \(c_{1}\) and \(c_{2}\) once the change in both values is below a tolerance.

To apply the above theorem to the proposed model (20), the key step will be to verify the nine conditions. First, we multiply (20) by the factor \(-|\nabla u|_{\varepsilon _{2}}\), obtaining the nonlinear PDE

$$\begin{aligned} \begin{aligned}&-\mu |\nabla u|_{\varepsilon _{2}}\nabla \cdot \bigg ( G(x,\nabla z) \frac{\nabla u}{|\nabla u|_{\varepsilon _{2}}} \bigg )\\&+ |\nabla u|_{\varepsilon _{2}}\bigg [ \lambda _{1} (z(x,y)-c_{1})^{2} -\lambda _{2} (z(x,y) - c_{2})^{2}\bigg .\\&\bigg . + \theta {\mathcal {D}}_{G}(x,y) + \alpha \nu '_{\varepsilon }(u) \bigg ]= 0 \end{aligned} \end{aligned}$$
(27)

where \(G(x,\nabla z) = g(|\nabla z(x,y)|)\). We can rewrite this as

$$\begin{aligned} F(x,u,p,X)= & {} -\mu \, \text {trace}\left( A(x,p)X \right) - \mu \langle \nabla G(x,\nabla z), p \rangle \nonumber \\&+ |p|k(u) + |p|f(x)=0 \end{aligned}$$
(28)

where \(f(x) = \lambda _{1}(z(x)-c_{1})^{2} - \lambda _{2}(z(x) - c_{2})^{2} + \theta {\mathcal {D}}_{G}(x)\), \(k(u)=\alpha \nu '_{\varepsilon }(u)\), \(p = (p_{1},p_{2}) = \nabla u\), X is the Hessian of u and

$$\begin{aligned} A(x,p) = \begin{bmatrix} G(x,\nabla z)\frac{p_{2}^{2}}{|p|^{2}}&- G(x,\nabla z)\frac{p_{1}p_{2}}{|p|^{2}} \\ - G(x,\nabla z)\frac{p_{1}p_{2}}{|p|^{2}}&G(x,\nabla z)\frac{p_{1}^{2}}{|p|^{2}} \\ \end{bmatrix} \end{aligned}$$
(29)

Theorem 4

(Theory for the Geodesic Model) The parabolic PDE \(\frac{\partial u}{\partial t} + F(t,x,u,Du,D^{2}u) = 0 \) with \(u_{0} = u(0,x)\in C({\overline{\varOmega }})\), F as defined in (28) and Neumann boundary conditions has a unique solution \(u=u(t,x)\) in \(C([0,T)\times {\overline{\varOmega }})\).

Proof

By Theorem 2, it remains to verify that F satisfies (C1)–(C2) and (I1)–(I7). We will show that each of the conditions is satisfied. Most are simple to show, the exception being (I7) which is non-trivial.

(C1): Equation (28) only has dependence on u in the term k(u); we therefore have a restriction on the choice of k, requiring \(k(v)\ge k(u)\) for \(v\ge u\). This is satisfied for \(k(u)=\alpha \nu '_{\varepsilon }(u)\) with \(\nu '_{\varepsilon }(u)\) defined as in (7).

(C2): For arbitrary \(s=(s_{1},s_{2})\in {\mathbb {R}}^{2}\) we find \(s^{T} A(x,p) s = G(x,\nabla z)\left( p_{2}s_{1} - p_{1}s_{2}\right) ^{2}/|p|^{2} \ge 0\), and so \( A(x,p)\ge 0\). It follows that \(-\text {trace}(A(x,p) X) \le -\text {trace}(A(x,p) Y)\) for \(Y\le X\); therefore, this condition is satisfied.

(I1): A(xp) is only singular at \(p=0\); however, it is continuous elsewhere and satisfies this condition.

(I2): In F, the only term which depends on u is \(k(u)=\alpha \nu '_{\varepsilon }(u)\). With \(\nu _{\varepsilon }(u)\) defined as in (7), in the limit \(\varepsilon \rightarrow 0\) its derivative \(\nu '_{\varepsilon }(u)\) is a step function taking the value \(-2\) on \((-\infty ,0)\), 0 on [0, 1] and 2 on \((1,\infty )\), so we can choose any constant \(\gamma < -2\). With \(\varepsilon \ne 0\), there is smoothing at the ends of the intervals; however, there is still a lower bound L on \(\nu '_{\varepsilon }(u)\) and we can choose any constant \(\gamma < L\).

(I3): F is continuous at \((x,u,0,0)\) for any \((x,u)\in \varOmega \times {\mathbb {R}}\) because \(F^{*}(x,u,0,0) = F_{*}(x,u,0,0) = 0\). Hence, this condition is satisfied.

(I4): The Euler–Lagrange equations give Neumann boundary conditions

$$\begin{aligned} B(x,\nabla u) = \frac{\partial u}{\partial {\varvec{n}}} ={\varvec{n}}\cdot \nabla u = \langle {\varvec{n}} , \nabla u \rangle = 0 \end{aligned}$$

on \(\partial \varOmega \), where \({\varvec{n}}\) is the outward unit normal vector, and we see that \(B(x,\nabla u)\in C^{1,1}\left( {\mathbb {R}}^{n}\times {\mathbb {R}}^{n}\backslash \{ 0 \}\right) \) and therefore this condition is satisfied.

(I5): By the definition above, \(B(x,\lambda \nabla u) = \langle {\varvec{n}} , \lambda \nabla u \rangle =\lambda \langle {\varvec{n}} , \nabla u \rangle = \lambda B(x,\nabla u) \). So this condition is satisfied.

(I6): As before, we can use the definition, \(\langle \varvec{n}(x), D_{p}B(x,p)\rangle = \langle \varvec{n}(x), \varvec{n}(x) \rangle = |\varvec{n}(x)|^{2}\). So we can choose \(\Theta = 1\) and the condition is satisfied.

(I7): This is the most involved condition to prove and uses many other results. For clarity of the overall paper, we postpone the proof to ‘Appendix A’.

5.2 Generalisation to Other Related Models

Theorems 2 and 4 can be generalised to other related models. This amounts to writing each model as a PDE of the form (28), where k(u) is monotone and f(x) and k(u) are bounded. This is summarised in the following corollary:

Corollary 5

Assume that \(c_1\) and \(c_2\) are fixed, and that the terms f(x) and k(u) are defined as follows for the related models:

  • Chan–Vese [11]: \(f(x) = f_{\mathrm{CV}}(x):=\lambda _{1} (z(x)-c_{1})^{2} - \lambda _{2}(z(x)-c_{2})^{2}\), \(k(u) = 0\).

  • Chan–Vese (Convex) [10]: \(f(x) = f_{\mathrm{CV}}(x)\), \(k(u) = \alpha \nu '_{\varepsilon }(u)\).

  • Geodesic active contours [8] and Gout et al. [25]: \(f(x) = 0\), \(k(u) = 0\).

  • Nguyen et al. [30]: \(f(x) = \alpha \left( P_{\text {B}}(x,y) - P_{\text {F}}(x,y)\right) + \left( 1-\alpha \right) \left( 1-2P(x,y)\right) \), \(k(u) = 0\).

  • Spencer–Chen (Convex) [38]: \(f(x) = f_{\mathrm{CV}}(x) + \theta {\mathcal {D}}_{E}(x)\), \(k(u) = \alpha \nu '_{\varepsilon }(u)\).

Then, if we define a PDE of the general form

$$\begin{aligned} -\mu \nabla \cdot \left( G(x)\frac{\nabla u}{|\nabla u|_{\varepsilon _{2}}} \right) + k(u) + f(x)= 0 \end{aligned}$$

with

  (i) Neumann boundary conditions \(\frac{\partial u}{\partial {\varvec{n}}} = 0\) (\({\varvec{n}}\) the outward unit normal vector);

  (ii) k(u) satisfying \(k(u)\ge k(v)\) if \(u\ge v\);

  (iii) k(u) and f(x) bounded; and

  (iv) \(G(x) = Id\) or \(G(x) = g(|\nabla z(x)|) = \frac{1}{1+|\nabla z(x)|^{2}}\),

we have a unique solution \(u\in C([0,T)\times {\overline{\varOmega }})\) for a given initialisation. Consequently, we conclude that all of the above models admit a unique solution.

Proof

The conditions (i)–(iv) hold for all of these models. All of these models use Neumann boundary conditions and a permitted G(x). The monotonicity of \(\nu '_{\varepsilon }(u)\) is discussed in the proof of (C1) for Theorem 4, and the boundedness of f(x) and k(u) is clear in all cases.

Remark 6

Theorem 4 and Corollary 5 also generalise to cases where \(G(x) = \frac{1}{1+\beta |\nabla z|^{2}}\) and to \(G(x) = {\mathcal {D}}(x) g(|\nabla z|)\) where \({\mathcal {D}}(x)\) is a distance function such as in [15,16,17, 38]. The proof is very similar to that shown in Sect. 5.1, relying on Lipschitz continuity of the function G(x).

Remark 7

We cannot apply the classical viscosity solution framework to the Rada–Chen model [36] as this is a non-local problem with \(k(u) = 2\nu \left( \int _{\varOmega }H_{\varepsilon }(u)\,\hbox {d}\varOmega -A_{1} \right) \).

6 Numerical Results

In this section, we will demonstrate the advantages of the Geodesic Model for selective image segmentation over related and previous models. Specifically, we shall compare

  • M1 —the Nguyen et al. [30] model;

  • M2 —the Rada and Chen [36] model;

  • M3 —the convex Spencer–Chen [38] model;

  • M4 —the convex Liu et al. [26] model;

  • M5 —the reformulated Rada–Chen model with geodesic distance penalty (see Remark 8);

  • M6 —the reformulated Liu et al. model with geodesic distance penalty (see Remark 8);

  • M7 —the proposed convex Geodesic Model (Algorithm 1).

Remark 8

(A note on M5 and M6) We include M5–M6 to test how the geodesic distance penalty term can improve M2 [36] and M4 [26]. These were obtained as follows:

Fig. 9 Test 1 setting: a Image 1; b Image 1 with marker and anti-marker sets shown in green and pink, respectively; c Test Image 2; d Image 2 with marker set shown (Color figure online)

Fig. 10 Visual comparison of M1–M7 results for Test Image 1. M1 segmented part of the object, M2–M4 failed to segment the object, M5 gave a reasonable (though not accurate) result, and M6 and M7 correctly segmented the object. a M1 (left to right): Test Image 1 with markers (red) and anti-markers (blue), foreground segmentation and background segmentation (we used published software; no parameter choice required). b M2: \(\lambda =1\), \(\gamma =10\). c M3: \(\lambda = 5\), \(\theta = 3\). d M4: \(\lambda = 1/4\). e M5: \(\lambda = 5, \gamma = 3, \theta = \frac{1}{10}\). f M6: \(\lambda = 15, \theta = 3\). g M7: \(\lambda = 10, \theta = 1\) (Color figure online)

  • we extend M2 to M5 simply by including the geodesic distance function \({\mathcal {D}}_{G}(x,y)\) in the functional;

  • we extend M4 to M6 with a minor reformulation to include data fitting terms. Specifically, the model M6 is

    $$\begin{aligned}&\min _{u,c_{1},c_{2}}\Big \{F_{CV\omega }(u,c_{1},c_{2})\nonumber \\&\quad = \int _{\varOmega }\omega ^{2}(x,y)\left[ \lambda _{1}(z(x,y)-c_{1})^{2}\right. \nonumber \\&\qquad - \left. \lambda _{2}(z(x,y)-c_{2})^{2} \right] u\,\hbox {d}\varOmega + \mu \int _{\varOmega }g(|\nabla z|))|\nabla u|\,\hbox {d}\varOmega \nonumber \\&\qquad + \theta \int _{\varOmega }{\mathcal {D}}_{G}(x,y)u \,\hbox {d}\varOmega + \alpha \int _{\varOmega }\nu _{\varepsilon }(u)\,\hbox {d}\varOmega \Big \} \end{aligned}$$
    (30)

    for nonnegative fixed parameters \(\mu ,\lambda _{1},\lambda _{2},\theta \), with \(\alpha \) and \(\nu _{\varepsilon }(u)\) as defined in (7) and \(\omega \) as defined for the convex Liu et al. model. This is a convex model and is the same as the proposed Geodesic Model M7 but with weighted intensity fitting terms.

Fig. 11 Visual comparison of M1–M7 results for Test Image 2. M1 segmented part of the object, M2–M4 failed to segment the object, and M5, M6 and M7 correctly segmented the object. (a) M1 (left to right): Test Image 2 with markers (red) and anti-markers (blue), foreground segmentation and background segmentation (we used published software; no parameter choice required). (b) M2: \(\lambda =1\), \(\gamma =15\). (c) M3: \(\lambda = 5\), \(\theta = 1\). (d) M4: \(\lambda = 1/8\). (e) M5: \(\lambda = 1\), \(\gamma = 15\), \(\theta = \frac{1}{10}\). (f) M6: \(\lambda = 15\), \(\theta = 1\). (g) M7: \(\lambda = 10\), \(\theta = 1\) (Color figure online)

Fig. 12 Parameter maps for M3, M6 and M7. (a) Original image. (b) Ground truth segmentation. (c) M3 TC values for various \(\lambda \) and \(\theta \) values. (d) M6 TC values for various \(\lambda \) and \(\theta \) values. (e) M7 TC values for various \(\lambda \) and \(\theta \) values

Fig. 13 Parameter maps for M3, M6 and M7. (a) Original image with marker set. (b) Ground truth segmentation. (c) M3 TC values for various \(\lambda \) and \(\theta \) values. (d) M6 TC values for various \(\lambda \) and \(\theta \) values. (e) M7 TC values for various \(\lambda \) and \(\theta \) values

Four sets of test results are shown below. In Test 1, we compare models M1–M6 with the proposed model M7 for two images which are hard to segment: the first is a CT scan from which we would like to segment the lower portion of the heart; the second is an MRI scan of a knee, from which we would like to segment the top of the tibia. See Fig. 9 for the test images and the marker sets used in the experiments. In Test 2, we review the sensitivity of the proposed model to its main parameters. In Test 3, we give several results achieved by the model using marker and anti-marker sets. In Test 4, we show the initialisation independence and marker independence of the Geodesic Model on real images.

For M7, we denote by \({\tilde{u}}\) the thresholded solution, i.e. the region where \(u > {\tilde{\gamma }}\) for some value \({\tilde{\gamma }}\in (0,1)\), which defines the segmented region. Although the threshold can be chosen arbitrarily in (0, 1) by the work of [10, Theorem 1] and [38], we usually take \({\tilde{\gamma }}=0.5\).
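For illustration, this thresholding step is a one-line operation in MATLAB; the following is a minimal sketch, assuming the converged solution u is stored as a 2-D array with values in [0, 1] (the variable names are ours, not from any released code):

    % Threshold the relaxed solution u at gamma = 0.5 to recover a binary
    % segmentation; any gamma in (0,1) is admissible by [10, Theorem 1].
    gamma   = 0.5;
    u_tilde = u > gamma;   % logical mask of the segmented region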

Quantitative comparisons. To measure the quality of a segmentation, we use the Tanimoto coefficient (TC) (or Jaccard coefficient [21]) defined by

$$\begin{aligned} \hbox {TC}({\tilde{u}},\hbox {GT}) = \frac{|{\tilde{u}}\cap \hbox {GT}|}{|{\tilde{u}}\cup \hbox {GT}|} \end{aligned}$$

where GT is the ‘ground truth’ segmentation and \({\tilde{u}}\) is the result from a particular model. This measure takes the value one for a segmentation which coincides perfectly with the ground truth and decreases towards zero as the quality of the segmentation worsens. In the other tests, where a ground truth is not available, we use visual plots.
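For two logical masks of equal size, the TC reduces to a ratio of pixel counts; a minimal MATLAB sketch (our own illustration, with hypothetical variables u_tilde and GT) is:

    % Tanimoto/Jaccard coefficient of two logical masks
    TC = nnz(u_tilde & GT) / nnz(u_tilde | GT);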

Parameter choices and implementation. We set \(\mu = 1\), \(\tau = 10^{-2}\) and vary \(\lambda = \lambda _{1} = \lambda _{2}\) and \(\theta \). Following [10], we let \(\alpha = ||\lambda _{1}(z-c_{1})^{2}-\lambda _{2}(z-c_{2})^{2}+\theta {\mathcal {D}}_{G}(x,y)||_{L^{\infty }}\). To select the marker points in MATLAB, we use roipoly for choosing a small number of points by clicking, and also freedraw, which allows the user to draw a path of marker points. The stopping criterion used is the dynamic residual falling below a given threshold; i.e. once \(||u^{k+1}-u^{k}||/||u^{k}|| < \hbox {tol}\), the iterations stop (we use \(\hbox {tol} = 10^{-6}\) in the tests shown).
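As an illustration, the weight \(\alpha \) and the stopping test amount to the following checks inside the iteration loop; this is a sketch under our own variable names (z, c1, c2, and D for \({\mathcal {D}}_{G}\)), not the authors' released code:

    % alpha as the L-infinity norm of the combined fitting term (cf. [10])
    r     = lambda1*(z - c1).^2 - lambda2*(z - c2).^2 + theta*D;
    alpha = max(abs(r(:)));

    % stopping criterion: relative change of u below tol
    tol = 1e-6;
    if norm(u_new(:) - u_old(:)) / norm(u_old(:)) < tol
        break;   % converged
    end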

Test 1—Comparison of models M1–M7. In this test, we give the segmentation results for models M1–M7 for the two challenging test images shown in Fig. 9; the marker and anti-marker sets used in the experiments are also shown in this figure. After extensive parameter tuning, the best final segmentation results for each of the models are shown in Figs. 10 and 11. For M1–M4, we obtain incorrect segmentations in both cases. The results of M2 and M4 are particularly interesting, as the former gives poor results for both images while the latter gives a reasonable result for Test Image 1 and a poor result for Test Image 2.

In the case of M2, the regularisation term includes both the edge detector and the distance penalty term [see (4)]. It is precisely this which permits the poor results in Figs. 10b and 11b: the edge detector is zero along the contour and the fitting terms (both intensity and area constraints) are satisfied there, while the distance term is not large enough to counteract their effect. In the case of M4, the distance term and edge detector are separated from the regulariser and are used to weight the Chan–Vese fitting terms [see (9)]. The poor segmentation in Fig. 11d is due to the Chan–Vese terms encouraging segmentation of bright objects (in this case), with the weighting \(\omega \) enforcing these terms at all edges in the image and near \({\mathcal {M}}\). In experiments, we find that M4 performs well when the object to be segmented is of approximately the highest or lowest intensity in the image; when this is not the case, results tend to be poor.

In both cases, models M5 and M6 (obtained by incorporating the geodesic distance penalty into M2 and M4, respectively) give much improved results over M2 and M4. The proposed Geodesic Model M7 gives an accurate segmentation in both cases. It remains to compare M5, M6 and M7. M5 is a non-convex model (and cannot be made convex [38]); therefore, its results are initialisation dependent. It also requires one more parameter than M6 and M7, as well as an accurate set \({\mathcal {M}}\) to give a reasonable area constraint in (4). These limitations lead us to conclude that M6 and M7 are better choices than M5. M6 has the same number of parameters as M7 and gives good results; it can be viewed as the model M7 with weighted intensity fitting terms [compare (18) and (30)]. Experimentally, we find that the same quality of segmentation can generally be achieved with both models; however, M6 is more parameter sensitive than M7. This can be seen in the parameter map in Fig. 12, with M7 giving an accurate result for a wider range of parameters than M6. To show the improvement of M7 over previous models, we also give an image in Fig. 13 which can be accurately segmented with M7, but for which the correct result is never achieved with M6 (or M3). We therefore find that M7 outperforms all of the other models tested (M1–M6).

Fig. 14 Three further test results obtained using our Geodesic Model M7, all with parameters \(\theta =5\), \(\lambda = 5\). The first row shows the original image with the marker set (plus anti-marker set), the second row shows the final segmentation result, and the final row shows the residual history

Fig. 15 Test 4 on M7's initialisations (\(\theta = 5\), \(\lambda = 5\)). (a) The original image with marker set indicated; (b) Initialisation 1, using the image itself; (c) segmentation result from Initialisation 1; (d) Initialisation 2, away from the object to be segmented; (e) segmentation result from Initialisation 2. Clearly, M7 gives the same result

Fig. 16 Test 4 on M7's marker set (\(\theta = 5\), \(\lambda = 3\)). Row 1 shows the original image with three marker points, the normalised geodesic distance map and the final segmentation result. Row 2 shows the original image with one marker point, the normalised geodesic distance map and the final segmentation result. Clearly, the second and third columns are the same for the different marker points; thus, M7 is robust

Remark 9

Models M2–M7 are coded in MATLAB and use exactly the same marker/anti-marker set. For model M1, the software of Nguyen et al. requires marker and anti-marker sets to be input via an interface; these have been drawn as closely as possible to match those used in the MATLAB code.

Table 1 Tanimoto coefficient for various \(\varepsilon _{2}\) values, segmenting the images in Figs. 12 and 13

Test 2—Test of M7's sensitivity to changes in its main parameters. In this test, we demonstrate that the proposed Geodesic Model is robust to changes in its main parameters. The main parameters in (20) are \(\mu , \lambda _{1},\lambda _{2},\theta \) and \(\varepsilon _{2}\). In all tests, we set \(\mu = 1\) (which simply rescales the other parameters) and set \(\lambda = \lambda _{1} = \lambda _{2}\). In the first example, in Fig. 12, we compare the TC value for various \(\lambda \) and \(\theta \) values for segmentation of a bone in a knee scan. We see that the segmentation is very good for a large range of \(\theta \) and \(\lambda \) values. For the second example, in Fig. 13, we show an image and marker set for which the Spencer–Chen model M3 and the modified Liu et al. model M6 cannot achieve the desired segmentation for any parameter range, but which the Geodesic Model attains for a vast range of parameters. The final example, in Table 1, compares the TC values for various \(\varepsilon _{2}\) values with fixed parameters \(\lambda = 2\) and \(\theta = 2\). We use the images and ground truths shown in Figs. 12 and 13: on the synthetic circles image, we obtain a perfect segmentation for all values of \(\varepsilon _{2}\) tested, and in the case of the knee segmentation the results are almost identical for any \(\varepsilon _{2} < 10^{-6}\), above which the quality slowly deteriorates.

Test 3—Further results from the Geodesic Model M7. In this test, we give some medical segmentation results obtained using the Geodesic Model M7; the results are shown in Fig. 14. In the final two columns, we use anti-markers to demonstrate how to overcome blurred and low-contrast edges in an image. These are challenging examples, and it is pleasing to see the correctly segmented results.

Test 4—Initialisation and marker set independence. In the first example, in Fig. 15, we see that the convex Geodesic Model M7 gives the same segmentation result regardless of initialisation, as expected of a convex model; hence, the model is flexible in implementation. In our experiments, we find that the algorithm converges to the final solution faster when we initialise using the polygon formed from the marker points rather than an arbitrary initialisation. In the second example, in Fig. 16, we show how Model M7 is robust to the number and location of the markers within the object to be segmented. The Euclidean distance term used in the Spencer–Chen model M3 is sensitive to the position and number of marker points; by contrast, regardless of where the markers are chosen, and how many are chosen, the geodesic distance map will be almost identical, as the sketch below illustrates.
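To make this concrete, a grey-weighted distance map from the marker set can be computed with standard tools and inspected as the markers vary. The following minimal MATLAB sketch assumes the Image Processing Toolbox function graydist, a normalised image z and a logical marker mask M; the weight construction and the value of beta are illustrative assumptions, not the authors' exact implementation:

    beta = 1000;                        % edge sensitivity (illustrative value)
    [zx, zy] = gradient(z);             % image gradients
    W = 1e-3 + beta*(zx.^2 + zy.^2);    % cheap to cross flat regions, costly at edges
    D = graydist(W, M);                 % grey-weighted distance from the marker mask
    D = D / max(D(:));                  % normalise to [0,1]

Moving the markers within a homogeneous object barely changes W along admissible paths, so D, and hence the penalty term, is nearly unchanged.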

7 Conclusions

In this paper, a new convex selective segmentation model has been proposed, using the geodesic distance from the marker set as a penalty term. This model gives results that are unachievable by alternative selective segmentation models and is also more robust to the choice of parameters. Adaptations to the penalty term have been discussed which make it robust to noisy images and blurry edges whilst also penalising objects far from the marker set (in a Euclidean distance sense). A proof of the existence and uniqueness of the viscosity solution to the PDE given by the Euler–Lagrange equation for the model has been given, which applies to an entire class of image segmentation PDEs. Finally, we have confirmed the advantages of using the geodesic distance with experimental results. Future work will look at further extensions of selective segmentation to other frameworks, such as those using high-order regularisers [13, 45], where only incomplete theories exist.