1 Introduction

Bottleneck identification constitutes an important task in network analysis, with applications including transportation planning and management (Berman & Handler, 1987), routing in computer networks (Shacham, 1992) and various bicriterion path problems (Hansen, 1980). The path-specific bottleneck, on a path between a source and a target node in a network, is defined as the edge with maximal cost or weight according to some criterion such as transfer time, load, commute time or distance. The goal of bottleneck identification and avoidance is then to find a path whose bottleneck is minimal. Thus, one may model bottleneck identification as the problem of computing the minimax edge over the given network/graph, i.e., the edge whose weight is the smallest attainable maximum over all paths between the source and target nodes. Equivalently, it can be formulated as a widest path problem or maximum capacity path problem (Pollack, 1960) where the edge weights have been negated.

In transportation, identifying bottlenecks is important for city governments and traffic managers to monitor and improve overall network performance. For example, when driving between two locations, identifying the bottleneck means finding the section of road that most restricts the traffic along the route. Similarly, consider a social network where information flows between two nodes (source and destination). We can then define the bottleneck as the link with the weakest connection along the paths between these two nodes, which may cause the information flow to fail. Bottlenecks here are thus related to the possibility of information loss.

The aforementioned formulations assume that the network or the graph is fully specified, i.e., that all the edge weights are fully known. However, in practice, the edge weights might not be known in advance or they might include some inherent uncertainty. In this paper, we tackle such situations by developing an online learning framework to learn the edge weight distributions of the underlying network, while solving the bottleneck identification problem for different problem instances.

For example, in the transportation scenario, city governments often have access to fleets of vehicles utilized for various municipal services. These may be used sequentially and continuously to gain knowledge about traffic flow from the environment, while it is still desirable to avoid causing unnecessary inconvenience and stress (Hennessy & Wiesenthal, 1999) to the employees operating the vehicles by excessively exploring congested paths. If care is taken to spread the costs over time, exploration may be performed continuously without a specific end time (i.e., the time horizon of the sequential decision making problem) being known in advance.

For this purpose, we view this as a multi-armed bandit (MAB) problem and focus on Thompson Sampling (TS) (Thompson, 1933), a method that suits probabilistic online learning well. Thompson Sampling is an early Bayesian method for addressing the trade-off between exploration and exploitation in sequential decision making problems. It balances these by randomly sampling available actions according to their posterior probability of being optimal, given prior beliefs and observations from previously selected actions. An action is more likely to be sampled if the posterior distribution over the expected reward of that action has high uncertainty (exploration) or high mean (exploitation).

The method has only recently been thoroughly evaluated through experimental studies (Chapelle & Li, 2011; Graepel et al., 2010) and theoretical analyses (Kaufmann et al., 2012; Agrawal et al., 2012; Russo & Van Roy, 2014), where it has been shown to be asymptotically optimal in the sense that it matches well-known lower bounds of these types of problems (Lai & Robbins, 1985). Furthermore, the algorithm does not assume knowledge of the time horizon, i.e., it is an anytime algorithm.

Among many other problem settings, Thompson Sampling has been adapted to online versions of combinatorial optimization problems with retained theoretical guarantees (Wang & Chen, 2018), where one application is to find shortest paths in graphs (Liu & Zhao, 2012; Gai et al., 2012; Zou et al., 2014; Åkerblom et al., 2020).

Another commonly used method for these problems is Upper Confidence Bound (UCB) (Auer, 2003), which utilizes optimism to balance exploration and exploitation. UCB has been adapted to combinatorial settings (Chen et al., 2013), and also exists in Bayesian variants (Kaufmann et al., 2012). Recently, a variant of UCB has been studied for bottleneck avoidance problems in a combinatorial pure exploration setting (Du et al., 2021). They consider a different problem setting and method than those we present in this paper, though their bottleneck reward function is similar to the one we use in our approximation method. The main difference between their setting and the standard combinatorial semi-bandit setting, in terms of how agents interact with the environment, is that instead of being restricted to selecting sets of actions respecting combinatorial constraints, agents are allowed to sequentially try individual arms to identify the best feasible solution to the combinatorial problem. This is not applicable to our setting, since we may not observe the feedback of individual edges without also traversing a path containing those edges, potentially incurring cost from some other edge on that path.

Moreover, the objective in a pure exploration problem is to find the best action as quickly as possible, with either a fixed time budget or confidence level, using agents dedicated for this task. While identifying the best path is desirable in our problem setting as well, we are specifically interested in the case where existing agents are utilized and where using them exclusively for exploration is too costly. For that reason, we focus on anytime methods capable of distributing exploratory actions over time.

In this paper, we model the online bottleneck identification task as a stochastic combinatorial semi-bandit problem, for which we develop a combinatorial variant of Thompson Sampling. We then derive an upper bound on the corresponding Bayesian regret that is tight up to a polylogarithmic factor, which is consistent with the existing lower bounds for combinatorial semi-bandit problems. We face the issue of computational intractability with the exact problem formulation. We thus propose an approximation scheme, along with a theoretical analysis of its properties. Finally, we experimentally investigate the performance of the proposed method on directed and undirected real-world networks from transport and collaboration domains.

2 Bottleneck identification model

In this section, we first introduce the bottleneck identification problem over a fixed network and then describe a probabilistic model to be used in stochastic and uncertain situations.

2.1 Bottleneck identification over a network

We model a network by a graph \(G = (V, E, w)\), where V denotes the set of vertices (nodes) and each \(e = (u,v) \in E\) indicates an edge between vertices u and v where \(u,v \in V\) and \(u \ne v\). Moreover, \(w: E\rightarrow \mathbb {R}\) is a weight function defined for each edge of the graph, where for convenience, we use \(w_e\) to denote the weight of edge e. If G is directed, the pair \((u,v)\) is ordered, otherwise, it is not (i.e., \((u,v)\equiv (v,u)\) for undirected graphs). A path p from vertex u (source) to vertex v (target) over G is a sequence of vertices \(\langle v_1, v_2, \dots , v_{k-1}, v_k \rangle\) where \(v_1=u\), \(v_k=v\) and \((v_i,v_{i+1}) \in E, \forall i \in [1, k-1]\). It can also be seen as a sequence of edges \(\langle (v_1, v_2),(v_2,v_3), \dots ,(v_{k-1}, v_k) \rangle\).

As previously mentioned, a bottleneck on a path p can be described as an edge with a maximal weight on that path. To find the smallest feasible bottleneck edge between the source node u and the target node v, we consider all the paths between them. For each path, we pick an edge with a maximal weight, to obtain all path-specific bottleneck edges. We then identify the smallest path-specific bottleneck edge in order to find the best feasible bottleneck edge, i.e., such that bottleneck edges with higher weights are avoided.

Therefore, given graph G, the bottleneck edge between \(u \in V\) and \(v \in V\) can be identified via extracting the minimax edge between them. With \(P_{u,v}\) denoting the set of all possible paths from u to v over G, the bottleneck weight (incurred by the bottleneck edge) can be computed by

$$\begin{aligned} b(u,v;G) = \min _{p \in P_{u,v}} \max _{e\in p} w_e . \end{aligned}$$
(1)

The quantity in Eq. 1 satisfies the (ultra) metric properties under some basic assumptions on the edge weights such as symmetry and nonnegativity. Hence, it is sometimes used as a proper distance measure to extract manifolds and elongated clusters in a non-parametric way (Haghir Chehreghani, 2020; Kim & Choi, 2007).

However, in our setting, such conditions do not need to be fulfilled by the edge weights. In general, we tolerate positive as well as negative edge weights, and we assume the graph might be directed, i.e., the edge weights are not necessarily symmetric. Therefore, despite the absence of (ultra) metric properties, the concept of minimax edges is still relevant for bottleneck identification.

To compute the minimax edge, one does not need to investigate all possible paths between the source and target nodes, which might be computationally infeasible. As shown by Hu (1961), the minimax paths (and thereby the minimax edges) of an arbitrary undirected graph coincide with those over any minimum spanning tree (MST) computed for that graph. This equivalence simplifies the calculation of minimax edges, as there is only one path between every two vertices over an MST, whose maximal edge weight yields the minimax edge, i.e., the desired bottleneck.

For directed graphs, an MST might not represent the minimax edges in a straightforward manner. Hence, we instead rely on a modification (Berman & Handler, 1987) of Dijkstra’s algorithm (Dijkstra, 1959) to extract minimax paths rather than the shortest paths.
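To make the procedure concrete, the following is a minimal Python sketch of such a modified Dijkstra routine, where the tentative label of a node is the smallest bottleneck value found so far instead of the shortest distance. The adjacency-list representation and the function name are our own illustrative choices, and recovering the path itself (rather than its bottleneck cost) would additionally require predecessor tracking.

```python
import heapq

def minimax_path_cost(adj, source, target):
    """Bottleneck (minimax path) cost from source to target.

    adj: dict mapping each node to a list of (neighbor, edge_weight) pairs,
    representing a (possibly directed) graph. Returns the smallest achievable
    maximum edge weight over all source-target paths, or float('inf') if the
    target is unreachable.
    """
    best = {source: float('-inf')}            # best known bottleneck value per node
    heap = [(float('-inf'), source)]          # (bottleneck so far, node)
    while heap:
        b, u = heapq.heappop(heap)
        if u == target:
            return b
        if b > best.get(u, float('inf')):
            continue                          # stale queue entry
        for v, w in adj.get(u, []):
            nb = max(b, w)                    # bottleneck of extending the path by (u, v)
            if nb < best.get(v, float('inf')):
                best[v] = nb
                heapq.heappush(heap, (nb, v))
    return float('inf')

# Example: route s-a-t has bottleneck 5, route s-b-t has bottleneck 3.
adj = {'s': [('a', 5), ('b', 3)], 'a': [('t', 1)], 'b': [('t', 2)]}
print(minimax_path_cost(adj, 's', 't'))       # -> 3
```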

2.2 Probabilistic model for bottleneck identification

In this paper, we study bottleneck identification in uncertain and stochastic settings. Therefore, instead of considering the weights \(w_e\) for \(e \in E\) to be fixed, we view them as stochastic with fixed, albeit unknown, distribution parameters. Additionally, we assume that the weight of each edge follows a Gaussian distribution with known and finite variance. The Gaussian edge weight assumption is common for many important problem settings, like minimization of travel time (Seshadri & Srinivasan, 2010) or energy consumption (Åkerblom et al., 2020) in road networks. Furthermore, we assume that all edge weights are mutually independent. Hence,

$$\begin{aligned} w_e \sim \mathcal {N}(\theta _e^*, \sigma _e^2) , \end{aligned}$$

where \(\theta _e^*\) denotes the unknown mean of edge e, and \(\sigma _e^2\) is the known variance. To reduce cumbersome notation in the proofs, since the variance is assumed to be finite, we let \(\sigma _e^2 \le 1\) (by scaling the edge weight distributions). However, we emphasize that we do not assume that \(w_e\) and \(\theta _e^*\) are bounded or non-negative.

It is convenient to be able to make use of prior knowledge in online learning problems where the action space is large, which motivates a Bayesian approach where we assume that the unknown mean \(\theta ^*_e\) is sampled from a known prior distribution:

$$\begin{aligned} \theta _e^* \sim \mathcal {N}(\mu _{e,0}, \varsigma _{e,0}^2) . \end{aligned}$$

We use a Gaussian prior for \(\theta _e^*\) since it is conjugate to the Gaussian likelihood and allows for efficient recursive updates of posterior parameters upon a new weight observation \(w_{e,t}\) at time t:

$$\begin{aligned}&\varsigma _{e,t+1}^2 \leftarrow \left( \frac{1}{\varsigma _{e,t}^2} + \frac{1}{\sigma ^2_e}\right) ^{-1} , \end{aligned}$$
(2)
$$\begin{aligned}&\mu _{e,t+1} \leftarrow \varsigma _{e,t+1}^2 \left( \frac{\mu _{e,t}}{\varsigma _{e,t}^2} + \frac{w_{e,t}}{\sigma ^2_e}\right) . \end{aligned}$$
(3)
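As an illustration, the recursive update of Eqs. 2 and 3 for a single edge can be written as a small Python function (the function name and interface are our own):

```python
def update_gaussian_posterior(mu, var, w, noise_var):
    """One conjugate update (Eqs. 2 and 3) of the belief over theta_e for a single edge.

    mu, var:   current posterior mean and variance of theta_e
    w:         newly observed edge weight w_{e,t}
    noise_var: known observation variance sigma_e^2
    """
    new_var = 1.0 / (1.0 / var + 1.0 / noise_var)          # Eq. 2
    new_mu = new_var * (mu / var + w / noise_var)          # Eq. 3
    return new_mu, new_var

# Example: prior N(1.0, 0.4^2), one observation w = 1.3 with noise variance 0.4^2.
print(update_gaussian_posterior(1.0, 0.16, 1.3, 0.16))     # -> (1.15, 0.08)
```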

Since our long-term objective is to find a path which minimizes the expected maximum edge weight along that path, we need a framework to sequentially select paths to update these parameters and learn enough information about the edge weight distributions.

The assumptions in this section might seem restrictive, and indeed, when the edge weights represent e.g., traffic congestion in a road network, it is reasonable to believe that edges are not independent, especially for neighboring road segments. There are ways of extending this setting to capture such dependencies, while retaining similar regret guarantees for the studied methods. Such extensions include the contextual setting, where expected edge weights are assumed to follow parameterized functions of contextual features (e.g., time-of-day, local ambient temperature, precipitation) revealed to the agent in each time step, before each action is taken. We leave such extensions to future work, though we note that the proofs in this work may be extended in a straightforward manner, analogous to the analysis of linear contextual Thompson Sampling by Russo and Van Roy (2014). Similarly, Thompson Sampling may be extended to the case where both the mean and variance are unknown, by assignment of a joint prior distribution over the parameters (Riquelme et al., 2018).

3 Online bottleneck learning framework

Consider a stochastic combinatorial semi-bandit problem (Cesa-Bianchi & Lugosi, 2012) with time horizon T, formulated as a problem of cost minimization rather than reward maximization. There is a set of base arms \(\mathcal {A}\) (where we let \(d := \vert \mathcal {A} \vert\)) from which we may, at each time step \(t \in [T]\), select a subset (or super arm) \(\varvec{a}_t \subseteq \mathcal {A}\). The selection is further restricted such that \(\varvec{a}_t \in \mathcal {I} \subseteq 2^\mathcal {A}\), where \(\mathcal {I}\) is called the set of feasible super arms.

Upon selection of \(\varvec{a}_t\), the environment reveals a feedback \(X_{i,t}\) drawn from some fixed and unknown distribution for each base arm \(i \in \varvec{a}_t\) (i.e., semi-bandit feedback). Furthermore, we then receive a super arm cost from the environment, \(c(\varvec{a}_t) := \max _{i \in \varvec{a}_t} X_{i,t}\), i.e., the maximum of all base arm feedback for the selected super arm and the current time step. The objective is to select super arms \(\varvec{a}_t\) to minimize \(\mathbb {E}\left[ \sum _{t=1}^{T} c(\varvec{a}_t)\right]\). This objective is typically reformulated as an equivalent regret minimization problem, where the (expected) regret is defined as

$$\begin{aligned} \text {Regret}(T) := \left( \sum _{t\in [T]} \mathbb {E}\left[ c(\varvec{a}_t)\right] \right) - T \cdot \min _{\varvec{a} \in \mathcal {I}} \mathbb {E}\left[ c(\varvec{a})\right] . \end{aligned}$$
(4)

To connect this to the probabilistic bottleneck identification model introduced in the previous section, we let each edge \(e \in E\) in the graph G correspond to exactly one base arm \(i \in \mathcal {A}\). For the online minimax path problem, the feasible set of super arms is then the set of all admissible paths in the graph, where the paths are directed or undirected depending on the type of graph. The feedback of each base arm i is simply the Gaussian weight of the matching edge e, with known variance \(\sigma _i^2\) and unknown mean \(\theta ^*_i\).

We denote the expected cost of a super arm by \(f_{\varvec{\theta }}(\varvec{a})\), where \(\varvec{\theta }\) is a mean vector and \(f_{\varvec{\theta }}(\varvec{a}) := \mathbb {E}\left[ \max _{i \in \varvec{a}} C_i\right]\) such that \(C_i \sim \mathcal {N}(\theta _i, \sigma _i^2)\). For Bayesian bandit settings and algorithms, it is common to consider the notion of Bayesian regret, with an additional expectation over problem instances drawn from the prior distribution (where we denote by \(\lambda\) the prior distribution over mean vectors \(\varvec{\theta }^*\)):

$$\begin{aligned} \text {BayesRegret}(T) := \mathbb {E}_{\varvec{\theta }^* \sim \lambda } \left[ \mathbb {E}\left[ \left( \sum _{t \in [T]} f_{\varvec{\theta }^*} (\varvec{a}_t)\right) - T \cdot \min _{\varvec{a} \in \mathcal {I}} f_{\varvec{\theta }^*} (\varvec{a}) \,\bigg \vert \,\varvec{\theta }^*\right] \right] . \end{aligned}$$
(5)

3.1 Thompson sampling with exact objective

It is not sufficient to find the super arm \(\varvec{a}\) which minimizes \(f_{\varvec{\mu }_t}(\varvec{a})\) in each time step t, since a strategy which is greedy with respect to possibly imperfect current cost estimates may converge to a sub-optimal super arm. Thompson Sampling is one of several methods developed to address the trade-off between exploration and exploitation in stochastic online learning problems. It has been shown to exhibit good performance in many formulations, e.g., linear contextual bandits and combinatorial semi-bandits.

The steps performed in each time step t by Thompson Sampling, adapted to our setting, are described in Algorithm 1. First, a mean vector \(\tilde{\varvec{\theta }}\) is sampled from the current posterior distribution (or from the prior in the first time step). Then, a super arm \(\varvec{a}_t\) is selected which minimizes the expected cost \(f_{\tilde{\varvec{\theta }}} (\varvec{a}_t)\) with respect to the sampled mean vector. These first two steps are equivalent to selecting the arm according to the posterior probability of it being optimal. In combinatorial semi-bandit problems, the method of finding the best super arm according to the sampled parameters is often called an oracle.

When the super arm \(\varvec{a}_t\) is played, the environment reveals the feedback \(X_{i,t}\) if and only if \(i \in \varvec{a}_t\), which is a property called semi-bandit feedback. Finally, these observations are used to update the posterior distribution parameters.

Algorithm 1: Thompson Sampling with exact objective
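Since the pseudocode is shown as a figure, the following Python sketch reconstructs the loop described above under explicit assumptions: the feasible paths are enumerated directly (practical only for very small graphs), the expected maximum \(f_{\tilde{\varvec{\theta }}}(\varvec{a})\) is estimated by Monte Carlo (exact expressions are only available for small super arms, cf. Section 3.3), and all function and variable names are illustrative.

```python
import numpy as np

def thompson_sampling_exact(paths, mu, var, noise_var, env_means, T, n_mc=1000, seed=0):
    """Hedged sketch of the Thompson Sampling loop with the exact objective.

    paths:      explicit list of feasible super arms (paths), each a list of
                edge indices -- enumeration is only practical for small graphs
    mu, var:    arrays of posterior means / variances per edge (priors at t = 0)
    noise_var:  array of known observation variances sigma_i^2
    env_means:  hidden true means theta_i^*, used here only to simulate feedback
    """
    rng = np.random.default_rng(seed)
    mu, var = np.array(mu, dtype=float), np.array(var, dtype=float)
    for t in range(T):
        theta = rng.normal(mu, np.sqrt(var))                      # sample a mean vector from the posterior
        costs = []
        for a in paths:                                           # oracle: minimize the expected maximum
            draws = rng.normal(theta[a], np.sqrt(noise_var[a]), size=(n_mc, len(a)))
            costs.append(draws.max(axis=1).mean())                # Monte Carlo estimate of f_theta(a)
        a_t = paths[int(np.argmin(costs))]
        x = rng.normal(env_means[a_t], np.sqrt(noise_var[a_t]))   # semi-bandit feedback for played edges
        for i, w in zip(a_t, x):                                  # conjugate posterior updates (Eqs. 2-3)
            new_var = 1.0 / (1.0 / var[i] + 1.0 / noise_var[i])
            mu[i] = new_var * (mu[i] / var[i] + w / noise_var[i])
            var[i] = new_var
    return mu, var
```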

3.2 Regret analysis of Thompson sampling for minimax paths

We analyze the Bayesian regret of Thompson Sampling using the technique introduced by Russo and Van Roy (2014) for general bandit problems and further elaborated by Slivkins (2019), carefully adapting it to our problem setting. This technique was originally devised to enable convenient conversion of existing UCB regret analyses to Thompson Sampling, but can also be applied to new TS applications. Here, we provide a novel extension to combinatorial bandits with minimax super-arm cost functions, which includes establishing concentration properties for the mean estimates of the non-linear super-arm costs. In the rest of this section, we outline the most important steps of the proof of Theorem 1, leaving technical details to the supplementary material (Online Resource 1). In the analysis, for convenience, we assume that \(T \ge d\).

Theorem 1

The Bayesian regret of Algorithm 1 is \(\mathcal {O}(d\sqrt{T \log T})\).

We initially define a sequence of upper and lower confidence bounds, for each time step t:

$$\begin{aligned}&U_t(\varvec{a}) := f_{\hat{\varvec{\theta }}_{t-1}}(\varvec{a}) + \max _{i \in \varvec{a}} \sqrt{\frac{32 \log T}{N_{t-1}(i)}} \\&L_t(\varvec{a}) := f_{\hat{\varvec{\theta }}_{t-1}}(\varvec{a}) - \max _{i \in \varvec{a}} \sqrt{\frac{32 \log T}{N_{t-1}(i)}} \end{aligned}$$

where \(\hat{\theta }_{i,t}\) is the average feedback of base arm \(i \in \mathcal {A}\) until time t, \(\hat{\varvec{\theta }}_{t}\) is the average feedback vector for all arms in \(\mathcal {A}\), and \(N_t(i)\) is the number of times base arm \(i \in \mathcal {A}\) has been played as part of a super arm until time t.

Lemma 2

For Algorithm 1, we have that:

$$\begin{aligned}&\text {BayesRegret}(T) = \\&\sum _{t \in [T]} \mathbb {E}\left[ U_t(\varvec{a}_t) - L_t(\varvec{a}_t)\right] + \sum _{t \in [T]} \mathbb {E}\left[ f_{\varvec{\theta }^*}(\varvec{a}_t) - U_t (\varvec{a}_t) \right] + \sum _{t \in [T]} \mathbb {E}\left[ L_t (\varvec{a}^*) - f_{\varvec{\theta }^*}(\varvec{a}^*) \right] . \end{aligned}$$

This Bayesian regret decomposition is a direct application of Proposition 1 of Russo and Van Roy (2014). It utilizes the fact that given the history of selected arms and received feedback until time t, the played super arm \(\varvec{a}_t\) and the best possible super arm \(\varvec{a}^* := \arg \min _{\varvec{a} \in \mathcal {I}} f_{\varvec{\theta }^*}(\varvec{a})\) are identically distributed under Thompson Sampling. Furthermore, also given the history, \(U_t(\varvec{a})\) and \(L_t(\varvec{a})\) are deterministic functions of the super arm \(\varvec{a}\). This enables the decomposition of the regret into terms of the expected confidence width, the expected overestimation of the super arm with least mean cost, and the expected underestimation of the selected super arm. By showing that \(f_{\varvec{\theta }^*}(\varvec{a}) \in [L_t(\varvec{a}), U_t(\varvec{a})]\) with high probability, we can bound the last two of these terms.

Lemma 3

For any \(t \in [T]\), we have that \(\mathbb {E}\left[ f_{\varvec{\theta }^*}(\varvec{a}_t) - U_t (\varvec{a}_t) \right] \le \frac{4d}{T}\) and \(\mathbb {E}\left[ L_t (\varvec{a}^*) - f_{\varvec{\theta }^*}(\varvec{a}^*) \right] \le \frac{4d}{T}\).

Both terms are bounded in the same way, for which we need a few intermediary results. Focusing on the underestimation of the played super arm, we can see that:

$$\begin{aligned}&\mathbb {E}\left[ f_{\varvec{\theta }^*}(\varvec{a}_t) - U_t (\varvec{a}_t) \right] \\&= \mathbb {E}\left[ f_{\varvec{\theta }^*}(\varvec{a}_t) - f_{\hat{\varvec{\theta }}_{t-1}}(\varvec{a}_t) - \max _{i \in \varvec{a}_t} \sqrt{\frac{32 \log T}{N_{t-1}(i)}}\right] . \end{aligned}$$

First, in Lemma 4, the difference between the true mean cost \(f_{\varvec{\theta }^*}(\varvec{a})\) of a super arm \(\varvec{a}\) and the corresponding estimated mean \(f_{\hat{\varvec{\theta }}}(\varvec{a})\) is bounded. The resulting upper bound is the maximum of the differences of the true and estimated means of each individual base arm feedback, such that:

Lemma 4

For any super arm \(\varvec{a} \in \mathcal {I}\) and time step \(t \in [T]\), we have that \(\vert f_{\varvec{\theta }^*}(\varvec{a}) - f_{\hat{\varvec{\theta }}_{t-1}}(\varvec{a}) \vert \le 2 \max _{i\in \varvec{a}} \vert \theta ^*_i - \hat{\theta }_{i,t-1} \vert\).

This is achieved by decomposing the absolute value into a sum of the positive and negative portions of the difference, then bounding each individually. Focusing on the positive portion by assuming that \(f_{\varvec{\theta }^*}(\varvec{a}) \ge f_{\hat{\varvec{\theta }}_{t-1}}(\varvec{a})\), and letting \(Z_i \sim \mathcal {N}(\hat{\theta }_{i,t-1}, \sigma ^2_i)\), \(Y_i \sim \mathcal {N}(\theta ^*_{i}, \sigma ^2_i)\), \(\delta _{i,t-1} := \theta ^*_i - \hat{\theta }_{i,t-1}\) and \(Q_i := Y_i - \delta _{i,t-1}\), for \(i \in \varvec{a}\), we can see that:

$$\begin{aligned}&f_{\varvec{\theta }^*}(\varvec{a}) - f_{\hat{\varvec{\theta }}_{t-1}}(\varvec{a})\\&= \mathbb {E}\left[ \max _{i \in \varvec{a}} Y_i\right] - \mathbb {E}\left[ \max _{i \in \varvec{a}} Z_i\right] \\&= \mathbb {E}\left[ \max _{i \in \varvec{a}} (Q_i + \delta _{i,t-1})\right] - \mathbb {E}\left[ \max _{i \in \varvec{a}} Z_i\right] \\&\le \mathbb {E}\left[ \max _{i \in \varvec{a}} Q_i\right] + \max _{i \in \varvec{a}} \delta _{i,t-1} - \mathbb {E}\left[ \max _{i \in \varvec{a}} Z_i\right] \\&= \max _{i \in \varvec{a}} \delta _{i,t-1} \;\;. \end{aligned}$$

The negative portion is bounded in the same way, directly leading to the result of Lemma 4. With this result, we can proceed with Lemma 3, where we let \([x]^+ := \max (0, x)\):

$$\begin{aligned}&\mathbb {E}\left[ 2\max _{i \in \varvec{a}_t} \vert \delta _{i,t-1} \vert - \max _{i \in \varvec{a}_t} \sqrt{\frac{32 \log T}{N_{t-1}(i)}}\right] \nonumber \\&\le \mathbb {E}\left[ 2\max _{i \in \varvec{a}_t} \left[ \vert \delta _{i,t-1} \vert - \sqrt{\frac{8 \log T}{N_{t-1}(i)}}\right] ^+\right] \nonumber \\&\le 2 \sum _{i \in \mathcal {A}} \mathbb {E}\left[ \left[ \vert \delta _{i,t-1} \vert - \sqrt{\frac{8 \log T}{N_{t-1}(i)}}\right] ^+\right] \nonumber \\&= 2 \sum _{i \in \mathcal {A}} \Bigg ( \text {Pr}\left\{ \vert \delta _{i,t-1} \vert > \sqrt{\frac{8 \log T}{N_{t-1}(i)}}\right\} \cdot \end{aligned}$$
(6)
$$\begin{aligned}&\;\;\;\;\mathbb {E}\left[ \vert \delta _{i,t-1} \vert - \sqrt{\frac{8 \log T}{N_{t-1}(i)}} \;\;\bigg \vert \;\; \vert \delta _{i,t-1} \vert > \sqrt{\frac{8 \log T}{N_{t-1}(i)}}\right] \Bigg ) \end{aligned}$$
(7)

The probability in Eq. 6 is of the event that the difference between the estimated and true means of an arm i exceeds the confidence radius \(\sqrt{8 \log T / N_{t-1}(i)}\), while Eq. 7 is the expected difference conditional on that event. We bound Eq. 6 with Lemma 5 and Eq. 7 with Lemma 6.

Lemma 5

\(\text {Pr}\left\{ \forall t \in [T]\; \forall i \in \mathcal {A},\;\; \vert \delta _{i,t-1} \vert \le \sqrt{\frac{8 \log T}{N_{t-1}(i)}}\right\} \ge 1 - \frac{2}{T}\).

To prove Lemma 5, it is sufficient to show that the difference \(\delta _{i,t-1}\) is small for all base arms \(i \in \mathcal {A}\) with high probability, which we accomplish using a standard concentration analysis through application of Hoeffding’s inequality and union bounds.

Lemma 6

For any \(t \in [T]\) and \(i \in \mathcal {A}\), we have

$$\begin{aligned} \mathbb {E}\left[ \vert \delta _{i,t-1} \vert - \sqrt{\frac{8 \log T}{N_{t-1}(i)}} \;\;\bigg \vert \;\; \vert \delta _{i,t-1} \vert > \sqrt{\frac{8 \log T}{N_{t-1}(i)}}\right] \le 1 . \end{aligned}$$

Though the rewards are unbounded, this expectation can be bounded by utilizing the fact that the mean of a truncated Gaussian distribution is increasing in the mean of the distribution before truncation, by Theorem 2 of Horrace (2015). We can see that:

$$\begin{aligned}&\mathbb {E}\left[ \vert \delta _{i,t-1} \vert - \sqrt{\frac{8 \log T}{N_{t-1}(i)}} \;\;\bigg \vert \;\; \vert \delta _{i,t-1} \vert> \sqrt{\frac{8 \log T}{N_{t-1}(i)}}\right] \\&= \mathbb {E}\left[ \delta _{i,t-1} - \sqrt{\frac{8 \log T}{N_{t-1}(i)}} \;\;\bigg \vert \;\; \delta _{i,t-1} - \sqrt{\frac{8 \log T}{N_{t-1}(i)}}> 0\right] \\&\le \mathbb {E}\left[ \delta _{i,t-1} \;\;\bigg \vert \;\; \delta _{i,t-1} > 0\right] . \end{aligned}$$

We know that \(\delta _{i,t-1}\) is zero-mean Gaussian with variance at most one, hence \(\mathbb {E}\left[ \delta _{i,t-1} \;\;\bigg \vert \;\; \delta _{i,t-1} > 0\right] \le 1\).

With the result from Lemma 3, the last two terms of the regret decomposition in Lemma 2 are bounded by constants in T. Focusing on the remaining term, we just need to show that \(\sum _{t \in [T]} \mathbb {E}\left[ U_t(\varvec{a}_t) - L_t(\varvec{a}_t)\right] \le \mathcal {O} (d \sqrt{T \log T})\) to prove Theorem 1:

$$\begin{aligned}&\sum _{t \in [T]} \mathbb {E}\left[ U_t(\varvec{a}_t) - L_t(\varvec{a}_t)\right] \\&= \sqrt{128 \log T} \sum _{t\in [T]} \mathbb {E}\left[ \max _{i\in \varvec{a}_t} \frac{1}{\sqrt{N_{t-1}(i)}}\right] \\&\le \sqrt{128 \log T} \sum _{t\in [T]} \mathbb {E}\left[ \sum _{i\in \varvec{a}_t} \frac{1}{\sqrt{N_{t-1}(i)}}\right] \\&= \sqrt{128 \log T} \sum _{i\in \mathcal {A}} \mathbb {E}\left[ \sum _{t:i\in \varvec{a}_t} \frac{1}{\sqrt{N_{t-1}(i)}}\right] \\&\le \sqrt{128 \log T} \sum _{i\in \mathcal {A}} \mathbb {E}\left[ 2 \sqrt{N_{T}(i)}\right] \\&\le \sqrt{128 \log T} \cdot \mathbb {E}\left[ 2 \sqrt{d \sum _{i\in \mathcal {A}} N_{T}(i)}\right] \\&\le \sqrt{128 \log T} \cdot \mathbb {E}\left[ 2 \sqrt{d^2 T}\right] \\&= 2 d \sqrt{128 T \log T} \\&= \mathcal {O} (d \sqrt{T \log T}) . \end{aligned}$$

We note that the final upper bound is tight up to a polylogarithmic factor, according to existing lower bounds for combinatorial semi-bandit problems (Kveton et al., 2015).

3.3 Thompson sampling with approximate objective

Unfortunately, exact expressions for computing the expected maximum of Gaussian random variables only exist when the variables are few. In other words, we cannot compute \(f_{\varvec{\theta }} (\varvec{a})\) exactly for a super arm \(\varvec{a}\) containing many base arms, necessitating some form of approximation approach. While it is possible to approximate \(f_{\varvec{\theta }} (\varvec{a})\) through e.g., Monte Carlo simulations, we want to be able to perform the cost minimization step using a computationally efficient oracle.

We note that, even with the capability to exactly compute \(f_{\varvec{\theta }} (\varvec{a})\), it would not be feasible to solve the minimization problem in line 6 of Algorithm 1. The expected cost \(f_{\varvec{\theta }} (\varvec{a})\) of a super arm \(\varvec{a}\) (i.e., the expected maximum base arm feedback) depends not only on the individual expected values of the base arm feedback distributions, but also on the shape of the joint distribution of all base arms in \(\varvec{a}\). Due to this fact, the stochastic version of the minimization problem lacks the property of optimal substructure (i.e., an optimal path does not necessarily consist of optimal sub-paths). For the deterministic version of the problem, as defined in Eq. 1, the presence of this property enables the usage of computationally efficient dynamic programming strategies, like Dijkstra’s algorithm, which is consequently infeasible with the objective in Algorithm 1.

Therefore, we propose the approximation method outlined in Algorithm 2, where the minimization step of line 6 has been modified from Algorithm 1 with an alternative super arm cost function \(\tilde{f}_{\tilde{\varvec{\theta }}} (\varvec{a}) := \max _{i \in \varvec{a}} \tilde{\theta }_i\). Switching objectives, from finding the super arm which minimizes the expected maximum base arm feedback, to instead minimize the maximum expected feedback, has the benefit of allowing us to utilize the efficient deterministic minimax path algorithms introduced earlier for both directed and undirected graphs. For directed graphs, the modified version of Dijkstra’s algorithm by Berman and Handler (1987) has a worst-case running time of \(\mathcal {O}(\vert E \vert + \vert V \vert \log \vert V \vert )\) with an efficient implementation using Fibonacci heaps (Fredman & Tarjan, 1987). Similarly, for undirected graphs, finding an MST (and subsequently a minimax path) can be achieved using Prim’s algorithm (Prim, 1957), with the same running time of \(\mathcal {O}(\vert E \vert + \vert V \vert \log \vert V \vert )\) if Fibonacci heaps are used. The other operations performed for each \(t \in [T]\) in Algorithm 2 (i.e., the posterior samples and updates) have a combined running time of, at worst, \(\mathcal {O}(\vert E \vert )\). The same oracles are also used for the baseline algorithms evaluated in Sections 4.1 and 4.2, with comparable running times.

It is possible to use alternative notions of regret to evaluate combinatorial bandit algorithms with approximate oracles (Chen et al., 2013, 2016). For our experimental evaluation of Algorithm 2, we introduce the following definition of approximate regret:

$$\begin{aligned} \text {ApproxRegret}(T) := \left( \sum _{t \in [T]} \tilde{f}_{\varvec{\theta }^*} (\varvec{a}_t)\right) - T \cdot \min _{\varvec{a} \in \mathcal {I}} \tilde{f}_{\varvec{\theta }^*} (\varvec{a}) . \end{aligned}$$

An alternative Bayesian bandit algorithm which can be used with the alternative objective is BayesUCB (Kaufmann et al., 2012), which we use as a baseline for our experiments. Like Thompson Sampling, BayesUCB has been adapted to combinatorial semi-bandit settings (Nuara et al., 2018; Åkerblom et al., 2020). Whereas Thompson Sampling in Algorithm 2 encourages exploration by applying the oracle to parameters sampled from the posterior distribution, with BayesUCB, the oracle is instead applied to optimistic estimates based on the posterior distribution. In practice, this is accomplished for our cost minimization problem by using lower quantiles of the posterior distribution of each base arm. This principle of selecting plausibly optimal arms is called optimism in the face of uncertainty and is the underlying idea of all bandit algorithms based on UCB.

We note that while in BayesUCB, as outlined in Algorithm 1 of Kaufmann et al. (2012), the horizon is used to calculate UCB values, the authors of that work also explain that upper quantiles of order \(1 - 1/t\) (calculated without the horizon) achieve good results in practice. For that reason, we use lower quantiles of order 1/t in the version of BayesUCB studied in this work, making it an anytime algorithm, like Thompson Sampling.
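As a sketch, such optimistic estimates can be obtained from the Gaussian posteriors as follows (the function name is ours, and the handling of the degenerate first time step is an assumption):

```python
from scipy.stats import norm

def bayes_ucb_weights(mu, varsigma, t):
    """Optimistic edge-weight estimates used by the B-UCB baseline (sketch).

    For cost minimization, the minimax-path oracle is applied to the lower
    quantile of order 1/t of each edge's Gaussian posterior N(mu_e, varsigma_e^2).
    Assumes t >= 2; the first step can be played using the prior means instead.
    """
    return norm.ppf(1.0 / t, loc=mu, scale=varsigma)
```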

Algorithm 2: Thompson Sampling with approximate objective
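As with Algorithm 1, the pseudocode is shown as a figure; the following Python sketch reconstructs the approximate variant, assuming a deterministic minimax-path oracle such as the routine sketched in Section 2.1. All names are illustrative and the environment interaction is simulated.

```python
import numpy as np

def thompson_sampling_approx(oracle, mu, var, noise_var, env_means, T, seed=0):
    """Sketch of approximate Thompson Sampling: the oracle acts on sampled means.

    oracle:     callable(weights) -> list of edge indices forming the minimax
                path under the given deterministic edge weights (e.g., the
                modified Dijkstra / MST routines discussed in Section 2.1)
    mu, var:    arrays of posterior means / variances per edge (priors at t = 0)
    noise_var:  array of known observation variances sigma_i^2
    env_means:  hidden true means theta_i^*, used only to simulate feedback
    """
    rng = np.random.default_rng(seed)
    mu, var = np.array(mu, dtype=float), np.array(var, dtype=float)
    for t in range(T):
        theta = rng.normal(mu, np.sqrt(var))                      # sample means from the posterior
        a_t = oracle(theta)                                       # minimax path under sampled means
        x = rng.normal(env_means[a_t], np.sqrt(noise_var[a_t]))   # semi-bandit feedback
        for i, w in zip(a_t, x):                                  # conjugate updates (Eqs. 2-3)
            new_var = 1.0 / (1.0 / var[i] + 1.0 / noise_var[i])
            mu[i] = new_var * (mu[i] / var[i] + w / noise_var[i])
            var[i] = new_var
    return mu, var
```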

To connect the different objectives in Algorithm 1 and Algorithm 2, we note that by Jensen’s inequality, \(\tilde{f}_{\tilde{\varvec{\theta }}} (\varvec{a}) \le f_{\tilde{\varvec{\theta }}} (\varvec{a})\) and that the approximation objective consequently will underestimate super arm costs. However, we establish an upper bound on this difference through Theorem 7.

Theorem 7

Given the optimal super arm \(\varvec{a^*}\) for Algorithm 1 and the optimal super arm \(\tilde{\varvec{a}}^*\) for Algorithm 2, we have that \(f_{\varvec{\theta }^*}(\tilde{\varvec{a}}^*) - f_{\varvec{\theta }^*}(\varvec{a}^*) \le \sqrt{2 \log d}\).

For any super arm \(\varvec{a} \in \mathcal {I}\), let \(Y_i\) for \(i \in \varvec{a}\) be Gaussian random variables with \(Y_i \sim \mathcal {N}(\theta ^*_i, \sigma _i^2)\). Furthermore, let \(W_i := Y_i - \theta ^*_i\), such that \(W_i \sim \mathcal {N}(0, \sigma ^2_i)\). Then, the following holds:

$$\begin{aligned}&\mathbb {E}\left[ \max _{i \in \varvec{a}} Y_i\right] \\&= \mathbb {E}\left[ \max _{i \in \varvec{a}} (W_i + \theta _i^*)\right] \\&\le \mathbb {E}\left[ \max _{i \in \varvec{a}} W_i\right] + \max _{i \in \varvec{a}} \mathbb {E}\left[ Y_i\right] \\&\le \sqrt{2 \log d} + \max _{i \in \varvec{a}} \mathbb {E}\left[ Y_i\right] \, , \end{aligned}$$

where the last inequality is due to Lemma 9 of Orabona et al. (2015) and since \(\sigma _i^2 \le 1\) for all \(i \in \varvec{a}\). We also note that, by Jensen’s inequality, we have \(\max _{i \in \varvec{a}} \mathbb {E}\left[ Y_i\right] \le \mathbb {E}\left[ \max _{i \in \varvec{a}} Y_i\right]\). Moreover, by definition we know that \(\varvec{a}^* = \arg \min _{\varvec{a} \in \mathcal {I}} \mathbb {E}\left[ \max _{i \in \varvec{a}} Y_i\right]\) and \(\tilde{\varvec{a}}^* = \arg \min _{\varvec{a} \in \mathcal {I}} \max _{i \in \varvec{a}} \mathbb {E}\left[ Y_i\right]\). Consequently, we have,

$$\begin{aligned} \max _{i \in \tilde{\varvec{a}}^*} \mathbb {E}\left[ Y_i\right] \le \max _{i \in \varvec{a}^*} \mathbb {E}\left[ Y_i\right] \le \mathbb {E}\left[ \max _{i \in \varvec{a}^*} Y_i\right] \le \mathbb {E}\left[ \max _{i \in \tilde{\varvec{a}}^*} Y_i\right] \le \sqrt{2 \log d} + \max _{i\in \tilde{\varvec{a}}^*} \mathbb {E}\left[ Y_i \right] . \end{aligned}$$

Hence, we can conclude that

$$\begin{aligned} f_{\varvec{\theta }^*}(\tilde{\varvec{a}}^*) - f_{\varvec{\theta }^*}(\varvec{a}^*) = \mathbb {E}\left[ \max _{i \in \tilde{\varvec{a}}^*} Y_i\right] - \mathbb {E}\left[ \max _{i \in \varvec{a}^*} Y_i\right] \le \sqrt{2 \log d}\, . \end{aligned}$$

In other words, Theorem 7 holds and the optimal solutions of the exact Algorithm 1 and the approximate Algorithm 2 differ by at most \(\sqrt{2 \log d}\). This bound is independent of the mean vector \(\varvec{\theta }^*\), depending only on the number of base arms, given that the edge weight variances are bounded.

4 Experimental results

In this section, we conduct bottleneck identification experiments using Algorithm 2 for two real-world applications, i) road (transport) networks, and ii) collaboration (social) networks. These experiments are performed with an extended version of the simulation framework by Russo et al. (2018) and evaluated using our approximate definition of regret. In addition, we compare Algorithm 1 to Algorithm 2 through a toy example.

4.1 Road networks

A bottleneck in a network is a segment of a path in the network that obstructs or stops flow. Identification of bottlenecks in a road network is a vital tool for traffic planners to analyze the network and prevent congestion. In this application, our goal is to find the bottleneck between a source and a target, i.e., a road segment which is necessary to pass and also has minimal traffic flow. In the road network model, we let the nodes represent intersections and the directed edges represent road segments, with travel time divided by distance (seconds per meter) as edge weights. The bottleneck between a pair of intersections is the minimum bottleneck over all paths connecting them, where the bottleneck for each of these paths is the largest weight over all road segments along it. Note that in order for the bottleneck between a pair of intersections to have a meaning, there needs to exist at least one path connecting them.

We collect road networks of four cities, shown in Table 1, from OpenStreetMap contributors (2017), where the average travel time as well as the distance is provided for each (directed) edge. We simulate an environment with the stochastic edge weights sampled from \(w_e \sim \mathcal {N}(\theta _e^*, \sigma _e^2)\), where the observation noise is \(\sigma _e = 0.4\). For the experiments, the environment samples the true unknown mean \(\theta _e^*\) from the known prior \(\theta _e^* \sim \mathcal {N}(\mu _{e,0}, {\varsigma _{e,0}}^2)\), where \(\varsigma _{e,0} = 0.4 \text {s/m}\), and \(\mu _{e,0}\) is the average travel time divided by distance provided by OpenStreetMap (OSM).
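A minimal sketch of this simulated environment, under the stated parameters (the function name and interface are hypothetical):

```python
import numpy as np

def simulate_road_environment(mu_0, seed=0):
    """Sketch of the simulated road-network environment used in this section.

    mu_0: array of OSM-derived prior means (average travel time / distance, s/m).
    The hidden true means theta_e^* are drawn once from the prior with
    varsigma_{e,0} = 0.4 s/m, and each traversal of an edge yields a noisy
    observation with sigma_e = 0.4.
    """
    rng = np.random.default_rng(seed)
    theta_star = rng.normal(mu_0, 0.4)                     # true (unknown) means, one per edge
    def observe(edge_indices):                             # semi-bandit feedback for a played path
        return rng.normal(theta_star[edge_indices], 0.4)
    return theta_star, observe
```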

Table 1 A description of the road networks.

We consider one greedy agent (GR) and two \(\epsilon _t\)-greedy agents (e-GR) as baselines. The greedy agent (GR) always chooses the path with the lowest current estimate of expected cost. In each time step, each e-GR agent, with probability \(\epsilon _t\) decreasing with t (specifically, we let \(\epsilon _t = \min (1, 1/\sqrt{t})\)), chooses a random path, and acts like the greedy agent otherwise. In our experiments, we implement the two e-GR agents based on the combinatorial version of \(\epsilon _t\)-greedy introduced in Algorithm 1 in the Supplementary Material of Chen et al. (2013). The first e-GR agent chooses a path between the source and the target containing a uniformly chosen random node (e-GR-N), and the second e-GR agent chooses a path with a uniformly selected random edge (e-GR-E). We evaluate how the Thompson Sampling agent (TS) and the BayesUCB agent (B-UCB) perform compared to these baselines. We run the simulations with all five agents for each road network and report the cumulative regret at a given horizon T, averaged over five repetitions. The horizon is chosen such that the instant regret is almost stabilized for the agents.
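For concreteness, the decaying exploration step shared by the two e-GR agents can be sketched as follows (the function name and the random-path generator are hypothetical placeholders):

```python
import numpy as np

def epsilon_greedy_step(t, greedy_path, random_path_fn, rng):
    """Decaying epsilon_t-greedy selection used by the e-GR baselines (sketch).

    With probability epsilon_t = min(1, 1/sqrt(t)), a random path is played
    (through a random node for e-GR-N, or containing a random edge for e-GR-E,
    as provided by random_path_fn); otherwise the current greedy path is played.
    """
    eps_t = min(1.0, 1.0 / np.sqrt(t))
    return random_path_fn() if rng.random() < eps_t else greedy_path
```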

Table 2 Average cumulative regret and corresponding standard error (SE) over five runs, at the horizon \(T = 6000\), for Thompson Sampling (TS), BayesUCB (B-UCB), \(\epsilon _t\)-greedy agents (e-GR-N and e-GR-E), and Greedy (GR) agent.

Table 2 shows the average cumulative regrets and their corresponding standard error over five runs at the horizon T. For all four road networks, the TS agent incurs the lowest average cumulative regret and standard error over five runs. Then, B-UCB follows TS and yields a better result than the baselines (GR and both e-GR variants).

Fig. 1

Cumulative regret averaged over 5 runs with shaded standard error bars, for Thompson Sampling (TS), Bayes UCB (B-UCB), \(\epsilon _t\)-greedy agents (e-GR-N and e-GR-E), and greedy (GR) with horizon \(T= 6000\), on Eindhoven (a), Manhattan (c), Oslo (e) and Salzburg (g) road networks. Visualizations of the paths explored by the TS agent are shown in red, for Eindhoven (b), Manhattan (d), Oslo (f) and Salzburg (h) road networks. Opacity illustrates the exploration of each of the road segments.

Figure 1 illustrates the average cumulative regret with standard error (SE) bars on the road networks of the four aforementioned cities. For Eindhoven, Figure 1a shows the average cumulative regret, where at horizon \(T=6000\) the TS agent yields the lowest cumulative regret. Then, B-UCB follows TS and achieves a better result compared to the other baselines. As time progresses, the cumulative regret curves of first TS and then B-UCB saturate, indicating that they have performed sufficient exploration. The agents also differ in the size of their SE bars, with the TS agent having the smallest. Figure 1b visualizes the Eindhoven road network, where the paths explored by the TS agent are shown in red. The road segments explored (tried) more often by the TS agent are displayed with higher opacity. Figure 1c, e, and g show the average cumulative regret with SE bars for Manhattan, Oslo, and Salzburg, respectively. The results show that TS incurs the lowest cumulative regret and the smallest SE bars. Then, B-UCB follows TS in both aspects and obtains a better result than the other baselines.

Fig. 2

Cumulative regret averaged over 5 runs with shaded standard error bars, for Thompson Sampling (TS), Bayes UCB (B-UCB), \(\epsilon _t\)-greedy agents (e-GR-N and e-GR-E), and greedy (GR) with horizon \(T= 2000\) for the collaboration network.

4.2 Collaboration network

We consider a collaboration network from computational geometry (Geom) (Jones, 2002) as an application of our approach to social networks. More specifically, we use the version provided by Handcock et al. (2003) and distributed among the Pajek datasets (Batagelj et al., 2006) where certain author duplicates, occurring in minor or major name variations, have been merged. The Handcock et al. (2003) version is based on the BibTeX bibliography (Beebe, 2002), to which the database from Jones (2002) has been exported. The network has 9072 vertices representing the authors and 22577 edges with the edge weights representing the number of mutual works between a pair of authors.

We simulate an environment where each edge weight is sampled as \(w_e \sim \mathcal {N}(\theta _e^*, \sigma _e^2)\), where \(\theta _e^*\) is regarded as the true (negative) mean number of shared publications between the pair of authors linked by edge e, and the observation noise is \(\sigma _e = 5\). Furthermore, in this experiment, while the true negative mean number of mutual publications is assumed (by the agent) to be distributed according to the prior \(\theta _e^* \sim \mathcal {N}(\mu _{e,0}, \varsigma _{e,0}^2)\) with \(\varsigma _{e,0} = 10\), we instead generate the mean from a wider prior \(\theta _e^* \sim \mathcal {N}(\mu _{e,0}, 20^2)\), simulating a scenario where the agent's confidence in its prior is too high (i.e., the assumed prior is too narrow). The assumed mean \(\mu _{e,0}\) of the prior is however consistent with the distribution from which \(\theta _e^*\) is sampled, and is directly determined by the pairwise negative number of mutual collaborations from the dataset by Handcock et al. (2003).

Figure 2 shows the cumulative regret, averaged over five runs for the different agents with horizon \(T=2000\), again chosen such that the regret is stabilized for all agents. One can see that the TS agent reaches the lowest cumulative regret, similar to the experimental studies on road networks.

4.3 Exact objective toy example

Fig. 3

Experimental results with average cumulative regret on the toy example, with \(T=10000\), on exact TS, approximate TS and exact greedy.

While it is not feasible to evaluate Algorithm 1 on graphs representing real-life transportation or social networks, it is possible for small synthetic graphs. We construct a graph consisting of 6 nodes and 10 edges, with the source and target nodes connected by four paths of length 2 and four paths of length 3. For each edge e, we sample the mean from a standard Gaussian prior, such that \(\theta _e^* \sim \mathcal {N}(0,1)\). The stochastic weights are then generated in each time step t such that \(w_{e,t} \sim \mathcal {N}(\theta ^*_e, 1)\).

In order to calculate the expected cost of each path, we use existing exact expressions for the expected maximum of two (Clark, 1961) and three (Lo, 2020; DasGupta, 2021) independent Gaussian random variables. Instead of using an oracle, we simply enumerate the paths to find the one with minimum expected cost.
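For reference, a minimal sketch of the two-variable expression (Clark, 1961) for independent Gaussians; the corresponding three-variable expressions are longer and omitted here:

```python
import math

def expected_max_two_gaussians(mu1, var1, mu2, var2):
    """E[max(X1, X2)] for independent Gaussians X1, X2 (Clark, 1961)."""
    theta = math.sqrt(var1 + var2)            # standard deviation of X1 - X2 (assumed > 0)
    alpha = (mu1 - mu2) / theta
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return mu1 * Phi(alpha) + mu2 * Phi(-alpha) + theta * phi(alpha)

# Sanity check: for two standard Gaussians, E[max] = 1/sqrt(pi) ~ 0.5642.
print(expected_max_two_gaussians(0.0, 1.0, 0.0, 1.0))
```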

In Figure 3, we compare Algorithm 1 (TS with exact objective) and Algorithm 2 (TS with approximate objective) using the exact notion of (cumulative) regret as defined in Eq. 4. Furthermore, we include a greedy baseline which also uses the exact objective. We use a horizon of \(T=10000\) and average the results over 20 experiments, wherein each algorithm is applied to a problem instance sampled from the prior.

We can see that the regret of exact TS quickly saturates, while approximate TS and the greedy method tend to end up in sub-optimal solutions. For approximate TS, this is to be expected since optimal arms for the exact and approximate problems may be different. It is worth noting, however, that approximate TS performs better than the exact greedy method on average.

5 Conclusion

We developed an online learning framework for bottleneck identification in networks via minimax paths. In particular, we modeled this task as a combinatorial semi-bandit problem for which we proposed a combinatorial version of Thompson Sampling. We then established an upper bound on the Bayesian regret of the Thompson Sampling method. To deal with the computational intractability of the problem, we devised an alternative problem formulation which approximates the original objective. Finally, we investigated the framework on several directed and undirected real-world networks from transport and collaboration domains. Our experimental results demonstrate its effectiveness compared to alternatives such as greedy and B-UCB methods.