1 Introduction

Multilevel optimization is a class of mathematical optimization problems in which other optimization problems are embedded in the constraints. Such problems are well suited to model sequential decision-making processes, in which a first decision-maker, the leader, integrates the reaction of another decision-maker, the follower, into their own problem. In recent years, most research has focused on the study and design of efficient solution methods for the case of two levels, namely bilevel problems [1], which has fostered a growing range of applications.

Near-optimality robustness, defined in [2], is an extension of bilevel optimization. In this setting, the upper level anticipates limited deviations of the lower level from an optimal solution and aims at a solution that remains feasible for any feasible and near-optimal solution of the lower level. This protection of the upper level against uncertain deviations of the lower level has led to the characterization of near-optimality robustness as a robust optimization approach for bilevel optimization. Models where the upper level is protected against all optimal lower-level responses (that is, without deviations) are referred to as pessimistic bilevel optimization models. In near-optimality robustness, the lower-level response plays the role of the uncertain parameter, and the maximum deviation of the objective value from optimality plays the role of the uncertainty budget. Because the set of near-optimal lower-level solutions potentially has infinite cardinality and depends on the upper-level decision itself, near-optimality robustness adds generalized semi-infinite constraints to the bilevel problem. These additional constraints can also be viewed as a form of robustness under decision-dependent uncertainty.

In this paper, we prove complexity results on multilevel problems to which near-optimality robustness constraints are added in various forms. We show that, under fairly general conditions, the near-optimal robust version of a multilevel problem remains on the same level of the polynomial hierarchy as the canonical problem. These results are non-trivial, assuming the polynomial hierarchy does not collapse, and open the possibility of designing solution algorithms for near-optimal robust multilevel problems that are as efficient as those for their canonical counterparts. Even though we focus on near-optimal robust multilevel problems, the complexity results we establish hold for all multilevel problems presenting the same hierarchical structure, i.e. the same anticipation and parameterization between levels as the near-optimal formulation with the adversarial problems, as defined in Sect. 3. In particular, the results extend to pessimistic multilevel problems, which can be viewed as a special case of the equivalent near-optimal robust multilevel problem.

The rest of this paper is organized as follows. Section 2 introduces the notation and the background on near-optimality robustness and existing complexity results in multilevel optimization. Section 3 presents complexity results for the near-optimal robust version of bilevel problems where the lower level belongs to \({\mathcal {P}}\) or \({{\mathcal {N}}}{{\mathcal {P}}}\). These results are extended in Section 4 to multilevel optimization problems, focusing on integer multilevel linear problems with near-optimal deviations of the topmost intermediate level. Section 5 provides complexity results for a generalized form of near-optimal robustness in integer multilevel problems, where multiple decision-makers anticipate near-optimal reactions of a lower level. Finally, we draw some conclusions in Sect. 6.

2 Multilevel optimization and near-optimality robustness

In this section, we introduce the notation and terminology for bilevel optimization and near-optimality robustness, and highlight prior complexity results in multilevel optimization. Let us define a bilevel problem as:

$$\begin{aligned} \min _{x}\,\,&F(x, v) \end{aligned}$$
(1a)
$$\begin{aligned} \text {s.t.}\,\,\,&G_k(x, v) \le 0&\forall k \in \left[ \![m_u\right] \!] \end{aligned}$$
(1b)
$$\begin{aligned}&x \in {\mathcal {X}} \end{aligned}$$
(1c)
$$\begin{aligned}&\text {where } v \in \mathop {\mathrm {arg \,min}}\limits _{y\in {\mathcal {Y}}} \{f(x,y) \text { s.t. }g_i(x, y) \le 0 \,\, \forall i \in \left[ \![m_l\right] \!] \}. \end{aligned}$$
(1d)

We denote by \({\mathcal {X}}\) and \({\mathcal {Y}}\) the domains of the upper- and lower-level variables, respectively. We use the convenience notation \(\left[ \![n\right] \!] = \{1, \dots ,n\}\) for a natural number n.

Problem (1) is ill-posed, since the lower-level problem may admit multiple optimal solutions [3, Ch. 1]. Models often rely on additional assumptions to remove this ambiguity, the two most common being the optimistic and pessimistic approaches. In the optimistic case (BiP), the lower level selects an optimal decision that most favours the upper level. In this setting, the lower-level decision can be taken by the upper level, as long as it is optimal for the lower-level problem. The upper level can thus optimize over both x and v, leading to:

$$\begin{aligned} \text {(BiP): } \min _{x,v}\,\,&F(x, v) \end{aligned}$$
(2a)
$$\begin{aligned} \text {s.t.}\,\,\,&G_k(x, v) \le 0&\forall k \in \left[ \![m_u\right] \!] \end{aligned}$$
(2b)
$$\begin{aligned}&x \in {\mathcal {X}} \end{aligned}$$
(2c)
$$\begin{aligned}&v \in \mathop {\mathrm {arg \,min}}\limits _{y\in {\mathcal {Y}}} \{f(x,y) \text { s.t. }g_i(x, y) \le 0 \,\, \forall i \in \left[ \![m_l\right] \!] \}. \end{aligned}$$
(2d)

Constraint (2d) implies that v is feasible for the lower level and that f(x, v) is the optimal value of the lower-level problem parameterized by x.

The pessimistic approach assumes that the lower level chooses an optimal solution that is the worst for the upper-level objective as in [1] or with respect to the upper-level constraints as in [4].

The near-optimal robust version of (BiP) considers that the lower-level solution may not be optimal but only near-optimal with respect to the lower-level objective function. The tolerance for near-optimality, denoted by \(\delta \), is expressed as a maximum deviation of the objective value from optimality. The problem solved at the upper level must integrate this deviation and protect the feasibility of its constraints for any near-optimal lower-level decision. The problem is formulated as:

$$\begin{aligned} \text {(NORBiP): } \min _{x,v}\,\,&F(x, v) \end{aligned}$$
(3a)
$$\begin{aligned} \text {s.t.}\,\,\,&(2b)-(2d) \end{aligned}$$
(3b)
$$\begin{aligned}&G_k(x, z) \le 0\,\, \forall z \in {\mathcal {Z}}(x;\delta )&\forall k \in \left[ \![m_u\right] \!] \end{aligned}$$
(3c)
$$\begin{aligned}&\text {where } {\mathcal {Z}}(x;\delta ) = \{y\in {\mathcal {Y}}\,\,\text {|}\, f(x, y) \le f(x, v) + \delta , g(x, y) \le 0\}. \end{aligned}$$
(3d)

\({\mathcal {Z}}(x;\delta )\) denotes the near-optimal set, i.e. the set of near-optimal lower-level solutions, which depends on both the upper-level decision x and \(\delta \). (NORBiP) is a generalization of the pessimistic bilevel problem, since the latter is both a special case and a relaxation of (NORBiP) [2]. We refer to (BiP) as the canonical problem for (NORBiP) [or equivalently Problem (4)] and to (NORBiP) as the near-optimal robust version of (BiP). In the formulation of (NORBiP), the upper-level objective depends on the decision variables of both levels but is not protected against near-optimal deviations. A more conservative formulation, which also protects the objective by moving it to the constraints in an epigraph reformulation [2], is given by:

$$\begin{aligned} \text {(NORBiP-Alt): } \min _{x,v,\tau }\,\,&\tau \\ \text {s.t.}\,\,\,&(2b) - (2d)\\&G_k(x, z) \le 0\,\, \forall z \in {\mathcal {Z}}(x;\delta )&\forall k \in \left[ \![m_u\right] \!]\\&F(x, z) \le \tau \,\, \forall z \in {\mathcal {Z}}(x;\delta ). \end{aligned}$$

The optimal values of the three problems are ordered as:

$$\begin{aligned} \text {opt(BiP)} \le \text {opt(NORBiP)} \le \text {opt(NORBiP-Alt)}. \end{aligned}$$
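To make this ordering concrete, (BiP) and (NORBiP) can be compared by brute-force enumeration on a small illustrative instance; the data below are our own assumptions, not taken from the paper, with discrete sets standing in for \({\mathcal {X}}\) and \({\mathcal {Y}}\) so that argmin sets and near-optimal sets can be enumerated exactly.

```python
# Purely illustrative toy instance (all data are assumptions): discrete
# sets stand in for the domains X and Y of Problem (1).
X, Y = range(6), range(6)
F = lambda x, v: -(x + 2 * v)   # upper-level objective
G = lambda x, y: x + y - 6      # upper-level constraint G(x, y) <= 0
f = lambda x, y: (y - x) ** 2   # lower-level objective
g = lambda x, y: y - 4          # lower-level constraint g(x, y) <= 0

def solve(delta):
    """delta = 0 recovers (BiP); delta > 0 gives (NORBiP)."""
    best = None
    for x in X:
        feasible_y = [y for y in Y if g(x, y) <= 0]
        f_star = min(f(x, y) for y in feasible_y)
        # near-optimal set Z(x; delta) of (3d)
        Z = [y for y in feasible_y if f(x, y) <= f_star + delta]
        for v in feasible_y:
            if f(x, v) > f_star:       # v must be lower-level optimal, (2d)
                continue
            # (2b) for the canonical v, and (3c) for every z in Z
            if G(x, v) <= 0 and all(G(x, z) <= 0 for z in Z):
                if best is None or F(x, v) < best[0]:
                    best = (F(x, v), x, v)
    return best
```

On this instance, opt(BiP) \(= -9\) at \(x = 3\), while with \(\delta = 1\) the robustness constraint cuts off \(x = 3\) and opt(NORBiP) \(= -6\) at \(x = 2\), illustrating \(\text {opt(BiP)} \le \text {opt(NORBiP)}\).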

We next provide a review of complexity results for bilevel and multilevel optimization problems. Bilevel problems are \({{\mathcal {N}}}{{\mathcal {P}}}\)-hard in general, even when the objective functions and constraints at both levels are linear [5]. When the lower-level problem is convex, a common solution approach consists in replacing it with its KKT conditions [6, 7], which are necessary and sufficient if the problem satisfies certain constraint qualifications. This approach results in a single optimization problem with complementarity constraints, whose decision problem is \({{\mathcal {N}}}{{\mathcal {P}}}\)-complete [8]. A specific form of the three-level problem is investigated in [9], where only the objective value of the bottom-level problem appears in the objective functions of the first and second levels. If these conditions hold and all objectives and constraints are linear, the problem can be reduced to a single-level problem with complementarity constraints of polynomial size.

Pessimistic bilevel problems for which no upper-level constraint depends on lower-level variables are studied in [10]. The problem of finding an optimal solution to the pessimistic case is shown to be \({{\mathcal {N}}}{{\mathcal {P}}}\)-hard, even if a solution to the optimistic counterpart of the same problem is provided. A variant is also defined, where the lower level may pick a suboptimal response that only impacts the upper-level objective. This variant is comparable to the Objective-Robust Near-Optimal Bilevel Problem defined in [2]. In [11], the lower level is assumed to respond to the upper level with a decision derived from a heuristic algorithm taken from a predefined set. An uncertain bilevel setting with a pure binary lower level is considered in [12]. Lower-level response uncertainty is encoded as a maximum Hamming distance of the near-optimal decision to the optimal one. In [4], the independent case of the pessimistic bilevel problem is studied, corresponding to a special case of (NORBiP) with \(\delta =0\) and all lower-level constraints independent of the upper-level variables. It is shown that the linear independent pessimistic bilevel problem, and consequently the linear near-optimal robust bilevel problem, can be solved in polynomial time, while it is strongly \({{\mathcal {N}}}{{\mathcal {P}}}\)-hard in the non-linear case.

When the lower-level problem cannot be solved in polynomial time, the bilevel problem is in general \(\varSigma _2^P\)-hard. The notion of \(\varSigma _2^P\)-hardness and the classes of the polynomial hierarchy are recalled in Sect. 3. Despite this complexity result, new algorithms and corresponding implementations have been developed to solve these problems, in particular mixed-integer linear bilevel problems [13,14,15]. Variants of the bilevel knapsack problem were investigated in [16] and proven to be \(\varSigma _2^P\)-hard, like the generic mixed-integer bilevel problem.

Multilevel optimization was initially investigated in [17] in the case of linear constraints and objectives at all levels. In this setting, the problem is shown to be in \(\varSigma _s^P\), with \(s+1\) being the number of levels. The linear bilevel problem corresponds to \(s=1\) and is in \(\varSigma _1^P\equiv {{\mathcal {N}}}{{\mathcal {P}}}\). If, on the contrary, at least the bottom-level problem involves integrality constraints (or more generally belongs to \({{\mathcal {N}}}{{\mathcal {P}}}\) but not \({\mathcal {P}}\)), the multilevel problem with s levels belongs to \(\varSigma _s^P\). A model unifying multistage stochastic and multilevel problems is defined in [18], based on a risk function capturing the component of the objective function which is unknown to a decision-maker at their stage. Complexity and completeness results in the polynomial hierarchy above the first level are compiled in [19]. We also refer the interested reader to Kleinert et al. [20] for a recent review on complexity results and computational approaches in bilevel optimization.

As highlighted in [18], most results in the literature on complexity of multilevel optimization use \({{\mathcal {N}}}{{\mathcal {P}}}\)-hardness as the sole characterization. This only indicates that a given problem is at least as hard as all problems in \({{\mathcal {N}}}{{\mathcal {P}}}\) and that no polynomial-time solution method should be expected unless \({{\mathcal {N}}}{{\mathcal {P}}} = {\mathcal {P}}\).

We characterize near-optimal robust multilevel problems not only through a hardness or complexity “lower bound”, i.e. being at least as hard as all problems in a given class, but also through a complexity “upper bound”, i.e. the class of the polynomial hierarchy to which they belong. For instance, the linear optimistic bilevel problem is strongly \({{\mathcal {N}}}{{\mathcal {P}}}\)-hard but belongs to \({{\mathcal {N}}}{{\mathcal {P}}}\), and is therefore not \(\varSigma _2^P\)-hard unless the polynomial hierarchy collapses. The results are established for (NORBiP) and directly apply to (NORBiP-Alt), to the constraint-based pessimistic bilevel problem from [4], and to the more classical objective-based pessimistic bilevel formulation, which can be reformulated as a constraint-based one.

3 Near-optimal robust bilevel problems

We establish in this section complexity results for near-optimal robust bilevel problems for which the lower level \({\mathcal {L}}\) is a single-level problem parameterized by the upper-level decision. (NORBiP) can be reformulated as in [2] by replacing each k-th semi-infinite Constraint (3c) with the lower-level solution \(z_k\) in \({\mathcal {Z}}(x;\delta )\) that yields the highest value of \(G_k(x, z_k)\):

$$\begin{aligned} \min _{x,v}\,\,&F(x, v) \end{aligned}$$
(4a)
$$\begin{aligned} \text {s.t.}\,\,\,&(2b) - (2d) \end{aligned}$$
(4b)
$$\begin{aligned}&G_k(x, z_k) \le 0&\forall k \in \left[ \![m_u\right] \!] \end{aligned}$$
(4c)
$$\begin{aligned}&z_k \in \mathop {\mathrm {arg \, max}}\limits _{y\in {\mathcal {Y}}}\{G_k(x,y) \text { s.t. } f(x, y) \le f(x, v) + \delta , g(x, y) \le 0\}&\forall k \in \left[ \![m_u\right] \!]. \end{aligned}$$
(4d)

From a game-theoretical perspective, the near-optimal robust version of a bilevel problem can be seen as a three-player hierarchical game. The upper level \({\mathcal {U}}\) and lower level \({\mathcal {L}}\) are identical to those of the canonical bilevel problem. The third player is the adversarial problem \({\mathcal {A}}\), which selects the worst near-optimal lower-level solution with respect to the upper-level constraints, as represented by the embedded maximization in Constraint (4d). If the upper level has multiple constraints, the adversarial problem can be decomposed into problems \({\mathcal {A}}_k, k \in \left[ \![m_u\right] \!]\), as done in [2], where \(m_u\) is the number of upper-level constraints and each \({\mathcal {A}}_k\) finds the worst case with respect to the k-th upper-level constraint. The interaction among the three players is depicted in Fig. 1a. The canonical problem refers to the optimistic bilevel problem without near-optimality robustness constraints. We refer to the variable v as the canonical lower-level decision.

Fig. 1

Graphical representations of different near-optimal robust multilevel problems. Blue dashed arcs represent a parameterization of the source vertex by the decisions of the destination vertex; solid red arcs represent an anticipation of the decisions of the destination vertex in the problem of the source vertex. Section 3 focuses on the setting presented in a, Section 4 addresses the multilevel problem illustrated in b, while c and d are the subject of Sect. 5

The complexity classes of the polynomial hierarchy are only defined for decision problems. We consider that an optimization problem belongs to a given class if that class contains the associated decision problem of determining whether there exists a feasible solution whose objective value is at least as good as a given bound.

Definition 1

The decision problem associated with an optimization problem is in \({\mathcal {P}}^*\left[ {\mathcal {H}}\right] \), with \({\mathcal {H}}\) a set of real-valued functions on a vector space \({\mathcal {Y}}\), iff:

  1. i.

    it belongs to \({\mathcal {P}}\);

  2. ii.

    for any \(h \in {\mathcal {H}}\), the problem with an additional linear constraint and an objective function set as \(h(\cdot )\) is also in \({\mathcal {P}}\).

A broad range of problems in \({\mathcal {P}}\) are also in \({\mathcal {P}}^*\left[ {\mathcal {H}}\right] \) for certain sets of functions \({\mathcal {H}}\); see Example 1 for linear problems with linear functions, and Example 2 for combinatorial problems in \({\mathcal {P}}\) for which this fails. The classes \({{\mathcal {N}}}{{\mathcal {P}}}^*\left[ {\mathcal {H}}\right] \) and \(\varSigma _s^{P*}\left[ {\mathcal {H}}\right] \) are defined in a similar way. We next consider two examples illustrating these definitions.

Example 1

Denoting by \({\mathcal {H}}_L\) the set of linear functions from the space of lower-level variables to \({\mathbb {R}}\), linear optimization problems are in \({\mathcal {P}}^{*}\left[ {\mathcal {H}}_L\right] \), since any given problem with an additional linear constraint and a different linear objective function is also a linear optimization problem.

Example 2

Denoting by \({\mathcal {H}}_L\) the set of linear functions from the space of lower-level variables to \({\mathbb {R}}\), combinatorial optimization problems in \({\mathcal {P}}\) which can be formulated as linear optimization problems with totally unimodular matrices are not in \({\mathcal {P}}^{*}\left[ {\mathcal {H}}_L\right] \) in general. Such problems include network flow or bipartite matching problems. Indeed, adding a linear constraint may break the integrality of solutions of the linear relaxation of the lower-level problem.
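A minimal numerical sketch of the phenomenon in Example 2, on an assumed 2×2 assignment instance (the data are ours, not from the paper): the row and column constraints alone form a totally unimodular system, but adding a single linear constraint yields a polytope with a fractional point that strictly dominates every integral point, so the linear relaxation no longer has an integral optimum.

```python
from itertools import product

# Illustrative 2x2 assignment instance (data are assumptions).  The
# row/column constraints alone have a totally unimodular matrix.
w = {(1, 1): 1.0, (1, 2): 0.6, (2, 1): 0.6, (2, 2): 1.0}  # edge weights

def feasible(x):
    rows = x[1, 1] + x[1, 2] <= 1 and x[2, 1] + x[2, 2] <= 1
    cols = x[1, 1] + x[2, 1] <= 1 and x[1, 2] + x[2, 2] <= 1
    extra = x[1, 1] + x[2, 2] <= 1          # the added linear constraint
    return rows and cols and extra and all(v >= 0 for v in x.values())

value = lambda x: sum(w[k] * x[k] for k in w)

# Best integral point of the polytope with the extra constraint:
best_int = max(value(dict(zip(w, bits)))
               for bits in product((0, 1), repeat=4)
               if feasible(dict(zip(w, bits))))

frac = {k: 0.5 for k in w}                  # a fractional feasible point
assert feasible(frac) and value(frac) > best_int
```

Here the best integral point has value 1.2, while the fractional point \(x_{ij} = 0.5\) is feasible with value 1.6, so the optimum of the linear relaxation is not attained at any integral point.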

The polynomial hierarchy is first defined in [21], and a link to multilevel games is established in [17]. The complexity class at the s-th level of the polynomial hierarchy is denoted by \(\varSigma _s^{P}\) and defined recursively: \(\varSigma _0^{P} = {\mathcal {P}}\), \(\varSigma _1^{P} = {{\mathcal {N}}}{{\mathcal {P}}}\), and problems in \(\varSigma _s^{P}\), \(s>1\), are solvable in non-deterministic polynomial time given an oracle for problems in \(\varSigma _{s-1}^{P}\). In particular, a positive answer to a decision problem in \({{\mathcal {N}}}{{\mathcal {P}}}\) can be verified in polynomial time, given a certificate. Thus, if the decision problem associated with an optimization problem is in \({{\mathcal {N}}}{{\mathcal {P}}}\), then, given a candidate solution, its objective value can be compared to a given bound and its feasibility can be verified in polynomial time. We reformulate these statements in the following proposition:

Proposition 1

[17] An optimization problem is in \(\varSigma _{s+1}^P\) if verifying that a given solution is feasible and attains a given bound can be done in polynomial time, when equipped with an oracle solving problems in \(\varSigma _{s}^P\) in a single step.

Proposition 1 is the main property of the classes of the polynomial hierarchy used to determine the complexity of near-optimal robust bilevel problems in various settings throughout this paper.

Lemma 1

Given a bilevel problem in the form of Problem (2), if the lower-level problem is in \({\mathcal {P}}^*\left[ {\mathcal {H}}\right] \), and

$$\begin{aligned} -G_k(x,\cdot ) \in {\mathcal {H}}\,\,\forall x, \forall k \in \left[ \![m_u\right] \!], \end{aligned}$$

then the adversarial problem (4d) is in \({\mathcal {P}}\).

Proof

The lower-level problem can equivalently be written in an epigraph form:

$$\begin{aligned} (v,w) \in \mathop {\mathrm {arg \,min}}\limits _{y,u}\,\,&u\\ \text {s.t.}\,\,\,&f(x,y) - u \le 0 \\&g(x, y) \le 0. \end{aligned}$$

Given a solution (v, w) of the lower-level problem and an upper-level constraint \(G_k(x,y) \le 0\), the adversarial problem is defined by:

$$\begin{aligned} \min _{y,u}\,\,&-G_k(x,y)\\ \text {s.t.}\,\,\,&f(x,y) - u \le 0\\&g(x, y) \le 0\\&u \le w. \end{aligned}$$

Compared to the lower-level problem, the adversarial problem contains an additional linear constraint \(u \le w\) and an objective function updated to \(-G_k(x,\cdot )\), which belongs to \({\mathcal {H}}\) by assumption. By definition of \({\mathcal {P}}^*\left[ {\mathcal {H}}\right] \), the adversarial problem is therefore in \({\mathcal {P}}\). \(\square \)
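The construction in the proof can be sketched for a linear lower level using `scipy.optimize.linprog`. The instance below is an illustrative assumption; the epigraph variable u is eliminated by writing the added linear row directly as \(c^\top y \le f^* + \delta \), matching the near-optimal set (3d).

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative linear lower level (x fixed and absorbed into the data):
#   min  c^T y   s.t.  A y <= b,  y >= 0
c = np.array([1.0, 1.0])
A = np.array([[-1.0, -1.0]])        # encodes y1 + y2 >= 1
b = np.array([-1.0])

lower = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
f_star = lower.fun                  # optimal lower-level value (here 1)

# Adversarial problem for a hypothetical upper-level constraint
# G(y) = y1 - 0.7 <= 0: maximize y1 over the near-optimal set, i.e. the
# lower-level LP with one extra linear row and the objective set to -G.
delta = 0.2
A_adv = np.vstack([A, c])           # extra row: c^T y <= f* + delta
b_adv = np.append(b, f_star + delta)
adv = linprog(np.array([-1.0, 0.0]),  # min -y1  <=>  max y1
              A_ub=A_adv, b_ub=b_adv, bounds=[(0, None)] * 2)
worst = -adv.fun                    # worst-case y1 over near-optimal responses
```

On this instance, \(f^* = 1\) and the worst-case response reaches \(y_1 = 1.2 > 0.7\): the adversarial problem is again a linear program, only one row and one objective away from the lower level.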

Lemma 1 highlights that the restriction imposed by the class \({\mathcal {P}}^*\left[ {\mathcal {H}}\right] \) on the lower level ensures the complexity class for the adversarial problem. This result is now leveraged in Theorem 1.

Theorem 1

Given a bilevel problem (P), if there exists \({\mathcal {H}}\) such that the lower-level problem is in \({{\mathcal {N}}}{{\mathcal {P}}}^*\left[ {\mathcal {H}}\right] \) and

$$\begin{aligned} -G_k(x, \cdot ) \in {\mathcal {H}}\,\, \forall x \in {\mathcal {X}}, \forall k \in \left[ \![m_u\right] \!], \end{aligned}$$

then the near-optimal robust version of the bilevel problem is in \(\varSigma _{2}^P\), like the canonical problem.

Proof

The proof relies on the ability to verify, according to Proposition 1, that a given solution (x, v) is feasible and results in an objective value at least as low as a bound \(\varGamma \). This verification can be carried out with the following steps:

  1. i.

    compute the upper-level objective value F(xv) and verify that \(F(x, v) \le \varGamma \);

  2. ii.

    verify that upper-level constraints are satisfied;

  3. iii.

    verify that lower-level constraints are satisfied;

  4. iv.

    compute the optimum value \({\mathcal {L}}(x)\) of the lower-level problem parameterized by x and check if:

    $$\begin{aligned} f(x,v) \le {\mathcal {L}}(x); \end{aligned}$$
  5. v.

    compute the worst case: find

    $$\begin{aligned} z_k \in \mathop {\mathrm {arg \, max}}\limits _{y\in {\mathcal {Y}}}\, {\mathcal {A}}_k(x, v)\,\,\, \forall k \in \left[ \![m_u\right] \!], \end{aligned}$$

    where \({\mathcal {A}}_k(x, v)\) is the k-th adversarial problem parameterized by (xv);

  6. vi.

    verify near-optimal robustness: \(\forall k \in \left[ \![m_u\right] \!]\), verify that the k-th upper-level constraint is satisfied by the worst-case \(z_k\).

Steps i and ii can be carried out in polynomial time by assumption. Step iii requires checking the feasibility of a solution to a problem in \({{\mathcal {N}}}{{\mathcal {P}}}\), which can be done in polynomial time. Step iv consists in solving the lower-level problem, while Step v corresponds to solving the \(m_u\) adversarial problems, which belong to \({{\mathcal {N}}}{{\mathcal {P}}}\) by the analogue of Lemma 1 for \({{\mathcal {N}}}{{\mathcal {P}}}^*\left[ {\mathcal {H}}\right] \). The verification thus runs in polynomial time when equipped with an oracle for problems in \({{\mathcal {N}}}{{\mathcal {P}}}\), and the near-optimal robust problem is in \(\varSigma _{2}^P\) by Proposition 1. \(\square \)
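The verification steps above can be summarized in a short sketch, where `solve_lower` and `solve_adversarial` are hypothetical oracles standing in for the \({{\mathcal {N}}}{{\mathcal {P}}}\) oracle of Proposition 1; all names are illustrative and not from the paper.

```python
def verify(x, v, Gamma, F, G_list, g_list, f,
           solve_lower, solve_adversarial, delta):
    """Check that (x, v) is feasible for (NORBiP) with objective <= Gamma.

    `solve_lower(x)` returns the optimal lower-level value and
    `solve_adversarial(x, v, Gk, delta)` a worst-case near-optimal
    response for constraint Gk; both stand in for the NP oracle.
    """
    if F(x, v) > Gamma:                      # step i: objective bound
        return False
    if any(Gk(x, v) > 0 for Gk in G_list):   # step ii: upper-level feasibility
        return False
    if any(gi(x, v) > 0 for gi in g_list):   # step iii: lower-level feasibility
        return False
    if f(x, v) > solve_lower(x):             # step iv: optimality of v
        return False
    for Gk in G_list:                        # steps v-vi: robustness check
        z = solve_adversarial(x, v, Gk, delta)
        if Gk(x, z) > 0:
            return False
    return True
```

Only steps iv and v invoke the oracle; every other step is a polynomial-time evaluation, which is exactly the structure Proposition 1 requires for membership in \(\varSigma _2^P\).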

Theorem 2

Given a bilevel problem (P), if the lower-level problem is convex and in \({\mathcal {P}}^*\left[ {\mathcal {H}}\right] \) with \({\mathcal {H}}\) a set of convex functions, and if the upper-level constraints are such that \(-G_k(x,\cdot ) \in {\mathcal {H}}\), then the near-optimal robust version of the bilevel problem is in \({{\mathcal {N}}}{{\mathcal {P}}}\). If the upper-level constraints are convex non-affine with respect to the lower-level variables, the near-optimal robust version is in general not in \({{\mathcal {N}}}{{\mathcal {P}}}\).

Proof

If the upper-level constraints are concave with respect to the lower-level variables, the adversarial problem defined as:

$$\begin{aligned} \max _{y\in {\mathcal {Y}}}&\,\, G_k(x, y) \end{aligned}$$
(5a)
$$\begin{aligned} \text {s.t.}\,\,\,&g(x, y) \le 0 \end{aligned}$$
(5b)
$$\begin{aligned}&f(x, y) \le f(x, v) + \delta \end{aligned}$$
(5c)

is convex. Furthermore, by definition of \({\mathcal {P}}^*\left[ {\mathcal {H}}\right] \), the adversarial problem is in \({\mathcal {P}}\).

Applying the same reasoning as in the proof of Theorem 1, Steps i-iii are identical and can be carried out in polynomial time. Step iv can be performed in polynomial time since \({\mathcal {L}}\) is in \({\mathcal {P}}\). Step v can also be performed in polynomial time since, \(\forall k \in \left[ \![m_u\right] \!]\), the k-th adversarial problem (5) is a convex problem that can be solved in polynomial time because \({\mathcal {L}}\) is in \({\mathcal {P}}^*\left[ {\mathcal {H}}\right] \). Step vi is a simple comparison of two quantities.

If the upper-level constraints are convex non-affine with respect to the lower-level variables, Problem (5) maximizes a convex non-affine function over a convex set. Such a problem is \({{\mathcal {N}}}{{\mathcal {P}}}\)-hard in general. Therefore, verifying that a given solution is feasible and satisfies a predefined bound on the objective value requires solving the \(m_u\) \({{\mathcal {N}}}{{\mathcal {P}}}\)-hard adversarial problems. If \({\mathcal {L}}\) is in \({{\mathcal {N}}}{{\mathcal {P}}}^*\left[ {\mathcal {H}}\right] \), then these adversarial problems are in \({{\mathcal {N}}}{{\mathcal {P}}}\) by definition of \({{\mathcal {N}}}{{\mathcal {P}}}^*\left[ {\mathcal {H}}\right] \), and the near-optimal robust problem is in \(\varSigma _{2}^P\) according to Proposition 1. \(\square \)

4 Near-optimal robust mixed-integer multilevel problems

In this section, we study the complexity of a near-optimal robust version of mixed-integer multilevel linear problems (MIMLP), where the lower level is itself an s-level problem and is \(\varSigma _s^P\)-hard. The canonical multilevel problem is, therefore, \(\varSigma _{s+1}^P\)-hard [17]. For some instances of mixed-integer bilevel problems, the optimal value can be approached arbitrarily closely but not reached [22]. To avoid such pathological cases, we restrict our attention to multilevel problems satisfying the criterion for mixed-integer bilevel problems from Fischetti et al. [13]:

Property 1

The continuous variables at any level s do not appear in the problems at levels that are lower than s (the levels deciding after s).

More specifically, we focus on mixed-integer multilevel linear problems where the upper-most lower level \({\mathcal {L}}_1\) may pick a solution deviating from the optimal value, while we ignore deviations of the levels \({\mathcal {L}}_{i>1}\). This problem is denoted by (\(\text {NOMIMLP}_s\)) and depicted in Fig. 1b.

The adversarial problem corresponds to a decision of the level \({\mathcal {L}}_1\) different from the canonical decision. This decision induces a different reaction from the subsequent levels \({\mathcal {L}}_2, \dots , {\mathcal {L}}_s\). Since the top-level constraints depend on the joint reaction of all following levels, we denote by \(z_{k} = (z_{k1}, z_{k2}, \dots , z_{ks})\) the worst-case joint near-optimal solution of all lower levels with respect to the top-level constraint k.

Theorem 3

If \({\mathcal {L}}_1\) is in \(\varSigma _{s}^{P*}\left[ {\mathcal {H}}_L\right] \), the decision problem associated with (\(\text {NOMIMLP}_s\)) is in \(\varSigma _{s+1}^P\), like the canonical multilevel problem.

Proof

Given a solution to all levels \((x_U,v_{1}, v_{2},\dots , v_s)\) and a bound \(\varGamma \), verifying that this solution is (i) feasible, (ii) near-optimal robust with parameter \(\delta \), and (iii) has an objective value at least as good as the bound \(\varGamma \) can be done through the following steps:

  1. i.

    compute the objective value and verify that it is at most \(\varGamma \);

  2. ii.

    verify variable integrality;

  3. iii.

    solve the problem \({\mathcal {L}}_1\), parameterized by \(x_U\), and verify that the solution \((v_{1}, v_{2},\dots )\) is optimal;

  4. iv.

    \(\forall k \in \left[ \![m_u\right] \!]\), solve the k-th adversarial problem. Let \(z_k=(z_{k1}, z_{k2},\dots , z_{ks})\) be the solution;

  5. v.

    \(\forall k \in \left[ \![m_u\right] \!]\), verify that the k-th upper-level constraint is feasible for the adversarial solution \(z_k\).

Steps i, ii, and v can be performed in polynomial time. Step iii requires solving a problem in \(\varSigma _{s}^P\), while Step iv consists in solving \(m_u\) problems in \(\varSigma _{s}^P\), since \({\mathcal {L}}_1\) is in \(\varSigma _{s}^{P*}\left[ {\mathcal {H}}_L\right] \). Checking the validity of a solution can thus be done in polynomial time with an oracle for problems in \(\varSigma _{s}^P\); by Proposition 1, (\(\text {NOMIMLP}_s\)) is therefore in \(\varSigma _{s+1}^P\), like the canonical problem. \(\square \)

5 Generalized near-optimal robust multilevel problem

In this section, we study the complexity of a variant of the problem presented in Sect. 4 with \(s+1\) decision-makers: s upper levels \({\mathcal {U}}_1, {\mathcal {U}}_2, \dots , {\mathcal {U}}_s\) and a single bottom level \({\mathcal {L}}\). We denote by \({\mathcal {U}}_1\) the top-most level. We assume that the bottom-level entity may choose a solution deviating from optimality. The entities at all levels \({\mathcal {U}}_i\, \forall i \in \{1\ldots s\}\) must therefore anticipate this deviation, thus solving a near-optimal robust problem to protect their feasibility from it. The variant, denoted by \(\text {GNORMP}_s\), is illustrated in Fig. 1c, d. We assume throughout this section that Property 1 holds, in order to avoid the unreachability issue previously mentioned. The decision variables of the upper levels are denoted by \(x_{(i)}\) and their objective functions by \(F_{(i)}(x_{(1)}, x_{(2)},\ldots , x_{(s)})\). The lower-level canonical decision is denoted by v, as in previous sections.

If the lowest level \({\mathcal {L}}\) belongs to \({{\mathcal {N}}}{{\mathcal {P}}}\), \({\mathcal {U}}_s\) belongs to \(\varSigma _{2}^P\) and the original problem is in \(\varSigma _{s+1}^P\). In a more general multilevel case, if the lowest level \({\mathcal {L}}\) solves a problem in \(\varSigma _{r}^P\), \({\mathcal {U}}_s\) solves a problem in \(\varSigma _{r+1}^P\) and \({\mathcal {U}}_1\) in \(\varSigma _{r+s}^P\).

We note that, for fixed decisions \(x_{(i)}\, \forall i \in \{1\ldots s-1\}\), the problem solved by \({\mathcal {U}}_s\) is a near-optimal robust bilevel problem. This differs from the model presented in Sect. 4 where, for a fixed upper-level decision, the top-most lower level \({\mathcal {L}}_1\) solves the same parameterized problem as in the canonical setting. Furthermore, as all levels \({\mathcal {U}}_i\) anticipate deviations of the lower-level decision in the near-optimal set, the worst case can be formulated with respect to the constraints of each of these levels. Consequently, distinct adversarial problems \({\mathcal {A}}_{i}\, \forall i \in \{1\ldots s\}\) can be formulated, and each upper level \({\mathcal {U}}_i\) integrates the reaction of the corresponding adversarial problem in its near-optimality robustness constraint. This formulation of \((\text {GNORMP}_s)\) is depicted in Fig. 1d.

Theorem 4

Given an \((s+1)\)-level problem \((\mathrm{GNORMP}_s)\), if the bottom-level problem parameterized by all upper-level decisions, \({\mathcal {L}}(x_{(1)}, x_{(2)},\dots , x_{(s)})\), is in \(\varSigma _{r}^{P*}\left[ {\mathcal {H}}_L\right] \), then \((\mathrm{GNORMP}_s)\) is in \(\varSigma _{r+s}^P\), like the corresponding canonical multilevel problem.

Proof

We denote by \(x_U = (x_{(1)}, x_{(2)},\dots , x_{(s)})\) the joint upper-level decision and by \(m_{U_i}\) the number of constraints of problem \({\mathcal {U}}_i\). As for Theorem 1, this proof is based on the complexity of verifying that a given solution \((x_U, v)\) is feasible and results in an objective value below a given bound. The verification requires the following steps:

  1. i.

    compute the top-level objective value and assert that it is below the bound;

  2. ii.

    verify feasibility of \((x_U, v)\) with respect to the constraints at all levels;

  3. iii.

    verify optimality of v for \({\mathcal {L}}\) parameterized by \(x_U\);

  4. iv.

    verify optimality of \(x_{(i)}\) for the near-optimal robust problem solved by the i-th level \({\mathcal {U}}_i(x_{(1)}, x_{(2)}\dots x_{(i-1)}; \delta )\) parameterized by all the decisions at levels above and the near-optimality tolerance \(\delta \);

  5. v.

    compute the near-optimal lower-level solution \(z_{k}\) which is the worst-case with respect to the k-th constraint of the top-most level \(\forall k \in [\![m_{U_1}]\!]\);

  6. vi.

    verify that each \(k \in [\![m_{U_1}]\!]\) top-level constraint is satisfied with respect to the corresponding worst-case solution \(z_k\).

Steps i-ii are performed in polynomial time. Step iii requires solving Problem \({\mathcal {L}}(x_U)\), which belongs to \(\varSigma _r^P\). Step iv consists in solving a generalized near-optimal robust multilevel problem \((\text {GNORMP}_{s-1})\) with one level fewer than the current problem. Step v requires the solution of \(m_{U_1}\) adversarial problems belonging to \(\varSigma _r^P\), since \({\mathcal {L}}\) is in \(\varSigma _{r}^{P*}\left[ {\mathcal {H}}_L\right] \). Step vi is an elementary comparison of two quantities for each \(k\in [\![m_{U_1}]\!]\). The step of highest complexity is Step iv. If it requires solving a problem in \(\varSigma _{r+s-1}^P\), then \((\text {GNORMP}_{s})\) is in \(\varSigma _{r+s}^P\), like its canonical problem.

Let us assume that Step iv requires solving a problem outside \(\varSigma _{r+s-1}^P\). Then \((\text {GNORMP}_{s-1})\) is not in \(\varSigma _{r+s-1}^P\), unlike its associated canonical problem, and its own Step iv requires solving a problem not in \(\varSigma _{r+s-2}^P\). By recurrence, \((\text {GNORMP}_{1})\) is not in \(\varSigma _{r+1}^P\). However, \((\text {GNORMP}_{1})\) is a near-optimal robust bilevel problem where the lower level is itself in \(\varSigma _{r}^P\); this corresponds to the setting of Sect. 4 and contradicts Theorem 3. \((\text {GNORMP}_{s-1})\) is therefore in \(\varSigma _{r+s-1}^P\), and verifying the feasibility of a given solution to \((\text {GNORMP}_{s})\) requires solving problems at most in \(\varSigma _{r+s-1}^P\). Based on Proposition 1, \((\text {GNORMP}_{s})\) is in \(\varSigma _{r+s}^P\), like its canonical multilevel problem. \(\square \)

In conclusion, Theorem 4 shows that adding near-optimality robustness at an arbitrary level of a multilevel problem does not increase its complexity in the polynomial hierarchy. Combining this property with the possibility of adding near-optimal deviations at an intermediate level, as in Theorem 3, near-optimality robustness can be introduced at multiple levels of a multilevel model without changing its complexity class in the polynomial hierarchy.

6 Conclusion

In this paper, we have shown that for many configurations of bilevel and multilevel optimization problems, adding near-optimality robustness to the canonical problem does not increase its complexity in the polynomial hierarchy. This result is obtained even though near-optimality robustness constraints add another level to the multilevel problem, which in general would change the complexity class.

We defined the class \(\varSigma _s^{P*}\left[ {\mathcal {H}}\right] \) as a slight restriction on problems from \(\varSigma _s^{P}\), ensuring that the adversarial problem derived from a lower-level problem from \(\varSigma _s^{P}\) lies in the same complexity class. While the definition of \(\varSigma _s^{P*}\left[ {\mathcal {H}}\right] \) is general enough to capture many non-linear multilevel problems, it avoids specific cases where the modified objective or additional linear constraint changes the complexity class for the adversarial problem.

Future work will consider specialized solution algorithms for some classes of near-optimal robust bilevel and multilevel problems.