Applying oracles of on-demand accuracy in two-stage stochastic programming – A computational study

https://doi.org/10.1016/j.ejor.2014.05.010

Highlights

  • We devise variants of the L-shaped method using the concept of on-demand accuracy (ODA).

  • In many of the iterations only an approximate cut is added to the master problem.

  • These cuts do not require the solution of second-stage subproblems.

  • ODA reduces average solution time by 55% on 105 problems.

  • ODA combined with regularization reduces average solution time by 79%.

Abstract

Traditionally, two variants of the L-shaped method based on Benders’ decomposition principle are used to solve two-stage stochastic programming problems: the aggregate and the disaggregate version. In this study we report our experiments with a special convex programming method applied to the aggregate master problem. The convex programming method is of the type that uses an oracle with on-demand accuracy. We use a special form which, when applied to two-stage stochastic programming problems, is shown to integrate the advantages of the traditional variants while avoiding their disadvantages. On a set of 105 test problems, we compare and analyze parallel implementations of regularized and unregularized versions of the algorithms. The results indicate that solution times are significantly shortened by applying the concept of on-demand accuracy.

Introduction

Decomposition is an effective and time-honoured means of handling two-stage stochastic programming problems. It can be interpreted as a cutting-plane scheme applied to the first-stage variables. Traditionally, there are two approaches: one can use a disaggregate or an aggregate model. A major drawback of the aggregate model is that an aggregate master problem cannot contain all the information obtained by the solution of the second-stage problems. The disaggregate master problem, on the other hand, may grow excessively. It is not easy to find a balance between the effort spent in solving the master problem on the one hand, and the second-stage problems on the other hand. The computational results of Wolf and Koberstein (2013) give insights into this question.

In this study we report our experiments with a special inexact convex programming method applied to the aggregate master problem of the two-stage stochastic programming decomposition scheme. The convex programming method is of the type that uses an oracle with on-demand accuracy, a concept proposed by Oliveira and Sagastizábal (2014). We are going to use a special form which, when applied to two-stage stochastic programming problems, integrates the advantages of the aggregate and the disaggregate models. This latter feature is discussed in Fábián (2012). We also examine the on-demand accuracy idea in an un-regularized context, which results in a pure cutting-plane method, in contrast to the level bundle methods treated in Oliveira and Sagastizábal (2014).

The paper is organized as follows. In Section 1.1 we outline the on-demand accuracy approach to convex programming, and present an algorithmic sketch of the partly inexact level method. In Section 2 we overview two-stage stochastic programming models and methods. Specifically, in Section 2.1 we sketch a decomposition method for two-stage problems based on the partly inexact level method. Section 3 discusses implementation issues. Our computational results are reported in Section 4, and conclusions are drawn in Section 5.

Let us consider the problem
$$\min \varphi(x) \quad \text{such that} \quad x \in X,$$
where $\varphi : \mathbb{R}^n \to \mathbb{R}$ is a convex function, and $X \subset \mathbb{R}^n$ is a convex, closed, bounded polyhedron. We assume that $\varphi$ is Lipschitz continuous over $X$ with the constant $\Lambda$.

Oliveira and Sagastizábal (2014) developed special regularization methods for unconstrained convex optimization, namely, bundle level methods that use oracles with on-demand accuracy. The methods work with approximate function data, which is especially useful in solving stochastic problems. Approximate function values and subgradients are provided by an oracle with on-demand accuracy. The accuracy of the oracle is regulated by two parameters: the first is a descent target, and the second is a tolerance. If the estimated function value reaches the descent target, then the prescribed tolerance is observed. Otherwise the oracle just detects that the target cannot be met, and returns rough estimates of the function data, disregarding the prescribed tolerance. The methods build on the ideas of Lemaréchal et al. (1995), Kiwiel (1995) and Fábián (2000), and integrate the level-type and the proximal approaches.
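To make the oracle's contract concrete, here is a minimal sketch of it as a Python protocol; the name OnDemandAccuracyOracle, the tuple layout and the flag semantics are our own illustration distilled from the description above, not an interface defined in the cited papers.

from typing import Protocol, Tuple
import numpy as np

class OnDemandAccuracyOracle(Protocol):
    """Contract of an oracle with on-demand accuracy for a convex function."""

    def __call__(self, x: np.ndarray, target: float, tol: float
                 ) -> Tuple[bool, float, np.ndarray]:
        """Return (reached, value, subgradient) at the point x.

        reached=True:  value <= target, and the returned data are accurate
                       within tol (the prescribed tolerance is observed).
        reached=False: the oracle only certifies that the descent target
                       cannot be met; value and subgradient are rough
                       estimates and tol is disregarded.
        """
        ...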

In this paper we are going to use a special method that falls into the ‘partly inexact’ category according to Oliveira and Sagastizábal, and applies only the level regularization of Lemaréchal et al. (1995). The method is discussed in detail in Fábián (2012).

In the following description, $\bar{\varphi}$ denotes the best function value known, and $\underline{\varphi}$ is a lower estimate of the optimum. The gap $\Delta = \bar{\varphi} - \underline{\varphi}$ measures the quality of the current approximation. The descent target is $\bar{\varphi} - \delta$, where the tolerance $\delta$ is regulated by the current gap. If the descent target is reached, then the oracle returns an exact subgradient. Otherwise the oracle just detects that the target cannot be met, and returns rough estimates of the function data. Iterations where the descent target is reached will be called substantial.

Algorithm 1

A partly inexact level method.

In step 1.3, above, the projection of $x^i$ onto $X_i$ means finding the point in $X_i$ nearest to $x^i$. This amounts to solving a convex quadratic programming problem.
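A minimal, self-contained sketch of such a loop is given below, assuming numpy and cvxpy are available. The toy objective (a maximum of random affine functions over a box), the oracle's subset-based rough answers and all parameter values are our own illustrative choices; the sketch mirrors the structure of a partly inexact level method but is not the authors' Algorithm 1.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 5, 40                       # dimension and number of affine pieces
A, b = rng.normal(size=(m, n)), rng.normal(size=m)
lo, hi = -np.ones(n), np.ones(n)   # X is a box: a closed bounded polyhedron

def oracle(x, target, subset_size=8):
    """Toy ODA oracle for phi(x) = max_k (A_k x + b_k).
    If phi(x) meets the descent target, exact data are returned; otherwise
    only a rough (but still valid) linearization built from a few pieces."""
    vals = A @ x + b
    k = int(np.argmax(vals))
    if vals[k] <= target:                        # target reached: exact data
        return True, vals[k], A[k]
    idx = rng.choice(m, size=subset_size, replace=False)
    k = idx[np.argmax(vals[idx])]                # rough data: weaker cut
    return False, vals[k], A[k]

def level_oda(lam=0.5, kappa=0.5, eps=1e-4, max_iter=200):
    x_k = np.zeros(n)
    _, f0, g0 = oracle(x_k, np.inf)              # initial exact call
    cuts = [(f0, g0, x_k.copy())]                # (value, subgradient, point)
    f_up = f0                                    # best exact value so far
    for _ in range(max_iter):
        x, t = cp.Variable(n), cp.Variable()
        box = [x >= lo, x <= hi]
        # lower bound: minimise the cutting-plane model over X
        model = [f + g @ (x - xk) <= t for (f, g, xk) in cuts]
        cp.Problem(cp.Minimize(t), model + box).solve()
        f_low = t.value
        gap = f_up - f_low
        if gap <= eps:
            break
        level = f_low + lam * gap
        # project the current iterate onto the level set (a convex QP)
        lev = [f + g @ (x - xk) <= level for (f, g, xk) in cuts]
        cp.Problem(cp.Minimize(cp.sum_squares(x - x_k)), lev + box).solve()
        x_new = np.asarray(x.value).ravel()
        # oracle call with descent target f_up - kappa * gap
        reached, f, g = oracle(x_new, f_up - kappa * gap)
        cuts.append((f, g, x_new.copy()))
        if reached:                              # substantial iteration
            f_up = min(f_up, f)
        x_k = x_new
    return x_k, f_up

if __name__ == "__main__":
    x_best, val = level_oda()
    print("approximate minimiser:", x_best, "value:", val)

Note that the best known value f_up is updated only in substantial iterations, and that the projection onto the level set is exactly the convex quadratic programming problem mentioned above.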

Convergence of Algorithm 1 follows from Theorem 3.9 in Oliveira and Sagastizábal (2014). It yields the following theoretical estimate: to obtain $\Delta < \varepsilon$, it suffices to perform $c\,(V/\varepsilon)^{2}$ iterations, where the constants $c$ and $V$ depend on parameter settings and problem characteristics, respectively.

Remark 2

Concerning the practical efficiency of the (exact) level method of Lemaréchal et al. (1995), Nemirovski (2005, Chapter 5.3.2) observes the following experimental fact. When solving a problem of dimension $n$ with accuracy $\varepsilon$, the level method makes no more than $n\,\ln(V/\varepsilon)$ iterations, where $V$ is a problem-dependent constant.
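(For instance, with $n = 100$ and $V/\varepsilon = 10^{6}$, this bound allows roughly $100 \cdot \ln(10^{6}) \approx 1382$ iterations.)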

This observation was confirmed by the experiments reported in Fábián and Szőke (2007) and Zverovich et al. (2012), where the level method was applied in decomposition schemes for the solution of two-stage stochastic programming problems.

Following Lemaréchal et al. (1995), we define critical iterations for Algorithm 1. Let us consider a maximal sequence of iterations such that $\Delta_1 \geq \Delta_2 \geq \cdots \geq \Delta_s \geq (1-\lambda)\Delta_1$ holds. Maximality of this sequence means that $(1-\lambda)\Delta_1 > \Delta_{s+1}$. Then the iteration leading from $x^{s}$ to $x^{s+1}$ will be labeled critical. The above construction is repeated starting from the index $s$. Thus the iterations are grouped into sequences, and the sequences are separated by critical iterations.
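With $\lambda = 0.5$, for example, a critical iteration is recorded each time the gap has shrunk below half of its value at the start of the current sequence.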

There is an analogy between the critical iterations of level-type methods and the serious steps of traditional bundle methods. In this paper we use the former terminology, which we feel is more precise in the present setting.

Section snippets

Two-stage stochastic programming models and methods

First we present the notation with a brief overview of the models. The first-stage decision is represented by the vector $x \in X$, the feasible domain being defined by a set of linear inequalities. We assume that the feasible domain is a non-empty convex bounded polyhedron, and that there are $S$ possible outcomes (scenarios) of the random event, the $s$th outcome occurring with probability $p_s$.

Suppose the first-stage decision has been made with the result $x$, and the $s$th scenario has been realized. The
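For reference, in this notation the standard two-stage recourse model can be written as follows; the symbols $c$, $q_s$, $T_s$, $W_s$, $h_s$ below are generic placeholders and need not coincide with the paper's exact notation:
$$\min_{x \in X} \; c^{\top}x + \sum_{s=1}^{S} p_s\, Q_s(x), \qquad Q_s(x) \;=\; \min_{y \ge 0} \bigl\{\, q_s^{\top} y \;:\; W_s y = h_s - T_s x \,\bigr\}.$$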

Implementation

The level decomposition method as well as the on-demand accuracy oracle were integrated into an existing implementation of a parallel nested Benders (PNB) decomposition algorithm (Wolf & Koberstein, 2013). The PNB solver supports cut aggregation by specifying scenario partitions (Trukhanov, Ntaimo, & Schaefer, 2010), but we only use the fully aggregated model. The parallel implementation makes it possible to use all available cores, therefore solving subproblems as well as calling the on-demand accuracy

Computational study

We evaluated the performance of the following methods:

  • Level-ODA:

    level decomposition with on-demand accuracy. Namely, Algorithm 3 with the parameter setting λ=0.5,κ=0.5.

  • Level:

    level decomposition. Similar to Algorithm 3 but the second-stage problems are solved in each iteration. We have set λ=0.5.

  • Benders-SC:

    single-cut Benders decomposition. Similar to Algorithm 3 but the regularization is switched off by the extremal setting of λ=0, and the second-stage problems are solved in each iteration (see the note following this list).

  • Benders-MC:

    multi-cut Benders decomposition.
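Note that with λ = 0 the level coincides with the current lower bound, so the level set consists exactly of the minimizers of the cutting-plane model; the projection step then returns the model minimizer nearest to the previous iterate, and the method reduces to a pure cutting-plane (single-cut Benders) scheme.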

Conclusion

In this study, we devised a regularized and an unregularized variant of the single-cut L-shaped method using the on-demand accuracy approach. An iteration is called substantial if the current optimal objective function value of the master problem falls below a certain descent target. Only in substantial iterations is a conventional optimality cut constructed. In unsubstantial iterations, an approximate cut is added to the master problem that can be obtained without solving the second-stage
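As an indication of how such a cut can be assembled, the sketch below (assuming numpy; the function name, data layout and selection rule are our own illustration, not the paper's exact construction) combines scenario-wise cuts stored in earlier iterations into a valid aggregate cut without any new subproblem solves.

import numpy as np

def approximate_aggregate_cut(x, stored_cuts, probs):
    """Assemble a valid aggregate optimality cut at x from scenario-wise cuts
    collected in earlier iterations; no second-stage problem is solved.
    stored_cuts[s] is a non-empty list of (alpha, beta) pairs satisfying
    Q_s(z) >= alpha + beta @ z for every feasible z."""
    agg_alpha, agg_beta = 0.0, np.zeros_like(x, dtype=float)
    for s, cuts in enumerate(stored_cuts):
        # for scenario s, pick the stored cut that is tightest at x
        alpha, beta = max(cuts, key=lambda c: c[0] + c[1] @ x)
        agg_alpha += probs[s] * alpha
        agg_beta = agg_beta + probs[s] * beta
    # the expected recourse satisfies E[Q](z) >= agg_alpha + agg_beta @ z
    return agg_alpha, agg_beta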

Acknowledgements

The authors wish to thank the anonymous reviewers for their constructive comments that we think helped to improve the paper substantially.

Christian Wolf’s work has been supported by a grant from the Deutsche Forschungsgemeinschaft (DFG) under Grant No. SU136/8-1. Csaba Fábián’s work has been supported by the European Union and Hungary and co-financed by the European Social Fund through the Project TÁMOP-4.2.2.C-11/1/KONV-2012-0004: National Research Center for the Development and Market

References (22)

  • C.I. Fábián et al. Solving two-stage stochastic programming problems with level decomposition. Computational Management Science (2007).