Stochastics and Statistics
Applying oracles of on-demand accuracy in two-stage stochastic programming – A computational study
Introduction
Decomposition is an effective and time-honoured means of handling two-stage stochastic programming problems. It can be interpreted as a cutting-plane scheme applied to the first-stage variables. Traditionally, there are two approaches: one can use a disaggregate or an aggregate model. A major drawback of the aggregate model is that an aggregate master problem cannot contain all the information obtained by the solution of the second-stage problems. The disaggregate master problem, on the other hand, may grow excessively. It is not easy to find a balance between the effort spent in solving the master problem on the one hand, and the second-stage problems on the other hand. The computational results of Wolf and Koberstein (2013) give insights into this question.
In this study we report our experiments with a special inexact convex programming method applied to the aggregate master problem of the two-stage stochastic programming decomposition scheme. The convex programming method is of the type that uses an oracle with on-demand accuracy, a concept proposed by Oliveira and Sagastizábal (2014). We are going to use a special form which, when applied to two-stage stochastic programming problems, integrates the advantages of the aggregate and the disaggregate models. This latter feature is discussed in Fábián (2012). We also examine the on-demand accuracy idea in an unregularized context, which results in a pure cutting-plane method, in contrast to the level bundle methods treated in Oliveira and Sagastizábal (2014).
The paper is organized as follows. In Section 1.1 we outline the on-demand accuracy approach to convex programming, and present an algorithmic sketch of the partly inexact level method. In Section 2 we overview two-stage stochastic programming models and methods. Specifically, in Section 2.1 we sketch a decomposition method for two-stage problems based on the partly inexact level method. Section 3 discusses implementation issues. Our computational results are reported in Section 4, and conclusions are drawn in Section 5.
Let us consider the problem
$$\min_{x \in X} \phi(x),$$
where $\phi$ is a convex function, and $X$ is a convex closed bounded polyhedron. We assume that $\phi$ is Lipschitz continuous over $X$ with the constant $L$.
Oliveira and Sagastizábal (2014) developed special regularization methods for unconstrained convex optimization, namely, bundle level methods that use oracles with on-demand accuracy. The methods work with approximate function data, which is especially useful in solving stochastic problems. Approximate function values and subgradients are provided by an oracle with on-demand accuracy. The accuracy of the oracle is regulated by two parameters: the first is a descent target, and the second is a tolerance. If the estimated function value reaches the descent target, then the prescribed tolerance is observed. Otherwise the oracle just detects that the target cannot be met, and returns rough estimations of the function data, disregarding the prescribed tolerance. The method builds on ideas of Lemaréchal et al. (1995), Kiwiel (1995), and Fábián (2000), and integrates the level-type and the proximal approaches.
In this paper we are going to use a special method that falls into the ‘partly inexact’ category according to Oliveira and Sagastizábal, and applies only the level regularization of Lemaréchal et al. (1995). The method is discussed in detail in Fábián (2012).
In the following description, $F^{\star}$ denotes the best function value known, and $\underline{F}$ is a lower estimate of the optimum. The gap $\Delta = F^{\star} - \underline{F}$ measures the quality of the current approximation. The descent target is $F^{\star} - \gamma\Delta$ with a parameter $0 < \gamma < 1$, so the tolerance is regulated by the current gap. If the descent target is reached, then the oracle returns an exact subgradient. Otherwise the oracle just detects that the target cannot be met, and returns rough estimations of the function data. Iterations where the descent target is reached will be called substantial.

Algorithm 1. A partly inexact level method.
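The algorithm display is not reproduced here. As a rough illustration of the mechanism, the following is a minimal one-dimensional sketch of a level method driven by a simulated oracle with on-demand accuracy. The test function, the parameter values, and the way the inexact oracle is simulated are our own assumptions for illustration, not data from the paper; in one dimension the level-set projection of Step 1.3 reduces to clipping into an interval.

```python
import numpy as np

def oracle(x, target, slack):
    """Simulated oracle with on-demand accuracy for the illustrative
    test function f(x) = x^2 + |x - 1| on [-3, 3] (minimum 0.75 at 0.5).
    If the exact value meets the descent target, exact data are returned;
    otherwise only a rough (but still valid) lower estimate is reported."""
    f = x**2 + abs(x - 1.0)
    g = 2.0*x + (1.0 if x >= 1.0 else -1.0)   # a subgradient of f at x
    if f <= target:
        return f, g, True                      # substantial iteration
    return f - slack, g, False                 # rough estimate only

def partly_inexact_level(lo=-3.0, hi=3.0, lam=0.3, gamma=0.5,
                         kappa=0.1, tol=1e-3, max_it=300):
    grid = np.linspace(lo, hi, 10001)  # grid used to minimize the model
    cuts = []                          # affine minorants, stored as (a, b)
    x, x_best, f_best, gap = 0.0, 0.0, np.inf, np.inf
    for _ in range(max_it):
        target = f_best - gamma*gap if np.isfinite(gap) else np.inf
        slack = kappa*gap if np.isfinite(gap) else 0.0
        v, g, substantial = oracle(x, target, slack)
        if substantial and v < f_best:
            f_best, x_best = v, x
        cuts.append((v - g*x, g))      # cut a + b*y is a minorant of f
        model = np.max([a + b*grid for a, b in cuts], axis=0)
        f_low = float(model.min())     # lower estimate of the optimum
        gap = f_best - f_low
        if gap <= tol:
            break
        level = f_low + lam*gap
        # The level set {y : all cuts <= level} is an interval in 1-D,
        # so projecting x onto it is a simple clipping operation.
        l, h = lo, hi
        for a, b in cuts:
            if b > 1e-12:
                h = min(h, (level - a)/b)
            elif b < -1e-12:
                l = max(l, (level - a)/b)
        x = min(max(x, l), h)
    return x_best, f_best, gap
```

With these settings the gap falls below the tolerance quickly; the point of the on-demand accuracy device is that the non-substantial iterations only need the rough estimate, which in the two-stage setting is cheap because no second-stage problems have to be solved.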
In Step 1.3 above, the projection of the current iterate onto the level set means finding the point of the level set nearest to the iterate. This amounts to solving a convex quadratic programming problem.
Convergence of Algorithm 1 follows from Theorem 3.9 in Oliveira and Sagastizábal (2014). It yields the following theoretical estimate: to obtain a gap $\Delta \le \epsilon$, it suffices to perform $c\,(DL/\epsilon)^2$ iterations, where the constants $c$ and $D$ depend on the parameter settings and on problem characteristics, respectively.

Remark 2. Concerning the practical efficiency of the (exact) level method of Lemaréchal et al. (1995), Nemirovski (2005, Chapter 5.3.2) observes the following experimental fact. When solving a problem of dimension $n$ with relative accuracy $\epsilon$, the level method makes no more than $c\,n\ln(1/\epsilon)$ iterations, where $c$ is a problem-dependent constant. This observation was confirmed by the experiments reported in Fábián and Szőke (2007) and Zverovich et al. (2012), where the level method was applied in decomposition schemes for the solution of two-stage stochastic programming problems.
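To get a feel for the contrast between the worst-case bound and the empirical estimate, one can plug in illustrative numbers. All values below (diameter, Lipschitz constant, accuracy, dimension) are assumptions of ours, and the constants are set to one:

```python
import math

# Illustrative values, not taken from the paper: diameter D of the
# feasible set, Lipschitz constant L, accuracy eps, dimension n.
D, L, eps, n = 10.0, 5.0, 1e-4, 100

worst_case = (D * L / eps) ** 2      # theoretical bound (constant c = 1)
empirical = n * math.log(1.0 / eps)  # Nemirovski's estimate (c = 1)

print(f"worst case: {worst_case:.1e} iterations")   # 2.5e+11
print(f"empirical:  {empirical:.0f} iterations")    # 921
```

The gulf between the two numbers explains why level-type methods are far more attractive in practice than the worst-case analysis suggests.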
Following Lemaréchal et al. (1995), we define critical iterations for Algorithm 1. Let us consider a maximal sequence of iterations $k, k+1, \ldots, l$ such that $\Delta_l > (1-\lambda)\Delta_k$ holds. Maximality of this sequence means that $\Delta_{l+1} \le (1-\lambda)\Delta_k$. Then the iteration $l+1$ will be labeled critical. The above construction is repeated starting from the index $l+1$. Thus the iterations are grouped into sequences, and the sequences are separated by critical iterations.
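As a small illustration, assuming the grouping rule that an iteration becomes critical when the gap first falls below $(1-\lambda)$ times the gap recorded at the previous critical iteration, the bookkeeping can be written as:

```python
def critical_iterations(gaps, lam=0.3):
    """Mark critical iterations in a sequence of gap values: iteration k
    is critical when gaps[k] first drops below (1 - lam) times the gap
    recorded at the previous critical iteration (illustrative rule)."""
    critical = []
    ref = gaps[0]                  # gap at the last critical iteration
    for k, g in enumerate(gaps[1:], start=1):
        if g < (1.0 - lam) * ref:
            critical.append(k)
            ref = g
    return critical

gaps = [10.0, 8.0, 6.5, 5.0, 4.8, 3.0, 2.9, 1.5]
print(critical_iterations(gaps))   # -> [2, 5, 7]
```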
There is an analogy between the critical iterations of level-type methods and the serious steps of traditional bundle methods. In this paper we use the former terminology, which we feel is more precise in the present setting.
Two-stage stochastic programming models and methods
First we present the notation with a brief overview of the models. The first-stage decision is represented by the vector $x$, the feasible domain being defined by a set of linear inequalities. We assume that the feasible domain is a non-empty convex bounded polyhedron, and that there are $K$ possible outcomes (scenarios) of the random event, the $k$th outcome occurring with probability $p_k$.
Suppose the first-stage decision has been made with the result $\hat{x}$, and the $k$th scenario has realized.
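As a toy illustration of the setting (the cost data and the simple-recourse form below are our own assumptions, not one of the paper's test problems), consider a one-dimensional first-stage decision with a recourse cost of q per unit of unmet demand:

```python
# Scenario data: (probability p_k, demand d_k); first-stage unit cost c,
# recourse unit cost q. Simple recourse: Q_k(x) = q * max(d_k - x, 0).
c, q = 1.0, 3.0
scenarios = [(0.25, 2.0), (0.5, 5.0), (0.25, 8.0)]

def expected_total_cost(x):
    """First-stage cost plus the expectation of the recourse cost."""
    return c*x + sum(p * q * max(d - x, 0.0) for p, d in scenarios)

def aggregate_cut(x):
    """Value and a subgradient of the expected recourse function at x:
    the probability-weighted sum of the scenario cuts, i.e. the single
    (aggregate) optimality cut used by the aggregate master problem."""
    val = sum(p * q * max(d - x, 0.0) for p, d in scenarios)
    sub = sum(-p * q for p, d in scenarios if d > x)
    return val, sub

print(expected_total_cost(5.0))  # -> 7.25
print(aggregate_cut(5.0))        # -> (2.25, -0.75)
```

In a disaggregate model each scenario would contribute its own cut; the aggregate model keeps only the probability-weighted combination computed above.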
Implementation
The level decomposition method as well as the on-demand accuracy oracle were implemented within an existing implementation of a parallel nested Benders (PNB) decomposition algorithm (Wolf & Koberstein, 2013). The PNB solver supports cut aggregation by specifying scenario partitions (Trukhanov, Ntaimo, & Schaefer, 2010), but we only use the fully aggregated model. The parallel implementation allows the use of all available cores, so that solving the subproblems as well as calling the on-demand accuracy oracle can be carried out in parallel.
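The partition-based cut aggregation mentioned above can be sketched as follows. This is an illustrative helper with an assumed convention (cuts within a part are averaged with their conditional probabilities), not the solver's actual interface: each scenario k contributes an optimality cut with intercept a_k and gradient b_k.

```python
def aggregate_cuts(cuts, probs, partition):
    """cuts: per-scenario (a_k, b_k); probs: scenario probabilities p_k;
    partition: list of scenario-index lists. Returns one aggregated cut
    per part, as (part probability, intercept, gradient)."""
    aggregated = []
    for part in partition:
        w = sum(probs[k] for k in part)          # probability of the part
        a = sum(probs[k] * cuts[k][0] for k in part) / w
        b = sum(probs[k] * cuts[k][1] for k in part) / w
        aggregated.append((w, a, b))
    return aggregated

cuts = [(1.0, -2.0), (3.0, -4.0), (5.0, -6.0)]
probs = [0.5, 0.25, 0.25]
# Fully aggregated (single-cut) model: one part with every scenario.
print(aggregate_cuts(cuts, probs, [[0, 1, 2]]))  # -> [(1.0, 2.5, -3.5)]
```

Taking the partition of singletons instead recovers the multi-cut (disaggregate) model, so the same routine spans both extremes.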
Computational study
We evaluated the performance of the following methods:
- Level-ODA:
level decomposition with on-demand accuracy. Namely, Algorithm 3 with the parameter setting .
- Level:
level decomposition. Similar to Algorithm 3 but the second-stage problems are solved in each iteration. We have set .
- Benders-SC:
single-cut Benders decomposition. Similar to Algorithm 3 but the regularization is switched off by the extremal setting of , and the second-stage problems are solved in each iteration.
- Benders-MC:
multi-cut Benders decomposition.
Conclusion
In this study, we devised a regularized and an unregularized variant of the single-cut L-shaped method using the on-demand accuracy approach. An iteration is called substantial if the current optimal objective function value of the master problem falls below a certain descent target. Only in substantial iterations is a conventional optimality cut constructed. In unsubstantial iterations, an approximate cut, which can be obtained without solving the second-stage problems, is added to the master problem.
Acknowledgements
The authors wish to thank the anonymous reviewers for their constructive comments, which helped to improve the paper substantially.
Christian Wolf’s work has been supported by a grant from the Deutsche Forschungsgemeinschaft (DFG) under Grant No. SU136/8-1. Csaba Fábián’s work has been supported by the European Union and Hungary and co-financed by the European Social Fund through the Project TÁMOP-4.2.2.C-11/1/KONV-2012-0004: National Research Center for the Development and Market
References

- Achterberg, T. (2007). Constraint integer programming. Ph.D. thesis, Technische Universität...
- Ariyawansa, K. A., & Felt, A. J. (2004). On a new collection of stochastic linear programming test problems. INFORMS Journal on Computing.
- Birge, J. R., & Louveaux, F. V. (1988). A multicut algorithm for two-stage stochastic linear programs. European Journal of Operational Research.
- Birge, J. R., & Louveaux, F. V. (1997). Introduction to stochastic programming.
- Deák, I. (2011). Testing successive regression approximations by large-scale two-stage problems. Annals of Operations Research.
- Dolan, E. D., & Moré, J. J. (2002). Benchmarking optimization software with performance profiles. Mathematical Programming.
- Fábián, C. I. (2000). Bundle-type methods for inexact data. Central European Journal of Operations Research.
- Fábián, C. I. (2012). Computational aspects of risk-averse optimization in two-stage stochastic models. Stochastic...
- Fábián, C. I., & Szőke, Z. (2007). Solving two-stage stochastic programming problems with level decomposition. Computational Management Science.
- Trukhanov, S., Ntaimo, L., & Schaefer, A. (2010). Adaptive multicut aggregation for two-stage stochastic linear programs with recourse. European Journal of Operational Research.
- Wolf, C., & Koberstein, A. (2013). Dynamic sequencing and cut consolidation for the parallel hybrid-cut nested L-shaped method. European Journal of Operational Research.