Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates

Published in: Computational Optimization and Applications

Abstract

We present a stochastic extension of the mesh adaptive direct search (MADS) algorithm originally developed for deterministic blackbox optimization. The algorithm, called StoMADS, considers the unconstrained optimization of an objective function f whose values can be computed only through a blackbox corrupted by some random noise following an unknown distribution. The proposed method is based on an algorithmic framework similar to that of MADS and uses random estimates of function values obtained from stochastic observations since the exact deterministic computable version of f is not available. Such estimates are required to be accurate with a sufficiently large but fixed probability and to satisfy a variance condition. The ability of the proposed algorithm to generate an asymptotically dense set of search directions is then exploited using martingale theory to prove convergence to a Clarke stationary point of f with probability one.
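
To make the estimate-based acceptance idea concrete, the following minimal Python sketch shows one way such an iteration could look: function values are replaced by sample averages of noisy observations, and a candidate poll point is accepted only on sufficient decrease proportional to the squared frame size. The names f_noisy, estimate and stomads_sketch and the parameters gamma, epsilon_f and n_samples are illustrative assumptions rather than the paper's notation, and the fixed coordinate poll set below does not generate the asymptotically dense directions required by the actual algorithm; the sketch only mimics the role of the estimates and of the sufficient-decrease test.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_noisy(x, sigma=0.01):
    """Noisy blackbox: a smooth test objective plus additive Gaussian noise."""
    return float(np.sum(np.asarray(x) ** 2)) + sigma * rng.standard_normal()

def estimate(x, n_samples=30):
    """Sample-average estimate of f(x) built from repeated noisy observations."""
    return np.mean([f_noisy(x) for _ in range(n_samples)])

def stomads_sketch(x0, max_iter=200, frame_size=1.0, gamma=1.0, epsilon_f=0.05):
    """Toy StoMADS-like loop: poll, compare estimates, then adapt the frame size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f_x = estimate(x)
        # Poll along the positive and negative coordinate directions, scaled by the frame size.
        polls = [x + frame_size * d for e in np.eye(x.size) for d in (e, -e)]
        improved = False
        for t in polls:
            f_t = estimate(t)
            # Accept only on sufficient decrease relative to the estimate accuracy,
            # mimicking a gamma * epsilon_f * frame_size**2 threshold.
            if f_t <= f_x - gamma * epsilon_f * frame_size ** 2:
                x, f_x, improved = t, f_t, True
                break
        # Successful iterations enlarge the frame; unsuccessful ones shrink it.
        frame_size = min(2.0 * frame_size, 1.0) if improved else 0.5 * frame_size
    return x

print(stomads_sketch([2.0, -1.5]))  # should approach the minimizer at the origin
```

In the method analyzed in the paper, the accuracy with which estimates must hold and the probability attached to it are tied to the frame and mesh size updates; the sketch fixes these quantities only to keep the loop readable.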

Notes

  1. The notations \(\tilde{f}(x,\xi )\) [20], \(\tilde{f}(x;\xi )\) [38] and \(f(x;\varepsilon )\) [23] are often used for the noisy computable versions of f, where \(\xi \) and \(\varepsilon \) are random variables. We use the more compact notation \(f_{\Theta }(x)\).
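
As a simple illustration (not taken from the paper's analysis), the accuracy and variance requirements on the estimates can be met by averaging independent observations of \(f_{\Theta}(x)\): assuming \(f_{\Theta}(x) = f(x) + \Theta\) with \(\mathbb{E}[\Theta] = 0\) and \(\operatorname{Var}[\Theta] = \sigma^2 < \infty\), Chebyshev's inequality applied to the sample average gives
\[
\bar{f}_n(x) = \frac{1}{n}\sum_{i=1}^{n} f_{\Theta_i}(x),
\qquad
\mathbb{P}\big(|\bar{f}_n(x) - f(x)| \le \varepsilon_f\big) \ge 1 - \frac{\sigma^2}{n\,\varepsilon_f^2},
\]
so both conditions can be satisfied with any fixed probability by taking \(n\) large enough.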

References

  1. Abramson, M.A., Audet, C., Dennis Jr., J.E., Le Digabel, S.: OrthoMADS: a deterministic MADS instance with orthogonal directions. SIAM J. Optim. 20(2), 948–966 (2009)

  2. Alarie, S., Audet, C., Bouchet, P.-Y., Le Digabel, S.: Optimization of noisy blackboxes with adaptive precision. Technical Report G-2019-84, Les cahiers du GERAD (2019)

  3. Amaran, S., Sahinidis, N.V., Sharda, B., Bury, S.J.: Simulation optimization: a review of algorithms and applications. 4OR 12(4), 301–333 (2014)

  4. Anderson, E.J., Ferris, M.C.: A direct search algorithm for optimization with noisy function evaluations. SIAM J. Optim. 11(3), 837–857 (2001)

  5. Angün, E., Kleijnen, J.: An asymptotic test of optimality conditions in multiresponse simulation optimization. INFORMS J. Comput. 24(1), 53–65 (2012)

  6. Audet, C.: A survey on direct search methods for blackbox optimization and their applications. In: Pardalos, P.M., Rassias, T.M. (eds.) Mathematics Without Boundaries: Surveys in Interdisciplinary Research, vol. 2, pp. 31–56. Springer, New York (2014)

  7. Audet, C., Dennis Jr., J.E.: Analysis of generalized pattern searches. SIAM J. Optim. 13(3), 889–903 (2003)

  8. Audet, C., Dennis Jr., J.E.: Mesh adaptive direct search algorithms for constrained optimization. SIAM J. Optim. 17(1), 188–217 (2006)

  9. Audet, C., Dennis Jr., J.E., Le Digabel, S.: Parallel space decomposition of the mesh adaptive direct search algorithm. SIAM J. Optim. 19(3), 1150–1170 (2008)

  10. Audet, C., Hare, W.: Derivative-Free and Blackbox Optimization. Springer Series in Operations Research and Financial Engineering. Springer, Cham (2017)

  11. Audet, C., Ianni, A., Le Digabel, S., Tribes, C.: Reducing the number of function evaluations in mesh adaptive direct search algorithms. SIAM J. Optim. 24(2), 621–642 (2014)

  12. Audet, C., Ihaddadene, A., Le Digabel, S., Tribes, C.: Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm. Optim. Lett. 12(4), 675–689 (2018)

  13. Audet, C., Le Digabel, S., Tribes, C.: Dynamic scaling in the mesh adaptive direct search algorithm for blackbox optimization. Optim. Eng. 17(2), 333–358 (2016)

  14. Audet, C., Le Digabel, S., Tribes, C.: The mesh adaptive direct search algorithm for granular and discrete variables. SIAM J. Optim. 29(2), 1164–1189 (2019)

  15. Augustin, F., Marzouk, Y.M.: A trust-region method for derivative-free nonlinear constrained stochastic optimization. Technical report, arXiv (2017)

  16. Balasubramanian, K., Ghadimi, S.: Zeroth-order nonconvex stochastic optimization: handling constraints, high-dimensionality and saddle-points. Technical report, arXiv (2019)

  17. Bandeira, A.S., Scheinberg, K., Vicente, L.N.: Convergence of trust-region methods based on probabilistic models. SIAM J. Optim. 24(3), 1238–1264 (2014)

  18. Barton, R.R., Ivey Jr., J.S.: Nelder–Mead simplex modifications for simulation optimization. Manage. Sci. 42(7), 954–973 (1996)

  19. Bhattacharya, R.N., Waymire, E.C.: A Basic Course in Probability Theory, vol. 69. Springer, Berlin (2007)

  20. Blanchet, J., Cartis, C., Menickelly, M., Scheinberg, K.: Convergence rate analysis of a stochastic trust region method via supermartingales. INFORMS J. Optim. 1(2), 92–119 (2019)

  21. Cartis, C., Scheinberg, K.: Global convergence rate analysis of unconstrained optimization methods based on probabilistic models. Math. Program. 169(2), 337–375 (2018)

  22. Chang, K.H.: Stochastic Nelder–Mead simplex method – A new globally convergent direct search method for simulation optimization. Eur. J. Oper. Res. 220(3), 684–694 (2012)

  23. Chen, R., Menickelly, M., Scheinberg, K.: Stochastic optimization using a trust-region method and random models. Math. Program. 169(2), 447–487 (2018)

  24. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983). Reissued in 1990 by SIAM Publications, Philadelphia, as vol. 5 in the series Classics in Applied Mathematics

  25. Conn, A.R., Le Digabel, S.: Use of quadratic models with mesh-adaptive direct search for constrained black box optimization. Optim. Methods Softw. 28(1), 139–158 (2013)

  26. Conn, A.R., Scheinberg, K., Vicente, L.N.: Introduction to Derivative-Free Optimization. MOS-SIAM Series on Optimization. SIAM, Philadelphia (2009)

  27. Curtis, F.E., Scheinberg, K., Shi, R.: A stochastic trust region algorithm based on careful step normalization. INFORMS J. Optim. 1(3), 200–220 (2019)

  28. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)

  29. Durrett, R.: Probability: Theory and Examples. Cambridge University Press, Cambridge (2010)

  30. Fu, M.C.: Gradient estimation. Handb. Oper. Res. Manag. Sci. 13, 575–616 (2006)

  31. Gould, N.I.M., Orban, D., Toint, Ph.L.: CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comput. Optim. Appl. 60(3), 545–557 (2015)

  32. Kiefer, J., Wolfowitz, J.: Stochastic estimation of the maximum of a regression function. Ann. Math. Stat. 23(3), 462–466 (1952)

  33. Kulunchakov, A., Mairal, J.: Estimate sequences for stochastic composite optimization: variance reduction, acceleration, and robustness to noise. Technical report, arXiv (2019)

  34. Larson, J., Billups, S.C.: Stochastic derivative-free optimization using a trust region framework. Comput. Optim. Appl. 64(3), 619–645 (2016)

  35. Le Digabel, S.: Algorithm 909: NOMAD: nonlinear optimization with the MADS algorithm. ACM Trans. Math. Softw. 37(4), 44:1–44:15 (2011)

  36. Moré, J.J., Wild, S.M.: Benchmarking derivative-free optimization algorithms. SIAM J. Optim. 20(1), 172–191 (2009)

  37. Nelder, J.A., Mead, R.: A simplex method for function minimization. Comput. J. 7(4), 308–313 (1965)

  38. Paquette, C., Scheinberg, K.: A stochastic line search method with expected complexity analysis. SIAM J. Optim. 30(1), 349–376 (2020)

  39. Shashaani, S., Hashemi, F.S., Pasupathy, R.: ASTRO-DF: a class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization. SIAM J. Optim. 28(4), 3145–3176 (2018)

  40. Wang, X., Yuan, Y.: Stochastic trust region methods with trust region radius depending on probabilistic models. Technical report, arXiv (2019)

Acknowledgements

The authors are grateful to Erick Delage from HEC Montréal and Richard Labib from Polytechnique Montréal for valuable discussions and constructive suggestions. They would also like to thank an anonymous referee for their careful reading and helpful remarks, which contributed to improving this work. This research is supported by the NSERC CRD RDCPJ 490744-15 Grant and by an InnovÉÉ grant, both in collaboration with Hydro-Québec and Rio Tinto, and by a FRQNT fellowship.

Author information

Corresponding author

Correspondence to Kwassi Joseph Dzahini.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Audet, C., Dzahini, K.J., Kokkolaras, M. et al. Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates. Comput Optim Appl 79, 1–34 (2021). https://doi.org/10.1007/s10589-020-00249-0
