A Theory and Algorithms for Combinatorial Reoptimization

Abstract

Many real-life applications involve systems that change dynamically over time. Thus, throughout the continuous operation of such a system, it is required to compute solutions for new problem instances derived from previous instances. Since the transition from one solution to another incurs some cost, a natural goal is to have the solution for the new instance close to the original one (under a certain distance measure). In this paper we develop a general framework for combinatorial reoptimization, encompassing classical objective functions as well as the goal of minimizing the transition cost from one solution to the other. Formally, we say that \(\mathcal{A}\) is an \((r, \rho )\)-reapproximation algorithm if it achieves a \(\rho \)-approximation for the optimization problem, while paying a transition cost that is at most r times the minimum required for solving the problem optimally. Using our model we derive reoptimization and reapproximation algorithms for several classes of combinatorial reoptimization problems. These include a fully polynomial time \((1+\varepsilon _1, 1+\varepsilon _2)\)-reapproximation scheme for the wide class of DP-benevolent problems introduced by Woeginger (Proceedings of Tenth Annual ACM-SIAM Symposium on Discrete Algorithms, 1999), a (1, 3)-reapproximation algorithm for the metric k-Center problem, and a (1, 1)-reoptimization algorithm for polynomially solvable subset-selection problems. Thus, we distinguish here, for the first time, between classes of reoptimization problems by their hardness status with respect to the objective of minimizing transition costs, while guaranteeing a good approximation for the underlying optimization problem.

Notes

  1. As discussed in Sect. 1.2, this is different from multi-objective optimization.

  2. This is similar in nature, e.g., to incremental optimization studied in [43].

  3. A key property is that each problem in the class can be formulated via a dynamic program of certain structure, and the involved costs and transition functions satisfy certain arithmetic and structural conditions.

  4. See, e.g., [19] for the theory of fixed-parameter algorithms and parameterized complexity.

  5. Some earlier results rely on the assumption that I results from a slight modification of \(I_0\) (see Sect. 1.2).

  6. As we show below, it may be that neither, both, or only \(R(\varPi )\) is NP-hard.

  7. This is made precise below.

References

  1. Abu-Khzam, F.N., Egan, J., Fellows, M.R., Rosamond, F.A., Shaw, P.: On the parameterized complexity of dynamic problems. Theor. Comput. Sci. 607(P3), 426–434 (2015)

  2. Amato, G., Cattaneo, G., Italiano, G.F.: Experimental analysis of dynamic minimum spanning tree algorithms. In: Proceedings of 8th ACM-SIAM Symposium on Discrete Algorithms (SODA) (1997)

  3. An, H.-C., Bhaskara, A., Chekuri, C., Gupta, S., Madan, V., Svensson, O.: Centrality of trees for capacitated \(k\)-center. In: Proceedings of IPCO, pp. 52–63 (2014)

  4. Archetti, C., Bertazzi, L., Speranza, M.G.: Reoptimizing the traveling salesman problem. Networks 42, 154–159 (2003)

  5. Archetti, C., Bertazzi, L., Speranza, M.G.: Reoptimizing the 0–1 knapsack problem. Discrete Appl. Math. 158(17), 1879 (2010)

  6. Ausiello, G., Escoffier, B., Monnot, J., Paschos, V.T.: Reoptimization of minimum and maximum traveling salesman’s tours. J. Discrete Algorithms 7(4), 453–463 (2009)

  7. Ausiello, G., Bonifaci, V., Escoffier, B.: Complexity and approximation in reoptimization. In: Cooper, B., Sorbi, A. (eds.) Computability in Context: Computation and Logic in the Real World. Imperial College Press/World Scientific, London (2011)

  8. Balakrishnan, H., Seshan, S., Katz, R.H.: Improving reliable transport and handoff performance in cellular wireless networks. Wirel. Netw. 1(4), 469–481 (1995)

  9. Baram, G., Tamir, T.: Reoptimization of the minimum total flow-time scheduling problem. Sustain. Comput. Inform. Syst. 4(4), 241–251 (2014)

  10. Bender, M., Farach-Colton, M., Fekete, S., Fineman, J., Gilbert, S.: Reallocation problems in scheduling. In: Proceedings of 22nd ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 271–279 (2013)

  11. Bender, M., Farach-Colton, M., Fekete, S., Fineman, J., Gilbert, S.: Cost-oblivious storage reallocation. In: Proceedings of 33rd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS), pp. 278–288 (2014)

  12. Bhatia, R., Kodialam, M., Lakshman, T.V.: Fast network re-optimization schemes for MPLS and optical networks. Comput. Netw. 50(3), 317–331 (2006)

  13. Bilo, D., Böckenhauer, H., Komm, D., Královič, R., Mömke, T., Seibert, S., Zych, A.: Reoptimization of the shortest common superstring problem. In: Proceedings of 20th Symposium on Combinatorial Pattern Matching (CPM) (2009)

  14. Berger, A., Bonifaci, V., Grandoni, F., Schäfer, G.: Budgeted matching and budgeted matroid intersection via the gasoline puzzle. In: Proceedings of IPCO, pp. 273–287 (2008)

  15. Böckenhauer, H.J., Forlizzi, L., Hromkovič, J., Kneis, J., Kupke, J., Proietti, G., Widmayer, P.: On the approximability of TSP on local modifications of optimally solved instances. Algorithmic Oper. Res. 2(2), 83–93 (2007)

  16. Böckenhauer, H.J., Hromkovič, J., Mömke, T., Widmayer, P.: On the hardness of reoptimization. In: Proceedings of 34th International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM), pp. 50–65 (2008)

  17. Chou, C.F., Golubchik, L., Lui, J.C.S.: A performance study of dynamic replication techniques in continuous media servers. In: Proceedings of International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (IEEE MASCOTS), pp. 256–264 (2000)

  18. Demetrescu, C., Finocchi, I., Italiano, G.F.: Dynamic graph algorithms. In: Yellen, J., Gross, J.L. (eds.) Handbook of Graph Theory, Chapter 10.2. Series in discrete mathematics and its applications. CRC Press, Boca Raton (2003)

  19. Downey, R.G., Fellows, M.R.: Fundamentals of Parameterized Complexity. Springer, London (2013)

  20. Drezner, Z., Hamacher, H. (eds.): Facility Location: Applications and Theory. Springer, New York (2002)

  21. Dyer, M.E., Frieze, A.M.: A simple heuristic for the \(p\)-center problem. Oper. Res. Lett. 3, 285–288 (1985)

  22. Eppstein, D., Galil, Z., Italiano, G.F.: Dynamic graph algorithms. In: Atallah, M.J. (ed.) CRC Handbook of Algorithms and Theory of Computation, Chapter 8. CRC Press, Boca Raton (1999)

  23. Escoffier, B., Milanič, M., Paschos, V.T.: Simple and fast reoptimizations for the Steiner tree problem. Algorithmic Oper. Res. 4(2), 86–94 (2009)

  24. Feder, T., Greene, D.H.: Optimal algorithms for approximate clustering. In: Proceedings of 20th Annual ACM Symposium on Theory of Computing (STOC), pp. 434–444 (1988)

  25. Frangioni, A., Manca, A.: A computational study of cost reoptimization for min-cost flow problems. INFORMS J. Comput. 18(1), 61–70 (2006)

  26. Gerald, A.: Dynamic Routing in Telecommunication Networks. McGraw-Hill, New York (1997)

  27. Golubchik, L., Khanna, S., Khuller, S., Thurimella, R., Zhu, A.: Approximation algorithms for data placement on parallel disks. In: Proceedings of 11th ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 223–232 (2000)

  28. Graham, R.: Bounds on multiprocessing timing anomalies. SIAM J. Appl. Math. 17, 263–269 (1969)

  29. Grandoni, F., Ravi, R., Singh, M., Zenklusen, R.: New approaches to multi-objective optimization. Math. Program. 146(1–2), 525–554 (2014)

  30. Hochbaum, D.S.: Approximation Algorithms for NP-Hard Problems. PWS Publishing, Boston (1995)

  31. Hochbaum, D.S., Shmoys, D.B.: A best possible heuristic for the \(k\)-center problem. Math. Oper. Res. 10, 180–184 (1985)

  32. Hochbaum, D.S., Shmoys, D.B.: A unified approach to approximation algorithms for bottleneck problems. J. ACM 33(3), 533–550 (1986)

  33. Hulu. http://www.hulu.com/

  34. Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Comput. Surv. 31(3), 264–323 (1999)

  35. Karve, A., Kimbrel, T., Pacifici, G., Spreitzer, M., Steinder, M., Sviridenko, M., Tantawi, A.: Dynamic placement for clustered web applications. In: Proceedings of the 15th International Conference on World Wide Web (WWW ’06), pp. 595–604. ACM, New York, NY, USA (2006)

  36. Kashyap, S.: Algorithms for data placement, reconfiguration and monitoring in storage networks. Ph.D. Dissertation, Computer Science Department, Univ. of Maryland (2007)

  37. Kashyap, S., Khuller, S., Wan, Y-C., Golubchik, L.: Fast reconfiguration of data placement in parallel disks. In: Proceedings of the Meeting on Algorithm Engineering & Experiments, pp. 95–107. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA

  38. Łacki, J., Oćwieja, J., Pilipczuk, M., Sankowski, P., Zych, A.: The power of dynamic distance oracles: efficient dynamic algorithms for the Steiner tree. In: Proceedings of STOC (2015)

  39. Lawler, E.L., Moore, J.M.: A functional equation and its application to resource allocation and sequencing problems. Manag. Sci. 16, 77–84 (1969)

  40. Leon-Garcia, A., Widjaja, I.: Communication Networks. McGraw-Hill, New York (2003)

  41. Levin, M.S.: Restructuring in combinatorial optimization. arXiv:1102.1745 (2011)

  42. Levin, M.S.: Towards integrated glance to restructuring in combinatorial optimization. arXiv:1512.06427 (2015)

  43. Lin, G., Nagarajan, C., Rajaraman, R., Williamson, D.P.: A general approach for incremental approximation and hierarchical clustering. SIAM J. Comput. 39(8), 3633–3669 (2010)

  44. Moore, J.M.: An \(n\)-job, one machine sequencing algorithm for minimizing the number of late jobs. Manag. Sci. 15(1), 102–109 (1968)

  45. Nagarajan, V., Schieber, B., Shachnai, H.: The euclidean k-supplier problem. In: Proceedings of 16th International Conference on Integer Programming and Combinatorial Optimization (IPCO) (2013)

  46. Nardelli, E., Proietti, G., Widmayer, P.: Swapping a failing edge of a single source shortest paths tree is good and fast. Algorithmica 35, 56–74 (2003)

  47. Neelakanta, P.S.: A Textbook on ATM Telecommunications, Principles and Implementation. CRC Press, Boca Raton (2000)

  48. Netflix. http://www.netflix.com/

  49. Pallottino, S., Scutella, M.G.: A new algorithm for reoptimizing shortest paths when the arc costs change. Oper. Res. Lett. 31, 149–160 (2003)

  50. Ravi, R., Goemans, M.X.: The constrained minimum spanning tree problem. In: Proceedings of 5th Workshop on Algorithm Theory, pp. 66–75 (1996)

  51. Shachnai, H., Tamir, T.: On two class-constrained versions of the multiple knapsack problem. Algorithmica 29, 442–467 (2001)

  52. Shachnai, H., Tamir, G., Tamir, T.: Minimal cost reconfiguration of data placement in storage area network. In: Proceedings of 7th Workshop on Approximation and Online Algorithms (WAOA) (2009)

  53. Shachnai, H., Tamir, G., Tamir, T.: A theory and algorithms for combinatorial reoptimization. In: Proceedings of 10th Latin American Theoretical Informatics Symposium (LATIN) (2012)

  54. Thiongane, B., Nagih, A., Plateau, G.: Lagrangian heuristics combined with reoptimization for the 0–1 bidimensional knapsack problem. Discrete Appl. Math. 154(15), 2200–2211 (2006)

  55. Thorup, M., Karger, D.R.: Dynamic graph algorithms with applications. In: Proceedings of 7th Scandinavian Workshop on Algorithm (SWAT) (2000)

  56. Woeginger, G.J.: When does a dynamic programming formulation guarantee the existence of an FPTAS? In: Proceedings of 10th ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 820–829 (1999)

  57. Wolf, J.L., Yu, P.S., Shachnai, H.: Disk load balancing for video-on-demand systems. ACM Multimed. Syst. 5, 358–370 (1997)

  58. Yue, F., Tang, J.: A new approach for tree alignment based on local re-optimization. In: Proceedings of the International Conference on BioMedical Engineering and Informatics (2008)

Acknowledgements

This work was partly carried out during a visit of the second author to DIMACS, supported by the National Science Foundation under Grant Number CCF-1144502. The work was also partially supported by the Technion V.P.R. Fund and by the Smoler Research Fund. We thank Rohit Khandekar and Viswanath Nagarajan for stimulating discussions on the paper.

Author information

Corresponding author

Correspondence to Hadas Shachnai.

Additional information

A preliminary version of this paper appeared in the Proceedings of the 10th Latin American Symposium on Theoretical Informatics (LATIN), Arequipa, April 2012. Research supported by the Israel Science Foundation (Grant Number 1574/10), and by the Ministry of Trade and Industry MAGNET program through the NEGEV Consortium.

Appendices

Appendix A: Additional Properties of the Class DP-B

In this section we give additional definitions, along with the conditions that a problem \(\varPi \) must satisfy (as given in [56]) in order to be DP-benevolent. The class of DP-B problems contains a large set of problems that admit an FPTAS via dynamic programming. The reader may find it useful to refer to Sect. 3.4, which illustrates the definitions for the Knapsack problem.

An optimization problem \(\varPi \) is called DP-simple if it can be expressed via a simple dynamic programming formulation DP as specified in Definitions 8–10.

The following gives a measure for the closeness of two states, using a degree vector \({\bar{D}}\) of length \(\beta \), which indicates the distances between two states coordinate-wise, and a value \(\varDelta \).

Definition 19

For any real number \(\varDelta > 1\) and three vectors \(\bar{D},\bar{S},\bar{S'} \in \mathbb {N} ^\beta \), we say that \(\bar{S}\) is \((\bar{D},\varDelta )\)-close to \(\bar{S'}\) if \(\varDelta ^{-D_\ell } \cdot s_\ell \le s'_\ell \le \varDelta ^{D_\ell } \cdot s_\ell \) for every coordinate \( 1\le \ell \le \beta \).

Note that if two vectors are \((\bar{D},\varDelta )\)-close to each other then they must agree in all coordinates \(\ell \) with \(D_{\ell } = 0\).
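To make the definition concrete, here is a minimal sketch in Python (ours, for illustration only; the function name is_close and the encoding of states and degree vectors as lists are assumptions, not notation from the paper). It checks \((\bar{D},\varDelta )\)-closeness exactly as in Definition 19 and exhibits the remark above: a coordinate \(\ell \) with \(D_\ell = 0\) forces \(s_\ell = s'_\ell \).

    def is_close(D, delta, S, S2):
        """Return True iff S is (D, delta)-close to S2 (Definition 19):
        delta**(-D[l]) * S[l] <= S2[l] <= delta**D[l] * S[l] in every coordinate l."""
        return all(delta ** (-d) * s <= s2 <= delta ** d * s
                   for d, s, s2 in zip(D, S, S2))

    # Degree 1 allows a factor-delta gap; degree 0 forces equality in that coordinate:
    print(is_close([1, 0], 1.5, [4, 7], [6, 7]))   # True  (4 <= 6 <= 1.5 * 4, and 7 == 7)
    print(is_close([1, 0], 1.5, [4, 7], [7, 7]))   # False (7 > 1.5 * 4)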

Let \(\preceq _{dom}\) and \(\preceq _{qua}\) denote two partial orders on the state space of the dynamic program, where \(\preceq _{qua}\) is an extension of \(\preceq _{dom}\) (see footnote 7). We say that \({\bar{S}}\preceq _{dom}{\bar{S}'}\) if \({\bar{S}'}\) is ‘better’ than \({\bar{S}}\) (whatever choices are made later). Note that the partial order \(\preceq _{dom}\) does not depend on the algorithm used for solving the problem. In some cases, two states \({\bar{S}}\) and \({\bar{S}'}\) are incomparable under \(\preceq _{dom}\); in such cases, we may compare them using \(\preceq _{qua}\).

We now give some conditions that will be used to identify the class of DP-benevolent problems.

Condition A1

Conditions on the functions in \(\mathcal{F}\).

The following holds for any \( \varDelta > 1\), \(F \in \mathcal{F}\) and \(\bar{X} \in \mathbb {N} ^ \alpha \), and for any \(\bar{S},\bar{S'} \in \mathbb {N} ^ \beta \).

  1. (i)

    If \(\bar{S}\) is \((\bar{D},\varDelta )\)-close to \(\bar{S'}\), and if \(\bar{S} \preceq _{qua} \bar{S'}\), then

    1. (a)

      \(F(\bar{X},\bar{S}) \preceq _{qua} F(\bar{X},\bar{S'})\), and \(F(\bar{X},\bar{S})\) is \((\bar{D},\varDelta )\)-close to \(F(\bar{X},\bar{S'})\), or

    2. (b)

      \(F(\bar{X},\bar{S}) \preceq _{dom} F(\bar{X},\bar{S'})\).

  2. (ii)

    If \(\bar{S} \preceq _{dom} \bar{S'}\) then \(F(\bar{X},\bar{S}) \preceq _{dom} F(\bar{X},\bar{S'})\).

Condition A2

Conditions on the functions in \(\mathcal{H}\).

The following holds for any \(\varDelta \ge 1\), \(H \in \mathcal{H}\) and \(\bar{X} \in \mathbb {N}^\alpha \), and for any \(\bar{S},\bar{S'} \in \mathbb {N}^\beta \).

  1. (i)

    If \(\bar{S}\) is \(\left( \bar{D},\varDelta \right) \)-close to \(\bar{S'}\), and if \(\bar{S} \preceq _{qua} \bar{S'}\) then \(H\left( \bar{X},\bar{S'}\right) \le H(\bar{X},\bar{S})\).

  2. (ii)

    If \(\bar{S} \preceq _{dom} \bar{S}'\) then \(H\left( \bar{X},\bar{S'}\right) \le H\left( \bar{X},\bar{S}\right) \).

We now refer to our objective function (assuming a maximization problem).

Condition A3

Conditions on the function G.

  1. (i)

    There exists an integer \(g \ge 0\) (whose value only depends on the function G and the degree-vector \(\bar{D}\)) such that, for any \(\varDelta \ge 1\), and for any \(\bar{S},\bar{S'} \in \mathbb {N}^\beta \), the following holds: If \(\bar{S}\) is \((\bar{D},\varDelta )\)-close to \(\bar{S'}\), and \(\bar{S} \preceq _{qua} \bar{S'}\), then \(G(\bar{S}) \le \varDelta ^g \cdot G(\bar{S'})\).

  2. (ii)

    For any \(\bar{S},\bar{S'} \in \mathbb {N}^\beta \) with \(\bar{S} \preceq _{dom} \bar{S'}\), \(G(\bar{S'}) \ge G(\bar{S})\).

The next condition gives some technical properties guaranteeing that the running time of the algorithm is polynomial in the input size, i.e., in n and \({\bar{x}}\), where \({\bar{x}} = \sum _{k=1}^n \sum _{i=1}^\alpha x_{i,k}\) is the total sum of entries in the input vectors.

Condition A4

Technical properties.

  1. (i)

    Every \(F\in \mathcal{F}\) can be evaluated in polynomial time. Also, every \(H\in \mathcal{H}\) can be evaluated in polynomial time; the function G can be evaluated in polynomial time, and the relation \(\preceq _{qua}\) can be decided in polynomial time.

  2. (ii)

    The cardinality of \(\mathcal{F} \) is polynomially bounded in n and \(\log \bar{x}\).

  3. (iii)

    For any instance I of \(\varPi \), the state space \(\mathcal{S}_0\) can be computed in time that is polynomially bounded in n and \(\log \bar{x}\).

  4. (iv)

    For an instance I of \(\varPi \), and for a coordinate \(1 \le c \le \beta \), let \(\mathcal{V}_c(I)\) denote the set of values of the c-th component over all vectors in all state spaces \(\mathcal{S}_k\), \(1\le k \le n\). Then the following holds for every instance I. For all coordinates \(1\le c \le \beta \), the natural logarithm of every value in \(\mathcal{V}_c(I)\) is bounded by a polynomial \(p_1(n,\log \bar{x})\) in n and \(\log \bar{x}\); equivalently, the length of the binary encoding of every value is polynomially bounded in the input size. Moreover, for any coordinate c with \(D_c = 0\), the cardinality of \(\mathcal{V}_c(I)\) is bounded by a polynomial \(p_2(n,\log \bar{x})\) in n and \(\log \bar{x}\).

A DP-simple optimization problem \(\varPi \) is called DP-benevolent if there exist a partial order \(\preceq _{dom}\), a quasi-linear order \(\preceq _{qua}\), and a degree-vector \(\bar{D}\), such that its dynamic programming formulation satisfies Conditions A1–A4.
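For intuition, the following is a minimal Python sketch (ours, not taken from the paper or from [56]; the function name knapsack_dpb_fptas and the exact state encoding are illustrative assumptions) of the standard trimmed dynamic program for the 0/1-Knapsack problem, the example used in Sect. 3.4 to illustrate these definitions. States are (weight, profit) pairs; the functions in \(\mathcal{F}\) either skip or take the next item; \(\mathcal{H}\) discards states exceeding the capacity; G reads off the profit; and trimming keeps one representative from each class of \((\bar{D},\varDelta )\)-close, comparable states, with the profit coordinate having degree 1.

    import math

    def knapsack_dpb_fptas(items, capacity, eps):
        """Trimmed-DP sketch for 0/1-Knapsack; items is a list of (weight, profit)
        pairs with positive integer profits.  For 0 < eps <= 1, returns a feasible
        profit within a factor (1 + eps) of the optimum."""
        n = len(items)
        delta = 1.0 + eps / (2.0 * n)           # per-phase loss factor

        states = {(0, 0)}                        # S_0: the empty selection, state = (weight, profit)
        for (w, p) in items:
            # F_1: skip the item; F_2: take it.  H discards states over the capacity.
            expanded = set(states) | {(w0 + w, p0 + p)
                                      for (w0, p0) in states if w0 + w <= capacity}
            # Trim: states whose profits fall in the same power-of-delta class are
            # (D, delta)-close in the profit coordinate; keep a minimum-weight
            # representative, which is at least as useful for all later decisions.
            buckets = {}
            for (w0, p0) in expanded:
                key = -1 if p0 == 0 else int(math.log(p0) / math.log(delta))
                if key not in buckets or w0 < buckets[key][0]:
                    buckets[key] = (w0, p0)
            states = set(buckets.values())

        return max(p0 for (_, p0) in states)     # objective G: the total profit

    # Small example; the optimum packs items (4, 5) and (2, 4) for profit 9:
    print(knapsack_dpb_fptas([(3, 4), (4, 5), (2, 4), (3, 3)], 6, 0.1))   # prints 9

With \(\varDelta = 1+\varepsilon /(2n)\), the per-phase loss compounds to at most \((1+\varepsilon /(2n))^n \le e^{\varepsilon /2} \le 1+\varepsilon \) for \(0 < \varepsilon \le 1\), while the number of states kept after trimming stays polynomial in n, \(\log \bar{x}\) and \(1/\varepsilon \), which is the kind of bound Condition A4 is designed to guarantee.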

1.1 Some Problems in DP-B

Woeginger showed in [56] that the following problems are in DP-B.

  • Makespan on two identical machines, \(P2||C_{max}\)

  • Sum of cubed job completion times on two identical machines, \(P2||\sum C_j^3\)

  • Total weighted job completion time on two identical machines, \(P2||\sum w_jC_j\)

  • Total completion time on two identical machines with time-dependent processing times, \(P2|time-dep|\sum C_j\)

  • Weighted earliness-tardiness about a common non-restrictive due date on a single machine, \(1||\sum w_j|C_j-d|\)

  • 0/1-Knapsack problem

  • Weighted number of tardy jobs on a single machine, \(1||\sum w_j U_j\)

  • Batch scheduling to minimize the weighted number of tardy jobs, \(1|batch|\sum w_j U_j\)

  • Makespan of deteriorating jobs on a single machine, \(1|deteriorate|C_{max}\)

  • Total late work on a single machine, \(1||\sum V_j\)

  • Total weighted late work on a single machine, \(1||\sum w_j V_j\)

Appendix B: \(R(\varPi ,m)\in DP{\hbox {-}}B\) (Proof of Theorem 4)

Let \(\bar{D'} = \left( \bar{D};0\right) \). Thus, if two states are \(\left( \bar{D'},\varDelta \right) \)-close then they have the same transition cost. For any pair of states \(\bar{S},\bar{S'} \in \mathbb {N}^{\beta }\) and \(c,c' \in \mathbb {N}\), let \(\bar{\varUpsilon } = \left( \bar{S};c\right) \) and \(\bar{\varUpsilon '} = \left( \bar{S'};c'\right) \). We now define a partial order on the state space. We say that \(\bar{\varUpsilon } \preceq '_{dom} \bar{\varUpsilon '}\) iff \(\bar{S} \preceq _{dom} \bar{S'}\) and \(c' \le c\). Also, \(\bar{\varUpsilon } \preceq '_{qua} \bar{\varUpsilon '}\) iff \(\bar{S} \preceq _{qua} \bar{S'}\). We note that the following holds.

Observation 9

If \(\bar{\varUpsilon }\) is \(\left( \bar{D'},\varDelta \right) \)-close to \(\bar{\varUpsilon '}\), then \(\bar{S}\) is \(\left( \bar{D},\varDelta \right) \)-close to \(\bar{S'}\), and \(c = c'\).
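As a quick illustration (reusing the hypothetical helper is_close sketched in Appendix A; the numbers are arbitrary), appending a cost coordinate of degree 0 to the degree vector is exactly what forces the two costs to agree in Observation 9:

    # Assumed setup: beta = 2 with D = (1, 0), so D' = (D; 0) = (1, 0, 0);
    # an extended state (S; c) is encoded as [s_1, s_2, c].
    D_prime = [1, 0, 0]
    print(is_close(D_prime, 1.5, [4, 7, 2], [6, 7, 2]))   # True: underlying states are (D, 1.5)-close and the costs agree
    print(is_close(D_prime, 1.5, [4, 7, 2], [6, 7, 3]))   # False: the cost coordinates differ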

We now show that the four conditions given in Appendix A are satisfied for \(R(\varPi ,m)\).

  1. C.1

    For any \(\varDelta > 1\) and \(F'\in \mathcal{F}'\), for any input \(\bar{Y}=\left( \bar{X};\bar{R}\right) \in \mathbb {N}^{\alpha '}\), and for any pair of states \(\bar{\varUpsilon } =\left( \bar{S};c\right) \), \(\bar{\varUpsilon '} = \left( \bar{S'};c'\right) \in \mathbb {N}^{\beta '}\):

    1. (i)

      If \(\bar{\varUpsilon }\) is \(\left( \bar{D'},\varDelta \right) \)-close to \(\bar{\varUpsilon '}\) and \(\bar{\varUpsilon } \preceq '_{qua} \bar{\varUpsilon '}\) then, by the definition of \(\left( \bar{D'},\varDelta \right) \)-closeness, \(\bar{S}\) is \(\left( \bar{D},\varDelta \right) \)-close to \(\bar{S'}\). In addition, \(\bar{\varUpsilon } \preceq '_{qua} \bar{\varUpsilon '}\) implies that \(\bar{S}\preceq _{qua} \bar{S'}\). Now, we consider two sub-cases.

      1. (a)

        If Condition A.1 (i) (a) holds then we need to show two properties.

  • We first show that \(F'\left( \bar{Y},\bar{\varUpsilon }\right) \preceq '_{qua} F'\left( \bar{Y},\bar{\varUpsilon '}\right) \). We have that \(F\left( \bar{X},\bar{S}\right) \preceq _{qua} F\left( \bar{X},\bar{S'}\right) \); therefore, \(\left( F\left( \bar{X},\bar{S}\right) ;(c+r_F)\right) \preceq '_{qua} \left( F\left( \bar{X},\bar{S'}\right) ;(c'+r_F)\right) \). Indeed, the \(\preceq '_{qua}\) relation is not affected by the cost component of the two states. Moreover, by Observation 9, \(c=c'\). By the definition of \(F'\), \(F'\left( \left( \bar{X};\bar{R}\right) ,\left( \bar{S};c\right) \right) \preceq '_{qua} F'\left( \left( \bar{X};\bar{R}\right) ,\left( \bar{S'};c'\right) \right) \), i.e., \(F'\left( \bar{Y},\bar{\varUpsilon }\right) \preceq '_{qua} F'\left( \bar{Y},\bar{\varUpsilon '}\right) \).

  • We now show that \(F'\left( \bar{Y},\bar{\varUpsilon }\right) \) is \(\left( \bar{D'},\varDelta \right) \)-close to \(F'\left( \bar{Y},\bar{\varUpsilon '}\right) \). We have that \(F\left( \bar{X},\bar{S}\right) \) is \(\left( \bar{D},\varDelta \right) \)-close to \(F\left( \bar{X},\bar{S'}\right) \). This implies

    Claim 20 \(\left( F\left( \bar{X},\bar{S}\right) ;(c+r_F)\right) \) is \(\left( \bar{D'},\varDelta \right) \)-close to \(\left( F\left( \bar{X},\bar{S'}\right) ;(c'+r_F)\right) \).

    The claim holds since \(\bar{D'}=\left( \bar{D};0\right) \): in the first \(\beta \) coordinates, \(F(\bar{X},\bar{S})\) and \(F(\bar{X},\bar{S'})\) are close with respect to \(\bar{D}\), and in the last coordinate we have equality, since \(c+r_F =c'+r_F\) (using Observation 9). By the definition of \(F'\), we have that \(\left( F\left( \bar{X},\bar{S}\right) ;(c+r_F)\right) =F'\left( \left( \bar{X};\bar{R}\right) ,\left( \bar{S};c\right) \right) \); therefore \(F'\left( \left( \bar{X};\bar{R}\right) ,\left( \bar{S};c\right) \right) \) is \(\left( \bar{D'},\varDelta \right) \)-close to \(F'\left( \left( \bar{X};\bar{R}\right) ,\left( \bar{S'};c'\right) \right) \), i.e., \(F'\left( \bar{Y},\bar{\varUpsilon }\right) \) is \((\bar{D'},\varDelta )\)-close to \(F'\left( \bar{Y},\bar{\varUpsilon '}\right) \).

      2. (b)

    If Condition A.1 (i) (b) holds then we have \(F\left( \bar{X},\bar{S}\right) \preceq _{dom} F\left( \bar{X},\bar{S'}\right) \). Therefore,

    $$\begin{aligned} \left( F\left( \bar{X},\bar{S}\right) ;(c+r_F)\right) \preceq '_{dom} \left( F\left( \bar{X},\bar{S'}\right) ;(c'+r_F)\right) . \end{aligned}$$
    (4)

    This follows from the fact that \(c'=c\) and from the definition of \(\preceq '_{dom}\). By the definition of \(F'\), we get that

    $$\begin{aligned} F'\left( \left( \bar{X};\bar{R}\right) ,\left( \bar{S};c\right) \right) \preceq '_{dom} F'\left( \left( \bar{X};\bar{R}\right) ,\left( \bar{S'};c'\right) \right) , \end{aligned}$$
    (5)

    or

    $$\begin{aligned} F'\left( \bar{Y},\bar{\varUpsilon }\right) \preceq '_{dom} F'\left( \bar{Y},\bar{\varUpsilon '}\right) . \end{aligned}$$
    (6)
    2. (ii)

    If \(\bar{\varUpsilon } \preceq '_{dom} \bar{\varUpsilon '}\) then, by the definition of \(\preceq '_{dom}\), we have that \(\bar{S} \preceq _{dom} \bar{S'}\). Hence, we get (4), (5) and (6).

  2. C.2

    For any \(\varDelta \ge 1\), \(H_F'\in \mathcal{H}'\), \(\bar{Y}=\left( \bar{X};\bar{R}\right) \in \mathbb {N}^{\alpha } \times \{0,1\}^{|\mathcal{F}|}\), and any pair of states \(\bar{\varUpsilon }=\left( \bar{S};c\right) \) and \(\bar{\varUpsilon '} = \left( \bar{S'};c'\right) \in \mathbb {N}^{\beta '}\), we show that the following two properties hold.

    1. (i)

      If \(\bar{\varUpsilon }\) is \((\bar{D'},\varDelta )\)-close to \(\bar{\varUpsilon '}\) and \(\bar{\varUpsilon } \preceq '_{qua} \bar{\varUpsilon '}\) then we need to show that \(H_F'(\bar{Y},\bar{\varUpsilon '})\le H_F'(\bar{Y},\bar{\varUpsilon })\). We distinguish between two cases.

      1. (a)

        If \(c + r_F > m\) then \(H_F'\left( \bar{Y},\bar{\varUpsilon '}\right) \le H_F'\left( \bar{Y},\bar{\varUpsilon }\right) =1\).

      2. (b)

        If \(c + r_F \le m\) then, since \(\bar{S}\) is \(\left( \bar{D},\varDelta \right) \)-close to \(\bar{S'}\), \(c=c'\) (by Observation 9), and \(\bar{S} \preceq _{qua} \bar{S'}\) (which follows immediately from the fact that \(\bar{\varUpsilon } \preceq '_{qua} \bar{\varUpsilon '}\)), we have from Condition A.2 (i) that

        $$\begin{aligned} H_F\left( \bar{X},\bar{S'}\right) \le H_F\left( \bar{X},\bar{S}\right) . \end{aligned}$$
        (7)

        Hence, we have

        $$\begin{aligned} \begin{array}{ll} H_F'\left( \bar{Y},\bar{\varUpsilon '}\right) &{} =H_F'\left( {\bar{X};\bar{R}},{\bar{S'};c'}\right) = H_F\left( \bar{X},\bar{S'}\right) \\ &{} \le H_F\left( \bar{X},\bar{S}\right) = H'_F\left( {\bar{X};\bar{R}},{\bar{S};c}\right) = H_F'\left( \bar{Y},\bar{\varUpsilon }\right) . \end{array} \end{aligned}$$

        The second equality holds since \(c + r_F \le m\). The inequality follows from (7).

    2. (ii)

      We need to show that if \(\bar{\varUpsilon } \preceq '_{dom}\bar{\varUpsilon '}\) then \( H_F'\left( \bar{Y},\bar{\varUpsilon '}\right) \le H_F'\left( \bar{Y},\bar{\varUpsilon }\right) \). Note that if \(\bar{\varUpsilon } \preceq '_{dom}\bar{\varUpsilon '}\) then, by the definition of the order \(\preceq '_{dom}\), it holds that \(c' \le c\) and \(\bar{S} \preceq _{dom}\bar{S'}\). By Condition A.2 (ii), it holds that \(H_F\left( \bar{X},\bar{S'}\right) \le H_F\left( \bar{X},\bar{S}\right) \). Now, we distinguish between two cases.

        1. (a)

          If \(c+r_F > m\) then recall that \(H_F ( \cdot ) \le 1\). Thus, \(H_F'\left( \bar{Y},\bar{\varUpsilon '}\right) \le 1 = H_F'\left( \bar{Y},\bar{\varUpsilon }\right) \).

        2. (b)

          If \(c+r_F \le m\) then \(c' + r_F \le m\). Therefore,

          $$\begin{aligned} \begin{array}{ll} H_F'\left( \bar{Y},\bar{\varUpsilon '}\right) &{} =H_F'\left( {\bar{X};\bar{R}},{\bar{S'};c'}\right) = H_F\left( \bar{X},\bar{S'}\right) \\ \\ &{} \le H_F\left( \bar{X},\bar{S}\right) = H'_F\left( {\bar{X};\bar{R}},{\bar{S};c}\right) =H_F'\left( \bar{Y},\bar{\varUpsilon }\right) . \end{array} \end{aligned}$$

          The second equality follows from the definition of \(H'_F\). The inequality follows from Condition A.2 (ii).

  3. C.3

    This condition gives some properties of the function G.

    1. (i)

      We need to show that there exists an integer \(g \ge 0\) (which does not depend on the input), such that for all \(\varDelta \ge 1\), and for any two states \(\bar{\varUpsilon },\bar{ \varUpsilon '}\) satisfying that \(\bar{\varUpsilon }\) is \((\bar{D'}, \varDelta )\)-close to \(\bar{ \varUpsilon '}\) and \(\bar{\varUpsilon } \preceq '_{qua} \bar{\varUpsilon '}\), it holds that

      $$\begin{aligned} G'\left( \bar{\varUpsilon }\right) \le G'\left( \bar{\varUpsilon '}\right) \cdot \varDelta ^g. \end{aligned}$$
      (8)

      Assume that \(\varPi \) is a maximization problem. Intuitively, (8) guarantees that if we keep \(\bar{\varUpsilon '}\) instead of \(\bar{\varUpsilon }\) (recall that \(\bar{\varUpsilon } \preceq '_{qua} \bar{\varUpsilon '}\)), we remain close to the maximum profit.

      Since \(\bar{\varUpsilon }\) is \((\bar{D'}, \varDelta )\)-close to \(\bar{ \varUpsilon '}\), by Observation 9 it holds that \(\bar{S}\) is \((\bar{D},\varDelta )\)-close to \(\bar{S'}\) and \(c=c'\); also, \(\bar{\varUpsilon } \preceq '_{qua} \bar{\varUpsilon '}\) implies \(\bar{S} \preceq _{qua} \bar{S'}\). By Condition A.3(i), there exists \(g \ge 0\) whose value does not depend on the input, such that \(G(\bar{S}) \le \varDelta ^g G(\bar{S'})\). We get that

      $$\begin{aligned} \begin{array}{ll} G'\left( \bar{\varUpsilon }\right) &{} = G'\left( \left( \bar{S};c\right) \right) = G \left( \bar{S}\right) \\ &{} \le \varDelta ^g G\left( \bar{S'}\right) =\varDelta ^g G'\left( \left( \bar{S'};c'\right) \right) = \varDelta ^g G'\left( \bar{ \varUpsilon '}\right) . \end{array} \end{aligned}$$
    2. (ii)

      We need to show that if \(\bar{\varUpsilon } \preceq '_{dom} \bar{\varUpsilon '}\) then \(G'(\bar{\varUpsilon '}) \ge G'(\bar{\varUpsilon })\). Note that if \(\bar{\varUpsilon } \preceq '_{dom} \bar{\varUpsilon '}\) then it holds that \(\bar{S} \preceq _{dom} \bar{S'}\). Therefore, by Condition A.3 (ii), we have that \(G(\bar{S'})\ge G(\bar{S})\). Hence,

      $$\begin{aligned} \begin{array}{ll} G'\left( \bar{\varUpsilon '}\right) &{} = G'\left( {\bar{S'};c'}\right) = G\left( \bar{S'}\right) \\ &{} \ge G\left( \bar{S}\right) = G'\left( {\bar{S};c}\right) =G'\left( \bar{\varUpsilon }\right) . \end{array} \end{aligned}$$
  4. C.4

    This condition gives several technical properties. Generally, we want to show that the size of the table used by the dynamic program \(DP'\) is polynomial in the input size.

    1. (i)

      By the definitions of \(F'\), \(H'\) and \(G'\), they can all be evaluated in polynomial time, based on F, H and G. The relation \(\preceq '_{qua}\) can be decided in polynomial time, based on \(\preceq _{qua}\). Also, the relation \(\preceq '_{dom}\) is polynomial if \(\preceq _{dom}\) is.

    2. (ii)

      We have that \(|\mathcal{F}'|= |\mathcal{F}|\), therefore \(| \mathcal{F}'|\) is polynomial in n and \(\log \bar{x}\).

    3. (iii)

      Recall that \({ \mathcal S}'_0 = \left\{ ({\bar{S};{0}})|~\bar{S} \in \mathcal{S}_0\right\} \). Therefore, if \(\mathcal{S}_0\) can be computed in time that is polynomial in n and \(\log \bar{x}\), the same holds for \(\mathcal{S}'_0\).

    4. (iv)

      Given an instance \(I_R\) of \(R(\varPi ,m)\) and a coordinate \(1 \le j \le \beta '\), let \(\mathcal{V}'_j(I_R)\) denote the set of values of the j-th component of all vectors in the state spaces \(\mathcal{S}'_k\), \(1\le k \le n\). Then, for any coordinate \(1\le j \le \beta \), \(\mathcal{V}'_j(I_R)=\mathcal{V}_j(I)\), since, by definition, the first \(\beta \) coordinates of the vectors in \(\mathcal{S}'_k\) and \(\mathcal{S}_k\) are identical. Also, \(\mathcal{V}'_{\beta '}(I_R) \subseteq \{\ell ~| ~\ell \in \mathbb {N},~ \ell \le n \}\). This holds since we assume that the cost associated with each transition is in \(\{ 0,1\}\); therefore, during the first k phases of \(DP'\), the cost per element is bounded by k. In fact, we may assume only that the transition costs are polynomially bounded in n; in this case we have that, in phase k, \(\left| \mathcal{V}'_{\beta '}(I_R)\right| \le k \cdot poly(n)\).

This completes the proof of the Theorem. \(\square \)

Cite this article

Schieber, B., Shachnai, H., Tamir, G. et al. A Theory and Algorithms for Combinatorial Reoptimization. Algorithmica 80, 576–607 (2018). https://doi.org/10.1007/s00453-017-0274-8
