
A new fully polynomial time approximation scheme for the interval subset sum problem


Abstract

The interval subset sum problem (ISSP) is a generalization of the well-known subset sum problem. Given a set of intervals \(\left\{ [a_{i,1},a_{i,2}]\right\} _{i=1}^n\) and a target integer T, the ISSP is to find a set of integers, at most one from each interval, such that their sum best approximates the target T but cannot exceed it. In this paper, we first study the computational complexity of the ISSP. We show that the ISSP is relatively easy to solve compared to the 0–1 knapsack problem. We also identify several subclasses of the ISSP which are polynomial time solvable (with high probability), albeit the problem is generally NP-hard. Then, we propose a new fully polynomial time approximation scheme for solving the general ISSP problem. The time and space complexities of the proposed scheme are \({{\mathcal {O}}}\left( n \max \left\{ 1 / \epsilon ,\log n\right\} \right) \) and \(\mathcal{O}\left( n+1/\epsilon \right) ,\) respectively, where \(\epsilon \) is the relative approximation error. To the best of our knowledge, the proposed scheme has almost the same time complexity but a significantly lower space complexity compared to the best known scheme. Both the correctness and efficiency of the proposed scheme are validated by numerical simulations. In particular, the proposed scheme successfully solves ISSP instances with \(n=100{,}000\) and \(\epsilon =0.1\%\) within 1 s.
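To make the problem statement concrete, here is a minimal brute-force reference solver (our illustration only; `issp_brute_force` is a hypothetical helper and is exponential in n, unlike the proposed scheme). It uses the observation, made precise in the proof of Theorem 2.1 below, that for a fixed subset of chosen intervals the achievable sums form the whole integer interval between the sum of the chosen lower endpoints and the sum of the chosen upper endpoints:

```python
from itertools import product

def issp_brute_force(intervals, T):
    """Exhaustive ISSP solver for tiny instances (exponential in n).

    For each subset of intervals, the achievable sums form the integer
    interval [sum of lower ends, sum of upper ends]; hence the best value
    of a feasible subset (lower-end sum <= T) is min(upper-end sum, T).
    """
    best = 0
    for choice in product((0, 1), repeat=len(intervals)):
        lo = sum(a for (a, _), y in zip(intervals, choice) if y)
        hi = sum(b for (_, b), y in zip(intervals, choice) if y)
        if lo <= T:
            best = max(best, min(hi, T))
    return best

# The four-interval instance from Appendix 2; the optimal value is T = 100.
print(issp_brute_force([(10, 20), (10, 25), (60, 85), (20, 50)], 100))
```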


Notes

  1. An algorithm that solves a problem is called a pseudo-polynomial time algorithm if its time complexity function is bounded above by a polynomial in the numeric value of the input, although it may be exponential in the length (i.e., the number of bits) of the input.

  2. An algorithm is called an FPTAS for a maximization problem if, for any given instance of the problem and any relative error \(\epsilon \in (0,1),\) the algorithm returns a solution value \(v^A\) satisfying \(v^A\ge \left( 1-\epsilon \right) v^*,\) where \(v^*\) is the optimal value of the corresponding instance, and its time complexity function is polynomial both in the length of the given data of the problem and in \(1/\epsilon .\)

  3. Strictly speaking, the variables \(\left\{ x_i\right\} _{i=1}^n\) in the ISSP are not semi-continuous, since \(x_i\) in the ISSP can be either zero or integers in the interval \([a_{i,1}, a_{i,2}]\) while the semi-continuous variable \(x_i\) can be zero or any continuous value in the corresponding interval. However, as will become clear soon, the intrinsic difficulty of solving the ISSP lies in determining whether \(x_i\) should be zero or belong to \([a_{i,1},a_{i,2}].\) Once this is done, it is simple to obtain an optimal solution of the ISSP. Therefore, we actually can drop the constraint \(x_i\in \mathbb {Z}\) in the ISSP.


Acknowledgements

We thank Prof. Nelson Maculan and Prof. Sergiy Butenko for the useful discussions on this paper.

Author information

Corresponding author

Correspondence to Ya-Feng Liu.

Additional information

Y.-F. Liu was partially supported by NSFC Grants 11671419, 11331012, 11631013, and 11571221. Y.-H. Dai was partially supported by the Key Project of Chinese National Programs for Fundamental Research and Development Grant 2015CB856000, and NSFC Grants 11631013, 11331012, and 71331001.

Appendices

Appendix 1: Proofs of Lemmas/Theorems/Corollaries

1.1 Proof of Theorem 2.1

Proof

We prove the theorem by considering two cases, according to whether there exist binary \(\left\{ {\bar{y}}_i\right\} _{i=1}^n\) such that

$$\begin{aligned} \sum _{i=1}^n a_{i,1} {\bar{y}}_i \le T \le \sum \limits _{i=1}^n a_{i,2} {\bar{y}}_i. \end{aligned}$$
(6.1)

Case A There exist binary \(\left\{ {\bar{y}}_i\right\} _{i=1}^n\) such that (6.1) holds true. Without loss of generality, assume

$$\begin{aligned} {\bar{y}}_i=1,\quad i=1,2,\ldots ,{\bar{I}};\quad {\bar{y}}_i=0, \quad i={\bar{I}}+1,\ldots , n. \end{aligned}$$
(6.2)

In this case, we claim that the optimal values of both problems (2.1) and (2.2) are equal to T. Let us argue that the above claim holds. First, it follows from (6.1) that \(\left\{ {\bar{y}}_i\right\} _{i=1}^n\) is feasible to problem (2.2) and the optimal value of problem (2.2) is equal to T. Now, we evaluate the objective function of problem (2.1) at the point \(\left\{ {\bar{y}}_i\right\} _{i=1}^n:\)

$$\begin{aligned} g(z):=\sum _{i=1}^n \left( a_{i,1}\bar{y}_i+z_i\right) =\sum _{i=1}^{{\bar{I}}}a_{i,1}+\sum _{i=1}^{{\bar{I}}}z_i, \end{aligned}$$

where the last equality is due to the assumption (6.2). Since \(z_i\) can take any value in the interval \([0, a_{i,2}-a_{i,1}]\) for each \(i=1,2,\dots ,{\bar{I}},\) we know that g(z) can take any value in the interval \({\left[ \sum _{i=1}^{{\bar{I}}} a_{i, 1}, \, \sum _{i=1}^{\bar{I}} a_{i,2}\right] }.\) Combining this, (6.1), and the constraint \(\sum _{i=1}^n (a_{i,1} y_i + z_i) \le T,\) we know that the optimal value of problem (2.1) is exactly equal to T. As a matter of fact, an integer solution \(\left\{ \bar{z}_i\right\} _{i=1}^n\) achieving the optimal value T can be found as follows. Calculate

$$\begin{aligned} v^i=\sum _{l=1}^{i} a_{l,2}+\sum _{l=i+1}^{{\bar{I}}} a_{l,1},\quad i=0,1,\ldots ,{\bar{I}}. \end{aligned}$$

Then there must exist an index \(i^*\in \{0,1,\ldots ,{\bar{I}}-1\}\) such that \(v^{i^*}<T\le v^{i^*+1},\) and \(\left\{ \bar{z}_i\right\} _{i=1}^n\) achieving the optimal value T is given by

$$\begin{aligned} {\bar{z}}_i=\left\{ \begin{array}{ll} \displaystyle a_{i,2}-a_{i,1},&{}\quad i=1,\ldots ,i^*,\\ T-v^{i^*},&{}\quad i=i^*+1,\\ 0,&{}\quad i=i^*+2,\ldots ,n. \end{array}\right. \end{aligned}$$
(6.3)
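The construction (6.3) is a simple water-filling step: starting from all lower endpoints, raise the chosen intervals to their upper endpoints one by one until the running total reaches T. A short sketch of this step (assuming (6.1) holds for the first \({\bar{I}}\) intervals; the function name is ours):

```python
def fill_to_target(intervals, I_bar, T):
    """Compute z as in (6.3), assuming that the sum of the lower ends of
    intervals 1..I_bar is <= T <= the sum of their upper ends."""
    z = [0] * len(intervals)
    total = sum(a for a, _ in intervals[:I_bar])  # start from all lower ends
    for i in range(I_bar):
        step = min(intervals[i][1] - intervals[i][0], T - total)
        z[i] = step              # raise interval i as far as needed
        total += step
        if total == T:           # target reached; the remaining z_i stay 0
            break
    return z

# Intervals [10,20] and [10,25] with I_bar = 2 and T = 27: returns [7, 0].
```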

We now show that there is a correspondence between the solutions of problems (2.1) and (2.2). On the one hand, for any solution \(\left\{ y_i^*,z_i^*\right\} _{i=1}^n\) of problem (2.1) achieving the optimal value T (from the above claim), we have \(\sum _{i=1}^n a_{i,1} y^*_i \le \sum _{i=1}^n (a_{i,1} y^*_i + z^*_i) = T\) and \(\sum \nolimits _{i=1}^n a_{i,2} y^*_i \ge \sum \nolimits _{i=1}^n (a_{i,1} y^*_i + z^*_i) = T.\) This immediately shows that \(\left\{ y_i^*\right\} _{i=1}^n\) is a solution of problem (2.2). On the other hand, suppose that \(\left\{ y_i^*\right\} _{i=1}^n\) is a solution of problem (2.2) achieving the optimal value T (from the above claim). Then, there must hold \(\sum \nolimits _{i=1}^n a_{i,1} y^*_i \le T \le \sum \nolimits _{i=1}^n a_{i,2} y^*_i.\) By using the same argument as in the proof of the above claim, we can show that there exist integers \(\left\{ z_i^*\right\} _{i=1}^n\) such that \(\left\{ y_i^*,z_i^*\right\} _{i=1}^n\) is a solution of problem (2.1).

Case B There does not exist binary \(\left\{ \bar{y}_i\right\} _{i=1}^n\) such that (6.1) holds true. This means that, for any binary \(\left\{ y_i\right\} _{i=1}^n\) satisfying \(\sum _{i=1}^n a_{i,1} y_i \le T\), we must have \(\sum _{i=1}^n a_{i,2} y_i < T.\) Therefore, for any binary \(\left\{ y_i\right\} _{i=1}^n,\) we have

$$\begin{aligned} \sum _{i=1}^n a_{i,1} y_i \le T\Longleftrightarrow \sum _{i=1}^n a_{i,2} y_i < T. \end{aligned}$$
(6.4)

Hence, the optimal value of problem (2.2) is strictly less than T, and problem (2.2) is equivalent to

$$\begin{aligned} \begin{array}{cl} \max \limits _{y} &{} \displaystyle \sum _{i=1}^n a_{i,2} y_i \\ \text{ s.t. } &{} \displaystyle \sum _{i=1}^n a_{i,1} y_i \le T, \\ &{} y_i \in \{0, 1\},\quad i=1,2,\ldots ,n. \end{array}\end{aligned}$$
(6.5)

Moreover, the relation (6.4) implies that the solution of problem (2.1) must satisfy \(z_i=y_i\left( a_{i,2}-a_{i,1}\right) \) for all \(i=1,2,\ldots ,n,\) since this maximizes the objective without violating the constraints. Combining this and (6.4), we know that problem (2.1) is equivalent to problem (6.5).

Combining Cases A and B, we can conclude that ISSP (2.1) is equivalent to problem (2.2). This completes the proof. \(\square \)

1.2 Proof of Theorem 2.2

Proof

We prove the theorem by considering the following two cases.

Case A \(T \ge \sum _{i=1}^{n} a_{i,1}.\) In this case, it is simple to find the solution of ISSP (2.1): if \(T\le \sum _{i=1}^{n} a_{i,2},\) then the solution to ISSP (2.1) is \(y_i=1\) for all \(i=1,2,\ldots ,n\) and \(z_i\) given by (6.3) with \({\bar{I}}\) there replaced by n;  otherwise the solution to ISSP (2.1) is \(y_i=1\) and \(z_i=a_{i,2}-a_{i,1}\) for all \(i=1,2,\ldots ,n.\)

Case B \(T < \sum _{i=1}^{n} a_{i,1}.\) Then, we can find \(I\ge 0\) such that

$$\begin{aligned} \sum _{i=1}^{I} a_{i,1} \le T < \sum _{i=1}^{I+1} a_{i,1} \end{aligned}$$
(6.6)

in polynomial time. If

$$\begin{aligned} \sum _{i=1}^I a_{i,2}>T, \end{aligned}$$
(6.7)

then we can easily construct a solution such that the optimal value of ISSP (2.1) is T in polynomial time, as in Case A of the proof of Theorem 2.1, and thus ISSP (2.1) is polynomial time solvable. Next, we show that (6.7) holds under the assumption (2.4).

From the right hand side of (6.6), we get

$$\begin{aligned} T < \sum _{i=1}^{I+1} a_{i,1} \le (I+1) \max \limits _{1 \le i \le n} \{ a_{i,1} \}, \end{aligned}$$

which further implies

$$\begin{aligned} I \ge \left\lfloor \frac{\displaystyle T}{\displaystyle \max \nolimits _{1 \le i \le n} \{ a_{i,1} \} } \right\rfloor . \end{aligned}$$

Combining this with (2.4) yields

$$\begin{aligned} I \ge \left\lceil \frac{\displaystyle \max \nolimits _{1 \le i \le n} \{ a_{i,1} \} }{\displaystyle \min \nolimits _{1\le i \le n} \{ a_{i,2} - a_{i,1} \} } \right\rceil , \end{aligned}$$
(6.8)

which means

$$\begin{aligned} I \min \limits _{1\le i \le n} \{ a_{i,2} - a_{i,1} \} \ge \max \limits _{1 \le i \le n} \{ a_{i,1} \}. \end{aligned}$$
(6.9)

Now, we can use (6.6) and (6.9) to obtain (6.7). In particular, we have

$$\begin{aligned} \sum _{i=1}^{I} a_{i,2}&= \sum _{i=1}^{I} (a_{i,2} - a_{i,1}) + \sum _{i=1}^{I} a_{i,1} \\&\ge I \min _{1\le i \le n} \{ a_{i,2} - a_{i,1} \} + \sum _{i=1}^{I} a_{i,1} \\&\ge \max _{1 \le i \le n} \{ a_{i,1} \} + \sum _{i=1}^{I} a_{i,1} \\&> T, \end{aligned}$$

where the second inequality is due to (6.9) and the last inequality is due to the right hand side of (6.6). This completes the proof of Theorem 2.2. \(\square \)
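The two easy cases in this proof can be tested with two prefix-sum scans, as the following sketch indicates (`solve_if_easy` is a hypothetical helper; the intervals are assumed to be indexed as in (6.6)):

```python
from itertools import accumulate

def solve_if_easy(intervals, T):
    """Return the optimal ISSP value when one of the polynomial-time
    cases from the proof of Theorem 2.2 applies, else None (a sketch)."""
    lo = list(accumulate(a for a, _ in intervals))  # prefix sums of a_{i,1}
    hi = list(accumulate(b for _, b in intervals))  # prefix sums of a_{i,2}
    if T >= lo[-1]:                                 # Case A
        return min(T, hi[-1])
    I = 0                                           # Case B: condition (6.6)
    while lo[I] <= T:
        I += 1                  # largest I with the first I lower ends <= T
    if I >= 1 and hi[I - 1] > T:                    # condition (6.7)
        return T                                    # the optimal value is T
    return None
```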

1.3 Proof of Theorem 2.3

Proof

Without loss of generality, assume \(a_{1,2} = \max \nolimits _{1\le i\le n} a_{i,2}\) throughout this proof. We first establish the following result.

Lemma 6.1

Suppose (2.5) holds true for some \(c > 1\) and T obeys the uniform distribution over the interval \(\left( a_{1,2}, \sum _{i=1}^n a_{i,2}\right] .\) Then, the probability that ISSP (1.3) is polynomial time solvable is at least

$$\begin{aligned} \frac{\displaystyle \sum \nolimits _{j=2}^n \min \left\{ \sum \nolimits _{i=1}^{j} \left( 1-\frac{\displaystyle 1}{\displaystyle c} \right) a_{i,2}, a_{j,2} \right\} }{\displaystyle \sum \nolimits _{j=2}^n a_{j,2} }. \end{aligned}$$

Combining Lemma 6.1 and the following inequality

$$\begin{aligned} \sum \limits _{j=2}^n \min \left\{ \sum \limits _{i=1}^{j} \left( 1-\frac{\displaystyle 1}{\displaystyle c} \right) a_{i,2}, a_{j,2} \right\}&\ge \sum \limits _{j=2}^n \min \left\{ 2 \left( 1-\frac{\displaystyle 1}{\displaystyle c} \right) a_{j,2}, a_{j,2} \right\} \\&= \min \left\{ 2 \left( 1-\frac{\displaystyle 1}{\displaystyle c} \right) ,1\right\} \sum \limits _{j=2}^n a_{j,2}, \end{aligned}$$

we immediately obtain Theorem 2.3.
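For a concrete reading of this bound (our illustration, assuming Theorem 2.3 asserts exactly the quantity derived above), note that

$$\begin{aligned} \min \left\{ 2\left( 1-\frac{1}{c}\right) ,1\right\} =\left\{ \begin{array}{ll} 2\left( 1-\frac{1}{c}\right) ,&{}\quad \text {if}\, 1<c<2,\\ 1,&{}\quad \text {if}\, c\ge 2; \end{array}\right. \end{aligned}$$

e.g., the bound is 2/3 for \(c=3/2,\) while for any \(c\ge 2\) it equals 1, i.e., such instances are polynomial time solvable with probability one.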

Next, we prove Lemma 6.1. The interval \(\left( a_{1,2}, \sum \nolimits _{i=1}^n a_{i,2}\right] \) can be partitioned as follows:

$$\begin{aligned} \left( a_{1,2}, \sum \limits _{i=1}^n a_{i,2}\right] = \bigcup \limits _{j=2}^n \left( \sum \limits _{i=1}^{j-1} a_{i,2}, \sum \limits _{i=1}^{j} a_{i,2} \right] . \end{aligned}$$
(6.10)

As shown in Case A of the proof of Theorem 2.1, for any \(T \in \left[ \sum \nolimits _{i=1}^j a_{i,1}, \sum \nolimits _{i=1}^{j} a_{i,2} \right] \) with \(j\in \{1,2,\ldots ,n\}\), ISSP (1.3) is polynomial time solvable. Therefore, the probability that ISSP (1.3) is polynomial time solvable is greater than or equal to

$$\begin{aligned} \frac{\displaystyle \left| \bigcup \nolimits _{j=2}^n \left( \left[ \sum \nolimits _{i=1}^j a_{i,1}, \sum \nolimits _{i=1}^{j} a_{i,2} \right] \bigcap \left( \sum \nolimits _{i=1}^{j-1} a_{i,2}, \sum \nolimits _{i=1}^{j} a_{i,2} \right] \right) \right| }{\displaystyle \left| \bigcup \nolimits _{j=2}^n \left( \sum \nolimits _{i=1}^{j-1} a_{i,2}, \sum \nolimits _{i=1}^{j} a_{i,2} \right] \right| }, \end{aligned}$$
(6.11)

where \(|\cdot |\) denotes the length of the corresponding set. Moreover, it can be verified that the denominator of (6.11) is equal to \(\sum \nolimits _{j=2}^n a_{j,2}\) and the numerator of (6.11) is lower bounded by \(\sum \nolimits _{j=2}^n \min \left\{ \sum \nolimits _{i=1}^{j} \left( 1-\frac{\displaystyle 1}{\displaystyle c} \right) a_{i,2}, a_{j,2} \right\} .\) This completes the proof of Lemma 6.1.

1.4 Proof of Lemma 3.4

Proof

By Lemma 3.3, the ISSP has an optimal solution with at most one midrange element, in which all left anchored intervals precede the midrange interval and all right anchored intervals follow it. Without loss of generality, suppose \(\left\{ x_i^*\right\} _{i=1}^n\) is such an optimal solution with \([a_{j,1}, a_{j,2}]\) being the only midrange interval (i.e., \(x_j^*\in (a_{j,1}, a_{j,2})\)) and \([a_{k,1},a_{k,2}]\) with \(k>j\) being the last right anchored interval (i.e., \(x_k^*=a_{k,2}\)). Next, we construct an optimal solution \(\left\{ \tilde{x}_i^*\right\} _{i=1}^n\) as follows:

$$\begin{aligned} {\tilde{x}}_{i}^*=\left\{ \begin{array}{ll} x_i^*,&{}\quad \text {if}\, i\notin \{j,k\},\\ a_{k,2}-a_{j,2}+x_j^*, &{}\quad \text {if}\, i=k,\\ a_{j,2}, &{}\quad \text {if}\, i=j.\\ \end{array} \right. \end{aligned}$$

Since \(a_{j,2}-a_{j,1}\le a_{k,2}-a_{k,1}\) and \(x_j^*\in (a_{j,1},a_{j,2}),\) it follows that

$$\begin{aligned} a_{k,1}=a_{k,2}-(a_{k,2}-a_{k,1}) \le a_{k,2}-(a_{j,2}-a_{j,1})<a_{k,2}-(a_{j,2}-x_{j}^*)<a_{k,2}. \end{aligned}$$

Therefore, \(\left\{ {\tilde{x}}_i^*\right\} _{i=1}^n\) constructed in the above is feasible and optimal to the ISSP. Moreover, the solution \(\left\{ {\tilde{x}}_i^*\right\} _{i=1}^n\) contains only one midrange element \({\tilde{x}}_k^*\) and there are neither left nor right anchored intervals following this midrange interval. The proof is completed. \(\square \)
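The exchange used in this proof is mechanical enough to state in a few lines. The sketch below (our illustration; the function and the sample data are hypothetical) moves the fractional part from the midrange interval j to a later right anchored interval k, preserving the total sum:

```python
def push_midrange_last(x, a, j, k):
    """Exchange step from the proof of Lemma 3.4: assumes
    a[j][1]-a[j][0] <= a[k][1]-a[k][0], x[j] in (a[j][0], a[j][1]),
    and x[k] == a[k][1]; the total sum is preserved."""
    x = list(x)
    x[k] = a[k][1] - (a[j][1] - x[j])  # new midrange element in (a[k][0], a[k][1])
    x[j] = a[j][1]                     # interval j becomes right anchored
    return x

# Example: a = [(10, 20), (60, 85)], x = [15, 85]
# -> push_midrange_last(x, a, 0, 1) == [20, 80]; the sum 100 is unchanged.
```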

1.5 Proof of Lemma 3.5

Proof

We prove the lemma by induction. Obviously, the lemma is true for \(i=1.\) Assume it is true for some \(i\ge 1;\) we next show it is also true for \(i+1.\) By the induction hypothesis and the fact \(\varDelta ^*_i \subseteq \varDelta ^*_{i+1},\) we only need to consider the elements \(\delta \) in the set

$$\begin{aligned} \varDelta ^*_{i+1}{\setminus }\varDelta ^*_{i}=\left\{ \delta + a_{i+1,1}, \delta + a_{i+1,2} ~|~ \delta \in \varDelta _{i}^* \right\} \cup \left\{ a_{i+1,1}, a_{i+1,2} \right\} . \end{aligned}$$
(6.12)

The lemma is trivially true if \(\delta =a_{i+1,1}\) or \(\delta =a_{i+1,2}.\) It remains to show that the lemma is true for \(\delta =\delta '+v\le {\tilde{T}},\) where \(\delta '\in \varDelta ^*_{i}\) and \(v\in \{a_{i+1,1}, a_{i+1,2}\}.\) According to the induction hypothesis, we divide the subsequent proof into Cases A and B.

Case A There exist \(\underline{\delta '},\overline{\delta '} \in \varDelta _i\) such that

$$\begin{aligned} \underline{\delta '} \le \delta ' \le \overline{\delta '}~\text {and}~\overline{\delta '} - \underline{\delta '} \le \epsilon T. \end{aligned}$$
(6.13)

In this case, we further consider the following three subcases A1, A2, and A3.

  1. (A1)

    \(\underline{\delta '}+v \in I_j\) and \(\overline{\delta '}+v \in I_j\) for some j. Then, let \({\underline{\delta }}\) and \({\overline{\delta }}\) be the minimum and maximum values of \(\varDelta _{i+1}\) in the subinterval \(I_j,\) respectively. It is simple to check that \({\underline{\delta }} \le \underline{\delta '}+v \le \delta \le \overline{\delta '}+v \le {\overline{\delta }}\) and \({\overline{\delta }} - {\underline{\delta }} \le \epsilon T.\)

  2. (A2)

    \(\underline{\delta '}+v \in I_j\) and \(\overline{\delta '}+v \in I_{j+1}\) for some j. Let \(\underline{\delta _j}\) and \(\overline{\delta _j}\) (\(\underline{\delta _{j+1}}\) and \(\overline{\delta _{j+1}}\)) be the minimum and maximum values of \(\varDelta _{i+1}\) in the subinterval \(I_j\) (\(I_{j+1}\)), respectively. Then, by (6.13) and the assumption in the subcase A2, there must exist \(\underline{\delta _j},\,\overline{\delta _j},\,\underline{\delta _{j+1}},\,\overline{\delta _{j+1}} \in \varDelta _{i+1}\) such that \(\underline{\delta _j} \le \underline{\delta '}+v \le \overline{\delta _j} \le \underline{\delta _{j+1}} \le \overline{\delta '}+v \le \overline{\delta _{j+1}}\), \(\overline{\delta _{j}} - \underline{\delta _{j}} \le \epsilon T\), \(\underline{\delta _{j+1}} - \overline{\delta _j} \le ( \overline{\delta '}+v) - ( \underline{\delta '}+v ) \le \epsilon T\), and \(\overline{\delta _{j+1}} - \underline{\delta _{j+1}} \le \epsilon T.\) If \(\delta '+v\in [\underline{\delta _j}, \overline{\delta _j}],\) let \({\underline{\delta }}=\underline{\delta _j}\) and \({\overline{\delta }}=\overline{\delta _j};\) if \(\delta '+v\in [\overline{\delta _j}, \underline{\delta _{j+1}}],\) let \({\underline{\delta }}=\overline{\delta _j}\) and \({\overline{\delta }}=\underline{\delta _{j+1}};\) and if \(\delta '+v\in [\underline{\delta _{j+1}}, \overline{\delta _{j+1}}],\) let \({\underline{\delta }}=\underline{\delta _{j+1}}\) and \({\overline{\delta }}=\overline{\delta _{j+1}}.\) It is simple to verify that \({\underline{\delta }}\) and \({\overline{\delta }}\) constructed in the above satisfy \({\underline{\delta }} \le \delta \le {\overline{\delta }}\) and \({\overline{\delta }} - {\underline{\delta }} \le \epsilon T.\)

  3. (A3)

    \(\underline{\delta '}+v \in I_j\) for some j and \(\overline{\delta '}+v > {\tilde{T}}.\) Since \({\tilde{T}}-\epsilon T\) and \({\tilde{T}}\) are in different subintervals, we assume \(\tilde{T}-\epsilon T\in I_j\) and \({\tilde{T}}\in I_{j+1}\) without loss of generality. Then, we get either \(\delta '+v\in I_j\) or \(\delta '+v\in I_{j+1}.\) If \(\delta '+v\in I_j,\) let \({\underline{\delta }}\) and \({\overline{\delta }}\,(\le {\tilde{T}})\) be the minimum and maximum values of \(\varDelta _{i+1}\) in the subinterval \(I_j,\) respectively; we thus have \({\underline{\delta }} \le \delta \le {\overline{\delta }}\) and \({\overline{\delta }} - {\underline{\delta }} \le \epsilon T.\) If \(\delta '+v\in I_{j+1},\) let \(\underline{\delta }\) be the minimum value of \(\varDelta _{i+1}\) in \(I_{j+1}\). Since \({\tilde{T}}-\epsilon T\in I_j,\) it follows that \({\tilde{T}} - \epsilon T \le {\underline{\delta }} \le \delta \le {\tilde{T}}.\)

Case B There exists \(\underline{\delta '} \in \varDelta _i\) such that \({\tilde{T}} - \epsilon T \le \underline{\delta '} \le \delta ' \le {\tilde{T}}.\) Without loss of generality, assume \({\tilde{T}}-\epsilon T\in I_j\) and \({\tilde{T}}\in I_{j+1}\) for some j. We can show that the lemma is true for this case by using the same argument as in the above subcase A3. More specifically, we can show that: if \(\delta \in I_j,\) there exist \({\underline{\delta }},{\overline{\delta }} \in \varDelta _{i+1}\) such that \({\underline{\delta }} \le \delta \le {\overline{\delta }}\) and \({\overline{\delta }} - {\underline{\delta }} \le \epsilon T;\) and if \(\delta \in I_{j+1},\) there exists \(\underline{\delta }\) such that \({\tilde{T}} - \epsilon T \le {\underline{\delta }} \le \delta \le {\tilde{T}}.\)

This completes the proof of Lemma 3.5. \(\square \)
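The mechanism behind Lemma 3.5 is easy to make concrete. The sketch below (our illustration of the idea, not the authors' pseudocode of the procedure relaxed dynamic programming) keeps, for every subinterval \(I_j\) of length \(\epsilon T,\) only the minimum and maximum reachable endpoint sums:

```python
import math

def relaxed_dp(intervals, T_tilde, eps, T):
    """Relaxed dynamic programming over endpoint sums (a sketch): keep
    at most two representatives (min and max) of the reachable sums in
    each subinterval I_j = ((j-1)*eps*T, j*eps*T]."""
    delta = set()
    for a1, a2 in intervals:
        new = {a1, a2} | {d + v for d in delta for v in (a1, a2)}
        reachable = {s for s in delta | new if s <= T_tilde}
        buckets = {}
        for s in reachable:
            j = math.ceil(s / (eps * T))          # subinterval index of s
            lo, hi = buckets.get(j, (s, s))
            buckets[j] = (min(lo, s), max(hi, s))
        delta = {v for pair in buckets.values() for v in pair}
    return sorted(delta)

# Instance of Appendix 2 with eps = 0.2: after two intervals this yields
# [10, 20, 25, 35, 45], and after three intervals the list below, matching
# the delta^-(k), delta^+(k) values reported in Appendix 2.
print(relaxed_dp([(10, 20), (10, 25), (60, 85)], 100, 0.2, 100))
# [10, 20, 25, 35, 45, 60, 70, 80, 85, 95]
```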

1.6 Proof of Corollary 3.6

Proof

For \(\delta ^* \in \varDelta _{{\tilde{\varLambda }}}^*,\) by Lemma 3.5, we have one of the following two statements:

  1.

    there exist \({\underline{\delta }},\,{\overline{\delta }} \in \varDelta _{{\tilde{\varLambda }}}\) such that \({\underline{\delta }} \le \delta ^* \le {\overline{\delta }}\) and \({\overline{\delta }} - {\underline{\delta }} \le \epsilon T;\)

  2.

    there exists \({\underline{\delta }} \in \varDelta _{\tilde{\varLambda }}\) such that \({\tilde{T}} - \epsilon T \le {\underline{\delta }} \le \delta ^* \le {\tilde{T}}\).

If the first statement is true, let \(\delta \) be \({\overline{\delta }}\) there. Since \(\delta ^*\) is the optimal value of the ISSP, we must have \(\delta =\delta ^*.\) If the second statement is true, let \(\delta \) be \({\underline{\delta }}\) there. Then, we immediately obtain the desired result. \(\square \)

1.7 Proof of Lemma 3.7

Proof

Since \(\delta \in \varDelta _{{\tilde{\varLambda }}}\subseteq \varDelta _{{\tilde{\varLambda }}}^*,\) there must exist \(\delta _1 \in \left\{ 0\right\} \cup \varDelta _{{\tilde{\varLambda }}_1}^*\) and \(\delta _2 \in \left\{ 0\right\} \cup \varDelta _{{\tilde{\varLambda }}_2}^*\) such that \(\delta _1 + \delta _2 = \delta .\) Invoking Lemma 3.5 again, we have one of the following two statements:

  1.

    there exists \(\underline{\delta _1} \in \varDelta _{\tilde{\varLambda }_1}\) such that \({\tilde{T}} - \epsilon T \le \underline{\delta _1}\le \delta _1\le {\tilde{T}}\) or there exists \(\underline{\delta _2} \in \varDelta _{{\tilde{\varLambda }}_2}\) such that \({\tilde{T}} - \epsilon T \le \underline{\delta _2}\le \delta _2\le {\tilde{T}};\)

  2.

    there exist \(\underline{\delta _{1}},\,\overline{\delta _1} \in \varDelta _{{\tilde{\varLambda }}_1}\) and \(\underline{\delta _2},\,\overline{\delta _2} \in \varDelta _{{\tilde{\varLambda }}_2}\) such that \(\underline{\delta _{1}} \le \delta _1 \le \overline{\delta _{1}},\) \(\underline{\delta _{2}} \le \delta _2 \le \overline{\delta _{2}},\) \(0\le \overline{\delta _{1}} - \underline{\delta _{1}} \le \epsilon T,\) and \(0\le \overline{\delta _{2}} - \underline{\delta _{2}} \le \epsilon T.\)

If the first statement is true, then let \((u_1,u_2)=(\underline{\delta _1},0)\) or \((u_1,u_2)=(0,\underline{\delta _2})\). Obviously, \(u_1\) and \(u_2\) defined in the above satisfy \({\tilde{T}} - \epsilon T \le u_1 + u_2 \le {\tilde{T}}.\) It remains to show the lemma if the second statement is true. We consider the following three cases separately.

Case A \(\underline{\delta _{1}}+\underline{\delta _{2}} \ge \tilde{T} - \epsilon T.\) In this case, let \((u_1,u_2)=(\underline{\delta _{1}},\underline{\delta _2}).\) Then, combining the facts \(\delta _1 + \delta _2 = \delta ,\) \(\underline{\delta _{1}} \le \delta _1,\) \(\underline{\delta _{2}} \le \delta _2\) and \(\delta \le {\tilde{T}},\) we immediately obtain \({\tilde{T}} - \epsilon T\le u_1+u_2=\underline{\delta _{1}}+\underline{\delta _{2}}\le \delta _1+\delta _2=\delta \le {\tilde{T}}.\)

Case B \(\overline{\delta _{1}}+\overline{\delta _{2}} \le \tilde{T}.\) In this case, let \((u_1,u_2)=(\overline{\delta _{1}},\overline{\delta _2}).\) We can use essentially the same argument as in the above Case A to show \({\tilde{T}} - \epsilon T\le u_1+u_2\le {\tilde{T}}.\)

Case C \(\underline{\delta _{1}}+\underline{\delta _{2}} < {\tilde{T}} - \epsilon T\) and \(\overline{\delta _{1}}+\overline{\delta _{2}} > {\tilde{T}}.\) In this case, let \((u_1,u_2) = (\overline{\delta _{1}},\underline{\delta _{2}}).\) Combining the conditions assumed in this case, \(\overline{\delta _{1}} - \underline{\delta _{1}} \le \epsilon T,\) and \(\overline{\delta _{2}} - \underline{\delta _{2}} \le \epsilon T,\) we immediately obtain \(u_1+u_2=\overline{\delta _{1}} + \underline{\delta _{2}} = (\overline{\delta _{1}} + \overline{\delta _{2}}) - (\overline{\delta _{2}} -\underline{\delta _{2}} ) > {\tilde{T}} - \epsilon T\) and \(u_1+u_2=\overline{\delta _{1} } + \underline{\delta _{2}} = (\underline{\delta _{1}} + \underline{\delta _{2}}) + (\overline{\delta _{1}} -\underline{\delta _{1}} ) < {\tilde{T}}\).

From the above analysis, we conclude that there exist \(u_1 \in \{0\} \cup \varDelta _{{\tilde{\varLambda }}_1}\) and \(u_2 \in \{0\} \cup \varDelta _{{\tilde{\varLambda }}_2}\) such that \({\tilde{T}} - \epsilon T \le u_1 + u_2 \le {\tilde{T}}\). The proof is completed. \(\square \)
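Given the two candidate sets, a pair \((u_1,u_2)\) as in Lemma 3.7 can be located by a standard two-pointer scan over the sorted lists (a sketch; the function name and signature are ours):

```python
def find_pair(D1, D2, T_tilde, eps, T):
    """Find u1 in {0} | D1 and u2 in {0} | D2 with
    T_tilde - eps*T <= u1 + u2 <= T_tilde, or return None;
    Lemma 3.7 guarantees success when its hypotheses hold."""
    A = sorted({0} | set(D1))                 # ascending
    B = sorted({0} | set(D2), reverse=True)   # descending
    i = j = 0
    while i < len(A) and j < len(B):
        s = A[i] + B[j]
        if s > T_tilde:
            j += 1      # sum too large: decrease the second component
        elif s < T_tilde - eps * T:
            i += 1      # sum too small: increase the first component
        else:
            return A[i], B[j]
    return None

# Line 4 of the divide and conquer call in the Appendix 2 walkthrough:
print(find_pair({10, 20}, {10, 25}, 40, 0.2, 100))   # (0, 25)
```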

1.8 Proof of Lemma 3.8

Proof

By the assumption, the largest number in \(\varDelta ^*_{{\tilde{\varLambda }}}\) associated with the target \({\tilde{T}}\) must be in the interval \([{\tilde{T}} - \epsilon T, {\tilde{T}}]\). Then, it follows from Corollary 3.6 that the largest number in \(\varDelta _{{\tilde{\varLambda }}}\) associated with the target \({\tilde{T}}\) must also lie in the interval \([{\tilde{T}} - \epsilon T , {\tilde{T}}].\) Recalling the procedure backtracking, we know the following facts: when the procedure starts, the largest number in \(\varDelta _{{\tilde{\varLambda }}}\) associated with the target \({\tilde{T}}\) (i.e., \(u=\max \{\delta ^+(j), \delta ^-(j) ~|~ \delta ^+(j)\le \tilde{T}, \delta ^-(j) \le {\tilde{T}} \}\)) is found in line 1; the intervals that contribute to generating u are backtracked in lines 4–18; and when the procedure terminates, \(y+u\) is in the interval \([{\tilde{T}} - \epsilon T, {\tilde{T}}].\) Since u is generated by the procedure relaxed dynamic programming, it follows that \(u \in \{ 0 \} \cup \varDelta _{\tilde{\varLambda } {\setminus } \varLambda ^E}.\) This further implies that there exists \(\delta \in \{ 0 \} \cup \varDelta ^*_{{\tilde{\varLambda }} {\setminus } \varLambda ^E}\) such that \({\tilde{T}} - \epsilon T \le y + \delta \le {\tilde{T}}.\) This completes the proof of Lemma 3.8. \(\square \)

1.9 Proof of Lemma 3.9

Proof

First, as argued in the proof of Lemma 3.8, we know that the largest number in \(\varDelta _{{\tilde{\varLambda }}}\) associated with the target \({\tilde{T}}\) must also lie in the interval \([{\tilde{T}} - \epsilon T , {\tilde{T}}].\) By Lemma 3.7, we can split \({\tilde{\varLambda }}\) into \({\tilde{\varLambda }}_1\) and \(\tilde{\varLambda }_2\) as in lines 2 and 3 of the procedure divide and conquer, and find \(u_1 \in \{0\} \cup \varDelta _{{\tilde{\varLambda }}_1}\) and \(u_2 \in \{0\} \cup \varDelta _{{\tilde{\varLambda }}_2}\) such that \({\tilde{T}} - \epsilon T \le u_1 + u_2 \le {\tilde{T}}\). Without loss of generality, assume both \(u_1\) and \(u_2\) are positive. Otherwise, if \(u_1=0\) (\(u_2=0\)), then we can remove the intervals in \({\tilde{\varLambda }}_1\) (\({\tilde{\varLambda }}_2\)) and split \({\tilde{\varLambda }}_2\) (\({\tilde{\varLambda }}_1\)) again. This implies that there exists positive \(u_1 \in \varDelta ^*_{{\tilde{\varLambda }}_1}\) satisfying \(u_1 \in [{\tilde{T}} - u_2 - \epsilon T, {\tilde{T}} - u_2].\) Moreover, by Lemma 3.8, we know that there exists \(\delta \in \{0\} \cup \varDelta ^*_{{\tilde{\varLambda }}_1 {\setminus } \varLambda ^E}\) such that \(\delta \in [{\tilde{T}}-u_2-y_1^B-\epsilon T, {\tilde{T}} -u_2-y_1^B],\) where \(y_1^B\ge 1\) is the output of line 7 of the procedure divide and conquer. This in turn shows that the assumption of Lemma 3.9 is satisfied for the procedure divide and conquer in line 10. Since at least one interval is removed after each recursive call, the recursive calls of the procedure divide and conquer will eventually end. Consequently, we get \(y_1^{DC} \in [{\tilde{T}} - u_2 - y_1^B - \epsilon T, {\tilde{T}} - u_2 - y_1^B]\), which is equivalent to \(u_2 \in [{\tilde{T}} - y_1^B - y_1^{DC} - \epsilon T, {\tilde{T}} - y_1^B - y_1^{DC} ]\). Similar analysis applies to lines 13, 14, and 17 of the procedure divide and conquer. This completes the proof of Lemma 3.9. \(\square \)

1.10 Proof of Theorem 3.10

Proof

By enumerating all intervals \(\left\{ [a_{i,1},a_{i,2}]\right\} _{i=1}^n,\) Algorithm 3.3 can successfully find the (possible) midrange interval \([a_{m,1},a_{m,2}]\) and the largest number \({\hat{\delta }}\) in \(\varDelta _{\varLambda {\setminus }\varLambda ^E}\) associated with the target \(T-{a_{m,1}}.\) Suppose that \(\delta ^*\) is the largest number in \(\varDelta ^*_{\varLambda {\setminus }\varLambda ^E}\) associated with the target \(T-{a_{m,1}}.\) Then, by Lemma 3.5, we have either \({\hat{\delta }}=\delta ^*\) or \(T-{a_{m,1}}-\epsilon T\le \hat{\delta }\le \delta ^*\le T-{a_{m,1}}.\) We consider the following four cases.

Case A \({\hat{\delta }}+\epsilon T<T-{a_{m,1}}\) and \(\hat{\delta }=\delta ^*.\) In this case, we have \({\hat{\delta }} \le \delta ^*\le {\hat{\delta }}+\epsilon T.\) Using Lemma 3.9, we immediately obtain \({\hat{\delta }}\le {\hat{T}}^A\le \hat{\delta }+\epsilon T,\) where \({\hat{T}}^A\) is the output of the procedure divide and conquer \(\left( \varLambda {\setminus } \varLambda ^E, \hat{\delta }+\epsilon T\right) \) in line 27 of Algorithm 3.3. Since \({\hat{\delta }}=\delta ^*\) is the largest number in \(\varDelta ^*_{\varLambda {\setminus }\varLambda ^E}\) associated with the target \(T-{a_{m,1}},\) it follows that \(\hat{T}^A={\hat{\delta }}=\delta ^*.\) Hence,

$$\begin{aligned} T^A={\hat{T}}^A+\min \left\{ a_{m,2},T-\hat{T}^A\right\} =\delta ^*+\min \left\{ a_{m,2},T-\delta ^*\right\} \end{aligned}$$

is the optimal value of the ISSP.

Case B \({\hat{\delta }}+\epsilon T<T-{a_{m,1}}\) and \(T-{a_{m,1}}-\epsilon T\le {\hat{\delta }}\le \delta ^*\le T-{a_{m,1}}.\) This case cannot happen, since the two conditions contradict each other.

Case C \({\hat{\delta }}+\epsilon T\ge T-{a_{m,1}}\) and \(\hat{\delta }=\delta ^*.\) From these two conditions and the fact that \(\delta ^*\) is the largest number in \(\varDelta ^*_{\varLambda {\setminus }\varLambda ^E}\) associated with the target \(T-{a_{m,1}},\) we obtain \(\delta ^*\in [T-{a_{m,1}}-\epsilon T, T-{a_{m,1}}].\) Then, by Lemma 3.9, we know that the returned approximate value of \({\hat{T}}^A\) satisfies \(T-{a_{m,1}}\ge {\hat{T}}^A \ge T-{a_{m,1}}-\epsilon T,\) and \(T^A=\hat{T}^A+\min \left\{ a_{m,2},T-{\hat{T}}^A\right\} \ge {\hat{T}}^A+a_{m,1}\ge T-\epsilon T.\)

Case D \({\hat{\delta }}+\epsilon T\ge T-{a_{m,1}}\) and \(T-{a_{m,1}}-\epsilon T\le {\hat{\delta }}\le \delta ^*\le T-{a_{m,1}}.\) The same argument as in the above Case C shows that \(T-{a_{m,1}}\ge {\hat{T}}^A \ge T-{a_{m,1}}-\epsilon T\) and \(T^A\ge T-\epsilon T.\)

From the above analysis, we conclude that Algorithm 3.3 returns either an optimal solution or an approximate solution with objective value greater than or equal to \(T-\epsilon T.\) The proof is completed. \(\square \)

1.11 Proof of Theorem 3.11

Proof

We analyze the time and space complexities of Algorithm 3.3 separately. We first consider the time complexity of Algorithm 3.3. The time complexity of sorting n intervals by length (line 1 of Algorithm 3.3) is \({{\mathcal {O}}}(n \log n)\). The procedure relaxed dynamic programming is called many times in Algorithm 3.3, and performing it is the dominant computational cost in the recursive framework of the procedure divide and conquer. It is simple to see that the time complexity of performing the procedure relaxed dynamic programming with inputs \(({\tilde{\varLambda }},{\tilde{T}})\) is \({{\mathcal {O}}}\left( {\tilde{n}} \tilde{l}\right) ,\) where \({\tilde{n}}=|{\tilde{\varLambda }}|\) and \(\tilde{l}=\left\lceil \frac{\displaystyle {\tilde{T}} }{\displaystyle \epsilon T } \right\rceil .\)

Now, we bound the number of times the procedure divide and conquer is performed. To do so, let us denote the root node of the recursive tree as level 0. Then, there are at most \(2^l\le n\) nodes in the lth level of the recursive tree. For ease of presentation, we assume that there are exactly \(2^l\) nodes in the lth level and denote the targets of these \(2^l\) nodes by \({\tilde{T}}_{l,1},\) \({\tilde{T}}_{l,2},\) \(\ldots , \tilde{T}_{l,2^l}\) and the item sets of these \(2^l\) nodes by \(\tilde{\varLambda }_{l,1},\) \({\tilde{\varLambda }}_{l,2},\) \(\ldots , \tilde{\varLambda }_{l,2^l}\) for all \(l=1,2,\ldots ,\lceil \log n\rceil .\) Then, we must have \(\sum \nolimits _{i=1}^{2^l} {\tilde{T}}_{l,i} \le T\) for all \(l=1,2,\ldots ,\lceil \log n\rceil \) and \(\left| \tilde{\varLambda }_{l,i}\right| ={{\mathcal {O}}}\left( \frac{n}{2^l}\right) \) for all \(i=1,2,\ldots ,2^l\) and \(l=1,2,\ldots , \lceil \log n\rceil .\) Therefore, the total time complexity of calling the procedure divide and conquer in Algorithm 3.3 is

$$\begin{aligned} \sum _{l=0}^{\lceil \log n\rceil }\sum _{i=1}^{2^l}\mathcal{O}\left( \left\lceil \frac{{\tilde{T}}_{l,i}}{\epsilon T}\right\rceil \left| {\tilde{\varLambda }}_{l,i}\right| \right) \le \sum _{l=0}^{\lceil \log n\rceil }\mathcal{O}\left( \frac{n}{2^l}\left( 2^l+\frac{1}{\epsilon }\right) \right) ={{\mathcal {O}}}\left( n\log n+n/\epsilon \right) , \end{aligned}$$

where the inequality uses \(\sum \nolimits _{i=1}^{2^l} \left\lceil {\tilde{T}}_{l,i}/(\epsilon T)\right\rceil \le 2^l+1/\epsilon \) (a consequence of \(\sum \nolimits _{i=1}^{2^l} {\tilde{T}}_{l,i} \le T\)) and \(\left| {\tilde{\varLambda }}_{l,i}\right| ={{\mathcal {O}}}\left( n/2^l\right) .\) Hence, the total time complexity of Algorithm 3.3 is \({{\mathcal {O}}}\left( n \max \left\{ 1 / \epsilon ,\log n\right\} \right) .\)

Next, we consider the space complexity of Algorithm 3.3. It takes \({{\mathcal {O}}}(n)\) space to store the interval set \(\left\{ [a_{i,1}, a_{i,2}]\right\} _{i=1}^n.\) The space required to store the relaxed dynamic programming arrays \(\delta _1^-(\cdot )\), \(\delta _1^+(\cdot )\), \(\delta _2^-(\cdot )\), \(\delta _2^+(\cdot )\), \(d_{1,1}(\cdot )\), \(d_{1,2}(\cdot )\), \(d_{2,1}(\cdot )\), \(d_{2,2}(\cdot )\) is \(\mathcal{O}(1/\epsilon )\). Since the memory space can be reused in the recursive calls of the procedure divide and conquer, we conclude that the total space complexity of Algorithm 3.3 is \({{\mathcal {O}}}(n+1/\epsilon ).\) \(\square \)

Appendix 2: An illustration of Algorithms 3.2 and 3.3

To make Algorithms 3.2 and 3.3 clearer, we illustrate how they solve the following ISSP instance:

$$\begin{aligned}&T=100,\quad n=4,\\&{}[a_{1,1},a_{1,2}]=[10,20],\quad [a_{2,1},a_{2,2}]=[10,25],\\&{}[a_{3,1},a_{3,2}]=[60,85],\quad [a_{4,1},a_{4,2}]=[20,50]. \end{aligned}$$

If Algorithm 3.2 is applied to solve the above instance: when \(i=1,\) executing lines 4–12 gives

$$\begin{aligned} \delta ^*=0,\quad T^*=20,\quad m=1,\quad \varDelta _1^*=\left\{ 10,20\right\} ; \end{aligned}$$

then \(i=2\) and executing lines 4–12 gives

$$\begin{aligned} \delta ^*=20,\quad T^*=45,\quad m=2,\quad \varDelta _2^*=\left\{ 10,20,25,30,35,45\right\} ; \end{aligned}$$

then \(i=3\) and executing lines 4–12 gives \(\delta ^*=35,\) \(T^*=100,\) and \(m=3.\) Since \(T^*=T=100\) when \(i=3,\) Algorithm 3.2 goes to line 14 directly without computing

$$\begin{aligned} \varDelta _3^*=\left\{ 10,20,25,30,35,45,60,70,80,85,90,95\right\} . \end{aligned}$$

Then, executing lines 14–17 gives \(\delta ^*=35\) and returns the optimal solution

$$\begin{aligned} x_1^*=10,\quad x_2^*=25,\quad x_3^*=65,\quad x_4^*=0. \end{aligned}$$

It can be seen that \([a_{1,1},a_{1,2}]\) is a left anchored interval, \([a_{2,1},a_{2,2}]\) is a right anchored interval, \([a_{3,1},a_{3,2}]\) is the only midrange interval, and there are no left/right anchored intervals following the midrange interval. Therefore, the solution returned by Algorithm 3.2 satisfies the property in Lemma 3.4.
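For comparison with the relaxed sets used by Algorithm 3.3 below, the exact sets \(\varDelta _i^*\) in this walkthrough can be reproduced by the following unrelaxed recursion (a sketch in the spirit of Algorithm 3.2, not its verbatim pseudocode):

```python
def exact_endpoint_sums(intervals, T):
    """All endpoint sums not exceeding T (the exact counterpart of the
    relaxed sets; its size may grow exponentially in general)."""
    delta = set()
    for a1, a2 in intervals:
        new = {a1, a2} | {d + v for d in delta for v in (a1, a2)}
        delta |= {s for s in new if s <= T}
    return sorted(delta)

print(exact_endpoint_sums([(10, 20), (10, 25)], 100))
# [10, 20, 25, 30, 35, 45] -- the set Delta_2^* above
print(exact_endpoint_sums([(10, 20), (10, 25), (60, 85)], 100))
# [10, 20, 25, 30, 35, 45, 60, 70, 80, 85, 90, 95] -- Delta_3^*
```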

If Algorithm 3.3 with \(\epsilon =0.2\) (and thus \(l=5\)) is applied to solve the above instance: when \(i=1,\) executing lines 5–23 gives

$$\begin{aligned} {\bar{\delta }}=0,\quad {\hat{T}}=20,\quad {\hat{\delta }}=0,\quad m=1,\quad \tilde{\varDelta }=\left\{ 10,20\right\} ,\quad \delta ^{-}(1)=10,\quad \delta ^{+}(1)=20, \end{aligned}$$

and

$$\begin{aligned} \delta ^{-}(2)=\delta ^{+}(2)=\delta ^{-}(3)=\delta ^{+}(3)=\delta ^{-}(4)=\delta ^{+}(4)=\delta ^{-}(5)=\delta ^{+}(5)=0; \end{aligned}$$

then \(i=2\) and executing lines 5–23 gives

$$\begin{aligned}&{\bar{\delta }}=20,\quad {\hat{T}}=45,\quad {\hat{\delta }}=20,\quad m=2,\quad \tilde{\varDelta }=\left\{ 10,20,25,30,35,45\right\} ,\\&\quad \delta ^{-}(1)=10,\quad \delta ^{+}(1)=20, \end{aligned}$$

and

$$\begin{aligned} \delta ^{-}(2)=25,\quad \delta ^{+}(2)=35,\quad \delta ^{-}(3)=\delta ^{+}(3)=45,\quad \delta ^{-}(4)=\delta ^{+}(4)=\delta ^{-}(5)=\delta ^{+}(5)=0; \end{aligned}$$

then \(i=3\) and executing lines 5–23 gives

$$\begin{aligned} {\bar{\delta }}=35,\quad {\hat{T}}=100,\quad {\hat{\delta }}=35,\quad m=3. \end{aligned}$$

Since \({\hat{T}}=T=100\) when \(i=3,\) Algorithm 3.3 goes to line 25 directly without computing

$$\begin{aligned}&{\tilde{\varDelta }}=\left\{ 60,70,80,85,95\right\} ,\quad \delta ^{-}(1) =10,\quad \delta ^{+}(1)=20,\quad \delta ^{-}(2)=25,\quad \delta ^{+}(2)=35,\\&\delta ^{-}(3)=45,\quad \delta ^{+}(3)=60,\quad \delta ^{-}(4)=70,\quad \delta ^{+}(4) =80,\quad \delta ^{-}(5)=85,\quad \delta ^{+}(5)=95. \end{aligned}$$

Then, executing line 25 gives \(\varLambda ^{E}=\left\{ [60,85], [20,50]\right\} \) and Algorithm 3.3 calls the procedure divide and conquer \(\left( \varLambda {\setminus }\varLambda ^E, \min \left\{ {\hat{\delta }} + \epsilon T,T-a_{m,1}\right\} \right) ,\) where

$$\begin{aligned} \varLambda {\setminus }\varLambda ^E=\left\{ [10,20], [10,25]\right\} \end{aligned}$$

and

$$\begin{aligned} \min \left\{ {\hat{\delta }} + \epsilon T,T-a_{m,1}\right\} =\min \left\{ 35+0.2\times 100, 100-60\right\} =40. \end{aligned}$$

With these inputs, line 1 of the procedure divide and conquer gives \(\tilde{\varLambda }_1=\left\{ [10,20]\right\} \) and \(\tilde{\varLambda }_2=\left\{ [10,25]\right\} ;\) line 2 of the procedure divide and conquer gives

$$\begin{aligned}&\delta _1^{-}(1)=10,\quad \delta _1^{+}(1)=20,\quad \delta _1^{-}(2)=\delta _1^{+}(2)=0,\\&d_{1,1}(\delta _1^{-}(1))=d_{1,2}(\delta _1^{-}(1))=d_{1,1}(\delta _1^{+}(1))=1,\quad d_{1,2}(\delta _1^{+}(1))=2; \end{aligned}$$

line 3 of the procedure divide and conquer gives

$$\begin{aligned}&\delta _2^{-}(1)=\delta _2^{+}(1)=10,\quad \delta _2^{-}(2)=\delta _2^{+}(2)=25,\\&d_{2,1}(\delta _2^{-}(1))=d_{2,1}(\delta _2^{+}(1))=2,\quad d_{2,2}(\delta _2^{-}(1))=d_{2,2}(\delta _2^{+}(1))=1,\\&d_{2,1}(\delta _2^{-}(2))=d_{2,2}(\delta _2^{-}(2))=d_{2,1}(\delta _2^{+}(2))=d_{2,2}(\delta _2^{+}(2))=2; \end{aligned}$$

line 4 of the procedure divide and conquer finds \((u_1,u_2)=(0,25)\) satisfying \(u_1\in \left\{ 0,10,20\right\} ,\) \(u_2\in \left\{ 0,10,25\right\} ,\) and \(20={\tilde{T}}-\epsilon T\le u_1+u_2\le {\tilde{T}}=40.\) Since both \({\tilde{\varLambda }}_1\) and \({\tilde{\varLambda }}_2\) contain only one interval, the procedure backtracking can successfully return the approximate solution. More specifically, the procedure divide and conquer skips lines 7 and 10; executes line 14, which gives \(y_2^B=25\) and \(\varLambda ^E=\left\{ [a_{2,1},a_{2,2}]\right\} ;\) and skips line 17. Then, line 27 of Algorithm 3.3 returns \(x_1^A=0\), \(x_2^A=25\), and \({\hat{T}}^A=25;\) line 28 of Algorithm 3.3 returns \(x_3^A=75;\) line 29 of Algorithm 3.3 returns \(x_4^A=0;\) and line 30 of Algorithm 3.3 returns \(T^A=100.\) Again, the \((1-\epsilon )\)-optimal solution returned by Algorithm 3.3 (here actually an optimal solution) satisfies the property in Lemma 3.4, i.e., \([a_{2,1},a_{2,2}]\) is a right anchored interval, \([a_{3,1},a_{3,2}]\) is the only midrange interval, and there are no left/right anchored intervals following the midrange interval.

Two remarks on Algorithms 3.2 and 3.3 are in order.

First, although Algorithm 3.3 finds an optimal solution for the above ISSP instance, it cannot be guaranteed to do so for the general ISSP. The reason is that Algorithm 3.3 partitions the interval (0, T] into \(\lceil 1/\epsilon \rceil \) subintervals and, at each iteration, stores only the smallest and largest values lying in each subinterval. This is sharply different from Algorithm 3.2, where all values in \(\varDelta _i^*\) are stored. For the above instance, when \(i=2\), we have

$$\begin{aligned} \left\{ \delta ^{-}(k),\delta ^{+}(k)\right\} _{k=1}^{5} =\left\{ 10,20,25,35,45\right\} \subset \varDelta _2^*; \end{aligned}$$

and when \(i=3,\) we have

$$\begin{aligned} \left\{ \delta ^{-}(k),\delta ^{+}(k)\right\} _{k=1}^{5} =\left\{ 10,20,25,35,45,60,70,80,85,95\right\} \subset \varDelta _3^*. \end{aligned}$$

Second, as mentioned below Lemma 3.7, the pair \((u_1,u_2)\) satisfying the inequality \({\tilde{T}}-\epsilon T\le u_1+u_2\le {\tilde{T}}\) is generally not unique and different choices of the pair \((u_1,u_2)\) might lead to different approximate solutions. For example, \((u_1,u_2)\) in the above instance can also be (10, 10),  which results in the approximate solution

$$\begin{aligned} x_1^A=10,\quad x_2^A=10,\quad x_3^A=80,\quad x_4^A=0; \end{aligned}$$

or can also be (10, 25),  which results in the approximate solution

$$\begin{aligned} x_1^A=10,\quad x_2^A=25,\quad x_3^A=65,\quad x_4^A=0; \end{aligned}$$

or can also be (20, 0),  which results in the approximate solution

$$\begin{aligned} x_1^A=20,\quad x_2^A=0,\quad x_3^A=80,\quad x_4^A=0. \end{aligned}$$
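A four-line enumeration confirms this remark for the instance at hand (the feasibility window is \([{\tilde{T}}-\epsilon T,{\tilde{T}}]=[20,40]\)):

```python
pairs = [(u1, u2)
         for u1 in (0, 10, 20) for u2 in (0, 10, 25)
         if 20 <= u1 + u2 <= 40]
print(pairs)  # [(0, 25), (10, 10), (10, 25), (20, 0), (20, 10)]
```

Note that (20, 10) is a fifth feasible pair besides the four discussed above.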


Cite this article

Diao, R., Liu, YF. & Dai, YH. A new fully polynomial time approximation scheme for the interval subset sum problem. J Glob Optim 68, 749–775 (2017). https://doi.org/10.1007/s10898-017-0514-0

