ABSTRACT
In an online linear optimization problem, on each period t, an online algorithm chooses st ∈ S from a fixed (possibly infinite) set S of feasible decisions. Nature (who may be adversarial) chooses a weight vector wt ∈ R^n, and the algorithm incurs cost c(st, wt), where c is a fixed cost function that is linear in the weight vector. In the full-information setting, the vector wt is then revealed to the algorithm; in the bandit setting, only the cost experienced, c(st, wt), is revealed. The goal of the online algorithm is to perform nearly as well as the best fixed s ∈ S in hindsight. Many repeated decision-making problems with weights fit naturally into this framework, such as online shortest-path, online TSP, online clustering, and online weighted set cover.
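The protocol above can be made concrete with a small simulation. The following sketch (illustrative only, not from the paper) uses a toy finite decision set, linear cost c(s, w) = s · w, and a naive "follow the leader" learner that plays the decision with lowest cumulative cost so far, then measures regret against the best fixed decision in hindsight:

```python
import numpy as np

# Toy instance of the online linear optimization protocol (full-information).
# Decisions and weights are vectors in R^2; cost is the inner product s . w.
rng = np.random.default_rng(0)

S = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])  # small finite decision set
T = 100
weights = rng.uniform(0.0, 1.0, size=(T, 2))        # nature's weight vectors

total_cost = 0.0
cum_w = np.zeros(2)
for t in range(T):
    s_t = S[np.argmin(S @ cum_w)]    # "follow the leader": best on past weights
    total_cost += s_t @ weights[t]   # full information: w_t revealed after play
    cum_w += weights[t]

# Best single fixed decision chosen with the benefit of hindsight.
best_fixed = float(np.min(S @ weights.sum(axis=0)))
regret = total_cost - best_fixed
```

Note that plain follow-the-leader can suffer high regret against an adversarial sequence; the randomized and gradient-based algorithms cited in this paper are what give worst-case guarantees.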
Previously, it was shown how to convert any efficient exact offline optimization algorithm for such a problem into an efficient online algorithm, in both the full-information and the bandit settings, with average cost nearly as good as that of the best fixed s ∈ S in hindsight. However, in the case where the offline algorithm is an approximation algorithm with ratio α > 1, the previous approach worked only for special types of approximation algorithms. We show how to convert any offline approximation algorithm for a linear optimization problem into a corresponding online approximation algorithm, with only a polynomial blowup in runtime. If the offline algorithm has an α-approximation guarantee, then the expected cost of the online algorithm on any sequence is not much larger than α times that of the best s ∈ S, where the best is chosen with the benefit of hindsight. Our main innovation is combining Zinkevich's algorithm for convex optimization with a geometric transformation that can be applied to any approximation algorithm. Standard techniques generalize the above result to the bandit setting, except that a "Barycentric Spanner" for the problem is also (provably) necessary as input. Our algorithm can also be viewed as a method for playing large repeated games, where one can compute only approximate best responses, rather than exact best responses.
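Zinkevich's algorithm, the convex-optimization ingredient named above, is online (projected) gradient descent. A minimal sketch for linear costs over the probability simplex follows; the step sizes 1/√t and the simplex domain are standard illustrative choices, not taken from this paper:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]                 # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

# Online gradient descent (Zinkevich-style) against random linear costs.
rng = np.random.default_rng(1)
T, n = 200, 3
x = np.full(n, 1.0 / n)                  # start at the simplex center
total, cum_w = 0.0, np.zeros(n)
for t in range(1, T + 1):
    w_t = rng.uniform(0.0, 1.0, n)       # linear cost gradient for round t
    total += x @ w_t
    cum_w += w_t
    x = project_simplex(x - w_t / np.sqrt(t))  # gradient step, then project

best_fixed = float(cum_w.min())          # best simplex vertex in hindsight
regret = total - best_fixed
```

For linear costs over a bounded convex set, this scheme guarantees regret growing only as O(√T), which is what makes it a useful building block when combined with the geometric transformation of an approximation algorithm.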
- B. Awerbuch and R. Kleinberg. Adaptive routing with end-to-end feedback: Distributed learning and geometric approaches. In Proceedings of the 36th ACM Symposium on Theory of Computing (STOC), 2004.
- M.-F. Balcan and A. Blum. Approximation algorithms and online mechanisms for item pricing. In Proceedings of the 7th ACM Conference on Electronic Commerce (EC), 2006.
- R. Carr and S. Vempala. Randomized metarounding. Random Structures & Algorithms, 20(3):343--352, 2002.
- D. Chakrabarty, A. Mehta, and V. Vazirani. Design is as easy as optimization. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP), 2006.
- V. Dani and T. P. Hayes. Robbing the bandit: Less regret in online geometric optimization against an adaptive adversary. In Proceedings of the 17th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2006.
- M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):1115--1145, 1995.
- J. Hannan. Approximation to Bayes risk in repeated play. In M. Dresher, A. Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, volume III, pages 97--139. Princeton University Press, 1957.
- A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291--307, 2005.
- H. McMahan and A. Blum. Online geometric optimization in the bandit setting against an adaptive adversary. In Proceedings of the 17th Annual Conference on Learning Theory (COLT), 2004.
- H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, volume 55, 1952.
- M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML), 2003.
Index Terms: Playing games with approximation algorithms