Optimization as Estimation with Gaussian Processes in Bandit Settings

Zi Wang, Bolei Zhou and Stefanie Jegelka

MIT CSAIL

ziw@csail.mit.edu; bolei@mit.edu; stefje@csail.mit.edu

Accepted as oral presentation (6% acceptance rate) at International Conference on Artificial Intelligence and Statistics (AISTATS), 2016

Abstract

Recently, there has been rising interest in Bayesian optimization -- the optimization of an unknown function whose smoothness assumptions are typically expressed via a Gaussian process (GP) prior. We study an optimization strategy that directly uses an estimate of the argmax of the function. This strategy offers both practical and theoretical advantages: no tradeoff parameter needs to be selected, and, moreover, we establish close connections to the popular GP-UCB and GP-PI strategies. Our approach can be understood as automatically and adaptively trading off exploration and exploitation in GP-UCB and GP-PI. We illustrate the effects of this adaptive tuning via bounds on the regret as well as an extensive empirical evaluation on robotics and vision tasks, demonstrating the robustness of this strategy for a range of performance criteria.
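The acquisition strategies the abstract compares can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's implementation: it assumes a 1-D squared-exponential GP, a fixed candidate grid, and that an estimate `m_hat` of the function's maximum is already available (the paper derives this estimate; here it is simply passed in). The key point it illustrates is that the estimation-based rule is PI with the incumbent replaced by the estimated max, which removes the hand-tuned tradeoff parameter of UCB.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential (RBF) kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression posterior (mean and std) via Cholesky.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(x_test, x_test)) - np.sum(v * v, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

# Each rule returns the index of the next candidate to query.
def ucb(mu, sigma, beta=2.0):
    # GP-UCB: explicit exploration weight beta must be chosen by hand.
    return int(np.argmax(mu + beta * sigma))

def pi(mu, sigma, y_best):
    # GP-PI: maximize the (monotone score of the) probability of
    # improving on the current incumbent y_best.
    return int(np.argmax((mu - y_best) / sigma))

def est(mu, sigma, m_hat):
    # Estimation-based rule: like PI, but target an estimate m_hat of
    # the global max instead of the incumbent -- no tradeoff parameter.
    # argmin of (m_hat - mu)/sigma == argmax of (mu - m_hat)/sigma.
    return int(np.argmin((m_hat - mu) / sigma))
```

A short usage example: with observations at 0, 0.5, and 1 and a peak at 0.5, each rule selects an index into the candidate grid; UCB and PI need `beta` or the incumbent, while `est` needs only the max estimate.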

Full text

arXiv

Code

GitHub

Approximate BibTeX Entry

@inproceedings{wang2015est,
    Year = {2016},
    Booktitle = {International Conference on Artificial Intelligence and Statistics (AISTATS)},
    Author = {Wang, Zi and Zhou, Bolei and Jegelka, Stefanie},
    Title = {Optimization as Estimation with Gaussian Processes in Bandit Settings}
}