Robustness of stochastic bandit policies
Salomon, Antoine; Audibert, Jean-Yves (2014). "Robustness of stochastic bandit policies". Theoretical Computer Science 519: pp. 46–67. http://dx.doi.org/10.1016/j.tcs.2013.09.019
Type: Article accepted for publication or published
External document link: http://hal.inria.fr/hal-00821670
Journal name: Theoretical Computer Science
Abstract (EN): This paper studies the deviations of the regret in a stochastic multi-armed bandit problem. When the total number of plays n is known beforehand by the agent, Audibert et al. exhibit a policy such that, with probability at least 1−1/n, the regret of the policy is of order log n. They have also shown that such a property is not shared by the popular ucb1 policy of Auer et al. This work first answers an open question: it extends this negative result to any anytime policy (i.e. any policy that does not take the number of plays n into account). Another contribution of this paper is to design robust anytime policies for specific multi-armed bandit problems in which some restrictions are put on the set of possible distributions of the different arms. We also show that, for any policy (i.e. even when the number of plays n is known), the regret is of order log n with probability at least 1−1/n, so that the policy of Audibert et al. has the best possible deviation properties.
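For context on the setting in the abstract, the ucb1 policy of Auer et al. is an anytime index policy: at each round t it plays the arm maximizing the empirical mean plus an exploration bonus sqrt(2 ln t / n_i), where n_i is the number of plays of arm i. A minimal sketch on Bernoulli arms (the arm means, horizon, and function name below are illustrative assumptions, not taken from the paper):

```python
import math
import random

def ucb1(means, n, seed=0):
    """Run the ucb1 index policy on Bernoulli arms; return the pseudo-regret.

    means -- illustrative Bernoulli arm parameters (not from the paper)
    n     -- total number of plays (the horizon)
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k      # number of plays of each arm
    sums = [0.0] * k      # cumulative reward collected from each arm
    for t in range(1, n + 1):
        if t <= k:
            arm = t - 1   # play each arm once to initialize the indices
        else:
            # index = empirical mean + exploration bonus sqrt(2 ln t / n_i)
            arm = max(
                range(k),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]),
            )
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    # pseudo-regret: n * (best mean) minus the expected reward of the plays made
    return n * max(means) - sum(c * m for c, m in zip(counts, means))

regret = ucb1([0.5, 0.6], n=2000)
```

The paper's negative result concerns the deviations of this regret: while the expected regret of ucb1 is of order log n, a single run such as the one above can deviate far more than 1/n of the time, and no anytime policy can avoid this.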
Subjects / Keywords: Exploration–exploitation tradeoff; Multi-armed stochastic bandit; Regret deviations/risk
Related items (by title and author):
Lower Bounds and Selectivity of Weak-Consistent Policies in Stochastic Multi-Armed Bandit Problem. El Alaoui, Issam; Audibert, Jean-Yves; Salomon, Antoine (2013-01). Article accepted for publication or published.
Trimouille, Frédérique; Saint-Martin, Anne; Lerais, Frédéric; Klein, Tristan; Kerbouch, Jean-Yves; Estrade, Marc-Antoine; Beaujolin, Rachel; Méda, Dominique (2002). Working paper.