We consider a constrained, pure-exploration, stochastic multi-armed bandit formulation under a fixed budget. Each arm is associated with an unknown, possibly multi-dimensional distribution and is described by multiple attributes that are functions of this distribution. The aim is to optimize a particular attribute subject to user-defined constraints on the other attributes. This framework models applications such as financial portfolio optimization, where it is natural to perform risk-constrained maximization of mean return. We assume that the attributes can be estimated using samples from the arms' distributions and that these estimators satisfy suitable concentration inequalities. We propose an algorithm called \textsc{Constrained-SR} based on the Successive Rejects framework, which recommends an optimal arm and flags the instance as feasible or infeasible. A key feature of this algorithm is that it is designed on the basis of an information-theoretic lower bound for two-armed instances. We characterize an instance-dependent upper bound on the probability of error under \textsc{Constrained-SR}, which decays exponentially with the budget. We further show that the associated decay rate is nearly optimal relative to an information-theoretic lower bound in certain special cases.
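As background, the classic Successive Rejects schedule of Audibert and Bubeck (2010), on which the \textsc{Constrained-SR} framework builds, splits a budget of $n$ pulls over $K$ arms into $K-1$ phases and eliminates the empirically worst arm at the end of each phase. A minimal sketch of that standard phase-length computation is below; it illustrates only the unconstrained budget allocation, not the constrained recommendation rule of the paper.

```python
import math

def sr_phase_lengths(budget, K):
    """Cumulative per-arm pull counts n_1 <= ... <= n_{K-1} for
    standard Successive Rejects with K arms and a total budget.

    Uses log-bar(K) = 1/2 + sum_{i=2}^{K} 1/i and
    n_k = ceil((budget - K) / (log-bar(K) * (K + 1 - k))).
    """
    logbar = 0.5 + sum(1.0 / i for i in range(2, K + 1))
    n = [0]  # n_0 = 0 by convention
    for k in range(1, K):
        n.append(math.ceil((budget - K) / (logbar * (K + 1 - k))))
    return n

# In phase k, each of the K + 1 - k surviving arms is pulled
# n[k] - n[k-1] additional times, then the empirically worst
# surviving arm is rejected; total pulls never exceed the budget.
```

For example, with a budget of 100 and 4 arms the cumulative counts are $[0, 16, 21, 31]$, for a total of $4\cdot 16 + 3\cdot 5 + 2\cdot 10 = 99 \le 100$ pulls.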