The Prophet Inequality and Pandora's Box problems are fundamental stochastic problems with applications in Mechanism Design, Online Algorithms, Stochastic Optimization, Optimal Stopping, and Operations Research. A common assumption in these works is that the probability distributions of the $n$ underlying random variables are given as input to the algorithm. Since in practice these distributions need to be learned, we initiate the study of such stochastic problems in the Multi-Armed Bandits model. In the Multi-Armed Bandits model we interact with $n$ unknown distributions over $T$ rounds: in round $t$ we play a policy $x^{(t)}$ and receive partial (bandit) feedback on the performance of $x^{(t)}$. The goal is to minimize the regret, which is the difference over $T$ rounds between the total value of the optimal algorithm that knows the distributions and the total value of our algorithm that learns the distributions from the partial feedback. Our main results give near-optimal $\tilde{O}(\mathsf{poly}(n)\sqrt{T})$ total regret algorithms for both Prophet Inequality and Pandora's Box. Our proofs proceed by maintaining confidence intervals on the unknown indices of the optimal policy. The exploration-exploitation tradeoff prevents us from directly refining these confidence intervals, so the main technique is to design a regret upper bound that is learnable while playing low-regret Bandit policies.
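To illustrate the interaction protocol and regret notion described above, the following is a minimal sketch of the Multi-Armed Bandits loop with confidence intervals on the unknown means. It uses a standard UCB strategy on Bernoulli arms purely for illustration; it is not the paper's algorithm for Prophet Inequality or Pandora's Box, and all names here are hypothetical.

```python
import math
import random

def ucb_regret(means, T, seed=0):
    """Simulate T rounds against n unknown Bernoulli distributions,
    maintaining upper confidence bounds on each unknown mean.
    Returns cumulative (pseudo-)regret vs. the best arm in hindsight."""
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n        # number of pulls per arm
    sums = [0.0] * n        # total reward observed per arm
    best = max(means)       # benchmark: value of the optimal fixed policy
    regret = 0.0
    for t in range(1, T + 1):
        if t <= n:
            arm = t - 1     # play each arm once to initialize estimates
        else:
            # Optimism: play the arm with the highest upper confidence bound.
            arm = max(
                range(n),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]),
            )
        # Bandit feedback: we only observe the reward of the arm we played.
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]   # per-round gap to the benchmark
    return regret
```

For a well-separated instance, the cumulative regret of this sketch grows only logarithmically in $T$, far below the $\tilde{O}(\sqrt{T})$ worst-case guarantees discussed in the abstract.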