This paper unifies the design and simplifies the analysis of risk-averse Thompson sampling algorithms for the multi-armed bandit problem for a generic class of continuous risk functionals $\rho$. Using the contraction principle in the theory of large deviations, we prove novel concentration bounds for these continuous risk functionals. In contrast to existing works, in which the bounds depend on the samples themselves, our bounds depend only on the number of samples. This allows us to sidestep significant analytical challenges and unify the proofs of the regret bounds of existing Thompson sampling-based algorithms. We show that a wide class of risk functionals, as well as "nice" functions of them, satisfy the continuity condition. Using our newly developed analytical toolkit, we analyse the algorithms $\rho$-MTS (for multinomial distributions) and $\rho$-NPTS (for bounded distributions) and prove that they admit asymptotically optimal regret bounds among risk-averse algorithms under the mean-variance, CVaR, and other ubiquitous risk measures, as well as a host of newly synthesized risk measures. Numerical simulations show that our bounds are reasonably tight vis-\`a-vis algorithm-independent lower bounds.
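To make the setting concrete, the following is a minimal, hypothetical sketch of a $\rho$-NPTS-style loop for bounded rewards, with the lower-tail CVaR standing in for the generic risk functional $\rho$. The function names (`cvar`, `rho_npts`), the Dirichlet re-weighting step, the warm-start size, and all parameters are illustrative assumptions, not the paper's actual algorithm specification.

```python
import numpy as np

def cvar(samples, alpha=0.25):
    """Empirical lower-tail CVaR_alpha: the mean of the worst
    alpha-fraction of outcomes (an illustrative risk functional)."""
    q = np.quantile(samples, alpha)
    tail = samples[samples <= q]
    return tail.mean()

def rho_npts(arms, horizon, alpha=0.25, rng=None):
    """Hypothetical sketch of risk-averse non-parametric Thompson sampling
    for arms with rewards bounded in [0, 1]. Each round, re-weight each
    arm's observed rewards with Dirichlet weights (the non-parametric
    posterior sample) and play the arm whose re-weighted empirical
    distribution scores best under the risk functional (here: CVaR)."""
    rng = np.random.default_rng(rng)
    # Warm start: pull every arm twice so each history is non-degenerate.
    history = [[arm(rng) for _ in range(2)] for arm in arms]
    for _ in range(horizon):
        scores = []
        for obs in history:
            obs = np.asarray(obs)
            w = rng.dirichlet(np.ones(len(obs)))       # posterior re-weighting
            resampled = rng.choice(obs, size=len(obs), p=w)
            scores.append(cvar(resampled, alpha))
        k = int(np.argmax(scores))
        history[k].append(arms[k](rng))                # pull chosen arm
    return [len(h) for h in history]                   # pull counts per arm

# Usage: a low-variance "safe" arm vs a high-variance "risky" arm.
safe = lambda rng: rng.beta(50, 50)    # tightly concentrated near 0.5
risky = lambda rng: rng.beta(1, 1)     # uniform on [0, 1]
pulls = rho_npts([safe, risky], horizon=200, rng=0)
```

A risk-neutral Thompson sampler would score each re-weighted sample by its mean; swapping in a continuous functional such as CVaR is the only change needed, which is the design unification the abstract refers to.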