The contextual combinatorial semi-bandit problem with linear payoff functions is a decision-making problem in which a learner chooses, in each round, a set of arms with given feature vectors under given constraints so as to maximize the sum of the rewards of the chosen arms. Several existing algorithms have regret bounds that are optimal with respect to the number of rounds $T$. However, there is a gap of $\tilde{O}(\max(\sqrt{d}, \sqrt{k}))$ between the current best upper and lower bounds, where $d$ is the dimension of the feature vectors, $k$ is the number of arms chosen in a round, and $\tilde{O}(\cdot)$ ignores logarithmic factors. The dependence on $k$ and $d$ is of practical importance because $k$ may be larger than $T$ in real-world applications such as recommender systems. In this paper, we fill this gap by improving the upper and lower bounds. More precisely, we show that the C${}^2$UCB algorithm proposed by Qin, Chen, and Zhu (2014) attains the optimal regret bound $\tilde{O}(d\sqrt{kT} + dk)$ under partition matroid constraints. For general constraints, we propose an algorithm that modifies the reward estimates of arms in the C${}^2$UCB algorithm, and we demonstrate that it enjoys the optimal regret bound for a more general problem that can take other objectives into account simultaneously. We also show that our technique is applicable to related problems. Numerical experiments support our theoretical results and considerations.
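To make the setting concrete, the following is a minimal sketch of the C${}^2$UCB algorithm of Qin, Chen, and Zhu (2014) under a simple top-$k$ constraint (a special case of a partition matroid), assuming a linear reward model $r_i = \theta^\top x_i + \text{noise}$. The environment, the constant exploration width `alpha`, and all variable names are illustrative assumptions, not taken from the paper; in the paper, the width is set via a confidence-ellipsoid bound that grows with $t$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, k, T = 5, 20, 3, 1000
theta_star = rng.normal(size=d) / np.sqrt(d)  # unknown true parameter (simulation only)

V = np.eye(d)    # regularized Gram matrix (regularization lambda = 1)
b = np.zeros(d)  # sum of reward-weighted features
alpha = 1.0      # exploration width (a fixed constant here; tuned via a confidence bound in the paper)

for t in range(T):
    # Feature vectors of the arms available in this round
    X = rng.normal(size=(n_arms, d)) / np.sqrt(d)
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ b  # ridge-regression estimate of theta
    # Optimistic score: estimated reward plus confidence width ||x||_{V^{-1}}
    ucb = X @ theta_hat + alpha * np.sqrt(np.einsum('ij,jk,ik->i', X, V_inv, X))
    # Oracle for the top-k constraint: choose the k arms with the largest scores
    S = np.argsort(ucb)[-k:]
    # Semi-bandit feedback: a noisy reward is observed for every chosen arm
    rewards = X[S] @ theta_star + 0.1 * rng.normal(size=k)
    # Update the ridge-regression statistics with all chosen arms
    V += X[S].T @ X[S]
    b += X[S].T @ rewards
```

Under more general constraints, the top-$k$ selection above is replaced by an oracle for the feasible sets; our algorithm for that setting modifies the reward estimates fed to the oracle.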