We study non-modular function maximization in the online interactive bandit setting. We are motivated by applications in which certain elements naturally complement one another: e.g., in a movie recommendation system, watching the first movie in a series complements the experience of watching the second (and the third, etc.). This cannot be expressed using submodular functions alone, which can represent only competitiveness between elements. We extend the purely submodular approach in two ways. First, we assume that the objective decomposes into the sum of a monotone suBmodular and a suPermodular function, known as a BP objective; here, complementarity is naturally modeled by the supermodular component. We develop a UCB-style algorithm in which, at each round, a noisy gain is revealed after an action is taken, and which balances refining beliefs about the unknown objective (exploration) against choosing actions that appear promising (exploitation). Defining regret in terms of the submodular and supermodular curvatures with respect to a full-knowledge greedy baseline, we show that this algorithm incurs at most $O(\sqrt{T})$ regret after $T$ rounds of play. Second, for functions that do not admit a BP structure, we provide analogous regret guarantees in terms of their submodularity ratio; this applies to functions that are almost, but not quite, submodular. We numerically study movie recommendation on the MovieLens dataset and the selection of training subsets for classification. Through these examples, we demonstrate the algorithm's performance as well as the shortcomings of treating these problems as purely submodular.
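To make the explore/exploit idea concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of UCB-style greedy selection with noisy marginal-gain feedback. The toy BP objective, the variable names (`f`, `coverage`, `pair_bonus`, `k`, `T`), and the noise model are all assumptions introduced here for illustration only; the submodular part is a weighted-coverage function and the supermodular part is a pairwise complement bonus.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy BP objective (hypothetical, for illustration only) -----------------
# g: monotone submodular (weighted coverage); h: monotone supermodular
# (pairwise complement bonus). f = g + h is a BP objective.
n_items, n_topics = 8, 5
coverage = rng.random((n_items, n_topics)) < 0.4      # item-topic incidence
weights = rng.random(n_topics)                        # topic weights
pair_bonus = np.triu(0.05 * rng.random((n_items, n_items)), 1)

def f(S):
    S = list(S)
    if not S:
        return 0.0
    g = weights[coverage[S].any(axis=0)].sum()        # submodular part
    h = pair_bonus[np.ix_(S, S)].sum()                # supermodular part
    return g + h

# --- UCB-style greedy selection with noisy marginal gains -------------------
# Keep a running mean of each element's observed marginal gain plus a
# confidence bonus, and greedily add the element with the largest upper
# confidence bound (exploration vs. exploitation).
k, T, noise = 3, 200, 0.1
means = np.zeros(n_items)
counts = np.zeros(n_items)

for t in range(1, T + 1):
    S = []
    for _ in range(k):
        bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
        ucb = means + np.where(counts == 0, np.inf, bonus)  # force exploration
        for j in S:
            ucb[j] = -np.inf                                 # no repeats
        i = int(np.argmax(ucb))
        # Only a noisy marginal gain is revealed after the action is taken.
        gain = f(S + [i]) - f(S) + noise * rng.standard_normal()
        counts[i] += 1
        means[i] += (gain - means[i]) / counts[i]
        S.append(i)

print("selected set:", sorted(S), "value:", round(f(S), 3))
```

In this sketch the confidence bonus shrinks as an element's marginal gain is observed more often, so early rounds explore while later rounds concentrate on elements that appear promising; the supermodular `pair_bonus` term is what rewards complementary pairs that a purely submodular model would undervalue.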