Motivated by online recommendation systems, we propose the problem of finding the optimal policy in multitask contextual bandits when a small fraction $\alpha < 1/2$ of tasks (users) are arbitrary and adversarial. The remaining fraction of good users share the same instance of contextual bandits with $S$ contexts and $A$ actions (items). Naturally, whether a user is good or adversarial is not known in advance. The goal is to robustly learn the policy that maximizes rewards for the good users with as few user interactions as possible. Without adversarial users, established results in collaborative filtering show that $O(1/\epsilon^2)$ per-user interactions suffice to learn a good policy, precisely because information can be shared across users. This parallelization gain is fundamentally altered by the presence of adversarial users: unless there is a super-polynomial number of users, we show a lower bound of $\tilde{\Omega}(\min(S,A) \cdot \alpha^2 / \epsilon^2)$ {\it per-user} interactions to learn an $\epsilon$-optimal policy for the good users. We then show that an $\tilde{O}(\min(S,A)\cdot \alpha/\epsilon^2)$ upper bound can be achieved by employing efficient robust mean estimators for both univariate and high-dimensional random variables. We also show that this bound can be improved depending on the distribution of contexts.
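To make the role of robust mean estimation concrete, the sketch below shows a standard univariate trimmed-mean estimator applied to per-arm reward samples that mix good users with an $\alpha$ fraction of adversarial ones. It is a minimal illustration of the general technique, not the estimator analyzed in the paper; the function name, sample sizes, and corruption values are illustrative assumptions.

\begin{verbatim}
import numpy as np

def trimmed_mean(samples, alpha):
    """Estimate the mean of `samples`, at most an `alpha` fraction of which
    may be arbitrarily corrupted, by discarding the alpha-fraction tails on
    each side and averaging the remaining points."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    k = int(np.ceil(alpha * n))          # points trimmed from each tail
    if 2 * k >= n:
        raise ValueError("alpha too large for the number of samples")
    return float(x[k:n - k].mean())

# Illustrative usage: 900 reward samples from good users, 100 adversarial
# outliers (alpha = 0.1). The trimmed mean stays close to the true mean 0.5,
# whereas the plain sample mean is pulled toward the outliers.
rng = np.random.default_rng(0)
good = rng.normal(loc=0.5, scale=1.0, size=900)
bad = np.full(100, 50.0)
rewards = np.concatenate([good, bad])
print(trimmed_mean(rewards, alpha=0.1))
\end{verbatim}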