We study two-sided matching markets in which one side of the market (the players) does not have a priori knowledge about its preferences for the other side (the arms) and is required to learn its preferences from experience. Also, we assume the players have no direct means of communication. This model extends the standard stochastic multi-armed bandit framework to a decentralized multiple player setting with competition. We introduce a new algorithm for this setting that, over a time horizon $T$, attains $\mathcal{O}(\log(T))$ stable regret when the preferences of the arms over players are shared, and $\mathcal{O}(\log(T)^2)$ stable regret when no assumptions are placed on the preferences of either side. Moreover, in the setting where a single player may deviate, we show that the algorithm is incentive compatible whenever the arms' preferences are shared, but not necessarily so when preferences are fully general.
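The model above can be illustrated with a minimal simulation sketch. This is not the paper's algorithm, just a naive baseline under assumed mechanics: each player independently maintains UCB indices over arms and proposes to its highest-index arm each round; each arm accepts its most-preferred proposer and rejected players receive no reward. Stable regret would be measured against a stable matching under the true means, which player-proposing deferred acceptance (Gale-Shapley) computes. All function names, the conflict-resolution rule, and the Bernoulli reward model are illustrative assumptions.

```python
import math
import random


def gale_shapley(player_prefs, arm_prefs):
    """Player-proposing deferred acceptance on known preference lists.

    player_prefs[p] is player p's ranking of arms (best first);
    arm_prefs[a] is arm a's ranking of players (best first).
    Returns the player-optimal stable matching as {player: arm}.
    """
    n = len(player_prefs)
    arm_rank = [{p: r for r, p in enumerate(prefs)} for prefs in arm_prefs]
    next_prop = [0] * n          # index of the next arm each player proposes to
    holder = {}                  # arm -> player currently tentatively matched
    free = list(range(n))
    while free:
        p = free.pop()
        a = player_prefs[p][next_prop[p]]
        next_prop[p] += 1
        if a not in holder:
            holder[a] = p
        elif arm_rank[a][p] < arm_rank[a][holder[a]]:
            free.append(holder[a])   # arm trades up; old partner is freed
            holder[a] = p
        else:
            free.append(p)           # proposal rejected
    return {p: a for a, p in holder.items()}


def simulate(means, arm_prefs, horizon, seed=0):
    """Naive decentralized bandit matching: UCB proposals, arm-side tiebreaks.

    means[p][a] is the (unknown to player p) Bernoulli mean reward of arm a.
    Returns per-player pull counts after `horizon` rounds.
    """
    rng = random.Random(seed)
    n, k = len(means), len(means[0])
    counts = [[0] * k for _ in range(n)]
    sums = [[0.0] * k for _ in range(n)]
    arm_rank = [{p: r for r, p in enumerate(prefs)} for prefs in arm_prefs]
    for t in range(1, horizon + 1):
        # Each player proposes to the arm maximizing its UCB index.
        props = []
        for p in range(n):
            ucb = [sums[p][a] / counts[p][a]
                   + math.sqrt(2 * math.log(t) / counts[p][a])
                   if counts[p][a] > 0 else float("inf") for a in range(k)]
            props.append(max(range(k), key=lambda a: ucb[a]))
        # Each proposed-to arm accepts its most-preferred proposer only.
        for a in set(props):
            contenders = [p for p in range(n) if props[p] == a]
            winner = min(contenders, key=lambda p: arm_rank[a][p])
            reward = 1.0 if rng.random() < means[winner][a] else 0.0
            counts[winner][a] += 1
            sums[winner][a] += reward
    return counts
```

Because proposals collide, a rejected player learns nothing that round; the paper's point is that handling such conflicts without communication is what separates this setting from standard multi-armed bandits.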