In this paper, we propose EE-Net, a novel neural-based exploration strategy for contextual bandits that is distinct from the standard UCB-based and TS-based approaches. Contextual multi-armed bandits have been studied for decades with various applications. To address the exploitation-exploration tradeoff in bandits, three main techniques are used: epsilon-greedy, Thompson Sampling (TS), and Upper Confidence Bound (UCB). In the recent literature, linear contextual bandit algorithms adopt ridge regression to estimate the reward function and combine it with TS or UCB strategies for exploration. However, this line of work explicitly assumes that the reward is a linear function of the arm vectors, which may not hold in real-world datasets. To overcome this limitation, a series of neural bandit algorithms has been proposed, in which a neural network is assigned to learn the underlying reward function while TS or UCB is adapted for exploration. In this paper, we propose "EE-Net", a neural-based bandit approach with a novel exploration strategy. In addition to a neural network (the Exploitation network) that learns the reward function, EE-Net adopts another neural network (the Exploration network) that adaptively learns the potential gain relative to the currently estimated reward, in order to drive exploration. A decision-maker is then constructed to combine the outputs of the Exploitation and Exploration networks. We prove that EE-Net achieves a regret bound of $\mathcal{O}(\sqrt{T\log T})$, which is tighter than those of existing state-of-the-art neural bandit algorithms ($\mathcal{O}(\sqrt{T}\log T)$ for both UCB-based and TS-based methods). Through extensive experiments on four real-world datasets, we show that EE-Net outperforms existing linear and neural bandit approaches.
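To make the described architecture concrete, below is a minimal PyTorch-style sketch of the two networks and an additive decision-maker. The layer sizes, the use of the raw arm vector as input to the Exploration network, and the additive combination rule are illustrative assumptions for exposition, not the paper's exact design.

```python
import torch
import torch.nn as nn

# Minimal sketch of the EE-Net components described above.
# Assumptions: both networks are small MLPs, the Exploration network
# takes the raw arm vector as input, and the decision-maker simply
# adds the two scores. The actual EE-Net design may differ.

class MLP(nn.Module):
    def __init__(self, in_dim, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):
        return self.net(x)

d = 20                       # arm (context) dimension, illustrative
f1 = MLP(d)                  # Exploitation network: estimates the reward
f2 = MLP(d)                  # Exploration network: estimates the potential gain

def score(arm: torch.Tensor) -> float:
    """Decision-maker: combine exploitation and exploration scores (additive form)."""
    x = arm.unsqueeze(0)
    return (f1(x) + f2(x)).item()

# One round of arm selection: pick the arm with the highest combined score.
arms = torch.randn(10, d)    # candidate arm vectors at this round
chosen = max(range(len(arms)), key=lambda i: score(arms[i]))
print("selected arm index:", chosen)
```

In this sketch, f1 would be trained on observed rewards and f2 on the residual gain between observed rewards and f1's estimates; both updates are omitted here for brevity.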