We introduce a new and completely online contextual bandit algorithm called Gated Linear Contextual Bandits (GLCB). This algorithm is based on Gated Linear Networks (GLNs), a recently introduced deep learning architecture with properties well-suited to the online setting. Leveraging the data-dependent gating properties of GLNs, we are able to estimate prediction uncertainty with effectively zero algorithmic overhead. We empirically evaluate GLCB against 9 state-of-the-art algorithms that leverage deep neural networks, on a standard benchmark suite of discrete and continuous contextual bandit problems. GLCB obtains median first place despite being the only fully online method, and we further support these results with a theoretical study of its convergence properties.