We study pure exploration in bandits, where the dimension of the feature representation can be much larger than the number of arms. To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space and carefully deal with the induced model misspecifications. Our approach is conceptually very different from existing works that can either only handle low-dimensional linear bandits or passively deal with model misspecifications. We showcase the application of our approach to two pure exploration settings that were previously under-studied: (1) the reward function belongs to a possibly infinite-dimensional Reproducing Kernel Hilbert Space, and (2) the reward function is nonlinear and can be approximated by neural networks. Our main results provide sample complexity guarantees that only depend on the effective dimension of the feature spaces in the kernel or neural representations. Extensive experiments conducted on both synthetic and real-world datasets demonstrate the efficacy of our methods.