Reinforcement learning (RL) has gained increasing interest since the demonstration that it can reach human-level performance on video game benchmarks using deep Q-learning (DQN). The current consensus for training neural networks on such complex environments is to rely on gradient-based optimization. Although alternative Bayesian deep learning methods exist, most of them still rely on gradient-based optimization, and they typically do not scale to benchmarks such as the Atari game environment. Moreover, none of these approaches allows performing analytical inference for the weights and biases defining the neural network. In this paper, we present how the temporal difference Q-learning framework can be adapted to be compatible with the tractable approximate Gaussian inference (TAGI), which allows learning the parameters of a neural network using a closed-form analytical method. Through experiments with on- and off-policy reinforcement learning approaches, we demonstrate that TAGI can reach a performance comparable to that of backpropagation-trained networks while using fewer hyperparameters and without relying on gradient-based optimization.
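For context, the temporal-difference Q-learning target referenced above takes the standard form

$y_t = r_t + \gamma \max_{a'} Q(s_{t+1}, a')$,

where $r_t$ is the reward, $\gamma \in [0,1)$ is the discount factor, and $Q(s, a)$ is the action-value function approximated by the neural network. How this target is treated as a Gaussian observation for TAGI's closed-form inference is specific to the method developed in the body of the paper; the expression above is only the generic formulation, shown here as an assumption about the setting rather than the paper's exact derivation.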