With the help of dedicated neuromorphic hardware, spiking neural networks (SNNs) are expected to realize artificial intelligence with lower energy consumption. Combining SNNs with deep reinforcement learning (RL) therefore offers a promising energy-efficient approach to realistic control tasks. Only a few SNN-based RL methods exist at present. Most of them either lack generalization ability or rely on Artificial Neural Networks (ANNs) to estimate the value function during training. The former require tuning numerous hyper-parameters for each scenario, while the latter limits the applicability of different types of RL algorithms and ignores the large energy consumption during training. To develop a robust spike-based RL method, we draw inspiration from non-spiking interneurons found in insects and propose the deep spiking Q-network (DSQN), which uses the membrane voltage of non-spiking neurons as the representation of the Q-value and can directly learn robust policies from high-dimensional sensory inputs via end-to-end RL. Experiments on 17 Atari games demonstrate the effectiveness of DSQN, which outperforms the ANN-based deep Q-network (DQN) in most games. Moreover, the experimental results show DSQN's superior learning stability and robustness to adversarial attacks.
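To make the core idea concrete, the sketch below shows a DSQN-style forward pass, assuming a simple leaky integrate-and-fire (LIF) neuron model. All class names, layer sizes, and the simulation length T are hypothetical illustrations rather than the authors' implementation, and training details (e.g., surrogate gradients for the non-differentiable spike step) are omitted.

```python
import torch
import torch.nn as nn

class LIFLayer(nn.Module):
    """Spiking layer: leaky integration, threshold firing, hard reset.
    A hypothetical LIF model; decay and v_th values are illustrative."""
    def __init__(self, in_features, out_features, decay=0.5, v_th=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.decay = decay
        self.v_th = v_th

    def forward(self, x, v):
        v = self.decay * v + self.fc(x)    # leaky integration of input current
        spikes = (v >= self.v_th).float()  # fire when voltage crosses threshold
        v = v * (1.0 - spikes)             # hard reset of neurons that fired
        return spikes, v

class DSQNSketch(nn.Module):
    """A spiking hidden layer followed by a NON-spiking readout whose
    membrane voltage, accumulated over T simulation steps, is read out
    as the Q-values (hypothetical sizes and simulation length)."""
    def __init__(self, obs_dim, n_actions, hidden=256, T=8):
        super().__init__()
        self.hidden = LIFLayer(obs_dim, hidden)
        self.readout = nn.Linear(hidden, n_actions)  # non-spiking: integrates, never fires
        self.T = T

    def forward(self, obs):
        v_hidden = obs.new_zeros(obs.shape[0], self.readout.in_features)
        q_voltage = obs.new_zeros(obs.shape[0], self.readout.out_features)
        for _ in range(self.T):  # run the network for T discrete time steps
            spikes, v_hidden = self.hidden(obs, v_hidden)
            q_voltage = q_voltage + self.readout(spikes)
        return q_voltage / self.T  # mean membrane voltage serves as the Q-value

# Usage: a batch of 32 four-dimensional observations -> a (32, 2) Q-value tensor.
q_values = DSQNSketch(obs_dim=4, n_actions=2)(torch.randn(32, 4))
```

Because the readout neurons never spike, their voltage is a continuous quantity suitable for the standard temporal-difference loss, which is what allows end-to-end RL training without an auxiliary ANN value estimator.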