It is well known that artificial neural networks are vulnerable to adversarial examples, and great efforts have been made to improve their robustness. However, such perturbations are usually imperceptible to humans, so their effect on biological neural circuits is largely unknown. This paper investigates adversarial robustness in a simulated cerebellum, a well-studied supervised learning system in computational neuroscience. Specifically, we propose to study three characteristics of the cerebellum: (i) network width; (ii) long-term depression at the parallel fiber-Purkinje cell synapses; (iii) sparse connectivity in the granule layer, and we hypothesize that each is beneficial for robustness. To the best of our knowledge, this is the first attempt to examine adversarial robustness in simulated cerebellum models. The experimental results are negative: none of the three proposed mechanisms yields a significant improvement in robustness. Consequently, the cerebellum is expected to be as vulnerable to adversarial examples as deep neural networks under batch training. We encourage neuroscientists to test this prediction by attempting to fool the biological system with adversarial attacks in experiments.
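For concreteness, the minimal sketch below illustrates the kind of attack under study: a Marr-Albus-style cerebellar perceptron with a fixed, sparse mossy-fiber-to-granule-cell expansion and a trainable Purkinje readout, perturbed with the fast gradient sign method (FGSM). The architecture, task, and hyperparameters here (e.g., `n_granule`, `fan_in`, `eps`) are illustrative assumptions, not the exact experimental setup.

```python
# Illustrative sketch only: a toy cerebellum-like model under an FGSM attack.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_mossy, n_granule, n_out = 100, 4000, 10  # wide expansion: granule cells >> inputs
fan_in = 4                                 # each granule cell samples ~4 mossy fibers

# Fixed, sparse mossy-fiber -> granule-cell weights (connectivity mask).
mask = torch.zeros(n_granule, n_mossy)
for g in range(n_granule):
    mask[g, torch.randperm(n_mossy)[:fan_in]] = 1.0
W_mf = torch.randn(n_granule, n_mossy) * mask / fan_in**0.5

# Trainable parallel-fiber -> Purkinje-cell weights (plasticity stands in for LTD/LTP).
W_pf = torch.zeros(n_out, n_granule, requires_grad=True)

def forward(x):
    granule = torch.relu(x @ W_mf.T)  # sparse granule-layer expansion
    return granule @ W_pf.T           # Purkinje-cell readout

# Supervised training on random toy data (a stand-in for the actual task).
x_train = torch.randn(512, n_mossy)
y_train = torch.randint(0, n_out, (512,))
opt = torch.optim.SGD([W_pf], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(forward(x_train), y_train).backward()
    opt.step()

def fgsm(x, y, eps=0.1):
    """Fast gradient sign method: one-step perturbation along the loss gradient."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(forward(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

x_adv = fgsm(x_train, y_train)
clean_acc = (forward(x_train).argmax(1) == y_train).float().mean()
adv_acc = (forward(x_adv).argmax(1) == y_train).float().mean()
print(f"clean acc: {clean_acc:.2f}, adversarial acc: {adv_acc:.2f}")
```

Width and granule-layer sparsity correspond to `n_granule` and `fan_in` in this sketch; varying them (and the learning rule) is the kind of manipulation the three hypotheses call for.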