Our ability to generalize beyond training data to novel, out-of-distribution image degradations is a hallmark of primate vision. The predictive brain, exemplified by predictive coding networks (PCNs), has become a prominent neuroscience theory of neural computation. Motivated by the recent successes of variational autoencoders (VAEs) in machine learning, we rigorously derive a correspondence between PCNs and VAEs. This motivates us to consider iterative extensions of VAEs (iVAEs) as plausible variational extensions of PCNs. We further demonstrate that iVAEs generalize under distributional shift significantly better than both PCNs and VAEs. In addition, we propose a novel measure of recognizability for individual samples that can be tested against human psychophysical data. Overall, we hope this work will spur interest in iVAEs as a promising new direction for modeling in neuroscience.