Neural generative models can be used to learn complex probability distributions from data, to sample from them, and to produce probability density estimates. We propose a computational framework for developing neural generative models inspired by the theory of predictive processing in the brain. According to predictive processing theory, the neurons in the brain form a hierarchy in which neurons in one level form expectations about sensory inputs from another level. These neurons update their local models based on the differences between their expectations and the observed signals. In a similar way, artificial neurons in our generative models predict what neighboring neurons will do, and adjust their parameters based on how well those predictions match reality. In this work, we show that the neural generative models learned within our framework perform well in practice across several benchmark datasets and metrics, and either remain competitive with or significantly outperform other generative models with similar functionality (such as the variational auto-encoder).
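The core mechanism described above — neurons predicting the activity of a neighboring layer and learning from the local prediction error — can be sketched in a few lines. This is a minimal, hypothetical two-layer illustration of the general predictive-coding idea, not the paper's actual architecture: a latent layer `z` generates a top-down prediction of the sensory layer `x` through weights `W`, and both the state and the weights are adjusted to reduce the error between prediction and observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions and learning rate for illustration only.
n_latent, n_input = 8, 16
W = rng.normal(scale=0.1, size=(n_input, n_latent))  # generative (top-down) weights
lr = 0.01

x = rng.normal(size=n_input)   # observed sensory input
z = rng.normal(size=n_latent)  # latent state forming expectations about x

errs = []
for _ in range(50):
    x_hat = W @ z              # top-down prediction of the sensory layer
    e = x - x_hat              # local prediction error
    errs.append(np.linalg.norm(e))
    z += lr * (W.T @ e)        # state update: move z to reduce the error it causes
    W += lr * np.outer(e, z)   # Hebbian-like weight update from error times activity
```

Both updates are local: each uses only the prediction error and the activity of the adjacent layer, which is what distinguishes this style of learning from end-to-end backpropagation.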