Predictive coding is a message-passing framework originally developed to model information processing in the brain, and now also a topic of machine-learning research due to some interesting properties. One such property is the natural ability of generative models to learn robust representations thanks to their peculiar credit assignment rule, which allows neural activities to converge to a solution before the synaptic weights are updated. Graph neural networks are also message-passing models; they have recently shown outstanding results on diverse machine-learning tasks, providing interdisciplinary state-of-the-art performance on structured data. However, they are vulnerable to imperceptible adversarial attacks and ill-suited to out-of-distribution generalization. In this work, we address this by building models that have the same structure as popular graph neural network architectures but rely on the message-passing rule of predictive coding. Through an extensive set of experiments, we show that the proposed models are (i) comparable to standard ones in performance on both inductive and transductive tasks, (ii) better calibrated, and (iii) robust against multiple kinds of adversarial attacks.
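The two-phase credit assignment mentioned above, in which neural activities first relax toward a solution and only then trigger local weight updates, can be sketched as follows. This is a minimal illustration on a linear two-layer network; all names (`W1`, `W2`, the step sizes, the number of relaxation steps) are hypothetical choices for this sketch, not details taken from the paper.

```python
import numpy as np

# Minimal predictive-coding sketch (assumed linear, two-layer, supervised):
# clamp input x and target y, relax the hidden activity h to minimise the
# prediction-error energy, then apply purely local weight updates.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))            # clamped input
y = rng.normal(size=(2, 1))            # clamped target
W1 = rng.normal(scale=0.1, size=(3, 4))
W2 = rng.normal(scale=0.1, size=(2, 3))

def energy(h):
    e1 = h - W1 @ x                    # prediction error at the hidden layer
    e2 = y - W2 @ h                    # prediction error at the output layer
    return 0.5 * float((e1 ** 2).sum() + (e2 ** 2).sum())

h = W1 @ x                             # initialise hidden activity at its prediction
E0 = energy(h)

# Phase 1: inference — relax activities with the weights frozen.
for _ in range(200):
    e1 = h - W1 @ x
    e2 = y - W2 @ h
    h -= 0.1 * (e1 - W2.T @ e2)        # gradient descent on the energy w.r.t. h

# Phase 2: learning — one local, Hebbian-like update per weight matrix,
# using only the converged errors and activities.
e1 = h - W1 @ x
e2 = y - W2 @ h
W1 += 0.01 * e1 @ x.T
W2 += 0.01 * e2 @ h.T
E1 = energy(h)
```

After relaxation plus a single weight update, the energy `E1` is lower than the initial `E0`, which is the sense in which activities "converge to a solution" before any synaptic change occurs.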