Neural networks are powerful function estimators, leading to their status as a paradigm of choice for modeling structured data. However, unlike other structured representations that emphasize the modularity of the problem -- e.g., factor graphs -- neural networks are usually monolithic mappings from inputs to outputs, with a fixed computation order. This limitation prevents them from capturing different directions of computation and interaction between the modeled variables. In this paper, we combine the representational strengths of factor graphs and of neural networks, proposing undirected neural networks (UNNs): a flexible framework for specifying computations that can be performed in any order. For particular choices, our proposed models subsume and extend many existing architectures: feed-forward, recurrent, self-attention networks, auto-encoders, and networks with implicit layers. We demonstrate the effectiveness of undirected neural architectures, both unstructured and structured, on a range of tasks: tree-constrained dependency parsing, convolutional image classification, and sequence completion with attention. By varying the computation order, we show how a single UNN can be used both as a classifier and a prototype generator, and how it can fill in missing parts of an input sequence, making UNNs a promising direction for further research.