We study how neural networks trained by gradient descent extrapolate, i.e., what they learn outside the support of the training distribution. Previous works report mixed empirical results when extrapolating with neural networks: while feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), do not extrapolate well in certain simple tasks, Graph Neural Networks (GNNs), structured networks with MLP modules, have shown some success in more complex tasks. Working towards a theoretical explanation, we identify conditions under which MLPs and GNNs extrapolate well. First, we quantify the observation that ReLU MLPs quickly converge to linear functions along any direction from the origin, which implies that ReLU MLPs do not extrapolate most nonlinear functions. But they can provably learn a linear target function when the training distribution is sufficiently diverse. Second, in connection to analyzing the successes and limitations of GNNs, these results suggest a hypothesis for which we provide theoretical and empirical evidence: the success of GNNs in extrapolating algorithmic tasks to new data (e.g., larger graphs or edge weights) relies on encoding task-specific non-linearities in the architecture or features. Our theoretical analysis builds on a connection of over-parameterized networks to the neural tangent kernel. Empirically, our theory holds across different training settings.
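The snippet below is a minimal illustrative sketch (not code from the paper) of the first claim: a ReLU MLP trained on a quadratic target over [-1, 1] is probed far outside that range, where the printed slopes between consecutive test points should be nearly constant, i.e., the network extrapolates roughly linearly along the ray from the origin. The architecture, optimizer, training budget, and target function are arbitrary assumptions made for illustration.

```python
# Sketch: a ReLU MLP fit to y = x^2 on [-1, 1] behaves (almost) linearly
# when evaluated far outside the training range.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Training data inside the support of the training distribution.
x_train = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
y_train = x_train ** 2

mlp = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(5000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(mlp(x_train), y_train)
    loss.backward()
    opt.step()

# Extrapolation probe: for the true target y = x^2 the slopes below would
# keep growing; for the trained ReLU MLP they flatten out, reflecting the
# convergence to a linear function along this direction from the origin.
x_test = torch.tensor([[2.0], [4.0], [8.0], [16.0], [32.0]])
with torch.no_grad():
    y_test = mlp(x_test).squeeze(1)
slopes = (y_test[1:] - y_test[:-1]) / (x_test[1:, 0] - x_test[:-1, 0])
print("predictions:", y_test.tolist())
print("slopes between consecutive test points:", slopes.tolist())
```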