The use of artificial neural networks as models of chaotic dynamics has been rapidly expanding, but the theoretical understanding of how neural networks learn chaos remains lacking. Here, we employ a geometric perspective to show that neural networks can efficaciously model chaotic dynamics by themselves becoming structurally chaotic. First, we confirm the efficacy of neural networks in emulating chaos by showing that parsimonious neural networks trained on only a few data points suffice to reconstruct strange attractors, extrapolate beyond the boundaries of the training data, and accurately predict local divergence rates. Second, we show that the trained network's map comprises a series of geometric stretching, rotation, and compression operations. These geometric operations indicate topological mixing and chaos, explaining why neural networks are naturally suited to emulate chaotic dynamics.
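The stretch/rotate/compress decomposition described above can be illustrated with a minimal sketch (not the paper's trained model): for any differentiable network, the local action near a point is its Jacobian, and the singular value decomposition J = U S Vᵀ factors that action into a rotation (Vᵀ), axis-aligned stretching and compression (S), and another rotation (U). The small random tanh network below is purely illustrative.

```python
import numpy as np

# Illustrative toy network f(x) = W2 @ tanh(W1 @ x + b1) + b2;
# the weights are random, standing in for a trained emulator.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 3)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)

def f(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def jacobian(x):
    # Chain rule: df/dx = W2 @ diag(sech^2(pre)) @ W1,
    # since d/dz tanh(z) = sech^2(z).
    pre = W1 @ x + b1
    return W2 @ np.diag(1.0 / np.cosh(pre) ** 2) @ W1

x0 = rng.standard_normal(3)
J = jacobian(x0)

# SVD: J = U @ diag(S) @ Vt. U and Vt are orthogonal (rotations,
# possibly with a reflection); singular values in S greater than 1
# stretch the corresponding direction, values below 1 compress it.
U, S, Vt = np.linalg.svd(J)

print("singular values:", S)
print("reconstructs J:", np.allclose(U @ np.diag(S) @ Vt, J))
print("U orthogonal:", np.allclose(U.T @ U, np.eye(3)))
```

Repeated stretching in some directions and compression in others, interleaved with rotations, is precisely the folding mechanism that produces topological mixing in chaotic maps.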