Recurrent neural networks (RNNs) are a class of neural networks used for sequential tasks. In general, however, RNNs have a large number of parameters and incur enormous computational cost by repeatedly applying the recurrent structure over many time steps. RNN pruning has attracted increasing attention in recent years as a way to overcome this difficulty, since the resulting savings in computational cost accumulate as the time steps progress. However, most existing RNN pruning methods are heuristic. The purpose of this paper is to study a theoretically grounded pruning method for RNNs. We propose a pruning algorithm for RNNs inspired by "spectral pruning", and provide generalization error bounds for the compressed RNNs. We also present numerical experiments that demonstrate our theoretical results and show the effectiveness of our pruning method compared with existing methods.
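As a rough illustration of the idea, and not the paper's exact algorithm, the sketch below applies a spectral-pruning-style compression to a vanilla RNN: hidden units are selected greedily using the empirical covariance of the hidden states collected over time, and the recurrent weights are refit by a ridge-regularized least-squares reconstruction. The function name spectral_prune and all variable names are hypothetical.

import numpy as np

def spectral_prune(H, W_hh, W_xh, keep, ridge=1e-6):
    """Compress the hidden layer of a vanilla RNN h_{t+1} = tanh(W_hh h_t + W_xh x_t).

    H     : (T, d) hidden states gathered over T time steps
    W_hh  : (d, d) recurrent weight matrix
    W_xh  : (d, n) input-to-hidden weight matrix
    keep  : number m < d of hidden units to retain
    Returns the kept index set J and the compressed (m, m) and (m, n) weights.
    """
    T, d = H.shape
    Sigma = H.T @ H / T  # empirical covariance of the hidden states

    # Greedily select the index set J whose coordinates best reconstruct
    # the full hidden state in the least-squares sense.
    J = []
    for _ in range(keep):
        best_j, best_score = None, -np.inf
        for j in range(d):
            if j in J:
                continue
            K = J + [j]
            S_KK = Sigma[np.ix_(K, K)] + ridge * np.eye(len(K))
            S_FK = Sigma[:, K]  # covariance between all units and the candidates
            # Variance explained by projecting onto the coordinates in K.
            score = np.trace(S_FK @ np.linalg.solve(S_KK, S_FK.T))
            if score > best_score:
                best_j, best_score = j, score
        J.append(best_j)
    J = np.asarray(J)

    # Reconstruction map A with h ≈ A h[J], from regularized least squares.
    S_JJ = Sigma[np.ix_(J, J)] + ridge * np.eye(keep)
    A = Sigma[:, J] @ np.linalg.inv(S_JJ)  # (d, m)

    # Compose so the small recurrence approximates the original dynamics:
    # h_{t+1}[J] ≈ tanh(W_hh[J] A h_t[J] + W_xh[J] x_t).
    return J, W_hh[J] @ A, W_xh[J]

# Usage example: compress a random 64-unit RNN to 16 units.
rng = np.random.default_rng(0)
d, n, T = 64, 8, 500
W_hh = rng.normal(size=(d, d)) / np.sqrt(d)
W_xh = rng.normal(size=(d, n))
h, states = np.zeros(d), []
for _ in range(T):
    h = np.tanh(W_hh @ h + W_xh @ rng.normal(size=n))
    states.append(h)
J, W_hh_s, W_xh_s = spectral_prune(np.array(states), W_hh, W_xh, keep=16)
print(W_hh_s.shape, W_xh_s.shape)  # (16, 16) (16, 8)

Because the pruned weights are (m, m) rather than (d, d), the per-step cost of the recurrence drops quadratically in the compression ratio, which is why the savings grow with the number of time steps.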