Pruning techniques for neural networks with a recurrent architecture, such as the recurrent neural network (RNN), are in strong demand for deployment on edge-computing devices. However, recurrent architectures are generally not robust to pruning: even mild pruning introduces errors that accumulate through the recurrence, so the total error increases significantly over time. In this paper, we propose a pruning algorithm suited to RNNs, inspired by "spectral pruning", and provide generalization error bounds for the compressed RNNs. We also present numerical experiments that support our theoretical results and demonstrate the effectiveness of our pruning method compared with existing methods.
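To give a concrete sense of the idea, the sketch below illustrates one simplified reading of spectral-style pruning applied to a vanilla RNN: hidden units are scored via the empirical covariance of their activations on sample sequences, the top-k units are kept, and the compressed weights are refit by least squares so the pruned hidden state can approximate the original one. The selection rule, the reconstruction step, and all names here are illustrative assumptions for intuition, not the exact algorithm proposed in the paper.

```python
# A minimal, illustrative sketch (NumPy only). The scoring and
# reconstruction rules below are simplified assumptions for intuition,
# not the exact spectral pruning algorithm proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" vanilla RNN: h_t = tanh(W_in x_t + W_rec h_{t-1}).
d_in, d_h, k = 8, 32, 16   # input size, hidden size, units kept after pruning
W_in = rng.normal(size=(d_h, d_in)) / np.sqrt(d_in)
W_rec = rng.normal(size=(d_h, d_h)) / np.sqrt(d_h)

def run_rnn(X, W_in, W_rec):
    """Run the RNN over X of shape (batch, T, d_in); stack all hidden states."""
    h = np.zeros((X.shape[0], W_rec.shape[0]))
    states = []
    for t in range(X.shape[1]):
        h = np.tanh(X[:, t] @ W_in.T + h @ W_rec.T)
        states.append(h)
    return np.concatenate(states, axis=0)        # (batch * T, d_h)

# 1) Empirical (uncentered) covariance of hidden activations on sample data.
X = rng.normal(size=(64, 20, d_in))
H = run_rnn(X, W_in, W_rec)
Sigma = H.T @ H / H.shape[0]

# 2) Keep the k units with the largest activation variance -- a crude
#    stand-in for a covariance-spectrum-based selection criterion.
keep = np.sort(np.argsort(np.diag(Sigma))[-k:])

# 3) Refit: find A with H[:, keep] @ A ~= H, so the kept units can
#    reconstruct the full hidden state, then compress the recurrence.
A, *_ = np.linalg.lstsq(H[:, keep], H, rcond=None)   # (k, d_h)
W_in_small = W_in[keep]                              # (k, d_in)
W_rec_small = W_rec[keep] @ A.T                      # (k, k)

# How much hidden-state information survives the compression?
H_small = run_rnn(X, W_in_small, W_rec_small)
err = np.linalg.norm(H_small @ A - H) / np.linalg.norm(H)
print(f"relative hidden-state reconstruction error: {err:.3f}")
```

The refit step targets exactly the issue the abstract raises: without some form of reconstruction, small per-step errors in the compressed recurrence compound over time.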