Can we take a recurrent neural network (RNN) trained to translate between languages and augment it to support a new natural language without retraining the model from scratch? Can we fix the faulty behavior of an RNN by replacing the portions associated with that behavior? Recent work on decomposing fully connected neural networks (FCNNs) and convolutional neural networks (CNNs) into modules has shown the value of engineering deep models in this manner, which is standard in traditional SE but foreign to deep learning models. However, prior work focuses on image-based multiclass classification problems and cannot be applied to RNNs due to (a) different layer structures, (b) loop structures, (c) different types of input-output architectures, and (d) the use of both nonlinear and logistic activation functions. In this work, we propose the first approach to decompose an RNN into modules. We study different types of RNNs, i.e., Vanilla RNN, LSTM, and GRU. Further, we show how such RNN modules can be reused and replaced in various scenarios. We evaluate our approach on 5 canonical datasets (i.e., Math QA, Brown Corpus, Wiki-toxicity, Clinc OOS, and Tatoeba) and 4 model variants for each dataset. We found that decomposing a trained model incurs only a small cost (accuracy: -0.6%, BLEU score: +0.10%), and that the decomposed modules can be reused and replaced without retraining.
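To make the reuse scenario concrete, the following is a minimal sketch of the compose-without-retraining idea on a toy vanilla RNN classifier. It is not the paper's algorithm: here each output class is naively treated as a "module" (the shared recurrent core plus that class's slice of the output layer), and a smaller classifier is composed from a subset of modules. All names (rnn_hidden, class_module, compose) and weights are illustrative assumptions, not a released API.

```python
# Toy illustration only; the paper's actual decomposition also handles
# LSTM/GRU gates and sequence-to-sequence architectures.
import numpy as np

rng = np.random.default_rng(0)
T, D, H, C = 5, 8, 16, 4          # timesteps, input dim, hidden dim, classes

# Pretend these weights came from a trained vanilla RNN classifier.
Wx = rng.normal(size=(D, H))      # input-to-hidden weights
Wh = rng.normal(size=(H, H))      # hidden-to-hidden (loop) weights
Wo = rng.normal(size=(H, C))      # hidden-to-output weights
bo = rng.normal(size=C)           # output bias

def rnn_hidden(x_seq):
    """Run the shared recurrent core; return the final hidden state."""
    h = np.zeros(H)
    for x_t in x_seq:
        h = np.tanh(x_t @ Wx + h @ Wh)
    return h

def class_module(c):
    """A 'module' for class c: the recurrent core plus c's output slice."""
    def score(x_seq):
        return rnn_hidden(x_seq) @ Wo[:, c] + bo[c]
    return score

def compose(class_ids):
    """Compose a sub-classifier from a subset of class modules."""
    modules = [class_module(c) for c in class_ids]
    def predict(x_seq):
        scores = np.array([m(x_seq) for m in modules])
        return class_ids[int(np.argmax(scores))]
    return predict

# Reuse: build a 2-class classifier from the 4-class model, no retraining.
two_class = compose([1, 3])
x = rng.normal(size=(T, D))
print("predicted class:", two_class(x))
```

The key property the sketch preserves is that composing or swapping modules requires no gradient updates; only the per-class output slices are recombined around the shared recurrent core.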