Recurrent Neural Networks (RNN) have received a vast amount of attention in the last decade. Recently, Recurrent AutoEncoder (RAE) architectures have found many practical applications. An RAE can extract semantically valuable information, called the context, which represents a latent space useful for further processing. Nevertheless, recurrent autoencoders are hard to train, and the training process takes a long time. In this paper, we propose an autoencoder architecture with sequence-aware encoding that employs a 1D convolutional layer to reduce model training time. We show that the recurrent autoencoder with sequence-aware encoding outperforms a standard RAE in terms of training speed in most cases. The preliminary results show that the proposed solution dominates the standard RAE, with a training process that is an order of magnitude faster.
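To make the idea concrete, the following is a minimal PyTorch sketch of an autoencoder in which the recurrent encoder is replaced by a single 1D convolution over the time axis, so the context is produced in one parallel pass rather than by sequential unrolling. The kernel size, context dimensionality, and decoder wiring here are illustrative assumptions, not the exact specification of the proposed architecture.

```python
import torch
import torch.nn as nn

class ConvEncoderRAE(nn.Module):
    """Illustrative autoencoder with a convolutional (sequence-aware)
    encoder and a recurrent decoder; hyperparameters are assumptions."""

    def __init__(self, n_features: int, context_size: int, hidden_size: int = 64):
        super().__init__()
        # Sequence-aware encoding: a Conv1d over the time axis produces
        # the context for all timesteps in a single parallel pass.
        self.encoder = nn.Conv1d(
            in_channels=n_features,
            out_channels=context_size,
            kernel_size=3,
            padding=1,
        )
        # A recurrent decoder reconstructs the sequence from the context.
        self.decoder = nn.GRU(context_size, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); Conv1d expects (batch, channels, time).
        context = self.encoder(x.transpose(1, 2)).transpose(1, 2)
        decoded, _ = self.decoder(context)
        return self.output(decoded)

# Quick smoke test on a random batch of 32 sequences of length 100.
model = ConvEncoderRAE(n_features=1, context_size=16)
x = torch.randn(32, 100, 1)
reconstruction = model(x)
assert reconstruction.shape == x.shape
```

Because the convolutional encoder has no recurrence, its forward pass parallelizes across timesteps, which is the intuition behind the reported training-speed advantage over a standard RAE whose encoder must be unrolled step by step.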