Transformer variants dominate the state of the art in natural language processing tasks such as translation, reading comprehension, and summarization. Our work focuses on augmenting model inputs with general memory slots and studying the effect of adding these slots. This paper is a follow-up study of the role of the general memory slots that were added to the input of the model proposed in previous work. We consider two main tasks: 1) a pretraining task using masked language modeling, and 2) a fine-tuning task using HotpotQA. The study aims to verify the ability of the proposed model to handle multiple input chunks as if they were a single chunk, compared with the base model. As a baseline we used the T5 transformer. We studied the role of the memory slots appended to each input chunk and evaluated the model's performance without the selector. We found that adding memory to the input chunks helps the proposed model outperform the baseline on the masked language modeling task under specific training parameters. An ablation study shows that the compressed input chunks can still be used, at the cost of some degradation in performance.
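To make the core idea concrete, the following is a minimal, hypothetical sketch of what augmenting each input chunk with learnable memory slots could look like; the class name, slot count, and dimensions are illustrative assumptions and not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class MemoryAugmentedChunker(nn.Module):
    """Hypothetical sketch: prepend learnable memory-slot embeddings to each input chunk.

    All names and shapes here are assumptions for illustration, not the proposed model's code.
    """

    def __init__(self, d_model: int = 512, num_memory_slots: int = 10):
        super().__init__()
        # One learnable embedding per memory slot, shared across all chunks.
        self.memory_slots = nn.Parameter(torch.randn(num_memory_slots, d_model) * 0.02)

    def forward(self, chunk_embeddings: torch.Tensor) -> torch.Tensor:
        # chunk_embeddings: (num_chunks, chunk_len, d_model)
        num_chunks = chunk_embeddings.size(0)
        # Broadcast the same memory slots to every chunk and prepend them,
        # so each chunk is encoded together with its memory tokens.
        mem = self.memory_slots.unsqueeze(0).expand(num_chunks, -1, -1)
        return torch.cat([mem, chunk_embeddings], dim=1)  # (num_chunks, slots + chunk_len, d_model)


# Usage: a long input split into three 128-token chunks, embedded to d_model = 512.
chunks = torch.randn(3, 128, 512)
augmented = MemoryAugmentedChunker()(chunks)
print(augmented.shape)  # torch.Size([3, 138, 512]) -- ready for a T5-style encoder
```

In this sketch the memory tokens are simply extra positions concatenated to each chunk before encoding, which is one plausible way such slots could carry cross-chunk information.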