Automatic summarization techniques aim to shorten and generalize the information in a text while preserving its core message and most relevant ideas. The task can be approached with a variety of methods; however, few attempts have been made to produce solutions specifically for the Russian language, despite existing localizations of state-of-the-art models. In this paper, we showcase ruGPT3's ability to summarize texts by fine-tuning it on a corpus of Russian news articles with corresponding human-written summaries. Additionally, we tune the generation hyperparameters so that the model's output becomes less random and more faithful to the original text. We evaluate the resulting summaries with a set of metrics, showing that our solution can surpass the state-of-the-art model's performance without additional changes to the architecture or loss function. Despite producing sensible summaries, our model still suffers from a number of flaws: it is prone to altering named entities present in the original text (such as surnames, places, and dates), deviating from the facts stated in the source document, and repeating information within the summary.
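As an illustration of the kind of overlap-based evaluation mentioned above, the following is a minimal sketch of a unigram-overlap ROUGE-1 F1 score in plain Python. It assumes simple whitespace tokenization and is only a toy reconstruction; the paper's actual metric suite and tokenization may differ.

```python
from collections import Counter


def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a candidate summary and a reference.

    Toy sketch: lowercases and splits on whitespace, which is an assumption;
    published ROUGE implementations use more careful tokenization/stemming.
    """
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Clipped overlap: each shared unigram counts at most min(cand, ref) times.
    overlap = sum((cand_counts & ref_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```

For example, an exact match scores 1.0, while summaries sharing no words score 0.0.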