In this work, we train the first monolingual Lithuanian transformer model on a relatively large corpus of Lithuanian news articles and compare various output decoding algorithms for abstractive news summarization. The generated summaries are coherent and look impressive at first glance. However, some of them contain misleading information that is not easy to spot. We describe all the technical details and share our trained model and accompanying code in an online open-source repository, together with some characteristic samples of the generated summaries.