Fine-tuning Natural Language Processing (NLP) models for each new data set requires additional computational time, with an associated increase in carbon footprint and cost. Fine-tuning does, however, help pre-trained models adapt to the latest data sets; what if we skipped the fine-tuning step and attempted to generate summaries using only the pre-trained models, thereby reducing computational time and cost? In this paper, we omitted the fine-tuning step and investigated whether a Maximal Marginal Relevance (MMR)-based approach can help pre-trained models produce query-focused summaries directly from a new data set that was not used to pre-train them. First, we applied topic modelling to the Wikipedia Current Events Portal (WCEP) and Debatepedia datasets to generate queries for the summarization tasks. Then, using MMR, we ranked the sentences of the documents with respect to the queries. Next, we passed the ranked sentences to seven transformer-based pre-trained models to perform the summarization tasks. Finally, we applied the MMR approach again to select the query-relevant sentences from the summaries generated by the individual pre-trained models and constructed the final summary. As indicated by the experimental results, our MMR-based approach successfully ranked and selected the most relevant sentences as summaries and showed better performance than the individual pre-trained models.
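For illustration, the sketch below shows one possible form of the MMR-based sentence ranking step described above. It is a minimal example, not the paper's exact implementation: the similarity measure (TF-IDF cosine), the trade-off parameter lam, the value of top_k, and the helper name mmr_rank are all assumptions introduced here.

```python
# Hypothetical sketch of Maximal Marginal Relevance (MMR) ranking of sentences
# against a query. The similarity measure (TF-IDF cosine) and lambda are
# illustrative assumptions, not details specified in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def mmr_rank(query, sentences, top_k=5, lam=0.7):
    """Return the top_k sentences ranked by MMR with trade-off parameter lam."""
    vec = TfidfVectorizer().fit([query] + sentences)
    q_vec = vec.transform([query])
    s_vecs = vec.transform(sentences)

    query_sim = cosine_similarity(s_vecs, q_vec).ravel()  # relevance to the query
    pairwise_sim = cosine_similarity(s_vecs)              # redundancy among sentences

    selected, candidates = [], list(range(len(sentences)))
    while candidates and len(selected) < top_k:
        def mmr_score(i):
            redundancy = max(pairwise_sim[i][j] for j in selected) if selected else 0.0
            return lam * query_sim[i] - (1 - lam) * redundancy

        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in selected]
```

The same routine could, in principle, be reused in the final step by treating the sentences of the individual models' generated summaries as the candidate pool and selecting the query-relevant ones for the final summary.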