The review process is essential for ensuring the quality of publications. Recently, the surge in submissions to top venues in machine learning and NLP has placed an excessive burden on reviewers, raising concerns that this overload may also degrade the quality of the reviews themselves. An automatic system for assisting with the reviewing process could help ameliorate this problem. In this paper, we explore automatic review summary generation for scientific papers. We posit that neural language models are promising candidates for this task. To test this hypothesis, we release a new dataset of scientific papers and their reviews, collected from papers published at the NeurIPS conference from 2013 to 2020. We evaluate state-of-the-art neural summarization models, present initial results on the feasibility of automatic review summary generation, and propose directions for future work.