The growing prominence of large language models, such as GPT-4 and ChatGPT, has raised concerns over academic integrity due to the potential for machine-generated content and paraphrasing. Although studies have explored the detection of human- and machine-paraphrased content, the comparison between these two types of content remains underexplored. In this paper, we conduct a comprehensive analysis of various datasets commonly employed for paraphrase detection tasks and evaluate an array of detection methods on them. Our findings highlight the strengths and limitations of each detection method on individual datasets and reveal a lack of suitable machine-generated datasets that align with human expectations. Our main finding is that human-authored paraphrases exceed machine-generated ones in difficulty, diversity, and similarity, implying that automatically generated texts are not yet on par with human-level performance. Transformer-based methods emerged as the most effective across datasets, with TF-IDF excelling on semantically diverse corpora. Additionally, we identify four datasets as the most diverse and challenging for paraphrase detection.