The proliferation of fake physician reviews has potentially detrimental consequences for patient well-being and has prompted concern among consumer protection groups and regulatory bodies. Yet despite significant advances in machine learning and natural language processing, the characteristics that differentiate fraudulent from authentic reviews remain poorly understood. This study utilizes a novel pre-labeled dataset of 38,048 physician reviews to establish the effectiveness of large language models in classifying reviews. Specifically, we compare the performance of traditional ML models, such as logistic regression and support vector machines, to generative pre-trained transformer (GPT) models. Furthermore, we use GPT-4, the newest model in the GPT family, to uncover the key dimensions along which fake and genuine physician reviews differ. Our findings reveal significantly superior performance of GPT-3 over traditional ML models in this context. Additionally, our analysis suggests that GPT-3 requires a smaller training sample than traditional models, indicating its suitability for tasks with scarce training data. Moreover, GPT-3's performance advantage increases in the cold-start context, i.e., when there are no prior reviews of a doctor. Finally, we employ GPT-4 to reveal the crucial dimensions that distinguish fake physician reviews from genuine ones. In sharp contrast to previous findings in the literature obtained using simulated data, our findings from a real-world dataset show that fake reviews are generally more clinically detailed, more reserved in sentiment, and better structured grammatically than authentic ones.