Many online applications, such as online social networks or knowledge bases, are often attacked by malicious users who commit different types of actions such as vandalism on Wikipedia or fraudulent reviews on eBay. Currently, most fraud detection approaches require a training dataset that contains records of both benign and malicious users. In practice, however, there are often few or no records of malicious users. In this paper, we develop one-class adversarial nets (OCAN) for fraud detection using training data with only benign users. OCAN first uses an LSTM-Autoencoder to learn representations of benign users from their sequences of online activities. It then detects malicious users by training a discriminator within a complementary GAN framework, which differs from the regular GAN model. Experimental results show that OCAN outperforms state-of-the-art one-class classification models and achieves performance comparable to the latest multi-source LSTM model, which requires both benign and malicious users in the training phase.
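A minimal sketch of the two-stage pipeline summarized above, written in PyTorch. The layer sizes, the single-layer LSTM encoder/decoder, and the simplified complementary-generator setup are illustrative assumptions, not the paper's released implementation; the paper's full complementary GAN objective (e.g., feature matching and entropy terms) is omitted here for brevity.

```python
# Sketch of the OCAN pipeline under the assumptions stated above.
# Stage 1: an LSTM-Autoencoder learns a fixed-length representation of each
#          benign user's activity sequence.
# Stage 2: a discriminator is trained against a "complementary" generator that
#          produces samples meant to lie outside the benign region of the
#          representation space; at test time, low discriminator scores flag
#          potentially malicious users.
import torch
import torch.nn as nn


class LSTMAutoencoder(nn.Module):
    """Encodes a user's activity sequence into a representation and reconstructs it."""

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(hidden_dim, feat_dim, batch_first=True)

    def forward(self, x):                       # x: (batch, seq_len, feat_dim)
        _, (h, _) = self.encoder(x)             # h: (1, batch, hidden_dim)
        z = h.squeeze(0)                        # user representation
        # Repeat the code across time steps and decode to reconstruct the sequence.
        z_rep = z.unsqueeze(1).expand(-1, x.size(1), -1)
        x_hat, _ = self.decoder(z_rep)
        return z, x_hat


class Discriminator(nn.Module):
    """Scores how likely a representation is to come from a benign user."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, z):
        return self.net(z)


class ComplementaryGenerator(nn.Module):
    """Maps noise to samples intended to fall outside the benign representation region."""

    def __init__(self, noise_dim: int, hidden_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 64), nn.ReLU(),
            nn.Linear(64, hidden_dim))

    def forward(self, eps):
        return self.net(eps)


# Training order (sketch): fit LSTMAutoencoder on benign sequences with a
# reconstruction loss; then train Discriminator to separate benign codes z from
# generated samples G(eps). At inference, a user whose score D(z) falls below a
# chosen threshold is flagged as malicious.
```

This follows the one-class setup in the abstract: only benign users are needed for training, because the generated complementary samples stand in for the missing malicious class.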