In this paper, we focus on normative systems for online communities. We address the issue that arises when different community members interpret these norms in different ways, which can lead to unexpected behavior in interactions and, frequently, to norm violations that degrade both individual and community experiences. To address this issue, we propose a framework that detects norm violations and provides the violator with information about which features of their action made it violate a norm. We build our framework using machine learning, with Logistic Model Trees as the classification algorithm. Since norm violations can be highly contextual, we train our model on data from the Wikipedia online community, namely data on Wikipedia edits. We then evaluate our work on the Wikipedia use case, focusing on the norm that prohibits vandalism in Wikipedia edits.
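To make the detection setup concrete, the following is a minimal, purely illustrative sketch of classifying edits as norm-violating from hand-crafted features. It uses a plain logistic regression trained by gradient descent as a simplified stand-in for the paper's Logistic Model Trees, and the feature names (uppercase ratio, profanity flag, deletion ratio) and the tiny synthetic dataset are hypothetical, not the paper's actual Wikipedia data or feature set.

```python
import math

# Illustrative sketch only: logistic regression as a stand-in for the
# Logistic Model Trees used in the paper. Features and data are made up.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Stochastic gradient descent for logistic regression."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical features per edit: [uppercase_ratio, profanity_flag, deletion_ratio]
X = [
    [0.05, 0, 0.01],  # benign copy-edit
    [0.90, 1, 0.00],  # shouting plus profanity -> vandalism
    [0.10, 0, 0.95],  # blanked most of the page -> vandalism
    [0.03, 0, 0.02],  # benign
]
y = [0, 1, 1, 0]

w, b = train(X, y)

def predict(x):
    """Return 1 if the edit is classified as a norm violation."""
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5)
```

Because the learned weights are attached to named features, a positive prediction can also be explained by reporting the features with the largest weighted contributions, mirroring the framework's goal of telling the violator which aspects of their action triggered the violation.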