The United Nations identified gender equality as a Sustainable Development Goal in 2015, recognizing the underrepresentation of women in politics as a specific barrier to achieving gender equality. Political systems around the world experience gender inequality across all levels of elected government, as fewer women run for office than men. This is due in part to online abuse, particularly on social media platforms such as Twitter, where women seeking or holding power tend to be targeted with more toxic maltreatment than their male counterparts. In this paper, we present reflections on ParityBOT: the first natural language processing-based intervention designed to improve online discourse for women in politics at scale. Deployed across elections in Canada, the United States, and New Zealand, ParityBOT was used to analyse and classify more than 12 million tweets directed at women candidates and to counter toxic tweets with supportive ones. From these elections we present three case studies highlighting the current limitations of, and future research and application opportunities for, using a natural language processing-based system to detect online toxicity, particularly with regard to contextually important microaggressions. We examine the rate of false negatives, where ParityBOT failed to flag insults directed at specific high-profile women that would be obvious to human readers. In light of these technological blind spots, we examine the unaddressed harms of microaggressions and the as-yet-unseen damage they may cause for women in these communities, and for progress towards gender equality overall. We conclude with a discussion of the benefits of partnerships between nonprofit social groups and technology experts in developing responsible, socially impactful approaches to addressing online hate.
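To make the described pipeline concrete, the sketch below illustrates one way a ParityBOT-style system could score incoming tweets for toxicity and respond with a supportive message. It is a minimal illustration, not the authors' implementation: the use of Google's Perspective API, the 0.9 threshold, the sample supportive messages, and the `send_supportive_tweet` stub are all assumptions introduced here for clarity.

```python
"""Minimal, illustrative sketch of a ParityBOT-style pipeline.

Assumptions (not taken from the paper): Perspective API for toxicity
scoring, a 0.9 threshold, placeholder supportive messages, and a stubbed
posting function instead of a real Twitter API client.
"""
import random

import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder credential
TOXICITY_THRESHOLD = 0.9              # hypothetical cut-off

SUPPORTIVE_TWEETS = [
    "Women who run for office strengthen democracy for everyone.",
    "Thank you to every woman candidate putting her name on the ballot.",
]


def toxicity_score(text: str) -> float:
    """Return the Perspective API TOXICITY summary score (0.0 to 1.0) for a tweet."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def send_supportive_tweet(message: str) -> None:
    """Stub: a real deployment would post via the Twitter API."""
    print(f"Posting supportive tweet: {message}")


def handle_tweet(text: str) -> None:
    """Classify one incoming tweet and counter it with positivity if it looks toxic."""
    if toxicity_score(text) >= TOXICITY_THRESHOLD:
        send_supportive_tweet(random.choice(SUPPORTIVE_TWEETS))
```

A classifier of this kind inherits the blind spots discussed in the case studies: microaggressions and context-dependent insults can score well below any fixed threshold and therefore pass through undetected.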