Given the vast amount of content created online every minute, slang-aware automatic tools are critically needed to promote social good and to assist policymakers and moderators in restricting the spread of offensive language, abuse, and hate speech. Despite the success of large language models and the spontaneous emergence of slang dictionaries, it remains unclear how effectively the two can be combined for slang understanding in downstream social good tasks. In this paper, we provide a framework for studying different combinations of representation learning models and knowledge resources across a variety of downstream tasks that rely on slang understanding. Our experiments show the superiority of models pre-trained on social media data, while dictionaries have a positive impact only for static word embeddings. Our error analysis identifies core challenges for slang representation learning, including out-of-vocabulary words, polysemy, variance, and annotation disagreements, which can be traced to the characteristics of slang as a quickly evolving and highly subjective language.