Warning: this paper contains content that may be offensive or upsetting. Given the vast amount of content created online every minute, slang-aware automatic tools are critically needed to promote social good and to assist policymakers and moderators in restricting the spread of offensive language, abuse, and hate speech. Despite the success of large language models and the spontaneous emergence of slang dictionaries, it is unclear how far their combination goes in terms of slang understanding for downstream social good tasks. In this paper, we provide a framework to study different combinations of representation learning models and knowledge resources for a variety of downstream tasks that rely on slang understanding. Our experiments show the superiority of models that have been pre-trained on social media data, while the impact of dictionaries is positive only for static word embeddings. Our error analysis identifies core challenges for slang representation learning, including out-of-vocabulary words, polysemy, variance, and annotation disagreements, which can be traced to characteristics of slang as a quickly evolving and highly subjective language.