Current studies of bias in NLP rely mainly on identifying (unwanted or negative) bias towards a specific demographic group. While this has led to progress in recognizing and mitigating negative bias, and while having a clear notion of the targeted group is necessary, it is not always practical. In this work we extrapolate to a broader notion of bias, rooted in the social science and psychology literature. We move towards predicting interpersonal group relationship (IGR) - modeling the relationship between the speaker and the target in an utterance - using fine-grained interpersonal emotions as an anchor. We build and release a dataset of English tweets by US Congress members annotated for interpersonal emotion - the first of its kind - together with 'found supervision' for IGR labels; our analyses show that subtle emotional signals are indicative of different biases. While humans can perform better than chance at identifying IGR given an utterance, we show that neural models perform much better; furthermore, a shared encoding between IGR and interpersonal perceived emotion enabled performance gains on both tasks. Data and code for this paper are available at https://github.com/venkatasg/interpersonal-bias.
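The abstract mentions a shared encoding between IGR and interpersonal emotion prediction. A natural reading is a multi-task setup: one transformer encoder feeding two task-specific heads. The sketch below is a minimal illustration of that idea under stated assumptions - a RoBERTa encoder, a binary IGR head (in-group vs. out-group), and a multi-label emotion head whose size `num_emotions` is a placeholder - and is not the authors' exact architecture or hyperparameters.

```python
# Minimal multi-task sketch: a shared transformer encoder with two heads,
# one for IGR and one for interpersonal emotion. Encoder choice, head
# shapes, and num_emotions are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SharedIGREmotionModel(nn.Module):
    def __init__(self, model_name="roberta-base", num_emotions=8):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.igr_head = nn.Linear(hidden, 2)                 # in-group / out-group
        self.emotion_head = nn.Linear(hidden, num_emotions)  # multi-label emotions

    def forward(self, input_ids, attention_mask):
        # Use the first-token (<s>) representation as the utterance encoding.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]
        return self.igr_head(cls), self.emotion_head(cls)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = SharedIGREmotionModel()
batch = tokenizer(["We thank our colleagues across the aisle."],
                  return_tensors="pt", padding=True, truncation=True)
igr_logits, emo_logits = model(batch["input_ids"], batch["attention_mask"])

# Joint training would combine a cross-entropy loss on the IGR head with a
# binary cross-entropy loss on the emotion head, e.g.:
#   loss = ce(igr_logits, igr_labels) + bce(emo_logits, emotion_labels)
```

In this kind of setup, gradients from both objectives flow into the shared encoder, which is one plausible mechanism for the mutual performance gains the abstract reports.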