Slanted news coverage, also called media bias, can heavily influence how news consumers interpret and react to the news. To automatically identify biased language, we present an exploratory approach that compares the contexts of related words. We train two word embedding models, one on texts from left-wing news outlets and the other on texts from right-wing news outlets. Our hypothesis is that a word's representations in the two embedding spaces are more similar for non-biased words than for biased words. The underlying idea is that the context of biased words varies more strongly across news outlets than that of non-biased words, since whether a word is perceived as biased depends on its context. While we do not find statistical significance to accept the hypothesis, the results show the effectiveness of the approach. For example, after a linear mapping between the two word embedding spaces, 31% of the words with the largest distances potentially induce bias. To improve the results, we find that the dataset needs to be significantly larger, and we derive further methodology as a direction for future research. To our knowledge, this paper presents the first in-depth look at the context of biased words as measured by word embeddings.
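The comparison outlined above can be sketched in a few steps: train one embedding model per political leaning, align the two spaces with a linear (orthogonal Procrustes) mapping over the shared vocabulary, and rank words by their post-mapping distance. The snippet below is a minimal illustration of that idea, not the paper's exact pipeline; the corpus file names and hyperparameters are placeholders, and gensim/numpy are assumed to be available.

```python
# Sketch: compare a word's context across left-wing and right-wing embedding spaces.
import numpy as np
from gensim.models import Word2Vec

def load_sentences(path):
    """Read one tokenized sentence per line (hypothetical corpus format)."""
    with open(path, encoding="utf-8") as f:
        return [line.lower().split() for line in f]

# Train one embedding model per political leaning (file names are placeholders).
left = Word2Vec(load_sentences("left_outlets.txt"), vector_size=100, min_count=5, epochs=10)
right = Word2Vec(load_sentences("right_outlets.txt"), vector_size=100, min_count=5, epochs=10)

# Only words present in both vocabularies can be compared.
shared = sorted(set(left.wv.key_to_index) & set(right.wv.key_to_index))
A = np.stack([left.wv[w] for w in shared])    # left-wing vectors
B = np.stack([right.wv[w] for w in shared])   # right-wing vectors

# Linear mapping of the left space onto the right space (orthogonal Procrustes).
U, _, Vt = np.linalg.svd(A.T @ B)
A_mapped = A @ (U @ Vt)

# Cosine distance per word after mapping; large distances flag candidate bias words.
A_n = A_mapped / np.linalg.norm(A_mapped, axis=1, keepdims=True)
B_n = B / np.linalg.norm(B, axis=1, keepdims=True)
dist = 1.0 - np.sum(A_n * B_n, axis=1)

# Words with the largest cross-outlet distance are candidates for biased language.
for w, d in sorted(zip(shared, dist), key=lambda x: -x[1])[:20]:
    print(f"{w}\t{d:.3f}")
```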