Advances in Natural Language Processing (NLP) have revolutionized the way researchers and practitioners address crucial societal problems. Large language models are now the standard for developing state-of-the-art solutions to text detection and classification tasks. However, the development of advanced computational techniques and resources is disproportionately focused on the English language, sidelining a majority of the languages spoken globally. While existing research has developed better multilingual and monolingual language models to bridge this language disparity between English and non-English languages, we explore the promise of incorporating the information contained in images via multimodal machine learning. Our comparative analyses on three detection tasks focusing on crisis information, fake news, and emotion recognition, as well as five high-resource non-English languages, demonstrate that: (a) detection frameworks based on pre-trained large language models like BERT and multilingual-BERT systematically perform better on English than on non-English languages, and (b) including images via multimodal learning bridges this performance gap. We situate our findings with respect to existing work on the pitfalls of large language models, and discuss their theoretical and practical implications. Resources for this paper are available at https://multimodality-language-disparity.github.io/.
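The abstract does not specify the fusion architecture; purely as a generic illustration of how image information can be combined with text features, the sketch below shows a late-fusion classifier that concatenates a text embedding with an image embedding before a linear layer. All dimensions, weights, and the fusion strategy are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_classify(text_emb, image_emb, W, b):
    """Late fusion: concatenate modality embeddings, then apply a
    linear layer followed by a softmax over class logits."""
    z = np.concatenate([text_emb, image_emb])  # joint representation
    logits = W @ z + b
    exp = np.exp(logits - logits.max())        # numerically stable softmax
    return exp / exp.sum()                     # class probabilities

# Illustrative dimensions: 768-d text features (BERT-sized hidden state)
# and 512-d image features (e.g., from a CNN backbone); both hypothetical.
text_emb = rng.standard_normal(768)
image_emb = rng.standard_normal(512)
W = rng.standard_normal((2, 768 + 512)) * 0.01  # binary detection task
b = np.zeros(2)

probs = fuse_and_classify(text_emb, image_emb, W, b)
```

In practice the embeddings would come from pre-trained encoders and the classifier head would be trained end to end; the point here is only that fusion reduces to forming a joint representation from both modalities.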