Hateful memes are a growing menace on social media. While the image and its corresponding text in a meme are related, they do not necessarily convey the same meaning when viewed individually. Hence, detecting hateful memes requires careful consideration of both visual and textual information. Multimodal pre-training can be beneficial for this task because it effectively captures the relationship between the image and the text by representing them in a similar feature space. Furthermore, it is essential to model the interactions between the image and text features through intermediate fusion. Most existing methods either employ multimodal pre-training or intermediate fusion, but not both. In this work, we propose the Hate-CLIPper architecture, which explicitly models the cross-modal interactions between the image and text representations obtained using Contrastive Language-Image Pre-training (CLIP) encoders via a feature interaction matrix (FIM). A simple classifier based on the FIM representation is able to achieve state-of-the-art performance on the Hateful Memes Challenge (HMC) dataset with an AUROC of 85.8, which even surpasses the human performance of 82.65. Experiments on other meme datasets such as Propaganda Memes and TamilMemes also demonstrate the generalizability of the proposed approach. Finally, we analyze the interpretability of the FIM representation and show that cross-modal interactions can indeed facilitate the learning of meaningful concepts. The code for this work is available at https://github.com/gokulkarthik/hateclipper.
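To make the feature interaction matrix (FIM) idea concrete, the following is a minimal sketch of a FIM-based classifier on top of CLIP embeddings. The layer sizes, projection dimension, and class names here are illustrative assumptions, not the paper's exact configuration; the actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn


class HateCLIPperSketch(nn.Module):
    """Sketch of a classifier over the feature interaction matrix (FIM).

    Dimensions and the MLP head are hypothetical choices for illustration.
    """

    def __init__(self, clip_dim: int = 512, proj_dim: int = 64, num_classes: int = 2):
        super().__init__()
        # Separate projections map CLIP image and text embeddings into a
        # shared space before computing their cross-modal interactions.
        self.image_proj = nn.Linear(clip_dim, proj_dim)
        self.text_proj = nn.Linear(clip_dim, proj_dim)
        # Simple classifier over the flattened FIM.
        self.classifier = nn.Sequential(
            nn.Linear(proj_dim * proj_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, image_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # image_emb, text_emb: (batch, clip_dim) embeddings from CLIP encoders.
        p_i = self.image_proj(image_emb)              # (batch, proj_dim)
        p_t = self.text_proj(text_emb)                # (batch, proj_dim)
        # Outer product captures all pairwise image-text feature interactions.
        fim = torch.einsum("bi,bj->bij", p_i, p_t)    # (batch, proj_dim, proj_dim)
        return self.classifier(fim.flatten(start_dim=1))


if __name__ == "__main__":
    # Toy usage with random tensors standing in for CLIP image/text embeddings.
    model = HateCLIPperSketch()
    img = torch.randn(4, 512)
    txt = torch.randn(4, 512)
    logits = model(img, txt)
    print(logits.shape)  # torch.Size([4, 2])
```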