Existing self-supervised learning strategies are constrained to either a limited set of objectives or generic downstream tasks that predominantly target uni-modal applications. This has isolated progress for imperative multi-modal applications that are diverse in terms of complexity and domain-affinity, such as meme analysis. Here, we introduce two self-supervised pre-training methods, namely Ext-PIE-Net and MM-SimCLR, that (i) employ off-the-shelf multi-modal hate-speech data during pre-training and (ii) perform self-supervised learning by incorporating multiple specialized pretext tasks, effectively catering to the complex multi-modal representation learning that meme analysis requires. We experiment with different self-supervision strategies, including potential variants that could help learn rich cross-modality representations, and evaluate them using standard linear probing on the Hateful Memes task. The proposed solutions compete strongly with the fully supervised baseline via label-efficient training, while distinctly outperforming it on all three tasks of the Memotion challenge, with performance gains of 0.18%, 23.64%, and 0.93%, respectively. Further, we demonstrate the generalizability of the proposed solutions by reporting competitive performance on the HarMeme task. Finally, we empirically establish the quality of the learned representations by analyzing task-specific learning with fewer labeled training samples, and argue that the complexity of the self-supervision strategy and that of the downstream task at hand are correlated. Our efforts highlight the need for better multi-modal self-supervision methods involving specialized pretext tasks for efficient fine-tuning and generalizable performance.
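To make the SimCLR-style multi-modal objective concrete, the following is a minimal NumPy sketch of a cross-modal NT-Xent contrastive loss of the kind that MM-SimCLR's name alludes to: matching (image, text) embedding pairs in a batch are treated as positives and all other pairings as negatives. This is a generic illustration, not the paper's exact formulation; the function name, the symmetric two-direction form, and the temperature value are assumptions.

```python
import numpy as np

def nt_xent_cross_modal(img_emb, txt_emb, temperature=0.5):
    """Illustrative SimCLR-style cross-modal contrastive loss.

    img_emb, txt_emb: (N, D) arrays of paired image/text embeddings,
    where row i of each array comes from the same meme.
    """
    # L2-normalize so dot products become cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) pairwise similarities

    def xent_diag(l):
        # Cross-entropy with the diagonal (true pair) as the target class.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_p))

    # Symmetrize over image-to-text and text-to-image retrieval directions.
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

Under this objective, a batch whose image and text rows are correctly aligned yields a lower loss than the same batch with the pairing shuffled, which is what drives the encoders toward aligned cross-modality representations.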