Large-scale pre-training has recently revolutionized vision-and-language (VL) research. Models such as LXMERT and UNITER have significantly lifted the state of the art over a wide range of VL tasks. However, the large number of parameters in such models hinders their application in practice. In parallel, work on the lottery ticket hypothesis (LTH) has shown that deep neural networks contain small matching subnetworks that can achieve performance on par with or even better than the dense networks when trained in isolation. In this work, we perform the first empirical study to assess whether such trainable subnetworks also exist in pre-trained VL models. We use UNITER as the main testbed (and also test LXMERT and ViLT), and consolidate 7 representative VL tasks for experiments: visual question answering, visual commonsense reasoning, visual entailment, referring expression comprehension, image-text retrieval, GQA, and NLVR$^2$. Through comprehensive analysis, we summarize our main findings as follows. ($i$) It is difficult to find subnetworks that strictly match the performance of the full model; however, we can find "relaxed" winning tickets at 50%-70% sparsity that maintain 99% of the full accuracy. ($ii$) Subnetworks found by task-specific pruning transfer reasonably well to the other tasks, while those found on the pre-training tasks at 60%/70% sparsity transfer universally, matching 98%/96% of the full accuracy on average over all tasks. ($iii$) Besides UNITER, LXMERT and ViLT can also play lottery tickets; however, the highest sparsity we can achieve for ViLT is far lower than for LXMERT and UNITER (30% vs. 70%). ($iv$) LTH also remains relevant when using other training methods (e.g., adversarial training).
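The subnetworks discussed above are typically identified by magnitude pruning: keep the largest-magnitude weights up to a target sparsity, zero out the rest, and retrain the surviving weights. As a minimal sketch (using NumPy and illustrative names; the actual study prunes full transformer models, not a single matrix), the per-layer mask computation can be written as:

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Return a binary mask that keeps the largest-magnitude weights.

    weights:  array of a layer's parameters
    sparsity: fraction of weights to prune away (e.g. 0.7 for 70% sparsity)
    """
    k = int(weights.size * sparsity)  # number of weights to zero out
    if k == 0:
        return np.ones_like(weights)
    # threshold = magnitude of the k-th smallest entry; prune everything at or below it
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))          # stand-in for one weight matrix
mask = magnitude_prune_mask(w, 0.7)  # 70% sparsity: 20 of 64 entries survive
pruned_w = w * mask                  # the "ticket": surviving weights, rest zeroed
```

In LTH experiments the surviving weights are then rewound to their initialization (or an early checkpoint) and retrained; a "winning ticket" is a mask for which this retrained subnetwork matches the dense model's accuracy.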