Pre-training over mixed multi-task, multi-domain, and multi-modal data remains an open challenge in vision perception pre-training. In this paper, we propose GPPF, a General Perception Pre-training Framework, which pre-trains a task-level dynamic network, composed of knowledge "legos" in each layer, on labeled multi-task and multi-domain datasets. By inspecting humans' innate ability to learn in complex environments, we identify and transfer three critical elements to deep networks: (1) simultaneous exposure to diverse cross-task and cross-domain information in each batch; (2) partitioned knowledge storage in separate lego units driven by knowledge sharing; (3) sparse activation of a subset of lego units for both pre-training and downstream tasks. Notably, the joint training of disparate vision tasks is non-trivial due to their differences in input shapes, loss functions, output formats, data distributions, etc. We therefore develop a plug-and-play multi-task training algorithm that supports Single Iteration Multiple Tasks (SIMT) concurrent training. SIMT lays the foundation for pre-training with large-scale multi-task, multi-domain datasets and proves essential for stable training in our GPPF experiments. Excitingly, extensive experiments show that our GPPF-R50 model achieves significant improvements of 2.5-5.8 over a strong baseline on the 8 pre-training tasks in GPPF-15M and achieves a range of SOTA results on the 22 downstream tasks under similar computation budgets. We also validate the generalization ability of GPPF to SOTA vision transformers with consistent improvements. These solid experimental results fully demonstrate the effective knowledge learning, storing, sharing, and transfer provided by our novel GPPF framework.
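To make the "lego" idea concrete, the following is a minimal sketch, not the authors' implementation, of a layer built from separate lego units with task-level sparse activation. The names `LegoLayer`, `num_legos`, and the per-task routing table are hypothetical illustrations of elements (2) and (3) above: overlapping routing entries model knowledge sharing, while unshared entries keep task-specific knowledge partitioned.

```python
# Hypothetical sketch of a layer composed of knowledge "lego" units with
# task-level sparse activation (not the paper's actual architecture).
import torch
import torch.nn as nn

class LegoLayer(nn.Module):
    def __init__(self, dim: int, num_legos: int = 4):
        super().__init__()
        # Each "lego" is an independent knowledge unit; here a small MLP.
        self.legos = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(num_legos)]
        )

    def forward(self, x: torch.Tensor, active: list[int]) -> torch.Tensor:
        # Sparse activation: only the lego units routed to the current task
        # contribute; inactive units receive no gradient from this task.
        out = x
        for i in active:
            out = out + self.legos[i](x)
        return out

# A task-level routing table decides which legos each task activates;
# the shared unit 0 models cross-task knowledge sharing.
routing = {"classification": [0, 1], "detection": [0, 2], "segmentation": [0, 3]}
layer = LegoLayer(dim=256)
feats = torch.randn(8, 256)
y = layer(feats, routing["detection"])  # shared lego 0 plus private lego 2
```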
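Similarly, a hedged sketch of what SIMT-style training could look like under the assumptions above: each optimizer step draws one mini-batch from every task, so gradients mix cross-task and cross-domain signal despite differing input shapes, output formats, and loss functions. The `simt_step` function, the per-task `heads`, `losses`, and `loaders` iterators are placeholders, not the paper's actual pipeline.

```python
# Hypothetical SIMT step: a single iteration trains on multiple tasks.
import torch

def simt_step(backbone, heads, losses, loaders, routing, optimizer):
    optimizer.zero_grad()
    total = 0.0
    for task, loader in loaders.items():
        inputs, targets = next(loader)           # task-specific batch shape
        feats = backbone(inputs, routing[task])  # sparse lego activation
        preds = heads[task](feats)               # task-specific output format
        loss = losses[task](preds, targets)      # task-specific loss function
        loss.backward()                          # accumulate gradients per task
        total += loss.item()
    optimizer.step()                             # one step over all tasks
    return total
```

The design point this illustrates is that accumulating per-task gradients before a single shared update exposes every parameter to all tasks in each iteration, which is the property the abstract credits for stable multi-task, multi-domain pre-training.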