Humans are remarkably good at transferring knowledge from one domain to another, enabling rapid learning of new tasks. Likewise, transfer learning via pretraining has enabled enormous success in many computer vision problems. However, the benefits of transfer in multi-domain learning, where a network learns multiple tasks defined by different datasets, have not been adequately studied. Learning multiple domains could be beneficial, or these domains could interfere with each other given limited network capacity. In this work, we decipher the conditions under which interference and knowledge transfer occur in multi-domain learning. We propose new metrics that disentangle interference and transfer, and set up corresponding experimental protocols. We further examine the roles of network capacity, task grouping, and dynamic loss weighting in reducing interference and facilitating transfer. We demonstrate our findings on the CIFAR-100, MiniPlaces, and Tiny-ImageNet datasets.