One of the main arguments for studying disentangled representations is the assumption that they can be easily reused across different tasks. At the same time, finding a joint, adaptable representation of the data is one of the key challenges in the multi-task learning setting. In this paper, we take a closer look at the relationship between disentanglement and multi-task learning based on hard parameter sharing. We perform a thorough empirical study of the representations obtained by neural networks trained on automatically generated supervised tasks. Using a set of standard metrics, we show that disentanglement appears naturally during multi-task neural network training.
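To make the setting concrete, below is a minimal sketch (not the authors' code) of hard parameter sharing: several task-specific heads are attached to a single shared encoder, so all tasks must rely on the same intermediate representation, which is the representation whose disentanglement is then measured. All class and parameter names are illustrative assumptions.

```python
# Minimal hard-parameter-sharing sketch (illustrative; names are hypothetical).
import torch
import torch.nn as nn

class HardSharingMultiTaskNet(nn.Module):
    def __init__(self, input_dim, hidden_dim, repr_dim, task_output_dims):
        super().__init__()
        # Shared trunk: its output is the joint representation studied for disentanglement.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, repr_dim),
        )
        # One small head per supervised task; only these parameters are task-specific.
        self.heads = nn.ModuleList(
            [nn.Linear(repr_dim, out_dim) for out_dim in task_output_dims]
        )

    def forward(self, x):
        z = self.encoder(x)  # shared representation used by every task
        return z, [head(z) for head in self.heads]

# Usage: sum the per-task supervised losses and backpropagate through the shared encoder.
model = HardSharingMultiTaskNet(input_dim=64, hidden_dim=128, repr_dim=10,
                                task_output_dims=[1, 1, 1])
x = torch.randn(32, 64)
z, task_outputs = model(x)
```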