Disentangled representation learning has been proposed as an approach to learning general representations even with limited or no supervision. A good general representation can be fine-tuned for new target tasks with modest amounts of data, or applied directly to unseen domains, achieving strong performance on the corresponding tasks. This relaxation of data and annotation requirements offers tantalising prospects for applications in computer vision and healthcare. In this tutorial paper, we motivate the need for disentangled representations, present key theory, and detail practical building blocks and criteria for learning such representations. We discuss applications in medical imaging and computer vision, emphasising the choices made in exemplar key works. We conclude by presenting remaining challenges and opportunities.