Disentangled representation learning has been proposed as an approach to learning general representations even in the absence of, or with limited, supervision. A good general representation can be fine-tuned for new target tasks using modest amounts of data, or used directly in unseen domains, achieving remarkable performance in the corresponding task. This alleviation of the data and annotation requirements offers tantalising prospects for applications in computer vision and healthcare. In this tutorial paper, we motivate the need for disentangled representations, revisit key concepts, and describe practical building blocks and criteria for learning such representations. We survey applications in medical imaging, emphasising choices made in exemplar key works, and then discuss links to computer vision applications. We conclude by presenting limitations, challenges, and opportunities.