Disentangled representation learning has been proposed as an approach to learning general representations even in the absence of, or with limited, supervision. A good general representation can be fine-tuned for new target tasks using modest amounts of data, or used directly in unseen domains, achieving remarkable performance in the corresponding task. This alleviation of the data and annotation requirements offers tantalising prospects for tractable and affordable applications in computer vision and healthcare. Finally, disentangled representations can offer model explainability and can help us understand the underlying causal relations of the factors of variation, increasing their suitability for real-world deployment. In this tutorial paper, we will offer an overview of disentangled representation learning, its building blocks and criteria, and discuss applications in computer vision and medical imaging. We conclude our tutorial by presenting identified opportunities for the integration of recent machine learning advances into disentanglement, as well as the remaining challenges.