Variational autoencoders (VAEs) are used for transfer learning across various research domains, such as music generation and medical image analysis. However, there is no principled way to assess, before transfer, which components should be retrained or whether transfer learning is likely to help on a target task. We propose to explore this question through the lens of representational similarity. Specifically, using Centred Kernel Alignment (CKA) to evaluate the similarity of VAEs trained on different datasets, we show that encoders' representations are generic whereas decoders' are specific. Based on these insights, we discuss the implications for selecting which components of a VAE to retrain and propose a method to visually assess whether transfer learning is likely to help on classification tasks.
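For concreteness, here is a minimal sketch of linear CKA, the representational-similarity measure referenced above, assuming NumPy. The function name `linear_cka` and the probe-activation arrays are illustrative placeholders, not artefacts from the paper.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices.

    X: (n_examples, d_x) activations from one model/layer.
    Y: (n_examples, d_y) activations from another model/layer,
       computed on the same probe inputs.
    Returns a similarity score in [0, 1].
    """
    # Centre each feature column so CKA is invariant to mean offsets.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # ||X^T Y||_F^2 is HSIC with a linear kernel (up to a constant factor).
    cross = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

# Hypothetical usage: compare encoder activations of two VAEs
# trained on different datasets, evaluated on shared probe inputs.
# enc_a, enc_b: arrays of shape (n_examples, latent_dim)
# similarity = linear_cka(enc_a, enc_b)
```

A high CKA score between two encoders (here, close to 1) would indicate that their representations are largely interchangeable, which is the sense in which the abstract describes encoders as "generic".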