Learning visual representations with interpretable features, i.e., disentangled representations, remains a challenging problem. Existing methods demonstrate some success but are hard to apply to large-scale vision datasets like ImageNet. In this work, we propose a simple post-processing framework to disentangle content and style in learned representations from pre-trained vision models. We model the pre-trained features probabilistically as linearly entangled combinations of the latent content and style factors and develop a simple disentanglement algorithm based on the probabilistic model. We show that the method provably disentangles content and style features and verify its efficacy empirically. Our post-processed features yield significant domain generalization performance improvements when the distribution shift occurs due to style changes or style-related spurious correlations.
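To make the linear-entanglement view concrete, the sketch below shows one simplified way such a post-processing step could look: pre-trained features are treated as a linear mixture of content and style coordinates, a style subspace is estimated from coarse style (domain) labels, and the residual after projecting that subspace out is kept as the content part. This is only an illustration under those assumptions, not the paper's actual algorithm; the function name `split_content_style` and the use of domain labels are hypothetical choices for the example.

```python
# Hypothetical sketch of post-hoc linear content/style separation.
# Assumes features from a frozen pre-trained model are (approximately) a
# linear mixture of content and style factors, and that coarse style
# (domain) labels are available. Not the paper's algorithm; illustration only.
import numpy as np

def split_content_style(feats, style_labels, style_dim=3):
    """Estimate a style subspace from between-style-group mean differences
    and project it out to obtain (approximate) content features."""
    feats = feats - feats.mean(axis=0, keepdims=True)
    # Per-style-group means capture directions that vary with style.
    group_means = np.stack(
        [feats[style_labels == s].mean(axis=0) for s in np.unique(style_labels)]
    )                                              # (num_styles, d)
    # Top right-singular vectors of the group means span the style subspace.
    _, _, vt = np.linalg.svd(group_means, full_matrices=False)
    style_basis = vt[:style_dim].T                 # (d, style_dim)
    style_part = feats @ style_basis               # style coordinates
    content_part = feats - style_part @ style_basis.T  # residual = content
    return content_part, style_part

# Usage example: 1000 samples of 512-d features from 4 style domains.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 512))
style_labels = rng.integers(0, 4, size=1000)
content, style = split_content_style(feats, style_labels, style_dim=3)
print(content.shape, style.shape)  # (1000, 512) (1000, 3)
```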