Disentangled and invariant representations are two critical goals of representation learning, and many approaches have been proposed to achieve either of them. However, the two goals are in fact complementary, so we propose a framework that accomplishes both simultaneously. We introduce a weakly supervised signal to learn a disentangled representation consisting of three splits that contain predictive, known-nuisance, and unknown-nuisance information, respectively. Furthermore, we incorporate a contrastive method to enforce representation invariance. Experiments show that the proposed method outperforms state-of-the-art (SOTA) methods on four standard benchmarks and that, without adversarial training, it achieves better adversarial robustness than competing methods.
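As a rough illustration of the two ingredients named above (not the authors' implementation), the sketch below shows an encoder whose latent vector is partitioned into three splits, predictive, known-nuisance, and unknown-nuisance, together with an NT-Xent-style contrastive loss that pushes the predictive split to be invariant across augmented views. All module names, dimensions, and the toy augmentation are illustrative assumptions.

```python
# Minimal sketch, assuming a SimCLR-style contrastive objective on the
# predictive split; this is NOT the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SplitEncoder(nn.Module):
    """Encoder whose output is partitioned into three latent splits."""

    def __init__(self, in_dim=784, pred_dim=32, known_dim=16, unknown_dim=16):
        super().__init__()
        self.dims = (pred_dim, known_dim, unknown_dim)
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, sum(self.dims)),
        )

    def forward(self, x):
        z = self.net(x)
        # z_pred carries label information, z_known the annotated nuisance,
        # z_unknown the remaining (unannotated) nuisance factors.
        z_pred, z_known, z_unknown = torch.split(z, self.dims, dim=1)
        return z_pred, z_known, z_unknown


def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))  # exclude self-similarity
    n = z1.size(0)
    # The positive for sample i in view 1 is sample i in view 2, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    enc = SplitEncoder()
    x = torch.randn(8, 784)                  # a batch of inputs
    x_aug = x + 0.1 * torch.randn_like(x)    # stand-in data augmentation
    zp1, _, _ = enc(x)
    zp2, _, _ = enc(x_aug)
    print("invariance loss:", nt_xent(zp1, zp2).item())
```

In practice the contrastive term would be combined with a supervised loss on the predictive split and disentanglement objectives on the two nuisance splits; the sketch isolates only the invariance mechanism.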