Unsupervised disentanglement learning is a crucial problem for understanding and exploiting deep generative models. Recently, SeFa attempts to find latent disentangled directions by performing SVD on the first projection matrix of a pre-trained GAN. However, it is applied only to the first layer and works in a post-processing way. The Hessian Penalty minimizes the off-diagonal entries of the output's Hessian matrix to facilitate disentanglement, and can be applied to multiple layers. However, it constrains each entry of the output independently, making it insufficient for disentangling the latent directions (e.g., shape, size, rotation, etc.) of spatially correlated variations. In this paper, we propose a simple Orthogonal Jacobian Regularization (OroJaR) to encourage deep generative models to learn disentangled representations. It simply encourages the variations of the output caused by perturbations of different latent dimensions to be orthogonal, and the Jacobian with respect to the input is calculated to represent these variations. We show that our OroJaR also encourages the output's Hessian matrix to be diagonal in an indirect manner. In contrast to the Hessian Penalty, our OroJaR constrains the output in a holistic way, making it very effective in disentangling latent dimensions corresponding to spatially correlated variations. Quantitative and qualitative experimental results show that our method is effective in disentangled and controllable image generation, and performs favorably against state-of-the-art methods. Our code is available at https://github.com/csyxwei/OroJaR
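For illustration, below is a minimal PyTorch-style sketch of such an orthogonality penalty, assuming a generator `G`, a latent batch `z`, and a step size `eps`, all of which are placeholder names. It approximates each Jacobian column by a first-order finite difference and penalizes the off-diagonal entries of the resulting Gram matrix; the authors' official implementation (linked above) may compute the Jacobian differently.

```python
import torch

def orojar_loss(G, z, eps=0.1):
    """Sketch of an Orthogonal Jacobian Regularization (OroJaR) term.

    Approximates each Jacobian column dG/dz_i by a finite difference
    along latent dimension i, then penalizes the squared off-diagonal
    entries of the Jacobian's Gram matrix so that the output variations
    caused by different latent dimensions become orthogonal.
    """
    base = G(z)  # generator output, shape (batch, ...)
    cols = []
    for i in range(z.shape[1]):
        e = torch.zeros_like(z)
        e[:, i] = eps
        # First-order finite-difference estimate of the i-th Jacobian column
        cols.append(((G(z + e) - base) / eps).flatten(start_dim=1))
    J = torch.stack(cols, dim=1)             # (batch, dz, output_dim)
    gram = torch.bmm(J, J.transpose(1, 2))   # inner products of variations: (batch, dz, dz)
    # Zero out the diagonal; only cross-dimension correlations are penalized
    off_diag = gram - torch.diag_embed(torch.diagonal(gram, dim1=1, dim2=2))
    return (off_diag ** 2).mean()
```

In training, this term would be added to the generator loss with a weighting coefficient, so that perturbing one latent dimension produces an output change orthogonal to the changes produced by all other dimensions.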