One aim of representation learning is to recover the original latent code that generated the data, a task which requires additional information or inductive biases. A recently proposed approach termed Independent Mechanism Analysis (IMA) postulates that each latent source should influence the observed mixtures independently, complementing standard nonlinear independent component analysis and taking inspiration from the principle of independent causal mechanisms. While it was shown in theory and experiments that IMA helps recover the true latents, the method's performance has so far only been characterized when the modeling assumptions are exactly satisfied. Here, we test the method's robustness to violations of the underlying assumptions. We find that the benefits of IMA-based regularization for recovering the true sources extend to mixing functions with various degrees of violation of the IMA principle, while standard regularizers do not provide the same merits. Moreover, we show that unregularized maximum likelihood recovers mixing functions which systematically deviate from the IMA principle, and we provide an argument elucidating the benefits of IMA-based regularization.