Recently, in the field of unsupervised representation learning, strong identifiability results for the disentanglement of causally-related latent variables have been established by exploiting certain side information, such as class labels, in addition to independence. However, most existing work is constrained by functional form assumptions, such as independent sources or, further, linear transitions, and by distribution assumptions, such as stationarity or exponential family distributions. It remains unknown whether the underlying latent variables and their causal relations are identifiable if they have arbitrary, nonparametric causal influences between them. In this work, we establish the identifiability theory of nonparametric latent causal processes from their nonlinear mixtures under fixed temporal causal influences, and we analyze how distribution changes can further benefit disentanglement. We propose \textbf{\texttt{TDRL}}, a principled framework to recover time-delayed latent causal variables and identify their relations from measured sequential data, both under stationary environments and under different distribution shifts. Specifically, the framework can factorize unknown distribution shifts into transition distribution changes, under fixed and time-varying latent causal relations, and into changes in the observation. Through experiments, we show that time-delayed latent causal influences are reliably identified and that our approach considerably outperforms existing baselines that do not correctly exploit this modular representation of changes. Our code is available at: \url{https://github.com/weirayao/tdrl}.
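For concreteness, the following is a minimal sketch of the kind of nonparametric, time-delayed latent causal process referred to above; the symbols $\mathbf{x}_t$, $\mathbf{z}_t$, $g$, $f_i$, $\mathbf{Pa}(\cdot)$, and $\epsilon_{it}$ are notation introduced here only for illustration and are not defined in the abstract itself:
\[
\mathbf{x}_t = g(\mathbf{z}_t), \qquad
z_{it} = f_i\big(\{ z_{j,\,t-\tau} \mid z_{j,\,t-\tau} \in \mathbf{Pa}(z_{it}) \},\ \epsilon_{it}\big), \qquad \epsilon_{it} \sim p_{\epsilon_i},
\]
where $g$ is an unknown invertible nonlinear mixing function, each $f_i$ is an unconstrained (nonparametric) transition function, and the process noises $\epsilon_{it}$ are mutually independent. Under this reading, the distribution shifts discussed above correspond to changes in the transition functions $f_i$ (or their noise distributions) across environments, or to changes in the observation function $g$.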