The focus of disentanglement approaches has been on identifying independent factors of variation in data. However, the causal variables underlying real-world observations are often not statistically independent. In this work, we bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data in a large-scale empirical study (comprising 4260 models). We show and quantify that systematically induced correlations in the dataset are learned and reflected in the latent representations, which has implications for downstream applications of disentanglement such as fairness. We also demonstrate how to resolve these latent correlations, either by using weak supervision during training or by correcting a pre-trained model post hoc with a small number of labels.