Measuring dependence in data plays a central role in statistics and machine learning. In this work, we summarize and generalize the main idea behind existing information-theoretic dependence measures into a higher-level perspective via Shearer's inequality. Based on this generalization, we then propose two measures, namely the matrix-based normalized total correlation ($T_\alpha^*$) and the matrix-based normalized dual total correlation ($D_\alpha^*$), to quantify the dependence of multiple variables in arbitrary dimensional spaces, without explicit estimation of the underlying data distributions. We show that our measures are differentiable and statistically more powerful than prevalent ones. We also demonstrate the impact of our measures on four different machine learning problems, namely gene regulatory network inference, robust machine learning under covariate shift and non-Gaussian noise, subspace outlier detection, and understanding the learning dynamics of convolutional neural networks (CNNs), to illustrate their utility, advantages, and implications for those problems. Code for our dependence measures is available at: https://bit.ly/AAAI-dependence
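To make the abstract concrete, the following is a minimal sketch of how a matrix-based total correlation of the kind described above can be computed without density estimation, using the matrix-based Rényi $\alpha$-order entropy functional (eigenvalues of a trace-normalized Gram matrix, with the joint entropy formed via Hadamard products). The function names, the Gaussian kernel choice, and the bandwidth are illustrative assumptions, not the authors' exact implementation, and no normalization step is included.

```python
import numpy as np

def gram_matrix(x, sigma=1.0):
    # Trace-normalized Gram matrix of a Gaussian kernel (illustrative kernel choice).
    sq = np.sum(x**2, axis=1, keepdims=True)
    dists = sq + sq.T - 2.0 * x @ x.T
    K = np.exp(-dists / (2.0 * sigma**2))
    return K / np.trace(K)

def matrix_renyi_entropy(A, alpha=1.01):
    # Matrix-based Renyi alpha-entropy: S_alpha(A) = (1/(1-alpha)) log2 sum_i lambda_i^alpha.
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]          # discard numerically zero eigenvalues
    return np.log2(np.sum(lam**alpha)) / (1.0 - alpha)

def total_correlation(variables, alpha=1.01, sigma=1.0):
    # Unnormalized T_alpha = sum_i S_alpha(A_i) - S_alpha(A_1 o ... o A_k),
    # where "o" is the Hadamard product, renormalized to unit trace.
    grams = [gram_matrix(v, sigma) for v in variables]
    joint = grams[0].copy()
    for A in grams[1:]:
        joint = joint * A
    joint = joint / np.trace(joint)
    marginal_sum = sum(matrix_renyi_entropy(A, alpha) for A in grams)
    return marginal_sum - matrix_renyi_entropy(joint, alpha)
```

As a sanity check, dependent variables (e.g., a variable paired with itself) should yield a larger value than independent samples of the same size.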