The identification of relevant features, i.e., the driving variables that determine a process or the properties of a system, is an essential part of the analysis of data sets with a large number of variables. A mathematically rigorous approach to quantifying the relevance of these features is mutual information. Mutual information determines the relevance of features in terms of their joint mutual dependence on the property of interest. However, mutual information requires probability distributions as input, and these cannot be reliably estimated for continuously distributed variables such as physical quantities like lengths or energies. Here, we introduce total cumulative mutual information (TCMI), a measure of the relevance of mutual dependences that extends mutual information to random variables with continuous distributions by building on cumulative probability distributions. TCMI is a non-parametric, robust, and deterministic measure that facilitates comparisons and rankings between feature sets of different cardinality. The ranking induced by TCMI enables feature selection, i.e., the identification of variable sets that are nonlinearly and statistically related to a property of interest, taking into account the number of data samples as well as the cardinality of the variable sets. We evaluate the performance of our measure on simulated data, compare it with similar multivariate-dependence measures, and demonstrate the effectiveness of our feature-selection method on a set of standard data sets and in a typical materials-science scenario.
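To make the idea of working with cumulative rather than density-based distributions concrete, the following is a minimal, illustrative sketch: it estimates the cumulative residual entropy, \(-\int P(X>t)\,\log P(X>t)\,dt\), directly from a sample using the empirical survival function. This is only one ingredient of CDF-based dependence measures and is not the paper's TCMI estimator; the function names and the trapezoidal integration scheme are assumptions made for illustration.

```python
import numpy as np

def cumulative_residual_entropy(x):
    """Toy empirical estimate of -∫ P(X > t) log P(X > t) dt.

    Illustrative only: uses the empirical survival function of a
    sorted sample and trapezoidal integration over the sample range.
    """
    xs = np.sort(np.asarray(x, dtype=float))
    n = xs.size
    # For sorted data (ties negligible for continuous variables),
    # the empirical survival function at xs[i] is (n - i - 1) / n.
    s = (n - np.arange(1, n + 1)) / n
    # -s * log(s), with the convention 0 * log(0) = 0
    integrand = np.zeros_like(s)
    mask = s > 0
    integrand[mask] = -s[mask] * np.log(s[mask])
    # Trapezoidal rule over the sorted sample points
    return np.sum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(xs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.uniform(0.0, 1.0, size=10_000)
    # Analytic value for Uniform(0, 1) is 1/4
    print(cumulative_residual_entropy(sample))
```

Unlike a differential-entropy estimate, no binning or bandwidth choice is needed: the empirical survival function is a deterministic function of the sample, which is the property that makes cumulative formulations attractive for continuous variables.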