A fundamental step in many data-analysis techniques is the construction of an affinity matrix describing similarities between data points. When the data points reside in Euclidean space, a widespread approach is to form an affinity matrix by applying the Gaussian kernel to the pairwise distances, followed by a certain normalization (e.g., the row-stochastic normalization or its symmetric variant). We demonstrate that the doubly-stochastic normalization of the Gaussian kernel with zero main diagonal (i.e., no self-loops) is robust to heteroskedastic noise. That is, the doubly-stochastic normalization is advantageous in that it automatically accounts for observations with different noise variances. Specifically, we prove that in a suitable high-dimensional setting where heteroskedastic noise does not concentrate too much in any particular direction in space, the resulting (doubly-stochastic) noisy affinity matrix converges to its clean counterpart with rate $m^{-1/2}$, where $m$ is the ambient dimension. We demonstrate this result numerically, and show that, in contrast, the popular row-stochastic and symmetric normalizations behave unfavorably under heteroskedastic noise. Furthermore, we provide examples of simulated and experimental single-cell RNA sequencing data with intrinsic heteroskedasticity, where the advantage of the doubly-stochastic normalization for exploratory analysis is evident.
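The construction described above can be illustrated with a minimal sketch: build the Gaussian kernel from pairwise distances, zero out the main diagonal, and rescale it to be doubly stochastic via symmetric Sinkhorn iterations. This is a generic illustration, not the paper's implementation; the function name, the bandwidth `sigma`, and the iteration count are assumptions chosen for the example.

```python
import numpy as np

def doubly_stochastic_affinity(X, sigma=1.0, n_iters=500):
    """Doubly-stochastic normalization of a zero-diagonal Gaussian kernel.

    Finds a positive vector d such that W = diag(d) @ K @ diag(d) has
    unit row and column sums, where K is the Gaussian kernel on the
    rows of X with its main diagonal set to zero (no self-loops).
    """
    # Pairwise squared Euclidean distances (clipped for numerical safety).
    sq = np.sum(X**2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (X @ X.T), 0.0)

    # Gaussian kernel with zero main diagonal.
    K = np.exp(-D2 / (2.0 * sigma**2))
    np.fill_diagonal(K, 0.0)

    # Symmetric Sinkhorn iterations: at a fixed point, d_i * (K d)_i = 1
    # for all i, i.e. diag(d) K diag(d) is doubly stochastic.
    d = np.ones(len(K))
    for _ in range(n_iters):
        d_new = np.sqrt(d / (K @ d))
        if np.allclose(d_new, d, rtol=1e-12):
            break
        d = d_new
    return d[:, None] * K * d[None, :]
```

Because the off-diagonal kernel entries are strictly positive, the symmetric scaling exists and is unique, and the resulting matrix is symmetric with zero diagonal, in contrast to the row-stochastic normalization, which breaks symmetry.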