This paper investigates theoretical properties of subsampling and hashing as tools for approximate Euclidean norm-preserving embeddings of vectors corrupted by (unknown) additive Gaussian noise. Such embeddings are sometimes called Johnson-Lindenstrauss embeddings, after the celebrated lemma. Previous work shows that, as sparse embeddings, the success of subsampling and hashing depends closely on the $l_\infty$-to-$l_2$ ratio of the vector to be embedded. This paper shows that the presence of noise removes this constraint in high dimensions: sparse embeddings such as subsampling and hashing, with embedding dimensions comparable to those of dense embeddings, have similar approximate norm-preserving dimensionality-reduction properties. The key insight is that the noise should be treated as information to be exploited, not merely as something to be removed. Theoretical bounds are derived for subsampling and hashing to recover the approximate norm of a high-dimensional vector in the presence of noise, with numerical illustrations showing that better performance is achieved in the presence of noise.
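As a minimal numerical sketch of this phenomenon (ours, not the paper's code; NumPy assumed, with illustrative choices of the dimensions $n$, $m$ and the noise level $\sigma$), consider subsampling applied to a single-spike vector, the worst case for sparse embeddings since its $l_\infty$-to-$l_2$ ratio equals one:

```python
import numpy as np

# Illustrative sketch (not the paper's code): subsampling as a sparse
# norm-preserving embedding. The vector x below is the worst case for
# subsampling (all energy in one coordinate, so l_inf/l_2 = 1); adding
# Gaussian noise spreads the energy, and the estimate concentrates.
rng = np.random.default_rng(0)

n, m = 100_000, 2_000      # ambient / embedding dimensions (illustrative)
sigma = 0.01               # per-coordinate noise level (assumed here)
x = np.zeros(n)
x[0] = 1.0                 # single spike: l_inf/l_2 ratio equals 1

def subsampled_norm(v, m, rng):
    """Estimate ||v||_2 from m uniformly sampled coordinates (unbiased in the square)."""
    idx = rng.choice(v.size, size=m, replace=False)
    return np.sqrt(v.size / m) * np.linalg.norm(v[idx])

err_clean, err_noisy = [], []
for _ in range(200):
    y = x + sigma * rng.standard_normal(n)   # noisy copy of x
    err_clean.append(abs(subsampled_norm(x, m, rng) / np.linalg.norm(x) - 1))
    err_noisy.append(abs(subsampled_norm(y, m, rng) / np.linalg.norm(y) - 1))

print(f"median relative error without noise: {np.median(err_clean):.3f}")  # close to 1: the spike is usually missed
print(f"median relative error with noise:    {np.median(err_noisy):.3f}")  # much smaller: every subsample captures the noise floor
```

Without noise the subsample almost always misses the spike and the norm estimate collapses; with additive Gaussian noise the energy is spread across coordinates, every subsample captures the dominant noise floor, and the relative error of the norm estimate drops sharply.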