Tensor completion exhibits an interesting computational-statistical gap in terms of the number of samples needed to perform tensor estimation. While a $t$-order tensor with $n^t$ entries has only $\Theta(tn)$ degrees of freedom, the best known polynomial-time algorithms require $O(n^{t/2})$ samples to guarantee consistent estimation. In this paper, we show that weak side information is sufficient to reduce the sample complexity to $O(n)$. The side information consists of a weight vector for each mode that is not orthogonal to any of the latent factors along that mode; this is significantly weaker than assuming noisy knowledge of the subspaces. We provide an algorithm that uses this side information to produce a consistent estimator with $O(n^{1+\kappa})$ samples for any small constant $\kappa > 0$.
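The following is a minimal sketch (not the paper's algorithm) of the problem setup for an order-3 tensor: a rank-$r$ tensor built from latent factors, a sparse set of observed entries of expected size $\approx n^{1+\kappa}$, and one generic weight vector per mode serving as the weak side information. All names and numerical choices (`n`, `r`, `kappa`, `w1`/`w2`/`w3`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3                       # dimension per mode, CP rank

# Latent factors along each mode (unknown to the estimator).
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))

# Rank-r order-3 tensor with n^3 entries but only Theta(t*n) = Theta(3n)
# degrees of freedom (up to the rank-r latent factors).
T = np.einsum('ik,jk,lk->ijl', A, B, C)

# Weak side information: one weight vector per mode. A generic (e.g. random
# Gaussian) vector is almost surely not orthogonal to any latent factor,
# which is all the condition asks for -- far weaker than knowing the column
# spans of A, B, C even approximately.
w1, w2, w3 = (rng.standard_normal(n) for _ in range(3))
assert np.all(np.abs(w1 @ A) > 1e-8)   # non-orthogonality along mode 1
assert np.all(np.abs(w2 @ B) > 1e-8)   # mode 2
assert np.all(np.abs(w3 @ C) > 1e-8)   # mode 3

# Observation model: each entry revealed independently with probability p,
# chosen so the expected sample count is ~ n^{1+kappa} rather than n^{3/2}.
kappa = 0.1
p = n ** (1 + kappa) / n ** 3
mask = rng.random((n, n, n)) < p
observed = np.where(mask, T, np.nan)
print(f"observed {int(mask.sum())} of {n**3} entries "
      f"(expected ~ n^(1+kappa) = {n**(1 + kappa):.0f})")
```

The random choice of `w1`, `w2`, `w3` illustrates why this side information is "weak": no knowledge of the latent subspaces is needed, only vectors that avoid exact orthogonality with the factors, which holds generically.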