Recently, it was discovered that, for a given function class $\mathbf{F}$, the error of best linear recovery in the square norm can be bounded above by the Kolmogorov width of $\mathbf{F}$ in the uniform norm. That analysis is based on deep results on discretization of the square norm of functions from finite-dimensional subspaces. In this paper we show how very recent results on universal discretization of the square norm of functions from a collection of finite-dimensional subspaces lead to an inequality between the error of optimal sparse recovery in the square norm and the error of best sparse approximation in the uniform norm with respect to appropriate dictionaries.
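For orientation, a schematic form of the two bounds referred to above is sketched below. The notation is assumed here rather than taken from the paper: $\varrho$ denotes an optimal recovery error, $d_m$ a Kolmogorov width, $\sigma_m(\cdot,\mathcal{D})$ a best $m$-term approximation with respect to a dictionary $\mathcal{D}$, and $c$, $C$ positive constants.

% Schematic only; the symbols, superscripts, and constants are assumptions, not the paper's statements.
\[
  \varrho_{cm}^{\mathrm{lin}}(\mathbf{F})_{L_2} \;\le\; C\, d_m(\mathbf{F})_{L_\infty},
  \qquad
  \varrho_{cm}^{\mathrm{sparse}}(\mathbf{F},\mathcal{D})_{L_2} \;\le\; C\, \sigma_m(\mathbf{F},\mathcal{D})_{L_\infty}.
\]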