Given a Hilbert space $\mathcal H$ and a finite measure space $\Omega$, the approximation of a vector-valued function $f: \Omega \to \mathcal H$ by a $k$-dimensional subspace $\mathcal U \subset \mathcal H$ plays an important role in dimension reduction techniques, such as reduced basis methods for solving parameter-dependent partial differential equations. For functions in the Lebesgue--Bochner space $L^2(\Omega;\mathcal H)$, the best possible subspace approximation error $d_k^{(2)}$ is characterized by the singular values of $f$. However, for practical reasons, $\mathcal U$ is often restricted to be spanned by point samples of $f$. We show that this restriction has only a mild impact on the attainable error; there always exist $k$ samples such that the resulting error is not larger than $\sqrt{k+1} \cdot d_k^{(2)}$. Our work extends existing results by Binev et al. (SIAM J. Math. Anal., 43(3):1457--1472, 2011) on approximation in the supremum norm and by Deshpande et al. (Theory Comput., 2:225--247, 2006) on column subset selection for matrices.
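For concreteness, one plausible reading of the quantities appearing above (the paper's exact definitions may differ in normalization): writing $\sigma_1 \ge \sigma_2 \ge \dots$ for the singular values of $f$, the best $k$-dimensional subspace error in $L^2(\Omega;\mathcal H)$ is
\[
d_k^{(2)} \;=\; \inf_{\substack{\mathcal U \subset \mathcal H \\ \dim \mathcal U \le k}} \Big( \int_\Omega \min_{u \in \mathcal U} \| f(\omega) - u \|_{\mathcal H}^2 \, \mathrm{d}\omega \Big)^{1/2} \;=\; \Big( \sum_{j > k} \sigma_j^2 \Big)^{1/2},
\]
and the main result asserts the existence of sample points $\omega_1,\dots,\omega_k \in \Omega$ such that the same error measure, taken with $\mathcal U = \operatorname{span}\{f(\omega_1),\dots,f(\omega_k)\}$, is bounded by $\sqrt{k+1}\cdot d_k^{(2)}$.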