We study the randomized $n$-th minimal errors (and hence the complexity) of vector-valued approximation. In a recent paper by the author [Randomized complexity of parametric integration and the role of adaption I. Finite dimensional case (preprint)], a long-standing problem of Information-Based Complexity was solved: Is there a constant $c>0$ such that for all linear problems $\mathcal{P}$ the randomized non-adaptive and adaptive $n$-th minimal errors can deviate by at most a factor of $c$? That is, does the following hold for all linear $\mathcal{P}$ and all $n\in {\mathbb N}$:
\begin{equation*}
e_n^{\rm ran-non} (\mathcal{P})\le c\, e_n^{\rm ran} (\mathcal{P}) \, {\bf ?}
\end{equation*}
The analysis of vector-valued mean computation showed that the answer is negative. More precisely, there are instances of this problem for which the gap between the non-adaptive and adaptive randomized minimal errors is (up to logarithmic factors) of order $n^{1/8}$. This raises the question of the maximal possible deviation. In this paper we show that for certain instances of vector-valued approximation the gap is of order $n^{1/2}$ (again, up to logarithmic factors).
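Stated schematically, the main result asserts that there are a constant $c>0$ and instances $\mathcal{P}_n$ of vector-valued approximation such that
\begin{equation*}
e_n^{\rm ran-non} (\mathcal{P}_n)\ \ge\ c\, n^{1/2} (\log n)^{-\kappa}\, e_n^{\rm ran} (\mathcal{P}_n) \qquad (n\ge 2),
\end{equation*}
where the exponent $\kappa>0$ of the logarithmic factor is a placeholder for the precise value established in the body of the paper.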