In the Random Subset Sum Problem, given $n$ i.i.d.\ random variables $X_1, \ldots, X_n$, we wish to approximate any point $z \in [-1,1]$ as the sum of a suitable subset $X_{i_1(z)}, \ldots, X_{i_s(z)}$ of them, up to error $\varepsilon$. Despite its simple statement, this problem is of fundamental interest to both theoretical computer science and statistical mechanics. More recently, it gained renewed attention for its implications in the theory of Artificial Neural Networks. An obvious multidimensional generalisation of the problem is to consider $n$ i.i.d.\ $d$-dimensional random vectors, with the objective of approximating every point $\mathbf{z} \in [-1,1]^d$. Rather surprisingly, after Lueker's 1998 proof that, in the one-dimensional setting, $n = O(\log \frac 1\varepsilon)$ samples guarantee the approximation property with high probability, little progress has been made on achieving the above generalisation. In this work, we prove that, in $d$ dimensions, $n = O(d^3 \log \frac 1\varepsilon \cdot (\log \frac 1\varepsilon + \log d))$ samples suffice for the approximation property to hold with high probability. As an application highlighting the potential interest of this result, we prove that a recently proposed neural network model exhibits \emph{universality}: with high probability, the model can approximate any neural network within a polynomial overhead in the number of parameters.
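To make the one-dimensional statement concrete, the following minimal Python sketch empirically tests the approximation property: it draws $n$ i.i.d.\ samples (here uniform on $[-1,1]$, a common choice in this literature, though the abstract does not fix a distribution), enumerates all $2^n$ subset sums, and checks that every point of a grid over $[-1,1]$ lies within $\varepsilon$ of some sum. The constant $6$ in $n \approx 6 \log \frac 1\varepsilon$ and the grid size are illustrative guesses, not values from the paper.

\begin{verbatim}
import bisect, math, random

def subset_sums(xs):
    """All 2^n subset sums of xs, built by doubling (small n only)."""
    sums = [0.0]
    for x in xs:
        sums += [s + x for s in sums]
    return sums

def approximates_all(sums, eps, grid=201):
    """True iff every grid point z in [-1, 1] is within eps of some sum."""
    sums = sorted(sums)
    for k in range(grid):
        z = -1 + 2 * k / (grid - 1)
        i = bisect.bisect_left(sums, z)
        # Nearest sum is one of the two neighbours of the insertion point.
        near = min(abs(sums[j] - z) for j in (i - 1, i)
                   if 0 <= j < len(sums))
        if near > eps:
            return False
    return True

eps = 0.05
n = round(6 * math.log(1 / eps))  # n = O(log(1/eps)); constant 6 is a guess
rng = random.Random(0)
trials = 20
hits = sum(approximates_all(subset_sums([rng.uniform(-1.0, 1.0)
                                         for _ in range(n)]), eps)
           for _ in range(trials))
print(f"n = {n}: property held in {hits}/{trials} trials")
\end{verbatim}

Even at this small scale, the doubling trick keeps exhaustive enumeration tractable; in the theorem's regime one argues probabilistically about the set of achievable sums, as Lueker does, rather than enumerating them.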