We study the memorization power of feedforward ReLU neural networks. We show that such networks can memorize any $N$ points that satisfy a mild separability assumption using $\tilde{O}\left(\sqrt{N}\right)$ parameters. Known VC-dimension upper bounds imply that memorizing $N$ samples requires $\Omega(\sqrt{N})$ parameters, and hence our construction is optimal up to logarithmic factors. We also give a generalized construction for networks with depth bounded by $1 \leq L \leq \sqrt{N}$, which memorizes $N$ samples using $\tilde{O}(N/L)$ parameters. This bound is likewise optimal up to logarithmic factors. Our construction uses weights with large bit complexity. We prove that having such a large bit complexity is both necessary and sufficient for memorization with a sub-linear number of parameters.
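To make the relationship between the two upper bounds explicit (a one-line check, spelled out here for the reader): substituting the maximal depth $L = \sqrt{N}$ into the depth-dependent bound recovers the first bound,
$$\tilde{O}\!\left(\frac{N}{L}\right)\bigg|_{L = \sqrt{N}} = \tilde{O}\!\left(\frac{N}{\sqrt{N}}\right) = \tilde{O}\!\left(\sqrt{N}\right),$$
which is exactly the rate that meets the $\Omega(\sqrt{N})$ VC-dimension lower bound up to logarithmic factors.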