Monotone functions and data sets arise in a variety of applications. We study the interpolation problem for monotone data sets: the input is a monotone data set with $n$ points, and the goal is to find a size- and depth-efficient monotone neural network, with nonnegative parameters and threshold units, that interpolates the data set. We show that there are monotone data sets that cannot be interpolated by a monotone network of depth $2$. On the other hand, we prove that for every monotone data set with $n$ points in $\mathbb{R}^d$, there exists an interpolating monotone network of depth $4$ and size $O(nd)$. Our interpolation result implies that every monotone function over $[0,1]^d$ can be approximated arbitrarily well by a depth-$4$ monotone network, improving the previous best-known construction of depth $d+1$. Finally, building on results from Boolean circuit complexity, we show that the inductive bias of having nonnegative parameters can lead to a super-polynomial blow-up in the number of neurons when approximating monotone functions.
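To make the model concrete, the following is a minimal sketch (in Python/NumPy; the helper names `threshold` and `monotone_network` are ours, not from the paper) of a feedforward network with nonnegative weights and threshold units, together with a check that such a network is coordinatewise monotone. It illustrates the class of networks studied, not the paper's depth-$4$ construction.

```python
import numpy as np

def threshold(z):
    # Heaviside threshold unit: 1 if z >= 0, else 0 (a monotone activation).
    return (z >= 0).astype(float)

def monotone_network(x, layers):
    """Evaluate a feedforward threshold network whose weight matrices are
    entrywise nonnegative; biases may be arbitrary. Nonnegative weights
    composed with the monotone threshold activation yield a function that
    is monotone in each input coordinate."""
    h = x
    for W, b in layers:
        assert (W >= 0).all(), "monotonicity requires nonnegative weights"
        h = threshold(h @ W.T + b)
    return h

# Toy depth-2 example in R^2: one hidden threshold unit feeding an output unit.
layers = [
    (np.array([[1.0, 2.0]]), np.array([-1.5])),  # hidden layer
    (np.array([[1.0]]), np.array([-0.5])),       # output layer
]
x_lo = np.array([[0.0, 0.0]])
x_hi = np.array([[1.0, 1.0]])  # x_hi >= x_lo coordinatewise
assert monotone_network(x_lo, layers) <= monotone_network(x_hi, layers)
```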