We consider general approximation families encompassing ReLU neural networks. On the one hand, we introduce a new property, which we call $\infty$-encodability, providing a framework that we use (i) to guarantee that ReLU networks can be uniformly quantized while retaining approximation speeds comparable to those of unquantized ones, and (ii) to prove that ReLU networks share a common limitation with many other approximation families: the approximation speed of a set $C$ is bounded from above by an encoding complexity of $C$ (a complexity that is well known for many sets $C$ of interest). The property of $\infty$-encodability allows us to unify and generalize known results in which it was implicitly used. On the other hand, we give lower and upper bounds on the Lipschitz constant of the mapping that associates the weights of a network with the function they represent in $L^p$. These bounds are expressed in terms of the width and depth of the network and of a bound on the weights' norm, and they rely on well-known upper bounds on the Lipschitz constants of the functions represented by ReLU networks. This allows us to recover known results, to establish new bounds on covering numbers, and to characterize the accuracy of naive uniform quantization of ReLU networks.
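To fix ideas, the upper-bound side of this parameter-to-function Lipschitz control can be written schematically as below; the choice of the $\ell^\infty$ norm on the parameters and the constant $C(W, L, r)$ are illustrative placeholders rather than the paper's precise statement, which depends on the width $W$, the depth $L$, and a bound $r$ on the weights' norm.

% Schematic (illustrative) form of the parameter-to-function Lipschitz bound:
% R_\theta is the function realized by a ReLU network with weights \theta.
\[
  \big\| R_\theta - R_{\theta'} \big\|_{L^p}
  \;\le\; C(W, L, r)\, \big\| \theta - \theta' \big\|_\infty ,
\]
where $C(W, L, r)$ depends only on the width $W$, the depth $L$, and the weight bound $r$.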
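As a complement, the following is a minimal sketch of what "naive uniform quantization" of a ReLU network can look like in practice and of how its effect on the realized function can be measured empirically; the architecture, the step size, and the helper names (`quantize_uniform`, `relu_net`) are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def quantize_uniform(params, step):
    """Round each parameter to the nearest multiple of `step` (naive uniform quantization)."""
    return [step * np.round(p / step) for p in params]

def relu_net(weights, biases, x):
    """Evaluate a fully connected ReLU network on a 1-D input array x."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)   # hidden layers: affine map followed by ReLU
    return weights[-1] @ h + biases[-1]  # linear output layer

rng = np.random.default_rng(0)
dims = [2, 8, 8, 1]  # small example network: input 2, two hidden layers of width 8, output 1
weights = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
biases = [rng.standard_normal(dims[i + 1]) for i in range(len(dims) - 1)]

step = 2.0 ** -6                      # example quantization step
qw = quantize_uniform(weights, step)
qb = quantize_uniform(biases, step)

# Empirical estimate of the deviation between the original and the quantized network
# on random points of [0, 1]^2 (an L^1-style proxy for the L^p error).
xs = rng.uniform(0.0, 1.0, size=(1000, 2))
err = np.mean([abs(relu_net(weights, biases, x) - relu_net(qw, qb, x)) for x in xs])
print(f"mean |f(x) - f_q(x)| over samples: {err:.2e}")
```

In this sketch, shrinking `step` tightens the quantization grid, and the observed deviation scales with it, in line with the idea that the parameter-to-function map is Lipschitz in the weights.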