The expressiveness of deep neural networks of bounded width has recently been investigated in a series of articles. The understanding of the minimum width needed to ensure universal approximation for different kinds of activation functions has progressively been extended (Park et al., 2020). In particular, it has turned out that, with respect to approximation on general compact sets in the input space, a network width less than or equal to the input dimension excludes universal approximation. In this work, we focus on network functions of width less than or equal to this critical bound. We prove that in this regime the exact fit of partially constant functions on disjoint compact sets is still possible for ReLU network functions, under some conditions on the mutual location of these sets. Conversely, we conclude from a maximum principle that, for all continuous and monotonic activation functions, universal approximation of arbitrary continuous functions is impossible on sets that consist of the boundary of an open set together with an inner point of that set. We also show that certain network functions of maximum width two, respectively one, allow universal approximation on finite sets.
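As a concrete illustration of the exact-fit claim, the following minimal sketch constructs by hand a narrow ReLU network (hidden width one, input dimension two, so width is below the critical bound) that exactly fits a partially constant function on two disjoint compact sets. The separating-hyperplane condition, the sample sets K1 and K2, and the constants c1 and c2 are assumptions chosen for the example and are not the paper's general hypotheses.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

# Assumed mutual-location condition: a hyperplane w.x + b = 0 separates the
# two compact sets, scaled so that w.x + b <= -1 on K1 and w.x + b >= 1 on K2.
w = np.array([1.0, 0.0])
b = 0.0

# Hypothetical sample points from two disjoint compact sets in R^2.
K1 = np.array([[-2.0, 0.5], [-1.5, -1.0], [-3.0, 2.0]])   # w.x + b <= -1
K2 = np.array([[ 2.0, 0.3], [ 1.0, -2.0], [ 4.0, 1.5]])   # w.x + b >= 1

c1, c2 = 3.0, -1.0   # prescribed constant values on K1 and K2

def narrow_relu_net(x):
    """Two-hidden-layer ReLU network of width one: equals c1 on K1 and c2 on K2."""
    s = x @ w + b                      # affine layer: R^2 -> R
    u = relu(s + 1.0)                  # hidden layer 1: 0 on K1, >= 2 on K2
    z = relu(2.0 - u)                  # hidden layer 2: 2 on K1, 0 on K2
    return (c1 - c2) / 2.0 * z + c2    # affine readout

print(narrow_relu_net(K1))   # -> [ 3.  3.  3.]
print(narrow_relu_net(K2))   # -> [-1. -1. -1.]
```

The construction only uses that the two sets lie on opposite sides of a hyperplane with a positive margin; it is a sketch of why some mutual-location condition suffices in this regime, not the general construction of the paper.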