The expressiveness of deep neural networks (DNNs) is one perspective from which to understand their surprising performance. The number of linear regions, i.e., the number of pieces into which a piecewise-linear function represented by a DNN partitions its input space, is commonly used to measure this expressiveness. For a rectifier network, an upper bound on the number of regions, rather than the exact count, is a more practical measure of expressiveness. In this work, we propose a new and tighter upper bound on the number of regions. Inspired by the proof of this upper bound and the matrix-computation framework of Hinz & Van de Geer (2019), we propose a general computational approach for computing a tight upper bound on the number of regions for theoretically arbitrary network structures (e.g., DNNs with all kinds of skip connections and residual structures). Our experiments show that our upper bound is tighter than existing ones, and they explain why skip connections and residual structures can improve network performance.
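To make the notion of a region-count upper bound concrete, the following is a minimal Python sketch of the classical single-layer bound (Zaslavsky's theorem: n hyperplanes in R^d create at most the sum of binomial coefficients C(n, j) for j = 0..d regions) and the naive product-over-layers bound from prior work that tighter bounds, such as the one proposed here, improve on. The function names are ours for illustration; this is not the paper's method.

```python
from math import comb

def regions_upper_bound_one_layer(n_neurons: int, input_dim: int) -> int:
    """Zaslavsky's bound: n hyperplanes in R^d partition the space into
    at most sum_{j=0}^{d} C(n, j) regions (tight in general position)."""
    return sum(comb(n_neurons, j) for j in range(input_dim + 1))

def naive_deep_bound(widths: list[int], input_dim: int) -> int:
    """A loose product-of-layers bound for a rectifier network; frameworks
    like Hinz & Van de Geer (2019) refine this via matrix computations."""
    bound = 1
    d = input_dim
    for n in widths:
        bound *= regions_upper_bound_one_layer(n, d)
        d = min(d, n)  # the image dimension cannot exceed the layer width
    return bound

# Example: a ReLU network with two hidden layers of width 4 on 2-D inputs.
print(regions_upper_bound_one_layer(4, 2))  # 1 + 4 + 6 = 11 regions at most
print(naive_deep_bound([4, 4], 2))          # 11 * 11 = 121, a loose upper bound
```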