The input space of a neural network with ReLU-like activations is partitioned into multiple linear regions, each corresponding to a specific activation pattern of the network's ReLU-like units. We demonstrate that this partition exhibits the following encoding properties across a variety of deep learning models: (1) {\it determinism}: almost every linear region contains at most one training example. We can therefore represent almost every training example by a unique activation pattern, which is parameterized as a {\it neural code}; and (2) {\it categorization}: according to the neural code, simple algorithms, such as $K$-Means, $K$-NN, and logistic regression, can achieve fairly good performance on both training and test data. These encoding properties surprisingly suggest that {\it normal neural networks well-trained for classification behave as hash encoders without any extra effort.} In addition, the encoding properties vary across different scenarios. Further experiments demonstrate that {\it model size}, {\it training time}, {\it training sample size}, {\it regularization}, and {\it label noise} contribute to shaping the encoding properties, while the impacts of the first three are dominant. We then define an {\it activation hash phase chart} to represent the space spanned by model size, training time, training sample size, and the encoding properties, which is divided into three canonical regions: {\it under-expressive regime}, {\it critically-expressive regime}, and {\it sufficiently-expressive regime}. The source code package is available at \url{https://github.com/LeavesLei/activation-code}.
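To make the notion of a neural code concrete, the following is a minimal sketch (not the authors' released code, which is linked above) of how one might extract the binary ReLU activation pattern of each example from a small MLP and classify with $K$-NN on the resulting codes. The model architecture, layer names, and the Hamming-distance $K$-NN are illustrative assumptions rather than the paper's exact setup.

\begin{verbatim}
# Minimal sketch: extract per-example "neural codes" (binary ReLU
# activation patterns) and run K-NN on them. Assumed architecture.
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

class MLP(nn.Module):
    def __init__(self, d_in=784, d_hidden=256, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_hidden)
        self.out = nn.Linear(d_hidden, n_classes)

    def forward(self, x, return_code=False):
        h1 = torch.relu(self.fc1(x))
        h2 = torch.relu(self.fc2(h1))
        if return_code:
            # The 0/1 pattern of all ReLU units identifies the
            # linear region of the input partition containing x.
            return torch.cat([(h1 > 0), (h2 > 0)], dim=1).float()
        return self.out(h2)

def knn_on_codes(model, x_train, y_train, x_test, y_test, k=5):
    # Classify test examples by nearest training codes
    # under Hamming distance (an illustrative choice).
    model.eval()
    with torch.no_grad():
        train_codes = model(x_train, return_code=True).numpy()
        test_codes = model(x_test, return_code=True).numpy()
    knn = KNeighborsClassifier(n_neighbors=k, metric="hamming")
    knn.fit(train_codes, y_train.numpy())
    return knn.score(test_codes, y_test.numpy())
\end{verbatim}

Under the determinism property, almost every training example maps to a distinct code, so the fitted $K$-NN effectively looks up a hash table keyed by activation patterns.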