Neural networks are widely used as a model for classification in a large variety of tasks. Typically, a learnable transformation (i.e., the classifier) is placed at the end of such models, returning a value for each class used for classification. This transformation plays an important role in determining how the generated features change during the learning process. In this work, we argue that this transformation not only can be fixed (i.e., set as non-trainable) with no loss of accuracy and with a reduction in memory usage, but it can also be used to learn stationary and maximally separated embeddings. We show that the stationarity of the embedding and its maximal separation can be theoretically justified by setting the weights of the fixed classifier to values taken from the coordinate vertices of the three regular polytopes available in $\mathbb{R}^d$, namely the $d$-Simplex, the $d$-Cube and the $d$-Orthoplex. These regular polytopes have the maximal amount of symmetry that can be exploited to generate stationary features angularly centered around their corresponding fixed weights. Our approach improves and broadens the concept of a fixed classifier, recently proposed in \cite{hoffer2018fix}, to a larger class of fixed classifier models. Experimental results confirm the theoretical analysis, the generalization capability, the faster convergence and the improved performance of the proposed method. Code will be publicly available.
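To make the idea concrete, the following is a minimal sketch (in PyTorch, an assumption since the abstract names no framework) of how the fixed classifier weights could be built from the coordinate vertices of the three regular polytopes. The simplex parameterization shown is one standard construction and may differ from the exact one used in the paper; the helper names and dimensions are illustrative only.

\begin{verbatim}
import torch
import torch.nn.functional as F

def d_simplex(d):
    # d+1 class prototypes at the vertices of a regular d-simplex in R^d.
    # Standard construction: e_1..e_d plus ((1 - sqrt(d+1)) / d) * (1,...,1),
    # re-centered at the origin and L2-normalized.
    base = torch.eye(d)
    extra = torch.full((1, d), (1.0 - (d + 1) ** 0.5) / d)
    verts = torch.cat([base, extra], dim=0)            # (d+1, d)
    verts = verts - verts.mean(dim=0, keepdim=True)    # center at the origin
    return F.normalize(verts, dim=1)

def d_orthoplex(d):
    # 2d class prototypes at +-e_i (vertices of the d-orthoplex).
    eye = torch.eye(d)
    return torch.cat([eye, -eye], dim=0)               # (2d, d)

def d_cube(d):
    # 2^d class prototypes with coordinates in {-1, +1} (vertices of the d-cube).
    corners = torch.cartesian_prod(*([torch.tensor([-1.0, 1.0])] * d))
    return F.normalize(corners.reshape(-1, d), dim=1)  # (2^d, d)

# Fixed (non-trainable) classifier: the final linear layer's weights are set to
# the polytope vertices and excluded from gradient updates, so only the feature
# extractor preceding it is trained.
num_classes, feat_dim = 10, 9   # the d-simplex hosts d+1 classes in d dimensions
classifier = torch.nn.Linear(feat_dim, num_classes, bias=False)
with torch.no_grad():
    classifier.weight.copy_(d_simplex(feat_dim))
classifier.weight.requires_grad_(False)
\end{verbatim}

Since the classifier is frozen, training only shapes the feature extractor, and the features of each class are pulled angularly toward their fixed, maximally separated prototype.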