Like weights, bias terms are learnable parameters in many popular machine learning models, including neural networks. Biases are widely believed to increase the representational power of neural networks across a wide range of computer vision tasks. However, we argue that if we consider the intrinsic distribution of images in the input space, together with desired model properties derived from first principles, biases can be omitted entirely for many image-related tasks, such as image classification. Our observations indicate that zero-bias neural networks can perform comparably to networks with biases, at least on practical image classification tasks. In addition, we prove that zero-bias neural networks possess a useful property called scalar (multiplication) invariance, which has great potential for learning and understanding images captured under poor illumination conditions. We then extend scalar invariance to more general cases, which allows us to verify certain convex regions of the input space. Our experimental results show that zero-bias models can outperform state-of-the-art models by a very large margin (over 60%) when predicting images under a low illumination condition (inputs multiplied by a scalar of 0.01), while matching the performance of standard models under normal conditions.
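To make the scalar-invariance claim concrete, below is a minimal NumPy sketch (our own illustration, not the paper's code): a ReLU network with no bias terms is positively homogeneous, i.e. f(αx) = αf(x) for any α > 0, so the argmax prediction is unchanged when the input is dimmed. The layer sizes, random weights, and the α = 0.01 dimming factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 784))   # hidden-layer weights, no bias
W2 = rng.standard_normal((10, 64))    # output-layer weights, no bias

def zero_bias_net(x):
    h = np.maximum(0.0, W1 @ x)       # ReLU without bias is positively homogeneous
    return W2 @ h                     # logits, again no bias

x = rng.standard_normal(784)          # stand-in for a flattened image
alpha = 0.01                          # simulated low illumination

logits = zero_bias_net(x)
logits_dim = zero_bias_net(alpha * x)

# Logits scale exactly by alpha, so the predicted class is invariant.
assert np.allclose(logits_dim, alpha * logits)
assert logits.argmax() == logits_dim.argmax()
```

A network with biases breaks this property: the bias terms are not scaled with the input, so f(αx) ≠ αf(x) in general, and predictions can flip under dimming.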