Neural network classifiers have become the de facto choice for current "pre-train then fine-tune" paradigms of visual classification. In this paper, we investigate $k$-Nearest-Neighbor (k-NN) classifiers, a classical model-free learning method from the pre-deep-learning era, as an augmentation to modern neural-network-based approaches. As a lazy learning method, k-NN simply aggregates the distances between the test image and its top-k neighbors in the training set. We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps: (1) Leverage k-NN predicted probabilities as indications of easy \vs~hard examples during training. (2) Linearly interpolate the k-NN predicted distribution with that of the augmented classifier. Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration with additional insights: (1) k-NN achieves competitive results, sometimes even outperforming a standard linear classifier. (2) Incorporating k-NN is especially beneficial for tasks where parametric classifiers perform poorly and/or in low-data regimes. We hope these discoveries will encourage people to rethink the role of classical, pre-deep-learning methods in computer vision. Our code is available at: https://github.com/KMnP/nn-revisit.
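A minimal NumPy sketch of the two ingredients described above: a k-NN class distribution computed from pre-trained features, and step (2)'s linear interpolation with the classifier's output. The function names, the similarity-weighted voting scheme, the temperature, and the interpolation coefficient are illustrative assumptions, not the paper's exact released implementation.

```python
import numpy as np

def knn_probs(query_feat, train_feats, train_labels, num_classes, k=20, temperature=0.07):
    """k-NN predicted class distribution from pre-trained, L2-normalized features.

    Sketch only: cosine similarity to every training feature, then a
    softmax-weighted vote over the top-k neighbors (weighting and temperature
    are assumptions for illustration).
    """
    sims = train_feats @ query_feat              # (N,) cosine similarities
    topk_idx = np.argsort(-sims)[:k]             # indices of the k nearest neighbors
    weights = np.exp(sims[topk_idx] / temperature)
    probs = np.zeros(num_classes)
    for w, label in zip(weights, train_labels[topk_idx]):
        probs[label] += w                        # accumulate weighted votes per class
    return probs / probs.sum()

def interpolate(p_knn, p_classifier, lam=0.5):
    """Step (2): linearly interpolate the k-NN and classifier distributions."""
    return lam * p_knn + (1.0 - lam) * p_classifier
```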