A windowed version of the Nearest Neighbour (WNN) classifier for images is described. While its construction is inspired by the architecture of Artificial Neural Networks, the underlying theoretical framework is based on approximation theory. We illustrate WNN on the MNIST and EMNIST datasets of handwritten digit images. In order to calibrate the parameters of WNN, we first study it on the classical MNIST dataset. We then apply WNN with these parameters to the more challenging EMNIST dataset. It is demonstrated that WNN misclassifies 0.42% of the images of EMNIST and therefore significantly outperforms predictions by humans and shallow ANNs, both of which have error rates above 1.3%.
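To make the idea of a windowed nearest-neighbour classification concrete, the following is a minimal, assumption-heavy sketch in Python: each image is cut into windows, every window of a test image is compared against the corresponding window of the training images, and the window-level labels are combined by majority vote. The window size, stride, distance, and voting rule here are placeholders for illustration only; the actual WNN construction is described in the paper itself.

```python
# Illustrative sketch of a windowed nearest-neighbour classifier.
# NOTE: window size, stride, metric, and voting are assumed for this example,
# not taken from the paper's WNN construction.
import numpy as np

def extract_windows(img, win=7, stride=7):
    """Slice a 2-D image into non-overlapping win x win patches."""
    h, w = img.shape
    return np.stack([img[r:r + win, c:c + win].ravel()
                     for r in range(0, h - win + 1, stride)
                     for c in range(0, w - win + 1, stride)])

def predict_wnn(test_img, train_imgs, train_labels, win=7, stride=7):
    """Label each window of test_img by its nearest training window
    at the same position, then return the majority label over windows."""
    test_wins = extract_windows(test_img, win, stride)
    votes = []
    for p, patch in enumerate(test_wins):
        best_label, best_dist = None, np.inf
        for img, label in zip(train_imgs, train_labels):
            train_patch = extract_windows(img, win, stride)[p]
            d = np.linalg.norm(patch - train_patch)
            if d < best_dist:
                best_dist, best_label = d, label
        votes.append(int(best_label))
    return np.bincount(votes).argmax()

# Tiny synthetic example (28x28 "images", 3 classes) just to show the call.
rng = np.random.default_rng(0)
train_imgs = rng.random((30, 28, 28))
train_labels = rng.integers(0, 3, size=30)
print(predict_wnn(train_imgs[0], train_imgs, train_labels))
```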