The availability of massive image databases has resulted in the development of scalable machine learning methods, such as convolutional neural networks (CNNs), for filtering and processing these data. While very recent theoretical work on CNNs focuses on standard nonparametric denoising problems, the variability in image classification datasets does not originate from additive noise but from variation of the shape and other characteristics of the same object across different images. To address this problem, we consider a simple supervised classification problem for object detection in grayscale images. From the function estimation point of view, every pixel is a variable, and large images lead to high-dimensional function recovery tasks suffering from the curse of dimensionality; in our image deformation model, however, increasing the number of pixels enhances the image resolution and makes the object classification problem easier. We propose and theoretically analyze two different procedures. The first method estimates the image deformation by support alignment. Under a minimal separation condition, it is shown that perfect classification is possible. The second method fits a CNN to the data. We derive a rate for the misclassification error depending on the sample size and the number of pixels. Both classifiers are empirically compared on images generated from the MNIST handwritten digit database. The obtained results corroborate the theoretical findings.