A growing number of applications in computer vision, especially in medical imaging and remote sensing, require classifying very large images that contain only tiny informative objects. These classification tasks face two key challenges: $i$) the input images are typically on the order of mega- or giga-pixels, yet existing deep architectures cannot easily operate on such large images due to memory constraints, so a memory-efficient method is needed to process them; and $ii$) only a very small fraction of each input image is informative of the label of interest, resulting in a low region-of-interest (ROI) to image ratio. Most current convolutional neural networks (CNNs), however, are designed for image classification datasets with relatively large ROIs and small (sub-megapixel) images. Existing approaches have addressed these two challenges only in isolation. We present an end-to-end CNN model, termed Zoom-In network, that leverages hierarchical attention sampling to classify large images with tiny objects using a single GPU. We evaluate our method on four large-image datasets spanning histopathology, road-scene, and satellite imaging, and on one gigapixel pathology dataset. Experimental results show that our model achieves higher accuracy than existing methods while requiring fewer memory resources.
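To make the idea of hierarchical attention sampling concrete, the following is a minimal, illustrative sketch (not the authors' implementation): an attention network scores a downsampled view of the large image, a few high-attention locations are sampled, and only those locations are processed at full resolution before aggregation. All module names, sizes, and the `ZoomInSketch` class are hypothetical choices for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ZoomInSketch(nn.Module):
    """Illustrative two-level attention sampling for large-image classification.

    An attention CNN scores a low-resolution view of the image; K locations
    are sampled from the attention distribution, zoomed into at full
    resolution, and their features are averaged for the final prediction.
    """

    def __init__(self, num_classes=2, patch=64, k=8):
        super().__init__()
        self.patch, self.k = patch, k
        # attention network operating on the downsampled image
        self.attn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
        # feature network operating on full-resolution patches only
        self.feat = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, full_img):
        # full_img: (1, 3, H, W); H, W may be very large
        _, _, H, W = full_img.shape
        low = F.interpolate(full_img, scale_factor=0.125,
                            mode="bilinear", align_corners=False)
        scores = self.attn(low).flatten(1)          # (1, h*w) attention map
        probs = F.softmax(scores, dim=1)
        idx = torch.multinomial(probs, self.k)      # sample K attended locations
        h, w = low.shape[-2:]
        feats = []
        for i in idx[0]:
            cy, cx = int(i // w), int(i % w)
            # map low-resolution coordinates back to the full-resolution image
            y = min(max(cy * H // h - self.patch // 2, 0), H - self.patch)
            x = min(max(cx * W // w - self.patch // 2, 0), W - self.patch)
            crop = full_img[:, :, y:y + self.patch, x:x + self.patch]
            feats.append(self.feat(crop))
        pooled = torch.stack(feats, dim=0).mean(dim=0)  # aggregate patch features
        return self.head(pooled)


if __name__ == "__main__":
    model = ZoomInSketch()
    x = torch.rand(1, 3, 2048, 2048)   # stand-in for a large input image
    print(model(x).shape)              # torch.Size([1, 2])
```

Because only the K sampled patches are ever processed at full resolution, memory usage scales with the number and size of patches rather than with the full image, which is what allows such models to run on a single GPU.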