This paper presents a novel approach that combines convolutional layers (CLs) and large-margin metric learning to train supervised models on small datasets for texture classification. The core of the approach is a loss function that computes the distances between instances of interest and support vectors. The objective is to update the weights of the CLs iteratively so as to learn a representation with a large margin between classes. Each iteration yields a large-margin discriminant model, represented by support vectors, based on that representation. The advantage of the proposed approach over convolutional neural networks (CNNs) is two-fold. First, it enables representation learning from a small amount of data, owing to the reduced number of parameters compared to an equivalent CNN. Second, it has a low training cost, since backpropagation considers only the support vectors. Experimental results on texture and histopathologic image datasets show that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence than equivalent CNNs.
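To make the loss described above concrete, the following is a minimal sketch of a large-margin loss over distances to support vectors. It is a hypothetical illustration, not the paper's implementation: the function name, the nearest-support-vector hinge formulation, and the Euclidean distance choice are all assumptions introduced here.

```python
import numpy as np

def large_margin_loss(embeddings, labels, support_vectors, sv_labels, margin=1.0):
    """Hinge-style large-margin loss (illustrative sketch, not the paper's exact loss).

    For each embedded instance, measure its distance to the nearest
    same-class support vector (d_pos) and to the nearest other-class
    support vector (d_neg), and penalize whenever d_pos is not smaller
    than d_neg by at least `margin`.
    """
    total = 0.0
    for x, y in zip(embeddings, labels):
        # Euclidean distances from instance x to every support vector
        d = np.linalg.norm(support_vectors - x, axis=1)
        d_pos = d[sv_labels == y].min()   # nearest same-class support vector
        d_neg = d[sv_labels != y].min()   # nearest other-class support vector
        total += max(0.0, d_pos - d_neg + margin)
    return total / len(embeddings)
```

Minimizing such a loss with respect to the CL weights that produce the embeddings pulls instances toward same-class support vectors and pushes them away from other-class ones; because only the nearest support vectors contribute gradients, backpropagation touches a small subset of the data, consistent with the low training cost the abstract claims.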