Prototypes are widely used to represent the internal structure of a category in few-shot learning and were originally proposed as a simple inductive bias to mitigate overfitting. However, because a prototype is typically the average of individual sample embeddings, it cannot flexibly control how much per-sample variation is retained, which may lead to underfitting under some sample distributions. To address this problem, we propose Shrinkage Exemplar Networks (SENet) for few-shot classification. SENet balances prototype representations (high bias, low variance) and exemplar representations (low bias, high variance) with a shrinkage estimator, in which each category is represented by the embeddings of its samples shrunk toward their mean via spectral filtering. Furthermore, a shrinkage exemplar loss is proposed to replace the widely used cross-entropy loss in order to capture the information carried by individual shrunken samples. Experiments were conducted on the miniImageNet, tieredImageNet, and CIFAR-FS datasets, demonstrating that the proposed model outperforms both the exemplar model and the prototype model on some tasks.
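For intuition only, the sketch below illustrates the bias-variance trade-off between prototypes and exemplars using a simple convex-combination shrinkage toward the class mean; the function names, the interpolation form, and the nearest-exemplar decision rule are illustrative assumptions and do not reproduce the paper's spectral-filtering estimator or its shrinkage exemplar loss.

```python
import numpy as np

def shrink_exemplars(support_embeddings, lam=0.5):
    """Shrink each support embedding toward the class mean.

    support_embeddings: (n_shot, dim) array for one class.
    lam: shrinkage strength; lam=1 collapses all exemplars to the
         prototype (pure mean), lam=0 keeps the raw exemplars.
    """
    mean = support_embeddings.mean(axis=0, keepdims=True)
    return lam * mean + (1.0 - lam) * support_embeddings

def classify(query_embedding, class_exemplars):
    """Assign the query to the class of its nearest shrunken exemplar."""
    best_label, best_dist = None, np.inf
    for label, exemplars in class_exemplars.items():
        dist = np.linalg.norm(exemplars - query_embedding, axis=1).min()
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy usage: 2 classes, 3-shot episodes, 4-dimensional embeddings.
rng = np.random.default_rng(0)
support = {c: rng.normal(loc=c, scale=1.0, size=(3, 4)) for c in (0, 1)}
exemplars = {c: shrink_exemplars(e, lam=0.7) for c, e in support.items()}
print(classify(rng.normal(loc=1, scale=1.0, size=4), exemplars))
```

In this hypothetical formulation, sweeping lam between 0 and 1 interpolates between a purely exemplar-based classifier and a prototype-based one, which is the trade-off the abstract describes.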