Recent years have seen a growth in user-centric applications that require effective knowledge transfer across tasks in the low-data regime. An example is personalization, where a pretrained system is adapted by learning from small amounts of labeled data belonging to a specific user. This setting demands high accuracy at low computational cost, so the Pareto frontier of accuracy vs. adaptation cost plays a crucial role. In this paper we push this Pareto frontier in the few-shot image classification setting with a key contribution: a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network to a new task, significantly improving performance with a single forward pass over the user data (context). We use meta-trained CaSE blocks to conditionally adapt the body of a network and a fine-tuning routine to adapt a linear head, defining a method called UpperCaSE. UpperCaSE achieves new state-of-the-art accuracy relative to meta-learners on the 26 datasets of VTAB+MD and on a challenging real-world personalization benchmark (ORBIT), narrowing the gap with leading fine-tuning methods while requiring orders of magnitude lower adaptation cost.
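The core mechanism the abstract describes can be sketched as follows: channel-wise gates are computed from a single forward pass over the context set (spatial and set-level average pooling, followed by a small bottleneck MLP), and the resulting scaling is applied to the feature maps of the pretrained body. This is a minimal NumPy sketch, assuming a sigmoid-gated bottleneck as in standard Squeeze-and-Excitation; the function name, shapes, and parameters are illustrative and not taken from the paper.

```python
import numpy as np

def case_block(context_feats, feats, w1, b1, w2, b2):
    """Illustrative contextual SE gating (hypothetical implementation).

    context_feats: (N, C, H, W) features of the N context (user) images.
    feats:         (M, C, H, W) features to adapt, e.g. query images.
    w1, b1, w2, b2: bottleneck MLP parameters, w1: (R, C), w2: (C, R).
    """
    # Squeeze: average over spatial dims AND over the context set -> (C,)
    pooled = context_feats.mean(axis=(0, 2, 3))
    # Excitation: bottleneck MLP with ReLU, then sigmoid gates in (0, 1)
    hidden = np.maximum(0.0, w1 @ pooled + b1)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))
    # Scale each channel of the features by its context-conditioned gate
    return feats * gates[None, :, None, None]

# Toy usage with random features and parameters
rng = np.random.default_rng(0)
ctx = rng.normal(size=(5, 8, 4, 4))   # 5 context images, 8 channels
x = rng.normal(size=(2, 8, 4, 4))     # 2 feature maps to adapt
w1, b1 = rng.normal(size=(4, 8)), np.zeros(4)   # reduction ratio 2
w2, b2 = rng.normal(size=(8, 4)), np.zeros(8)
adapted = case_block(ctx, x, w1, b1, w2, b2)
```

Because the gates depend only on the pooled context statistics, adapting to a new user costs one forward pass over the context, with no backpropagation through the body.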