Generalized Zero-Shot Learning (GZSL) aims to train a classifier that can generalize to unseen classes, using a set of attributes as auxiliary information and visual features extracted from a pre-trained convolutional neural network. While recent GZSL methods have explored various techniques to leverage the capacity of these features, a rapidly growing body of representation learning techniques remains under-explored in this setting. In this work, we investigate the utility of different GZSL methods when paired with different feature extractors, and examine how these models' pre-training objectives, datasets, and architectural design affect their feature representation ability. Our results indicate that 1) GZSL methods with generative components benefit more from recent feature extractors; 2) feature extractors pre-trained with self-supervised learning objectives and knowledge distillation provide better feature representations, improving performance by up to 15% when combined with recent GZSL techniques; 3) feature extractors pre-trained on larger datasets do not necessarily boost the performance of GZSL methods. In addition, we investigate how GZSL methods fare against CLIP, a more recent multi-modal pre-trained model with strong zero-shot performance. We find that combining generative-based GZSL methods with CLIP's internet-scale pre-training achieves state-of-the-art performance on fine-grained datasets. We release a modular framework for analyzing representation learning issues in GZSL at: https://github.com/uvavision/TV-GZSL