In this work, we investigate the problem of Model-Agnostic Zero-Shot Classification (MA-ZSC), which refers to training non-specific classification architectures (downstream models) to classify real images without using any real images during training. Recent research has demonstrated that generating synthetic training images with diffusion models offers a potential solution to MA-ZSC. However, the performance of this approach currently falls short of that achieved by large-scale vision-language models. One possible explanation is a significant domain gap between synthetic and real images. Our work offers a fresh perspective on the problem by providing initial insights that MA-ZSC performance can be improved by increasing the diversity of images in the generated dataset. We propose a set of modifications to the text-to-image generation process with a pre-trained diffusion model to enhance this diversity, which we refer to as our $\textbf{bag of tricks}$. Our approach yields notable improvements across various classification architectures, with results comparable to state-of-the-art models such as CLIP. To validate our approach, we conduct experiments on CIFAR10, CIFAR100, and EuroSAT, the last of which is particularly challenging for zero-shot classification due to its satellite image domain. We evaluate our approach with five classification architectures, including ResNet and ViT. Our findings provide initial insights into the problem of MA-ZSC with diffusion models. All code will be available on GitHub.
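As a minimal sketch of the kind of diversity-enhancing modifications described above, the snippet below randomizes the generation configuration per synthetic image. The prompt templates, guidance-scale range, and function name are illustrative assumptions, not the paper's exact recipe.

```python
import random

# Illustrative prompt templates; varying the phrasing per image is one
# assumed way to diversify the generated dataset (not the paper's exact list).
PROMPT_TEMPLATES = [
    "a photo of a {cls}",
    "a close-up photo of a {cls}",
    "a photo of a {cls} in a natural setting",
    "a bright photo of a {cls}",
]

def sample_generation_config(cls, rng=random):
    """Randomly vary prompt wording and guidance scale for one image.

    Lower guidance scales tend to yield more varied, less prompt-faithful
    samples, so drawing the scale at random trades fidelity for diversity.
    The range [1.5, 7.5] is an assumption for illustration.
    """
    prompt = rng.choice(PROMPT_TEMPLATES).format(cls=cls)
    guidance_scale = rng.uniform(1.5, 7.5)
    seed = rng.randrange(2**32)  # fresh seed per generated image
    return {"prompt": prompt, "guidance_scale": guidance_scale, "seed": seed}

# Each config would then be passed to a pre-trained text-to-image diffusion
# pipeline (e.g., Stable Diffusion) to synthesize one training image.
```

A downstream classifier such as ResNet or ViT would then be trained purely on the images produced from these varied configurations.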