We present a novel counterfactual framework for both Zero-Shot Learning (ZSL) and Open-Set Recognition (OSR), whose common challenge is generalizing to unseen classes while training only on seen classes. Our idea stems from the observation that the generated samples for unseen classes often fall outside the true distribution, which causes a severe recognition-rate imbalance between seen classes (high) and unseen classes (low). We show that the key reason is that the generation is not counterfactual-faithful, and we therefore propose a faithful one, whose generation is driven by the sample-specific counterfactual question: What would the sample look like if we set its class attribute to a certain class while keeping its sample attribute unchanged? Thanks to this faithfulness, we can apply the Consistency Rule to perform unseen/seen binary classification by asking: Would its counterfactual still look like itself? If ``yes'', the sample is from a certain class, and ``no'' otherwise. Through extensive experiments on ZSL and OSR, we demonstrate that our framework effectively mitigates the seen/unseen imbalance and hence significantly improves the overall performance. Note that this framework is orthogonal to existing methods; thus, it can serve as a new baseline to evaluate how well ZSL/OSR models generalize. Code is available at https://github.com/yue-zhongqi/gcm-cf.
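The Consistency Rule described above can be illustrated with a minimal sketch. All names here (`generate_cf`, `consistency_rule`, the split of a feature into a class part and a sample part, and the distance threshold) are toy assumptions for illustration, not the paper's actual generative model: the counterfactual generator simply swaps in a candidate class attribute while keeping the sample attribute fixed, and a sample is judged "seen" if some counterfactual still looks like it.

```python
import numpy as np

def generate_cf(x, class_attr):
    """Toy counterfactual generator (assumption): set the class
    attribute (first half of x) to class_attr, keep the sample
    attribute (second half of x) unchanged."""
    return np.concatenate([class_attr, x[len(class_attr):]])

def consistency_rule(x, seen_class_attrs, threshold=0.5):
    """For every seen class, ask: would the counterfactual still look
    like x?  'Yes' for some class -> seen (return that class);
    'No' for all classes -> unseen."""
    dists = [np.linalg.norm(x - generate_cf(x, a)) for a in seen_class_attrs]
    best = int(np.argmin(dists))
    return bool(dists[best] <= threshold), best

# Two seen classes, each described by a 2-d class attribute.
seen_attrs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

x_seen = np.array([1.0, 0.0, 0.3, -0.2])    # class part matches class 0
x_unseen = np.array([3.0, 3.0, 0.3, -0.2])  # class part far from both

print(consistency_rule(x_seen, seen_attrs))    # consistent with class 0 -> seen
print(consistency_rule(x_unseen, seen_attrs))  # no consistent counterfactual -> unseen
```

The key point mirrored from the abstract is that faithfulness is what makes this check meaningful: only because the counterfactual preserves the sample attribute does "looking like itself" discriminate seen from unseen.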