Reasoning about human-object interactions (HOI) is essential for deeper scene understanding, while object affordances (or functionalities) are of great importance for humans to discover unseen HOIs with novel objects. Inspired by this, we introduce an affordance transfer learning approach to jointly detect HOIs with novel objects and recognize affordances. Specifically, HOI representations can be decoupled into a combination of affordance and object representations, making it possible to compose novel interactions by combining affordance representations with novel object representations from additional images, i.e., transferring the affordance to novel objects. With the proposed affordance transfer learning, the model is also capable of inferring the affordances of novel objects from known affordance representations. The proposed method can thus be used to 1) improve the performance of HOI detection, particularly for HOIs with unseen objects; and 2) infer the affordances of novel objects. Experimental results on two datasets, HICO-DET and HOI-COCO (derived from V-COCO), demonstrate significant improvements over recent state-of-the-art methods for both HOI detection and object affordance detection. Code is available at https://github.com/zhihou7/HOI-CL
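The core idea of composing novel interactions from decoupled parts can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, the concatenation-based composition, the linear classifier, and all variable names are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pooled features (shapes are illustrative, not the paper's API).
affordance_feat = rng.standard_normal(512)    # known affordance/verb representation, e.g. "ride"
novel_object_feat = rng.standard_normal(512)  # object representation taken from an additional image

# Compose a novel HOI representation by pairing the two parts,
# i.e. transferring the known affordance to the novel object.
composed_hoi = np.concatenate([affordance_feat, novel_object_feat])

# A stand-in linear HOI classifier scores the composed representation
# (600 HOI categories, as defined in HICO-DET).
W = rng.standard_normal((600, 1024)) * 0.01
scores = W @ composed_hoi
pred = int(np.argmax(scores))
print(composed_hoi.shape, scores.shape, pred)
```

Because composition only requires pairing an affordance representation with any object representation, the same classifier can score interactions with objects never seen during HOI training.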