Interaction and collaboration between humans and intelligent machines have become increasingly important as machine learning methods move into real-world applications that involve end users. While much prior work lies at the intersection of natural language and vision, such as image captioning or image generation from text descriptions, less attention has been paid to using language to guide or improve the performance of a learned visual processing algorithm. In this paper, we explore methods to flexibly guide a trained convolutional neural network through user input to improve its performance during inference. We do so by inserting a layer that acts as a spatio-semantic guide into the network. This guide is trained to modify the network's activations, either directly via an energy minimization scheme or indirectly through a recurrent model that translates human language queries into interaction weights. Learning the verbal interaction is fully automatic and requires no manual text annotations. We evaluate the method on two datasets, showing that guiding a pre-trained network can improve performance, and provide extensive insights into the interaction between the guide and the CNN.
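The abstract describes inserting a guide layer that modifies a trained network's activations according to interaction weights. As a rough illustrative sketch only (the function name, the purely multiplicative form, and the toy shapes are assumptions, not the paper's implementation, which learns the guide), such a layer can be pictured as a spatially-varying modulation of intermediate feature maps:

```python
import numpy as np

def guide_layer(activations, interaction_weights):
    """Modulate CNN feature maps with spatio-semantic interaction weights.

    activations: array of shape (C, H, W), feature maps from an
        intermediate layer of a trained CNN.
    interaction_weights: array of shape (C, H, W), multiplicative guide
        derived from user input; a weight of 1.0 leaves a unit unchanged.
    (Hypothetical sketch -- the paper trains this interaction rather
    than hand-setting it.)
    """
    return activations * interaction_weights

# Toy example: emphasise channel 0 in the top-left quadrant,
# leaving all other activations untouched.
acts = np.ones((2, 4, 4))
weights = np.ones((2, 4, 4))
weights[0, :2, :2] = 2.0  # user guidance highlights a spatial region
guided = guide_layer(acts, weights)
```

In the paper these weights would come either from the energy minimization scheme or from the recurrent model mapping language queries to weights; the sketch only shows where such weights would act on the network.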