Sketch drawings capture the salient information of visual concepts. Previous work has shown that neural networks are capable of producing sketches of natural objects drawn from a small number of classes. While earlier approaches focus on generation quality or retrieval, we explore properties of image representations learned by training a model to produce sketches of images. We show that this generative, class-agnostic model produces informative embeddings of images from novel examples, classes, and even novel datasets in a few-shot setting. Additionally, we find that these learned representations exhibit interesting structure and compositionality.