Many visualization techniques have been created to help explain the behavior of convolutional neural networks (CNNs), but they largely consist of static diagrams that convey limited information. Interactive visualizations can provide richer insights and allow users to explore a model's behavior more easily; however, they are typically tied to a particular model and are not easily reusable. We introduce Visual Feature Search, a novel interactive visualization that generalizes to any CNN and can easily be incorporated into a researcher's workflow. Our tool allows a user to highlight an image region and search a given dataset for the images with the most similar CNN features. An efficient cache-based search implementation enables searching through large image datasets. We demonstrate how our tool elucidates different aspects of model behavior by performing experiments on supervised, self-supervised, and human-edited CNNs. We also release a portable Python library and several IPython notebooks so that researchers can easily use our tool in their own experiments. Our code can be found at https://github.com/lookingglasslab/VisualFeatureSearch.
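The cache-based search idea can be sketched as follows: dataset features are extracted and L2-normalized once, so each interactive query reduces to a single matrix multiply followed by a top-k selection. This is only a minimal illustration assuming features are flat vectors (e.g., pooled CNN activations for the highlighted region); the function names here are illustrative and do not correspond to the released library's actual API.

```python
import numpy as np

def build_cache(features: np.ndarray) -> np.ndarray:
    """Precompute the search cache: L2-normalize each dataset feature
    vector so cosine similarity becomes a plain dot product."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.clip(norms, 1e-12, None)

def search(cache: np.ndarray, query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k dataset images whose cached features
    are most cosine-similar to the query feature vector."""
    q = query / max(np.linalg.norm(query), 1e-12)
    sims = cache @ q                 # one matrix-vector product per query
    return np.argsort(-sims)[:k]     # indices sorted by descending similarity
```

Because the normalization is done once at cache-construction time, repeated interactive queries (e.g., as the user moves the highlighted region) stay cheap even for large datasets.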