The rapid development of deep natural language processing (NLP) models for text classification has created an urgent need to understand, within a single framework, models that were proposed individually. Existing methods cannot meet this need because they lack a unified measure for explaining both low-level (e.g., word) and high-level (e.g., phrase) features. We have developed a visual analysis tool, DeepNLPVis, that enables a unified understanding of NLP models for text classification. The key idea is a mutual information-based measure, which provides quantitative explanations of how each layer of a model maintains the information of the input words in a sample. We model the intra-word and inter-word information at each layer, which measures both the importance of each word to the final prediction and the relationships between words, such as the formation of phrases. A multi-level visualization, consisting of corpus-level, sample-level, and word-level views, supports analysis from the overall training set down to individual samples. Two case studies on classification tasks and a comparison between models demonstrate that DeepNLPVis helps users effectively identify potential problems caused by samples and model architectures and then make informed improvements.
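The abstract's key quantity is a mutual information-based measure between input words and a layer's representation. As a rough illustration only, and not the paper's actual estimator, the sketch below computes a histogram-based estimate of I(X; Y) between two scalar quantities, e.g., a word's layer activation projected to one dimension and the model's output logit across samples; the variable names `word_activation` and `output_logit` are hypothetical.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based estimate of I(X; Y) in nats for two 1-D samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()             # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))

# Toy usage with synthetic data: a correlated pair should yield a clearly
# positive MI estimate, an independent pair a value near zero.
rng = np.random.default_rng(0)
word_activation = rng.normal(size=2000)            # hypothetical layer feature
output_logit = word_activation + rng.normal(scale=0.5, size=2000)
print(mutual_information(word_activation, output_logit))
```

In this simplified view, computing such a score per word and per layer would indicate how strongly each layer still carries information about that word; how DeepNLPVis actually estimates intra-word and inter-word information is detailed in the paper itself.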