Natural language interfaces (NLIs) enable users to flexibly specify analytical intentions in data visualization. However, it is challenging for users to diagnose visualization results without understanding the underlying generation process. Our research explores how to provide explanations in NLIs that help users locate problems and revise their queries. We present XNLI, an explainable NLI system for visual data analysis. The system introduces a Provenance Generator that reveals the detailed process of visual transformations, a suite of interactive widgets that support error adjustments, and a Hint Generator that provides query revision hints based on analysis of user queries and interactions. Two usage scenarios of XNLI and a user study verify the effectiveness and usability of the system. Results suggest that XNLI significantly enhances task accuracy without interrupting the NLI-based analysis process.