Natural language generation (NLG) models have emerged as a focal point of research within natural language processing (NLP), exhibiting remarkable performance in tasks such as text composition and dialogue generation. However, their intricate architectures and extensive parameter counts pose significant challenges to interpretability, limiting their applicability in high-stakes decision-making scenarios. To address this issue, human-computer interaction (HCI) and visualization techniques offer promising avenues for enhancing the transparency and usability of NLG models by making their decision-making processes more interpretable. In this paper, we provide a comprehensive investigation into the roles, limitations, and impact of HCI and visualization in facilitating human understanding and control of NLG systems. We introduce a taxonomy of interaction methods and visualization techniques, organized around three major research domains and six corresponding key tasks in the application of NLG models. Finally, we summarize the shortcomings of existing work and examine the key challenges and emerging opportunities in the era of large language models (LLMs).