The frequent need for analysts to create visualizations to derive insights from data has driven extensive research into natural language to visualization (NL2VIS) generation. While recent progress in large language models (LLMs) suggests their potential to effectively support NL2VIS tasks, existing studies lack a systematic investigation of how different LLMs perform under various prompt strategies. This paper addresses this gap and contributes a crucial baseline evaluation of LLMs' capabilities in generating visualization specifications for NL2VIS tasks. Our evaluation uses the nvBench dataset, employing six representative LLMs and eight distinct prompt strategies to assess their performance in generating six target chart types expressed in the Vega-Lite visualization specification. We measure model performance with multiple metrics, including visualization accuracy, validity, and legality. Our results reveal substantial performance disparities across prompt strategies, chart types, and LLMs. Furthermore, based on the evaluation results, we uncover several counterintuitive behaviors across these dimensions and propose directions for enhancing the NL2VIS benchmark to better support future NL2VIS research.