AI-powered programming assistants are increasingly popular, with GitHub Copilot alone used by over a million developers worldwide. These tools are far from perfect, however, producing code suggestions that may be incorrect or incomplete in subtle ways. As a result, developers face a new set of challenges when they need to understand, validate, and choose between an AI's suggestions. This paper explores whether Live Programming, a continuous display of a program's runtime values, can help address these challenges. We introduce Live Exploration of AI-Generated Programs, a new interaction model for AI programming assistants that supports exploring multiple code suggestions through Live Programming. We implement this interaction model in LEAP, a prototype Python environment, and evaluate it through a between-subjects user study. Our results motivate several design opportunities for future AI-powered programming tools.