Programmers increasingly rely on Large Language Models (LLMs) for code generation. However, misalignment between programmers' prompts and the generated code forces them to switch constantly between writing prompts and verifying the resulting code. Current LLM-driven code assistants offer insufficient support during prompt authoring to help programmers cope with the challenges of this new workflow. To address these challenges, we employed an iterative design process to understand programmers' strategies when programming with LLMs. Based on our findings, we developed CoLadder, a system that supports programmers through hierarchical task decomposition, incremental code generation, and verification of results during prompt authoring. A user study with 12 experienced programmers showed that CoLadder helps programmers flexibly externalize their mental models and improves their ability to navigate and edit code across abstraction levels, from initial intent to final code implementation.