Programmers increasingly rely on Large Language Models (LLMs) for code generation. However, misalignment between programmers' goals and generated code complicates the code evaluation process and demands frequent switching between prompt authoring and code evaluation. Moreover, current LLM-driven code assistants lack sufficient scaffolding to help programmers formulate intentions from their overarching goals, a crucial step before translating these intentions into natural language prompts. To address this gap, we adopted an iterative design process to gain insights into programmers' strategies when using LLMs for programming. Building on our findings, we created CoLadder, a system that supports programmers by facilitating hierarchical task decomposition, direct code segment manipulation, and result evaluation during prompt authoring. A user study with 12 experienced programmers showed that CoLadder effectively helps programmers externalize their problem-solving intentions flexibly, improving their ability to evaluate and modify code across abstraction levels, from goal to final code implementation.