Recent advances in large language models (LLMs) have demonstrated impressive capabilities in code translation, typically evaluated on benchmarks like CodeTransOcean. However, these benchmarks fail to capture real-world complexity: they focus primarily on simple function-level translations and overlook repository-level context (e.g., dependencies). Moreover, LLMs' effectiveness in translating to newer, low-resource languages such as Rust remains largely underexplored. To address these gaps, we introduce RustRepoTrans, the first repository-level code translation benchmark, comprising 375 tasks that translate into Rust from C++, Java, and Python. Using this benchmark, we evaluate four state-of-the-art LLMs and analyze their errors to assess their limitations in complex translation scenarios. Among them, Claude-3.5 performs best with 43.5% Pass@1, excelling in both basic functionality and additional translation abilities such as noise robustness and syntactic-difference identification. However, even Claude-3.5 suffers a 30.8 percentage-point drop in Pass@1 (from 74.3% to 43.5%) when handling repository-level context, compared to previous benchmarks without such context. We also find that LLMs struggle with language differences in complex tasks, and that dependencies further increase translation difficulty.