Library migration is the process of replacing a library with another that provides similar functionality. Manual library migration is time-consuming and error-prone: it requires developers to understand the APIs of both libraries, map them, and perform the necessary code transformations. Large Language Models (LLMs) have been shown to be effective at generating and transforming code, as well as at finding similar code, which are necessary upstream tasks for library migration. Such capabilities suggest that LLMs may be well suited for library migration. Accordingly, this paper investigates the effectiveness of LLMs for migration between Python libraries. We evaluate three LLMs, Llama 3.1, GPT-4o mini, and GPT-4o, on PyMigBench, where we perform 321 real-world library migrations that include 2,989 migration-related code changes. To measure correctness, we (1) compare the LLMs' migrated code with the developers' migrated code in the benchmark and (2) run the unit tests available in the client repositories. We find that Llama 3.1, GPT-4o mini, and GPT-4o correctly migrate 89%, 89%, and 94% of the migration-related code changes, respectively. We also find that 36%, 52%, and 64% of the Llama 3.1, GPT-4o mini, and GPT-4o migrations, respectively, pass the same tests that passed in the developers' migrations. To ensure the LLMs are not merely reciting migrations seen during training, we also evaluate them on 10 new repositories where the migration never happened. Overall, our results suggest that LLMs can be effective in migrating code between libraries, but we also identify open challenges that remain.
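
To make the task concrete, the minimal before/after sketch below illustrates what a single migration-related code change can look like. The client function, URL, and the requests-to-httpx library pair are illustrative assumptions for exposition, not examples drawn from PyMigBench; the pair is chosen only because the two libraries expose deliberately similar top-level APIs.

# Before migration: hypothetical client code built on the `requests` HTTP library.
import requests

def fetch_user(user_id):
    response = requests.get(f"https://api.example.com/users/{user_id}", timeout=10)
    response.raise_for_status()  # fail loudly on HTTP error codes
    return response.json()

# After migration: the same function with the import and API calls mapped onto
# `httpx`. For simple synchronous requests, the call sites map one-to-one.
import httpx

def fetch_user(user_id):
    response = httpx.get(f"https://api.example.com/users/{user_id}", timeout=10)
    response.raise_for_status()
    return response.json()

Real migrations are rarely this mechanical: API mappings may be one-to-many or require restructuring the surrounding code, which is why the paper measures correctness both against the developers' changes and against the repositories' unit tests.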