In recent years, large language models have achieved great success due to their unprecedented size. However, training these models is challenging for most researchers because it requires a substantial number of GPUs. To reduce GPU memory usage, memory partitioning and memory offloading have been proposed: these approaches eliminate memory redundancies and offload memory usage to CPU and NVMe memory, respectively, enabling training on small GPU clusters. However, directly deploying these solutions often leads to suboptimal efficiency, and only experienced experts can unleash the full potential of the hardware by carefully tuning the distributed configuration. Thus, we present a novel solution, Elixir, which automates efficient large-model training based on pre-runtime model profiling. Elixir aims to identify the optimal combination of partitioning and offloading techniques to maximize training throughput. In our experiments, Elixir significantly outperforms the current state-of-the-art baseline: its optimal configuration achieves up to a 3.4$\times$ speedup on GPT-2 models. We hope that our work will benefit individuals who lack computing resources and expertise, granting them access to large models. The beta version of Elixir is now available at https://github.com/hpcaitech/ColossalAI/tree/feature/elixir.