Many NLP tasks benefit from using large language models (LLMs) that often have more than 100 billion parameters. With the release of BLOOM-176B and OPT-175B, everyone can download pretrained models of this scale. Still, using these models requires high-end hardware unavailable to many researchers. In some cases, LLMs can be used more affordably via RAM offloading or hosted APIs. However, these techniques have innate limitations: offloading is too slow for interactive inference, while APIs are not flexible enough for research that requires access to weights, attention, or logits. In this work, we propose Petals, a system for collaborative inference and fine-tuning of large models that pools the resources of multiple parties. We demonstrate that this strategy outperforms offloading for very large models, running inference of BLOOM-176B on consumer GPUs at $\approx$ 1 step per second, which is enough for many interactive LLM applications. Unlike most inference APIs, Petals also natively exposes the hidden states of served models, allowing users to train and share custom model extensions based on parameter-efficient fine-tuning methods.
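To make the claim concrete, the sketch below shows what client-side usage of Petals looks like, assuming the `petals` package is installed and a public swarm is serving the chosen model; the model name, prompt, and generation parameters are illustrative assumptions, not part of the abstract.

```python
# Minimal sketch of distributed inference through a Petals swarm.
# Assumes: `pip install petals` and an available swarm serving the model.
import torch
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "bigscience/bloom"  # illustrative; any Petals-served model works

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Loads only the small local parts (embeddings); transformer blocks
# are executed remotely on volunteer servers in the swarm.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Each generation step runs one distributed forward pass; the paper
# reports roughly one such step per second for BLOOM-176B.
inputs = tokenizer("A quick test of distributed inference:",
                   return_tensors="pt")["input_ids"]
with torch.inference_mode():
    outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```

Because the client holds the embeddings and receives intermediate hidden states back from the swarm, it can attach trainable adapters or prompts locally, which is what enables the parameter-efficient fine-tuning mentioned above.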