The infrastructure necessary for training state-of-the-art models has become prohibitively expensive, making such models affordable to train only for large corporations and institutions. Recent work proposes several methods for training these models collaboratively, i.e., by pooling together hardware from many independent parties and training a shared model over the Internet. In this demonstration, we collaboratively trained a text-to-image transformer similar to OpenAI DALL-E. We invited viewers to join the ongoing training run, showing them instructions for contributing with their available hardware. We explained how to address the engineering challenges of such a training run (slow communication, limited memory, uneven performance across devices, and security concerns) and discussed how viewers can set up collaborative training runs of their own. Finally, we showed that the resulting model generates images of reasonable quality on a number of prompts.
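The abstract does not include setup details, but the core mechanics can be illustrated. Below is a minimal sketch of how a participant might join such a decentralized run using the open-source hivemind library, a common tool for training over the Internet with unreliable peers; the run ID, peer address, stand-in model, and batch-size targets are hypothetical placeholders, not values from this demonstration.

```python
import torch
import hivemind

# Hypothetical placeholders: a real run publishes its own run ID and
# reachable initial peer addresses for newcomers to connect to.
INITIAL_PEERS = ["/ip4/203.0.113.7/tcp/31337/p2p/QmExamplePeerID"]
RUN_ID = "demo-text-to-image-run"

# Stand-in model; the actual demonstration trained a DALL-E-like transformer.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512), torch.nn.GELU(), torch.nn.Linear(512, 512)
)
base_optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Join the distributed hash table (DHT) that peers use to discover each other.
dht = hivemind.DHT(initial_peers=INITIAL_PEERS, start=True)

# hivemind.Optimizer lets each peer train at its own pace and periodically
# averages parameters with whoever is online, tolerating slow links and
# devices of uneven performance joining or leaving mid-run.
optimizer = hivemind.Optimizer(
    dht=dht,
    run_id=RUN_ID,
    batch_size_per_step=4,     # whatever fits in this device's memory
    target_batch_size=4096,    # global batch size shared by all peers
    optimizer=base_optimizer,
    use_local_updates=True,    # apply updates locally, average periodically
    matchmaking_time=3.0,
    averaging_timeout=10.0,
    verbose=True,
)

for step in range(1000):
    # Dummy batch standing in for real (caption, image) training data.
    inputs = torch.randn(4, 512)
    loss = model(inputs).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In this scheme, a peer with more memory simply sets a larger batch_size_per_step, and a peer that disconnects only slows progress toward the shared target batch rather than halting the run; handling the security concerns mentioned above (e.g., validating contributions from untrusted peers) requires machinery beyond this sketch.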