The field of natural language processing (NLP) has made significant strides in recent years, particularly in the development of large-scale vision-language models (VLMs). These models aim to bridge the gap between text and visual information, enabling a more comprehensive understanding of multimedia data. However, as these models become larger and more complex, they also become more challenging to train and deploy. One approach to addressing this challenge is the use of sparsely-gated mixture-of-experts (MoE) techniques, which divide the model into smaller, specialized sub-models that can jointly solve a task. In this paper, we explore the effectiveness of MoE for scaling vision-language models, demonstrating its potential to achieve state-of-the-art performance on a range of benchmarks while outperforming dense models of equivalent computational cost. Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute and performance when scaling VLMs. We hope our work will inspire further research into the use of MoE for scaling large-scale vision-language models and other multimodal machine learning applications.
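To make the core idea concrete, the sketch below shows a generic sparsely-gated MoE layer with top-k token routing. This is a minimal illustration of the general technique, not the architecture or code used in this paper; the class name, expert shape, and hyperparameters (`num_experts`, `k`) are illustrative assumptions.

```python
# Minimal sketch of a sparsely-gated mixture-of-experts layer with top-k routing.
# Illustrative only; not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    """Routes each token to its top-k experts; only those experts are evaluated."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        # Router produces one logit per expert for each token.
        self.gate = nn.Linear(d_model, num_experts)
        # Each expert is a small feed-forward sub-model.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten to individual tokens for routing.
        tokens = x.reshape(-1, x.shape[-1])
        logits = self.gate(tokens)                      # (num_tokens, num_experts)
        weights, indices = logits.topk(self.k, dim=-1)  # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            mask = indices == e                         # which tokens routed to expert e, and in which slot
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue
            out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(tokens[token_ids])
        return out.reshape_as(x)


# Usage: a drop-in replacement for a dense feed-forward block in a transformer.
layer = SparseMoELayer(d_model=256, d_hidden=1024)
y = layer(torch.randn(2, 16, 256))  # (batch=2, seq=16, d_model=256)
```

Because each token activates only `k` of the `num_experts` sub-models, the per-token compute stays close to that of a single dense feed-forward block while the total parameter count grows with the number of experts.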