As deep learning models gain popularity, there is growing demand for deploying them to diverse device environments. Because developing and optimizing a neural network for every single environment is costly, a line of research aims to efficiently search for neural networks targeting multiple environments at once. However, existing works in this setting still require many GPUs and incur high costs. Motivated by this, we propose a novel neural network optimization framework named Bespoke for low-cost deployment. Our framework searches for a lightweight model by replacing parts of an original model with randomly selected alternatives, each of which comes from a pretrained neural network or the original model itself. In practice, Bespoke offers two significant merits. One is that it requires near-zero cost for designing the search space of neural networks. The other is that it exploits sub-networks of publicly available pretrained neural networks, so its total cost is minimal compared to that of existing works. We conduct experiments exploring these merits, and the results show that Bespoke finds efficient models for multiple targets at meager cost.
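To make the replacement-based search concrete, the snippet below is a minimal sketch under our own assumptions; the function and parameter names are hypothetical illustrations, not the paper's actual API. It treats a model as a list of blocks and builds one candidate by randomly swapping blocks with compatible alternatives drawn from pretrained sub-networks or from the original model.

```python
import copy
import random

def random_replacement_candidate(original_blocks, alternatives_per_block, swap_prob=0.5):
    """Build one candidate model by randomly replacing blocks.

    Hypothetical sketch of the search step described in the abstract.

    original_blocks:        list of the original model's blocks
    alternatives_per_block: dict mapping block index -> list of compatible
                            replacement sub-networks (from pretrained models
                            or from the original model itself)
    swap_prob:              chance of attempting a swap at each position
    """
    candidate = copy.copy(original_blocks)
    for i in range(len(original_blocks)):
        pool = alternatives_per_block.get(i, [])
        if pool and random.random() < swap_prob:
            # Replace this block with a randomly selected alternative.
            candidate[i] = random.choice(pool)
    return candidate
```

Candidates produced this way would then be evaluated against the target environment's constraints, with the cheapest sufficiently accurate one retained.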