Designing accurate and efficient ConvNets for mobile devices is challenging because the design space is combinatorially large. As a result, previous neural architecture search (NAS) methods are computationally expensive. The optimal ConvNet architecture depends on factors such as input resolution and target device, yet existing approaches are too expensive for case-by-case redesign. Moreover, previous work focuses primarily on reducing FLOPs, but FLOP count does not always reflect actual latency. To address these issues, we propose a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding the need to enumerate and train individual architectures separately as in previous methods. FBNets, a family of models discovered by DNAS, surpass state-of-the-art models, both manually designed and automatically generated. FBNet-B achieves 74.1% top-1 accuracy on ImageNet with 295M FLOPs and 23.1 ms latency on a Samsung S8 phone, making it 2.4x smaller and 1.5x faster than MobileNetV2-1.3 at similar accuracy. Despite achieving higher accuracy and lower latency than MnasNet, we estimate FBNet-B's search cost to be 420x smaller than MnasNet's, at only 216 GPU-hours. Searched for different input resolutions and channel scales, FBNets achieve 1.5% to 6.4% higher accuracy than MobileNetV2. The smallest FBNet achieves 50.2% accuracy with 2.9 ms latency (345 frames per second) on a Samsung S8. Compared to a Samsung-optimized FBNet, the iPhone-X-optimized model achieves a 1.4x speedup on an iPhone X.
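To make the gradient-based search concrete, the sketch below illustrates the core DNAS idea in PyTorch: each searchable layer computes a weighted sum of candidate blocks, the weights come from a Gumbel-softmax over learnable architecture parameters, and a differentiable latency estimate can be added to the training loss. This is a minimal illustration under assumed names (MixedOp, candidate_ops, latencies are placeholders), not the authors' released implementation; the actual FBNet loss combines cross-entropy with a latency term and anneals the sampling temperature during search.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One searchable layer: a Gumbel-softmax-weighted sum of candidate blocks.

    Minimal sketch of the DNAS idea; all candidate ops are assumed to
    produce outputs of the same shape.
    """
    def __init__(self, candidate_ops, latencies):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        # Architecture parameters (one logit per candidate block),
        # optimized by gradient descent alongside the network weights.
        self.theta = nn.Parameter(torch.zeros(len(candidate_ops)))
        # Per-block latency on the target device (e.g., in ms), typically
        # taken from a lookup table built by on-device benchmarking.
        self.register_buffer("latencies", torch.tensor(latencies))

    def forward(self, x, temperature=5.0):
        # Differentiable, nearly one-hot sample over candidate blocks.
        mask = F.gumbel_softmax(self.theta, tau=temperature)
        out = sum(m * op(x) for m, op in zip(mask, self.ops))
        # The expected latency is differentiable w.r.t. theta, so it can
        # be added to the task loss to penalize slow architectures.
        expected_latency = (mask * self.latencies).sum()
        return out, expected_latency
```

Because the mask is differentiable, a single supernet is trained once and the architecture parameters are optimized by gradient descent, rather than training each candidate network separately; after search, keeping the block with the largest logit in each layer yields a discrete architecture.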