While larger neural models are pushing the boundaries of what deep learning can do, more weights are often needed to train a model than to run inference for a task. This paper seeks to understand this behavior through the lens of search spaces: adding weights creates extra degrees of freedom that open new paths for optimization (i.e., wider search spaces), making neural model training more effective. We then show how search spaces can be augmented to train sparse models that attain competitive scores across dozens of deep learning workloads. These sparse models are also tolerant of sparsity structures suited to current hardware, opening avenues for accelerating both training and inference. Our work encourages research that explores beyond the massive neural models in use today.