Low-precision arithmetic trains deep learning models using less energy, less memory and less time. However, we pay a price for the savings: lower precision may yield larger round-off error and hence larger prediction error. As applications proliferate, users must choose which precision to use to train a new model, and chip manufacturers must decide which precisions to manufacture. We view these precision choices as a hyperparameter tuning problem, and borrow ideas from meta-learning to learn the tradeoff between memory and error. In this paper, we introduce Pareto Estimation to Pick the Perfect Precision (PEPPP). We use matrix factorization to find non-dominated configurations (the Pareto frontier) with a limited number of network evaluations. For any given memory budget, the precision that minimizes error is a point on this frontier. Practitioners can use the frontier to trade memory for error and choose the best precision for their goals.
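To make the idea concrete, the sketch below (not the authors' implementation; all data, sizes, and names are hypothetical) shows the two ingredients the abstract describes: completing a partially observed task-by-configuration error matrix with a low-rank factorization, then reading off the non-dominated (memory, error) configurations for one task.

```python
# Minimal sketch, assuming a tasks x precision-configurations error matrix
# that is approximately low rank and only partially observed.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_configs, rank = 8, 12, 2

# Hypothetical ground-truth errors, generated to be low rank.
true_errors = rng.random((n_tasks, rank)) @ rng.random((rank, n_configs))

# Observe only a subset of entries (a limited number of network evaluations).
mask = rng.random((n_tasks, n_configs)) < 0.4
observed = np.where(mask, true_errors, np.nan)

# Alternating least squares on the observed entries.
U = rng.normal(size=(n_tasks, rank))
V = rng.normal(size=(n_configs, rank))
for _ in range(50):
    for i in range(n_tasks):      # update task factors
        cols = mask[i]
        if cols.any():
            U[i] = np.linalg.lstsq(V[cols], observed[i, cols], rcond=None)[0]
    for j in range(n_configs):    # update configuration factors
        rows = mask[:, j]
        if rows.any():
            V[j] = np.linalg.lstsq(U[rows], observed[rows, j], rcond=None)[0]

estimated_errors = U @ V.T        # imputed error for every configuration

# Hypothetical memory cost (e.g., bits per weight) of each configuration.
memory = np.linspace(4, 32, n_configs)

# Pareto frontier for one task: keep configurations that no other
# configuration beats in both memory and estimated error.
err = estimated_errors[0]
frontier = [j for j in range(n_configs)
            if not any(memory[k] <= memory[j] and err[k] <= err[j]
                       and (memory[k] < memory[j] or err[k] < err[j])
                       for k in range(n_configs))]
print("Pareto-optimal configurations:", frontier)
```

Given a memory budget, a practitioner would then pick the frontier point with the largest memory cost not exceeding the budget; in this sketch that is a simple filter over `frontier`.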