In-memory deep learning computes neural network models where they are stored, avoiding long-distance communication between memory and compute units and thereby saving considerable energy and time. In-memory deep learning has already demonstrated orders of magnitude higher performance density and energy efficiency. The use of emerging memory technology promises to increase the gains in density, energy, and performance even further. However, emerging memory technology is intrinsically unstable, resulting in random fluctuations in the values read out, which can translate into non-negligible accuracy loss and potentially nullify the gains. In this paper, we propose three optimization techniques that can mathematically overcome the instability problem of emerging memory technology. They improve the accuracy of in-memory deep learning models while maximizing their energy efficiency. Experiments show that our solution fully recovers most models' state-of-the-art accuracy and achieves at least an order of magnitude higher energy efficiency than the state of the art.