Deep learning algorithms have achieved many breakthroughs and are widely applied in real life. As the volume of data and the complexity of deep learning pipelines increase, computational resources become a bottleneck. In this paper, we propose deep learning pipelines optimized along multiple dimensions of training, including time and memory. OpTorch is a machine learning library designed to overcome weaknesses in existing implementations of neural network training. OpTorch provides features to train complex neural networks with limited computational resources. On the Cifar-10 and Cifar-100 datasets, OpTorch achieved the same accuracy as existing libraries while reducing memory usage to approximately 50\%. We also explore the effect of weights on total memory usage in deep learning pipelines. In our experiments, parallel encoding-decoding combined with sequential checkpoints substantially improves memory and time usage while keeping accuracy similar to existing pipelines. The OpTorch Python package is available at \url{https://github.com/cbrl-nuces/optorch}.
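To give a concrete sense of the sequential-checkpoint technique the abstract refers to, the following is a minimal sketch in plain PyTorch using \texttt{torch.utils.checkpoint.checkpoint\_sequential}. This is not OpTorch's own API, which may differ; the model depth, layer sizes, and segment count are illustrative assumptions only.

\begin{verbatim}
# Minimal sketch of sequential gradient checkpointing in plain PyTorch.
# This is NOT OpTorch's API; it only illustrates the technique named in
# the abstract: trading recomputation for activation memory.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# Hypothetical deep stack; layer count and sizes are illustrative.
model = nn.Sequential(
    *[nn.Sequential(nn.Linear(1024, 1024), nn.ReLU()) for _ in range(16)]
)
x = torch.randn(64, 1024, requires_grad=True)

# Split the stack into 4 checkpointed segments: only segment-boundary
# activations are stored; the rest are recomputed in the backward pass.
out = checkpoint_sequential(model, 4, x, use_reentrant=False)
out.sum().backward()
\end{verbatim}

Under these assumptions, peak activation memory shrinks roughly in proportion to the number of segments, at the cost of approximately one extra forward pass of compute during the backward step.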