The development of smart systems (i.e., systems enhanced with AI components) has thrived thanks to the rapid advancement of neural networks (NNs). A wide range of libraries and frameworks has consequently emerged to support NN design and implementation. The choice of framework depends on factors such as available functionality, ease of use, documentation, and community support. After adopting a given NN framework, organizations might later choose to switch to another if performance declines, requirements evolve, or new features are introduced. Unfortunately, migrating NN implementations across libraries is challenging due to the lack of migration approaches specifically tailored to NNs. This increases the time and effort needed to modernize NNs, as manual updates are necessary to avoid relying on outdated implementations and to ensure compatibility with new features. In this paper, we propose an approach to automatically migrate neural network code across deep learning frameworks. Our method uses a pivot NN model to create an abstraction of the NN prior to migration. We validate our approach using two popular NN frameworks, namely PyTorch and TensorFlow. We also discuss the challenges of migrating code between the two frameworks and how our method addresses them. An experimental evaluation on five NNs shows that our approach successfully migrates their code and produces NNs that are functionally equivalent to the originals. Artefacts from our work are available online.
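As a rough illustration of the pivot idea only (a minimal sketch, not the authors' implementation: the PivotLayer structure and the to_pytorch/to_keras emitters are hypothetical names introduced here), a framework-agnostic description of a small feed-forward network can be rendered as equivalent PyTorch or TensorFlow/Keras code:

```python
# Illustrative sketch of a "pivot" representation (hypothetical, not the
# paper's actual model): a framework-agnostic layer list plus two emitters
# that render equivalent PyTorch and TensorFlow/Keras source code from it.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PivotLayer:
    kind: str                  # e.g. "dense"
    units: int                 # output size of the layer
    activation: Optional[str]  # e.g. "relu" or None

def to_pytorch(layers: List[PivotLayer], in_features: int) -> str:
    """Render the pivot model as PyTorch nn.Sequential source code."""
    parts, prev = [], in_features
    for layer in layers:
        parts.append(f"nn.Linear({prev}, {layer.units})")
        if layer.activation == "relu":
            parts.append("nn.ReLU()")
        prev = layer.units
    body = ",\n    ".join(parts)
    return f"import torch.nn as nn\n\nmodel = nn.Sequential(\n    {body}\n)"

def to_keras(layers: List[PivotLayer], in_features: int) -> str:
    """Render the pivot model as tf.keras.Sequential source code."""
    parts = [f"tf.keras.Input(shape=({in_features},))"]
    for layer in layers:
        act = f', activation="{layer.activation}"' if layer.activation else ""
        parts.append(f"tf.keras.layers.Dense({layer.units}{act})")
    body = ",\n    ".join(parts)
    return f"import tensorflow as tf\n\nmodel = tf.keras.Sequential([\n    {body}\n])"

# Example: one hidden layer with ReLU, one linear output layer.
pivot = [PivotLayer("dense", 64, "relu"), PivotLayer("dense", 10, None)]
print(to_pytorch(pivot, in_features=784))
print(to_keras(pivot, in_features=784))
```

This toy version covers only dense layers with optional ReLU activations; the approach described in the paper abstracts the NN implementation into the pivot model before regenerating code in the target framework.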