Recent advances in Artificial Intelligence (AI), especially in Machine Learning (ML), have introduced various practical applications (e.g., virtual personal assistants and autonomous cars) that enhance the experience of everyday users. However, modern ML technologies such as Deep Learning require considerable technical expertise and resources to develop, train, and deploy models, making effective reuse of ML models a necessity. Such discovery and reuse by practitioners and researchers are being addressed by public ML package repositories, which bundle pre-trained models into packages for publication. Since such repositories are a recent phenomenon, there is no empirical data on their current state and challenges. Hence, this paper conducts an exploratory study that analyzes the structure and contents of two popular ML package repositories, TFHub and PyTorch Hub, comparing their information elements (features and policies), package organization, package manager functionalities, and usage contexts against popular software package repositories (npm, PyPI, and CRAN). Through this study, we have identified unique software engineering (SE) practices and challenges for sharing ML packages. These findings and implications will be useful for data scientists, researchers, and software developers who intend to use these shared ML packages.