Traditional machine learning algorithms are designed to learn in isolation, i.e., to address single tasks. The core idea of transfer learning (TL) is that knowledge gained in learning to perform one task (source) can be leveraged to improve learning performance in a related, but different, task (target). TL leverages and transfers previously acquired knowledge to address the expense of data acquisition and labeling, limited computational power, and dataset distribution mismatches. Although significant progress has been made with TL in image processing, speech recognition, and natural language processing (for classification and regression), little work has been done in the field of scientific machine learning for functional regression and uncertainty quantification in partial differential equations. In this work, we propose a novel TL framework for task-specific learning under conditional shift with a deep operator network (DeepONet). Inspired by conditional embedding operator theory, we measure the statistical distance between the source domain and the target feature domain by embedding conditional distributions onto a reproducing kernel Hilbert space (RKHS). Task-specific operator learning is accomplished by fine-tuning task-specific layers of the target DeepONet using a hybrid loss function that allows for the matching of individual target samples while also preserving the global properties of the conditional distribution of target data. We demonstrate the advantages of our approach for various TL scenarios involving nonlinear PDEs under conditional shift. Our results include geometry domain adaptation and show that the proposed TL framework enables fast and efficient multi-task operator learning, despite significant differences between the source and target domains.
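The hybrid loss described above combines a pointwise data-fitting term with a kernel-embedding distance between distributions in an RKHS. As a minimal, hedged sketch (not the paper's exact formulation), the distribution-matching term can be illustrated with a standard maximum mean discrepancy (MMD) estimate under a Gaussian kernel; the function names, the `lam` weighting, and the use of plain (rather than conditional) embeddings here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of the squared MMD between the samples X and Y,
    # i.e., the RKHS distance between their mean embeddings.
    Kxx = rbf_kernel(X, X, sigma)
    Kyy = rbf_kernel(Y, Y, sigma)
    Kxy = rbf_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

def hybrid_loss(pred, target, feat_src, feat_tgt, lam=0.1, sigma=1.0):
    # Hypothetical hybrid loss: MSE matches individual target samples,
    # while the kernel-embedding term aligns source and target feature
    # distributions (a stand-in for the conditional embedding distance).
    mse = np.mean((pred - target) ** 2)
    return mse + lam * mmd2(feat_src, feat_tgt, sigma)
```

In the paper's setting, `feat_src` and `feat_tgt` would be features drawn from the source and target DeepONet, and only the task-specific layers of the target network would be updated against this loss.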