Discretization invariant learning aims at learning in infinite-dimensional function spaces with the capacity to process heterogeneous discrete representations of functions as inputs and/or outputs of a learning model. This paper proposes a novel deep learning framework based on integral autoencoders (IAE-Net) for discretization invariant learning. The basic building block of IAE-Net consists of an encoder and a decoder, realized as integral transforms with data-driven kernels, and a fully connected neural network between the encoder and decoder. This building block is applied in parallel in a wide multi-channel structure, and these multi-channel structures are repeatedly composed to form IAE-Net, a deep and densely connected neural network with skip connections. IAE-Net is trained with randomized data augmentation that generates training data with heterogeneous structures to facilitate discretization invariant learning. The proposed IAE-Net is tested on various applications in predictive data science, solving forward and inverse problems in scientific computing, and signal/image processing. Compared with alternatives in the literature, IAE-Net achieves state-of-the-art performance in existing applications and enables a wide range of new ones.
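To make the basic building block concrete, the following is a minimal sketch, assuming a PyTorch-style interface: an integral-transform encoder with a data-driven kernel maps a discretized input function to a fixed-size latent vector, a fully connected network acts on that latent, and an integral-transform decoder evaluates the output on an arbitrary query grid. All names here (KernelMLP, IAEBlock, latent_dim, etc.) are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn


class KernelMLP(nn.Module):
    """Data-driven kernel k(x, y) parameterized by a small MLP (illustrative)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.GELU(), nn.Linear(hidden, 1)
        )

    def forward(self, x, y):
        # x: (m, 1) output grid, y: (n, 1) input grid -> kernel matrix (m, n)
        xm = x[:, None, :].expand(-1, y.shape[0], -1)
        ym = y[None, :, :].expand(x.shape[0], -1, -1)
        return self.net(torch.cat([xm, ym], dim=-1)).squeeze(-1)


class IAEBlock(nn.Module):
    """One encoder -> FNN -> decoder building block (sketch, not the authors' code)."""
    def __init__(self, latent_dim=32, width=128):
        super().__init__()
        self.enc_kernel = KernelMLP()
        self.dec_kernel = KernelMLP()
        # fixed latent grid on [0, 1]; its size is independent of the input discretization
        self.register_buffer("z", torch.linspace(0, 1, latent_dim)[:, None])
        self.fnn = nn.Sequential(
            nn.Linear(latent_dim, width), nn.GELU(), nn.Linear(width, latent_dim)
        )

    def forward(self, u, x_in, x_out):
        # u: (batch, n) samples of the input function on grid x_in of shape (n, 1)
        # x_out: (m, 1) arbitrary query grid for the output function
        h = x_in[1, 0] - x_in[0, 0]                 # uniform-grid quadrature weight
        K_enc = self.enc_kernel(self.z, x_in)       # (latent_dim, n)
        v = (u @ K_enc.T) * h                       # encoder: integral transform
        v = self.fnn(v)                             # fully connected core
        K_dec = self.dec_kernel(x_out, self.z)      # (m, latent_dim)
        return v @ K_dec.T / self.z.shape[0]        # decoder: integral transform


# Usage: the same block accepts inputs sampled on grids of different sizes.
block = IAEBlock()
for n in (64, 256):
    x_in = torch.linspace(0, 1, n)[:, None]
    x_out = torch.linspace(0, 1, 100)[:, None]
    u = torch.sin(2 * torch.pi * x_in.T).expand(8, -1)  # batch of 8 functions
    print(block(u, x_in, x_out).shape)                  # torch.Size([8, 100])
```

In the full architecture, several such blocks would run in parallel as channels and be composed depth-wise with dense skip connections; this sketch only illustrates how the integral transforms decouple the learned map from the input and output discretizations.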