The extreme learning machine (ELM) method can yield highly accurate solutions to linear/nonlinear partial differential equations (PDEs), but it requires the last hidden layer of the neural network to be wide in order to achieve a high accuracy. If the last hidden layer is narrow, the accuracy of the existing ELM method will be poor, irrespective of the rest of the network configuration. In this paper we present a modified ELM method, termed HLConcELM (hidden-layer concatenated ELM), to overcome this drawback of the conventional ELM method. The HLConcELM method can produce highly accurate solutions to linear/nonlinear PDEs regardless of whether the last hidden layer of the network is narrow or wide. The new method is based on a modified type of feedforward neural network (FNN), termed HLConcFNN (hidden-layer concatenated FNN), which incorporates a logical concatenation of the hidden layers in the network and exposes all the hidden nodes to the output-layer nodes. We show that HLConcFNNs have the remarkable property that, given a network architecture, when additional hidden layers are appended to the network or when extra nodes are added to the existing hidden layers, the approximation capacity of the HLConcFNN associated with the new architecture is guaranteed to be no smaller than that of the original architecture. We present ample benchmark tests with linear/nonlinear PDEs to demonstrate the computational accuracy and performance of the HLConcELM method and its superiority over the conventional ELM method from previous works.
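The hidden-layer concatenation described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the tanh activation, the layer sizes, and all function names are assumptions. Every hidden layer's activations are concatenated into a single feature matrix that feeds the linear output layer, so all hidden nodes are exposed to the output nodes; in the ELM setting, only the output coefficients over this concatenated layer would be trained (e.g. by least squares), with the hidden weights held at their random values.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_hlconc_fnn(layer_sizes):
    """layer_sizes = [n_in, h1, ..., hk] (illustrative sizes).
    Returns randomly assigned hidden-layer weights/biases, plus output
    coefficients over the concatenation of ALL hidden nodes."""
    hidden = [(rng.standard_normal((m, n)), rng.standard_normal(n))
              for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    n_concat = sum(layer_sizes[1:])            # total number of hidden nodes
    beta = rng.standard_normal((n_concat, 1))  # output-layer coefficients
    return hidden, beta

def hlconc_forward(x, hidden, beta):
    """Forward pass: concatenate the activations of every hidden layer
    (the 'logical concatenation') before the linear output layer."""
    acts, h = [], x
    for W, b in hidden:
        h = np.tanh(h @ W + b)
        acts.append(h)
    phi = np.concatenate(acts, axis=1)  # all hidden nodes feed the output
    return phi @ beta, phi

x = rng.standard_normal((5, 2))                  # 5 sample points, 2 inputs
hidden, beta = init_hlconc_fnn([2, 10, 10, 4])   # narrow last hidden layer
u, phi = hlconc_forward(x, hidden, beta)
# phi has 10 + 10 + 4 = 24 columns even though the last layer has only 4 nodes
```

Note how appending another hidden layer only adds columns to `phi`, which is consistent with the non-decreasing approximation capacity claimed in the abstract: the original feature columns remain available to the output layer.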