Recently, the interpretability of deep learning has attracted a great deal of attention. A plethora of methods have attempted to explain neural networks through feature visualization, saliency maps, model distillation, and so on. However, these methods struggle to reveal the intrinsic properties of neural networks. In this work, we study the 1-D optimal piecewise linear approximation (PWLA) problem and associate it with a specially designed neural network, named the lattice neural network (LNN). We ask four essential questions: (1) What are the characteristics of the optimal solution of the PWLA problem? (2) Can an LNN converge to the global optimum? (3) Can an LNN converge to a local optimum? (4) Can an LNN solve the PWLA problem? Our main contributions are theorems that characterize the optimal solution of the PWLA problem and the LNN method for solving it. We evaluate the proposed LNNs on approximation tasks and develop an empirical method to improve their performance. The experiments verify that our LNN method is competitive with the state-of-the-art method.
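To make the 1-D PWLA setting concrete, the following is a minimal sketch (not the paper's LNN method) that fits a continuous piecewise linear function to 1-D data by least squares over hinge basis functions max(0, x - b), with the breakpoints fixed in advance; the PWLA problem studied here additionally optimizes the breakpoint locations. The function names and the sin(x) target are illustrative assumptions.

```python
# Illustration of 1-D piecewise linear approximation with FIXED breakpoints
# (the full PWLA problem also optimizes the breakpoints themselves).
import numpy as np

def pwl_basis(x, breakpoints):
    """Design matrix for f(x) = c0 + c1*x + sum_k c_k * max(0, x - b_k),
    which is continuous and piecewise linear with kinks at the breakpoints."""
    cols = [np.ones_like(x), x]
    cols += [np.maximum(0.0, x - b) for b in breakpoints]
    return np.stack(cols, axis=1)

def fit_pwl(x, y, breakpoints):
    """Least-squares fit of the piecewise linear coefficients."""
    coef, *_ = np.linalg.lstsq(pwl_basis(x, breakpoints), y, rcond=None)
    return coef

# Example: approximate sin(x) on [0, 2*pi] with 5 linear segments.
x = np.linspace(0.0, 2 * np.pi, 200)
y = np.sin(x)
bps = np.linspace(0.0, 2 * np.pi, 6)[1:-1]  # 4 interior breakpoints
coef = fit_pwl(x, y, bps)
approx = pwl_basis(x, bps) @ coef
print(f"max abs error: {np.max(np.abs(approx - y)):.4f}")
```

With equispaced breakpoints this already gives a small uniform error; the questions raised above concern what the truly optimal breakpoint placement looks like and whether an LNN trained by gradient descent can find it.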