Parameterized max-affine (PMA) and parameterized log-sum-exp (PLSE) networks are proposed for general decision-making problems. The proposed approximators generalize the existing convex approximators, namely, max-affine (MA) and log-sum-exp (LSE) networks, by taking both a condition variable and a decision variable as function arguments and by replacing the network parameters of MA and LSE networks with continuous functions of the condition variable. A universal approximation theorem for PMA and PLSE is proven, which implies that PMA and PLSE are shape-preserving universal approximators for parameterized convex continuous functions. Practical guidelines for incorporating deep neural networks within PMA and PLSE networks are provided. A numerical simulation is performed to demonstrate the performance of the proposed approximators. The simulation results show that PLSE outperforms the other existing approximators in terms of minimizer and optimal-value errors, with scalable and efficient computation in high-dimensional cases.
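To illustrate the construction described above, the following is a minimal sketch (not the paper's implementation) of PMA and PLSE evaluations: the affine-piece parameters of an MA/LSE network are produced by a small network of the condition variable, so the result is convex in the decision variable for every fixed condition. All dimensions, the number of pieces N, the temperature T, and the tanh parameter network are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: condition x in R^3, decision u in R^2,
# N affine pieces, temperature T > 0 controlling the smoothness of the log-sum-exp.
n_x, n_u, N, T = 3, 2, 16, 0.1

# A tiny two-layer network mapping the condition x to the per-piece parameters
# (a_1(x), b_1(x)), ..., (a_N(x), b_N(x)); any continuous parameter network would do.
W1 = rng.normal(size=(64, n_x)) / np.sqrt(n_x)
c1 = rng.normal(size=64)
W2 = rng.normal(size=(N * (n_u + 1), 64)) / 8.0
c2 = rng.normal(size=N * (n_u + 1))

def piece_params(x):
    """Continuous map from the condition x to slopes a_i(x) and intercepts b_i(x)."""
    h = np.tanh(W1 @ x + c1)
    theta = (W2 @ h + c2).reshape(N, n_u + 1)
    return theta[:, :n_u], theta[:, n_u]

def pma(x, u):
    """Parameterized max-affine: max over affine pieces in u, parameters given by x."""
    a, b = piece_params(x)
    return np.max(a @ u + b)

def plse(x, u):
    """Parameterized log-sum-exp: smooth (temperature T) counterpart of the PMA above."""
    a, b = piece_params(x)
    z = (a @ u + b) / T
    zmax = np.max(z)
    return T * (zmax + np.log(np.sum(np.exp(z - zmax))))  # numerically stable LSE

x, u = rng.normal(size=n_x), rng.normal(size=n_u)
print(pma(x, u), plse(x, u))  # PLSE upper-bounds PMA and converges to it as T -> 0
```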