Project Title: Neural Network Design for Nonconvex Nonsmooth Optimization and Research on Its Key Issues
Project Number: No. 61462006
Project Type: Regional Science Fund Project
Approval Year: 2015
Discipline: Computer Science
Principal Investigator: 喻昕 (Yu Xin)
Affiliation: Guangxi University
Funding Amount: 420,000 CNY
Abstract (Chinese): Nonconvex nonsmooth optimization problems arise in many fields of science and engineering and are currently an active research topic internationally. Proposing a neural network model that effectively solves nonconvex nonsmooth optimization problems, and studying its key issues, therefore has both theoretical significance and practical value. The main work of this project is as follows. (1) To address the shortcomings of existing neural networks based on early penalty functions for nonsmooth optimization, a recurrent neural network model for nonconvex nonsmooth optimization problems is proposed by drawing on the idea of the Lagrange multiplier penalty function, and its related properties are studied. The penalty factor of this network model is a variable, and the network is still guaranteed to converge to an optimal solution of the optimization problem without having to compute an initial value of the penalty factor, which makes the network easier to compute and thus provides a new approach to this class of problems. (2) To overcome the tendency of traditional recurrent neural networks to become trapped in local minima, a transient chaos mechanism is introduced into the proposed recurrent neural network model to improve its global search ability. In addition, parameter conditions under which chaos is generated are given, and the influence of the transient chaos parameters on the time efficiency and accuracy of the search is studied, providing a basis for choosing appropriate transient chaos parameters.
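As a purely illustrative sketch of item (1) in the abstract, penalty-type subgradient dynamics with a time-varying penalty factor are commonly written in a form such as the following; the symbols f, g_i, h_j, P, and \sigma(t) are assumptions introduced here for exposition and are not claimed to be the project's actual model.

\dot{x}(t) \in -\partial f(x(t)) - \sigma(t)\,\partial P(x(t)), \qquad P(x) = \sum_i \max\{0,\, g_i(x)\} + \sum_j |h_j(x)|,

where f is the (possibly nonconvex and nonsmooth) objective, g_i(x) \le 0 and h_j(x) = 0 are the constraints, \partial denotes the Clarke subdifferential, and \sigma(t) is a penalty factor that varies along the trajectory instead of being fixed to a precomputed initial value.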
Keywords (Chinese): nonconvex nonsmooth optimization; neural networks; global optimization; finite-time convergence; transient chaos
Abstract (English): Nonconvex nonsmooth optimization problems arise in many fields of science and engineering applications and are an active research topic worldwide. Designing an effective neural network model to solve nonconvex nonsmooth optimization problems and studying its key issues is of both theoretical significance and practical value. The main work of this research is as follows. (1) To address the shortcomings of neural networks based on early penalty functions for nonsmooth optimization problems, a recurrent neural network model based on the Lagrange multiplier penalty function is proposed to solve nonconvex nonsmooth optimization problems, and its related properties are studied. The penalty factor in this network model is a variable, and the network is still guaranteed to converge to the optimal solution without computing an initial value of the penalty factor, which makes the network easier to compute. This research thus provides a new way to solve such problems. (2) Traditional recurrent neural networks are easily trapped in local minima. To overcome this shortcoming, a transient chaos mechanism is introduced into the proposed recurrent neural network to improve its global search ability. Moreover, parameter conditions for producing chaotic behavior are given, and the influence of the transient chaos parameters on the efficiency and accuracy of the optimization is studied, providing a basis for selecting appropriate transient chaos parameters.
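A minimal numerical sketch of how such a network could be simulated is given below, assuming a toy problem (minimize |x1| + |x2| subject to x1 + x2 = 1), an Euler discretization of penalty-type subgradient dynamics, and a decaying self-feedback term standing in for the transient chaos mechanism; all function names and parameter values are illustrative assumptions, not the project's proposed model.

import numpy as np

# Toy problem: minimize f(x) = |x1| + |x2| subject to h(x) = x1 + x2 - 1 = 0.
# Exact penalty P(x) = |h(x)|; the state follows -subgrad f(x) - sigma*subgrad P(x)
# plus a self-feedback term that decays to zero (illustrative sketch only).

def subgrad_f(x):
    return np.sign(x)                                  # a subgradient of |x1| + |x2|

def subgrad_P(x):
    return np.sign(x[0] + x[1] - 1.0) * np.ones(2)     # a subgradient of |x1 + x2 - 1|

def simulate(steps=20000, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=2)   # random initial network state
    sigma = 1.0                          # time-varying penalty factor
    z, beta = 0.8, 1e-3                  # self-feedback strength and its decay rate
    for _ in range(steps):
        drift = subgrad_f(x) + sigma * subgrad_P(x)
        perturb = z * (x - 0.5)          # decaying self-feedback perturbation
        x = x - dt * (drift + perturb)   # Euler step of the network dynamics
        sigma *= 1.0 + 1e-4              # gradually strengthen the penalty term
        z *= 1.0 - beta                  # anneal the perturbation (transient phase)
    return x

print(simulate())  # ends near the feasible minimizers (x1, x2 >= 0 with x1 + x2 close to 1)

The decaying self-feedback here only emulates the role described for transient chaos; a genuinely transiently chaotic network would couple such a term with a nonlinear activation so that chaotic dynamics appear before the annealing drives the network back to gradient-like behavior.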
Keywords (English): nonconvex nonsmooth optimization; neural networks; global optimization; finite-time convergence; transient chaos