This paper aims to increase our understanding of, and computational know-how for, time-varying matrix problems and Zhang Neural Networks (ZNNs). These neural networks were invented around 2001 in China for time-varying (single-parameter-varying) matrix problems, and almost all of their advances have been made in, and most still come from, their birthplace. Zhang Neural Network methods have become a backbone for solving discretized, sensor-driven time-varying matrix problems in real time, both in theory and in on-chip applications for robots, in control theory, and in other engineering applications in China. They have become the method of choice for many time-varying matrix problems that benefit from, or require, efficient, accurate, and predictive real-time computation. A typical discretized Zhang Neural Network algorithm needs seven distinct steps in its initial setup. The construction of a discretized Zhang Neural Network algorithm starts from a model equation with its associated error equation and the stipulation that the error function decrease exponentially fast. The error function differential equation is then mated with a convergent look-ahead finite difference formula to create a distinctly new multistep-style solver that reliably predicts the future state of the system from current and earlier state and solution data. MATLAB codes of discretized Zhang Neural Network algorithms for time-varying matrix problems typically consist of one linear-equations solve and one recursion of already available data per time step. This makes discretized Zhang Neural Network based algorithms highly competitive with ordinary differential equation initial-value analytic continuation methods for function-given data that are designed to work adaptively.
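The construction described above can be sketched for one concrete model problem. The following is a minimal illustration, not the paper's code: for the time-varying linear system A(t) x(t) = b(t), the error function is e(t) = A(t) x(t) - b(t), the stipulation de/dt = -λ e yields the continuous ZNN ODE A(t) ẋ = ḃ - Ȧ x - λ e, and ẋ at step k is replaced by the convergent look-ahead difference formula ẋ_k ≈ (2 x_{k+1} - 3 x_k + 2 x_{k-1} - x_{k-2}) / (2h). Solving for x_{k+1} gives exactly one linear-equations solve and one recursion of stored data per time step, as stated in the abstract. The example matrices A, b, the gain λ, and the start-up seeding are my own illustrative assumptions.

```python
import numpy as np

def A(t):  # illustrative time-varying coefficient matrix (kept invertible)
    return np.array([[3.0 + np.sin(t), np.cos(t)],
                     [np.cos(t), 3.0 + np.sin(t)]])

def Adot(t):  # analytic derivative of A(t)
    return np.array([[np.cos(t), -np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

def b(t):
    return np.array([np.sin(2.0 * t), np.cos(t)])

def bdot(t):
    return np.array([2.0 * np.cos(2.0 * t), -np.sin(t)])

def znn_solve(h=1e-3, lam=10.0, t_end=1.0):
    """Discretized ZNN sketch for A(t) x(t) = b(t)."""
    n_steps = int(round(t_end / h))
    # Seed the multistep recursion with three exact solves (an assumption;
    # in a sensor-driven setting these would be measured start-up data).
    xs = [np.linalg.solve(A(k * h), b(k * h)) for k in range(3)]
    for k in range(2, n_steps):
        t = k * h
        e = A(t) @ xs[-1] - b(t)                      # error function e(t_k)
        # one linear solve per step: A(t_k) xdot = bdot - Adot x - lam * e
        xdot = np.linalg.solve(A(t), bdot(t) - Adot(t) @ xs[-1] - lam * e)
        # look-ahead recursion from the difference formula:
        # x_{k+1} = h*xdot + 1.5*x_k - x_{k-1} + 0.5*x_{k-2}
        x_next = h * xdot + 1.5 * xs[-1] - xs[-2] + 0.5 * xs[-3]
        xs.append(x_next)
    return n_steps * h, xs[-1]

t_final, x_final = znn_solve()
residual = np.linalg.norm(A(t_final) @ x_final - b(t_final))
```

The chosen four-instant formula is zero-stable (its extraneous characteristic roots have modulus below 1) and second-order accurate, so the predicted state x_{k+1} is available before time t_{k+1} arrives, which is the "predictive" property emphasized above.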