Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper, a new complex-valued recurrent neural network (CVRNN) called the complex-valued Zhang neural network (CVZNN) is proposed and simulated for solving complex-valued time-varying matrix-inversion problems. The CVZNN model is designed from a matrix-valued error function in the complex domain and utilizes the first-order time-derivative information of the complex-valued time-varying matrix for online inversion. In contrast to the conventional complex-valued gradient-based neural network (CVGNN) and related methods, the state matrix of the resulting CVZNN model converges globally and exponentially to the theoretical inverse of the complex-valued time-varying matrix in an error-free manner. Moreover, choosing the design parameter γ>1 yields faster convergence than the baseline case γ=1. Computer-simulation results substantiate the theoretical analysis and further demonstrate the efficacy of the CVZNN model for online complex-valued time-varying matrix inversion.
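A minimal numerical sketch of such a Zhang-type design (the function names, the forward-Euler discretization, and the test matrix are illustrative assumptions, not the paper's implementation): with error function E(t) = A(t)X(t) − I, imposing dE/dt = −γE gives the state equation integrated below.

```python
import numpy as np

# Hedged sketch of a Zhang-type dynamic for time-varying matrix inversion.
# Design equation: with E(t) = A(t)X(t) - I, impose dE/dt = -gamma*E, i.e.
#   A(t) Xdot = -gamma*(A(t)X - I) - Adot(t) X.
# Forward-Euler integration is an illustrative choice, not the paper's.
def znn_invert(A, Adot, n, gamma=10.0, dt=1e-3, T=2.0):
    X = np.eye(n, dtype=complex)                     # initial state
    for k in range(int(T / dt)):
        t = k * dt
        rhs = -gamma * (A(t) @ X - np.eye(n)) - Adot(t) @ X
        X = X + dt * np.linalg.solve(A(t), rhs)      # Euler step
    return X

# Example: a Hermitian complex-valued time-varying matrix (always invertible).
A = lambda t: np.array([[2 + np.sin(t), 1j], [-1j, 2 + np.cos(t)]])
Adot = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
X = znn_invert(A, Adot, 2)
residual = np.linalg.norm(A(2.0) @ X - np.eye(2))    # should be near zero
```

With γ = 10 the design error decays like e^(−γt), so after T = 2 the state matrix tracks the true time-varying inverse up to discretization error.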

2.
The paper introduces a new approach to analyzing the stability of neural network models without using any Lyapunov function. With the new approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion includes both isolated equilibrium points and connected equilibrium sets, which could be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability in gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of the gradient-based neural network converges to an asymptotically stable equilibrium point of the network. For a general nonlinear objective function, we propose a refined gradient-based neural network whose trajectory from any initial point converges to an equilibrium point satisfying the second-order necessary optimality conditions. Promising simulation results of the refined gradient-based neural network on some problems are also reported.
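A small sketch of the gradient-based network dynamics x' = −∇f(x), Euler-discretized (an illustrative choice), on an objective whose minimizers form a connected set, matching the non-isolated-equilibrium setting above:

```python
import numpy as np

# Sketch of the gradient-based neural network dynamics x' = -grad f(x),
# integrated by forward Euler.  The objective below has a *connected* set
# of minimizers (the line x1 + x2 = 1), so equilibria are not isolated.
f = lambda x: (x[0] + x[1] - 1.0) ** 2
grad = lambda x: 2.0 * (x[0] + x[1] - 1.0) * np.ones(2)

x = np.array([3.0, -1.0])
for _ in range(2000):
    x = x - 0.05 * grad(x)        # Euler step of the gradient flow

# the trajectory settles at one equilibrium point on the minimizing line
```

The trajectory converges to a single point of the equilibrium set (here the gradient is always parallel to (1, 1), so x1 − x2 is invariant and picks out which minimizer is reached).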

3.
In this paper, we propose a trust region method for minimizing a function whose Hessian matrix at the solutions may be singular. The global convergence of the method is obtained under mild conditions. Moreover, we show that if the objective function is an LC² function, the method possesses local superlinear convergence under the local error bound condition, without requiring an isolated nonsingular solution. This is the first regularized Newton method with a trust region technique that possesses local superlinear (quadratic) convergence without the assumption that the Hessian of the objective function at the solution is nonsingular. Preliminary numerical experiments show the efficiency of the method. This work was partly supported by the National Natural Science Foundation of China (Grant Nos. 70302003, 10571106, 60503004, 70671100) and the Science Foundation of Beijing Jiaotong University (2007RC014).
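A hedged one-screen sketch of the regularized-Newton idea on a singular-Hessian problem (the regularization rule μ = ‖∇f‖ and all constants are illustrative; the paper's trust-region safeguard is omitted):

```python
import numpy as np

# Regularized Newton on f(x) = (x1 - x2)^4, whose minimizers form a line,
# so the Hessian is singular at every solution.  The rule mu = ||grad f||
# is a common regularization choice used here for illustration only.
def grad(x):
    return 4.0 * (x[0] - x[1]) ** 3 * np.array([1.0, -1.0])

def hess(x):
    return 12.0 * (x[0] - x[1]) ** 2 * np.array([[1.0, -1.0], [-1.0, 1.0]])

x = np.array([2.0, -1.0])
for _ in range(80):
    g = grad(x)
    mu = np.linalg.norm(g)                    # regularization ~ residual size
    if mu < 1e-14:
        break
    d = np.linalg.solve(hess(x) + mu * np.eye(2), -g)
    x = x + d                                 # full regularized Newton step
```

Despite the everywhere-singular Hessian, the regularized system stays solvable and the iterates approach the solution set x1 = x2.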

4.
The online gradient method has been widely used as a learning algorithm for training feedforward neural networks. A penalty term is often introduced into the training procedure to improve generalization performance and to decrease the magnitude of the network weights. In this paper, weight-boundedness and deterministic convergence theorems are proved for the online gradient method with a penalty term for BP neural networks with one hidden layer, assuming that the training samples are supplied to the network in a fixed order within each epoch. Monotonicity of the penalized error function during training is also guaranteed. Simulation results for a 3-bit parity problem are presented to support the theoretical results.
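The training loop can be sketched as follows; the architecture, penalty weight, learning rate, and toy regression targets are illustrative choices, not the paper's exact setting (only the per-sample fixed-order updates and the L2 penalty mirror the abstract):

```python
import numpy as np

# Online (per-sample, fixed-order) gradient training with an L2 penalty
# for a one-hidden-layer network.  All hyperparameters are illustrative.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))               # fixed training samples
y = np.tanh(X @ np.array([1.0, -2.0, 0.5]))   # toy teacher targets

W = rng.standard_normal((3, 4)) * 0.1          # input-to-hidden weights
v = rng.standard_normal(4) * 0.1               # hidden-to-output weights
lam, lr = 1e-3, 0.05                           # penalty coefficient, step size

def penalized_error():
    out = np.tanh(X @ W) @ v
    return 0.5 * np.sum((out - y) ** 2) + lam * (np.sum(W**2) + np.sum(v**2))

e0 = penalized_error()
for epoch in range(300):
    for i in range(len(X)):                    # samples in a fixed order
        h = np.tanh(X[i] @ W)
        err = h @ v - y[i]
        gv = err * h + 2 * lam * v             # penalized gradient w.r.t. v
        gW = np.outer(X[i], err * v * (1 - h**2)) + 2 * lam * W
        v -= lr * gv
        W -= lr * gW
e1 = penalized_error()
```

The penalty term keeps the weight norms bounded while the penalized error decreases over training, in line with the boundedness and monotonicity results stated above.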

5.
Convergence Analysis of the Online Gradient Method for Training Product-Unit Neural Networks
1. Introduction. Traditional feedforward neural networks built solely from summation units have been widely applied to pattern recognition, function approximation, and related fields. When handling more complex problems, however, such networks often require a large number of additional hidden nodes, which inevitably…

6.
In this paper, the optimization techniques for solving pseudoconvex optimization problems are investigated. A simplified recurrent neural network is proposed according to the optimization problem. We prove that the optimal solution of the optimization problem is just the equilibrium point of the neural network, and vice versa if the equilibrium point satisfies the linear constraints. The proposed neural network is proven to be globally stable in the sense of Lyapunov and convergent to an exact optimal solution of the optimization problem. A numerical simulation is given to illustrate the global convergence of the neural network. Applications in business and chemistry are given to demonstrate the effectiveness of the neural network.
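One common way to realize such a recurrent network is a projection-type dynamic x' = −x + P(x − α∇f(x)); the sketch below uses that form on a convex (hence pseudoconvex) test problem with one linear constraint. The dynamics form, step sizes, and test problem are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Projection-type recurrent network  x' = -x + P(x - a*grad f(x))
# for  min x1^2 + x2^2  s.t.  x1 + x2 = 1  (optimum at (0.5, 0.5)).
# Convex objectives are pseudoconvex, so this is a valid toy instance.
def project(y):                                # projection onto {x : x1+x2 = 1}
    return y - (np.sum(y) - 1.0) / 2.0

x = np.array([2.0, -3.0])
for _ in range(4000):
    x = x + 0.05 * (-x + project(x - 0.25 * 2.0 * x))   # Euler step
```

The equilibrium of the dynamics is exactly the constrained optimizer: at x* = (0.5, 0.5) one checks x* = P(x* − α∇f(x*)), so the state stops moving there.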

7.
This paper studies convergence properties of regularized Newton methods for minimizing a convex function whose Hessian matrix may be singular everywhere. We show that if the objective function is LC², then the methods possess local quadratic convergence under a local error bound condition without the requirement of isolated nonsingular solutions. By using a backtracking line search, we globalize an inexact regularized Newton method. We show that the unit stepsize is accepted eventually. Limited numerical experiments are presented, which show the practical advantage of the method.
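A hedged sketch of the globalized iteration (Armijo constants and the μ rule are illustrative) on a convex objective whose Hessian is singular everywhere; note that near the solution set the unit step passes the Armijo test, as the abstract asserts:

```python
import numpy as np

# Armijo-backtracking regularized Newton for f(x) = (x1 + 2*x2)^4.
# The minimizers form a line and the Hessian has rank at most 1 everywhere.
f = lambda x: (x[0] + 2.0 * x[1]) ** 4
a = np.array([1.0, 2.0])
grad = lambda x: 4.0 * (x[0] + 2.0 * x[1]) ** 3 * a
hess = lambda x: 12.0 * (x[0] + 2.0 * x[1]) ** 2 * np.outer(a, a)

x = np.array([1.0, 1.0])
for _ in range(100):
    g = grad(x)
    mu = np.linalg.norm(g)                     # regularization parameter
    if mu < 1e-13:
        break
    d = np.linalg.solve(hess(x) + mu * np.eye(2), -g)
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):   # Armijo backtracking
        t *= 0.5
    x = x + t * d                              # unit step accepted eventually
```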

8.
A Fast and Globally Convergent Learning Algorithm for BP Neural Networks
The error back-propagation (BP) algorithm has seen many successful applications in training multilayer neural networks. However, BP also has shortcomings, such as slow convergence and a tendency to become trapped in local minima. This paper proposes a fast and globally convergent learning algorithm for BP neural networks and gives a detailed analysis and proof of its global convergence. Empirical results show that the proposed algorithm is more efficient and more accurate than the standard BP algorithm.

9.
Using the homeomorphism mapping principle, linear matrix inequalities, and a constructed Lyapunov functional, this paper studies the global asymptotic stability of the equilibrium point for a class of Cohen-Grossberg neural networks, improving on the global asymptotic stability criteria in the existing literature.

10.
It is well known that Newton’s method for a nonlinear system has quadratic convergence when the Jacobian is a nonsingular matrix in a neighborhood of the solution. Here we present a modification of this method for nonlinear systems whose Jacobian matrix is singular. We prove, under certain conditions, that this modified Newton’s method has quadratic convergence. Moreover, different numerical tests confirm the theoretical results and allow us to compare this variant with the classical Newton’s method.
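One classical modification for singular Jacobians scales the Newton step by the root multiplicity; the sketch below contrasts it with plain Newton on a 1-D double root. This is a well-known textbook variant shown for illustration — the paper's modification may differ:

```python
# F(x) = x^2 + x^3 has a double root at 0, so F'(0) = 0 and classical
# Newton degrades to linear convergence there.  Scaling the step by the
# multiplicity m = 2 (a classical fix, not necessarily the paper's) restores
# quadratic convergence.
F = lambda x: x * x + x ** 3
dF = lambda x: 2.0 * x + 3.0 * x * x

def newton(x, m, tol=1e-12, itmax=200):
    """m = 1: classical Newton; m = 2: multiplicity-scaled step."""
    for k in range(itmax):
        if abs(F(x)) < tol or dF(x) == 0.0:
            return x, k
        x = x - m * F(x) / dF(x)
    return x, itmax

x_std, it_std = newton(1.0, 1)    # linear convergence at the singular root
x_mod, it_mod = newton(1.0, 2)    # quadratic convergence restored
```

Starting from x = 1, the scaled variant reaches the tolerance in a handful of iterations, while classical Newton needs roughly one iteration per halving of the error.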

11.
This paper presents a Pi-Sigma network to identify first-order Takagi-Sugeno (T-S) fuzzy inference systems and proposes a simplified gradient-based neuro-fuzzy learning algorithm. A comprehensive study of the weak and strong convergence of the learning method is made, which shows that the sequence of error-function values converges to a fixed value and that the gradient of the error function goes to zero, respectively.

12.
In this paper, we discuss the nonlinear minimax problems with inequality constraints. Based on the stationary conditions of the discussed problems, we propose a sequential systems of linear equations (SSLE)-type algorithm of quasi-strongly sub-feasible directions with an arbitrary initial iteration point. By means of the new working set, we develop a new technique for constructing the sub-matrix in the lower right corner of the coefficient matrix of the system of linear equations (SLE). At each iteration, two systems of linear equations (SLEs) with the same uniformly nonsingular coefficient matrix are solved. Under mild conditions, the proposed algorithm possesses global and strong convergence. Finally, some preliminary numerical experiments are reported.

13.
For solving large sparse systems of linear equations, we construct a paradigm of two-step matrix splitting iteration methods and analyze its convergence property for the nonsingular and the positive-definite matrix class. This two-step matrix splitting iteration paradigm adopts only one single splitting of the coefficient matrix, together with several arbitrary iteration parameters. Hence, it can be constructed easily in actual applications, and can also recover a number of representatives of the existing two-step matrix splitting iteration methods. This result provides a systematic treatment of two-step matrix splitting iteration methods, establishes rigorous theory for their asymptotic convergence, and enriches the algorithmic family of linear iteration solvers for the iterative solution of large sparse linear systems.
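A minimal instance of the paradigm, using a single splitting A = M − N with two relaxation parameters per sweep; the damped-Jacobi choice M = diag(A) and the small SPD test system are illustrative, not the paper's general construction:

```python
import numpy as np

# Two-step iteration from a single splitting A = M - N with two parameters
# (alpha, beta).  Here M = diag(A) (a Jacobi-type choice, for illustration).
def two_step_solve(A, b, alpha=1.0, beta=1.0, iters=500):
    Minv = 1.0 / np.diag(A)                    # M = D, so M^{-1} is diagonal
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + alpha * Minv * (b - A @ x)     # first half-step
        x = x + beta * Minv * (b - A @ x)      # second half-step
    return x

n = 5
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD tridiagonal
b = np.ones(n)
x = two_step_solve(A, b)
residual = np.linalg.norm(b - A @ x)
```

For this positive-definite matrix the Jacobi iteration matrix has spectral radius below 1, so both half-steps contract and the residual is driven to machine precision.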

14.
Without assuming the boundedness, strict monotonicity and differentiability of the activation functions, the authors utilize the Lyapunov functional method to analyze the global convergence of some delayed models. For the Hopfield neural network with time delays, a new sufficient condition ensuring the existence, uniqueness and global exponential stability of the equilibrium point is derived. This criterion concerning the signs of entries in the connection matrix imposes constraints on the feedback matrix independently of the delay parameters. From a new viewpoint, the bidirectional associative memory neural network with time delays is investigated and a new global exponential stability result is given.

15.
In this paper, a new class of complex-valued projective neural network is introduced and studied on a nonempty, closed, and convex subset of a finite-dimensional complex space. An existence and uniqueness result for the equilibrium point of the complex-valued projective neural network is proved under some suitable conditions. Moreover, by utilizing the linear matrix inequality technique, some sufficient conditions are presented to ensure the asymptotic stability of the complex-valued projective neural network. Finally, two examples are given to illustrate the validity and feasibility of the main results.

16.
In this paper, the global stability problem for a general discrete Cohen–Grossberg neural network with finite and infinite delays is investigated. A simple criterion ensuring the global asymptotical stability is established, by applying the Lyapunov method and graph theory. Finally, an example showing the effectiveness of the provided criterion is given.

17.
This paper presents a projected trust-region algorithm for solving bound-constrained semismooth systems of equations without a nonsingularity assumption. A Newton-like step is obtained from a regularized subproblem, from which a projected Newton step is derived. Under reasonable assumptions, the algorithm is shown to be not only globally convergent but also to retain a superlinear convergence rate.

18.
Derivation and Properties of the Pseudo-Newton B-Family
This paper proposes a new class of quasi-Newton methods (the pseudo-Newton B-family) for unconstrained optimization problems. These methods retain quadratic termination, and the generated matrix sequence preserves symmetric positive definiteness. Global convergence and superlinear convergence of the algorithm are also proved.

19.
We propose a non-interior continuation algorithm for the solution of the linear complementarity problem (LCP) with a P0 matrix. The proposed algorithm differentiates itself from the current continuation algorithms by combining good global convergence properties with good local convergence properties under unified conditions. Specifically, it is shown that the proposed algorithm is globally convergent under an assumption which may be satisfied even if the solution set of the LCP is unbounded. Moreover, the algorithm is globally linearly and locally superlinearly convergent under a nonsingularity assumption. If the matrix in the LCP is a P* matrix, then the above results can be strengthened to include global linear and local quadratic convergence under a strict complementary condition without the nonsingularity assumption.
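A hedged sketch of the non-interior idea using the CHKS smoothing function: the complementarity conditions are replaced by a smooth equation parameterized by μ, Newton steps are taken while μ is driven to zero, and iterates need not stay in the positive orthant. The update rules, constants, and test problem are illustrative, not the paper's algorithm:

```python
import numpy as np

# Smoothing (non-interior) continuation for the LCP
#   z >= 0,  Mz + q >= 0,  z'(Mz + q) = 0,
# via the CHKS function  phi(a, b; mu) = a + b - sqrt((a-b)^2 + 4*mu^2).
def lcp_continuation(M, q, mu=1.0, tol=1e-10, itmax=200):
    n = len(q)
    z = np.ones(n)
    for _ in range(itmax):
        F = M @ z + q
        s = np.sqrt((z - F) ** 2 + 4.0 * mu ** 2)
        Phi = z + F - s                          # smoothed complementarity
        if np.linalg.norm(Phi) < tol and mu < tol:
            break
        d = (z - F) / s
        J = np.eye(n) + M - d[:, None] * (np.eye(n) - M)   # Jacobian of Phi
        z = z - np.linalg.solve(J, Phi)          # Newton step
        mu *= 0.5                                # drive the smoothing to zero
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])           # positive definite matrix
q = np.array([-1.0, -1.0])
z = lcp_continuation(M, q)                       # solution: z = (1/3, 1/3)
```

For this positive definite M the smoothed Jacobian stays nonsingular along the path, and the iterates converge to the unique LCP solution with complementarity satisfied in the limit.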

20.
This paper studies the problem of global exponential stability and exponential convergence rate for a class of impulsive discrete-time neural networks with time-varying delays. Firstly, by means of the Lyapunov stability theory, some inequality analysis techniques and a discrete-time Halanay-type inequality technique, sufficient conditions for ensuring global exponential stability of discrete-time neural networks are derived, and the estimated exponential convergence rate is provided as well. The obtained results are then applied to derive global exponential stability criteria and exponential convergence rate of impulsive discrete-time neural networks with time-varying delays. Finally, numerical examples are provided to illustrate the effectiveness and usefulness of the obtained criteria.
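A toy scalar illustration of the discrete Halanay-type mechanism (impulses omitted; the coefficients, delay, and activation are illustrative choices, not the paper's model): when |a| + |b|L < 1 for an activation with Lipschitz constant L, the delayed recursion decays exponentially.

```python
import numpy as np

# Scalar delayed discrete-time network  x(k+1) = a*x(k) + b*f(x(k - tau))
# with f = tanh (Lipschitz constant L = 1).  Since |a| + |b| = 0.8 < 1,
# a discrete Halanay-type argument gives exponential decay of |x(k)|.
a, b, tau = 0.5, 0.3, 2
hist = [1.0] * (tau + 1)                       # constant initial history
for k in range(200):
    x_next = a * hist[-1] + b * np.tanh(hist[-1 - tau])
    hist.append(x_next)

final = abs(hist[-1])                          # should be exponentially small
```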


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)