Similar Literature

Retrieved 20 similar documents.
1.
This paper presents a new neural network model for solving degenerate quadratic minimax (DQM) problems. On the basis of the saddle point theorem, optimization theory, convex analysis theory, Lyapunov stability theory and the LaSalle invariance principle, the equilibrium point of the proposed network is proved to be equivalent to the optimal solution of the DQM problems. It is also shown that the proposed network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Several illustrative examples are provided to show the feasibility and efficiency of the proposed method.

2.
This paper presents an optimization technique for solving a maximum flow problem arising in widespread applications in a variety of settings. On the basis of the Karush–Kuhn–Tucker (KKT) optimality conditions, a neural network model is constructed. The equilibrium point of the proposed neural network is then proved to be equivalent to the optimal solution of the original problem. It is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the maximum flow problem. Several illustrative examples are provided to show the feasibility and efficiency of the proposed method.
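A recurring construction in these abstracts is a continuous-time "neural network" whose equilibrium is a KKT point of the optimization problem. As a rough, generic illustration only (a standard primal-dual gradient flow with made-up problem data, not the exact model of any paper listed here), one can integrate such an ODE for a small equality-constrained quadratic program:

```python
import numpy as np

# Generic primal-dual ("saddle-point") gradient flow for
#   min 0.5*x'Qx - c'x  subject to  Ax = b:
#     dx/dt   = -(Qx - c + A' * lam)   (primal descent)
#     dlam/dt =  Ax - b                (dual ascent)
# Its unique equilibrium is the KKT point of the QP.
Q = np.eye(2)
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.zeros(2)      # arbitrary initial state
lam = np.zeros(1)
dt = 0.01            # forward-Euler step for the ODE
for _ in range(20000):
    x_new = x + dt * (-(Q @ x - c + A.T @ lam))
    lam = lam + dt * (A @ x - b)
    x = x_new
# the trajectory converges to x = (0.5, 0.5), lam = 0.5
```

Here the equilibrium (x, λ) = ((0.5, 0.5), 0.5) is exactly the KKT point of the QP, matching the equivalence these papers establish between equilibria and optimal solutions.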

3.
In this paper, a neural network model is constructed on the basis of the duality theory, optimization theory, convex analysis theory, Lyapunov stability theory and LaSalle invariance principle to solve geometric programming (GP) problems. The main idea is to convert the GP problem into an equivalent convex optimization problem. A neural network model is then constructed for solving the obtained convex programming problem. By employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact optimal solution of the original problem. The simulation results also show that the proposed neural network is feasible and efficient.

4.
In this paper, a neural-network-based method is proposed for solving interval quadratic programming problems with linear constraints. Using the augmented Lagrangian function, a neural network model for solving the programming problem is established. Based on contraction fixed-point theory, the equilibrium point of the proposed neural network is proved to be the optimal solution of the equality-constrained interval quadratic programming problem. Using a suitable Lyapunov function, the equilibrium point of the proposed neural network is proved to be globally exponentially stable. Finally, two numerical simulations verify the feasibility and effectiveness of the proposed method.

5.
In this paper, the optimization techniques for solving pseudoconvex optimization problems are investigated. A simplified recurrent neural network is proposed according to the optimization problem. We prove that the optimal solution of the optimization problem is exactly the equilibrium point of the neural network, and vice versa if the equilibrium point satisfies the linear constraints. The proposed neural network is proven to be globally stable in the sense of Lyapunov and convergent to an exact optimal solution of the optimization problem. A numerical simulation is given to illustrate the global convergence of the neural network. Applications in business and chemistry are given to demonstrate the effectiveness of the neural network.

6.
In this paper, the optimization techniques for solving a class of non-differentiable optimization problems are investigated. The non-differentiable program is transformed into an equivalent or approximate differentiable program. Based on the Karush–Kuhn–Tucker optimality conditions and the projection method, a neural network model is constructed. The proposed neural network is proved to be globally stable in the sense of Lyapunov and can obtain an exact or approximate optimal solution of the original optimization problem. An example shows the effectiveness of the proposed optimization techniques.

7.
Stability of a class of neural network models
This paper extends a neural network model for solving convex programming problems to general nonconvex nonlinear programming problems. Theoretical analysis shows that, under suitable conditions, the equilibrium point of the proposed neural network model is asymptotically stable and corresponds to a local optimal solution of the nonlinear programming problem.

8.
This paper proposes a feedback neural network model for solving convex nonlinear programming (CNLP) problems. Under the condition that the objective function is convex and all constraint functions are strictly convex, or that the objective function is strictly convex and all constraint functions are convex, the proposed neural network is proved to be stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. The validity and transient behavior of the neural network are demonstrated by using some examples.
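Projection-type dynamics are another common way to build such networks. The sketch below uses a generic projected-gradient flow for a box-constrained scalar problem (an assumed, textbook construction, not the specific feedback model of the paper above): the equilibria of dx/dt = P_Ω(x − α∇f(x)) − x are exactly the fixed points of the projected-gradient map, i.e., the optimal solutions.

```python
import numpy as np

# Projected-gradient flow dx/dt = P_Omega(x - alpha*grad f(x)) - x for
#   min f(x) = (x - 2)^2  over the box Omega = [0, 1].
# The unconstrained minimizer (x = 2) is infeasible; the constrained
# optimum is the boundary point x = 1, which is the flow's equilibrium.
def grad_f(x):
    return 2.0 * (x - 2.0)

lo, hi = 0.0, 1.0
alpha, dt = 0.5, 0.05
x = 0.0                       # arbitrary initial state
for _ in range(2000):
    x = x + dt * (np.clip(x - alpha * grad_f(x), lo, hi) - x)
# x has converged to the constrained optimum 1.0
```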

9.
This article presents a novel neural network (NN) based on NCP function for solving nonconvex nonlinear optimization (NCNO) problem subject to nonlinear inequality constraints. We first apply the p-power convexification of the Lagrangian function in the NCNO problem. The proposed NN is a gradient model which is constructed by an NCP function and an unconstrained minimization problem. The main feature of this NN is that its equilibrium point coincides with the optimal solution of the original problem. Under a proper assumption and utilizing a suitable Lyapunov function, it is shown that the proposed NN is Lyapunov stable and convergent to an exact optimal solution of the original problem. Finally, simulation results on two numerical examples and two practical examples are given to show the effectiveness and applicability of the proposed NN. © 2015 Wiley Periodicals, Inc. Complexity 21: 130–141, 2016

10.
In this paper, a class of recurrent neural networks with continuously distributed delays is discussed. Without resorting to the theory of exponential dichotomy, several new sufficient conditions are obtained ensuring the existence of an almost periodic solution for this model based on a special functional and analysis technique. Moreover, by constructing suitable Lyapunov functions, the attractivity and exponential stability of the almost periodic solution are also considered for the system. The results obtained are helpful to design globally stable almost periodic oscillatory neural networks. A numerical example is given to show the feasibility of the results obtained.

11.
A neural network is proposed for solving a convex quadratic bilevel programming problem. Based on Lyapunov and LaSalle theories, we rigorously prove that, for an arbitrary initial point, the trajectory of the proposed network converges to the equilibrium, which corresponds to the optimal solution of the convex quadratic bilevel programming problem. Numerical simulation results show that the proposed neural network is feasible and efficient for convex quadratic bilevel programming problems.

12.
Existing algorithms for solving unconstrained optimization problems are generally only optimal in the short term. It is desirable to have algorithms which are long-term optimal. To achieve this, the problem of computing the minimum point of an unconstrained function is formulated as a sequence of optimal control problems. Some qualitative results are obtained from the optimal control analysis. These qualitative results are then used to construct a theoretical iterative method and a new continuous-time method for computing the minimum point of a nonlinear unconstrained function. New iterative algorithms which approximate the theoretical iterative method and the proposed continuous-time method are then established. For convergence analysis, it is useful to note that the numerical solution of an unconstrained optimization problem is none other than an inverse Lyapunov function problem. Convergence conditions for the proposed continuous-time method and iterative algorithms are established by using the Lyapunov function theorem.

13.
Based on necessary and sufficient conditions for its solutions, a neural network model is proposed for a class of generalized variational inequality problems. By constructing a Lyapunov function, the new model is proved, under suitable conditions, to be Lyapunov stable and to converge globally and exponentially to a solution of the original problem. Numerical experiments show that the proposed neural network model is effective and feasible.

14.
In [2], Alon and Tarsi proposed a conjecture about the nowhere-zero point in linear mappings. In this paper, we first study some generalizations of this problem, and obtain necessary and sufficient conditions for the existence of a nowhere-zero point in these generalized problems under the assumption |F| ≥ n + 2, where n is the number of rows of the matrix A. Then we apply the results on these generalizations to give a polynomial-time algebraic construction of acyclic network codes.

15.
In this work, we consider a new approach to the practical stability theory of impulsive functional differential equations. With Lyapunov functionals and the Razumikhin technique, we use a new technique for partitioning Lyapunov functions, due to Shunian Zhang, and obtain sufficient conditions for the uniform practical (asymptotic) stability of impulsive delay differential equations. An example is also discussed to illustrate the advantage of the proposed results.

16.
We are interested in models for vehicular traffic flow based on partial differential equations and their extensions to networks of roads. In this paper, we simplify a fluid-dynamic traffic model and derive a new traffic flow model based on ordinary differential equations (ODEs). This is obtained by spatial discretization of an averaged density evolution and a suitable approximation of the coupling conditions at junctions of the network. We show that the new model inherits similar features of the full model, e.g., traffic jam propagation. We consider optimal control problems controlled by the ODE model and derive the optimality system. We present numerical results on the simulation and optimization of traffic flow in sample networks.
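The core step described above, replacing the traffic-flow PDE by a system of ODEs through spatial discretization, can be sketched as follows. This is a generic Godunov-type semi-discretization of the LWR model on a single ring road (the flux q(ρ) = ρ(1 − ρ) and the scheme are standard textbook choices, not necessarily the authors' exact model, and network junctions are omitted):

```python
import numpy as np

# Semi-discretized LWR traffic model rho_t + (rho*(1 - rho))_x = 0 on a
# ring road: a Godunov-type finite-volume flux turns the PDE into a system
# of ODEs for the cell densities, integrated here with forward Euler.
def godunov_flux(rl, rr):
    # Godunov flux for the concave flux q(rho) = rho*(1 - rho),
    # which peaks at the critical density rho = 0.5
    q = lambda r: r * (1.0 - r)
    if rl <= rr:
        return min(q(rl), q(rr))
    if rr <= 0.5 <= rl:
        return q(0.5)
    return max(q(rl), q(rr))

n = 50
dx, dt = 1.0 / n, 0.005                 # CFL number dt/dx = 0.25 <= 1
rho = np.where(np.arange(n) < n // 2, 0.3, 0.9)   # free flow meets a jam
for _ in range(100):
    # flux[i] = flux across the right interface of cell i (periodic road)
    flux = np.array([godunov_flux(rho[i], rho[(i + 1) % n]) for i in range(n)])
    rho = rho - dt / dx * (flux - np.roll(flux, 1))
# the jam front moves upstream (shock speed (q(0.9) - q(0.3)) / 0.6 = -0.2),
# reproducing the jam propagation that the full PDE model exhibits
```

Total mass is conserved on the ring and densities stay in [0, 1], the two sanity checks the full model also satisfies.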

17.
This paper deals with general periodic Lotka-Volterra type competition systems with feedback controls and deviating arguments. By employing fixed point index theory on cones, an explicit necessary and sufficient condition for the global existence of the positive periodic solution of the systems is proved. By constructing a suitable Lyapunov functional, a set of easily verifiable sufficient conditions for the global asymptotic stability of the positive periodic solution of the systems is given.

18.
Optimization, 2012, 61(9): 1203–1226
This article presents a differential inclusion-based neural network for solving nonsmooth convex programming problems with inequality constraints. The proposed neural network, which is modelled with a differential inclusion, is a generalization of the steepest descent neural network. It is proved that the set of the equilibrium points of the proposed differential inclusion is equal to that of the optimal solutions of the considered optimization problem. Moreover, it is shown that the trajectory of the solution converges to an element of the optimal solution set and the convergence point is a globally asymptotically stable point of the proposed differential inclusion. After establishing the theoretical results, an algorithm is also designed for solving such problems. Typical examples are given which confirm the effectiveness of the theoretical results and the performance of the proposed neural network.

19.
The paper introduces a new approach to analyze the stability of neural network models without using any Lyapunov function. With the new approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion includes both isolated equilibrium points and connected equilibrium sets which could be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability in the gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of gradient-based neural networks will converge to an asymptotically stable equilibrium point of the neural networks. For a general nonlinear objective function, we propose a refined gradient-based neural network, whose trajectory with any arbitrary initial point will converge to an equilibrium point, which satisfies the second order necessary optimality conditions for optimization problems. Promising simulation results of a refined gradient-based neural network on some problems are also reported.
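The basic gradient-based network referred to above is the flow dx/dt = −∇f(x), for which f itself acts as a Lyapunov function: it is non-increasing along trajectories, and trajectories converge to stationary points. A minimal sketch with an assumed toy objective (not an example from the paper):

```python
import numpy as np

# Gradient flow dx/dt = -grad f(x) for f(x, y) = x^2 + 2*y^2 (assumed toy
# objective with unique minimizer at the origin), integrated with forward
# Euler. f serves as a Lyapunov function: it is non-increasing along the
# trajectory, which converges to the asymptotically stable equilibrium.
def f(x):
    return x[0] ** 2 + 2.0 * x[1] ** 2

def grad_f(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])

x = np.array([3.0, -2.0])     # arbitrary initial point
dt = 0.05
values = [f(x)]
for _ in range(500):
    x = x - dt * grad_f(x)
    values.append(f(x))
# values is monotonically non-increasing; x is now essentially (0, 0)
```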

20.
A new method for analyzing the stress singularity index of bi-material plane joints is proposed: the differential quadrature method (DQM). First, the radial asymptotic expansion of the displacement field at the junction point of the plane joint is substituted into the governing equations of plane elasticity, yielding an eigenvalue problem for a system of ordinary differential equations (ODEs) in the stress singularity index. Then, based on DQM theory, the ODE eigenvalue problem is transformed into a standard generalized algebraic eigenvalue problem, whose solution yields, in a single computation, the stress singularity indices at the junction point of the bi-material plane joint, together with the corresponding displacement and stress eigenfunctions. Numerical results confirm the correctness of the DQM computation of the stress singularity indices at the junction point.
