Similar Literature
1.
This paper presents a new neural network model for solving degenerate quadratic minimax (DQM) problems. On the basis of the saddle-point theorem, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle, the equilibrium point of the proposed network is proved to be equivalent to the optimal solution of the DQM problem. It is also shown that the proposed network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Several illustrative examples demonstrate the feasibility and efficiency of the proposed method.
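The abstract does not give the network's equations; purely for orientation, the sketch below integrates classical saddle-point gradient dynamics for a small quadratic minimax problem — an illustration of the general approach, not the authors' model. All data (Q, R, A, b, c) are invented, with Q chosen singular to mimic the degenerate case.

```python
# Sketch: saddle-point gradient dynamics for a quadratic minimax problem
#   min_x max_y  L(x, y) = 0.5 x'Qx + c'x + y'(Ax - b) - 0.5 y'Ry
# The ODE descends in x and ascends in y, driving (x, y) toward a saddle
# point of L. Data are illustrative, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

Q = np.array([[2.0, 0.0], [0.0, 0.0]])   # positive semidefinite (degenerate)
R = np.array([[1.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, -1.0])

def dynamics(t, z):
    x, y = z[:2], z[2:]
    dx = -(Q @ x + c + A.T @ y)          # descend in x
    dy = A @ x - b - R @ y               # ascend in y
    return np.concatenate([dx, dy])

sol = solve_ivp(dynamics, (0.0, 50.0), np.zeros(3), rtol=1e-8)
print("approximate saddle point:", sol.y[:, -1])  # expected near x=(-1, 3), y=1
```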

2.
In this paper, optimization techniques for solving a class of non-differentiable optimization problems are investigated. The non-differentiable program is transformed into an equivalent or approximating differentiable program. Based on the Karush–Kuhn–Tucker optimality conditions and a projection method, a neural network model is constructed. The proposed neural network is proved to be globally stable in the sense of Lyapunov and can obtain an exact or approximate optimal solution of the original optimization problem. An example shows the effectiveness of the proposed optimization techniques.

3.
A neural network is proposed for solving a convex quadratic bilevel programming problem. Based on Lyapunov and LaSalle theories, we rigorously prove that, for an arbitrary initial point, the trajectory of the proposed network converges to the equilibrium point, which corresponds to the optimal solution of the convex quadratic bilevel programming problem. Numerical simulation results show that the proposed neural network is feasible and efficient for convex quadratic bilevel programming problems.

4.
This paper proposes a feedback neural network model for solving convex nonlinear programming (CNLP) problems. Under the condition that the objective function is convex and all constraint functions are strictly convex, or that the objective function is strictly convex and the constraint functions are convex, the proposed neural network is proved to be stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. The validity and transient behavior of the neural network are demonstrated through several examples.

5.
Liao Wudai, Zhou Jun. 《运筹学学报》 (Operations Research Transactions), 2023, 27(1): 103-114
To solve time-varying convex quadratic programming problems online with higher error accuracy, shorter solution time, and faster convergence, this paper adopts a time-varying design parameter that speeds up the solution, selects the sign-bi-power activation function, which converges in finite time, and constructs an improved zeroing neural network dynamic model. The stability and convergence of the model are then analyzed, showing that its solution converges within finite time. Finally, in simulation examples, the proposed model achieves higher error accuracy, shorter solution time, and faster convergence than the traditional gradient neural network and zeroing neural network models, outperforming both.
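As a sketch of the zeroing-neural-network idea with a sign-bi-power activation (not the authors' exact model), the code below tracks the solution of a time-varying linear system A(t)x = b(t) — the core computation once the KKT conditions of a time-varying QP are assembled. The gain gamma, exponent r, and problem data are assumptions.

```python
# Sketch: zeroing neural network (ZNN) with a sign-bi-power activation,
# applied to a time-varying linear system A(t) x(t) = b(t). Illustrative
# stand-in for the time-varying QP setting; all parameters are assumed.
import numpy as np

gamma, r = 10.0, 0.5

def sbp(e):
    # sign-bi-power activation: finite-time convergent in the ZNN literature
    return 0.5 * (np.abs(e)**r + np.abs(e)**(1.0 / r)) * np.sign(e)

def A(t):
    return np.array([[2.0 + np.sin(t), 0.3], [0.3, 2.0 + np.cos(t)]])

def b(t):
    return np.array([np.cos(t), np.sin(t)])

def dA(t):  # analytic time derivative of A
    return np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])

def db(t):  # analytic time derivative of b
    return np.array([-np.sin(t), np.cos(t)])

# ZNN design: force the error e = A x - b to obey  de/dt = -gamma * sbp(e)
x, h = np.zeros(2), 1e-4
for k in range(int(5.0 / h)):
    t = k * h
    e = A(t) @ x - b(t)
    xdot = np.linalg.solve(A(t), db(t) - dA(t) @ x - gamma * sbp(e))
    x = x + h * xdot
print("residual at t=5:", np.linalg.norm(A(5.0) @ x - b(5.0)))
```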

6.
This paper presents an optimization technique for solving the maximum flow problem, which arises in a wide variety of applications. On the basis of the Karush–Kuhn–Tucker (KKT) optimality conditions, a neural network model is constructed. The equilibrium point of the proposed neural network is then proved to be equivalent to the optimal solution of the original problem. It is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the maximum flow problem. Several illustrative examples demonstrate the feasibility and efficiency of the proposed method.
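For reference, the underlying max-flow problem is a linear program; the snippet below solves a toy instance directly with SciPy, which is the solution a KKT-based network would converge to. The four-node graph and its capacities are invented.

```python
# Sketch: the max-flow problem as a linear program. Toy graph (assumption):
# s=0 -> {1,2} -> t=3 with edges (s,1,cap 3), (s,2,cap 2), (1,2,cap 1),
# (1,t,cap 2), (2,t,cap 3).
import numpy as np
from scipy.optimize import linprog

# variable order: f_s1, f_s2, f_12, f_1t, f_2t
c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])      # maximize flow out of s
A_eq = np.array([[1.0, 0.0, -1.0, -1.0, 0.0],  # flow conservation at node 1
                 [0.0, 1.0, 1.0, 0.0, -1.0]])  # flow conservation at node 2
b_eq = np.zeros(2)
bounds = [(0, 3), (0, 2), (0, 1), (0, 2), (0, 3)]  # capacity bounds

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("max flow value:", -res.fun)  # expected: 5.0
print("edge flows:", res.x)
```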

7.
The paper introduces a new approach to analyzing the stability of neural network models without using any Lyapunov function. With the new approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion covers both isolated equilibrium points and connected equilibrium sets, which may be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability in gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of a gradient-based neural network converges to an asymptotically stable equilibrium point of the network. For a general nonlinear objective function, we propose a refined gradient-based neural network whose trajectory from any initial point converges to an equilibrium point satisfying the second-order necessary optimality conditions. Promising simulation results of the refined gradient-based neural network on some problems are also reported.
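In continuous time, the gradient-based neural network discussed here is the gradient flow dx/dt = -grad f(x); a minimal sketch on an invented smooth convex objective shows a trajectory settling at an equilibrium point, i.e. a stationary point of f.

```python
# Sketch: the continuous-time gradient-based neural network dx/dt = -grad f(x)
# on an invented objective with bounded-below value and Lipschitz gradient.
import numpy as np
from scipy.integrate import solve_ivp

def grad_f(x):
    # f(x) = (x1 - 1)^2 + 2 (x2 + 0.5)^2 + 0.5 x1 x2  (convex; assumption)
    return np.array([2.0 * (x[0] - 1.0) + 0.5 * x[1],
                     4.0 * (x[1] + 0.5) + 0.5 * x[0]])

sol = solve_ivp(lambda t, x: -grad_f(x), (0.0, 30.0), np.array([5.0, -5.0]),
                rtol=1e-9)
x_star = sol.y[:, -1]
print("equilibrium:", x_star, "gradient norm:", np.linalg.norm(grad_f(x_star)))
```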

8.
Optimization, 2012, 61(9): 1203-1226
This article presents a differential-inclusion-based neural network for solving nonsmooth convex programming problems with inequality constraints. The proposed neural network, modelled by a differential inclusion, is a generalization of the steepest descent neural network. It is proved that the set of equilibrium points of the proposed differential inclusion equals the set of optimal solutions of the considered optimization problem. Moreover, it is shown that the solution trajectory converges to an element of the optimal solution set and that the convergence point is a globally asymptotically stable point of the proposed differential inclusion. After establishing the theoretical results, an algorithm is designed for solving such problems. Typical examples confirm the theoretical results and the performance of the proposed neural network.
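As a rough illustration of the steepest-descent differential inclusion dx/dt in -subdiff f(x) (not the article's exact model, which also handles inequality constraints), the sketch below discretizes the inclusion with an explicit Euler step and one subgradient selection per step; the objective and step rule are assumptions.

```python
# Sketch: explicit Euler discretisation of the differential inclusion
#   dx/dt in -subdiff f(x),  f(x) = ||x||_1 + 0.5 ||x - a||^2  (assumption).
# One subgradient is selected per step; a diminishing step tames the
# chattering on the nonsmooth part. The minimiser is the soft-threshold of a.
import numpy as np

a = np.array([2.0, 0.3])

def subgrad(x):
    # sign(x) picks one element of the l1 subdifferential (0 at x_i = 0)
    return np.sign(x) + (x - a)

x = np.array([5.0, -5.0])
for k in range(1, 20001):
    x = x - (1.0 / k) * subgrad(x)   # diminishing step ~ 1/k
print("approx minimiser:", x)        # expected near soft-threshold(a, 1) = [1, 0]
```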

10.
This article presents a novel neural network (NN) based on NCP function for solving nonconvex nonlinear optimization (NCNO) problem subject to nonlinear inequality constraints. We first apply the p‐power convexification of the Lagrangian function in the NCNO problem. The proposed NN is a gradient model which is constructed by an NCP function and an unconstrained minimization problem. The main feature of this NN is that its equilibrium point coincides with the optimal solution of the original problem. Under a proper assumption and utilizing a suitable Lyapunov function, it is shown that the proposed NN is Lyapunov stable and convergent to an exact optimal solution of the original problem. Finally, simulation results on two numerical examples and two practical examples are given to show the effectiveness and applicability of the proposed NN. © 2015 Wiley Periodicals, Inc. Complexity 21: 130–141, 2016  相似文献   

10.
In this paper, optimization techniques for solving pseudoconvex optimization problems are investigated. A simplified recurrent neural network is proposed for the optimization problem. We prove that the optimal solution of the optimization problem is exactly the equilibrium point of the neural network, and vice versa when the equilibrium point satisfies the linear constraints. The proposed neural network is proven to be globally stable in the sense of Lyapunov and convergent to an exact optimal solution of the optimization problem. A numerical simulation illustrates the global convergence of the neural network, and applications in business and chemistry demonstrate its effectiveness.

11.
This paper considers the problem of passivity-based controller design for Hopfield neural networks. By making use of a convex representation of the nonlinearities, a feedback control scheme based on passivity and Lyapunov theory is presented. A criterion for the existence of the controller is given in terms of a linear matrix inequality (LMI), which can be solved efficiently as a convex optimization problem. An example and its numerical simulation are given to show the effectiveness of the proposed method.
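The paper's passivity LMI is not reproduced in the abstract; as a minimal illustration of how such LMI criteria are checked numerically, the snippet below solves the classical Lyapunov stability LMI A'P + PA < 0, P > 0 with CVXPY. The test matrix A and tolerance eps are assumptions.

```python
# Sketch: checking a linear matrix inequality (LMI) with CVXPY. This is the
# classical Lyapunov LMI  A'P + PA < 0, P > 0 -- a stand-in for the paper's
# passivity LMI, whose exact form is not reproduced here.
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # assumed stable test matrix
n, eps = A.shape[0], 1e-3

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
print("P =\n", P.value)
```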

12.
In this paper, delay-dependent asymptotic stability criteria for neural networks with time-varying delay are considered. A new class of Lyapunov functional containing a triple-integral term is constructed to derive new delay-dependent stability criteria. The obtained criteria are less conservative because a free-weighting-matrix method and a convex optimization approach are employed. Finally, numerical examples are given to illustrate the effectiveness of the proposed method.

13.
In this paper we present a robust conjugate duality theory for convex programming problems in the face of data uncertainty within the framework of robust optimization, extending the powerful conjugate duality technique. We first establish robust strong duality between an uncertain primal parameterized convex programming model problem and its uncertain conjugate dual by proving strong duality between the deterministic robust counterpart of the primal model and the optimistic counterpart of its dual problem under a regularity condition. This regularity condition is not only sufficient for robust duality but also necessary for it whenever robust duality holds for every linear perturbation of the objective function of the primal model problem. More importantly, we show that robust strong duality always holds for partially finite convex programming problems under scenario data uncertainty and that the optimistic counterpart of the dual is a tractable finite-dimensional problem. As an application, we also derive a robust conjugate duality theorem for support vector machines, an important class of convex optimization models for classifying two labelled data sets. The support vector machine has emerged as a powerful modelling tool for machine-learning problems of data classification in many areas of information and computer science.
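The SVM application has a concrete primal form; for context, the sketch below solves the standard (non-robust) soft-margin SVM with CVXPY on synthetic 2-D data. The robust counterpart from the paper is not reproduced, and the data generator and penalty C are assumptions.

```python
# Sketch: the standard soft-margin SVM primal, the convex model to which the
# paper's robust conjugate duality is applied. Non-robust baseline only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 1.0, (20, 2)), rng.normal(1.5, 1.0, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])
C = 1.0                                   # penalty parameter (assumption)

w, b = cp.Variable(2), cp.Variable()
xi = cp.Variable(40, nonneg=True)         # slack variables
margin = cp.multiply(y, X @ w + b)        # y_i (w'x_i + b)
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)),
                  [margin >= 1 - xi])
prob.solve()
print("w =", w.value, "b =", b.value)
```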

14.
An artificial neural network is proposed in this paper for solving the linear complementarity problem. The new neural network is based on a reformulation of the linear complementarity problem as an unconstrained minimization problem, and it can be easily implemented on a circuit. On the theoretical side, we analyze the existence of equilibrium points for the neural network and prove that if an equilibrium point exists, then it is both asymptotically stable and bounded (Lagrange) stable for any initial state. Furthermore, linear programming and certain quadratic programming problems (not necessarily convex) can also be solved by the neural network. Simulation results on several problems, including a nonconvex one, are also reported.
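The abstract does not specify the reformulation; a common choice (assumed here) is the Fischer–Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b, whose squared residual turns LCP(M, q) into an unconstrained minimization that a gradient-flow network can descend. M and q below are invented, with M positive definite so the solution is unique.

```python
# Sketch: neural-network-style gradient flow for the LCP  z >= 0, Mz+q >= 0,
# z'(Mz+q) = 0, via the Fischer-Burmeister merit function (an assumed choice;
# the paper's exact reformulation is not given in the abstract).
import numpy as np

M = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite (assumption)
q = np.array([-4.0, 1.0])

def merit_grad(z):
    w = M @ z + q
    rho = np.sqrt(z**2 + w**2)
    phi = rho - z - w                         # FB function, componentwise
    safe = np.maximum(rho, 1e-12)             # guard the kink at (0, 0)
    da, db = z / safe - 1.0, w / safe - 1.0   # partials of phi w.r.t. z_i, w_i
    return da * phi + M.T @ (db * phi)        # gradient of 0.5 ||phi||^2

z, h = np.zeros(2), 0.05
for _ in range(4000):                         # explicit Euler on dz/dt = -grad
    z = z - h * merit_grad(z)
w = M @ z + q
print("z =", z, " w = Mz+q =", w, " complementarity:", z @ w)  # z near (4/3, 0)
```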

16.
一类神经网络模型的稳定性   总被引:2,自引:1,他引:1  
本文将一种求解凸规划问题的神经网络模型推广到求解一般的非凸非线性规划问题.理论分析表明;在适当的条件下,本文提出的求解非凸非线性规划问题的神经网络模型的平衡点是渐近稳定的,对应于非线性规划问题的局部最优解  相似文献   

16.
In this paper, a one-layer recurrent network is proposed for solving non-smooth convex optimization problems subject to linear inequality constraints. Compared with existing neural networks for optimization, the proposed network can solve more general convex optimization problems with linear inequality constraints. Convergence of the state variables to an optimal solution is guaranteed as long as the design parameters in the model are larger than the derived lower bounds.

17.
Recently the authors proposed a homogeneous and self-dual algorithm for solving the monotone complementarity problem (MCP) [5]. The algorithm is a single-phase interior-point method; nevertheless, it yields either an approximate optimal solution or detects possible infeasibility of the problem. In this paper we specialize the algorithm to general smooth convex optimization problems, which may also possess nonlinear inequality constraints and free variables. We discuss an implementation of the algorithm for large-scale sparse convex optimization. Moreover, we present computational results for quadratically constrained quadratic programming and geometric programming problems, some of which contain more than 100,000 constraints and variables. The results indicate that the proposed algorithm is also practically efficient.

18.
In this paper, the problem of exponential stability analysis for neural networks is investigated. The considered neural networks have norm-bounded parametric uncertainties and interval time-varying delays. By constructing a new Lyapunov functional, new delay-dependent exponential stability criteria with an exponential convergence rate are established in terms of linear matrix inequalities (LMIs), which can be solved easily by various convex optimization algorithms. Two numerical examples show the effectiveness of the proposed criteria.

19.
This paper is concerned with exponential stability for discrete-time bidirectional associative memory neural networks with time-varying delays. Based on Lyapunov stability theory, novel delay-dependent sufficient conditions are obtained to guarantee the global exponential stability of the addressed neural networks. To obtain less conservative results, an improved Lyapunov–Krasovskii functional is constructed, and the reciprocally convex approach and free-weighting-matrix method are employed to bound the difference of the Lyapunov–Krasovskii functional. Several numerical examples illustrate the effectiveness of the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
