Similar Literature
1.
This paper presents a new neural network model for solving degenerate quadratic minimax (DQM) problems. On the basis of the saddle point theorem, optimization theory, convex analysis, Lyapunov stability theory and the LaSalle invariance principle, the equilibrium point of the proposed network is proved to be equivalent to the optimal solution of the DQM problem. It is also shown that the proposed network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Several illustrative examples demonstrate the feasibility and efficiency of the proposed method.
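As a concrete illustration of the saddle-point idea this abstract relies on, the sketch below runs a plain gradient descent-ascent flow on a small quadratic minimax problem. The matrices `Q`, `P`, `A` and vector `c` are made-up data, and the dynamics are a generic textbook flow, not the network proposed in the paper.

```python
import numpy as np

# Generic saddle-point gradient flow for
#   min_x max_y  L(x, y) = 0.5 x^T Q x + c^T x + x^T A y - 0.5 y^T P y
# with Q, P positive definite: descend in x, ascend in y.
Q = np.array([[2.0, 0.0], [0.0, 1.0]])   # hypothetical problem data
P = np.array([[1.0]])
A = np.array([[1.0], [0.5]])
c = np.array([1.0, -1.0])

def grad_x(x, y):
    return Q @ x + A @ y + c             # gradient of L in x

def grad_y(x, y):
    return A.T @ x - P @ y               # gradient of L in y

x, y = np.zeros(2), np.zeros(1)
h = 0.05                                  # Euler step size
for _ in range(5000):
    x, y = x - h * grad_x(x, y), y + h * grad_y(x, y)

# At a saddle point both gradients vanish.
print(np.linalg.norm(grad_x(x, y)), np.linalg.norm(grad_y(x, y)))
```

With strongly convex-concave quadratic data, this simple discretized flow contracts to the unique saddle point, which mirrors the equilibrium-equals-optimum statement in the abstract.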

2.
In this paper, a neural network model is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory and the LaSalle invariance principle to solve geometric programming (GP) problems. The main idea is to convert the GP problem into an equivalent convex optimization problem, for which a neural network model is then constructed. By employing a Lyapunov function approach, it is shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Simulation results also show that the proposed neural network is feasible and efficient.
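The GP-to-convex conversion mentioned here is classically done with a log change of variables: substituting x = exp(y) turns a posynomial into a convex log-sum-exp function of y. The sketch below checks this numerically on invented data (`c` and `A` are assumptions); it illustrates only the standard transform, not the paper's neural network.

```python
import numpy as np

# A posynomial  f(x) = sum_k c_k * prod_i x_i**A[k, i]  becomes, with
# x = exp(y) and b_k = log c_k, the convex function
#   F(y) = log sum_k exp(b_k + A[k] . y)   (log-sum-exp of affine maps).
c = np.array([2.0, 0.5, 1.5])                         # hypothetical coefficients
A = np.array([[1.0, -2.0], [0.5, 1.0], [-1.0, 0.3]])  # hypothetical exponents
b = np.log(c)

def F(y):
    z = b + A @ y
    m = z.max()
    return m + np.log(np.exp(z - m).sum())            # numerically stable log-sum-exp

# Midpoint-convexity check: F((u+v)/2) <= (F(u)+F(v))/2 for random u, v.
rng = np.random.default_rng(0)
u, v = rng.normal(size=2), rng.normal(size=2)
print(F((u + v) / 2) <= (F(u) + F(v)) / 2 + 1e-12)    # True
```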

3.
In this paper, a neural-network-based method is proposed for solving interval quadratic programming problems with linear constraints. Using the augmented Lagrangian function, a neural network model for solving the programming problem is established. Based on contraction fixed-point theory, it is proved that the equilibrium point of the proposed neural network is exactly the optimal solution of the equality-constrained interval quadratic programming problem. Using a suitable Lyapunov function, it is proved that the equilibrium point of the proposed neural network is globally exponentially stable. Finally, two numerical simulations verify the feasibility and effectiveness of the proposed method.

4.
In this paper, optimization techniques for solving pseudoconvex optimization problems are investigated. A simplified recurrent neural network is proposed for the optimization problem. We prove that the optimal solution of the optimization problem is an equilibrium point of the neural network, and conversely that any equilibrium point satisfying the linear constraints is an optimal solution. The proposed neural network is proven to be globally stable in the sense of Lyapunov and convergent to an exact optimal solution of the optimization problem. A numerical simulation illustrates the global convergence of the neural network, and applications in business and chemistry demonstrate its effectiveness.

5.
This paper presents an optimization technique for solving the maximum flow problem, which arises in widespread applications across a variety of settings. On the basis of the Karush–Kuhn–Tucker (KKT) optimality conditions, a neural network model is constructed. The equilibrium point of the proposed neural network is then proved to be equivalent to the optimal solution of the original problem. It is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the maximum flow problem. Several illustrative examples demonstrate the feasibility and efficiency of the proposed method.

6.
In this paper, optimization techniques for solving a class of non-differentiable optimization problems are investigated. The non-differentiable problem is transformed into an equivalent or approximating differentiable problem. Based on the Karush–Kuhn–Tucker optimality conditions and a projection method, a neural network model is constructed. The proposed neural network is proved to be globally stable in the sense of Lyapunov and can obtain an exact or approximate optimal solution of the original optimization problem. An example shows the effectiveness of the proposed optimization techniques.
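One standard way to realize the "approximating differentiable" step described above is to smooth each non-differentiable term. The sketch below replaces |t| with sqrt(t^2 + eps^2), which is smooth and within eps of |t|, then minimizes the surrogate by plain gradient descent. The objective |x-1| + x^2 and the smoothing choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np

eps = 1e-4

def f_smooth(x):
    # Smooth surrogate of the non-differentiable objective |x - 1| + x**2.
    return np.sqrt((x - 1.0)**2 + eps**2) + x**2

def grad(x):
    # Gradient of the smooth surrogate.
    return (x - 1.0) / np.sqrt((x - 1.0)**2 + eps**2) + 2.0 * x

x = 3.0
for _ in range(2000):
    x -= 0.1 * grad(x)                 # gradient descent on the surrogate

print(round(x, 3))                     # 0.5, the minimizer of |x-1| + x^2
```

The true minimizer of |x-1| + x^2 is x = 0.5 (where the subgradient 2x - 1 vanishes on x < 1), and the smoothed solution lands within O(eps) of it.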

7.
Stability of a class of neural network models
This paper extends a neural network model for solving convex programming problems to general nonconvex nonlinear programming problems. Theoretical analysis shows that, under suitable conditions, an equilibrium point of the proposed neural network model for nonconvex nonlinear programming is asymptotically stable and corresponds to a local optimal solution of the nonlinear programming problem.

8.
This paper proposes a feedback neural network model for solving convex nonlinear programming (CNLP) problems. Under the condition that the objective function is convex and all constraint functions are strictly convex, or that the objective function is strictly convex and the constraint functions are convex, the proposed neural network is proved to be stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. The validity and transient behavior of the neural network are demonstrated with several examples.

9.
This article presents a novel neural network (NN) based on an NCP function for solving nonconvex nonlinear optimization (NCNO) problems subject to nonlinear inequality constraints. We first apply p‐power convexification to the Lagrangian function of the NCNO problem. The proposed NN is a gradient model constructed from an NCP function and an unconstrained minimization problem. The main feature of this NN is that its equilibrium point coincides with the optimal solution of the original problem. Under a proper assumption and utilizing a suitable Lyapunov function, it is shown that the proposed NN is Lyapunov stable and convergent to an exact optimal solution of the original problem. Finally, simulation results on two numerical examples and two practical examples show the effectiveness and applicability of the proposed NN. © 2015 Wiley Periodicals, Inc. Complexity 21: 130–141, 2016
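The abstract does not say which NCP function the construction uses; the most common choice is the Fischer-Burmeister function, sketched below. It vanishes exactly when the complementarity conditions a >= 0, b >= 0, a*b = 0 hold, which is what lets an NCP function turn complementarity conditions into a root-finding (or gradient) problem.

```python
import numpy as np

def fb(a, b):
    # Fischer-Burmeister NCP function: phi(a, b) = sqrt(a^2 + b^2) - a - b.
    # phi(a, b) = 0  <=>  a >= 0, b >= 0 and a*b = 0.
    return np.hypot(a, b) - a - b

print(fb(0.0, 3.0))    # 0.0 -> complementarity holds
print(fb(2.0, 0.0))    # 0.0 -> complementarity holds
print(fb(1.0, 1.0))    # sqrt(2) - 2, nonzero: both components positive
print(fb(-1.0, 0.0))   # 2.0, nonzero: negative component
```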

10.
Existing algorithms for solving unconstrained optimization problems are generally only optimal in the short term. It is desirable to have algorithms which are long-term optimal. To achieve this, the problem of computing the minimum point of an unconstrained function is formulated as a sequence of optimal control problems. Some qualitative results are obtained from the optimal control analysis. These qualitative results are then used to construct a theoretical iterative method and a new continuous-time method for computing the minimum point of a nonlinear unconstrained function. New iterative algorithms which approximate the theoretical iterative method and the proposed continuous-time method are then established. For convergence analysis, it is useful to note that the numerical solution of an unconstrained optimization problem is none other than an inverse Lyapunov function problem. Convergence conditions for the proposed continuous-time method and iterative algorithms are established by using the Lyapunov function theorem.
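The continuous-time view underlying abstracts like this one can be sketched in a few lines: the gradient flow x'(t) = -grad f(x(t)) makes f itself a Lyapunov function, since df/dt = -||grad f||^2 <= 0, and forward-Euler discretization of the flow recovers ordinary gradient descent. The quadratic f below is invented for illustration; this is not the paper's specific algorithm.

```python
import numpy as np

def f(x):
    # Hypothetical smooth objective with minimizer at (1, -0.5).
    return (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2

def grad(x):
    return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])

x = np.array([5.0, 5.0])
h = 0.1                                   # Euler step size
values = [f(x)]
for _ in range(200):
    x = x - h * grad(x)                   # one Euler step of the gradient flow
    values.append(f(x))

# f acts as a Lyapunov function: it decreases monotonically along the flow.
print(all(b <= a for a, b in zip(values, values[1:])))   # True
print(np.round(x, 4))                                    # ≈ [1, -0.5]
```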

11.
A neural network is proposed for solving a convex quadratic bilevel programming problem. Based on Lyapunov and LaSalle theories, we rigorously prove an important theoretical result: for an arbitrary initial point, the trajectory of the proposed network converges to the equilibrium, which corresponds to the optimal solution of the convex quadratic bilevel programming problem. Numerical simulation results show that the proposed neural network is feasible and efficient for convex quadratic bilevel programming problems.

12.
Based on necessary and sufficient conditions for its solution, a neural network model is proposed for a class of generalized variational inequality problems. By constructing a Lyapunov function, it is proved that, under suitable conditions, the new model is stable in the sense of Lyapunov and converges globally, and exponentially, to a solution of the original problem. Numerical experiments show that the proposed neural network model is effective and feasible.

13.
《Optimization》2012,61(9):1203-1226
This article presents a differential inclusion-based neural network for solving nonsmooth convex programming problems with inequality constraints. The proposed neural network, which is modelled with a differential inclusion, is a generalization of the steepest descent neural network. It is proved that the set of the equilibrium points of the proposed differential inclusion is equal to that of the optimal solutions of the considered optimization problem. Moreover, it is shown that the trajectory of the solution converges to an element of the optimal solution set and the convergence point is a globally asymptotically stable point of the proposed differential inclusion. After establishing the theoretical results, an algorithm is also designed for solving such problems. Typical examples are given which confirm the effectiveness of the theoretical results and the performance of the proposed neural network.

14.
The paper introduces a new approach to analyze the stability of neural network models without using any Lyapunov function. With the new approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion includes both isolated equilibrium points and connected equilibrium sets, which could be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability in gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of gradient-based neural networks will converge to an asymptotically stable equilibrium point of the neural networks. For a general nonlinear objective function, we propose a refined gradient-based neural network, whose trajectory from any arbitrary initial point converges to an equilibrium point satisfying the second-order necessary optimality conditions for optimization problems. Promising simulation results of the refined gradient-based neural network on some problems are also reported.

15.
A new method, the differential quadrature method (DQM), is proposed for analyzing the stress singularity order of bimaterial plane joint problems. First, the radial asymptotic expansion of the displacement field at the joint corner is substituted into the governing equations of plane elasticity, yielding an eigenvalue problem for a system of ordinary differential equations (ODEs) in the stress singularity order. Then, based on DQM theory, the ODE eigenvalue problem is transformed into a standard generalized algebraic eigenvalue problem, whose solution yields, in a single computation, all stress singularity orders at the bimaterial joint corner together with the corresponding displacement and stress eigenfunctions. Numerical results confirm that the DQM computation of the stress singularity orders at the plane joint corner is correct.

16.
In this paper, we consider stochastic second-order cone complementarity problems (SSOCCP). We first reformulate the SSOCCP, which contains an expectation, as an optimization problem using a so-called second-order cone complementarity function. We then use the sample average approximation method and a smoothing technique to obtain approximation problems for solving this reformulation. In theory, we show that any accumulation point of the global optimal solutions or stationary points of the approximation problems is a global optimal solution or stationary point of the original problem under suitable conditions. Finally, some numerical examples are given to illustrate that the proposed methods are feasible.
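Sample average approximation (SAA), in its simplest form, replaces an expectation by an average over sampled scenarios and solves the resulting deterministic problem. The sketch below applies it to min_x E[(x - xi)^2], whose SAA minimizer is the sample mean and converges to the true solution mu as the sample grows. This is a generic SAA illustration with invented data, not the SSOCCP reformulation itself.

```python
import numpy as np

rng = np.random.default_rng(42)
mu = 3.0
xi = rng.normal(loc=mu, scale=1.0, size=50_000)   # N sampled scenarios

def saa_objective(x):
    # Sample average standing in for the expectation E[(x - xi)^2].
    return np.mean((x - xi)**2)

# For this objective the SAA minimizer has a closed form: the sample mean.
x_saa = xi.mean()
print(abs(x_saa - mu) < 0.05)   # True: within sampling error of x* = mu
```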

17.
This paper investigates primal-dual theory for linear programming and, on that basis, develops a recurrent neural network for solving linear programs online, applied to the inverse kinematics of redundant robot manipulators; an example is the primal-dual neural network developed by Tang et al. However, owing to the complexity and diversity of duality theory, that primal-dual neural network model can only obtain feasible solutions of linear programming problems, whereas the improved model proposed in this paper obtains optimal solutions. Simulation results confirm the validity, correctness, and efficiency of the improved model for solving linear programming problems.

18.
Equilibrium problems play a central role in the study of complex and competitive systems. Many variational formulations of these problems have been presented in recent years, so variational inequalities are very useful tools for the study of equilibrium solutions and their stability. More recently, a dynamical model of equilibrium problems based on projection operators was proposed, known as the globally projected dynamical system (GPDS). The equilibrium points of this system are the solutions to the associated variational inequality (VI) problem. A very popular approach for finding solutions of these VIs and for studying their stability consists in introducing so-called "gap functions", while stability analysis of an equilibrium point of a dynamical system can be carried out by means of Lyapunov functions. In this paper we show strict relationships between gap functions and Lyapunov functions.
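A GPDS of the kind described here evolves as dx/dt = P_K(x - alpha*F(x)) - x, where P_K projects onto the feasible set K; an equilibrium satisfies x = P_K(x - alpha*F(x)), i.e. it solves VI(F, K). The sketch below integrates such a system for the hypothetical choices K = [0,1]^2 and F(x) = x - b, for which the VI solution is clip(b, 0, 1); it is a generic illustration, not the paper's analysis.

```python
import numpy as np

b = np.array([1.7, -0.3])     # hypothetical data defining F(x) = x - b
alpha, h = 0.5, 0.1

def proj(x):
    # Projection onto the feasible box K = [0, 1]^2.
    return np.clip(x, 0.0, 1.0)

def F(x):
    return x - b

x = np.array([0.5, 0.5])
for _ in range(500):
    # Euler step of the GPDS: dx/dt = P_K(x - alpha*F(x)) - x.
    x = x + h * (proj(x - alpha * F(x)) - x)

print(np.round(x, 4))          # ≈ [1, 0] = clip(b, 0, 1), the VI solution
```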

19.
廖伍代  周军 《运筹学学报》2023,27(1):103-114
To solve time-varying convex quadratic programming problems online with higher accuracy, shorter solution time, and faster convergence, this paper adopts a time-varying network design parameter that accelerates the solution, selects the sign-bi-power activation function, which converges in finite time, and constructs an improved zeroing neural network (ZNN) dynamic model. The stability and convergence of the model are then analyzed, showing that its solution converges in finite time. Finally, in simulation examples, compared with the traditional gradient neural network and zeroing neural network models, the proposed model achieves higher accuracy, shorter solution time, and faster convergence, outperforming both.
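The ZNN idea can be sketched on a time-varying linear system A(t)x(t) = b(t): define the error E(t) = A(t)x - b(t) and impose dE/dt = -gamma*Phi(E), which yields x' = A^{-1}(b' - A'x - gamma*Phi(E)). Phi below is the sign-bi-power activation the abstract mentions, Phi(e) = 0.5*(|e|^r + |e|^(1/r))*sign(e) with 0 < r < 1, which gives finite-time convergence. A(t) and b(t) are invented test data; this is a generic ZNN sketch, not the paper's exact model.

```python
import numpy as np

gamma, r, h = 10.0, 0.5, 1e-3

def A(t):  return np.array([[2.0 + np.sin(t), 0.0], [0.0, 3.0]])
def dA(t): return np.array([[np.cos(t), 0.0], [0.0, 0.0]])
def b(t):  return np.array([np.cos(t), 1.0])
def db(t): return np.array([-np.sin(t), 0.0])

def phi(e):
    # Sign-bi-power activation, applied elementwise.
    return 0.5 * (np.abs(e)**r + np.abs(e)**(1.0 / r)) * np.sign(e)

x, t = np.array([1.0, 1.0]), 0.0
for _ in range(5000):                      # Euler-integrate the ZNN to t = 5
    e = A(t) @ x - b(t)                    # error E(t) to be zeroed
    x_dot = np.linalg.solve(A(t), db(t) - dA(t) @ x - gamma * phi(e))
    x, t = x + h * x_dot, t + h

print(np.linalg.norm(A(t) @ x - b(t)) < 1e-3)   # True: error driven to ~0
```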

20.
In this paper, we present a general class of BAM neural networks with discontinuous neuron activations and impulses. By using the fixed point theorem in differential inclusion theory, we investigate the existence of a periodic solution for this neural network. By constructing a suitable Lyapunov function, we give a sufficient condition which ensures the uniqueness and global exponential stability of the periodic solution. The results of this paper show that Forti's conjecture is true for BAM neural networks with discontinuous neuron activations and impulses. Further, a numerical example is given to demonstrate the effectiveness of the results obtained in this paper.
