A total of 20 similar documents were retrieved (search time: 93 ms)
1.
A.R. Nazemi 《Journal of Computational and Applied Mathematics》2011,236(6):1282-1295
This paper presents a new neural network model for solving degenerate quadratic minimax (DQM) problems. On the basis of the saddle point theorem, optimization theory, convex analysis, Lyapunov stability theory and the LaSalle invariance principle, the equilibrium point of the proposed network is proved to be equivalent to the optimal solution of the DQM problem. It is also shown that the proposed network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Several illustrative examples demonstrate the feasibility and efficiency of the proposed method.
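The core idea behind such saddle-point networks can be sketched with simple gradient descent-ascent dynamics. The toy problem and all constants below are illustrative, not taken from the paper: minimize over x and maximize over y the function L(x, y) = 0.5 x^2 + xy - y^2 - y, whose saddle point is (1/3, -1/3).

```python
# Toy sketch (not the paper's model): gradient descent-ascent dynamics for a
# quadratic minimax problem  min_x max_y L(x, y) = 0.5*x**2 + x*y - y**2 - y.
# The saddle point solves dL/dx = x + y = 0 and dL/dy = x - 2*y - 1 = 0,
# i.e. (x*, y*) = (1/3, -1/3).

def saddle_flow(x0=0.0, y0=0.0, h=0.05, steps=5000):
    """Forward-Euler integration of dx/dt = -dL/dx, dy/dt = +dL/dy."""
    x, y = x0, y0
    for _ in range(steps):
        dx = -(x + y)           # descend in the minimizing variable
        dy = x - 2.0 * y - 1.0  # ascend in the maximizing variable
        x, y = x + h * dx, y + h * dy
    return x, y

x_star, y_star = saddle_flow()
```

Because the problem is strongly convex in x and strongly concave in y, the flow converges to the unique saddle point, which is the equilibrium of the dynamics.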
2.
Alireza Nazemi Elahe Sharifi 《Communications in Nonlinear Science & Numerical Simulation》2013,18(3):692-709
In this paper, a neural network model is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory and the LaSalle invariance principle to solve geometric programming (GP) problems. The main idea is to convert the GP problem into an equivalent convex optimization problem, for which a neural network model is then constructed. Using a Lyapunov function approach, it is shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Simulation results show that the proposed neural network is feasible and efficient.
3.
4.
《Communications in Nonlinear Science & Numerical Simulation》2014,19(4):789-798
In this paper, optimization techniques for solving pseudoconvex optimization problems are investigated. A simplified recurrent neural network is proposed for the optimization problem. We prove that every optimal solution of the optimization problem is an equilibrium point of the neural network and, conversely, that every equilibrium point satisfying the linear constraints is an optimal solution. The proposed neural network is proven to be globally stable in the sense of Lyapunov and convergent to an exact optimal solution of the optimization problem. A numerical simulation illustrates the global convergence of the neural network, and applications in business and chemistry demonstrate its effectiveness.
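A common template for such recurrent networks is the projection dynamics dx/dt = P_Omega(x - a grad f(x)) - x, whose equilibria are exactly the points satisfying the projection fixed-point condition. The sketch below uses a simple convex quadratic and a box constraint as a stand-in (the paper treats the more general pseudoconvex case); all data are made up for illustration.

```python
# Minimal sketch of a projection-type recurrent network
#   dx/dt = P_Omega(x - a*grad_f(x)) - x   on the box Omega = [0,1]^2,
# integrated by forward Euler.  Problem data are illustrative only.

def clip01(v):
    """Projection onto the box [0,1]^2."""
    return [min(1.0, max(0.0, c)) for c in v]

def grad_f(v):  # f(x) = (x1-2)^2 + (x2-3)^2, minimized outside the box
    return [2.0 * (v[0] - 2.0), 2.0 * (v[1] - 3.0)]

def projection_network(x0=(0.0, 0.0), a=0.5, h=0.1, steps=500):
    x = list(x0)
    for _ in range(steps):
        g = grad_f(x)
        p = clip01([x[i] - a * g[i] for i in range(2)])
        x = [x[i] + h * (p[i] - x[i]) for i in range(2)]
    return x

x_eq = projection_network()  # equilibrium: projection of (2,3) onto the box
```

Here the unconstrained minimizer (2, 3) lies outside the box, so the trajectory converges to the constrained optimum (1, 1), the equilibrium of the dynamics.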
5.
This paper presents an optimization technique for solving a maximum flow problem, which arises in a wide variety of applications. On the basis of the Karush–Kuhn–Tucker (KKT) optimality conditions, a neural network model is constructed. The equilibrium point of the proposed neural network is then proved to be equivalent to the optimal solution of the original problem. It is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the maximum flow problem. Several illustrative examples demonstrate the feasibility and efficiency of the proposed method.
6.
In this paper, optimization techniques for solving a class of non-differentiable optimization problems are investigated. The non-differentiable program is transformed into an equivalent or approximating differentiable program. Based on the Karush–Kuhn–Tucker optimality conditions and a projection method, a neural network model is constructed. The proposed neural network is proved to be globally stable in the sense of Lyapunov and yields an exact or approximate optimal solution of the original optimization problem. An example shows the effectiveness of the proposed optimization techniques.
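One standard way to obtain an approximating differentiable program is to smooth each kink; the classical example replaces |t| by sqrt(t^2 + eps^2). The sketch below (the specific objective, smoothing parameter, and step sizes are illustrative, not from the paper) minimizes f(x) = |x - 1| + 0.5 x^2, whose non-differentiable minimum is at x* = 1, via a plain Euler gradient flow on the smoothed surrogate.

```python
import math

# Smoothing sketch: replace the kink |x - 1| by sqrt((x-1)^2 + eps^2), then
# run forward-Euler gradient flow on the smooth surrogate.
# f(x) = |x - 1| + 0.5*x^2 has its minimum at x* = 1; the smoothed
# minimizer lies within O(eps^(2/3)) of it.

def smoothed_grad(x, eps=1e-3):
    return (x - 1.0) / math.sqrt((x - 1.0) ** 2 + eps ** 2) + x

def gradient_flow(x0=0.0, h=1e-3, steps=30000):
    x = x0
    for _ in range(steps):
        x -= h * smoothed_grad(x)
    return x

x_min = gradient_flow()  # approximate minimizer; error shrinks with eps
```

The step size must respect the smoothed gradient's Lipschitz constant, which grows like 1/eps near the kink; that is the usual price of this approximation.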
7.
Stability of a class of neural network models (cited 2 times: 1 self-citation, 1 by others)
This paper extends a neural network model for solving convex programming problems to general nonconvex nonlinear programming problems. Theoretical analysis shows that, under suitable conditions, the equilibrium points of the proposed neural network model for nonconvex nonlinear programming are asymptotically stable and correspond to local optimal solutions of the nonlinear programming problem.
8.
A.R. Nazemi 《Communications in Nonlinear Science & Numerical Simulation》2012,17(4):1696-1705
This paper proposes a feedback neural network model for solving convex nonlinear programming (CNLP) problems. Under the condition that the objective function is convex and all constraint functions are strictly convex, or that the objective function is strictly convex and the constraint functions are convex, the proposed neural network is proved to be stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. The validity and transient behavior of the neural network are demonstrated through examples.
9.
A novel neural network based on NCP function for solving constrained nonconvex optimization problems
This article presents a novel neural network (NN) based on an NCP function for solving nonconvex nonlinear optimization (NCNO) problems subject to nonlinear inequality constraints. We first apply a p-power convexification to the Lagrangian function of the NCNO problem. The proposed NN is a gradient model constructed from an NCP function and an unconstrained minimization problem. The main feature of this NN is that its equilibrium point coincides with the optimal solution of the original problem. Under a proper assumption and using a suitable Lyapunov function, it is shown that the proposed NN is Lyapunov stable and convergent to an exact optimal solution of the original problem. Finally, simulation results on two numerical examples and two practical examples show the effectiveness and applicability of the proposed NN. © 2015 Wiley Periodicals, Inc. Complexity 21: 130–141, 2016
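An NCP function reformulates a complementarity condition as a single equation. The abstract does not say which NCP function is used; a classical choice (assumed here for illustration) is the Fischer-Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b, which vanishes exactly when a >= 0, b >= 0 and ab = 0. The 1-D complementarity problem below and its bisection solver are a made-up example of how the reformulation is used.

```python
import math

# Fischer-Burmeister NCP function: phi(a,b) = 0  iff  a >= 0, b >= 0, a*b = 0.
# Example: solve the 1-D complementarity problem
#   x >= 0,  F(x) = x - 2 >= 0,  x*F(x) = 0   (solution x* = 2)
# by finding the root of g(x) = phi(x, F(x)), which is monotone decreasing
# on the bracket used below.

def phi_fb(a, b):
    return math.sqrt(a * a + b * b) - a - b

def solve_ncp(lo=0.5, hi=5.0, iters=60):
    g = lambda x: phi_fb(x, x - 2.0)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x_ncp = solve_ncp()
```

The gradient-model NN in the article instead drives an unconstrained merit function built from such a phi to zero; the root-finding view above is only the simplest way to see the equivalence.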
10.
B. S. Goh 《Journal of Optimization Theory and Applications》1997,92(3):581-604
Existing algorithms for solving unconstrained optimization problems are generally only optimal in the short term; algorithms that are optimal in the long term are desirable. To achieve this, the problem of computing the minimum point of an unconstrained function is formulated as a sequence of optimal control problems, from which some qualitative results are obtained. These qualitative results are then used to construct a theoretical iterative method and a new continuous-time method for computing the minimum point of a nonlinear unconstrained function. New iterative algorithms that approximate the theoretical iterative method and the proposed continuous-time method are then established. For the convergence analysis, it is useful to note that the numerical solution of an unconstrained optimization problem is precisely an inverse Lyapunov function problem. Convergence conditions for the proposed continuous-time method and iterative algorithms are established using the Lyapunov function theorem.
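The simplest continuous-time method of this kind is the gradient flow dx/dt = -grad f(x), for which f itself serves as a Lyapunov function, and whose forward-Euler discretization is ordinary gradient descent. The quadratic test function and constants below are illustrative, not taken from the paper.

```python
# Gradient flow sketch: dx/dt = -grad f(x) with f as Lyapunov function.
# Forward Euler with a step below 1/L (L = largest curvature, here 20)
# guarantees that f is non-increasing along the discrete trajectory.

def f(x, y):
    return (x - 1.0) ** 2 + 10.0 * (y + 2.0) ** 2

def grad_f(x, y):
    return 2.0 * (x - 1.0), 20.0 * (y + 2.0)

def euler_gradient_flow(x0=5.0, y0=5.0, h=0.02, steps=2000):
    x, y = x0, y0
    values = [f(x, y)]           # Lyapunov function along the trajectory
    for _ in range(steps):
        gx, gy = grad_f(x, y)
        x, y = x - h * gx, y - h * gy
        values.append(f(x, y))
    return (x, y), values

(x_end, y_end), f_vals = euler_gradient_flow()
monotone = all(f_vals[k + 1] <= f_vals[k] for k in range(len(f_vals) - 1))
```

The monotone decrease of f along the trajectory is exactly the Lyapunov-function argument the paper's convergence analysis builds on.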
11.
A neural network is proposed for solving a convex quadratic bilevel programming problem. Based on Lyapunov and LaSalle theories, we rigorously prove that, for an arbitrary initial point, the trajectory of the proposed network converges to the equilibrium, which corresponds to the optimal solution of the convex quadratic bilevel programming problem. Numerical simulation results show that the proposed neural network is feasible and efficient for convex quadratic bilevel programming problems.
12.
Based on necessary and sufficient conditions for a solution, a neural network model is proposed for a class of generalized variational inequality problems. By constructing a Lyapunov function, it is proved that, under suitable conditions, the new model is Lyapunov stable and both globally and exponentially convergent to a solution of the original problem. Numerical experiments show that the neural network model is effective and feasible.
13.
《Optimization》2012,61(9):1203-1226
This article presents a differential inclusion-based neural network for solving nonsmooth convex programming problems with inequality constraints. The proposed neural network, modelled as a differential inclusion, is a generalization of the steepest descent neural network. It is proved that the set of equilibrium points of the proposed differential inclusion is equal to the set of optimal solutions of the considered optimization problem. Moreover, it is shown that the solution trajectory converges to an element of the optimal solution set and that the convergence point is a globally asymptotically stable point of the proposed differential inclusion. After establishing the theoretical results, an algorithm is designed for solving such problems. Typical examples confirm the effectiveness of the theoretical results and the performance of the proposed neural network.
14.
The paper introduces a new approach to analyzing the stability of neural network models without using any Lyapunov function. With the new approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion covers both isolated equilibrium points and connected equilibrium sets, which may be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability for gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of a gradient-based neural network converges to an asymptotically stable equilibrium point of the network. For a general nonlinear objective function, we propose a refined gradient-based neural network whose trajectory from any initial point converges to an equilibrium point satisfying the second-order necessary optimality conditions. Promising simulation results of the refined gradient-based neural network on some problems are also reported.
15.
A new method, the differential quadrature method (DQM), is proposed for analyzing the stress singularity orders of bimaterial plane joint problems. First, the radial asymptotic expansion of the displacement field near the joint corner is substituted into the governing equations of plane elasticity, yielding an eigenvalue problem of ordinary differential equations (ODEs) for the stress singularity orders. Then, based on DQM theory, the ODE eigenvalue problem is transformed into a standard generalized algebraic eigenvalue problem, whose solution yields all stress singularity orders at the joint corner at once, together with the corresponding displacement and stress eigenfunctions. Numerical results confirm that the DQM computation of the stress singularity orders at plane joint corners is correct.
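The DQM's central ingredient is a weighting matrix that turns differentiation into a matrix-vector product, which is how the ODE eigenvalue problem becomes an algebraic one. A minimal sketch of the standard Lagrange-interpolation weights (the grid, grid size, and test function are illustrative, not from the paper):

```python
import numpy as np

# Differential quadrature: approximate u'(x_i) by sum_j A[i,j]*u(x_j) with
# Lagrange-interpolation weights
#   A[i,j] = P(x_i) / ((x_i - x_j) * P(x_j)),  P(x_i) = prod_{k!=i}(x_i - x_k)
#   A[i,i] = -sum_{j!=i} A[i,j]

def dq_weights(x):
    n = len(x)
    P = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                  for i in range(n)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = P[i] / ((x[i] - x[j]) * P[j])
        A[i, i] = -A[i].sum()
    return A

# Chebyshev-Gauss-Lobatto points on [0, 1] give well-conditioned weights.
n = 15
x = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))
A = dq_weights(x)
err = np.max(np.abs(A @ np.sin(x) - np.cos(x)))  # d/dx sin = cos
```

With a smooth test function the approximation is spectrally accurate, which is why a modest grid suffices for the eigenvalue computation in the abstract.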
16.
In this paper, we consider stochastic second-order cone complementarity problems (SSOCCP). We first formulate the SSOCCP, which involves an expectation, as an optimization problem using a so-called second-order cone complementarity function. We then apply the sample average approximation method and a smoothing technique to obtain approximation problems for solving this reformulation. In theory, we show that, under suitable conditions, any accumulation point of the global optimal solutions or stationary points of the approximation problems is a global optimal solution or stationary point of the original problem. Finally, numerical examples illustrate that the proposed methods are feasible.
17.
18.
Equilibrium problems play a central role in the study of complex and competitive systems, and many variational formulations of these problems have been presented over the years. Variational inequalities are thus very useful tools for the study of equilibrium solutions and their stability. More recently, a dynamical model of equilibrium problems based on projection operators, the globally projected dynamical system (GPDS), was proposed. The equilibrium points of this system are the solutions of the associated variational inequality (VI) problem. A popular approach to finding solutions of such VIs and studying their stability introduces so-called "gap functions", while the stability of an equilibrium point of a dynamical system can be analyzed by means of Lyapunov functions. In this paper we establish strict relationships between gap functions and Lyapunov functions.
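A GPDS and a gap-like merit function can be sketched together for a small affine VI. The data below are made up; for them the VI solution is x* = (1, 0). The projection residual used as a merit quantity here is only a stand-in for the gap functions the paper discusses: like them, it is nonnegative and vanishes exactly at equilibria.

```python
import numpy as np

# GPDS:  dx/dt = P_Omega(x - a*F(x)) - x  for the affine VI F(x) = M x + q
# on Omega = R^2_+ (nonnegative orthant).  Equilibria of the dynamics are
# exactly the VI solutions x = P_Omega(x - a*F(x)).

M = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([-2.0, 1.0])

def F(x):
    return M @ x + q

def proj(v):  # projection onto the nonnegative orthant
    return np.maximum(v, 0.0)

def gpds(x0=(0.0, 0.0), a=0.5, h=0.2, steps=500):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + h * (proj(x - a * F(x)) - x)
    return x

x_vi = gpds()
# Gap-like merit quantity: zero exactly at VI solutions.
residual = np.linalg.norm(x_vi - proj(x_vi - 0.5 * F(x_vi)))
```

Monitoring such a merit quantity along the trajectory is the numerical counterpart of the Lyapunov-function analysis the paper relates gap functions to.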
19.
20.
In this paper, we present a general class of BAM neural networks with discontinuous neuron activations and impulses. Using a fixed point theorem from differential inclusion theory, we investigate the existence of a periodic solution for this neural network. By constructing a suitable Lyapunov function, we give a sufficient condition ensuring the uniqueness and global exponential stability of the periodic solution. The results show that Forti's conjecture holds for BAM neural networks with discontinuous neuron activations and impulses. A numerical example demonstrates the effectiveness of the obtained results.
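The underlying BAM structure is two neuron layers coupled only through each other's activations. The sketch below is a strong simplification of the paper's setting: it uses a smooth tanh activation, no impulses, and made-up small weights, so the network is a contraction with a unique, globally exponentially stable equilibrium rather than the discontinuous impulsive system analyzed in the paper.

```python
import math

# Toy continuous-time BAM network (simplified; smooth activation, no impulses):
#   dx/dt = -x + A*s(y) + I,   dy/dt = -y + B*s(x) + J,   s = tanh.
# Small weights make the system a contraction, so the trajectory settles at
# a unique equilibrium where both derivatives vanish.

def bam(steps=2000, h=0.05):
    A = [[0.2, -0.1], [0.1, 0.2]]
    B = [[0.1, 0.2], [-0.2, 0.1]]
    I, J = [0.5, -0.3], [0.2, 0.4]
    x, y = [1.0, -1.0], [-1.0, 1.0]
    for _ in range(steps):
        sx = [math.tanh(v) for v in x]
        sy = [math.tanh(v) for v in y]
        dx = [-x[i] + sum(A[i][j] * sy[j] for j in range(2)) + I[i]
              for i in range(2)]
        dy = [-y[i] + sum(B[i][j] * sx[j] for j in range(2)) + J[i]
              for i in range(2)]
        x = [x[i] + h * dx[i] for i in range(2)]
        y = [y[i] + h * dy[i] for i in range(2)]
        speed = max(map(abs, dx + dy))  # -> 0 at the equilibrium
    return x, y, speed

x_eq, y_eq, speed = bam()
```

In the paper's discontinuous impulsive setting the analogous object is a periodic solution rather than a fixed equilibrium, and differential inclusion tools replace the classical ODE argument used here.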