Similar Literature
Found 20 similar documents.
1.
To deal with equality constrained optimization problems (ECP), we introduce in this paper the "(ECP)-equation", a class of new systems of ordinary differential equations for (ECP). It contains a matrix parameter, called the (ECP)-direction matrix, which plays a central role, and a scalar parameter, called the (ECP)-rate factor. It is shown that, by following the trajectory of the equation, a stationary point, or hopefully a local solution, can be located under very mild conditions. As examples, several schemes of (ECP)-direction matrices and (ECP)-rate factors are given to construct concrete forms of the (ECP)-equation, including almost all existing projected-gradient-type versions as special cases. As will be shown in a subsequent paper, where implementation issues are considered in detail, applying one of these forms gives encouraging performance in experiments.
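The paper's (ECP)-equation is not specified in the abstract, but the simplest projected-gradient-type member of this ODE family can be sketched as the flow dx/dt = -P(x) grad f(x), where P projects onto the tangent space of the constraint. The problem, step size, and iteration count below are all illustrative choices, not taken from the paper:

```python
# Euler integration of the projected-gradient flow for
#   minimize f(x, y) = x^2 + 2*y^2   subject to  h(x, y) = x + y - 1 = 0,
# whose solution is (2/3, 1/3).  For grad h = (1, 1), the tangent-space
# projector is P = I - a a^T / (a^T a) with a = (1, 1).

def grad_f(x, y):
    return (2.0 * x, 4.0 * y)

def project_tangent(gx, gy):
    # Remove the component along a = (1, 1): subtract ((gx + gy) / 2) * a.
    m = (gx + gy) / 2.0
    return (gx - m, gy - m)

def flow(x, y, step=0.01, iters=5000):
    for _ in range(iters):
        px, py = project_tangent(*grad_f(x, y))
        x -= step * px
        y -= step * py
    return x, y

x, y = flow(1.0, 0.0)            # start at a feasible point
print(x, y)                      # approaches (2/3, 1/3) while staying feasible
```

Because the step is always tangent to the constraint, the trajectory stays on the feasible line x + y = 1 while descending f.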

2.
In an optimization problem with equality constraints, the optimal value function divides the state space into two parts. At a point where the objective function is less than the optimal value, a good iteration must increase the value of the objective function. Thus, a good iteration must balance increasing or decreasing the objective function against decreasing a constraint violation function. This implies that at a point where the constraint violation function is large, we should construct noninferior solutions relative to points in a local search region. By definition, an accessory function is a linear combination of the objective function and a constraint violation function. We show that a way to construct an acceptable iteration, at a point where the constraint violation function is large, is to minimize an accessory function. We develop a two-phase method. In Phase I, some constraints may not be approximately satisfied or the current point is not close to the solution; iterations are generated by minimizing an accessory function. Once all the constraints are approximately satisfied, the initial values of the Lagrange multipliers are defined. A test with a merit function is used to determine whether or not the current point and the Lagrange multipliers are both close to the optimal solution. If not, Phase I is continued; otherwise, Phase II is activated and the Newton method is used to compute the optimal solution, achieving fast convergence.
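A toy Phase-I step can be sketched by minimizing one particular accessory function, f(x) + c * h(x)^2, by gradient descent. The problem, the quadratic violation measure, and all constants here are illustrative assumptions, not the paper's actual construction; in the paper a merit-function test would decide when to hand over to the Newton-based Phase II:

```python
# Phase-I sketch on the toy problem
#   minimize f(x1, x2) = x1^2 + x2^2   subject to  h = x1 + x2 - 2 = 0,
# using the accessory function A(x) = f(x) + c * h(x)^2.

def accessory_grad(x1, x2, c):
    h = x1 + x2 - 2.0                      # constraint violation
    return (2.0 * x1 + 2.0 * c * h,        # d/dx1 [f + c*h^2]
            2.0 * x2 + 2.0 * c * h)

def phase_one(x1, x2, c=50.0, step=0.004, iters=4000):
    for _ in range(iters):                 # in the paper, a merit-function
        g1, g2 = accessory_grad(x1, x2, c) # test would trigger Phase II here
        x1 -= step * g1
        x2 -= step * g2
    return x1, x2

x1, x2 = phase_one(3.0, -1.0)
print(x1, x2)    # both near the solution (1, 1), biased slightly by the penalty
```

The limit point 100/101 instead of 1 shows why a Phase II is needed: a fixed-weight accessory function only satisfies the constraint approximately.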

3.
In this paper, we present two new Dai–Liao-type conjugate gradient methods for unconstrained optimization problems. Their convergence under the strong Wolfe line search conditions is analysed for uniformly convex objective functions and general objective functions, respectively. Numerical experiments show that our methods can outperform some existing Dai–Liao-type methods by using Dolan and Moré’s performance profile.
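The abstract does not give the authors' two new formulas, but the generic Dai–Liao update they build on is β_k = (g_{k+1}ᵀy_k − t·g_{k+1}ᵀs_k) / (d_kᵀy_k), with y_k = g_{k+1} − g_k and s_k = x_{k+1} − x_k. A minimal sketch on a 2-D quadratic, with an arbitrary t = 0.1 and the exact line search a quadratic admits (the paper uses strong Wolfe searches instead):

```python
# Dai-Liao CG on f(x, y) = x^2 + 10*y^2 (Hessian A = diag(2, 20)).

def grad(x, y):
    return (2.0 * x, 20.0 * y)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def dai_liao(x, y, t=0.1, iters=20):
    g = grad(x, y)
    d = (-g[0], -g[1])                      # first direction: steepest descent
    for _ in range(iters):
        if dot(g, g) < 1e-18:               # converged
            break
        Ad = (2.0 * d[0], 20.0 * d[1])
        alpha = -dot(g, d) / dot(d, Ad)     # exact step for the quadratic
        s = (alpha * d[0], alpha * d[1])    # s_k = x_{k+1} - x_k
        x, y = x + s[0], y + s[1]
        g_new = grad(x, y)
        yk = (g_new[0] - g[0], g_new[1] - g[1])
        beta = (dot(g_new, yk) - t * dot(g_new, s)) / dot(d, yk)  # Dai-Liao
        d = (-g_new[0] + beta * d[0], -g_new[1] + beta * d[1])
        g = g_new
    return x, y

x, y = dai_liao(5.0, 1.0)
print(x, y)   # reaches the minimizer (0, 0)
```

Note that with an exact line search g_{k+1}ᵀs_k = 0, so on this quadratic the t-term vanishes and the method reduces to Hestenes–Stiefel CG; the Dai–Liao modification only matters under inexact (e.g. Wolfe) line searches.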

4.
1. Introduction. Given a set R of m kernel items (we call them kernels), R = {r_i ≥ 0, i = 1, 2, ..., m}, and a set P of n ≤ 2m nonkernel items (we call them items), P = {p_j ≥ 0, j = 1, 2, ..., n}, we want to partition T = R ∪ P into m subsets such that r_j ∈ M_j and |M_j| ≤ 3. Denote C_j = Σ_{x ∈ M_j} x and call it the load of M_j. Then the min-max problem is to minimize the maximum load of these m subsets, and the max-min problem is to maximize the minimum load of these m subsets. These problems are NP-complete [1]; there, a heuristic algorithm KLPT is given for the min-max problem, with worst-case bound …
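The exact rules of KLPT are not spelled out in the abstract, but a plausible LPT-style greedy for the min-max problem can be sketched as follows: each of the m subsets starts with one kernel, the items are sorted in decreasing order, and each item goes to the least-loaded subset that still has room (|M_j| ≤ 3, i.e. at most two items besides the kernel). The instance below is a made-up example:

```python
def klpt_min_max(kernels, items):
    # n <= 2*m guarantees every item finds a subset with a free slot.
    m = len(kernels)
    loads = list(kernels)            # subset j starts with its kernel r_j
    counts = [1] * m                 # |M_j|, kernel included
    for p in sorted(items, reverse=True):   # LPT: largest items first
        j = min((j for j in range(m) if counts[j] < 3),
                key=lambda j: loads[j])     # least-loaded subset with room
        loads[j] += p
        counts[j] += 1
    return loads

loads = klpt_min_max(kernels=[8, 5], items=[6, 4, 3, 2])
print(loads, max(loads))   # [14, 14] 14 -- perfectly balanced here
```

On this instance the greedy happens to hit the optimum (total load 28 split as 14 + 14); in general an LPT-style heuristic only guarantees the worst-case bound the paper analyzes.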

5.
This paper proposes a new algorithm for solving constrained optimization problems: a projected-gradient-type center method. Under the assumptions of continuous differentiability and nondegeneracy, its global convergence is proved. The algorithm is computationally simple and flexible in form.

6.
We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set.
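The scheme described above can be sketched on a toy instance: three agents on a complete graph with equal mixing weights minimize Σ_i (x − a_i)² over X = [0, 1]. The targets, step size, and network are all illustrative assumptions (and the stochastic subgradient errors of the paper are omitted):

```python
def project(x):                     # projection onto X = [0, 1]
    return min(1.0, max(0.0, x))

def grad_i(x, a):                   # gradient of agent i's term (x - a_i)^2
    return 2.0 * (x - a)

def distributed_step(xs, targets, step):
    avg = sum(xs) / len(xs)         # equal-weight average of neighbors' iterates
    return [project(avg - step * grad_i(avg, a)) for a in targets]

targets = [0.2, 0.4, 1.8]           # minimizer of the sum over [0, 1] is x = 0.8
xs = [0.0, 0.5, 1.0]
for _ in range(2000):
    xs = distributed_step(xs, targets, step=0.01)
print(xs)                           # all agents hover near 0.8
```

With a constant step size the agents only reach an O(step)-neighborhood of consensus at the optimum; a diminishing step size, as analyzed in this line of work, is needed for exact agreement.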

7.
This paper aims at showing that the class of augmented Lagrangian functions, introduced by Rockafellar and Wets, can be derived, as a particular case, from a nonlinear separation scheme in the image space associated with the given problem; hence, it is part of a more general theory. By means of the image space analysis, local and global saddle-point conditions for the augmented Lagrangian function are investigated. It is shown that the existence of a saddle point is equivalent to a nonlinear separation of two suitable subsets of the image space. Under second-order sufficiency conditions in the image space, it is proved that the augmented Lagrangian admits a local saddle point. The existence of a global saddle point is then obtained under additional assumptions that do not require the compactness of the feasible set.
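For concreteness, the classical method of multipliers, a special case of the augmented Lagrangian framework discussed above, can be sketched on a toy equality-constrained problem (the problem, penalty r, and iteration counts are illustrative choices, not from the paper):

```python
# minimize f(x, y) = x^2 + y^2  subject to  h(x, y) = x + y - 2 = 0,
# with solution (1, 1) and multiplier lambda* = -2.  The inner minimizer of
# L_r = f + lam*h + (r/2)*h^2 is found here by plain gradient descent.

def alm(lam=0.0, r=10.0, outer=20, inner=500, step=0.02):
    x = y = 0.0
    for _ in range(outer):
        for _ in range(inner):                  # minimize L_r in (x, y)
            h = x + y - 2.0
            gx = 2.0 * x + lam + r * h
            gy = 2.0 * y + lam + r * h
            x -= step * gx
            y -= step * gy
        lam += r * (x + y - 2.0)                # multiplier update
    return x, y, lam

x, y, lam = alm()
print(x, y, lam)   # approaches (1, 1) and lambda = -2
```

The pair ((x*, y*), λ*) is exactly the saddle point whose existence the image-space analysis characterizes: L_r is minimized over (x, y) and maximized over λ.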

8.
In this paper we give a variant of the Topkis–Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz–John point of the problem. We introduce a Fritz–John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC), and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that, for any FJ point y ∈ N(z) \ {z}, f_0(y) ≠ f_0(z), where f_0 is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. The result (i) implies that the entire iteration point sequence generated by the method converges to an FJ point. We also show that if the parameters are chosen large enough, a unit step length can be accepted by the proposed algorithm. Accepted 21 September 1998

9.
We consider the problem of minimizing the weighted sum of a smooth function f and a convex function P of n real variables subject to m linear equality constraints. We propose a block-coordinate gradient descent method for solving this problem, with the coordinate block chosen by a Gauss-Southwell-q rule based on sufficient predicted descent. We establish global convergence to first-order stationarity for this method and, under a local error bound assumption, a linear rate of convergence. If f is convex with Lipschitz continuous gradient, then the method terminates in O(n^2/ε) iterations with an ε-optimal solution. If P is separable, then the Gauss-Southwell-q rule is implementable in O(n) operations when m = 1 and in O(n^2) operations when m > 1. In the special case of support vector machine training, for which f is convex quadratic, P is separable, and m = 1, this complexity bound is comparable to the best known bound for decomposition methods. If f is convex, then, by gradually reducing the weight on P to zero, the method can be adapted to solve the bilevel problem of minimizing P over the set of minima of f + δ_X, where X denotes the closure of the feasible set. This has application in the least 1-norm solution of maximum-likelihood estimation. This research was supported by the National Science Foundation, Grant No. DMS-0511283.
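A heavily simplified sketch of the coordinate-selection idea: a plain Gauss–Southwell rule updates, at each step, only the coordinate with the largest gradient magnitude. (The paper's Gauss-Southwell-q rule additionally accounts for the nonsmooth term P and the linear constraints; none of that is modeled here, and the separable quadratic below is a made-up example.)

```python
def gs_coordinate_descent(x, coeffs, iters=10):
    # f(x) = sum_i coeffs[i] * x[i]**2, so df/dx_i = 2 * coeffs[i] * x[i]
    for _ in range(iters):
        g = [2.0 * c * v for c, v in zip(coeffs, x)]
        i = max(range(len(x)), key=lambda j: abs(g[j]))  # Gauss-Southwell pick
        x[i] -= g[i] / (2.0 * coeffs[i])                 # exact 1-D minimizer
    return x

x = gs_coordinate_descent([3.0, -2.0, 1.0], coeffs=[1.0, 4.0, 9.0])
print(x)   # each coordinate is driven to 0
```

Because each one-dimensional subproblem is solved exactly here, the greedy rule zeroes one coordinate per step; with inexact coordinate steps the greedy choice is what the "sufficient predicted descent" criterion quantifies.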

10.
The recently proposed random cost method is applied to the topology optimization of trusses. Its performance is compared to previous genetic algorithm and evolution strategy simulations. Random cost turns out to be an optimization method with attractive features. In comparison to the genetic algorithm approach of Hajela, Lee and Lin, random cost turns out to be simpler and more efficient. Furthermore, it is found that, in contrast to evolution strategy, the random cost strategy's ability to find optima is independent of the initial structure. This characteristic is related to the important capacity of escaping from local optima.

11.
We deal with the primal–dual Newton method for linear optimization (LO). Nowadays, this method is the workhorse in all efficient interior point algorithms for LO, and its analysis is the basic element in all polynomiality proofs of such algorithms. At present there is still a gap between the practical behavior of the algorithms and the theoretical performance results, in favor of the practical behavior. This is especially true for so-called large-update methods. We present some new analysis tools, based on a proximity measure introduced by Jansen et al. in 1994, that may help to close this gap. This proximity measure has not been used in the analysis of large-update methods before. The new analysis does not improve the known complexity results but provides a unified way to analyze both large-update and small-update methods.

12.
We discuss recent positive experiences applying convex feasibility algorithms of Douglas–Rachford type to highly combinatorial and far from convex problems.
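A minimal Douglas–Rachford sketch for a two-set feasibility problem, here with two lines in the plane (a convex toy case, unlike the hard combinatorial instances the paper reports on):

```python
# A = {(x, y) : x + y = 3},  B = {(x, y) : y = 0};  A ∩ B = {(3, 0)}.

def proj_A(x, y):
    t = (x + y - 3.0) / 2.0
    return (x - t, y - t)

def proj_B(x, y):
    return (x, 0.0)

def reflect(proj, x, y):
    px, py = proj(x, y)
    return (2.0 * px - x, 2.0 * py - y)

def douglas_rachford(x, y, iters=100):
    for _ in range(iters):
        rx, ry = reflect(proj_A, x, y)            # R_A
        sx, sy = reflect(proj_B, rx, ry)          # R_B R_A
        x, y = (x + sx) / 2.0, (y + sy) / 2.0     # x <- ((I + R_B R_A)/2)(x)
    return proj_A(x, y)                           # solution read off as P_A(x)

print(douglas_rachford(10.0, -7.0))               # approaches (3.0, 0.0)
```

For convex sets this governing sequence is provably convergent; the surprise the paper discusses is how often the same iteration succeeds far outside the convex setting.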

13.
Recently, a class of logarithmic-quadratic proximal (LQP) methods was introduced by Auslender, Teboulle and Ben-Tiba. The inexact versions of these methods solve the subproblems in each iteration approximately. In this paper, we present a practical inexactness criterion for the inexact version of these methods.

14.
We propose a Gauss–Newton-type method for nonlinear constrained optimization using the exact penalty introduced recently by André and Silva for variational inequalities. We extend their penalty function to both equality and inequality constraints using a weak regularity assumption, and as a result, we obtain a continuously differentiable exact penalty function and a new reformulation of the KKT conditions as a system of equations. Such reformulation allows the use of a semismooth Newton method, so that local superlinear convergence rate can be proved under an assumption weaker than the usual strong second-order sufficient condition and without requiring strict complementarity. Besides, we note that the exact penalty function can be used to globalize the method. We conclude with some numerical experiments using the collection of test problems CUTE.

15.
We consider the minimization of a convex function on a bounded polyhedron (polytope) represented by linear equality constraints and non-negative variables. We define the Levenberg–Marquardt and central trajectories starting at the analytic center using the same parameter, and show that they satisfy a primal-dual relationship, being close to each other for large values of the parameter. Based on this, we develop an algorithm that starts computing primal-dual feasible points on the Levenberg–Marquardt trajectory and eventually moves to the central path. Our main theorem is particularly relevant in quadratic programming, where points on the primal-dual Levenberg–Marquardt trajectory can be calculated by means of a system of linear equations. We present some computational tests related to box constrained trust region subproblems.

16.
We have investigated variants of interval branch-and-bound algorithms for global optimization in which the bisection step is replaced by subdividing the current interval into many subintervals in a single iteration step. The convergence properties of multisplitting methods, an important class of multisection procedures, are investigated in detail. We also study theoretically the convergence improvements that multisection brings to algorithms involving accelerating tests (e.g. the monotonicity test). The results are published in two papers; the second one contains the numerical test results.
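A hedged one-dimensional sketch of interval branch-and-bound with multisection: instead of bisecting, each processed interval is split into k subintervals. The toy objective f(x) = (x − 1)², whose exact range on an interval is easy to bound, stands in for genuine interval arithmetic; k, the tolerance, and the domain are arbitrary choices:

```python
def lower_bound(a, b):               # inf of (x - 1)^2 over [a, b]
    if a <= 1.0 <= b:
        return 0.0
    return min((a - 1.0) ** 2, (b - 1.0) ** 2)

def upper_at_mid(a, b):              # f at the midpoint: a valid upper bound
    m = (a + b) / 2.0
    return (m - 1.0) ** 2

def multisection_bb(a, b, k=4, tol=1e-6):
    work = [(lower_bound(a, b), a, b)]
    best = upper_at_mid(a, b)
    while work:
        lb, a, b = min(work)         # most promising box first
        work.remove((lb, a, b))
        if lb > best or b - a < tol:
            continue                 # pruned, or already narrow enough
        w = (b - a) / k              # multisection: k children per box
        for i in range(k):
            lo, hi = a + i * w, a + (i + 1) * w
            best = min(best, upper_at_mid(lo, hi))
            child_lb = lower_bound(lo, hi)
            if child_lb <= best:     # keep only boxes that may improve
                work.append((child_lb, lo, hi))
    return best

print(multisection_bb(-10.0, 10.0))  # approaches the global minimum f(1) = 0
```

With k = 2 this reduces to ordinary bisection; the trade-off the paper studies is that larger k shrinks boxes faster per iteration at the cost of more bound evaluations, and interacts with accelerating tests such as the monotonicity test.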

17.
18.
19.
20.

Copyright©北京勤云科技发展有限公司  京ICP备09084417号