Similar Documents
20 similar documents found.
1.
A trust region algorithm for minimization of locally Lipschitzian functions (cited 7 times: 0 self-citations, 7 by others)
Qi, Liqun; Sun, Jie. Mathematical Programming, 1994, 66(1-3): 25-43.
The classical trust region algorithm for smooth nonlinear programs is extended to the nonsmooth case where the objective function is only locally Lipschitzian. At each iteration, an objective function that carries both first and second order information is minimized over a trust region. The term that carries the first order information is an iteration function that may not explicitly depend on subgradients or directional derivatives. We prove that the algorithm is globally convergent. This convergence result extends the result of Powell for minimization of smooth functions, the result of Yuan for minimization of composite convex functions, and the result of Dennis, Li and Tapia for minimization of regular functions. In addition, compared with the recent model of Pang, Han and Rangaraj for minimization of locally Lipschitzian functions using a line search, this algorithm has the same convergence property without assuming positive definiteness and uniform boundedness of the second order term. Applications of the algorithm to various nonsmooth optimization problems are discussed. (One author's work was supported in part by the Australian Research Council; the other's was carried out while visiting the Department of Applied Mathematics at the University of New South Wales.)
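To fix ideas on the trust-region mechanics the abstract describes (model minimization over a ball, a ratio test, and radius adjustment), here is a minimal smooth-case sketch; the Cauchy-point step and all names are illustrative assumptions and do not reproduce the paper's nonsmooth first-order iteration function or second-order term.

```python
import numpy as np

def trust_region_minimize(f, grad, x0, delta0=1.0, delta_max=10.0,
                          eta=0.1, tol=1e-8, max_iter=200):
    """Minimal smooth-case trust-region loop (Cauchy-point step only)."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        d = -(delta / gnorm) * g            # steepest descent to the boundary
        pred = delta * gnorm                # predicted decrease (linear model)
        ared = f(x) - f(x + d)              # actual decrease
        rho = ared / pred                   # agreement ratio
        if rho >= eta:                      # accept the trial step
            x = x + d
        # Standard radius update: expand on strong agreement, shrink on weak.
        if rho > 0.75:
            delta = min(2.0 * delta, delta_max)
        elif rho < 0.25:
            delta = 0.5 * delta
    return x

# Example: minimize f(x) = ||x||^2 from (3, -4).
x_star = trust_region_minimize(lambda x: x @ x, lambda x: 2.0 * x,
                               np.array([3.0, -4.0]))
```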

2.
We propose two restricted memory level bundle-like algorithms for minimizing a convex function over a convex set. If the memory is restricted to one linearization of the objective function, then both algorithms are variations of the projected subgradient method. The first algorithm, proposed in Hilbert space, is a conceptual one. It is shown to be strongly convergent to the solution that lies closest to the initial iterate. Furthermore, the entire sequence of iterates generated by the algorithm is contained in a ball with diameter equal to the distance between the initial point and the solution set. The second algorithm is an implementable version. It mimics the conceptual one as much as possible so as to retain its convergence properties. The implementable algorithm is validated by numerical results on several two-stage stochastic linear programs.
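Since the one-linearization case reduces to a projected subgradient method, a minimal sketch of that reduction may help; the `project` operator, the diminishing 1/k step size, and the box-constrained example are assumptions, not the paper's conceptual or implementable variants.

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, steps=500):
    """Projected subgradient iteration with diminishing steps, tracking
    the best point found (plain subgradient steps need not be monotone)."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(1, steps + 1):
        x = project(x - subgrad(x) / k)     # step, then project onto the set
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x

# Example: minimize |x1| + |x2| over the box [1, 2] x [-1, 1]; solution (1, 0).
project_box = lambda x: np.clip(x, [1.0, -1.0], [2.0, 1.0])
sol = projected_subgradient(lambda x: np.abs(x).sum(), np.sign,
                            project_box, np.array([2.0, 1.0]))
```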

3.
Robinson's quadratically convergent algorithm for general nonlinear programming problems is modified in such a way that, instead of exact derivatives of the objective function and the constraints, approximations computed from differences of function values can be used. This locally convergent algorithm is then combined with a penalty function method to provide a globally and quadratically convergent algorithm that does not require the calculation of derivatives.
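A minimal sketch of the derivative replacement the abstract describes: approximating a gradient by forward differences of function values. The step size `h` and the one-sided scheme are illustrative; the paper's actual difference scheme and step-size control are not reproduced.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient: one extra function value per coordinate."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h          # O(h) one-sided difference
    return g

# Example: gradient of f(x) = x1^2 + 3*x2 at (1, 2) is approximately (2, 3).
print(fd_gradient(lambda x: x[0] ** 2 + 3.0 * x[1], [1.0, 2.0]))
```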

4.
A class of modified Broyden algorithms with inexact line search (cited 4 times: 0 self-citations, 4 by others)
For the modified Broyden algorithms proposed in references (8) and (14), this paper discusses their convergence properties under inexact line search. It is proved that this class of algorithms is globally convergent when applied to objective functions whose gradients satisfy a Lipschitz condition, and that the algorithms are Q-superlinearly and quadratically convergent when the objective function is uniformly convex.
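For orientation, a generic Broyden-family Hessian update is sketched below; `phi = 0` gives BFGS and `phi = 1` gives DFP. This is the textbook family, not the specific modification of references (8) and (14).

```python
import numpy as np

def broyden_family_update(B, s, y, phi=0.0):
    """Broyden-family update of the Hessian approximation B, where
    s = x_new - x_old and y = grad_new - grad_old; phi = 0 gives BFGS,
    phi = 1 gives DFP."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    B_bfgs = B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
    v = np.sqrt(sBs) * (y / sy - Bs / sBs)  # Broyden-family correction vector
    return B_bfgs + phi * np.outer(v, v)
```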

5.
Convergence of the BFGS algorithm for nonconvex optimization problems (cited 1 time: 0 self-citations, 1 by others)
The BFGS algorithm is one of the most famous numerical algorithms in unconstrained optimization. Whether it is globally convergent for nonconvex functions is an open problem. This paper considers the BFGS algorithm under Wolfe line search with a nonconvex objective function and gives a sufficient condition for the algorithm to converge.
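A sketch of the weak Wolfe acceptance test underlying such line searches; the constants `c1` and `c2` are conventional choices, and the paper's sufficient condition for global convergence is a separate property of the iterates, not of this test.

```python
import numpy as np

def wolfe_conditions(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Weak Wolfe acceptance test: Armijo sufficient decrease plus a
    curvature condition on the new directional derivative."""
    g_d = grad(x) @ d                       # directional derivative at x
    armijo = f(x + alpha * d) <= f(x) + c1 * alpha * g_d
    curvature = grad(x + alpha * d) @ d >= c2 * g_d
    return armijo and curvature
```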

6.
We propose a new truncated Newton method for large scale unconstrained optimization, where a Conjugate Gradient (CG)-based technique is adopted to solve Newton's equation. In the current iteration, the Krylov method computes a pair of search directions: the first approximates the Newton step of the quadratic convex model, while the second is a suitable negative curvature direction. A test based on the quadratic model of the objective function is used to select the most promising of the two search directions. Both the latter selection rule and the CG stopping criterion for approximately solving Newton's equation rely strongly on conjugacy conditions. An appropriate linesearch technique is adopted for each search direction: a nonmonotone stabilization is used with the approximate Newton step, while an Armijo type linesearch is used for the negative curvature direction. The proposed algorithm is both globally and superlinearly convergent to stationary points satisfying second order necessary conditions. We report significant numerical experience to test our proposal.
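The following sketch shows a CG iteration for Newton's equation that exits when negative curvature is detected, returning both the approximate Newton step and the negative-curvature direction, in the spirit of the pair of directions described above; the actual method's conjugacy-based tests and stopping rules are not reproduced.

```python
import numpy as np

def cg_newton_direction(H, g, tol=1e-8):
    """CG iteration for Newton's equation H d = -g; exits early if negative
    curvature appears, returning the current step and the curvature direction."""
    d = np.zeros_like(g, dtype=float)
    r = -np.asarray(g, dtype=float)         # residual of H d = -g at d = 0
    p = r.copy()
    for _ in range(2 * g.size):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0.0:                     # negative (or zero) curvature found
            return d, p
        alpha = (r @ r) / curv
        d = d + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d, None                          # no negative curvature detected
```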

7.
Two generalized trajectory methods are combined to provide a novel and powerful numerical procedure for systematically finding multiple local extrema of a multivariable objective function. This procedure can form part of a strategy for global optimization in which the greatest local maximum and least local minimum in the interior of a specified region are compared to the largest and smallest values of the objective function on the boundary of the region. The first trajectory method, a homotopy scheme, provides a globally convergent algorithm to find a stationary point of the objective function. The second trajectory method, a relaxation scheme, starts at one stationary point and systematically connects other stationary points in the specified region by a network of trajectories. It is noted that both generalized trajectory methods actually solve the stationarity conditions, and so they can also be used to find multiple roots of a set of nonlinear equations.
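A toy version of the first trajectory idea, assuming a convex (Newton) homotopy for the stationarity system grad(x) = 0, tracked from a trivially solved start system to the target; the paper's globally convergent scheme and its relaxation-based network of trajectories are more elaborate.

```python
import numpy as np

def homotopy_stationary_point(grad, hess, x0, n_steps=50, newton_iters=5):
    """Track H(x, t) = t*grad(x) + (1 - t)*(x - x0) from t = 0 (solved by x0)
    to t = 1 (the stationarity system grad(x) = 0), correcting with Newton."""
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    eye = np.eye(x.size)
    for t in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
        for _ in range(newton_iters):       # corrector: Newton on H(., t) = 0
            residual = t * grad(x) + (1.0 - t) * (x - x0)
            jacobian = t * hess(x) + (1.0 - t) * eye
            x = x - np.linalg.solve(jacobian, residual)
    return x
```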

8.
Based on a continuously differentiable exact penalty function and a regularization technique for dealing with the inconsistency of subproblems in the SQP method, we present a new SQP algorithm for nonlinear constrained optimization problems. The proposed algorithm incorporates automatic adjustment rules for the choice of the parameters and makes use of an approximate directional derivative of the merit function to avoid the need to evaluate second order derivatives of the problem functions. Under mild assumptions the algorithm is proved to be globally convergent, and in particular the superlinear convergence rate is established without assuming that the strict complementarity condition holds at the solution. The reported numerical results show that the proposed algorithm is promising.
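One cheap way to realize an approximate directional derivative of a merit function is a forward difference along the search direction; this sketch assumes that simple scheme, which may differ from the paper's construction.

```python
def approx_directional_derivative(phi, x, d, h=1e-6):
    """Forward-difference estimate of the directional derivative of the merit
    function phi at x along d; no second derivatives are needed. x and d are
    numpy arrays; the scheme and step size are illustrative assumptions."""
    return (phi(x + h * d) - phi(x)) / h
```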

9.
In this paper, we present a predictor-corrector smoothing Newton method for solving nonlinear symmetric cone complementarity problems (SCCP) based on a symmetrically perturbed smoothing function. Under the mild assumption that the solution set of the problem concerned is nonempty, we show that the proposed algorithm is globally convergent and locally quadratically convergent. Moreover, the algorithm finds a maximally complementary solution to the SCCP. Numerical results for second order cone complementarity problems (SOCCP), a special case of SCCP, show that the proposed algorithm is effective.
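To make the smoothing idea concrete in the simplest (scalar NCP) setting, here is the Chen-Harker-Kanzow-Smale smoothing of min(a, b); the paper's symmetrically perturbed smoothing function lives in the symmetric-cone setting and is not reproduced here.

```python
import numpy as np

def chks_smoothing(a, b, mu):
    """Chen-Harker-Kanzow-Smale smoothing of min(a, b): smooth for mu > 0,
    and equal to min(a, b) at mu = 0, so the complementarity condition
    min(a, b) = 0 is recovered as mu -> 0."""
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2))
```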

10.
We analyze the convergence properties of Powell's UOBYQA method. A distinguishing feature of the method is its use of two trust region radii. We first study the convergence of the method when the objective function is quadratic. We then prove that it is globally convergent for general objective functions when the second trust region radius ρ converges to zero. This justifies the use of ρ as a stopping criterion. Finally, we show that a variant of this method is superlinearly convergent when the objective function is strictly convex at the solution.

11.
ON THE CONVERGENCE OF PARALLEL BFGS METHOD (cited 1 time: 0 self-citations, 1 by others)
Chen Zhong, Fei Pusheng (Department of Mathematics, Wuhan University, Wuhan 430072, China), Zhou Yuncai...

12.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possible large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
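A sketch of one common nonmonotone Armijo test (Grippo-Lampariello-Lucidi style), where sufficient decrease is measured against the maximum of recent function values; the paper studies two nonmonotone forms combined with self-scaling BFGS, which this generic test does not reproduce.

```python
def nonmonotone_armijo(f, x, d, g_d, f_history, c1=1e-4, beta=0.5, max_back=30):
    """Nonmonotone Armijo backtracking: sufficient decrease is tested against
    the maximum of recent function values (f_history) instead of f(x) alone.
    g_d is the directional derivative of f at x along d (negative for descent)."""
    f_ref = max(f_history)                  # reference value over recent iterates
    alpha = 1.0
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + c1 * alpha * g_d:
            return alpha                    # accepted step size
        alpha *= beta                       # backtrack
    return alpha
```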

13.
On the convergence property of the DFP algorithm (cited 2 times: 0 self-citations, 2 by others)
The DFP algorithm of unconstrained optimization possesses excellent convergence properties for convex functions. However, a convergence theory of the DFP algorithm without the convexity assumption has not yet been established. This paper gives the following result: if the objective function is suitably smooth and the DFP algorithm produces a convergent point sequence, then the limit point of the sequence is a critical point of the objective function. Some open questions are also mentioned. (Supported by the National Science Foundation of China.)
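For reference, the classical DFP update of the inverse-Hessian approximation, in standard textbook form:

```python
import numpy as np

def dfp_update(H, s, y):
    """Classical DFP update of the inverse-Hessian approximation H, with
    s = x_new - x_old and y = grad_new - grad_old."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
```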

14.
Li Qian; Sun Linyan; Bao Liang. 运筹与管理 (Operations Research and Management Science), 2009, 18(6): 117-125.
Based on the clonal selection theory and the mechanisms of the biological immune response, this paper proposes an immune-memory clonal algorithm for index investing and applies it to solving an optimization model for constructing index-tracking portfolios, with the aim of exploring optimized construction strategies for index investment. The paper first proposes a multi-objective model for constructing an index-tracking portfolio. It then designs the antigen, the antibody, the affinity function, the clonal selection operator, the immune memory operator, and a corresponding evolutionary algorithm suited to the index-tracking portfolio construction strategy. The algorithm effectively avoids the drawbacks of traditional genetic algorithms, such as poor solution diversity in the later stages of computation, premature convergence, and slow convergence. A heuristic algorithm for limiting the number of stocks in the portfolio is also proposed. Finally, the model and the algorithm are tested on historical data from six major world stock market indices, including the SSE 180 index, and their constituent stocks. The results show that the algorithm has good solving ability and convergence speed; the rationality and effectiveness of the model are also demonstrated. Both the model and the algorithm are of strong practical value.
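A toy sketch of one clonal-selection generation (selection, cloning, hypermutation, reselection), to fix the antigen/antibody/affinity vocabulary used above; every operator and parameter here is a generic illustration, not the paper's customized design.

```python
import numpy as np

def clonal_selection_step(pop, affinity, n_select=5, n_clones=4,
                          sigma=0.1, rng=None):
    """One toy generation: select the highest-affinity antibodies, clone and
    hypermutate them (lower-ranked parents mutate more), keep the best."""
    rng = rng or np.random.default_rng(0)
    scores = np.apply_along_axis(affinity, 1, pop)
    order = np.argsort(scores)[::-1]        # antibodies sorted best-first
    elite = pop[order[:n_select]]
    clones = np.repeat(elite, n_clones, axis=0)
    # Mutation strength grows with parent rank: the best parent mutates least.
    scale = sigma * np.repeat(np.arange(1, n_select + 1), n_clones)[:, None]
    clones = clones + scale * rng.standard_normal(clones.shape)
    candidates = np.vstack([pop, clones])
    scores = np.apply_along_axis(affinity, 1, candidates)
    return candidates[np.argsort(scores)[::-1][:len(pop)]]
```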

15.
In this paper, we give a smoothing approximation to the nondifferentiable exact penalty function for nonlinear constrained optimization problems. Error estimates are obtained among the optimal objective function values of the smoothed penalty problem, the nonsmooth penalty problem, and the original problem. An algorithm based on our smoothing function is given, which is shown to be globally convergent under some mild conditions.
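A sketch of the general idea, assuming the common smoothing 0.5*(g + sqrt(g^2 + 4*eps^2)) of max(0, g) inside an l1-type penalty; the paper's particular smoothing function and its error estimates are not reproduced.

```python
import numpy as np

def smoothed_penalty(f, constraints, x, rho, eps):
    """Smoothed l1-type penalty: each max(0, g_i(x)) is replaced by the
    smooth term 0.5*(g + sqrt(g^2 + 4*eps^2)), which tends to max(0, g)
    as eps -> 0."""
    pen = sum(0.5 * (g(x) + np.sqrt(g(x) ** 2 + 4.0 * eps ** 2))
              for g in constraints)
    return f(x) + rho * pen
```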

16.
In this paper, we present a new one-step smoothing Newton method for solving the second-order cone complementarity problem (SOCCP). Based on a new smoothing function, the SOCCP is approximated by a family of parameterized smooth equations. At each iteration, the proposed algorithm needs to solve only one system of linear equations and perform only one Armijo-type line search. The algorithm is proved to be globally and superlinearly convergent without requiring strict complementarity at the SOCCP solution. Moreover, the algorithm has locally quadratic convergence under mild conditions. Numerical experiments demonstrate the feasibility and efficiency of the new algorithm.

17.
18.
A modified integral level-set method for finding the constrained global minimum (cited 3 times: 0 self-citations, 3 by others)
For constrained global optimization problems, building on Chew and Zheng's Integral Global Optimization and on Wu Donghua et al.'s work on the convergence of an implementable algorithm for a modified integral level-set method for finding global minima, this paper gives a modified integral level-set method for finding the constrained global minimum. It shares the two features of the modified integral level-set method: 1) at each step a new function is constructed that has the same global minimum as the original objective function; 2) it avoids the difficulty in Zheng Quan's algorithm that, in general, the level set is hard to determine. An implementable version of the algorithm is given and its convergence is proved.
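A Monte Carlo sketch of the underlying integral level-set iteration for box-constrained global minimization, where the next level value is the mean of f over the current level set; the paper's constraint handling and modified construction are omitted, and all parameters are illustrative.

```python
import numpy as np

def integral_level_set(f, lo, hi, n_samples=20000, iters=50, rng=None):
    """Monte Carlo sketch of the integral level-set iteration on a box:
    c_{k+1} is the mean of f over the sampled level set {x : f(x) <= c_k};
    the sequence c_k decreases toward the (sampled) global minimum value."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    fx = np.apply_along_axis(f, 1, x)
    c = fx.mean()                           # initial level value
    for _ in range(iters):
        inside = fx <= c                    # sampled level set
        c_new = fx[inside].mean()           # mean value of f over the level set
        if abs(c - c_new) < 1e-10:
            break
        c = c_new
    return c

# Example: f(x) = (x1^2 - 1)^2 + x2^2 on [-2, 2]^2 has global minimum value 0.
print(integral_level_set(lambda x: (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2,
                         [-2.0, -2.0], [2.0, 2.0]))
```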

19.
In this paper, a sequential quadratically constrained quadratic programming method of feasible directions is proposed for optimization problems with nonlinear inequality constraints. At each iteration of the proposed algorithm, a feasible direction of descent is obtained by solving only one subproblem, which consists of a convex quadratic objective function and simple quadratic inequality constraints and requires no second derivatives of the problem functions; such a subproblem can be formulated as a second-order cone program, solvable by interior point methods. To overcome the Maratos effect, an efficient higher-order correction direction is obtained by a single explicit computation formula. The algorithm is proved to be globally convergent and superlinearly convergent under some mild conditions without strict complementarity. Finally, some preliminary numerical results are reported. (Project supported by the National Natural Science Foundation (No. 10261001), Guangxi Science Foundation (Nos. 0236001, 064001), and Guangxi University Key Program for Science and Technology Research (No. 2005ZD02) of China.)

20.
This paper is concerned with the solution of nonlinear least squares problems. A new secant method is suggested, based on an affine model of the objective function, which updates the first-order approximation at each step as the iterations proceed. We present an algorithm that combines the new secant method with the Gauss-Newton method for general nonlinear least squares problems. Furthermore, we prove that this algorithm is Q-superlinearly convergent for large residual problems under mild conditions.
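As context for the hybrid described above, a plain Gauss-Newton sketch for min ||r(x)||^2; the paper's new secant (affine-model) updating, which targets large-residual problems, is not shown.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=50, tol=1e-10):
    """Plain Gauss-Newton for min ||r(x)||^2: linearize r and solve the
    linear least squares subproblem J step = -r at each iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example: fit y = a * exp(b * t) through (0, 2) and (1, 2e); solution (2, 1).
t = np.array([0.0, 1.0])
y = np.array([2.0, 2.0 * np.e])
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(res, jac, np.array([1.0, 0.5])))
```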
