20 similar documents found; items 1-20 are listed below.

1.  A SPARSE SUBSPACE TRUNCATED NEWTON METHOD FOR LARGE-SCALE BOUND CONSTRAINED NONLINEAR OPTIMIZATION  
   倪勤, 《高等学校计算数学学报(英文版)》, 1997, No. 1
   In this paper we report a sparse truncated Newton algorithm for large-scale nonlinear minimization problems with simple bound constraints. The truncated Newton method is used to update the variables whose indices lie outside the active set, while the projected gradient method is used to update the active variables. At each iteration, the search direction consists of three parts: a subspace truncated Newton direction, a subspace gradient direction, and a modified gradient direction. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. Global convergence and a quadratic convergence rate of the algorithm are proved, and some numerical tests are reported.
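   The active/free split that this abstract describes is easy to picture in code. Below is a minimal sketch (ours, not the authors'; all names are illustrative): variables pressed against a bound by the gradient take a projected gradient step, while the free variables receive a truncated Newton step computed by a few conjugate gradient iterations on the subspace system.

```python
import numpy as np

def active_set(x, g, lo, hi, tol=1e-10):
    """Variables held at a bound by the sign of the gradient."""
    return ((np.abs(x - lo) < tol) & (g > 0)) | ((np.abs(x - hi) < tol) & (g < 0))

def truncated_newton_step(H, g, free, cg_iters=20, tol=1e-8):
    """Approximately solve H[free,free] d = -g[free] by conjugate gradients
    (H assumed sparse and positive definite on the free subspace)."""
    Hf = H[np.ix_(free, free)]
    d = np.zeros(int(free.sum()))
    r = -g[free].copy()
    p = r.copy()
    for _ in range(cg_iters):
        if np.linalg.norm(r) < tol:
            break
        Hp = Hf @ p
        a = (r @ r) / (p @ Hp)
        d += a * p
        r_new = r - a * Hp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

def iterate_once(x, g, H, lo, hi):
    """One iteration: Newton step on free variables, gradient step on active ones."""
    act = active_set(x, g, lo, hi)
    free = ~act
    d = np.zeros_like(x)
    d[free] = truncated_newton_step(H, g, free)   # subspace truncated Newton
    d[act] = -g[act]                              # projected-gradient part
    return np.clip(x + d, lo, hi)                 # project back onto the box
```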

2.  Monotone projected gradient methods for large-scale box-constrained quadratic programming  
   ZHOU Bin, GAO Li & DAI Yuhong, School of Mathematical Sciences and LMAM, Peking University, Beijing 100871, China; State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100080, China. 《中国科学A辑(英文版)》, 2006, Vol. 49, No. 5
   Inspired by the success of the projected Barzilai-Borwein (PBB) method for large-scale box-constrained quadratic programming, we propose and analyze monotone projected gradient methods in this paper. We show by experiments and analysis that, for the new methods, computing steplengths from the negative gradients is generally a poor choice. Thus our algorithms instead use certain continuous or discontinuous projected gradients to compute the steplengths. Numerical experiments on a wide variety of test problems are presented, indicating that the new methods usually outperform the PBB method.
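   For context, here is a hedged sketch of the baseline projected Barzilai-Borwein (PBB) iteration for min ½xᵀAx − bᵀx over a box, which the paper's monotone variants start from; the function and variable names are our own.

```python
import numpy as np

def pbb(A, b, lo, hi, x0, max_iter=1000, tol=1e-8):
    """Projected Barzilai-Borwein for min 0.5 x'Ax - b'x, lo <= x <= hi."""
    proj = lambda z: np.clip(z, lo, hi)
    x = proj(x0)
    g = A @ x - b
    alpha = 1.0                                  # initial steplength
    for _ in range(max_iter):
        x_new = proj(x - alpha * g)
        s = x_new - x
        if np.linalg.norm(s) < tol:
            break
        g_new = A @ x_new - b
        y = g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1.0  # BB1 steplength
        x, g = x_new, g_new
    return x

# Example: a small strictly convex box-constrained QP.
A = np.array([[4.0, 1.0], [1.0, 3.0]]); b = np.array([1.0, 2.0])
x = pbb(A, b, lo=np.zeros(2), hi=np.ones(2), x0=np.full(2, 0.5))
```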

3.  A REVISED CONJUGATE GRADIENT PROJECTION ALGORITHM FOR INEQUALITY CONSTRAINED OPTIMIZATIONS  
   Wei Wang, Lian-sheng Zhang, Yi-fan Xu, 《计算数学(英文版)》, 2005, Vol. 23, No. 2
   A revised conjugate gradient projection method for nonlinear inequality constrained optimization problems is proposed in this paper, in which the search direction is a combination of the conjugate projection gradient and a quasi-Newton direction. It has two merits. First, the amount of computation is lower, because the gradient matrix needs to be computed only once per iteration. Second, the algorithm is globally convergent and locally superlinearly convergent without the strict complementarity condition, under some mild assumptions. In addition, the search direction is explicit.

4.  Parallel algorithms for large-scale linearly constrained minimization problem  
   Cong-ying Han, Fang-ying Zheng, Tian-de Guo, Guo-ping He, 《应用数学学报(英文版)》, 2014, Vol. 30, No. 3
   In this paper, two PVD-type (parallel variable distribution) algorithms are proposed for solving nonseparable linearly constrained optimization problems. Instead of computing the residual gradient function, the new algorithms use reduced gradients to construct the PVD directions in parallel, which greatly reduces the amount of computation per iteration and is closer to practical applications when solving large-scale nonlinear programming problems. Moreover, based on an active set computed by coordinate rotation at each iteration, a feasible descent direction can easily be obtained by the extended reduced gradient method. This direction is then used as the PVD direction, and a new PVD algorithm is proposed for general linearly constrained optimization. Global convergence is also proved.

5.  Convergence of a New Restarting Conjugate Gradient Algorithm for Solving Unconstrained Nonlinear Programming Problems
   孙清滢, 《数学季刊》, 2003, Vol. 18, No. 2
   Conjugate gradient optimization algorithms depend on their search directions, which vary with different choices of the parameters in the directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method with two kinds of common line searches is proved. Firstly, it is shown, using a reverse modulus of continuity function and a forcing function, that the new method for unconstrained optimization works for a continuously differentiable function with the Curry-Altman step-size rule and a bounded level set. Secondly, by a comparison technique, some general convergence properties of the new method with other kinds of step-size rules are established. Numerical experiments show that the new method is efficient in comparison with the FR conjugate gradient method.
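   A generic illustration of the restart idea (not the authors' exact parameter choice; a simple Armijo backtracking stands in for the Curry-Altman step-size rules): a Polak-Ribiere-type conjugate gradient method that falls back to the steepest descent direction whenever the current direction fails a descent test.

```python
import numpy as np

def restarting_pr_cg(f, grad, x, max_iter=500, tol=1e-6):
    """PR+ conjugate gradient with restart to -g when descent fails."""
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                # not a descent direction: restart
            d = -g
        t, fx, gd = 1.0, f(x), g @ d  # Armijo backtracking (stand-in for
        while t > 1e-12 and f(x + t * d) > fx + 1e-4 * t * gd:
            t *= 0.5                  # the Curry-Altman rules)
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ parameter
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
x_star = restarting_pr_cg(f, grad, np.array([-1.2, 1.0]))
```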

6.  Convergence of a New Conjugate Gradient Algorithm Combined with a Modified Curry-Altman Step-Size Search
   CAO Li-hua, SUN Qing-ying, 《数学季刊》, 2004, Vol. 19, No. 2
   Conjugate gradient optimization algorithms depend on their search directions, which vary with different choices of the parameter in the directions. In this note, conditions are given on the parameter in the conjugate gradient directions to ensure the descent property of the search directions. Global convergence of such a class of methods is discussed. It is shown, using a reverse modulus of continuity function and a forcing function, that the new method for unconstrained optimization works for a continuously differentiable function with a modification of the Curry-Altman step-size rule and a bounded level set. By combining the PR method with the new method, the PR method is modified to possess the global convergence property. Numerical experiments show that the new methods are efficient in comparison with the FR conjugate gradient method.

7.  An Interior Point Projected Reduced Hessian Method with a Trust Region Strategy for Nonlinearly Constrained Optimization
   朱德通, 《高校应用数学学报(英文版)》, 2004, Vol. 19, No. 3
   An interior point scaling projected reduced Hessian method, combining a nonmonotonic backtracking technique with a trust region strategy, is proposed for nonlinear equality constrained optimization with nonnegativity constraints on the variables. In order to deal with large problems, a pair of trust region subproblems in horizontal and vertical subspaces is used to replace the general full trust region subproblem. The horizontal trust region subproblem is a general trust region subproblem, while the vertical trust region subproblem is defined by a parameterized size of the vertical direction subject only to an ellipsoidal constraint. At each iteration, the trust region strategy and the line search technique switch to obtaining a backtracking step generated by the two trust region subproblems. By adopting the l1 penalty function as the merit function, the global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions. A nonmonotonic criterion and a second-order correction step are used to overcome the Maratos effect and to speed up convergence in some ill-conditioned cases.

8.  A new kernel function yielding the best known iteration bounds for primal-dual interior-point algorithms  
   Yan Qin Bai, Jin Li Guo and Cornelis Roos, 《数学学报(英文版)》, 2009, Vol. 25, No. 12
   Kernel functions play an important role in defining new search directions for primal-dual interior-point algorithms for solving linear optimization problems. In this paper we present a new kernel function which yields an algorithm with the best known complexity bound for both large- and small-update methods.
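   The abstract does not state the new kernel function itself; for orientation, the classical logarithmic kernel and the barrier it induces in this framework are

```latex
% Classical logarithmic kernel and the induced barrier/proximity measure;
% a new kernel function replaces \psi while keeping this framework.
\psi(t) = \frac{t^{2}-1}{2} - \ln t, \qquad
\Psi(v) = \sum_{i=1}^{n} \psi(v_i), \qquad
v_i = \sqrt{x_i s_i / \mu}.
```

   Replacing ψ with a different kernel changes the scaled Newton direction and, with it, the resulting iteration bound.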

9.  AN AFFINE SCALING INTERIOR ALGORITHM VIA CONJUGATE GRADIENT PATH FOR SOLVING BOUND-CONSTRAINED NONLINEAR SYSTEMS  
   Chunxia Jia, Detong Zhu, 《计算数学(英文版)》, 2008, No. 4
   In this paper we propose an affine scaling interior algorithm via a conjugate gradient path for solving nonlinear equality systems subject to bounds on the variables. By employing the affine scaling conjugate gradient path search strategy, we obtain an iterative direction by solving a linearized model. A line search technique then finds an acceptable trial step length along this direction that is strictly feasible and makes the objective function nonmonotonically decreasing. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions. Furthermore, numerical results indicate that the proposed algorithm is effective.

10.  A SUPERLINEARLY CONVERGENT TRUST REGION ALGORITHM FOR LC^1 CONSTRAINED OPTIMIZATION PROBLEMS  
   欧宜贵, 侯定丕, 《数学物理学报(B辑英文版)》, 2005, Vol. 25, No. 1
   In this paper, a new trust region algorithm for nonlinear equality constrained LC^1 optimization problems is given. It obtains a search direction at each iteration not by solving a quadratic programming subproblem with a trust region bound, but by solving a system of linear equations. Since the computational complexity of a QP problem is in general much higher than that of a system of linear equations, the method proposed in this paper may reduce the computational complexity and hence improve computational efficiency. Furthermore, it is proved under appropriate assumptions that the algorithm is globally and superlinearly convergent to a solution of the original problem. Some numerical examples are reported, showing that the proposed algorithm is beneficial from a computational point of view.
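   The trade-off the abstract describes can be made concrete: for equality constraints c(x) = 0 with Jacobian A and Lagrangian Hessian H, a Newton-type direction is available from one linear KKT system. The sketch below is a generic illustration under these assumptions, not the paper's exact system.

```python
import numpy as np

def kkt_direction(H, A, g, c):
    """Solve [H A'; A 0][d; lam] = [-g; -c] for the step d and multipliers."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    return sol[:n], sol[n:]
```

   One factorization of a single (n+m)-dimensional linear system is typically much cheaper than iterating over the inequality-constrained QP that a trust region subproblem would require.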

11.  AN EFFECTIVE CONTINUOUS ALGORITHM FOR APPROXIMATE SOLUTIONS OF LARGE SCALE MAX-CUT PROBLEMS  
   Cheng-xian Xu, Xiao-liang, Feng-min Xu, 《计算数学(英文版)》, 2006, Vol. 24, No. 6
   An effective continuous algorithm is proposed to find approximate solutions of NP-hard max-cut problems. The algorithm relaxes the max-cut problem into a continuous nonlinear programming problem by replacing the n discrete constraints in the original problem with one single continuous constraint. A feasible direction method is designed to solve the resulting nonlinear programming problem. The method employs only gradient evaluations of the objective function; no matrix calculations and no line searches are required. This greatly reduces the computational cost of the method and makes it suitable for solving large max-cut problems. The convergence properties of the proposed method to KKT points of the nonlinear program are analyzed. If the solution obtained by the proposed method is a global solution of the nonlinear programming problem, it provides an upper bound on the max-cut value. An approximate solution to the max-cut problem is then generated from the solution of the nonlinear program and provides a lower bound on the max-cut value. Numerical experiments and comparisons on some max-cut test problems (small and large) show that the proposed algorithm obtains exact solutions for all small test problems and well-satisfying solutions for most of the large test problems at lower computational cost.
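   The relaxation idea can be sketched generically (this is not the authors' feasible direction method; the single constraint ‖x‖² = n and the rounding step below are a common instance of the construction): minimize xᵀWx on the sphere by normalized gradient steps, then round by sign to obtain a feasible cut and hence a lower bound.

```python
import numpy as np

def maxcut_relaxation(W, x0, steps=500, lr=0.01):
    """Minimize x'Wx on the sphere ||x||^2 = n, then round to a cut."""
    n = W.shape[0]
    x = np.sqrt(n) * x0 / np.linalg.norm(x0)
    for _ in range(steps):
        g = 2.0 * W @ x                          # gradient of x'Wx
        x = x - lr * g
        x = np.sqrt(n) * x / np.linalg.norm(x)   # project back to the sphere
    return np.where(x >= 0, 1.0, -1.0)           # rounded cut: x_i in {-1, +1}

def cut_value(W, x):
    """Weight of the cut induced by a +/-1 vector (W symmetric, zero diagonal)."""
    return 0.25 * np.sum(W * (1.0 - np.outer(x, x)))
```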

12.  A RESTARTING DIRECTION FOR THE CONJUGATE GRADIENT METHOD  
   孙麟平, 《高等学校计算数学学报(英文版)》, 1995, No. 2
   The main purpose of this paper is to provide a restarting direction for improving the standard conjugate gradient method. If drastically non-quadratic behaviour of the objective function is observed in the neighbourhood of x_k, then a restart should be done. The scaling symmetric rank-one update with Davidon's optimal criterion is applied to generate the restarting direction. It is proved that the conjugate gradient method with this strategy retains quadratic termination. Numerical experiments show that it is successful.

13.  A SUCCESSIVE QUADRATIC PROGRAMMING ALGORITHM FOR SDP RELAXATION OF MAX-BISECTION  
   《高校应用数学学报(英文版)》, 2007, Vol. 22, No. 4
   A successive quadratic programming algorithm for solving the SDP relaxation of Max-Bisection is provided, and its convergence result is given. The step-size in the algorithm is obtained by solving n easy quadratic equations, without using a line search technique. Numerical experiments show that this algorithm is considerably faster than the interior-point method.

14.  A NONMONOTONE ALGORITHM FOR MINIMIZING NONSMOOTH COMPOSITE FUNCTIONS  
   孙小玲, 张连生, 《高等学校计算数学学报(英文版)》, 1997, No. 2
   In this paper, we present a nonmonotone algorithm for solving nonsmooth composite optimization problems, whose objective function is the composition of a nonsmooth convex function and a differentiable function. The method generates search directions by successively solving quadratic programs, and uses a nonmonotone line search instead of the usual Armijo-type line search. Global convergence is proved under standard assumptions. Numerical results are given.
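   Nonmonotone line searches of this kind are typically of the Grippo-Lampariello-Lucidi type: the Armijo test compares against the maximum of the last M function values rather than the current value. A minimal sketch (names and constants illustrative):

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, gamma=1e-4, tau=0.5, M=10):
    """Backtracking step with a nonmonotone (memory-M) Armijo test."""
    f_max = max(f_hist[-M:])          # reference: worst of the last M values
    t = 1.0
    while f(x + t * d) > f_max + gamma * t * (g @ d) and t > 1e-12:
        t *= tau
    return t
```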

15.  On the use of simplex methods in constructing quadratic models  
   Qing-hua ZHOU, College of Mathematics and Computer, Hebei University, Baoding 071002, China; State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100080, China. 《中国科学A辑(英文版)》, 2007, Vol. 50, No. 7
   In this paper, we investigate quadratic approximation methods. After studying the basic idea of simplex methods, we construct several new search directions by progressively combining the local information obtained during the iterations of the algorithm to form new subspaces, and the quadratic model is solved in the new subspaces. The motivation is to use the information disclosed by the former steps to construct more promising directions. For most tested problems, the number of function evaluations has been reduced noticeably by our algorithms.

16.  An RQP Method Based on the Augmented Lagrangian Function  (Cited by 3)
   王秀国, 薛毅, 《计算数学》, 2003, Vol. 25, No. 4
   Recursive quadratic programming is a family of techniques developed by Bartholomew-Biggs and other authors for solving nonlinear programming problems. This paper describes a new method for constrained optimization which obtains its search directions from a quadratic programming subproblem based on the well-known augmented Lagrangian function. It avoids letting the penalty parameter tend to infinity. We employ Fletcher's exact penalty function as a merit function, and the use of an approximate directional derivative of this function avoids the need to evaluate the second order derivatives of the problem functions. We prove that the algorithm possesses global and superlinear convergence properties. Numerical results are also reported.
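   For reference, the augmented Lagrangian function that such a QP subproblem is built around, together with the standard first-order multiplier update, looks as follows (a textbook sketch for equality constraints c(x) = 0, not the paper's full method):

```python
import numpy as np

def augmented_lagrangian(f, c, x, lam, rho):
    """L_A(x, lam; rho) = f(x) + lam'c(x) + (rho/2)||c(x)||^2."""
    cx = c(x)
    return f(x) + lam @ cx + 0.5 * rho * (cx @ cx)

def multiplier_update(c, x, lam, rho):
    """First-order (Hestenes-Powell) update: lam <- lam + rho * c(x)."""
    return lam + rho * c(x)
```

   Because the multiplier estimate improves between iterations, rho can stay bounded, which is the point the abstract makes about the penalty parameter not tending to infinity.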

17.  A New Superlinearly Convergent SQP Algorithm for Nonlinear Minimax Problems  (Cited by 1)
   Jin-bao Jian, Ran Quan, Qing-jie Hu, 《应用数学学报(英文版)》, 2007, Vol. 23, No. 3
   In this paper, nonlinear minimax problems are discussed. By means of Sequential Quadratic Programming (SQP), a new descent algorithm for solving these problems is presented. At each iteration of the proposed algorithm, a main search direction is obtained by solving a Quadratic Program (QP) which always has a solution. In order to avoid the Maratos effect, a correction direction is obtained by updating the main direction with a simple explicit formula. Under mild conditions, without strict complementarity, the global and superlinear convergence of the algorithm is obtained. Finally, some numerical experiments are reported.

18.  A SUPERLINEARLY AND QUADRATICALLY CONVERGENT SQP TYPE FEASIBLE METHOD FOR CONSTRAINED OPTIMIZATION  (Cited by 6)
   Jian Jinbao, Zhang Kecun, Xue Shengjia, 《高校应用数学学报(英文版)》, 2000, Vol. 15, No. 3
   A new SQP-type feasible method for inequality constrained optimization is presented. It is a combination of a master algorithm and an auxiliary algorithm, the latter invoked only in finitely many iterations. The directions of the master algorithm are generated by only one quadratic program, and its step-size is always one; the directions of the auxiliary algorithm are new "second-order" feasible descent directions. Under suitable assumptions, the algorithm is proved to possess global and strong convergence as well as superlinear and quadratic convergence.

19.  PRECONDITIONED SPECTRAL PROJECTED GRADIENT METHOD ON CONVEX SETS  
   Lenys Bello, Marcos Raydan, 《计算数学(英文版)》, 2005, Vol. 23, No. 3
   The spectral gradient method has proved to be effective for solving large-scale unconstrained optimization problems. It has been recently extended and combined with the projected gradient method for solving optimization problems on convex sets. This combination includes the use of nonmonotone line search techniques to preserve the fast local convergence. In this work we further extend the spectral choice of steplength to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global properties. We discuss implementation details for solving large-scale problems.
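   A sketch of the unpreconditioned spectral projected gradient step that this work extends (the nonmonotone line search and the preconditioner are omitted for brevity; names are illustrative):

```python
import numpy as np

def spg(grad, proj, x, max_iter=500, tol=1e-8):
    """Spectral projected gradient iteration on a convex set given by proj."""
    g = grad(x)
    alpha = 1.0
    for _ in range(max_iter):
        x_new = proj(x - alpha * g)               # projected spectral step
        s = x_new - x
        if np.linalg.norm(s) < tol:
            break
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1.0   # spectral (BB) steplength
        alpha = min(max(alpha, 1e-10), 1e10)      # safeguard the steplength
        x, g = x_new, g_new
    return x

# Example: minimize ||x - p||^2 over the box [0, 1]^3.
p = np.array([1.5, -0.3, 0.7])
x = spg(lambda z: 2 * (z - p), lambda z: np.clip(z, 0.0, 1.0), np.zeros(3))
```

   The preconditioned variant of the paper would replace the step x − αg by one shaped by a preconditioner, keeping the same projection and spectral steplength machinery.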

20.  An SQP algorithm with cautious updating criteria for nonlinear degenerate problems  
   Tao-wen Liu and Jin-ping Zeng, 《应用数学学报(英文版)》, 2009, Vol. 25, No. 1
   An efficient SQP algorithm for solving nonlinear degenerate problems is proposed in this paper. At each iteration of the algorithm, a quadratic programming subproblem, which is always feasible owing to the introduction of a slack variable, is solved to obtain a search direction. The steplength along this direction is computed by employing the l∞ exact penalty function through an Armijo-type line search scheme. The algorithm is proved to be globally convergent under mild conditions.
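   A hedged sketch of the merit-function step the abstract describes, assuming the penalty is the l∞ exact penalty φ(x) = f(x) + σ·max_i |c_i(x)|; the sufficient-decrease test below is a simple stand-in for the paper's precise Armijo-type rule.

```python
import numpy as np

def linf_penalty(f, c, x, sigma):
    """l-infinity exact penalty: f(x) + sigma * max_i |c_i(x)|."""
    return f(x) + sigma * np.max(np.abs(c(x)))

def armijo_on_penalty(f, c, x, d, sigma, delta=1e-4, tau=0.5):
    """Backtrack until the penalty decreases sufficiently along d."""
    phi0 = linf_penalty(f, c, x, sigma)
    t = 1.0
    # Sufficient decrease proportional to t * ||d||^2 (illustrative test).
    while (linf_penalty(f, c, x + t * d, sigma)
           > phi0 - delta * t * (d @ d)) and t > 1e-12:
        t *= tau
    return t
```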
