1.

A SPARSE SUBSPACE TRUNCATED NEWTON METHOD FOR LARGE-SCALE BOUND CONSTRAINED NONLINEAR OPTIMIZATION





倪勤, 《高等学校计算数学学报(英文版)》, 1997, No. 1


In this paper we report a sparse truncated Newton algorithm for large-scale nonlinear minimization problems with simple bound constraints. The truncated Newton method updates the variables whose indices lie outside the active set, while the projected gradient method updates the active variables. At each iteration the search direction consists of three parts: a subspace truncated Newton direction, a subspace gradient direction, and a modified gradient direction. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. Global convergence and a quadratic convergence rate of the algorithm are proved, and some numerical tests are reported.
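
The active/free split described above can be illustrated with a minimal sketch. This is not the paper's algorithm, just the standard rule for identifying bound-active variables: a variable is treated as active when it sits at a bound and the gradient pushes it further against that bound. The function name and tolerance are illustrative assumptions.

```python
import numpy as np

def split_active(x, g, lower, upper, tol=1e-8):
    """Return (active, free) index sets for bound-constrained problems.

    Active: variable at a bound with the gradient pointing out of the box.
    Free:   everything else (these would get the truncated Newton step).
    """
    at_lower = (x <= lower + tol) & (g > 0)   # pushed below the lower bound
    at_upper = (x >= upper - tol) & (g < 0)   # pushed above the upper bound
    active = at_lower | at_upper
    return np.where(active)[0], np.where(~active)[0]

x = np.array([0.0, 0.5, 1.0])
g = np.array([2.0, -1.0, -3.0])
act, free = split_active(x, g, np.zeros(3), np.ones(3))
```

Here variables 0 and 2 sit at their bounds with outward-pointing gradients, so only variable 1 would receive the subspace Newton update.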

2.

Monotone projected gradient methods for large-scale box-constrained quadratic programming





ZHOU Bin, GAO Li & DAI Yuhong (School of Mathematical Sciences and LMAM, Peking University, Beijing 100871, China; State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100080, China), 《中国科学A辑(英文版)》, 2006, Vol. 49, No. 5


Inspired by the success of the projected Barzilai-Borwein (PBB) method for large-scale box-constrained quadratic programming, we propose and analyze monotone projected gradient methods in this paper. We show by experiments and analysis that for the new methods it is generally a poor choice to compute step lengths from the negative gradients; thus our algorithms use certain continuous or discontinuous projected gradients instead. Numerical experiments on a wide variety of test problems indicate that the new methods usually outperform the PBB method.
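
For context, a minimal sketch of the baseline PBB idea the paper starts from: project a Barzilai-Borwein gradient step back onto the box. This is an illustrative implementation of the generic PBB scheme, not of the paper's monotone variants; the toy problem and function name are assumptions.

```python
import numpy as np

def projected_bb(A, b, lower, upper, x0, max_iter=200, tol=1e-8):
    """Projected Barzilai-Borwein method for
    min 0.5*x'Ax - b'x  subject to  lower <= x <= upper."""
    x = np.clip(x0, lower, upper)
    g = A @ x - b
    alpha = 1.0
    for _ in range(max_iter):
        x_new = np.clip(x - alpha * g, lower, upper)  # projected step
        s = x_new - x
        if np.linalg.norm(s) < tol:
            break
        g_new = A @ x_new - b
        y = g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1.0       # BB1 step length
        x, g = x_new, g_new
    return x

A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x = projected_bb(A, b, np.zeros(2), np.ones(2), np.array([0.5, 0.5]))
```

On this toy problem the constrained minimizer is (1, 0.1): the first coordinate is pinned at its upper bound while the second settles at its unconstrained optimum.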

3.

A REVISED CONJUGATE GRADIENT PROJECTION ALGORITHM FOR INEQUALITY CONSTRAINED OPTIMIZATIONS





Wei Wang, Liansheng Zhang, Yifan Xu, 《计算数学(英文版)》, 2005, Vol. 23, No. 2


A revised conjugate gradient projection method for nonlinear inequality constrained optimization problems is proposed in this paper, in which the search direction is a combination of the conjugate projected gradient and a quasi-Newton direction. The method has two merits. First, the amount of computation is lower because the gradient matrix needs to be computed only once per iteration. Second, under some mild assumptions the algorithm is globally convergent and locally superlinearly convergent without the strict complementarity condition. In addition, the search direction is explicit.

4.

Parallel algorithms for large-scale linearly constrained minimization problems





Congying Han, Fangying Zheng, Tiande Guo, Guoping He, 《应用数学学报(英文版)》, 2014, Vol. 30, No. 3


In this paper, two PVD-type algorithms are proposed for solving nonseparable linearly constrained optimization problems. Instead of computing the residual gradient function, the new algorithms use reduced gradients to construct the PVD directions in parallel, which greatly reduces the computational cost per iteration and is closer to practical applications for solving large-scale nonlinear programming. Moreover, based on an active set computed by coordinate rotation at each iteration, a feasible descent direction can easily be obtained by the extended reduced gradient method. This direction is then used as the PVD direction, and a new PVD algorithm is proposed for general linearly constrained optimization. Global convergence is also proved.

5.

Convergence of a New Restarting Conjugate Gradient Algorithm for Unconstrained Nonlinear Programming Problems





孙清滢, 《数学季刊》, 2003, Vol. 18, No. 2


Conjugate gradient optimization algorithms depend on the search directions, which vary with the choice of parameters in those directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method under two kinds of common line searches is proved. First, it is shown, using the reverse modulus of continuity function and a forcing function, that the new method for solving unconstrained optimization works for a continuously differentiable function with the Curry-Altman step-size rule and a bounded level set. Second, by a comparison technique, some general convergence properties of the new method with other step-size rules are established. Numerical experiments show that the new method is efficient in comparison with the FR conjugate gradient method.
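
A minimal sketch of the restarting idea common to this family of methods: use the Polak-Ribiere (PR) coefficient, but fall back to steepest descent when the coefficient turns negative or periodically every n steps. This is a generic illustration with a simple backtracking Armijo search, not the paper's Curry-Altman rule; the function name and safeguards are assumptions.

```python
import numpy as np

def pr_cg_restart(f, grad, x0, max_iter=500, tol=1e-6):
    """Polak-Ribiere conjugate gradient with restart safeguards."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    n = len(x)
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        gd = g @ d
        if gd >= 0:                          # not a descent direction: restart
            d = -g
            gd = g @ d
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * gd:   # Armijo backtracking
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)  # PR coefficient
        if beta < 0 or (k + 1) % n == 0:      # PR+ / periodic restart
            beta = 0.0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
x = pr_cg_restart(f, grad, np.array([0.0, 0.0]))
```

On this ill-conditioned quadratic the iterates converge to (1, -2); the periodic restart keeps the directions well behaved when curvature information goes stale.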

6.

Convergence of a New Conjugate Gradient Algorithm with a Modified Curry-Altman Step-size Search





CAO Lihua, SUN Qingying, 《数学季刊》, 2004, Vol. 19, No. 2


Conjugate gradient optimization algorithms depend on the search directions, which vary with the choice of the parameter in those directions. In this note, conditions are given on the parameter in the conjugate gradient directions to ensure the descent property of the search directions, and global convergence of such a class of methods is discussed. It is shown, using the reverse modulus of continuity function and a forcing function, that the new method for solving unconstrained optimization works for a continuously differentiable function with a modification of the Curry-Altman step-size rule and a bounded level set. Combining the PR method with our new method, the PR method is modified to have the global convergence property. Numerical experiments show that the new methods are efficient in comparison with the FR conjugate gradient method.

7.

An Interior-Point Projected Reduced Hessian Method with a Trust Region Strategy for Nonlinearly Constrained Optimization





朱德通, 《高校应用数学学报(英文版)》, 2004, Vol. 19, No. 3


An interior-point scaling projected reduced Hessian method, combining a nonmonotone backtracking technique with a trust region strategy, is proposed for nonlinear equality constrained optimization with nonnegativity constraints on the variables. In order to handle large problems, a pair of trust region subproblems in the horizontal and vertical subspaces replaces the general full trust region subproblem. The horizontal trust region subproblem is just a general trust region subproblem, while the vertical one is defined by a parameterized size of the vertical direction subject only to an ellipsoidal constraint. At each iteration the method switches between the trust region strategy and a line search technique to obtain a backtracking step generated by the two trust region subproblems. Adopting the l1 penalty function as the merit function, the global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions. A nonmonotone criterion and a second-order correction step are used to overcome the Maratos effect and to speed up convergence in some ill-conditioned cases.

8.

A new kernel function yielding the best known iteration bounds for primal-dual interior-point algorithms





Yan Qin Bai, Jin Li Guo, Cornelis Roos, 《数学学报(英文版)》, 2009, Vol. 25, No. 12


Kernel functions play an important role in defining new search directions for primal-dual interior-point algorithms for linear optimization. In this paper we present a new kernel function which yields an algorithm with the best known complexity bound for both large- and small-update methods.

9.

AN AFFINE SCALING INTERIOR ALGORITHM VIA CONJUGATE GRADIENT PATH FOR SOLVING BOUND-CONSTRAINED NONLINEAR SYSTEMS





Chunxia Jia, Detong Zhu, 《计算数学(英文版)》, 2008, No. 4


In this paper we propose an affine scaling interior algorithm via a conjugate gradient path for solving nonlinear equality systems subject to bounds on the variables. Employing the affine scaling conjugate gradient path search strategy, we obtain an iterative direction by solving the linearized model. Using a line search technique, we find an acceptable trial step length along this direction which is strictly feasible and makes the objective function nonmonotonically decreasing. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions. Furthermore, numerical results indicate that the proposed algorithm is effective.

10.

A SUPERLINEARLY CONVERGENT TRUST REGION ALGORITHM FOR LC^1 CONSTRAINED OPTIMIZATION PROBLEMS





欧宜贵, 侯定丕, 《数学物理学报(B辑英文版)》, 2005, Vol. 25, No. 1


In this paper, a new trust region algorithm for nonlinear equality constrained LC^1 optimization problems is given. It obtains a search direction at each iteration not by solving a quadratic programming subproblem with a trust region bound, but by solving a system of linear equations. Since the computational complexity of a QP problem is in general much larger than that of a system of linear equations, the method proposed in this paper may reduce the computational complexity and hence improve computational efficiency. Furthermore, it is proved under appropriate assumptions that the algorithm converges globally and superlinearly to a solution of the original problem. Some numerical examples are reported, showing that the proposed algorithm can be beneficial from a computational point of view.

11.

AN EFFECTIVE CONTINUOUS ALGORITHM FOR APPROXIMATE SOLUTIONS OF LARGE-SCALE MAX-CUT PROBLEMS





Chengxian Xu Xiaoliang Fengmin Xu, 《计算数学(英文版)》, 2006, Vol. 24, No. 6


An effective continuous algorithm is proposed to find approximate solutions of NP-hard max-cut problems. The algorithm relaxes the max-cut problem into a continuous nonlinear programming problem by replacing the n discrete constraints in the original problem with one single continuous constraint. A feasible direction method is designed to solve the resulting nonlinear programming problem. The method employs only gradient evaluations of the objective function; no matrix calculations and no line searches are required. This greatly reduces the computational cost of the method and makes it suitable for solving large max-cut problems. The convergence of the proposed method to KKT points of the nonlinear program is analyzed. If the solution obtained by the proposed method is a global solution of the nonlinear programming problem, it provides an upper bound on the max-cut value. An approximate solution to the max-cut problem is then generated from the solution of the nonlinear program and provides a lower bound on the max-cut value. Numerical experiments and comparisons on some max-cut test problems (small and large) show that the proposed algorithm obtains exact solutions for all small test problems and satisfactory solutions for most of the large test problems, at lower computational cost.
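
The lower-bounding step described above, rounding a continuous solution to a cut by taking signs, can be sketched as follows. The weight matrix and the relaxed solution vector below are assumed toy data, not from the paper's test set.

```python
import numpy as np

def cut_value(W, x):
    """Weight of the cut induced by the sign pattern of x:
    0.25 * sum_ij w_ij * (1 - s_i * s_j), where s = sign(x).
    Edges with endpoints on opposite sides contribute; others cancel."""
    s = np.where(x >= 0, 1.0, -1.0)
    return 0.25 * np.sum(W * (1.0 - np.outer(s, s)))

# weighted 4-cycle as a toy instance (symmetric weight matrix)
W = np.zeros((4, 4))
for i, j, w in [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]:
    W[i, j] = W[j, i] = w

x_relaxed = np.array([0.9, -0.8, 0.7, -0.6])   # e.g. output of the NLP solver
lower_bound = cut_value(W, x_relaxed)
```

For the 4-cycle the alternating sign pattern cuts all four unit-weight edges, so the rounded solution gives a lower bound of 4 on the max-cut value (here it is also optimal).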

12.

A RESTARTING DIRECTION FOR THE CONJUGATE GRADIENT METHOD





孙麟平, 《高等学校计算数学学报(英文版)》, 1995, No. 2


The main purpose of this paper is to provide a restarting direction for improving the standard conjugate gradient method. If drastically non-quadratic behaviour of the objective function is observed in the neighbourhood of x_k, then a restart should be done. The scaled symmetric rank-one update with Davidon's optimal criterion is applied to generate the restarting direction. It is proved that the conjugate gradient method with this strategy retains quadratic termination. Numerical experiments show that it is successful.
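
For reference, a minimal sketch of the (unscaled) symmetric rank-one update mentioned above, with the usual skip safeguard for small denominators. The scaling and Davidon's optimal criterion from the paper are not reproduced here; the function name and tolerance are assumptions.

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Symmetric rank-one update of a Hessian approximation B:
    B+ = B + (y - Bs)(y - Bs)' / ((y - Bs)'s),
    skipped when the denominator is too small (standard safeguard)."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B                      # skip the update
    return B + np.outer(v, v) / denom

# quadratic sanity check: for f with Hessian H we have y = H s,
# and the updated matrix satisfies the secant equation B1 s = y
H = np.array([[2.0, 0.0], [0.0, 6.0]])
B = np.eye(2)
s = np.array([1.0, 1.0])
y = H @ s
B1 = sr1_update(B, s, y)
```

The secant property B1 s = y is what lets a rank-one correction capture the curvature seen along the last step, which is the information a restart direction wants to preserve.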

13.

A SUCCESSIVE QUADRATIC PROGRAMMING ALGORITHM FOR SDP RELAXATION OF MAX-BISECTION





《高校应用数学学报(英文版)》, 2007, Vol. 22, No. 4


A successive quadratic programming algorithm for solving the SDP relaxation of max-bisection is provided and its convergence result is given. The step size in the algorithm is obtained by solving n easy quadratic equations, without using a line search technique. Numerical experiments show that this algorithm is considerably faster than the interior-point method.

14.

A NONMONOTONE ALGORITHM FOR MINIMIZING NONSMOOTH COMPOSITE FUNCTIONS





孙小玲, 张连生, 《高等学校计算数学学报(英文版)》, 1997, No. 2


In this paper, we present a nonmonotone algorithm for solving nonsmooth composite optimization problems, whose objective function is the composition of a nonsmooth convex function and a differentiable function. The method generates search directions by successively solving quadratic programs, and uses a nonmonotone line search instead of the usual Armijo-type line search. Global convergence is proved under standard assumptions. Numerical results are given.
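
The nonmonotone line search idea can be sketched in a few lines: the sufficient-decrease test compares against the maximum of the last M function values rather than the current one, so occasional increases are tolerated. This is the generic Grippo-Lampariello-Lucidi scheme, not the paper's exact rule; the function name and toy data are assumptions.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, history, M=5, c=1e-4, rho=0.5):
    """Nonmonotone Armijo search: accept step length a when
    f(x + a*d) <= max(last M values of f) + c * a * g'd."""
    f_ref = max(history[-M:])        # reference value over recent iterates
    gd = g @ d                       # directional derivative (d is descent)
    alpha = 1.0
    while f(x + alpha * d) > f_ref + c * alpha * gd:
        alpha *= rho                 # backtrack
    return alpha

f = lambda x: float(x[0] ** 2 + x[1] ** 2)
x = np.array([1.0, 1.0])
g = np.array([2.0, 2.0])
d = -g
alpha = nonmonotone_armijo(f, x, d, g, history=[f(x)])
```

With a one-element history the test reduces to the ordinary Armijo rule; the step alpha = 0.5 lands exactly at the minimizer of this toy quadratic.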

15.

On the use of simplex methods in constructing quadratic models





Qinghua ZHOU (1. College of Mathematics and Computer, Hebei University, Baoding 071002, China; 2. State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100080, China), 《中国科学A辑(英文版)》, 2007, Vol. 50, No. 7


In this paper, we investigate quadratic approximation methods. After studying the basic idea of simplex methods, we construct several new search directions by combining the local information progressively obtained during the iterations of the algorithm to form new subspaces, and the quadratic model is solved in these new subspaces. The motivation is to use the information disclosed by the former steps to construct more promising directions. For most tested problems, the number of function evaluations has been reduced noticeably by our algorithms.

16.

An RQP Method Based on the Augmented Lagrangian Function





王秀国, 薛毅, 《计算数学》, 2003, Vol. 25, No. 4


Recursive quadratic programming is a family of techniques developed by Bartholomew-Biggs and other authors for solving nonlinear programming problems. This paper describes a new method for constrained optimization which obtains its search directions from a quadratic programming subproblem based on the well-known augmented Lagrangian function. It avoids letting the penalty parameter tend to infinity. We employ Fletcher's exact penalty function as a merit function, and use an approximate directional derivative of that function to avoid evaluating the second-order derivatives of the problem functions. We prove that the algorithm possesses global and superlinear convergence properties. Numerical results are also reported.
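
The augmented Lagrangian function at the heart of the method has a simple closed form; a minimal sketch of its evaluation for equality constraints follows. The toy problem and multiplier/penalty values are assumptions for illustration only.

```python
import numpy as np

def augmented_lagrangian(f, c, x, lam, rho):
    """Augmented Lagrangian for  min f(x)  s.t.  c(x) = 0:
    L_A(x) = f(x) - lam'c(x) + (rho/2) * ||c(x)||^2.
    The quadratic penalty term lets rho stay finite."""
    cx = c(x)
    return f(x) - lam @ cx + 0.5 * rho * (cx @ cx)

# toy problem: min x0^2 + x1^2  subject to  x0 + x1 - 1 = 0
f = lambda x: float(x @ x)
c = lambda x: np.array([x[0] + x[1] - 1.0])
x = np.array([0.5, 0.5])                       # a feasible point
val = augmented_lagrangian(f, c, x, lam=np.array([1.0]), rho=10.0)
```

At a feasible point both the multiplier and penalty terms vanish, so L_A reduces to the objective value, here 0.5.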

17.

A New Superlinearly Convergent SQP Algorithm for Nonlinear Minimax Problems





Jinbao Jian, Ran Quan, Qingjie Hu, 《应用数学学报(英文版)》, 2007, Vol. 23, No. 3


In this paper, nonlinear minimax problems are discussed. By means of Sequential Quadratic Programming (SQP), a new descent algorithm for solving these problems is presented. At each iteration of the proposed algorithm, a main search direction is obtained by solving a Quadratic Programming (QP) problem which always has a solution. In order to avoid the Maratos effect, a correction direction is obtained by updating the main direction with a simple explicit formula. Under mild conditions without strict complementarity, the global and superlinear convergence of the algorithm is obtained. Finally, some numerical experiments are reported.

18.

A SUPERLINEARLY AND QUADRATICALLY CONVERGENT SQP-TYPE FEASIBLE METHOD FOR CONSTRAINED OPTIMIZATION





Jian Jinbao, Zhang Kecun, Xue Shengjia, 《高校应用数学学报(英文版)》, 2000, Vol. 15, No. 3


A new SQP-type feasible method for inequality constrained optimization is presented. It combines a master algorithm with an auxiliary algorithm that is invoked only in finitely many iterations. The directions of the master algorithm are generated by a single quadratic program, and its step size is always one; the directions of the auxiliary algorithm are new "second-order" feasible descent directions. Under suitable assumptions, the algorithm is proved to possess global and strong convergence, as well as superlinear and quadratic convergence.

19.

PRECONDITIONED SPECTRAL PROJECTED GRADIENT METHOD ON CONVEX SETS





Lenys Bello, Marcos Raydan, 《计算数学(英文版)》, 2005, Vol. 23, No. 3


The spectral gradient method has proved effective for solving large-scale unconstrained optimization problems. It has recently been extended and combined with the projected gradient method for solving optimization problems on convex sets. This combination includes the use of nonmonotone line search techniques to preserve fast local convergence. In this work we further extend the spectral choice of step length to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global properties. We discuss implementation details for solving large-scale problems.
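
The unpreconditioned method being extended here, spectral projected gradient on a convex set, can be sketched minimally: take a Barzilai-Borwein (spectral) step and project it back onto the set. For brevity this sketch omits the nonmonotone line search of the Birgin-Martinez-Raydan method; the projection, toy problem, and function name are assumptions.

```python
import numpy as np

def spg(grad, project, x0, max_iter=500, tol=1e-8):
    """Spectral projected gradient (monotone-safeguard-free sketch):
    x+ = P(x - alpha * grad(x)), with alpha the BB1 spectral step."""
    x = project(x0)
    g = grad(x)
    alpha = 1.0
    for _ in range(max_iter):
        x_new = project(x - alpha * g)      # projected spectral step
        s = x_new - x
        if np.linalg.norm(s) < tol:
            break
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1.0
        x, g = x_new, g_new
    return x

# convex set: the unit ball; objective: squared distance to (2, 2)
project = lambda x: x / max(1.0, np.linalg.norm(x))
grad = lambda x: np.array([2 * (x[0] - 2), 2 * (x[1] - 2)])
x = spg(grad, project, np.array([0.0, 0.0]))
```

The minimizer of the distance to (2, 2) over the unit ball is the boundary point (1/sqrt(2), 1/sqrt(2)), which the iteration reaches in a couple of projected steps.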

20.

An SQP algorithm with cautious updating criteria for nonlinear degenerate problems





Taowen Liu, Jinping Zeng, 《应用数学学报(英文版)》, 2009, Vol. 25, No. 1


An efficient SQP algorithm for solving nonlinear degenerate problems is proposed in this paper. At each iteration of the algorithm, a quadratic programming subproblem, which is always feasible thanks to the introduction of a slack variable, is solved to obtain a search direction. The step length along this direction is computed by employing the l∞ exact penalty function through an Armijo-type line search scheme. The algorithm is proved to be globally convergent under mild conditions.
