Similar Literature
Found 20 similar records (search time: 31 ms).
1.
We propose two linearly convergent descent methods for finding a minimizer of a convex quadratic spline and establish global error estimates for the iterates. One application of such descent methods is to solve convex quadratic programs, since these can be reformulated as unconstrained minimizations of convex quadratic splines. In particular, we derive several new linearly convergent algorithms for solving convex quadratic programs. These algorithms can be classified as row-action methods, matrix-splitting methods, and Newton-type methods.
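To make the reformulation concrete, here is a minimal sketch (not the paper's row-action, matrix-splitting, or Newton-type algorithms): penalizing the inequality constraints of a convex QP yields a convex quadratic spline, which plain gradient descent can minimize. The function name, the penalty weight `rho`, and the fixed 1/L stepsize are illustrative assumptions.

```python
import numpy as np

def qp_as_spline_descent(Q, c, A, b, rho=100.0, tol=1e-8, max_iter=50000):
    """Minimize the convex quadratic spline
        f(x) = 0.5 x'Qx + c'x + (rho/2) * ||max(Ax - b, 0)||^2,
    a penalty reformulation of the QP: min 0.5 x'Qx + c'x s.t. Ax <= b.
    Plain gradient descent with a fixed 1/L stepsize (illustrative only)."""
    x = np.zeros(Q.shape[0])
    # Lipschitz constant of the gradient: ||Q|| + rho * ||A||^2.
    L = np.linalg.norm(Q, 2) + rho * np.linalg.norm(A, 2) ** 2
    for _ in range(max_iter):
        grad = Q @ x + c + rho * A.T @ np.maximum(A @ x - b, 0.0)
        if np.linalg.norm(grad) < tol:
            break
        x -= grad / L
    return x

# Tiny example: min 0.5*||x||^2 - x1 - x2  s.t.  x1 + x2 <= 1.
Q = np.eye(2); c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
print(qp_as_spline_descent(Q, c, A, b))  # ~ (0.5025, 0.5025); -> (0.5, 0.5) as rho grows
```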

2.
The search direction in unconstrained minimization algorithms for large-scale problems is usually computed as an iterate of the (preconditioned) conjugate gradient method applied to the minimization of a local quadratic model. In line-search procedures this direction is required to satisfy an angle condition, which says that the angle between the negative gradient at the current point and the direction is bounded away from π/2. In this paper, it is shown that the angle between conjugate gradient iterates and the negative gradient strictly increases as the conjugate gradient algorithm proceeds. Therefore, interrupting the conjugate gradient sub-algorithm when the angle condition no longer holds is theoretically justified. Copyright © 2002 John Wiley & Sons, Ltd.
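A hedged sketch of the truncation rule this result justifies: run linear CG on the local quadratic model (assuming a positive definite Hessian approximation `B`) and stop the inner loop the first time the iterate violates the angle condition. The tolerance `eps_angle` and all names are illustrative.

```python
import numpy as np

def truncated_cg(B, g, eps_angle=1e-3, max_iter=None):
    """Linear CG applied to the local quadratic model
        q(d) = g'd + 0.5 d'B d   (i.e. solve B d = -g, B assumed SPD),
    interrupted when the iterate violates the angle condition
        cos(angle(d_k, -g)) >= eps_angle.
    By the cited result the cosine decreases monotonically, so the
    first violation is final and the last acceptable iterate is kept."""
    n = len(g)
    max_iter = max_iter or n
    d = np.zeros(n)
    r = -g.copy()            # residual of B d = -g at d = 0
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Bp = B @ p
        alpha = rs / (p @ Bp)
        d_next = d + alpha * p
        cos_theta = (-g @ d_next) / (np.linalg.norm(g) * np.linalg.norm(d_next))
        if cos_theta < eps_angle:
            return d         # angle condition violated: truncate here
        d = d_next
        r -= alpha * Bp
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-10:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d
```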

3.
A new algorithm, the dual active set algorithm, is presented for solving a minimization problem with equality constraints and bounds on the variables. The algorithm identifies the active bound constraints by maximizing an unconstrained dual function in a finite number of iterations. Convergence of the method is established, and it is applied to convex quadratic programming. In its implementable form, the algorithm is combined with the proximal point method. A computational study of large-scale quadratic network problems compares the algorithm to a coordinate ascent method and to conjugate gradient methods for the dual problem. This study shows that combining the new algorithm with the nonlinear conjugate gradient method is particularly effective on difficult network problems from the literature.

4.
An algorithm for unconstrained minimization is proposed which is invariant to the nonlinear scaling of a strictly convex quadratic function and which generates mutually conjugate directions for extended quadratic functions. It is derived for inexact line searches and is designed for general use. It compares favorably in numerical tests (eight test functions, dimensionality up to 1000) with the 1975 Dixon algorithm on which this new algorithm is based.

5.
It is shown that the dual of the problem of minimizing the 2-norm of the primal and dual optimal variables and slacks of a linear program can be transformed into an unconstrained minimization of a convex parameter-free globally differentiable piecewise quadratic function with a Lipschitz continuous gradient. If the slacks are not included in the norm minimization, one obtains a minimization problem with a convex parameter-free quadratic objective function subject to nonnegativity constraints only.

6.
By introducing quadratic penalty terms, a convex non-separable quadratic network program can be reduced to an unconstrained optimization problem whose objective function is a piecewise quadratic and continuously differentiable function. A conjugate gradient method is applied to the reduced problem and its convergence is proved. The computation exploits the special network data structures originated from the network simplex method. This algorithmic framework allows direct extension to multicommodity cost flows. Some preliminary computational results are presented.

7.
To address the limitations of the conjugate gradient method in constructing conjugate directions when it is used to solve unconstrained convex quadratic programs, we improve the conjugate gradient method. A new way of constructing conjugate directions is given and is proved correct by mathematical induction. The basic computational procedure for applying the improved conjugate gradient method is also given, and the convergence of the method is proved. A worked example shows that, for solving unconstrained convex quadratic programs, this method has certain advantages over the standard conjugate gradient method.
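For context, here is a sketch of the baseline (unimproved) conjugate gradient method for an unconstrained convex quadratic program, together with a numerical check that the generated directions are indeed Q-conjugate; the article's new direction construction is not reproduced here.

```python
import numpy as np

def cg_quadratic(Q, b, tol=1e-10):
    """Standard conjugate gradient for  min 0.5 x'Qx - b'x  (Q SPD).
    Returns the minimizer and the search directions so that
    Q-conjugacy (d_i' Q d_j = 0 for i != j) can be verified."""
    n = len(b)
    x = np.zeros(n)
    r = b - Q @ x
    p = r.copy()
    dirs = []
    for _ in range(n):
        if np.linalg.norm(r) < tol:
            break
        dirs.append(p.copy())
        Qp = Q @ p
        alpha = (r @ r) / (p @ Qp)
        x += alpha * p
        r_new = r - alpha * Qp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, dirs

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, dirs = cg_quadratic(Q, b)
print(x)                      # ~ [0.0909, 0.6364], the exact minimizer Q^{-1} b
print(dirs[0] @ Q @ dirs[1])  # ~ 0: the directions are Q-conjugate
```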

8.
By means of a conjugate gradient strategy, we propose a trust region method for unconstrained optimization problems. The search direction is an adequate combination of the conjugate gradient direction and the trust-region direction. The global convergence and the quadratic convergence of this method are established under suitable conditions. Numerical results show that the presented method is competitive with both the trust region method and the conjugate gradient method.

9.
We consider the augmented Lagrangian method (ALM) as a solver for the fused lasso signal approximator (FLSA) problem. The ALM is a dual method in which squares of the constraint functions are added as penalties to the Lagrangian. To apply this method to FLSA, two types of auxiliary variables are introduced to transform the original unconstrained minimization problem into a linearly constrained minimization problem. Each update in this iterative algorithm reduces to a simple one-dimensional convex programming problem, with a closed-form solution in many cases. While the existing literature has mostly focused on the quadratic loss function, our algorithm can be easily implemented for a general convex loss. We also provide some convergence analysis of the algorithm. Finally, the method is illustrated on some simulated datasets.
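The splitting idea can be sketched with an ADMM-style variant of the ALM using the same two kinds of auxiliary variables (w = x and z = Dx); each w- and z-update is a one-dimensional prox with a closed-form soft-thresholding solution. The updates below are a standard ADMM sketch for the quadratic loss, not the paper's exact ALM iterations.

```python
import numpy as np

def soft(v, t):
    """Closed-form one-dimensional update: soft-thresholding prox of t*|.|."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def flsa_admm(y, lam1, lam2, rho=1.0, n_iter=500):
    """ADMM/ALM-style sketch for the fused lasso signal approximator
        min_x 0.5||y - x||^2 + lam1*||x||_1 + lam2*sum|x_{i+1} - x_i|,
    split with two auxiliary variables w = x and z = Dx."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n difference matrix
    x = y.copy(); w = y.copy(); z = D @ y
    u = np.zeros(n); v = np.zeros(n - 1)  # scaled dual variables
    M_inv = np.linalg.inv((1.0 + rho) * np.eye(n) + rho * D.T @ D)
    for _ in range(n_iter):
        x = M_inv @ (y + rho * (w - u) + rho * D.T @ (z - v))
        w = soft(x + u, lam1 / rho)       # one-dimensional prox, closed form
        z = soft(D @ x + v, lam2 / rho)   # one-dimensional prox, closed form
        u += x - w
        v += D @ x - z
    return w                              # sparse, piecewise-constant estimate

np.random.seed(0)
y = np.concatenate([np.ones(20), 3.0 * np.ones(20)]) + 0.1 * np.random.randn(40)
print(flsa_admm(y, lam1=0.1, lam2=1.0)[:5])
```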

10.
Optimization of a special class of geometric programs is achieved by a decomposition technique. Optimal vectors to certain subproblems are obtained by solving a single unconstrained minimization problem involving a strictly convex function. These optimal vectors readily yield an optimal vector to the original program. This research was supported in part by funds from NSF Grant No. GP-15031.

11.
Yanyun Ding & Jianwei Li, Optimization. 2017;66(12):2309–2328
The recently designed nonlinear conjugate gradient method of Dai and Kou [SIAM J Optim. 2013;23:296–320] is currently very efficient for solving large-scale unconstrained minimization problems, owing to its simple iterative form, low storage requirement, and its closeness to the scaled memoryless BFGS method. Because of these attractive properties, the method has recently been extended successfully to higher-dimensional symmetric nonlinear equations. Nevertheless, its numerical performance on convex constrained monotone equations has never been explored. In this paper, combining it with the projection method of Solodov and Svaiter, we develop a family of nonlinear conjugate gradient methods for convex constrained monotone equations. The proposed methods do not require the Jacobian of the equations, and they store no matrix at any iteration, so they have the potential to solve nonsmooth problems of high dimension. We prove the global convergence of the proposed class of methods and establish its R-linear convergence rate under some reasonable conditions. Finally, we report numerical experiments showing that the proposed methods are efficient and promising.
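A minimal sketch of the Solodov–Svaiter projection framework on which the proposed family is built, using the simplest direction d_k = -F(x_k) (i.e., the CG parameter set to zero); it is Jacobian-free and stores no matrix. All parameter values are illustrative.

```python
import numpy as np

def projection_method(F, x0, proj, sigma=1e-4, rho=0.5, tol=1e-8, max_iter=1000):
    """Solodov-Svaiter hyperplane-projection framework for monotone
    equations F(x) = 0, x in C, with direction d_k = -F(x_k)."""
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx
        alpha = 1.0
        # Derivative-free line search: -F(x + a d)'d >= sigma * a * ||d||^2.
        while -F(x + alpha * d) @ d < sigma * alpha * (d @ d):
            alpha *= rho
        z = x + alpha * d
        Fz = F(z)
        # Project x onto the separating hyperplane, then onto C.
        xi = (Fz @ (x - z)) / (Fz @ Fz)
        x = proj(x - xi * Fz)
    return x

# Example: F(x) = x + sin(x) is monotone; C is the nonnegative orthant.
F = lambda x: x + np.sin(x)
print(projection_method(F, np.ones(5), proj=lambda v: np.maximum(v, 0.0)))  # -> 0
```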

12.
In this paper, we introduce a class of nonmonotone conjugate gradient methods, which include the well-known Polak–Ribière method and Hestenes–Stiefel method as special cases. This class of nonmonotone conjugate gradient methods is proved to be globally convergent when it is applied to solve unconstrained optimization problems with convex objective functions. Numerical experiments show that the nonmonotone Polak–Ribière method and Hestenes–Stiefel method in this nonmonotone conjugate gradient class are competitive vis-à-vis their monotone counterparts.
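As an illustration of the nonmonotone idea, the sketch below grafts a Grippo–Lampariello–Lucidi-style nonmonotone Armijo test (comparison against the maximum of the last M function values) onto a PRP-type conjugate gradient iteration; the paper's exact conditions and safeguards may differ.

```python
import numpy as np

def nonmonotone_prp(f, grad, x0, M=10, c=1e-4, rho=0.5, tol=1e-6, max_iter=2000):
    """PRP-type conjugate gradient with a nonmonotone Armijo test:
    sufficient decrease is measured against the maximum of the last
    M function values rather than f(x_k) alone (GLL-style sketch)."""
    x = x0.copy()
    g = grad(x)
    d = -g
    history = [f(x)]
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        f_ref = max(history[-M:])            # nonmonotone reference value
        alpha = 1.0
        while f(x + alpha * d) > f_ref + c * alpha * (g @ d) and alpha > 1e-12:
            alpha *= rho
        x = x + alpha * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PRP+ parameter
        d = -g_new + beta * d
        if g_new @ d >= 0:                   # safeguard: restart with -g
            d = -g_new
        g = g_new
        history.append(f(x))
    return x

f = lambda x: (x[0] - 1.0) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2  # Rosenbrock
grad = lambda x: np.array([2.0 * (x[0] - 1.0) - 400.0 * x[0] * (x[1] - x[0] ** 2),
                           200.0 * (x[1] - x[0] ** 2)])
print(nonmonotone_prp(f, grad, np.array([-1.2, 1.0])))  # ~ [1, 1]
```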

13.
Optimization. 2012;61(3):359–369
In this article, we present an algorithm to compute the minimum norm solution of the positive semidefinite linear complementarity problem. We show that this solution can be obtained using theorems of the alternative and a convenient characterization of the solution set of a convex quadratic programming problem. The problem reduces to an unconstrained minimization problem with a once-differentiable convex objective function. We propose an extension of Newton's method for solving this unconstrained optimization problem. Computational results show that convergence to high accuracy often occurs in just a few iterations.

14.
In this paper, we give an algorithm to compute the minimum norm solution to the absolute value equation (AVE) in a special case. We show that this solution can be obtained from theorems of the alternative and a useful characterization of the solution sets of convex quadratic programs. By using an exterior penalty method, the problem can be reduced to an unconstrained minimization problem with a once-differentiable convex objective function. We also propose a quasi-Newton method for solving this unconstrained optimization problem. Computational results show that convergence to high accuracy often occurs in just a few iterations.

15.
Optimization. 2012;61(3):235–243
In this paper, we derive an unconstrained convex programming approach to solving convex quadratic programming problems in standard form. The related duality theory is established using two simple inequalities. An ε-optimal solution is obtained by solving an unconstrained dual convex program. A dual-to-primal conversion formula is also provided. Some preliminary computational results using a curved search method are included.

16.
In this paper, we consider a general class of nonlinear mixed discrete programming problems. By introducing continuous variables to replace the discrete variables, the problem is first transformed into an equivalent nonlinear continuous optimization problem subject to original constraints and additional linear and quadratic constraints. Then, an exact penalty function is employed to construct a sequence of unconstrained optimization problems, each of which can be solved effectively by unconstrained optimization techniques, such as conjugate gradient or quasi-Newton methods. It is shown that any local optimal solution of the unconstrained optimization problem is a local optimal solution of the transformed nonlinear constrained continuous optimization problem when the penalty parameter is sufficiently large. Numerical experiments are carried out to test the efficiency of the proposed method.

17.
Optimization. 2012;61(7):929–941
To take advantage of the attractive features of the Hestenes–Stiefel and Dai–Yuan conjugate gradient (CG) methods, we suggest a hybridization of these methods using a quadratic relaxation of a hybrid CG parameter proposed by Dai and Yuan. In the proposed method, the hybridization parameter is computed based on a conjugacy condition. Under proper conditions, we show that our method is globally convergent for uniformly convex functions. We give a numerical comparison of the implementations of our method and two efficient hybrid CG methods proposed by Dai and Yuan using a set of unconstrained optimization test problems from the CUTEr collection. Numerical results show the efficiency of the proposed hybrid CG method in the sense of the performance profile introduced by Dolan and Moré.
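For reference, the unrelaxed Dai–Yuan hybrid parameter that the proposed method starts from can be sketched as below; the paper's quadratic relaxation and its conjugacy-based hybridization parameter are not reproduced.

```python
import numpy as np

def dai_yuan_hybrid_beta(g_new, g_old, d_old):
    """Unrelaxed Dai-Yuan hybrid CG parameter
        beta = max(0, min(beta_HS, beta_DY)).
    The article replaces the hard min with a quadratic relaxation whose
    hybridization weight comes from a conjugacy condition (not shown)."""
    y = g_new - g_old
    denom = d_old @ y                  # shared denominator of HS and DY
    if abs(denom) < 1e-16:
        return 0.0
    beta_hs = (g_new @ y) / denom      # Hestenes-Stiefel
    beta_dy = (g_new @ g_new) / denom  # Dai-Yuan
    return max(0.0, min(beta_hs, beta_dy))
```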

18.
The linear conjugate gradient method is an optimal method for convex quadratic minimization due to the Krylov subspace minimization property. The advent of the limited-memory BFGS method and the Barzilai–Borwein gradient method, however, heavily restricted the use of the conjugate gradient method for large-scale nonlinear optimization. This is, to a great extent, due to the requirement of a relatively exact line search at each iteration and the loss of conjugacy of the search directions in various situations. By contrast, the limited-memory BFGS method and the Barzilai–Borwein gradient method share the so-called asymptotic one-stepsize-per-line-search property: the trial stepsize is asymptotically accepted by the line search when the iterates are close to the solution. This paper focuses on the analysis of the subspace minimization conjugate gradient method of Yuan and Stoer (1995). Specifically, by choosing the parameter in that method according to the Barzilai–Borwein idea, we obtain several efficient Barzilai–Borwein conjugate gradient (BBCG) methods. Initial numerical experiments show that one of the variants, BBCG3, is especially efficient among many others that use no line search. This variant of the BBCG method may enjoy the asymptotic one-stepsize-per-line-search property and become a strong candidate for large-scale nonlinear optimization.
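A hedged sketch of the Barzilai–Borwein ingredient: the BB1 stepsize s'_k s_k / s'_k y_k used without any line search. The Yuan–Stoer subspace-minimization CG machinery onto which the BBCG methods graft this stepsize is not reproduced here.

```python
import numpy as np

def bb_gradient(grad, x0, alpha0=1e-3, tol=1e-8, max_iter=5000):
    """Barzilai-Borwein gradient method without any line search:
        alpha_k = s'_k s_k / s'_k y_k,  s_k = x_k - x_{k-1},
                                        y_k = g_k - g_{k-1}  (BB1)."""
    x = x0.copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-16 else alpha0  # BB1 stepsize
        x, g = x_new, g_new
    return x

Q = np.diag([1.0, 10.0, 100.0])                  # f(x) = 0.5 x'Qx
print(bb_gradient(lambda x: Q @ x, np.ones(3)))  # -> [0, 0, 0]
```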

19.
In various penalty/smoothing approaches to solving a linear program, one regularizes the problem by adding to the linear cost function a separable nonlinear function multiplied by a small positive parameter. Popular choices of this nonlinear function include the quadratic function, the logarithm function, and the x ln(x)-entropy function. Furthermore, the solutions generated by such approaches may satisfy the linear constraints only inexactly and thus are optimal solutions of the regularized problem with a perturbed right-hand side. We give a general condition for such an optimal solution to converge to an optimal solution of the original problem as the perturbation parameter tends to zero. In the case where the nonlinear function is strictly convex, we further derive a local (error) bound on the distance from such an optimal solution to the limiting optimal solution of the original problem, expressed in terms of the perturbation parameter.
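A worked instance of the smoothing idea with the x ln(x)-entropy regularizer: over the unit simplex the regularized LP has a closed-form softmax solution, and letting the parameter tend to zero recovers an optimal vertex. The simplex constraint is an illustrative special case, not the paper's general setting.

```python
import numpy as np

def entropy_regularized_lp(c, mu):
    """Entropy-smoothed LP over the unit simplex:
        min c'x + mu * sum(x ln x)  s.t.  sum(x) = 1, x >= 0.
    The first-order conditions give the closed-form softmax solution
        x_i  proportional to  exp(-c_i / mu)."""
    z = np.exp(-(c - c.min()) / mu)  # shift by min(c) for numerical stability
    return z / z.sum()

c = np.array([1.0, 2.0, 3.0])
for mu in [1.0, 0.1, 0.01]:
    print(mu, entropy_regularized_lp(c, mu))
# As mu -> 0 the solution tends to e_1, the optimal vertex of this LP.
```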

20.
In this paper, a new steplength formula is proposed for unconstrained optimization, which determines the stepsize in a single step and avoids the line search. Global convergence of five well-known conjugate gradient methods with this formula is analyzed, with the following results: (1) the DY method globally converges for a strongly convex LC^1 objective function; (2) the CD method, the FR method, the PRP method, and the LS method globally converge for a general, not necessarily convex, LC^1 objective function.
