Similar Documents
A total of 20 similar documents were retrieved.
1.
Optimization, 2012, 61(4): 549–570
The best spectral conjugate gradient algorithm of Birgin and Martínez (2001, A spectral conjugate gradient method for unconstrained optimization, Applied Mathematics and Optimization, 43, 117–128), which is mainly a scaled variant of Perry (1977, A class of conjugate gradient algorithms with a two-step variable metric memory, Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University), is modified in such a way as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded into the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected either as the spectral gradient or in an anticipative way by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Computational results and performance profiles for a set of 700 unconstrained optimization problems show that this new scaled nonlinear conjugate gradient algorithm substantially outperforms known conjugate gradient methods, including the spectral conjugate gradient SCG of Birgin and Martínez, the scaled Fletcher–Reeves and Polak–Ribière algorithms, and CONMIN (Shanno, D.F. and Phua, K.H., 1976, Algorithm 500, Minimization of unconstrained multivariate functions, ACM Transactions on Mathematical Software, 2, 87–94).
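As a rough illustration of the spectral scaling idea described above (not the authors' exact BFGS-modified update or the anticipative formula), the sketch below computes the spectral parameter from two successive points and a scaled conjugate gradient direction; the function name, tolerance, and fallback to steepest descent are assumptions.

```python
import numpy as np

def scaled_cg_direction(g_new, s, y, beta):
    """Spectral-scaled CG direction d = -theta * g_new + beta * s, where
    theta = s's / s'y is the spectral (Barzilai-Borwein) parameter built from
    the step s = x_{k+1} - x_k and gradient difference y = g_{k+1} - g_k."""
    sty = s @ y
    if sty <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return -g_new                 # curvature not usable: plain steepest descent
    theta = (s @ s) / sty             # spectral scaling of the gradient term
    return -theta * g_new + beta * s
```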

2.
New iterative methods for linear inequalities   (Cited by 1: 0 self-citations, 1 by others)
New iterative methods for solving systems of linear inequalities are presented. Each step in these methods consists of finding the orthogonal projection of the current point onto a hyperplane corresponding to a surrogate constraint, which is constructed through a positive combination of a group of violated constraints. Both sequential and parallel implementations are discussed. The authors are grateful to a referee for pointing out the result in Lemma 5.1 and its importance in the proof of Theorem 5.1. This work was supported partially by NSF Grant No. ECS-85-21183.
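A minimal sketch of one surrogate-constraint iteration for a system Ax ≤ b, under the assumption that the violated constraints are combined with unit weights; the papers' actual weighting and any relaxation parameter may differ.

```python
import numpy as np

def surrogate_projection_step(A, b, x):
    """One iteration: project x onto the hyperplane of a surrogate constraint
    built from the currently violated rows of Ax <= b (unit weights assumed)."""
    residual = A @ x - b
    violated = residual > 1e-10
    if not violated.any():
        return x, True                      # x is already feasible
    a_s = A[violated].sum(axis=0)           # positive combination of violated rows
    b_s = b[violated].sum()
    # Orthogonal projection onto the hyperplane {z : a_s . z = b_s}
    x_new = x - ((a_s @ x - b_s) / (a_s @ a_s)) * a_s
    return x_new, False
```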

3.
The gradient hard thresholding pursuit algorithm is one of the effective algorithms for solving sparse optimization problems. Considering the influence of the projection step on the optimal solution, it is worthwhile to design a projection procedure that improves on the greedy strategy. For general sparsity-constrained optimization problems, an iterative projection strategy based on integer programming is proposed, in which the projection in the gradient projection algorithm is treated as a subproblem. The index set of the projection is obtained by solving this subproblem iteratively and is then used to continue solving the original problem, improving the computational performance of the gradient hard thresholding pursuit algorithm. Convergence of the algorithm is proved, and numerical examples verify its effectiveness.
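For context, the sketch below shows a plain gradient hard thresholding pursuit step with the usual greedy projection (keep the s largest-magnitude entries), not the integer-programming projection proposed in the abstract; the least-squares objective, step size, and debiasing step are illustrative assumptions.

```python
import numpy as np

def grahtp_step(A, b, x, s, step=1.0):
    """One gradient hard thresholding pursuit iteration for
    min 0.5*||Ax - b||^2 s.t. ||x||_0 <= s, with the standard greedy projection."""
    grad = A.T @ (A @ x - b)
    z = x - step * grad                          # gradient step
    support = np.argsort(np.abs(z))[-s:]         # greedy projection: keep s largest entries
    x_new = np.zeros_like(x)
    # Debias: least squares restricted to the chosen support
    x_new[support] = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
    return x_new
```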

4.
Another hybrid conjugate gradient algorithm is subject to analysis. The parameter β_k is computed as a convex combination of the Hestenes–Stiefel and Dai–Yuan choices, i.e. β_k = (1 − θ_k)·β_k^HS + θ_k·β_k^DY. The parameter θ_k in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the Newton direction and the pair (s_k, y_k) satisfies the quasi-Newton equation ∇²f(x_{k+1})·s_k = y_k, where s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes–Stiefel and the Dai–Yuan conjugate gradient algorithms as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems is used, some of them from the CUTE library.
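A minimal sketch of the convex-combination idea, assuming θ_k is simply supplied and clipped to [0, 1] rather than derived from the Newton-direction condition used in the paper:

```python
import numpy as np

def hybrid_beta(g_new, d, y, theta):
    """beta_k = (1 - theta)*beta_HS + theta*beta_DY, with theta clipped to [0, 1].
    d is the previous search direction, y = g_{k+1} - g_k."""
    theta = min(max(theta, 0.0), 1.0)
    dty = d @ y
    beta_hs = (g_new @ y) / dty            # Hestenes-Stiefel coefficient
    beta_dy = (g_new @ g_new) / dty        # Dai-Yuan coefficient
    return (1.0 - theta) * beta_hs + theta * beta_dy
```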

5.
Second degree normalized implicit conjugate gradient methods for the numerical solution of self-adjoint elliptic partial differential equations are developed. A proposal for the selection of certain values of the iteration parameters ?i, γi involved in solving two- and three-dimensional elliptic boundary-value problems, leading to substantial savings in computational work, is presented. Experimental results for model problems are given.

6.
In this paper, a three-term conjugate gradient algorithm is developed for solving large-scale unconstrained optimization problems. The search direction at each iteration is determined by correcting the steepest descent direction with the difference between consecutive iterates and the difference between the corresponding gradients. It is proved that such a direction satisfies the approximate secant condition as well as the conjugacy condition. Acceleration and restart strategies are incorporated into the design of the algorithm to improve its numerical performance. Global convergence of the proposed algorithm is established under two mild assumptions. Results obtained by running the algorithm on 75 benchmark test problems from the literature indicate that it outperforms existing state-of-the-art algorithms of the same type.
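As a hedged illustration of what a three-term direction of this kind looks like, the sketch below uses the well-known Zhang–Zhou–Li construction (not the paper's specific formula), which corrects the steepest descent direction with the previous direction and the gradient difference and guarantees sufficient descent:

```python
import numpy as np

def three_term_prp_direction(g_new, g_old, d_old):
    """Three-term direction d = -g_new + beta*d_old - theta*y with the PRP
    coefficient; this choice ensures g_new'd = -||g_new||^2 regardless of the
    line search (Zhang-Zhou-Li style, shown only as a representative example)."""
    y = g_new - g_old
    gg_old = g_old @ g_old
    beta = (g_new @ y) / gg_old           # Polak-Ribiere-Polyak coefficient
    theta = (g_new @ d_old) / gg_old
    return -g_new + beta * d_old - theta * y
```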

7.
Optimization, 2012, 61(12): 2679–2691
In this article, we present an improved three-term conjugate gradient algorithm for large-scale unconstrained optimization. The search directions in the developed algorithm are proved to satisfy an approximate secant equation as well as the Dai–Liao conjugacy condition. With the standard Wolfe line search and the restart strategy, global convergence of the algorithm is established under mild conditions. When the algorithm is applied to 75 benchmark test problems with dimensions from 1000 to 10,000, the numerical results indicate that it outperforms the state-of-the-art algorithms available in the literature, requiring less CPU time and fewer iterations on these large-scale unconstrained optimization problems.

8.
This paper is concerned with the numerical solution of a symmetric indefinite system which is a generalization of the Karush–Kuhn–Tucker system. Following the recent approach of Lukšan and Vlček, we propose to solve this system by a preconditioned conjugate gradient (PCG) algorithm and we devise two indefinite preconditioners with good theoretical properties. In particular, for one of these preconditioners, the finite termination property of the PCG method is stated. The PCG method combined with a parallel version of these preconditioners is used as inner solver within an inexact Interior-Point (IP) method for the solution of large and sparse quadratic programs. The numerical results obtained by a parallel code implementing the IP method on distributed memory multiprocessor systems enable us to confirm the effectiveness of the proposed approach for problems with special structure in the constraint matrix and in the objective function. Copyright © 2002 John Wiley & Sons, Ltd.

9.
The simplex algorithm is still the best-known and most frequently used way to solve LP problems. Khachian suggested a method to solve these problems in polynomial time; the average behaviour of his method, however, is still inferior to modern simplex-based LP codes. A new gradient-based approach which also has polynomial worst-case behaviour was suggested by Karmarkar. This method was modified, programmed and compared with other available LP codes. It is shown that the numerical efficiency of Karmarkar's method compares favourably with other LP codes, particularly for problems with a large number of variables and few constraints.

10.
The projective method for solving linear matrix inequalities   (Cited by 2: 0 self-citations, 2 by others)
Numerous problems in control and systems theory can be formulated in terms of linear matrix inequalities (LMI). Since solving an LMI amounts to a convex optimization problem, such formulations are known to be numerically tractable. However, the interest in LMI-based design techniques has really surged with the introduction of efficient interior-point methods for solving LMIs with a polynomial-time complexity. This paper describes one particular method called the Projective Method. Simple geometrical arguments are used to clarify the strategy and convergence mechanism of the Projective algorithm. A complexity analysis is provided, and applications to two generic LMI problems (feasibility and linear objective minimization) are discussed.

11.
Parallel preconditioned conjugate gradient algorithm on GPU   (Cited by 1: 0 self-citations, 1 by others)
We propose a parallel implementation of the Preconditioned Conjugate Gradient algorithm on a GPU platform. The preconditioning matrix is an approximate inverse derived from the SSOR preconditioner. Used through sparse matrix–vector multiplication, the proposed preconditioner is well suited for the massively parallel GPU architecture. As compared to CPU implementation of the conjugate gradient algorithm, our GPU preconditioned conjugate gradient implementation is up to 10 times faster (8 times faster at worst).
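A rough CPU-side sketch of preconditioned conjugate gradient in which the preconditioner is applied as a matrix–vector product with a precomputed approximate inverse M_inv, mirroring the structure described above; the SSOR-based construction of M_inv and any GPU offloading are omitted, and the tolerance and iteration limit are assumptions.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, maxiter=1000):
    """Preconditioned CG for a symmetric positive definite A; the preconditioner
    is applied as z = M_inv @ r, i.e. a (sparse) matrix-vector product, as with
    an approximate-inverse preconditioner."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```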

12.
It is well known that the HS method and the PRP method may not converge for nonconvex optimization even with an exact line search. Some globalization techniques have been proposed, for instance the PRP+ technique and the Grippo-Lucidi technique for the PRP method. In this paper, we propose a new efficient globalization technique for general nonlinear conjugate gradient methods for nonconvex minimization. This new technique makes full use of information from the previous search direction. Under suitable conditions, we prove that nonlinear conjugate gradient methods with this technique are globally convergent for nonconvex minimization if the line search satisfies the Wolfe conditions or the Armijo condition. Extensive numerical experiments are reported to show the efficiency of the proposed technique.
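As a concrete example of one of the globalization techniques mentioned above (the PRP+ truncation, not the new technique proposed in the paper), a minimal sketch:

```python
import numpy as np

def beta_prp_plus(g_new, g_old):
    """PRP+ globalization: truncate the Polak-Ribiere-Polyak coefficient at zero,
    so the method falls back to steepest descent whenever beta would be negative."""
    beta_prp = (g_new @ (g_new - g_old)) / (g_old @ g_old)
    return max(beta_prp, 0.0)
```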

13.
A system of linear inequalities subject to nonnegativity restrictions is considered. General criteria which are necessary and sufficient for a linear inequality to be redundant are derived. This general characterization provides a basis for unifying some of the existing techniques. After taking into consideration the existence of redundant linear inequalities, general necessary and sufficient criteria for a linear inequality to be nonredundant are also obtained. An example is given to illustrate the application of these new criteria. The author wishes to thank the referee for his comments.
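The sketch below shows the standard LP-based redundancy test rather than the criteria derived in the paper: inequality a_iᵀx ≤ b_i is redundant for {Ax ≤ b, x ≥ 0} if maximizing a_iᵀx over the remaining constraints cannot exceed b_i. The use of scipy.optimize.linprog and the tolerance are implementation assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def is_redundant(A, b, i):
    """Check whether row i of Ax <= b, x >= 0 is redundant: maximize a_i . x
    over the other constraints and compare the optimum with b_i."""
    others = [k for k in range(A.shape[0]) if k != i]
    res = linprog(c=-A[i],                      # linprog minimizes, so negate to maximize
                  A_ub=A[others], b_ub=b[others],
                  bounds=[(0, None)] * A.shape[1],
                  method="highs")
    if not res.success:                         # unbounded/infeasible: not provably redundant
        return False
    return -res.fun <= b[i] + 1e-9
```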

14.
A three-parameter family of nonlinear conjugate gradient methods   (Cited by 3: 0 self-citations, 3 by others)

In this paper, we propose a three-parameter family of conjugate gradient methods for unconstrained optimization. The family not only includes the six already existing practical nonlinear conjugate gradient methods, but also subsumes some other families of nonlinear conjugate gradient methods as its subfamilies. With Powell's restart criterion, the three-parameter family with the strong Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family. This paper can also be regarded as a brief review of nonlinear conjugate gradient methods.



15.
In this paper, we present a new projection and contraction method for linear variational inequalities, which can be regarded as an extension of He's method. The proposed method includes several new methods as special cases. We use a self-adaptive technique to adjust the parameter β at each iteration. The method is simple, and its global convergence is proved under the same assumptions as He's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method.

16.
The conventional procedure for folding a system of linear inequalities based on the Fourier-Chernikov algorithm is supplemented with techniques for eliminating redundant inequalities, which considerably counteracts the increase in the system dimension. Exact and approximate methods are proposed, which are brought to algorithmic form and software implementation. Numerical results are discussed.
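A bare-bones sketch of Fourier-Motzkin elimination of one variable from Ax ≤ b, without the redundancy-removal techniques the paper adds (which are its main point); pairing every positive-coefficient row with every negative-coefficient row is what drives the growth in the number of inequalities.

```python
import numpy as np

def fourier_motzkin_eliminate(A, b, j):
    """Eliminate variable j from Ax <= b: keep rows with zero coefficient on x_j,
    and combine every (positive, negative) pair of rows so that x_j cancels.
    Returns the reduced system (A', b') without column j."""
    m = A.shape[0]
    pos = [i for i in range(m) if A[i, j] > 1e-12]
    neg = [i for i in range(m) if A[i, j] < -1e-12]
    zero = [i for i in range(m) if abs(A[i, j]) <= 1e-12]
    rows = [np.delete(A[i], j) for i in zero]
    rhs = [b[i] for i in zero]
    for p in pos:
        for n in neg:
            # Scale rows so the coefficients of x_j become +1 and -1, then add.
            comb = A[p] / A[p, j] - A[n] / A[n, j]
            rows.append(np.delete(comb, j))
            rhs.append(b[p] / A[p, j] - b[n] / A[n, j])
    return np.array(rows), np.array(rhs)
```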

17.
The Kelley cutting plane method is one of the methods commonly used to optimize the dual function in the Lagrangian relaxation scheme. Usually the Kelley cutting plane method uses the simplex method as its optimization engine. It is well known that the simplex method leaves the current vertex, follows an ascending edge and stops at the nearest vertex. What would happen if one continued the line search up to the best point instead? As a possible answer, we propose the face simplex method, which freely explores the polyhedral surface by following Rosen's gradient projection combined with a global line search over the whole surface. Furthermore, to avoid the zigzagging of the gradient projection, we propose a conjugate gradient version of the face simplex method. For our preliminary numerical tests we have implemented this method in Matlab. This implementation clearly outperforms basic Matlab implementations of the simplex method. Compared with state-of-the-art simplex implementations in C or similar, our Matlab implementation is competitive only in the case of many cutting planes.

18.
Alternating least squares (ALS) is often considered the workhorse algorithm for computing the rank-R canonical tensor approximation, but for certain problems its convergence can be very slow. The nonlinear conjugate gradient (NCG) method was recently proposed as an alternative to ALS, but the results indicated that NCG is usually not faster than ALS. To improve the convergence speed of NCG, we consider a nonlinearly preconditioned NCG (PNCG) algorithm for computing the rank-R canonical tensor decomposition. Our approach uses ALS as a nonlinear preconditioner in the NCG algorithm; alternatively, NCG can be viewed as an acceleration process for ALS. We demonstrate numerically that the convergence acceleration mechanism in PNCG often leads to important payoffs for difficult tensor decomposition problems, with convergence that is significantly faster and more robust than for the stand-alone NCG or ALS algorithms. We consider several approaches for incorporating the nonlinear preconditioner into the NCG algorithm that have been described in the literature previously and have met with success in certain application areas. However, it appears that the nonlinearly preconditioned NCG approach has received relatively little attention in the broader community and remains underexplored both theoretically and experimentally. Thus, this paper serves several additional functions, by providing in one place a concise overview of several PNCG variants and their properties that have only been described in a few places scattered throughout the literature, by systematically comparing the performance of these PNCG variants for the tensor decomposition problem, and by drawing further attention to the usefulness of nonlinearly preconditioned NCG as a general tool. In addition, we briefly discuss the convergence of the PNCG algorithm. In particular, we obtain a new convergence result for one of the PNCG variants under suitable conditions, building on known convergence results for non-preconditioned NCG. Copyright © 2014 John Wiley & Sons, Ltd.

19.
With the development of image-acquisition devices and increasing demands on image resolution, higher requirements are placed on the convergence speed and robustness of image-processing algorithms. This paper improves the Chan-Vese model from an optimization point of view by applying the conjugate gradient method to the model, giving the new algorithm a faster convergence rate. First, the theoretical framework of the variational level-set formulation of the Chan-Vese model is briefly introduced. Second, the conjugate gradient algorithm is introduced into the solution of the model, yielding a new numerical scheme. Finally, the resulting algorithm is compared with the steepest descent method traditionally used to solve the Chan-Vese model. Numerical experiments show that the proposed conjugate gradient algorithm converges faster while maintaining accuracy.
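To illustrate the algorithmic difference the abstract describes (not the level-set discretization of the Chan-Vese energy itself), here is a minimal nonlinear conjugate gradient loop with the Fletcher-Reeves coefficient; `energy_grad`, the fixed step size, and the iteration count are placeholders.

```python
import numpy as np

def ncg_minimize(energy_grad, x0, step=1e-2, iters=200):
    """Fletcher-Reeves nonlinear CG; replacing the search direction d with -g
    recovers the steepest descent iteration the abstract compares against."""
    x = x0.copy()
    g = energy_grad(x)
    d = -g
    for _ in range(iters):
        x = x + step * d                      # fixed step in place of a line search
        g_new = energy_grad(x)
        beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x
```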

20.
General algorithm for variational inequalities   (Cited by 7: 0 self-citations, 7 by others)
In this paper, we consider a general auxiliary principle technique to suggest and analyze a new iterative algorithm for solving variational inequalities and optimization problems. We also discuss the convergence criteria.
