Similar Literature
20 similar documents found.
1.
We study a class of matrix least-squares problems with complex product-manifold constraints, derived from the minimal-perturbation problem for generalized eigenvalues of non-square matrix pairs with parameter $l$. Unlike existing work, this paper treats the complex problem model directly: combining the geometry of the complex product manifold with a modified Fletcher-Reeves conjugate gradient method on Euclidean space, we design a Riemannian nonlinear conjugate gradient algorithm suited to the problem model and give a global convergence analysis. Numerical experiments and comparisons show that the algorithm converges faster than the existing algorithm with parameter $l=1$ and attains solutions of the same accuracy as the existing algorithm with parameter $l=n$. Among manifold-optimization methods, it matches the iteration efficiency of the existing Riemannian Dai nonlinear conjugate gradient method, and it has lower per-iteration cost and less total iteration time than Riemannian second-order algorithms; compared with some non-manifold optimization algorithms, it shows a clear advantage in iteration efficiency.
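As a schematic illustration of the Riemannian pattern this abstract describes (projected gradient, transported search direction, retraction), here is a minimal sketch on the unit sphere for the real Rayleigh quotient. The sphere, the Fletcher-Reeves rule, and the fixed step length are all simplifying assumptions for illustration, not the paper's algorithm on the complex product manifold.

```python
import numpy as np

def riemannian_cg_sphere(A, x0, steps=200, t=1e-2):
    """Schematic Riemannian CG on the unit sphere for f(x) = x'Ax:
    project the gradient, transport the old direction, retract.
    The fixed step t is a toy choice; a real code uses a line search."""
    x = x0 / np.linalg.norm(x0)

    def rgrad(x):
        g = 2.0 * (A @ x)
        return g - (x @ g) * x                  # project onto tangent space

    g = rgrad(x)
    d = -g
    for _ in range(steps):
        x_new = x + t * d
        x_new /= np.linalg.norm(x_new)          # retraction back to the sphere
        g_new = rgrad(x_new)
        beta = (g_new @ g_new) / (g @ g)        # Fletcher-Reeves rule
        d_t = d - (x_new @ d) * x_new           # transport old direction
        d = -g_new + beta * d_t
        x, g = x_new, g_new
    return x   # approximates an eigenvector of the smallest eigenvalue
```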

2.
New accelerated nonlinear conjugate gradient algorithms, mainly modifications of Dai and Yuan's method for unconstrained optimization, are proposed. Under exact line search, the algorithms reduce to the Dai and Yuan conjugate gradient computational scheme; under inexact line search they satisfy the sufficient descent condition. Since the step lengths in conjugate gradient algorithms may differ from 1 by two orders of magnitude and tend to vary in a very unpredictable manner, the algorithms are equipped with an acceleration scheme able to improve their efficiency. Computational results for a set of 750 unconstrained optimization test problems show that these new conjugate gradient algorithms substantially outperform the Dai-Yuan conjugate gradient algorithm and its hybrid variants, the Hestenes-Stiefel, Polak-Ribière-Polyak, and CONMIN conjugate gradient algorithms, and the limited-memory quasi-Newton algorithm LBFGS, and that they compare favorably with CG_DESCENT. Within this numerical study, the accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm ASCALCG proved to be more robust.
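For reference, a minimal sketch of the classical Dai-Yuan parameter that these algorithms reduce to under exact line search; the paper's acceleration scheme itself is not reproduced here.

```python
import numpy as np

def dai_yuan_beta(g_new, g_old, d):
    """Classical Dai-Yuan parameter:
    beta = ||g_{k+1}||^2 / (d_k' (g_{k+1} - g_k))."""
    y = g_new - g_old
    return float(g_new @ g_new) / float(d @ y)

# Next search direction: d_new = -g_new + dai_yuan_beta(g_new, g_old, d) * d
```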

3.
Based on two modified secant equations proposed by Yuan, and by Li and Fukushima, we extend the approach proposed by Andrei and introduce two hybrid conjugate gradient methods for unconstrained optimization problems. Our methods are hybridizations of the Hestenes-Stiefel and Dai-Yuan conjugate gradient methods. Under proper conditions, we show that one of the proposed algorithms is globally convergent for uniformly convex functions and the other is globally convergent for general functions. To enhance the performance of the line search procedure, we propose a new approach for computing the initial steplength used to start the line search. We compare implementations of our algorithms with two representative efficient hybrid conjugate gradient methods proposed by Andrei, using unconstrained optimization test problems from the CUTEr collection. Numerical results show that, in the sense of the performance profile introduced by Dolan and Moré, the proposed hybrid algorithms are competitive, and in some cases more efficient.
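A sketch of one classical Hestenes-Stiefel/Dai-Yuan hybridization rule, beta = max(0, min(beta_HS, beta_DY)). The paper's variants replace the plain gradient difference with modified secant-based quantities, which are not reproduced here.

```python
import numpy as np

def hybrid_hs_dy_beta(g_new, g_old, d):
    """Classical hybrid rule: beta = max(0, min(beta_HS, beta_DY)).
    (Modified-secant variants would replace y by a corrected difference.)"""
    y = g_new - g_old
    denom = float(d @ y)
    beta_hs = float(g_new @ y) / denom
    beta_dy = float(g_new @ g_new) / denom
    return max(0.0, min(beta_hs, beta_dy))
```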

4.
In this article, we propose Gauss-Newton methods via a conjugate gradient path for solving nonlinear systems. By constructing and solving a linearized model of the nonlinear system, we obtain the iterative direction along the conjugate gradient path. In successive iterations, the approximate Jacobian of the nonlinear system is updated by a Broyden formula to construct the conjugate path. The global convergence and local superlinear convergence rate of the proposed algorithms are established under some reasonable conditions. Finally, numerical results are reported to show the effectiveness of the proposed algorithms.
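A minimal sketch of the Broyden rank-one Jacobian update used to refresh the linearized model between iterations; the conjugate gradient path construction itself is not shown.

```python
import numpy as np

def broyden_update(B, s, y):
    """Broyden's rank-one Jacobian update:
    B_{k+1} = B_k + (y - B_k s) s' / (s' s),
    with s = x_{k+1} - x_k and y = F(x_{k+1}) - F(x_k)."""
    return B + np.outer(y - B @ s, s) / float(s @ s)
```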

5.
A new conjugate gradient method is proposed in this paper by applying Powell's symmetrical technique to conjugate gradient methods. Using Wolfe line searches, the global convergence of the method is analyzed via spectral analysis of the conjugate gradient iteration matrix and Zoutendijk's condition. On this basis, some concrete descent algorithms are developed. Numerical experiments are presented to verify their performance, and the results show that these algorithms are competitive with the PRP+ algorithm. Finally, a brief discussion of the newly proposed method is given.
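A small helper showing the standard (weak) Wolfe acceptance test used by such line searches, with typical constants c1 and c2 assumed for illustration; the paper's spectral analysis is not reproduced.

```python
import numpy as np

def wolfe_conditions(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    """Standard Wolfe conditions for step alpha along descent direction d:
    sufficient decrease (Armijo) plus the curvature condition."""
    slope0 = float(grad(x) @ d)
    armijo = f(x + alpha * d) <= f(x) + c1 * alpha * slope0
    curvature = float(grad(x + alpha * d) @ d) >= c2 * slope0
    return armijo and curvature
```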

6.
1 Introduction. Eigenvalue problems have many applications in branches of applied mathematics and in engineering, especially in optimal design, so the optimization of eigenvalue problems has already been studied in considerable depth; see … In our research, optimal design problems often appear in the form of energy minimization under a prescribed-load design. In most papers on optimal design, greater emphasis is placed on the structure under a fixed-load condition…

7.
The development of the Lanczos algorithm for finding eigenvalues of large sparse symmetric matrices was followed by that of block forms of the algorithm. In this paper, similar extensions are carried out for a relative of the Lanczos method, the conjugate gradient algorithm. The resulting block algorithms are useful for simultaneously solving multiple linear systems or for solving a single linear system in which the matrix has several separated eigenvalues or is not easily accessed on a computer. We develop a block biconjugate gradient algorithm for general matrices, and develop block conjugate gradient, minimum residual, and minimum error algorithms for symmetric semidefinite matrices. Bounds on the rate of convergence of the block conjugate gradient algorithm are presented, and issues related to computational implementation are discussed. Variants of the block conjugate gradient algorithm applicable to symmetric indefinite matrices are also developed.
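A minimal sketch of a block conjugate gradient iteration for A X = B with symmetric positive definite A, in the spirit of the scheme described above. A robust implementation would also deflate converged or rank-deficient columns, which is omitted here.

```python
import numpy as np

def block_cg(A, B, tol=1e-10, maxit=500):
    """Block CG for A X = B, A symmetric positive definite, B n-by-s.
    Solves all s right-hand sides simultaneously; no deflation."""
    X = np.zeros_like(B)
    R = B - A @ X
    P = R.copy()
    for _ in range(maxit):
        AP = A @ P
        alpha = np.linalg.solve(P.T @ AP, R.T @ R)   # s-by-s step matrix
        X = X + P @ alpha
        R_new = R - AP @ alpha
        if np.linalg.norm(R_new) < tol:
            return X
        beta = np.linalg.solve(R.T @ R, R_new.T @ R_new)
        P = R_new + P @ beta
        R = R_new
    return X
```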

8.
This paper studies some well-known iterative methods in their tensor forms for solving a Sylvester tensor equation. More precisely, the tensor forms of the Arnoldi process and the full orthogonalization method are derived by using a product between two tensors. Tensor forms of the conjugate gradient and nested conjugate gradient algorithms are also presented. A rough estimate of the number of operations required for the tensor form of the Arnoldi process is obtained, which reveals the general advantage of handling the algorithms in tensor format over their classical forms. Some numerical experiments are examined, which confirm the feasibility and applicability of the proposed algorithms in practice.
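A hedged sketch of a matrix-free CG working directly in tensor format for a third-order Sylvester tensor equation X ×₁ A + X ×₂ B + X ×₃ C = D. It assumes the coefficient matrices make the operator symmetric positive definite (e.g. A, B, C SPD); this illustrates the tensor-format idea only, not the paper's Arnoldi/FOM derivation.

```python
import numpy as np

def sylvester_op(X, A, B, C):
    """L(X) = X x_1 A + X x_2 B + X x_3 C via mode products."""
    return (np.einsum('ia,ajk->ijk', A, X)
            + np.einsum('ja,iak->ijk', B, X)
            + np.einsum('ka,ija->ijk', C, X))

def tensor_cg(A, B, C, D, tol=1e-10, maxit=1000):
    """CG on L(X) = D without ever vectorizing X; inner products
    run over all tensor entries.  Requires L symmetric positive
    definite, e.g. A, B, C all SPD."""
    X = np.zeros_like(D)
    R = D - sylvester_op(X, A, B, C)
    P = R.copy()
    rho = np.vdot(R, R)
    for _ in range(maxit):
        Q = sylvester_op(P, A, B, C)
        alpha = rho / np.vdot(P, Q)
        X += alpha * P
        R -= alpha * Q
        rho_new = np.vdot(R, R)
        if np.sqrt(rho_new) < tol:
            break
        P = R + (rho_new / rho) * P
        rho = rho_new
    return X
```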

9.
A family of new conjugate gradient methods is proposed based on Perry's idea, satisfying the descent property or the sufficient descent property for any line search. In addition, based on a scaling technique and a restarting strategy, a family of scaling symmetric Perry conjugate gradient methods with restarting procedures is presented. The memoryless BFGS method and the SCALCG method are special forms of these two families, respectively. Moreover, several concrete new algorithms are suggested. Under Wolfe line searches, the global convergence of the two families of new methods is proved by spectral analysis for uniformly convex functions and nonconvex functions. Preliminary numerical comparisons with the CG_DESCENT and SCALCG algorithms show that the new algorithms are very effective for large-scale unconstrained optimization problems. Finally, a remark on further research is given.

10.
This paper proposes new iterative methods for the efficient computation of the smallest eigenvalue of symmetric nonlinear matrix eigenvalue problems of large order with a monotone dependence on the spectral parameter. Monotone nonlinear eigenvalue problems for differential equations have important applications in mechanics and physics. The discretization of these eigenvalue problems leads to nonlinear eigenvalue problems with very large sparse ill-conditioned matrices monotonically depending on the spectral parameter. To compute the smallest eigenvalue of large-scale matrix nonlinear eigenvalue problems, we suggest preconditioned iterative methods: preconditioned simple iteration method, preconditioned steepest descent method, and preconditioned conjugate gradient method. These methods use only matrix-vector multiplications, preconditioner-vector multiplications, linear operations with vectors, and inner products of vectors. We investigate the convergence and derive grid-independent error estimates for these methods. Numerical experiments demonstrate the practical effectiveness of the proposed methods for a model problem.
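For orientation, a textbook preconditioned conjugate gradient loop for a linear SPD system, showing exactly the operation set the abstract lists (matrix-vector products, preconditioner applications, vector updates, inner products). The authors' preconditioned eigensolvers for the nonlinear problem are not reproduced here.

```python
import numpy as np

def pcg(matvec, precond, b, tol=1e-10, maxit=1000):
    """Preconditioned CG using only matvecs, preconditioner
    applications, axpys, and inner products."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond(r)
    p = z.copy()
    rz = float(r @ z)
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / float(p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = float(r @ z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```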

11.
For solving inverse gravimetry problems, efficient stable parallel algorithms based on iterative gradient methods are proposed. For solving systems of linear algebraic equations with block-tridiagonal matrices arising in geoelectrics problems, a parallel matrix sweep algorithm, a square root method, and a conjugate gradient method with preconditioner are proposed. The algorithms are implemented numerically on the parallel computing system of the Institute of Mathematics and Mechanics (PCS-IMM), NVIDIA graphics processors, and an Intel multi-core CPU, using several new computing technologies. The parallel algorithms are incorporated into a system for remote computations entitled "Specialized Web-Portal for Solving Geophysical Problems on Multiprocessor Computers." Some problems with "quasi-model" and real data are solved.

12.
Based on a singular value analysis on an extension of the Polak–Ribière–Polyak method, a nonlinear conjugate gradient method with the following two optimal features is proposed: the condition number of its search direction matrix is minimum and also, the distance of its search direction from the search direction of a descent nonlinear conjugate gradient method proposed by Zhang et al. is minimum. Under proper conditions, global convergence of the method can be achieved. To enhance efficiency of the proposed method, Powell's truncation of the conjugate gradient parameters is used. The method is computationally compared with the nonlinear conjugate gradient method proposed by Zhang et al. and a modified Polak–Ribière–Polyak method proposed by Yuan. Results of numerical comparisons show efficiency of the proposed method in the sense of the Dolan–Moré performance profile.
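A sketch of Powell's truncation applied to the Polak–Ribière–Polyak parameter (the PRP+ rule); the paper's singular-value-optimal direction matrix is not reproduced here.

```python
import numpy as np

def prp_plus_beta(g_new, g_old):
    """PRP parameter with Powell's truncation:
    beta = max(g_{k+1}' (g_{k+1} - g_k) / ||g_k||^2, 0)."""
    y = g_new - g_old
    return max(float(g_new @ y) / float(g_old @ g_old), 0.0)
```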

13.
In this paper, we consider an optimal control problem for switched systems with input and state constraints. Owing to the complexity of such constraints and switching laws, it is difficult to solve the problem using standard optimization techniques. In addition, although conjugate gradient algorithms are very useful for solving nonlinear optimization problems, in practical implementations the Wolfe condition may never be satisfied because of numerical errors. Moreover, the mode insertion technique leads only to suboptimal solutions, since only certain mode insertions are considered. Thus, based on an improved conjugate gradient algorithm and a discrete filled function method, an improved bi-level algorithm is proposed to solve this optimization problem. Convergence results indicate that the proposed algorithm is globally convergent. Three numerical examples are solved to illustrate that the proposed algorithm converges faster and yields a better cost function value than existing bi-level algorithms.

14.
In this article, we study mixed equilibrium problems and present algorithms together with convergence theorems for them, including a proximal gradient method, a Tikhonov regularization method, a Mann-type method, and a conjugate gradient method. As an application, we study minimization problems.

15.
A new iterative scheme is described for the solution of large linear systems of equations with a matrix of the form A = ρU + ζI, where ρ and ζ are constants, U is a unitary matrix and I is the identity matrix. We show that for such matrices a Krylov subspace basis can be generated by recursion formulas with few terms. This leads to a minimal residual algorithm that requires little storage and makes it possible to determine each iterate with fairly little arithmetic work. This algorithm provides a model for iterative methods for non-Hermitian linear systems of equations, in a similar way to the conjugate gradient and conjugate residual algorithms. Our iterative scheme illustrates that results by Faber and Manteuffel [3,4] on the existence of conjugate gradient algorithms with short recurrence relations, and related results by Joubert and Young [13], can be extended.
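A small numerical check, with made-up n, ρ, ζ, of the structure being exploited: A = ρU + ζI is normal and its spectrum lies on a circle of radius |ρ| about ζ, which is what makes a short-recurrence Krylov basis possible.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, zeta = 200, 2.0, 0.5 + 0.3j          # hypothetical size/constants

# Build A = rho*U + zeta*I from a random unitary U (QR of a random matrix).
Q, R = np.linalg.qr(rng.standard_normal((n, n))
                    + 1j * rng.standard_normal((n, n)))
U = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))   # fix column phases
A = rho * U + zeta * np.eye(n)

# A is normal, and |lambda - zeta| = |rho| for every eigenvalue lambda.
print(np.allclose(A @ A.conj().T, A.conj().T @ A))            # True
print(np.allclose(np.abs(np.linalg.eigvals(A) - zeta), rho))  # True
```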

16.
This paper studies the behaviour of a family of conjugate gradient optimization algorithms, of which the best known is probably that introduced in 1964 by Fletcher & Reeves. This family has the property that, on a quadratic function, the directions generated by any member of the family are the same set of conjugate directions, providing that, at each iteration, an exact linear search is performed. In this paper a modification is introduced that enables this set of conjugate directions to be generated without any accurate line searches. This enables the minimum of a quadratic function to be found in, at most, (n+2) gradient evaluations. As the modification only requires the storage of two additional n-vectors, the storage advantage of conjugate gradient algorithms vis-à-vis variable metric algorithms is maintained. Finally, a numerical study is reported in which the performance of this new method is compared to that of various members of the unmodified family.
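The 1964 Fletcher & Reeves member of this family uses the parameter below; a minimal sketch for reference (the paper's line-search-free modification is not shown).

```python
import numpy as np

def fletcher_reeves_beta(g_new, g_old):
    """Fletcher-Reeves (1964): beta = ||g_{k+1}||^2 / ||g_k||^2."""
    return float(g_new @ g_new) / float(g_old @ g_old)
```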

17.
In this paper we propose a fundamentally different conjugate gradient method, in which the well-known parameter βk is computed by an approximation of the Hessian/vector product through finite differences. For search direction computation, the method uses a forward difference approximation to the Hessian/vector product in combination with a careful choice of the finite difference interval. For the step length computation we suggest an acceleration scheme able to improve the efficiency of the algorithm. Under common assumptions, the method is proved to be globally convergent. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with conjugate gradient algorithms, including CONMIN by Shanno and Phua [D.F. Shanno, K.H. Phua, Algorithm 500, minimization of unconstrained multivariate functions, ACM Trans. Math. Softw. 2 (1976) 87–94], SCALCG by Andrei [N. Andrei, Scaled conjugate gradient algorithms for unconstrained optimization, Comput. Optim. Appl. 38 (2007) 401–416; N. Andrei, Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Optim. Methods Softw. 22 (2007) 561–571; N. Andrei, A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Appl. Math. Lett. 20 (2007) 645–650], the conjugate gradient method based on a new conjugacy condition by Li, Tang and Wei [G. Li, C. Tang, Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, J. Comput. Appl. Math. 202 (2007) 523–539], and the truncated Newton method TN by Nash [S.G. Nash, Preconditioning of truncated-Newton methods, SIAM J. Sci. Stat. Comput. 6 (1985) 599–616], using a set of 750 unconstrained optimization test problems, show that the suggested algorithm outperforms these conjugate gradient algorithms as well as TN.
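A sketch of the forward-difference Hessian/vector approximation the method is built on. The step h below is a common heuristic, not necessarily the paper's "careful choice" of the finite difference interval.

```python
import numpy as np

def hessian_vector_fd(grad, x, d, eps=np.finfo(float).eps):
    """Forward-difference Hessian/vector product:
    H d ~ (grad(x + h*d) - grad(x)) / h,
    with a commonly used heuristic step h."""
    h = 2.0 * np.sqrt(eps) * (1.0 + np.linalg.norm(x)) \
        / max(np.linalg.norm(d), 1e-30)
    return (grad(x + h * d) - grad(x)) / h
```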

18.
Conjugate gradient methods have been extensively used to locate unconstrained minimum points of real-valued functions. At present, there are several readily implementable conjugate gradient algorithms that do not require exact line search and yet are shown to be superlinearly convergent. However, these existing algorithms usually require several trials to find an acceptable stepsize at each iteration, and their inexact line search can be very time-consuming. In this paper we present new readily implementable conjugate gradient algorithms that will eventually require only one trial stepsize to find an acceptable stepsize at each iteration. Making usual continuity assumptions on the function being minimized, we have established the following properties of the proposed algorithms. Without any convexity assumptions on the function being minimized, the algorithms are globally convergent in the sense that every accumulation point of the generated sequences is a stationary point. Furthermore, when the generated sequences converge to local minimum points satisfying second-order sufficient conditions for optimality, the algorithms eventually demand only one trial stepsize at each iteration, and their rate of convergence is n-step superlinear and n-step quadratic.

19.
For solving systems of linear algebraic equations with block-tridiagonal matrices arising in geoelectrics problems, a parallel matrix sweep algorithm, a conjugate gradient method with preconditioner, and a square root method are proposed and implemented numerically on a multi-core Intel CPU and NVIDIA graphics processors. The efficiency of the parallel algorithms is investigated, and the algorithms are optimized, for solving the problem with quasi-model data.

20.
The equivalence in exact arithmetic of the Lanczos tridiagonalization procedure and the conjugate gradient optimization procedure for solving Ax = b, where A is a real symmetric, positive definite matrix, is well known. We demonstrate that a relaxed equivalence is valid in the presence of errors. Specifically we demonstrate that local ε-orthonormality of the Lanczos vectors guarantees local ε-A-conjugacy of the direction vectors in the associated conjugate gradient procedure. Moreover we demonstrate that all the conjugate gradient relationships are satisfied approximately. Therefore, any statements valid for the conjugate gradient optimization procedure, which we show converges under very weak conditions, apply directly to the Lanczos procedure. We then use this equivalence to obtain an explanation of the Lanczos phenomenon: the empirically observed "convergence" of Lanczos eigenvalue procedures despite total loss of the global orthogonality of the Lanczos vectors.
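A small numerical illustration of the exact-arithmetic equivalence: run m CG steps on Ax = b (with x0 = 0) and rebuild the Lanczos tridiagonal from the CG coefficients via the standard relations; in floating point the match is only approximate, which is precisely the relaxation the paper studies.

```python
import numpy as np

def cg_to_lanczos_T(A, b, m):
    """Run m CG steps and rebuild the Lanczos tridiagonal T from the
    CG coefficients (standard relations; assumes no breakdown):
      T[j,j]   = 1/alpha_j + beta_{j-1}/alpha_{j-1}
      T[j,j+1] = sqrt(beta_j)/alpha_j .
    Eigenvalues of T are the Ritz values approximating those of A."""
    x = np.zeros_like(b)
    r = b.copy(); p = r.copy()
    alphas, betas = [], []
    for _ in range(m):
        Ap = A @ p
        rr = float(r @ r)
        a = rr / float(p @ Ap)
        x += a * p
        r -= a * Ap
        be = float(r @ r) / rr
        alphas.append(a); betas.append(be)
        p = r + be * p
    T = np.zeros((m, m))
    for j in range(m):
        T[j, j] = 1.0 / alphas[j]
        if j > 0:
            T[j, j] += betas[j - 1] / alphas[j - 1]
        if j < m - 1:
            T[j, j + 1] = T[j + 1, j] = np.sqrt(betas[j]) / alphas[j]
    return T
```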
