Similar Documents (20 results found)
1.
We discuss a class of deflated block Krylov subspace methods for solving large-scale matrix eigenvalue problems. The efficiency of an Arnoldi-type method is examined in computing a partial set of eigenvalues or closely clustered eigenvalues of large matrices. As an improvement, we also propose a refined variant of the Arnoldi-type method. Comparisons show that the refined variant further improves the Arnoldi-type method, and both methods exhibit very regular convergence behavior.
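For orientation, below is a minimal sketch of the plain Arnoldi process that Arnoldi-type eigenvalue methods of this kind build on; the deflation, blocking, and refinement steps discussed in the abstract are not reproduced, and the test matrix, starting vector, and subspace dimension m are arbitrary illustrative choices.

```python
import numpy as np

def arnoldi_ritz(A, v0, m):
    """Basic m-step Arnoldi process: returns Ritz values of A from the
    Krylov subspace K_m(A, v0). Plain Arnoldi only; the paper's block,
    deflation, and refined variants are not shown."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # invariant subspace found, stop early
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    return np.linalg.eigvals(H[:m, :m])     # Ritz values approximate outer eigenvalues

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
ritz = arnoldi_ritz(A, rng.standard_normal(200), 30)
print(sorted(ritz, key=abs, reverse=True)[:3])   # compare with np.linalg.eigvals(A)
```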

2.
In the present paper, we give some convergence results for the global minimal residual methods and the global orthogonal residual methods for multiple linear systems. Using the Schur complement formulae and a new matrix product, we give expressions for the approximate solutions and the corresponding residuals. We also derive some useful relations between the norms of the residuals.

3.
4.
We consider solving eigenvalue problems or model reduction problems for a quadratic matrix polynomial λ²I − λA − B with large and sparse A and B. We propose new Arnoldi- and Lanczos-type processes which operate on the same space in which A and B live and construct projections of A and B to produce a quadratic matrix polynomial with coefficient matrices of much smaller size, which is used to approximate the original problem. We apply the new processes to solve eigenvalue problems and model reductions of a second-order linear input-output system and discuss convergence properties. Our new processes are also extendable to cover a general matrix polynomial of any degree.
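The new Arnoldi- and Lanczos-type processes of the paper are not reproduced here; the sketch below only illustrates the underlying quadratic eigenvalue problem, taking the quadratic as λ²I − λA − B (the leading identity coefficient is an assumption consistent with the abstract's A and B), via the standard companion linearization with small random matrices chosen purely for illustration.

```python
import numpy as np

# Quadratic eigenvalue problem for  lambda^2 I - lambda A - B:
# the companion linearization turns it into an ordinary eigenvalue
# problem of twice the size,
#   [ A  B ] [ lambda*x ]           [ lambda*x ]
#   [ I  0 ] [    x     ] = lambda  [    x     ].
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

L = np.block([[A, B],
              [np.eye(n), np.zeros((n, n))]])
lam = np.linalg.eigvals(L)

# check: lambda^2 I - lambda A - B is singular at each computed eigenvalue
for l in lam[:3]:
    P = l**2 * np.eye(n) - l * A - B
    print(np.linalg.svd(P, compute_uv=False)[-1])   # smallest singular value ~ 0
```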

5.
Given a large square real matrix A and a rectangular tall matrix Q, many application problems require the approximation of the operation exp(A)Q. Under certain hypotheses on A, the matrix exp(A) preserves the orthogonality characteristics of Q; this property is particularly attractive when the associated application problem requires some geometric constraints to be satisfied. For small-size problems, numerical methods have been devised to approximate exp(A)Q while maintaining the structure properties. On the other hand, no algorithm for large A has been derived with similar preservation properties. In this paper we show that an appropriate use of the block Lanczos method allows one to obtain a structure-preserving approximation to exp(A)Q when A is skew-symmetric or skew-symmetric and Hamiltonian. Moreover, for A Hamiltonian we derive a new variant of the block Lanczos method that again preserves the geometric properties of the exact scheme. Numerical results are reported to support our theoretical findings, with particular attention to the numerical solution of linear dynamical systems by means of structure-preserving integrators. AMS subject classification (2000): 65F10, 65F30, 65D30
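A small dense illustration of the geometric property at stake, assuming the approximated operation is the matrix exponential action exp(A)Q (as the surrounding discussion of orthogonality and skew-symmetry suggests): for skew-symmetric A, exp(A) is orthogonal, so exp(A)Q keeps the orthonormal columns of Q. The block Lanczos approximations of the paper, which preserve this structure for large A, are not shown.

```python
import numpy as np
from scipy.linalg import expm

# Dense check only: skew-symmetric A  =>  exp(A) orthogonal  =>
# exp(A)Q has orthonormal columns whenever Q does.
rng = np.random.default_rng(2)
n, k = 60, 4
M = rng.standard_normal((n, n))
A = M - M.T                                         # skew-symmetric test matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))    # orthonormal columns

Y = expm(A) @ Q
print(np.linalg.norm(Y.T @ Y - np.eye(k)))          # ~ machine precision
```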

6.
Alternating methods for image deblurring and denoising have recently received considerable attention. The simplest of these methods are two-way methods that restore contaminated images by alternating between deblurring and denoising. This paper describes Krylov subspace-based two-way alternating iterative methods that allow the application of regularization operators different from the identity in both the deblurring and the denoising steps. Numerical examples show that this can improve the quality of the computed restorations. The methods are particularly attractive when matrix-vector products with a discrete blurring operator and its transpose can be evaluated rapidly, but the structure of these operators does not allow inexpensive diagonalization.

7.
Block Krylov subspace methods (KSMs) comprise building blocks in many state-of-the-art solvers for large-scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well-explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.

8.
Recently, Calvetti et al. published an interesting paper [Linear Algebra Appl. 316 (2000) 157–169] concerning the least-squares solution of a singular system by using the so-called range restricted GMRES (RRGMRES) method. However, one of the main results (cf. [loc. cit., Theorem 3.3]) seems to be incomplete. As a complement to that paper, in this note we first construct an example to show the incompleteness of that theorem, and then we give a modified result.

9.
This paper considers the numerical solution of a transmission boundary-value problem for the time-harmonic Maxwell equations with the help of a special finite-volume discretization. Applying this technique to several three-dimensional test problems, we obtain large, sparse, complex linear systems, which are solved by four types of algorithm, using biconjugate gradients, squared conjugate gradients, stabilized conjugate gradients, and generalized minimal residuals, respectively. We combine these methods with suitably chosen preconditioning matrices and compare the speed of convergence.
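A rough stand-in for this comparison using SciPy's built-in Krylov solvers: the matrix below is a simple complex shifted 1-D Laplacian, not the finite-volume Maxwell discretization of the paper, and the Jacobi preconditioner is only a placeholder for the "suitably chosen preconditioning matrices".

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse complex model problem (illustrative only) solved with the same four
# Krylov families named in the abstract: BiCG, CGS, BiCGSTAB, and GMRES.
n = 2000
lap = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
A = (lap + (0.5 + 0.5j) * sp.eye(n)).tocsr()
b = np.ones(n, dtype=complex)

diag = A.diagonal()                                   # Jacobi (diagonal) preconditioner
Minv = spla.LinearOperator(A.shape, matvec=lambda x: np.asarray(x).ravel() / diag,
                           dtype=complex)

for name, solver in [("BiCG", spla.bicg), ("CGS", spla.cgs),
                     ("BiCGSTAB", spla.bicgstab), ("GMRES", spla.gmres)]:
    x, info = solver(A, b, M=Minv)
    print(f"{name:9s} info={info:3d}  ||b-Ax|| = {np.linalg.norm(b - A @ x):.2e}")
```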

10.
This paper concerns the use of Krylov subspace methods for the solution of nearly singular nonsymmetric linear systems. We show that the incomplete orthogonalization methods (IOM) in conjunction with certain deflation techniques of Stewart, Chan, and Saad can be used to solve large nonsymmetric linear systems which are nearly singular. This work was supported by the National Science Foundation, Grants DMS-8403148 and DCR-81-16779, and by the Office of Naval Research, Contract N00014-85-K-0725.

11.
James V. Lambers, PAMM, 2007, 7(1): 2020143-2020144
This paper reviews the main properties, and most recent developments, of Krylov subspace spectral (KSS) methods for time-dependent variable-coefficient PDEs. These methods use techniques developed by Golub and Meurant for approximating elements of functions of matrices by Gaussian quadrature in the spectral domain, in order to achieve high-order accuracy in time together with the stability characteristic of implicit time-stepping schemes, even though KSS methods themselves are explicit. In fact, for certain problems, 1-node KSS methods are unconditionally stable. Furthermore, these methods are equivalent to high-order operator splittings, thus offering another perspective for further analysis and enhancement. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
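A minimal sketch of the Golub-Meurant building block mentioned here, namely approximating u^T f(A) u by an m-node Gauss rule obtained from m Lanczos steps; the KSS time-stepping machinery itself is not reproduced, and the symmetric test matrix and the choice f = exp are illustrative assumptions.

```python
import numpy as np

def lanczos_tridiag(A, u, m):
    """m-step symmetric Lanczos: returns the m x m tridiagonal T_m."""
    n = len(u)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q_prev = np.zeros(n)
    q = u / np.linalg.norm(u)
    for j in range(m):
        w = A @ q
        alpha[j] = q @ w
        w -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Golub-Meurant idea: u^T f(A) u is a Riemann-Stieltjes integral whose m-node
# Gauss rule is  ||u||^2 * e1^T f(T_m) e1.
rng = np.random.default_rng(3)
n, m = 300, 8
M = rng.standard_normal((n, n))
A = (M + M.T) / np.sqrt(n)                  # symmetric test matrix
u = rng.standard_normal(n)

T = lanczos_tridiag(A, u, m)
theta, S = np.linalg.eigh(T)                # Gauss nodes and eigenvectors of T_m
approx = np.linalg.norm(u)**2 * np.sum(S[0, :]**2 * np.exp(theta))
lam, V = np.linalg.eigh(A)
exact = u @ (V @ (np.exp(lam) * (V.T @ u)))  # dense u^T exp(A) u for comparison
print(approx, exact)
```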

12.
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters. Copyright © 2006 John Wiley & Sons, Ltd.

13.
In this paper, we propose a class of special Krylov subspace methods to solve the continuous algebraic Riccati equation (CARE), namely Hessenberg-based methods. The presented approaches can efficiently obtain an approximate solution of the algebraic Riccati equation. The main idea is to apply Kleinman-Newton's method to transform the algebraic Riccati equation into a Lyapunov equation at every inner iteration. Further, the Hessenberg process with pivoting strategy, combined with a Petrov-Galerkin condition and a minimal norm condition, is discussed in detail for solving the Lyapunov equation; we thus obtain two methods for solving CARE, namely the global generalized Hessenberg method (GHESS) and the changing minimal residual method based on the Hessenberg process (CMRH). Numerical experiments illustrate the efficiency of the proposed methods.
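A dense sketch of the Kleinman-Newton outer iteration the abstract describes, with the inner Lyapunov equations solved by SciPy's direct solver rather than by the paper's Hessenberg-based Krylov methods (GHESS/CMRH); the test matrices are arbitrary and chosen so that X = 0 is a stabilizing initial guess.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Kleinman-Newton for the CARE  A^T X + X A - X B R^{-1} B^T X + Q = 0:
# each outer step solves one Lyapunov equation with the closed-loop matrix.
rng = np.random.default_rng(4)
n, m = 20, 2
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable A, so X = 0 is admissible
B = rng.standard_normal((n, m))
Q = np.eye(n)
R = np.eye(m)

X = np.zeros((n, n))
for k in range(20):
    K = np.linalg.solve(R, B.T @ X)                   # feedback gain R^{-1} B^T X_k
    Ak = A - B @ K                                    # closed-loop matrix
    # Lyapunov equation  Ak^T X_{k+1} + X_{k+1} Ak = -(Q + K^T R K)
    X_new = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    if np.linalg.norm(X_new - X) < 1e-10 * np.linalg.norm(X_new):
        X = X_new
        break
    X = X_new

print(np.linalg.norm(X - solve_continuous_are(A, B, Q, R)))   # should be tiny
```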

14.
The task of fitting smoothing spline surfaces to meteorological data such as temperature or rainfall observations is computationally intensive. The generalized cross validation (GCV) smoothing algorithm, if implemented using direct matrix techniques, is O(n³) computationally, and memory requirements are O(n²). Thus, for data sets larger than a few hundred observations, the algorithm is prohibitively slow. The core of the algorithm consists of solving a series of shifted linear systems, and iterative techniques have been used to lower the computational complexity and facilitate implementation on a variety of supercomputer architectures. For large data sets, though, the execution time is still quite high. In this paper we describe a Lanczos-based approach that avoids explicitly solving the linear systems and dramatically reduces the amount of time required to fit surfaces to sets of data.
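A minimal sketch of the idea that one Lanczos decomposition of a symmetric matrix can serve a whole family of shifted systems (A + λI)x = b, which is the core of such approaches; this is a generic illustration with an arbitrary SPD test matrix and shift values, not the paper's GCV algorithm.

```python
import numpy as np

def lanczos_basis(A, b, m):
    """m-step symmetric Lanczos: returns the basis V (n x m) and tridiagonal T (m x m)."""
    n = len(b)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j] + (beta[j - 1] * V[:, j - 1] if j > 0 else 0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return V, np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(5)
n, m = 500, 40
M = rng.standard_normal((n, n))
A = M @ M.T / n + 0.5 * np.eye(n)          # SPD test matrix
b = rng.standard_normal(n)

V, T = lanczos_basis(A, b, m)
for lam in (0.01, 0.1, 1.0, 10.0):
    # same basis reused for every shift: solve the small shifted system only
    y = np.linalg.solve(T + lam * np.eye(m), np.linalg.norm(b) * np.eye(m)[:, 0])
    x = V @ y
    res = np.linalg.norm(b - (A + lam * np.eye(n)) @ x) / np.linalg.norm(b)
    print(f"shift {lam:5.2f}: relative residual {res:.2e}")
```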

15.
The task of extracting from a Krylov decomposition the approximation to an eigenpair that yields the smallest backward error can be phrased as finding the smallest perturbation which makes an associated matrix pair uncontrollable. Exploiting this relationship, we propose a new deflation criterion, which potentially admits earlier deflations than standard deflation criteria. Along these lines, a new deflation procedure for shift-and-invert Krylov methods is developed. Numerical experiments demonstrate the merits and limitations of this approach. This author has been supported by a DFG Emmy Noether fellowship and in part by the Swedish Foundation for Strategic Research under the Frame Programme Grant A3 02:128.

16.
The inverse-free preconditioned Krylov subspace method of Golub and Ye [G.H. Golub, Q. Ye, An inverse free preconditioned Krylov subspace method for symmetric generalized eigenvalue problems, SIAM J. Sci. Comp. 24 (2002) 312-334] is an efficient algorithm for computing a few extreme eigenvalues of the symmetric generalized eigenvalue problem. In this paper, we first present an analysis of the preconditioning strategy based on incomplete factorizations. We then extend the method by developing a block generalization for computing multiple or severely clustered eigenvalues and develop a robust black-box implementation. Numerical examples are given to illustrate the analysis and the efficiency of the block algorithm.

17.
In this paper, we first give a result which links any global Krylov method for solving linear systems with several right-hand sides to the corresponding classical Krylov method. Then, we propose a general framework for matrix Krylov subspace methods for linear systems with multiple right-hand sides. Our approach uses global projection techniques; it is based on the global generalized Hessenberg process (GGHP), which uses the Frobenius scalar product and constructs a basis of a matrix Krylov subspace, and on the use of a Galerkin or a minimizing-norm condition. To accelerate the convergence of global methods, we introduce weighted global methods; in these methods, the GGHP uses a different scalar product at each restart. Experimental results are presented to show the good performance of the weighted global methods. AMS subject classification 65F10
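A sketch of the Frobenius-inner-product orthogonalization that global methods rely on, written here as a global Arnoldi process for simplicity; the paper's GGHP uses a Hessenberg process with pivoting instead, and the weighting at restarts is not shown. The test matrices are arbitrary.

```python
import numpy as np

def global_arnoldi(A, B, m):
    """Global Arnoldi process: builds F-orthonormal blocks V_1..V_{m+1} of the
    matrix Krylov subspace span{B, AB, ..., A^m B} using the Frobenius inner
    product <X, Y> = trace(X^T Y)."""
    blocks = [B / np.linalg.norm(B, "fro")]
    H = np.zeros((m + 1, m))
    for j in range(m):
        W = A @ blocks[j]
        for i in range(j + 1):
            H[i, j] = np.tensordot(blocks[i], W)       # Frobenius inner product
            W = W - H[i, j] * blocks[i]
        H[j + 1, j] = np.linalg.norm(W, "fro")
        blocks.append(W / H[j + 1, j])
    return blocks, H

rng = np.random.default_rng(6)
n, s, m = 100, 4, 10
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, s))                        # s right-hand sides, stacked as a block
V, H = global_arnoldi(A, B, m)

# F-orthonormality check: <V_i, V_j>_F = delta_ij
G = np.array([[np.tensordot(Vi, Vj) for Vj in V] for Vi in V])
print(np.linalg.norm(G - np.eye(m + 1)))
```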

18.
Krylov subspace methods often use short recurrences for updating approximations and the corresponding residuals. In the bi-conjugate gradient (Bi-CG) type methods, rounding errors arising from the matrix-vector multiplications used in the recursion formulas influence the convergence speed and the maximum attainable accuracy of the approximate solutions. The strategy of a groupwise update has been proposed for improving the convergence of the Bi-CG type methods in finite-precision arithmetic. In the present paper, we analyze the influence of rounding errors on the convergence properties when using alternative recursion formulas, such as those used in the bi-conjugate residual (Bi-CR) method, which are different from those used in the Bi-CG type methods. We also propose variants of a groupwise update strategy for improving the convergence speed and the accuracy of the approximate solutions. Numerical experiments demonstrate the effectiveness of the proposed method.

19.
For solving least squares problems, the CGLS method is a typical choice among iterative methods. When the least squares problem is ill-conditioned, however, the convergence behavior of CGLS deteriorates. We therefore consider other Krylov subspace methods to overcome this disadvantage. The GMRES method is a suitable candidate because it is derived from a minimal residual norm approach, which matches the least squares formulation. Ken Hayami proposed BA-GMRES for solving least squares problems in [GMRES Methods for Least Squares Problems, SIAM J. Matrix Anal. Appl., 31 (2010), pp. 2400-2430]. Deflation and balancing preconditioners can improve the convergence rate by modifying the spectral distribution. Hence, in this paper we use preconditioned Krylov subspace methods with deflation and balancing preconditioners to solve ill-conditioned least squares problems. Numerical experiments show that the methods proposed in this paper outperform the CGLS method.
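A hedged sketch of the BA-GMRES idea with the simplest choice B = A^T, i.e. GMRES applied to A^T A x = A^T b; the deflation and balancing preconditioners studied in the paper are not shown, and the test problem is an arbitrary, mildly ill-conditioned random matrix.

```python
import numpy as np
import scipy.sparse.linalg as spla

# BA-GMRES idea for  min ||b - A x||_2  with rectangular A:
# run GMRES on the square system  B A x = B b,  here with B = A^T.
rng = np.random.default_rng(7)
m, n = 400, 120
A = rng.standard_normal((m, n))
A[:, 0] *= 1e-2                      # make the problem mildly ill-conditioned
b = rng.standard_normal(m)

BA = spla.LinearOperator((n, n), matvec=lambda x: A.T @ (A @ x))
x_gmres, info = spla.gmres(BA, A.T @ b)

x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]
print("GMRES info:", info)
print("||b - A x_gmres|| =", np.linalg.norm(b - A @ x_gmres))
print("||b - A x_lstsq|| =", np.linalg.norm(b - A @ x_lstsq))
```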

20.
We construct a novel multi-step iterative method for solving systems of nonlinear equations by introducing a parameter θ that generalizes the multi-step Newton method while keeping its order of convergence and computational cost. By an appropriate selection of θ, the new method can achieve both faster convergence and a larger radius of convergence. The new iterative method requires only one Jacobian inversion per iteration and can therefore be efficiently implemented using Krylov subspace methods. The new method can be used to solve nonlinear systems arising from partial differential equations, such as complex generalized Zakharov systems, by discretizing in both the spatial and temporal independent variables using, for instance, the Chebyshev pseudo-spectral method. Quite extensive tests show that the new method can have significantly faster convergence and a significantly larger radius of convergence than the multi-step Newton method.
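A sketch of the multi-step (frozen-Jacobian) Newton structure that the paper generalizes: one Jacobian factorization per outer iteration, reused for several cheap inner corrections. The exact role of the parameter θ is specific to the paper and is not reproduced; the test system and inner step count below are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def multistep_newton(F, J, x, inner_steps=3, tol=1e-12, maxiter=50):
    """Multi-step Newton with a frozen Jacobian: factor J once per outer
    iteration, then reuse the LU factors for several inner corrections."""
    for _ in range(maxiter):
        lu = lu_factor(J(x))                 # one Jacobian factorization per outer step
        for _ in range(inner_steps):         # cheap inner corrections with frozen Jacobian
            x = x - lu_solve(lu, F(x))
        if np.linalg.norm(F(x)) < tol:
            return x
    return x

# small illustrative nonlinear system
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0,
                        np.exp(v[0]) + v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]],
                        [np.exp(v[0]), 1.0]])
print(multistep_newton(F, J, np.array([1.0, -1.0])))
```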
