Similar literature
20 similar documents found (search time: 31 ms).
1.
A simpler GMRES     
The generalized minimal residual (GMRES) method is widely used for solving very large, nonsymmetric linear systems, particularly those that arise through discretization of continuous mathematical models in science and engineering. By shifting the Arnoldi process to begin with Ar0 instead of r0, we obtain simpler Gram–Schmidt and Householder implementations of the GMRES method that do not require upper Hessenberg factorization. The Gram–Schmidt implementation also maintains the residual vector at each iteration, which allows cheaper restarts of GMRES(m) and may otherwise be useful.
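For context, the standard GMRES formulation that this paper simplifies can be sketched as follows: build an orthonormal Krylov basis with the Arnoldi process (modified Gram–Schmidt variant), then solve a small least-squares problem with the upper Hessenberg matrix. This is a minimal NumPy sketch of the textbook algorithm with a zero initial guess, not the simplified variant of the abstract; the function name `gmres` is illustrative.

```python
import numpy as np

def gmres(A, b, m=None, tol=1e-12):
    """Minimal GMRES with zero initial guess: Arnoldi via modified
    Gram-Schmidt, then an (m+1) x m least-squares problem."""
    n = len(b)
    m = n if m is None else m
    Q = np.zeros((n, m + 1))       # orthonormal Krylov basis vectors
    H = np.zeros((m + 1, m))       # upper Hessenberg matrix
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for k in range(m):
        v = A @ Q[:, k]
        for j in range(k + 1):     # modified Gram-Schmidt step
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] < tol:      # happy breakdown: exact solution found
            m = k + 1
            break
        Q[:, k + 1] = v / H[k + 1, k]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gmres(A, b)
```

With m equal to the problem dimension, the computed x solves the system up to roundoff.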

2.
Accuracy of a Gram–Schmidt algorithm for the solution of linear least squares equations is compared with the accuracy of least squares subroutines in three highly respected mathematical packages that use Householder transformations. Results from the four programs for 13 test problems were evaluated at 16-digit precision on four different desktop computers using four different compilers. Singular values obtained from the different programs are compared, and the effect of pivoting to improve the accuracy is discussed. Solution vectors from the program using the Gram–Schmidt algorithm were generally more accurate than, or comparable to, solution vectors from the programs using the Householder transformations. © 1997 John Wiley & Sons, Ltd.

3.
We investigate a variant of the reorthogonalized block classical Gram–Schmidt method for computing the QR factorization of a full column rank matrix. Our aim is to bound the loss of orthogonality even when the first local QR algorithm is only conditionally stable. In particular, this allows the use of modified Gram–Schmidt instead of Householder transformations as the first local QR algorithm. Numerical experiments confirm the stable behavior of the new variant. We also examine the use of non-QR local factorization and show by example that the resulting variants, although less stable, may also be applied to ill-conditioned problems.

4.
We describe a Krylov subspace technique, based on incomplete orthogonalization of the Krylov vectors, which can be considered as a truncated version of GMRES. Unlike GMRES(m), the restarted version of GMRES, the new method does not require restarting. Like GMRES, it does not break down. Numerical experiments show that DQGMRES(k) often performs as well as restarted GMRES using a subspace of dimension m=2k. In addition, the algorithm allows variable preconditioning, i.e., it can accommodate variations in the preconditioner at every step. In particular, this feature allows the use of any iterative solver as a right preconditioner for DQGMRES(k). This inner-outer iterative combination often results in a robust approach for solving indefinite non-Hermitian linear systems.

5.
In [6] the Generalized Minimal Residual method (GMRES), which constructs the Arnoldi basis and then solves the transformed least squares problem, was studied. It was proved that GMRES with the Householder orthogonalization-based implementation of the Arnoldi process (HHA), see [9], is backward stable. In practical computations, however, the Householder orthogonalization is too expensive, and it is usually replaced by the modified Gram-Schmidt process (MGSA). Unlike the HHA case, in the MGSA implementation the orthogonality of the Arnoldi basis vectors is not preserved near the level of machine precision. Despite this, MGSA-GMRES performs surprisingly well, and its convergence behaviour and ultimately attainable accuracy do not differ significantly from those of HHA-GMRES. As was observed, but not explained, in [6], it is the linear independence of the Arnoldi basis, not the orthogonality near machine precision, that is important. Until the linear independence of the basis vectors is nearly lost, the norms of the residuals in the MGSA implementation of GMRES match those of the HHA implementation despite the more significant loss of orthogonality. In this paper we study the MGSA implementation. It is proved that the Arnoldi basis vectors begin to lose their linear independence only after the GMRES residual norm has been reduced to almost its final level of accuracy, which is proportional to κ(A)ε, where κ(A) is the condition number of A and ε is the machine precision. Consequently, unless the system matrix is very ill-conditioned, the use of modified Gram-Schmidt GMRES is theoretically well-justified. This work was supported by the GA AS CR under grants 230401 and A 2030706, by the NSF grant Int 921824, and by the DOE grant DEFG0288ER25053. Part of the work was performed while the third author visited the Department of Mathematics and Computer Science, Emory University, Atlanta, USA.
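The κ(A)ε-proportional loss of orthogonality in modified Gram-Schmidt is easy to observe numerically. The following sketch (NumPy; the 8×8 Hilbert matrix is merely an illustrative ill-conditioned input) factors A = QR by MGS and checks that the factorization residual stays near machine precision while the orthogonality of Q visibly degrades.

```python
import numpy as np

def mgs_qr(A):
    """QR factorization by modified Gram-Schmidt."""
    n = A.shape[1]
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
        for j in range(k + 1, n):          # project column k out of later ones
            R[k, j] = Q[:, k] @ Q[:, j]
            Q[:, j] -= R[k, j] * Q[:, k]
    return Q, R

n = 8
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
Q, R = mgs_qr(H)
loss = np.linalg.norm(np.eye(n) - Q.T @ Q)   # grows like cond(H) * eps
resid = np.linalg.norm(H - Q @ R)            # stays near machine precision
```

Despite `loss` being many orders of magnitude above machine precision, the factorization itself remains accurate, which is consistent with the linear-independence argument above.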

6.
We study decompositions of functions in the Hardy spaces into linear combinations of the basic functions in the orthogonal rational systems Bn, which are obtained in the respective contexts through the Gram–Schmidt orthogonalization process applied to shifted Cauchy kernels. These lead to adaptive decompositions of quaternionic-valued signals of finite energy. This study is a generalization of the main results of the first author's recent research on adaptive Takenaka–Malmquist systems in one complex variable. Copyright © 2011 John Wiley & Sons, Ltd.

7.
In 1907, Erhard Schmidt published a paper in which he introduced an orthogonalization algorithm that has since become known as the classical Gram-Schmidt process. Schmidt claimed that his procedure was essentially the same as an earlier one published by J. P. Gram in 1883. The Schmidt version was the first to become popular and widely used. An algorithm related to a modified version of the process appeared in an 1820 treatise by P. S. Laplace. Although related algorithms have been around for almost 200 years, it is the Schmidt paper that led to the popularization of orthogonalization techniques. The year 2007 marked the 100th anniversary of that paper. In celebration of that anniversary, we present a comprehensive survey of the research on Gram-Schmidt orthogonalization and its related QR factorization. Its applications to solving least squares problems and in Krylov subspace methods are also reviewed. Software and implementation aspects are discussed as well. Copyright © 2012 John Wiley & Sons, Ltd.
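As a quick illustration of the two variants this survey covers, the sketch below contrasts one classical and one modified Gram-Schmidt orthogonalization step against an existing orthonormal set Q. The only difference is whether the projection coefficients are computed from the original vector all at once (classical) or from the progressively updated vector (modified); the function names are illustrative.

```python
import numpy as np

def cgs_step(Q, a):
    """Classical Gram-Schmidt: all coefficients come from the original a."""
    r = Q.T @ a
    v = a - Q @ r
    return v / np.linalg.norm(v)

def mgs_step(Q, a):
    """Modified Gram-Schmidt: each coefficient comes from the updated v."""
    v = a.copy()
    for j in range(Q.shape[1]):
        v -= (Q[:, j] @ v) * Q[:, j]
    return v / np.linalg.norm(v)

Q = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # two orthonormal columns
a = np.array([1.0, 1.0, 1.0])
q_c = cgs_step(Q, a)
q_m = mgs_step(Q, a)
```

In exact arithmetic the two steps agree; the difference only matters in floating point, where the modified variant is far less sensitive to cancellation.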

8.
Real linear approximation theory is developed further by regarding ℝ^n as a module over the ring of circlets. By introducing a concept of orthogonality together with the respective Gram–Schmidt orthogonalization process, improved approximations upon the standard complex Hilbert space techniques follow. Related hierarchical bases are devised, leading to a new family of rapidly constructible unitary matrices. With circlets, so-called oplets are introduced for approximation, to improve the singular value decomposition of real matrices. Complex matrix approximation is also considered through finding the nearest real matrix in small rank perturbations.

9.
We consider the GMRES(m,k) method for the solution of linear systems Ax=b, i.e. restarted GMRES with restart m in which a further subspace of dimension k is added to the standard Krylov subspace of dimension m, resulting in an augmented Krylov subspace. This additional subspace usually approximates an A-invariant subspace. The eigenspaces associated with the eigenvalues closest to zero are commonly used, as those are thought to hinder convergence the most. The behaviour of residual bounds is described for various situations which can arise during the GMRES(m,k) process. The obtained estimates for the norm of the residual vector suggest sufficient conditions for convergence of GMRES(m,k) and illustrate that these augmentation techniques can remove stagnation of GMRES(m) in many cases. All estimates are independent of the choice of an initial approximation. Conclusions and remarks that numerically assess the quality of the proposed bounds close the paper. Copyright © 2004 John Wiley & Sons, Ltd.

10.
In this study, parabolic partial differential equations with nonlocal conditions are solved. To this end, we use the reproducing kernel method (RKM), which combines fundamental concepts of the Galerkin method with the complete system of a reproducing kernel Hilbert space; this approach was first introduced by Wang et al., who implemented the RKM without the Gram–Schmidt orthogonalization process. In this method, the reproducing kernel spaces and their kernels satisfying the nonlocal conditions are constructed first, and then the RKM without the Gram–Schmidt orthogonalization process is implemented on the considered problem. Moreover, a convergence theorem, error analysis theorems, and a stability theorem are provided in detail. To show the high accuracy of the present method, several numerical examples are solved.

11.
We give a geometric framework for analysing iterative methods on singular linear systems Ax = b and apply it to Krylov subspace methods. The idea is to decompose the method into the ℛ(A) component and its orthogonal complement ℛ(A)⊥, where ℛ(A) is the range of A. We apply the framework to GMRES, GMRES(k) and GCR(k), and derive conditions for convergence without breakdown for inconsistent and consistent singular systems. The approach also gives a geometric interpretation and different proofs of the conditions obtained by Brown and Walker for GMRES. We also give examples arising in the finite difference discretization of two-point boundary value problems for an ordinary differential equation. Copyright © 2010 John Wiley & Sons, Ltd.

12.
Many applications, such as subspace-based models in information retrieval and signal processing, require the computation of singular subspaces associated with the k dominant, or largest, singular values of an m×n data matrix A, where k ≪ min(m,n). Frequently, A is sparse or structured, which usually means matrix–vector multiplications involving A and its transpose can be done with much less than 𝒪(mn) flops, and A and its transpose can be stored with much less than 𝒪(mn) storage locations. Many Lanczos-based algorithms have been proposed through the years because the underlying Lanczos method only accesses A and its transpose through matrix–vector multiplications. We implement a new algorithm, called KSVD, in the Matlab environment for computing approximations to the singular subspaces associated with the k dominant singular values of a real or complex matrix A. KSVD is based upon the Lanczos tridiagonalization method, the WY representation for storing products of Householder transformations, implicit deflation, and the QR factorization. Our Matlab simulations suggest it is a fast and reliable strategy for handling troublesome singular-value spectra. Copyright © 2001 John Wiley & Sons, Ltd.

13.
The modified Gram–Schmidt (MGS) orthogonalization process, used for example in the Arnoldi algorithm, often constitutes the bottleneck that limits parallel efficiency. Indeed, a number of communications proportional to the square of the problem size are required to compute the dot products. A block formulation is attractive, but it suffers from potential numerical instability. In this paper, we address this issue and propose a simple procedure that allows the use of a block Gram–Schmidt algorithm while guaranteeing a numerical accuracy close to that of MGS. The main idea is to determine the size of the blocks dynamically. The advantages of this dynamic procedure are two-fold: first, high-performance matrix–vector multiplications can be used to decrease the execution time; second, in a parallel environment, the number of communications is reduced. Performance comparisons with the alternative iterated CGS also show an improvement for a moderate number of processors. Copyright © 2000 John Wiley & Sons, Ltd.

14.
A reorthogonalized block classical Gram–Schmidt algorithm is proposed that factors a full column rank matrix $A$ into $A=QR$, where $Q$ is left orthogonal (has orthonormal columns) and $R$ is upper triangular and nonsingular. This block Gram–Schmidt algorithm can be implemented using matrix–matrix operations, making it more efficient on modern architectures than orthogonal factorization algorithms based upon matrix–vector and purely vector operations. Gram–Schmidt orthogonal factorizations are important in the stable implementation of Krylov space methods such as GMRES and in approaches to modifying orthogonal factorizations when columns and rows are added to or deleted from a matrix. With appropriate assumptions about the diagonal blocks of $R$, the algorithm, when implemented in floating point arithmetic with machine unit $\varepsilon_M$, produces $Q$ and $R$ such that $\Vert I - Q^T Q \Vert = O(\varepsilon_M)$ and $\Vert A - QR \Vert = O(\varepsilon_M \Vert A \Vert)$. The first of these bounds has not been shown for a block Gram–Schmidt procedure before. As a consequence of these results, we provide a different analysis, with a slightly different assumption, that re-establishes a bound of Giraud et al. (Numer. Math., 101(1):87–100, 2005) for the CGS2 algorithm.
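The reorthogonalization idea behind CGS2 (run the classical Gram-Schmidt projection twice per column) can be sketched in the unblocked case as follows; NumPy is assumed, and the 8×8 Hilbert matrix is just an illustrative ill-conditioned input. The second projection pass restores $\Vert I - Q^T Q \Vert$ to the machine-precision level that a single classical pass cannot achieve.

```python
import numpy as np

def cgs2_qr(A):
    """QR via classical Gram-Schmidt with one full reorthogonalization
    pass per column (unblocked CGS2 sketch)."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        v = A[:, k].astype(float)
        for _ in range(2):              # orthogonalize, then reorthogonalize
            c = Q[:, :k].T @ v
            v = v - Q[:, :k] @ c
            R[:k, k] += c               # accumulate both passes into R
        R[k, k] = np.linalg.norm(v)
        Q[:, k] = v / R[k, k]
    return Q, R

n = 8
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
Q, R = cgs2_qr(H)
orth = np.linalg.norm(np.eye(n) - Q.T @ Q)   # O(machine eps) despite cond(H)
resid = np.linalg.norm(H - Q @ R)
```

For numerically full rank inputs such as this one, both `orth` and `resid` come out near machine precision, matching the two bounds quoted in the abstract.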

15.
Acceleration schemes can dramatically improve existing optimization procedures. In most of the work on these schemes, such as nonlinear generalized minimal residual (N-GMRES), acceleration is based on minimizing the ℓ2 norm of some target on subspaces of ℝ^n. There are many numerical examples that show how accelerating general-purpose and domain-specific optimizers with N-GMRES results in large improvements. We propose a natural modification to N-GMRES, which significantly improves the performance in a testing environment originally used to advocate N-GMRES. Our proposed approach, which we refer to as O-ACCEL (objective acceleration), is novel in that it minimizes an approximation to the objective function on subspaces of ℝ^n. We prove that O-ACCEL reduces to the full orthogonalization method for linear systems when the objective is quadratic, which differentiates our proposed approach from existing acceleration methods. Comparisons with the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) and nonlinear conjugate gradient (NCG) methods indicate the competitiveness of O-ACCEL. As it can be combined with domain-specific optimizers, it may also be beneficial in areas where L-BFGS and NCG are not suitable.

16.
Tim Stokes, Semigroup Forum, 2010, 81(2): 325–334
We characterize algebras of transformations on a set under the operations of composition and the pointwise switching function defined as follows: (f,g)[h,k](x)=h(x) if f(x)=g(x), and k(x) otherwise. The resulting algebras are both semigroups and comparison algebras in the sense of Kennison. The same characterization holds for partial transformations under composition and a suitable generalisation of the quaternary operation in which agreement of f,g includes cases where neither is defined. When a zero element is added (modelling the empty function), the resulting signature is rich enough to encompass many operations on semigroups of partial transformations previously considered, including set difference and intersection, restrictive product, and a functional analog of union. When an identity element is also added (modelling the identity function), further domain-related operations can be captured.
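The quaternary switching operation (f,g)[h,k] has a direct transcription for total transformations; this is an illustrative sketch (the name `switch` and the example functions are not from the paper).

```python
def switch(f, g, h, k):
    """(f, g)[h, k]: return h(x) where f and g agree, k(x) elsewhere."""
    return lambda x: h(x) if f(x) == g(x) else k(x)

# Example on transformations of {0, 1, 2}: f and g agree exactly on even x.
f = lambda x: x % 2
g = lambda x: 0
h = lambda x: x + 10
k = lambda x: x - 10

s = switch(f, g, h, k)
values = [s(x) for x in range(3)]
```

On x = 0, 1, 2 this yields h(0), k(1), h(2), i.e. the switch follows the agreement set of f and g pointwise.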

17.
The truncated version of the generalized minimal residual method (GMRES), the incomplete generalized minimal residual method (IGMRES), is studied. It is based on an incomplete orthogonalization of the Krylov vectors in question, and gives an approximate or quasi-minimum residual solution over the Krylov subspace. A convergence analysis of this method is given, showing that in the non-restarted version IGMRES can behave like GMRES once the basis vectors of the Krylov subspace generated by the incomplete orthogonalization are strongly linearly independent. Meanwhile, some relationships between the residual norms for IOM and IGMRES are established. Numerical experiments are reported to show the convergence behavior of IGMRES and of its restarted version IGMRES(m). Project supported by the China State Key Basic Researches, the National Natural Science Foundation of China (Grant No. 19571014), the Doctoral Program (97014113), the Foundation of Returning Scholars of China and the Natural Science Foundation of Liaoning Province.

18.
Boundary value methods (BVMs) for ordinary differential equations require the solution of nonsymmetric, large and sparse linear systems. In this paper, these systems are solved by using the generalized minimal residual (GMRES) method. A block-circulant preconditioner with circulant blocks (BCCB preconditioner) is proposed to speed up the convergence rate of the GMRES method. The BCCB preconditioner is shown to be invertible when the BVM is A_{k1,k2}-stable. The spectrum of the preconditioned matrix is clustered and, therefore, the preconditioned GMRES method converges fast. Moreover, the operation cost in each iteration of the preconditioned GMRES method using our BCCB preconditioner is less than that required by block-circulant preconditioners proposed earlier. In numerical experiments, we compare the numbers of iterations for various preconditioners. Copyright © 2003 John Wiley & Sons, Ltd.

19.
We propose a preconditioned variant of the modified HSS (MHSS) iteration method for solving a class of complex symmetric systems of linear equations. Under suitable conditions, we prove the convergence of the preconditioned MHSS (PMHSS) iteration method and discuss the spectral properties of the PMHSS-preconditioned matrix. Numerical implementations show that the resulting PMHSS preconditioner leads to fast convergence when it is used to precondition Krylov subspace iteration methods such as GMRES and its restarted variants. In particular, both the stationary PMHSS iteration and PMHSS-preconditioned GMRES show meshsize-independent and parameter-insensitive convergence behavior for the tested numerical examples.

20.
Iterative orthogonalization aims to ensure small deviation from orthogonality in the Gram–Schmidt process. Previous applications of this technique are restricted to classical Gram–Schmidt (CGS) and column-oriented modified Gram–Schmidt (MGS). The major aim of this paper is to explain how iterative orthogonalization is incorporated into row-oriented MGS. Our interest in a row-oriented iterative MGS comes from the observation that this method is capable of performing column pivoting. The use of column pivoting delays the deteriorating effects of rounding errors and helps to handle rank-deficient least-squares problems.

A second modification proposed in this paper considers the use of Gram–Schmidt QR factorization for solving linear least-squares problems. The standard solution method is based on one orthogonalization of the right-hand side vector b against the columns of Q. The outcome of this process is the residual vector, r*, and the solution vector, x*. The modified scheme is a natural extension of the standard solution method that allows it to apply iterative orthogonalization. This feature ensures accurate computation of small residuals and helps in cases where Q has some deviation from orthogonality.
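The standard solution scheme described above (orthogonalize b once against the columns of Q, then back-substitute with R) can be sketched with NumPy; as an assumption of this sketch, the QR factorization comes from `numpy.linalg.qr` rather than a Gram-Schmidt routine, and the data are an arbitrary small line-fitting example.

```python
import numpy as np

# Overdetermined system: fit a line c0 + c1*t to three points.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([0.0, 1.0, 3.0])

Q, R = np.linalg.qr(A)        # left-orthogonal Q, upper-triangular R
c = Q.T @ b                   # orthogonalize b against the columns of Q
r = b - Q @ c                 # residual vector r*
x = np.linalg.solve(R, c)     # solution vector x*
```

The computed residual r is orthogonal to the column space of A, which is the defining property of the least-squares solution; the paper's modification would repeat the orthogonalization of b when this residual must be computed to high relative accuracy.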

