Similar Documents
20 similar documents found (search time: 15 ms)
1.
江燕  黄崇超  余谦 《数学杂志》2004,24(6):669-674
This paper presents an inexact infeasible interior-point algorithm for box-constrained linear programming. The search directions used by the algorithm need only be computed to a relative accuracy; such directions can be obtained with Krylov subspace iterative methods such as CG or QMR. Finally, global convergence of the algorithm is proved.
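A minimal sketch of the idea in Python: one interior-point search direction is obtained by running CG on an SPD normal-equations operator, stopped at a relative tolerance only. The normal-equations form, the scaling D^2 = X S^{-1}, and all names are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch of one *inexact* interior-point search direction:
# solve the normal equations A D^2 A^T dy = r with CG to a relative
# tolerance. The setup (D^2 = X S^{-1}, random data) is illustrative only.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
m, n = 20, 50
A = rng.standard_normal((m, n))
x = rng.random(n) + 0.1          # strictly positive primal iterate
s = rng.random(n) + 0.1          # strictly positive dual slacks
r = rng.standard_normal(m)       # current right-hand side / residual

d = x / s                        # diagonal of D^2 = X S^{-1} (SPD operator below)
M = LinearOperator((m, m), matvec=lambda y: A @ (d * (A.T @ y)), dtype=float)

dy, info = cg(M, r)              # loosen via 'rtol' (SciPy >= 1.12) or 'tol' (older)
print(info, np.linalg.norm(A @ (d * (A.T @ dy)) - r) / np.linalg.norm(r))
```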

2.
In this paper, by exploiting the special block and sparse structure of the coefficient matrix, we present a new preconditioning strategy for solving large sparse linear systems arising in the time-dependent distributed control problem involving the heat equation with two different functions. First a natural order reduction is performed, and then the reduced-order linear system of equations is solved by the preconditioned MINRES algorithm with a new preconditioning technique. The spectral properties of the preconditioned matrix are analyzed. Numerical results demonstrate that the preconditioning strategy remains effective over a wide range of mesh sizes and values of the regularization parameter.
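The paper's preconditioner is specific to the control problem; as a generic hedge, the sketch below runs SciPy's MINRES on a small symmetric indefinite block system with a block-diagonal preconditioner. The blocks, the Schur-complement surrogate, and all sizes are made-up stand-ins.

```python
# Illustrative preconditioned MINRES on a symmetric indefinite block
# system of a saddle-point flavor (NOT the paper's reduced system).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, minres, splu

n = 100
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # stiffness-like
Mb = sp.identity(n, format="csc")                                        # mass-like
A = sp.bmat([[Mb, K.T], [K, -Mb]], format="csc")                         # symmetric indefinite
b = np.ones(2 * n)

# Block-diagonal preconditioner diag(Mb, S) with a crude Schur surrogate S;
# exact sparse factorizations are affordable at this toy size.
luM = splu(Mb)
luS = splu((K @ K.T + Mb).tocsc())

P = LinearOperator((2 * n, 2 * n), dtype=float,
                   matvec=lambda v: np.concatenate([luM.solve(v[:n]),
                                                    luS.solve(v[n:])]))
x, info = minres(A, b, M=P)
print(info, np.linalg.norm(A @ x - b))
```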

3.
Most currently prevalent iterative methods can be classified as so-called extended Krylov subspace methods; this paper proposes a class of iterative methods that do not fall into this category. In contrast with traditional Krylov subspace methods, which always depend on matrix-vector multiplication with a fixed matrix, the newly introduced methods (the so-called (progressively) accumulated projection methods, or AP (PAP) for short) use a projection matrix that varies at every iteration to form a subspace from which an approximate solution is sought. More importantly, an accelerated approach (called APAP) is introduced to improve the convergence of the PAP method. Numerical experiments demonstrate some surprisingly improved convergence behavior. Comparisons with benchmark methods (block Jacobi and GMRES) are made, and one can see a remarkable advantage of APAP in some examples. APAP is also used to solve systems with an extremely ill-conditioned coefficient matrix (the Hilbert matrix), and numerical experiments show that it gives very satisfactory results even when the size of the system is up to a few thousand.
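The abstract gives no algorithmic detail, so the following is only a generic illustration of a projection method whose subspace changes at every sweep (a randomized block-Kaczmarz flavor), not the authors' AP/PAP/APAP schemes.

```python
# Generic "varying projection" iteration: each sweep projects the iterate
# onto the solution set of a freshly chosen block of equations.
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test system
b = rng.standard_normal(n)
x = np.zeros(n)

for sweep in range(200):
    rows = rng.choice(n, size=20, replace=False)  # a different subspace each sweep
    Ai, bi = A[rows], b[rows]
    # project x onto the affine set {y : Ai y = bi}
    x += Ai.T @ np.linalg.solve(Ai @ Ai.T, bi - Ai @ x)

print(np.linalg.norm(A @ x - b))                  # residual shrinks across sweeps
```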

4.
Krylov subspace methods and their variants are presently the favorite iterative methods for solving a system of linear equations. Although this is a purely linear-algebra problem, it can be tackled via the theory of formal orthogonal polynomials. This theory helps to explain the origin of the algorithms used to implement Krylov subspace methods and, moreover, the use of formal orthogonal polynomials brings a major simplification to the treatment of some numerical problems related to these algorithms. This paper reviews this approach in the case of the Lanczos method and its variants, the novelty being the introduction of a preconditioner.
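For concreteness, here is the plain (unpreconditioned) Lanczos three-term recurrence, whose coefficients are exactly what the formal-orthogonal-polynomial viewpoint describes; a minimal sketch, not the paper's preconditioned variant.

```python
import numpy as np

def lanczos(A, v, m):
    """Three-term Lanczos recurrence: returns alpha, beta and the basis V."""
    n = len(v)
    V = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        beta[j] = np.linalg.norm(w)
        V[:, j + 1] = w / beta[j]
    return alpha, beta, V

rng = np.random.default_rng(2)
B = rng.standard_normal((50, 50))
A = B + B.T                                   # symmetric test matrix
alpha, beta, V = lanczos(A, rng.standard_normal(50), 10)
print(np.max(np.abs(V[:, :10].T @ V[:, :10] - np.eye(10))))  # ~ 0: orthonormal basis
```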

5.
In this paper, we first give a result that links any global Krylov method for solving linear systems with several right-hand sides to the corresponding classical Krylov method. Then, we propose a general framework for matrix Krylov subspace methods for linear systems with multiple right-hand sides. Our approach uses global projection techniques: it is based on the Global Generalized Hessenberg Process (GGHP), which uses the Frobenius scalar product and constructs a basis of a matrix Krylov subspace, combined with a Galerkin or a norm-minimizing condition. To accelerate the convergence of global methods, we introduce weighted global methods, in which the GGHP uses a different scalar product at each restart. Experimental results are presented to show the good performance of the weighted global methods. AMS subject classification 65F10
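A small sketch of the global Arnoldi/Hessenberg idea the framework builds on: the "vectors" are n-by-s blocks, and the usual dot product is replaced by the Frobenius inner product <X, Y>_F = trace(X^T Y). Weighting (a different scalar product per restart) is omitted.

```python
import numpy as np

def global_arnoldi(A, B, m):
    """F-orthonormal basis V[0..m] of the matrix Krylov subspace span{B, AB, ...}."""
    V = [B / np.linalg.norm(B, "fro")]
    H = np.zeros((m + 1, m))
    for j in range(m):
        W = A @ V[j]
        for i in range(j + 1):
            H[i, j] = np.sum(V[i] * W)        # Frobenius inner product <V_i, W>_F
            W = W - H[i, j] * V[i]
        H[j + 1, j] = np.linalg.norm(W, "fro")
        V.append(W / H[j + 1, j])
    return V, H

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 60)) + 60 * np.eye(60)
V, H = global_arnoldi(A, rng.standard_normal((60, 4)), 8)   # 4 right-hand sides
print(np.sum(V[0] * V[5]))                                  # ~ 0: F-orthonormality
```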

6.
The restarted FOM method presented by Simoncini [7], which exploits the natural collinearity of all the residuals, is an efficient method for solving shifted systems: it generates the same Krylov subspace when the shifts are handled simultaneously. However, restarting slows down the convergence. We present a practical method for solving shifted systems by adding some Ritz vectors to the Krylov subspace to form an augmented Krylov subspace. Numerical experiments illustrate that the augmented FOM approach (restarted version) can converge more quickly than the restarted FOM method.
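The property underlying all shifted-FOM variants is the shift invariance of Krylov subspaces, K_m(A, b) = K_m(A + sigma*I, b), so one basis serves every shift. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, sigma = 40, 6, 2.5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

def krylov_onb(M, v, m):
    """Orthonormal basis of span{v, Mv, ..., M^(m-1) v}."""
    K = np.column_stack([np.linalg.matrix_power(M, j) @ v for j in range(m)])
    return np.linalg.qr(K)[0]

Q1 = krylov_onb(A, b, m)
Q2 = krylov_onb(A + sigma * np.eye(n), b, m)
print(np.linalg.norm(Q1 - Q2 @ (Q2.T @ Q1)))   # ~ 0: same subspace for any shift
```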

7.
In this paper, we present numerical methods for computing approximate solutions to large continuous-time and discrete-time algebraic Riccati equations. The proposed methods are projection methods onto block Krylov subspaces. We use the block Arnoldi process to construct an orthonormal basis of the corresponding block Krylov subspace and then extract low-rank approximate solutions. We consider the sequential version of the block Arnoldi algorithm, incorporating a deflation technique that allows us to delete linearly and almost linearly dependent vectors in the block Krylov subspace sequences. We give some theoretical results and present numerical experiments for large problems.

8.
Krylov iterative methods usually solve an optimization problem per iteration to obtain a vector whose components are the step lengths associated with the previous search directions. This vector can be viewed as the solution of a multiparameter optimization problem. In that sense, Krylov methods can be combined with the spectral choice of step length that has recently been developed to accelerate descent methods in optimization. In this work, we discuss different spectral variants of Krylov methods and present encouraging preliminary numerical experiments, with and without preconditioning.
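The "spectral choice of step length" referred to here is the Barzilai-Borwein step. A minimal sketch inside plain gradient descent on a convex quadratic (not one of the paper's combined Krylov variants):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)          # SPD quadratic 1/2 x^T A x - b^T x
b = rng.standard_normal(n)

x, g = np.zeros(n), -b               # gradient g = A x - b
alpha = 1.0 / np.linalg.norm(g)      # initial step length
for k in range(60):
    x_new = x - alpha * g
    g_new = A @ x_new - b
    s, y = x_new - x, g_new - g
    alpha = (s @ s) / (s @ y)        # BB1 (spectral) step length
    x, g = x_new, g_new

print(np.linalg.norm(A @ x - b))     # gradient norm after 60 BB steps
```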

9.
We consider numerical methods for the incompressible Reynolds-averaged Navier–Stokes equations discretized by finite difference techniques on non-staggered grids in body-fitted coordinates. A segregated approach is used to solve the pressure–velocity coupling problem. Several iterative pressure linear solvers, including Krylov subspace and multigrid methods and their combination, have been developed to compare the efficiency of each method and to design a robust solver. Three-dimensional numerical experiments carried out on scalar and vector machines and performed on different fluid flow problems show that a combination of multigrid and Krylov subspace methods is a robust and efficient pressure solver.

10.
The concept of a representative spectrum is introduced in the context of Newton-Krylov methods. This concept allows a better understanding of convergence-rate accelerating techniques for Krylov-subspace iterative methods in CFD applications of the Newton-Krylov approach to iteratively solving sets of non-linear equations. The dependence of the representative spectrum on several parameters such as mesh, Mach number, or discretization technique is studied and analyzed.

11.
We derive a priori error bounds for block Krylov subspace methods in terms of the sine of the angle between the desired invariant subspace and the block Krylov subspace. The obtained results can be seen as the block analogue of the classical a priori estimates for standard projection methods.

12.
The FEAST eigenvalue algorithm is a subspace iteration algorithm that uses contour integration to obtain the eigenvectors of a matrix for the eigenvalues that are located in any user-defined region in the complex plane. By computing small numbers of eigenvalues in specific regions of the complex plane, FEAST is able to naturally parallelize the solution of eigenvalue problems by solving for multiple eigenpairs simultaneously. The traditional FEAST algorithm is implemented by directly solving collections of shifted linear systems of equations; in this paper, we describe a variation of the FEAST algorithm that uses iterative Krylov subspace algorithms to solve the shifted linear systems inexactly. We show that this iterative FEAST algorithm (which we call IFEAST) is mathematically equivalent to a block Krylov subspace method for solving eigenvalue problems. By using Krylov subspaces indirectly through solving shifted linear systems, rather than directly in projecting the eigenvalue problem, IFEAST can solve eigenvalue problems using very large-dimension Krylov subspaces without ever having to store a basis for those subspaces. IFEAST thus combines the flexibility and power of Krylov methods, requiring only matrix–vector multiplication for solving eigenvalue problems, with the natural parallelism of the traditional FEAST algorithm. We discuss the relationship between IFEAST and more traditional Krylov methods and provide numerical examples illustrating its behavior.
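A bare-bones, dense FEAST-style iteration for orientation, with direct shifted solves rather than IFEAST's inexact Krylov solves; the contour, quadrature rule, and sizes are arbitrary choices:

```python
# Approximate the spectral projector by quadrature on a circle enclosing
# the wanted eigenvalues, apply it to a random block, then Rayleigh-Ritz.
import numpy as np

rng = np.random.default_rng(6)
n, m0, N = 80, 8, 8
A = np.diag(np.arange(1.0, n + 1))            # known eigenvalues 1..n
c, r = 3.0, 2.5                               # circle enclosing {1,...,5}
Y = rng.standard_normal((n, m0))              # random starting block

for it in range(4):
    Q = np.zeros_like(Y)
    for k in range(N):                        # midpoint rule on the circle
        w = np.exp(1j * 2 * np.pi * (k + 0.5) / N)
        Q += np.real((r * w / N) * np.linalg.solve((c + r * w) * np.eye(n) - A, Y))
    Qb = np.linalg.qr(Q)[0]                   # Rayleigh-Ritz on the filtered block
    lam, W = np.linalg.eigh(Qb.T @ A @ Qb)
    Y = Qb @ W

print(np.sort(lam)[:5])                       # ~ [1, 2, 3, 4, 5]
```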

13.
In the present paper, we propose Krylov-based methods for solving large-scale differential Sylvester matrix equations having a low-rank constant term. We present two new approaches for solving such differential matrix equations. The first approach is based on the integral expression of the exact solution and a Krylov method for computing the exponential of a matrix times a block of vectors. In the second approach, we first project the initial problem onto a block (or extended block) Krylov subspace to obtain a low-dimensional differential Sylvester matrix equation. The latter problem is then solved by a numerical integration method such as the backward differentiation formula or a Rosenbrock method, and the obtained solution is used to build a low-rank approximate solution of the original problem. We give some new theoretical results, such as a simple expression of the residual norm and upper bounds for the norm of the error. Some numerical experiments are given in order to compare the two approaches.
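The first approach needs the action of a matrix exponential on a block of vectors; SciPy exposes this primitive directly. A minimal illustration of e^{tA} B alone, not the full differential Sylvester solver:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

n, s, t = 200, 3, 0.1
A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (n + 1) ** 2  # 1-D Laplacian
B = np.ones((n, s))                   # low-rank block (s columns)

Y = expm_multiply(t * A, B)           # action of e^{tA} on B, without forming e^{tA}
print(Y.shape, np.linalg.norm(Y))
```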

14.
In this paper, a relaxed Hermitian and skew-Hermitian splitting (RHSS) preconditioner is proposed for saddle point problems arising from the element-free Galerkin (EFG) discretization method. The EFG method is one of the most widely used meshfree methods for solving partial differential equations. The RHSS preconditioner is constructed to be much closer to the coefficient matrix than the well-known HSS preconditioner, resulting in an RHSS fixed-point iteration. Convergence of the RHSS iteration is analyzed, and an optimal parameter, which minimizes the spectral radius of the iteration matrix, is described. Using the RHSS preconditioner to accelerate the convergence of some Krylov subspace methods (like GMRES) is also studied. Theoretical analyses show that the eigenvalues of the RHSS-preconditioned matrix are real and located in a positive interval. The eigenvector distribution and an upper bound on the degree of the minimal polynomial of the preconditioned matrix are obtained. A practical parameter is suggested for implementing the RHSS preconditioner. Finally, some numerical experiments illustrate the effectiveness of the new preconditioner.
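Details of RHSS aside, the classical HSS preconditioner it relaxes is easy to sketch: with H = (A + A^T)/2 and S = (A - A^T)/2, apply P = (aI + H)(aI + S) inside GMRES. Dense toy problem, arbitrary a, and the usual 1/(2a) scaling is dropped; this is not the paper's RHSS.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(7)
n, a = 120, 1.0
A = rng.standard_normal((n, n)) + n * np.eye(n)   # non-symmetric toy matrix
b = rng.standard_normal(n)

H, S = (A + A.T) / 2, (A - A.T) / 2               # Hermitian / skew-Hermitian parts
fH = lu_factor(a * np.eye(n) + H)
fS = lu_factor(a * np.eye(n) + S)

# P^{-1} v = (aI + S)^{-1} (aI + H)^{-1} v  (classical HSS preconditioner)
P = LinearOperator((n, n), dtype=float,
                   matvec=lambda v: lu_solve(fS, lu_solve(fH, v)))
x, info = gmres(A, b, M=P)
print(info, np.linalg.norm(A @ x - b))
```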

15.

The vast majority of linear programming interior point algorithms successively move from an interior solution to an improved interior solution by following a single search direction, which corresponds to solving a one-dimensional subspace linear program at each iteration. On the other hand, two-dimensional search interior point algorithms select two search directions, and determine a new and improved interior solution by solving a two-dimensional subspace linear program at each step. This paper presents primal and dual two-dimensional search interior point algorithms derived from affine and logarithmic barrier search directions. Both search directions are determined by randomly partitioning the objective function into two orthogonal vectors. Computational experiments performed on benchmark instances demonstrate that these new methods improve the average CPU time by approximately 12% and the average number of iterations by 14%.
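The abstract does not specify the randomized partitioning scheme; one natural realization is to project the objective c onto a random direction, which yields c = c1 + c2 with c1^T c2 = 0:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 10
c = rng.standard_normal(n)            # LP objective vector
z = rng.standard_normal(n)            # random direction

c1 = (c @ z) / (z @ z) * z            # component of c along z
c2 = c - c1                           # remainder, orthogonal to c1
print(c1 @ c2, np.linalg.norm(c1 + c2 - c))   # ~ 0 and ~ 0
```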


16.
We consider the approximation of operator functions in resolvent Krylov subspaces. Besides many other applications, such approximations are currently of high interest for the approximation of φ-functions that arise in the numerical solution of evolution equations by exponential integrators. It is well known that Krylov subspace methods for matrix functions without exponential decay show superlinear convergence behaviour if the number of steps is larger than the norm of the operator. Thus, Krylov approximations may fail to converge for unbounded operators. In this paper, we analyse a rational Krylov subspace method which converges not only for finite element or finite difference approximations to differential operators but even for abstract, unbounded operators whose field of values lies in the left half plane. In contrast to standard Krylov methods, the convergence will be independent of the norm of the discretised operator and thus of the spatial discretisation. We will discuss efficient implementations for finite element discretisations and illustrate our analysis with numerical experiments.
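A sketch of the building block of such a rational Krylov method: each step applies a resolvent (I - A/gamma)^{-1} instead of A itself, which is what decouples convergence from the norm of the discretized operator. The single pole gamma and the test operator are arbitrary choices.

```python
import numpy as np

def rational_krylov_basis(A, v, m, gamma):
    """Orthonormal basis built from repeated resolvent solves (single pole)."""
    n = len(v)
    I = np.eye(n)
    V = np.zeros((n, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, m):
        w = np.linalg.solve(I - A / gamma, V[:, j - 1])   # resolvent application
        for i in range(j):                                # Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

n = 100
A = -(n + 1) ** 2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))  # 1-D Laplacian
V = rational_krylov_basis(A, np.ones(n), 8, gamma=10.0)
print(np.linalg.norm(V.T @ V - np.eye(8)))    # ~ 0: orthonormal rational Krylov basis
```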

17.
The linear conjugate gradient method is optimal for convex quadratic minimization thanks to its Krylov subspace minimization property. The advent of the limited-memory BFGS method and the Barzilai-Borwein gradient method, however, has heavily restricted the use of the conjugate gradient method for large-scale nonlinear optimization. This is, to a great extent, due to the requirement of a relatively exact line search at each iteration and the loss of conjugacy of the search directions on various occasions. By contrast, the limited-memory BFGS method and the Barzilai-Borwein gradient method share the so-called asymptotic one-stepsize-per-line-search property: the trial stepsize in the method is asymptotically accepted by the line search when the iterate is close to the solution. This paper focuses on the analysis of the subspace minimization conjugate gradient method of Yuan and Stoer (1995). Specifically, by choosing the parameter in the method according to the Barzilai-Borwein idea, we obtain some efficient Barzilai-Borwein conjugate gradient (BBCG) methods. Initial numerical experiments show that one of the variants, BBCG3, is especially efficient among many others without line searches. This variant of the BBCG method might enjoy the asymptotic one-stepsize-per-line-search property and become a strong candidate for large-scale nonlinear optimization.

18.
We consider the task of computing solutions of linear systems that differ only by a shift with the identity matrix, as well as linear systems with several different right-hand sides. In the past, Krylov subspace methods have been developed which exploit either the need for solutions to multiple right-hand sides (e.g., deflation-type methods and block methods) or multiple shifts (e.g., shifted CG) with some success. In this paper we present a block Krylov subspace method which, based on a block Lanczos process, exploits both features, shifts and multiple right-hand sides, at once. Such situations arise, for example, in lattice quantum chromodynamics (QCD) simulations within the Rational Hybrid Monte Carlo (RHMC) algorithm. We present numerical evidence that our method is superior to applying other iterative methods to each of the systems individually as well as, in typical situations, to shifted or block Krylov subspace methods.
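How one Krylov basis can serve many shifts, in the single-vector symmetric case (the paper's block Lanczos method extends this to blocks of right-hand sides): build the factorization once, then solve each projected system with a shifted diagonal. Illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 300, 40
Q = rng.standard_normal((n, n))
A = Q @ Q.T / n + np.eye(n)                   # SPD test matrix
b = rng.standard_normal(n)

# One Arnoldi factorization A V_m = V_{m+1} H ...
V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
beta = np.linalg.norm(b)
V[:, 0] = b / beta
for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

# ... reused for every shift: only the projected matrix gets shifted.
for s in [0.0, 0.5, 2.0]:
    y = np.linalg.solve(H[:m, :m] + s * np.eye(m), beta * np.eye(m)[:, 0])
    x = V[:, :m] @ y                          # FOM iterate for (A + s I) x = b
    print(s, np.linalg.norm((A + s * np.eye(n)) @ x - b))
```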

19.
Block Krylov subspace methods (KSMs) are building blocks in many state-of-the-art solvers for large-scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well-explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations, with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.
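The compression step in isolation: if the residual is kept in low-rank factored form Z Z^T, a truncated SVD of Z bounds the rank across restart cycles. Tolerance and sizes are arbitrary:

```python
import numpy as np

def compress(Z, tol=1e-8):
    """Truncate the factor Z so that Z Z^T is preserved to relative tol."""
    U, sv, _ = np.linalg.svd(Z, full_matrices=False)
    k = int(np.sum(sv > tol * sv[0]))   # number of significant singular values
    return U[:, :k] * sv[:k]

rng = np.random.default_rng(10)
Z = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 40))  # rank 3, 40 columns
Zc = compress(Z)
print(Zc.shape, np.linalg.norm(Z @ Z.T - Zc @ Zc.T))   # (500, 3) and ~ 0
```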

20.
The CMRH (Changing Minimal Residual method based on the Hessenberg process) method is a Krylov subspace method for solving large linear systems with non-symmetric coefficient matrices. CMRH generates a (non-orthogonal) basis of the Krylov subspace through the Hessenberg process and minimizes a quasi-residual norm. On dense matrices, the CMRH method is less expensive and requires less storage than other Krylov methods. In this work, we describe Matlab codes for the best of the available implementations; Fortran codes for sequential and parallel implementations are also presented.
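A sketch of the pivoted Hessenberg process that CMRH is built on: an LU-style elimination produces a non-orthogonal Krylov basis satisfying the usual Arnoldi-like relation A L_m = L_{m+1} H. This is only the basis-building step, not the full CMRH solver.

```python
import numpy as np

def hessenberg_process(A, b, m):
    """Pivoted Hessenberg process: non-orthogonal basis L, Hessenberg H."""
    n = len(b)
    L = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    p = int(np.argmax(np.abs(b)))
    pivots = [p]
    L[:, 0] = b / b[p]                 # unit entry at the pivot position
    for j in range(m):
        u = A @ L[:, j]
        for i in range(j + 1):         # eliminate the previous pivot entries
            H[i, j] = u[pivots[i]]
            u -= H[i, j] * L[:, i]
        p = int(np.argmax(np.abs(u)))
        pivots.append(p)
        H[j + 1, j] = u[p]
        L[:, j + 1] = u / u[p]
    return L, H

rng = np.random.default_rng(11)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
L, H = hessenberg_process(A, b, 10)
print(np.linalg.norm(A @ L[:, :10] - L @ H))   # ~ 0: Arnoldi-like relation
```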

