Similar articles
Found 20 similar articles (search time: 31 ms)
1.
We present a parallel preconditioned iterative solver for large sparse symmetric positive definite linear systems. The preconditioner is constructed as a proper combination of advanced preconditioning strategies. It can be formally seen as being of domain decomposition type with algebraically constructed overlap. Similar to the classical domain decomposition technique, inexact subdomain solvers are used, based on incomplete Cholesky factorization. The proper preconditioner is shown to be near optimal in minimizing the so‐called K‐condition number of the preconditioned matrix. The efficiency of both serial and parallel versions of the solution method is illustrated on a set of benchmark problems in linear elasticity. Copyright © 2002 John Wiley & Sons, Ltd.
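The preconditioned iterative framework this abstract builds on can be sketched in a few lines. Below is a minimal preconditioned conjugate gradient loop in which a simple Jacobi (diagonal) preconditioner stands in for the incomplete-Cholesky subdomain solves described above; the 1D Laplacian test matrix and all parameter choices are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient for a symmetric positive
    definite matrix A; apply_Minv(r) applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative SPD system: 1D Laplacian, with a Jacobi preconditioner
# standing in for the incomplete-Cholesky subdomain solves.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
diag = np.diag(A)
x = pcg(A, b, lambda r: r / diag)
```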

2.
The solution of large sparse linear systems is often the most time-consuming part of many science and engineering applications. Computational fluid dynamics, circuit simulation, power network analysis, and material science are just a few examples of the application areas in which large sparse linear systems need to be solved effectively. In this paper, we introduce a new parallel hybrid sparse linear system solver for distributed memory architectures that contains both direct and iterative components. We show that by using our solver one can alleviate the drawbacks of direct and iterative solvers, achieving better scalability than with direct solvers and more robustness than with classical preconditioned iterative solvers. Comparisons to well-known direct and iterative solvers on a parallel architecture are provided.

4.
Recent research has shown that in some practically relevant situations like multiphysics flows (Galvin et al., Comput Methods Appl Mech Eng, to appear) divergence‐free mixed finite elements may have a significantly smaller discretization error than standard nondivergence‐free mixed finite elements. To judge the overall performance of divergence‐free mixed finite elements, we investigate linear solvers for the saddle point linear systems arising in ((P_k)^d, P_{k-1}^disc) Scott–Vogelius finite element implementations of the incompressible Navier–Stokes equations. We investigate both direct and iterative solver methods. Due to discontinuous pressure elements in the case of Scott–Vogelius (SV) elements, considerably more solver strategies seem to deliver promising results than in the case of standard mixed finite elements such as Taylor–Hood elements. For direct methods, we extend recent preliminary work using sparse banded solvers on the penalty method formulation to finer meshes and discuss extensions. For iterative methods, we test augmented Lagrangian and H-LU preconditioners with GMRES, on both full and statically condensed systems. Several numerical experiments are provided that show these classes of solvers are well suited for use with SV elements and could deliver an interesting overall performance in several applications. © 2012 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2013

5.
As the computational power of high‐performance computing systems continues to increase by using a huge number of cores or specialized processing units, high‐performance computing applications are increasingly prone to faults. In this paper, we present a new class of numerical fault tolerance algorithms to cope with node crashes in parallel distributed environments. This new resilient scheme is designed at application level and does not require extra resources, that is, additional computational units or computing time, when no fault occurs. In the framework of iterative methods for the solution of sparse linear systems, we present numerical algorithms to extract relevant information from available data after a fault, assuming a separate mechanism ensures fault detection. After data extraction, a well‐chosen part of the missing data is regenerated through interpolation strategies to constitute meaningful inputs to restart the iterative scheme. We have developed these methods, referred to as interpolation–restart techniques, for Krylov subspace linear solvers. After a fault, lost entries of the current iterate computed by the solver are interpolated to define a new initial guess to restart the Krylov method. A well‐suited initial guess is computed by using the entries of the faulty iterate available on surviving nodes. We present two interpolation policies that preserve key numerical properties of well‐known linear solvers, namely, the monotonic decrease of the A‐norm of the error for the conjugate gradient method and the decrease of the residual norm for the generalized minimal residual (GMRES) method. The qualitative numerical behavior of the resulting scheme has been validated with sequential simulations, when the number of faults and the amount of data losses are varied. Finally, the computational costs associated with the recovery mechanism have been evaluated through parallel experiments. Copyright © 2016 John Wiley & Sons, Ltd.
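The interpolation recovery idea can be illustrated on a small dense system. In this sketch (a toy stand-in, not the authors' implementation), a fault wipes a block of entries of the current iterate, and the lost entries are regenerated by solving the corresponding block of equations using the surviving entries as data; the matrix, fault indices, and perturbation size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)          # SPD test matrix
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

# Current iterate just before the fault (close to, but not at, the solution).
x = x_true + 1e-3 * rng.standard_normal(n)

# A node crash wipes the entries with indices in I; the rest survive.
I = np.arange(10, 16)
C = np.setdiff1d(np.arange(n), I)

# Interpolation recovery: solve the I-block of A x = b for the lost
# entries, treating the surviving entries x[C] as known data.
x_rec = x.copy()
x_rec[I] = np.linalg.solve(A[np.ix_(I, I)],
                           b[I] - A[np.ix_(I, C)] @ x[C])
```

The recovered iterate then serves as the initial guess for the restarted Krylov method.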

6.
In this report, we give a semi‐discrete defect correction finite element method for the unsteady incompressible magnetohydrodynamics equations. The defect correction method is an iterative improvement technique for increasing the accuracy of a numerical solution without applying a grid refinement. First, the nonlinear magnetohydrodynamics equations are solved with an artificial viscosity term. Then, the numerical solutions are improved on the same grid by a linearized defect‐correction technique. We then give the numerical analysis, including stability analysis and error analysis, which proves that our method is stable and has an optimal convergence rate. Numerical results are presented to illustrate the effectiveness of our method. Copyright © 2017 John Wiley & Sons, Ltd.
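The defect-correction loop itself is generic and can be sketched for a linear model problem: solve once with a stabilized ("artificial viscosity") operator B ≈ A, then repeatedly correct using the defect b − Ax on the same grid. The operators and shift below are illustrative assumptions, not the MHD discretization of the paper.

```python
import numpy as np

n = 40
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # target operator
B = A + 0.05 * np.eye(n)   # stabilized operator (artificial-viscosity analogue)
b = np.ones(n)

# Step 1: solve once with the stabilized operator.
x = np.linalg.solve(B, b)

# Step 2: defect-correction sweeps on the same grid; each sweep
# solves the stabilized problem for the current defect.
for _ in range(500):
    defect = b - A @ x
    if np.linalg.norm(defect) < 1e-10:
        break
    x = x + np.linalg.solve(B, defect)
```

The iteration converges because the error propagation operator I − B⁻¹A has spectral radius below one when the stabilizing shift is small relative to the spectrum of A.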

7.
Additive Schwarz preconditioners, when including a coarse grid correction, are said to be optimal for certain discretized partial differential equations, in the sense that bounds on the convergence of iterative methods are independent of the mesh size h. Cai and Zou (Numer. Linear Algebra Appl. 2002; 9:379–397) showed with a one‐dimensional example that in the absence of a coarse grid correction the usual GMRES bound contains a factor that grows as the mesh is refined. In this paper we consider the same example and show that the behavior of the method is not well represented by the above‐mentioned bound: we use an a posteriori bound for GMRES from (SIAM Rev. 2005; 47:247–272) and show that for that example the relevant factor is bounded by a constant. Furthermore, for a sequence of meshes, the convergence curves for that one‐dimensional example, and for several two‐dimensional model problems, are very close to each other; thus, the number of preconditioned GMRES iterations needed for convergence for a prescribed tolerance remains almost constant. Copyright © 2008 John Wiley & Sons, Ltd.

8.
Iterative methods of Krylov‐subspace type can be very effective solvers for matrix systems resulting from partial differential equations if appropriate preconditioning is employed. We describe and test block preconditioners based on a Schur complement approximation which uses a multigrid method for finite element approximations of the linearized incompressible Navier‐Stokes equations in streamfunction and vorticity formulation. By using a Picard iteration, we use this technology to solve fully nonlinear Navier‐Stokes problems. The solvers which result scale very well with problem parameters. © 2011 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2011

9.
The use of matchings is a powerful technique for scaling and ordering sparse matrices prior to the solution of a linear system Ax = b. Traditional methods, such as that implemented by the HSL software package MC64, use the Hungarian algorithm to solve the maximum weight maximum cardinality matching problem. However, with advances in the algorithms and hardware used by direct methods for the parallelization of the factorization and solve phases, the serial Hungarian algorithm can represent an unacceptably large proportion of the total solution time for such solvers. Recently, auction algorithms and approximation algorithms have been suggested as alternatives for achieving near‐optimal solutions for the maximum weight maximum cardinality matching problem. In this paper, the efficacy of auction and approximation algorithms as replacements for the Hungarian algorithm is assessed in the context of sparse symmetric direct solvers when used in problems arising from a range of practical applications. High‐cardinality suboptimal matchings are shown to be as effective as optimal matchings for the purposes of scaling. However, matching‐based ordering techniques require that matchings are much closer to optimality before they become effective. The auction algorithm is demonstrated to be capable of finding such matchings significantly faster than the Hungarian algorithm, but our approximation matching approach fails to consistently achieve a sufficient cardinality. Copyright © 2015 The Authors Numerical Linear Algebra with Applications Published by John Wiley & Sons Ltd.
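A textbook-style forward auction for the (dense) assignment problem conveys the mechanism assessed here; this sketch is not the specific implementation benchmarked in the paper, and the eps value and test matrices are illustrative assumptions.

```python
import numpy as np

def auction(benefit, eps=1e-3):
    """Forward auction for the dense assignment problem: returns
    owner[j] = the row assigned to column j, maximizing total benefit
    to within n*eps (exact for integer benefits when eps < 1/n)."""
    n = benefit.shape[0]
    price = np.zeros(n)
    owner = np.full(n, -1, dtype=int)
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = benefit[i] - price          # net value of each column to row i
        j = int(np.argmax(values))
        best = values[j]
        values[j] = -np.inf
        second = values.max()                # second-best value sets the bid
        price[j] += best - second + eps
        if owner[j] != -1:
            unassigned.append(owner[j])      # outbid row re-enters the queue
        owner[j] = i
    return owner
```

Prices rise by at least eps with each bid, so the auction always terminates; the eps term also controls how far the result can be from the optimum.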

10.
Block Krylov subspace methods (KSMs) comprise building blocks in many state‐of‐the‐art solvers for large‐scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well‐explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.  相似文献   

11.
The multilevel adaptive iteration is an attempt to improve both the robustness and efficiency of iterative sparse system solvers. Unlike in most other iterative methods, the order of processing and sequence of operations is not determined a priori. The method consists of a relaxation scheme with an active set strategy and can be viewed as an efficient implementation of the Gauß-Southwell relaxation. With this strategy, computational work is focused on where it can efficiently improve the solution quality. To obtain full efficiency, the algorithm must be used on a multilevel structure. This algorithm is then closely related to multigrid or multilevel preconditioning algorithms, and can be shown to have asymptotically optimal convergence. In this paper the focus is on a variant that uses data structures with a locally uniform grid refinement. The resulting grid system consists of a collection of patches where each patch is a uniform rectangular grid and where adaptive refinement is accomplished by arranging the patches flexibly in space. This construction permits improved implementations that better exploit high performance computer designs. This will be demonstrated by numerical examples.
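The Gauß-Southwell relaxation at the core of the method can be sketched directly: always relax the equation whose residual component is currently largest. This is a single-level toy version, with no multilevel structure or patch-based refinement; the test matrix is an illustrative assumption.

```python
import numpy as np

def gauss_southwell(A, b, tol=1e-10, max_updates=10**6):
    """Gauss-Southwell relaxation: repeatedly relax the equation whose
    residual component is currently largest in magnitude."""
    x = np.zeros_like(b)
    r = b - A @ x
    for _ in range(max_updates):
        i = int(np.argmax(np.abs(r)))
        if abs(r[i]) < tol:
            break
        dx = r[i] / A[i, i]
        x[i] += dx
        r -= dx * A[:, i]      # the residual changes only via column i of A
    return x

n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = gauss_southwell(A, b)
```

Because only the residual entries touched by column i are updated, each relaxation is cheap for sparse A, which is what makes the active-set bookkeeping worthwhile.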

12.
We investigate different methods for computing a sparse approximate inverse M for a given sparse matrix A by minimizing ∥AM − E∥ in the Frobenius norm. Such methods are very useful for deriving preconditioners in iterative solvers, especially in a parallel environment. We compare different strategies for choosing the sparsity structure of M and different ways of solving the small least squares problems related to the computation of each column of M. In particular, we show how to take full advantage of the sparsity of A. © 1998 John Wiley & Sons, Ltd.
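The column-by-column Frobenius-norm minimization can be sketched as follows: for each column j of M, only the columns of A listed in the prescribed sparsity pattern enter a small least-squares problem. This is a dense toy version; the pattern choices below are illustrative assumptions, and a production code would also restrict each small problem to the nonzero rows of the reduced matrix.

```python
import numpy as np

def spai(A, pattern):
    """Frobenius-norm sparse approximate inverse: for each column j,
    minimize ||A m_j - e_j||_2 with m_j restricted to pattern[j].
    Restricting to the columns of A in pattern[j] is how the
    sparsity of A is exploited."""
    n = A.shape[0]
    M = np.zeros((n, n))
    E = np.eye(n)
    for j in range(n):
        J = pattern[j]
        m, *_ = np.linalg.lstsq(A[:, J], E[:, j], rcond=None)
        M[J, j] = m
    return M

n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
# Tridiagonal sparsity pattern for M, mirroring the structure of A.
tri = [[i for i in (j - 1, j, j + 1) if 0 <= i < n] for j in range(n)]
M = spai(A, tri)
```

Enlarging a column's pattern can only decrease that column's least-squares error, which is why pattern-selection strategies trade fill-in against preconditioner quality.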

13.
Lanczos‐type product methods (LTPMs), in which the residuals are defined by the product of stabilizing polynomials and the Bi‐CG residuals, are effective iterative solvers for large sparse nonsymmetric linear systems. Bi‐CGstab(L) and GPBi‐CG are popular LTPMs and can be viewed as two different generalizations of other typical methods, such as CGS, Bi‐CGSTAB, and Bi‐CGStab2. Bi‐CGstab(L) uses stabilizing polynomials of degree L, while GPBi‐CG uses polynomials given by a three‐term recurrence (or equivalently, a coupled two‐term recurrence) modeled after the Lanczos residual polynomials. Therefore, Bi‐CGstab(L) and GPBi‐CG have different aspects of generalization as a framework of LTPMs. In the present paper, we propose novel stabilizing polynomials, which combine the above two types of polynomials. The resulting method is referred to as GPBi‐CGstab(L). Numerical experiments demonstrate that our presented method is more effective than conventional LTPMs.

14.
Iterative solvers appear to be very promising in the development of efficient software, based on Interior Point methods, for large-scale nonlinear optimization problems. In this paper we focus on the use of preconditioned iterative techniques to solve the KKT system arising at each iteration of a Potential Reduction method for convex Quadratic Programming. We consider the augmented system approach and analyze the behaviour of the Constraint Preconditioner with the Conjugate Gradient algorithm. Comparisons with a direct solution of the augmented system and with MOSEK show the effectiveness of the iterative approach on large-scale sparse problems. Work partially supported by the Italian MIUR FIRB Project Large Scale Nonlinear Optimization, grant no. RBNE01WBBB.

15.
The constant γ of the strengthened Cauchy–Bunyakowski–Schwarz (CBS) inequality plays a fundamental role in the convergence rate of multilevel iterative methods. The main purpose of this work is to give an estimate of the constant γ for a three‐dimensional elasticity system. The theoretical results obtained are practically important for the successful implementation of the finite element method to large‐scale modelling of complicated structures as they allow us to construct optimal order algebraic multilevel iterative solvers for a wide class of real‐life elasticity problems. Copyright © 2001 John Wiley & Sons, Ltd.

16.
We study inexact subspace iteration for solving generalized non-Hermitian eigenvalue problems with spectral transformation, with focus on a few strategies that help accelerate preconditioned iterative solution of the linear systems of equations arising in this context. We provide new insights into a special type of preconditioner with “tuning” that has been studied for this algorithm applied to standard eigenvalue problems. Specifically, we propose an alternative way to use the tuned preconditioner to achieve similar performance for generalized problems, and we show that these performance improvements can also be obtained by solving an inexpensive least squares problem. In addition, we show that the cost of iterative solution of the linear systems can be further reduced by using deflation of converged Schur vectors, special starting vectors constructed from previously solved linear systems, and iterative linear solvers with subspace recycling. The effectiveness of these techniques is demonstrated by numerical experiments.

17.
We consider Anderson extrapolation to accelerate the (stationary) Richardson iterative method for sparse linear systems. Using an Anderson mixing at periodic intervals, we assess how this benefits convergence to a prescribed accuracy. The method, named alternating Anderson–Richardson, has appealing properties for high‐performance computing, such as the potential to reduce communication and storage in comparison to more conventional linear solvers. We establish sufficient conditions for convergence, and we evaluate the performance of this technique in combination with various preconditioners through numerical examples. Furthermore, we propose an augmented version of this technique.
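A generic sketch of the alternating Anderson–Richardson idea: plain Richardson sweeps, with an Anderson mixing step every p iterations whose coefficients minimize the norm of a combined residual. The parameters omega, m, p and the test matrix are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def alternating_anderson_richardson(A, b, omega=0.4, m=5, p=4,
                                    tol=1e-10, maxiter=2000):
    """Richardson iteration with an Anderson mixing step every p sweeps.
    The mixing coefficients gamma minimize the combined residual norm."""
    x = np.zeros_like(b)
    X, R = [], []                           # recent iterates and residuals
    for k in range(1, maxiter + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        X.append(x.copy()); R.append(r.copy())
        X, R = X[-(m + 1):], R[-(m + 1):]   # keep a sliding window
        if k % p == 0 and len(R) > 1:
            dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
            dR = np.column_stack([R[i + 1] - R[i] for i in range(len(R) - 1)])
            gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
            x = x - dX @ gamma + omega * (r - dR @ gamma)  # Anderson step
        else:
            x = x + omega * r                              # Richardson sweep
    return x

n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = alternating_anderson_richardson(A, b)
```

For a linear system, dR = −A dX, so the Anderson step's residual is (I − ωA)(r − dR γ); since γ minimizes the norm of r − dR γ, the step never increases the residual when plain Richardson contracts.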

18.
Applied Mathematics Letters, 2006, 19(11):1191–1197
When some rows of the system matrix and a preconditioner coincide, preconditioned iterations can be reduced to a sparse subspace. Taking advantage of this property can lead to considerable memory and computational savings. This is particularly useful with the GMRES method. We consider the iterative solution of a discretized partial differential equation on this sparse subspace. With a domain decomposition method and a fictitious domain method the subspace corresponds to a small neighborhood of an interface. As numerical examples we solve the Helmholtz equation using a fictitious domain method and an elliptic equation with a jump in the diffusion coefficient using a separable preconditioner.

19.
20.
Ming Zhou, PAMM, 2010, 10(1):553–554
We consider preconditioned subspace iterations for the numerical solution of discretized elliptic eigenvalue problems. For these iterative solvers, the convergence theory is still an incomplete puzzle. We generalize some results from the classical convergence theory of inverse subspace iterations, as given by Parlett, and some recent results on the convergence of preconditioned vector iterations. To this end, we use a geometric cone representation and prove some new trigonometric inequalities for subspace angles and canonical angles. (© 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

