Similar articles
 20 similar articles found (search time: 0 ms)
1.
This paper introduces a robust preconditioner for general sparse matrices based on low‐rank approximations of the Schur complement in a domain decomposition framework. In this 'Schur Low Rank' (SLR) preconditioning approach, the coefficient matrix is first decoupled by a graph partitioner, and a low‐rank correction is then exploited to compute an approximate inverse of the Schur complement associated with the interface unknowns. The method avoids explicit formation of the Schur complement. We show the feasibility of this strategy for a model problem and conduct a detailed spectral analysis of the relation between the low‐rank correction and the quality of the preconditioner. We first introduce the SLR preconditioner for symmetric positive definite matrices, and for symmetric indefinite matrices whose interface matrices are symmetric positive definite. Extensions to general symmetric indefinite matrices as well as to nonsymmetric matrices are also discussed. Numerical experiments on general matrices illustrate the robustness and efficiency of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.

2.
This paper introduces techniques based on diagonal threshold tolerance for developing multi‐elimination and multi‐level incomplete LU (ILUM) factorization preconditioners for solving general sparse linear systems. Existing heuristics, based solely on the adjacency graph of the matrix, have been used to find independent sets; they are not robust for matrices arising from certain applications in which the diagonal entries may be small or zero. New heuristic strategies based on both the adjacency graph and the diagonal values of the matrix for finding independent sets are introduced. Analytical bounds for the factorization and preconditioning errors are obtained for the case of a two‐level analysis. These bounds provide useful information for designing robust ILUM preconditioners. Extensive numerical experiments are conducted to compare the robustness and efficiency of the various heuristic strategies. Copyright © 1999 John Wiley & Sons, Ltd.
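A minimal sketch of an independent-set heuristic that combines the adjacency graph with a diagonal threshold (the function name and the simple greedy rule are hypothetical; the paper's strategies are more elaborate):

```python
import numpy as np

def independent_set(A, diag_tol=1e-8):
    """Greedy independent set on the adjacency graph of A, rejecting nodes
    whose diagonal entry is small -- a toy version of a diagonal-threshold
    heuristic. Rows with large diagonals are visited first."""
    n = A.shape[0]
    chosen, blocked = [], set()
    for i in sorted(range(n), key=lambda i: -abs(A[i, i])):
        if i in blocked or abs(A[i, i]) <= diag_tol:
            continue
        chosen.append(i)
        # block all neighbours in the adjacency graph
        blocked.update(j for j in range(n)
                       if j != i and (A[i, j] != 0 or A[j, i] != 0))
    return chosen

A = np.array([[4.0, 1, 0, 0],
              [1, 0.0, 1, 0],   # zero diagonal: excluded by the threshold
              [0, 1, 3.0, 0],
              [0, 0, 0, 2.0]])
print(independent_set(A))  # → [0, 2, 3]
```

A purely graph-based heuristic could select node 1 here, producing a zero pivot during elimination; the diagonal threshold avoids that.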

3.
We study the problem of reconstructing a low‐rank matrix, where the input is an n × m matrix M over a field and the goal is to reconstruct a (near‐optimal) matrix that is low‐rank and close to M under some distance function Δ. Furthermore, the reconstruction must be local, i.e., it provides access to any desired entry of the output matrix by reading only a few entries of the input M (ideally, a number independent of the matrix dimensions n and m). Our formulation of this problem is inspired by the local reconstruction framework of Saks and Seshadhri (SICOMP, 2010). Our main result is a local reconstruction algorithm for the case where Δ is the normalized Hamming distance (between matrices). Given M that is close to a matrix of rank d (with the rank d and the distance parameter given as input), this algorithm computes with high probability a rank‐d matrix that is close to M. This is a local algorithm that proceeds in two phases. The preprocessing phase reads only a few random entries of M and stores a small data structure. The query phase deterministically outputs a desired entry by reading only the data structure and 2d additional entries of M. We also consider local reconstruction in an easier setting, where the algorithm can read an entire matrix column in a single operation. When Δ is the normalized Hamming distance between vectors, we derive an algorithm that runs in polynomial time by applying our main result for matrix reconstruction. For comparison, when Δ is the truncated Euclidean distance, we analyze sampling algorithms using statistical learning tools. A preliminary version of this paper appears in ECCC; see http://eccc.hpi-web.de/report/2015/128/. © 2017 Wiley Periodicals, Inc. Random Struct. Alg., 51, 607–630, 2017

4.
This paper presents a general preconditioning method based on a multilevel partial elimination approach. The basic step in constructing the preconditioner is to separate the initial points into two parts. The first part consists of 'block' independent sets, or 'aggregates': unknowns of two different aggregates have no coupling between them, but those in the same aggregate may be coupled. The nodes not in the first part constitute what might be called the 'coarse' set, and it is natural to call the nodes in the first part 'fine' nodes. The idea of the method is to form the Schur complement associated with the coarse set. This leads to a natural block LU factorization which can be used as a preconditioner for the system. This system is then solved recursively, using as a preconditioner the factorization obtained at the next level. Iterations between levels are allowed. One interesting aspect of the method is that it provides a common framework for many other techniques. Numerical experiments are reported which indicate that the method can be fairly robust. Copyright © 2002 John Wiley & Sons, Ltd.
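The block LU factorization induced by the fine/coarse splitting can be sketched as follows (an exact two-level solve in NumPy for illustration; in the actual preconditioner the fine-block and Schur-complement solves are approximate and applied recursively):

```python
import numpy as np

rng = np.random.default_rng(1)
nf, nc = 50, 10   # 'fine' (independent-set) and 'coarse' unknowns

# diagonally dominant test matrix, partitioned by the fine/coarse splitting
A = rng.standard_normal((nf + nc, nf + nc)) + (nf + nc) * np.eye(nf + nc)
F, G = A[:nf, :nf], A[:nf, nf:]
H, C = A[nf:, :nf], A[nf:, nf:]

# block LU: A = [[F, 0], [H, S]] [[I, F^{-1} G], [0, I]] with S = C - H F^{-1} G
S = C - H @ np.linalg.solve(F, G)

def apply_prec(b):
    """Solve A x = b via the two-level block factorization. A practical
    preconditioner replaces the F- and S-solves by approximate ones."""
    b1, b2 = b[:nf], b[nf:]
    y1 = np.linalg.solve(F, b1)               # forward solve with the fine block
    y2 = np.linalg.solve(S, b2 - H @ y1)      # coarse (Schur-complement) solve
    x1 = y1 - np.linalg.solve(F, G @ y2)      # back-substitution
    return np.concatenate([x1, y2])

b = rng.standard_normal(nf + nc)
print(np.allclose(apply_prec(b), np.linalg.solve(A, b)))  # True
```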

5.
This work is concerned with the numerical solution of large‐scale linear matrix equations. The most straightforward approach computes the solution X from an mn × mn linear system, typically limiting the feasible values of m, n to a few hundred at most. Our new approach exploits the fact that X can often be well approximated by a low‐rank matrix. It combines greedy low‐rank techniques with Galerkin projection and preconditioned gradients. In turn, only linear systems of size m × m and n × n need to be solved. Moreover, these linear systems inherit the sparsity of the coefficient matrices, which makes it possible to address linear matrix equations as large as m = n = O(10^5). Numerical experiments demonstrate that the proposed methods perform well for generalized Lyapunov equations. Even for the case of standard Lyapunov equations, our methods can be advantageous, as we do not need to assume that C has low rank. Copyright © 2015 John Wiley & Sons, Ltd.
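That the solution of a Lyapunov equation with a low‐rank right-hand side is itself numerically low-rank, which is the property greedy low-rank approaches exploit, can be checked directly for a small symmetric example (the test matrix and tolerance are our choices, solved here via the eigendecomposition rather than any of the paper's methods):

```python
import numpy as np

# Standard Lyapunov equation A X + X A^T = -B B^T with rank-1 right-hand side,
# solved in the eigenbasis of the symmetric stable matrix A.
n = 200
A = -4 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
B = np.ones((n, 1))

lam, Q = np.linalg.eigh(A)
C = Q.T @ (B @ B.T) @ Q
Y = -C / (lam[:, None] + lam[None, :])   # entrywise division in the eigenbasis
X = Q @ Y @ Q.T

# residual check, then singular-value decay: X is numerically low-rank
assert np.allclose(A @ X + X @ A.T, -B @ B.T)
s = np.linalg.svd(X, compute_uv=False)
print(int(np.sum(s > 1e-10 * s[0])))     # far smaller than n
```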

6.
Preconditioned Krylov subspace (KSP) methods are widely used for solving large‐scale sparse linear systems arising from numerical solutions of partial differential equations (PDEs). These linear systems are often nonsymmetric due to the nature of the PDEs, boundary or jump conditions, or discretization methods. While implementations of preconditioned KSP methods are usually readily available, it is unclear to users which methods are the best for different classes of problems. In this work, we present a comparison of some KSP methods, including GMRES, TFQMR, BiCGSTAB, and QMRCGSTAB, coupled with three classes of preconditioners, namely, Gauss–Seidel, incomplete LU factorization (including ILUT, ILUTP, and multilevel ILU), and algebraic multigrid (including BoomerAMG and ML). Theoretically, we compare the mathematical formulations and operation counts of these methods. Empirically, we compare the convergence and serial performance for a range of benchmark problems from numerical PDEs in two and three dimensions with up to millions of unknowns and also assess the asymptotic complexity of the methods as the number of unknowns increases. Our results show that GMRES tends to deliver better performance when coupled with an effective multigrid preconditioner, but it is less competitive with an ineffective preconditioner due to restarts. BoomerAMG with a proper choice of coarsening and interpolation techniques typically converges faster than ML, but both may fail for ill‐conditioned or saddle‐point problems, whereas multilevel ILU tends to succeed. We also show that right preconditioning is more desirable. This study helps establish some practical guidelines for choosing preconditioned KSP methods and motivates the development of more effective preconditioners.
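A small experiment in this spirit, comparing restarted GMRES with and without an ILU preconditioner on a 2‐D Poisson matrix (a sketch assuming SciPy; the drop tolerance, fill factor, and restart length are arbitrary choices, not the study's settings):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson problem on a 32x32 grid
n = 32
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.csc_matrix(sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n)))
b = np.ones(A.shape[0])

# ILUT-style incomplete factorization used as a preconditioner
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

counts = {}
for name, prec in [("none", None), ("ilu", M)]:
    it = [0]
    def cb(arg):          # count GMRES iterations
        it[0] += 1
    x, info = spla.gmres(A, b, M=prec, restart=30, maxiter=500, callback=cb)
    counts[name] = it[0]

print(counts["ilu"] < counts["none"])  # ILU cuts the iteration count
```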

7.
A truncated ULV decomposition (TULVD) of an m × n matrix X of rank k is a decomposition of the form X = ULV^T + E, where U and V are left orthogonal matrices, L is a k × k non‐singular lower triangular matrix, and E is an error matrix. Only U, V, L, and ‖E‖_F are stored; E itself is not. We propose algorithms for updating and downdating the TULVD. To construct these modification algorithms, we also use a refinement algorithm, based upon that in (SIAM J. Matrix Anal. Appl. 2005; 27(1):198–211), that reduces ‖E‖_F, detects rank degeneracy, corrects it, and sharpens the approximation. Copyright © 2009 John Wiley & Sons, Ltd.
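A TULVD can be initialized from a truncated SVD (the diagonal Σ_k serving as a trivially lower triangular L); this sketch, not the paper's updating algorithms, also shows that ‖E‖_F is available without ever forming E:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 40, 30, 5

# numerically rank-5 test matrix: rank-5 part plus small noise
X = rng.standard_normal((m, k)) @ rng.standard_normal((k, n)) \
    + 1e-6 * rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
Uk, L, Vk = U[:, :k], np.diag(s[:k]), Vt[:k].T   # store U, V, L ...
errF = np.sqrt(np.sum(s[k:] ** 2))               # ... and ||E||_F, never E itself

# sanity check: ||X - U L V^T||_F equals the discarded singular-value mass
assert np.allclose(np.linalg.norm(X - Uk @ L @ Vk.T), errF)
print(errF < 1e-3)  # the discarded part is tiny
```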

8.
The goal of this paper is to find a low‐rank approximation of a given nth‐order tensor. Specifically, we give a computable strategy for calculating the rank of a given tensor, based on approximating the solution to an NP‐hard problem. We formulate a sparse optimization problem via l1 regularization to find a low‐rank approximation of tensors. To solve this sparse optimization problem, we propose a rescaled proximal alternating minimization algorithm and study its theoretical convergence. Furthermore, we discuss the probabilistic consistency of the sparsity result and suggest a way to choose the regularization parameter for practical computation. In simulation experiments, the performance of our algorithm shows that our method provides an efficient estimate of the number of rank‐one components in a given tensor. Moreover, the algorithm is also applied to surveillance videos for low‐rank approximation.
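The proximal step for the l1 term in such a regularized problem is the familiar soft-thresholding operator, sketched here on a plain vector (the surrounding alternating-minimization loop over tensor factors is omitted):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1: shrink each entry toward zero by lam,
    zeroing entries smaller than lam. This is the basic l1 building block
    inside proximal alternating minimization schemes."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -0.5, 1.2, -2.0])
print(soft_threshold(x, 1.0))  # entries shrunk by 1, small ones set to 0
```

Driving some coefficients exactly to zero is what lets the sparsity pattern of the solution estimate the number of rank‐one components.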

9.
We discuss the convergence of a two‐level version of the multilevel Krylov method for solving linear systems of equations with a symmetric positive semidefinite coefficient matrix. The analysis is based on the convergence result of Brown and Walker for the Generalized Minimal Residual method (GMRES), with left‐ and right‐preconditioning implementations of the method. Numerical results based on diffusion problems are presented to show the convergence.

10.
Block Krylov subspace methods (KSMs) comprise building blocks in many state‐of‐the‐art solvers for large‐scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well‐explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.

11.
In this paper, we analyze the convergence of a projected fixed‐point iteration on a Riemannian manifold of matrices with fixed rank. As the retraction method, we use the projector splitting scheme. We prove that the convergence rate of the projector splitting scheme is bounded by the convergence rate of the standard fixed‐point iteration without rank constraints, multiplied by a function of the initial approximation. We also provide a counterexample for the case in which the conditions of the theorem do not hold. Finally, we support our theoretical results with numerical experiments.
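The overall shape of a projected fixed‐point iteration can be sketched with a truncated-SVD retraction standing in for the projector splitting scheme (an illustrative toy on a linear system with a rank-r solution, not the paper's setting):

```python
import numpy as np

def retract(X, r):
    """Truncated-SVD retraction onto the rank-r manifold (a simple stand-in
    for the projector splitting scheme, for illustration only)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(3)
n, r = 50, 3
G = rng.standard_normal((n, n))
A = np.eye(n) + 0.01 * (G + G.T) / 2          # SPD, well conditioned
C = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r RHS
X_star = np.linalg.solve(A, C)                 # exact solution, also rank r

# projected Richardson iteration: take a contracting step, then retract
X = np.zeros((n, n))
tau = 1.0 / np.linalg.norm(A, 2)
for _ in range(200):
    X = retract(X - tau * (A @ X - C), r)

print(np.linalg.norm(X - X_star) < 1e-6 * np.linalg.norm(X_star))  # True
```

Because the unconstrained map is a strong contraction and the retraction at most doubles the distance to the rank-r solution, the projected iteration still converges geometrically, mirroring the flavor of the bound described above.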

12.
We give two generalizations of the induced dimension reduction (IDR) approach for the solution of linear systems: a flexible variant and a multi‐shift quasi‐minimal residual IDR variant. These variants are based on a generalized Hessenberg decomposition. We also present a new, more stable way to compute basis vectors in IDR. Numerical examples are presented to show the effectiveness of these new IDR variants and of the new basis, compared with existing ones and with other Krylov subspace methods. Copyright © 2014 John Wiley & Sons, Ltd.

13.
In this paper, we study quadratic model updating problems using symmetric low‐rank corrections, which incorporate the measured model data into the analytical quadratic model to produce an adjusted model that matches the experimental data while minimizing the distance between the analytical and updated models. We give a necessary and sufficient condition for the existence of solutions to the symmetric low‐rank correcting problems under some mild assumptions, and propose two algorithms for finding approximate solutions to the corresponding optimization problems. The good performance of the two algorithms is illustrated by numerical examples. Copyright © 2008 John Wiley & Sons, Ltd.

14.
In this paper, the numerical evaluation of matrix functions expressed in partial fraction form is addressed. The shift‐and‐invert Krylov method is analyzed, with special attention to error estimates. Such estimates give insights into the selection of the shift parameter and lead to a simple and effective restart procedure. Applications to the class of Mittag–Leffler functions are presented. Copyright © 2012 John Wiley & Sons, Ltd.
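Evaluating a matrix function given in partial fraction form reduces to one shifted linear solve per pole; shift‐and‐invert Krylov methods approximate exactly these solves for large matrices. A small dense sketch (the poles, coefficients, and test matrix are made up for illustration):

```python
import numpy as np

# f(x) = sum_j c_j / (x - z_j)  evaluated as  f(A) b = sum_j c_j (A - z_j I)^{-1} b
c = np.array([1.0, 2.0])       # partial fraction coefficients
z = np.array([-1.0, -2.0])     # poles, kept away from the spectrum of A

rng = np.random.default_rng(4)
n = 30
A = np.diag(rng.uniform(1.0, 2.0, n))   # simple diagonal SPD test matrix
b = rng.standard_normal(n)

# one shifted solve per pole
fAb = sum(cj * np.linalg.solve(A - zj * np.eye(n), b) for cj, zj in zip(c, z))

# reference: apply f directly to the eigenvalues (A is diagonal here)
ref = (c[0] / (np.diag(A) - z[0]) + c[1] / (np.diag(A) - z[1])) * b
print(np.allclose(fAb, ref))  # True
```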

15.
In this paper, based on the preconditioners presented by Zhang [A new preconditioner for generalized saddle matrices with highly singular (1,1) blocks. Int J Comput Math. 2014;91(9):2091–2101], we consider a modified block preconditioner for generalized saddle point matrices whose coefficient matrices have singular (1,1) blocks. Theoretical analysis gives the eigenvalue distribution, the forms of the eigenvectors, and the minimal polynomial of the preconditioned matrix. Finally, numerical examples illustrate the eigenvalue distribution under the presented preconditioner and confirm our analysis.

16.
We present recurrence formulas for computing approximate inverse factors of tridiagonal and pentadiagonal matrices using a bordering technique. The resulting algorithms are used to approximate the inverses of the pivot blocks needed for constructing block ILU preconditioners for solving block tridiagonal linear systems arising from the discretization of partial differential equations. The resulting preconditioners are suitable for parallel implementation. Comparisons with other methods are also included.
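The bordering idea for inverse factors can be sketched densely: each step appends one row and column of the matrix and extends a factorization of the inverse (the helper `inverse_factor_bordering` is hypothetical; the paper's recurrences additionally exploit the tridiagonal/pentadiagonal band structure and drop small entries):

```python
import numpy as np

def inverse_factor_bordering(T):
    """Compute unit upper triangular Z and positive pivots d such that
    T^{-1} = Z diag(1/d) Z^T, by bordering: at step k the leading k x k
    principal submatrix is extended by one row/column.  Exact dense sketch."""
    n = T.shape[0]
    Z = np.zeros((n, n))
    d = np.zeros(n)
    Z[0, 0], d[0] = 1.0, T[0, 0]
    for k in range(1, n):
        b = T[:k, k]                                   # new bordering column
        u = Z[:k, :k] @ ((Z[:k, :k].T @ b) / d[:k])    # u = T_{k-1}^{-1} b
        Z[:k, k] = -u                                  # new column of Z
        Z[k, k] = 1.0
        d[k] = T[k, k] - b @ u                         # Schur-complement pivot
    return Z, d

T = np.diag(np.full(6, 4.0)) \
    + np.diag(np.full(5, -1.0), 1) + np.diag(np.full(5, -1.0), -1)
Z, d = inverse_factor_bordering(T)
print(np.allclose(Z @ np.diag(1 / d) @ Z.T, np.linalg.inv(T)))  # True
```

For a tridiagonal T each bordering column b has a single nonzero, which is what makes banded or approximate versions of these recurrences cheap.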

17.
We consider the iterative solution of symmetric positive‐definite linear systems whose coefficient matrix may be expressed as the outer product of low‐rank terms. We derive suitable preconditioners for such systems and demonstrate their effectiveness on a number of test examples. We also consider combining these methods with existing techniques to cope with the commonly occurring case where the coefficient matrix is the linear sum of elements, some of which are of very low rank. Copyright © 1999 John Wiley & Sons, Ltd.

18.
19.
In this paper, we present a block triangular preconditioner for generalized saddle point matrices whose coefficient matrices have singular (1,1) blocks. Theoretical analysis shows that all the eigenvalues of the preconditioned matrix are strongly clustered when an optimal parameter is chosen. Numerical experiments are given to demonstrate the efficiency of the presented preconditioner.

20.
We present an algebraic structured preconditioner for the iterative solution of large sparse linear systems. The preconditioner is based on a multifrontal variant of sparse LU factorization used with a nested dissection ordering. Multifrontal factorization amounts to a partial factorization of a sequence of logically dense frontal matrices, and the preconditioner is obtained if structured factorization is used instead. The latter exploits the presence of low numerical rank in some off‐diagonal blocks of the frontal matrices. An algebraic procedure is presented that makes it possible to identify the hierarchy of off‐diagonal blocks with low numerical rank based on the sparsity of the system matrix. This procedure is motivated by a model problem analysis, yet numerical experiments show that it is successful beyond the model problem scope. Further aspects relevant to the algebraic structured preconditioner are discussed and illustrated with numerical experiments. The preconditioner is also compared with other solvers, including the corresponding direct solver. Copyright © 2015 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号