Similar Documents
20 similar documents found.
1.
An iterative method based on Lanczos bidiagonalization is developed for computing regularized solutions of large and sparse linear systems, which arise from discretizations of ill-posed problems in partial differential or integral equations. Determination of the regularization parameter and termination criteria are discussed. Comments are given on the computational implementation of the algorithm. Dedicated to Peter Naur on the occasion of his 60th birthday.
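As a minimal illustration (not the paper's algorithm), SciPy's LSQR solver is built on the same Golub-Kahan (Lanczos) bidiagonalization, and its `damp` argument plays the role of a Tikhonov-style regularization parameter; the test problem and the damping values below are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lsqr

# Illustrative sparse test problem (an assumption, not from the paper):
# a 1-D second-difference operator with a smooth solution and small noise.
n = 200
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).tocsc()
t = np.linspace(0.0, 1.0, n)
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)

# lsqr runs Golub-Kahan (Lanczos) bidiagonalization; damp adds Tikhonov-style
# regularization, and iter_lim acts as a crude termination criterion.
for damp in (0.0, 1e-6, 1e-3):            # regularization parameters to compare
    x_reg = lsqr(A, b, damp=damp, iter_lim=500)[0]
    print(f"damp={damp:g}  error={np.linalg.norm(x_reg - x_true):.3e}")
```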

2.
Abaffy, Broyden, and Spedicato (ABS) have recently proposed a class of direct methods for solving nonsparse linear systems. It is the purpose of this paper to demonstrate that, with a proper choice of parameters, ABS methods exploit sparsity in a natural way. In particular, we study the choice of parameters which corresponds to an LU decomposition of the coefficient matrix. In many cases, the fill-in, represented by the nonzero elements of the deflection matrix, uses less storage than the corresponding fill-in of an explicit LU factorization. Hence the above can be a viable and simple method for solving sparse linear systems. A simple reordering scheme for the coefficient matrix is also proposed for the purpose of reducing the fill-in of the deflection matrices.
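The fill-in and reordering issues mentioned above can be observed directly with an explicit sparse LU factorization; the sketch below (standard SciPy SuperLU, not the ABS recursion) compares the nonzeros of L and U under the natural ordering and a fill-reducing column ordering, on an assumed random test matrix.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Illustrative sparse test matrix (assumption): random sparsity pattern plus a
# strong diagonal so the factorization is well defined.
rng = np.random.default_rng(0)
n = 500
A = sp.random(n, n, density=0.01, random_state=rng, format='csc') \
    + 10.0 * sp.eye(n, format='csc')

# Fill-in of an explicit LU factorization, with and without a fill-reducing
# column reordering.
for ordering in ('NATURAL', 'COLAMD'):
    lu = splu(A, permc_spec=ordering)
    fill = lu.L.nnz + lu.U.nnz
    print(f"{ordering:8s}  nnz(A)={A.nnz}  nnz(L)+nnz(U)={fill}")
```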

3.
We study the augmented system approach for the solution of sparse linear least-squares problems. It is well known that this method has better numerical properties than the method based on the normal equations. We use recent work by Arioli et al. (1988) to introduce error bounds and estimates for the components of the solution of the augmented system. In particular, we find that, using iterative refinement, we obtain a very robust algorithm and our estimates of the error are accurate and cheap to compute. The final error and all our error estimates are much better than the classical or Skeel's (1979) error analysis indicates. Moreover, we prove that our error estimates are independent of the row scaling of the augmented system, and we analyze the influence of the Björck scaling (1967) on these estimates. We illustrate this with runs both on large-scale practical problems and contrived examples, comparing the numerical behaviour of the augmented system approach with a code using the normal equations. These experiments show that while the augmented system approach with iterative refinement can sometimes be less efficient than the normal equations approach, it is comparable or better when the least-squares matrix has a full row, and is, in any case, much more stable and robust. This author was visiting Harwell and was funded by a grant from the Italian National Council of Research (CNR), Istituto di Elaborazione dell'Informazione-CNR, via S. Maria 46, I-56100 Pisa, Italy. This author was visiting Harwell from the Faculty of Mathematics and Computer Science of the University of Amsterdam.
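A minimal sketch of the augmented system approach with one step of iterative refinement, on assumed test data (this is the generic formulation, not the paper's scaled and error-estimated implementation):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Illustrative sparse least-squares problem (assumption, not from the paper).
rng = np.random.default_rng(1)
m, n = 200, 50
A = sp.random(m, n, density=0.05, random_state=rng, format='csc') \
    + sp.eye(m, n, format='csc')           # ensure full column rank
b = rng.standard_normal(m)

# Augmented system for min ||Ax - b||_2:  [[I, A], [A^T, 0]] [r; x] = [b; 0].
K = sp.bmat([[sp.eye(m), A], [A.T, None]], format='csc')
rhs = np.concatenate([b, np.zeros(n)])
z = spsolve(K, rhs)

# One step of iterative refinement on the augmented system.
z = z + spsolve(K, rhs - K @ z)
r, x = z[:m], z[m:]
print("normal-equations residual:", np.linalg.norm(A.T @ (A @ x - b)))
```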

4.
We consider solving x + Ay = b and A^T x = c for given b, c and m×n A of rank n. This is called the augmented system formulation (ASF) of two standard optimization problems, which include as special cases the minimum 2-norm solution of a linear underdetermined system (b = 0) and the linear least squares problem (c = 0), as well as more general problems. We examine the numerical stability of methods (for the ASF) based on the QR factorization of A, whether by Householder transformations, Givens rotations, or the modified Gram-Schmidt (MGS) algorithm, and consider methods which use Q and R, or only R. We discuss the meaning of stability of algorithms for the ASF in terms of stability of algorithms for the underlying optimization problems. We prove the backward stability of several methods for the ASF which use Q and R, including a new one based on MGS, and also show under what circumstances they may be regarded as strongly stable. We show why previous methods using Q from MGS were not backward stable, but illustrate that some of these methods may be acceptable-error stable. We point out that the numerical accuracy of methods that do not use Q does not depend to any significant extent on which of the above three QR factorizations is used. We then show that the standard methods which do not use Q are not backward stable or even acceptable-error stable for the general ASF problem, and discuss how iterative refinement can be used to counteract these deficiencies. Dedicated to Carl-Eric Fröberg on the occasion of his 75th birthday. This research was partially supported by NSERC of Canada Grant No. A9236.
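A small worked example, on assumed data, of one Q-and-R solution of the ASF via a reduced QR factorization; the particular update formulas are a standard textbook choice rather than necessarily the paper's new MGS-based variant.

```python
import numpy as np
from scipy.linalg import solve_triangular

# Illustrative data (assumption): a small full-column-rank A and arbitrary b, c.
rng = np.random.default_rng(2)
m, n = 8, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

# One Q-and-R method for the ASF  x + A y = b,  A^T x = c:
Q, R = np.linalg.qr(A)                       # reduced QR, Q is m x n
u = solve_triangular(R, c, trans='T')        # u = Q^T x, from R^T u = c
y = solve_triangular(R, Q.T @ b - u)         # from R y = Q^T b - u
x = Q @ u + (b - Q @ (Q.T @ b))              # x = Q u + (I - Q Q^T) b

print("||x + A y - b|| =", np.linalg.norm(x + A @ y - b))
print("||A^T x - c||   =", np.linalg.norm(A.T @ x - c))
```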

5.
There exist many classes of block-projection algorithms for approximating solutions of linear least-squares problems. Generally, these methods generate sequences convergent to the minimal-norm least-squares solution only for consistent problems. In the inconsistent case, which usually appears in practice because of approximations or measurement errors, these sequences no longer converge to a least-squares solution, or they converge to the minimal-norm solution of a “perturbed” problem. In the present paper, we overcome this difficulty by constructing extensions for almost all of the above classes of block-projection methods. We prove that the sequences generated with these extensions always converge to a least-squares solution and, with a suitable initial approximation, to the minimal-norm solution of the problem. Numerical experiments, described in the last section of the paper, confirm the theoretical results obtained.
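A related (but different) remedy for inconsistency is the randomized extended Kaczmarz iteration of Zouzias and Freris, sketched below on assumed data; it is not the block-projection extensions of the paper, but it illustrates how an extra projection step recovers convergence to a least-squares solution.

```python
import numpy as np

# Illustrative inconsistent system (assumption): overdetermined with noise.
rng = np.random.default_rng(3)
m, n = 100, 20
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)

# Randomized extended Kaczmarz (Zouzias & Freris): column steps remove the
# component of b outside range(A), row steps project onto row hyperplanes,
# so the iterates converge to the least-squares solution even though the
# system is inconsistent.
row_p = np.sum(A**2, axis=1); row_p /= row_p.sum()
col_p = np.sum(A**2, axis=0); col_p /= col_p.sum()
x, z = np.zeros(n), b.copy()
for _ in range(20000):
    j = rng.choice(n, p=col_p)
    z -= (A[:, j] @ z) / (A[:, j] @ A[:, j]) * A[:, j]
    i = rng.choice(m, p=row_p)
    x += (b[i] - z[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]

print("distance to lstsq solution:",
      np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0]))
```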

6.
Based on the block-triangular product approximation to a 2-by-2 block matrix, a class of hybrid preconditioning methods is designed for accelerating the MINRES method for solving saddle-point problems. Appropriate values for the parameters involved in the new preconditioners are estimated, so that the numerical conditioning and the spectral properties of the saddle-point matrix of the linear system can be substantially improved. Several practical hybrid preconditioners and the corresponding preconditioned iterative methods are also constructed and studied.
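For orientation, the sketch below runs MINRES on an assumed saddle-point system with a standard block-diagonal preconditioner built from the (1,1) block and an approximate Schur complement; this is a common baseline, not the block-triangular product preconditioners proposed in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, minres, splu

# Illustrative saddle-point system (assumption):
#   K = [[H, B^T], [B, 0]], with H symmetric positive definite.
rng = np.random.default_rng(4)
n, m = 120, 40
H = sp.diags(1.0 + rng.random(n)).tocsc()
B = sp.random(m, n, density=0.1, random_state=rng, format='csc')
K = sp.bmat([[H, B.T], [B, None]], format='csc')
rhs = rng.standard_normal(n + m)

# Standard block-diagonal preconditioner diag(H, B H^{-1} B^T), a common
# baseline for MINRES on saddle-point problems (not the block-triangular
# product preconditioner of the abstract).  H is diagonal here, so the
# Schur complement is cheap to form.
S = (B @ sp.diags(1.0 / H.diagonal()) @ B.T).tocsc()
H_lu, S_lu = splu(H), splu(S)

def apply_prec(v):
    return np.concatenate([H_lu.solve(v[:n]), S_lu.solve(v[n:])])

M = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, rhs, M=M)
print("minres info:", info, " residual:", np.linalg.norm(K @ x - rhs))
```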

7.
The effectiveness of sparse matrix techniques for directly solving large-scale linear least-squares problems is severely limited if the system matrix A has one or more nearly dense rows. In this paper, we partition the rows of A into sparse rows and dense rows (A_s and A_d) and apply the Schur complement approach. A potential difficulty is that the reduced normal matrix A_s^T A_s is often rank-deficient, even if A is of full rank. To overcome this, we propose explicitly removing null columns of A_s, employing a regularization parameter, and using the resulting Cholesky factors as a preconditioner for an iterative solver applied to the symmetric indefinite reduced augmented system. We consider complete factorizations as well as incomplete Cholesky factorizations of the shifted reduced normal matrix. Numerical experiments are performed on a range of large least-squares problems arising from practical applications. These demonstrate the effectiveness of the proposed approach when combined with either a sparse parallel direct solver or a robust incomplete Cholesky factorization algorithm.
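A simplified sketch of the key idea, on assumed data: shift the sparse-part normal matrix A_s^T A_s by a regularization parameter and use its factorization as a preconditioner (an LU factorization stands in for the Cholesky factorization here, and plain preconditioned CG on the normal equations stands in for the reduced augmented system solve).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, splu

# Illustrative least-squares matrix (assumption): sparse rows A_s plus a few
# dense rows A_d appended at the bottom.
rng = np.random.default_rng(5)
m_s, m_d, n = 400, 3, 100
A_s = sp.random(m_s, n, density=0.02, random_state=rng, format='csr')
A_d = sp.csr_matrix(rng.standard_normal((m_d, n)))
A = sp.vstack([A_s, A_d], format='csr')
b = rng.standard_normal(m_s + m_d)

# Shifted reduced normal matrix A_s^T A_s + alpha*I: the shift guards against
# rank deficiency of A_s, and its factorization serves as a preconditioner.
# (An LU factorization via splu stands in for a sparse Cholesky here.)
alpha = 1e-4
C = (A_s.T @ A_s + alpha * sp.eye(n)).tocsc()
M = LinearOperator((n, n), matvec=splu(C).solve)

# Preconditioned CG on the full normal equations A^T A x = A^T b, a simplified
# stand-in for the reduced augmented system of the abstract.
N = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v))
x, info = cg(N, A.T @ b, M=M)
print("cg info:", info,
      " optimality residual:", np.linalg.norm(A.T @ (A @ x) - A.T @ b))
```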

8.
A parallel projection algorithm is proposed to solve the generalized linear least-squares problem: find a vector that minimizes the 2-norm distance from its image under an affine mapping to a closed convex cone. In each iteration of the algorithm, the problem is decomposed into several independent small problems of finding projections onto subspaces, which are simple and can be tackled in parallel. The algorithm can be viewed as a dual version of the algorithm proposed by Han and Lou [8]. For the special problem under consideration, stronger convergence results are established. The algorithm is also related to the block iterative methods of Elfving [6] and Dennis and Steihaug [5], and to the primal-dual method of Spingarn [14]. This material is based on work supported in part by the National Science Foundation under Grant DMS-8602416 and by the Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign.

9.
We describe a procedure for determining a few of the largest singular values of a large sparse matrix. The method of Golub and Kent, which uses modified moments to estimate the eigenvalues of operators used in iterative methods for the solution of linear systems of equations, is appropriately modified in order to generate a sequence of bidiagonal matrices whose singular values approximate those of the original sparse matrix. A simple Lanczos recursion is proposed for determining the corresponding left and right singular vectors. The potential asynchronous computation of the bidiagonal matrices using modified moments with the iterations of an adapted Chebyshev semi-iterative (CSI) method is an attractive feature for parallel computers. Comparisons in efficiency and accuracy with an appropriate Lanczos algorithm (with selective re-orthogonalization) are presented for large sparse (rectangular) matrices arising from applications such as information retrieval and seismic reflection tomography. This procedure is essentially motivated by the theory of moments and Gauss quadrature. This author's work was supported by the National Science Foundation under grants NSF CCR-8717492 and CCR-910000N (NCSA), the U.S. Department of Energy under grant DOE DE-FG02-85ER25001, and the Air Force Office of Scientific Research under grant AFOSR-90-0044 while at the University of Illinois at Urbana-Champaign Center for Supercomputing Research and Development. This author's work was supported by the U.S. Army Research Office under grant DAAL03-90-G-0105, and the National Science Foundation under grant NSF DCR-8412314.
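For comparison, the standard Lanczos-based routine scipy.sparse.linalg.svds computes a few of the largest singular triplets of a sparse matrix; the snippet below is such a baseline on an assumed random matrix, not the modified-moments/Chebyshev procedure described in the abstract.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

# Illustrative large sparse rectangular matrix (assumption), e.g. a
# term-document-style matrix.
rng = np.random.default_rng(6)
A = sp.random(2000, 500, density=0.005, random_state=rng, format='csr')

# Lanczos-based computation of the k largest singular triplets.
k = 5
U, s, Vt = svds(A, k=k)
print("largest singular values:", np.sort(s)[::-1])
```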

10.
The weighting method for solving a least squares problem with linear equality constraints multiplies the constraints by a large number and appends them to the top of the least squares problem, which is then solved by standard techniques. In this paper we give a new analysis of the method, based on the QR decomposition, that exhibits many features of the algorithm. In particular, it suggests a natural criterion for choosing the weighting factor. This work was supported in part by the National Science Foundation under grant CCR 95503126.
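A minimal sketch of the weighting method on an assumed equality-constrained least squares problem; the weighting factor used below is an illustrative choice, not the criterion derived in the paper.

```python
import numpy as np

# Illustrative equality-constrained least-squares problem (assumption):
#   minimize ||A x - b||_2  subject to  C x = d.
rng = np.random.default_rng(7)
m, n, p = 30, 10, 3
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
C, d = rng.standard_normal((p, n)), rng.standard_normal(p)

# Weighting method: multiply the constraints by a large weight and stack them
# on top of the least-squares rows, then solve by a standard dense solver.
w = 1e8                                   # weighting factor (illustrative choice)
A_w = np.vstack([w * C, A])
b_w = np.concatenate([w * d, b])
x = np.linalg.lstsq(A_w, b_w, rcond=None)[0]

print("constraint violation ||C x - d||:", np.linalg.norm(C @ x - d))
print("residual ||A x - b||:", np.linalg.norm(A @ x - b))
```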

11.
We propose a hybrid sparse system solver for handling linear systems using algebraic domain-decomposition-based techniques. The solver consists of several stages. The first stage uses a reordering scheme that brings as many of the largest matrix elements as possible close to the main diagonal. This is followed by partitioning the coefficient matrix into a set of overlapped diagonal blocks that contain most of the largest elements of the coefficient matrix. The only constraint here is to minimize the size of each overlap. Separating these blocks into independent linear systems, with the constraint of matching the solution parts of neighboring blocks that correspond to the overlaps, we obtain a balance system. This balance system is not formed explicitly and has a size that is much smaller than the original system. Our solver requires only a one-time factorization of each diagonal block and, in each outer iteration, the computation of only the upper and lower tips of a solution vector, where the size of each tip is equal to that of the corresponding overlap. This scheme proves to be scalable on clusters of nodes in which each node has a multicore architecture. Numerical experiments comparing the scalability of our solver with direct and preconditioned iterative methods are also presented.

12.
Positive semidefinite Hankel matrices arise in many important applications. Some of their properties may be lost due to rounding or truncation errors incurred during evaluation. The problem is to find the matrix nearest to a given matrix that restores these properties. The problem is converted into a semidefinite programming problem, as well as into a problem comprising a semidefinite program and a second-order cone problem. The duality and optimality conditions are obtained and a primal-dual algorithm is outlined. Explicit expressions for a diagonal preconditioner and crossover criteria are presented, together with computational results. A possibility for further improvement is indicated.
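A much simpler heuristic for the same nearness problem, shown below on assumed data, is to alternate projections onto the positive semidefinite cone and onto the Hankel subspace; this is not the primal-dual semidefinite programming algorithm of the abstract, but it makes the two constraint sets concrete.

```python
import numpy as np

def project_psd(X):
    """Project a symmetric matrix onto the positive semidefinite cone."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return (V * np.maximum(w, 0)) @ V.T

def project_hankel(X):
    """Project onto the Hankel matrices by averaging each anti-diagonal."""
    n = X.shape[0]
    H = np.zeros_like(X)
    flipped = X[:, ::-1]                  # anti-diagonals become diagonals
    for k in range(-(n - 1), n):
        avg = np.mean(np.diagonal(flipped, k))
        H[np.where(np.eye(n, k=k)[:, ::-1])] = avg
    return H

# Alternating projections between the PSD cone and the Hankel subspace:
# a simple heuristic for the nearest-PSD-Hankel problem, not the
# primal-dual SDP algorithm of the abstract.
rng = np.random.default_rng(8)
n = 6
v = rng.standard_normal(n)
H0 = project_hankel(np.outer(v, v)) + 0.1 * rng.standard_normal((n, n))
X = (H0 + H0.T) / 2                       # perturbed Hankel-like start
for _ in range(200):
    X = project_hankel(project_psd(X))
print("min eigenvalue after projections:", np.linalg.eigvalsh(X).min())
```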

13.
Wavelet sparse approximate inverse preconditioners
We show how to use wavelet compression ideas to improve the performance of approximate inverse preconditioners. Our main idea is to first transform the inverse of the coefficient matrix into a wavelet basis, before applying standard approximate inverse techniques. In this process, smoothness in the entries of A^{-1} is converted into small wavelet coefficients, thus allowing a more efficient sparse approximation of the inverse. We shall justify theoretically and numerically that our approach is effective for matrices with smooth inverses. Supported by grants from ONR: ONR-N00014-92-J-1890, and the Army Research Office: DAAL-03-91-C-0047 (Univ. of Tenn. subcontract ORA4466.04 Amendment 1 and 2). The first and third authors also acknowledge support from RIACS/NASA Ames NAS 2-96027 and the Alfred P. Sloan Foundation as Doctoral Dissertation Fellows, respectively. The work was supported by the Natural Sciences and Engineering Research Council of Canada, the Information Technology Research Centre (which is funded by the Province of Ontario), and RIACS/NASA Ames NAS 2-96027.
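The compression step can be illustrated directly, under the assumption of a matrix with a smooth inverse: transform the (exact) inverse into an orthonormal Haar wavelet basis, drop small coefficients, and check how well the compressed matrix still approximates A^{-1}. This only demonstrates the compressibility argument, not the paper's construction of the preconditioner.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet transform matrix; n must be a power of two."""
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    top = np.kron(H, [1.0, 1.0])
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bot]) / np.sqrt(2)

# Illustrative matrix with a smooth (but dense) inverse: a shifted 1-D Laplacian.
n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.01 * np.eye(n)
Ainv = np.linalg.inv(A)

# Compression step behind wavelet sparse approximate inverses, applied here to
# the exact inverse for illustration: transform into the wavelet basis and
# drop coefficients below a relative threshold.
W = haar_matrix(n)
coeffs = W @ Ainv @ W.T
tol = 1e-3 * np.abs(coeffs).max()
compressed = np.where(np.abs(coeffs) > tol, coeffs, 0.0)
approx_inv = W.T @ compressed @ W

print("kept coefficients:", np.count_nonzero(compressed), "of", n * n)
print("relative error of compressed inverse:",
      np.linalg.norm(approx_inv - Ainv) / np.linalg.norm(Ainv))
```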

14.
The ULV decomposition (ULVD) is an important member of a class of rank-revealing two-sided orthogonal decompositions used to approximate the singular value decomposition (SVD). The problem of adding and deleting rows from the ULVD (called updating and downdating, respectively) is considered. The ULVD can be updated and downdated much faster than the SVD, hence its utility. When updating or downdating the ULVD, it is necessary to compute its numerical rank. In this paper, we propose an efficient algorithm which almost always maintains the rank-revealing structure of the decomposition after an update or downdate without standard condition estimation. Moreover, we can monitor the accuracy of the information provided by the ULVD as compared to the SVD by tracking exact Frobenius norms of the two small blocks of the lower triangular factor in the decomposition.

15.
We present a numerical algorithm for computing the implicit QR factorization of a product of three matrices, and we illustrate the technique by applying it to the generalized total least squares and the restricted total least squares problems. We also demonstrate how to take advantage of the block structures of the underlying matrices in order to reduce the computational work.

16.
We show how a direct active set method for solving definite and indefinite quadratic programs with simple bounds can be efficiently implemented for large sparse problems. All of the necessary factorizations can be carried out in a static data structure that is set up before the numeric computation begins. The space required for these factorizations is no larger than that required for a single sparse Cholesky factorization of the Hessian of the quadratic. We propose several improvements to this basic algorithm: a new way to find a search direction in the indefinite case that allows us to free more than one variable at a time and a new heuristic method for finding a starting point. These ideas are motivated by the two-norm trust region problem. Additionally, we also show how projection techniques can be used to add several constraints to the active set at each iteration. Our experimental results show that an algorithm with these improvements runs much faster than the basic algorithm for positive definite problems and finds local minima with lower function values for indefinite problems. Research partially supported by the Applied Mathematical Sciences Research Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under grant DE-FG02-86ER25013.A000.
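For contrast with the direct active-set approach, the sketch below applies plain projected gradient to an assumed box-constrained convex quadratic program; it handles only the positive definite case and is merely a baseline, not the algorithm of the paper.

```python
import numpy as np

# Illustrative bound-constrained QP (assumption):
#   minimize 0.5 x^T H x + g^T x   subject to  lo <= x <= hi,
# with H positive definite (the simplest case; the paper also treats
# indefinite H, which this sketch does not).
rng = np.random.default_rng(9)
n = 50
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)
g = rng.standard_normal(n)
lo, hi = -0.1 * np.ones(n), 0.1 * np.ones(n)

# Projected gradient with a fixed step 1/L, L = largest eigenvalue of H.
step = 1.0 / np.linalg.eigvalsh(H).max()
x = np.zeros(n)
for _ in range(2000):
    x = np.clip(x - step * (H @ x + g), lo, hi)

active = np.sum((x <= lo + 1e-12) | (x >= hi - 1e-12))
print("objective:", 0.5 * x @ H @ x + g @ x, " active bounds:", active)
```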

17.
Perturbation bounds for subspaces, such as eigenspaces, singular subspaces, and canonical subspaces, have been extensively studied in the literature. In this paper, we study perturbations of some constrained subspaces of 1×2, 2×1, and 2×2 block matrices, in which only one of the sub-matrices can be changed. Such problems arise from the least squares-total least squares problem, the constrained least squares problem, and the constrained total least squares problem.

18.
We describe how to maintain the triangular factor of a sparse QR factorization when columns are added and deleted and Q cannot be stored for sparsity reasons. The updating procedures could be thought of as a sparse counterpart of Reichel and Gragg's package QRUP. They allow us to solve a sequence of sparse linear least squares subproblems in which each matrix B_k is an independent subset of the columns of a fixed matrix A, and B_{k+1} is obtained by adding or deleting one column. Like Coleman and Hulbert [T. Coleman, L. Hulbert, A direct active set algorithm for large sparse quadratic programs with simple bounds, Math. Program. 45 (1989) 373-406], we adapt the sparse direct methodology of Björck and Oreborn of the late 1980s, but without forming A^T A, which may be only positive semidefinite. Our Matlab 5 implementation works with a suitable row and column numbering within a static triangular sparsity pattern that is computed in advance by symbolic factorization of A^T A and preserved with placeholders.
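SciPy provides dense counterparts of such column updating and downdating (qr_insert and qr_delete); the small example below, on assumed data, shows the operations the sparse R-only procedures have to mimic, although here Q is stored explicitly, which is exactly what the paper avoids.

```python
import numpy as np
from scipy.linalg import qr, qr_insert, qr_delete

# Dense illustration of maintaining a QR factorization while columns are
# added and deleted, using SciPy's dense updating routines.
rng = np.random.default_rng(10)
m = 12
A = rng.standard_normal((m, 5))
Q, R = qr(A)                                  # full QR, Q is m x m

# Add a new column at position 2, then delete the column at position 0.
u = rng.standard_normal(m)
Q1, R1 = qr_insert(Q, R, u, 2, which='col')
Q2, R2 = qr_delete(Q1, R1, 0, which='col')

B = np.delete(np.insert(A, 2, u, axis=1), 0, axis=1)  # the matrix now factored
print("update error:", np.linalg.norm(Q2 @ R2 - B))
```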

19.
The Newton method of Madsen and Nielsen (1990) for computing Huber's robust M-estimate in linear regression is considered. The original method was proved to converge finitely for full-rank problems under some additional restrictions on the choice of the search direction and the step length in certain degenerate cases. It was later observed that these requirements can be relaxed in a practical implementation while preserving the effectiveness and even improving the efficiency of the method. In the present paper these enhancements to the original algorithm are studied, and the finite termination property of the algorithm is proved without any assumptions on the M-estimation problems. Research supported by NATO Collaborative Research Grant CRG-94-0609.
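A common alternative for computing Huber's M-estimate is iteratively reweighted least squares; the sketch below, on assumed data with a few gross outliers, uses IRLS rather than the Madsen-Nielsen Newton method analyzed in the paper.

```python
import numpy as np

# Illustrative regression problem with outliers (assumption).
rng = np.random.default_rng(11)
m, n = 200, 5
A = rng.standard_normal((m, n))
beta_true = rng.standard_normal(n)
y = A @ beta_true + 0.1 * rng.standard_normal(m)
y[:10] += 5.0                                  # a few gross outliers

# Iteratively reweighted least squares for Huber's M-estimate: weights are
# w_i = min(1, gamma / |r_i|), then solve the weighted normal equations.
gamma = 0.3                                    # Huber threshold (illustrative)
beta = np.linalg.lstsq(A, y, rcond=None)[0]
for _ in range(50):
    r = y - A @ beta
    w = np.minimum(1.0, gamma / np.maximum(np.abs(r), 1e-12))
    Aw = A * w[:, None]
    beta = np.linalg.solve(A.T @ Aw, Aw.T @ y)

print("ordinary LS error:",
      np.linalg.norm(np.linalg.lstsq(A, y, rcond=None)[0] - beta_true))
print("Huber IRLS error: ", np.linalg.norm(beta - beta_true))
```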

20.
The discrete Picard condition for discrete ill-posed problems
We investigate the approximation properties of regularized solutions to discrete ill-posed least squares problems. A necessary condition for obtaining good regularized solutions is that the Fourier coefficients of the right-hand side, when expressed in terms of the generalized SVD associated with the regularization problem, on average decay to zero faster than the generalized singular values. This is the discrete Picard condition. We illustrate the importance of this condition both theoretically and experimentally. This work was carried out during a visit to the Department of Mathematics, UCLA, and was supported by the Danish Natural Science Foundation, by the National Science Foundation under contract NSF-DMS87-14612, and by the Army Research Office under contract No. DAAL03-88-K-0085.
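The condition is easy to inspect numerically: on an assumed discrete ill-posed test problem, compare the Fourier coefficients |u_i^T b| with the singular values sigma_i (a plain SVD is used here, whereas the paper works with the generalized SVD of the regularization problem).

```python
import numpy as np

# Illustrative discrete ill-posed problem (assumption): a first-kind Fredholm
# integral equation with a Gaussian kernel, discretized by the midpoint rule.
n = 64
s = (np.arange(n) + 0.5) / n
K = np.exp(-(s[:, None] - s[None, :])**2 / (2 * 0.01)) / n
x_true = np.sin(np.pi * s)
b = K @ x_true + 1e-8 * np.random.default_rng(12).standard_normal(n)

# Discrete Picard condition: the Fourier coefficients |u_i^T b| should, on
# average, decay faster than the singular values sigma_i; once sigma_i drops
# below the noise level, the ratio blows up and regularization is needed.
U, sigma, Vt = np.linalg.svd(K)
fourier = np.abs(U.T @ b)
for i in range(0, n, 8):
    print(f"i={i:2d}  sigma={sigma[i]:.2e}  |u_i^T b|={fourier[i]:.2e}  "
          f"ratio={fourier[i] / sigma[i]:.2e}")
```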
