Similar Documents (20 results)
1.
Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured and large scale. When the data are noisy, it is of interest to bound the solution error relative to the (unknown) solution of the original noiseless problem. Related to this is an error bound for the linear convergence analysis of first-order gradient methods for solving these problems. Example applications include compressed sensing, variable selection in regression, TV-regularized image denoising, and sensor network localization.

2.
The purpose of this paper is to present optimal preconditioned iterative methods for solving indefinite linear systems of equations arising from the symmetric coupling of finite elements and boundary elements. These are a block-diagonal preconditioner combined with a conjugate residual method, and a preconditioned inner–outer iteration. We prove the efficiency of these methods by showing that the number of iterations needed to achieve a given accuracy is bounded independently of the number of unknowns. Numerical examples underline the efficiency of these methods. Copyright © 2008 John Wiley & Sons, Ltd.
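For illustration only, here is a hedged sketch of the block-diagonal preconditioning idea on a generic symmetric indefinite saddle point system, using scipy's MINRES as a stand-in for the conjugate residual method; the matrices, the Schur complement surrogate, and all parameter values are assumptions of mine, not the FEM/BEM coupling blocks of the paper.

```python
import numpy as np
from scipy.sparse import diags, identity, bmat, random as sprandom
from scipy.sparse.linalg import minres, splu, LinearOperator

# generic symmetric indefinite saddle point system [[A, B^T], [B, -C]] (assumed example)
n, m = 300, 100
rng = np.random.default_rng(4)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
B = sprandom(m, n, density=0.05, random_state=rng, format='csc')
C = 0.1 * identity(m, format='csc')
K = bmat([[A, B.T], [B, -C]], format='csc')
rhs = np.ones(n + m)

# SPD block-diagonal preconditioner diag(A, S), with S a crude Schur complement surrogate
S = (C + identity(m)).tocsc()
luA, luS = splu(A), splu(S)
M = LinearOperator((n + m, n + m),
                   matvec=lambda v: np.concatenate([luA.solve(v[:n]), luS.solve(v[n:])]))

x, info = minres(K, rhs, M=M)   # MINRES stands in for the conjugate residual method
print(info, np.linalg.norm(K @ x - rhs))
```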

3.
We give a bound on the distance between an arbitrary point and the solution set of a monotone linear complementarity problem in terms of a condition constant that depends on the problem data only and a residual function of the violations of the complementarity conditions by the point considered. When the point satisfies the linear inequalities of the complementarity problem, the residual consists of the complementarity condition plus its square root. This latter term is essential; without it the error bound cannot hold. We also show that another natural residual, which has been employed to bound errors for strictly monotone linear complementarity problems, fails to bound errors for the monotone case considered here. Sponsored by the United States Army under contract No. DAAG29-80-C-0041. This material is based on research sponsored by National Science Foundation Grant DCR-8420963 and Air Force Office of Scientific Research Grant AFOSR-ISSA-85-00080.
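To make the residual discussed above concrete, the following sketch (my own illustration, not code from the paper) evaluates, for a point satisfying the linear inequalities of the LCP, the residual consisting of the complementarity term plus its square root; the matrix M, vector q, and test point are assumed example data.

```python
import numpy as np

def lcp_residual(M, q, x):
    """Residual for a point of the monotone LCP: find x >= 0 with w = M x + q >= 0
    and x^T w = 0. For a point satisfying the linear inequalities, the residual is
    the complementarity gap plus its square root (cf. the abstract above)."""
    w = M @ x + q
    assert np.all(x >= -1e-12) and np.all(w >= -1e-12), "point must satisfy the linear inequalities"
    gap = float(x @ w)          # complementarity condition x^T (M x + q)
    return gap + np.sqrt(gap)   # the square-root term is essential for the bound

# tiny example: M positive semidefinite (monotone LCP), assumed data
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
x = np.array([0.4, 0.4])        # feasible but not complementary
print(lcp_residual(M, q, x))
```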

4.
We study preconditioned iterative methods for the linear systems arising from the numerical solution of multi-dimensional space fractional diffusion equations. A sine transform based preconditioning technique is developed according to the symmetric and skew-symmetric splitting of the Toeplitz factor in the resulting coefficient matrix. Theoretical analyses show that the upper bound on the relative residual norm of the GMRES method applied to the preconditioned linear system is mesh-independent, which implies linear convergence. Numerical experiments are carried out to illustrate the correctness of the theoretical results and the effectiveness of the proposed preconditioning technique.
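As a hedged illustration of sine-transform preconditioning in general (a generic tau-algebra construction of my own, not the paper's preconditioner built from the symmetric/skew-symmetric splitting, and with an assumed toy coefficient matrix), one can diagonalize the preconditioner by the orthonormal DST-I and apply its inverse inside GMRES:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.fft import dst
from scipy.sparse.linalg import LinearOperator, gmres

n = 128
# assumed example: SPD Toeplitz factor (1D Laplacian stencil) plus a small
# non-Toeplitz diagonal, so that the preconditioner is not exact
T = toeplitz(np.r_[2.0, -1.0, np.zeros(n - 2)])
A = T + np.diag(np.linspace(0.0, 1e-3, n))

def S(v):                       # orthonormal DST-I; note S = S^T = S^{-1}
    return dst(np.ravel(v), type=1, norm='ortho')

# eigenvalues of the best approximation to A in the sine-transform (tau) algebra
lam = np.diag(dst(dst(A, type=1, norm='ortho', axis=0), type=1, norm='ortho', axis=1))

M = LinearOperator((n, n), matvec=lambda v: S(S(v) / lam))   # P^{-1} v = S diag(1/lam) S v
b = np.ones(n)
x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```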

5.
Solving initial value problems for linear ordinary differential equations plays an important role in many applications. Many numerical methods and solvers exist for computing approximate solutions at discrete grid points, but few methods estimate and optimize the global error. This paper first defines the residual of the equation by interpolating the discrete numerical solution into a differentiable function; it then gives a theorem relating the residual to the approximate solution and derives an upper bound on the global error; next, the solution of the equation is recast as an optimization problem whose objective is to minimize the 2-norm of the residual; finally, by analyzing the structure of the resulting matrix, the conjugate gradient method is proposed for its solution. The method is then applied to practical problems such as filter circuits and automotive suspension systems. Experimental analysis shows that the proposed estimation method gives a good estimate of the global error of initial value problems for linear ordinary differential equations, and that the optimization-based solver can find the global optimum in the space of interpolated solutions without increasing the number of grid points.
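A minimal sketch of the residual-minimization idea described above, written by me for an assumed scalar test equation y' = a y + f(t); the paper's derived matrix structure and conjugate gradient solver are not reproduced (a generic least-squares Krylov solver is used instead):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

# assumed test problem (not from the paper): y' = a*y + f(t), y(0) = y0
a, y0 = -2.0, 1.0
f = lambda t: np.sin(t)
N, Tend = 200, 5.0
t = np.linspace(0.0, Tend, N + 1)
h = t[1] - t[0]
tm = 0.5 * (t[:-1] + t[1:])                    # interval midpoints

# For a piecewise-linear interpolant with nodal values y_0,...,y_N, the ODE residual
# on interval i, sampled at its midpoint, is
#   (y_{i+1} - y_i)/h - a*(y_i + y_{i+1})/2 - f(tm_i).
# With y_0 fixed by the initial condition this gives N equations in y_1,...,y_N;
# we minimize the 2-norm of this residual with a least-squares Krylov solver.
R = lil_matrix((N, N))
c = np.zeros(N)
for i in range(N):
    if i == 0:
        c[0] = f(tm[0]) + (1.0 / h + a / 2.0) * y0
    else:
        R[i, i - 1] = -1.0 / h - a / 2.0
        c[i] = f(tm[i])
    R[i, i] = 1.0 / h - a / 2.0
y = np.concatenate(([y0], lsqr(R.tocsr(), c)[0]))

y_exact = 1.2 * np.exp(-2.0 * t) + (2.0 * np.sin(t) - np.cos(t)) / 5.0  # exact solution of the assumed test problem
print("max global error:", np.max(np.abs(y - y_exact)))
```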

6.
Summary. We consider conjugate gradient type methods for the solution of large linear systems Ax = b with complex coefficient matrices of the type A = T + iσI, where T is Hermitian and σ is a real scalar. Three different conjugate gradient type approaches with iterates defined by a minimal residual property, a Galerkin type condition, and an Euclidean error minimization, respectively, are investigated. In particular, we propose numerically stable implementations based on the ideas behind Paige and Saunders' SYMMLQ and MINRES for real symmetric matrices and derive error bounds for all three methods. It is shown how the special shift structure of A can be preserved by using polynomial preconditioning, and results on the optimal choice of the polynomial preconditioner are given. Also, we report on some numerical experiments for matrices arising from finite difference approximations to the complex Helmholtz equation. This work was supported in part by Cooperative Agreement NCC 2-387 between the National Aeronautics and Space Administration (NASA) and the Universities Space Research Association (USRA) and by National Science Foundation Grant DCR-8412314.

7.
Block (including s‐step) iterative methods for (non)symmetric linear systems have been studied and implemented in the past. In this article we present a (combined) block s‐step Krylov iterative method for nonsymmetric linear systems. We then consider the problem of applying any block iterative method to solve a linear system with one right‐hand side using many linearly independent initial residual vectors. We present a new algorithm which combines the many solutions obtained (by any block iterative method) into a single solution to the linear system. This approach of using block methods in order to increase the parallelism of Krylov methods is very useful in parallel systems. We implemented the new method on a parallel computer and we ran tests to validate the accuracy and the performance of the proposed methods. It is expected that the performance of the block s‐step methods will scale well on other parallel systems because of their efficient use of memory hierarchies and their reduction of the number of global communication operations over the standard methods. Copyright © 2009 John Wiley & Sons, Ltd.
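One natural way to carry out the combination step, shown here as a hedged sketch of my own rather than the paper's algorithm, is to take the affine combination of the candidate solutions that minimizes the residual norm; the test matrix and the perturbed candidates are assumed data.

```python
import numpy as np

def combine_solutions(A, b, X):
    """Return the affine combination of the columns of X (candidate solutions
    of A x = b) that minimizes ||b - A x||_2.  Sketch only."""
    xs = X[:, -1]
    D = X[:, :-1] - xs[:, None]              # differences span the affine directions
    # residual of xs + D @ g is (b - A xs) - (A D) g: a small least-squares problem in g
    g, *_ = np.linalg.lstsq(A @ D, b - A @ xs, rcond=None)
    return xs + D @ g

# tiny illustration with randomly perturbed candidates (assumed data)
rng = np.random.default_rng(0)
n, s = 100, 4
A = rng.standard_normal((n, n)) + n * np.eye(n)            # well-conditioned test matrix
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)
X = x_true[:, None] + 1e-2 * rng.standard_normal((n, s))   # rough approximate solutions
x = combine_solutions(A, b, X)
print(np.linalg.norm(b - A @ X[:, 0]), np.linalg.norm(b - A @ x))
```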

8.
The simulation of large-scale fluid flow applications often requires the efficient solution of extremely large nonsymmetric linear and nonlinear sparse systems of equations arising from the discretization of systems of partial differential equations. While preconditioned conjugate gradient methods work well for symmetric, positive-definite matrices, other methods are necessary to treat large, nonsymmetric matrices. The applications may also involve highly localized phenomena which can be addressed via local and adaptive grid refinement techniques. These local refinement methods usually cause non-standard grid connections which destroy the bandedness of the matrices and the associated ease of solution and vectorization of the algorithms. The use of preconditioned conjugate gradient or conjugate-gradient-like iterative methods in large-scale reservoir simulation applications is briefly surveyed. Then, some block preconditioning methods for adaptive grid refinement via domain decomposition techniques are presented and compared. These techniques are being used efficiently in existing large-scale simulation codes.

9.
For the solution of symmetric positive definite equations, as arising in boundary value problems, by preconditioned conjugate gradient methods, we consider preconditioning methods of AMLI type. Particular attention is devoted to providing methods of optimal order of computational complexity which in addition promise to be robust, i.e. with a convergence rate which is bounded above independently of the discretization parameter h, jumps in the problem coefficients, and the shape of the finite elements or, equivalently, the anisotropy of the problem coefficients. In addition, the computational cost per iteration step must have optimal order. New results on upper bounds for one of the important parameters in the methods, the Cauchy–Bunyakowski–Schwarz constant, are given, and an algebraic method for improving its value is presented.

10.
Summary. The Chebyshev and second-order Richardson methods are classical iterative schemes for solving linear systems. We consider the convergence analysis of these methods when each step of the iteration is carried out inexactly. This has many applications, since a preconditioned iteration requires, at each step, the solution of a linear system which may be solved inexactly using an inner iteration. We derive an error bound which applies to the general nonsymmetric inexact Chebyshev iteration. We show how this simplifies slightly in the case of a symmetric or skew-symmetric iteration, and we consider both the cases of underestimating and overestimating the spectrum. We show that in the symmetric case, it is actually advantageous to underestimate the spectrum when the spectral radius and the degree of inexactness are both large. This is not true in the case of the skew-symmetric iteration. We show how similar results apply to the Richardson iteration. Finally, we describe numerical experiments which illustrate the results and suggest that the Chebyshev and Richardson methods, with reasonable parameter choices, may be more effective than the conjugate gradient method in the presence of inexactness. This work was supported in part by National Science Foundation Grants DCR-8412314 and DCR-8502014. The work of this author was completed while he was on sabbatical leave at the Centre for Mathematical Analysis and Mathematical Sciences Research Institute at the Australian National University, Canberra, Australia.
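For reference, a minimal sketch of the exact (not inexact, unpreconditioned) Chebyshev iteration for a symmetric positive definite system whose spectrum is assumed to lie in [lmin, lmax]; this textbook recurrence is my own illustration and does not model the inexact inner solves analyzed in the paper.

```python
import numpy as np

def chebyshev(A, b, lmin, lmax, x0=None, maxit=1000, tol=1e-10):
    """Chebyshev iteration for A x = b with spectrum(A) assumed in [lmin, lmax]."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    theta, delta = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)   # center / half-width
    sigma = theta / delta
    r = b - A @ x
    rho = 1.0 / sigma
    d = r / theta
    for _ in range(maxit):
        x = x + d
        r = r - A @ d
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# small SPD test problem (assumed data): 1D Laplacian with known extreme eigenvalues
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lmin = 2.0 - 2.0 * np.cos(np.pi / (n + 1))
lmax = 2.0 + 2.0 * np.cos(np.pi / (n + 1))
b = np.ones(n)
x = chebyshev(A, b, lmin, lmax)
print(np.linalg.norm(A @ x - b))
```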

11.
The development of the Lanczos algorithm for finding eigenvalues of large sparse symmetric matrices was followed by that of block forms of the algorithm. In this paper, similar extensions are carried out for a relative of the Lanczos method, the conjugate gradient algorithm. The resulting block algorithms are useful for simultaneously solving multiple linear systems or for solving a single linear system in which the matrix has several separated eigenvalues or is not easily accessed on a computer. We develop a block biconjugate gradient algorithm for general matrices, and develop block conjugate gradient, minimum residual, and minimum error algorithms for symmetric semidefinite matrices. Bounds on the rate of convergence of the block conjugate gradient algorithm are presented, and issues related to computational implementation are discussed. Variants of the block conjugate gradient algorithm applicable to symmetric indefinite matrices are also developed.
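A hedged, dense-matrix sketch of a block conjugate gradient iteration for a symmetric positive definite matrix and several right-hand sides; this is my own textbook-style rendering, without the breakdown handling, minimum residual/error variants, or indefinite extensions developed in the paper, and the test data are assumed.

```python
import numpy as np

def block_cg(A, B, tol=1e-10, maxit=500):
    """Block CG for A X = B with A symmetric positive definite and B of shape (n, s)."""
    X = np.zeros_like(B)
    R = B - A @ X
    P = R.copy()
    normB = np.linalg.norm(B)
    for _ in range(maxit):
        AP = A @ P
        PAP = P.T @ AP
        alpha = np.linalg.solve(PAP, P.T @ R)         # s-by-s Galerkin step
        X += P @ alpha
        R_new = R - AP @ alpha
        if np.linalg.norm(R_new) <= tol * normB:
            break
        beta = np.linalg.solve(PAP, -(AP.T @ R_new))  # A-orthogonalize the new directions
        P = R_new + P @ beta
        R = R_new
    return X

# small SPD example with 3 right-hand sides (assumed data)
rng = np.random.default_rng(1)
n, s = 200, 3
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
B = rng.standard_normal((n, s))
X = block_cg(A, B)
print(np.linalg.norm(A @ X - B))
```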

12.
The iterative solution of large linear discrete ill-posed problems with an error contaminated data vector requires the use of specially designed methods in order to avoid severe error propagation. Range restricted minimal residual methods have been found to be well suited for the solution of many such problems. This paper discusses the structure of matrices that arise in a range restricted minimal residual method for the solution of large linear discrete ill-posed problems with a symmetric matrix. The exploitation of the structure results in a method that is competitive with respect to computer storage, number of iterations, and accuracy.
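To illustrate the range-restriction idea, here is a hedged sketch of my own: a minimal residual method whose approximation space is K_k(A, Ab) rather than K_k(A, b), so every iterate lies in the range of A. It uses a full Arnoldi recurrence for clarity instead of the symmetric short-recurrence structure the paper exploits, and the ill-conditioned test problem is assumed.

```python
import numpy as np

def range_restricted_minres(A, b, k):
    """Minimize ||b - A x|| over x in the Krylov subspace K_k(A, A b)."""
    n = b.size
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    v = A @ b                                  # range-restricted starting vector
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):                         # Arnoldi with full orthogonalization
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    # b - A V_k y = b - V_{k+1} H y, so the minimizer solves a small least-squares problem
    y, *_ = np.linalg.lstsq(H, V.T @ b, rcond=None)
    return V[:, :k] @ y

# mildly ill-conditioned symmetric test problem with noisy data (assumed)
rng = np.random.default_rng(2)
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -6, n)) @ U.T          # symmetric, ill-conditioned
x_true = np.sin(np.linspace(0, 3, n))
b = A @ x_true + 1e-6 * rng.standard_normal(n)        # error-contaminated data vector
for k in (2, 5, 10):
    x = range_restricted_minres(A, b, k)
    print(k, np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```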

13.
Summary. We propose an algorithm for the numerical solution of large-scale symmetric positive-definite linear complementarity problems. Each step of the algorithm combines an application of the successive overrelaxation method with projection (to determine an approximation of the optimal active set) with the preconditioned conjugate gradient method (to solve the reduced residual systems of linear equations). Convergence of the iterates to the solution is proved. In the experimental part we compare the efficiency of the algorithm with several other methods. As test example we consider the obstacle problem with different obstacles. For problems of dimension up to 24,000 variables, the algorithm finds the solution in fewer than 7 iterations, where each iteration requires about 10 matrix-vector multiplications. Received July 14, 1993 / Revised version received February 1994
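A hedged sketch of the two ingredients named above for the symmetric positive definite LCP (find x >= 0 with Ax - b >= 0 and x^T(Ax - b) = 0): a projected SOR sweep to estimate the active set, followed by a conjugate gradient solve of the reduced system on the inactive set. This is my own simplified rendering, not the paper's algorithm, and the obstacle-style test data are assumed.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def projected_sor_sweep(A, b, x, omega=1.5):
    """One SOR sweep projected onto x >= 0 (Gauss-Seidel style, updates in place)."""
    for i in range(len(b)):
        x[i] = max(0.0, x[i] + omega * (b[i] - A[i, :] @ x) / A[i, i])
    return x

def psor_cg_lcp(A, b, outer=30, omega=1.5):
    """Estimate the active set by projected SOR, then solve the reduced system by CG."""
    x = np.zeros_like(b)
    for _ in range(outer):
        x = projected_sor_sweep(A, b, x, omega)
        free = x > 0.0                                   # current inactive-set estimate
        if free.any():
            xf, _ = cg(A[np.ix_(free, free)], b[free])   # reduced system (x = 0 elsewhere)
            y = np.zeros_like(x)
            y[free] = xf
            w = A @ y - b
            # accept if primal and dual feasibility hold (w ~ 0 on the free set by the solve)
            if np.all(xf >= 0.0) and np.all(w[~free] >= -1e-10):
                return y
    return x

# 1D obstacle-style test data (assumed): A = tridiag(-1, 2, -1), oscillating load
n = 50
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).toarray()
b = np.where(np.arange(n) % 7 < 4, 0.05, -0.05)
x = psor_cg_lcp(A, b)
r = A @ x - b
print("x^T(Ax-b) =", float(x @ r), " min x =", x.min(), " min Ax-b =", r.min())
```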

14.
Summary. The application of the finite difference method to approximate the solution of an indefinite elliptic problem produces a linear system whose coefficient matrix is block tridiagonal and symmetric indefinite. Such a linear system can be solved efficiently by a conjugate residual method, particularly when combined with a good preconditioner. We show that a specific incomplete block factorization exists for the indefinite matrix if the mesh size is reasonably small, and that this factorization can serve as an efficient preconditioner. Some efforts are made to estimate the eigenvalues of the preconditioned matrix. Numerical results are also given. Received November 21, 1995 / Revised version received February 2, 1998 / Published online July 28, 1999

15.
1. Introduction. The generalized LS problem is frequently found in solving problems from statistics, engineering, economics, image and signal processing. Here A ∈ R^{m×n} with m ≥ n, b ∈ R^m, and W ∈ R^{m×m} is symmetric positive definite. Large sparse rank-deficient generalized LS problems appear in computational genetics when we consider the mixed linear model for tree or animal genetics [2], [3], [5]. Recently, Yuan [9], [10] and Yuan and Iusem [11] considered direct and iterative methods for the problem …
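The displayed formula defining the problem appears to have been lost in extraction; with A, b and W as introduced above, the weighted (generalized) least squares formulation it presumably refers to is the standard one (up to the convention of whether W or W^{-1} acts as the weight):

```latex
\min_{x \in \mathbb{R}^{n}} \; (Ax - b)^{\mathsf T} W^{-1} (Ax - b),
\qquad A \in \mathbb{R}^{m \times n},\; m \ge n,\; b \in \mathbb{R}^{m},\; W = W^{\mathsf T} \succ 0 .
```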

16.
Herein we consider the discrete solution of nonlinear problems that leads to a series of linear problems associated with non-invariant large-scale sparse symmetric positive matrices. Each linear problem is solved iteratively by a conjugate gradient method. We introduce in this paper new solvers (IRKS, GIRKS and D-GIRKS) that rely on an iterative reuse of Krylov subspaces associated with previously solved linear problems. Numerical assessments are provided on large-scale engineering applications. Considerations related to parallel supercomputing are also addressed.

17.
This study analyzes the influence of sparse matrix reordering on the solution of linear systems arising from interior point methods for linear programming. In particular, such linear systems are solved by the conjugate gradient method with a two-phase hybrid preconditioner that uses the controlled Cholesky factorization during the initial iterations and later adopts the splitting preconditioner. This approach yields satisfactory computational results for the solution of linear systems with symmetric positive-definite matrices. Three reordering heuristics are analyzed in this study: the reverse Cuthill–McKee heuristic, the Sloan algorithm, and the minimum degree heuristic. Through numerical experiments, it was observed that these heuristics can be advantageous in terms of accelerating the convergence of the conjugate gradient method and reducing the processing time.
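A hedged sketch of the reordering-plus-preconditioned-CG pipeline, using scipy's reverse Cuthill-McKee and an incomplete LU (spilu) as a stand-in for the controlled Cholesky factorization; the random sparse SPD matrix is assumed test data, and the splitting preconditioner and interior point context of the study are not modeled.

```python
import numpy as np
from scipy.sparse import random as sprandom, identity, csr_matrix, csc_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import cg, spilu, LinearOperator

# random sparse SPD test matrix (assumed data)
rng = np.random.default_rng(3)
n = 500
B = sprandom(n, n, density=0.01, random_state=rng, format='csr')
A = csr_matrix(B @ B.T + 10.0 * identity(n))
b = np.ones(n)

# 1) reorder with reverse Cuthill-McKee to reduce bandwidth / factorization fill-in
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
Ap = A[perm, :][:, perm]
bp = b[perm]

# 2) incomplete factorization of the reordered matrix as preconditioner for CG
ilu = spilu(csc_matrix(Ap), drop_tol=1e-3, fill_factor=5)
M = LinearOperator((n, n), matvec=ilu.solve)
xp, info = cg(Ap, bp, M=M)

# 3) undo the permutation
x = np.empty(n)
x[perm] = xp
print(info, np.linalg.norm(A @ x - b))
```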

18.
《Optimization》2012,61(7):1577-1591
We present an infeasible interior-point algorithm for the symmetric linear complementarity problem based on modified Nesterov–Todd directions by using Euclidean Jordan algebras. The algorithm decreases the duality gap and the feasibility residual at the same rate. In this algorithm, we construct strictly feasible iterates for a sequence of perturbations of the given problem. Each main iteration of the algorithm consists of a feasibility step and a number of centring steps. The starting point in the first iteration is strictly feasible for a perturbed problem. The feasibility steps lead to a strictly feasible iterate for the next perturbed problem. By using centring steps for the new perturbed problem, a strictly feasible iterate is obtained that is close to the central path of the new perturbed problem. Furthermore, by giving a complexity analysis of the algorithm, we derive the currently best-known iteration bound for infeasible interior-point methods.

19.
We focus on efficient preconditioning techniques for sequences of Karush‐Kuhn‐Tucker (KKT) linear systems arising from the interior point (IP) solution of large convex quadratic programming problems. Constraint preconditioners (CPs), although very effective in accelerating Krylov methods in the solution of KKT systems, have a very high computational cost in some instances, because their factorization may be the most time‐consuming task at each IP iteration. We overcome this problem by computing the CP from scratch only at selected IP iterations and by updating the last computed CP at the remaining iterations, via suitable low‐rank modifications based on a BFGS‐like formula. This work extends the limited‐memory preconditioners (LMPs) for symmetric positive definite matrices proposed by Gratton, Sartenaer and Tshimanga in 2011, by exploiting specific features of KKT systems and CPs. We prove that the updated preconditioners still belong to the class of exact CPs, thus allowing the use of the conjugate gradient method. Furthermore, they have the property of increasing the number of unit eigenvalues of the preconditioned matrix as compared with the generally used CPs. Numerical experiments are reported, which show the effectiveness of our updating technique when the cost for the factorization of the CP is high.

20.
We deal with the iterative solution of linear systems arising from so-called dual-dual mixed finite element formulations. The linear systems are of a two-fold saddle point structure; they are indefinite and ill-conditioned. We define a special inner product that makes matrices of the two-fold saddle point structure, after a specific transformation, symmetric and positive definite. Therefore, the conjugate gradient method with this special inner product can be used as iterative solver. For a model problem, we propose a preconditioner which leads to a bounded number of CG-iterations. Numerical experiments for our model problem confirming the theoretical results are also reported.


