Similar Documents
20 similar documents found (search time: 203 ms)
1.
Semiseparable matrices and many other rank‐structured matrices have been widely used in developing new fast matrix algorithms. In this paper, we generalize the hierarchically semiseparable (HSS) matrix representations and propose some fast algorithms for HSS matrices. We represent HSS matrices in terms of general binary HSS trees and use simplified postordering notation for HSS forms. Fast HSS algorithms including new HSS structure generation and HSS form Cholesky factorization are developed. Moreover, we provide a new linear complexity explicit ULV factorization algorithm for symmetric positive definite HSS matrices with a low‐rank property. The corresponding factors can be used to solve the HSS systems also in linear complexity. Numerical examples demonstrate the efficiency of the algorithms. All these algorithms have nice data locality. They are useful in developing fast structured numerical methods for large discretized PDEs (such as elliptic equations), integral equations, eigenvalue problems, etc. Some applications are shown. Copyright © 2009 John Wiley & Sons, Ltd.
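
As a rough illustration of the low‐rank property these HSS algorithms exploit, the Python sketch below compresses a single off‐diagonal block of a matrix by a truncated SVD. The smooth kernel used here is an arbitrary stand‐in for a discretized integral operator; a full HSS representation applies such compressions recursively over a binary tree of blocks with nested bases, which this one‐level sketch does not attempt.

```python
import numpy as np

# One-level illustration of the off-diagonal low-rank structure HSS exploits.
n = 512
x = np.linspace(0.0, 1.0, n)
K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))   # smooth kernel -> low off-diagonal rank

m = n // 2
A12 = K[:m, m:]                                      # one off-diagonal block of a 2x2 split
U, s, Vt = np.linalg.svd(A12, full_matrices=False)
tol = 1e-8
r = int(np.sum(s > tol * s[0]))                      # numerical rank at relative tolerance tol
A12_lr = (U[:, :r] * s[:r]) @ Vt[:r, :]              # truncated low-rank approximation

print("off-diagonal numerical rank:", r, "out of", m)
print("relative error:", np.linalg.norm(A12 - A12_lr) / np.linalg.norm(A12))
```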

2.
Maximization of submodular functions on a ground set is an NP-hard combinatorial optimization problem. Data correcting algorithms are among the several algorithms suggested for solving this problem exactly and approximately. From the point of view of Hasse diagrams, data correcting algorithms use information belonging to only one level in the Hasse diagram adjacent to the level of the solution at hand. In this paper, we propose a data correcting algorithm that looks at multiple levels of the Hasse diagram and hence makes the data correcting algorithm more efficient. Our computations with quadratic cost partition problems show that this multilevel search yields an 8- to 10-fold reduction in computation times, so that some of the dense quadratic partition problem instances of size 500, currently considered among the most difficult problems and far beyond the capabilities of current exact methods, are solvable on a personal computer working at 300 MHz within 10 min.

3.
In this paper, two accelerated divide‐and‐conquer (ADC) algorithms are proposed for the symmetric tridiagonal eigenvalue problem, which cost O(N²r) flops in the worst case, where N is the dimension of the matrix and r is a modest number depending on the distribution of eigenvalues. Both of these algorithms use hierarchically semiseparable (HSS) matrices to approximate some intermediate eigenvector matrices, which are Cauchy‐like matrices and are off‐diagonally low‐rank. The difference between the two versions lies in the HSS construction algorithm used: one (denoted ADC1) uses a structured low‐rank approximation method, and the other (ADC2) uses a randomized HSS construction algorithm. For the ADC2 algorithm, a method is proposed to estimate the off‐diagonal rank. Numerous experiments have been carried out to show their stability and efficiency. These algorithms are implemented in parallel in a shared memory environment, and some parallel implementation details are included. Comparing the ADCs with highly optimized multithreaded libraries such as Intel MKL, we find that ADCs could be more than six times faster for some large matrices with few deflations. Copyright © 2016 John Wiley & Sons, Ltd.

4.
In this work, we provide new analysis for a preconditioning technique called structured incomplete factorization (SIF) for symmetric positive definite matrices. In this technique, a scaling and compression strategy is applied to construct SIF preconditioners, where off‐diagonal blocks of the original matrix are first scaled and then approximated by low‐rank forms. Some spectral behaviors after applying the preconditioner are shown. The effectiveness is confirmed with the aid of two‐dimensional and three‐dimensional discretized model problems. We further show that previous studies on the robustness are too conservative. In fact, the practical multilevel version of the preconditioner has a robustness enhancement effect, and is unconditionally robust (or breakdown free) for the model problems regardless of the compression accuracy for the scaled off‐diagonal blocks. The studies give new insights into the SIF preconditioning technique and confirm that it is an effective and reliable way for designing structured preconditioners. The studies also provide useful tools for analyzing other structured preconditioners. Various spectral analysis results can be used to characterize other structured algorithms and study more general problems.
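
The following sketch illustrates the scale-and-compress step described above on a one-level 2 × 2 block split of a small SPD model matrix. The grid size, truncation tolerance, and the dense assembly of the preconditioner are illustrative choices only; the actual SIF preconditioner is built multilevel and applied through its structured factors rather than formed explicitly.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular, eigvalsh

# small SPD model problem: 2D Laplacian (5-point stencil) on a g x g grid
g = 16
T = 2.0 * np.eye(g) - np.eye(g, k=1) - np.eye(g, k=-1)
A = np.kron(np.eye(g), T) + np.kron(T, np.eye(g))
n = g * g
m = n // 2
A11, A12, A22 = A[:m, :m], A[:m, m:], A[m:, m:]

# scaling: factor the diagonal blocks and scale the off-diagonal block
L1 = cholesky(A11, lower=True)
L2 = cholesky(A22, lower=True)
C = solve_triangular(L1, A12, lower=True)            # L1^{-1} A12
C = solve_triangular(L2, C.T, lower=True).T          # L1^{-1} A12 L2^{-T}

# compression: truncated SVD of the scaled off-diagonal block
U, s, Vt = np.linalg.svd(C, full_matrices=False)
k = int(np.sum(s > 1e-2))                            # illustrative tolerance
Ck = (U[:, :k] * s[:k]) @ Vt[:k, :]

# assemble the (dense, for illustration only) one-level SIF preconditioner
L = np.block([[L1, np.zeros((m, m))], [np.zeros((m, m)), L2]])
M = L @ np.block([[np.eye(m), Ck], [Ck.T, np.eye(m)]]) @ L.T

# the preconditioned eigenvalues cluster around 1
lam = eigvalsh(A, M)                                 # generalized eigenvalues of M^{-1} A
print("kept rank:", k, "eigenvalue range:", lam.min(), lam.max())
```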

5.
Motivated by the recently popular probabilistic methods for low‐rank approximations and randomized algorithms for the least squares problems, we develop randomized algorithms for the total least squares problem with a single right‐hand side. We present the Nyström method for medium‐sized problems. For the large‐scale and ill‐conditioned cases, we introduce the randomized truncated total least squares with the known or estimated rank as the regularization parameter. We analyze the accuracy of the randomized truncated total least squares algorithm and perform numerical experiments to demonstrate the efficiency of our randomized algorithms. The randomized algorithms can greatly reduce the computational time and still maintain good accuracy with very high probability.
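
For orientation, a dense (non-randomized) truncated TLS solver for a single right-hand side can be written in a few lines; the randomized variants discussed above replace the full SVD of the augmented matrix with a randomized low-rank SVD, which is not reproduced here. The truncation level k plays the role of the regularization parameter, and the test problem below is an ad hoc ill-conditioned example.

```python
import numpy as np

def truncated_tls(A, b, k):
    """Truncated TLS for A x ~ b with truncation level k (dense sketch).
    k = A.shape[1] recovers the classical TLS solution."""
    m, n = A.shape
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    V = Vt.T
    V12 = V[:n, k:]                      # discarded right singular directions, top part
    V22 = V[n:, k:].ravel()              # ... and their last components
    return -V12 @ V22 / (V22 @ V22)      # x = -V12 * pinv(V22)

# ill-conditioned test problem with noise in both A and b
rng = np.random.default_rng(0)
m, n = 200, 50
U = np.linalg.qr(rng.standard_normal((m, n)))[0]
W = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = U @ np.diag(np.logspace(0, -10, n)) @ W.T
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-6 * rng.standard_normal(m)
A_noisy = A + 1e-6 * rng.standard_normal((m, n))
for k in (10, 25, 50):
    x = truncated_tls(A_noisy, b, k)
    print(k, np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```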

6.
This paper introduces a robust preconditioner for general sparse matrices based on low‐rank approximations of the Schur complement in a domain decomposition framework. In this ‘Schur Low Rank’ (SLR) preconditioning approach, the coefficient matrix is first decoupled by a graph partitioner, and then a low‐rank correction is exploited to compute an approximate inverse of the Schur complement associated with the interface unknowns. The method avoids explicit formation of the Schur complement. We show the feasibility of this strategy for a model problem and conduct a detailed spectral analysis of the relation between the low‐rank correction and the quality of the preconditioner. We first introduce the SLR preconditioner for symmetric positive definite matrices and for symmetric indefinite matrices whose interface matrices are symmetric positive definite. Extensions to general symmetric indefinite matrices as well as to nonsymmetric matrices are also discussed. Numerical experiments on general matrices illustrate the robustness and efficiency of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.
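
A minimal dense illustration of the low-rank Schur-complement correction idea is sketched below for a 2 × 2 block SPD matrix: writing S = C - E^T B^{-1} E and G = C^{-1/2} E^T B^{-1} E C^{-1/2}, one has S^{-1} = C^{-1/2}(I - G)^{-1}C^{-1/2} and (I - G)^{-1} = I + U diag(λ/(1-λ)) U^T for the eigendecomposition G = U diag(λ) U^T, so keeping only the largest eigenvalues gives a low-rank corrected approximate inverse. The graph partitioning, the avoidance of forming S, and the indefinite and nonsymmetric extensions discussed in the abstract are all omitted; the random SPD test matrix is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(3)
nB, nC = 300, 60
M = rng.standard_normal((nB + nC, nB + nC))
A = M @ M.T + (nB + nC) * np.eye(nB + nC)        # SPD test matrix, blocked as [[B, E], [E^T, C]]
B, E, C = A[:nB, :nB], A[:nB, nB:], A[nB:, nB:]

EtBiE = E.T @ np.linalg.solve(B, E)              # E^T B^{-1} E
S = C - EtBiE                                    # exact Schur complement (for checking only)
w, Q = np.linalg.eigh(C)
Cih = (Q / np.sqrt(w)) @ Q.T                     # C^{-1/2}
G = Cih @ EtBiE @ Cih                            # G = C^{-1/2} E^T B^{-1} E C^{-1/2}
lam, U = np.linalg.eigh(G)                       # ascending eigenvalues, all in [0, 1)

def slr_inverse(k):
    """Rank-k corrected approximate inverse of S based on
    S^{-1} = C^{-1/2} (I + U_k diag(lam/(1-lam)) U_k^T) C^{-1/2}."""
    if k == 0:
        return Cih @ Cih                         # plain C^{-1}, no correction
    Uk, lk = U[:, -k:], lam[-k:]
    corr = np.eye(nC) + (Uk * (lk / (1.0 - lk))) @ Uk.T
    return Cih @ corr @ Cih

for k in (0, 5, 20, nC):
    print(k, np.linalg.norm(slr_inverse(k) @ S - np.eye(nC)))
```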

7.
This paper is concerned with the numerical solution of symmetric large‐scale Lyapunov equations with low‐rank right‐hand sides and coefficient matrices depending on a parameter. Specifically, we consider the situation when the parameter dependence is sufficiently smooth, and the aim is to compute solutions for many different parameter samples. On the basis of existing results for Lyapunov equations and parameter‐dependent linear systems, we prove that the tensor containing all solution samples typically allows for an excellent low multilinear rank approximation. This fact can be exploited by stacking all sampled equations into one huge linear system and combining the preconditioned CG method with low‐rank truncation. Our approach is flexible enough to allow for a variety of preconditioners based, for example, on the sign function iteration or the alternating direction implicit method. Copyright © 2013 John Wiley & Sons, Ltd.

8.
In this paper, an inverse complementarity power iteration method (ICPIM) for solving eigenvalue complementarity problems (EiCPs) is proposed. Previously, the complementarity power iteration method (CPIM) for solving EiCPs was designed based on the projection onto the convex cone K. In the new algorithm, a strongly monotone linear complementarity problem over the convex cone K needs to be solved at each iteration. It is shown that, for the symmetric EiCPs, the CPIM can be interpreted as the well‐known conditional gradient method, which requires only linear optimization steps over a well‐suited domain. Moreover, the ICPIM is closely related to successive quadratic programming (SQP) via renormalization of iterates. The global convergence of these two algorithms is established by defining two nonnegative merit functions with zero global minimum on the solution set of the symmetric EiCP. Finally, some numerical simulations are included to evaluate the efficiency of the proposed algorithms.

9.
We propose in this work new algorithms combining the asymptotic numerical method with a meshless discretization (MFS‐MPS: method of fundamental solutions‐method of particular solutions) to compute branch solutions of nonlinear Poisson problems. To detect singular points on these branches, a geometrical indicator, Padé approximants, and an analytical bifurcation indicator are proposed. Numerical applications show the robustness and the effectiveness of the proposed algorithms. © 2014 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 30: 978–993, 2014

10.
We study sweeping preconditioners for symmetric and positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off‐diagonal ranks. This involves developing low rank hierarchical approximations to inverses. We first provide a convergence analysis for the algorithm for reduced rank hierarchical inverse approximation. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner. Copyright © 2014 John Wiley & Sons, Ltd.

11.
Hermitian and unitary matrices are two representatives of the class of normal matrices whose full eigenvalue decomposition can be stably computed in quadratic computing complexity once the matrix has been reduced, for instance, to tridiagonal or Hessenberg form. Recently, fast and reliable eigensolvers dealing with low‐rank perturbations of unitary and Hermitian matrices have been proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, for example, the monomial or Chebyshev basis. Often, however, it is not known beforehand whether or not a matrix can be written as the sum of a Hermitian or unitary matrix plus a low‐rank perturbation. In this paper, we give necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low‐rank matrices. The number of singular values deviating from 1 determines the rank of a perturbation to bring a matrix to unitary form. A similar condition holds for Hermitian matrices; the eigenvalues of the skew‐Hermitian part differing from 0 dictate the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Then, based on these conditions, we identify the closest Hermitian or unitary plus rank k matrix to a given matrix A, in Frobenius and spectral norm, and give a formula for their distance from A. Finally, we present a practical iteration to detect the low‐rank perturbation. Numerical tests show that this straightforward algorithm is effective.
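
A quick numerical companion to these characterizations is sketched below: it counts singular values away from 1 and nonzero eigenvalues of the skew-Hermitian part (split by sign). How exactly these counts determine the minimal perturbation rank is made precise in the paper; the function only reports them as rough indicators, and the test matrices and tolerance are arbitrary choices.

```python
import numpy as np

def structure_counts(A, tol=1e-8):
    """Indicators for the 'Hermitian/unitary plus low rank' structure: how many singular
    values of A lie above/below 1, and how many eigenvalues of the skew-Hermitian part
    are positive/negative.  (The paper relates these counts precisely to the minimal
    rank k of a splitting; here they are only printed as indicators.)"""
    sv = np.linalg.svd(A, compute_uv=False)
    above, below = int(np.sum(sv > 1 + tol)), int(np.sum(sv < 1 - tol))
    ev = np.linalg.eigvalsh(1j * (A - A.conj().T) / 2.0)   # 1j * (skew part) is Hermitian
    pos, neg = int(np.sum(ev > tol)), int(np.sum(ev < -tol))
    return {"sv>1": above, "sv<1": below, "skew+": pos, "skew-": neg}

rng = np.random.default_rng(0)
n = 80
H = rng.standard_normal((n, n)); H = (H + H.T) / 2          # Hermitian base matrix
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]            # orthogonal (unitary) base matrix
u, v = rng.standard_normal(n), rng.standard_normal(n)
print(structure_counts(H + 1j * np.outer(u, v)))            # Hermitian + rank-1 perturbation
print(structure_counts(Q + np.outer(u, v)))                 # unitary + rank-1 perturbation
```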

12.
R. Callies, PAMM 2003, 3(1): 559–562
A multi‐model approach is presented which allows the efficient and stable coupling between high precision algorithms for DAE boundary value problems arising from constrained optimal control problems and model data obtained with rather low accuracy from finite element solutions of PDEs. A model hierarchy with increasing approximation accuracy serves as an intermediate layer. Numerical efficiency is demonstrated for an example from flight mechanics.

13.
We present a general active set algorithm for the solution of a convex quadratic programming problem having a parametrized Hessian matrix. The parametric Hessian matrix is a positive semidefinite Hessian matrix plus a real parameter multiplying a symmetric matrix of rank one or two. The algorithm solves the problem for all parameter values in the open interval upon which the parametric Hessian is positive semidefinite. The algorithm is general in that any of several existing quadratic programming algorithms can be extended in a straightforward manner for the solution of the parametric Hessian problem. This research was supported by the Natural Sciences and Engineering Research Council under Grant No. A8189 and under a Postgraduate Scholarship, by an Ontario Graduate Scholarship, and by the University of Windsor Research Board under Grant No. 9432.

15.
An updated model should be correlated with experimental data to ensure that it captures the dynamics of the real structure and predicts them more accurately. Considering that iterative methods for model updating have received little attention, this paper studies an iterative algorithm for quadratic model updating problems that incorporates the measured modal data into the finite element model to produce adjusted mass, gyroscopic, and stiffness matrices that closely match the experimental modal data. By this method, the best approximation symmetric and skew-symmetric solutions can be obtained by choosing a convergence factor. A numerical example shows that the introduced iterative algorithm is quite efficient.

16.
The aim of this article is to present several computational algorithms for numerical solutions of a nonlinear finite difference system that represents a finite difference approximation of a class of fourth‐order elliptic boundary value problems. The numerical algorithms are based on the method of upper and lower solutions and its associated monotone iterations. Three linear monotone iterative schemes are given, and each iterative scheme yields two sequences, which converge monotonically from above and below, respectively, to a maximal solution and a minimal solution of the finite difference system. This monotone convergence property leads to upper and lower bounds of the solution in each iteration as well as an existence‐comparison theorem for the finite difference system. Sufficient conditions for the uniqueness of the solution and some techniques for the construction of upper and lower solutions are obtained, and numerical results for a two‐point boundary‐value problem with known analytical solution are given. © 2001 John Wiley & Sons, Inc. Numer Methods Partial Differential Eq 17:347–368, 2001
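
As a toy illustration of the upper-and-lower-solution framework, the sketch below runs a linear monotone (Picard-type) iteration for the simpler second-order two-point problem -u'' = f(x, u), u(0) = u(1) = 0, rather than the fourth-order problems treated above. The nonlinearity f(u) = 1/(1 + u²), the shift c = 1, and the starting functions are ad hoc choices for which 0 and x(1 - x)/2 are genuine lower and upper solutions; the two monotone sequences then meet at the discrete solution.

```python
import numpy as np

def monotone_iteration(u0, f, c, n=200, iters=50):
    """Monotone (Picard) iteration for  -u'' = f(x, u),  u(0) = u(1) = 0.
    Each step solves the linear problem  -v'' + c v = c u_k + f(x, u_k)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    # tridiagonal finite-difference matrix for  -d^2/dx^2 + c I
    A = (np.diag(np.full(n, 2.0 / h**2 + c))
         - np.diag(np.full(n - 1, 1.0 / h**2), 1)
         - np.diag(np.full(n - 1, 1.0 / h**2), -1))
    u = u0(x)
    for _ in range(iters):
        u = np.linalg.solve(A, c * u + f(x, u))
    return x, u

f = lambda x, u: 1.0 / (1.0 + u**2)           # bounded nonlinearity, 0 < f <= 1
lower = lambda x: np.zeros_like(x)            # lower solution:  -0'' = 0 <= f
upper = lambda x: 0.5 * x * (1 - x)           # upper solution:  -u'' = 1 >= f
x, u_lo = monotone_iteration(lower, f, c=1.0)
_, u_hi = monotone_iteration(upper, f, c=1.0)
print(np.max(np.abs(u_hi - u_lo)))            # the increasing and decreasing sequences meet
```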

17.
Optimal solutions of interior point algorithms for linear and quadratic programming and linear complementarity problems provide maximally complementary solutions. Maximally complementary solutions can be characterized by optimal partitions. On the other hand, the solutions provided by simplex-based pivot algorithms are given in terms of complementary bases. A basis identification algorithm is an algorithm which generates a complementary basis, starting from any complementary solution. A partition identification algorithm is an algorithm which generates a maximally complementary solution (and its corresponding partition), starting from any complementary solution. In linear programming such algorithms were respectively proposed by Megiddo in 1991 and Balinski and Tucker in 1969. In this paper we will present identification algorithms for quadratic programming and linear complementarity problems with sufficient matrices. The presented algorithms are based on the principal pivot transform and the orthogonality property of basis tableaus. Received April 9, 1996 / Revised version received April 27, 1998 / Published online May 12, 1999

18.
This paper presents two self-scaling symmetric rank-one (SSR1) Newton-type methods for solving systems of nonlinear monotone equations, namely a projected SSR1 method and a projected limited-memory SSR1 method. The two algorithms apply a simple modification to the self-scaling symmetric rank-one correction parameter and adopt a conservative updating strategy. Under the condition that the nonlinear monotone function is Lipschitz continuous, global convergence of both algorithms is proved. Preliminary numerical comparisons with BFGS methods of the same type indicate that the projected self-scaling symmetric rank-one algorithms are numerically comparable to the corresponding BFGS methods for solving systems of nonlinear monotone equations.
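
For orientation, a generic hyperplane-projection method for monotone equations (the kind of framework into which quasi-Newton directions such as SSR1 are usually inserted) can be sketched as follows. The steepest-descent-like direction d = -F(x), the line-search constants, and the test mapping are illustrative stand-ins rather than the algorithms of the paper.

```python
import numpy as np

def projection_method(F, x0, sigma=1e-4, tol=1e-8, max_iter=500):
    """Generic hyperplane-projection method for monotone equations F(x) = 0
    (Solodov-Svaiter type), using the plain direction d = -F(x) for illustration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx
        t = 1.0
        for _ in range(60):                       # backtracking line search
            z = x + t * d
            Fz = F(z)
            if -Fz @ d >= sigma * t * (d @ d):
                break
            t *= 0.5
        # project x onto the hyperplane {y : Fz . (y - z) = 0} separating x from the solutions
        x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
    return x

F = lambda x: x + np.sin(x)                       # monotone test mapping with its zero at the origin
print(np.linalg.norm(projection_method(F, np.ones(50))))
```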

19.
We focus on efficient preconditioning techniques for sequences of Karush‐Kuhn‐Tucker (KKT) linear systems arising from the interior point (IP) solution of large convex quadratic programming problems. Constraint preconditioners (CPs), although very effective in accelerating Krylov methods in the solution of KKT systems, have a very high computational cost in some instances, because their factorization may be the most time‐consuming task at each IP iteration. We overcome this problem by computing the CP from scratch only at selected IP iterations and by updating the last computed CP at the remaining iterations, via suitable low‐rank modifications based on a BFGS‐like formula. This work extends the limited‐memory preconditioners (LMPs) for symmetric positive definite matrices proposed by Gratton, Sartenaer and Tshimanga in 2011, by exploiting specific features of KKT systems and CPs. We prove that the updated preconditioners still belong to the class of exact CPs, thus allowing the use of the conjugate gradient method. Furthermore, they have the property of increasing the number of unit eigenvalues of the preconditioned matrix as compared with the generally used CPs. Numerical experiments are reported, which show the effectiveness of our updating technique when the cost for the factorization of the CP is high.
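
The low-rank updating idea can be illustrated, in a much-simplified SPD setting with dense matrices, by a single BFGS-like update of an approximate inverse: the update enforces the secant condition H_new (A s) = s, so s becomes an eigenvector of H_new A with eigenvalue 1, echoing the "more unit eigenvalues" effect mentioned above. The constraint-preconditioner setting, the KKT block structure, and the limited-memory bookkeeping of the paper are not reproduced, and the test matrix is an arbitrary SPD stand-in.

```python
import numpy as np

def bfgs_like_update(H, s, y):
    """One BFGS-like low-rank modification of an approximate inverse H so that the
    updated operator satisfies the secant condition H_new @ y = s (dense sketch)."""
    rho = 1.0 / (y @ s)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

rng = np.random.default_rng(4)
n = 120
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                      # SPD stand-in for a KKT-type system
H = np.diag(1.0 / np.diag(A))                    # crude preconditioner computed "from scratch"
s = rng.standard_normal(n)
H_new = bfgs_like_update(H, s, A @ s)            # low-rank update instead of refactoring
print(np.allclose(H_new @ (A @ s), s))           # s is an eigenvector of H_new A with eigenvalue 1
```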

20.
ADI preconditioned Krylov methods for large Lyapunov matrix equations
In the present paper, we propose preconditioned Krylov methods for solving large Lyapunov matrix equations AX + XA^T + BB^T = 0. Such problems appear in control theory, model reduction, circuit simulation and others. Using the Alternating Direction Implicit (ADI) iteration method, we transform the original Lyapunov equation to an equivalent symmetric Stein equation depending on some ADI parameters. We then define the Smith and the low rank ADI preconditioners. To solve the obtained Stein matrix equation, we apply the global Arnoldi method and get low rank approximate solutions. We give some theoretical results and report numerical tests to show the effectiveness of the proposed approaches.
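
To make the ADI-to-Stein transformation concrete, here is a minimal single-shift (Smith-type) low-rank iteration for AX + XA^T + BB^T = 0. The shift p, the test matrix, and the fixed number of steps are illustrative choices; the paper's approach combines several ADI shifts with a preconditioned global Arnoldi method rather than this plain accumulation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def low_rank_smith(A, B, p=-1.0, steps=25):
    """Single-shift Smith iteration for A X + X A^T + B B^T = 0 (A stable, p < 0).
    Uses the equivalent Stein form X = A_p X A_p^T + B_p B_p^T with
    A_p = (A + pI)^{-1}(A - pI) and B_p = sqrt(-2p) (A + pI)^{-1} B, and returns a
    low-rank factor Z such that X ~ Z Z^T."""
    n = A.shape[0]
    I = np.eye(n)
    Ap = np.linalg.solve(A + p * I, A - p * I)
    Bp = np.sqrt(-2.0 * p) * np.linalg.solve(A + p * I, B)
    Z, V = Bp, Bp
    for _ in range(steps):
        V = Ap @ V                                  # next block column A_p^j B_p
        Z = np.hstack([Z, V])
    return Z

rng = np.random.default_rng(5)
n, r = 60, 2
A = -2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)   # stable test matrix
B = rng.standard_normal((n, r))
Z = low_rank_smith(A, B, p=-1.5, steps=25)
X_ref = solve_continuous_lyapunov(A, -B @ B.T)       # dense reference solution
print(np.linalg.norm(Z @ Z.T - X_ref) / np.linalg.norm(X_ref))
```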
