Similar Documents
20 similar documents found (search time: 31 ms)
1.
When solving large systems of equations by preconditioned iterative solution methods, one normally uses a fixed preconditioner which may be defined by some eigenvalue information, such as in a Chebyshev iteration method. In many problems, however, it may be more effective to use variable preconditioners, in particular when the eigenvalue information is not available. In the present paper, a recursive way of constructing variable-step (in general, nonlinear) multilevel preconditioners for selfadjoint and coercive second-order elliptic problems, discretized by the finite element method, is proposed. The preconditioner is constructed recursively from the coarsest to finer and finer levels. Each preconditioning step requires only block-diagonal solvers at all levels, except at every k0-th level (k0 ≥ 1), where we perform a sufficient number ν ≥ 1 of GCG-type variable-step iterations that again involve a variable-step preconditioning for that level. It turns out that for any sufficiently large value of k0 and, asymptotically, for ν sufficiently large but not too large, the method has both an optimal rate of convergence and an optimal order of computational complexity, for both two- and three-dimensional problem domains. The method requires no parameter estimates, and the convergence results do not depend on the regularity of the elliptic problem.

2.
In this paper we study inexact inverse iteration for solving the generalised eigenvalue problem Ax = λMx. We show that inexact inverse iteration is a modified Newton method and hence obtain convergence rates for various versions of inexact inverse iteration for the calculation of an algebraically simple eigenvalue. In particular, if the inexact solves are carried out with a tolerance chosen proportional to the eigenvalue residual then quadratic convergence is achieved. We also show how modifying the right hand side in inverse iteration still provides a convergent method, but the rate of convergence will be quadratic only under certain conditions on the right hand side. We discuss the implications of this for the preconditioned iterative solution of the linear systems. Finally we introduce a new ILU preconditioner which is a simple modification to the usual preconditioner, but which has advantages both for the standard form of inverse iteration and for the version with a modified right hand side. Numerical examples are given to illustrate the theoretical results. AMS subject classification (2000): 65F15, 65F10
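As a rough illustration of the method this abstract discusses, the following is a minimal dense-matrix sketch of shifted inverse iteration for Ax = λMx. It is not the paper's algorithm: the inner linear solve is done here with a direct solver, where the paper would use an inexact (preconditioned iterative) solve with a residual-proportional tolerance; all names and tolerances are illustrative.

```python
import numpy as np

def inverse_iteration(A, M, sigma, x0, tol=1e-10, max_iter=100):
    """Sketch of shifted inverse iteration for A x = lambda M x.

    Each step solves (A - sigma*M) y = M x exactly; the paper replaces
    this by an inexact solve whose tolerance is tied to the eigenvalue
    residual ||A x - lambda M x||.
    """
    x = x0 / np.linalg.norm(x0)
    lam = (x @ (A @ x)) / (x @ (M @ x))
    for _ in range(max_iter):
        y = np.linalg.solve(A - sigma * M, M @ x)   # the inner solve
        x = y / np.linalg.norm(y)
        lam = (x @ (A @ x)) / (x @ (M @ x))         # eigenvalue estimate
        if np.linalg.norm(A @ x - lam * (M @ x)) < tol:
            break
    return lam, x
```

The iteration converges to the eigenvalue closest to the shift sigma; the closer sigma is to that eigenvalue, the faster the outer convergence, but the harder the inner solves become.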

3.
In this paper, we present an inexact inverse subspace iteration method for computing a few eigenpairs of the generalized eigenvalue problem Ax = λBx. We first formulate a version of inexact inverse subspace iteration in which the approximation from one step is used as an initial approximation for the next step. We then analyze the convergence property, which relates the accuracy in the inner iteration to the convergence rate of the outer iteration. In particular, the linear convergence property of the inverse subspace iteration is preserved. Numerical examples are given to demonstrate the theoretical results.
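A minimal sketch of the block (subspace) analogue, again with an exact dense solve standing in for the paper's inexact inner iteration; the function name, iteration count, and Rayleigh-Ritz post-processing are illustrative choices, not taken from the paper.

```python
import numpy as np

def inverse_subspace_iteration(A, B, X0, n_iter=50):
    """Sketch of inverse subspace iteration for A x = lambda B x.

    Each step solves the block system A Y = B X (exactly here; the paper
    allows an inexact solve) and re-orthonormalizes, reusing the previous
    subspace as the starting approximation for the next step.
    """
    X, _ = np.linalg.qr(X0)
    for _ in range(n_iter):
        Y = np.linalg.solve(A, B @ X)     # one (in)exact block solve
        X, _ = np.linalg.qr(Y)            # keep an orthonormal basis
    # Rayleigh-Ritz on the converged subspace for the eigenvalue estimates
    Ar, Br = X.T @ A @ X, X.T @ B @ X
    return np.sort(np.linalg.eigvals(np.linalg.solve(Br, Ar)).real), X
```

As in the scalar case, the subspace converges linearly to the invariant subspace of the smallest eigenvalues, with rate governed by the gap to the first excluded eigenvalue.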

4.
Lisa Lorentzen, Acta Appl Math, 2000, 61(1-3): 185-206
This is a survey of some basic ideas in the convergence theory for continued fractions, in particular value sets, general convergence and the use of modified approximants to obtain convergence acceleration and analytic continuation. The purpose is to show how these ideas apply to some other areas of mathematics. In particular, we introduce {w_k}-modifications and general convergence for sequences of Padé approximants.

5.
The convergence behavior of the Picard iteration X_{k+1} = AX_k + B and of the weighted case Y_k = X_k/b_k is investigated. It is shown that the convergence of both iterations is related to the so-called effective spectrum of A with respect to some matrix. As an application of our convergence results, we discuss the convergence behavior of a sequence of scaled triangular matrices {D_N T_N}.
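A minimal sketch of the unweighted iteration in the abstract above. The convergence condition used in the comment (spectral radius of A below 1) is the classical sufficient condition; the paper's sharper "effective spectrum" criterion is not reproduced here.

```python
import numpy as np

def picard_iteration(A, B, X0, n_iter=200):
    """Run the Picard iteration X_{k+1} = A X_k + B.

    When the spectral radius of A is below 1 the iterates converge to
    the unique fixed point X* = (I - A)^{-1} B, since the error
    e_k = X_k - X* satisfies e_{k+1} = A e_k.
    """
    X = X0
    for _ in range(n_iter):
        X = A @ X + B
    return X
```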

6.
One of the most widely used methods for eigenvalue computation is the QR iteration with Wilkinson's shift: here, the shift s is the eigenvalue of the bottom 2×2 principal minor closest to the corner entry. It has been a long-standing question whether the rate of convergence of the algorithm is always cubic. In contrast, we show that there exist matrices for which the rate of convergence is strictly quadratic. More precisely, let T_X be the 3×3 matrix having only two nonzero entries, (T_X)_12 = (T_X)_21 = 1, and let T_Λ be the set of real, symmetric tridiagonal matrices with the same spectrum as T_X. There exists a neighborhood U ⊂ T_Λ of T_X which is invariant under Wilkinson's shift strategy with the following properties. For T_0 ∈ U, the sequence of iterates (T_k) exhibits either strictly quadratic or strictly cubic convergence to zero of the entry (T_k)_23. In fact, quadratic convergence occurs exactly when lim T_k = T_X. Let X be the union of such quadratically convergent sequences (T_k): the set X has Hausdorff dimension 1 and is a union of disjoint arcs X^σ meeting at T_X, where σ ranges over a Cantor set.
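For reference, a plain explicit-QR sketch of the shift strategy described above. Production codes use implicit bulge chasing rather than forming Q and R; the closed-form shift below is the standard stable way to pick the eigenvalue of the trailing 2×2 block nearest the corner entry.

```python
import numpy as np

def wilkinson_shift(T):
    """Eigenvalue of the trailing 2x2 principal submatrix of symmetric T
    closest to the corner entry T[-1, -1]."""
    a, b, c = T[-2, -2], T[-2, -1], T[-1, -1]
    d = (a - c) / 2.0
    # stable closed form: mu = c - b^2 / (d + sign(d) * sqrt(d^2 + b^2))
    denom = d + np.sign(d if d != 0 else 1.0) * np.hypot(d, b)
    return c - b * b / denom

def qr_step_with_shift(T):
    """One shifted QR step T -> R Q + s I on a symmetric matrix T."""
    s = wilkinson_shift(T)
    n = T.shape[0]
    Q, R = np.linalg.qr(T - s * np.eye(n))
    return R @ Q + s * np.eye(n)
```

Each step is an orthogonal similarity, so the spectrum is preserved while the last off-diagonal entry is driven to zero, typically cubically (quadratically on the exceptional set the abstract characterizes).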

7.
The discretization of eigenvalue problems for partial differential operators is a major source of matrix eigenvalue problems having very large dimensions, but only some of the smallest eigenvalues together with the eigenvectors are to be determined. Preconditioned inverse iteration (a "matrix-free" method) derives from the well-known inverse iteration procedure in such a way that the associated system of linear equations is solved approximately by using a (multigrid) preconditioner. A new convergence analysis for preconditioned inverse iteration is presented. The preconditioner is assumed to satisfy some bound for the spectral radius of the error propagation matrix, resulting in a simple geometric setup. In this first part, the case of poorest convergence depending on the choice of the preconditioner is analyzed. In the second part, the dependence on all initial vectors having a fixed Rayleigh quotient is considered. The given theory provides sharp convergence estimates for the eigenvalue approximations, showing that multigrid eigenvalue/vector computations can be done with efficiency comparable to that of multigrid methods for boundary value problems.
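The basic PINVIT update the analysis concerns can be sketched as follows. For clarity the preconditioner is passed as an explicit approximate inverse matrix; in the matrix-free setting of the abstract it would be the action of a multigrid cycle, and the quality requirement is the bound on the error propagation matrix I - B⁻¹A.

```python
import numpy as np

def pinvit(A, B_inv, x0, n_iter=100):
    """Preconditioned inverse iteration sketch for symmetric positive
    definite A: x <- x - B^{-1} (A x - rho(x) x), where rho(x) is the
    Rayleigh quotient and B^{-1} approximates A^{-1}."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        rho = x @ (A @ x)                  # Rayleigh quotient (||x|| = 1)
        x = x - B_inv @ (A @ x - rho * x)  # preconditioned residual step
        x = x / np.linalg.norm(x)
    return x @ (A @ x), x
```

With a good preconditioner the Rayleigh quotient decreases monotonically to the smallest eigenvalue, each step costing only one operator application and one preconditioner application.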

8.
Quasi-Newton algorithms minimize a function F(x), x ∈ R^n, searching at any iteration k along the direction s_k = −H_k g_k, where g_k = ∇F(x_k) and H_k approximates in some sense the inverse Hessian of F(x) at x_k. When the matrix H is updated according to the formulas in Broyden's family and when an exact line search is performed at any iteration, a compact algorithm (free from the Broyden's family parameter) can be conceived in terms of the following n×n matrix: $$H_R = H - Hgg^T H/g^T Hg,$$ which can be viewed as an approximating reduced inverse Hessian. In this paper, a new algorithm is proposed which uses at any iteration an (n−1)×(n−1) matrix K related to H_R by $$H_R = Q\left[ {\begin{array}{*{20}c} 0 & 0 \\ 0 & K \\ \end{array} } \right]Q$$ where Q is a suitable orthogonal n×n matrix. The updating formula in terms of the matrix K incorporated in this algorithm is only moderately more complicated than the standard updating formulas for variable-metric methods, but, at the same time, it updates at any iteration a positive definite matrix K, instead of the singular matrix H_R. Besides its compactness with respect to the algorithms with updating formulas in Broyden's class, a further noticeable feature of the reduced-Hessian algorithm is that the downhill condition can be stated in a simple way, and thus efficient line searches may be implemented.

9.
In this paper we discuss an abstract iteration scheme for the calculation of the smallest eigenvalue of an elliptic operator eigenvalue problem. A short and geometric proof based on the preconditioned inverse iteration (PINVIT) for matrices (Knyazev and Neymeyr, SIAM J Matrix Anal 31:621–628, 2009) is extended to the case of operators. We show that convergence is retained up to any tolerance if one only uses approximate applications of operators, which leads to the perturbed preconditioned inverse iteration (PPINVIT). We then analyze the Besov regularity of the eigenfunctions of the Poisson eigenvalue problem on a polygonal domain, showing the advantage of an adaptive solver over uniform refinement when using a stable wavelet basis. A numerical example for PPINVIT, applied to the model problem on the L-shaped domain, is shown to reproduce the predicted behaviour.

10.
Let G be an algebraic group over a field k. We call g ∈ G(k) real if g is conjugate to g^{-1} in G(k). In this paper we study reality for groups of type G_2 over fields of characteristic different from 2. Let G be such a group over k. We discuss reality for both semisimple and unipotent elements. We show that a semisimple element in G(k) is real if and only if it is a product of two involutions in G(k). Every unipotent element in G(k) is a product of two involutions in G(k). We discuss reality for G_2 over special fields and construct examples to show that reality fails for semisimple elements in G_2 over ℚ and ℚ_p. We show that semisimple elements are real for G_2 over k with cd(k) ≤ 1. We conclude with examples of nonreal elements in G_2 over finite k, with characteristic of k not 2 or 3, which are not semisimple or unipotent.

11.
Given any natural number m ≥ 2, we describe an iteration function g_m(x) having the property that for any initial iterate x_0 > √α, the sequence of fixed-point iterations x_{k+1} = g_m(x_k) converges monotonically to √α with an m-th order rate of convergence. For m = 2 and 3, g_m(x) coincides with Newton's and Halley's iteration functions, respectively, as applied to p(x) = x² − α. This research is supported in part by the National Science Foundation under Grant No. CCR-9208371.
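The two special cases named in the abstract are easy to write down: Newton's and Halley's iterations applied to p(x) = x² − α give the second- and third-order square-root iterations below (the general g_m for m > 3 is the paper's construction and is not reproduced here).

```python
def g2(x, alpha):
    """Newton's iteration for p(x) = x^2 - alpha (the m = 2 case)."""
    return 0.5 * (x + alpha / x)

def g3(x, alpha):
    """Halley's iteration for p(x) = x^2 - alpha (the m = 3 case):
    x - 2 p p' / (2 p'^2 - p p'') simplifies to the rational map below."""
    return x * (x * x + 3 * alpha) / (3 * x * x + alpha)

def fixed_point_sqrt(alpha, x0, g, n_iter=20):
    """Iterate x_{k+1} = g(x_k); for x0 > sqrt(alpha) the sequence
    decreases monotonically to sqrt(alpha)."""
    x = x0
    for _ in range(n_iter):
        x = g(x, alpha)
    return x
```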

12.
Iterative algorithms for the Convex Feasibility Problem can be modified so that at iteration k the original convex sets are perturbed with a parameter ε_k which tends to zero as k increases. We establish conditions on such algorithms which guarantee the existence of a sequence of perturbation parameters that makes them finitely convergent when applied to a convex feasibility problem whose feasible set has nonempty interior.

13.
14.
In this paper, we propose an inverse inexact iteration method for the computation of the eigenvalue with the smallest modulus and its associated eigenvector for a large sparse matrix. The linear systems of the traditional inverse iteration are solved with accuracy that depends on the eigenvalue with the second smallest modulus and iteration numbers. We prove that this approach preserves the linear convergence of inverse iteration. We also propose two practical formulas for the accuracy bound which are used in actual implementation. © 1997 John Wiley & Sons, Ltd.

15.
The preconditioned inverse iteration is an efficient method to compute the smallest eigenpair of a symmetric positive definite matrix M. Here we use this method to find the smallest eigenvalues of a hierarchical matrix. The storage complexity of the data-sparse ℋ-matrices is almost linear. We use ℋ-arithmetic to precondition with an approximate inverse of M or an approximate Cholesky decomposition of M. In general, ℋ-arithmetic is of linear-polylogarithmic complexity, so the computation of one eigenvalue is cheap. We extend the ideas to the computation of inner eigenvalues by computing an invariant subspace S of (M − μI)² by subspace preconditioned inverse iteration. The eigenvalues of the generalized matrix Rayleigh quotient μ_M(S) are the desired inner eigenvalues of M. The idea of using (M − μI)² instead of M is known as the folded spectrum method. As we rely on the positive definiteness of the shifted matrix, we cannot simply apply shifted inverse iteration instead. Numerical results substantiate the convergence properties and show that the computation of the eigenvalues is superior to existing algorithms for non-sparse matrices. Copyright © 2012 John Wiley & Sons, Ltd.
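The core folded spectrum idea can be sketched in a few lines: the interior eigenvalue of M nearest a shift μ becomes the smallest eigenvalue of the positive semidefinite matrix (M − μI)². This toy version uses plain dense inverse iteration on the folded operator; the paper's subspace PINVIT with ℋ-arithmetic preconditioning is not reproduced, and the regularization constant is an illustrative choice.

```python
import numpy as np

def folded_spectrum_eigenpair(M, mu, x0, n_iter=200):
    """Find the eigenvalue of symmetric M nearest the shift mu via the
    folded operator S = (M - mu I)^2, whose smallest eigenvalue
    corresponds to the interior eigenvalue of M closest to mu."""
    n = M.shape[0]
    S = (M - mu * np.eye(n)) @ (M - mu * np.eye(n))    # folded operator
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        x = np.linalg.solve(S + 1e-12 * np.eye(n), x)  # regularized solve
        x /= np.linalg.norm(x)
    return x @ (M @ x), x  # Rayleigh quotient of M: the inner eigenvalue
```

Note that folding squares the condition number near μ, which is exactly why the paper works with a preconditioned subspace iteration instead of the naive solve shown here.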

16.
The purpose of this paper is to analyze the convergence of interval-type algorithms for solving the generalized fractional program. They are characterized by an interval [LB_k, UB_k] containing the optimal value, whose length is reduced at each iteration. A closer analysis of the bounds LB_k and UB_k allows us to modify slightly the best known interval-type algorithm NEWMODM so as to prove its convergence and derive convergence rates similar to those for a Dinkelbach-type algorithm MAXMODM under the same conditions. Numerical results in the linear case indicate that the modifications needed to obtain the convergence results do not come at the expense of numerical efficiency, since the modified version BFII is as efficient as NEWMODM and more efficient than MAXMODM. This research was supported by NSERC (Grant A8312) and FCAR (Grant 0899).

17.
Optimization techniques are finding increasingly numerous applications in process design, in parallel with increasing computer sophistication. The process synthesis problem can be stated as a large-scale constrained optimization problem involving numerous local optima and presenting a nonlinear and nonconvex character. To solve this kind of problem, the classical optimization methods can lead to analytical and numerical difficulties. This paper describes the feasibility of an optimization technique based on learning systems which can take into consideration all the prior information concerning the process to be optimized and improve their behavior with time. This information generally occurs in a very complex analytical, empirical, or know-how form. Computer simulations related to chemical engineering problems (benzene chlorination, distillation sequence) and numerical examples are presented. The results illustrate both the performance and the implementation simplicity of this method.
Nomenclature:
- c_i: penalty probability
- cp: precision parameter on constraints
- D: variation domain of the variable x
- f(·): objective function
- g(·): constraints
- i, j: indexes
- k: iteration number
- N: number of actions
- P: probability distribution vector
- p_i: i-th component of the vector P at iteration k
- r: number of reactors in the flowsheet
- u(k): discrete value or action chosen by the algorithm at iteration k
- u_i: discrete value of the optimization variable in [u_min, u_max]
- u_min: lowest value of the optimization variable
- u_max: largest value of the optimization variable
- Z: random number
- x: variable for the criterion function
- xp: precision parameter on criterion function
- W(k): performance index unit output at iteration k
- 0, 1: reinforcement scheme parameters
- p: sum of the probability distribution vector components

18.
In this paper, we will compare the convergence properties of three basic reduction methods by placing them in a general framework. It covers the reduction to tridiagonal, semiseparable and semiseparable plus diagonal form. These reductions are often used as the first step in the computation of the eigenvalues and/or eigenvectors of arbitrary matrices. In this way, the calculation of the eigenvalues using, for example, the QR-algorithm reduces in complexity. First we will investigate the convergence properties of these three reduction algorithms. It will be shown that for the partially reduced matrices at step k of any of these reduction algorithms, the lower right k × k (already reduced) sub-block will have the Lanczos–Ritz values, w.r.t. a certain starting vector. It will also be shown that the reductions to semiseparable and to semiseparable plus diagonal form have an extra convergence behavior: a special type of subspace iteration is performed on the lower right k × k submatrix, which contains these Ritz values. Secondly, we look in more detail at the behavior of the involved subspace iteration. It will be shown that the reduction method can be interpreted as a nested type of multi-shift iteration. Theoretical results will be presented, making it possible to predict the convergence behavior of these reduction algorithms. Also a theoretical bound on the convergence rate is presented. Finally, we illustrate by means of numerical examples how it is possible to tune the convergence behavior such that it can become a powerful tool for certain applications.

19.
We use methods of geometric computing combined with Hermitian matrix eigenvalue/eigenvector evaluations to find the numerical radius w(A) of a real or complex square matrix A simply, quickly, and accurately. The numerical radius w(A) is defined as the maximal distance from zero in ℂ of points in the field of values F(A) = { x*Ax : ||x||₂ = 1 }. Its value is an indicator of the transient behavior of the discrete dynamical system f_{k+1} = A f_k. We describe and test a MATLAB code for solving this optimization problem, which can have multiple and even infinitely many solutions with maximal distance.
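A brute-force sketch of the quantity the abstract computes, using the standard characterization w(A) = max over θ of the largest eigenvalue of the Hermitian part of e^{iθ}A. A dense angle sweep stands in for the paper's geometric optimization, so this is a reference implementation for small matrices only; the angle count is an illustrative choice.

```python
import numpy as np

def numerical_radius(A, n_angles=3600):
    """Estimate w(A) = max { |z| : z in F(A) } by sweeping theta and
    taking the largest eigenvalue of the Hermitian part of e^{i theta} A,
    which equals the support function of the field of values F(A)."""
    w = 0.0
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        R = np.exp(1j * theta) * A
        H = (R + R.conj().T) / 2          # Hermitian part
        w = max(w, np.linalg.eigvalsh(H)[-1])
    return w
```

For a normal matrix w(A) equals the spectral radius; for the 2×2 nilpotent Jordan block it equals 1/2, a standard check that the sweep captures non-normal behavior.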

20.
For each natural number n, poset T, and |T|-tuple of scalars Q, we introduce the ramified partition algebra P_n^(T)(Q), which is a physically motivated and natural generalization of the partition algebra [24, 25] (the partition algebra coincides with the case |T| = 1). For fixed n and T these algebras, like the partition algebra, have a basis independent of Q. We investigate their representation theory in the case T = 2 := ({1,2}, ≤). We show that P_n^(2)(Q) is quasi-hereditary over a field k when Q_1 Q_2 is invertible in k and k is such that certain finite group algebras over k are semisimple (e.g. when k is algebraically closed of characteristic zero). Under these conditions we determine an index set for the simple modules of P_n^(2)(Q), and construct standard modules with this index set. We show that there are unboundedly many choices of Q such that P_n^(2)(Q) is not semisimple for sufficiently large n, but that it is generically semisimple for all n. We construct tensor space representations of certain non-semisimple specializations of P_n^(2)(Q), and show how to use these to build clock model transfer matrices [24] in arbitrary physical dimensions. Sadly Ahmed died before this work was completed. His memory lives on.

