Similar Literature
 20 similar documents found (search time: 15 ms)
1.
The critical delays of a delay-differential equation can be computed by solving a nonlinear two-parameter eigenvalue problem. The solution of this two-parameter problem can be translated to solving a quadratic eigenvalue problem of squared dimension. We present a structure-preserving QR-type method for solving such a quadratic eigenvalue problem that only computes real-valued critical delays; that is, complex critical delays, which have no physical meaning, are discarded. For large-scale problems, we propose new correction equations for a Newton-type or Jacobi–Davidson style method, which also force real-valued critical delays. We present three different equations: one real-valued equation using a direct linear system solver, one complex-valued equation using a direct linear system solver, and one Jacobi–Davidson style correction equation that is suitable for an iterative linear system solver. We show numerical examples for large-scale problems arising from PDEs. Copyright © 2012 John Wiley & Sons, Ltd.
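The reduction described in this abstract, from a quadratic eigenvalue problem to an ordinary one, can be illustrated with a plain companion linearization. The sketch below is not the paper's structure-preserving QR-type method; it is a minimal dense stand-in that shows the second step the abstract mentions, discarding complex eigenvalues:

```python
import numpy as np

def real_qep_eigs(M, C, K, tol=1e-10):
    """Solve the quadratic eigenvalue problem (lam^2 M + lam C + K) x = 0
    by first-companion linearization A z = lam B z with z = [x; lam x],
    keeping only the (numerically) real eigenvalues, as the paper does
    for physically meaningful critical delays."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    lam = np.linalg.eigvals(np.linalg.solve(B, A))
    return np.sort(lam[np.abs(lam.imag) < tol].real)

# Example: M = I, C = 0, K = -diag(1, 4) gives lam^2 = 1 and 4
```

With `M = I`, `C = 0`, `K = -diag(1, 4)` the real eigenvalues are ±1 and ±2; a structure-preserving solver would obtain the same values without forming the doubled-size dense pencil.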

2.
The correction equation in the Jacobi-Davidson method is effective in a subspace orthogonal to the current eigenvector approximation, whereas for the continuation of the process only vectors orthogonal to the search subspace are of importance. Such a vector is obtained by orthogonalizing the (approximate) solution of the correction equation against the search subspace. As an alternative, a variant of the correction equation can be formulated that is restricted to the subspace orthogonal to the current search subspace. In this paper, we discuss the effectiveness of this variant. Our investigation is also motivated by the fact that the restricted correction equation can be used for avoiding stagnation in the case of defective eigenvalues. Moreover, this equation plays a key role in the inexact TRQ method [18]. Copyright © 1999 John Wiley & Sons, Ltd.
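The basic (single-vector) form of the projected correction equation discussed here can be sketched in a few lines. This is a toy dense version, assuming a symmetric matrix and using a least-squares solve where a large-scale code would use a preconditioned iterative solver:

```python
import numpy as np

def jd_correction(A, u, theta):
    """Solve the Jacobi-Davidson correction equation
        (I - u u^T)(A - theta I)(I - u u^T) t = -r,   t ⟂ u,
    with r = A u - theta u, via a dense least-squares solve.
    The projected operator is singular (u is in its null space),
    but r ⟂ u, so the minimum-norm solution is the one wanted."""
    n = A.shape[0]
    P = np.eye(n) - np.outer(u, u)          # projector onto u-orthogonal space
    r = A @ u - theta * u
    t, *_ = np.linalg.lstsq(P @ (A - theta * np.eye(n)) @ P, -r, rcond=None)
    return P @ t                             # enforce t ⟂ u exactly
```

Expanding the search subspace with `t` and extracting Ritz values from `span{u, t}` then yields the improved eigenvalue approximation; the variant studied in the paper projects against the whole search subspace rather than only the current eigenvector approximation.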

3.
Rayleigh quotient iteration is an iterative method with some attractive convergence properties for finding (interior) eigenvalues of large sparse Hermitian matrices. However, the method requires the accurate (and, hence, often expensive) solution of a linear system in every iteration step. Unfortunately, replacing the exact solution with a cheaper approximation may destroy the convergence. The (Jacobi-)Davidson correction equation can be seen as a solution to this problem. In this paper we deduce quantitative results to support this viewpoint and relate the method to others, which should make some of the experimental observations from practice more quantitative in the Hermitian case. Asymptotic convergence bounds are given for fixed preconditioners and for the special case in which the correction equation is solved to some fixed relative residual precision. A dynamic tolerance is proposed and some numerical illustration is presented. Copyright © 2002 John Wiley & Sons, Ltd.
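The exact method whose inner solves this abstract proposes to relax can be stated compactly. A minimal dense sketch of Rayleigh quotient iteration for a symmetric matrix (the "accurate solution of a linear system in every iteration step"):

```python
import numpy as np

def rqi(A, x, iters=5):
    """Rayleigh quotient iteration for a symmetric/Hermitian A.
    Each step solves (A - rho I) y = x exactly; replacing this exact
    solve with a cheap approximation is what can destroy convergence,
    and what the Jacobi-Davidson correction equation addresses."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        rho = x @ A @ x                      # Rayleigh quotient
        try:
            y = np.linalg.solve(A - rho * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            break                            # rho is (numerically) an eigenvalue
        x = y / np.linalg.norm(y)
    return x @ A @ x, x

A = np.diag([1.0, 2.0, 10.0])
lam, v = rqi(A, np.array([0.9, 0.4, 0.2]))
```

For a Hermitian matrix the iteration converges cubically, so a handful of steps already reaches machine-precision eigenpairs; the cost per step is the point of the paper's analysis.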

4.
In this paper, a new variant of the Jacobi–Davidson (JD) method is presented that is specifically designed for real unsymmetric matrix pencils. Whenever a pencil has a complex conjugate pair of eigenvalues, the method computes the two-dimensional real invariant subspace spanned by the two corresponding complex conjugated eigenvectors. This is beneficial for memory costs and in many cases it also accelerates the convergence of the JD method. Both real and complex formulations of the correction equation are considered. In numerical experiments, the RJDQZ variant is compared with the original JDQZ method. Copyright © 2007 John Wiley & Sons, Ltd.

5.
Several Jacobi–Davidson type methods are proposed for computing interior eigenpairs of large-scale cubic eigenvalue problems. To successively compute the eigenpairs, a novel explicit non-equivalence deflation method with low-rank updates is developed and analysed. Various techniques such as locking, search direction transformation, restarting, and preconditioning are incorporated into the methods to improve stability and efficiency. A semiconductor quantum dot model is given as an example to illustrate the cubic nature of the eigenvalue system resulting from the finite difference approximation. Numerical results of this model are given to demonstrate the convergence and effectiveness of the methods. Comparison results are also provided to indicate advantages and disadvantages among the various methods. Copyright © 2004 John Wiley & Sons, Ltd.

6.
We propose subspace methods for three-parameter eigenvalue problems. Such problems arise when separation of variables is applied to separable boundary value problems; a particular example is the Helmholtz equation in ellipsoidal and paraboloidal coordinates. While several subspace methods for two-parameter eigenvalue problems exist, their extensions to a three-parameter setting seem challenging. An inherent difficulty is that, while for two-parameter eigenvalue problems we can exploit a relation to Sylvester equations to obtain a fast Arnoldi-type method, such a relation does not seem to exist when there are three or more parameters. Instead, we introduce a subspace iteration method with projections onto generalized Krylov subspaces that are constructed from scratch at every iteration using certain Ritz vectors as the initial vectors. Another possibility is a Jacobi–Davidson-type method for three or more parameters, which we generalize from its two-parameter counterpart. For both approaches, we introduce a selection criterion for deflation that is based on the angles between left and right eigenvectors. The Jacobi–Davidson approach is devised to locate eigenvalues close to a prescribed target; yet, it often also performs well when eigenvalues are sought based on the proximity of one of the components to a prescribed target. The subspace iteration method is devised specifically for the latter task. The proposed approaches are suitable especially for problems where the computation of several eigenvalues is required with high accuracy. MATLAB implementations of both methods have been made available in the package MultiParEig (see http://www.mathworks.com/matlabcentral/fileexchange/47844-multipareig).

7.
This paper presents numerical solutions for the space- and time-fractional Korteweg–de Vries (KdV) equation using the variational iteration method. The space- and time-fractional derivatives are described in the Caputo sense. In this method, general Lagrange multipliers are introduced to construct correction functionals for the problems, and the multipliers in the functionals can be identified optimally via variational theory. The iteration method produces the solutions in terms of convergent series with easily computable components, requiring no linearization or small perturbation. The numerical results show that the approach is easy to implement and accurate when applied to space- and time-fractional KdV equations. The method is a promising tool for solving many space–time fractional partial differential equations. © 2007 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 2007
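The mechanics of the correction functional are easiest to see on a toy problem. The sketch below applies the variational iteration to the first-order model equation u' + u = 0, u(0) = 1, for which the optimally identified Lagrange multiplier is λ = -1; the paper applies the same construction to the far harder fractional KdV equation:

```python
import sympy as sp

t, s = sp.symbols('t s')

def vim(u0, rhs, iters=3):
    """Variational iteration for u'(t) = rhs(u, t), u(0) = u0, with
    Lagrange multiplier lambda = -1:
        u_{n+1}(t) = u_n(t) - int_0^t [u_n'(s) - rhs(u_n(s), s)] ds.
    Each iterate is a partial sum of the convergent series solution."""
    u = sp.Integer(u0)
    for _ in range(iters):
        integrand = (sp.diff(u, t) - rhs(u, t)).subs(t, s)
        u = sp.expand(u - sp.integrate(integrand, (s, 0, t)))
    return u

u3 = vim(1, lambda u, t: -u)   # three iterates toward exp(-t)
```

Three iterations reproduce the Taylor polynomial 1 - t + t²/2 - t³/6 of the exact solution e^{-t}, illustrating the "convergent series with easily computable components" the abstract refers to.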

8.
Sadkane, Miloud; Sidje, Roger B. Numerical Algorithms (1999) 20(2–3): 217–240
The Davidson method is a preconditioned eigenvalue technique aimed at computing a few of the extreme (i.e., leftmost or rightmost) eigenpairs of large sparse symmetric matrices. This paper describes a software package which implements a deflated and variable-block version of the Davidson method. Information on how to use the software is provided. Guidelines for its upgrading or for its incorporation into existing packages are also included. Various experiments are performed on an SGI Power Challenge and comparisons with ARPACK are reported. This revised version was published online in June 2006 with corrections to the Cover Date.

9.
Solutions of large sparse linear systems of equations are usually obtained iteratively by constructing a smaller dimensional subspace such as a Krylov subspace. The convergence of these methods is sometimes hampered by the presence of small eigenvalues, in which case some form of deflation can help improve convergence. The method presented in this paper enables the solution to be approximated by focusing the attention directly on the 'small' eigenspace ('singular vector' space). It is based on embedding the solution of the linear system within the eigenvalue problem (singular value problem) in order to facilitate the direct use of methods such as implicitly restarted Arnoldi or Jacobi–Davidson for the linear system solution. The proposed method, called 'solution by null-space approximation and projection' (SNAP), differs from other similar approaches in that it converts the non-homogeneous system into a homogeneous one by constructing an annihilator of the right-hand side. The solution then lies in the null space of the resulting matrix. We examine the construction of a sequence of approximate null spaces using a Jacobi–Davidson style singular value decomposition method, called restarted SNAP-JD, from which an approximate solution can be obtained. Relevant theory is discussed and the method is illustrated by numerical examples where SNAP is compared with both GMRES and GMRES-IR. Copyright © 2006 John Wiley & Sons, Ltd.

10.
Convergence results are provided for inexact two-sided inverse and Rayleigh quotient iteration, which extend the previously established results to the generalized non-Hermitian eigenproblem and inexact solves with a decreasing solve tolerance. Moreover, the simultaneous solution of the forward and adjoint problem arising in two-sided methods is considered, and the successful tuning strategy for preconditioners is extended to two-sided methods, creating a novel way of preconditioning two-sided algorithms. Furthermore, it is shown that inexact two-sided Rayleigh quotient iteration and the inexact two-sided Jacobi-Davidson method (without subspace expansion) applied to the generalized preconditioned eigenvalue problem are equivalent when a certain number of steps of a Petrov–Galerkin–Krylov method is used and when this specific tuning strategy is applied. Copyright © 2014 John Wiley & Sons, Ltd.

11.
To compute the smallest eigenvalues and associated eigenvectors of a real symmetric matrix, we consider the Jacobi–Davidson method with inner preconditioned conjugate gradient iterations for the arising linear systems. We show that the coefficient matrix of these systems is indeed positive definite with the smallest eigenvalue bounded away from zero. We also establish a relation between the residual norm reduction in these inner linear systems and the convergence of the outer process towards the desired eigenpair. From a theoretical point of view, this allows us to prove the optimality of the method, in the sense that solving the eigenproblem implies only a moderate overhead compared with solving a linear system. From a practical point of view, this allows us to set up a stopping strategy for the inner iterations that minimizes this overhead by exiting precisely at the moment when further progress would be useless with respect to the convergence of the outer process. These results are numerically illustrated on a model example. Direct comparison with some other eigensolvers is also provided. Copyright © 2001 John Wiley & Sons, Ltd.

12.
Partial eigenvalue decomposition (PEVD) and partial singular value decomposition (PSVD) of large sparse matrices are of fundamental importance in a wide range of applications, including latent semantic indexing, spectral clustering, and kernel methods for machine learning. The more challenging problems are when a large number of eigenpairs or singular triplets need to be computed. We develop practical and efficient algorithms for these challenging problems. Our algorithms are based on a filter-accelerated block Davidson method. Two types of filters are utilized: one is Chebyshev polynomial filtering, the other is rational-function filtering by solving linear equations. The former exploits the fastest growth of the Chebyshev polynomial among polynomials of the same degree; the latter employs the traditional idea of shift-invert, for which we address the important issue of automatic choice of shifts and propose a practical method for solving the shifted linear equations inside the block Davidson method. Our two filters can efficiently generate high-quality basis vectors to augment the projection subspace at each Davidson iteration step, which allows a restart scheme using an active projection subspace of small dimension. This makes our algorithms memory-economical and thus practical for large PEVD/PSVD calculations. We compare our algorithms with representative methods, including ARPACK, PROPACK, the randomized SVD method, and the limited memory SVD method. Extensive numerical tests on representative datasets demonstrate that, in general, our methods have similar or faster convergence speed in terms of CPU time, while requiring much lower memory compared with other methods, which makes them more practical for large-scale PEVD/PSVD computations.
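The first of the two filters can be sketched directly from the three-term Chebyshev recurrence. This is a minimal single-vector version (the paper uses a block variant inside Davidson): the spectrum interval to be damped, [a, b], is mapped into [-1, 1], where Chebyshev polynomials stay bounded, while components outside it grow exponentially with the degree:

```python
import numpy as np

def cheb_filter(A, x, deg, a, b):
    """Apply T_deg((A - c I)/e) to x, where [a, b] is the unwanted part
    of the spectrum, e = (b - a)/2 and c = (b + a)/2.  Eigencomponents
    inside [a, b] stay O(1); those outside are amplified by the fast
    growth of the Chebyshev polynomial off [-1, 1]."""
    e, c = (b - a) / 2.0, (b + a) / 2.0
    x = x / np.linalg.norm(x)
    y = (A @ x - c * x) / e                        # degree-1 term
    for _ in range(deg - 1):
        y, x = (2.0 / e) * (A @ y - c * y) - x, y  # three-term recurrence
    return y / np.linalg.norm(y)
```

For example, filtering a random vector against A = diag(0, 1, ..., 9) with unwanted interval [1, 9] leaves a vector almost entirely in the eigenspace of the wanted eigenvalue 0; such filtered vectors are what augment the projection subspace at each Davidson step.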

13.
In this article, the Sawada–Kotera–Ito seventh-order equation is studied. He's variational iteration method and Adomian's decomposition method (ADM) are applied to obtain solutions of this equation, and the two methods are compared. The study highlights the significant features of the employed methods and their capability of handling completely integrable equations. © 2010 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 27: 887–897, 2011

14.
In this paper, we use the variational iteration method to solve some nonlinear partial differential equations (PDEs), such as the combined KdV–MKdV equation and the Camassa–Holm equation. The variational iteration method has an advantage over nonlinear techniques such as perturbation methods in that it does not depend on small parameters, so it finds wide application in nonlinear problems without linearization or small perturbation. In this method, the problems are initially approximated with possible unknowns; then a correction functional is constructed by a general Lagrange multiplier, which can be identified optimally via variational theory.

15.
We consider the variational iteration method to investigate the generalized Burger–Fisher and Burger equations. In this method, general Lagrange multipliers are introduced to construct correction functionals for the problems, and the multipliers in the functionals can be identified optimally via variational theory. Comparison with the Adomian decomposition method reveals that the approximate solutions obtained by the proposed method converge to the exact solution faster than those of Adomian's method. Its remarkable accuracy is finally demonstrated for several values of the constants in the generalized Burger–Fisher and Burger equations.

16.
Perturbation methods depend on a small parameter which is difficult to find for real-life nonlinear problems. To overcome this shortcoming, two new but powerful analytical methods are introduced to solve nonlinear heat transfer problems in this article: one is He's variational iteration method (VIM) and the other is the homotopy-perturbation method (HPM). The VIM constructs correction functionals using general Lagrange multipliers identified optimally via variational theory, and the initial approximations can be freely chosen with unknown constants. The HPM deforms a difficult problem into a simple problem which can be easily solved. A nonlinear convective–radiative cooling equation, a nonlinear heat equation (porous media equation), and a nonlinear heat equation with cubic nonlinearity are used as examples to illustrate the simple solution procedures. Comparison of the applied methods with exact solutions reveals that both methods are tremendously effective.

17.
Heinrich Voss, PAMM (2007) 7(1): 1021001–1021002
The Jacobi–Davidson method is known to converge at least quadratically if the correction equation is solved exactly, and it is common experience that the fast convergence is maintained if the correction equation is solved only approximately. Here we derive the Jacobi–Davidson method in a way that explains this robust behavior. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

18.
This work presents an adaptation of an iteration method for solving a class of third-order nonlinear partial differential equations with mixed derivatives. The class of partial differential equations presented here is solvable neither with the method of Green functions, nor with the most usual iteration methods (for instance, the variational iteration method, the homotopy perturbation method, and the Adomian decomposition method), nor with integral transforms (for instance, the Laplace, Sumudu, Fourier, and Mellin transforms). We present the stability and convergence of the method for solving this class of nonlinear chaotic equations. Using the proposed method, we obtain exact solutions to this kind of equation.

19.
Kathrin Schreiber; Hubert Schwetlick, PAMM (2007) 7(1): 1020401–1020402
We present a Jacobi–Davidson-like correction formula for left and right eigenvector approximations for non-Hermitian nonlinear eigenvalue problems. It exploits techniques from singularity theory for characterizing singular points of nonlinear equations. Unlike standard nonlinear Jacobi–Davidson, the correction formula does not contain derivative information and works with orthogonal projectors only. Moreover, the basic method is modified in that the new eigenvalue approximation is taken as a nonlinear Rayleigh functional obtained as the root of a certain scalar nonlinear equation, whose existence, as well as a first-order perturbation expansion, is shown. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

20.
Instead of finding a small parameter for solving nonlinear problems through a perturbation method, a new analytical method, He's variational iteration method (VIM), is applied in this article to solve the nonlinear Jaulent–Miodek, coupled KdV, and coupled MKdV equations. In this method, general Lagrange multipliers are introduced to construct correction functionals for the problems; the multipliers can be identified optimally via variational theory. The results are compared with exact solutions.
