Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper discusses accelerating the iterative solution of nonlinear parabolic equations. Two new nonlinear iterative methods for solving the implicit scheme, named the implicit-explicit quasi-Newton (IEQN) method and the derivative-free implicit-explicit quasi-Newton (DFIEQN) method, are introduced, in which the linear equations resulting from the linearization preserve the parabolic characteristics of the original partial differential equations. It is proved that the iterative sequence of the iteration method converges quadratically to the solution of the implicit scheme. Moreover, compared with the Jacobian-free Newton-Krylov (JFNK) method, the DFIEQN method has some advantages: its implementation is easy, and it gives a linear algebraic system with an explicit coefficient matrix, so that the linear (inner) iteration is not restricted to Krylov methods. Computational results obtained with the IEQN, DFIEQN, JFNK and Picard iteration methods are presented to confirm the theory and to compare the performance of these methods.
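As a minimal illustration of the setting, the following sketch solves one implicit (backward Euler) time step of a nonlinear parabolic equation by an exact Newton iteration. The Fisher-type reaction term and all parameter choices are stand-in assumptions for illustration; the IEQN/DFIEQN schemes themselves are not reproduced here.

```python
import numpy as np

def backward_euler_newton(u0, dt, dx, steps, tol=1e-10, maxit=50):
    """Solve u_t = u_xx + u(1-u) (a Fisher-type equation, chosen only as an
    illustration) with backward Euler; each implicit step is solved by a
    full Newton iteration of the kind the abstract's quasi-Newton methods
    approximate more cheaply."""
    n = u0.size
    # 1-D Laplacian with homogeneous Dirichlet boundary conditions
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    I = np.eye(n)
    u = u0.copy()
    for _ in range(steps):
        v = u.copy()                                  # Newton iterate for the new time level
        for _ in range(maxit):
            F = v - u - dt * (L @ v + v * (1 - v))    # residual of the implicit step
            J = I - dt * (L + np.diag(1 - 2 * v))     # exact Jacobian
            dv = np.linalg.solve(J, -F)
            v += dv
            if np.linalg.norm(dv) < tol:
                break
        u = v
    return u
```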

2.
An improved preconditioned conjugate gradient method and its application in engineering finite element analysis
This paper presents two theorems of theoretical and practical significance for the preconditioned conjugate gradient (PCCG) method, concerning the qualitative properties of the iterative solution and the construction principle of the iteration matrix, respectively. The authors propose a new incomplete LU factorization technique for non-M-matrices and a new way to construct the iteration matrix. With this improved PCCG method, ill-conditioned problems and large three-dimensional finite element problems are computed and compared against other methods, and the anomalous behavior of the PCCG method when solving ill-conditioned systems is analyzed. The numerical results show that the proposed method is a highly effective approach for solving large finite element systems and ill-conditioned systems of equations.
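For reference, a sketch of the basic preconditioned conjugate gradient loop that such improvements build on. A simple Jacobi (diagonal) preconditioner stands in here for the paper's incomplete LU factorization.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient for SPD A; M_inv applies the
    preconditioner (here any callable r -> M^{-1} r)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```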

3.
潘春平 《计算数学》2022,44(4):481-495
For the HSS iteration method for solving large sparse non-Hermitian positive definite linear systems, this paper applies relaxation techniques to accelerate the iteration and proposes an over-relaxed HSS method with three parameters (SAHSS), together with an inexact variant (ISAHSS) that employs CG and some Krylov subspace methods as its inner processes. The convergence of the SAHSS and ISAHSS methods is studied. Numerical examples verify the effectiveness of the new methods.
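The underlying (unrelaxed) HSS iteration can be sketched as below; the three-parameter SAHSS acceleration itself is not reproduced here.

```python
import numpy as np

def hss(A, b, alpha, tol=1e-10, maxit=1000):
    """Basic HSS iteration for A x = b with A = H + S, where
    H = (A + A*)/2 is Hermitian and S = (A - A*)/2 is skew-Hermitian;
    each sweep alternates solves with alpha*I + H and alpha*I + S."""
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    n = A.shape[0]
    I = np.eye(n)
    x = np.zeros(n, dtype=complex)
    for _ in range(maxit):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x
```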

4.
Inspired by some implicit-explicit linear multistep schemes and additive Runge-Kutta methods, we develop a novel split Newton iterative algorithm for the numerical solution of nonlinear equations. The proposed method improves computational efficiency by reducing the computational cost of the Jacobian matrix. Consistency and global convergence of the new method are also maintained. To test its effectiveness, we apply the method to nonlinear reaction-diffusion equations, such as the Burgers-Huxley equation and Fisher's equation. Numerical examples suggest that the proposed iterative method is much faster than the classical Newton's method on a given time interval.
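One plausible reading of such a splitting, sketched here only as an illustration and not as the authors' exact algorithm: factorize the stiff linear part of the Jacobian once and lag the cheap nonlinear part, which turns Newton's method into a frozen-Jacobian (chord) iteration.

```python
import numpy as np

def split_newton(A, g, dg, b, u0, tol=1e-10, maxit=200):
    """Sketch of a 'split' Newton iteration for F(u) = A u + g(u) - b = 0:
    the Jacobian A + diag(g'(u0)) is assembled and factorized only once,
    reducing per-iteration Jacobian cost at the price of linear (rather
    than quadratic) local convergence."""
    J0 = A + np.diag(dg(u0))      # Jacobian frozen at the initial guess
    u = u0.copy()
    for _ in range(maxit):
        F = A @ u + g(u) - b      # nonlinear residual
        if np.linalg.norm(F) < tol:
            break
        u = u - np.linalg.solve(J0, F)
    return u
```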

5.
Combining a suitable two-point iterative method for solving nonlinear equations with Weierstrass' correction, a new iterative method for simultaneously finding all zeros of a polynomial is derived. It is proved that the proposed method possesses local cubic convergence. Numerical examples demonstrate the good global convergence behavior of this method. It is shown that its computational efficiency is higher than that of existing derivative-free methods.
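On its own, the Weierstrass correction gives the classical Durand-Kerner simultaneous-zeros iteration, sketched below; the abstract's combination with a two-point method is not reproduced.

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, maxit=200):
    """Weierstrass (Durand-Kerner) iteration: all zeros of a monic
    polynomial are refined simultaneously by the correction
    W_i = p(z_i) / prod_{j != i} (z_i - z_j)."""
    p = np.poly1d(coeffs)
    n = len(coeffs) - 1
    # standard distinct complex starting points
    z = np.array([(0.4 + 0.9j) ** k for k in range(n)])
    for _ in range(maxit):
        w = np.array([p(z[i]) / np.prod([z[i] - z[j] for j in range(n) if j != i])
                      for i in range(n)])
        z = z - w
        if np.max(np.abs(w)) < tol:
            break
    return z
```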

6.
The discretizations of many differential equations by the finite difference or the finite element methods often result in a class of systems of weakly nonlinear equations. In this paper, by applying the two-stage iteration technique and in accordance with the special properties of this weakly nonlinear system, we first propose a general two-stage iterative method through the two-stage splitting of the system matrix. Then, by applying the accelerated overrelaxation (AOR) technique of linear iterative methods, we present a two-stage AOR method, which particularly uses the AOR iteration as the inner iteration and is substantially a relaxed variant of the afore-presented method. For these two classes of methods, we establish their local convergence theories, and precisely estimate their asymptotic convergence factors under some suitable assumptions when the involved nonlinear mapping is only B-differentiable. When the system matrix is either a monotone matrix or an H-matrix, and the nonlinear mapping is a P-bounded mapping, we thoroughly set up the global convergence theories of these new methods. Moreover, under the assumptions that the system matrix is monotone and the nonlinear mapping is isotone, we discuss the monotone convergence properties of the new two-stage iteration methods, and investigate the influence of the matrix splittings as well as the relaxation parameters on the convergence behaviours of these methods. Numerical computations show that our new methods are feasible and efficient for solving the system of weakly nonlinear equations. This revised version was published online in June 2006 with corrections to the Cover Date.

7.
It is well known that nonconvex variational inequalities are equivalent to fixed-point problems. We use this equivalent alternative formulation to suggest and analyze a new class of two-step iterative methods for solving the nonconvex variational inequalities. We discuss the convergence of the iterative method under suitable conditions. We also introduce a new class of Wiener-Hopf equations and establish the equivalence between the nonconvex variational inequalities and the Wiener-Hopf equations. This alternative equivalent formulation is used to suggest further iterative methods, whose convergence analysis we also consider. Our method of proof is very simple compared with other techniques.
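The fixed-point equivalence can be illustrated by the classical projection iteration for a convex constraint set (the abstract's nonconvex setting replaces this projection with one onto a prox-regular set, which is not reproduced here).

```python
import numpy as np

def projection_iteration(T, project, x0, rho=0.1, tol=1e-10, maxit=5000):
    """Fixed-point iteration x <- P_K(x - rho * T(x)): x solves the
    variational inequality <T(x), y - x> >= 0 for all y in K exactly
    when it is a fixed point of this map."""
    x = x0.copy()
    for _ in range(maxit):
        x_new = project(x - rho * T(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For example, with T(x) = 2x + q and K the unit box, `project` is just `np.clip(., 0, 1)`.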

8.
Recently, Bai et al. (2013) proposed an effective and efficient matrix splitting iterative method, called the preconditioned modified Hermitian/skew-Hermitian splitting (PMHSS) iteration method, for two-by-two block linear systems of equations. The eigenvalue distribution of the iteration matrix suggests that the splitting matrix could be advantageously used as a preconditioner. In this study, the CGNR method is utilized for solving the PMHSS preconditioned linear systems, and the performance of the method is considered by estimating the condition number of the normal equations. Furthermore, the proposed method is compared with other PMHSS preconditioned Krylov subspace methods by solving linear systems arising in complex partial differential equations and a distributed control problem. The numerical results demonstrate the difference in the performance of the methods under consideration.

9.
The conjugate gradient method is one of the most popular iterative methods for computing approximate solutions of linear systems of equations with a symmetric positive definite matrix A. It is generally desirable to terminate the iterations as soon as a sufficiently accurate approximate solution has been computed. This paper discusses known and new methods for computing bounds or estimates of the A-norm of the error in the approximate solutions generated by the conjugate gradient method.
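One of the known estimates in this area is the Hestenes-Stiefel identity ||x - x_k||_A^2 = sum_{j >= k} alpha_j ||r_j||^2, whose truncation after d terms gives a computable lower bound on the A-norm error; a sketch, with d a delay parameter chosen here for illustration:

```python
import numpy as np

def cg_with_error_estimate(A, b, d=4, tol=1e-12, maxit=200):
    """CG that also returns delayed lower bounds on the A-norm error:
    sum_{j=k}^{k+d-1} alpha_j ||r_j||^2 <= ||x - x_k||_A^2."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    terms = []                        # alpha_j * ||r_j||^2 at each step j
    while rr > tol**2 and len(terms) < maxit:
        Ap = A @ p
        alpha = rr / (p @ Ap)
        terms.append(alpha * rr)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    # delayed lower bound on the squared A-norm error at iteration k
    bounds = [sum(terms[k:k + d]) for k in range(len(terms) - d)]
    return x, bounds
```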

10.
A new Alternating-Direction Sinc-Galerkin (ADSG) method is developed and contrasted with classical Sinc-Galerkin methods. It is derived from an iterative scheme for solving the Lyapunov equation that arises when a symmetric Sinc-Galerkin method is used to approximate the solution of elliptic partial differential equations. We include parameter choices (derived from numerical experiments) that simplify existing alternating-direction algorithms. We compare the new scheme to a standard method employing Gaussian elimination on a system produced using the Kronecker product and Kronecker sum, as well as to a more efficient algorithm employing matrix diagonalization. We note that the ADSG method easily outperforms Gaussian elimination on the Kronecker sum and, while competitive with matrix diagonalization, does not require the computation of eigenvalues and eigenvectors.

11.
Inverse problems are a topical area of research in mathematical physics, and an essential difficulty in solving them is ill-posedness. The common approach to solving an ill-posed problem is to approximate its solution by the solutions of well-posed problems "adjacent" to it; this approach is known as regularization. Constructing effective regularization methods is an important subject in the study of ill-posed inverse problems. Currently, the most popular regularization methods are Tikhonov regularization, based on a variational principle, and its refinements; such methods are comparatively effective for ill-posed problems, are widely adopted in the study of various inverse problems, and have been investigated in depth.
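In its simplest form, Tikhonov regularization replaces the ill-posed least-squares problem min ||Ax - b||^2 by the well-posed penalized problem min ||Ax - b||^2 + lam ||x||^2; a minimal sketch via the regularized normal equations:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized least squares: the minimizer of
    ||Ax - b||^2 + lam * ||x||^2 solves (A^T A + lam I) x = A^T b,
    which is well-posed for any lam > 0 even when A is ill-conditioned."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```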

12.
1. Introduction. The new approach is based on the analysis of the motion of a damped harmonic oscillator in a gravitational field [1]. The associated equation of motion is mX_tt + σX_t + aX = b (1), where X = X(t) is the one-dimensional displacement of a mass m under a dissipation (σ > 0), a harmonic potential (a > 0) and a constant acceleration (b, gravitational field). The total energy variation is given by the equation … The solution of the equation of motion (1) is given by the sum of two contr…

13.
The use of block two-stage methods for the iterative solution of consistent singular linear systems is studied. In these methods, suitable for parallel computations, different blocks, i.e., smaller linear systems, can be solved concurrently by different processors. Each of these smaller systems is solved by an (inner) iterative method. Hypotheses are provided for the convergence of non-stationary methods, i.e., when the number of inner iterations may vary from block to block and from one outer iteration to another. It is shown that the iteration matrix corresponding to one step of the block method is convergent, i.e., that its powers converge to a limit matrix. A theorem on the convergence of the infinite product of matrices with the same eigenspace corresponding to the eigenvalue 1 is proved, and later used as a tool in the convergence analysis of the block method. The methods studied can be used to solve any consistent singular system, including discretizations of certain differential equations. They can also be used to find the stationary probability distribution of Markov chains. This last application is considered in detail.

14.
曾闽丽  张国凤 《计算数学》2016,38(4):354-371
Finite element discretization of a class of velocity tracking problems yields linear systems with saddle-point structure. For such saddle-point systems, this paper proposes a new splitting iteration technique, proves the unconditional convergence of the new splitting iteration method, and analyzes in detail the spectral properties of the preconditioned matrix corresponding to the new splitting preconditioner. Numerical results verify the feasibility and effectiveness of the new splitting preconditioner for solving the saddle-point systems obtained from finite element discretization of velocity tracking problems, over a wide range of mesh and regularization parameters.

15.
Block Krylov subspace methods (KSMs) comprise building blocks in many state-of-the-art solvers for large-scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well-explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.

16.
Projection methods have emerged as competitive techniques for solving large scale matrix Lyapunov equations. We explore the numerical solution of this class of linear matrix equations when a Minimal Residual (MR) condition is used during the projection step. We derive both a new direct method, and a preconditioned operator-oriented iterative solver based on CGLS, for solving the projected reduced least squares problem. Numerical experiments with benchmark problems show the effectiveness of an MR approach over a Galerkin procedure using the same approximation space.

17.
A class of parametric linear-interpolation iterative methods with quadratic convergence for solving nonlinear equations is proposed, containing Newton's method and Steffensen's method as special cases, and the optimal iteration parameter for this class of methods is given. Numerical experiments show that the new method with the optimal parameter, or an approximation of it, is more effective than Newton's and Steffensen's methods.
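One common parametric family with exactly these limiting cases uses a divided difference over a step t·f(x); this is a plausible form of the class in the abstract (whose exact parametrization is not shown): letting t → 0 recovers Newton's tangent slope, and t = 1 gives Steffensen's method.

```python
def param_iteration(f, x0, t, tol=1e-12, maxit=100):
    """Steffensen-type family: replace f'(x) by the divided difference
    (f(x + t*f(x)) - f(x)) / (t*f(x)).  Quadratically convergent for
    any fixed t near a simple root; derivative-free for t != 0."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        h = t * fx
        slope = (f(x + h) - fx) / h      # secant slope over the step h
        x = x - fx / slope
    return x
```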

18.
Bai  Zhong-Zhi 《Numerical Algorithms》1997,15(3-4):347-372
The finite difference or finite element discretizations of many differential or integral equations often result in a class of systems of weakly nonlinear equations. In this paper, by suitably applying both the multisplitting and the two-stage iteration techniques, and in accordance with the special properties of this system of weakly nonlinear equations, we first propose a general multisplitting two-stage iteration method through the two-stage multiple splittings of the system matrix. Then, by applying the accelerated overrelaxation (AOR) technique of linear iterative methods, we present a multisplitting two-stage AOR method, which particularly uses the AOR-like iteration as the inner iteration and is substantially a relaxed variant of the afore-presented method. These two methods are naturally parallel and well suited to high-speed multiprocessor systems. For these two classes of methods, we establish their local convergence theories, and precisely estimate their asymptotic convergence factors under some suitable assumptions when the involved nonlinear mapping is only directionally differentiable. When the system matrix is either an H-matrix or a monotone matrix, and the nonlinear mapping is a P-bounded mapping, we thoroughly set up the global convergence theories of these new methods. Moreover, under the assumptions that the system matrix is monotone and the nonlinear mapping is isotone, we discuss the monotone convergence properties of the new multisplitting two-stage iteration methods, and investigate the influence of the multiple splittings as well as the relaxation parameters upon the convergence behaviours of these methods. Numerical computations show that our new methods are feasible and efficient for parallel solution of systems of weakly nonlinear equations. This revised version was published online in August 2006 with corrections to the Cover Date.

19.
The aim of the present paper is to introduce and investigate new ninth and seventh order convergent Newton-type iterative methods for solving nonlinear equations. The ninth order convergent Newton-type iterative method is made derivative free to obtain the seventh-order convergent Newton-type iterative method. The new methods with and without derivatives have efficiency indices of 1.5518 and 1.6266, respectively. Error equations are used to establish the order of convergence of the proposed iterative methods. Finally, various numerical comparisons are implemented in MATLAB to demonstrate the performance of the developed methods.

20.
In the present paper, we propose a hierarchical identification method (SSHI) for solving Lyapunov matrix equations, which is based on the symmetry and skew-symmetry splitting of the coefficient matrix. We prove that the iterative algorithm converges to the true solution from any initial value under certain conditions, and illustrate that the rate of convergence of the iterative solution can be enhanced by choosing the convergence factors appropriately. Furthermore, we show that the method adopted can be easily extended to study iterative solutions of other matrix equations, such as Sylvester matrix equations. Finally, we test the algorithms and show their effectiveness using numerical examples.
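As a simple stand-in for such iterative Lyapunov solvers (not the SSHI algorithm itself, whose splitting-based update is not reproduced here), a gradient-descent iteration on the residual of A X + X A^T = C can be sketched as:

```python
import numpy as np

def lyapunov_gradient(A, C, mu, tol=1e-10, maxit=20000):
    """Gradient iteration for the Lyapunov equation A X + X A^T = C:
    step against the gradient A^T R + R A of ||A X + X A^T - C||_F^2.
    Converges for sufficiently small step size mu when A is stable."""
    X = np.zeros_like(C)
    for _ in range(maxit):
        R = A @ X + X @ A.T - C          # current residual
        if np.linalg.norm(R) < tol:
            break
        X = X - mu * (A.T @ R + R @ A)   # negative gradient step
    return X
```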

