Similar Documents
20 similar documents found.
1.
A family of eighth-order iterative methods with four evaluations for the solution of nonlinear equations is presented. Kung and Traub conjectured that an iteration method without memory based on n evaluations could achieve optimal convergence order 2^(n-1). The new family of eighth-order methods agrees with the conjecture of Kung-Traub for the case n = 4. Therefore this family of methods has efficiency index equal to 1.682. Numerical comparisons are made with several other existing methods to show the performance of the presented methods.
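For orientation (our own arithmetic, not taken from the abstract), the efficiency index is the standard quantity p^(1/n), where p is the convergence order and n the number of function evaluations per iteration:

    p = 2^(n-1) = 2^3 = 8  for n = 4,        E = p^(1/n) = 8^(1/4) ≈ 1.682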

2.
A modification of Newton’s method with higher-order convergence is presented. The modification is based on King’s fourth-order method. The new method requires three steps per iteration. Analysis of convergence demonstrates that the order of convergence is 16. Some numerical examples illustrate that the algorithm is more efficient and performs better than classical Newton’s method and other methods.
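As background, a minimal sketch (ours) of the classical King fourth-order step that the abstract cites as the starting point; the three-step sixteenth-order extension is not given in the abstract and is not reproduced here. Python is used purely for illustration, and beta denotes King's free parameter.

    def king_step(f, df, x, beta=0.0):
        # One step of King's classical fourth-order family; beta is the free
        # parameter (beta = 0 recovers Ostrowski's method).
        fx = f(x)
        y = x - fx / df(x)                      # Newton predictor
        fy = f(y)
        return y - (fy / df(x)) * (fx + beta * fy) / (fx + (beta - 2.0) * fy)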

3.
In this paper we present two new schemes, one of third order and the other of fourth order. These are improvements of second-order methods for solving nonlinear equations and are based on the method of undetermined coefficients. We show that the fourth-order method is more efficient than the fifth-order method due to Kou et al. [J. Kou, Y. Li, X. Wang, Some modifications of Newton’s method with fifth-order convergence, J. Comput. Appl. Math., 209 (2007) 146–152]. Numerical examples are given to support that the methods thus obtained can compete with other iterative methods.

4.
In this paper, a variant of Steffensen’s method of fourth-order convergence for solving nonlinear equations is suggested. Its error equation and asymptotic convergence constant are proven theoretically and demonstrated numerically. The derivative-free method uses only three evaluations of the function per iteration to achieve fourth-order convergence. Its applications to systems of nonlinear equations and boundary-value problems of nonlinear ODEs are also shown in the numerical examples.
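For reference, a minimal sketch (ours, not the paper's variant) of the classical second-order Steffensen step, which is derivative-free and uses two function evaluations per iteration; the abstract's fourth-order variant with three evaluations is not reproduced here.

    def steffensen_step(f, x):
        # Classical Steffensen iteration: the forward difference
        # (f(x + f(x)) - f(x)) / f(x) replaces the derivative in Newton's step.
        fx = f(x)
        return x - fx * fx / (f(x + fx) - fx)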

5.
In this paper, we present a new third-order modification of Newton’s method for multiple roots, which is based on existing third-order multiple-root-finding methods. Numerical examples show that the new method is competitive with other methods for multiple roots.

6.
In this paper, we present a new fourth-order method for finding multiple roots of nonlinear equations. It requires one evaluation of the function and two evaluations of its first derivative per iteration. Finally, some numerical examples are given to show the performance of the presented method compared with some known third-order methods.

7.
In this paper we present a new efficient sixth-order scheme for nonlinear equations. The method is compared to several members of the family of methods developed by Neta (1979) [B. Neta, A sixth-order family of methods for nonlinear equations, Int. J. Comput. Math. 7 (1979) 157-161]. It is shown that the new method is an improvement over this well-known scheme.

8.
In this paper, based on Ostrowski’s method, a new family of eighth-order methods for solving nonlinear equations is derived. In terms of computational cost, each iteration of these methods requires three evaluations of the function and one evaluation of its first derivative, so that their efficiency indices are 1.682, which is optimal according to Kung and Traub’s conjecture. Numerical comparisons are made to show the performance of the new family.

9.
In this paper, we develop two new families of sixth-order methods for finding simple roots of nonlinear equations. Per iteration these methods require two evaluations of the function and two evaluations of the first-order derivative, which implies that the efficiency indices of our methods are 1.565. These methods offer advantages over Newton’s method and other methods of the same convergence order, as shown in the illustrative examples. Finally, using the methodology developed in this paper, two new families of improvements of the Jarratt method with sixth-order convergence are derived in a straightforward manner. Note that Kou’s method in [Jisheng Kou, Yitian Li, An improvement of the Jarratt method, Appl. Math. Comput. 189 (2007) 1816-1821] and Wang’s method in [Xiuhua Wang, Jisheng Kou, Yitian Li, A variant of Jarratt method with sixth-order convergence, Appl. Math. Comput. 204 (2008) 14-19] are special cases of the new improvements.

10.
Picard’s iterative method for the solution of nonlinear advection-reaction-diffusion equations is formulated and its convergence proved. The method is based on the introduction of a complete metric space and makes use of a contractive mapping and Banach’s fixed-point theory. From Picard’s iterative method, the variational iteration method is derived without any use of Lagrange multipliers and constrained variations. Some examples that illustrate the advantages and shortcomings of the iterative procedure presented here are shown.
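A minimal sketch (ours) of plain Picard fixed-point iteration for a contraction g, the scalar analogue of the construction the abstract describes; the PDE setting, the metric and the choice of g below are illustrative assumptions only.

    import math

    def picard(g, x0, tol=1e-12, max_iter=1000):
        # Iterate x_{k+1} = g(x_k); for a contraction, Banach's fixed-point
        # theorem guarantees convergence to the unique fixed point.
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # Example: g(x) = cos(x) is a contraction on [0, 1]; the limit solves x = cos(x).
    print(picard(math.cos, 0.5))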

11.
Four generalized algorithms built from Ostrowski’s method for solving systems of nonlinear equations are presented and analyzed. A development of an inverse first-order divided-difference operator for functions of several variables is presented, as well as a direct computation of the local order of convergence for these variants of Ostrowski’s method. Furthermore, a sequence that approximates the order of convergence is generated for the examples, and it confirms numerically that the order of the methods is correctly deduced.

12.
There is real competition among authors to construct improved iterative methods for solving nonlinear equations. In this paper, by means of computer experiments, we study the basins of attraction of some of the iterative methods for solving the equation P(z) = 0, where P: C → C is a polynomial with complex coefficients; this allows us to compare their performance (the area of convergence and their speed). The beautiful fractal pictures generated by these methods are also presented.
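A minimal sketch (ours) of the kind of computer experiment described: label each point of a complex grid by the root of an illustrative polynomial, here p(z) = z^3 - 1, that Newton's method drives it to. The polynomial, grid and iteration count are our own choices, not the paper's.

    import numpy as np

    def newton_basins(nx=400, ny=400, max_iter=40):
        # Grid of complex starting points in [-2, 2] x [-2, 2].
        x = np.linspace(-2.0, 2.0, nx)
        y = np.linspace(-2.0, 2.0, ny)
        z = x[None, :] + 1j * y[:, None]
        with np.errstate(all="ignore"):
            for _ in range(max_iter):
                z = z - (z**3 - 1) / (3 * z**2)        # Newton step for z^3 - 1
        roots = np.exp(2j * np.pi * np.arange(3) / 3)  # cube roots of unity
        # Index of the nearest root for each point: these labels are the basins.
        return np.argmin(np.abs(z[..., None] - roots), axis=-1)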

13.
It is well known that Newton’s method for a nonlinear system has quadratic convergence when the Jacobian is a nonsingular matrix in a neighborhood of the solution. Here we present a modification of this method for nonlinear systems whose Jacobian matrix is singular. We prove, under certain conditions, that this modified Newton’s method has quadratic convergence. Moreover, different numerical tests confirm the theoretical results and allow us to compare this variant with the classical Newton’s method.
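For context, a minimal sketch (ours) of the classical Newton iteration for a system F(x) = 0 against which the modified method is compared; the modification for a singular Jacobian is not given in the abstract and is not reproduced here. The example system is an illustrative assumption.

    import numpy as np

    def newton_system(F, J, x0, tol=1e-10, max_iter=50):
        # Classical Newton for systems: solve J(x) dx = -F(x) and update.
        # Quadratic convergence requires a nonsingular Jacobian near the root.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            dx = np.linalg.solve(J(x), -F(x))
            x = x + dx
            if np.linalg.norm(dx) < tol:
                break
        return x

    # Example: intersection of the unit circle with the line x1 = x2.
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
    J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
    print(newton_system(F, J, [1.0, 0.5]))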

14.
In this paper we study inexact inverse iteration for solving the generalised eigenvalue problem Ax = λMx. We show that inexact inverse iteration is a modified Newton method and hence obtain convergence rates for various versions of inexact inverse iteration for the calculation of an algebraically simple eigenvalue. In particular, if the inexact solves are carried out with a tolerance chosen proportional to the eigenvalue residual then quadratic convergence is achieved. We also show how modifying the right-hand side in inverse iteration still provides a convergent method, but the rate of convergence will be quadratic only under certain conditions on the right-hand side. We discuss the implications of this for the preconditioned iterative solution of the linear systems. Finally we introduce a new ILU preconditioner which is a simple modification to the usual preconditioner, but which has advantages both for the standard form of inverse iteration and for the version with a modified right-hand side. Numerical examples are given to illustrate the theoretical results. AMS subject classification (2000): 65F15, 65F10
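A minimal sketch (ours) of basic shift-invert inverse iteration for Ax = λMx with exact linear solves; the paper's inexact solves, residual-based tolerances and modified right-hand side are not reproduced, and the shift sigma and the normalisation are illustrative assumptions.

    import numpy as np

    def inverse_iteration(A, M, sigma, x0, iters=25):
        # Repeatedly solve (A - sigma*M) y = M x and normalise; x converges to
        # the eigenvector whose eigenvalue is closest to the shift sigma.
        x = x0 / np.linalg.norm(x0)
        for _ in range(iters):
            y = np.linalg.solve(A - sigma * M, M @ x)
            x = y / np.linalg.norm(y)
        lam = (x @ (A @ x)) / (x @ (M @ x))   # generalised Rayleigh quotient
        return lam, x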

15.
Newton’s method is a basic tool in numerical analysis and numerous applications, including operations research and data mining. We survey the history of the method, its main ideas, convergence results, modifications, and its global behavior. We focus on applications of the method to various classes of optimization problems, such as unconstrained minimization, equality-constrained problems, convex programming and interior point methods. Some extensions (non-smooth problems, continuous analog, Smale’s results, etc.) are discussed briefly, while others (e.g., versions of the method that achieve global convergence) are addressed in more detail.

16.
17.
In this paper, we derive one-parameter families of Newton, Halley, Chebyshev, Chebyshev-Halley type, super-Halley, C-, osculating-circle and ellipse methods, respectively, for finding simple zeros of nonlinear equations, permitting f′(x) = 0 at some points in the vicinity of the required root. The Halley, Chebyshev and super-Halley methods and, as an exceptional case, Newton’s method are seen as special cases of the family. All the methods of the family and various others are cubically convergent to simple roots, except Newton’s method and the corresponding family of Newton-type methods.

18.
In this paper, we present a family of new variants of Chebyshev–Halley methods with sixth-order convergence. Compared with Chebyshev–Halley methods, the new methods require one additional evaluation of the function. The numerical results presented show that the new methods compete with Chebyshev–Halley methods.

19.
We study the relaxed Newton’s method applied to polynomials. In particular, we give a technique such that, for any n ≥ 2, we may construct a polynomial so that when the method is applied to it, the resulting rational function has an attracting cycle of period n. We show that when we use the method to extract radicals, the set consisting of the points at which the method fails to converge to the roots of the polynomial p(z) = z^m − c (this set includes the Julia set) has zero Lebesgue measure. Consequently, iterate sequences under the relaxed Newton’s method converge to the roots of the preceding polynomial with probability one.
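A minimal sketch (ours) of the relaxed Newton iteration z_{k+1} = z_k − h·p(z_k)/p′(z_k) applied to radical extraction with p(z) = z^m − c; the relaxation parameter h, the starting point and the iteration count below are illustrative assumptions, not choices from the paper.

    def relaxed_newton_radical(c, m, z0, h=1.0, iters=60):
        # Relaxed Newton step for p(z) = z**m - c; h = 1 recovers the
        # classical Newton iteration for an m-th root of c.
        z = complex(z0)
        for _ in range(iters):
            z = z - h * (z**m - c) / (m * z**(m - 1))
        return z

    # Example: an m-th root of 2 for m = 3.
    print(relaxed_newton_radical(2.0, 3, 1.0 + 0.5j, h=0.8))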

20.
In this paper two families of zero-finding iterative methods for solving nonlinear equations f(x) = 0 are presented. The key idea behind their derivation is to solve an initial value problem by applying Obreshkov-like techniques. More explicitly, Obreshkov’s methods have been used to numerically solve an initial value problem that involves the inverse of the function f that defines the equation. Carrying out this procedure, several methods with different orders of local convergence have been obtained. An analysis of the efficiency of these methods is given. Finally, we introduce the concept of extrapolated computational order of convergence with the aim of numerically testing the given methods. A procedure for the implementation of an iterative method with adaptive multi-precision arithmetic is also presented.

