Subscription full text: 15 articles
Free: 0 articles
Subject: Mathematics (15 articles)
By year: 2020 (1), 2016 (1), 2014 (1), 2013 (1), 2012 (2), 2009 (1), 2008 (1), 2007 (3), 1983 (2), 1978 (1), 1977 (1)
15 results found
1.
We develop a simple yet effective and applicable scheme for constructing derivative-free optimal iterative methods, containing one free parameter, for solving nonlinear equations. According to the still-unproved Kung-Traub conjecture, an optimal iterative method based on $k+1$ evaluations can achieve a maximum convergence order of $2^{k}$. Through the scheme, we construct derivative-free optimal iterative methods of orders two, four and eight, which require evaluations of two, three and four functions, respectively. The scheme can be further applied to develop iterative methods of even higher orders. An optimal value of the free parameter is obtained through optimization, and this optimal value is applied adaptively to enhance the convergence order without increasing the number of function evaluations. Computational results demonstrate that the developed methods are efficient and robust compared with many well-known methods.
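The second-order member of such a one-parameter derivative-free family is essentially a parameterized Steffensen iteration; a minimal Python sketch (the function name and the default value of the free parameter beta are assumptions for illustration, not taken from the paper) might look like this:

```python
def steffensen_param(f, x0, beta=1.0, tol=1e-12, max_iter=50):
    """Derivative-free Steffensen-type iteration with free parameter beta.

    Uses two function evaluations per step (f(x) and f(x + beta*f(x)))
    and converges with order two for simple roots, matching the
    Kung-Traub optimal order 2^k for k + 1 = 2 evaluations.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        w = x + beta * fx                  # auxiliary point
        dd = (f(w) - fx) / (beta * fx)     # divided difference f[x, w]
        x = x - fx / dd                    # Steffensen-type step
    return x

# Example: a root of x^3 + 4x^2 - 10 near x = 1.365
print(steffensen_param(lambda x: x**3 + 4*x**2 - 10, x0=1.5, beta=0.1))
```

Choosing beta adaptively from iterate to iterate is what lets such schemes raise the effective convergence order without extra function evaluations.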
2.
To solve the linear algebraic equation $P(A)x = y$, where $P$ is a real polynomial of degree two, we use a stationary iterative method. It is shown that this method converges for all matrices with eigenvalues in a sector in the right complex half-plane, provided that the zeros of $P$ are not in the same sector.
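The abstract does not spell out the iteration itself; as a generic illustration of a stationary method for $P(A)x = y$ with $P(\lambda) = \lambda^2 + b\lambda + c$, here is a damped Richardson sketch (the parameters b, c and the damping factor alpha are assumptions, and alpha must be chosen small enough relative to the spectrum of $P(A)$ for convergence):

```python
import numpy as np

def richardson_poly(A, y, b, c, alpha, x0=None, tol=1e-10, max_iter=1000):
    """Stationary (Richardson) iteration for P(A)x = y with
    P(lambda) = lambda^2 + b*lambda + c.  P(A) is applied only through
    matrix-vector products; A^2 is never formed explicitly."""
    x = np.zeros(len(y)) if x0 is None else x0.copy()
    for _ in range(max_iter):
        Ax = A @ x
        r = y - (A @ Ax + b * Ax + c * x)   # residual y - P(A)x
        if np.linalg.norm(r) < tol:
            break
        x = x + alpha * r                   # stationary update
    return x
```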
3.
Newton's method is one of the most widely used methods for solving nonlinear systems of equations when the Jacobian matrix is nonsingular. The method converges to a solution with Q-order two for initial points sufficiently close to the solution. The methods of Halley and Chebyshev are among the methods with a local and cubic rate of convergence. Combined with a backtracking and curvilinear strategy for unconstrained optimization problems, these methods have been shown to be globally convergent. The backtracking forces a strict decrease of the objective function of the unconstrained optimization problem. It is shown that no damping of the step in the backtracking routine is needed close to a strict local minimizer, so the global method behaves as a local method. The local behavior for the unconstrained optimization problem is investigated on problems with two unknowns, and it is shown that there are no significant differences between second and third order methods in the region where the global method turns into a local method. Further, the final steps needed to reach a predefined tolerance are investigated: the region where the higher order methods terminate in one or two iterations is significantly larger than the corresponding region for Newton's method.
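A minimal sketch of the second-order (Newton) variant with Armijo backtracking follows; the Halley and Chebyshev variants differ only in how the step p is computed. The Rosenbrock test function is an assumed two-variable example, not necessarily the paper's test problem:

```python
import numpy as np

def newton_armijo(f, grad, hess, x0, tol=1e-10, max_iter=200, c1=1e-4, rho=0.5):
    """Newton's method globalized by Armijo backtracking: the step length
    t starts at 1 and is halved until a strict decrease of f is obtained.
    Near a strict local minimizer t = 1 is always accepted, so the global
    method reduces to the local, quadratically convergent Newton iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)             # Newton direction
        t = 1.0
        while t > 1e-12 and f(x + t * p) > f(x) + c1 * t * (g @ p):
            t *= rho                                 # damp the step
        x = x + t * p
    return x

# Rosenbrock's function in two unknowns (assumed illustrative problem)
f = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2
grad = lambda x: np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                           200.0 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                           [-400.0 * x[0], 200.0]])
print(newton_armijo(f, grad, hess, [-1.2, 1.0]))     # -> approx (1, 1)
```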
4.
Lennart Frimannslund  Trond Steihaug 《PAMM》2007,7(1):1062101-1062102
We present a theorem regarding the average curvature properties of partially separable functions that need not be differentiable or continuous. This has implications for derivative-free optimization methods that make use of average curvature information to select the set of search directions. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
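One simple way such methods obtain directional curvature information without derivatives is a second-order central difference; a minimal sketch (the function name and stencil width h are assumptions):

```python
import numpy as np

def average_curvature(f, x, d, h=1e-3):
    """Central-difference estimate of the curvature of f at x along the
    direction d.  Even for a merely continuous, or discontinuous, f this
    quantity exists as an *average* curvature over the stencil, which is
    the kind of information aggregated to choose search directions."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    return (f(x + h * d) - 2.0 * f(x) + f(x - h * d)) / h**2
```

Averaging such estimates over many sample points and directions yields a curvature matrix whose eigenvectors can serve as the search-direction set.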
5.
Trond Steihaug 《Numerical Algorithms》2020,83(4):1259-1275
This is an overview of examples and problems posed in the late 1600s up to the mid 1700s for the purpose of testing or explaining the two different implementations of the...
6.
In this paper, we present a new barrier function for primal–dual interior-point methods in linear optimization. The proposed kernel function has a trigonometric barrier term. It is shown that for large-update interior-point methods based on this function, the iteration bound is improved significantly. For small-update interior-point methods, the iteration bound matches the best currently known bound for primal–dual interior-point methods.
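To make the structure concrete, here is an assumed illustrative kernel with a trigonometric barrier term; it has the qualitative properties a kernel function needs, but it is not claimed to be the exact kernel of the paper:

```python
import math

def psi(t):
    """Illustrative kernel function (assumed form, not the paper's).
    The growth term (t^2 - 1)/2 dominates as t -> infinity, while
    tan(pi / (2(1 + t))) -> +inf as t -> 0+, giving the barrier.
    The factor 4/pi makes psi(1) = 0 and psi'(1) = 0, so psi attains
    its minimum at t = 1, as required of a kernel function."""
    assert t > 0.0
    return (t * t - 1.0) / 2.0 + (4.0 / math.pi) * (
        math.tan(math.pi / (2.0 * (1.0 + t))) - 1.0)
```

In the usual framework the proximity measure is then the coordinate-wise sum Psi(v) = sum_i psi(v_i) over the scaled iterate v, and the barrier term's growth near t = 0 is what drives the improved large-update iteration bound.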
7.
8.
Least change secant updates can be obtained as the limit of iterated projections based on other secant updates. We show that these iterated projections can be terminated or truncated after any positive number of iterations while local and superlinear convergence is still maintained. The truncated iterated projections method is used to find sparse and symmetric updates that are locally and superlinearly convergent. Part of this paper was presented at the Third International Workshop on Numerical Analysis, IIMAS, University of Mexico, January 1981, and at the ORSA-TIMS National Meeting, Houston, October 1981.
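A minimal sketch of the iterated-projection idea for a sparse symmetric update (an illustration under assumed projection choices, not the paper's exact algorithm):

```python
import numpy as np

def truncated_projection_update(B, s, y, mask, sweeps=3):
    """Iterated projections toward a sparse symmetric secant update,
    truncated after a fixed number of sweeps.  Each sweep projects onto
    the affine set {B : Bs = y} via a Broyden rank-one correction, then
    onto the symmetric matrices, then onto the sparsity pattern `mask`
    (a 0/1 matrix).  The limit satisfies all three constraints; the
    truncated iterate satisfies them only approximately, yet superlinear
    convergence of the overall quasi-Newton method is preserved."""
    B = B.copy()
    for _ in range(sweeps):
        r = y - B @ s
        B = B + np.outer(r, s) / (s @ s)   # enforce the secant equation
        B = (B + B.T) / 2.0                # project onto symmetry
        B = B * mask                       # project onto the sparsity pattern
    return B
```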
9.
We consider solving the unconstrained minimization problem using an iterative method derived from the third order super-Halley method. Each iteration of the super-Halley method requires the solution of two linear systems of equations. We show a practical implementation that uses an (inner) iterative method to solve these linear systems. This paper introduces an array-of-arrays (jagged) data structure for storing the second and third derivatives of a multivariate function, together with termination criteria for the inner iterative method that preserve the cubic rate of convergence. Using a jagged compressed diagonal storage scheme for the Hessian matrices and the tensor, numerical results show that storing the diagonals is more efficient than a row- or column-oriented approach when an iterative method is used to solve the linear systems of equations.
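A minimal sketch of the jagged (array-of-arrays) idea for the third derivative tensor, exploiting its super-symmetry (function names are assumptions for illustration):

```python
def alloc_symmetric_tensor(n):
    """Jagged storage for the super-symmetric third derivative tensor of
    an n-variable function: only the n(n+1)(n+2)/6 unique entries
    T[i][j][k] with k <= j <= i are stored, so row i holds a triangular
    jagged block of rows of lengths 1, 2, ..., i + 1."""
    return [[[0.0] * (j + 1) for j in range(i + 1)] for i in range(n)]

def tensor_get(T, i, j, k):
    """Fetch T[i][j][k] for an arbitrary index order via symmetry."""
    i, j, k = sorted((i, j, k), reverse=True)
    return T[i][j][k]
```

Each inner array can be sized to the actual sparsity of its slice, which is what makes the jagged layout attractive for the diagonal-oriented storage compared above.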
10.
Geir Gundersen  Trond Steihaug 《PAMM》2007,7(1):2060011-2060012
One of the central problems of scientific computation is the efficient numerical solution of the system of n equations in n unknowns F(x) = 0, where F: R^n → R^n is sufficiently smooth. While Newton's method is usually used for solving such systems, third order methods will in general use fewer iterations than a second order method to reach the same accuracy; however, the number of arithmetic operations per iteration is higher for third order methods. In this note we consider the case F = ∇f, where f is three times continuously differentiable. We show that for a large class of sparse problems the ratio of the number of arithmetic operations per iteration of a third order method to that of Newton's method is constant. This holds when the structure of the tensor is induced by a general sparse structured Hessian matrix that gives no fill-in when a direct method is used to solve the systems of linear equations. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
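The induced structure can be sketched as follows: a tensor entry T[i][j][k] can be nonzero only when all three index pairs are Hessian nonzeros. This small illustration (an assumption-level sketch, not code from the note) counts the induced entries for a tridiagonal Hessian, where they grow only linearly in n:

```python
def induced_tensor_pattern(hess_pattern, n):
    """Enumerate unique tensor entries T[i][j][k] (k <= j <= i) induced
    by a symmetric Hessian sparsity pattern: an entry may be nonzero only
    if (i,j), (i,k) and (j,k) are all Hessian nonzeros.  `hess_pattern`
    is a set of pairs (i, j) with j <= i."""
    has = lambda a, b: (max(a, b), min(a, b)) in hess_pattern
    return [(i, j, k)
            for i in range(n)
            for j in range(i + 1)
            for k in range(j + 1)
            if has(i, j) and has(i, k) and has(j, k)]

# Tridiagonal Hessian on n = 6 variables: the induced tensor has
# 3n - 2 = 16 unique entries, so a third order step costs only a
# constant factor more than a Newton step.
n = 6
pattern = {(i, i) for i in range(n)} | {(i, i - 1) for i in range(1, n)}
print(len(induced_tensor_pattern(pattern, n)))
```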