1.
We are concerned with defining new globalization criteria for solution methods of nonlinear equations. The current criteria used in these methods require a sufficient decrease of a particular merit function at each iteration of the algorithm. As was observed in the field of smooth unconstrained optimization, this descent requirement can considerably slow the rate of convergence of the sequence of points produced and, in some cases, can heavily deteriorate the performance of algorithms. The aim of this paper is to show that the global convergence of most methods proposed in the literature for solving systems of nonlinear equations can be obtained using less restrictive criteria that do not enforce a monotonic decrease of the chosen merit function. In particular, we show that a general stabilization scheme, recently proposed for the unconstrained minimization of continuously differentiable functions, can be extended to methods for the solution of nonlinear (nonsmooth) equations. This scheme includes different kinds of relaxation of the descent requirement and opens up the possibility of describing new classes of algorithms where the old monotone linesearch techniques are replaced with more flexible nonmonotone stabilization procedures. As in the case of smooth unconstrained optimization, this should be the basis for defining more efficient algorithms with very good practical rates of convergence. This material is partially based on research supported by the Air Force Office of Scientific Research Grant AFOSR-89-0410, National Science Foundation Grant CCR-91-57632, and Istituto di Analisi dei Sistemi ed Informatica del CNR.
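As a concrete editorial illustration of relaxing the descent requirement (this is not the specific stabilization scheme of the paper above), the Python sketch below shows a nonmonotone Armijo-type backtracking test in which the new merit value is compared against the worst of the last M values instead of the current one; the merit function, the memory length M, and the parameters gamma and beta are assumptions of the sketch.

```python
import numpy as np

# Illustrative nonmonotone (watchdog-style) Armijo test: accept the step when the
# merit value falls below the worst of the last M values plus a small decrease.
# The names f, grad_f, M, gamma, beta are assumptions of this sketch.

def nonmonotone_linesearch(f, grad_f, x, d, history, M=10, gamma=1e-4, beta=0.5):
    """Backtrack until a relaxed (nonmonotone) Armijo condition holds."""
    f_ref = max(history[-M:])          # worst merit value over the last M iterations
    slope = float(grad_f(x) @ d)       # directional derivative along d
    alpha = 1.0
    while f(x + alpha * d) > f_ref + gamma * alpha * slope:
        alpha *= beta
    return alpha

# Example with the merit function f(x) = 0.5 * ||F(x)||^2 for F(x) = (x1^2 - 1, x2).
F = lambda x: np.array([x[0] ** 2 - 1.0, x[1]])
f = lambda x: 0.5 * float(F(x) @ F(x))
grad_f = lambda x: np.array([2.0 * x[0] * (x[0] ** 2 - 1.0), x[1]])

x = np.array([2.0, 1.0])
d = -grad_f(x)                          # steepest-descent direction for the merit function
alpha = nonmonotone_linesearch(f, grad_f, x, d, [f(x)])
```

With M = 1 the test reduces to the usual monotone Armijo condition, so the memory length controls how much nonmonotonicity is tolerated.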
2.
Recently a new derivative-free algorithm has been proposed for the solution of linearly constrained finite minimax problems. This derivative-free algorithm is based on a smoothing technique that allows one to take into account the non-smoothness of the max function. In this paper, we investigate, both from a theoretical and a computational point of view, the behavior of the minimax algorithm when used to solve systems of nonlinear inequalities when derivatives are unavailable. In particular, we show an interesting property of the algorithm: under some mild conditions regarding the regularity of the functions defining the system, it is possible to prove that the algorithm locates a solution of the problem after a finite number of iterations. Furthermore, under a weaker regularity condition, it is possible to show that an accumulation point of the sequence generated by the algorithm exists which is a solution of the system. Moreover, we carried out numerical experiments and compared the method against a standard pattern search minimization method. The results confirm that the good theoretical properties of the method correspond to interesting numerical performance. Moreover, the algorithm compares favorably with a standard derivative-free method, and this seems to indicate that extending the smoothing technique to pattern search algorithms can be beneficial.
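For orientation, one standard way to smooth the max function in minimax problems is an exponential (log-sum-exp) penalty; the formula below is only a generic illustration and is not claimed to be the particular smoothing used in the cited algorithm.

```latex
% Illustrative exponential (log-sum-exp) smoothing of the finite max function.
% This generic construction is an assumption of this note, not necessarily the
% smoothing used in the cited paper.
\[
  f_\mu(x) = \mu \,\ln\!\sum_{i=1}^{m} \exp\!\bigl(f_i(x)/\mu\bigr),
  \qquad
  \max_{1\le i\le m} f_i(x) \;\le\; f_\mu(x) \;\le\; \max_{1\le i\le m} f_i(x) + \mu\ln m .
\]
```

The smoothed function is continuously differentiable for mu > 0 and approaches the max uniformly as mu tends to 0, which is what makes smooth minimization machinery applicable despite the nonsmooth max.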
3.
Chao Gu, Applied Mathematics and Computation, 2011, 217(22): 9351-9357
In this paper, we propose a nonmonotone filter Diagonalized Quasi-Newton Multiplier (DQMM) method for solving systems of nonlinear equations. The system of nonlinear equations is transformed into a constrained nonlinear programming problem, which is then solved by the nonmonotone filter DQMM method. A nonmonotone criterion is used to speed up convergence in some ill-conditioned cases. Under reasonable conditions, we establish the global convergence properties. Numerical experiments are reported to show the effectiveness of the proposed algorithm.
4.
M.A. Diniz-Ehrhardt, J.M. Martínez, M. Raydan, Journal of Computational and Applied Mathematics, 2008, 219(2): 383
A tolerant derivative-free nonmonotone line-search technique is proposed and analyzed. Several consecutive increases in the objective function and also nondescent directions are admitted for unconstrained minimization. To exemplify the power of this new line search, we describe a direct search algorithm in which the directions are chosen randomly. The convergence properties of this random method rely exclusively on the line-search technique. We present numerical experiments to illustrate the advantages of using a derivative-free nonmonotone globalization strategy with approximated-gradient-type methods and also with the inverse SR1 update, which can produce nondescent directions. In all cases we use a local-variation finite-difference approximation to the gradient.
5.
We present a hybrid algorithm that combines a genetic algorithm with the Barzilai–Borwein gradient method. Under specific assumptions the new method guarantees convergence to a stationary point of a continuously differentiable function from any arbitrary initial point. Our preliminary numerical results indicate that the new methodology efficiently and frequently finds the global minimum, in comparison with the globalized Barzilai–Borwein method and the genetic algorithm of the Toolbox of Genetic Algorithms of MATLAB.
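A minimal sketch of the gradient component of such a hybrid, namely the (first) Barzilai–Borwein step length alpha_k = s^T s / s^T y; the test problem, safeguards, and names below are assumptions of the sketch, and the genetic-algorithm part is not reproduced.

```python
import numpy as np

# Sketch of the (first) Barzilai-Borwein step length: alpha_k = s^T s / s^T y with
# s = x_k - x_{k-1} and y = g_k - g_{k-1}.  The quadratic test function and the
# safeguards are assumptions for this illustration only.

def bb_gradient_descent(grad, x0, iters=50, alpha0=1e-3):
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - alpha0 * g_prev            # one plain gradient step to start
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        denom = float(s @ y)
        alpha = float(s @ s) / denom if abs(denom) > 1e-12 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g                   # BB step (no line search in this sketch)
    return x

# Example: minimize 0.5 * x^T diag(1, 10) x, whose unique minimizer is the origin.
grad = lambda x: np.array([1.0, 10.0]) * x
print(bb_gradient_descent(grad, np.array([5.0, 3.0])))
```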
6.
Xia Wang, Journal of Computational and Applied Mathematics, 2010, 234(5): 1611-4927
In this paper, three new families of eighth-order iterative methods for finding simple roots of nonlinear equations are developed by using weight function methods. Per iteration these methods require three evaluations of the function and one evaluation of the first derivative. This implies that the efficiency index of the developed methods is 1.682, which is optimal according to Kung and Traub’s conjecture [7] for four function evaluations per iteration. Notice that the methods of Bi et al. in [2] and [3] are special cases of the developed families. In this study, several new examples of eighth-order methods with efficiency index 1.682 are provided after the development of each family of methods. Numerical comparisons are made with several other existing methods to show the performance of the presented methods.
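For orientation, the efficiency index quoted in these abstracts is p^(1/d), where p is the convergence order and d the number of evaluations per iteration; the purely illustrative snippet below checks the figures quoted here.

```python
# Efficiency index p**(1/d) for convergence order p and d evaluations per iteration.
# The values below simply reproduce the figures quoted in the abstract.
p, d = 8, 4
print(round(p ** (1 / d), 3))   # 1.682  (efficiency index of the eighth-order methods)
print(2 ** (d - 1))             # 8      (Kung-Traub optimal order for d evaluations)
```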
7.
Journal of the Egyptian Mathematical Society, 2013, 21(3): 334-339
The aim of the present paper is to introduce and investigate new ninth- and seventh-order convergent Newton-type iterative methods for solving nonlinear equations. The ninth-order convergent Newton-type iterative method is made derivative-free to obtain a seventh-order convergent Newton-type iterative method. These new methods, with and without derivatives, have efficiency indices of 1.5518 and 1.6266, respectively. The error equations are used to establish the order of convergence of the proposed iterative methods. Finally, various numerical comparisons are carried out in MATLAB to demonstrate the performance of the developed methods.
8.
J. Abaffy, Journal of Optimization Theory and Applications, 1992, 73(2): 269-277
In this paper, some Q-order convergence theorems are given for the problem of solving nonlinear systems of equations when using very general finitely terminating methods for the solution of the associated linear systems. The theorems differ from those of Dembo, Eisenstat, and Steihaug in the stopping condition used and in their applicability to the nonlinear ABS algorithm. Lecture presented at the University of Bergamo, Bergamo, Italy, October 1989.
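For context, the Dembo–Eisenstat–Steihaug framework referred to above is built around a relative stopping test for the inner linear solve; its classical form is recalled below for orientation only, and is not the condition adopted in this paper.

```latex
% Classical inexact-Newton (Dembo-Eisenstat-Steihaug) stopping test for the inner
% linear solve producing the step s_k; quoted for orientation.  Local q-superlinear
% convergence holds when eta_k -> 0, and q-quadratic convergence when
% eta_k = O(||F(x_k)||).
\[
  \| F(x_k) + F'(x_k)\, s_k \| \;\le\; \eta_k\, \| F(x_k) \|,
  \qquad 0 \le \eta_k < 1 .
\]
```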
9.
Tensor methods for large sparse systems of nonlinear equations
This paper introduces tensor methods for solving large sparse systems of nonlinear equations. Tensor methods for nonlinear equations were developed in the context of solving small to medium-sized dense problems. They base each iteration on a quadratic model of the nonlinear equations, where the second-order term is selected so that the model requires no more derivative or function information per iteration than standard linear model-based methods, and hardly more storage or arithmetic operations per iteration. Computational experiments on small to medium-sized problems have shown tensor methods to be considerably more efficient than standard Newton-based methods, with a particularly large advantage on singular problems. This paper considers the extension of this approach to solve large sparse problems. The key issue considered is how to make efficient use of sparsity in forming and solving the tensor model problem at each iteration. Accomplishing this turns out to require an entirely new way of solving the tensor model that successfully exploits the sparsity of the Jacobian, whether the Jacobian is nonsingular or singular. We develop such an approach and, based upon it, an efficient tensor method for solving large sparse systems of nonlinear equations. Test results indicate that this tensor method is significantly more efficient and robust than an efficient sparse Newton-based method, in terms of iterations, function evaluations, and execution time. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. Work supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Computational and Technology Research, US Department of Energy, under Contract W-31-109-Eng-38, by the National Aerospace Agency under Purchase Order L25935D, and by the National Science Foundation, through the Center for Research on Parallel Computation, under Cooperative Agreement No. CCR-9120008. Research supported by AFOSR Grants No. AFOSR-90-0109 and F49620-94-1-0101, ARO Grants No. DAAL03-91-G-0151 and DAAH04-94-G-0228, and NSF Grant No. CCR-9101795.
10.
Tensor methods for nonlinear equations base each iteration upon a standard linear model, augmented by a low-rank quadratic term that is selected in such a way that the model is efficient to form, store, and solve. These methods have been shown to be very efficient and robust computationally, especially on problems where the Jacobian matrix at the root has a small rank deficiency. This paper analyzes the local convergence properties of two versions of tensor methods on problems where the Jacobian matrix at the root has a null space of rank one. Both methods augment the standard linear model by a rank-one quadratic term. We show under mild conditions that the sequence of iterates generated by the tensor method based upon an ideal tensor model converges locally and two-step Q-superlinearly to the solution with Q-order 3/2, and that the sequence of iterates generated by the tensor method based upon a practical tensor model converges locally and three-step Q-superlinearly to the solution with Q-order 3/2. In the same situation, it is known that standard methods converge linearly with constant converging to 1/2. Hence, tensor methods have theoretical advantages over standard methods. Our analysis also confirms that tensor methods converge at least quadratically on problems where the Jacobian matrix at the root is nonsingular. This paper is dedicated to Phil Wolfe on the occasion of his 65th birthday. Research supported by AFOSR grant AFOSR-90-0109, ARO grant DAAL 03-91-G-0151, and NSF grants CCR-8920519 and CCR-9101795.
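As a generic illustration of the models discussed above (the specific choices made in the ideal and practical versions are not reproduced here), a tensor model with a rank-one quadratic term has the form:

```latex
% Generic tensor model with a rank-one quadratic term; the particular choices of
% a_k and s_k in the ideal and practical versions analysed above are assumptions
% left unspecified here.
\[
  M_T(x_k + d) \;=\; F(x_k) + F'(x_k)\, d + \tfrac{1}{2}\, a_k \,(s_k^{\top} d)^2 ,
  \qquad a_k,\, s_k \in \mathbb{R}^n .
\]
```

Because the quadratic term involves only the two vectors a_k and s_k, forming and storing the model costs little more than the standard linear (Newton) model.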
11.
In this paper fast implicit and explicit Runge–Kutta methods for systems of Volterra integral equations of Hammerstein type are constructed. The coefficients of the methods are expressed in terms of the values of the Laplace transform of the kernel. These methods have been suitably constructed in order to be implemented in an efficient way, thus leading to a very low computational cost both in time and in space. The order of convergence of the constructed methods is studied. The numerical experiments confirm the expected accuracy and computational cost.
AMS subject classification (2000): 65R20, 45D05, 44A35, 44A10
12.
13.
14.
Applied Mathematical Modelling, 2014, 38(11-12): 3003-3015
This study presents a new trust-region procedure to solve a system of nonlinear equations in several variables. The proposed approach combines an effective adaptive trust-region radius with a nonmonotone strategy, because it is believed that this combination can improve the efficiency and robustness of the trust-region framework. Indeed, it decreases the computational cost of the algorithm by decreasing the required number of subproblems to be solved. The global and quadratic convergence of the proposed approach are proved without any nondegeneracy assumption on the exact Jacobian. Preliminary numerical results indicate the promising behavior of the new procedure for solving systems of nonlinear equations.
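A minimal sketch of the trust-region acceptance ratio that such procedures revolve around, using the Gauss–Newton model of the merit function 0.5 * ||F(x)||^2; the update constants and names below are common textbook choices and assumptions of the sketch, not the adaptive radius rule or the nonmonotone reference value of the cited paper.

```python
import numpy as np

# Generic trust-region acceptance ratio for nonlinear equations, using the merit
# function f(x) = 0.5 * ||F(x)||^2 and its Gauss-Newton model.  The constants
# 0.25, 0.75, 0.5 and 2.0 are common textbook choices, not the cited paper's rule.

def tr_ratio_and_radius(F, J, x, step, radius):
    f = lambda z: 0.5 * float(F(z) @ F(z))
    model_res = F(x) + J(x) @ step                      # linearized residual at the trial step
    pred = f(x) - 0.5 * float(model_res @ model_res)    # predicted reduction
    ared = f(x) - f(x + step)                           # actual reduction
    rho = ared / pred if pred > 0 else -np.inf
    if rho < 0.25:
        radius *= 0.5                                   # poor agreement: shrink the region
    elif rho > 0.75 and np.linalg.norm(step) >= 0.99 * radius:
        radius *= 2.0                                   # good agreement at the boundary: expand
    return rho, radius

# Example: a linear system F(x) = (x1 + x2 - 3, x1 - x2 - 1), so the model is exact.
F = lambda x: np.array([x[0] + x[1] - 3.0, x[0] - x[1] - 1.0])
J = lambda x: np.array([[1.0, 1.0], [1.0, -1.0]])
print(tr_ratio_and_radius(F, J, np.array([0.0, 0.0]), np.array([1.0, 0.5]), 2.0))
```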
15.
A family of eighth-order iterative methods with four evaluations for the solution of nonlinear equations is presented. Kung and Traub conjectured that an iteration method without memory based on n evaluations could achieve optimal convergence order 2^(n-1). The new family of eighth-order methods agrees with the conjecture of Kung–Traub for the case n = 4. Therefore this family of methods has efficiency index equal to 1.682. Numerical comparisons are made with several other existing methods to show the performance of the presented methods.
16.
A modification of Newton’s method with higher-order convergence is presented. The modification is based on King’s fourth-order method. The new method requires three steps per iteration. Analysis of convergence demonstrates that the order of convergence is 16. Some numerical examples illustrate that the algorithm is more efficient and performs better than the classical Newton’s method and other methods.
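For reference, one step of King's fourth-order family in its commonly stated form is sketched below; the three-step sixteenth-order extension of the cited paper is not reproduced, and beta, f, df, and the test problem are assumptions of the sketch.

```python
# One step of King's fourth-order family (the building block mentioned above), in
# its commonly stated form.  beta is the free parameter of the family; the cited
# paper's sixteenth-order three-step method is not reproduced here.

def king_step(f, df, x, beta=2.0):
    fx = f(x)
    y = x - fx / df(x)                       # Newton predictor
    fy = f(y)
    return y - (fy / df(x)) * (fx + beta * fy) / (fx + (beta - 2.0) * fy)

# Example: one iteration on f(x) = x**3 - 2 starting from x = 1.5.
f = lambda x: x ** 3 - 2.0
df = lambda x: 3.0 * x ** 2
print(king_step(f, df, 1.5))                 # close to 2**(1/3) ~ 1.2599
```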
17.
R. Thukral, Applied Mathematics and Computation, 2010, 217(1): 222-229
In this paper we present an improvement of the fourth-order Newton-type method for solving a nonlinear equation. The new Newton-type method is shown to converge with order eight. Per iteration the new method requires three evaluations of the function and one evaluation of its first derivative, and therefore the new method has an efficiency index of 8^(1/4) ≈ 1.682, which is better than the well-known Newton-type methods of lower order. We examine the effectiveness of the new eighth-order Newton-type method by approximating the simple root of a given nonlinear equation. Numerical comparisons are made with several other existing methods to show the performance of the presented method.
18.
In this paper, we derive a new family of eighth-order methods for finding simple roots of nonlinear equations by using weight function methods. Per iteration these methods require three evaluations of the function and one evaluation of its first derivative, which implies that the efficiency index is 1.682. Numerical comparisons on illustrative examples are made to show the performance of the derived methods.
19.
Invariants of reduced forms of a p.d.e. are obtainable from a variational principle even though the p.d.e. itself does not admit a Lagrangian. The reductions carry all the advantages regarding Noether symmetries and double reductions via first integrals or conserved quantities. The examples we consider are nonlinear evolution-type equations like the general forms of the FitzHugh–Nagumo and KdV–Burgers equations. Some aspects of Painlevé properties of the reduced equations are also obtained.
20.
Jovana Džunić, Applied Mathematics and Computation, 2011, 217(14): 6633-6635
In this short note we discuss certain similarities between some three-point methods for solving nonlinear equations. In particular, we show that the recent three-point method published in [R. Thukral, A new eighth-order iterative method for solving nonlinear equations, Appl. Math. Comput. 217 (2010) 222-229] is a special case of the family of three-point methods proposed previously in [R. Thukral, M.S. Petković, Family of three-point methods of optimal order for solving nonlinear equations, J. Comput. Appl. Math. 233 (2010) 2278-2284].