20 similar documents found; search took 15 ms.
1.
Tensor methods for large sparse systems of nonlinear equations (Cited by: 1; self-citations: 0, citations by others: 1)
This paper introduces tensor methods for solving large sparse systems of nonlinear equations. Tensor methods for nonlinear equations were developed in the context of solving small to medium-sized dense problems. They base each iteration on a quadratic model of the nonlinear equations, where the second-order term is selected so that the model requires no more derivative or function information per iteration than standard linear model-based methods, and hardly more storage or arithmetic operations per iteration. Computational experiments on small to medium-sized problems have shown tensor methods to be considerably more efficient than standard Newton-based methods, with a particularly large advantage on singular problems. This paper considers the extension of this approach to solve large sparse problems. The key issue considered is how to make efficient use of sparsity in forming and solving the tensor model problem at each iteration. Accomplishing this turns out to require an entirely new way of solving the tensor model that successfully exploits the sparsity of the Jacobian, whether the Jacobian is nonsingular or singular. We develop such an approach and, based upon it, an efficient tensor method for solving large sparse systems of nonlinear equations. Test results indicate that this tensor method is significantly more efficient and robust than an efficient sparse Newton-based method, in terms of iterations, function evaluations, and execution time. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
Work supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Computational and Technology Research, US Department of Energy, under Contract W-31-109-Eng-38; by the National Aerospace Agency under Purchase Order L25935D; and by the National Science Foundation, through the Center for Research on Parallel Computation, under Cooperative Agreement No. CCR-9120008. Research supported by AFOSR Grants No. AFOSR-90-0109 and F49620-94-1-0101, ARO Grants No. DAAL03-91-G-0151 and DAAH04-94-G-0228, and NSF Grant No. CCR-9101795.
2.
A new trust region method for nonlinear equations (Cited by: 1; self-citations: 0, citations by others: 1)
In this paper, a new trust region method for systems of nonlinear equations is presented, in which the trust region radius is determined using the information of the natural residual. Global convergence is obtained under mild conditions. Unlike traditional trust region methods, the superlinear convergence of the method is proven under the local error bound condition. This condition is weaker than the nondegeneracy assumption that is necessary for superlinear convergence of traditional trust region methods. We also propose an approximate algorithm for the trust region subproblem. Preliminary numerical experiments are reported.
Acknowledgements. The authors are indebted to their supervisor, Professor Y.-X. Yuan, for his excellent guidance, and to Jorge J. Moré for his subroutine. We would also like to thank the referees for their valuable suggestions and comments.
3.
We present a partitioning group correction (PGC) algorithm, based on trust region and conjugate gradient methods, for large-scale sparse unconstrained optimization. In large sparse optimization, computing the whole Hessian matrix and solving the Newton-like equations at each iteration can be considerably expensive when a trust region method is adopted. The method depends on a symmetric consistent partition of the columns of the Hessian matrix and an inexact solution of the Newton-like equations by the conjugate gradient method. We allow the current direction to exceed the trust region bound if it is a good descent direction. In addition, we study a method for dealing with sparse matrices that have a dense structural part. Good convergence properties are retained, and we contrast the computational behavior of our method with that of other algorithms. Our numerical tests show that the algorithm is promising and quite effective, and that its performance is comparable to or better than that of other available algorithms.
4.
We propose an extension of secant methods for nonlinear equations that uses a population of previous iterates. In contrast to classical secant methods, where exact interpolation is used, we prefer a least squares approach to calibrate the linear model. We propose an explicit control of the numerical stability of the method.
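The idea in this abstract can be illustrated with a minimal sketch: instead of the exact secant interpolation, a Jacobian approximation is calibrated by least squares over a population of previous iterates. The function name and the step logic below are illustrative assumptions, not the authors' exact formulation (in particular, their stability control is omitted).

```python
import numpy as np

def population_secant_step(xs, Fs):
    """One quasi-Newton step in which the Jacobian approximation B is
    calibrated by least squares over a population of previous iterates
    (illustrative sketch only, not the paper's exact method).

    xs, Fs: lists of previous iterates x_i and residuals F(x_i),
    most recent last."""
    xk, Fk = xs[-1], Fs[-1]
    # Secant-type data: B should map differences in x to differences in F.
    dX = np.array([x - xk for x in xs[:-1]])   # shape (m, n)
    dF = np.array([F - Fk for F in Fs[:-1]])   # shape (m, n)
    # Least-squares calibration: find B with dX @ B.T ≈ dF.
    B_T, *_ = np.linalg.lstsq(dX, dF, rcond=None)
    B = B_T.T
    # Newton-like step with the calibrated linear model B*step = -F(xk).
    step = np.linalg.lstsq(B, -Fk, rcond=None)[0]
    return xk + step
```

On a linear system with enough population points, the least-squares fit recovers the exact Jacobian and the step solves the system in one iteration.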
5.
J. Abaffy 《Journal of Optimization Theory and Applications》1992,73(2):269-277
In this paper, some Q-order convergence theorems are given for the problem of solving nonlinear systems of equations when very general finitely terminating methods are used for the solution of the associated linear systems. The theorems differ from those of Dembo, Eisenstat, and Steihaug in their different stopping condition and in their applicability to the nonlinear ABS algorithm.
Lecture presented at the University of Bergamo, Bergamo, Italy, October 1989.
6.
L. Lukšan 《Journal of Optimization Theory and Applications》1996,89(3):575-595
Hybrid methods are developed for improving the Gauss-Newton method in the case of large-residual or ill-conditioned nonlinear least-squares problems. These methods are usually used in a form suitable for dense problems; in the sparse case, however, some standard approaches are unsuitable and some new possibilities appear. We propose efficient hybrid methods for various representations of sparse problems. After describing the basic ideas that help in deriving new hybrid methods, we are concerned with designing hybrid methods for sparse Jacobian and sparse Hessian representations of least-squares problems. The efficiency of the hybrid methods is demonstrated by extensive numerical experiments.
This work was supported by the Czech Republic Grant Agency, Grant 201/93/0129. The author is indebted to Jan Vlček for his comments on the first draft of this paper and to anonymous referees for many useful remarks.
7.
Ali Bouaricha 《Computational Optimization and Applications》1996,5(3):207-232
In this paper, we describe tensor methods for large systems of nonlinear equations based on Krylov subspace techniques for approximately solving the linear systems that are required in each tensor iteration. We refer to a method in this class as a tensor-Krylov algorithm. We describe comparative testing for a tensor-Krylov implementation versus an analogous implementation based on a Newton-Krylov method. The test results show that tensor-Krylov methods are much more efficient and robust than Newton-Krylov methods on hard nonlinear equations problems.
Part of this work was performed while the author was a research associate at CERFACS (Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique). Research supported in part by the Office of Scientific Computing, U.S. Department of Energy, under Contract W-31-109-Eng-38.
8.
Elias A. Lipitakis 《Journal of Computational and Applied Mathematics》1984,11(1):39-48
Second-degree normalized implicit conjugate gradient methods for the numerical solution of self-adjoint elliptic partial differential equations are developed. A proposal for the selection of certain values of the iteration parameters involved in solving two- and three-dimensional elliptic boundary-value problems, leading to substantial savings in computational work, is presented. Experimental results for model problems are given.
9.
This paper is concerned with the solution of nonlinear algebraic systems of equations. For this problem, we suggest new methods, which are combinations of the nonlinear ABS methods and quasi-Newton methods. Extensive numerical experiments compare particular algorithms and show the efficiency of the proposed methods.
The authors are grateful to Professors C. G. Broyden and E. Spedicato for many helpful discussions.
10.
A modification of Newton's method with higher-order convergence is presented. The modification is based on King's fourth-order method. The new method requires three steps per iteration. Analysis of convergence demonstrates that the order of convergence is 16. Some numerical examples illustrate that the algorithm is more efficient and performs better than the classical Newton's method and other methods.
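For context, King's fourth-order family, which the abstract names as the building block of the sixteenth-order modification, can be sketched as a Newton predictor followed by a weighted corrector that reuses the same derivative. This is a sketch of the classical King iteration only, not the paper's sixteenth-order scheme; `beta` is the family parameter.

```python
def king_step(f, fprime, x, beta=2.0):
    """One iteration of King's fourth-order family for a scalar
    equation f(x) = 0 (sketch of the base method the abstract builds on)."""
    fx = f(x)
    d = fprime(x)
    y = x - fx / d                      # Newton predictor
    fy = f(y)
    # King's corrector: second Newton-like step reusing f'(x),
    # weighted so the combined step has convergence order four.
    return y - (fy / d) * (fx + beta * fy) / (fx + (beta - 2.0) * fy)
```

Two iterations on a smooth problem already drive the error below machine-level tolerances, consistent with fourth-order convergence.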
11.
A Class of Factorization Update Algorithm for Solving Systems of Sparse Nonlinear Equations. BAI Zhongzhi (Institute of Computational Mathematic...
12.
Tong Xiaojiao, Zhou Shuzi 《Applied Mathematics-A Journal of Chinese Universities (English Series)》2000,15(2):201-210
Abstract. A trust region algorithm for equality constrained optimization is given in this paper. The algorithm does not enforce strict monotonicity of the merit function at every iteration. Global convergence of the algorithm is proved under the same conditions as the usual trust region method.
13.
We consider the problem of finding sparse solutions to an underdetermined system of nonlinear equations. The methods are based on a Gauss-Newton approach with line search, where the search direction is found by solving a linearized problem using only a subset of the columns of the Jacobian. The choice of columns is made through a greedy approach, looking either at maximum descent or at an approach corresponding to orthogonal matching for linear problems. The methods are shown to be convergent and efficient, and they outperform the l1 approach on the test problems presented.
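The greedy column choice described in this abstract can be sketched as follows: score each Jacobian column by its correlation with the residual (the orthogonal-matching flavour), keep the best k columns, and solve the restricted linearized least-squares problem. The function name and the scoring rule are illustrative assumptions; the paper's maximum-descent variant and line search are not reproduced.

```python
import numpy as np

def greedy_gn_direction(Jx, Fx, k):
    """Sketch of a greedy Gauss-Newton direction for sparse solutions:
    select the k Jacobian columns most correlated with the residual,
    then solve the linearized problem restricted to those columns."""
    scores = np.abs(Jx.T @ Fx)                 # column/residual correlation
    support = np.argsort(scores)[-k:]          # indices of the k best columns
    # Restricted linearized least-squares step: min ||Jx[:,S] d + Fx||.
    d_sub, *_ = np.linalg.lstsq(Jx[:, support], -Fx, rcond=None)
    d = np.zeros(Jx.shape[1])
    d[support] = d_sub                         # direction is sparse by construction
    return d, support
```

The returned direction is nonzero only on the selected support, which is what makes the resulting iterates sparse.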
14.
William La Cruz, José Mario Martínez, Marcos Raydan 《Mathematics of Computation》2006,75(255):1429-1448
A fully derivative-free spectral residual method for solving large-scale nonlinear systems of equations is presented. It uses in a systematic way the residual vector as a search direction, a spectral steplength that produces a nonmonotone process, and a globalization strategy that allows for this nonmonotone behavior. The global convergence analysis of the combined scheme is presented. An extensive set of numerical experiments is also presented, indicating that the new combination is competitive and frequently better than well-known Newton-Krylov methods for large-scale problems.
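The core iteration described here can be sketched in a few lines: the residual itself serves as the search direction, scaled by a Barzilai-Borwein (spectral) steplength. This is a minimal sketch under the assumption of a well-behaved problem; the nonmonotone line search and globalization strategy of the actual method are omitted.

```python
import numpy as np

def spectral_residual(F, x0, tol=1e-10, max_iter=500):
    """Minimal sketch of a spectral residual iteration for F(x) = 0.
    The residual is used as the search direction and the steplength is
    the spectral (Barzilai-Borwein) quotient s's / s'y; the method's
    nonmonotone globalization is not reproduced here."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    lam = 1.0
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        x_new = x - lam * Fx              # residual as search direction
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        sy = s @ y
        # Spectral steplength, guarded against breakdown when s'y ≈ 0.
        lam = (s @ s) / sy if abs(sy) > 1e-30 else 1.0
        x, Fx = x_new, F_new
    return x
```

No derivative or Jacobian-vector product is ever evaluated, which is what makes the approach fully derivative-free.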
15.
R. Thukral 《Applied Mathematics and Computation》2010,217(1):222-6635
In this paper we present an improvement of the fourth-order Newton-type method for solving a nonlinear equation. The new Newton-type method is shown to converge with order eight. Per iteration the new method requires three evaluations of the function and one evaluation of its first derivative, and therefore has an efficiency index of 8^(1/4) ≈ 1.682, which is better than that of the well-known Newton-type methods of lower order. We examine the effectiveness of the new eighth-order Newton-type method by approximating the simple root of a given nonlinear equation. Numerical comparisons are made with several other existing methods to show the performance of the presented method.
16.
This paper discusses nonlinear complementarity problems; its goal is to present a globally and superlinearly convergent algorithm for these problems. Filter methods have been extensively studied for handling nonlinear complementarity problems, and they are attractive because of their good numerical results. By means of a filter strategy, we present a new trust region method based on a conic model for nonlinear complementarity problems. Under a proper condition, the superlinear convergence of the algorithm is established without the strict complementarity condition.
17.
Yixun Shi 《Numerical Algorithms》1996,12(2):273-286
A new globalization procedure for solving a nonlinear system of equations F(x)=0 is proposed, based on the idea of combining the Newton step and the steepest descent step within each iteration. Starting with an arbitrary initial point, the procedure converges either to a solution of the system or to a local minimizer of f(x) = (1/2) F(x)^T F(x). Each iteration is chosen to be as close to a Newton step as possible and could be the Newton step itself. Asymptotically the Newton step will be taken in each iteration, and thus the convergence is quadratic. Numerical experiments yield positive results. Further generalizations of this procedure are also discussed in this paper.
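A sketch in the spirit of this abstract: prefer the Newton step whenever (possibly damped) it reduces the merit function f(x) = ½‖F(x)‖², and fall back to the steepest descent direction −JᵀF otherwise. The precise rule for combining the two steps is an assumption here and differs from the paper's exact procedure.

```python
import numpy as np

def globalized_newton(F, J, x0, tol=1e-10, max_iter=100):
    """Newton's method globalized by the merit function
    f(x) = 0.5 * ||F(x)||^2, with steepest descent -J^T F as fallback
    (illustrative sketch of the Newton/steepest-descent combination)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        f0 = 0.5 * Fx @ Fx
        Jx = J(x)
        try:
            d = np.linalg.solve(Jx, -Fx)       # Newton direction
        except np.linalg.LinAlgError:
            d = -Jx.T @ Fx                      # singular Jacobian: descent
        # Backtrack along d, accepting the first merit-decreasing point.
        t, accepted = 1.0, False
        while t > 1e-8:
            x_trial = x + t * d
            F_trial = F(x_trial)
            if 0.5 * F_trial @ F_trial < f0:
                x, accepted = x_trial, True
                break
            t *= 0.5
        if not accepted:
            x = x - 1e-4 * (Jx.T @ Fx)          # small steepest descent step
    return x
```

Near a solution with nonsingular Jacobian, the full Newton step (t = 1) is accepted, so the asymptotic quadratic rate mentioned in the abstract is preserved.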
18.
William La Cruz 《Applied Mathematics and Computation》2010,217(1):11-24
A derivative-free residual method for solving nonlinear operator equations in real Hilbert spaces is discussed. This method uses the residual in a systematic way as the search direction, but it does not use first-order information. Furthermore, a convergence analysis and numerical results of the new method, applied to nonlinear integral equations using symbolic computation, are presented.
19.
P. Michaels 《Communications in Nonlinear Science & Numerical Simulation》2012,17(7):3022-3030
We investigate thalamo-cortical systems modeled by nonlinear Volterra integro-differential equations of convolution type. We divide the systems into smaller subsystems in such a way that each of them is solved separately by a processor working independently of the other processors, whose results are shared only once during the computation. We solve the subsystems concurrently in a parallel computing environment and present results of numerical experiments, which show savings in run time and therefore the efficiency of our approach. For our numerical simulations, we apply different numbers np of processors, and each case shows that the run time decreases with increasing np. The optimal speed-up is obtained with np = N, where N is the (moderate) number of equations in the thalamo-cortical model.
20.
If F:H→H is a map in a Hilbert space H and there exists y such that F(y)=0 with F′(y)≠0, then the equation F(u)=0 can be solved by a DSM (dynamical systems method). This method also yields a convergent iterative method for finding y, and this iteration converges at the rate of a geometric series. It is not assumed that y is the only solution to F(u)=0. A stable approximation to a solution of the equation F(u)=f is constructed by a DSM when f is unknown but fδ is known, where ‖fδ − f‖ ≤ δ.
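In the simplest well-posed setting, the DSM trajectory u′ = −[F′(u)]⁻¹F(u) can be integrated by explicit Euler, which for stepsize 0 < h ≤ 1 is a damped Newton iteration with geometric convergence. The scalar sketch below is an assumption-laden illustration; the paper's Hilbert-space setting and regularized variants for noisy data fδ are not reproduced.

```python
def dsm_solve(F, Fprime, u0, h=0.5, tol=1e-12, max_iter=200):
    """Scalar sketch of the dynamical systems method: explicit Euler
    integration of u' = -F(u)/F'(u).  With 0 < h <= 1 this is damped
    Newton, and the residual shrinks geometrically with ratio ~(1 - h)."""
    u = u0
    for _ in range(max_iter):
        fu = F(u)
        if abs(fu) < tol:
            break
        u = u - h * fu / Fprime(u)   # Euler step along the DSM trajectory
    return u
```

With h = 1 the scheme reduces to Newton's method; smaller h trades speed for a larger basin of convergence, matching the stabilizing role of the continuous trajectory.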