20 similar documents found (search time: 15 ms)
1.
The midpoint method is an iterative method for solving nonlinear equations in a Banach space. Convergence results for this method have been studied in [3, 4, 9, 12]. Here we show how to improve and extend those results. In particular, we use hypotheses on the second Fréchet derivative of the nonlinear operator instead of the third-derivative hypotheses employed in the previous results, and we obtain Banach space versions of some results that were derived in [9, 12] only in the real or complex case. We also provide various examples that validate our results.
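Setting the Banach-space machinery aside, the scalar form of the midpoint iteration fits in a few lines; the following Python sketch is illustrative only (the test equation, tolerance, and iteration cap are arbitrary choices, not from the paper).

```python
def midpoint_method(f, df, x0, tol=1e-12, max_iter=50):
    """Midpoint iteration:
    x_{n+1} = x_n - f(x_n) / f'(x_n - 0.5 * f(x_n) / f'(x_n))."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - 0.5 * fx / df(x)   # half Newton step to the "midpoint"
        x = x - fx / df(y)         # full step using the midpoint derivative
    return x

# Example: solve x**3 - 2 = 0.
print(midpoint_method(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.0))
```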
2.
A. S. Tikhomirov, Computational Mathematics and Mathematical Physics, 2007, 47(5): 780-790
An estimate of the convergence rate of some homogeneous Markov monotone random search optimization algorithms is obtained.
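For orientation, a Markov monotone random search is one in which each candidate depends only on the current point (the Markov property) and is accepted only if it improves the objective (monotonicity). A minimal hedged sketch, with illustrative proposal distribution and parameters:

```python
import random

def monotone_random_search(f, x0, step=0.5, n_iter=10_000, seed=0):
    """Gaussian proposals centered at the current point; accept only
    improvements, so the sequence of objective values is monotone."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(n_iter):
        y = [xi + rng.gauss(0.0, step) for xi in x]
        fy = f(y)
        if fy < fx:            # monotone acceptance rule
            x, fx = y, fy
    return x, fx

x_best, f_best = monotone_random_search(lambda v: sum(t * t for t in v), [3.0, -4.0])
print(x_best, f_best)
```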
3.
Liu Hongwei, Wang Mingjie, Li Jinshan, Zhang Xiangsun, Applied Mathematics - A Journal of Chinese Universities (English Series), 2006, 21(3): 276-288
In this paper, the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm with either a Wolfe-type or an Armijo-type line search converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
4.
J. L. Goffin, Mathematical Programming, 1977, 13(1): 329-347
Rates of convergence of subgradient optimization are studied. If the step size is chosen to be a geometric progression with ratio ρ, the convergence, if it occurs, is geometric with rate ρ. For convergence to occur, it is necessary that the initial step size be large enough and that the ratio ρ be greater than a sustainable rate z(μ), which depends upon a condition number μ, defined for both differentiable and nondifferentiable functions. The sustainable rate z(μ) is closely related to the rate of convergence of the steepest ascent method for differentiable functions: in fact, it is identical if the function is not too well conditioned. This research was supported in part by the D.G.E.S. (Quebec) and the N.R.C. of Canada under grants A8970 and A4152.
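The geometric step-size rule the abstract analyzes is easy to state concretely. Below is a minimal minimization-form sketch (the paper works with ascent); the test function, s0, and rho values are illustrative assumptions.

```python
import numpy as np

def subgradient_geometric(subgrad, x0, s0=1.0, rho=0.9, n_iter=200):
    """x_{k+1} = x_k - s0 * rho**k * g_k / ||g_k||,
    i.e. step sizes forming a geometric progression with ratio rho."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = subgrad(x)
        norm = np.linalg.norm(g)
        if norm == 0.0:
            break
        x = x - (s0 * rho**k) * g / norm
    return x

# f(x) = ||x||_1 has subgradient sign(x) away from the kinks.
print(subgradient_geometric(np.sign, [2.0, -3.0]))
```

As the abstract notes, if s0 is too small or rho below the sustainable rate, the iterates stall before reaching the optimum.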
5.
6.
Hongmin Ren, Journal of Mathematical Analysis and Applications, 2006, 321(1): 396-404
For the iteration independently proposed by King [R.F. King, Tangent method for nonlinear equations, Numer. Math. 18 (1972) 298-304] and Werner [W. Werner, Über ein Verfahren der Ordnung 1 + √2 zur Nullstellenbestimmung, Numer. Math. 32 (1979) 333-342] for solving a nonlinear operator equation in Banach space, we establish a local convergence theorem under the condition introduced recently by Argyros [I.K. Argyros, A unifying local-semilocal convergence analysis and application for two-point Newton-like methods in Banach space, J. Math. Anal. Appl. 298 (2004) 374-397].
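One common scalar statement of the King-Werner iteration (of order 1 + √2) is sketched below; this form is reconstructed from the standard literature, not taken from the paper, and the starting points and test equation are arbitrary.

```python
def king_werner(f, df, x0, y0, tol=1e-12, max_iter=50):
    """x_{n+1} = x_n - f(x_n) / f'((x_n + y_n) / 2)
       y_{n+1} = x_{n+1} - f(x_{n+1}) / f'((x_n + y_n) / 2)"""
    x, y = x0, y0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        m = df((x + y) / 2.0)      # derivative at the midpoint of x and y
        x_new = x - f(x) / m
        y = x_new - f(x_new) / m   # reuse the same derivative evaluation
        x = x_new
    return x

print(king_werner(lambda t: t * t - 2.0, lambda t: 2.0 * t, 1.0, 1.5))
```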
7.
We study the projected gradient algorithm for linearly constrained optimization. Wolfe (Ref. 1) has produced a counterexample to show that this algorithm can jam. However, his counterexample is only C^1(R^n), and it is conjectured that the algorithm is convergent for C^2 functions. We show that this conjecture is partly right. We also show that one needs more assumptions to prove convergence, since we present a family of counterexamples. We finally give a demonstration that no jamming can occur for quadratic objective functions. This work was supported by the Natural Sciences and Engineering Research Council of Canada.
8.
In this paper we give local convergence results for an inexact Newton-type method for monotone equations under a local error bound condition. This condition may hold even for problems with non-isolated solutions, and it is therefore weaker than the standard nonsingularity condition.
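"Inexact" here means the Newton system J(x) d = -F(x) is solved only approximately at each step, to within a forcing tolerance. The sketch below is a generic illustration of that idea, not the paper's method; the inner solver (a plain CG on the normal equations), the forcing term eta, and the test problem are assumptions.

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.1, tol=1e-8, max_iter=50):
    """Newton-type iteration with an early-stopped inner linear solve:
    the inner loop ends once ||J d + F|| <= eta * ||F||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        A, b = J(x), -Fx
        d = np.zeros_like(x)
        r = A.T @ (b - A @ d)          # CG on A^T A d = A^T b
        p = r.copy()
        while np.linalg.norm(A @ d - b) > eta * np.linalg.norm(b):
            Ap = A.T @ (A @ p)
            a = (r @ r) / (p @ Ap)
            d += a * p
            r_new = r - a * Ap
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        x = x + d
    return x

# Monotone example: F(x) = x + x**3 componentwise.
F = lambda x: x + x**3
Jac = lambda x: np.diag(1.0 + 3.0 * x**2)
print(inexact_newton(F, Jac, np.array([2.0, -1.0])))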
9.
In this paper, we propose a new integral global optimization algorithm for finding the solution of a continuous minimization problem and prove the asymptotic convergence of this algorithm. In our modified method, we use a variable measure integral, importance sampling, and the main idea of the cross-entropy method to ensure convergence and efficiency. Numerical results show that the new method is very efficient on some challenging continuous global optimization problems.
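The cross-entropy ingredient mentioned in the abstract can be illustrated in isolation: sample from a parametric distribution, keep an elite fraction, and refit the distribution to the elites. This is only that one ingredient (the paper combines it with variable measure integrals and importance sampling); distribution, sample sizes, and the test function are illustrative.

```python
import numpy as np

def cross_entropy_min(f, mu, sigma, n_samples=200, elite_frac=0.1,
                      n_iter=50, seed=0):
    """Gaussian cross-entropy method: refit (mu, sigma) to the elite
    samples with the lowest objective values at each iteration."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iter):
        xs = rng.normal(mu, sigma, size=(n_samples, mu.size))
        elites = xs[np.argsort([f(x) for x in xs])[:n_elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-12
    return mu

# Rastrigin-type multimodal test function.
f = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(cross_entropy_min(f, mu=[3.0, -3.0], sigma=[2.0, 2.0]))
```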
10.
M. S. Sarma, Journal of Optimization Theory and Applications, 1990, 66(2): 337-343
The Baba and Dorea global minimization methods have been applied to two physical problems. The first is that of finding the global minimum of the transformer design function of six variables subject to constraints. The second is the problem of fitting the orbit of a satellite using a set of observations. The latter problem is reduced to finding the global minimum of the sum of the squares of the differences between the observed values of the azimuth, elevation, and range at certain intervals of time from the epoch and the computed values at the same intervals. Baba and Dorea established theoretically that the random optimization methods converge to the global minimum with probability one. The numerical experiments carried out for the above two problems show that convergence is very slow for the first problem and even slower for the second. In both cases, it has not been possible to reach the global minimum if the search domains of the variables are wide, even after a very large number of function evaluations. The author thanks the referee for his suggestions on improving the presentation of the paper.
11.
The local quadratic convergence of the Gauss-Newton method for convex composite optimization f = h∘F is established for any convex function h with minimum set C, extending Burke and Ferris' results for the case when C is a set of weak sharp minima for h.
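In the convex composite setting, each Gauss-Newton step minimizes the linearization h(F(x_k) + F'(x_k) d) in d. For the classical special case h(u) = ||u||^2 this reduces to a linear least-squares subproblem, sketched below; the residual model and data are illustrative, not from the paper.

```python
import numpy as np

def gauss_newton(F, J, x0, n_iter=20):
    """Gauss-Newton for h(F(x)) with h(u) = ||u||^2: each step solves
    min_d ||F(x_k) + J(x_k) d||^2 via least squares."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        d, *_ = np.linalg.lstsq(J(x), -F(x), rcond=None)
        x = x + d
    return x

# Fit residuals F(x) = x0 * exp(x1 * t_i) - y_i for synthetic data.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
F = lambda x: x[0] * np.exp(x[1] * t) - y
J = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
print(gauss_newton(F, J, np.array([1.0, -1.0])))  # ~ [2, -1.5]
```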
12.
Ioannis K. Argyros, Central European Journal of Mathematics, 2007, 5(2): 205-214
We provide sufficient convergence conditions for the Secant method of approximating a locally unique solution of an operator equation in a Banach space. The main hypothesis is the gamma condition, first introduced in [10] for the study of Newton's method. Our sufficient convergence condition reduces to the one obtained in [10] for Newton's method. A numerical example is also provided.
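In the scalar case the Secant iteration takes the familiar form below; in the Banach-space setting of the abstract, the difference quotient is replaced by a divided-difference operator. The test equation and starting points are arbitrary.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol or f1 == f0:
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    return x1

print(secant(lambda t: t**2 - 2.0, 1.0, 2.0))  # ~ sqrt(2)
```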
13.
Optimization, 2012, 61(9): 1387-1400
Although the Hestenes-Stiefel (HS) method is well known, research on its convergence rate under an inexact line search is very rare. Recently, Zhang, Zhou and Li [Some descent three-term conjugate gradient methods and their global convergence, Optim. Method Softw. 22 (2007), pp. 697-711] proposed a three-term Hestenes-Stiefel method for unconstrained optimization problems. In this article, we investigate the convergence rate of this method. We show that the three-term HS method with the Wolfe line search is n-step superlinearly and even quadratically convergent if a restart technique is used under reasonable conditions. Some numerical results are also reported to verify the theoretical results; they show the method is more efficient than the previous ones.
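A commonly stated form of the descent three-term HS direction is d_{k+1} = -g_{k+1} + (g_{k+1}·y_k / d_k·y_k) d_k - (g_{k+1}·d_k / d_k·y_k) y_k, which forces d_{k+1}·g_{k+1} = -||g_{k+1}||^2. The sketch below uses that form under the assumption it matches the cited method; the Wolfe search is delegated to scipy, and the test problem and fallback step are illustrative.

```python
import numpy as np
from scipy.optimize import line_search

def three_term_hs(f, grad, x0, n_iter=200):
    """Three-term HS conjugate gradient with a Wolfe line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        if np.linalg.norm(g) < 1e-8:
            break
        alpha = line_search(f, grad, x, d, c1=1e-4, c2=0.9)[0]
        if alpha is None:
            alpha = 1e-4                    # fall back on a small step
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        dy = d @ y                          # > 0 under the Wolfe conditions
        d = -g_new + (g_new @ y / dy) * d - (g_new @ d / dy) * y
        x, g = x_new, g_new
    return x

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
print(three_term_hs(rosen, rosen_grad, np.array([-1.2, 1.0])))
```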
14.
A. S. Tikhomirov, Computational Mathematics and Mathematical Physics, 2006, 46(3): 361-375
Estimates of the convergence rate of some homogeneous Markov monotone random search optimization methods are given.
15.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines a line search with a trust region technique to generate new iterates, and therefore has the advantages of both approaches. It makes full use of multi-step iterative information from previous iterations and avoids storing and computing the matrices associated with the Hessian of the objective function, so it is suitable for large-scale optimization problems. We also design an implementable version of the method and analyze its global convergence under weak conditions. Using more information from previous iterative steps enables the design of fast, effective, and robust algorithms. Numerical experiments show that, compared with other similar methods, the new method is effective, stable, and robust in practical computation.
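A generic memory-gradient step combines the current negative gradient with the last few search directions, so no Hessian-sized storage is needed. The sketch below shows only that generic idea, not the paper's specific method: the direction weights and fixed step are illustrative placeholders for the paper's line-search/trust-region rules.

```python
import numpy as np

def memory_gradient(grad, x0, m=3, alpha=0.01, beta=0.3, n_iter=1000):
    """d_k = -g_k + weighted sum of the m most recent directions."""
    x = np.asarray(x0, dtype=float)
    history = []
    for _ in range(n_iter):
        g = grad(x)
        d = -g + sum((beta / max(len(history), 1)) * h for h in history)
        x = x + alpha * d
        history = (history + [d])[-m:]     # keep only the last m directions
    return x

print(memory_gradient(lambda x: 2 * x, np.array([5.0, -3.0])))  # -> ~0
```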
16.
In this paper, we study the order of convergence of the Euler-Maruyama (EM) method for neutral stochastic functional differential equations (NSFDEs). Under the global Lipschitz condition, we show that the pth moment convergence of the EM numerical solutions for NSFDEs has order p/2 − 1/l for any p ≥ 2 and any integer l > 1. Moreover, we show that the rate of mean-square convergence of the EM method under the local Lipschitz condition is 1 − ε/2 for any ε ∈ (0, 1), provided the local Lipschitz constants of the coefficients, valid on balls of radius j, grow no faster than log j. This is significantly different from the case of stochastic differential equations, where the order is 1/2.
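For reference, the basic EM discretization for a plain SDE dX = mu(X) dt + sigma(X) dW is sketched below; the NSFDEs of the abstract additionally involve a neutral term and coefficients depending on the solution's past segment, which this illustrative sketch omits. Coefficients and step count are arbitrary.

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T=1.0, n_steps=1000, seed=0):
    """X_{k+1} = X_k + mu(X_k) dt + sigma(X_k) dW_k, dW_k ~ N(0, dt)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dW
    return x

# Geometric Brownian motion: dX = 0.5 X dt + 0.2 X dW.
path = euler_maruyama(lambda x: 0.5 * x, lambda x: 0.2 * x, x0=1.0)
print(path[-1])
```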
17.
Ni Qin, Acta Mathematicae Applicatae Sinica (English Series), 1998, 14(3): 271-283
1. Introduction. In [6], a QPFTH method was proposed for solving the following nonlinear programming problem, where the functions f: R^n → R^1 and g_j: R^n → R^1, j ∈ J, are twice continuously differentiable. The QPFTH algorithm was developed for solving the sparse large-scale problem (1.1) and was two-step Q-quadratically and R-quadratically convergent (see [6]). The global convergence of this algorithm is discussed in detail in this paper. For the following investigation we require some notations and assumptions. The Lagrangian of problem (1.1) is defined by …
18.
The semilocal convergence properties of Halley's method for nonlinear operator equations are studied under the hypothesis that the second derivative satisfies a weak Lipschitz condition. The analysis in the present paper is based on a family of recurrence relations satisfied by the operators involved. An application to a nonlinear Hammerstein integral equation of the second kind is provided.
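In the scalar case, Halley's (cubically convergent) iteration has the closed form used below; the test equation and tolerance are illustrative.

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's iteration:
    x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d1, d2 = df(x), d2f(x)
        x = x - 2.0 * fx * d1 / (2.0 * d1 * d1 - fx * d2)
    return x

print(halley(lambda t: t**3 - 2.0, lambda t: 3 * t**2, lambda t: 6 * t, 1.0))
```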
19.
20.
Ahmed Roubi, Computational Optimization and Applications, 1994, 3(3): 259-280
We consider a method of centers for solving constrained optimization problems. We establish its global convergence and show that it converges at a linear rate whether the starting point of the algorithm is feasible or infeasible. We demonstrate the effect of scaling on the rate of convergence. We then extend the stability result of [5] to the infeasible case and, finally, give an application to semi-infinite optimization problems.
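A Huard-style method of centers minimizes, at each iteration, the maximum of the objective improvement f(x) - f(x_k) and the constraint values g_i(x), driving all of them negative and pulling the iterate toward the "center" of the current level set. The sketch below shows only that bare scheme (the paper's algorithm adds scaling and handles infeasible starts); the subproblem solver, test problem, and iteration counts are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def method_of_centers(f, gs, x0, n_iter=30):
    """x_{k+1} = argmin_x max{ f(x) - f(x_k), g_1(x), ..., g_m(x) }."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        fk = f(x)
        phi = lambda z: max(f(z) - fk, max(g(z) for g in gs))
        x = minimize(phi, x, method="Nelder-Mead").x   # nonsmooth subproblem
    return x

# Minimize (x0 - 2)^2 + (x1 - 1)^2 subject to x0 + x1 <= 2.
f = lambda z: (z[0] - 2.0)**2 + (z[1] - 1.0)**2
gs = [lambda z: z[0] + z[1] - 2.0]
print(method_of_centers(f, gs, x0=np.array([0.0, 0.0])))  # ~ [1.5, 0.5]
```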