Search results: 10 full-text articles in Mathematics, published between 1985 and 2017.
1.
In this paper we propose the use of damped techniques within Nonlinear Conjugate Gradient (NCG) methods. Damped techniques were introduced by Powell, were recently taken up again by Al-Baali, and until now have been applied only in the framework of quasi-Newton methods. We extend their use to NCG methods for large-scale unconstrained optimization, aiming to improve the efficiency and robustness of the latter methods, especially when solving difficult problems. We consider both unpreconditioned and preconditioned NCG. In the latter case, we embed damped techniques within a class of preconditioners based on quasi-Newton updates. Our purpose is to provide efficient preconditioners which approximate, in some sense, the inverse of the Hessian matrix, while still preserving the information provided by the secant equation or some of its modifications. The results of extensive numerical experiments show that the proposed approach is quite promising.
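To make the damping idea concrete, here is a minimal sketch of a Hestenes-Stiefel NCG iteration in which the usual gradient difference y is replaced by Powell's damped vector before computing beta. The function names, the choice B = I inside the damping formula, and the parameter sigma = 0.2 are illustrative assumptions of this sketch, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import line_search

def damped_y(s, y, Bs, sigma=0.2):
    """Powell-style damping: replace y by y_hat = phi*y + (1 - phi)*B@s,
    with phi chosen so that s^T y_hat >= sigma * s^T B s > 0."""
    sty, stBs = s @ y, s @ Bs
    phi = 1.0 if sty >= sigma * stBs else (1.0 - sigma) * stBs / (stBs - sty)
    return phi * y + (1.0 - phi) * Bs

def damped_hs_ncg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Unpreconditioned Hestenes-Stiefel NCG in which the damped y_hat
    replaces y in the beta formula.  Taking B = I (so B@s = s) inside the
    damping is a simplifying assumption of this sketch."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0] or 1e-4   # Wolfe line search
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y_hat = damped_y(s, g_new - g, s)               # B = I here
        beta = (g_new @ y_hat) / (d @ y_hat)            # damped Hestenes-Stiefel
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

In a preconditioned variant, the same damped pairs (s, y_hat) would feed a quasi-Newton-based preconditioner instead of the identity used above.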
2.
Self-scaling quasi-Newton methods for unconstrained optimization update the Hessian approximation by a formula which depends on two parameters (say, $\tau$ and $\theta$) such that $\tau = 1$, $\theta = 0$, and $\theta = 1$ yield the unscaled Broyden family, the BFGS update, and the DFP update, respectively. In previous work, conditions were obtained on these parameters that imply global and superlinear convergence for self-scaling methods on convex objective functions. This paper discusses the practical performance of several new algorithms designed to satisfy these conditions.
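As a concrete illustration, the sketch below applies one step of the two-parameter self-scaling update to a Hessian approximation B. The parameter names tau and theta follow the reading above; the exact placement of the scaling factor in the paper's experiments may differ.

```python
import numpy as np

def ss_broyden_update(B, s, y, tau=1.0, theta=0.0):
    """One self-scaling Broyden-family update of the Hessian approximation B.
    tau = 1 gives the unscaled family; within it, theta = 0 is the BFGS
    update and theta = 1 the DFP update (requires s @ y > 0)."""
    Bs = B @ s
    sBs, sy = s @ Bs, s @ y
    w = y / sy - Bs / sBs
    return tau * (B - np.outer(Bs, Bs) / sBs + theta * sBs * np.outer(w, w)) \
           + np.outer(y, y) / sy
```

For instance, calling `ss_broyden_update(B, s, y, tau=(s @ y) / (s @ B @ s), theta=0.0)` reproduces the classical Oren-Luenberger scaled BFGS choice.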
3.
4.
This paper aims to extend a certain damped technique, suitable for the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, to the limited memory BFGS method for large-scale unconstrained optimization. It is shown that the proposed technique maintains the global convergence property of the limited memory BFGS method on uniformly convex functions. Some numerical results are described to illustrate the important role of the damped technique. Since this technique safely enforces the positive definiteness of the BFGS update for any value of the steplength, we also consider using only the first Wolfe–Powell condition on the steplength; then, as in the backtracking framework, only one gradient evaluation is performed on each iteration. It is reported that the proposed damped methods work much better than the limited memory BFGS method in several cases.
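The following sketch shows where damping enters a limited memory method: each (s, y) pair is damped before being stored, which keeps every stored curvature pair safely positive. Using the initial matrix B0 = I/gamma in the damping formula, instead of the full limited memory matrix, is a simplifying assumption of this sketch.

```python
import numpy as np

def damp_pair(s, y, gamma, sigma=0.2):
    """Powell-style damping applied before storing an (s, y) pair: replace y
    by y_hat = phi*y + (1 - phi)*B0 @ s with B0 = I/gamma, so that
    s^T y_hat >= sigma * s^T B0 s > 0 for any steplength."""
    B0s = s / gamma
    sty, stB0s = s @ y, s @ B0s
    phi = 1.0 if sty >= sigma * stB0s else (1.0 - sigma) * stB0s / (stB0s - sty)
    return phi * y + (1.0 - phi) * B0s

def lbfgs_direction(g, pairs, gamma):
    """Standard two-loop recursion with H0 = gamma*I; `pairs` holds the m
    most recent (s, y_hat, rho) triples, oldest first."""
    q, alphas = g.copy(), []
    for s, y, rho in reversed(pairs):
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    r = gamma * q
    for (s, y, rho), a in zip(pairs, reversed(alphas)):
        r = r + (a - rho * (y @ r)) * s
    return -r    # search direction
```

Because damping guarantees s @ y_hat > 0, the stored pairs remain usable even when only the first Wolfe-Powell (sufficient decrease) condition is enforced, as the abstract notes.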
5.
Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain "better" curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the well-known quasi-Newton BFGS and DFP updates and a modified SR1 update, are presented to illustrate some advantages of the new techniques. These experiments show that the performance of several combined methods is substantially better than that of the standard BFGS method. Similar improvements are also obtained if the simple sufficient function reduction condition on the steplength is used instead of the strong Wolfe conditions.
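One way to combine the two ingredients is sketched below: a modified secant vector that folds function-value information into y, followed by a self-scaled BFGS update of the inverse Hessian H. Both specific formulas (the theta correction and the Oren-Luenberger-type scaling) are common choices assumed here for illustration, not necessarily those tested in the paper.

```python
import numpy as np

def combined_ss_modified_bfgs(H, s, y, f_old, f_new, g_old, g_new):
    """Self-scaled BFGS update of the inverse Hessian H using a modified
    secant vector.  theta measures the deviation from a quadratic model;
    adding max(theta, 0) increases s^T y_mod, which is already positive
    under the Wolfe conditions."""
    theta = 2.0 * (f_old - f_new) + (g_old + g_new) @ s
    y_mod = y + max(theta, 0.0) / (s @ s) * s
    tau = (s @ y_mod) / (y_mod @ H @ y_mod)      # Oren-Luenberger-type scaling
    rho = 1.0 / (s @ y_mod)
    V = np.eye(len(s)) - rho * np.outer(s, y_mod)
    return tau * (V @ H @ V.T) + rho * np.outer(s, s)
```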
6.
We consider Newton-like line search descent methods for solving nonlinear least-squares problems. The basis of our approach is to choose a method, or parameters within a method, by minimizing a variational measure which estimates the error in an inverse Hessian approximation. In one approach we consider sizing methods and choose the sizing parameters in an optimal way. In another we consider various possibilities for hybrid Gauss-Newton/BFGS methods. We conclude that a simple Gauss-Newton/BFGS hybrid is both efficient and robust, and we illustrate this by a range of comparative tests against other methods. These experiments include not only many well-known test problems but also some new classes of large-residual problems.
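A minimal sketch of such a hybrid follows. The switching rule used here (retain the Gauss-Newton Hessian J^T J while the relative reduction in f is large, otherwise apply a BFGS update) is one standard realization of such hybrids; the paper instead selects via a variational measure, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import line_search

def hybrid_gn_bfgs(r, J, x0, eps=0.2, tol=1e-8, max_iter=200):
    """Gauss-Newton/BFGS hybrid for min 0.5*||r(x)||^2, where r maps R^n to
    R^m and J(x) is the m-by-n Jacobian of r."""
    f = lambda x: 0.5 * r(x) @ r(x)
    grad = lambda x: J(x).T @ r(x)
    x = x0.astype(float)
    n = x.size
    B = J(x).T @ J(x)                     # start from the Gauss-Newton Hessian
    f_old, g = f(x), grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(B + 1e-10 * np.eye(n), -g)
        alpha = line_search(f, grad, x, d)[0] or 1e-4
        s = alpha * d
        x_new = x + s
        f_new, g_new = f(x_new), grad(x_new)
        if f_old - f_new >= eps * f_old:          # small-residual regime: GN
            B = J(x_new).T @ J(x_new)
        else:                                     # large-residual regime: BFGS
            y = g_new - g
            if s @ y > 1e-12:
                Bs = B @ s
                B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
        x, f_old, g = x_new, f_new, g_new
    return x
```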
7.
This paper studies the convergence properties of algorithms belonging to the class of self-scaling (SS) quasi-Newton methods for unconstrained optimization. This class depends on two parameters, say $\tau_k$ and $\theta_k$, for which the choice $\tau_k = 1$ gives the Broyden family of unscaled methods, within which $\theta_k = 1$ corresponds to the well-known DFP method. We propose simple conditions on these parameters that give rise to global convergence with inexact line searches, for convex objective functions. Q-superlinear convergence is achieved if further restrictions on the scaling parameter are introduced. These convergence results extend the known results for the unscaled methods. Because the scaling parameter is heavily restricted, we consider a subclass of SS methods which satisfies the required conditions. Although convergence for the unscaled methods with $\theta_k = 1$ is still an open question, we show that global and superlinear convergence for SS methods is possible and present, in particular, a new SS-DFP method.
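As an illustration of what "heavily restricted" scaling can look like, the helper below clips the two parameters to a safe region. The specific intervals used (theta_k in [0, 1], tau_k in [tau_min, min(1, h_k)] with h_k the Oren-Luenberger value) are assumptions of this sketch; the paper's precise conditions differ in detail.

```python
import numpy as np

def restricted_ss_parameters(s, y, B, theta=1.0, tau_min=0.1):
    """Clip (tau_k, theta_k) to a restricted region: theta_k in [0, 1] and
    tau_k in [tau_min, min(1, h_k)], where h_k = s^T y / s^T B s is the
    Oren-Luenberger scaling value.  With theta = 1 this gives an SS-DFP-type
    choice in which the scaling supplies the extra control."""
    h = (s @ y) / (s @ B @ s)
    tau_k = max(tau_min, min(1.0, h))   # never scale up; bounded away from 0
    theta_k = min(max(theta, 0.0), 1.0)
    return tau_k, theta_k
```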
8.
This article proposes a new conjugate gradient method for unconstrained optimization, obtained by applying the Powell symmetrical technique in a defined sense. Using the Wolfe line search conditions, the global convergence property of the method is obtained from a spectral analysis of the conjugate gradient iteration matrix and the Zoutendijk condition for steepest descent methods. Preliminary numerical results on a set of 86 unconstrained optimization test problems verify the performance of the algorithm and show that the Generalized Descent Symmetrical Hestenes-Stiefel algorithm is competitive with the Fletcher-Reeves (FR) and Polak-Ribière-Polyak (PRP+) algorithms.
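The skeleton below shows where such a formula sits inside a CG iteration with a Wolfe line search and a descent safeguard. Plain Hestenes-Stiefel beta plus a restart stands in for the paper's Generalized Descent Symmetrical Hestenes-Stiefel formula, whose exact definition is not reproduced here.

```python
import numpy as np
from scipy.optimize import line_search

def descent_hs_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """CG skeleton: Hestenes-Stiefel beta with a restart whenever the
    descent condition d^T g < 0 fails."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0] or 1e-4   # Wolfe line search
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta = (g_new @ y) / (d @ y)                    # Hestenes-Stiefel
        d_new = -g_new + beta * d
        if d_new @ g_new >= -1e-12 * np.linalg.norm(d_new) * np.linalg.norm(g_new):
            d_new = -g_new                              # restart: enforce descent
        x, g, d = x_new, g_new, d_new
    return x
```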
9.
In this paper, we propose new members of the Broyden family of quasi-Newton methods. On the basis of well-known least-change results for the BFGS and DFP updates, we develop a measure for the Broyden family which seeks to take into account the change in both the Hessian approximation and its inverse. The proposal is then to choose the formula which gives the least value of this measure in terms of the two parameters available, and hence to produce an update which is optimal in the sense of the given measure. Several approaches to the problem of minimizing the measure are considered, from which new updates are obtained. In particular, one approach yields a new variational result for the Davidon optimally conditioned method and another yields a reasonable modification to this method. The paper is also concerned with the possibility of estimating, in a certain sense, the size of the eigenvalues of the Hessian approximation on the basis of two available scalars. This allows one to derive further modifications to the above-mentioned methods. Comparisons with the BFGS and Davidon methods are made on a set of standard test problems and show promising results for certain new methods.

Part of this work was done during the author's visits at the International Centre for Theoretical Physics, Trieste, Italy, at the Systems Department, University of Calabria, Cosenza, Italy, and at Ajman University College of Science and Technology, Ajman, United Arab Emirates. The author expresses his gratitude to Professor L. Grandinetti for his encouragement and thanks the anonymous referees for their careful reading of an earlier draft of the paper and valuable comments, which led to a substantial improvement of the original paper.
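To illustrate the idea of selecting the family parameter by a measure on both the approximation and its inverse, here is a hedged sketch. The Frobenius-norm measure and the grid search below are stand-ins for the paper's least-change criterion and its analytic minimization; both are assumptions for illustration only.

```python
import numpy as np

def broyden_update(B, s, y, phi):
    """Broyden-family update of B with parameter phi (phi=0 BFGS, phi=1 DFP)."""
    Bs = B @ s
    sBs, sy = s @ Bs, s @ y
    w = y / sy - Bs / sBs
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy \
           + phi * sBs * np.outer(w, w)

def least_measure_phi(B, s, y, phis=np.linspace(0.0, 1.0, 101)):
    """Pick the family parameter minimizing a measure that penalizes change
    in both the Hessian approximation and its inverse."""
    H = np.linalg.inv(B)
    def measure(phi):
        B_new = broyden_update(B, s, y, phi)
        return (np.linalg.norm(B_new - B, 'fro')
                + np.linalg.norm(np.linalg.inv(B_new) - H, 'fro'))
    return min(phis, key=measure)
```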
10.
The line search subproblem in unconstrained optimization is concerned with finding an acceptable steplength which satisfies certain standard conditions. Prototype algorithms are described which guarantee finding such a step in a finite number of operations. This is achieved by first bracketing an interval of acceptable values and then reducing this bracket uniformly by the repeated use of sectioning in a systematic way. Some new theorems about convergence and termination of the line search are presented.

Use of these algorithms to solve the line search subproblem in methods for nonlinear least squares is considered. We show that substantial gains in efficiency can be made by applying polynomial interpolation to the individual residual functions rather than to the overall objective function. We also study modified schemes in which the Jacobian matrix is evaluated as infrequently as possible, and show that further worthwhile savings can be made. Numerical results are presented.

This work was supported by the award of a Syrian Ministry of Higher Education Scholarship.
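A minimal prototype of the bracket-then-section scheme is sketched below, using bisection for the sectioning step (practical versions use polynomial interpolation, as the second paragraph indicates). Here phi and dphi are the one-dimensional restriction of the objective and its derivative along the search direction; the Wolfe constants c1 and c2 are conventional values, assumed here.

```python
def bracket_section(phi, dphi, c1=1e-4, c2=0.9,
                    a_init=1.0, a_max=50.0, max_sections=50):
    """Find a steplength satisfying the strong Wolfe conditions by first
    bracketing an interval of acceptable points, then sectioning it.
    phi(a) = f(x + a*d) and dphi(a) = phi'(a), with dphi(0) < 0."""
    f0, g0 = phi(0.0), dphi(0.0)
    a_prev, f_prev, a = 0.0, f0, a_init
    lo, hi = 0.0, a_max
    # Bracketing phase: expand until an interval containing an acceptable
    # steplength is trapped between a_prev and a.
    while a < a_max:
        fa = phi(a)
        if fa > f0 + c1 * a * g0 or fa >= f_prev:
            lo, hi = a_prev, a
            break
        ga = dphi(a)
        if abs(ga) <= -c2 * g0:
            return a                      # acceptable point found outright
        if ga >= 0.0:
            lo, hi = a, a_prev
            break
        a_prev, f_prev, a = a, fa, 2.0 * a
    # Sectioning phase: uniformly reduce the bracket (bisection here).
    for _ in range(max_sections):
        m = 0.5 * (lo + hi)
        fm = phi(m)
        if fm > f0 + c1 * m * g0 or fm >= phi(lo):
            hi = m                        # acceptable points lie toward lo
        else:
            gm = dphi(m)
            if abs(gm) <= -c2 * g0:
                return m
            if gm * (hi - lo) >= 0.0:
                hi = lo
            lo = m
    return 0.5 * (lo + hi)
```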