Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
This paper considers simple modifications of the limited memory BFGS (L-BFGS) method for large-scale optimization. It describes algorithms that re-use a given set of stored difference vectors in alternating ways. The proposed algorithms resemble the L-BFGS method, except that the initial Hessian approximation is defined implicitly, like the L-BFGS Hessian, in terms of some stored vectors rather than as the usual multiple of the identity matrix. Numerical experiments show that the new algorithms yield a desirable improvement over the L-BFGS method.
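To make the role of the stored difference vectors concrete, the following is a minimal Python/NumPy sketch of the standard L-BFGS two-loop recursion. The function name and variable layout are illustrative; the paper's modification replaces the scalar initial scaling `gamma` below with an initial Hessian approximation defined implicitly from stored vectors, which is not reproduced here.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H_k @ grad.

    s_list, y_list hold the m most recent pairs s_i = x_{i+1} - x_i and
    y_i = g_{i+1} - g_i (illustrative sketch, not the paper's variant).
    """
    q = grad.copy()
    alphas = []
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * s.dot(q)
        alphas.append(alpha)
        q -= alpha * y
    # Usual initial Hessian approximation: a multiple of the identity.
    # The paper's modification defines this step implicitly via stored vectors.
    gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
    r = gamma * q
    # Second loop: oldest pair to newest.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * y.dot(r)
        r += (alpha - beta) * s
    return -r
```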

3.
In this paper, we give a new method for solving large-scale problems. The basic idea is to embed the conjugate gradient method as a corrector within a continuation method, with the Euler method serving as the predictor. Adaptive step-length control is used while tracing the solution curve. We present experimental examples that demonstrate the efficiency of the method.
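A schematic sketch of the predictor-corrector idea under stated assumptions: the convex homotopy H(x, t) = t*F(x) + (1 - t)*(x - x0) is an assumed choice (the abstract does not specify one), the Euler predictor follows the solution curve, and a few Fletcher-Reeves conjugate gradient steps on the merit function 0.5*||H||^2 serve as the corrector. The fixed step sizes stand in for the paper's adaptive steplength control.

```python
import numpy as np

def euler_cg_continuation(F, jac, x0, n_steps=20, cg_iters=5):
    """Schematic predictor-corrector path following for F(x) = 0."""
    x, dt = x0.copy(), 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        # Predictor: differentiate H(x(t), t) = 0 to get H_x x'(t) = -H_t.
        Hx = t * jac(x) + (1 - t) * np.eye(len(x))
        Ht = F(x) - (x - x0)
        x = x + dt * np.linalg.solve(Hx, -Ht)
        t += dt
        # Corrector: Fletcher-Reeves CG on the merit 0.5*||H(x, t)||^2.
        H = lambda z: t * F(z) + (1 - t) * (z - x0)
        grad = lambda z: (t * jac(z) + (1 - t) * np.eye(len(z))).T @ H(z)
        g = grad(x); d = -g
        for _ in range(cg_iters):
            if g.dot(g) < 1e-20:
                break
            x = x + 1e-2 * d        # fixed step; a line search belongs here
            g_new = grad(x)
            beta = g_new.dot(g_new) / g.dot(g)   # Fletcher-Reeves update
            d = -g_new + beta * d
            g = g_new
    return x
```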



4.
Optimization, 2012, 61(12): 1491–1509
Typically, practical nonsmooth optimization problems involve functions of hundreds of variables. Moreover, in many practical problems the computation of even one subgradient is difficult or impossible. In such cases derivative-free methods are the better (or only) choice, since they do not require explicit computation of subgradients. However, these methods need a large number of function evaluations even for moderately sized problems. In this article, we propose an efficient derivative-free limited memory discrete gradient bundle method for nonsmooth, possibly nonconvex optimization. Convergence of the proposed method is proved for locally Lipschitz continuous functions, and the numerical experiments presented confirm the usability of the method, especially for medium-size and large-scale problems.
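To illustrate why derivative-free methods consume many function evaluations, here is a crude coordinate finite-difference stand-in for a subgradient estimate. This is not the discrete gradient construction the method actually uses; it only makes the evaluation-count point (n + 1 evaluations of f per estimate).

```python
import numpy as np

def fd_subgradient_estimate(f, x, d, h=1e-6):
    """Crude finite-difference (sub)gradient estimate at a point perturbed
    along direction d.  NOT the method's discrete gradient; it illustrates
    the cost of working with function values only."""
    z = x + h * d
    f0 = f(z)                       # 1 evaluation
    g = np.empty_like(x)
    for i in range(len(x)):         # n further evaluations
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(z + e) - f0) / h
    return g
```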

5.
In this paper we propose a subspace limited memory quasi-Newton method for solving large-scale optimization problems with simple bounds on the variables. The limited memory quasi-Newton method updates the variables whose indices lie outside the active set, while the projected gradient method updates the active variables. The search direction consists of three parts: a subspace quasi-Newton direction and two subspace gradient and modified gradient directions. The algorithm scales to large problems because no subproblems need to be solved. Global convergence of the method is proved and some numerical results are given.
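A minimal sketch of the two box-constraint ingredients named above, with illustrative function names and an illustrative activity test; the paper's precise identification rule may differ.

```python
import numpy as np

def project(x, lo, hi):
    """Projection onto the box {x : lo <= x <= hi}."""
    return np.minimum(np.maximum(x, lo), hi)

def active_set(x, g, lo, hi, tol=1e-8):
    """Indices where a bound is (numerically) active and the gradient
    pushes outward; the remaining indices form the free subspace that
    receives the limited memory quasi-Newton step."""
    at_lower = (x - lo <= tol) & (g > 0)
    at_upper = (hi - x <= tol) & (g < 0)
    return np.where(at_lower | at_upper)[0]
```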



6.
In this paper, an active set limited memory BFGS algorithm is proposed for bound constrained optimization. Global convergence is established under suitable conditions. Numerical results show that the method is effective.

7.
Optimization, 2012, 61(6): 945–962
Typically, practical optimization problems involve nonsmooth functions of hundreds or thousands of variables, and as a rule the variables are restricted to certain meaningful intervals. In this article, we propose an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound constrained optimization. The method combines the nonsmooth variable metric bundle method with the smooth limited memory variable metric method, while constraint handling is based on the projected gradient method and dual subspace minimization. The preliminary numerical experiments presented confirm the usability of the method.

8.
This paper deals with the solution of nonlinear programming problems arising from elliptic control problems by an interior point scheme. At each step of the scheme, a large-scale symmetric indefinite system must be solved; inner iterative solvers with an adaptive stopping rule can be used to avoid unnecessary inner iterations, especially when the current outer iterate is far from the solution. In this work, we analyse the method of multipliers and the preconditioned conjugate gradient method as inner solvers for interior point schemes. We discuss the convergence of the whole approach and the implementation details, and report results of numerical experimentation on a set of large-scale test problems arising from the discretization of elliptic control problems. A comparison with other interior point codes is also reported. This research was supported by the Italian Ministry for Education, University and Research (MIUR) projects FIRB "Parallel Nonlinear Numerical Optimization PN2O" (grant n. RBAU01JYPN) and COFIN/PRIN04 "Numerical Methods and Mathematical Software for Applications" (grant n. 2004012559).
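A sketch of an inner conjugate gradient solver with an adaptive stopping rule in the spirit described above: the inner iteration stops once the relative residual falls below a forcing term that is loose far from the solution and tightens near it. The forcing-term formula is a common textbook choice, not necessarily the authors'; the sketch also assumes a positive definite system, whereas the paper's indefinite systems require additional safeguards.

```python
import numpy as np

def cg_inexact(A, g, eta_max=0.5):
    """Inner CG for the Newton-type system A d = -g, stopped adaptively
    with forcing term eta = min(eta_max, sqrt(||g||))."""
    gnorm = np.linalg.norm(g)
    eta = min(eta_max, np.sqrt(gnorm))
    d = np.zeros_like(g)
    r = -g.copy()                   # residual of A d = -g at d = 0
    p = r.copy()
    while np.linalg.norm(r) > eta * gnorm:
        Ap = A @ p
        alpha = r.dot(r) / p.dot(Ap)
        d += alpha * p
        r_new = r - alpha * Ap
        p = r_new + (r_new.dot(r_new) / r.dot(r)) * p
        r = r_new
    return d
```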

9.
In this paper, a modified limited memory BFGS method for solving large-scale unconstrained optimization problems is proposed. A remarkable feature of the proposed method is that it possesses the global convergence property without any convexity assumption on the objective function. Under suitable conditions, global convergence of the proposed method is proved. Numerical results are reported which illustrate that the proposed method is efficient.

10.
Partial separability and partitioned quasi-Newton updating have recently been introduced and applied with success to large-scale nonlinear optimization, large nonlinear least-squares calculations, and large systems of nonlinear equations. The purpose of this paper is to apply this idea to large-dimensional nonlinear network optimization problems. The proposed method uses these techniques to handle the cost function, while more classical tools such as variable partitioning and specialized data structures handle the network constraints. The performance of a code implementing this method, as well as more classical techniques, is analyzed on several numerical examples.
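A minimal sketch of how a partially separable function f(x) = sum_i f_i(x_{I_i}) and its gradient can be assembled from small element functions; the data layout is illustrative, not the paper's. Partitioned quasi-Newton updating would additionally maintain a small dense Hessian approximation per element.

```python
import numpy as np

def psep_value_grad(x, elements):
    """Evaluate a partially separable f and its gradient.

    `elements` is a list of (idx, f_i, grad_i) triples, where each element
    function f_i depends only on the small subvector x[idx] (illustrative
    layout)."""
    f, g = 0.0, np.zeros_like(x)
    for idx, fi, gi in elements:
        xi = x[idx]
        f += fi(xi)
        g[idx] += gi(xi)        # scatter the element gradient
    return f, g
```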

11.
We present an algorithm for very large-scale linearly constrained nonlinear programming (LCNP) based on a limited-storage quasi-Newton method. In large-scale programming, solving the reduced Newton equation at each iteration can be expensive and may not be justified when far from a local solution; moreover, the storage required by the reduced Hessian matrix, and even the computing time for its quasi-Newton approximation, may be prohibitive. An alternative based on the reduced truncated-Newton methodology, which has proved satisfactory for large-scale problems, is not recommended for very large-scale problems, since it requires an additional gradient evaluation and the solution of two systems of linear equations per minor iteration. We recommend a 2-step BFGS approximation of the inverse of the reduced Hessian matrix that does not require storing any matrix, since the matrix-vector product is the vector being approximated; it uses the reduced gradient and information from two previous iterations and the so-termed restart iteration. A diagonal direct BFGS preconditioning is used.

12.
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines line search and trust region techniques to generate new iterates, and therefore enjoys the advantages of both. It makes full use of iterative information from multiple previous steps and avoids the storage and computation of matrices associated with the Hessian of the objective function, making it suitable for large-scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions. Using more information from previous iterative steps enables the design of fast, effective, and robust algorithms. Numerical experiments show that the new method is effective, stable, and robust in practical computation compared with other similar methods.

13.
Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of thousands of variables. In the paper [Haarala, Miettinen, Mäkelä, Optimization Methods and Software, 19 (2004), pp. 673–692] we described an efficient method for large-scale nonsmooth optimization. In this paper, we introduce a new variant of this method and prove its global convergence for locally Lipschitz continuous objective functions, which are not necessarily differentiable or convex. In addition, we give some encouraging results from numerical experiments.

14.
An active set limited memory BFGS algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction is determined by a lower-dimensional system of linear equations in the free subspace. Implementations of the method on the CUTE test problems are described and show the efficiency of the proposed algorithm. This work was supported by 973 Project grant 2004CB719402 and NSF of China grant 10471036.

15.
In this paper we deal with the iterative computation of negative curvature directions of an objective function within large-scale optimization frameworks. Suitable directions of negative curvature of the objective function are an essential tool to guarantee convergence to second-order critical points. However, an "adequate" negative curvature direction is often required to resemble an eigenvector corresponding to the smallest eigenvalue of the Hessian matrix, so its computation may be a very difficult task on large-scale problems. Several strategies proposed in the literature compute such a direction by relying on matrix factorizations, which may be inefficient or even impracticable in a large-scale setting; the iterative methods proposed either need to store a large matrix or need to rerun the recurrence. Along this line, we propose the use of an iterative method based on a planar conjugate gradient scheme. Under mild assumptions, we provide theory for using the latter method to compute adequate negative curvature directions within optimization frameworks. Our proposal avoids any matrix storage as well as any additional rerun of the recurrence.
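For contrast with the planar scheme, here is a plain conjugate gradient sketch that merely detects a direction of negative curvature (a p with p^T A p <= 0) and stops. The paper's planar CG additionally keeps the recurrence well defined past such directions, which is omitted here.

```python
import numpy as np

def cg_with_curvature_test(A, b, tol=1e-8, max_iter=200):
    """CG on A x = b that also returns a negative-curvature direction
    if one is encountered, else None."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(max_iter):
        Ap = A @ p
        curv = p.dot(Ap)
        if curv <= 0:
            return x, p             # p is a negative-curvature direction
        alpha = r.dot(r) / curv
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + (r_new.dot(r_new) / r.dot(r)) * p
        r = r_new
    return x, None
```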

16.
In this paper, a subspace limited memory BFGS algorithm for solving large-scale bound constrained optimization problems is developed. It is a modification of the subspace limited memory quasi-Newton method proposed by Ni and Yuan [Q. Ni, Y.X. Yuan, A subspace limited memory quasi-Newton algorithm for large-scale nonlinear bound constrained optimization, Math. Comput. 66 (1997) 1509–1520]. An important property of the proposed method is that more limited memory BFGS updates are used. Under appropriate conditions, the global convergence of the method is established. Implementations of the method on the CUTE test problems are presented and indicate that the modifications benefit the performance of the algorithm.

17.
Optimization, 2012, 61(7): 857–878
In this article, by means of an active set and limited memory strategy, we propose a trust-region method for box-constrained nonsmooth equations. Global and superlinear convergence are established under suitable conditions.

18.
Global convergence is proved for a partitioned BFGS algorithm applied to a partially separable problem with a convex decomposition. This case covers a known practical optimization method for large-dimensional unconstrained problems. Inexact solution of the linear system defining the search direction and variants of the steplength rule are also shown to be acceptable without affecting the global convergence properties.

19.
Starting from the paper by Nash and Sofer (1990), we propose a heuristic adaptive truncation criterion for the inner iterations within linesearch-based truncated Newton methods. Our aim is to avoid "over-solving" the Newton equation, based on a comparison between the predicted reduction of the objective function and the actual reduction obtained. Numerical experience on unconstrained optimization problems highlights the satisfactory effectiveness and robustness of the proposed adaptive criterion when a residual-based truncation criterion is selected.
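A schematic adaptation rule in the spirit of the criterion described above, not the authors' exact formula: the inner-iteration tolerance is tightened when actual and predicted reductions agree (the quadratic model is trustworthy), and relaxed to avoid over-solving when they do not.

```python
def adapt_inner_tolerance(ared, pred, eta, eta_min=1e-4, eta_max=0.5):
    """Adjust the truncation tolerance for the inner CG iterations by
    comparing the actual reduction `ared` with the model-predicted
    reduction `pred` (illustrative thresholds)."""
    ratio = ared / max(pred, 1e-16)
    if 0.75 <= ratio <= 1.25:
        # Quadratic model predicts well: tighter inner solves pay off.
        return max(0.5 * eta, eta_min)
    # Model unreliable (typically far from a minimizer): loosen the
    # tolerance to avoid "over-solving" the Newton equation.
    return min(2.0 * eta, eta_max)
```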

20.
We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to implement limited memory methods efficiently for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations. These authors were supported by the Air Force Office of Scientific Research under Grant AFOSR-90-0109, the Army Research Office under Grant DAAL03-91-0151, and the National Science Foundation under Grants CCR-8920519 and CCR-9101795. This author was supported by the U.S. Department of Energy under Grant DE-FG02-87ER25047-A001 and by National Science Foundation Grants CCR-9101359 and ASC-9213149.
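As an illustration, the following NumPy sketch uses the compact representation of the limited memory BFGS matrix with B_0 = delta*I to form a matrix-vector product without ever building B_k explicitly; the function and variable names are illustrative.

```python
import numpy as np

def compact_bfgs_matvec(v, S, Y, delta):
    """Compute B_k @ v via the compact representation

        B_k = B_0 - W M^{-1} W^T,   B_0 = delta * I,
        W   = [B_0 S, Y],
        M   = [[S^T B_0 S,  L],
               [L^T,       -D]],

    where S, Y store the correction pairs columnwise, L is the strictly
    lower triangular part of S^T Y, and D is its diagonal."""
    SY = S.T @ Y
    L = np.tril(SY, k=-1)
    D = np.diag(np.diag(SY))
    W = np.hstack([delta * S, Y])
    M = np.block([[delta * (S.T @ S), L],
                  [L.T, -D]])
    return delta * v - W @ np.linalg.solve(M, W.T @ v)
```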
