20 similar articles found (search time: 15 ms)
1.
Sun Linping 《Computational Optimization and Applications》2004,27(1):23-29
A new limited memory quasi-Newton algorithm is developed, in which the self-scaling symmetric rank one update with Davidon's optimal condition is applied. Preliminary numerical tests show that the new algorithm is very efficient for large-scale problems as well as general nonlinear optimization.
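As a concrete illustration (not the authors' algorithm), the sketch below runs a plain SR1 update on the inverse Hessian, with a simple rescaling heuristic standing in for Davidon's optimal self-scaling condition and no limited-memory compression; all names and tolerances are invented for the example.

```python
import numpy as np

def sr1_minimize(f, grad, x0, iters=200, tol=1e-8):
    """Sketch of a self-scaling SR1 quasi-Newton loop on the inverse
    Hessian H; the pre-update rescaling is only a heuristic stand-in
    for Davidon's optimal condition."""
    x, g = x0.copy(), grad(x0)
    H = np.eye(len(x0))
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        if g @ d >= 0:
            d = -g                  # safeguard: H lost definiteness
        t = 1.0                     # naive Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        s = t * d
        g_new = grad(x + s)
        y = g_new - g
        if s @ y > 0 and y @ H @ y > 0:
            H *= (s @ y) / (y @ H @ y)   # self-scaling heuristic
        r = s - H @ y                     # SR1 correction vector
        if abs(r @ y) > 1e-8 * np.linalg.norm(r) * np.linalg.norm(y):
            H += np.outer(r, r) / (r @ y)  # rank-one update
        x, g = x + s, g_new
    return x
```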
2.
G. Fasano 《Journal of Optimization Theory and Applications》2005,125(3):543-558
In this paper, we describe an application of the planar conjugate gradient method introduced in Part 1 (Ref. 1), aimed at solving indefinite nonsingular sets of linear equations. We prove that it can be used fruitfully within optimization frameworks; in particular, we present a globally convergent truncated Newton scheme which uses the above planar method for solving the Newton equation. Finally, our approach is tested on several problems from the CUTE collection (Ref. 2). This work was supported by MIUR, FIRB Research Program on Large-Scale Nonlinear Optimization, Rome, Italy. The author acknowledges Luigi Grippo and Stefano Lucidi, who contributed considerably to the elaboration of this paper. The exchange of experiences with Massimo Roma was a constant help in the investigation. The author expresses his gratitude to the Associate Editor and the referees for suggestions and corrections.
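For readers unfamiliar with the truncated Newton framework the paper builds on, here is a minimal Newton-CG sketch: the inner CG solve is truncated by a residual test and bails out on negative curvature, which is precisely the situation the planar CG method is designed to handle instead. All names and constants are illustrative.

```python
import numpy as np

def truncated_newton(f, grad, hessvec, x0, outer=50, tol=1e-6):
    """Line-search truncated Newton: the Newton equation H d = -g is
    solved inexactly by CG; plain CG simply exits on negative
    curvature."""
    x = x0.copy()
    for _ in range(outer):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        d = np.zeros_like(x)
        r, p = -g.copy(), -g.copy()
        eta = min(0.5, np.sqrt(gnorm)) * gnorm   # forcing sequence
        for _ in range(2 * len(x)):
            Hp = hessvec(x, p)
            curv = p @ Hp
            if curv <= 0:                        # negative curvature
                if np.all(d == 0):
                    d = -g                       # steepest-descent fallback
                break
            alpha = (r @ r) / curv
            d += alpha * p
            r_new = r - alpha * Hp
            if np.linalg.norm(r_new) < eta:      # truncation test
                break
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        t = 1.0                                  # Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x
```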
3.
In 1952, Hestenes and Stiefel first established, along with the conjugate-gradient algorithm, fundamental relations which exist between conjugate direction methods for function minimization on the one hand and Gram-Schmidt processes relative to a given positive-definite, symmetric matrix on the other. This paper is based on a recent reformulation of these relations by Hestenes which yield the conjugate Gram-Schmidt (CGS) algorithm. CGS includes a variety of function minimization routines, one of which is the conjugate-gradient routine. This paper gives the basic equations of CGS, including the form applicable to minimizing general nonquadratic functions ofn variables. Results of numerical experiments of one form of CGS on five standard test functions are presented. These results show that this version of CGS is very effective.The preparation of this paper was sponsored in part by the US Army Research Office, Grant No. DH-ARO-D-31-124-71-G18.The authors wish to thank Mr. Paul Speckman for the many computer runs made using these algorithms. They served as a good check on the results which they had obtained earlier. Special thanks must go to Professor M. R. Hestenes whose constant encouragement and assistance made this paper possible. 相似文献
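For a quadratic objective, the CGS family reduces to the classical Hestenes-Stiefel conjugate gradient iteration; a minimal sketch (illustrative, not the paper's general nonquadratic form): each search direction is A-conjugate to all previous ones, which is the Gram-Schmidt property with respect to the inner product <u, v> = u.T A v.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Classical Hestenes-Stiefel CG for A x = b, A symmetric
    positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(len(b)):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

# quick check on a random SPD system
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b)))
```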
4.
Partial separability and partitioned quasi-Newton updating have recently been introduced and experimented with success in large-scale nonlinear optimization, large nonlinear least-squares calculations, and large systems of nonlinear equations. It is the purpose of this paper to apply this idea to large-dimensional nonlinear network optimization problems. The method proposed thus uses these techniques for handling the cost function, while more classical tools, such as variable partitioning and specialized data structures, are used in handling the network constraints. The performance of a code implementing this method, as well as more classical techniques, is analyzed on several numerical examples.
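A minimal sketch of the partitioned-updating idea, under the assumption of a partially separable objective f(x) = sum_i f_i(x[idx_i]); the helper names and data layout are hypothetical, not taken from the paper.

```python
import numpy as np

def assemble_partitioned_hessian(elements, n):
    """Assemble a full Hessian approximation from small per-element
    quasi-Newton blocks.  `elements` is a list of (idx_i, B_i) pairs,
    where B_i approximates the Hessian of element function f_i over
    its few variables idx_i."""
    B = np.zeros((n, n))
    for idx, Bi in elements:
        B[np.ix_(idx, idx)] += Bi
    return B

def bfgs_update(B, s, y):
    """Dense BFGS update, applied element-wise: each small B_i is
    updated with the restriction of the step s and of the element
    gradient difference y to idx_i."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
```

The point of the scheme is that storage and update work grow with the element sizes rather than with n squared, since only the small blocks B_i are ever stored or updated.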
5.
A modified Newton method for minimization (cited by 6: 0 self-citations, 6 others)
Some promising ideas for minimizing a nonlinear function, whose first and second derivatives are given, by a modified Newton method were introduced by Fiacco and McCormick (Ref. 1). Unfortunately, in developing a method around these ideas, Fiacco and McCormick used a potentially unstable, or even impossible, matrix factorization. Using some recently developed techniques for factorizing an indefinite symmetric matrix, we are able to produce a method which is similar to Fiacco and McCormick's original method but avoids the difficulties of the original method. Both authors gratefully acknowledge the award of a research fellowship from the British Science Research Council.
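The flavor of such a method can be sketched as follows; for simplicity this uses an explicit eigenvalue modification, whereas the paper obtains a comparable effect far more cheaply through a stable factorization of the indefinite symmetric Hessian.

```python
import numpy as np

def modified_newton_step(g, H, delta=1e-6):
    """Return a descent direction from a possibly indefinite Hessian H
    by replacing each eigenvalue with max(|lambda_i|, delta), so the
    modified matrix is safely positive definite.  A spectral stand-in
    for the stable indefinite factorization used in the paper."""
    lam, Q = np.linalg.eigh(H)
    lam_mod = np.maximum(np.abs(lam), delta)  # flip/shift bad curvature
    return -(Q @ ((Q.T @ g) / lam_mod))
```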
6.
The Adjoint Newton Algorithm for Large-Scale Unconstrained Optimization in Meteorology Applications (cited by 1: 0 self-citations, 1 other)
A new algorithm is presented for carrying out the large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. It is based on first- and second-order adjoint techniques, which allow the Newton line-search direction to be obtained by integrating a tangent linear model backwards in time (starting from a final condition with negative time steps). The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in quasi-Newton-type algorithms is thus completely eliminated, while the storage problem related to the Hessian no longer exists, since the explicit Hessian is not required in this algorithm. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow-water equations model with both model-generated and First Global Geophysical Experiment data. We compare the performance of the adjoint Newton algorithm with that of the truncated Newton, adjoint truncated Newton, and LBFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and could find the minima within three or four iterations for the problems tested here. In the case of the two-dimensional shallow-water equations model, the adjoint Newton algorithm improves upon the efficiency of the truncated Newton and LBFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion. The Newton, truncated Newton, and LBFGS methods are general-purpose unconstrained minimization methods. The adjoint Newton algorithm is only useful for optimal control problems where the model equations serve as strong constraints and the corresponding tangent linear model may be integrated backwards in time. When the backwards integration of the tangent linear model is ill-posed in the sense of Hadamard, the adjoint Newton algorithm may not work; thus, it must be used with some caution. A possible solution to avoid this current weakness of the adjoint Newton algorithm is proposed.
7.
Muhammed I. Syam 《Mathematics of Computation》2005,74(250):805-818
In this paper, we give a new method for solving large-scale problems. The basic idea is to embed the conjugate gradient method as a corrector within a continuation method; the Euler method is used as a predictor. Adaptive steplength control is used while tracing the solution curve. We present experimental examples to demonstrate the efficiency of the method.
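One plausible rendering of the predictor-corrector structure described above (a sketch under assumed notation, with a direct solve standing in for the conjugate gradient corrector of the paper):

```python
import numpy as np

def continuation_minimize(grad, hess, x0, steps=20):
    """Trace the curve  G(x, lam) = grad(x) - (1 - lam) * grad(x0) = 0
    from lam = 0 (where x0 is a root) to lam = 1 (where grad(x) = 0),
    with an Euler predictor along the curve tangent and a few Newton
    corrector steps at each lam."""
    x = x0.copy()
    g0 = grad(x0)
    lam, dlam = 0.0, 1.0 / steps
    for _ in range(steps):
        # Euler predictor: differentiating G = 0 gives H dx/dlam = -g0
        dx = np.linalg.solve(hess(x), -g0)   # stand-in for a CG solve
        x = x + dlam * dx
        lam += dlam
        for _ in range(3):                   # corrector on G(x, lam) = 0
            G = grad(x) - (1.0 - lam) * g0
            x = x + np.linalg.solve(hess(x), -G)
    return x
```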
8.
Trust-region methods are globally convergent techniques widely used, for example, in connection with Newton's method for unconstrained optimization. One of the most commonly used iterative approaches for solving trust-region subproblems is the Steihaug–Toint method, which is based on conjugate gradient iterations and seeks a solution on Krylov subspaces. This paper contains new theoretical results concerning properties of Lagrange multipliers obtained on these subspaces. AMS subject classification (2000): 65F20
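A standard formulation of the Steihaug–Toint iteration, for reference (an illustrative sketch, not the paper's contribution, which concerns the Lagrange multipliers arising on these Krylov subspaces):

```python
import numpy as np

def steihaug_toint(g, hessvec, delta, tol=1e-8, maxiter=200):
    """Truncated CG for  min g.T p + 0.5 p.T H p  s.t. ||p|| <= delta.
    CG stops when the residual is small, negative curvature appears,
    or the iterates leave the trust region; in the last two cases the
    step is moved to the boundary."""
    p = np.zeros_like(g)
    r = -g.copy()                  # residual of H p = -g at p = 0
    d = r.copy()
    for _ in range(maxiter):
        Hd = hessvec(d)
        curv = d @ Hd
        if curv <= 0:
            return _to_boundary(p, d, delta)
        alpha = (r @ r) / curv
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            return _to_boundary(p, d, delta)
        r_next = r - alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next
        d = r_next + ((r_next @ r_next) / (r @ r)) * d
        p, r = p_next, r_next
    return p

def _to_boundary(p, d, delta):
    """Largest tau >= 0 with ||p + tau d|| = delta."""
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p + tau * d
```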
9.
This work deals with the solution of ill-conditioned unconstrained minimization problems by means of pattern search methods. To this end, the usual requirement of monotonic reduction of the objective function is relaxed and nonmonotone pattern search methods are proposed, which maintain the convergence properties of their monotone counterparts. Numerical experimentation on several well-known ill-conditioned functions is reported. The results highlight a class of pattern search methods which benefit very much by the introduction of nonmonotone strategies.
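A minimal sketch of the nonmonotone idea: compass search in which a trial point is accepted whenever it improves on the worst of the last few accepted values. The acceptance rule here is deliberately simplistic; the paper's methods include the safeguards needed to preserve the convergence theory.

```python
import numpy as np

def nonmonotone_pattern_search(f, x0, step=1.0, memory=5,
                               tol=1e-8, maxiter=10000):
    """Compass search with a nonmonotone acceptance test: a trial is
    accepted if it beats the *maximum* of the last `memory` accepted
    values, not necessarily the current one."""
    x = x0.copy()
    hist = [f(x)]
    for _ in range(maxiter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for sgn in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sgn * step
                ft = f(trial)
                if ft < max(hist):          # nonmonotone test
                    x, improved = trial, True
                    hist = (hist + [ft])[-memory:]
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # contract the pattern
    return x
```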
10.
D.G. Hull 《Journal of Optimization Theory and Applications》2002,113(1):1-4
Several authors have created one-parameter families of variable metric methods for function minimization. These families contain the methods known as Davidon–Fletcher–Powell, Broyden–Fletcher–Goldfarb–Shanno, and symmetric rank one. It is shown here that the same one-parameter families of methods are obtained from the Huang update by requiring the update to be symmetric.
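The one-parameter family in question can be written down compactly for the inverse Hessian; a sketch, with phi = 0 giving DFP, phi = 1 giving BFGS, and a particular phi recovering symmetric rank one:

```python
import numpy as np

def broyden_family_update(H, s, y, phi):
    """One-parameter family of inverse-Hessian updates parametrized by
    phi, spanning DFP (phi = 0), BFGS (phi = 1), and SR1
    (phi = s.y / (s.y - y.H.y))."""
    Hy = H @ y
    sy = s @ y
    yHy = y @ Hy
    v = s / sy - Hy / yHy
    return (H - np.outer(Hy, Hy) / yHy
              + np.outer(s, s) / sy
              + phi * yHy * np.outer(v, v))
```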
11.
We consider a class of stochastic linear complementarity problems (SLCPs) with finitely many realizations. In this paper, we reformulate this class of SLCPs as a constrained minimization (CM) problem and present a feasible semismooth Newton method to solve this CM problem. Preliminary numerical results show that the CM reformulation may yield solutions of high reliability for SLCPs.
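To illustrate the semismooth Newton machinery in the simplest setting, here is a sketch for a deterministic LCP via the Fischer–Burmeister reformulation; the paper's method operates on a constrained-minimization recast of the stochastic problem, so this shows only the core iteration.

```python
import numpy as np

def semismooth_newton_lcp(M, q, x0, iters=50, tol=1e-10):
    """Semismooth Newton for the LCP  x >= 0, Mx + q >= 0,
    x.T (Mx + q) = 0, written as Phi(x) = 0 with the component-wise
    Fischer-Burmeister function
        Phi_i = sqrt(x_i^2 + w_i^2) - x_i - w_i,   w = Mx + q."""
    x = x0.copy()
    for _ in range(iters):
        w = M @ x + q
        r = np.sqrt(x**2 + w**2)
        Phi = r - x - w
        if np.linalg.norm(Phi) < tol:
            break
        r = np.maximum(r, 1e-12)          # guard the kink at (0, 0)
        Da = np.diag(x / r - 1.0)         # dPhi/dx (direct part)
        Db = np.diag(w / r - 1.0)         # dPhi/dw
        J = Da + Db @ M                   # element of the generalized Jacobian
        x = x + np.linalg.solve(J, -Phi)
    return x
```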
12.
In this paper, we present the compact representation for matrices belonging to the Broyden class of quasi-Newton updates, where each update may be either rank one or rank two. This work extends previous results available only for the restricted Broyden class of rank-two updates. It is not assumed that the same Broyden update is used in each iteration; rather, different members of the Broyden class may be used in each iteration. Numerical experiments suggest that a practical implementation of the compact representation is able to accurately represent matrices belonging to the Broyden class of updates. Furthermore, we demonstrate how to compute the compact representation for the inverse of these matrices, and we give a practical algorithm for solving linear systems with members of the Broyden class of updates. We demonstrate through numerical experiments that the proposed linear solver is able to efficiently solve linear systems with members of the Broyden class of matrices with high accuracy. As an immediate consequence of this work, it is now possible to efficiently compute the eigenvalues of any limited-memory member of the Broyden class of matrices, allowing for the computation of condition numbers and the ability to perform sensitivity analysis.
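For the rank-two (BFGS) member of the class, the compact representation takes the Byrd–Nocedal–Schnabel form B = B0 - Psi M^{-1} Psi^T; a sketch with a sanity check against explicit sequential updates (the paper's contribution extends this form to the whole Broyden class, possibly mixing members across iterations):

```python
import numpy as np

def compact_bfgs(B0, S, Y):
    """Compact representation of k BFGS updates applied to B0, where
    the columns of S are the steps s_j and the columns of Y the
    gradient differences y_j (all with s_j.T y_j > 0)."""
    StY = S.T @ Y
    D = np.diag(np.diag(StY))
    L = np.tril(StY, k=-1)                # strictly lower triangle
    B0S = B0 @ S
    Psi = np.hstack([B0S, Y])
    Mmat = np.block([[S.T @ B0S, L],
                     [L.T,       -D]])
    return B0 - Psi @ np.linalg.solve(Mmat, Psi.T)

# sanity check against k explicit BFGS updates on quadratic-model pairs
rng = np.random.default_rng(1)
n, k = 6, 3
A = rng.standard_normal((n, n))
Hess = A @ A.T + n * np.eye(n)
S = rng.standard_normal((n, k))
Y = Hess @ S                              # guarantees s.T y > 0
B = np.eye(n)
for j in range(k):
    s, y = S[:, j], Y[:, j]
    Bs = B @ s
    B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
print(np.allclose(B, compact_bfgs(np.eye(n), S, Y)))
```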
13.
A characterization of the minimizer of a function (cited by 1: 0 self-citations, 1 other)
This paper is intended to give a characterization of the minimum point of a function in terms of the gradient of the function at some other point, using some concepts from differential geometry. The function is assumed to have continuous partial derivatives up to and including order four. It is also assumed to have a positive-definite Hessian matrix on R^n and a unique minimum point.
14.
He Lianjian 《Numerical Mathematics: A Journal of Chinese Universities》1998,20(2):112-120
1. Introduction. Consider the unconstrained optimization problem min f(x), x ∈ R^n, (1.1) where f is a nonlinear differentiable function. For small- and medium-scale unconstrained optimization problems, quasi-Newton methods (such as the BFGS method) are very effective. For large-scale problems, however, i.e., when n is large, the storage required by the algorithm becomes a serious concern, and the linear-algebra cost of each iteration also affects the algorithm's efficiency. Limited-memory quasi-Newton methods can be viewed as a generalization of conjugate gradient methods. Methods of this class were first proposed by Perry and Shanno, and have since been studied by many authors, including Gill and Murray, Buckley, Buckley and LeNir, and Nocedal. The limited-memory BFGS method, proposed by Nocedal, is currently one of the most effective limited-memory quasi-Newton methods; its basic motivation is to reduce storage. Since the BFGS update formula can be written as …
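The two-loop recursion that underlies Nocedal's limited-memory BFGS method can be sketched as follows (illustrative; it computes the search direction from only the m most recent pairs, never forming a matrix):

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion for the L-BFGS direction -H_k g using only
    the stored (s, y) pairs, oldest first in the lists."""
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list),
                         reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                            # H0 = gamma * I scaling
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for s, y, rho, a in zip(s_list, y_list, rhos, reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q
```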
15.
A reformulation of the nonlinear complementarity problem (NCP) as an unconstrained minimization problem is considered. It is shown that any stationary point of the unconstrained objective function is a solution of NCP if the mapping F involved in NCP is continuously differentiable and monotone, and that the level sets are bounded if F is continuous and strongly monotone. A descent algorithm is described which uses only function values of F. Some numerical results are given.
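As one widely used instance of such a reformulation (not necessarily the merit function used in the paper), the Fischer–Burmeister merit function is zero exactly at NCP solutions:

```python
import numpy as np

def fb_merit(x, F):
    """Unconstrained merit function for the NCP
        x >= 0,  F(x) >= 0,  x.T F(x) = 0,
    built from the Fischer-Burmeister function
        phi(a, b) = sqrt(a^2 + b^2) - a - b,
    which vanishes iff a >= 0, b >= 0, ab = 0."""
    w = F(x)
    phi = np.sqrt(x**2 + w**2) - x - w
    return 0.5 * (phi @ phi)
```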
16.
Andrea Caliciotti Giovanni Fasano Stephen G. Nash Massimo Roma 《Operations Research Letters》2018,46(1):7-12
Starting from the paper by Nash and Sofer (1990), we propose a heuristic adaptive truncation criterion for the inner iterations within linesearch-based truncated Newton methods. Our aim is to avoid "over-solving" of the Newton equation, based on a comparison between the predicted reduction of the objective function and the actual reduction obtained. Numerical experience on unconstrained optimization problems highlights the satisfactory effectiveness and robustness of the proposed adaptive criterion when a residual-based truncation criterion is selected.
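A hypothetical rendering of such a criterion (the authors' exact rule is not reproduced here): compare the actual reduction of f with the reduction predicted by the local quadratic model, and adapt the inner CG residual tolerance accordingly.

```python
def adapt_cg_tolerance(eta, pred_red, actual_red,
                       eta_min=1e-4, eta_max=0.5):
    """Hypothetical adaptive-truncation rule in the spirit of the
    abstract: when the actual reduction of f tracks the reduction
    predicted by the quadratic model, extra inner-CG accuracy is being
    wasted ('over-solving'), so the tolerance is loosened; when the
    two disagree, the Newton equation is solved more tightly."""
    ratio = actual_red / pred_red if pred_red > 0 else 0.0
    if 0.75 <= ratio <= 1.25:
        return min(2.0 * eta, eta_max)   # agreement: truncate earlier
    return max(0.5 * eta, eta_min)       # disagreement: solve harder
```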
17.
The parallel quasi-Newton method based on updating conjugate subspaces proposed in [4] can be very effective for large-scale sparse minimization, because conjugate subspaces with respect to sparse Hessians are usually easy to obtain. We demonstrate this point in this paper for the partially separable case, with matrices updated by a quasi-Newton scheme of Griewank and Toint [2,3]. The algorithm presented is suitable for parallel computation and economical in computer storage. Some testing results of the algorithm on an Alliant FX/8 minisupercomputer are reported. The material is based on work supported in part by the National Science Foundation under Grant No. DMS 8602419 and by the Center for Supercomputing Research and Development at the University of Illinois.
18.
Silvia Bonettini Valeria Ruggiero Federica Tinti 《Numerical Linear Algebra with Applications》2007,14(10):807-831
This work is concerned with the convergence properties and the numerical analysis of the preconditioned conjugate gradient (PCG) method applied to the solution of indefinite linear systems arising in nonlinear optimization. Our approach is based on the choice of quasidefinite preconditioners and of a suitable factorization routine. Some theoretical and numerical results about these preconditioners are obtained. Furthermore, we show the behaviour of the PCG method for different formulations of the indefinite system and we compare the effectiveness of the proposed variants. Copyright © 2007 John Wiley & Sons, Ltd.
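For reference, the textbook PCG iteration with a generic preconditioner is sketched below; the quasidefinite preconditioners and the indefinite-system variants studied in the paper are not reproduced here, and a plain SPD setting is assumed.

```python
import numpy as np

def pcg(A, b, precond_solve, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient for A x = b with A symmetric
    positive definite; `precond_solve(r)` applies the inverse of the
    preconditioner to a residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Jacobi preconditioning as a minimal stand-in:
#   pcg(A, b, lambda r: r / np.diag(A))
```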
19.
Projected gradient methods for linearly constrained problems (cited by 23: 0 self-citations, 23 others)
The aim of this paper is to study the convergence properties of the gradient projection method and to apply these results to algorithms for linearly constrained problems. The main convergence result is obtained by defining a projected gradient and proving that the gradient projection method forces the sequence of projected gradients to zero. A consequence of this result is that if the gradient projection method converges to a nondegenerate point of a linearly constrained problem, then the active and binding constraints are identified in a finite number of iterations. As an application of our theory, we develop quadratic programming algorithms that iteratively explore a subspace defined by the active constraints. These algorithms are able to drop and add many constraints from the active set, and can either compute an accurate minimizer by a direct method, or an approximate minimizer by an iterative method of the conjugate gradient type. Thus, these algorithms are attractive for large scale problems. We show that it is possible to develop a finite terminating quadratic programming algorithm without nondegeneracy assumptions.
Work supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38.
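A minimal gradient-projection sketch for the box-constrained case, with a fixed step size standing in for the line-search analysis of the paper:

```python
import numpy as np

def projected_gradient(grad, x0, lower, upper, step=1e-2,
                       iters=1000, tol=1e-8):
    """Gradient projection for  min f(x)  s.t.  lower <= x <= upper.
    The quantity (x - P(x - step*g))/step is the 'projected gradient'
    that the paper drives to zero; it is used here as the stopping
    measure."""
    x = np.clip(x0, lower, upper)
    for _ in range(iters):
        g = grad(x)
        x_new = np.clip(x - step * g, lower, upper)  # projection P
        if np.linalg.norm(x_new - x) < tol * step:
            break
        x = x_new
    return x
```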
20.
F. Biegler-König 《Journal of Optimization Theory and Applications》1985,47(4):393-399
The well-known quadratically convergent methods of the Huang type (Refs. 1 and 2) to maximize or minimize a function f: R^n → R are generalized to find saddlepoints of f. Furthermore, a procedure is derived which homes in on saddlepoints with prescribed inertia, i.e., with a given number of positive and negative eigenvalues in the Hessian matrix of f. Examples are presented to show that saddlepoints with different inertia can be calculated from the same starting vector.
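The notion of prescribed inertia can be made concrete in a few lines (illustrative only; this is not the Huang-type homing procedure itself, just the classification of a stationary point by its Hessian):

```python
import numpy as np

def inertia(H, eps=1e-10):
    """Inertia (n_plus, n_minus, n_zero) of a symmetric matrix: the
    counts of positive, negative, and numerically zero eigenvalues.
    A saddlepoint with prescribed inertia is a stationary point whose
    Hessian has given values of n_plus and n_minus."""
    lam = np.linalg.eigvalsh(H)
    return (int(np.sum(lam > eps)), int(np.sum(lam < -eps)),
            int(np.sum(np.abs(lam) <= eps)))

# e.g. f(x, y) = x**2 - y**2 has Hessian diag(2, -2) at its saddle:
print(inertia(np.diag([2.0, -2.0])))   # (1, 1, 0)
```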