20 similar documents found.
1.
Discrete global descent method for discrete global optimization and nonlinear integer programming (cited 2 times: 0 self-citations, 2 by others)
A novel method, entitled the discrete global descent method, is developed in this paper to solve discrete global optimization problems and nonlinear integer programming problems. This method moves from one discrete minimizer of the objective function f to another better one at each iteration with the help of an auxiliary function, entitled the discrete global descent function. The discrete global descent function guarantees that its discrete minimizers coincide with the better discrete minimizers of f under some standard assumptions. This property also ensures that a better discrete minimizer of f can be found by some classical local search methods. Numerical experiments on several test problems with up to 100 integer variables and up to 1.38 × 10^104 feasible points have demonstrated the applicability and efficiency of the proposed method.
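The paper's discrete global descent function is not reproduced here, but the local-search component it relies on can be sketched as a greedy neighborhood descent on the integer lattice (a minimal illustration; the neighborhood and stopping rule are assumptions, not the paper's exact definitions):

```python
def discrete_local_descent(f, x0, lb, ub):
    """Greedy neighborhood descent on the integer lattice: repeatedly move
    to the best neighboring point (one coordinate changed by +/- 1) until
    no neighbor improves f.  Sketch of the local-search step only."""
    x = list(x0)
    improved = True
    while improved:
        improved = False
        best = f(tuple(x))
        for i in range(len(x)):
            for step in (-1, 1):
                y = list(x)
                y[i] += step
                if lb[i] <= y[i] <= ub[i] and f(tuple(y)) < best:
                    x, best, improved = y, f(tuple(y)), True
    return tuple(x)
```

The global method would call a routine like this between minimizations of the auxiliary function, which steers the search away from the current discrete minimizer.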
2.
Generalized descent for global optimization (cited 6 times: 0 self-citations, 6 by others)
A. O. Griewank, Journal of Optimization Theory and Applications, 1981, 34(1): 11-39
This paper introduces a new method for the global unconstrained minimization of a differentiable objective function. The method is based on search trajectories, which are defined by a differential equation and exhibit certain similarities to the trajectories of steepest descent. The trajectories depend explicitly on the value of the objective function and aim at attaining a given target level, while rejecting all larger local minima. Convergence to the global minimum can be proven for a certain class of functions and an appropriate setting of two parameters. The author wishes to thank Professor R. P. Brent for making helpful suggestions and acknowledges the financial support of an Australian National University Postgraduate Scholarship.
3.
We consider the problem of minimizing a smooth function over a feasible set defined as the Cartesian product of convex compact sets. We assume that the dimension of each factor set is huge, so we are interested in studying inexact block coordinate descent methods (possibly combined with column generation strategies). We define a general decomposition framework where different line search based methods can be embedded, and we state global convergence results. Specific decomposition methods based on gradient projection and Frank–Wolfe algorithms are derived from the proposed framework. The numerical results of computational experiments performed on network assignment problems are reported.
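One per-block solver mentioned in the abstract is the Frank–Wolfe method. A minimal sketch for a single block given by the probability simplex, where the linearized subproblem has a closed-form vertex solution (the step rule and iteration count are illustrative assumptions, not the paper's choices):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=500):
    """Frank-Wolfe over the probability simplex.  The linear subproblem
    min_s grad(x)^T s over the simplex is solved by putting all mass on
    the coordinate with the smallest partial derivative."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # vertex minimizing the linearization
        gamma = 2.0 / (k + 2.0)          # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x
```

In the decomposition framework, an update like this would be applied block by block, possibly inexactly, with a line search replacing the fixed step schedule.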
4.
S. K. Zavriev, Journal of Optimization Theory and Applications, 1996, 91(1): 185-214
The steepest-descent technique dealing with the perturbed values of the objective function and its gradients and with nonexact line searches is considered. Attention is given to the case where the perturbations do not decrease on the algorithm trajectories; the aim is to investigate how perturbations at every iteration of the algorithm perturb its original attractor set. Based on the Liapunov direct method for attraction analysis of discrete-time processes, a sharp estimation of the attractor set generated by a perturbed steepest-descent technique with respect to the perturbation magnitudes is obtained. Some global optimization properties of finite-difference analogues of the gradient method are discovered. These properties are not inherent in methods which use exact gradients. The author is grateful to the referees for many useful suggestions.
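A finite-difference analogue of the gradient method of the kind studied here can be sketched as follows; the fixed differencing width h acts as a non-vanishing perturbation of the exact gradient (all constants are illustrative assumptions):

```python
def fd_gradient(f, x, h):
    """Central-difference gradient estimate; a fixed, non-vanishing h
    plays the role of a persistent perturbation of the exact gradient."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def fd_steepest_descent(f, x0, step=0.1, h=0.5, iters=200):
    """Steepest descent driven by finite-difference gradients.  A large
    fixed h effectively smooths narrow local minima, which is the kind of
    global behaviour the abstract refers to."""
    x = list(x0)
    for _ in range(iters):
        g = fd_gradient(f, x, h)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```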
5.
This paper proves convergence of a sample-path based stochastic gradient-descent algorithm for optimizing expected-value performance measures in discrete event systems. The algorithm uses increasing precision at successive iterations, and it moves against the direction of a generalized gradient of the computed sample performance function. Two convergence results are established: one, for the case where the expected-value function is continuously differentiable; and the other, when that function is nondifferentiable but the sample performance functions are convex. The proofs are based on a version of the uniform law of large numbers which is provable for many discrete event systems where infinitesimal perturbation analysis is known to be strongly consistent.
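The increasing-precision idea can be sketched as gradient descent in which iteration k averages a growing number of sampled gradients; the sample-growth rule and step schedule below are illustrative assumptions, not the paper's:

```python
def sample_path_descent(grad_sample, x0, iters=100, a=0.5):
    """Sample-path gradient descent: iteration k averages k + 1 sampled
    (generalized) gradients, so the gradient estimate becomes more precise
    as the algorithm proceeds, while the step size diminishes."""
    x = x0
    for k in range(iters):
        n = k + 1                                   # growing sample size
        g = sum(grad_sample(x) for _ in range(n)) / n
        x = x - (a / (k + 1) ** 0.5) * g            # diminishing step
    return x
```

Here `grad_sample` is a hypothetical oracle returning one sampled gradient of the performance function at `x`; in a discrete event system it would come from a simulation run with perturbation analysis.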
6.
Hans De Sterck, Numerical Linear Algebra with Applications, 2013, 20(3): 453-471
Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N‐GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, whereas the second employs a predefined small step. A simple global convergence proof is provided for the N‐GMRES optimization algorithm with the first steepest descent preconditioner (with line search), under mild standard conditions on the objective function and the line search processes. Steepest descent preconditioning for N‐GMRES optimization is also motivated by relating it to standard non‐preconditioned GMRES for linear systems in the case of a standard quadratic optimization problem with symmetric positive definite operator. Numerical tests on a variety of model problems show that the N‐GMRES optimization algorithm is able to very significantly accelerate convergence of stand‐alone steepest descent optimization. Moreover, performance of steepest‐descent preconditioned N‐GMRES is shown to be competitive with standard nonlinear conjugate gradient and limited‐memory Broyden–Fletcher–Goldfarb–Shanno methods for the model problems considered. These results serve to theoretically and numerically establish steepest‐descent preconditioned N‐GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared with established techniques. In addition, it is argued that the real potential of the N‐GMRES optimization framework lies in the fact that it can make use of problem‐dependent nonlinear preconditioners that are more powerful than steepest descent (or, equivalently, N‐GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process), and this potential is illustrated with a further application example. Copyright © 2012 John Wiley & Sons, Ltd.
7.
The minimal criterion for the equivalence between local and global optimal solutions in nondifferentiable optimization problem
Manuel Arana-Jiménez, Tadeusz Antczak, Mathematical Methods in the Applied Sciences, 2017, 40(18): 6556-6564
In the paper, a necessary and sufficient criterion is provided such that any local optimal solution is also global in a not necessarily differentiable constrained optimization problem. This criterion is compared with other criteria that appeared earlier in the literature, which are sufficient but not necessary for a local optimal solution to be global. The importance of the established criterion is illustrated by suitable examples of nonconvex optimization problems presented in the paper.
8.
Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems (cited 2 times: 0 self-citations, 2 by others)
Gonglin Yuan, Optimization Letters, 2009, 3(1): 11-21
It is well known that the sufficient descent condition is very important to the global convergence of the nonlinear conjugate gradient method. In this paper, some modified conjugate gradient methods which possess this property are presented. The global convergence of these proposed methods with the weak Wolfe–Powell (WWP) line search rule is established for nonconvex functions under suitable conditions. Numerical results are reported. This work is supported by Guangxi University SF grant X061041 and China NSF grant 10761001.
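The sufficient descent condition states that the search direction d satisfies g^T d <= -c ||g||^2 for some c > 0. A generic way to enforce it in a nonlinear CG method is a restart safeguard, sketched below; this is an illustration, not one of the paper's specific modified methods, and the WWP line search is replaced by simple Armijo backtracking:

```python
import numpy as np

def cg_sufficient_descent(f, grad, x0, c=0.2, tol=1e-8, max_iter=500):
    """Nonlinear conjugate gradient (Polak-Ribiere-Polyak beta) with a
    restart safeguard enforcing g^T d <= -c * ||g||^2 at every iteration."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d):
            t *= 0.5                              # Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new - g) / g.dot(g)    # PRP formula
        d_new = -g_new + beta * d
        if g_new.dot(d_new) > -c * g_new.dot(g_new):
            d_new = -g_new    # restart: -g trivially satisfies the condition
        x, g, d = x_new, g_new, d_new
    return x
```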
9.
In this paper, we propose a new integral global optimization algorithm for finding the solution of continuous minimization problem, and prove the asymptotic convergence of this algorithm. In our modified method we use variable measure integral, importance sampling and main idea of the cross-entropy method to ensure its convergence and efficiency. Numerical results show that the new method is very efficient in some challenging continuous global optimization problems.
10.
Hoang Tuy, Journal of Global Optimization, 2007, 37(2): 321-323
A counter-example is given to several recently published results on duality bound methods for nonconvex global optimization.
11.
12.
13.
14.
For solving large-scale linear least-squares problems by iteration methods, we introduce an effective probability criterion for selecting the working columns from the coefficient matrix and construct a greedy randomized coordinate descent method. It is proved that this method converges to the unique solution of the linear least-squares problem when its coefficient matrix is of full rank, with the number of rows being no less than the number of columns. Numerical results show that the greedy randomized coordinate descent method is more efficient than the randomized coordinate descent method.
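A greedy randomized coordinate descent step of this flavour can be sketched as follows; the candidate-set threshold and sampling weights below mimic the spirit of a greedy probability criterion but are illustrative assumptions, not the paper's exact constants:

```python
import numpy as np

def greedy_randomized_cd(A, b, iters=2000, seed=0):
    """Greedy randomized coordinate descent for min ||Ax - b||_2 with A of
    full column rank.  The working column is drawn at random from the set
    of columns yielding the largest per-coordinate residual reduction."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = b.astype(float).copy()                 # residual b - A x
    col_norms = np.sum(A * A, axis=0)          # squared column norms
    for _ in range(iters):
        s = (A.T @ r) ** 2 / col_norms         # per-column decrease of ||r||^2
        if s.max() < 1e-30:                    # effectively converged
            break
        idx = np.flatnonzero(s >= 0.5 * s.max())   # greedy candidate set
        j = rng.choice(idx, p=s[idx] / s[idx].sum())
        delta = (A[:, j] @ r) / col_norms[j]   # exact coordinate minimizer
        x[j] += delta
        r -= delta * A[:, j]
    return x
```

The plain randomized variant the abstract compares against would sample j with probabilities proportional to the squared column norms instead of restricting to the greedy set.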
15.
Optimization, 2012, 61(3): 255-264
A simple outer approximation concept for global optimization problems is presented that simplifies and generalizes several previous approaches. Unbounded feasible regions and non-affine cuts are admitted. Constraint dropping strategies and a large class of cutting plane methods are derived.
16.
A filled function method for constrained global optimization (cited 1 time: 0 self-citations, 1 by others)
In this paper, a filled function method for solving constrained global optimization problems is proposed. A filled function is constructed for escaping the current local minimizer of a constrained global optimization problem by combining the idea of the filled function in unconstrained global optimization with the idea of the penalty function in constrained optimization. A filled function method for obtaining a global minimizer, or an approximate global minimizer, of the constrained global optimization problem is then presented. Some numerical results demonstrate the efficiency of this global optimization method for solving constrained global optimization problems.
17.
Effect of the subdivision strategy on convergence and efficiency of some global optimization algorithms (cited 2 times: 0 self-citations, 2 by others)
Hoang Tuy, Journal of Global Optimization, 1991, 1(1): 23-36
We investigate subdivision strategies that can improve the convergence and efficiency of some branch and bound algorithms of global optimization. In particular, a general class of so-called weakly exhaustive simplicial subdivision processes is introduced that subsumes all previously known radial exhaustive processes. This result provides the basis for constructing flexible subdivision strategies that can be adapted to take advantage of various problem conditions.
18.
Optimization, 2012, 61(2): 249-263
New algorithms for solving unconstrained optimization problems are presented, based on the idea of combining two types of descent directions: the anti-gradient direction and either the Newton or quasi-Newton direction. The use of the latter directions allows one to improve the convergence rate. Global and superlinear convergence properties of these algorithms are established. Numerical experiments on some unconstrained test problems are reported, and the proposed algorithms are compared with some existing similar methods. This comparison demonstrates the efficiency of the proposed combined methods.
19.
On the convergence of the coordinate descent method for convex differentiable minimization (cited 3 times: 0 self-citations, 3 by others)
The coordinate descent method enjoys a long history in convex differentiable minimization. Surprisingly, very little is known about the convergence of the iterates generated by this method. Convergence typically requires restrictive assumptions such as that the cost function has bounded level sets and is in some sense strictly convex. In a recent work, Luo and Tseng showed that the iterates are convergent for the symmetric monotone linear complementarity problem, for which the cost function is convex quadratic, but not necessarily strictly convex, and does not necessarily have bounded level sets. In this paper, we extend these results to problems for which the cost function is the composition of an affine mapping with a strictly convex function which is twice differentiable in its effective domain. In addition, we show that the convergence is at least linear. As a consequence of this result, we obtain, for the first time, that the dual iterates generated by a number of existing methods for matrix balancing and entropy optimization are linearly convergent. This work was partially supported by the U.S. Army Research Office, Contract No. DAAL03-86-K-0171, by the National Science Foundation, Grant No. ECS-8519058, and by the Science and Engineering Research Board of McMaster University.
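A minimal instance of the convex setting analyzed here is cyclic coordinate descent with exact coordinate minimization on a strictly convex quadratic (the matrix-balancing and entropy applications are beyond this sketch):

```python
import numpy as np

def cyclic_cd_quadratic(Q, c, x0, sweeps=200):
    """Cyclic coordinate descent with exact coordinate minimization for
    f(x) = 0.5 x^T Q x + c^T x, where Q is symmetric positive definite."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            # zero the i-th partial derivative: (Q x)_i + c_i = 0
            x[i] -= (Q[i] @ x + c[i]) / Q[i, i]
    return x
```

For strictly convex quadratics this is exactly Gauss–Seidel on Qx = -c, and the linear convergence rate proved in the paper is visible numerically after a handful of sweeps.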
20.
In this paper, we propose a three-term conjugate gradient method via the symmetric rank-one update. The basic idea is to exploit the good properties of the SR1 update in providing quality Hessian approximations in order to construct a conjugate gradient line search direction that possesses the sufficient descent property without requiring the storage of matrices. Numerical experiments on a set of standard unconstrained optimization problems showed that the proposed method is superior to many well-known conjugate gradient methods in terms of efficiency and robustness.