Similar articles: 20 results found (search time: 24 ms)
1.
Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N‐GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, whereas the second employs a predefined small step. A simple global convergence proof is provided for the N‐GMRES optimization algorithm with the first steepest descent preconditioner (with line search), under mild standard conditions on the objective function and the line search processes. Steepest descent preconditioning for N‐GMRES optimization is also motivated by relating it to standard non‐preconditioned GMRES for linear systems in the case of a standard quadratic optimization problem with symmetric positive definite operator. Numerical tests on a variety of model problems show that the N‐GMRES optimization algorithm is able to very significantly accelerate convergence of stand‐alone steepest descent optimization. Moreover, performance of steepest‐descent preconditioned N‐GMRES is shown to be competitive with standard nonlinear conjugate gradient and limited‐memory Broyden–Fletcher–Goldfarb–Shanno methods for the model problems considered. These results serve to theoretically and numerically establish steepest‐descent preconditioned N‐GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared with established techniques. In addition, it is argued that the real potential of the N‐GMRES optimization framework lies in the fact that it can make use of problem‐dependent nonlinear preconditioners that are more powerful than steepest descent (or, equivalently, N‐GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process), and this potential is illustrated with a further application example. Copyright © 2012 John Wiley & Sons, Ltd.
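The windowed acceleration step can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the preconditioner is the predefined-small-step steepest descent variant, the window size and step length are arbitrary choices, and the acceleration coefficients minimize a linearized gradient norm.

```python
import numpy as np

def sd_precondition(grad, x, step=1e-3):
    """Preconditioner: a predefined small steepest descent step (second variant)."""
    return x - step * grad(x)

def ngmres(f, grad, x0, window=5, maxit=200, tol=1e-8):
    """Sketch of N-GMRES acceleration wrapped around a steepest descent preconditioner."""
    X, G = [x0], [grad(x0)]
    x = x0
    for _ in range(maxit):
        xbar = sd_precondition(grad, x)
        gbar = grad(xbar)
        if np.linalg.norm(gbar) < tol:
            return xbar
        # Accelerated iterate x_acc = xbar + sum_i a_i (xbar - x_i), with the
        # coefficients a chosen to minimize the linearized gradient norm
        # || gbar + sum_i a_i (gbar - g_i) || by least squares.
        D = np.column_stack([xbar - xi for xi in X])
        R = np.column_stack([gbar - gi for gi in G])
        a, *_ = np.linalg.lstsq(R, -gbar, rcond=None)
        xacc = xbar + D @ a
        # Fall back to the preconditioned iterate if acceleration does not help.
        x = xacc if f(xacc) < f(xbar) else xbar
        X.append(x); G.append(grad(x))
        if len(X) > window:
            X.pop(0); G.pop(0)
    return x
```

On a quadratic ½xᵀAx the gradient is affine, so the linearization is exact and the least-squares step performs a GMRES-like residual minimization over the window, which matches the motivation given in the abstract.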

2.
We study the asymptotic behavior of the solutions of evolution equations of the form $\dot{u}(t) + \nabla f(u(t), \varepsilon(t)) = 0$, where $f(\cdot, \varepsilon)$ is a one-parameter family of approximations of a convex function $f$ we wish to minimize. We investigate sufficient conditions on the parametrization $\varepsilon(t)$ ensuring that the integral curves converge as $t \to +\infty$ towards a particular minimizer of $f$. The speed of convergence is also investigated, and a result concerning the continuity of the limit point with respect to the parametrization is established. The results are illustrated on different approximation methods. In particular, we present a detailed application to the logarithmic barrier in linear programming.
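A discrete caricature of such a dynamic, under assumptions not taken from the abstract: here the approximating family is a Tikhonov regularization f(x) + ε·x², the flow is discretized by forward Euler, and ε(t) decays slowly (non-integrably). Although f has a whole interval of minimizers, the trajectory converges to one particular minimizer, the minimum-norm one.

```python
import math

def f(x):
    """Convex with a whole interval of minimizers: argmin f = [-1, 1]."""
    return max(0.0, abs(x) - 1.0) ** 2

def grad_f(x):
    if abs(x) <= 1.0:
        return 0.0
    return 2.0 * (abs(x) - 1.0) * (1.0 if x > 0 else -1.0)

def coupled_flow(x0, h=0.05, steps=20000):
    """Forward-Euler discretization of  x'(t) = -grad_x [ f(x) + eps(t) x^2 ],
    with eps(t) -> 0 slowly (eps_k ~ 1/sqrt(k), a non-integrable decay)."""
    x = x0
    for k in range(1, steps + 1):
        eps = 1.0 / math.sqrt(k)
        x = x - h * (grad_f(x) + 2.0 * eps * x)
    return x
```

Starting from x0 = 3, the trajectory first enters the flat region [-1, 1] and the vanishing regularizer then drives it to the minimum-norm minimizer 0, illustrating the selection of "a particular minimizer" discussed in the abstract.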



3.
In this paper we study an inexact steepest descent method for multicriteria optimization whose step-size comes with Armijo's rule. We show that this method is well-defined. Moreover, by assuming quasi-convexity of the multicriteria function, we prove full convergence of any generated sequence to a Pareto critical point. As an application, we offer a model for the self-regulation problem in psychology, using a recent variational rationality approach.
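For two smooth objectives, the (exact) multicriteria steepest descent direction is the negated minimum-norm point of the convex hull of the two gradients, which has a closed form; Armijo's rule is then enforced on every objective. The sketch below uses this exact construction with arbitrarily chosen constants; the inexactness the paper analyzes is not modeled.

```python
import numpy as np

def mcsd_armijo(fs, grads, x0, beta=1e-4, maxit=1000, tol=1e-8):
    """Steepest descent for two criteria; Armijo backtracking on both objectives."""
    x = x0
    for _ in range(maxit):
        g1, g2 = grads[0](x), grads[1](x)
        # Minimum-norm point of conv{g1, g2}: closed form for two gradients.
        diff = g1 - g2
        denom = diff @ diff
        lam = 0.5 if denom == 0 else float(np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0))
        d = -(lam * g1 + (1.0 - lam) * g2)
        if np.linalg.norm(d) < tol:          # Pareto critical point
            return x
        # Armijo's rule must hold for every objective simultaneously.
        t = 1.0
        while not all(f(x + t * d) <= f(x) + beta * t * (g @ d)
                      for f, g in zip(fs, (g1, g2))):
            t *= 0.5
            if t < 1e-12:
                return x
        x = x + t * d
    return x
```

Since d is the negated minimum-norm element, g_i·d ≤ -‖d‖² for both gradients, so d is a common descent direction and the backtracking loop terminates.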

4.
We consider the problem $\min f(x)$ s.t. $x \in C$, where $C$ is a closed and convex subset of $\mathbb{R}^n$ with nonempty interior, and introduce a family of interior point methods for this problem, which can be seen as approximate versions of generalized proximal point methods. Each step consists of a one-dimensional search along either a curve or a segment in the interior of $C$. The information about the boundary of $C$ is contained in a generalized distance which defines the segment or the curve, and whose gradient diverges at the boundary of $C$. The objective of the search is either $f$ or $f$ plus a regularizing term. When $C = \mathbb{R}^n$, the usual steepest descent method is a particular case of our general scheme, and we manage to extend known convergence results for the steepest descent method to our family: for nonregularized one-dimensional searches, under a level set boundedness assumption on $f$, the sequence is bounded, the difference between consecutive iterates converges to 0, and every cluster point of the sequence satisfies first-order optimality conditions for the problem, i.e. is a solution if $f$ is convex. For the regularized search and convex $f$, no boundedness condition on $f$ is needed and full and global convergence of the sequence to a solution of the problem is established.

5.
In this paper we propose a new algorithm called MCS for the search for solutions to multicriteria combinatorial optimisation problems. To quickly produce a solution that offers a good trade-off between criteria, the MCS algorithm alternates several Branch & Bound searches following diversified search strategies. It is implemented in constraint programming (CP) in a dedicated framework and can be specialised for either complete or partial search.

6.
We consider a vector linear combinatorial optimization problem in which initial coefficients of objective functions are subject to perturbations. For Pareto and lexicographic principles of efficiency we introduce appropriate measures of the quality of a given feasible solution. These measures correspond to so-called stability and accuracy functions defined earlier for scalar optimization problems. We then study properties of such functions and calculate the maximum norms of perturbations for which an efficient solution preserves its efficiency. This work was partially supported by a NATO Science Fellowship grant.

7.
This note concerns a method for analyzing insoluble multicriteria linear programming problems.

8.
A novel method, entitled the discrete global descent method, is developed in this paper to solve discrete global optimization problems and nonlinear integer programming problems. This method moves from one discrete minimizer of the objective function f to another better one at each iteration with the help of an auxiliary function, entitled the discrete global descent function. The discrete global descent function guarantees that its discrete minimizers coincide with the better discrete minimizers of f under some standard assumptions. This property also ensures that a better discrete minimizer of f can be found by some classical local search methods. Numerical experiments on several test problems with up to 100 integer variables and up to 1.38 × 10^104 feasible points have demonstrated the applicability and efficiency of the proposed method.
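The classical local search ingredient of such a method can be sketched as a discrete steepest descent over unit coordinate neighborhoods; the auxiliary discrete global descent function, which is specific to the paper, is what would supply this routine with better starting points and is not reproduced here.

```python
import itertools

def discrete_local_descent(f, x0, bounds):
    """Move to the best strictly improving neighbor (unit steps in +/- each
    coordinate) until none exists; the result is a discrete local minimizer."""
    x = list(x0)
    while True:
        best, best_val = None, f(tuple(x))
        for i, s in itertools.product(range(len(x)), (-1, 1)):
            y = list(x)
            y[i] += s
            lo, hi = bounds[i]
            if lo <= y[i] <= hi and f(tuple(y)) < best_val:
                best, best_val = y, f(tuple(y))
        if best is None:
            return tuple(x)
        x = best
```

On a convex separable objective the discrete local minimizer found this way is also the global one, which is the situation the auxiliary function is designed to recreate around ever better minimizers.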

9.
Generalized descent for global optimization
This paper introduces a new method for the global unconstrained minimization of a differentiable objective function. The method is based on search trajectories, which are defined by a differential equation and exhibit certain similarities to the trajectories of steepest descent. The trajectories depend explicitly on the value of the objective function and aim at attaining a given target level, while rejecting all larger local minima. Convergence to the global minimum can be proven for a certain class of functions and appropriate settings of two parameters. The author wishes to thank Professor R. P. Brent for making helpful suggestions and acknowledges the financial support of an Australian National University Postgraduate Scholarship.

10.
It is well known that the norm of the gradient may be unreliable as a stopping test in unconstrained optimization, and that it often exhibits oscillations in the course of the optimization. In this paper we present results describing the properties of the gradient norm for the steepest descent method applied to quadratic objective functions. We also make some general observations that apply to nonlinear problems, relating the gradient norm, the objective function value, and the path generated by the iterates.
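The phenomenon is easy to reproduce on an ill-conditioned two-dimensional quadratic with exact line search: the function value decreases strictly at every step while the gradient norm oscillates. A small sketch (the matrix and starting point are arbitrary choices):

```python
import numpy as np

def steepest_descent_quadratic(A, x0, iters=500):
    """Exact-line-search steepest descent on f(x) = 0.5 x^T A x; returns the
    histories of function values and gradient norms."""
    x = x0.astype(float)
    fvals, gnorms = [], []
    for _ in range(iters):
        g = A @ x
        fvals.append(0.5 * x @ A @ x)
        gnorms.append(np.linalg.norm(g))
        alpha = (g @ g) / (g @ A @ g)   # exact minimizing step for quadratics
        x = x - alpha * g
    return fvals, gnorms
```

With A = diag(1, 100) and x0 = (10, 1), the gradient norm drops from about 100 to about 10 on the first step and then jumps back up near 50 on the second, even though f decreases monotonically throughout.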

11.
This work proposes an upper bound on the maximal number of non-dominated points of a multicriteria optimization problem. Assuming that the number of values taken on each criterion is known, the criterion space corresponds to a comparability graph or a product of chains. Thus, the upper bound can be interpreted as the stability number of a comparability graph or, equivalently, as the width of a product of chains. Standard approaches or formulas for computing these numbers are impractical. We develop a practical formula which only depends on the number of criteria. We also investigate the tightness of this upper bound and the reduction of this bound when feasible, possibly efficient, solutions are known.
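For chains of sizes n_1, ..., n_k, the width of the product equals the size of its middle rank level by the de Bruijn-Tengbergen-Kruyswijk theorem: the number of tuples (a_1, ..., a_k) with 0 ≤ a_i ≤ n_i − 1 and Σ a_i = ⌊Σ(n_i − 1)/2⌋. A small dynamic-programming sketch of this count (one standard formula; the paper's own practical formula is not reproduced):

```python
def width_of_chain_product(sizes):
    """Width of C_{n1} x ... x C_{nk}: the size of the middle rank level, i.e.
    the number of tuples (a_1..a_k), 0 <= a_i < n_i, whose coordinate sum is
    floor(sum(n_i - 1) / 2). By the de Bruijn-Tengbergen-Kruyswijk theorem
    this level is a maximum antichain of the product of chains."""
    target = sum(n - 1 for n in sizes) // 2
    # poly[s] = number of tuples with coordinate sum s (polynomial product DP).
    poly = [1]
    for n in sizes:
        new = [0] * (len(poly) + n - 1)
        for s, c in enumerate(poly):
            for a in range(n):
                new[s + a] += c
        poly = new
    return poly[target]
```

For two criteria taking m and n values this reduces to min(m, n), and for k binary criteria it gives the middle binomial coefficient C(k, ⌊k/2⌋), the width of the Boolean lattice.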

12.
It is well known that the sufficient descent condition is very important to the global convergence of the nonlinear conjugate gradient method. In this paper, some modified conjugate gradient methods which possess this property are presented. The global convergence of these proposed methods with the weak Wolfe–Powell (WWP) line search rule is established for nonconvex functions under suitable conditions. Numerical results are reported. This work is supported by Guangxi University SF grant X061041 and China NSF grant 10761001.
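A standard way to enforce the sufficient descent condition gᵀd ≤ −c‖g‖² is to restart with the steepest descent direction whenever the conjugate gradient update violates it. The sketch below applies this generic safeguard to the PRP+ update with Armijo backtracking; the paper's specific modified methods and the weak Wolfe–Powell line search are not reproduced.

```python
import numpy as np

def safeguarded_prp(f, grad, x0, c=1e-4, tol=1e-6, maxit=500):
    """PRP+ conjugate gradient with a sufficient descent safeguard."""
    x, g = x0, grad(x0)
    d = -g
    for _ in range(maxit):
        if np.linalg.norm(g) < tol:
            break
        # Sufficient descent safeguard: restart with -g if g.d > -c ||g||^2.
        if g @ d > -c * (g @ g):
            d = -g
        # Armijo backtracking line search (a simplification of WWP).
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

The safeguard guarantees that every search direction actually used satisfies the sufficient descent condition, which is exactly the property the global convergence analysis relies on.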

13.
A recent work of Shi (Numer. Linear Algebra Appl. 2002; 9 : 195–203) proposed a hybrid algorithm which combines a primal‐dual potential reduction algorithm with the use of the steepest descent direction of the potential function. The complexity of the potential reduction algorithm remains valid but the overall computational cost can be reduced. In this paper, we make efforts to further reduce the computational costs. We notice that in order to obtain the steepest descent direction of the potential function, the Hessian matrix of second order partial derivatives of the objective function needs to be computed. To avoid this, in this paper we propose another hybrid algorithm which uses a projected steepest descent direction of the objective function instead of the steepest descent direction of the potential function. The complexity of the original potential reduction algorithm still remains valid but the overall computational cost is further reduced. Our numerical experiments are also reported. Copyright © 2004 John Wiley & Sons, Ltd.
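The projected steepest descent direction of the objective on {x : Ax = b} is d = −(I − Aᵀ(AAᵀ)⁻¹A)∇f(x); it needs no second derivatives and satisfies Ad = 0, so feasibility is preserved. A minimal sketch of this ingredient, detached from the potential reduction algorithm and its complexity analysis:

```python
import numpy as np

def projected_gradient_direction(A, g):
    """d = -(I - A^T (A A^T)^{-1} A) g: steepest descent projected onto null(A)."""
    lam = np.linalg.solve(A @ A.T, A @ g)
    return -(g - A.T @ lam)

def minimize_on_affine(grad, A, x0, step=0.1, iters=200):
    """Projected steepest descent from a feasible x0; A x stays equal to A x0."""
    x = x0
    for _ in range(iters):
        x = x + step * projected_gradient_direction(A, grad(x))
    return x
```

Because every direction lies in the null space of A, the iterates remain on the affine feasible set and converge to the constrained minimizer for a suitable step length.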

14.
This paper concerns the connection among different sets of multicriteria optimization problem solutions. For the family of bicriteria optimization problems, the limiting properties of the sets of weakly-efficient solutions are determined.

15.
This paper proposes a new method for multicriteria analysis, named Multicriteria Tournament Decision (MTD). It provides a ranking of alternatives from best to worst, according to the preferences of a human decision-maker (DM). Its positive aspects include a simple algorithm with intuitive appeal and few input parameters (just the importance weight of each criterion). The helpfulness of MTD is demonstrated by using it to select the final solution of multiobjective optimization problems in an a posteriori decision-making approach. Having at hand a discrete approximation of the Pareto front (provided by a multiobjective evolutionary search algorithm), the choice of the preferred Pareto-optimal solution is performed using MTD. A simple method, named the Gain Analysis method (GAM), for verifying the existence of a better solution (a solution associated with higher marginal rates of return) than the one originally chosen by the DM, is also introduced here. The usefulness of the MTD and GAM methods is confirmed by the suitable results shown in this paper.

16.
We consider the problem of minimizing a smooth function over a feasible set defined as the Cartesian product of convex compact sets. We assume that the dimension of each factor set is huge, so we are interested in studying inexact block coordinate descent methods (possibly combined with column generation strategies). We define a general decomposition framework where different line search based methods can be embedded, and we state global convergence results. Specific decomposition methods based on gradient projection and Frank–Wolfe algorithms are derived from the proposed framework. The numerical results of computational experiments performed on network assignment problems are reported.
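One concrete member of such a framework is cyclic block gradient projection over a product of boxes; the sketch below is that special case only (exact fixed steps, box factor sets, no column generation), with all constants chosen arbitrarily.

```python
import numpy as np

def block_gradient_projection(Q, b, lo, hi, blocks, x0, step=0.1, sweeps=300):
    """Cyclic block coordinate descent for f(x) = 0.5 x^T Q x - b^T x over the
    box [lo, hi]: each step is a projected gradient step that updates one block
    of coordinates while the other blocks stay fixed."""
    x = x0.copy()
    for _ in range(sweeps):
        for blk in blocks:
            g = Q @ x - b            # full gradient; only the blk entries are used
            x[blk] = np.clip(x[blk] - step * g[blk], lo[blk], hi[blk])
    return x
```

At a limit point the projected gradient residual vanishes blockwise, which is the first-order optimality condition over the Cartesian product.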

17.
The aim of this paper is to propose a new multiple subgradient descent bundle method for solving unconstrained convex nonsmooth multiobjective optimization problems. Contrary to many existing multiobjective optimization methods, our method treats the objective functions as they are without employing a scalarization in a classical sense. The main idea of this method is to find descent directions for every objective function separately by utilizing the proximal bundle approach, and then trying to form a common descent direction for every objective function. In addition, we prove that the method is convergent and it finds weakly Pareto optimal solutions. Finally, some numerical experiments are considered.

18.
Song Chunling, Xia Zunquan. Chinese Quarterly Journal of Mathematics, 2007, 22(1): 131-136.
Some properties of a class of quasi-differentiable functions (the difference of two finite convex functions) are considered in this paper, and the convergence of the steepest descent algorithm for unconstrained and constrained quasi-differentiable programming is proved.

19.
The Convergence of the Steepest Descent Algorithm for D.C. Optimization
Some properties of a class of quasi-differentiable functions (the difference of two finite convex functions) are considered in this paper, and the convergence of the steepest descent algorithm for unconstrained and constrained quasi-differentiable programming is proved.

20.
A recent paper by Ni Renxing studied a characteristic condition under which the generalized steepest descent method converges strongly to a solution of equations involving quasi-accretive operators. This paper corrects and improves that work and gives a new characteristic condition. The results obtained improve and extend several existing results.
