Similar Articles
20 similar articles found.
1.
A descent method on a closed set X of a Hilbert space, adapted to multi-objective optimization, is presented. After solving a differential inclusion, the limit points of the solutions are used to characterize a critical set, which contains the set of Pareto optima. Under suitable assumptions, the existence of a Pareto optimum is proved. The Lusternik-Schnirelman theory is then generalized to this framework, and the critical set is related to the topological properties of X.
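For intuition, the sketch below discretizes the idea on a toy bi-objective problem: an explicit Euler step along a crude common velocity, followed by projection back onto the closed set X (here the unit ball). The objectives, the set, and the averaged-gradient velocity are illustrative assumptions, not the paper's differential inclusion.

```python
import numpy as np

# Hypothetical bi-objective problem on the closed set X = unit ball.
a = np.array([1.0, 0.0])
f1 = lambda x: np.sum((x - a) ** 2)
f2 = lambda x: np.sum((x + a) ** 2)
g1 = lambda x: 2.0 * (x - a)
g2 = lambda x: 2.0 * (x + a)

def project(x):
    """Euclidean projection onto the unit ball (our choice of closed set X)."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.array([0.3, 0.9])
h = 0.05  # Euler step size for the discretized trajectory
for _ in range(500):
    # Crude common velocity: minus the average of the two gradients.
    # The paper works with a differential inclusion whose velocities are
    # genuine common-descent directions; this is only a discretized cartoon.
    v = -0.5 * (g1(x) + g2(x))
    x = project(x + h * v)

print("limit point (candidate for the Pareto-critical set):", x)
```

Here the limit point lands on the segment between the two individual minimizers, which is exactly the Pareto set of this toy problem.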

2.
We propose an iterative gradient descent algorithm for solving the scenario-based Mean-CVaR portfolio selection problem. The algorithm is fast and does not require any LP solver. It also has an efficiency advantage over the LP approach for large scenario sizes.
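To illustrate how such an LP-free iteration can look, here is a minimal projected-subgradient sketch of scenario-based CVaR minimization via the Rockafellar-Uryasev formulation. The scenario data, step sizes, and plain subgradient update are assumptions, not necessarily the authors' exact algorithm; only the CVaR term is minimized, and adding the mean-return term is straightforward.

```python
import numpy as np

rng = np.random.default_rng(0)
S, n = 1000, 5                              # scenarios, assets
R = rng.normal(0.001, 0.02, size=(S, n))    # hypothetical scenario returns
alpha = 0.95

def project_simplex(w):
    """Euclidean projection onto the probability simplex (long-only weights)."""
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(w)) + 1) > 0)[0][-1]
    return np.maximum(w + (1 - css[rho]) / (rho + 1), 0)

w = np.ones(n) / n   # portfolio weights
t = 0.0              # auxiliary VaR-level variable of Rockafellar-Uryasev
lr = 0.01
for _ in range(2000):
    losses = -R @ w
    excess = losses > t                          # scenarios beyond the VaR level
    # Subgradients of t + E[(loss - t)_+] / (1 - alpha) w.r.t. t and w:
    gt = 1.0 - excess.mean() / (1 - alpha)
    gw = -R[excess].sum(axis=0) / ((1 - alpha) * S)
    t -= lr * gt
    w = project_simplex(w - lr * gw)

cvar = t + np.maximum(-R @ w - t, 0).mean() / (1 - alpha)
print("weights:", np.round(w, 3), " CVaR:", round(cvar, 5))
```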

3.
In this work we propose a Cauchy-like method for solving smooth unconstrained vector optimization problems. When the partial order under consideration is the one induced by the nonnegative orthant, we regain the steepest descent method for multicriteria optimization recently proposed by Fliege and Svaiter. We prove that every accumulation point of the generated sequence satisfies a certain first-order necessary condition for optimality, which extends to the vector case the well-known “gradient equals zero” condition for real-valued minimization. Finally, under some reasonable additional hypotheses, we prove (global) convergence to a weak unconstrained minimizer. As a by-product, we show that the problem of finding a weak constrained minimizer can be viewed as a particular case of the so-called Abstract Equilibrium problem.
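The multicriteria steepest descent direction mentioned above solves min_d max_i ⟨∇f_i(x), d⟩ + ½‖d‖². A small sketch (with a hypothetical example Jacobian) computes it through the dual of that subproblem, a quadratic program over the unit simplex:

```python
import numpy as np
from scipy.optimize import minimize

def multiobjective_steepest_direction(J):
    """Given the m x n Jacobian J (rows = objective gradients), return
    d = argmin_d max_i <J_i, d> + 0.5*||d||^2 via its dual:
    d = -J^T lam, with lam minimizing 0.5*||J^T lam||^2 over the simplex."""
    m = J.shape[0]
    obj = lambda lam: 0.5 * np.sum((J.T @ lam) ** 2)
    cons = [{"type": "eq", "fun": lambda lam: np.sum(lam) - 1.0}]
    res = minimize(obj, np.ones(m) / m, bounds=[(0, 1)] * m, constraints=cons)
    return -J.T @ res.x

# Hypothetical two-objective example:
J = np.array([[2.0, 0.0],     # gradient of f1
              [0.0, 4.0]])    # gradient of f2
d = multiobjective_steepest_direction(J)
print("direction:", d, " max_i <g_i, d>:", (J @ d).max())  # negative unless Pareto-critical
```

When the optimal value of the subproblem is zero, the point satisfies the first-order condition the abstract describes; otherwise d strictly decreases every objective to first order.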

4.
The main goal of this paper is to develop accuracy estimates for stochastic programming problems by employing stochastic approximation (SA) type algorithms. To this end, we show that, while running a Mirror Descent Stochastic Approximation procedure, one can compute, with a small additional effort, lower and upper statistical bounds for the optimal objective value. We demonstrate that for a certain class of convex stochastic programs these bounds are comparable in quality with similar bounds computed by the sample average approximation method, while their computational cost is considerably smaller.
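To make the bound idea concrete, the sketch below runs entropic mirror descent SA on a stochastic linear objective over the simplex and tracks two cheap by-products: the objective at the averaged iterate (an upper estimate) and the minimum of the averaged stochastic linearizations (a lower estimate). The data and step-size rule are assumptions, and the paper's actual bounds include confidence corrections not shown here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 10, 5000
c = rng.uniform(0, 1, n)            # hypothetical expected costs: f(x) = E<xi, x> = <c, x>

x = np.ones(n) / n                  # start at the simplex center
x_avg = np.zeros(n)
xi_avg = np.zeros(n)
for t in range(1, T + 1):
    xi = c + rng.normal(0, 0.1, n)  # stochastic gradient of the linear objective
    xi_avg += (xi - xi_avg) / t     # running average of the linearizations
    # Entropic mirror (multiplicative-weights) step keeps iterates on the simplex:
    x = x * np.exp(-0.1 / np.sqrt(t) * xi)
    x /= x.sum()
    x_avg += (x - x_avg) / t        # running average of iterates

upper = c @ x_avg                   # objective at the averaged point (true c used only for demo)
lower = xi_avg.min()                # min over the simplex of the averaged linear model
print(f"lower ~ {lower:.4f} <= opt = {c.min():.4f} <= upper ~ {upper:.4f}")
```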

5.
In this article, we consider a min-max multi-agent optimization problem where multiple agents cooperatively optimize a sum of local convex–concave functions, each of which is available to one specific agent in a network. To solve the problem, we propose a distributed optimization method by extending classical mirror descent algorithms to the distributed setting. We obtain convergence of the algorithm under mild conditions: the agent communication follows a directed graph and the related weight matrices are row stochastic. In particular, when the weight matrices are restricted to be doubly stochastic, we provide the explicit convergence rate of the algorithm by choosing the stepsize in a suitable way. The proposed algorithm can be viewed as a generalization of subgradient projection methods, since it utilizes a customized Bregman divergence instead of the usual Euclidean squared distance. Finally, some simulation results on a matrix game are presented to illustrate the performance of the algorithm. © 2016 Wiley Periodicals, Inc. Complexity 21: 178–190, 2016
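A minimal sketch of the distributed scheme follows, assuming linear local losses over the simplex and an entropic Bregman divergence (so the mirror step is multiplicative). The ring mixing matrix below happens to be doubly stochastic, the easier case for which explicit rates are given; the paper's min-max (convex–concave) structure is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, T = 4, 3, 3000
C = rng.uniform(0, 1, size=(m, n))       # agent i privately holds cost vector C[i]
# Global objective: (1/m) * sum_i <C[i], x> over the simplex.

# Mixing matrix of a directed ring (each row sums to 1; here also doubly stochastic):
W = 0.5 * (np.eye(m) + np.roll(np.eye(m), 1, axis=1))

X = np.ones((m, n)) / n                  # each agent keeps its own simplex iterate
for t in range(1, T + 1):
    X = W @ X                            # consensus step: mix neighbors' iterates
    G = C                                # local (sub)gradients of the linear losses
    X = X * np.exp(-0.2 / np.sqrt(t) * G)  # entropic mirror step, agent by agent
    X /= X.sum(axis=1, keepdims=True)

print("agents' iterates (near consensus):")
print(np.round(X, 3), "\nglobal optimum index:", C.mean(axis=0).argmin())
```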

6.
7.
8.
9.
A modified conjugate gradient method is presented for solving unconstrained optimization problems, which possesses the following properties: (i) the sufficient descent property is satisfied without any line search; (ii) the search direction lies in a trust region automatically; (iii) the Zoutendijk condition holds for the Wolfe–Powell line search technique; (iv) the method inherits an important property of the well-known Polak–Ribière–Polyak (PRP) method: the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, preventing a sequence of tiny steps from happening. Global convergence and a linear convergence rate of the method are established. Numerical results show that this method is promising.
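The paper's exact direction formula is not reproduced in the abstract; the sketch below shows a generic PRP-type loop with the two ingredients highlighted above: a PRP+ coefficient that tends toward steepest descent after small steps, plus a sufficient-descent safeguard (a simple Armijo backtracking search stands in for Wolfe–Powell).

```python
import numpy as np

def modified_prp_cg(f, grad, x0, iters=200, tol=1e-8):
    """Generic PRP-type conjugate gradient loop with a sufficient-descent
    safeguard (restart to steepest descent when the safeguard fails).
    This is NOT the paper's exact formula, only an illustration."""
    x, g = x0.copy(), grad(x0)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        a = 1.0                                     # Armijo backtracking
        while f(x + a * d) > f(x) + 1e-4 * a * (g @ d):
            a *= 0.5
        x_new = x + a * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)        # classical PRP coefficient
        d = -g_new + max(beta, 0.0) * d             # PRP+ truncation
        if g_new @ d > -1e-10 * (g_new @ g_new):    # sufficient-descent safeguard
            d = -g_new
        x, g = x_new, g_new
    return x

# Hypothetical ill-conditioned quadratic test problem:
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(modified_prp_cg(f, grad, np.array([1.0, 1.0, 1.0])))
```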

10.
The aim of this paper is to propose a new multiple subgradient descent bundle method for solving unconstrained convex nonsmooth multiobjective optimization problems. Contrary to many existing multiobjective optimization methods, our method treats the objective functions as they are, without employing a scalarization in the classical sense. The main idea of this method is to find descent directions for every objective function separately by utilizing the proximal bundle approach, and then to combine them into a common descent direction for all the objectives. In addition, we prove that the method is convergent and finds weakly Pareto optimal solutions. Finally, some numerical experiments are reported.
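One standard way to turn per-objective information into a common descent direction (a textbook construction, not necessarily the paper's exact rule) is to take the minimum-norm element of the convex hull of aggregated subgradients ξ_i ∈ ∂f_i(x):

```latex
d = -\bar{\xi}, \qquad
\bar{\xi} = \operatorname*{arg\,min}\Big\{ \tfrac{1}{2}\|\xi\|^{2} \;:\; \xi \in \operatorname{conv}\{\xi_1,\dots,\xi_m\} \Big\}.
```

The optimality condition of this quadratic problem gives ⟨ξ_i, d⟩ ≤ -‖ξ̄‖² for every i, so d is a common direction of decrease for the bundle models; if ξ̄ = 0, the current point is already stationary in the weak Pareto sense.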

11.
12.
Generalized descent for global optimization
This paper introduces a new method for the global unconstrained minimization of a differentiable objective function. The method is based on search trajectories, which are defined by a differential equation and exhibit certain similarities to the trajectories of steepest descent. The trajectories depend explicitly on the value of the objective function and aim at attaining a given target level, while rejecting all larger local minima. Convergence to the global minimum can be proven for a certain class of functions and an appropriate setting of two parameters. The author wishes to thank Professor R. P. Brent for making helpful suggestions and acknowledges the financial support of an Australian National University Postgraduate Scholarship.
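The paper's trajectory equation is not reproduced in this abstract. As a stand-in, the sketch below integrates a damped heavy-ball ODE with scipy and checks whether the trajectory reaches a given target level, rolling through a shallow local minimum on the way; the objective, damping, and initial state are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 1-D objective: global minimum near x = -1, shallow local
# minimum near x = +1; the target level lies between the two minimum values.
f = lambda x: (x**2 - 1) ** 2 + 0.3 * x
df = lambda x: 4 * x * (x**2 - 1) + 0.3
target = 0.0

# Damped heavy-ball trajectory x'' = -grad f(x) - gamma * x'. This is NOT the
# paper's trajectory equation; it only shows how a momentum-carrying
# trajectory can roll through the shallow local minimum at x = +1.
def rhs(t, y):
    x, v = y
    return [v, -df(x) - 0.5 * v]

sol = solve_ivp(rhs, (0, 60), [2.0, 0.0], max_step=0.05)
xs = sol.y[0]
best = xs[np.argmin(f(xs))]
print(f"lowest level reached: f = {f(xs).min():.3f} at x = {best:.3f} (target {target})")
```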

13.
Multiobjective DC optimization problems arise naturally, for example, in data classification and cluster analysis, playing a crucial role in data mining. In this paper, we propose a new multiobjective double bundle method designed for nonsmooth multiobjective optimization problems whose objective and constraint functions can be presented as a difference of two convex (DC) functions. The method is of the descent type and generalizes the ideas of the double bundle method to multiobjective and constrained problems. We utilize a special cutting plane model tailored to the DC improvement function such that both the convex and the concave behaviour of the function are captured. The method is proved to be finitely convergent to a weakly Pareto stationary point under mild assumptions. Finally, we report some numerical experiments and compare the solutions produced by our method with those of a method designed for general nonconvex multiobjective problems. This is done to validate the use of a method aimed specifically at DC objectives instead of a general nonconvex method.

14.
New first-order methods are introduced for solving convex optimization problems from a fairly broad class. For composite optimization problems with an inexact stochastic oracle, a stochastic intermediate gradient method is proposed that allows using an arbitrary norm in the space of variables and a prox-function. The mean rate of convergence of this method and the probability of large deviations from this rate are estimated. For problems with a strongly convex objective function, a modification of this method is proposed and its rate of convergence is estimated. The resulting estimates coincide, up to a multiplicative constant, with lower complexity bounds for the class of composite optimization problems with an inexact stochastic oracle and for all usually considered subclasses of this class.
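The sketch below shows only the simplest member of this problem class: a composite objective (least squares plus an ℓ1 term) with an inexact stochastic oracle, handled by a stochastic proximal-gradient step with the Euclidean prox. The paper's intermediate gradient method, with general norms and prox-functions, is more refined; the data and step size here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
A = rng.normal(size=(100, n))
x_true = np.sign(rng.normal(size=n)) * (rng.uniform(size=n) < 0.3)  # sparse signal
b = A @ x_true + 0.01 * rng.normal(size=100)
lam = 0.1   # weight of the l1 composite term

def soft_threshold(z, r):
    """Prox of r*||.||_1 -- the non-smooth composite part is handled exactly."""
    return np.sign(z) * np.maximum(np.abs(z) - r, 0)

x = np.zeros(n)
step = 0.001
for t in range(4000):
    i = rng.integers(0, 100)            # inexact stochastic oracle: one random row
    g = (A[i] @ x - b[i]) * A[i]        # noisy gradient of the smooth part
    x = soft_threshold(x - step * g, step * lam)

print("nonzeros recovered:", np.count_nonzero(np.abs(x) > 1e-3),
      "of", np.count_nonzero(x_true))
```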

15.
Most of the descent methods developed so far suffer from the computational burden of a sequence of constrained quadratic subproblems which are needed to obtain a descent direction. In this paper we present a class of proximal-type descent methods with a new direction-finding subproblem. In particular, two of them have a linear programming subproblem instead of a quadratic one. Computational experiments with these two methods have been performed on two well-known test problems. The results show that these methods are another very promising approach for nondifferentiable convex optimization.
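A direction-finding subproblem of the LP type mentioned above can be posed as minimizing the worst linearization over a box-normalized direction. The sketch below solves it with scipy's linprog on a hypothetical two-subgradient bundle; the paper's own subproblem may carry different stabilizing terms.

```python
import numpy as np
from scipy.optimize import linprog

def lp_descent_direction(subgradients):
    """Direction-finding with a linear program instead of a QP:
        min_{t,d} t   s.t.  <g_j, d> <= t for all bundle subgradients g_j,
                            -1 <= d_i <= 1  (box-normalized direction)."""
    G = np.asarray(subgradients)                # shape (m, n)
    m, n = G.shape
    c = np.zeros(n + 1); c[0] = 1.0             # variables z = (t, d); minimize t
    A_ub = np.hstack([-np.ones((m, 1)), G])     # encodes <g_j, d> - t <= 0
    b_ub = np.zeros(m)
    bounds = [(None, None)] + [(-1, 1)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[1:], res.x[0]                  # t < 0 => d is a descent direction

# Hypothetical bundle of subgradients collected near the current iterate:
d, t = lp_descent_direction([[1.0, -2.0], [1.0, -1.9]])
print("d =", d, " predicted decrease rate t =", round(t, 3))
```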

16.
In this paper, we make a modification to the Liu-Storey (LS) conjugate gradient method and propose a descent LS method. The method can generate sufficient descent directions for the objective function. This property is independent of the line search used. We prove that the modified LS method is globally convergent with the strong Wolfe line search. The numerical results show that the proposed descent LS method is efficient for the unconstrained problems in the CUTEr library.
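For reference, the classical Liu-Storey coefficient and search direction being modified are as follows; the paper's specific modification, which forces sufficient descent independently of the line search, is not reproduced here:

```latex
\beta_k^{\mathrm{LS}} = \frac{g_k^{\top}\,(g_k - g_{k-1})}{-\,d_{k-1}^{\top}\, g_{k-1}},
\qquad d_k = -g_k + \beta_k^{\mathrm{LS}}\, d_{k-1}.
```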

17.
Conjugate gradient methods are a widely used class of methods for solving large-scale unconstrained optimization problems. A new nonlinear conjugate gradient (CG) method is proposed; theoretical analysis shows that the new algorithm possesses the sufficient descent property under a variety of line search conditions. A global convergence theorem for the new CG algorithm is then established. Finally, extensive numerical experiments are reported, showing that the new algorithm is computationally more efficient than several classical CG methods.

18.
Juditsky, Anatoli; Kwon, Joon; Moulines, Éric. Mathematical Programming 2023, 199(1-2): 793-830
We introduce and analyze a new family of first-order optimization algorithms which generalizes and unifies both mirror descent and dual averaging. Within the framework of...
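The two endpoint updates that such a family interpolates between can be written, in their standard textbook forms with mirror map ω and Bregman divergence D_ω (this display is background, not the paper's unified scheme):

```latex
\text{mirror descent: } x_{t+1} = \operatorname*{arg\,min}_{x \in X}\; \eta\,\langle g_t, x\rangle + D_{\omega}(x, x_t),
\qquad
\text{dual averaging: } x_{t+1} = \operatorname*{arg\,min}_{x \in X}\; \eta\,\Big\langle \textstyle\sum_{s \le t} g_s,\, x\Big\rangle + \omega(x).
```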

19.
Packing optimization problems aim to seek the best way of placing a given set of rectangular cartons within a minimum-volume rectangular container. Current packing optimization methods either have difficulty in finding a globally optimal solution or are computationally inefficient, because the models involve too many 0–1 variables and because only a single computer is used. This study proposes a distributed computation method for solving a packing problem with a set of personal computers connected via the Internet. First, the traditional packing optimization model is converted into an equivalent model containing many fewer 0–1 variables. Then the model is decomposed into several subproblems by dividing the objective value into many intervals. Each of these subproblems is a linearized logarithmic program expressed as a linear mixed 0–1 problem. The whole problem thus becomes solvable to a globally optimal solution. Numerical examples demonstrate that the proposed method can obtain the global optimum of a packing problem effectively.
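The decompose-and-distribute step might be organized as below. Everything here is a hypothetical stand-in: solve_subproblem is a placeholder for a real mixed 0–1 solve, and a local process pool stands in for the Internet-distributed PCs.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def solve_subproblem(interval):
    """Hypothetical stand-in: solve the linearized mixed 0-1 subproblem whose
    objective (container volume) is restricted to the given interval, and
    return (best volume found, interval). A real implementation would call
    a MILP solver here."""
    lo, hi = interval
    best = lo + 0.37 * (hi - lo)     # placeholder result, for illustration only
    return best, interval

if __name__ == "__main__":
    # Split the objective range [V_min, V_max] into intervals, one per worker:
    edges = np.linspace(100.0, 200.0, 9)
    intervals = list(zip(edges[:-1], edges[1:]))
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(solve_subproblem, intervals))
    volume, interval = min(results)  # global optimum = best over all intervals
    print(f"best volume {volume:.2f} found in interval {interval}")
```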

20.
A novel method, entitled the discrete global descent method, is developed in this paper to solve discrete global optimization problems and nonlinear integer programming problems. This method moves from one discrete minimizer of the objective function f to another better one at each iteration with the help of an auxiliary function, entitled the discrete global descent function. The discrete global descent function guarantees that its discrete minimizers coincide with the better discrete minimizers of f under some standard assumptions. This property also ensures that a better discrete minimizer of f can be found by some classical local search methods. Numerical experiments on several test problems with up to 100 integer variables and up to 1.38 × 10^104 feasible points have demonstrated the applicability and efficiency of the proposed method.
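The two-phase move (local discrete descent, then descent on an auxiliary function to escape the current basin) can be illustrated on a toy integer problem. The auxiliary function below is a generic filled-function-style choice, not the paper's discrete global descent function.

```python
# Illustrative 1-D integer problem: shallow local minimizer at k = -6 (f = 2),
# global minimizer at k = 4 (f = 0).
def f(k):
    if k < -6:
        return 1.2 * (k + 6) ** 2 + 2
    if k < 0:
        return 0.1 * (k + 6) ** 2 + 2
    return 0.2 * (k - 4) ** 2

DOMAIN = range(-10, 11)

def discrete_descent(k, h):
    """Greedy neighborhood descent on the integers: move to the better of k-1, k+1."""
    while True:
        best = min((j for j in (k - 1, k + 1) if j in DOMAIN), key=h)
        if h(best) >= h(k):
            return k
        k = best

k1 = discrete_descent(-10, f)   # lands in the shallow minimizer k = -6
# Auxiliary function in the spirit of a global descent function: rewards
# distance from k1 but penalizes points worse than f(k1). A generic
# filled-function-style choice for illustration, NOT the paper's definition.
aux = lambda k: -abs(k - k1) + 0.1 * max(f(k) - f(k1), 0)
k2 = discrete_descent(k1, aux)  # escapes the basin of k1
k3 = discrete_descent(k2, f)    # re-descend on f from the escape point
print(f"local minimizer k={k1} (f={f(k1)}), improved minimizer k={k3} (f={f(k3)})")
```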
