Similar Literature
A total of 20 similar documents were found (search time: 31 ms).
1.
In many instances, the exact evaluation of an objective function and its subgradients can be computationally demanding. By way of example, we cite problems that arise within the context of stochastic optimization, where the objective function is typically defined via multi-dimensional integration. In this paper, we address the solution of such optimization problems by exploring the use of successive approximation schemes within subgradient optimization methods. We refer to this new class of methods as inexact subgradient algorithms. With relatively mild conditions imposed on the approximations, we show that the inexact subgradient algorithms inherit properties associated with their traditional (i.e., exact) counterparts. Within the context of stochastic optimization, the conditions that we impose allow a relaxation of requirements traditionally imposed on steplengths in stochastic quasi-gradient methods. Additionally, we study methods in which steplengths may be defined adaptively, in a manner that reflects the improvement in the objective function approximations as the iterations proceed. We illustrate the applicability of our approach by proposing an inexact subgradient optimization method for the solution of stochastic linear programs. This work was supported by Grant Nos. NSF-DDM-89-10046 and NSF-DDM-9114352 from the National Science Foundation.
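As a minimal sketch of the iteration pattern this abstract describes (the toy objective f(x) = E|x - xi|, the sample sizes, and the 1/k steplengths are our own illustrative choices, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_subgradient(x, n_samples):
    """Monte Carlo estimate of a subgradient of f(x) = E|x - xi|, xi ~ N(0, 1).

    The estimate tightens as n_samples grows, mimicking the successive
    approximation schemes used inside inexact subgradient algorithms."""
    xi = rng.standard_normal(n_samples)
    return np.mean(np.sign(x - xi))   # sign(x - xi) is a subgradient of |x - xi|

x = 5.0
for k in range(1, 200):
    g = approx_subgradient(x, n_samples=10 * k)  # tighten approximation over time
    x -= (1.0 / k) * g                           # classical diminishing steplength
print(x)  # should approach the true minimizer 0 (the median of N(0, 1))
```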

2.
Several optimization schemes are known for convex optimization problems. However, numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Significant progress beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence results for this algorithm under the main assumption that the objective function satisfies the Kurdyka–Łojasiewicz property.
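A hedged sketch of one such proximal point step (the DC decomposition below, f = g - h with g(u) = u^4 - u^2 and h(u) = u^2, is a toy example of ours; the paper's algorithm is stated in far greater generality):

```python
import numpy as np
from scipy.optimize import minimize_scalar

g = lambda u: u ** 4 - u ** 2     # nonconvex part
dh = lambda u: 2.0 * u            # gradient of the convex part h(u) = u**2
lam = 0.5                         # proximal parameter

x = 0.1
for _ in range(100):
    y = dh(x)                     # linearize the subtracted convex function at x
    # Proximal subproblem: minimize g(u) - y*u + (1/(2*lam)) * (u - x)^2
    x = minimize_scalar(lambda u: g(u) - y * u + (u - x) ** 2 / (2 * lam),
                        bounds=(-3, 3), method="bounded").x
print(x)  # critical point of f = g - h (here f(u) = u^4 - 2u^2, so x -> +/-1)
```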

3.
W. Hare. Optimization Letters, 2017, 11(7): 1217-1227.
Derivative-free optimization (DFO) is the mathematical study of optimization algorithms that do not use derivatives. One branch of DFO focuses on model-based methods, where an approximation of the objective function is used to guide the optimization algorithm. Proving convergence of such methods often relies on the assumption that the approximations form fully linear models, which requires the true objective function to be smooth. However, some recent methods have loosened this assumption and instead work with functions that are compositions of smooth functions with simple convex functions (the max-function or the \(\ell_1\) norm). In this paper, we examine the error bounds that result from composing a convex lower semi-continuous function with a smooth vector-valued function when it is possible to provide fully linear models for each component of the vector-valued function. We derive error bounds for the resulting function values and subgradient vectors.
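To make the setup concrete, here is a small sketch (toy F and h of our own choosing) of building a linear model for each component of a smooth vector-valued F from function values only, then composing it with a convex nonsmooth h = max, the situation whose error bounds the paper studies:

```python
import numpy as np

F = lambda x: np.array([np.sin(x[0]) + x[1] ** 2, np.cos(x[1]) - x[0]])
h = np.max                                 # simple convex, nonsmooth outer function

center = np.array([0.3, -0.2])
r = 0.1                                    # sampling radius
F0 = F(center)
# Finite-difference estimates: column i approximates the i-th Jacobian column.
J = np.column_stack([(F(center + r * e) - F0) / r for e in np.eye(2)])
model_F = lambda x: F0 + J @ (x - center)  # one linear model per component of F

x_test = center + np.array([0.05, -0.03])
print("true  f:", h(F(x_test)))
print("model f:", h(model_F(x_test)))      # error is O(r^2) near the center, the
                                           # flavor of bound derived in the paper
```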

4.
A trust region algorithm for minimization of locally Lipschitzian functions
Liqun Qi, Jie Sun. Mathematical Programming, 1994, 66(1-3): 25-43.
The classical trust region algorithm for smooth nonlinear programs is extended to the nonsmooth case where the objective function is only locally Lipschitzian. At each iteration, an objective function that carries both first and second order information is minimized over a trust region. The term that carries the first order information is an iteration function that may not explicitly depend on subgradients or directional derivatives. We prove that the algorithm is globally convergent. This convergence result extends the result of Powell for minimization of smooth functions, the result of Yuan for minimization of composite convex functions, and the result of Dennis, Li and Tapia for minimization of regular functions. In addition, compared with the recent model of Pang, Han and Rangaraj for minimization of locally Lipschitzian functions using a line search, this algorithm has the same convergence property without assuming positive definiteness and uniform boundedness of the second order term. Applications of the algorithm to various nonsmooth optimization problems are discussed. One author's work was supported in part by the Australian Research Council; the other author's work was carried out while visiting the Department of Applied Mathematics at the University of New South Wales.
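For orientation, a minimal classical trust region loop in the smooth setting (toy quadratic, Cauchy step, and standard ratio thresholds are our choices); the paper's contribution is to replace the first order term with an iteration function so the same pattern covers locally Lipschitzian objectives:

```python
import numpy as np

def f(x):    return (x[0] - 1) ** 2 + 5 * (x[1] + 2) ** 2
def grad(x): return np.array([2 * (x[0] - 1), 10 * (x[1] + 2)])
B = np.diag([2.0, 10.0])        # second order information

x, delta = np.zeros(2), 1.0     # iterate and trust region radius
for _ in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    # Cauchy step: minimize the quadratic model along -g inside the region.
    t = min(g @ g / (g @ B @ g), delta / np.linalg.norm(g))
    s = -t * g
    pred = -(g @ s + 0.5 * s @ B @ s)   # model (predicted) reduction
    rho = (f(x) - f(x + s)) / pred      # actual vs. predicted reduction
    if rho > 0.1:
        x = x + s                       # accept the trial step
    delta = 2 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
print(x)  # approaches the minimizer (1, -2)
```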

5.
A class of simulated annealing algorithms for continuous global optimization is considered in this paper. The global convergence property is analyzed with respect to the objective value sequence and the minimum objective value sequence induced by simulated annealing algorithms. The convergence analysis provides the appropriate conditions on both the generation probability density function and the temperature updating function. Different forms of temperature updating functions are obtained with respect to different kinds of generation probability density functions, leading to different types of simulated annealing algorithms which all guarantee the convergence to the global optimum.
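A compact sketch of the algorithm class being analyzed (the multimodal test function, the Gaussian generation density, and the logarithmic-type cooling schedule are illustrative choices of ours, one pairing of the kind the paper's conditions relate):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x ** 2 + 10 * np.sin(3 * x)        # multimodal test objective

x, fx = 4.0, f(4.0)
best_x, best_f = x, fx
for k in range(1, 5001):
    T = 1.0 / np.log(k + 1)                      # temperature updating function
    y = x + rng.normal(scale=1.0)                # generation probability density
    fy = f(y)
    if fy < fx or rng.random() < np.exp(-(fy - fx) / T):
        x, fx = y, fy                            # Metropolis acceptance rule
    if fx < best_f:
        best_x, best_f = x, fx                   # minimum objective value sequence
print(best_x, best_f)
```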

6.
The two-stage stochastic program with recourse has numerous applications in financial planning, energy modeling, and telecommunications systems. Notwithstanding its applicability, the two-stage stochastic program is limited in its ability to incorporate a decision maker's attitude towards risk. In this paper we present an extension via the inclusion of a recourse constraint. This results in a convex integrated chance constraint (ICC), which inherits the convexity properties of two-stage programs. However, it also inherits some of the difficulties associated with the evaluation of recourse functions. This motivates our study of conditions applicable to algorithms that use statistical approximations of such ICCs. We present a set of sufficient conditions that these approximations may satisfy in order to ensure convergence. Our conditions are satisfied by a wide range of statistical approximations, and we demonstrate that these approximations can be generated within standard algorithmic procedures. This work was supported in part by Grant No. NSF-DDM-9114352 from the National Science Foundation.
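As a sketch of the statistical approximations in question (the newsvendor-style recourse and the exponential demand below are a toy model of ours): the ICC bounds the expected shortfall, and a sample-average estimate of it stabilizes as the sample grows, which is the kind of convergent approximation the paper's sufficient conditions cover:

```python
import numpy as np

rng = np.random.default_rng(2)

def icc_estimate(x, n):
    """Sample-average estimate of E[(d - x)_+], the expected recourse shortfall
    for first stage capacity x under random demand d ~ Exp(10)."""
    demand = rng.exponential(scale=10.0, size=n)
    return np.maximum(demand - x, 0.0).mean()

beta = 0.5                                 # risk tolerance on expected shortfall
for n in (10, 100, 10_000):                # successively finer approximations
    est = icc_estimate(x=35.0, n=n)
    print(n, est, est <= beta)             # estimates stabilize near 10*exp(-3.5)
```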

7.
Recently, optimization algorithms for solving minimization problems whose objective function is a sum of two convex functions have been widely investigated in the field of image processing. In particular, the scenario where a non-differentiable convex function such as the total variation (TV) norm appears in the objective function has received considerable interest, since many variational models encountered in image processing have this nature. In this paper, we propose a fast fixed point algorithm based on an adapted metric method and apply it to TV-based image deblurring. The new method derives from the idea of establishing a general fixed point algorithm framework based on an adequate quadratic approximation of one convex function in the objective, in a way reminiscent of quasi-Newton methods. Using the non-expansion property of the proximity operator, we further establish the global convergence of the proposed algorithm. Numerical experiments on image deblurring problems demonstrate that the proposed algorithm is very competitive with current state-of-the-art algorithms in terms of computational efficiency.
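The underlying fixed point pattern, sketched with the l1 norm (whose proximity operator is soft thresholding) standing in for the TV term, and without the paper's adapted metric; the step size and data are toy choices of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 50))
x_true = (np.arange(50) % 7 == 0).astype(float)    # sparse ground truth
b = A @ x_true

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1
step = 1.0 / np.linalg.norm(A, 2) ** 2             # step from the Lipschitz constant
lam = 0.1                                          # regularization weight

x = np.zeros(50)
for _ in range(500):
    # Fixed point iteration x = prox(x - step * grad), a non-expansive map.
    x = soft(x - step * A.T @ (A @ x - b), lam * step)
print(np.nonzero(x)[0])                            # approximate support recovery
```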

8.
The optimization of multimodal functions is a challenging task, in particular when derivatives are not available for use. Recently, in a directional direct search framework, a clever multistart strategy was proposed for global derivative-free optimization of single objective functions. The goal of the current work is to generalize this approach to the computation of global Pareto fronts for multiobjective multimodal derivative-free optimization problems. The proposed algorithm alternates between initializing new searches, using a multistart strategy, and exploring promising subregions, resorting to directional direct search. Components of the objective function are not aggregated, and new points are accepted using the concept of Pareto dominance. Not all initialized searches are carried through to the end; searches are merged when they come close to one another. The convergence of the method is analyzed under the common assumptions of directional direct search. Numerical experiments show its ability to generate approximations to the different Pareto fronts of a given problem.
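The Pareto dominance acceptance rule at the heart of the method, in a minimal sketch (the random candidate data are illustrative; the actual algorithm couples this rule with multistart and directional direct search):

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def update_front(front, f_new):
    """Accept f_new only if nondominated; drop points it dominates."""
    if any(dominates(f, f_new) for f in front):
        return front
    return [f for f in front if not dominates(f_new, f)] + [f_new]

front = []
for f_val in np.random.default_rng(4).random((200, 2)):  # candidate evaluations
    front = update_front(front, f_val)
print(len(front))   # size of the nondominated approximation to the Pareto front
```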

9.
Global convergence of a new class of trust region algorithms
This paper proposes a class of nonmonotone trust region algorithms for unconstrained optimization, generalizing the usual monotone trust region algorithms. When the objective function is continuously differentiable and bounded below, and the norm of the approximate second derivatives grows at most linearly with the iteration number, we prove global convergence of the new algorithms.
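The nonmonotone ingredient can be sketched in a few lines (memory length and numbers below are illustrative): the acceptance ratio compares the predicted reduction against the maximum of the last M objective values rather than the current one, so occasional increases in f are tolerated; M = 1 recovers the usual monotone test:

```python
def nonmonotone_ratio(f_hist, f_trial, pred_reduction, M=5):
    """Trust region ratio measured against the worst of the last M values."""
    f_ref = max(f_hist[-M:])
    return (f_ref - f_trial) / pred_reduction

f_hist = [10.0, 9.2, 9.6, 9.4]                    # nonmonotone history of f values
print(nonmonotone_ratio(f_hist, f_trial=9.5, pred_reduction=0.3))
# The ratio uses 9.6 rather than 9.4, so this step can be accepted
# even though the objective value rises from 9.4 to 9.5.
```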

10.
We consider the global convergence properties of a class of quasi-Newton algorithms for solving nonsmooth optimization problems. Stationary points are defined, and several relations with optimal points are proven. We show descent properties of the algorithm when approximations of the derivatives are used. Global convergence results are given for the case where stepsizes are determined inexactly.

11.
In this paper, we consider two algorithms for nonlinear equality and inequality constrained optimization. Both algorithms utilize stepsize strategies based on differentiable penalty functions and quadratic programming subproblems. The essential difference between the algorithms lies in the stepsize strategies used. The objective function in the quadratic subproblem includes a linear term that depends on the penalty functions. The quadratic objective function utilizes an approximate Hessian of the Lagrangian augmented by the penalty functions. In this approximation, it is possible to ignore the second-derivative terms arising from the constraints in the penalty functions. The penalty parameter is determined using a strategy, slightly different for each algorithm, that ensures boundedness as well as a descent property. In particular, boundedness follows because the strategy is always satisfied for finite values of the parameter. These properties are used to establish global convergence and the conditions under which unit stepsizes are achieved. There is also a compatibility between the quadratic objective function and the stepsize strategy that ensures the consistency of the properties for unit steps and the subsequent convergence rates. This research was funded by SERC and ESRC research contracts. The author is grateful to Professors Laurence Dixon and David Mayne for their comments. The numerical results in the paper were obtained using a program written by Mr. Robin Becker.

12.
Some algorithms for unconstrained differentiable optimization problems involve the evaluation of quantities related to high order derivatives. The cost of these evaluations depends heavily on the technique used to obtain the derivatives and on characteristics of the objective function: its size, structure, and complexity. Functions with banded Hessians are the special case we study in this paper. Because of their partial separability, the cost of obtaining their high order derivatives, computed efficiently by automatic differentiation, makes high order Chebyshev methods more attractive for banded systems than for dense functions. These methods are appealingly efficient, as their convergence order can be improved without significantly increasing their algorithmic cost. This paper provides an analysis of the per-iteration complexity of high order Chebyshev methods applied to sparse functions with banded Hessians. The main result can be summarized as follows: the per-iteration complexity of a high order Chebyshev method is of the same order as that of evaluating the objective function. This theoretical analysis is verified by numerical illustrations.
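A sketch of why banded structure helps (the tridiagonal-Hessian objective below is a toy of ours, and the derivatives are coded by hand rather than by automatic differentiation): one third-order Chebyshev step costs only O(n) linear algebra, the same order as evaluating f:

```python
import numpy as np
from scipy.linalg import solve_banded

# Toy partially separable objective with tridiagonal Hessian:
# f(x) = sum_i (x_i - x_{i+1})^2 + sum_i (x_i^4 + x_i^2), minimized at x = 0.
n = 1000
x = np.linspace(1.0, 2.0, n)
for _ in range(10):
    g = 4 * x ** 3 + 2 * x                      # gradient ...
    g[:-1] += 2 * (x[:-1] - x[1:])
    g[1:]  += 2 * (x[1:] - x[:-1])
    diag = 12 * x ** 2 + 2.0                    # ... and tridiagonal Hessian
    diag[:-1] += 2; diag[1:] += 2
    off = -2.0 * np.ones(n - 1)
    ab = np.zeros((3, n))                       # banded storage: O(n), not O(n^2)
    ab[0, 1:] = off; ab[1] = diag; ab[2, :-1] = off
    u = solve_banded((1, 1), ab, g)             # Newton direction in O(n)
    Tuu = 24 * x * u ** 2                       # third derivative tensor is diagonal
    v = solve_banded((1, 1), ab, Tuu)
    x = x - u - 0.5 * v                         # Chebyshev (cubically convergent) step
print(np.abs(x).max())                          # essentially 0 after a few steps
```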

13.
We consider a class of smoothing methods for minimization problems where the feasible set is convex but the objective function is not convex, not differentiable, and perhaps not even locally Lipschitz at the solutions. Such optimization problems arise from a wide range of applications, including image restoration, signal reconstruction, variable selection, optimal control, stochastic equilibrium, and spherical approximations. In this paper, we focus on smoothing methods for solving such optimization problems that use the structure of the minimization problem and compositions of smoothing functions for the plus function \((x)_+\). Many existing optimization algorithms and codes can be used in the inner iteration of the smoothing methods. We present properties of the smoothing functions and the gradient consistency of the subdifferential associated with a smoothing function. Moreover, we describe how to update the smoothing parameter in the outer iteration of the smoothing methods to guarantee convergence to a stationary point of the original minimization problem.
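As a sketch of the central construction (the CHKS-style kernel and the toy objective \((x)_+ + 0.5(x+1)^2\) below are our own illustrative choices): a smoothing function for the plus function, its gradient (which exhibits the gradient consistency discussed above), and an outer loop that drives the smoothing parameter to zero:

```python
import numpy as np

def smooth_plus(x, mu):
    """Smooth approximation of (x)_+ = max(x, 0); tends to (x)_+ as mu -> 0."""
    return 0.5 * (x + np.sqrt(x ** 2 + 4 * mu ** 2))

def smooth_plus_grad(x, mu):
    # As mu -> 0 this converges to an element of the subdifferential of (x)_+,
    # the gradient consistency property the abstract refers to.
    return 0.5 * (1 + x / np.sqrt(x ** 2 + 4 * mu ** 2))

x, mu = 2.0, 1.0
for _ in range(8):                       # outer iteration: shrink mu
    for _ in range(200):                 # inner solve on the smoothed problem
        x -= 0.1 * (smooth_plus_grad(x, mu) + (x + 1.0))
    mu *= 0.1                            # smoothing parameter update
print(x)                                 # -> -1, minimizer of (x)_+ + 0.5*(x+1)^2
```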

14.
Quasi-Newton algorithms for unconstrained nonlinear minimization generate a sequence of matrices that can be considered as approximations of the objective function's second derivatives. This paper gives conditions under which these approximations can be proved to converge globally to the true Hessian matrix when the Symmetric Rank One update formula is used. The rate of convergence is also examined and proven to improve with the rate of convergence of the underlying iterates. The theory is confirmed by numerical experiments, which also show that the convergence of the Hessian approximations is substantially slower for other known quasi-Newton formulae. The work of this author was supported by the Natural Sciences and Engineering Research Council of Canada, and by the Information Technology Research Centre, which is funded by the Province of Ontario.
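The update in question, in a short sketch (the quadratic test problem and random steps are ours; on a quadratic the gradient differences are exact, so the convergence of B to the true Hessian is easy to observe):

```python
import numpy as np

rng = np.random.default_rng(5)
H = np.array([[4.0, 1.0], [1.0, 3.0]])            # true (constant) Hessian
grad = lambda x: H @ x

B = np.eye(2)                                     # initial Hessian approximation
x = rng.standard_normal(2)
for _ in range(20):
    s = rng.standard_normal(2) * 0.1              # step (random, for illustration)
    y = grad(x + s) - grad(x)                     # observed gradient difference
    r = y - B @ s
    if abs(r @ s) > 1e-8 * np.linalg.norm(r) * np.linalg.norm(s):  # SR1 safeguard
        B += np.outer(r, r) / (r @ s)             # Symmetric Rank One correction
    x = x + s
print(np.linalg.norm(B - H))                      # -> 0: approximations converge
```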

15.
We introduce two new algorithms to minimise smooth difference of convex (DC) functions that accelerate the convergence of the classical DC algorithm (DCA). We prove that the point computed by DCA can be used to define a descent direction for the objective function evaluated at this point. Our algorithms are based on a combination of DCA together with a line search step that uses this descent direction. Convergence of the algorithms is proved and the rate of convergence is analysed under the Łojasiewicz property of the objective function. We apply our algorithms to a class of smooth DC programs arising in the study of biochemical reaction networks, where the objective function is real analytic and thus satisfies the Łojasiewicz property. Numerical tests on various biochemical models clearly show that our algorithms outperform DCA, being on average more than four times faster in both computational time and number of iterations. Numerical experiments show that the algorithms are globally convergent to a non-equilibrium steady state of various biochemical networks, with only chemically consistent restrictions on the network topology.
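A sketch of the acceleration (the DC decomposition f = g - h with g(x) = 2x^2 and h(x) = x^2 - cos(x), so f(x) = x^2 + cos(x), is a toy of ours): the DCA point y defines the descent direction d = y - x, and a backtracking line search along d takes a longer step than plain DCA would:

```python
import numpy as np

f = lambda x: x ** 2 + np.cos(x)          # smooth DC objective, f = g - h
h_grad = lambda x: 2 * x + np.sin(x)      # gradient of convex h(x) = x^2 - cos(x)
g_grad_inv = lambda v: v / 4.0            # solves grad g(y) = v for g(x) = 2x^2

x = 3.0
for _ in range(50):
    y = g_grad_inv(h_grad(x))             # classical DCA step: grad g(y) = grad h(x)
    d = y - x                             # descent direction for f at y
    lam = 2.0
    while lam > 1e-8 and f(y + lam * d) > f(y):
        lam *= 0.5                        # backtracking line search along d
    x = y + lam * d if lam > 1e-8 else y  # boosted (line-search) DCA iterate
print(x, f(x))                            # critical point: x -> 0, f -> 1
```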

16.
Wen Huang, Ke Wei. Mathematical Programming, 2022, 194(1-2): 371-413.

In the Euclidean setting, the proximal gradient method and its accelerated variants are a class of efficient algorithms for optimization problems with decomposable objective. In this paper, we develop a Riemannian proximal gradient method (RPG) and its accelerated variant (ARPG) for similar problems constrained to a manifold. The global convergence of RPG is established under mild assumptions, and an O(1/k) convergence rate is derived for RPG based on the notion of retraction convexity. If the objective function obeys the Riemannian Kurdyka–Łojasiewicz (KL) property, it is further shown that the sequence generated by RPG converges to a single stationary point. As in the Euclidean setting, a local convergence rate can be established if the objective function satisfies the Riemannian KL property with an exponent. Moreover, we show that the restriction of a semialgebraic function to the Stiefel manifold satisfies the Riemannian KL property, which covers, for example, the well-known sparse PCA problem. Numerical experiments on random and synthetic data are conducted to test the performance of the proposed RPG and ARPG.

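A stripped-down sketch of the Riemannian machinery RPG builds on, on the sphere with a smooth objective (a leading eigenvector problem; data and step size are ours). The full method additionally solves a proximal subproblem for a nonsmooth term, e.g. the l1 penalty in sparse PCA, and works with general retractions:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((20, 20)); A = A + A.T    # symmetric matrix

retract = lambda x, v: (x + v) / np.linalg.norm(x + v)   # retraction on the sphere

x = rng.standard_normal(20); x /= np.linalg.norm(x)
for _ in range(1000):
    egrad = 2 * A @ x                             # Euclidean gradient of x^T A x
    rgrad = egrad - (x @ egrad) * x               # projection onto the tangent space
    x = retract(x, -0.01 * rgrad)                 # Riemannian gradient step
print(x @ A @ x, np.linalg.eigvalsh(A)[0])        # objective nears the min eigenvalue
```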

17.
Trust region (TR) algorithms are a class of recently developed algorithms for nonlinear optimization. A new family of TR algorithms for unconstrained optimization, which extends the usual TR method, is presented in this paper. When the objective function is bounded below and continuously differentiable, and the norm of the Hessian approximations increases at most linearly with the iteration number, we prove the global convergence of the algorithms. Limited numerical results are reported, which indicate that our new TR algorithm is competitive.

18.
Trust region (TR) algorithms are a class of recently developed algorithms for nonlinear optimization. A new family of TR algorithms for unconstrained optimization, which is an extension of the usual TR method, is presented in this paper. When the objective function is bounded below and continuously differentiable, and the norm of the Hessian approximations increases at most linearly with the iteration number, we prove the global convergence of the algorithms. Limited numerical results are reported, which indicate that our new TR algorithm is competitive.

19.
In this paper, we consider the optimization of time-varying objective functions known only through estimates. Recent research has defined algorithms for static optimization problems. Based on one of these algorithms, we derive an optimization scheme for the time-varying case. In stochastic optimization problems, convergence of an algorithm to the optimum prevents the algorithm from adapting efficiently to changes in the objective function if it is time-varying. Convergence therefore cannot be required in a time-varying scenario. Rather, we require convergence to the optimum with high probability together with satisfactory dynamical behavior. Analytical and simulation results illustrate the performance of the proposed algorithm compared with other optimization techniques.

20.
A concept of local approximation of a function is introduced. This concept is defined via directional derivatives; consequently, the local approximation is carried out by a positively homogeneous mapping. We obtain local approximations for functions that are neither necessarily locally Lipschitzian nor continuous. This is the case for some large classes of functions, such as stable functions and contingently epidifferentiable, directionally Lipschitzian functions. Using the concept of topological equivalence, we establish the existence of a local coordinate transformation between the original function and the positively homogeneous function. This investigation is developed for contingently epidifferentiable functions around a noncritical point, and for noncontingently epidifferentiable functions under particular conditions.
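For concreteness (notation ours, not necessarily the paper's): the first order object behind this construction is the directional derivative, and the local approximation it induces is positively homogeneous:

```latex
h(d) \;=\; f'(x;\,d) \;=\; \lim_{t \downarrow 0} \frac{f(x+td)-f(x)}{t},
\qquad
h(\lambda d) \;=\; \lambda\, h(d) \quad \text{for all } \lambda > 0,
```

so that near \(x\) one has the first order expansion \(f(x+d) \approx f(x) + h(d)\), an approximation by a positively homogeneous map rather than a linear one.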

