Similar Literature
20 related items found.
1.
In an earlier paper, the authors introduced epigraphical nesting of objective functions as a means to characterize the convergence of global optimization algorithms. Epigraphical nesting of objective functions may be viewed as a relaxation of epigraphical convergence of objective functions, whereby one ensures that the epigraphs of the approximations asymptotically contain the epigraph of the objective function of the original optimization problem. In this paper, we show that, for algorithms that seek only a stationary point, convergence can be assured by objective function approximations whose directional derivatives attain an epigraphical nesting property. We demonstrate that epigraphical nesting provides a unifying thread that ties together a number of different algorithms, including those for the solution of variational inequalities and for smooth as well as nonsmooth optimization. We show that the Newton method and its variants for unconstrained optimization, successive quadratic programming methods for constrained optimization, and proximal point algorithms (deterministic and stochastic) all construct approximations in which the directional derivatives attain an epigraphical nesting property, even though the approximations themselves fail to attain such a property. This work was supported in part by Grants NSF-DDM-89-10046 and NSF-DDM-91-14352 from the National Science Foundation.
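For reference, a compact sketch of the notion in standard notation (this uses the usual definition of the epigraph; the paper's precise nesting condition may be stated somewhat differently):

```latex
% Epigraph of an extended-real-valued function f:
\[
  \operatorname{epi} f = \{ (x,\alpha) \in \mathbb{R}^n \times \mathbb{R} : f(x) \le \alpha \}.
\]
% Epigraphical nesting of approximations f_k: their epigraphs asymptotically
% contain the epigraph of the original objective,
\[
  \operatorname{epi} f \;\subseteq\; \liminf_{k \to \infty} \operatorname{epi} f_k ,
\]
% a weaker requirement than full epi-convergence of f_k to f.
```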

2.
For optimization problems with computationally demanding objective functions and subgradients, inexact subgradient methods (IXS) have been introduced by using successive approximation schemes within subgradient optimization methods (Au et al., 1994). In this paper, we develop alternative solution procedures that utilize the primal-dual information of IXS. This approach is especially useful when the projection operation onto the feasible set is difficult. We also demonstrate its applicability to stochastic linear programs.
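As an illustration, a minimal sketch of a generic inexact projected subgradient iteration (this is not the specific primal-dual IXS scheme of the paper; `approx_subgrad`, `project`, and the stepsize rule are placeholder assumptions):

```python
import numpy as np

def inexact_projected_subgradient(approx_subgrad, project, x0, n_iter=200):
    """Generic sketch: x_{k+1} = P_X(x_k - t_k * g_k), where g_k is only an
    approximate subgradient supplied by an inexact oracle."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = approx_subgrad(x, k)        # inexact subgradient (bounded error assumed)
        t = 1.0 / (k + 1)               # diminishing stepsize
        x = project(x - t * g)          # projection onto the feasible set X
    return x
```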

3.
The two-stage stochastic program with recourse is known to have numerous applications in financial planning, energy modeling, telecommunications systems, etc. Notwithstanding its applicability, the two-stage stochastic program is limited in its ability to incorporate a decision maker's attitude towards risk. In this paper, we present an extension via the inclusion of a recourse constraint. This results in a convex integrated chance constraint (ICC), which inherits the convexity properties of two-stage programs. However, it also inherits some of the difficulties associated with the evaluation of recourse functions. This motivates our study of conditions that may be applicable to algorithms using statistical approximations of such an ICC. We present a set of sufficient conditions that these approximations may satisfy in order to assure convergence. Our conditions are satisfied by a wide range of statistical approximations, and we demonstrate that these approximations can be generated within standard algorithmic procedures. This work was supported in part by Grant No. NSF-DDM-9114352 from the National Science Foundation.
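For orientation, one common generic form of an integrated chance constraint (a sketch; the shortfall function η and the tolerance β are placeholders, and the paper's exact formulation may differ):

```latex
% Integrated chance constraint: bound the expected shortfall rather than the
% probability of a shortfall,
\[
  \mathbb{E}_{\omega}\!\left[ \max\{0,\ \eta(x,\omega)\} \right] \;\le\; \beta ,
\]
% which is convex in x whenever \eta(\cdot,\omega) is convex for every scenario \omega.
```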

4.
In this paper, we consider a generic inexact subgradient algorithm to solve a nondifferentiable quasi-convex constrained optimization problem. The inexactness stems from computation errors and noise, which arise from practical considerations and applications. Assuming that the computational errors and noise are deterministic and bounded, we study the effect of the inexactness on the subgradient method when the constraint set is compact or the objective function has a set of generalized weak sharp minima. In both cases, using the constant and diminishing stepsize rules, we describe convergence results in both objective values and iterates, as well as finite convergence to approximate optimality. We also investigate efficiency estimates of the iterates and apply the inexact subgradient algorithm to solve the Cobb–Douglas production efficiency problem. The numerical results verify our theoretical analysis and show the high efficiency of the proposed algorithm, especially for large-scale problems.

5.
The Lagrangian dual of an integer program can be formulated as a min-max problem where the objective function is convex, piecewise affine and, hence, nonsmooth. It is usually tackled by means of subgradient algorithms, multiplier adjustment techniques, or even more sophisticated nonsmooth optimization methods such as bundle-type algorithms. Recently, a new approach to solving unconstrained convex finite min-max problems has been proposed, which has the nice property of working almost independently of the exact evaluation of the objective function at each iterate.
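For concreteness, a sketch of the structure being exploited, namely the standard Lagrangian dual of an integer program whose feasible set X is finite (the notation is generic, not taken from the paper):

```latex
% Lagrangian dual of  min { c^T x : A x >= b,  x in X },  with X finite:
\[
  \theta(\lambda) = \min_{x \in X} \left( c^{\top} x + \lambda^{\top} (b - A x) \right),
  \qquad \lambda \ge 0 .
\]
% \theta is the pointwise minimum of finitely many affine functions of \lambda,
% hence concave and piecewise affine; the dual problem \max_{\lambda \ge 0} \theta(\lambda)
% is equivalent to the convex, nonsmooth min-max problem described above.
```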

6.
We consider convex stochastic optimization problems under different assumptions on the properties of the available stochastic subgradient. It is known that, if the value of the objective function is available, one can obtain, in parallel, several independent approximate solutions in terms of the objective residual expectation. Then, choosing the solution with the minimum function value, one can control the probability of large deviation of the objective residual. In contrast, in this short paper, we address the situation when the value of the objective function is unavailable or too expensive to calculate. Under a "light-tail" assumption on the stochastic subgradient, and in the general case with moderate large-deviation probability, we show that parallelization combined with averaging gives bounds on the probability of large deviation similar to those of a serial method. Thus, in these cases, one can benefit from parallel computations and reduce the computational time without loss in solution quality.

7.
We propose an inexact proximal bundle method for constrained nonsmooth nonconvex optimization problems whose objective and constraint functions are known through oracles that provide inexact information. The errors in the function and subgradient evaluations may be unknown, but are assumed to be bounded. To handle the nonconvexity, we first use the redistributed idea, and then address the additional difficulties introduced by the inexactness of the available information. We further employ a modified improvement function to deal with the difficulties caused by the constraint functions. The numerical results show the good performance of our inexact method on a large class of nonconvex optimization problems. The approach is also assessed on semi-infinite programming problems, and some encouraging numerical experiences are provided.
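As background, a sketch of the basic proximal bundle subproblem on which such methods build (convex, exact-oracle case; the paper's redistributed and inexact variant modifies both the model and the handling of errors):

```latex
% Cutting-plane model built from linearizations (x_j, f(x_j), g_j), with g_j a subgradient at x_j:
\[
  \check f_k(x) = \max_{j \in J_k} \left( f(x_j) + g_j^{\top} (x - x_j) \right).
\]
% Proximal bundle step around the current stability center \hat x_k, with \mu_k > 0:
\[
  x_{k+1} \in \arg\min_{x} \left\{ \check f_k(x) + \tfrac{\mu_k}{2}\, \lVert x - \hat x_k \rVert^2 \right\}.
\]
```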

8.
An important field of application of non-smooth optimization is the decomposition of large-scale or complex problems by Lagrangian duality. In this setting, the dual problem consists in maximizing a concave non-smooth function that is defined as the sum of sub-functions. The evaluation of each sub-function requires solving a specific optimization sub-problem, with its own computational complexity. Typically, some sub-functions are hard to evaluate, while others are practically straightforward. When applying a bundle method to maximize this type of dual function, the computational burden of solving the sub-problems is preponderant in the whole iterative process. We propose to take full advantage of such separable structure by making a dual bundle iteration after having evaluated only a subset of the dual sub-functions, instead of all of them. This type of incremental approach has already been applied to subgradient algorithms. In this work we instead use a specialized variant of bundle methods and show that such an approach is related to bundle methods with inexact linearizations. We analyze the convergence properties of two incremental-like bundle methods. We apply the incremental approach to a generation planning problem over a horizon of one to three years. This is a large-scale stochastic program that cannot be solved by a direct frontal approach. For a real-life application on the French power mix, we obtain encouraging numerical results, achieving a significant improvement in speed without losing accuracy.

9.
Inspired by the successful applications of the stochastic optimization with second order stochastic dominance (SSD) model in portfolio optimization, we study new numerical methods for a general SSD model where the underlying functions are not necessarily linear. Specifically, we penalize the SSD constraints to the objective under Slater's constraint qualification and then apply the well-known stochastic approximation (SA) method and the level function method to solve the penalized problem. Both methods are iterative: the former requires calculating an approximate subgradient of the objective function of the penalized problem at each iterate, while the latter requires calculating a subgradient. Under some moderate conditions, we show that, with probability 1, the sequence of approximate solutions generated by the SA method converges to an optimal solution of the true problem. As for the level function method, the convergence is deterministic, and in some cases we are able to estimate the number of iterations required for a given precision. Both methods are applied to a portfolio optimization problem where the return functions are not necessarily linear, and some numerical test results are reported.

10.
In this paper, we study the influence of noise on subgradient methods for convex constrained optimization. The noise may be due to various sources and is manifested in inexact computation of the subgradients and function values. Assuming that the noise is deterministic and bounded, we discuss the convergence properties for two cases: the case where the constraint set is compact, and the case where this set need not be compact but the objective function has a sharp set of minima (for example, when the function is polyhedral). In both cases, using several different stepsize rules, we prove convergence to the optimal value within some tolerance that is given explicitly in terms of the errors. In the first case, the tolerance is nonzero, but in the second case, the optimal value can be obtained exactly, provided the size of the error in the subgradient computation is below some threshold. We then extend these results to objective functions that are the sum of a large number of convex functions, in which case an incremental subgradient method can be used.
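A minimal sketch of the incremental subgradient idea mentioned at the end, for an objective of the form f(x) = f_1(x) + ... + f_m(x) (the component subgradient oracles and the projection are placeholders; the error model analyzed in the paper is omitted):

```python
import numpy as np

def incremental_subgradient(component_subgrads, project, x0, stepsize=1e-2, n_epochs=50):
    """Cycle through the components f_i of f = sum_i f_i, taking one subgradient
    step per component instead of one step on the full sum."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_epochs):
        for sg in component_subgrads:      # one pass over all components
            x = project(x - stepsize * sg(x))
    return x
```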

11.
Quasi-convex optimization is fundamental to the modelling of many practical problems in various fields such as economics, finance and industrial organization. Subgradient methods are practical iterative algorithms for solving large-scale quasi-convex optimization problems. In the present paper, focusing on quasi-convex optimization, we develop an abstract convergence theorem for a class of sequences that satisfy a general basic inequality, under some suitable assumptions on the parameters. The convergence properties, in both function values and distances of iterates from the optimal solution set, are discussed. The abstract convergence theorem covers the relevant results of many types of subgradient methods studied in the literature, for either convex or quasi-convex optimization. Furthermore, we propose a new subgradient method, in which a perturbation of the successive direction is employed at each iteration. As an application of the abstract convergence theorem, we obtain convergence results for the proposed subgradient method under the assumption of a Hölder condition of order p, using the constant, diminishing or dynamic stepsize rules, respectively. A preliminary numerical study shows that the proposed method outperforms the standard, stochastic and primal-dual subgradient methods in solving the Cobb–Douglas production efficiency problem.

12.
This paper considers a distributed optimization problem encountered in a time-varying multi-agent network, where each agent has local access to its own convex objective function, and the agents cooperatively minimize the sum of their objective functions over the network. Based on the mirror descent method, we develop a distributed algorithm that utilizes subgradient information with stochastic errors. We first analyze the effects of the stochastic errors on the convergence of the algorithm and then provide an explicit bound on the convergence rate as a function of the error bound and the number of iterations. Our results show that the algorithm asymptotically converges to the optimal value of the problem within an error level when there are stochastic errors in the subgradient evaluations. The proposed algorithm can be viewed as a generalization of the distributed subgradient projection methods, since it utilizes a more general Bregman divergence instead of the Euclidean squared distance. Finally, some simulation results on a regularized hinge regression problem are presented to illustrate the effectiveness of the algorithm.
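For illustration, a single-agent mirror descent sketch with the entropy mirror map on the probability simplex, whose Bregman divergence is the KL divergence (this is not the distributed algorithm above, which additionally mixes iterates across agents; `subgrad` and the stepsize rule are placeholder assumptions):

```python
import numpy as np

def entropic_mirror_descent(subgrad, x0, n_iter=500):
    """Mirror descent on the probability simplex with the entropy mirror map:
    the Bregman projection reduces to a multiplicative update plus renormalization."""
    x = np.asarray(x0, dtype=float)
    x = x / x.sum()                       # start on the simplex
    for k in range(n_iter):
        g = subgrad(x)                    # (possibly noisy) subgradient
        step = 1.0 / np.sqrt(k + 1)       # diminishing stepsize
        w = x * np.exp(-step * g)         # entropic mirror descent step
        x = w / w.sum()                   # renormalize back onto the simplex
    return x
```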

13.
New first-order methods are introduced for solving convex optimization problems from a fairly broad class. For composite optimization problems with an inexact stochastic oracle, a stochastic intermediate gradient method is proposed that allows the use of an arbitrary norm in the space of variables and a prox-function. The mean rate of convergence of this method and the probability of large deviations from this rate are estimated. For problems with a strongly convex objective function, a modification of this method is proposed and its rate of convergence is estimated. The resulting estimates coincide, up to a multiplicative constant, with the lower complexity bounds for the class of composite optimization problems with an inexact stochastic oracle and for all the usually considered subclasses of this class.

14.
Local convergence analysis for partitioned quasi-Newton updates
This paper considers local convergence properties of inexact partitioned quasi-Newton algorithms for the solution of certain non-linear equations and, in particular, the optimization of partially separable objective functions. Using the bounded deterioration principle, one obtains local and linear convergence, which implies Q-superlinear convergence under the usual conditions on the quasi-Newton updates. For the optimization case, these conditions are shown to be satisfied by any sequence of updates within the convex Broyden class, even if some Hessians are singular at the minimizer. Finally, local and Q-superlinear convergence is established for an inexact partitioned variable metric method under mild assumptions on the initial Hessian approximations. This work was supported by a research grant of the Deutsche Forschungsgemeinschaft, Bonn, and carried out at the Department of Applied Mathematics and Theoretical Physics, Cambridge (United Kingdom).
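As a sketch of the structure that partitioned updates exploit (the element functions f_i and selection matrices U_i are generic notation, not taken from the paper):

```latex
% Partially separable objective: a sum of element functions, each depending on
% only a few of the variables through a selection matrix U_i:
\[
  f(x) = \sum_{i=1}^{m} f_i(U_i x),
  \qquad
  \nabla^2 f(x) = \sum_{i=1}^{m} U_i^{\top} \nabla^2 f_i(U_i x)\, U_i .
\]
% A partitioned quasi-Newton method maintains a small approximation B_i of each
% element Hessian \nabla^2 f_i and assembles B = \sum_i U_i^{\top} B_i U_i.
```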

15.
In this work, we consider numerical methods for solving a class of block three-by-three saddle-point problems, which arise from finite element methods for solving time-dependent Maxwell equations and from some other applications. The direct extension of the Uzawa method to this block three-by-three saddle-point problem requires the exact solution of a symmetric indefinite system of linear equations at each step. To avoid heavy computations at each step, we propose an inexact Uzawa method, which solves the symmetric indefinite linear system in some inexact way. Under suitable assumptions, we show that the inexact Uzawa method converges to the unique solution of the saddle-point problem within the approximation level. Two specialized algorithms are customized for the inexact Uzawa method by combining it with a splitting iteration method and with a preconditioning technique, respectively. Numerical experiments are presented, which demonstrate the usefulness of the inexact Uzawa method and the two customized algorithms.
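For orientation, the classical Uzawa iteration for a two-block saddle-point system (a simplified sketch; the paper treats a block three-by-three system and replaces the inner solve with an inexact one):

```latex
% Two-block saddle-point system:
\[
  \begin{pmatrix} A & B^{\top} \\ B & 0 \end{pmatrix}
  \begin{pmatrix} x \\ y \end{pmatrix}
  =
  \begin{pmatrix} f \\ g \end{pmatrix}.
\]
% Exact Uzawa iteration with relaxation parameter \omega > 0:
\[
  A x_{k+1} = f - B^{\top} y_k ,
  \qquad
  y_{k+1} = y_k + \omega \,( B x_{k+1} - g ) .
\]
% An inexact Uzawa method solves the first (inner) system only approximately at each step.
```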

16.
Locating proximal points is a component of numerous minimization algorithms. This work focuses on developing a method to find the proximal point of a convex function at a given point, given an inexact oracle. Our method assumes that exact function values are at hand, but exact subgradients are either not available or not useful. We use approximate subgradients to build a model of the objective function, and prove that the method converges to the true prox-point within an acceptable tolerance. The subgradient g_k used at each step k is such that the distance from g_k to the true subdifferential of the objective function at the current iteration point is bounded by some fixed ε > 0. The algorithm includes a novel tilt-correct step applied to the approximate subgradient.
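The object being computed is the standard proximal point; as a sketch, with prox parameter r > 0 (the notation is generic, not necessarily the paper's):

```latex
% Proximal point of a convex function f at the point z, with parameter r > 0:
\[
  \operatorname{prox}_{f,r}(z) = \arg\min_{x} \left\{ f(x) + \tfrac{r}{2}\, \lVert x - z \rVert^2 \right\},
\]
% which the method above approximates using exact values f(x_k) but only approximate
% subgradients g_k satisfying \operatorname{dist}\!\big(g_k, \partial f(x_k)\big) \le \varepsilon.
```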

17.
W. Hare, Optimization Letters, 2017, 11(7): 1217-1227
Derivative-free optimization (DFO) is the mathematical study of optimization algorithms that do not use derivatives. One branch of DFO focuses on model-based DFO methods, where an approximation of the objective function is used to guide the optimization algorithm. Proving convergence of such methods often relies on the assumption that the approximations form fully linear models, an assumption that requires the true objective function to be smooth. However, some recent methods have loosened this assumption and instead work with functions that are compositions of smooth functions with simple convex functions (the max-function or the \(\ell_1\) norm). In this paper, we examine the error bounds resulting from the composition of a convex lower semi-continuous function with a smooth vector-valued function when it is possible to provide fully linear models for each component of the vector-valued function. We derive error bounds for the resulting function values and subgradient vectors.

18.
Gap Functions for Equilibrium Problems
The theory of gap functions, developed in the literature for variational inequalities, is extended to a general equilibrium problem. Descent methods, with exact and inexact line-search rules, are proposed. It is shown that these methods are a generalization of the gap function algorithms for variational inequalities and optimization problems.
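As background, a sketch of the classical gap function for a variational inequality VI(F, K), which is the construction being extended to equilibrium problems (standard notation, not taken from the paper):

```latex
% Variational inequality: find x^* \in K such that
% \langle F(x^*),\, y - x^* \rangle \ge 0 for all y \in K.
% Classical gap function:
\[
  g(x) = \sup_{y \in K} \ \langle F(x),\, x - y \rangle ,
\]
% so that g(x) \ge 0 for every x \in K, and g(x^*) = 0 exactly when x^* solves the VI;
% minimizing g over K is therefore equivalent to solving the variational inequality.
```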

19.
Based on the gradient sampling technique, we present a subgradient algorithm to solve the nondifferentiable convex optimization problem with an extended real-valued objective function. A feature of our algorithm is the approximation of a subgradient at a point via random sampling of (relative) gradients at nearby points, and then taking convex combinations of these (relative) gradients. We prove that our algorithm converges to an optimal solution with probability 1. Numerical results demonstrate that our algorithm performs favorably compared with existing subgradient algorithms on the applications considered.
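A minimal sketch of the sampling step described above, using an equal-weight convex combination (an average) of gradients at randomly perturbed nearby points; the sampling distribution, the radius, and the handling of relative gradients are placeholder assumptions, and practical gradient-sampling methods often use a minimum-norm convex combination instead:

```python
import numpy as np

def sampled_subgradient(grad, x, radius=1e-4, n_samples=20, rng=None):
    """Approximate a subgradient at x by averaging gradients evaluated at
    randomly perturbed points near x."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    points = x + radius * rng.standard_normal((n_samples, x.size))
    grads = np.array([grad(p) for p in points])   # gradients at nearby points
    return grads.mean(axis=0)                     # equal-weight convex combination
```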

20.
We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications, where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates for the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, where the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
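A sketch of the basic saddle-point subgradient iteration that such methods build on, for L(x, y) convex in x and concave in y, with projections onto X and Y and stepsize α_k (generic form, not the paper's specific variant):

```latex
% One iteration of a projected subgradient method for a saddle point of L on X \times Y:
\[
  x_{k+1} = P_X\!\big( x_k - \alpha_k\, u_k \big),
  \qquad u_k \in \partial_x L(x_k, y_k),
\]
\[
  y_{k+1} = P_Y\!\big( y_k + \alpha_k\, v_k \big),
  \qquad v_k \ \text{a supergradient of } L(x_k, \cdot) \ \text{at } y_k,
\]
% with the running averages \bar x_K = \tfrac{1}{K}\sum_{k=1}^{K} x_k and
% \bar y_K = \tfrac{1}{K}\sum_{k=1}^{K} y_k typically returned as the approximate saddle point.
```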
