Similar Documents
20 similar documents found.
1.
A readily implementable algorithm is proposed for minimizing any convex, not necessarily differentiable, function f of several variables subject to a finite number of linear constraints. The algorithm requires only the calculation of f at designated feasible points. At each iteration a lower polyhedral approximation to f is constructed from a few previously calculated subgradients and an aggregate subgradient. The recursively updated aggregate subgradient accumulates the subgradient information to cope with the nondifferentiability of f. The polyhedral approximation and the linear constraints generate the constraints of the search-direction-finding subproblem, which is a quadratic programming problem. A step length is then found by an approximate line search. It is shown that the algorithm converges to a solution, if one exists.
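A minimal sketch of the proximal-bundle direction-finding subproblem such methods solve (our own illustration, not the paper's implementation: the linear constraints are omitted, and the test function, solver choice and crude acceptance rule are all assumptions). The cutting-plane model is minimized with a quadratic stabilizer via the epigraph reformulation min_{d,v} v + 0.5||d||^2 subject to v >= f(x_j) + <g_j, x_k + d - x_j>:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):        # illustrative nonsmooth convex objective: a max of affine pieces
    return max(x[0] - 1.0, -x[0] - 1.0, 0.5 * x[1])

def subgrad(x):  # one subgradient of f at x (gradient of an active piece)
    vals = [x[0] - 1.0, -x[0] - 1.0, 0.5 * x[1]]
    grads = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]), np.array([0.0, 0.5])]
    return grads[int(np.argmax(vals))]

def bundle_direction(xk, bundle):
    """Minimize the cutting-plane model plus 0.5||d||^2; variables z = (d, v)."""
    n = len(xk)
    obj = lambda z: z[n] + 0.5 * float(z[:n] @ z[:n])
    cons = [{'type': 'ineq',
             'fun': lambda z, xj=xj, fj=fj, gj=gj:
                    z[n] - (fj + gj @ (xk + z[:n] - xj))}
            for (xj, fj, gj) in bundle]
    res = minimize(obj, np.zeros(n + 1), method='SLSQP', constraints=cons)
    return res.x[:n]

xk = np.array([2.0, 2.0])
bundle = [(xk, f(xk), subgrad(xk))]
for _ in range(15):                    # serious step on descent, null step otherwise
    d = bundle_direction(xk, bundle)
    y = xk + d
    bundle.append((y, f(y), subgrad(y)))   # every trial point enriches the model
    if f(y) < f(xk):
        xk = y
print(xk, f(xk))                       # f should approach its minimum value -1
```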

2.
Piecewise affine functions arise from Lagrangian duals of integer programming problems, and optimizing them provides good bounds for use in a branch and bound method. Methods such as the subgradient method and bundle methods assume only one subgradient is available at each point, but in many situations there is more information available. We present a new method for optimizing such functions, which is related to steepest descent, but uses an outer approximation to the subdifferential to avoid some of the numerical problems with the steepest descent approach. We provide convergence results for a class of outer approximations, and then develop a practical algorithm using such an approximation for the compact dual to the linear programming relaxation of the uncapacitated facility location problem. We make a numerical comparison of our outer approximation method with the projection method of Conn and Cornuéjols, and the bundle method of Schramm and Zowe.
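A hedged sketch of the core computation behind such steepest-descent-type methods (our construction; the subgradient set below is made up): the search direction is the negative of the minimum-norm element of the convex hull of the known subgradients, found here by a small QP over the unit simplex.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_element(G):
    """Rows of G span an outer approximation of the subdifferential;
    return argmin ||sum_i w_i g_i|| over weights w in the unit simplex."""
    m = G.shape[0]
    obj = lambda w: float(w @ G @ G.T @ w)          # = ||G^T w||^2
    cons = [{'type': 'eq', 'fun': lambda w: float(np.sum(w)) - 1.0}]
    res = minimize(obj, np.full(m, 1.0 / m), method='SLSQP',
                   bounds=[(0.0, 1.0)] * m, constraints=cons)
    return res.x @ G                                # the min-norm subgradient

G = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])   # subgradients at a kink
g = min_norm_element(G)
d = -g                       # steepest-descent direction; g ~ 0 signals optimality
print(g, d)
```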

3.
We present an approximate bundle method for solving nonsmooth equilibrium problems. An inexact cutting-plane linearization of the objective function is established at each iteration; it is an approximation produced by an oracle that gives inaccurate values for the functions and subgradients. The errors in the function and subgradient evaluations are bounded and need not vanish in the limit. A descent criterion adapted to the setting of inexact oracles is put forward to measure the current descent behavior. Under proper assumptions, the sequence generated by the algorithm converges to approximately critical points of the equilibrium problem. As a special illustration, the proposed algorithm is applied to generalized variational inequality problems. Numerical experiments show that the algorithm is effective in solving nonsmooth equilibrium problems.

4.
We present a method able to recover location and residue of poles of functions meromorphic in a half-plane from samples of the function on the real positive semi-axis. The function is assumed to satisfy appropriate asymptotic conditions including, in particular, that required by Carlson’s theorem. The peculiar features of the present procedure are: (a) it does not make use of the approximation of meromorphic functions by rational functions; (b) it does not use the standard methods of regularization of ill-posed problems. The data required for the determination of the pole parameters (i.e., location and residue) are the approximate values of the meromorphic function on a finite set of equidistant points on the real positive semi-axis. We show that this method is numerically stable by proving that the algorithm is convergent as the number of data points tends to infinity and the noise on the input data goes to zero. Moreover, we can also evaluate the degree of approximation of the estimates of pole location and residue which we obtain from the knowledge of a finite number of noisy samples.

5.
《Optimization》2012,61(3):275-288
The increasing interest in partially-linear models stems from their role in economics, technology and mathematics. Every continuous partially-linear function with a finite number of linear pieces can be expressed by means of a linear function and absolute value functions. If we represent every absolute value function as a vertex of a graph, we obtain a graph interpretation of this optimization problem. Partially-linear problems are often not only nondifferentiable but nonconvex as well, which makes solving them by standard optimization methods difficult. In this work we give necessary and sufficient conditions for local optimality. An algorithm for solving such problems, based on constructive optimization methods, is suggested.
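A small worked example of the representation the abstract mentions (our code, standard construction): a continuous piecewise-linear function with kinks at points d_i can be written as f(x) = a + b*x + sum_i c_i*|x - d_i|, where 2*c_i is the jump in slope at d_i.

```python
import numpy as np

d = [0.0, 2.0]                      # kink locations d_i
slopes = [-1.0, 0.5, 2.0]           # slopes on (-inf,0), (0,2), (2,inf)
c = [(slopes[i + 1] - slopes[i]) / 2.0 for i in range(2)]  # half the slope jumps
b = (slopes[0] + slopes[-1]) / 2.0  # mean of the two extreme slopes
a = 1.0 - sum(ci * abs(0.0 - di) for ci, di in zip(c, d))  # calibrate f(0) = 1

f = lambda x: a + b * x + sum(ci * abs(x - di) for ci, di in zip(c, d))

def f_direct(x):                    # the same function, defined piece by piece
    if x <= 0.0:
        return 1.0 - x              # slope -1 left of 0, with f(0) = 1
    if x <= 2.0:
        return 1.0 + 0.5 * x        # slope 0.5 between the kinks
    return 2.0 + 2.0 * (x - 2.0)    # slope 2 right of 2, continuous at 2

for x in np.linspace(-3.0, 5.0, 17):
    assert abs(f(x) - f_direct(x)) < 1e-12   # the two definitions agree
print("representation verified")
```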

6.
In this paper we develop a primal-dual subgradient algorithm for preferably decomposable, generally nondifferentiable, convex programming problems, under usual regularity conditions. The algorithm employs a Lagrangian dual function along with a suitable penalty function satisfying a specified set of properties, in order to generate a sequence of primal and dual iterates of which some subsequence converges to a pair of primal-dual optimal solutions. Several classical types of penalty functions are shown to satisfy these properties. A geometric convergence rate is established for the algorithm under some additional assumptions. This approach has three principal advantages. Firstly, both primal and dual solutions are available, which proves useful in several contexts. Secondly, the choice of step sizes, which plays an important role in subgradient optimization, is guided more reliably in this method via primal and dual information. Thirdly, typical subgradient algorithms lack an appropriate stopping criterion, so the quality of the solution obtained after a finite number of steps is usually unknown. In contrast, by using the primal-dual gap, the proposed algorithm possesses a natural stopping criterion.
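A minimal sketch of a primal-dual subgradient scheme with the duality gap as the natural stopping criterion (our toy instance, not the paper's penalty-function method: the quadratic objective, the closed-form Lagrangian minimizer and the step rule are all assumptions):

```python
import numpy as np

A = np.array([[1.0, 1.0]]); b = np.array([-1.0])    # feasible set: x1 + x2 <= -1

def primal_min(u):       # argmin_x ||x||^2 + u^T (A x - b)  ->  x = -A^T u / 2
    return -A.T @ u / 2.0

u = np.zeros(1)
for k in range(1, 2001):
    x = primal_min(u)                               # primal iterate from the dual one
    q = float(x @ x) + float(u @ (A @ x - b))       # dual value: a lower bound
    viol = max(0.0, float((A @ x - b)[0]))
    x_feas = x - viol * A[0] / float(A[0] @ A[0])   # project onto the half-space
    gap = float(x_feas @ x_feas) - q                # primal upper minus dual lower
    if gap < 1e-6:                                  # the natural stopping criterion
        break
    g = A @ x - b                                   # a supergradient of q at u
    u = np.maximum(u + (1.0 / k) * g, 0.0)          # projected dual ascent step
print(k, x_feas, gap)                               # -> x* = (-0.5, -0.5)
```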

7.
Inspired by the successful applications of the stochastic optimization with second order stochastic dominance (SSD) model in portfolio optimization, we study new numerical methods for a general SSD model where the underlying functions are not necessarily linear. Specifically, we penalize the SSD constraints into the objective under Slater’s constraint qualification and then apply the well known stochastic approximation (SA) method and the level function method to solve the penalized problem. Both methods are iterative: the former requires calculating an approximate subgradient of the objective function of the penalized problem at each iterate, while the latter requires calculating a subgradient. Under some moderate conditions, we show that with probability 1 the sequence of approximate solutions generated by the SA method converges to an optimal solution of the true problem. For the level function method, the convergence is deterministic, and in some cases we are able to estimate the number of iterations for a given precision. Both methods are applied to a portfolio optimization problem where the return functions are not necessarily linear, and some numerical test results are reported.

8.
Subgradient projectors play an important role in optimization and in solving convex feasibility problems. For every locally Lipschitz function, we can define a subgradient projector via generalized subgradients even if the function is not convex. The paper consists of three parts. In the first part, we study basic properties of subgradient projectors and characterize when a subgradient projector is a cutter, a local cutter, or a quasi-nonexpansive mapping. We present global and local convergence analyses of subgradient projectors, with many examples to illustrate the theory. In the second part, we investigate the relationship between the subgradient projector of a prox-regular function and the subgradient projector of its Moreau envelope. We also characterize when a mapping is the subgradient projector of a convex function. In the third part, we focus on linearity properties of subgradient projectors. We show that, under appropriate conditions, a linear operator is a subgradient projector of a convex function if and only if it is a convex combination of the identity operator and a projection onto a subspace. In general, neither a convex combination nor a composition of subgradient projectors of convex functions is a subgradient projector of a convex function.
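For illustration, the standard subgradient projector construction studied there can be coded in a few lines (our sketch; the example function is an assumption): P(x) = x - (f(x)/||g(x)||^2) g(x) when f(x) > 0, and P(x) = x otherwise.

```python
import numpy as np

def subgradient_projector(f, g, x):
    fx = f(x)
    if fx <= 0.0:
        return x                               # already in the level set {f <= 0}
    gx = g(x)
    return x - (fx / float(gx @ gx)) * gx      # project onto the cutting hyperplane

f = lambda x: float(np.linalg.norm(x)) - 1.0   # level set: the unit ball
g = lambda x: x / np.linalg.norm(x)            # gradient of f away from the origin

x = np.array([3.0, 4.0])                       # ||x|| = 5
for _ in range(10):                            # iterating drives x to the ball
    x = subgradient_projector(f, g, x)
print(x, np.linalg.norm(x))                    # lands on the unit sphere
```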

9.
W. Hare 《Optimization Letters》2017,11(7):1217-1227
Derivative-free optimization (DFO) is the mathematical study of optimization algorithms that do not use derivatives. One branch of DFO focuses on model-based methods, in which an approximation of the objective function guides the optimization algorithm. Convergence proofs for such methods often assume that the approximations form fully linear models, an assumption that requires the true objective function to be smooth. However, some recent methods have loosened this assumption and instead work with functions that are compositions of smooth functions with simple convex functions (the max-function or the \(\ell_1\) norm). In this paper, we examine the error bounds resulting from the composition of a convex lower semi-continuous function with a smooth vector-valued function when it is possible to provide fully linear models for each component of the vector-valued function. We derive error bounds for the resulting function values and subgradient vectors.

10.
A reduced subgradient method for nonsmooth constrained problems
1. Introduction. For constrained nondifferentiable nonlinear programming problems, finding a minimum point is difficult because gradients cannot be used. This paper gives an effective algorithm for this problem. 2. The reduced subgradient method for nonsmooth constrained problems. 1) Lagrange duality for nonlinear programming. Consider the nonlinear programming problem \(\min f(x)\) s.t. \(g(x)\le 0\), \(h(x)=0\), where \(g(x)=(g_1(x),\ldots,g_r(x))^T\), \(h(x)=(h_1(x),\ldots,h_m(x))^T\), and \(f\), \(g_i\), \(h_j\) are Lipschitz continuous on \(R^n\), \(i=1,2,\ldots,r\), \(j=1,2,\ldots,m\). The corresponding Lagrange dual problem is \(\max_{u\ge 0}\,\theta(u,v)\), where \(\theta(u,v)=\inf_x L(x;u,v)=\inf_x\bigl(f(x)+u^T g(x)+v^T h(x)\bigr)\)…

11.
A readily implementable algorithm is given for minimizing a (possibly nondifferentiable and nonconvex) locally Lipschitz continuous function f subject to linear constraints. At each iteration a polyhedral approximation to f is constructed from a few previously computed subgradients and an aggregate subgradient, which accumulates the past subgradient information. This approximation and the linear constraints generate the constraints of the search-direction-finding subproblem, which is a quadratic programming problem. A stepsize is then found by an approximate line search. All of the algorithm's accumulation points are stationary. Moreover, the algorithm converges when f happens to be convex.

12.
《Optimization》2012,61(1-2):61-92
We consider finite-dimensional minimax problems for two traditional models: firstly, with box constraints on the variables and, secondly, taking into account a finite number of linear inequalities. We present finite exact primal and dual methods. These methods are adapted to a great extent to the specific structure of the cost function, which is formed by a finite number of linear functions. During the iterations of the primal method we make use of information from the dual problem, thereby increasing effectiveness. To improve the dual method we use the “long dual step” rule (the principle of full relaxation). The results are illustrated by numerical experiments.
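As a hedged illustration of the problem class (the data below are made up): minimizing the pointwise maximum of finitely many linear functions under box constraints reduces, via the epigraph reformulation, to a linear program.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [-1.0, 1.0], [0.0, -3.0]])  # rows a_i of the linear pieces
b = np.array([0.0, 1.0, 2.0])                         # offsets b_i
lo, hi = -1.0, 1.0                                    # box: lo <= x_j <= hi

# variables z = (x_1, x_2, t); minimize t subject to a_i.x + b_i <= t
c = np.array([0.0, 0.0, 1.0])
A_ub = np.hstack([A, -np.ones((3, 1))])               # rewritten as a_i.x - t <= -b_i
b_ub = -b
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(lo, hi), (lo, hi), (None, None)])
x, t = res.x[:2], res.x[2]
print(x, t, float(np.max(A @ x + b)))                 # t equals the attained maximum
```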

13.
An algorithm for minimizing functions of many variables, subject possibly to linear constraints on the variables, is described. A subproblem is solved in which a quadratic approximation to the objective function is minimized over a region in which the approximation is valid, and a strategy for deciding when this region should be expanded or contracted is given. The quadratic approximation estimates the Hessian of the objective function by a matrix that is updated at each iteration by a formula recently reported by Powell [6]. This formula enables convergence of the algorithm from any feasible point to be proved. Using such an approximation, rather than exact second derivatives, also reduces the number of operations needed to solve the subproblem by about 60%. Numerical evidence is reported showing that the algorithm is efficient in the number of function evaluations required to solve well known test problems. This paper was presented at the 7th International Mathematical Programming Symposium 1970, The Hague, The Netherlands.
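A schematic sketch of the expand/contract strategy the abstract describes (our own simplification, not Powell's exact rules; the Cauchy-point step, the thresholds and the test function are assumptions):

```python
import numpy as np

def trust_region_step(fun, grad, B, x, radius):
    g = grad(x)
    if np.linalg.norm(g) < 1e-12:
        return x, radius                      # stationary point: nothing to do
    # Cauchy point: minimize the quadratic model along -g within the radius
    gBg = float(g @ B @ g)
    tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g) ** 3 / (radius * gBg))
    d = -tau * radius * g / np.linalg.norm(g)
    predicted = -(float(g @ d) + 0.5 * float(d @ B @ d))  # model reduction
    actual = fun(x) - fun(x + d)                          # true reduction
    rho = actual / predicted if predicted > 0 else -1.0
    if rho < 0.25:
        radius *= 0.5                         # poor agreement: contract the region
    elif rho > 0.75 and np.isclose(np.linalg.norm(d), radius):
        radius *= 2.0                         # good agreement on the boundary: expand
    return (x + d, radius) if rho > 0 else (x, radius)    # accept improving steps

fun = lambda x: float(x @ x) + float(x[0])    # smooth test function, min at (-0.5, 0)
grad = lambda x: 2.0 * x + np.array([1.0, 0.0])
B = 2.0 * np.eye(2)                           # the exact Hessian here; the paper
x, radius = np.array([2.0, 2.0]), 1.0         # updates a quasi-Newton estimate instead
for _ in range(30):
    x, radius = trust_region_step(fun, grad, B, x, radius)
print(x, radius)                              # x approaches (-0.5, 0)
```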

14.
For optimization problems with computationally demanding objective functions and subgradients, inexact subgradient methods (IXS) have been introduced by using successive approximation schemes within subgradient optimization methods (Au et al., 1994). In this paper, we develop alternative solution procedures when the primal-dual information of IXS is utilized. This approach is especially useful when the projection operation onto the feasible set is difficult. We also demonstrate its applicability to stochastic linear programs.

15.
In many instances, the exact evaluation of an objective function and its subgradients can be computationally demanding. By way of example, we cite problems that arise within the context of stochastic optimization, where the objective function is typically defined via multi-dimensional integration. In this paper, we address the solution of such optimization problems by exploring the use of successive approximation schemes within subgradient optimization methods. We refer to this new class of methods as inexact subgradient algorithms. With relatively mild conditions imposed on the approximations, we show that the inexact subgradient algorithms inherit properties associated with their traditional (i.e., exact) counterparts. Within the context of stochastic optimization, the conditions that we impose allow a relaxation of requirements traditionally imposed on steplengths in stochastic quasi-gradient methods. Additionally, we study methods in which steplengths may be defined adaptively, in a manner that reflects the improvement in the objective function approximations as the iterations proceed. We illustrate the applicability of our approach by proposing an inexact subgradient optimization method for the solution of stochastic linear programs. This work was supported by Grant Nos. NSF-DDM-89-10046 and NSF-DDM-9114352 from the National Science Foundation.
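A compact sketch of the inexact subgradient idea (our toy instance, not the paper's stochastic-programming application): the iteration uses an erroneous subgradient with bounded noise, and diminishing steplengths still drive the objective toward its minimum, up to the error level.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: float(np.sum(np.abs(x)))        # nonsmooth convex test function
exact_subgrad = lambda x: np.sign(x)          # a subgradient of the l1-norm

x = np.array([5.0, -3.0])
best = f(x)
for k in range(1, 5001):
    g = exact_subgrad(x) + rng.normal(scale=0.05, size=2)  # inexact oracle
    x = x - (1.0 / k) * g                     # diminishing steplength
    best = min(best, f(x))
print(best)                                   # near 0, up to the error level
```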

16.
Using the predicate language for ordered fields, a class of problems referred to as linear problems is defined. This class contains, for example, all systems of linear equations and inequalities, all linear programming problems, all integer programming problems with bounded variables, all linear complementarity problems, the testing of whether sets that are defined by linear inequalities are semilattices, all satisfiability problems in sentential logic, the rank-computation of matrices, the computation of row-reduced echelon forms of matrices, and all quadratic programming problems with bounded variables. A single algorithm, to which we refer as the Universal Linear Machine, is described; it solves any instance of any linear problem. The Universal Linear Machine runs in two phases. Given a linear problem, in the first phase a Compiler running on a Turing Machine generates a linear algorithm for the problem. Then, given an instance of the linear problem, in the second phase the linear algorithm solves the particular instance. A linear algorithm is finite, deterministic, loopless and executes only the five ordered field operations: additions, multiplications, subtractions, divisions and comparisons. Conversely, we show that for each linear algorithm there is a linear problem which the linear algorithm solves uniquely. Finally, it is shown that with a linear algorithm for a linear problem one can solve certain parametric instances of the linear problem. Research was supported in part by the National Science Foundation Grant DMS 92-07409, by the Department of Energy Grant DE-FG03-87-ER-25028, by the United States—Israel Binational Science Foundation Grant 90-00434 and by ONR Grant N00014-92-J1142.

17.
In this paper we consider cardinality-constrained convex programs that minimize a convex function subject to a cardinality constraint and other linear constraints. This class of problems has found many applications, including portfolio selection, subset selection and compressed sensing. We propose a successive convex approximation method for this class of problems in which the cardinality function is first approximated by a piecewise linear DC function (difference of two convex functions) and a sequence of convex subproblems is then constructed by successively linearizing the concave terms of the DC function. Under some mild assumptions, we establish that any accumulation point of the sequence generated by the method is a KKT point of the DC approximation problem. We show that the basic algorithm can be refined by adding strengthening cuts in the subproblems. Finally, we report some preliminary computational results on cardinality-constrained portfolio selection problems.
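A sketch of a standard piecewise linear DC approximation of the cardinality function used in such methods (our code; the threshold t and the test point are assumptions): card(x) is approximated by sum_i phi_t(x_i) with phi_t(s) = min(|s|/t, 1) = |s|/t - max(|s| - t, 0)/t, and the successive convex approximation linearizes the subtracted convex term at the current iterate.

```python
import numpy as np

t = 0.1                                   # kink threshold of the approximation

def phi(x):                               # DC approximation of card(x)
    return float(np.sum(np.minimum(np.abs(x) / t, 1.0)))

def concave_grad(x):                      # a subgradient of h(x) = sum max(|x_i|-t,0)/t
    return np.where(np.abs(x) > t, np.sign(x) / t, 0.0)

def convex_majorant(x, xk):
    """Keep the convex term sum |x_i|/t; linearize the subtracted term h at xk."""
    h_k = float(np.sum(np.maximum(np.abs(xk) - t, 0.0))) / t
    return float(np.sum(np.abs(x))) / t - (h_k + float(concave_grad(xk) @ (x - xk)))

xk = np.array([0.5, 0.02, -0.3])
print(phi(xk))                            # 2.2: two entries exceed t
print(convex_majorant(xk, xk))            # equals phi(xk): the model is tight at xk
```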

18.
This paper considers a distributed optimization problem encountered in a time-varying multi-agent network, where each agent has local access to its own convex objective function, and the agents cooperatively minimize the sum of those functions over the network. Based on the mirror descent method, we develop a distributed algorithm that utilizes subgradient information with stochastic errors. We first analyze the effects of the stochastic errors on the convergence of the algorithm and then provide an explicit bound on the convergence rate as a function of the error bound and the number of iterations. Our results show that, when there are stochastic errors in the subgradient evaluations, the algorithm asymptotically converges to the optimal value of the problem within an error level. The proposed algorithm can be viewed as a generalization of distributed subgradient projection methods, since it uses a general Bregman divergence instead of the Euclidean squared distance. Finally, simulation results on a regularized hinge regression problem are presented to illustrate the effectiveness of the algorithm.
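A minimal single-agent sketch of the mirror descent update underlying the distributed algorithm (our illustration; with the entropic Bregman divergence on the simplex the update becomes the familiar exponentiated-gradient step, and the added noise models the stochastic subgradient errors):

```python
import numpy as np

rng = np.random.default_rng(1)
c = np.array([0.3, 1.0, 0.6])                 # minimize <c, x> over the simplex

x = np.ones(3) / 3.0                          # uniform starting point
for k in range(1, 2001):
    g = c + rng.normal(scale=0.1, size=3)     # subgradient with stochastic error
    step = 0.5 / np.sqrt(k)
    x = x * np.exp(-step * g)                 # exponentiated-gradient (mirror) step
    x = x / np.sum(x)                         # Bregman projection onto the simplex
print(x)                                      # mass concentrates on argmin c (index 0)
```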

19.
Multiparametric programming considers optimization problems where the data are functions of a parameter vector and describes the optimal value and an optimizer as explicit functions of the parameters. In this paper, we consider a linear program where the right-hand side is an affine function of a parameter vector; we propose an algorithm for approximating its solution. Given a full-dimensional simplex in the parameter space and an optimizer for each simplex vertex, the algorithm formulates the linear interpolation of the given solutions as an explicit function of the parameters, giving a primal feasible approximation of an optimizer inside the simplex. If the resulting absolute error in the objective exceeds a prescribed tolerance, then the algorithm subdivides the simplex into smaller simplices where it applies recursively. We propose both a basic version and a refined version of the algorithm. The basic version is polynomial in the output size, provided a polynomial LP solver is used; the refined version may give a smaller output. A global error bound for the optimizer is derived and some computational tests are discussed.
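A hedged sketch of the interpolation test on one simplex (our toy instance): the LP is solved at the simplex vertices, the optimizers are interpolated barycentrically, and the objective error at an interior parameter decides whether to subdivide. Feasibility of the interpolant follows from the right-hand side being affine in the parameter.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0])                          # maximize x (c = -1)
A_ub = np.array([[1.0], [1.0]])
b_of = lambda th: np.array([th, 1.0 - th])    # both rows affine in the parameter

def solve(th):                                # exact optimizer for parameter th
    return linprog(c, A_ub=A_ub, b_ub=b_of(th), bounds=[(0, None)]).x

x0, x1 = solve(0.0), solve(1.0)               # vertex optimizers: both are x = 0
th = 0.4                                      # interior point, barycentric (0.6, 0.4)
x_interp = 0.6 * x0 + 0.4 * x1                # feasible since b(.) is affine
err = float(c @ x_interp - c @ solve(th))     # 0.4: the optimizer kinks at th = 0.5
print(err)                                    # err > tol, so subdivide the simplex
```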

20.
This paper presents exact and heuristic solution procedures for a multiproduct capacitated facility location (MPCFL) problem in which the demand for a number of different product families must be supplied from a set of facility sites, and each site offers a choice of facility types exhibiting different capacities. MPCFL generalizes both the uncapacitated (or simple) facility location (UFL) problem and the pure-integer capacitated facility location problem. We define a branch-and-bound algorithm for MPCFL that utilizes bounds formed by a Lagrangian relaxation of MPCFL which decomposes the problem into UFL subproblems and easily solvable 0-1 knapsack subproblems. The UFL subproblems are solved by the dual-based procedure of Erlenkotter. We also present a subgradient optimization-Lagrangian relaxation-based heuristic for MPCFL. Computational experience with the algorithm and heuristic are reported. The MPCFL heuristic is seen to be extremely effective, generating solutions to the test problems that are on average within 2% of optimality, and the branch-and-bound algorithm is successful in solving all of the test problems to optimality.
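For illustration, 0-1 knapsack subproblems of the kind the Lagrangian relaxation produces are solvable by textbook dynamic programming (a generic sketch; the data are made up):

```python
def knapsack01(values, weights, cap):
    """Maximum value of a 0-1 selection with total weight <= cap (integers)."""
    dp = [0] * (cap + 1)                      # dp[w] = best value with capacity w
    for v, w in zip(values, weights):
        for cwt in range(cap, w - 1, -1):     # reverse scan keeps each item 0-1
            dp[cwt] = max(dp[cwt], dp[cwt - w] + v)
    return dp[cap]

print(knapsack01([6, 10, 12], [1, 2, 3], 5))  # -> 22: take the last two items
```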
