20 similar references found; search took 15 milliseconds.
1.
Poljak has suggested an improved subgradient method and provided a lower bound on the improvement of the Euclidean distance to an optimal solution. In this paper, we provide a stronger lower bound and show that the direction of movement in this method forms a more acute angle with the direction toward the set of optimal solutions than the direction used in the subgradient method.

Most of this research was performed while the first author was visiting the Decision and Information Systems Department, College of Business, Arizona State University, Tempe, Arizona.
2.
Polyak's subgradient algorithm for nondifferentiable optimization problems requires prior knowledge of the optimal value of the objective function in order to find an optimal solution. In this paper we extend the convergence properties of Polyak's subgradient algorithm with a fixed target value to the more general case of variable target values. We then provide a target-value updating scheme that finds an optimal solution without prior knowledge of the optimal objective value. A convergence proof of the scheme is provided and computational results are reported.

Most of this research was performed while the first author was visiting the Decision and Information Systems Department, College of Business, Arizona State University.
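The fixed-target Polyak step that this work generalizes can be sketched in a few lines. The following is a minimal illustration only, not the authors' variable-target scheme: it assumes the optimal value f* is known and takes the step (f(x) - f*)/||g||^2 along a subgradient g. The test function and all names are hypothetical.

```python
import numpy as np

def f(x):
    # simple nonsmooth convex test function, minimum value 0 at the origin
    return abs(x[0]) + 2 * abs(x[1])

def subgrad(x):
    # one valid subgradient of f at x
    return np.array([np.sign(x[0]), 2 * np.sign(x[1])])

def polyak_subgradient(x0, f_star, iters=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        gap = f(x) - f_star
        if gap <= 1e-12:
            break
        g = subgrad(x)
        # Polyak step: with the exact f*, the Euclidean distance
        # to the optimal set is nonincreasing
        x = x - (gap / np.dot(g, g)) * g
    return x

x = polyak_subgradient([3.0, -2.0], f_star=0.0)
```

With a known target, the objective gap shrinks geometrically on this example; the variable-target schemes above replace the exact f* with an adaptively updated estimate.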
3.
J. X. Da Cruz Neto, G. J. P. Da Silva, O. P. Ferreira, J. O. Lopes, Computational Optimization and Applications, 2013, 54(3):461-472
A method for solving quasiconvex nondifferentiable unconstrained multiobjective optimization problems is proposed in this paper. The method extends the classical subgradient method for real-valued minimization to the multiobjective case. Assuming componentwise quasiconvexity of the objective components, full convergence (to Pareto optimal points) of all sequences produced by the method is established.
4.
Ulf Brännlund, Mathematical Programming, 1995, 71(2):207-219
We study conditions for convergence of a generalized subgradient algorithm in which a relaxation step is taken in a direction that is a convex combination of possibly all previously generated subgradients. A simple condition for convergence is given, and conditions that guarantee a linear convergence rate are also presented. We show that choosing the steplength parameter and the convex combination of subgradients optimally, in a certain sense, is equivalent to solving a minimum-norm quadratic programming problem. It is also shown that if the direction is restricted to be a convex combination of the current subgradient and the previous direction, then an optimal choice of stepsize and direction is equivalent to the Camerini-Fratta-Maffioli modification of the subgradient method.

Research supported by the Swedish Research Council for Engineering Sciences (TFR).
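The Camerini-Fratta-Maffioli modification mentioned above deflects the current subgradient toward the previous search direction when the two form an obtuse angle. A minimal sketch of that deflection rule; the function name and the choice gamma = 1.5 are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def cfm_direction(g, d_prev, gamma=1.5):
    """Camerini-Fratta-Maffioli deflection: mix the current subgradient g
    with the previous direction d_prev when g . d_prev < 0."""
    if d_prev is None:
        return g.copy()          # first iteration: use the plain subgradient
    dot = np.dot(g, d_prev)
    # deflection coefficient is zero unless the angle is obtuse
    beta = -gamma * dot / np.dot(d_prev, d_prev) if dot < 0 else 0.0
    return g + beta * d_prev

g = np.array([1.0, 0.0])
d_prev = np.array([-1.0, 1.0])
d = cfm_direction(g, d_prev)     # deflected direction
```

For gamma in (0, 2) the deflected direction forms a non-obtuse angle with the previous one, which is the zig-zag damping effect the modification is known for.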
5.
An algorithm for solving the convex feasibility problem for a finite family of convex sets is considered. The acceleration scheme of De Pierro (Métodos de projeção para a resolução de sistemas gerais de equações algébricas lineares. Thesis (tese de Doutoramento), Instituto de Matemática da UFRJ, Cidade Universitária, Rio de Janeiro, Brasil, 1981), which was designed for simultaneous algorithms, is used here to speed up the fully sequential cyclic subgradient projections method. A convergence proof is presented, and the advantage of this strategy is demonstrated with some examples.
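For linear inequalities, the cyclic subgradient projections method referenced above reduces to successive projections onto violated halfspaces, since the subgradient of f_i(x) = a_i . x - b_i is simply a_i. A minimal sketch of the unaccelerated cyclic sweep under that assumption; the problem data below is purely illustrative:

```python
import numpy as np

def cyclic_subgradient_projections(A, b, x0, sweeps=500):
    """Cyclic subgradient projections for the feasibility problem
    a_i . x <= b_i, i = 1..m."""
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for a, bi in zip(A, b):
            viol = np.dot(a, x) - bi
            if viol > 0:
                # project onto the violated halfspace only
                x = x - (viol / np.dot(a, a)) * a
    return x

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])    # x + y <= 1, x >= 0, y >= 0
x = cyclic_subgradient_projections(A, b, [5.0, 5.0])
```

De Pierro-type acceleration would extrapolate along the net displacement of a sweep; the sketch above performs only the basic sequential cycle.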
6.
Dirk A. Lorenz, Marc E. Pfetsch, Andreas M. Tillmann, Computational Optimization and Applications, 2014, 57(2):271-306
We propose a new subgradient method for the minimization of nonsmooth convex functions over a convex set. To speed up computations we use adaptive approximate projections, which only require the iterates to move to within a certain distance of the exact projections (a distance that decreases over the course of the algorithm). In particular, the iterates in our method can be infeasible throughout the whole procedure. Nevertheless, we provide conditions which ensure convergence to an optimal feasible point under suitable assumptions. One convergence result deals with step size sequences that are fixed a priori. Two other results handle dynamic Polyak-type step sizes depending on a lower or upper estimate of the optimal objective function value, respectively. Additionally, we briefly sketch two applications: optimization with convex chance constraints, and finding the minimum ℓ1-norm solution to an underdetermined linear system, an important problem in compressed sensing.
7.
In this paper, we introduce an algorithm that combines the subgradient extragradient method with an inertial method for solving variational inequality problems in Hilbert spaces. Weak convergence of the algorithm is established under standard assumptions imposed on the cost operators. The proposed algorithm can be considered an improvement of the previously known inertial extragradient method at each computational step. Its performance is also illustrated by several preliminary numerical experiments.
8.
Krzysztof Czesław Kiwiel, Mathematical Programming, 1983, 27(3):320-341
A class of implementable algorithms is described for minimizing any convex, not necessarily differentiable, function f of several variables. The methods require only the calculation of f and one subgradient of f at designated points. They generalize Lemaréchal's bundle method. More specifically, instead of using all previously computed subgradients in the search direction finding subproblems, which are quadratic programming problems, the methods use an aggregate subgradient that is recursively updated as the algorithms proceed. Each algorithm yields a minimizing sequence of points, and if f has any minimizers, this sequence converges to a solution of the problem. Particular members of this algorithm class terminate when f is piecewise linear. The methods are easy to implement and have flexible storage requirements and per-iteration computational effort that can be controlled by a user.

Research sponsored by the Institute of Automatic Control, Technical University of Warsaw, Poland, under Project R.I.21.
9.
Journal of Computational and Applied Mathematics, 1986, 14(3):391-400
This paper presents a readily implementable algorithm for minimizing a locally Lipschitz continuous function that is not necessarily convex or differentiable. This extension of the aggregate subgradient method differs from one developed by the author in the treatment of nonconvexity. Subgradient aggregation allows the user to control the number of constraints in search direction finding subproblems and, thus, trade off subproblem solution effort for rate of convergence. All accumulation points of the algorithm are stationary. Moreover, the algorithm converges when the objective function happens to be convex.
10.
A. M. Bagirov, N. Hoseini Monjezi, S. Taheri, Computational Optimization and Applications, 2021, 80(2):411-438
A method, called an augmented subgradient method, is developed to solve unconstrained nonsmooth difference of convex (DC) optimization problems. At...
11.
Operations Research Letters, 2020, 48(5):579-583
In this paper we propose a subgradient algorithm for solving the equilibrium problem where the bifunction may be quasiconvex with respect to the second variable. The convergence of the algorithm is investigated.
12.
In this paper, we consider a variant of the celebrated convex feasibility problem, called the split quasi-convex feasibility problem (SQFP). The SQFP is to find a point in a sublevel set of a quasi-convex function in one space whose image under a bounded linear operator is contained in a sublevel set of another quasi-convex function in the image space. We propose a new adaptive subgradient algorithm for solving the SQFP. We also discuss convergence analyses for two cases: the first where the functions are upper semicontinuous in finite-dimensional settings, and the second where the functions are weakly continuous in infinite-dimensional settings. Finally, some numerical examples are given to support the convergence results.
13.
Convergence of a generalized subgradient method for nondifferentiable convex optimization
A generalized subgradient method is considered which uses the subgradients from previous iterations as well as the subgradient at the current point. This method is a direct generalization of the usual subgradient method. We provide two sets of convergence conditions for the generalized subgradient method. Our results yield a larger class of sequences that converge to a minimum point and more freedom of adjustment to accelerate the speed of convergence.

Most of this research was performed while the first author was visiting the Decision and Information Systems Department, College of Business, Arizona State University.
14.
15.
A. El Ghali, Optimization, 2016, 65(7):1497-1518
We present an implementable algorithm for minimizing a convex function which is not necessarily differentiable subject to linear equality constraints and to nonnegativity bounds on the variables. The algorithm is based on extending the variant proposed by Luenberger to the nondifferentiable case and using the bundle techniques introduced by Lemaréchal to approximate the subdifferential of the objective function. In particular, at each iteration, we compute a search direction by solving a quadratic subproblem, and an inexact line search along this direction yields a decrease in the objective value. Under some assumptions, the convergence of the proposed algorithm is analysed. Finally, some numerical results are presented, which show that the algorithm performs efficiently.
16.
A projected subgradient method for solving generalized mixed variational inequalities
We consider the projected subgradient method for solving generalized mixed variational inequalities. In each step, we choose an ε_k-subgradient u_k of the function f and a point w_k in the set-valued mapping T, followed by an orthogonal projection onto the feasible set. We prove that the generated sequence is weakly convergent.
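As background, the basic projected subgradient iteration x <- P_C(x - t_k g_k), which is far simpler than the ε_k-subgradient scheme for variational inequalities above, can be sketched for plain minimization over a box; the test problem and all names are illustrative assumptions:

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n is componentwise clipping
    return np.clip(x, lo, hi)

def projected_subgradient(subgrad, project, x0, iters=300):
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = subgrad(x)
        t = 1.0 / np.sqrt(k + 1)     # diminishing step size
        x = project(x - t * g)       # subgradient step, then projection
    return x

# minimize |x1 - 2| + |x2 + 1| over the box [0, 1]^2;
# the constrained minimizer is (1, 0)
subgrad = lambda x: np.array([np.sign(x[0] - 2), np.sign(x[1] + 1)])
x = projected_subgradient(subgrad, lambda z: project_box(z, 0.0, 1.0),
                          [0.3, 0.7])
```

The projection keeps every iterate feasible; the method above replaces the exact subgradient with an ε_k-subgradient and the single operator with a set-valued one.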
17.
The aim of this paper is to propose a new multiple subgradient descent bundle method for solving unconstrained convex nonsmooth multiobjective optimization problems. Contrary to many existing multiobjective optimization methods, our method treats the objective functions as they are, without employing scalarization in the classical sense. The main idea is to find a descent direction for every objective function separately by utilizing the proximal bundle approach, and then to form a common descent direction for all the objective functions. In addition, we prove that the method is convergent and finds weakly Pareto optimal solutions. Finally, some numerical experiments are considered.
18.
European Journal of Operational Research, 1986, 27(3):301-312
We model capacity expansion problems as nondifferentiable convex programs. A dual to this problem is also formulated as a nondifferentiable convex program. The solution methodology we utilize is a primal-dual subgradient approach, in which the primal-dual pair is used in step length determination. The special structure of these capacity expansion problems also leads to other simplifications. In particular, unlike the application of subgradient optimization to general convex programs, the test for feasibility in certain capacity expansion problems is straightforward. Further, the quadratic programs associated with projection operators are avoided by exploiting the special problem structure. The algorithm is shown to be convergent. To illustrate the applicability of our methodology, we discuss its application to a time-dynamic power generation capacity planning problem. Computational results for this application are also provided.
19.
Journal of Computational and Applied Mathematics, 1987, 18(3):307-320
An iterative method to solve the convex feasibility problem for a finite family of convex sets is presented. The algorithm applies a generalization of an acceleration procedure introduced by De Pierro to a parallel version of the subgradient projections method proposed by Censor and Lent. The generated sequence is shown to converge for any starting point. Some numerical results are presented.
20.
A cyclically controlled method of subgradient projections (CSP) for the convex feasibility problem of solving convex inequalities is presented. The features of this method make it an efficient tool for handling huge, sparse problems. A particular application to an image reconstruction problem in emission computerized tomography is mentioned.

Research supported by National Institute of Health Grant HL 28438-01.