Similar Documents
20 similar documents found (search time: 796 ms)
1.
Multiobjective optimization has a large number of real-life applications. Motivated by this, we present a new method for solving multiobjective optimization problems with both linear constraints and bound constraints on the variables. The method extends the classical reduced gradient method for scalar-valued optimization to the multiobjective setting. The proposed algorithm generates a feasible descent direction by solving an appropriate quadratic subproblem, without resorting to any scalarization approach. We prove that the sequence generated by the algorithm converges to Pareto-critical points of the problem, and we present numerical results demonstrating the efficiency of the method.
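A minimal sketch of the kind of quadratic subproblem such methods solve, written with SciPy's SLSQP solver. The formulation below (minimize β + ½‖d‖² over (d, β) subject to ∇fᵢᵀd ≤ β, Ad = 0, and the bound constraints) is a standard multiobjective descent subproblem, not necessarily the paper's exact one, and all names here are ours:

```python
# Illustrative sketch (not the paper's exact algorithm): compute a common
# descent direction for several objectives at a feasible point x by solving
#   min_{d,beta}  beta + 0.5*||d||^2
#   s.t.          grad_i^T d <= beta  (i = 1..m),   A d = 0,   lb <= x + d <= ub
import numpy as np
from scipy.optimize import minimize

def descent_direction(grads, A, x, lb, ub):
    """grads: list of objective gradients at x; A: equality-constraint matrix."""
    n = len(x)
    z0 = np.zeros(n + 1)                       # z = (d, beta)
    obj = lambda z: z[-1] + 0.5 * z[:-1] @ z[:-1]
    cons = [{"type": "ineq", "fun": lambda z, g=g: z[-1] - g @ z[:-1]}
            for g in grads]                    # grad_i^T d <= beta
    if A is not None:
        cons.append({"type": "eq", "fun": lambda z: A @ z[:-1]})  # keep A(x+d) = A x
    bnds = [(lb[i] - x[i], ub[i] - x[i]) for i in range(n)] + [(None, None)]
    res = minimize(obj, z0, constraints=cons, bounds=bnds, method="SLSQP")
    d, beta = res.x[:-1], res.x[-1]
    return d if beta < -1e-10 else np.zeros(n)   # beta ~ 0 => Pareto-critical
```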

2.
Jiang, Xianzhen; Liao, Wei; Yin, Jianghua; Jian, Jinbao. Numerical Algorithms (2022) 91(1): 161–191

In this paper, based on the hybrid conjugate gradient method and the convex combination technique, a new family of hybrid three-term conjugate gradient methods is proposed for unconstrained optimization. The conjugate parameter in the search direction is a hybrid of the Dai–Yuan conjugate parameter and an arbitrary one. The search direction is then the sum of the negative gradient and a convex combination of the previous search direction and the gradient at the previous iterate. Without choosing any specific conjugate parameter, we show that the search directions generated by the family always possess the descent property independently of the line search, and that the family is globally convergent under the usual assumptions and the weak Wolfe line search. To verify the effectiveness of the family, we further design a specific conjugate parameter and perform medium- to large-scale numerical experiments on smooth unconstrained optimization and image restoration problems. The numerical results show the encouraging efficiency and applicability of the proposed methods, even compared with state-of-the-art methods.
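One plausible reading of the three-term construction, sketched with a plain Dai–Yuan parameter, a simple Armijo backtracking in place of the weak Wolfe search, and a fixed blending weight lam; the safeguard and all names are our additions, not the paper's tuned family:

```python
# Sketch of a three-term CG iteration in the spirit described above; the
# paper's actual hybrid conjugate parameter is more elaborate than beta_DY.
import numpy as np

def three_term_cg(f, grad, x, lam=0.5, tol=1e-6, max_iter=500):
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                  # safeguard: fall back to steepest descent
            d = -g
        t, c = 1.0, 1e-4                # Armijo backtracking (weak-Wolfe stand-in)
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / max(d @ (g_new - g), 1e-12)   # Dai-Yuan parameter
        d = -g_new + beta * (lam * d + (1 - lam) * g)          # three-term direction
        x, g = x_new, g_new
    return x

# usage: minimize a simple quadratic
x_star = three_term_cg(lambda x: x @ x, lambda x: 2 * x, np.ones(5))
```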


3.
Optimization (2012) 61(7): 1499–1520
In this article, we study several scalar-valued gap functions for Stampacchia- and Minty-type vector variational inequalities. We first introduce gap functions based on a scalarization technique and then develop a gap function free of any scalarizing parameter. We then develop its regularized version and, under mild conditions, derive an error bound for vector variational inequalities with strongly monotone data. Further, we introduce the notion of a partial gap function, which satisfies all but one of the properties of the usual gap function. The partial gap function is, however, convex, and we provide upper and lower estimates of its directional derivative.
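For orientation, the classical scalar prototypes that such vector-valued gap functions generalize (standard textbook definitions, not the article's specific constructions):

```latex
% Stampacchia-type gap function for the VI defined by F and a set K:
g_S(x) \;=\; \sup_{y \in K} \langle F(x),\, x - y \rangle ,
\qquad
% Minty-type gap function:
g_M(x) \;=\; \sup_{y \in K} \langle F(y),\, x - y \rangle .
% Both are nonnegative on K, and g_S(x^*) = 0 (resp. g_M(x^*) = 0) exactly
% when x^* solves the Stampacchia (resp. Minty) variational inequality.
```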

4.
Generalized convex functions preserve many valuable properties of mathematical programming problems with convex functions. Generalized monotone maps allow for an extension of existence results for variational inequality problems with monotone maps. Both models are special realizations of an abstract equilibrium problem with numerous applications, especially in equilibrium analysis (e.g., Blum and Oettli, 1994). We survey existence results for equilibrium problems obtained under generalized convexity and generalized monotonicity, considering both the scalar and the vector case. Finally, we survey existence results for systems of vector equilibrium problems under generalized convexity, which have applications to systems of vector variational inequality problems. Throughout the survey we demonstrate that the results can be obtained without the rigid assumptions of convexity and monotonicity.

6.
In this paper, we consider linearly constrained multiobjective minimization and propose a new reduced gradient method for solving this problem. Our approach iteratively solves a convex quadratic optimization subproblem to calculate a descent direction suitable for all the objective functions, and then uses a bisection algorithm to find an optimal stepsize along this direction. We prove, under natural assumptions, that the proposed algorithm is well defined and converges globally to Pareto-critical points of the problem. Finally, the algorithm is implemented in the MATLAB environment and comparative numerical results are reported.
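The stepsize search can be illustrated by a bisection on the largest directional derivative along the common descent direction d. This is an illustrative stand-in for the paper's rule (the paper works in MATLAB; names here are ours):

```python
# Illustrative bisection for a stepsize along a common descent direction d:
# find t where the worst-case directional derivative changes sign.
import numpy as np

def bisection_step(grads_at, x, d, t_max=1.0, tol=1e-8):
    """grads_at(z) returns the list of objective gradients at the point z."""
    h = lambda t: max(g @ d for g in grads_at(x + t * d))
    lo, hi = 0.0, t_max
    if h(hi) <= 0:            # still descending at t_max: take the full step
        return t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
    return lo
```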

7.
In this paper, we consider a simple bilevel program in which the lower-level program is a nonconvex minimization problem with a convex set constraint and the upper-level program has a convex set constraint. Using the value function of the lower-level program, we reformulate the bilevel program as a single-level optimization problem with a nonsmooth inequality constraint and a convex set constraint. To deal with such a nonsmooth and nonconvex problem, we design a smoothing projected gradient algorithm for a general optimization problem with a nonsmooth inequality constraint and a convex set constraint. We show that, if the sequence of penalty parameters is bounded, then any accumulation point is a stationary point of the nonsmooth optimization problem, and that, if the generated sequence converges and the extended Mangasarian–Fromovitz constraint qualification holds at the limit, then the limit point is a stationary point. We apply the smoothing projected gradient algorithm to the bilevel program when a calmness condition holds, and to an approximate bilevel program otherwise. Preliminary numerical experiments show that the algorithm is efficient for solving the simple bilevel program.
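A generic smoothing projected gradient loop of the kind described, with the nonsmooth inequality constraint folded into a quadratic penalty for brevity; the penalty form, the update schedules, and all parameter names are our simplifications, not the paper's algorithm:

```python
# Sketch: projected gradient on a smoothed, penalized objective, driving the
# smoothing parameter mu -> 0 and the penalty rho upward across outer rounds.
import numpy as np

def smoothing_projected_gradient(grad_smooth, project, x, mu=1.0, rho=1.0,
                                 outer=20, inner=200, step=1e-2):
    """grad_smooth(x, mu, rho): gradient of the smoothed penalized objective;
    project(x): projection onto the convex set constraint."""
    for _ in range(outer):
        for _ in range(inner):
            x = project(x - step * grad_smooth(x, mu, rho))
        mu *= 0.5       # tighten the smoothing
        rho *= 2.0      # strengthen the penalty
    return x
```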

8.
The spectral gradient method has proved to be effective for solving large-scale unconstrained optimization problems. It has recently been extended and combined with the projected gradient method for solving optimization problems over convex sets; this combination uses nonmonotone line search techniques to preserve fast local convergence. In this work we further extend the spectral choice of steplength to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global convergence properties, and we discuss implementation details for solving large-scale problems.
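For context, the unpreconditioned spectral projected gradient iteration being extended looks roughly as follows: a Barzilai–Borwein steplength, a projected direction, and a nonmonotone Armijo search. This is a textbook Birgin–Martínez–Raydan-style sketch; the paper's variant replaces the scalar BB scaling with a preconditioner, which we omit:

```python
# Textbook spectral projected gradient sketch (no preconditioning).
import numpy as np

def spg(f, grad, project, x, max_iter=500, M=10, gamma=1e-4):
    g = grad(x)
    alpha = 1.0
    history = [f(x)]
    for _ in range(max_iter):
        d = project(x - alpha * g) - x          # spectral projected direction
        if np.linalg.norm(d) < 1e-8:
            break
        f_ref = max(history[-M:])               # nonmonotone reference value
        t = 1.0
        while f(x + t * d) > f_ref + gamma * t * (g @ d):
            t *= 0.5                            # nonmonotone Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0   # BB steplength
        x, g = x_new, g_new
        history.append(f(x))
    return x
```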

9.
In recent years, convex optimization methods have been successfully applied to various image processing tasks, and a large number of first-order methods have been designed to minimize the corresponding functionals. Interestingly, it was recently shown in Grewenig et al. (2010) that the simple idea of so-called "superstep cycles" leads to very efficient schemes for time-dependent (parabolic) image enhancement problems as well as for steady-state (elliptic) image compression tasks. The superstep-cycle approach is similar to the nonstationary (cyclic) Richardson method, which has been around for over sixty years. In this paper, we investigate the incorporation of superstep cycles into the projected gradient method. For two problems in compressive sensing and image processing, namely the LASSO approach and the Rudin–Osher–Fatemi model, we show that the resulting simple cyclic projected gradient algorithm is numerically competitive with various state-of-the-art first-order algorithms. However, due to the nonlinear projection within the algorithm, convergence proofs appear to be hard even under restrictive assumptions on the linear operators. We illustrate the difficulties by studying the simplest case of a two-cycle algorithm in R^2 with projections onto the Euclidean ball.
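For the LASSO in its constrained form min ‖Ax − b‖² s.t. ‖x‖₁ ≤ τ, a cyclic projected gradient iteration can be sketched as below. The stepsize cycle here is purely illustrative (not the superstep schedule of Grewenig et al.), and the ℓ1-ball projection is the standard sorting-based one:

```python
# Sketch: projected gradient for min ||Ax - b||^2 s.t. ||x||_1 <= tau, with a
# repeating cycle of stepsizes; "supersteps" occasionally exceed the classical
# stability bound, which is the idea the paper studies.
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection onto {x : ||x||_1 <= tau} (Duchi et al. 2008)."""
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def cyclic_pg_lasso(A, b, tau, n_cycles=100):
    L = np.linalg.norm(A, 2) ** 2                 # spectral norm squared
    cycle = [0.5 / L, 1.0 / L, 3.0 / L]           # illustrative stepsize cycle
    x = np.zeros(A.shape[1])
    for _ in range(n_cycles):
        for t in cycle:
            x = project_l1_ball(x - t * (2 * A.T @ (A @ x - b)), tau)
    return x
```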

10.
Convex optimization methods are used for many machine learning models, such as the support vector machine. However, the requirement of a convex formulation can place limitations on machine learning models. In recent years, a number of machine learning methods not requiring convexity have emerged. In this paper, we study non-convex optimization problems on the Stiefel manifold, in which the feasible set consists of rectangular matrices with orthonormal columns. We present examples of non-convex optimization problems in machine learning and apply three nonlinear optimization methods for finding a locally optimal solution: the geometric gradient descent method, the augmented Lagrangian method of multipliers, and the alternating direction method of multipliers. Although the geometric gradient method is often used to solve non-convex optimization problems on the Stiefel manifold, we show that the alternating direction method of multipliers generally produces higher-quality numerical solutions within a reasonable computation time.
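A minimal geometric (Riemannian) gradient descent step on the Stiefel manifold {X : XᵀX = I}, using the QR retraction; this is the standard construction for the first of the three methods compared, not the paper's full algorithm:

```python
# Sketch: one gradient-descent step on the Stiefel manifold with QR retraction.
import numpy as np

def stiefel_gd_step(egrad, X, step=1e-2):
    """egrad: Euclidean gradient of the objective at X (n x p, X^T X = I)."""
    G = egrad(X)
    sym = 0.5 * (X.T @ G + G.T @ X)
    rgrad = G - X @ sym                 # projection onto the tangent space
    Q, R = np.linalg.qr(X - step * rgrad)
    d = np.sign(np.diag(R))
    d[d == 0] = 1.0
    return Q * d                        # QR retraction with sign-fixed columns
```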

11.
This work focuses on convergence analysis of the projected gradient method for constrained convex minimization problems in Hilbert spaces. We show that the sequence of points generated by the method with the Armijo line search converges weakly to a solution of the considered convex optimization problem. Weak convergence is established assuming convexity and Gâteaux differentiability of the objective function, whose Gâteaux derivative is supposed to be uniformly continuous on bounded sets. Furthermore, we propose some modifications of the classical projected gradient method in order to obtain strong convergence. The new variant has two desirable properties: the sequence of generated points is entirely contained in a ball whose diameter equals the distance between the initial point and the solution set, and the whole sequence converges strongly to the solution of the problem that lies closest to the initial iterate. The convergence analysis of both methods is carried out without any Lipschitz continuity assumption.
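A finite-dimensional sketch of the projected gradient method with Armijo backtracking analyzed here, using a box as the feasible set; the paper's setting is a general Hilbert space without Lipschitz assumptions, and the constants below are conventional choices:

```python
# Sketch: projected gradient with Armijo backtracking along the feasible arc.
import numpy as np

def projected_gradient_armijo(f, grad, project, x, beta=0.5, sigma=1e-4,
                              max_iter=1000, tol=1e-8):
    for _ in range(max_iter):
        g = grad(x)
        t = 1.0
        while True:
            x_new = project(x - t * g)
            if f(x_new) <= f(x) + sigma * g @ (x_new - x):   # Armijo condition
                break
            t *= beta
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# usage: minimize ||x - c||^2 over the box [0, 1]^3
c = np.array([2.0, -1.0, 0.3])
sol = projected_gradient_armijo(lambda x: (x - c) @ (x - c),
                                lambda x: 2 * (x - c),
                                lambda x: np.clip(x, 0.0, 1.0),
                                np.zeros(3))
```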

12.
Many constraint sets in problems such as signal processing and optimal control can be represented as the fixed point set of a certain nonexpansive mapping, and a number of iterative algorithms have been presented for solving a convex optimization problem over such a fixed point set. This paper presents a novel gradient method whose search direction is a three-term conjugate gradient direction of the kind used to accelerate conjugate gradient methods for unconstrained optimization. The algorithm is guaranteed to converge strongly to the solution of the problem under standard assumptions. Numerical comparisons with existing gradient methods demonstrate the effectiveness and fast convergence of the algorithm.
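The underlying fixed-point-set framework can be illustrated with a Yamada-type hybrid steepest descent iteration, where the constraint set is Fix(T) for a nonexpansive T. Here we use the plain gradient as the direction; the paper's contribution is to replace it with a three-term conjugate gradient direction:

```python
# Sketch: hybrid steepest descent for min f(x) over Fix(T), T nonexpansive.
# If T is the projection onto a closed convex set, Fix(T) is that set.
import numpy as np

def hybrid_steepest_descent(grad, T, x, n_iter=2000, mu=0.5):
    for k in range(1, n_iter + 1):
        lam = 1.0 / k                    # diminishing stepsize sequence
        y = T(x)                         # move toward the fixed point set
        x = y - lam * mu * grad(y)       # damped gradient step
    return x
```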

13.
In this work, we propose an inexact projected gradient-like method for solving smooth constrained vector optimization problems. In the unconstrained case, we retrieve the steepest descent method introduced by Graña Drummond and Svaiter. In the constrained setting, the method extends the exact one proposed by Graña Drummond and Iusem, since it admits relative errors in the search directions. At each iteration, a decrease of the objective value is obtained by means of an Armijo-like rule. The convergence results for this new method extend those obtained by Fukuda and Graña Drummond for the exact version. For partial orders induced by both pointed and nonpointed cones, and under reasonable hypotheses, we establish global convergence to weakly efficient points of all sequences generated by the inexact projected gradient method for objective functions that are convex with respect to the ordering cone. In the convergence analysis we also establish a connection between the so-called weighting method and the method we propose.

14.
In the present paper, Polyak's principle concerning the convexity of images of small balls under C^{1,1} mappings is employed in the study of vector optimization problems. This extends to such a context the achievements of local programming, an approach to nonlinear optimization due to B.T. Polyak that consists in exploiting the convex local behaviour of certain nonconvex problems. In doing so, solution existence and optimality conditions are established for localizations of vector optimization problems whose data satisfy proper assumptions. These results are then applied to the analysis of welfare economics, in the case of an exchange economy model with an infinite-dimensional commodity space. In this setting, the localization of an economy yields the existence of Pareto optimal allocations, which, under certain additional assumptions, lead to competitive equilibria.

15.
In this paper we propose a new Riemannian conjugate gradient method for optimization on the Stiefel manifold. We introduce two novel vector transports associated with the retraction constructed from the Cayley transform. Both satisfy the Ring–Wirth nonexpansive condition, which is fundamental for the convergence analysis of Riemannian conjugate gradient methods, and one of them is also isometric. It is known that the Ring–Wirth nonexpansive condition does not hold for traditional vector transports such as the differentiated retractions of the QR and polar decompositions. Practical formulae of the new vector transports for low-rank matrices are obtained. Dai's nonmonotone conjugate gradient method is generalized to the Riemannian case, and global convergence of the new algorithm is established under standard assumptions. Numerical results on a variety of low-rank test problems demonstrate the effectiveness of the new method.
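The Cayley-transform retraction underlying these vector transports, in its full-matrix Wen–Yin form; this is the standard construction, not the paper's low-rank formulae:

```python
# Sketch: Cayley-transform retraction on the Stiefel manifold (Wen-Yin form).
# W is skew-symmetric, so the Cayley factor is orthogonal and Y(tau) stays on
# the manifold; tau is the stepsize along the descent curve.
import numpy as np

def cayley_retraction(X, G, tau):
    """X: current point (X^T X = I); G: Euclidean gradient at X."""
    n = X.shape[0]
    W = G @ X.T - X @ G.T                       # skew-symmetric generator
    I = np.eye(n)
    # Y(tau) = (I + tau/2 W)^{-1} (I - tau/2 W) X, a descent curve with
    # Y'(0) = -(G - X G^T X), the canonical Riemannian gradient direction
    return np.linalg.solve(I + 0.5 * tau * W, (I - 0.5 * tau * W) @ X)
```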

16.
We propose an interior point method for large-scale convex quadratic programming in which no assumptions are made about the sparsity structure of the quadratic coefficient matrix Q. The interior point method we describe is a doubly iterative algorithm that invokes a conjugate projected gradient procedure to obtain the search direction. The effect is that Q appears in a conjugate direction routine rather than in a matrix factorization. Consequently, the matrices to be factored have the same nonzero structure as those in linear programming. Furthermore, one variant of this method is theoretically convergent with only one matrix factorization throughout the procedure.
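The conjugate projected gradient inner loop can be illustrated with the classical projected CG for an equality-constrained QP, in which Q appears only through matrix-vector products, as the abstract emphasizes. This is the textbook scheme (assuming A has full row rank and Q is positive definite on the null space of A), not the paper's interior-point embedding:

```python
# Sketch: projected conjugate gradient for min 0.5 x^T Q x + c^T x, s.t. Ax = b.
# Q enters only through products Q @ p.
import numpy as np

def projected_cg(Q, c, A, b, tol=1e-10, max_iter=200):
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # a feasible starting point
    AAt = A @ A.T
    proj = lambda v: v - A.T @ np.linalg.solve(AAt, A @ v)  # onto null(A)
    r = proj(Q @ x + c)                             # projected gradient residual
    p = -r
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Qp = Q @ p
        alpha = (r @ r) / (p @ Qp)
        x = x + alpha * p
        r_new = proj(r + alpha * Qp)
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return x
```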

17.
In this paper, we introduce a Minty-type vector variational inequality, a Stampacchia-type vector variational inequality, and their weak forms, all defined by means of subdifferentials on Hadamard manifolds. We also study the equivalence relations between these vector variational inequalities and nonsmooth convex vector optimization problems. Using these equivalences and an analogue of the KKM lemma, we give existence theorems for weakly efficient solutions of convex vector optimization problems under relaxed compactness assumptions.

18.
The main aim of this paper is to accelerate an existing method for convex optimization over the fixed point set of a nonexpansive mapping. To achieve this goal, we present an algorithm (Algorithm 3.1) that uses a conjugate gradient direction, together with a convergence analysis (Theorem 3.1) under suitable assumptions. Finally, to demonstrate the effectiveness and performance of the proposed method, we present numerical comparisons with the existing method.

19.
We deal with extended-valued nonsmooth convex vector optimization problems in infinite-dimensional spaces, where the solution set (the weakly efficient set) may be empty. We characterize the class of convex vector functions with the property that every scalarly stationary sequence is a weakly efficient sequence. We generalize the results obtained in the scalar case by Auslender and Crouzeix on asymptotically well-behaved convex functions and substantially improve the few results known in the vector case.

20.
This paper studies the vector optimization problem of finding weakly efficient points for maps from R^n to R^m, with respect to the partial order induced by a closed, convex, and pointed cone C ⊂ R^m with nonempty interior. We develop an extension of the proximal point method for scalar-valued convex optimization with a modified convergence-sensing condition, which allows us to construct an interior proximal method for solving the vector optimization problem over nonpolyhedral sets.
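For reference, the scalar proximal point iteration being extended, together with one standard vector-valued analogue (Bonnel-Iusem-Svaiter type); the paper's interior proximal variant further modifies the regularization term:

```latex
% Classical proximal point step for a scalar convex f:
x^{k+1} \;=\; \operatorname*{argmin}_{x} \; f(x) + \tfrac{\lambda_k}{2}\,\|x - x^k\|^2 .
% A standard vector-valued analogue chooses x^{k+1} as a weakly efficient
% point of  F(x) + \tfrac{\lambda_k}{2}\,\|x - x^k\|^2 \, e  over the
% feasible set, for a fixed e in the interior of the ordering cone C.
```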
