Similar Literature (20 results)
1.
We consider the minimization problem with strictly convex, possibly nondifferentiable, separable cost and linear constraints. The dual of this problem is an unconstrained minimization problem with differentiable cost, which is well suited for solution by parallel methods based on Gauss-Seidel relaxation. We show that these methods yield the optimal primal solution and, under additional assumptions, an optimal dual solution. To do this it is necessary to extend the classical Gauss-Seidel convergence results, because the dual cost may not be strictly convex and may have unbounded level sets. Work supported by the National Science Foundation under grant NSF-ECS-3217668.
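A minimal sketch of the dual Gauss-Seidel idea on a hypothetical instance (separable quadratic cost; the data w, c, A, b and all parameter choices are ours, not the paper's): each sweep exactly maximizes the differentiable dual along one multiplier at a time, and the primal solution is recovered from the dual iterates.

```python
import numpy as np

# Hypothetical instance: minimize sum_i w_i/2*(x_i - c_i)^2 subject to A x = b.
# The dual is unconstrained with a differentiable cost; one Gauss-Seidel sweep
# exactly maximizes it along each multiplier p_j in turn.
rng = np.random.default_rng(0)
n, m = 8, 3
w = rng.uniform(1.0, 2.0, n)            # w_i > 0: strictly convex separable cost
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)          # guarantees a feasible point exists

p = np.zeros(m)                         # dual multipliers
x = c + (A.T @ p) / w                   # primal minimizer of the Lagrangian
for sweep in range(200):
    for j in range(m):
        r = b[j] - A[j] @ x             # j-th constraint residual
        step = r / np.sum(A[j] ** 2 / w)  # exact maximization along p_j
        p[j] += step
        x += A[j] * step / w            # keep x = x(p) up to date
print("constraint violation:", np.linalg.norm(A @ x - b))
```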

2.
We introduce a penalty term-based splitting algorithm with inertial effects designed for solving monotone inclusion problems involving the sum of maximally monotone operators and the convex normal cone to the (nonempty) set of zeros of a monotone and Lipschitz continuous operator. We show weak ergodic convergence of the generated sequence of iterates to a solution of the monotone inclusion problem, provided a condition expressed via the Fitzpatrick function of the operator describing the underlying set of the normal cone is verified. Under strong monotonicity assumptions we can even show strong nonergodic convergence of the iterates. This approach constitutes the starting point for investigating from a similar perspective monotone inclusion problems involving linear compositions of parallel-sum operators and, further, for the minimization of a complexly structured convex objective function subject to the set of minima of another convex and differentiable function.

3.
In this paper, based on a merit function of the split feasibility problem (SFP), we present a Newton projection method for solving it and analyze the convergence properties of the method. The merit function is differentiable and convex. But its gradient is a linear composite function of the projection operator, so it is nonsmooth in general. We prove that the sequence of iterates converges globally to a solution of the SFP as long as the regularization parameter matrix in the algorithm is chosen properly. In particular, under some local assumptions, which are necessary for the case where the projection operator is nonsmooth, we prove that the sequence of iterates generated by the algorithm converges superlinearly to a regular solution of the SFP. Finally, some numerical results are presented.
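The paper's Newton projection method is not reproduced here; as context, the sketch below runs the classical CQ gradient projection iteration on the same merit function f(x) = ½‖Ax − P_Q(Ax)‖², with invented box sets C and Q so both projections are componentwise clipping.

```python
import numpy as np

# SFP: find x in C with A x in Q.  Classical CQ iteration (not the paper's
# Newton projection method) on f(x) = 0.5*||Ax - P_Q(Ax)||^2, with invented
# boxes C = [-2, 2]^n and Q = [-1, 1]^m.
proj = lambda z, lo, hi: np.clip(z, lo, hi)

rng = np.random.default_rng(1)
m, n = 5, 8
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # stepsize in (0, 2/||A||^2)

for k in range(500):
    Ax = A @ x
    grad = A.T @ (Ax - proj(Ax, -1.0, 1.0))  # gradient of the merit function
    x = proj(x - gamma * grad, -2.0, 2.0)    # gradient step, then back onto C

Ax = A @ x
print("merit value:", 0.5 * np.linalg.norm(Ax - proj(Ax, -1.0, 1.0)) ** 2)
```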

4.
We propose a new modified primal–dual proximal best approximation method for solving convex, not necessarily differentiable, optimization problems. The novelty of the method lies in introducing memory: the iterates computed in previous steps enter the formulas defining the current iterate. To this end we consider projections onto intersections of halfspaces generated on the basis of the current as well as the previous iterates. To calculate these projections we use recently obtained closed-form expressions for projectors onto polyhedral sets. The resulting algorithm with memory inherits the strong convergence properties of the original best approximation proximal primal–dual algorithm. Additionally, we compare our algorithm with the original (non-inertial) one with the help of the so-called attraction property. Extensive numerical experiments on image reconstruction problems illustrate the advantages of including memory in the original algorithm.

5.
This paper presents a theoretical result on the convergence of a primal affine-scaling method for convex quadratic programs. It is shown that, as long as the stepsize is less than a threshold value which depends on the input data only, Ye and Tse's interior ellipsoid algorithm for convex quadratic programming is globally convergent without nondegeneracy assumptions. In addition, its local convergence rate is at least linear and the dual iterates converge ergodically. Research supported in part by the NSF under grant DDM-8721709.

6.
We consider the problem of minimizing a smooth convex objective function subject to the set of minima of another differentiable convex function. In order to solve this problem, we propose an algorithm which combines the gradient method with a penalization technique. Moreover, we insert in our algorithm an inertial term, which is able to take advantage of the history of the iterates. We show weak convergence of the generated sequence of iterates to an optimal solution of the optimization problem, provided a condition expressed via the Fenchel conjugate of the constraint function is fulfilled. We also prove convergence of the objective function values to the optimal objective value. The convergence analysis carried out in this paper relies on the celebrated Opial Lemma and generalized Fejér monotonicity techniques. We illustrate the functionality of the method via a numerical experiment addressing image classification via support vector machines.
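A minimal sketch of this iteration type, under our own illustrative choices of test problem, penalty schedule and stepsizes (none of which are the paper's):

```python
import numpy as np

# x_{k+1} = x_k + alpha*(x_k - x_{k-1}) - s_k*(grad f(x_k) + beta_k*grad g(x_k)),
# with beta_k -> infinity.  Here f(x) = 0.5*||x - t||^2 and the constraint set
# argmin g is the hyperplane c^T x = d, with g(x) = 0.5*(c^T x - d)^2.
t = np.array([3.0, 1.0])
c = np.array([1.0, 1.0]); d = 1.0
grad_f = lambda x: x - t
grad_g = lambda x: (c @ x - d) * c

alpha = 0.2                              # inertial coefficient
x_prev = x = np.zeros(2)
for k in range(1, 20001):
    beta = 0.1 * np.sqrt(k)              # slowly increasing penalty parameter
    s = 0.9 / (1.0 + beta * (c @ c))     # stepsize below 1/L of f + beta*g
    x_new = x + alpha * (x - x_prev) - s * (grad_f(x) + beta * grad_g(x))
    x_prev, x = x, x_new
print(x)                                 # approaches (1.5, -0.5) as beta grows
```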

7.
This short note gives the sharp bound for the Q-linear convergence rate of the iterates generated by the steepest descent method with exact line searches when the objective function is strictly convex quadratic.
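For orientation: the classical Kantorovich bound states that with exact line search on f(x) = ½xᵀQx, the function values satisfy f(x_{k+1}) ≤ ((κ−1)/(κ+1))² f(x_k), where κ = cond(Q); the note's sharp bound for the iterates is a relative of this. The invented 2×2 instance below attains the function-value bound at every step.

```python
import numpy as np

# f(x) = 0.5*x^T Q x with Q = diag(1, 10), so kappa = 10.  Starting from
# (kappa, 1) the per-step function-value ratio equals the Kantorovich bound.
Q = np.diag([1.0, 10.0])
kappa = 10.0
bound = ((kappa - 1.0) / (kappa + 1.0)) ** 2

f = lambda z: 0.5 * z @ Q @ z
x = np.array([10.0, 1.0])
for k in range(8):
    g = Q @ x                            # gradient
    t = (g @ g) / (g @ Q @ g)            # exact line-search stepsize
    x_new = x - t * g
    print(f"ratio = {f(x_new) / f(x):.6f}   bound = {bound:.6f}")
    x = x_new
```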

8.
This paper deals with iterative gradient and subgradient methods with random feasibility steps for solving constrained convex minimization problems, where the constraint set is specified as the intersection of possibly infinitely many constraint sets. Each constraint set is assumed to be given as a level set of a convex but not necessarily differentiable function. The proposed algorithms are applicable to the situation where the whole constraint set of the problem is not known in advance, but is rather learned over time through observations. The algorithms are also of interest for constrained optimization problems where the constraints are known but the number of constraints is either large or not finite. We analyze the proposed algorithm both for the case when the objective function is differentiable with Lipschitz gradients and for the case when the objective function is not necessarily differentiable. The behavior of the algorithm is investigated for both diminishing and non-diminishing stepsize values. Almost sure convergence to an optimal solution is established for diminishing stepsizes. For non-diminishing stepsizes, error bounds are established for the expected distances of the weighted averages of the iterates from the constraint set, as well as for the expected sub-optimality of the function values along the weighted averages.
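A sketch of one concrete instance of this scheme (our data and stepsize rule, with halfspace constraints sampled one at a time): a subgradient step on the objective is followed by a Polyak-type feasibility step onto a randomly drawn constraint.

```python
import numpy as np

# Minimize f(x) = ||x||_1 over the intersection of halfspaces a_i^T x <= b_i,
# sampling one constraint per iteration.  All data is invented.
rng = np.random.default_rng(2)
m, n = 50, 5
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 1.0     # a strictly feasible point exists

x = rng.standard_normal(n)
for k in range(1, 20001):
    x = x - (1.0 / k) * np.sign(x)       # diminishing subgradient step on f
    i = rng.integers(m)                  # random feasibility step:
    viol = A[i] @ x - b[i]
    if viol > 0:                         # Polyak projection onto the sampled
        x -= viol / (A[i] @ A[i]) * A[i] # halfspace, if it is violated
print("max constraint violation:", np.max(A @ x - b))
```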

9.
This paper presents a wide class of globally convergent interior-point algorithms for the nonlinear complementarity problem with a continuously differentiable monotone mapping, in terms of a unified global convergence theory given by Polak in 1971 for general nonlinear programs. The class of algorithms is characterized as follows: at each iteration, move in a Newton direction for approximating a point on the path of centers of the complementarity problem. Starting from a strictly positive but infeasible initial point, each algorithm in the class either generates an approximate solution with a given accuracy or provides us with the information that the complementarity problem has no solution in a given bounded set. We present three typical examples of our interior-point algorithms: a horn neighborhood model, a constrained potential reduction model using the standard potential function, and a pure potential reduction model using a new potential function. Research supported in part by Grants-in-Aid for Co-operative Research (03832017) of the Japan Ministry of Education, Science and Culture.

10.
In this paper, we present a measure of distance in a second-order cone based on a class of continuously differentiable strictly convex functions on ℝ_{++}. Since the distance function has some favorable properties similar to those of the D-function (Censor and Zenios in J. Optim. Theory Appl. 73:451–464 [1992]), we refer to it as a quasi D-function. Then, a proximal-like algorithm using the quasi D-function is proposed and applied to the second-order cone programming problem, which is to minimize a closed proper convex function with general second-order cone constraints. Like the proximal point algorithm using the D-function (Censor and Zenios in J. Optim. Theory Appl. 73:451–464 [1992]; Chen and Teboulle in SIAM J. Optim. 3:538–543 [1993]), under some mild assumptions we establish the global convergence of the algorithm expressed in terms of function values; we show that the sequence generated by the proposed algorithm is bounded and that every accumulation point is a solution to the considered problem. Research of Shaohua Pan was partially supported by the Doctoral Starting-up Foundation (B13B6050640) of GuangDong Province. Jein-Shan Chen is a member of the Mathematics Division, National Center for Theoretical Sciences, Taipei Office. The author's work was partially supported by the National Science Council of Taiwan.
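As context, the ℝ_{++} prototype of such D-function-based proximal steps is the entropic (multiplicative) update: with kernel h(t) = t log t − t, the Bregman step on the linearized objective is x⁺ = x · exp(−λ∇f(x)). The paper constructs the second-order-cone analogue; the sketch below shows only this prototype, on an invented problem.

```python
import numpy as np

# Entropic update on R_{++}^n: iterates stay positive by construction.
# Test problem: f(x) = 0.5*||x - t||^2 with minimizer t > 0; lam is ours.
t = np.array([2.0, 0.5, 1.0])
grad_f = lambda x: x - t

x = np.ones(3); lam = 0.2
for k in range(200):
    x = x * np.exp(-lam * grad_f(x))     # multiplicative (Bregman/entropy) step
print(x)                                 # converges to t, staying in R_{++}^n
```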

11.
An algorithm is presented which minimizes continuously differentiable pseudoconvex functions on convex compact sets which are characterized by their support functions. If the function can be minimized exactly on affine sets in a finite number of operations and the constraint set is a polytope, the algorithm has finite convergence. Numerical results are reported which illustrate the performance of the algorithm when applied to a specific search direction problem. The algorithm differs from existing algorithms in that it has proven convergence when applied to any convex compact set, and not just polytopal sets. This research was supported by the National Science Foundation Grant ECS-85-17362, the Air Force Office Scientific Research Grant 86-0116, the Office of Naval Research Contract N00014-86-K-0295, the California State MICRO program, and the Semiconductor Research Corporation Contract SRC-82-11-008.
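A set described by its support function is accessed through the linear maximization oracle argmax_{y∈C} ⟨d, y⟩. The classical conditional gradient (Frank-Wolfe) method, sketched below on the unit ball, uses exactly this access model; it is not the paper's algorithm, only an illustration of optimizing with a support-function oracle.

```python
import numpy as np

# C = unit Euclidean ball, whose support-function oracle is d -> d/||d||.
# Minimize f(x) = 0.5*||x - (2, 0)||^2 over C; the solution is (1, 0).
lmo = lambda d: d / np.linalg.norm(d)    # argmax_{y in C} <d, y>

grad_f = lambda x: x - np.array([2.0, 0.0])
x = np.zeros(2)
for k in range(200):
    s = lmo(-grad_f(x))                  # oracle call at the negative gradient
    x = x + 2.0 / (k + 2.0) * (s - x)    # standard Frank-Wolfe stepsize
print("minimizer on the ball:", x)
```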

12.
Consider the problem of minimizing a convex essentially smooth function over a polyhedral set. For the special case where the cost function is strictly convex, we propose a feasible descent method for this problem that chooses the descent directions from a finite set of vectors. When the polyhedral set is the nonnegative orthant or the entire space, this method reduces to a coordinate descent method which, when applied to certain duals of linearly constrained convex programs with strictly convex essentially smooth costs, contains as special cases a number of well-known dual methods for quadratic and entropy (either −log x or x log x) optimization. Moreover, convergence of these dual methods can be inferred from a general convergence result for the feasible descent method. When the cost function is not strictly convex, we propose an extension of the feasible descent method which makes descent along the elementary vectors of a certain subspace associated with the polyhedral set. The elementary vectors are not stored, but generated using the dual rectification algorithm of Rockafellar. By introducing an ε-complementary slackness mechanism, we show that this extended method terminates finitely with a solution whose cost is within an order of ε of the optimal cost. Because it uses the dual rectification algorithm, this method can exploit the combinatorial structure of the polyhedral set and is well suited for problems with a special (e.g., network) structure. This work was partially supported by the US Army Research Office Contract No. DAAL03-86-K-0171 and by the National Science Foundation Grant No. ECS-85-19058.

13.
C. Zălinescu, Optimization, 2016, 65(3): 651–670
It is known that, in finite dimensions, the support function of a compact convex set with nonempty interior is differentiable except at the origin if and only if the set is strictly convex. In this paper, we carry out a thorough study of the relations between the differentiability of the support function on the interior of its domain and the convexity of the set, mainly for unbounded sets. Then, we revisit some results related to the differentiability of the cost function associated with a production function.

14.
An algorithm for unconstrained minimization of a function of n variables that does not require the evaluation of partial derivatives is presented. It is a second-order extension of the method of local variations and does not require any exact one-variable minimizations. The method retains the local-variations property that accumulation points are stationary for a continuously differentiable function. Furthermore, because this extension makes the algorithm an approximate Newton method, its convergence is superlinear for a twice continuously differentiable strongly convex function. Research sponsored by National Science Foundation Grant GK-32710 and by the Air Force Office of Scientific Research, Air Force Systems Command, USAF, under Grant No. AFOSR-74-2695.
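A sketch of the underlying first-order scheme (plain coordinate probing with mesh refinement); the paper's second-order extension adds approximate-Newton steps, which are omitted here:

```python
import numpy as np

def local_variations(f, x, h=1.0, h_min=1e-8):
    """Probe +/- h along each coordinate; shrink h when no probe improves f."""
    x = x.copy()
    while h > h_min:
        improved = False
        for i in range(len(x)):
            for s in (h, -h):
                y = x.copy(); y[i] += s
                if f(y) < f(x):
                    x, improved = y, True
                    break                # accept the move, go to next axis
        if not improved:
            h *= 0.5                     # refine the mesh
    return x

f = lambda z: (z[0] - 1.0) ** 2 + 10.0 * (z[1] + 2.0) ** 2
print(local_variations(f, np.array([5.0, 5.0])))   # approaches (1, -2)
```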

15.
In this paper, we propose a decomposition algorithm for convex differentiable minimization. At each iteration, this algorithm solves a variational inequality problem obtained by adding to the gradient of the cost function a strongly proximal related function. A line search is then performed in the direction of the solution to this variational inequality (with respect to the original cost). If the constraint set is a Cartesian product of m sets, the variational inequality decomposes into m coupled variational inequalities, which can be solved in either a Jacobi manner or a Gauss-Seidel manner. This algorithm also applies to the minimization of a strongly convex (possibly nondifferentiable) cost subject to linear constraints. As special cases, we obtain the GP-SOR algorithm of Mangasarian and De Leone, a diagonalization algorithm of Feijoo and Meyer, the coordinate descent method, and the dual gradient method. This algorithm is also closely related to a splitting algorithm of Gabay and a gradient projection algorithm of Goldstein and of Levitin-Poljak, and has interesting applications to separable convex programming and to solving traffic assignment problems. This work was partially supported by the US Army Research Office Contract No. DAAL03-86-K-0171 and by the National Science Foundation Grant No. ECS-85-19058. The author thanks the referees for their many helpful comments, particularly for suggesting the use of a general function H instead of that given by (4).

16.
For the unconstrained program (P): min_{x∈ℝⁿ} f(x), where f(x) is a continuously differentiable function from ℝⁿ to ℝ¹, a super-memory gradient algorithm is designed. Without assuming boundedness of the sequence of iterates {x_k}, and under a generalized Armijo stepsize search, the global convergence of the algorithm is discussed, and the algorithm is shown to possess rather strong convergence properties.
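A minimal sketch of a memory-gradient direction combined with a backtracking Armijo-type search (the paper's generalized Armijo rule and direction are more general; the test function, memory coefficient, and safeguard below are ours):

```python
import numpy as np

def armijo(f, x, g, d, s=1.0, sigma=1e-4, rho=0.5):
    """Backtracking: largest t = s*rho^j with f(x + t*d) <= f(x) + sigma*t*<g, d>."""
    t = s
    while f(x + t * d) > f(x) + sigma * t * (g @ d):
        t *= rho
    return t

f = lambda x: np.log(1.0 + x @ x)        # smooth test function (ours)
grad = lambda x: 2.0 * x / (1.0 + x @ x)

x = np.array([3.0, -2.0])
d = -grad(x)
for k in range(100):
    g = grad(x)
    d = -g + 0.3 * d                     # memory term: reuse previous direction
    if g @ d >= 0.0:                     # safeguard: keep d a descent direction
        d = -g
    x = x + armijo(f, x, g, d) * d
print("x =", x, " ||grad|| =", np.linalg.norm(grad(x)))
```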

17.
We treat an extension of the generalized Fermat-Weber problem with convex cost functions. It is shown that the entire sequence of iterates (as opposed to selected subsequences) generated by each of the two proposed algorithms converges to a minimum, although the economic function is not strictly convex. The general idea is to associate, with the economic function, called h, a family of more regular strictly convex functions whose lower envelope is the function h.
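For context, the classical Weiszfeld iteration for the basic (unweighted, Euclidean) Fermat-Weber problem min_x Σ_i ‖x − a_i‖; the paper's envelope technique for general convex costs is not shown, only the problem being extended, on invented demand points:

```python
import numpy as np

# Demand points a_i; x minimizes sum_i ||x - a_i|| (invented data).
a = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 5.0]])
x = a.mean(axis=0)                       # start at the centroid
for _ in range(200):
    d = np.maximum(np.linalg.norm(a - x, axis=1), 1e-12)  # guard: x == a_i
    w = 1.0 / d
    x = (w[:, None] * a).sum(axis=0) / w.sum()
print("Fermat-Weber point:", x)
```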

18.
We consider the metric projection operator from a real Hilbert space onto a strongly convex set. We prove that the restriction of this operator to the complement of some neighborhood of the strongly convex set is Lipschitz continuous with Lipschitz constant strictly less than 1. This property characterizes the class of strongly convex sets and (to a certain degree) the Hilbert space. We apply the results obtained to the question of the rate of convergence for the gradient projection algorithm with a differentiable convex function and a strongly convex set.
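A sketch of the gradient projection algorithm on the prototypical strongly convex set, the Euclidean ball, where the metric projection has a closed form (instance and constants are ours):

```python
import numpy as np

# Minimize f(x) = 0.5*(x - t)^T Q (x - t) over the unit ball (a strongly
# convex set); projection is radial scaling.  Instance data is ours.
r = 1.0
proj = lambda z: z if np.linalg.norm(z) <= r else r * z / np.linalg.norm(z)

Q = np.diag([1.0, 4.0])
t = np.array([3.0, 2.0])                 # unconstrained minimizer, off the ball
grad_f = lambda x: Q @ (x - t)
L = 4.0                                  # Lipschitz constant of grad f

x = np.zeros(2)
for k in range(300):
    x = proj(x - (1.0 / L) * grad_f(x))  # gradient step, project back
print("solution:", x, " ||x|| =", np.linalg.norm(x))
```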

19.
A descent algorithm for nonsmooth convex optimization
This paper presents a new descent algorithm for minimizing a convex function which is not necessarily differentiable. The algorithm can be implemented and may be considered a modification of the ε-subgradient algorithm and Lemaréchal's descent algorithm. Our algorithm is also closely related to the proximal point algorithm applied to convex minimization problems. A convergence theorem for the algorithm is established under the assumption that the objective function is bounded from below. Limited computational experience with the algorithm is also reported.
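The proximal point iteration the abstract refers to is x_{k+1} = argmin_z f(z) + (1/2c)‖z − x_k‖². A hypothetical nonsmooth example where the prox is available in closed form (soft-thresholding):

```python
import numpy as np

# f(x) = ||x - v||_1; its prox is prox_{c f}(x) = v + soft(x - v, c).
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

v = np.array([2.0, -1.0, 0.5])
x = np.zeros(3); c = 0.3
for k in range(20):
    x = v + soft(x - v, c)               # one exact proximal point step
print(x)                                 # reaches the minimizer v
```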

20.
In this paper, we extend ordinary discrete-type facility location problems to continuous-type ones. Unlike the discrete-type facility location problem, in which the objective function is not everywhere differentiable, the objective function in the continuous-type facility location problem is strictly convex and continuously differentiable. An algorithm without line search for solving continuous-type facility location problems is proposed, and its global convergence and linear convergence rate are proved. Numerical experiments illustrate that the algorithm suggested in this paper requires, in a certain sense, less computation and converges faster than the gradient method and the conjugate direction method.
