Similar Literature
20 similar records found (search time: 31 ms)
1.
We present a new method for minimizing a strictly convex function subject to general convex constraints. Constraints are used one at a time and no changes are made to the constraint functions (hence the row-action nature of the algorithm); at each iteration a subproblem is solved that consists of minimizing the objective function subject to one or two linear equations. Convergence of the algorithm is established, and the method is compared with other row-action algorithms for several relevant particular cases. Research of this author was partially supported by CNPq grant No. 301280/86.
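
The abstract does not spell out the update rule. As a concrete, classical member of the row-action family it alludes to, here is a minimal sketch of Hildreth's method for the quadratic case min 0.5*||x - d||^2 subject to Ax <= b; the problem instance and iteration counts are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def hildreth(A, b, d, sweeps=200):
    """Row-action (Hildreth-type) sketch: project d onto {x : A x <= b}.

    One constraint row is used per inner step; the corresponding dual
    coordinate is maximized exactly and the primal iterate is updated.
    """
    m, n = A.shape
    lam = np.zeros(m)                  # dual multipliers, kept >= 0
    x = d.copy()                       # invariant: x = d - A^T lam
    row_norm2 = np.einsum("ij,ij->i", A, A)
    for _ in range(sweeps):
        for i in range(m):             # one row (constraint) at a time
            step = (A[i] @ x - b[i]) / row_norm2[i]
            new_lam = max(0.0, lam[i] + step)
            x -= (new_lam - lam[i]) * A[i]
            lam[i] = new_lam
    return x, lam

# Example: project the point (2, 2) onto the unit box
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
x, lam = hildreth(A, b, d=np.array([2.0, 2.0]))
print(x)   # ~ [1. 1.]
```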

2.
Consider a Lagrangian dual method that is convergent for consistent convex optimization problems. When it is used to solve an infeasible optimization problem, the inconsistency manifests itself through divergence of the sequence of dual iterates. Does the sequence of primal subproblem solutions still yield relevant information about the primal program? We answer this question in the affirmative for a convex program and an associated subgradient algorithm for its Lagrange dual. We show that the primal-dual pair of programs corresponding to an associated homogeneous dual function is in turn associated with a saddle-point problem in which, in the inconsistent case, the primal part amounts to finding a solution in the primal space such that the Euclidean norm of the infeasibility in the relaxed constraints is minimized, while the dual part amounts to identifying a feasible steepest ascent direction for the Lagrangian dual function. We present convergence results for a conditional \(\varepsilon \)-subgradient optimization algorithm applied to the Lagrangian dual problem, together with the construction of an ergodic sequence of primal subproblem solutions; this composite algorithm yields convergence of the primal-dual sequence to the set of saddle points of the associated homogeneous Lagrangian function. For linear programs, convergence to the subset in which the primal objective is at its minimum is also achieved.
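
As a rough illustration of the phenomenon described above, the following sketch runs plain subgradient ascent on the Lagrangian dual of min 0.5*||x||^2 s.t. Ax <= b with an inconsistent constraint set, averaging the primal subproblem solutions with the step sizes as weights. It is a simplified stand-in for the paper's conditional \(\varepsilon \)-subgradient scheme, and the toy instance is invented for the demo.

```python
import numpy as np

def dual_subgradient_ergodic(A, b, iters=5000):
    """Subgradient ascent on the Lagrangian dual of
        min 0.5*||x||^2  s.t.  A x <= b,
    with an ergodic (step-size-weighted) average of the primal
    subproblem solutions.  Even if Ax <= b is inconsistent, the
    averaged primal sequence stays meaningful while the duals diverge.
    """
    m, n = A.shape
    lam = np.zeros(m)
    x_erg, weight = np.zeros(n), 0.0
    for k in range(1, iters + 1):
        x = -A.T @ lam                  # argmin_x of the Lagrangian
        g = A @ x - b                   # subgradient of the dual function
        t = 1.0 / k                     # divergent-series step size
        lam = np.maximum(0.0, lam + t * g)
        x_erg = (weight * x_erg + t * x) / (weight + t)   # ergodic average
        weight += t
    return x_erg, lam

# Inconsistent instance in R^1:  x <= 1  and  -x <= -2  (i.e. x >= 2)
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])
x_erg, lam = dual_subgradient_ergodic(A, b)
print(x_erg)   # ~ 1.5: the point minimizing the infeasibility norm;
               # lam grows without bound, as the abstract predicts
```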

3.
It is shown that the dual of the problem of minimizing the 2-norm of the primal and dual optimal variables and slacks of a linear program can be transformed into an unconstrained minimization of a convex parameter-free globally differentiable piecewise quadratic function with a Lipschitz continuous gradient. If the slacks are not included in the norm minimization, one obtains a minimization problem with a convex parameter-free quadratic objective function subject to nonnegativity constraints only.

4.
A customized Douglas-Rachford splitting method (DRSM) was recently proposed to solve two-block separable convex optimization problems with linear constraints and simple abstract constraints. The algorithm has an advantage over the well-known alternating direction method of multipliers (ADMM), the dual application of DRSM to the two-block convex minimization problem, in the sense that its subproblems are unconstrained and therefore more likely to admit closed-form solutions. In this paper, we pursue this line of research by considering the primal application of DRSM in the general case m≥3, i.e., the multi-block separable convex minimization problem with linear constraints in which the objective function is separable into m individual convex functions without coupled variables. The resulting method fully exploits the separable structure and enjoys decoupled subproblems that can be solved simultaneously. Both exact and inexact versions of the new method are presented in a unified framework. Under mild conditions, we prove the global convergence of the algorithm. Preliminary numerical experiments on extracting the background from corrupted surveillance video verify the encouraging efficiency of the new algorithm.

5.
The alternating direction method of multipliers (ADMM) is a widely used method for solving many convex minimization models arising in signal and image processing. In this paper, we propose an inertial ADMM for solving a two-block separable convex minimization problem with linear equality constraints. The algorithm is obtained by applying the inertial Douglas-Rachford splitting algorithm to the dual of the primal problem. We analyze the convergence of the proposed algorithm in infinite-dimensional Hilbert spaces. Furthermore, we apply the proposed algorithm to the robust principal component analysis problem and compare it with other state-of-the-art algorithms. Numerical results demonstrate the advantage of the proposed algorithm.
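
A minimal sketch of one common way to add inertia to ADMM, on a toy consensus problem with closed-form subproblems. The extrapolation coefficient alpha, the parameter rho, and the problem data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def inertial_admm(a, d, rho=1.0, alpha=0.3, iters=200):
    """Inertial-ADMM sketch for the toy consensus problem
        min 0.5*||x - a||^2 + 0.5*||z - d||^2   s.t.  x - z = 0.
    Inertia: extrapolate (z, u) before each sweep; alpha in [0, 1/3)
    is a typical safe range for inertial splitting schemes.
    """
    n = a.size
    z = z_old = np.zeros(n)
    u = u_old = np.zeros(n)            # scaled dual variable
    for _ in range(iters):
        zh = z + alpha * (z - z_old)   # inertial extrapolation
        uh = u + alpha * (u - u_old)
        z_old, u_old = z, u
        x = (a + rho * (zh - uh)) / (1 + rho)   # x-subproblem (closed form)
        z = (d + rho * (x + uh)) / (1 + rho)    # z-subproblem (closed form)
        u = uh + x - z                           # dual update
    return x, z

x, z = inertial_admm(np.array([0.0]), np.array([2.0]))
print(x, z)   # both ~ 1.0, the midpoint of a and d
```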

6.
We consider the minimization problem with strictly convex, possibly nondifferentiable, separable cost and linear constraints. The dual of this problem is an unconstrained minimization problem with differentiable cost which is well suited for solution by parallel methods based on Gauss-Seidel relaxation. We show that these methods yield the optimal primal solution and, under additional assumptions, an optimal dual solution. To do this it is necessary to extend the classical Gauss-Seidel convergence results because the dual cost may not be strictly convex, and may have unbounded level sets. Work supported by the National Science Foundation under grant NSF-ECS-3217668.
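
To make the Gauss-Seidel relaxation on the dual concrete: for a separable quadratic cost the dual is differentiable and each multiplier can be maximized exactly in closed form. The sketch below assumes the specific cost 0.5*||x - c||^2 with equality constraints Ax = b; it illustrates the relaxation scheme, not the paper's general nondifferentiable setting.

```python
import numpy as np

def dual_gauss_seidel(A, b, c, sweeps=100):
    """Gauss-Seidel relaxation on the (differentiable) dual of
        min sum_i 0.5*(x_i - c_i)^2   s.t.  A x = b,
    a separable strictly convex cost with linear constraints.
    Each inner step maximizes the dual exactly in one multiplier.
    """
    m, n = A.shape
    lam = np.zeros(m)
    x = c - A.T @ lam                  # primal minimizer for current lam
    row_norm2 = np.einsum("ij,ij->i", A, A)
    for _ in range(sweeps):
        for j in range(m):
            dl = (A[j] @ x - b[j]) / row_norm2[j]   # exact coordinate step
            lam[j] += dl
            x -= dl * A[j]             # keep x = c - A^T lam up to date
    return x, lam

# Project c onto the hyperplane sum(x) = 0
A = np.ones((1, 3)); b = np.array([0.0]); c = np.array([1.0, 2.0, 3.0])
x, lam = dual_gauss_seidel(A, b, c)
print(x)   # ~ [-1., 0., 1.]
```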

7.
A proximal-based decomposition method for convex minimization problems
This paper presents a decomposition method for solving convex minimization problems. At each iteration, the algorithm computes two proximal steps in the dual variables and one proximal step in the primal variables. We derive this algorithm from Rockafellar's proximal method of multipliers, which involves an augmented Lagrangian with an additional quadratic proximal term. The algorithm preserves the good features of the proximal method of multipliers, with the additional advantage that it leads to a decoupling of the constraints and is thus suitable for parallel implementation. We allow the proximal minimization steps to be computed approximately, and we prove that under mild assumptions on the problem's data the method is globally convergent at a linear rate. The method is compared with alternating direction type methods and applied to the particular case of minimizing a convex function over a finite intersection of closed convex sets. Partially supported by Air Force Office of Scientific Research Grant 91-0008 and National Science Foundation Grant DMS-9201297.
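
The following sketch follows the pattern the abstract describes (a predictor proximal step in the dual, decoupled proximal steps in the primal blocks, then a corrector dual step) on the toy two-block problem min 0.5*||x - a||^2 + 0.5*||z - d||^2 s.t. x = z. The step size mu and the closed-form updates are assumptions made for the demo, not the paper's exact parameter rules.

```python
import numpy as np

def proximal_decomposition(a, d, mu=0.5, iters=300):
    """Proximal-based decomposition sketch for
        min 0.5*||x - a||^2 + 0.5*||z - d||^2   s.t.  x = z.
    Each iteration: dual predictor step, decoupled proximal steps in
    the primal blocks (parallelizable), then a dual corrector step.
    """
    x = np.zeros_like(a); z = np.zeros_like(d); y = np.zeros_like(a)
    for _ in range(iters):
        p = y + mu * (x - z)                     # dual predictor
        x = (mu * (a - p) + x) / (mu + 1.0)      # x-block proximal step
        z = (mu * (d + p) + z) / (mu + 1.0)      # z-block proximal step
        y = y + mu * (x - z)                     # dual corrector
    return x, z, y

x, z, y = proximal_decomposition(np.array([0.0]), np.array([2.0]))
print(x, z)   # both ~ 1.0
```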

8.
We consider a primal optimization problem in a reflexive Banach space and a duality scheme via generalized augmented Lagrangians. For solving the dual problem (in a Hilbert space), we introduce and analyze a new parameterized Inexact Modified Subgradient (IMSg) algorithm. The IMSg generates a primal-dual sequence, and we focus on two simple new choices of the stepsize. We prove that every weak accumulation point of the primal sequence is a primal solution and the dual sequence converges weakly to a dual solution, as long as the dual optimal set is nonempty. Moreover, we establish primal convergence even when the dual optimal set is empty. Our second choice of the stepsize gives rise to a variant of IMSg which has finite termination.

9.
Ma Yumin, Cai Xingju. Mathematica Numerica Sinica (《计算数学》), 2022, 44(2): 272-288
The augmented Lagrangian method is an effective algorithm for solving convex optimization problems with linear constraints. The linearized augmented Lagrangian method linearizes the quadratic penalty term of the augmented Lagrangian function and adds a proximal regularization term, so that the subproblems become easy to solve; a proper choice of the regularization coefficient is crucial to the convergence and the convergence rate of the algorithm. A larger coefficient guarantees convergence but tends to force small step sizes, while a smaller coefficient permits larger step sizes but may cause divergence. In this paper we consider convex optimization problems with linear equality or inequality constraints. Using an adaptive technique, we design a class of indefinite linearized augmented Lagrangian methods: the regularization coefficient is chosen adaptively from the information at the current iterate, so as to make the admissible step-size range of the subproblems as large as possible while guaranteeing convergence, thereby speeding up the algorithm. We prove the global convergence of the algorithm and illustrate its effectiveness with numerical experiments.
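
A minimal linearized-ALM sketch on a standard test problem (basis pursuit). It uses the conservative fixed regularization coefficient r >= beta*||A^T A|| that the paper's adaptive scheme seeks to improve upon, so it illustrates the baseline method rather than the adaptive variant; the instance is invented for the demo.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_alm(A, b, beta=1.0, iters=3000):
    """Linearized ALM sketch for basis pursuit: min ||x||_1 s.t. Ax = b.
    The quadratic penalty is linearized and a proximal term
    (r/2)||x - x^k||^2 is added, so each subproblem is a soft-threshold.
    """
    m, n = A.shape
    r = beta * np.linalg.norm(A.T @ A, 2) * 1.01   # conservative fixed r
    x = np.zeros(n); lam = np.zeros(m)
    for _ in range(iters):
        grad = A.T @ (lam + beta * (A @ x - b))    # gradient of linearized part
        x = soft(x - grad / r, 1.0 / r)            # proximal step
        lam = lam + beta * (A @ x - b)             # multiplier update
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[[3, 17, 41]] = [1.0, -2.0, 1.5]
x = linearized_alm(A, A @ x_true)
print(np.round(x[[3, 17, 41]], 2))   # close to the planted coefficients
```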

10.
The nuclear norm minimization problem is to find a matrix with the minimum nuclear norm subject to linear and second order cone constraints. Such a problem often arises from the convex relaxation of a rank minimization problem with noisy data, and arises in many fields of engineering and science. In this paper, we study inexact proximal point algorithms in the primal, dual and primal-dual forms for solving the nuclear norm minimization with linear equality and second order cone constraints. We design efficient implementations of these algorithms and present comprehensive convergence results. In particular, we investigate the performance of our proposed algorithms in which the inner sub-problems are approximately solved by the gradient projection method or the accelerated proximal gradient method. Our numerical results for solving randomly generated matrix completion problems and real matrix completion problems show that our algorithms perform favorably in comparison to several recently proposed state-of-the-art algorithms. Interestingly, our proposed algorithms are connected with other algorithms that have been studied in the literature.
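
As a simpler relative of the algorithms studied there, here is a proximal-gradient sketch for the penalized matrix-completion model min mu*||X||_* + 0.5*||P_Omega(X - M)||^2; singular value thresholding (SVT) is the proximal map of the nuclear norm. Problem sizes, mu, and iteration counts are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def matrix_completion(M_obs, mask, mu=0.1, iters=300):
    """Proximal-gradient sketch for
        min mu*||X||_* + 0.5*||P_Omega(X - M)||^2.
    The data-fit gradient has Lipschitz constant 1, so unit steps work.
    """
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        G = mask * (X - M_obs)       # gradient of the smooth part
        X = svt(X - G, mu)           # proximal step = SVT
    return X

rng = np.random.default_rng(1)
L = rng.standard_normal((30, 5)); R = rng.standard_normal((5, 30))
M = L @ R                            # rank-5 ground truth
mask = rng.random(M.shape) < 0.6     # observe 60% of the entries
X = matrix_completion(M * mask, mask)
print(np.linalg.norm(X - M) / np.linalg.norm(M))  # modest relative error
                                     # (mu introduces some shrinkage bias)
```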

11.
In this paper, a minimization problem with a convex objective function subject to a separable convex inequality constraint of type “≤” and bounded variables (box constraints) is considered. We propose an iterative algorithm based on line search for solving this problem and prove its convergence. At each iteration, a separable convex programming problem with the same constraint set is solved using the Karush-Kuhn-Tucker conditions. Convex minimization problems subject to a linear equality or linear inequality “≥” constraint and bounds on the variables are also considered. A numerical illustration is included in support of the theory.
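
For the special case of a quadratic objective with the single constraint sum(x) <= r and box bounds, the KKT conditions reduce the problem to a one-dimensional search for the multiplier, performed below by bisection. This is a classical special case assumed for illustration, not the paper's general line-search algorithm.

```python
import numpy as np

def box_knapsack(d, r, lo=0.0, hi=1.0, tol=1e-10):
    """KKT/bisection sketch for the separable problem
        min 0.5*||x - d||^2  s.t.  sum(x) <= r,  lo <= x_i <= hi.
    The KKT conditions give x_i(lmb) = clip(d_i - lmb, lo, hi); we
    bisect on the single multiplier lmb >= 0 to meet the constraint.
    """
    x = np.clip(d, lo, hi)
    if x.sum() <= r:                  # constraint inactive: lmb = 0
        return x
    a, b = 0.0, np.max(d) - lo        # bracket: sum is decreasing in lmb
    while b - a > tol:
        lmb = 0.5 * (a + b)
        if np.clip(d - lmb, lo, hi).sum() > r:
            a = lmb
        else:
            b = lmb
    return np.clip(d - 0.5 * (a + b), lo, hi)

x = box_knapsack(np.array([0.9, 0.8, 0.7]), r=1.5)
print(x, x.sum())   # -> [0.6 0.5 0.4], sum = 1.5
```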

12.
This paper describes and analyzes a method of successive approximations for finding critical points of a function that can be written as the difference of two convex functions. The method is based on a nonconvex duality theory. At each iteration one solves a convex optimization problem, alternating between the primal and the dual variables. Under very general structural conditions on the problem, we prove that the resulting sequence is a descent sequence which converges to a critical point of the problem. To illustrate the method, it is applied to some weighted eigenvalue problems, to a problem from astrophysics, and to some semilinear elliptic equations.
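
A minimal DCA-style sketch on an eigenvalue-flavored example f(x) = 0.25*||x||^4 - 0.5*x^T Q x, both parts convex when Q is positive semidefinite: each iteration linearizes the subtracted convex part and solves the resulting convex subproblem, which here has a closed form. The example and its closed forms are our own illustration, not the paper's applications.

```python
import numpy as np

def dca_eigen(Q, x0, iters=100):
    """Difference-of-convex sketch for
        f(x) = 0.25*||x||^4 - 0.5*x^T Q x  =  g(x) - h(x).
    Each step linearizes h at x^k and minimizes the convex function
    g(x) - <grad h(x^k), x>, whose minimizer is
        x = Q x^k / ||Q x^k||^(2/3).
    Critical points satisfy ||x||^2 x = Q x: scaled eigenvectors of Q.
    """
    x = x0.astype(float)
    for _ in range(iters):
        y = Q @ x                    # grad h(x^k) = Q x^k
        nrm = np.linalg.norm(y)
        if nrm == 0:
            break
        x = y / nrm ** (2.0 / 3.0)   # closed-form convex subproblem
    return x

Q = np.diag([3.0, 1.0, 0.5])
x = dca_eigen(Q, np.array([1.0, 1.0, 1.0]))
print(x, np.linalg.norm(x) ** 2)     # dominant eigendirection; ||x||^2 ~ 3
```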

13.
A primal-dual version of the proximal point algorithm is developed for linearly constrained convex programming problems. The algorithm is an iterative method for finding a saddle point of the Lagrangian of the problem. At each iteration, we compute an approximate saddle point of the Lagrangian function augmented by quadratic proximal terms in both the primal and dual variables. Specifically, we first minimize the function with respect to the primal variables and then approximately maximize the resulting function of the dual variables. The merit of this approach lies in the fact that the latter function is differentiable and its maximization is unconstrained. We discuss convergence properties of the algorithm and report some numerical results for network flow problems with separable quadratic costs.
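
A sketch of the iteration described above for the toy problem min 0.5*||x - d||^2 s.t. Ax = b: the proximally regularized Lagrangian is minimized exactly in x (closed form here), and the resulting differentiable, unconstrained dual function is approximately maximized by a few gradient-ascent steps. The parameters c and t, the inner iteration count, and the instance are assumptions for the demo.

```python
import numpy as np

def primal_dual_ppa(A, b, d, c=1.0, outer=100, inner=5, t=0.3):
    """Primal-dual proximal point sketch for min 0.5*||x-d||^2 s.t. Ax = b.
    Each outer step seeks a saddle of the Lagrangian plus proximal terms
    (1/2c)||x - x^k||^2 - (1/2c)||y - y^k||^2: exact x-minimization,
    then a few unconstrained ascent steps on the smooth dual function.
    """
    m, n = A.shape
    x = np.zeros(n); y = np.zeros(m)
    for _ in range(outer):
        xk, yk = x.copy(), y.copy()
        for _ in range(inner):                        # approximate dual max
            x = (c * (d - A.T @ y) + xk) / (c + 1.0)  # exact x-minimization
            y = y + t * (A @ x - b - (y - yk) / c)    # dual ascent step
    return x, y

# Project d onto the hyperplane x1 + x2 = 1
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); d = np.array([1.0, 1.0])
x, y = primal_dual_ppa(A, b, d)
print(x)   # ~ [0.5, 0.5]
```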

14.
In this paper, we present a dual algorithm for minimizing a convex quadratic function with two quadratic constraints. Such a minimization problem is a subproblem that appears in some trust region algorithms for general nonlinear programming. Some theoretical properties of the dual problem are given. Global convergence of the algorithm is proved and a local superlinear convergence result is presented. Numerical examples are also provided.

15.
1. Introduction. The problem considered in this paper is to minimize f(x) over the set X = {x ∈ R^n | a_j^T x ≤ h_j, j ∈ I = {1, ..., m}}, where the a_j ∈ R^n (j ∈ I) are all column vectors, the h_j ∈ R^1 (j ∈ I) are all scalars, and f: R^n → R is a continuously differentiable function. We consider only inequality constraints here, since any equality can be expressed as two inequalities. Without assuming regularity of the linear constraints, there is no difficulty in extending the results to the genera… (This research is supported by the National Natural Sciences Foundation of China and the Natural Sciences Foundation of Hunan Province.)

16.
Branch and bound approaches for nonconvex programming problems have been given in [1] and [4]. Crucial to both is the use of rectangular partitions, convex envelopes, and separable nonconvex portions of the objective function and constraints. We propose a similar algorithm which solves a sequence of problems in each of which the objective function is convex or even linear. The main differences between this approach and previous ones are the use of general compact partitions instead of rectangular ones and a different refining rule, so that the algorithm neither relies on the concept of convex envelopes nor requires separable functions. First we describe a general algorithm and prove a convergence theorem under suitable regularity assumptions. Then, as an example, we give an algorithm for concave minimization problems.

17.
In this paper, based on a merit function of the split feasibility problem (SFP), we present a Newton projection method for solving it and analyze the convergence properties of the method. The merit function is differentiable and convex. But its gradient is a linear composite function of the projection operator, so it is nonsmooth in general. We prove that the sequence of iterates converges globally to a solution of the SFP as long as the regularization parameter matrix in the algorithm is chosen properly. Especially, under some local assumptions which are necessary for the case where the projection operator is nonsmooth, we prove that the sequence of iterates generated by the algorithm superlinearly converges to a regular solution of the SFP. Finally, some numerical results are presented.
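
The paper takes regularized Newton steps on this merit function; as a simpler, well-known baseline on the same merit function, here is a gradient-projection (CQ-type) sketch, with the sets, projections, and problem data invented for the demo.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, iters=500):
    """Gradient-projection (CQ-type) sketch on the SFP merit function
        f(x) = 0.5*||Ax - P_Q(Ax)||^2,
    whose gradient A^T(Ax - P_Q(Ax)) is a composite of the projection
    operator and hence nonsmooth in general.
    """
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step in (0, 2/||A||^2)
    x = x0.astype(float)
    for _ in range(iters):
        y = A @ x
        grad = A.T @ (y - proj_Q(y))          # gradient of the merit function
        x = proj_C(x - gamma * grad)          # project back onto C
    return x

# Find x in the unit box with Ax in the unit ball
A = np.array([[2.0, 0.0], [0.0, 3.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: y / max(1.0, np.linalg.norm(y))
x = cq_algorithm(A, proj_C, proj_Q, np.array([1.0, 1.0]))
print(x, np.linalg.norm(A @ x))   # Ax lands (approximately) in the unit ball
```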

18.
1. Introduction. The quadratic programming (QP) problem is the simplest one in nonlinear programming and plays a very important role in optimization theory and applications. It is well known that matrix splitting techniques are widely used for solving large-scale linear systems of equations very successfully. These algorithms generate an infinite sequence, in contrast to the direct algorithms which terminate in a finite number of steps. However, iterative algorithms are considerably simpler tha…
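
To make the matrix-splitting idea concrete for QP: splitting Q into its diagonal/lower triangle and the rest yields a projected Gauss-Seidel sweep for a bound-constrained strictly convex QP, an iterative method generating an infinite sequence, in contrast to direct, finitely terminating algorithms. The sketch below is an assumed illustration; the paper's particular splitting scheme may differ.

```python
import numpy as np

def projected_gauss_seidel(Q, c, iters=200):
    """Matrix-splitting sketch for the bound-constrained QP
        min 0.5*x^T Q x + c^T x   s.t.  x >= 0,
    with Q symmetric positive definite: a projected Gauss-Seidel sweep.
    """
    n = len(c)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # minimize over x_i with the other coordinates fixed, then project
            r = c[i] + Q[i] @ x - Q[i, i] * x[i]
            x[i] = max(0.0, -r / Q[i, i])
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, 1.0])
print(projected_gauss_seidel(Q, c))   # KKT point: x1 ~ 0.5, x2 = 0
```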

19.
We introduce a new and very simple algorithm for a class of smooth convex constrained minimization problems: an iterative scheme related to sequential quadratically constrained quadratic programming methods, which we call the sequential simple quadratic method (SSQM). The computational simplicity of SSQM, which uses only first-order information, makes it suitable for large-scale problems. Theoretical results under standard assumptions are given, proving that the whole sequence built by the algorithm converges to a solution and becomes feasible after a finite number of iterations. When, in addition, the objective function is strongly convex, an asymptotic linear rate of convergence is established.

20.
In this paper, we introduce the notion of a weak sharp set of solutions to a variational inequality problem (VIP) in a reflexive, strictly convex and smooth Banach space, and present several equivalent conditions for it. We also prove, under some continuity and monotonicity assumptions, that if any sequence generated by an algorithm for solving (VIP) converges to a weak sharp solution, then solutions of (VIP) can be obtained by solving a finite number of convex optimization subproblems with linear objectives. Moreover, in order to characterize finite convergence of an iterative algorithm, we introduce the notion of a weak subsharp set of solutions to (VIP), which is more general than that of weak sharp solutions in Hilbert spaces. We establish a necessary and sufficient condition for the finite convergence of an algorithm for solving (VIP) whose generated sequence converges to a weak subsharp solution, and show that the proximal point algorithm satisfies this condition. As a consequence, we prove that the proximal point algorithm terminates finitely whenever the sequence it generates converges to a weak subsharp solution of (VIP).
