Similar Documents
 20 similar documents retrieved (search time: 31 ms)
1.
In this paper, we develop sufficient conditions for the existence of local and global saddle points of two classes of augmented Lagrangian functions for nonconvex optimization problems with both equality and inequality constraints, improving the corresponding results in the literature. The main feature of our sufficient condition for the existence of global saddle points is that we do not require uniqueness of the optimal solution. Furthermore, we show that the existence of global saddle points is a necessary and sufficient condition for the exact penalty representation in the framework of augmented Lagrangians. Based on these results, we convert a class of generalized semi-infinite programming problems into standard semi-infinite programming problems via augmented Lagrangians. Some new first-order optimality conditions are also discussed. This research was supported by the National Natural Science Foundation of P.R. China (Grant No. 10571106 and No. 10701047).
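For orientation, one standard augmented Lagrangian for a problem min f(x) subject to h_i(x) = 0 and g_j(x) ≤ 0 is the Rockafellar-type function below; the two classes studied in the paper may use different augmenting terms, so this is only an illustrative instance of the saddle-point notion involved:
\[
L_\rho(x,\lambda,\mu) \;=\; f(x) \;+\; \sum_{i} \lambda_i h_i(x) \;+\; \frac{\rho}{2}\sum_i h_i(x)^2
\;+\; \frac{1}{2\rho}\sum_j\Bigl(\max\{0,\;\mu_j + \rho\, g_j(x)\}^2 - \mu_j^2\Bigr),
\]
and \((x^*,\lambda^*,\mu^*)\) is a global saddle point if
\[
L_\rho(x^*,\lambda,\mu) \;\le\; L_\rho(x^*,\lambda^*,\mu^*) \;\le\; L_\rho(x,\lambda^*,\mu^*)
\quad \text{for all } x,\ \lambda,\ \mu \ge 0 .
\]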

2.
Given a set of corrupted data drawn from a union of multiple subspaces, the subspace recovery problem is to segment the data into their respective subspaces and to correct the possible noise simultaneously. It has recently been discovered that this task can be characterized, both theoretically and numerically, by solving a convex minimization problem involving the matrix nuclear norm and the ℓ2,1 mixed norm. The minimization model has a separable structure in both the objective function and the constraint; it thus falls into the framework of the augmented Lagrangian alternating direction approach. In this paper, we propose and investigate an augmented Lagrangian algorithm. We split the augmented Lagrangian function and minimize the subproblems alternately, updating one variable while fixing the other. Moreover, we linearize the subproblem and add a proximal point term so that closed-form solutions are easily derived. Global convergence of the proposed algorithm is established under some technical conditions. Extensive experiments on simulated and real data verify that the proposed method is very effective and faster than the state-of-the-art algorithm LRR.
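A representative model of this type is the low-rank representation (LRR) formulation cited in the abstract; the exact model analyzed in the paper may differ in details, but a standard instance with data matrix X reads
\[
\min_{Z,\,E}\;\; \|Z\|_* \;+\; \lambda\,\|E\|_{2,1}
\qquad \text{s.t.}\qquad X \;=\; XZ + E,
\]
where \(\|Z\|_*\) is the nuclear norm (sum of singular values) and \(\|E\|_{2,1}\) is the sum of the Euclidean norms of the columns of E.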

3.
刘芳  王长钰 《经济数学》2007,24(4):420-426
Using an exponential-type augmented Lagrangian function, this paper transforms, under certain conditions, a class of generalized semi-infinite min-max problems into standard semi-infinite min-max problems that share the same local and global optimal solutions. Two transformation conditions are given: one is necessary and sufficient, and the other is a sufficient condition that is easy to verify in practice. Through this transformation, a new first-order optimality condition for generalized semi-infinite min-max problems is derived.

4.
Jia  Xiaoxi  Kanzow  Christian  Mehlitz  Patrick  Wachsmuth  Gerd 《Mathematical Programming》2023,199(1-2):1365-1415

This paper is devoted to the theoretical and numerical investigation of an augmented Lagrangian method for the solution of optimization problems with geometric constraints. Specifically, we study situations where parts of the constraints are nonconvex and possibly complicated, but allow for a fast computation of projections onto this nonconvex set. Typical problem classes which satisfy this requirement are optimization problems with disjunctive constraints (like complementarity or cardinality constraints) as well as optimization problems over sets of matrices which have to satisfy additional rank constraints. The key idea behind our method is to keep these complicated constraints explicit and to penalize only the remaining constraints by an augmented Lagrangian function. The resulting subproblems are then solved with the aid of a problem-tailored nonmonotone projected gradient method. The corresponding convergence theory allows for an inexact solution of these subproblems. Nevertheless, the overall algorithm computes so-called Mordukhovich-stationary points of the original problem under a mild asymptotic regularity condition, which is generally weaker than most of the available problem-tailored constraint qualifications. Extensive numerical experiments addressing complementarity- and cardinality-constrained optimization problems as well as a semidefinite reformulation of MAXCUT problems demonstrate the power of our approach.
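A minimal sketch of this idea, assuming a problem min f(x) s.t. c(x) = 0 and x in a (possibly nonconvex) set D with a cheap projection proj_D. All function names and parameters here are placeholders, and the simplified monotone projected-gradient inner loop stands in for the paper's nonmonotone variant:

```python
import numpy as np

def alm_with_geometric_constraints(f, grad_f, c, jac_c, proj_D, x0,
                                   rho=10.0, outer_iters=20, inner_iters=100,
                                   step=1e-2, tol=1e-6):
    """Schematic augmented Lagrangian method: the constraint c(x)=0 is penalized,
    while the complicated set D is kept explicit and handled by projection."""
    x, lam = x0.copy(), np.zeros(c(x0).shape)
    for _ in range(outer_iters):
        # Inner loop: projected gradient on the augmented Lagrangian over D.
        for _ in range(inner_iters):
            cx = c(x)
            # gradient of L_rho(x, lam) = f(x) + lam^T c(x) + (rho/2) ||c(x)||^2
            g = grad_f(x) + jac_c(x).T @ (lam + rho * cx)
            x_new = proj_D(x - step * g)
            if np.linalg.norm(x_new - x) <= tol:
                x = x_new
                break
            x = x_new
        # Multiplier update and penalty increase.
        lam = lam + rho * c(x)
        rho *= 2.0
    return x, lam
```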


5.
This paper develops a new error criterion for the approximate minimization of augmented Lagrangian subproblems. This criterion is practical since it is readily testable given only a gradient (or subgradient) of the augmented Lagrangian. It is also “relative” in the sense of relative error criteria for proximal point algorithms: in particular, it uses a single relative tolerance parameter, rather than a summable parameter sequence. Our analysis first describes an abstract version of the criterion within Rockafellar’s general parametric convex duality framework, and proves a global convergence result for the resulting algorithm. Specializing this algorithm to a standard formulation of convex programming produces a version of the classical augmented Lagrangian method with a novel inexact solution condition for the subproblems. Finally, we present computational results drawn from the CUTE test set, including many nonconvex problems, indicating that the approach works well in practice.

6.
We propose a Uzawa block relaxation domain decomposition method for a two-body frictionless contact problem. We introduce auxiliary variables to separate subdomains representing linear elastic bodies. Applying a Uzawa block relaxation algorithm to the corresponding augmented Lagrangian functional yields a domain decomposition algorithm in which we have to solve two uncoupled linear elasticity subproblems in each iteration while the auxiliary variables are computed explicitly using Kuhn–Tucker optimality conditions.

7.
This paper presents two new approximate versions of the alternating direction method of multipliers (ADMM) derived by modifying the original “Lagrangian splitting” convergence analysis of Fortin and Glowinski. They require neither strong convexity of the objective function nor any restrictions on the coupling matrix. The first method uses an absolutely summable error criterion and resembles methods that may readily be derived from earlier work on the relationship between the ADMM and the proximal point method, but without any need for restrictive assumptions to make it practically implementable. It permits both subproblems to be solved inexactly. The second method uses a relative error criterion and the same kind of auxiliary iterate sequence that has recently been proposed to enable relative-error approximate implementation of non-decomposition augmented Lagrangian algorithms. It also allows both subproblems to be solved inexactly, although ruling out “jamming” behavior requires a somewhat complicated implementation. The convergence analyses of the two methods share extensive underlying elements.
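For reference, the exact ADMM for min_{x,z} f(x) + g(z) subject to Ax + Bz = c performs the iteration below; the two methods proposed in the paper replace the exact subproblem minimizations with approximate ones governed by the respective absolute and relative error criteria:
\begin{align*}
x^{k+1} &\in \arg\min_x\; f(x) + \langle \lambda^k, Ax + Bz^k - c\rangle + \tfrac{\rho}{2}\|Ax + Bz^k - c\|^2,\\
z^{k+1} &\in \arg\min_z\; g(z) + \langle \lambda^k, Ax^{k+1} + Bz - c\rangle + \tfrac{\rho}{2}\|Ax^{k+1} + Bz - c\|^2,\\
\lambda^{k+1} &= \lambda^k + \rho\,(Ax^{k+1} + Bz^{k+1} - c).
\end{align*}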

8.
The augmented Lagrangian function is one of the most important tools for solving constrained optimization problems. In this article, we study an augmented Lagrangian objective penalty function and a modified augmented Lagrangian objective penalty function for inequality-constrained optimization problems. First, we prove duality properties of the augmented Lagrangian objective penalty function, which are at least as good as those of the traditional Lagrangian function. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker condition; in particular, for convex programming the Karush-Kuhn-Tucker condition guarantees the existence of such a saddle point. Second, we prove duality properties of the modified augmented Lagrangian objective penalty function. For a global optimal solution, when the modified augmented Lagrangian objective penalty function is exact, its saddle point exists; a necessary and sufficient stability condition for this exactness at a global solution is proved. Based on the modified augmented Lagrangian objective penalty function, an algorithm is developed to find a global solution to an inequality-constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, a necessary and sufficient calmness condition for the exactness of the modified augmented Lagrangian objective penalty function at a local solution is proved, and an algorithm for finding a local solution is presented, with its convergence proved under some conditions.

9.

This paper addresses problems of second-order cone programming important in optimization theory and applications. The main attention is paid to the augmented Lagrangian method (ALM) for such problems considered in both exact and inexact forms. Using generalized differential tools of second-order variational analysis, we formulate the corresponding version of second-order sufficiency and use it to establish, among other results, the uniform second-order growth condition for the augmented Lagrangian. The latter allows us to justify the solvability of subproblems in the ALM and to prove the linear primal–dual convergence of this method.


10.
In this paper, we present a necessary and sufficient condition for a zero duality gap between a primal optimization problem and its generalized augmented Lagrangian dual problems. The condition is mainly expressed in the form of the lower semicontinuity of a perturbation function at the origin. For a constrained optimization problem, a general equivalence is established between the zero duality gap properties defined by a general nonlinear Lagrangian dual problem and by a generalized augmented Lagrangian dual problem, respectively. For a constrained optimization problem with both equality and inequality constraints, we prove that the first-order and second-order necessary optimality conditions of the augmented Lagrangian problems with a convex quadratic augmenting function converge to those of the original constrained program. For a mathematical program with only equality constraints, we show that the second-order necessary conditions of general augmented Lagrangian problems with a convex augmenting function converge to those of the original constrained program. This research is supported by the Research Grants Council of Hong Kong (PolyU B-Q359).
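As a sketch of the object mentioned in the abstract, in generic notation that need not match the paper's: for a problem min{ f(x) : g(x) ≤ 0, x ∈ X }, the perturbation function is
\[
v(u) \;=\; \inf\{\, f(x) \;:\; g(x) \le u,\ x \in X \,\},
\]
and the zero-duality-gap criterion then states, roughly, that the optimal value of the generalized augmented Lagrangian dual coincides with v(0) exactly when v is lower semicontinuous at u = 0.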

11.
B. Jin 《Optimization》2016,65(6):1151-1166
In this paper, we revisit the augmented Lagrangian method for a class of nonsmooth convex optimization problems. We present the Lagrange optimality system of the augmented Lagrangian associated with these problems and establish its connections with the standard optimality condition and the saddle point condition of the augmented Lagrangian, which provides a powerful tool for developing numerical algorithms: we derive a Lagrange–Newton algorithm for the nonsmooth convex optimization problem, and establish the nonsingularity of the Newton system and the local convergence of the algorithm.

12.
《Optimization》2012,61(6):1107-1130
We develop three algorithms to solve the subproblems generated by the augmented Lagrangian methods introduced by Iusem-Nasri (2010) for the equilibrium problem. The first algorithm that we propose incorporates the Newton method and the other two are instances of the subgradient projection method. One of our algorithms is also capable of solving nondifferentiable equilibrium problems. Using well-known test problems, all algorithms introduced here are implemented and numerical results are reported to compare their performances.

13.
In this paper, we consider the convergence of the generalized alternating direction method of multipliers (GADMM) for solving linearly constrained nonconvex minimization models whose objective contains coupled functions. Under the assumption that the augmented Lagrangian function satisfies the Kurdyka-Łojasiewicz inequality, we prove that the sequence generated by the GADMM converges to a critical point of the augmented Lagrangian function when the penalty parameter in the augmented Lagrangian function is sufficiently large. Moreover, we present some sufficient conditions guaranteeing the sublinear and linear rates of convergence of the algorithm.

14.
Splitting methods have been extensively studied in the context of convex programming and variational inequalities with separable structures. Recently, a parallel splitting method based on the augmented Lagrangian method (abbreviated as PSALM) was proposed in He (Comput. Optim. Appl. 42:195–212, 2009) for solving variational inequalities with separable structures. In this paper, we propose an inexact version of the PSALM approach, which solves the resulting subproblems of PSALM approximately by an inexact proximal point method. For the inexact PSALM, the resulting proximal subproblems have closed-form solutions when the proximal parameters and inexact terms are chosen appropriately. We demonstrate the efficiency of the inexact PSALM by some preliminary numerical experiments.

15.
Stabilized sequential quadratic programming (sSQP) methods for nonlinear optimization generate a sequence of iterates with fast local convergence regardless of whether or not the active-constraint gradients are linearly dependent. This paper concerns the local convergence analysis of an sSQP method that uses a line search with a primal-dual augmented Lagrangian merit function to enforce global convergence. The method is provably well-defined and is based on solving a strictly convex quadratic programming subproblem at each iteration. It is shown that the method has superlinear local convergence under assumptions that are no stronger than those required by conventional stabilized SQP methods. The fast local convergence is obtained by allowing a small relaxation of the optimality conditions for the quadratic programming subproblem in the neighborhood of a solution. In the limit, the line search selects the unit step length, which implies that the method does not suffer from the Maratos effect. The analysis indicates that the method has the same strong first- and second-order global convergence properties that have been established for augmented Lagrangian methods, yet is able to transition seamlessly to sSQP with fast local convergence in the neighborhood of a solution. Numerical results on some degenerate problems are reported.

16.
In this paper, we propose, analyze and test primal and dual versions of the alternating direction algorithm for reconstructing a sparse signal from observation data contaminated by significant noise. The algorithm minimizes a convex non-smooth function consisting of the sum of an ℓ1-norm regularization term and an ℓ1-norm data-fidelity term. We minimize the corresponding augmented Lagrangian function alternately, in either primal or dual form. Both of the resulting subproblems admit explicit solutions, obtained either by a one-dimensional shrinkage or by an efficient Euclidean projection. The algorithm is easily implementable and requires only two matrix-vector multiplications per iteration. The global convergence of the proposed algorithm is established under some technical conditions. Extensions to the non-negative signal recovery problem and the weighted regularization minimization problem are also discussed and tested. Numerical results illustrate that the proposed algorithm performs better than the state-of-the-art algorithm YALL1.
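The one-dimensional shrinkage mentioned in the abstract is the standard soft-thresholding operator; a minimal sketch (not the authors' code) is:

```python
import numpy as np

def shrink(v, tau):
    """Soft-thresholding (one-dimensional shrinkage) applied elementwise:
    shrink(v, tau) = sign(v) * max(|v| - tau, 0).
    It is the closed-form minimizer of tau*|x| + 0.5*(x - v)^2."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Example: shrink a vector with threshold 0.5; result is approximately [0.7, 0, 0.2, -1.5]
print(shrink(np.array([1.2, -0.3, 0.7, -2.0]), 0.5))
```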

17.
This paper demonstrates a customized application of the classical proximal point algorithm (PPA) to the convex minimization problem with linear constraints. We show that if the proximal parameter in metric form is chosen appropriately, the application of the PPA can effectively exploit the simplicity of the objective function. The resulting subproblems can be easier than those of the augmented Lagrangian method (ALM), a benchmark method for the model under consideration. The efficiency of the customized application of the PPA is demonstrated on some image processing problems.
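A sketch of the customization idea, in generic notation that may differ from the paper's: writing the optimality conditions of the linearly constrained problem as a variational inequality with operator T acting on the primal-dual pair w = (x, λ), a proximal point step with a positive definite proximal metric G reads
\[
0 \;\in\; T(w^{k+1}) \;+\; G\,(w^{k+1} - w^{k}),
\]
and choosing G to match the problem structure is what can make each subproblem easier than the corresponding ALM subproblem; the particular metric used in the paper is not reproduced here.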

18.
The aim of this paper is to present some results for the augmented Lagrangian function in the context of constrained global optimization by means of image space analysis. It is first shown that a saddle point condition for the augmented Lagrangian function is equivalent to the existence of a regular nonlinear separation in the image space. Local and global sufficient optimality conditions for the exact augmented Lagrangian function are then investigated by means of second-order analysis in the image space. A local optimality result for this function is established under second-order sufficiency conditions in the image space, and a global optimality result is obtained under additional assumptions. Finally, it is proved that the exact augmented Lagrangian method converges to a global solution–Lagrange multiplier pair of the original problem under mild conditions.

19.
Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we consider the formulation of subproblems in which the objective function is a generalization of the Hestenes-Powell augmented Lagrangian function. The main feature of the generalized function is that it is minimized with respect to both the primal and the dual variables simultaneously. The benefits of this approach include: (i) the ability to control the quality of the dual variables during the solution of the subproblem; (ii) the availability of improved dual estimates on early termination of the subproblem; and (iii) the ability to regularize the subproblem by imposing explicit bounds on the dual variables. We propose two primal-dual variants of conventional primal methods: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1-LCL) method. Finally, a new sequential quadratic programming (pdSQP) method is proposed that uses the primal-dual augmented Lagrangian as a merit function.
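For orientation, the classical Hestenes-Powell augmented Lagrangian for equality constraints c(x) = 0, with multiplier estimate λ_e and penalty ρ, is the function of x alone shown first. A generalized primal-dual variant of the kind described in the abstract also depends on a dual variable y and is minimized over (x, y) jointly; the second line is one common such form, given only as an illustration and not as the paper's exact definition:
\[
L_\rho(x;\lambda_e) \;=\; f(x) \;-\; \lambda_e^{\top} c(x) \;+\; \tfrac{\rho}{2}\,\|c(x)\|^2,
\]
\[
M_\rho(x, y;\lambda_e) \;=\; f(x) \;-\; \lambda_e^{\top} c(x) \;+\; \tfrac{\rho}{2}\,\|c(x)\|^2
\;+\; \tfrac{\rho}{2}\,\bigl\|\,c(x) + \tfrac{1}{\rho}\,(y - \lambda_e)\,\bigr\|^2 .
\]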

20.
We examine the problem of scheduling a given set of jobs on a single machine to minimize total early and tardy costs without considering machine idle time. We decompose the problem into two subproblems with a simpler structure, so the lower bound of the problem is the sum of the lower bounds of the two subproblems. A lower bound for each subproblem is obtained by Lagrangian relaxation. Rather than using the well-known subgradient optimization approach, we develop two efficient multiplier adjustment procedures with complexity O(n log n) to solve the two Lagrangian dual subproblems. A branch-and-bound algorithm based on these two procedures is presented and used to solve problems with up to 50 jobs, thereby doubling the size of problems that can be solved by existing branch-and-bound algorithms. We also propose a heuristic procedure based on the neighborhood search approach. The computational results for problems with up to 3,000 jobs show that the heuristic procedure performs much better than known heuristics for this problem in terms of both solution efficiency and quality. In addition, the results establish the effectiveness of the heuristic procedure in solving realistic problems to optimality or near optimality.
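A minimal sketch of the objective being minimized, assuming job-dependent earliness weights alpha and tardiness weights beta and no machine idle time (so completion times are cumulative processing times); this illustrates only the cost structure, not the authors' branch-and-bound or heuristic procedures:

```python
def early_tardy_cost(sequence, p, d, alpha, beta):
    """Total weighted earliness/tardiness of a job sequence on one machine
    with no idle time: jobs run back-to-back starting at time 0."""
    t, cost = 0, 0.0
    for j in sequence:
        t += p[j]                              # completion time of job j
        cost += alpha[j] * max(d[j] - t, 0)    # earliness penalty
        cost += beta[j] * max(t - d[j], 0)     # tardiness penalty
    return cost

# Example: 3 jobs with processing times p, due dates d, unit weights; cost is 3.0
p = [2, 3, 1]; d = [3, 4, 7]; alpha = [1, 1, 1]; beta = [1, 1, 1]
print(early_tardy_cost([0, 1, 2], p, d, alpha, beta))
```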
