Similar Articles
20 similar articles found.
1.
2.
In this paper, we propose and analyze an accelerated augmented Lagrangian method (AALM) for solving linearly constrained convex programming problems. We show that the convergence rate of AALM is \(O(1/k^2)\), whereas the convergence rate of the classical augmented Lagrangian method (ALM) is \(O(1/k)\). Numerical experiments on the linearly constrained \(\ell_1\)-\(\ell_2\) minimization problem demonstrate the effectiveness of AALM.
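As a rough illustration of the flavor of such a method, the sketch below applies a Nesterov-type extrapolation to the multiplier sequence of the classical ALM for an equality-constrained convex quadratic program. It is a minimal sketch under assumed problem data (Q, c, A, b), not the paper's exact AALM scheme.

```python
import numpy as np

def accelerated_alm(Q, c, A, b, rho=1.0, iters=200):
    """Sketch: min 0.5 x'Qx - c'x  s.t.  Ax = b, with Q symmetric positive
    definite. A FISTA-style extrapolation is applied to the multipliers;
    the paper's AALM may differ in detail."""
    y = y_prev = np.zeros(A.shape[0])
    t = 1.0
    M = Q + rho * A.T @ A                      # Hessian of the AL subproblem
    for _ in range(iters):
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y_bar = y + ((t - 1) / t_next) * (y - y_prev)  # extrapolated multiplier
        # exact minimization of the augmented Lagrangian at y_bar
        x = np.linalg.solve(M, c - A.T @ y_bar + rho * A.T @ b)
        y_prev, y = y, y_bar + rho * (A @ x - b)       # dual ascent step
        t = t_next
    return x, y
```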

3.
A potential reduction algorithm is proposed for the optimization of a convex function subject to linear constraints. At each step of the algorithm, a system of linear equations is solved to obtain a search direction, and Armijo's rule is used to determine a stepsize. The algorithm is proved to be globally convergent. Computational results are reported.
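For concreteness, Armijo's rule amounts to backtracking until a sufficient-decrease test holds. The helper below is a generic sketch (the function, point, direction, and gradient are placeholders supplied as numpy arrays; in the paper the direction comes from the linear system mentioned above).

```python
def armijo_stepsize(f, x, d, grad, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Backtrack alpha until f(x + alpha*d) <= f(x) + sigma*alpha*(grad'd).
    Assumes d is a descent direction, i.e. grad @ d < 0."""
    alpha = alpha0
    slope = grad @ d                 # directional derivative along d
    while f(x + alpha * d) > f(x) + sigma * alpha * slope:
        alpha *= beta                # shrink until sufficient decrease holds
    return alpha
```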

4.
In this paper, we present a two-phase augmented Lagrangian method, called QSDPNAL, for solving convex quadratic semidefinite programming (QSDP) problems whose constraints consist of a large number of linear equality and inequality constraints, a simple convex polyhedral set constraint, and a positive semidefinite cone constraint. A first-order algorithm relying on an inexact Schur complement based decomposition technique is developed in QSDPNAL-Phase I, with the aim of solving a QSDP problem to moderate accuracy or of generating a reasonably good initial point for the second phase. In QSDPNAL-Phase II, we design an augmented Lagrangian method (ALM) whose inner subproblem in each iteration is solved via inexact semismooth Newton based algorithms. Simple and implementable stopping criteria are designed for the ALM. Moreover, under mild conditions, we establish the rate of convergence of the proposed algorithm and prove the R-(super)linear convergence of the KKT residual. In the implementation of QSDPNAL, we also develop efficient techniques for solving large-scale linear systems of equations under certain subspace constraints. More specifically, simpler and yet better-conditioned linear systems are carefully designed to replace the original linear systems, and novel shadow sequences are constructed to alleviate the numerical difficulties brought about by the crucial subspace constraints. Extensive numerical results for various large-scale QSDPs show that our two-phase algorithm is highly efficient and robust in obtaining accurate solutions. The software reviewed as part of this submission was given the DOI https://doi.org/10.5281/zenodo.1206980.

5.
6.
In this paper, we study inverse optimization for linearly constrained convex separable programming problems, which have wide applications in industrial and managerial areas. Given a feasible point of a convex separable program, inverse optimization asks whether the feasible point can be made optimal by adjusting the parameter values in the problem and, when the answer is positive, seeks the parameter values requiring the smallest adjustment. A necessary and sufficient condition is given for a feasible point to be made optimal by adjusting parameter values. Inverse optimization formulations are presented under the \(\ell_1\) and \(\ell_2\) norms. These inverse optimization problems are linear programs when the \(\ell_1\) norm is used in the formulation, and convex quadratic separable programs when the \(\ell_2\) norm is used.
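To make the \(\ell_1\) case concrete, the sketch below poses inverse optimization for the LP special case min c'x s.t. Ax = b, x >= 0: find the smallest \(\ell_1\) adjustment to c under which a given feasible x0 satisfies the KKT conditions (vanishing reduced costs on the support of x0, nonnegative elsewhere). The encoding via variables dp, dm, y and the use of scipy.optimize.linprog are illustrative assumptions, not the paper's formulation for the general separable convex case.

```python
import numpy as np
from scipy.optimize import linprog

def inverse_lp_l1(c, A, x0, tol=1e-9):
    """Smallest l1 change d = dp - dm to c making a feasible x0 optimal
    for  min c'x s.t. Ax = b, x >= 0.  Assumes x0 is primal feasible."""
    m, n = A.shape
    B = x0 > tol                      # support of x0: reduced costs must vanish
    # decision vector z = [dp, dm, y]; objective sum(dp) + sum(dm) = ||d||_1
    obj = np.concatenate([np.ones(2 * n), np.zeros(m)])
    # complementary slackness: (c + dp - dm - A'y)_j = 0 for j in B
    Aeq = np.hstack([np.eye(n)[B], -np.eye(n)[B], -A.T[B]])
    beq = -c[B]
    # dual feasibility: (c + dp - dm - A'y)_j >= 0 for j not in B
    Aub = np.hstack([-np.eye(n)[~B], np.eye(n)[~B], A.T[~B]])
    bub = c[~B]
    bounds = [(0, None)] * (2 * n) + [(None, None)] * m
    res = linprog(obj, A_ub=Aub, b_ub=bub, A_eq=Aeq, b_eq=beq, bounds=bounds)
    dp, dm = res.x[:n], res.x[n:2 * n]
    return c + dp - dm                # adjusted cost vector
```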

7.
In this paper we study two inexact fast augmented Lagrangian algorithms for solving linearly constrained convex optimization problems. Our methods rely on a combination of the excessive-gap-like smoothing technique introduced in Nesterov (SIAM J Optim 16(1):235–249, 2005) and the general inexact oracle framework studied in Devolder (Math Program 146:37–75, 2014). We develop and analyze two augmented-Lagrangian-based algorithmic instances with constant and adaptive smoothness parameters, and derive a total computational complexity estimate in terms of projections onto a simple primal feasible set for each algorithm. For the constant-parameter algorithm we obtain an overall computational complexity of order \(\mathcal{O}(\frac{1}{\epsilon^{5/4}})\), while for the adaptive one we obtain \(\mathcal{O}(\frac{1}{\epsilon})\) total projections onto the primal feasible set to achieve an \(\epsilon\)-optimal solution of the original problem.

8.
Zhou, Yuhao; Bao, Chenglong; Ding, Chao; Zhu, Jun. Mathematical Programming 199(1–2):1–48, 2023.
We derive new and improved non-asymptotic deviation inequalities for the sample average approximation (SAA) of an optimization problem. Our results give strong error...

9.
A primal-dual version of the proximal point algorithm is developed for linearly constrained convex programming problems. The algorithm is an iterative method for finding a saddle point of the Lagrangian of the problem. At each iteration, we compute an approximate saddle point of the Lagrangian function augmented by quadratic proximal terms in both the primal and dual variables. Specifically, we first minimize the function with respect to the primal variables and then approximately maximize the resulting function of the dual variables. The merit of this approach lies in the fact that the latter function is differentiable and its maximization is subject to no constraints. We discuss convergence properties of the algorithm and report numerical results for network flow problems with separable quadratic costs.
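A minimal sketch of this iteration for a convex quadratic objective is given below: the inner loop alternates an exact primal minimization with gradient-ascent steps on the resulting differentiable, unconstrained dual function. The step sizes, iteration counts, and quadratic problem data are placeholder assumptions.

```python
import numpy as np

def pd_proximal_point(Q, c, A, b, tau=1.0, outer=100, inner=20, lr=0.1):
    """Sketch: min 0.5 x'Qx - c'x  s.t.  Ax = b  via primal-dual proximal
    point steps. Each outer iteration approximates a saddle point of
    L(x, y) + (1/2tau)||x - xk||^2 - (1/2tau)||y - yk||^2."""
    n, m = Q.shape[0], A.shape[0]
    x, y = np.zeros(n), np.zeros(m)
    M = Q + np.eye(n) / tau
    for _ in range(outer):
        xk, yk = x.copy(), y.copy()
        for _ in range(inner):
            # exact primal minimization given the current multipliers
            x = np.linalg.solve(M, c - A.T @ y + xk / tau)
            # ascent on the concave, smooth dual of the proximal subproblem
            y = y + lr * (A @ x - b - (y - yk) / tau)
    return x, y
```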

10.
Based on an augmented Lagrangian line search function, a sequential quadratically constrained quadratic programming (SQCQP) method is proposed for solving nonlinearly constrained optimization problems. In contrast to the quadratic programs solved in traditional SQP methods, a convex quadratically constrained quadratic program is solved here to obtain a search direction, and the Maratos effect does not occur without any further correction. The "active set" strategy used in this subproblem avoids recalculating unnecessary gradients and (approximate) Hessian matrices of the constraints. Under certain assumptions, the proposed method is proved to be globally, superlinearly, and quadratically convergent. As an extension, general problems with both inequality and equality constraints, as well as a nonmonotone line search, are also considered.

11.
Among penalty-based approaches for constrained optimization, augmented Lagrangian (AL) methods are better in at least three ways: (i) they have theoretical convergence properties, (ii) they distort the original objective function minimally, thereby providing a better function landscape for search, and (iii) they can compute the optimal Lagrange multiplier for each constraint as a by-product. Instead of keeping a constant penalty parameter throughout the optimization process, these algorithms update the parameters (called multipliers) adaptively, so that the corresponding penalized function dynamically shifts its optimum from the unconstrained minimum point to the constrained minimum point over the iterations. The flip side of these algorithms is that they require a serial application of a number of unconstrained optimization tasks, a process that is usually time-consuming and tends to be computationally expensive. In this paper, we devise a genetic algorithm based parameter update strategy for a particular AL method. The proposed strategy updates critical parameters adaptively based on population statistics. Occasionally, a classical optimization method is used to improve the GA-obtained solution, thereby endowing the resulting hybrid procedure with a theoretical convergence property. The GAAL method is applied to a number of constrained test problems taken from the evolutionary algorithms (EAs) literature. The number of function evaluations required by GAAL is found, in most problems, to be smaller than that needed by a number of existing evolutionary constraint handling methods. The GAAL method is found to be accurate, computationally fast, and reliable over multiple runs. Besides solving the problems, the proposed GAAL method is also able to find the optimal Lagrange multiplier associated with each constraint as an added benefit, a matter that is important for a sensitivity analysis of the obtained optimized solution but has not yet received adequate attention in past evolutionary constrained optimization studies.
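The sketch below conveys the overall loop for a single inequality constraint g(x) <= 0: a GA minimizes the augmented-Lagrangian-penalized fitness while the multiplier is updated from the current population best. The GA operators and the multiplier update rule here are simplified assumptions; the paper's strategy is driven by population statistics and an occasional classical local search.

```python
import numpy as np

def gaal_sketch(f, g, n, pop=40, gens=200, rho=10.0,
                rng=np.random.default_rng(0)):
    """Loose sketch of a GA on an augmented-Lagrangian fitness for
    min f(x) s.t. g(x) <= 0 (single constraint for brevity)."""
    mu = 0.0
    X = rng.normal(size=(pop, n))
    def fitness(x):
        # classical AL penalty for an inequality constraint
        v = max(g(x) + mu / rho, 0.0)
        return f(x) + 0.5 * rho * v * v - 0.5 * mu * mu / rho
    for _ in range(gens):
        scores = np.array([fitness(x) for x in X])
        elite = X[np.argsort(scores)[: pop // 2]]             # selection
        children = elite + 0.1 * rng.normal(size=elite.shape) # mutation
        X = np.vstack([elite, children])
        best = X[np.argmin([fitness(x) for x in X])]
        mu = max(mu + rho * g(best), 0.0)                     # multiplier update
    return best, mu
```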

12.
A new dual gradient method is given to solve linearly constrained, strongly convex, separable mathematical programming problems. The dual problem can be decomposed into one-dimensional problems whose solutions can be computed extremely easily. The dual objective function is shown to have a Lipschitz continuous gradient, and therefore a gradient-type algorithm can be used for solving the dual problem. The primal optimal solution can be obtained from the dual optimal solution in a straightforward way. Convergence proofs and computational results are given.
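For a quadratic instance the one-dimensional dual subproblems have closed-form solutions, which makes the whole method a few lines. The sketch below assumes separable terms \(f_i(x_i) = \frac{1}{2} q_i x_i^2 - c_i x_i\) with \(q_i > 0\) and uses the Lipschitz constant of the dual gradient as the step size; the concrete data are illustrative.

```python
import numpy as np

def dual_gradient_separable(q, c, A, b, iters=500):
    """Sketch: min sum_i (0.5*q_i*x_i^2 - c_i*x_i)  s.t.  Ax = b, q_i > 0.
    The dual gradient is Ax(y) - b, Lipschitz with constant
    ||A diag(1/q) A'||, so a fixed step 1/L suffices."""
    y = np.zeros(A.shape[0])
    L = np.linalg.norm(A @ np.diag(1.0 / q) @ A.T, 2)  # Lipschitz constant
    for _ in range(iters):
        x = (c - A.T @ y) / q        # closed-form one-dimensional minimizers
        y = y + (A @ x - b) / L      # gradient ascent on the dual
    return x, y
```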

13.
14.
A novel smooth nonlinear augmented Lagrangian for solving minimax problems with inequality constraints is proposed in this paper; it has positive properties that the classical Lagrangian and the penalty function fail to possess. The corresponding algorithm mainly consists of minimizing the nonlinear augmented Lagrangian function and updating the Lagrange multipliers and the controlling parameter. It is demonstrated that the algorithm converges Q-superlinearly, under mild conditions, when the controlling parameter is less than a threshold. Furthermore, the condition number of the Hessian of the nonlinear augmented Lagrangian function is studied, which is very important for the efficiency of the algorithm. The theoretical results are further validated by preliminary numerical experiments on several test problems, reported at the end, which show that the nonlinear augmented Lagrangian is promising.
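One well-known smooth Lagrangian of this kind is the exponential (log-sum-exp) multiplier function for \(\min_x \max_i f_i(x)\). The sketch below follows the pattern described in the abstract (minimize the smooth surrogate, then update multipliers and tighten the controlling parameter), but it is not the paper's specific augmented Lagrangian, and the solver and schedules are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_minimax(fs, x0, iters=30, p=1.0, shrink=0.7):
    """Exponential-multiplier sketch for min_x max_i f_i(x):
    L(x) = p * log(sum_i u_i * exp(f_i(x)/p)) -> max_i f_i(x) as p -> 0."""
    k = len(fs)
    u = np.ones(k) / k               # multipliers on the component functions
    x = np.asarray(x0, float)
    for _ in range(iters):
        def L(z):
            vals = np.array([f(z) for f in fs])
            return p * np.log(np.sum(u * np.exp(vals / p)))
        x = minimize(L, x).x                   # smooth unconstrained solve
        vals = np.array([f(x) for f in fs])
        w = u * np.exp(vals / p)
        u = w / w.sum()                        # multiplier update
        p *= shrink                            # tighten the controlling parameter
    return x, u
```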

15.
16.
In this paper we propose a primal-dual algorithm for the solution of general nonlinear programming problems. The core of the method is a local algorithm that relies on a truncated procedure for the computation of a search direction, and it is thus suitable for large-scale problems. The truncated direction produces a sequence of points that converges locally to a KKT pair at a superlinear rate.
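The core of such a truncated procedure is typically conjugate gradients on the Newton system, stopped early by a forcing-term test. The sketch below assumes a symmetric positive definite matrix H and an explicit gradient g; in a large-scale setting H would be applied only through matrix-vector products.

```python
import numpy as np

def truncated_newton_direction(H, g, tol=0.1, max_iter=50):
    """Inexact Newton step: run CG on H d = -g and stop once the residual
    is small relative to ||g|| (the truncation / forcing-term test)."""
    d = np.zeros_like(g)
    r = -g.copy()                    # residual of H d = -g at d = 0
    p = r.copy()
    for _ in range(max_iter):
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)
        d = d + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(g):
            break                    # direction is accurate enough
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d
```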

17.
In this paper, we employ the projection operator to design a semismooth Newton algorithm for solving nonlinear symmetric cone programming (NSCP). The algorithm is computable from a theoretical standpoint and is proved to be locally quadratically convergent without assuming strict complementarity of the solution to the NSCP.
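On the simplest symmetric cone, the nonnegative orthant, the projection-based reformulation reduces to the natural residual of a nonlinear complementarity problem. The sketch below shows a semismooth Newton iteration in that special case, with f and its Jacobian supplied by the caller; the paper treats general symmetric cones, where the projection and its generalized Jacobian are more involved.

```python
import numpy as np

def semismooth_newton_ncp(f, jac_f, x0, iters=50, tol=1e-10):
    """Solve the complementarity system  0 <= x  ⟂  f(x) >= 0  via the
    natural residual  Phi(x) = x - proj_{>=0}(x - f(x))."""
    x = np.asarray(x0, float)
    n = len(x)
    for _ in range(iters):
        z = x - f(x)
        phi = x - np.maximum(z, 0.0)
        if np.linalg.norm(phi) < tol:
            break
        # element of the generalized Jacobian of the projection:
        # D = diag(1{z > 0}),  J = I - D (I - f'(x))
        d = (z > 0).astype(float)
        J = np.eye(n) - d[:, None] * (np.eye(n) - jac_f(x))
        x = x + np.linalg.solve(J, -phi)       # semismooth Newton step
    return x
```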

18.
For convex optimization in \(\mathbb{R}^n\), we show how a minor modification of the usual Lagrangian function (unlike that of the augmented Lagrangians), plus a limiting operation, allows one to close duality gaps even in the absence of a Kuhn-Tucker vector [see the introductory discussion, and the discussion in Section 4 regarding Eq. (2)]. The cardinality of the convex constraining functions can be arbitrary (finite, countable, or uncountable). In fact, our main result (Theorem 4.3) reveals much finer detail concerning our limiting Lagrangian. There are affine minorants (for any value of the limiting parameter strictly between 0 and 1) of the given convex functions, plus an affine form nonpositive on \(K\), for which a general linear inequality holds on \(\mathbb{R}^n\). After substantial weakening, this inequality leads to the conclusions of the previous paragraph. This work is motivated by, and is a direct outgrowth of, research carried out jointly with R. J. Duffin. This research was supported by NSF Grant No. GP-37510X1 and ONR Contract No. N00014-75-C0621, NR-047-048. This paper was presented at Constructive Approaches to Mathematical Models, a symposium in honor of R. J. Duffin, Pittsburgh, Pennsylvania, 1978. The author is grateful to Professor Duffin for discussions relating to the work reported here. The author wishes to thank R. J. Duffin for reading an earlier version of this paper and making numerous suggestions for improving it, which are incorporated here. Our exposition and proofs have profited from comments by C. E. Blair and J. Borwein.
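For context, the gap in question is the one permitted by ordinary weak duality (standard notation, not the paper's modified Lagrangian):

```latex
% Weak duality for  min f(x)  s.t.  g_i(x) <= 0.  The inequality can be
% strict (a duality gap) when no Kuhn-Tucker vector exists, which is the
% situation the limiting Lagrangian construction addresses.
\inf_{x\in\mathbb{R}^n}\bigl\{\, f(x) \;:\; g_i(x)\le 0 \ \ \forall i \,\bigr\}
\;\ge\;
\sup_{\lambda\ge 0}\ \inf_{x\in\mathbb{R}^n}\Bigl[\, f(x) + \textstyle\sum_i \lambda_i\, g_i(x) \Bigr].
```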

19.
In this paper, two PVD-type algorithms are proposed for solving nonseparable linearly constrained optimization problems. Instead of computing the residual gradient function, the new algorithm uses reduced gradients to construct the PVD directions in parallel computation, which greatly reduces the amount of computation per iteration and is closer to practical applications for solving large-scale nonlinear programming. Moreover, based on an active set computed by coordinate rotation at each iteration, a feasible descent direction can easily be obtained by the extended reduced gradient method. This direction is then used as the PVD direction, and a new PVD algorithm is proposed for general linearly constrained optimization. Global convergence is also proved.
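The parallel-variable-distribution structure can be illustrated in a few lines: each worker improves only its own block of variables in parallel, and a synchronization step keeps the best candidate. The block directions below are plain block-restricted gradient steps, a placeholder for the paper's reduced-gradient, active-set construction.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def pvd_iteration(f, grad_f, x, blocks, lr=0.05):
    """One loose PVD-style iteration: `blocks` is a list of index arrays
    partitioning the variables; each block is updated in parallel."""
    g = grad_f(x)
    def block_update(idx):
        cand = x.copy()
        cand[idx] -= lr * g[idx]     # descent restricted to one block
        return cand
    with ThreadPoolExecutor() as ex:
        candidates = list(ex.map(block_update, blocks))
    return min(candidates, key=f)    # synchronization: keep the best candidate
```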

20.
