Similar Literature
20 similar documents found (search time: 660 ms)
1.
A new algorithm for solving large-scale convex optimization problems with a separable objective function is proposed. The basic idea is to combine three techniques: Lagrangian dual decomposition, excessive gap and smoothing. The main advantage of this algorithm is that it automatically and simultaneously updates the smoothness parameters, which significantly improves its performance. The convergence of the algorithm is proved under weak conditions imposed on the original problem. The rate of convergence is $O(\frac{1}{k})$, where k is the iteration counter. In the second part of the paper, the proposed algorithm is coupled with a dual scheme to construct a switching variant in a dual decomposition framework. We discuss implementation issues and make a theoretical comparison. Numerical examples confirm the theoretical results.
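The dual decomposition backbone of such methods can be seen on a toy separable problem. The sketch below (hypothetical data, plain dual gradient ascent; the excessive-gap and smoothing machinery of the paper is omitted) splits the problem into two subproblems solved independently for a fixed multiplier:

```python
# Toy separable problem:
#   minimize 0.5*(x1 - a1)**2 + 0.5*(x2 - a2)**2  subject to  x1 + x2 = b
a1, a2, b = 3.0, 1.0, 2.0
lam = 0.0      # Lagrange multiplier for the coupling constraint
step = 0.4     # dual gradient-ascent step size (hypothetical choice)
for k in range(200):
    # Each subproblem splits off and is solved independently; here
    # min_x 0.5*(x - a)**2 + lam*x has the closed form x = a - lam.
    x1 = a1 - lam
    x2 = a2 - lam
    # The dual gradient is the residual of the coupling constraint.
    lam += step * (x1 + x2 - b)
```

Each subproblem here has a closed-form solution; in general it would itself be solved by an inner method, which is where smoothing the dual function becomes valuable.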

2.
郭洁  万中 《计算数学》2022,44(3):324-338
Based on an exponential penalty function, a recently proposed three-term conjugate gradient method for unconstrained optimization is modified and applied to more complex large-scale minimax problems. It is proved that the search direction generated by this method is a sufficient descent direction for every smooth subproblem, independently of the line-search rule used. On this basis, an algorithm for large-scale minimax problems is designed and, under reasonable assumptions, its global convergence is proved. Numerical experiments show that the algorithm outperforms similar existing algorithms in the literature.

3.
We consider an energy production network with zones of production and transfer links. Each zone, representing an energy market (a country, part of a country, or a set of countries), has to satisfy the local demand using its hydro and thermal units, possibly importing and exporting through the links connecting the zones. Assuming that we have the appropriate tools to solve a single zonal problem (approximate dynamic programming, dual dynamic programming, etc.), the proposed algorithm allows us to coordinate the productions of all zones. We propose two reformulations of the dynamic model which lead to different decomposition strategies. Both algorithms are adaptations of known monotone operator splitting methods, namely the alternating direction method of multipliers and the proximal decomposition algorithm, which have proved useful for solving convex separable optimization problems. Both algorithms present similar performance in theory, but our numerical experimentation on real-size dynamic models has shown that proximal decomposition is better suited to the coordination of the zonal subproblems, making it a natural choice for the dynamic optimization of the European electricity market.
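As a minimal stand-in for the zonal coordination, the alternating direction method of multipliers can be sketched on two scalar "zones" coupled by a consensus constraint (all numbers hypothetical, not the paper's model):

```python
# Two "zones" with private quadratic costs, coupled by x = z:
#   minimize 0.5*(x - 3)**2 + 0.5*(z - 1)**2  subject to  x = z
rho = 1.0                  # ADMM penalty parameter (hypothetical choice)
x, z, u = 0.0, 0.0, 0.0    # u is the scaled dual variable on the coupling
for k in range(100):
    # zone 1: argmin_x 0.5*(x-3)**2 + (rho/2)*(x - z + u)**2
    x = (3.0 + rho * (z - u)) / (1.0 + rho)
    # zone 2: argmin_z 0.5*(z-1)**2 + (rho/2)*(x - z + u)**2
    z = (1.0 + rho * (x + u)) / (1.0 + rho)
    u += x - z             # dual (price) update on the coupling constraint
```

The dual variable u plays the role of an exchange price on the coupling link; each zone's update uses only its own cost, which is what makes the scheme a coordination method.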

4.
We consider an inverse quadratic programming (QP) problem in which the parameters in the objective function of a given QP problem are adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a minimization problem with a positive semidefinite cone constraint; its dual is a linearly constrained semismoothly differentiable (SC1) convex programming problem with fewer variables than the original one. We demonstrate the global convergence of the augmented Lagrangian method for the dual problem and prove that the convergence rate of the primal iterates generated by the augmented Lagrangian method is proportional to 1/r, and the rate of the multiplier iterates is proportional to $1/\sqrt{r}$, where r is the penalty parameter in the augmented Lagrangian. As the objective function of the dual problem is an SC1 function involving the projection operator onto the cone of symmetric positive semidefinite matrices, the analysis requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functions, and properties of the projection operator in the symmetric-matrix space. Furthermore, the semismooth Newton method with Armijo line search is applied to solve the subproblems in the augmented Lagrangian approach, and is proven to have global convergence and a local quadratic rate. Finally, numerical results obtained with the augmented Lagrangian method are reported.
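The outer loop of an augmented Lagrangian method, and the dependence of the multiplier convergence on the penalty parameter r, can be illustrated on the simplest equality-constrained problem (a sketch, not the paper's semidefinite-cone setting):

```python
# min 0.5*x**2  subject to  x = 1; KKT gives x* = 1, multiplier lam* = -1.
r = 10.0        # penalty parameter of the augmented Lagrangian
lam = 0.0
for k in range(50):
    # Exact minimization in x of 0.5*x**2 + lam*(x-1) + (r/2)*(x-1)**2:
    x = (r - lam) / (1.0 + r)
    # First-order multiplier update:
    lam += r * (x - 1.0)
```

For this toy problem the multiplier error contracts by the factor 1/(1+r) per outer iteration, echoing the penalty-parameter-dependent rates proved in the paper.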

5.
Surrogate Gradient Algorithm for Lagrangian Relaxation
The subgradient method is frequently used to optimize dual functions in Lagrangian relaxation for separable integer programming problems. In that method, all subproblems must be solved optimally to obtain a subgradient direction. In this paper, the surrogate subgradient method is developed, where a proper direction can be obtained without optimally solving all the subproblems. In fact, only an approximate optimization of one subproblem is needed to obtain a proper surrogate subgradient direction, and the directions are smooth for problems of large size. The convergence of the algorithm is proved. Compared with methods that expend effort on finding better directions, this method obtains good directions with much less effort and provides a new approach that is especially powerful for problems of very large size.
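A minimal sketch of the surrogate idea (on a hypothetical separable quadratic rather than an integer program): only one subproblem is re-solved per iteration, and the multiplier moves along a surrogate direction built from a mix of fresh and stale subproblem solutions:

```python
# Toy separable problem: min 0.5*(x1-3)**2 + 0.5*(x2-1)**2  s.t.  x1 + x2 = 2
a = [3.0, 1.0]
x = [0.0, 0.0]     # last known subproblem solutions, kept between iterations
lam, b = 0.0, 2.0
step = 0.2         # multiplier step size (hypothetical choice)
for k in range(400):
    i = k % 2                      # solve just ONE subproblem this round
    x[i] = a[i] - lam              # its closed-form minimizer given lam
    surrogate = x[0] + x[1] - b    # direction from mixed old/new solutions
    lam += step * surrogate
```

The surrogate direction is cheaper than a true subgradient (which would require re-solving every subproblem), yet the multiplier still converges for a suitably small step.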

6.
We examine the problem of scheduling a given set of jobs on a single machine to minimize total early and tardy costs without considering machine idle time. We decompose the problem into two subproblems with a simpler structure, so that the lower bound of the problem is the sum of the lower bounds of the two subproblems. A lower bound for each subproblem is obtained by Lagrangian relaxation. Rather than using the well-known subgradient optimization approach, we develop two efficient multiplier adjustment procedures with complexity O(n log n) to solve the two Lagrangian dual subproblems. A branch-and-bound algorithm based on these two procedures is presented and used to solve problems with up to 50 jobs, thus doubling the size of problems that can be solved by existing branch-and-bound algorithms. We also propose a heuristic procedure based on the neighborhood search approach. The computational results for problems with up to 3000 jobs show that the heuristic procedure performs much better than known heuristics for this problem in terms of both solution efficiency and quality. In addition, the results establish the effectiveness of the heuristic procedure in solving realistic problems to optimality or near-optimality.

7.
Many optimization problems can be reformulated as a system of equations, and one may use the generalized Newton method or the smoothing Newton method to solve the reformulated equations so that a solution of the original problem can be found. Such methods have been powerful tools for solving many optimization problems in the literature. In this paper, we propose a Newton-type algorithm for solving a class of monotone affine variational inequality problems (AVIPs for short). The proposed algorithm uses techniques based on both the generalized Newton method and the smoothing Newton method. In particular, we show that the algorithm finds an exact solution of the AVIP in a finite number of iterations under the assumption that the solution set of the AVIP is nonempty. Preliminary numerical results are reported.

8.
A convergent decomposition algorithm for support vector machines
In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular, we are interested in solving problems where the number of variables is so large that traditional optimization methods cannot be applied directly. Many interesting real-world problems lead to large-scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a machine learning technique. For this subclass of convex quadratic problems, some convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from previous methods in the rule for choosing the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears to be suited to tackling large-scale problems, while the introduction of the proximal point term allows us to ensure the global convergence of the algorithm in the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100,000 variables.

9.
Jia  Xiaoxi  Kanzow  Christian  Mehlitz  Patrick  Wachsmuth  Gerd 《Mathematical Programming》2023,199(1-2):1365-1415

This paper is devoted to the theoretical and numerical investigation of an augmented Lagrangian method for the solution of optimization problems with geometric constraints. Specifically, we study situations where parts of the constraints are nonconvex and possibly complicated, but allow for a fast computation of projections onto this nonconvex set. Typical problem classes which satisfy this requirement are optimization problems with disjunctive constraints (like complementarity or cardinality constraints) as well as optimization problems over sets of matrices which have to satisfy additional rank constraints. The key idea behind our method is to keep these complicated constraints explicitly in the constraints and to penalize only the remaining constraints by an augmented Lagrangian function. The resulting subproblems are then solved with the aid of a problem-tailored nonmonotone projected gradient method. The corresponding convergence theory allows for an inexact solution of these subproblems. Nevertheless, the overall algorithm computes so-called Mordukhovich-stationary points of the original problem under a mild asymptotic regularity condition, which is generally weaker than most of the respective available problem-tailored constraint qualifications. Extensive numerical experiments addressing complementarity- and cardinality-constrained optimization problems as well as a semidefinite reformulation of MAXCUT problems visualize the power of our approach.


10.
In this paper, based on a p-norm with p being any fixed real number in the interval (1, +∞), we introduce a family of new smoothing functions, which include the smoothing symmetric perturbed Fischer function as a special case. We also show that these functions have several favorable properties. Based on the new smoothing functions, we propose a nonmonotone smoothing Newton algorithm for solving nonlinear complementarity problems. The proposed algorithm needs to solve only one linear system of equations per iteration. We show that the proposed algorithm is globally and locally superlinearly convergent under suitable assumptions. Numerical experiments indicate that the method associated with a smaller p, for example p=1.1, usually performs better numerically than the smoothing symmetric perturbed Fischer function, which corresponds exactly to p=2.
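A hedged sketch of the idea on a one-dimensional complementarity problem: find x >= 0 with F(x) >= 0 and x*F(x) = 0, here with F(x) = x - 1 so the solution is x = 1. The smoothing function below is a p-norm variant of the perturbed Fischer-Burmeister function (the exact form used in the paper may differ), with p = 1.1; a plain Newton iteration with a numerical derivative stands in for the paper's nonmonotone smoothing Newton method:

```python
def F(x):
    return x - 1.0

def phi(mu, a, b, p=1.1):
    # p-norm smoothing of the (generalized) Fischer-Burmeister function;
    # as mu -> 0 its zeros characterize a >= 0, b >= 0, a*b = 0.
    return a + b - (abs(a)**p + abs(b)**p + mu**p)**(1.0 / p)

x, mu = 2.0, 1.0
for outer in range(30):
    for inner in range(20):        # Newton on phi(mu, x, F(x)) = 0
        g = phi(mu, x, F(x))
        if abs(g) < 1e-12:
            break
        h = 1e-7                   # crude forward-difference derivative
        dg = (phi(mu, x + h, F(x + h)) - g) / h
        x -= g / dg
    mu *= 0.5                      # drive the smoothing parameter to zero
```

For fixed mu > 0 the smoothed equation is differentiable, and shrinking mu drives the Newton iterates toward the complementarity solution.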

11.
Non-negative matrix factorization (NMF) is the problem of obtaining a representation of data under non-negativity constraints. Since NMF was first proposed by Lee, it has attracted much attention for over a decade and has been successfully applied to numerous data analysis problems. In recent years, many variants of NMF have been proposed. Common methods include iterative multiplicative update algorithms, gradient descent methods, and alternating nonnegative least squares (ANLS). Since ANLS has nice optimization properties, various optimization methods can be used to solve its subproblems. In this paper, we propose a modified subspace Barzilai-Borwein method for the subproblems of ANLS, together with a modified strategy for ANLS itself. Global convergence results for our algorithm are established. Numerical experiments are reported to show the effectiveness of the proposed algorithm.
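The ANLS structure can be sketched as follows: a plain projected Barzilai-Borwein step on each subproblem (not the paper's modified subspace method; matrix sizes are arbitrary):

```python
import numpy as np

def pgd_bb(X, grad, n_steps=5):
    """A few projected gradient steps with a crudely safeguarded BB step."""
    step = 1e-3
    X_old, g_old = None, None
    for _ in range(n_steps):
        g = grad(X)
        if X_old is not None:
            s, y = (X - X_old).ravel(), (g - g_old).ravel()
            sy = s @ y
            if sy > 1e-12:
                step = min(max((s @ s) / sy, 1e-6), 10.0)  # BB1 step length
        X_old, g_old = X.copy(), g.copy()
        X = np.maximum(X - step * g, 0.0)  # project onto nonnegative orthant
    return X

rng = np.random.default_rng(0)
V = rng.random((20, 15))                   # data matrix to factorize
W, H = rng.random((20, 4)), rng.random((4, 15))
err0 = np.linalg.norm(V - W @ H)
for _ in range(50):                        # ANLS: alternate the two subproblems
    W = pgd_bb(W, lambda W_: (W_ @ H - V) @ H.T)
    H = pgd_bb(H, lambda H_: W.T @ (W @ H_ - V))
err = np.linalg.norm(V - W @ H)
```

Each subproblem is a nonnegative least-squares problem in one factor with the other held fixed, which is why fast gradient schemes such as Barzilai-Borwein steps fit naturally.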

12.
13.
Solution oscillations, often caused by identical solutions to the homogeneous subproblems, constitute a severe and inherent disadvantage in applying Lagrangian-relaxation-based methods to resource scheduling problems with discrete decision variables. In this paper, the solution oscillations caused by homogeneous subproblems in the Lagrangian relaxation framework are identified and analyzed. Based on this analysis, the key idea for alleviating the homogeneous oscillations is to differentiate the homogeneous subproblems. A new algorithm is developed to solve the problem under the Lagrangian relaxation framework. The basic idea is to introduce a second-order penalty term in the Lagrangian. Since the dual cost function is no longer decomposable, a surrogate subgradient is used to update the multipliers at the high level. The homogeneous subproblems are not solved simultaneously, and the oscillations can be avoided or at least alleviated. Convergence proofs and properties of the new dual cost function are presented in the paper. Numerical testing on a short-term generation scheduling problem with two groups of identical units demonstrates that solution oscillations are greatly reduced and thus the generation schedule is significantly improved.

14.

This paper addresses second-order cone programming problems, which are important in optimization theory and applications. The main attention is paid to the augmented Lagrangian method (ALM) for such problems, considered in both exact and inexact forms. Using generalized differential tools of second-order variational analysis, we formulate the corresponding version of second-order sufficiency and use it to establish, among other results, the uniform second-order growth condition for the augmented Lagrangian. The latter allows us to justify the solvability of subproblems in the ALM and to prove the linear primal–dual convergence of this method.


15.
Smoothed penalty algorithms for optimization of nonlinear models
We introduce an algorithm for solving nonlinear optimization problems with general equality and box constraints. The proposed algorithm is based on smoothing the exact l1-penalty function and solving the resulting problem by any box-constraint optimization method. We introduce a general algorithm and present theoretical results for updating the penalty and smoothing parameters. We apply the algorithm to optimization problems for nonlinear traffic network models and report numerical results for a variety of network problems and different solvers for the subproblems.

16.
A certain regularization technique for contact problems leads to a family of problems that can be solved efficiently using infinite-dimensional semismooth Newton methods, or in this case equivalently, primal–dual active set strategies. We present two procedures that use a sequence of regularized problems to obtain the solution of the original contact problem: first-order augmented Lagrangian, and path-following methods. The first strategy is based on a multiplier-update, while path-following with respect to the regularization parameter uses theoretical results about the path-value function to increase the regularization parameter appropriately. Comprehensive numerical tests investigate the performance of the proposed strategies for both a 2D as well as a 3D contact problem.

17.
Nonlinear rescaling vs. smoothing technique in convex optimization
We introduce an alternative to the smoothing-technique approach for constrained optimization. As it turns out, for any given smoothing function there exists a modification with particular properties. We use this modification for Nonlinear Rescaling (NR) of the constraints of a given constrained optimization problem into an equivalent set of constraints. The constraint transformation is scaled by a vector of positive parameters. The Lagrangian for the equivalent problem is to the corresponding smoothing penalty function as the augmented Lagrangian is to the classical penalty function, or modified barrier functions (MBFs) to barrier functions. Moreover, the Lagrangians for the equivalent problems combine the best properties of quadratic and nonquadratic augmented Lagrangians while being free from their main drawbacks. Sequential unconstrained minimization of the Lagrangian for the equivalent problem in the primal space, followed by updates of both the Lagrange multipliers and the scaling parameters, leads to a new class of NR multiplier methods, which are equivalent to the interior quadratic prox methods for the dual problem. We prove convergence and estimate the rate of convergence of the NR multiplier method under very mild assumptions on the input data, and we also estimate the rate of convergence under various further assumptions on the input data. In particular, under the standard second-order optimality conditions the NR method converges with a Q-linear rate without unbounded increase of the scaling parameters corresponding to the active constraints. We also establish global quadratic convergence of the NR methods for linear programming with a unique dual solution. We provide numerical results which strongly support the theory. Received: September 2000 / Accepted: October 2001. Published online: April 12, 2002.

18.
A widespread and successful approach to tackle unit-commitment problems is constraint decomposition: by dualizing the linking constraints, the large-scale nonconvex problem decomposes into smaller independent subproblems. The dual problem then consists in finding the best Lagrangian multiplier (the optimal "price"); it is solved by a convex nonsmooth optimization method. Realistic modeling of technical production constraints makes the subproblems themselves difficult to solve exactly. Nonsmooth optimization algorithms can cope with inexact solutions of the subproblems. In this case, however, we observe that the computed dual solutions show a noisy and unstable behaviour that could prevent their use as price indicators. In this paper, we present a simple and easy-to-implement way to stabilize dual optimal solutions, by penalizing the noisy behaviour of the prices in the dual objective. After studying the impact of a general stabilization term on the model and the resolution scheme, we focus on penalization by discrete total variation, showing the consistency of the approach. We illustrate our stabilization on a synthetic example and on real-life problems from EDF (the French Electricity Board).

19.
We propose an adaptive smoothing algorithm based on Nesterov's smoothing technique in Nesterov (Math Prog 103(1):127–152, 2005) for solving "fully" nonsmooth composite convex optimization problems. Our method combines Nesterov's accelerated proximal gradient scheme with a new homotopy strategy for the smoothness parameter. By an appropriate choice of smoothing functions, we develop a new algorithm that has an \(\mathcal {O}\left( \frac{1}{\varepsilon }\right) \) worst-case iteration complexity while preserving the same complexity per iteration as Nesterov's method and allowing the smoothness parameter to be updated automatically at each iteration. We then customize our algorithm to solve four special cases that cover various applications. We also specify our algorithm for constrained convex optimization problems and show its convergence guarantee on a primal sequence of iterates. We demonstrate our algorithm on three numerical examples and compare it with other related algorithms.
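The smoothing-homotopy idea can be seen on the scalar function f(x) = |x - 3|, smoothed by its Huber envelope with a parameter mu that is shrunk as the iteration proceeds (plain gradient steps here, not the accelerated scheme; the shrink factor is a hypothetical choice):

```python
def huber_grad(z, mu):
    # gradient of the Huber/Nesterov smoothing of |z| with parameter mu
    return z / mu if abs(z) <= mu else (1.0 if z > 0.0 else -1.0)

x, mu = 0.0, 1.0
for k in range(600):
    # step size 1/L, where L = 1/mu is the smoothed function's Lipschitz
    # gradient constant
    x -= mu * huber_grad(x - 3.0, mu)
    mu = max(0.97 * mu, 1e-6)          # homotopy: shrink the smoothing parameter
```

The homotopy replaces the usual "pick mu from the target accuracy in advance" rule: the smoothing is tightened on the fly while the iterates approach the nonsmooth minimizer.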

20.
《Optimization》2012,61(6):1107-1130

We develop three algorithms to solve the subproblems generated by the augmented Lagrangian methods introduced by Iusem-Nasri (2010) for the equilibrium problem. The first algorithm that we propose incorporates the Newton method and the other two are instances of the subgradient projection method. One of our algorithms is also capable of solving nondifferentiable equilibrium problems. Using well-known test problems, all algorithms introduced here are implemented and numerical results are reported to compare their performances.

