A total of 20 similar documents were found; search time: 15 ms.
1.
Ana Maria A.C. Rocha, Tiago F.M.C. Martins. Journal of Computational and Applied Mathematics, 2011, 235(16): 4611–4620
This paper presents an augmented Lagrangian methodology with a stochastic population based algorithm for solving nonlinear constrained global optimization problems. The method approximately solves a sequence of simple bound global optimization subproblems using a fish swarm intelligent algorithm. A stochastic convergence analysis of the fish swarm iterative process is included. Numerical results with a benchmark set of problems are shown, including a comparison with other stochastic-type algorithms.
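As orientation for the scheme described above, the sketch below shows a generic PHR-style augmented-Lagrangian outer loop whose bound-constrained subproblems are handed to a stochastic solver; the random-search routine is only a stand-in for the fish-swarm algorithm of the paper, and all names, updates, and tolerances are illustrative rather than the authors' implementation.

```python
import numpy as np

def solve_subproblem(objective, lower, upper, n_samples=2000, rng=None):
    """Crude stochastic minimizer over the box [lower, upper] (stand-in for the fish-swarm solver)."""
    rng = np.random.default_rng() if rng is None else rng
    candidates = rng.uniform(lower, upper, size=(n_samples, len(lower)))
    values = np.apply_along_axis(objective, 1, candidates)
    return candidates[np.argmin(values)]

def augmented_lagrangian(f, h, lower, upper, mu0=1.0, outer_iters=20):
    """Minimize f(x) s.t. h(x) = 0 and lower <= x <= upper via a PHR-style outer loop."""
    x = (lower + upper) / 2.0
    lam = np.zeros_like(np.atleast_1d(h(x)))
    mu = mu0
    for _ in range(outer_iters):
        def L(z):
            hz = np.atleast_1d(h(z))
            return f(z) + lam @ hz + 0.5 * mu * hz @ hz   # augmented Lagrangian value
        x = solve_subproblem(L, lower, upper)
        lam = lam + mu * np.atleast_1d(h(x))              # first-order multiplier update
        mu = min(10.0 * mu, 1e8)                          # tighten the penalty
    return x, lam
```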
2.
Parallel splitting augmented Lagrangian methods for monotone structured variational inequalities (cited 2 times: 0 self-citations, 2 by others)
Bing-Sheng He. Computational Optimization and Applications, 2009, 42(2): 195–212
The typical structured variational inequalities can be interpreted as a system of equilibrium problems with a leader and two cooperative followers. Assume that, based on the instruction given by the leader, each follower can solve its individual equilibrium sub-problem in its own way. The responsibility of the leader is to give a more reasonable instruction for the next iteration loop based on the feedback information from the followers. This consideration leads us to present a parallel splitting augmented Lagrangian method (abbreviated to PSALM). The proposed method can be extended to solve the system of equilibrium problems with three separable operators. Finally, it is explained why we cannot use the same technique to develop similar methods for problems with more than three separable operators.
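To make the leader–follower picture concrete, consider the prototypical separable problem $\min\{\theta_1(x)+\theta_2(y) : Ax+By=b\}$ behind such structured variational inequalities. In a generic Jacobi-type (parallel) splitting of the augmented Lagrangian, the two "followers" solve their subproblems simultaneously for a multiplier $\lambda^k$ handed down by the "leader":
\[
\begin{aligned}
x^{k+1} &\in \arg\min_x\ \theta_1(x) - (\lambda^k)^{\top}(Ax + By^k - b) + \tfrac{\beta}{2}\,\|Ax + By^k - b\|^2,\\
y^{k+1} &\in \arg\min_y\ \theta_2(y) - (\lambda^k)^{\top}(Ax^k + By - b) + \tfrac{\beta}{2}\,\|Ax^k + By - b\|^2,\\
\lambda^{k+1} &= \lambda^k - \beta\,(Ax^{k+1} + By^{k+1} - b).
\end{aligned}
\]
This is only the plain Jacobi-type update; the PSALM of the paper works on the variational-inequality formulation and adds further safeguards to guarantee convergence.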
3.
In this paper we classify different augmented Lagrangian functions into three unified classes. Based on two unified formulations, we construct, respectively, two convergent augmented Lagrangian methods that do not require the global solvability of the Lagrangian relaxation and whose global convergence properties require neither the boundedness of the multiplier sequence nor any constraint qualification. In particular, when the sequence of iteration points does not converge, we give a necessary and sufficient condition for the convergence of the objective values of the iteration points. We further derive two multiplier algorithms which require the same convergence condition and possess the same properties as the proposed convergent augmented Lagrangian methods. The existence of a global saddle point is crucial to guarantee the success of a dual search. In the second half of this paper we generalize the existence theorems for a global saddle point in the literature under the framework of the unified classes of augmented Lagrangian functions.
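For reference, the global saddle point that makes a dual search succeed is, for an augmented Lagrangian $L_r(x,\lambda)$ of $\min\{f(x): g(x)\le 0,\ x\in X\}$, a pair $(\bar x,\bar\lambda)$ satisfying
\[
L_r(\bar x,\lambda)\ \le\ L_r(\bar x,\bar\lambda)\ \le\ L_r(x,\bar\lambda)
\qquad\text{for all } x\in X\ \text{and all admissible }\lambda .
\]
This is the standard definition rather than the paper's unified classes; when it holds, $\bar x$ solves the primal problem and $\bar\lambda$ solves the dual.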
4.
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of augmented Lagrangians that come from the popular Powell–Hestenes–Rockafellar scheme. A combined algorithm that uses either the quasi-Newton formula or a truncated-Newton procedure, depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the CUTE collection are presented.
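For inequality constraints $g_i(x)\le 0$, the Powell–Hestenes–Rockafellar augmented Lagrangian referred to above has the well-known form
\[
L_\rho(x,\lambda)\;=\;f(x)\;+\;\frac{1}{2\rho}\sum_i\Bigl(\max\{0,\ \lambda_i+\rho\,g_i(x)\}^2-\lambda_i^2\Bigr),
\]
with multiplier update $\lambda_i\leftarrow\max\{0,\ \lambda_i+\rho\,g_i(x)\}$; the quasi-Newton preconditioners in the paper take the structure induced by this formula into account.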
5.
The augmented Lagrangian method is attractive for constrained optimization. When it is applied to a class of constrained variational inequalities, the sub-problem in each iteration is a nonlinear complementarity problem (NCP). By introducing a logarithmic-quadratic proximal term, the sub-NCP becomes a system of nonlinear equations, which we call the LQP system. Solving a system of nonlinear equations is easier than solving the related NCP, because the solution of the NCP has combinatorial properties. In this paper, we present an inexact logarithmic-quadratic proximal augmented Lagrangian method for a class of constrained variational inequalities, in which the LQP system is solved approximately under a rather relaxed inexactness criterion. The generated sequence is Fejér monotone, and global convergence is proved. Finally, some numerical test results for traffic equilibrium problems are presented to demonstrate the efficiency of the method.
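For orientation (the details may differ from the paper's exact construction), logarithmic-quadratic proximal terms in the Auslender–Teboulle–Ben-Tiba line of work are typically built componentwise from a kernel of the form
\[
\varphi(t)\;=\;\frac{\nu}{2}\,(t-1)^2\;+\;\mu\,\bigl(t-\log t-1\bigr),\qquad \nu>\mu>0,\ t>0,
\]
whose logarithmic part keeps the iterates in the interior of the nonnegative orthant; this is why the complementarity subproblem collapses to a smooth system of nonlinear equations (the LQP system).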
6.
In [A. Ouorou, A primal-dual algorithm for monotropic programming and its application to network optimization, Computational Optimization and Applications 15 (2002) 125–143], a block-wise Gauss–Seidel method was developed for monotropic programming problems, using two different quadratic augmented Lagrangian functions defined for the primal and the dual problems. In this paper, we extend the concept by introducing the nonlinear rescaling principle obtained recently by Polyak [R. Polyak, Nonlinear rescaling vs smoothing technique in constrained optimization, Mathematical Programming 92 (2002) 197–235].
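As a rough reminder of the ingredient being imported (not the paper's exact construction), Polyak's nonlinear rescaling replaces each inequality constraint $c_i(x)\ge 0$ by the equivalent constraint $k^{-1}\psi(k\,c_i(x))\ge 0$, with $\psi$ smooth, increasing, strictly concave, $\psi(0)=0$ and $\psi'(0)=1$, and works with the rescaled Lagrangian
\[
\mathcal{L}(x,\lambda,k)\;=\;f(x)\;-\;\frac{1}{k}\sum_i\lambda_i\,\psi\bigl(k\,c_i(x)\bigr)
\]
in place of the quadratic augmented Lagrangians used in the earlier block-wise Gauss–Seidel scheme.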
7.
Esdras Penêdo de Carvalho, Ansio dos Santos Júnior, To Fu Ma. Applied Mathematics and Computation, 2008, 200(2): 529
A new approach for solving the optimal power flow (OPF) problem is established by combining the reduced gradient method and the augmented Lagrangian method with barriers and exploring specific characteristics of the relations between the variables of the OPF problem. Computer simulations on IEEE 14-bus and IEEE 30-bus test systems illustrate the method.
8.
Operations Research Letters, 2019, 47(3): 185–189
We consider the augmented Lagrangian method (ALM) for constrained optimization problems in the presence of convex inequality and convex abstract constraints. We focus on the case where the Lagrangian sub-problems are solved up to approximate stationary points, with increasing accuracy. We analyze two different criteria of approximate stationarity for the sub-problems and we prove the global convergence to stationary points of ALM in both cases.
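A generic way to formalize "solved up to approximate stationary points with increasing accuracy" (one plausible criterion, not necessarily either of the two analyzed in the paper) is to accept $x^k$ as soon as
\[
\operatorname{dist}\Bigl(0,\ \nabla_x L_{\rho_k}(x^k,\lambda^k)+N_C(x^k)\Bigr)\ \le\ \varepsilon_k,
\qquad \varepsilon_k\downarrow 0,
\]
where $L_{\rho_k}$ is the augmented Lagrangian of the inequality constraints and $N_C$ denotes the normal cone of the convex abstract constraint set $C$.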
9.
The tangent cone and (regular) normal cone of a closed set under an invertible variable transformation around a given point are investigated, leading to the concepts of the θ^{-1}-tangent cone of a set and the θ^{-1}-subderivative of a function. When the notion of θ^{-1}-subderivative is applied to perturbation functions, a class of augmented Lagrangians involving an invertible mapping of perturbation variables is obtained, in which the dualizing parameterization and augmenting functions are not necessarily convex in the perturbation variables. A necessary and sufficient condition for the exact penalty representation under the proposed augmented Lagrangian scheme is obtained. For an augmenting function with a Euclidean norm, a sufficient condition (resp., a sufficient and necessary condition) for an arbitrary vector (resp., 0) to support an exact penalty representation is given in terms of θ^{-1}-subderivatives. An example of the variable transformation applied to constrained optimization problems is given, which yields several exact penalization results from the literature.
10.
Parallel iterative methods are powerful in solving large systems of linear equations (LEs). Existing parallel computing research focuses mainly on sparse systems or on systems with particular structure. Most approaches are based on parallel implementations of classical relaxation methods such as the Gauss–Seidel, SOR, and AOR methods, which can be carried out efficiently on multiprocessor systems. In this paper, we propose a novel parallel splitting operator method in which we divide the coefficient matrix into two or three parts. We then convert the original problem (LEs) into a monotone (linear) variational inequality problem (VI) with separable structure. Finally, an inexact parallel splitting augmented Lagrangian method is proposed to solve the variational inequality problem. To avoid dealing with the matrix inverse operator, we introduce proper inexact terms in the subproblems so that the complexity of each iteration of the proposed method is O(n²). In addition, the proposed method does not require any special structure of the system of LEs under consideration. Convergence of the proposed methods, dealing with two and three separable operators respectively, is proved. Numerical computations are provided to show the applicability and robustness of the proposed methods.
11.
Refinement of Lagrangian bounds in optimization problems (cited 1 time: 0 self-citations, 1 by others)
I. S. Litvinchev. Computational Mathematics and Mathematical Physics, 2007, 47(7): 1101–1107
Lagrangian constraint relaxation and the corresponding bounds for the optimal value of an original optimization problem are examined. Techniques for the refinement of the classical Lagrangian bounds are investigated in the case where the complementary slackness conditions are not fulfilled because either the original formulation is nonconvex or the Lagrange multipliers are nonoptimal. Examples are given of integer and convex problems for which the modified bounds improve the classical Lagrangian bounds.
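For context, the classical Lagrangian bound being refined is the weak-duality estimate: for $\min\{f(x): g(x)\le 0,\ x\in X\}$ with optimal value $f^{*}$,
\[
L(\lambda)\;=\;\min_{x\in X}\bigl\{f(x)+\lambda^{\top}g(x)\bigr\}\;\le\;f^{*}
\qquad\text{for every }\lambda\ge 0,
\]
and the refinements studied here aim to tighten this bound when complementary slackness fails.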
12.
A subspace limited memory quasi-Newton algorithm for large-scale nonlinear bound constrained optimization (cited 10 times: 0 self-citations, 10 by others)
In this paper we propose a subspace limited memory quasi-Newton method for solving large-scale optimization problems with simple bounds on the variables. The limited memory quasi-Newton method is used to update the variables with indices outside of the active set, while the projected gradient method is used to update the active variables. The search direction consists of three parts: a subspace quasi-Newton direction, and subspace gradient and modified-gradient directions. Our algorithm can be applied to large-scale problems since there is no need to solve any subproblems. The global convergence of the method is proved and some numerical results are also given.
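A minimal sketch of the subspace idea, under simplifying assumptions (exact gradients, a fixed steplength, stored pairs satisfying the curvature condition $y^{\top}s>0$, and none of the paper's modified-gradient or globalization details), might look as follows; all names are illustrative.

```python
import numpy as np

def active_set(x, g, lower, upper, tol=1e-10):
    """Variables fixed at a bound where the gradient pushes outside the box."""
    at_lower = (x - lower <= tol) & (g > 0)
    at_upper = (upper - x <= tol) & (g < 0)
    return at_lower | at_upper

def lbfgs_direction(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion, here applied on the free subspace."""
    q = g.copy()
    stack = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest to oldest
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        stack.append((alpha, rho, s, y))
    gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1]) if s_list else 1.0
    r = gamma * q
    for alpha, rho, s, y in reversed(stack):               # oldest to newest
        beta = rho * (y @ r)
        r += (alpha - beta) * s
    return -r

def subspace_step(x, grad_f, lower, upper, s_list, y_list, step=1.0):
    """One combined step: L-BFGS on free variables, projected gradient on active ones."""
    g = grad_f(x)
    act = active_set(x, g, lower, upper)
    d = np.empty_like(x)
    d[act] = -g[act]                                       # projected-gradient part
    d[~act] = lbfgs_direction(g[~act],
                              [s[~act] for s in s_list],
                              [y[~act] for y in y_list])
    return np.clip(x + step * d, lower, upper)             # stay feasible in the box
```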
13.
T. S. Chang. Journal of Optimization Theory and Applications, 2008, 137(3): 691–697
This note presents not only a surrogate subgradient method, but also a framework of surrogate subgradient methods. Furthermore, the framework can be used not only for separable problems, but also for coupled subproblems. The note delineates such a framework and shows that the algorithm converges for a larger stepsize.
The author thanks Professor Ching-An Lin from the Department of Electrical and Control Engineering of National Chiao Tung University, Hsinchu, Taiwan for valuable discussions.
14.
We analyze the rate of local convergence of the augmented Lagrangian method in nonlinear semidefinite optimization. The presence of the positive semidefinite cone constraint requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functions, and variational analysis of the projection operator in the symmetric matrix space. Without requiring strict complementarity, we prove that, under the constraint nondegeneracy condition and the strong second-order sufficient condition, the rate of convergence is linear and the ratio constant is proportional to 1/c, where c is the penalty parameter and exceeds a certain threshold.
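Schematically (not the theorem's exact statement), "linear rate with ratio constant proportional to 1/c" means there are $\kappa>0$ and a threshold $\bar c$ such that, for every $c\ge\bar c$ and iterates starting close enough to a KKT pair $(x^{*},\lambda^{*})$,
\[
\bigl\|\lambda^{k+1}-\lambda^{*}\bigr\|\ \le\ \frac{\kappa}{c}\,\bigl\|\lambda^{k}-\lambda^{*}\bigr\|,
\]
with the primal iterates correspondingly close to $x^{*}$; larger penalty parameters therefore yield faster local contraction.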
The research of Defeng Sun is partly supported by the Academic Research Fund from the National University of Singapore. The research of Jie Sun and Liwei Zhang is partly supported by Singapore–MIT Alliance and by Grants RP314000-042/057-112 of the National University of Singapore. The research of Liwei Zhang is also supported by the National Natural Science Foundation of China under project grant no. 10471015 and by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, China.
15.
The paper analyzes the rate of local convergence of the augmented Lagrangian method for nonlinear second-order cone optimization problems. Under the constraint nondegeneracy condition and the strong second-order sufficient condition, we demonstrate that the sequence of iterates generated by the augmented Lagrangian method locally converges to a local minimizer at a linear rate, whose ratio constant is proportional to 1/τ, with the penalty parameter τ not less than a certain threshold. Importantly and interestingly enough, the analysis does not require the strict complementarity condition.
Supported by the National Natural Science Foundation of China under Project 10771026 and by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
16.
We introduce and analyze an exterior-point method (EPM) for constrained optimization problems with both inequality constraints and equations. We show that under the standard second-order optimality conditions the EPM converges to the primal–dual solution with a 1.5-Q-superlinear rate.
Dedicated to Professor Gil Strang on the occasion of his 70th birthday.
17.
18.
Optimization, 2012, 61(11): 1331–1345
Li and Sun [D. Li and X.L. Sun, Existence of a saddle point in nonconvex constrained optimization, J. Global Optim. 21 (2001), pp. 39–50; D. Li and X.L. Sun, Convexification and existence of saddle point in a p-th-power reformulation for nonconvex constrained optimization, Nonlinear Anal. 47 (2001), pp. 5611–5622] established the existence of a global saddle point of the p-th power Lagrangian functions for constrained nonconvex optimization under second-order sufficiency conditions and the additional conditions that the feasible set is compact and the global solution of the primal problem is unique. In this article, it is shown that the same results can be obtained under assumptions that require neither the compactness of the feasible set nor the uniqueness of the global solution of the primal problem.
19.
M. A. Diniz-Ehrhardt, M. A. Gomes-Ruggiero, J. M. Martínez, S. A. Santos. Journal of Optimization Theory and Applications, 2004, 123(3): 497–517
The spectral projected gradient method SPG is an algorithm for large-scale bound-constrained optimization introduced recently by Birgin, Martínez, and Raydan. It is based on Raydan's unconstrained generalization of the Barzilai–Borwein method for quadratics. The SPG algorithm turned out to be surprisingly effective for solving many large-scale minimization problems with box constraints. Therefore, it is natural to test its performance for solving the sub-problems that appear in nonlinear programming methods based on augmented Lagrangians. In this work, augmented Lagrangian methods which use SPG as the underlying convex-constraint solver are introduced (ALSPG), and the methods are tested on two sets of problems. First, a meaningful subset of large-scale nonlinearly constrained problems from the CUTE collection is solved and the performance is compared with that of LANCELOT. Second, a family of location problems in the minimax formulation is solved and compared against the package FFSQP.
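For reference, the core SPG ingredient used as the subproblem solver combines projection onto the feasible set $\Omega$ with the Barzilai–Borwein spectral steplength:
\[
d_k \;=\; P_{\Omega}\bigl(x_k-\alpha_k\nabla f(x_k)\bigr)-x_k,
\qquad
\alpha_k \;=\; \frac{s_{k-1}^{\top}s_{k-1}}{s_{k-1}^{\top}y_{k-1}},
\]
with $s_{k-1}=x_k-x_{k-1}$ and $y_{k-1}=\nabla f(x_k)-\nabla f(x_{k-1})$, followed by a nonmonotone line search along $d_k$; this is the step applied to each augmented-Lagrangian subproblem in ALSPG.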
20.
Magnetic resonance images which are corrupted by noise and by smooth modulations are corrected using a variational formulation that incorporates a total-variation-like penalty for the image and a high-order penalty for the modulation. The optimality system is derived and numerically discretized. The cost functional used is non-convex, but it possesses a bilinear structure which allows the ambiguity among solutions to be resolved technically by regularization and practically by normalizing the maximum value of the modulation. Since the cost is convex in each single argument, convex analysis is used to formulate the optimality condition for the image in terms of a primal-dual system. To solve the optimality system, a nonlinear Gauss–Seidel outer iteration is used in which the cost is minimized with respect to one variable after the other using an inner generalized Newton iteration. Favorable computational results are shown for artificial phantoms as well as for realistic magnetic resonance images. Reported computational times demonstrate the feasibility of the approach in practice.