1.
《Optimization》2012,61(1-4):69-87
In the present paper the logarithmic barrier method applied to linearly constrained convex optimization problems is studied from the viewpoint of classical path-following algorithms. In particular, the radius of convergence of Newton's method, which depends on the barrier parameter itself, is estimated in standard norms by a bound independent of that parameter, without explicitly using self-concordance properties. The results establish a parameter selection rule that guarantees overall convergence of a barrier technique with only one Newton step at each parameter level, and the complexity of the method can be estimated.
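A minimal numerical sketch of such a path-following scheme, with one (damped) Newton step per barrier-parameter level. The toy problem, the reduction factor sigma, and the damping safeguard are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def barrier_newton_path(grad_f, hess_f, A, b, x0, mu0=1.0, sigma=0.7, tol=1e-8, max_outer=200):
    """One damped Newton step per barrier-parameter level mu, then mu <- sigma*mu.

    Minimizes f(x) subject to A x <= b via the log barrier
    phi_mu(x) = f(x) - mu * sum(log(b - A x)).
    """
    x, mu = x0.astype(float), mu0
    for _ in range(max_outer):
        slack = b - A @ x                      # must stay strictly positive
        g = grad_f(x) + mu * A.T @ (1.0 / slack)
        H = hess_f(x) + mu * A.T @ np.diag(1.0 / slack**2) @ A
        dx = np.linalg.solve(H, -g)
        t = 1.0                                # damping: keep the iterate strictly feasible
        while np.any(b - A @ (x + t * dx) <= 0):
            t *= 0.5
        x = x + t * dx
        mu *= sigma
        if mu < tol:
            break
    return x

# toy instance (assumed): min 0.5*||x - c||^2  s.t.  0 <= x <= 1
c = np.array([1.5, -0.3, 0.4])
grad_f = lambda x: x - c
hess_f = lambda x: np.eye(3)
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.concatenate([np.ones(3), np.zeros(3)])
x = barrier_newton_path(grad_f, hess_f, A, b, x0=np.full(3, 0.5))
print(x)   # expect approximately the projection of c onto [0,1]^3: [1, 0, 0.4]
```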
2.
In this paper, we study the search directions of three important interior-point algorithms, namely, the primal-affine scaling method (with logarithmic barrier function), the dual-affine scaling method (with logarithmic barrier function), and the primal-dual interior-point method. From an algebraic point of view, we show that the search directions of these three algorithms are merely Newton directions along three different paths that lead to a solution of the Karush-Kuhn-Tucker conditions of a given linear programming problem. From a geometric point of view, we show that these directions can be obtained by solving certain well-defined subproblems. Both views provide a general platform for studying existing interior-point methods and deriving new interior-point algorithms. We illustrate the derivation of new interior-point algorithms by replacing the logarithmic barrier function with an entropic barrier function, and the results are then generalized and discussed.
This work is partially supported by the North Carolina Supercomputing Center 1990 Cray Grant Program sponsored by Cray Research.
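A small sketch of the primal-dual Newton direction for the perturbed Karush-Kuhn-Tucker system of a standard-form linear program. For clarity the full (unreduced) Newton system is assembled and solved directly; the toy LP and the starting point are assumptions:

```python
import numpy as np

def primal_dual_direction(A, b, c, x, y, s, mu):
    """Newton direction for the perturbed KKT system of
       min c^T x  s.t.  A x = b, x >= 0:
       A x = b,  A^T y + s = c,  X S e = mu e."""
    m, n = A.shape
    X, S = np.diag(x), np.diag(s)
    r_p = b - A @ x                     # primal residual
    r_d = c - A.T @ y - s               # dual residual
    r_c = mu * np.ones(n) - x * s       # complementarity residual
    K = np.block([[A,                np.zeros((m, m)), np.zeros((m, n))],
                  [np.zeros((n, n)), A.T,              np.eye(n)],
                  [S,                np.zeros((n, m)), X]])
    rhs = np.concatenate([r_p, r_d, r_c])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + m], d[n + m:]   # dx, dy, ds

# toy LP (assumed): min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y, s = np.array([0.5, 0.5]), np.array([0.0]), np.array([1.0, 2.0])
dx, dy, ds = primal_dual_direction(A, b, c, x, y, s, mu=0.1)
print(dx, dy, ds)
```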
3.
In this paper, we describe a variant of the Newton interior-point method in [8] for nonlinear programming problems. In this scheme, the perturbation parameter can be chosen within a range of values, and an iterative method can be used to approximately solve the reduced linear system arising at each step. We devise an inner termination rule that guarantees the global convergence of this Newton inexact interior-point method. We remark that the required assumptions are weaker than those stated in [8], as illustrated by some numerical examples.
This research was supported by the Italian Ministry for Education, University and Research (MIUR), FIRB Project No. RBAU01JYPN.
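A sketch of the inexact inner solve: the linear system of a Newton step is solved by a truncated conjugate-gradient loop whose stopping tolerance is tied to the current perturbation parameter. The specific rule rtol = eta*mu below is an assumption for illustration; the paper's inner termination rule is the one that carries the global-convergence proof:

```python
import numpy as np

def truncated_cg(H, g, rtol, max_iter=200):
    """Conjugate gradients on H d = -g, stopped once ||H d + g|| <= rtol * ||g||.
    In an inexact interior-point step, rtol is tied to the current perturbation
    parameter mu, so early outer iterations use cheap, loose inner solves."""
    d = np.zeros_like(g)
    r = -g - H @ d            # residual of H d = -g
    p = r.copy()
    g_norm = np.linalg.norm(g)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= rtol * g_norm:
            break
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)
        d += alpha * p
        r_new = r - alpha * Hp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d

# example of the tolerance rule (eta and its link to mu are assumptions):
mu, eta = 1e-2, 0.5
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, -2.0])
d = truncated_cg(H, g, rtol=eta * mu)
print(d, np.linalg.norm(H @ d + g))
```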
4.
《Optimization》2012,61(4):335-350
We provide a theoretical basis for approximating the sensitivity of a perturbed solution and the local optimal-value function, using information generated by a sequential unconstrained minimization technique in the normal course of solving a mathematical program. We show that various algorithmic sensitivity results can be obtained without assumptions other than those needed for the corresponding nonalgorithmic results. Our results extend the algorithmic calculation of sensitivity information introduced by Fiacco, utilizing the logarithmic barrier function and the quadratic penalty function.
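A one-dimensional sketch of the idea: the byproducts of a log-barrier solve yield multiplier (and hence sensitivity) estimates. The toy problem and the simple Newton safeguard are assumptions for illustration:

```python
import numpy as np

# Sensitivity estimates from the log-barrier trajectory (illustrative 1-D toy):
#   min (x - 2)^2  s.t.  x <= 1,   barrier  B(x, mu) = (x - 2)^2 - mu*log(1 - x).
# The estimate mu / (1 - x(mu)) should approach the exact KKT multiplier 2, and
# -2 then approximates the derivative of the optimal value when the constraint
# is relaxed to x <= 1 + delta.

def barrier_minimizer(mu, x=0.0, iters=50):
    for _ in range(iters):                      # 1-D Newton on the barrier function
        g = 2.0 * (x - 2.0) + mu / (1.0 - x)
        h = 2.0 + mu / (1.0 - x) ** 2
        x_new = x - g / h
        if x_new < 1.0:                         # keep the iterate strictly feasible
            x = x_new
        else:
            x = 0.5 * (x + 1.0)
    return x

for mu in [1.0, 1e-1, 1e-2, 1e-3]:
    x_mu = barrier_minimizer(mu)
    lam = mu / (1.0 - x_mu)                     # algorithmic multiplier estimate
    print(f"mu={mu:7.0e}  x(mu)={x_mu:.6f}  lambda estimate={lam:.4f}")
# the lambda estimates tend to 2, so the local optimal-value sensitivity is about -2
```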
5.
Matrix augmentation is used for the inversion of bases associated with large linearly constrained control problems. It is shown how an efficient data structure can be maintained by keeping all state variables in the basis, and then nullifying some of them explicitly by using additional constraints. The proposed methodology, together with a basis updating scheme based on augmentation, forms the skeleton for an in-core algorithm using either the revised simplex method or the generalized reduced gradient method.
6.
Hiroshi Yamashita 《Mathematical Programming》1982,23(1):75-86
The recently proposed quasi-Newton method for constrained optimization has very attractive local convergence properties. To force global convergence of the method, Han proposed a descent method that uses Zangwill's penalty function and an exact line search. In this paper a new method is presented that adopts a differentiable penalty function and an approximate line search. The proposed penalty function has the form of the augmented Lagrangian function. An algorithm for updating the parameters that appear in the penalty function is described, and global convergence of the given method is proved.
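A minimal sketch of a penalty function of augmented-Lagrangian form with a parameter-updating rule between approximate unconstrained minimizations. The inner solver (plain gradient descent), the toy problem, and all parameter values are assumptions; the paper combines such a penalty with a quasi-Newton method and an approximate line search:

```python
import numpy as np

def augmented_lagrangian(f_grad, h, h_jac, x, lam=None, r=1.0,
                         outer=20, inner=200, step=0.05, gamma=10.0, theta=0.25):
    """Minimize f subject to h(x) = 0 with the differentiable penalty
        P(x; lam, r) = f(x) + lam^T h(x) + (r/2) ||h(x)||^2.
    The multiplier estimate and the penalty parameter are updated between
    (approximate) unconstrained minimizations."""
    lam = np.zeros(len(h(x))) if lam is None else lam
    viol_prev = np.inf
    for _ in range(outer):
        for _ in range(inner):                       # approximate inner minimization
            hx = h(x)
            g = f_grad(x) + h_jac(x).T @ (lam + r * hx)
            x = x - step * g
        hx = h(x)
        lam = lam + r * hx                           # first-order multiplier update
        viol = np.linalg.norm(hx)
        if viol > theta * viol_prev:                 # insufficient progress: increase r
            r *= gamma
        viol_prev = viol
    return x, lam

# toy problem (assumed): min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0  (solution (0.5, 0.5))
f_grad = lambda x: 2.0 * x
h = lambda x: np.array([x[0] + x[1] - 1.0])
h_jac = lambda x: np.array([[1.0, 1.0]])
x, lam = augmented_lagrangian(f_grad, h, h_jac, x=np.zeros(2))
print(x, lam)      # x near [0.5, 0.5], multiplier near -1
```

The fixed inner step size is only adequate for this well-conditioned toy; a quasi-Newton inner solver with a line search, as in the paper, is what makes the approach practical.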
7.
S. D. Flåm 《Mathematical Methods of Operations Research》1986,30(5):A209-A222
Penalty algorithms for constrained minimax problems typically involve a sequence of unconstrained approximates that is pointwise monotone in each variable. This paper generalizes convergence results to a wider class of algorithms while imposing conditions that are close to minimal.
Sponsored in part by a grant from the Norwegian Council of Scientific and Technological Research.
8.
《Optimization》2012,61(1-2):121-153
In this paper, we present a family of secant algorithms combined with a nonmonotone trust region strategy for nonlinear equality constrained optimization problems. The proposed algorithms are globally convergent while retaining the local superlinear rate, using Fletcher's penalty function as the merit function.
9.
Projected gradient methods for linearly constrained problems
The aim of this paper is to study the convergence properties of the gradient projection method and to apply these results to algorithms for linearly constrained problems. The main convergence result is obtained by defining a projected gradient and proving that the gradient projection method forces the sequence of projected gradients to zero. A consequence of this result is that if the gradient projection method converges to a nondegenerate point of a linearly constrained problem, then the active and binding constraints are identified in a finite number of iterations. As an application of our theory, we develop quadratic programming algorithms that iteratively explore a subspace defined by the active constraints. These algorithms are able to drop and add many constraints from the active set, and can compute either an accurate minimizer by a direct method or an approximate minimizer by an iterative method of the conjugate gradient type. Thus, these algorithms are attractive for large-scale problems. We show that it is possible to develop a finitely terminating quadratic programming algorithm without nondegeneracy assumptions.
Work supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38.
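A minimal sketch of the gradient projection iteration on the simplest linearly constrained set, a box, where the projection is a componentwise clip. The fixed step size and the toy quadratic are assumptions; the paper analyses more general step-size rules:

```python
import numpy as np

def projected_gradient(grad_f, project, x0, step=0.1, tol=1e-8, max_iter=1000):
    """Gradient projection with a fixed step size.  Convergence is monitored
    through the projected gradient P(x - step*grad f(x)) - x, which the method
    drives to zero."""
    x = project(x0)
    for _ in range(max_iter):
        pg = project(x - step * grad_f(x)) - x      # projected-gradient step
        if np.linalg.norm(pg) <= tol:
            break
        x = x + pg
    return x

# toy bound-constrained quadratic (bounds are the simplest linearly constrained case):
#   min 0.5*||x - c||^2  s.t.  0 <= x <= 1
c = np.array([2.0, -1.0, 0.3])
grad_f = lambda x: x - c
project = lambda x: np.clip(x, 0.0, 1.0)
x = projected_gradient(grad_f, project, x0=np.full(3, 0.5))
print(x)        # [1, 0, 0.3]; the active bounds are identified after finitely many steps
```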
10.
In this paper, we study inverse optimization for linearly constrained convex separable programming problems, which have wide applications in industrial and managerial areas. For a given feasible point of a convex separable program, inverse optimization determines whether the feasible point can be made optimal by adjusting the parameter values in the problem and, when the answer is positive, finds the parameter values that require the smallest adjustments. A necessary and sufficient condition is given for a feasible point to become optimal through parameter adjustments. Inverse optimization formulations are presented under the ℓ1 and ℓ2 norms; the resulting problems are linear programs when the ℓ1 norm is used and convex quadratic separable programs when the ℓ2 norm is used.
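A sketch of the idea in its simplest separable setting, with only bound constraints, where the smallest ℓ2 adjustment of a linear objective perturbation has a coordinate-wise closed form. The objective, the perturbation model, and the toy data are assumptions; in the paper's general linearly constrained setting one solves an LP (ℓ1) or a convex quadratic separable program (ℓ2) instead:

```python
import numpy as np

def min_l2_adjustment(fprime, x0, lower, upper):
    """Smallest linear perturbation c that makes the feasible point x0 satisfy the
    optimality conditions of   min sum_i f_i(x_i) + c_i x_i   s.t.  lower <= x <= upper.
    With bound constraints only, the problem decouples by coordinate."""
    g = fprime(x0)
    c = np.where(np.isclose(x0, lower), np.maximum(0.0, -g),     # need g + c >= 0
        np.where(np.isclose(x0, upper), np.minimum(0.0, -g),     # need g + c <= 0
                 -g))                                            # interior: g + c = 0
    return c, np.linalg.norm(c)

# toy separable objective (assumed): f_i(x_i) = 0.5*(x_i - t_i)^2
t = np.array([0.2, 1.5, -0.4])
fprime = lambda x: x - t
x0 = np.array([0.6, 1.0, 0.0])          # feasible for 0 <= x <= 1, not yet optimal
c, size = min_l2_adjustment(fprime, x0, lower=np.zeros(3), upper=np.ones(3))
print(c, size)                          # only the interior coordinate needs adjusting
```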
11.
12.
The purpose of this paper is twofold. First we consider a class of nondifferentiable penalty functions for constrained Lipschitz programs; then we show how these penalty functions can be employed to solve a constrained Lipschitz program. The penalty functions considered incorporate a barrier term that makes their value go to infinity on the boundary of a perturbation of the feasible set. Exploiting this fact, it is possible to prove, under mild compactness and regularity assumptions, a complete correspondence between the unconstrained minimization of the penalty functions and the solution of the constrained program, thus showing that the penalty functions are exact according to the definition introduced in [17]. Motivated by these results, we propose some algorithm models and study their convergence properties. We show that, even when the assumptions used to establish the exactness of the penalty functions are not satisfied, every limit point of the sequence produced by a basic algorithm model is an extended stationary point according to the definition given in [8]. Then, based on this analysis and on the one previously carried out for the penalty functions, we study the consequences of increasingly demanding assumptions on the convergence properties. In particular, we show that under the same assumptions used to establish the exactness properties of the penalty functions, it is possible to guarantee that a limit point exists and that every such limit point is a KKT point of the constrained problem.
This research has been partially supported by the National Research Program Metodi di Ottimizzazione per le Decisioni, Ministero dell'Università e della Ricerca Scientifica e Tecnologica.
13.
A class of methods is presented for solving standard linear programming problems. Like the simplex method, these methods move from one feasible solution to another at each iteration, improving the objective function as they go. Each such feasible solution is also associated with a basis. However, this feasible solution need not be an extreme point and the basic solution corresponding to the associated basis need not be feasible. Nevertheless, an optimal solution, if one exists, is found in a finite number of iterations (under nondegeneracy). An important example of a method in the class is the reduced gradient method with a slight modification regarding selection of the entering variable.
14.
15.
Alexander J. Zaslavski 《Optimization Letters》2008,2(3):287-298
We use the penalty approach in order to study constrained minimization problems. A penalty function is said to have the exact penalty property if there is a penalty coefficient for which a solution of an unconstrained penalized problem is a solution of the corresponding constrained problem. In this paper we establish the exact penalty property for a large class of inequality-constrained minimization problems.
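A small numerical illustration of the exact penalty property on a one-dimensional toy problem (the problem and the penalty form are assumptions for illustration): once the penalty coefficient is large enough, the unconstrained minimizer of the penalized function coincides with the constrained solution.

```python
import numpy as np

# Toy problem:  min (x - 2)^2  s.t.  x <= 1   (solution x* = 1, KKT multiplier 2).
# Nonsmooth penalty:  P_c(x) = (x - 2)^2 + c * max(0, x - 1).
xs = np.linspace(-1.0, 3.0, 400001)
for c in [0.5, 1.0, 2.0, 4.0, 8.0]:
    penalized = (xs - 2.0) ** 2 + c * np.maximum(0.0, xs - 1.0)
    x_min = xs[np.argmin(penalized)]
    print(f"c = {c:4.1f}   unconstrained minimizer of P_c: {x_min:.4f}")
# for c = 0.5 or 1.0 the minimizer is 1.75 or 1.5 (infeasible); for c >= 2 it is 1.0
```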
16.
In this paper, an entropy-like proximal method for the minimization of a convex function subject to positivity constraints is extended to an interior algorithm in two directions: first, to general linearly constrained convex minimization problems and, second, to variational inequalities on polyhedra. For linear programming, numerical results are presented and quadratic convergence is established.
Corresponding author. His research has been supported by C.E.E. grants: CI1* CT 92-0046.
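A minimal sketch of the positivity-constrained core of such a method. The exact entropy-like proximal step has no closed form in general; the sketch linearizes the objective at the current point, which yields a multiplicative update and keeps all iterates strictly positive. The linearization, step parameter, and toy problem are assumptions, not the paper's algorithm:

```python
import numpy as np

def entropic_prox_linearized(grad_f, x0, c=0.5, iters=200):
    """Entropy-like proximal iteration for  min f(x)  s.t.  x > 0.
    The exact step solves  argmin_x f(x) + (1/c) * sum(x*log(x/x_k) - x + x_k);
    here f is linearized at x_k, giving the closed-form multiplicative update
    x_{k+1} = x_k * exp(-c * grad f(x_k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x * np.exp(-c * grad_f(x))
    return x

# toy problem (assumed): min 0.5*||x - t||^2 over x >= 0, with one negative target
t = np.array([1.0, -0.5, 0.25])
x = entropic_prox_linearized(lambda x: x - t, x0=np.ones(3))
print(x)        # approximately [1, 0, 0.25]; the second coordinate decays toward 0
```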
17.
J. Zhu 《Mathematical Methods of Operations Research》1992,36(4):359-377
We present a primal-dual path-following interior algorithm for a class of linearly constrained convex programming problems with non-negative decision variables. We introduce the definition of a Scaled Lipschitz Condition and show that if the objective function satisfies this condition then, at each iteration, our algorithm reduces the duality gap by at least a factor of (1 − η/n), where η is positive and depends on the curvature of the objective function, by solving a system of linear equations that requires no more than O(n³) arithmetic operations. The class of functions satisfying the Scaled Lipschitz Condition includes linear, convex quadratic, and entropy functions.
18.
On the classical logarithmic barrier function method for a class of smooth convex programming problems
In this paper, we describe a natural implementation of the classical logarithmic barrier function method for smooth convex programming. It is assumed that the objective and constraint functions fulfill the so-called relative Lipschitz condition, with Lipschitz constant M > 0. In our method, we do line searches along the Newton direction with respect to the strictly convex logarithmic barrier function if we are far away from the central trajectory. If we are sufficiently close to this path, with respect to a certain metric, we reduce the barrier parameter. We prove that the number of iterations required by the algorithm to converge to an ε-optimal solution is O((1 + M²)√n log(1/ε)) or O((1 + M²)n log(1/ε)), depending on the updating scheme for the lower bound.
On leave from Eötvös University, Budapest, Hungary.
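A minimal sketch of this long-step pattern: line searches along the Newton direction while far from the central path, and a barrier-parameter reduction once a proximity criterion (here, a small Newton decrement) is met. The proximity threshold, the backtracking rule, and the toy instance are assumptions for illustration:

```python
import numpy as np

def long_step_barrier(f, grad_f, hess_f, A, b, x, mu=1.0, sigma=0.5,
                      prox_tol=0.25, mu_min=1e-8, max_iter=500):
    """Log-barrier method for  min f(x)  s.t.  A x <= b:  backtracking line search
    along the Newton direction of the barrier function away from the central path,
    and a reduction of mu when the Newton decrement is small."""
    phi = lambda z, m: f(z) - m * np.sum(np.log(b - A @ z))
    for _ in range(max_iter):
        s = b - A @ x
        g = grad_f(x) + mu * A.T @ (1.0 / s)
        H = hess_f(x) + mu * A.T @ np.diag(1.0 / s**2) @ A
        dx = np.linalg.solve(H, -g)
        if np.sqrt(dx @ (H @ dx)) <= prox_tol:      # close to the central path
            if mu <= mu_min:
                break
            mu *= sigma                             # reduce the barrier parameter
            continue
        t = 1.0                                     # backtracking: stay feasible, decrease phi
        while np.any(b - A @ (x + t * dx) <= 0) or phi(x + t * dx, mu) > phi(x, mu):
            t *= 0.5
        x = x + t * dx
    return x

# toy instance (assumed): min 0.5*||x - c||^2 on [0, 1]^3
c = np.array([1.5, -0.3, 0.4])
x = long_step_barrier(lambda z: 0.5 * np.sum((z - c) ** 2), lambda z: z - c,
                      lambda z: np.eye(3), np.vstack([np.eye(3), -np.eye(3)]),
                      np.concatenate([np.ones(3), np.zeros(3)]), x=np.full(3, 0.5))
print(x)    # close to [1, 0, 0.4]
```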
19.
I. V. Konnov 《Optimization》2018,67(5):665-682
We suggest simple implementable modifications of the conditional gradient and gradient projection methods for smooth convex optimization problems in Hilbert spaces. The customary methods usually attain only weak convergence. We prove strong convergence of the new versions and establish their complexity estimates, which are similar to the convergence rates of the weakly convergent versions. Preliminary computational tests confirm the efficiency of the proposed modifications.
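For reference, a sketch of the basic conditional gradient (Frank-Wolfe) method that such papers start from; the strongly convergent modification proposed in the paper is not reproduced here. The box feasible set, the open-loop step size, and the toy quadratic are assumptions:

```python
import numpy as np

def conditional_gradient(grad_f, lower, upper, x0, iters=500):
    """Basic conditional-gradient (Frank-Wolfe) method on the box [lower, upper].
    The linear subproblem  min_{v in box} <grad f(x), v>  is solved coordinate-wise
    in closed form."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = grad_f(x)
        v = np.where(g > 0, lower, upper)      # vertex minimizing the linearized objective
        gamma = 2.0 / (k + 2.0)                # classical open-loop step size
        x = x + gamma * (v - x)
    return x

# toy problem (assumed): min 0.5*||x - t||^2 over the box [0, 1]^3
t = np.array([2.0, -1.0, 0.3])
x = conditional_gradient(lambda x: x - t, np.zeros(3), np.ones(3), x0=np.full(3, 0.5))
print(x)        # approximately [1, 0, 0.3]
```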
20.
John W. Pearson Martin Stoll Andrew J. Wathen 《Numerical Linear Algebra with Applications》2014,21(1):81-97
Optimal control problems with partial differential equations as constraints play an important role in many applications. The inclusion of bound constraints for the state variable poses a significant challenge for optimization methods. Our focus here is on the incorporation of the constraints via the Moreau–Yosida regularization technique. This method has been studied recently and has proven to be advantageous compared with other approaches. In this paper, we develop robust preconditioners for the efficient solution of the Newton steps associated with the fast solution of the Moreau–Yosida regularized problem. Numerical results illustrate the efficiency of our approach. Copyright © 2012 John Wiley & Sons, Ltd.
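A finite-dimensional sketch of the Moreau–Yosida treatment of a state bound in a toy discretized control problem. The operator, weights, and Newton-type solve below are assumptions; in particular, the regularized Newton systems are solved directly rather than with the robust preconditioners developed in the paper:

```python
import numpy as np

def moreau_yosida_state_constraint(K, y_d, psi, alpha=1e-4, gamma=1e4, iters=30):
    """Toy problem:  min 0.5*||y - y_d||^2 + 0.5*alpha*||u||^2,  K y = u,  y <= psi,
    with the bound replaced by the Moreau-Yosida penalty 0.5*gamma*||max(0, y - psi)||^2.
    Eliminating u = K y gives a semismooth problem in y, solved by a Newton-type
    iteration on its generalized derivative."""
    n = len(y_d)
    y = np.zeros(n)
    for _ in range(iters):
        active = (y > psi).astype(float)                    # active-set indicator
        grad = (y - y_d) + alpha * K.T @ (K @ y) + gamma * active * (y - psi)
        hess = np.eye(n) + alpha * K.T @ K + gamma * np.diag(active)
        y = y - np.linalg.solve(hess, grad)
    return y, K @ y                                          # state and control

# toy 1-D "PDE": K is a finite-difference Laplacian-like operator (an assumption)
n = 50
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2
y_d = np.ones(n)                   # desired state
psi = 0.6 * np.ones(n)             # upper bound on the state
y, u = moreau_yosida_state_constraint(K, y_d, psi)
print(y.max())                     # close to 0.6: the penalty enforces the bound approximately
```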