Found 20 similar documents; search took 24 ms.
1.
Roman A. Polyak 《Mathematical Programming》2002,92(2):197-235
We introduce an alternative to the smoothing technique approach for constrained optimization. As it turns out, for any given smoothing function there exists a modification with particular properties. We use the modification for Nonlinear Rescaling (NR), which transforms the constraints of a given constrained optimization problem into an equivalent set of constraints. The constraints transformation is scaled by a vector of positive parameters. The Lagrangian for the equivalent problems is to the corresponding Smoothing Penalty function as the Augmented Lagrangian is to the Classical Penalty function, or as MBFs are to Barrier Functions. Moreover, the Lagrangians for the equivalent problems combine the best properties of Quadratic and Nonquadratic Augmented Lagrangians while being free from their main drawbacks. Sequential unconstrained minimization of the Lagrangian for the equivalent problem in primal space, followed by an update of both the Lagrange multipliers and the scaling parameters, leads to a new class of NR multipliers methods, which are equivalent to Interior Quadratic Prox methods for the dual problem. We prove convergence and estimate the rate of convergence of the NR multipliers method under very mild assumptions on the input data, and we estimate the rate of convergence under various further assumptions. In particular, under the standard second-order optimality conditions, the NR method converges with a Q-linear rate without unbounded increase of the scaling parameters that correspond to the active constraints. We also establish global quadratic convergence of the NR methods for Linear Programming with a unique dual solution. We provide numerical results, which strongly support the theory.
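The multipliers-method pattern described above can be illustrated with the classical quadratic augmented Lagrangian for a single scalar inequality constraint. This is a generic sketch, not Polyak's NR transformation, and the toy problem data are hypothetical:

```python
def augmented_lagrangian(f_grad, g, g_grad, x, rho=10.0, outer=20, inner=500):
    """Minimize f(x) subject to g(x) >= 0 for scalar x and one constraint.

    Sequential unconstrained minimization of the augmented Lagrangian,
    followed by the multiplier update lam <- max(0, lam - rho * g(x)).
    """
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):                  # inner gradient descent
            mult = max(0.0, lam - rho * g(x))   # active-part multiplier
            x -= 0.02 * (f_grad(x) - mult * g_grad(x))
        lam = max(0.0, lam - rho * g(x))        # multiplier update
    return x, lam

# min x^2  subject to  x >= 1  (solution x* = 1 with multiplier lam* = 2)
x_star, lam_star = augmented_lagrangian(
    f_grad=lambda x: 2.0 * x,
    g=lambda x: x - 1.0,
    g_grad=lambda x: 1.0,
    x=0.0,
)
```

For a fixed penalty parameter rho, each multiplier update contracts the dual error by roughly 2/(2 + rho) on this example, which is the linear-rate behavior the abstract alludes to.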
Received: September 2000 / Accepted: October 2001 / Published online April 12, 2002
2.
Mihai Anitescu 《Mathematical Programming》2002,92(2):359-386
We analyze the convergence of a sequential quadratic programming (SQP) method for nonlinear programming for the case in which
the Jacobian of the active constraints is rank deficient at the solution and/or strict complementarity does not hold for some
or any feasible Lagrange multipliers. We use a nondifferentiable exact penalty function, and we prove that the sequence generated
by an SQP using a line search is locally R-linearly convergent if the matrix of the quadratic program is positive definite
and constant over iterations, provided that the Mangasarian-Fromovitz constraint qualification and some second-order sufficiency
conditions hold.
Received: April 28, 1998 / Accepted: June 28, 2001 / Published online April 12, 2002
3.
Martin Gugat 《Mathematical Programming》2000,88(2):255-275
The feasible set of a convex semi-infinite program is described by a possibly infinite system of convex inequality constraints. We want to obtain an upper bound for the distance of a given point from this set in terms of a constant multiplied by the value of the maximally violated constraint function at this point. Apart from this Lipschitz case we also consider error bounds of Hölder type, where the value of the residual of the constraints is raised to a certain power. We give sufficient conditions for the validity of such bounds. Our conditions do not require that the Slater condition holds. For the definition of our conditions, we consider the projections on enlarged sets corresponding to relaxed constraints. We present a condition in terms of projection multipliers, a condition in terms of Slater points, and a condition in terms of descent directions. For the Lipschitz case, we give five equivalent characterizations of the validity of a global error bound. We extend previous results in two directions: first, we consider infinite systems of inequalities instead of finite systems; second, we do not assume that the Slater condition holds, which has been required in almost all earlier papers.
Received: April 12, 1999 / Accepted: April 5, 2000 / Published online July 20, 2000
4.
We obtain local estimates of the distance to a set defined by equality constraints under assumptions which are weaker than
those previously used in the literature. Specifically, we assume that the constraints mapping has a Lipschitzian derivative,
and satisfies a certain 2-regularity condition at the point under consideration. This setting directly subsumes the classical
regular case and the twice differentiable 2-regular case, for which error bounds are known, but it is significantly richer
than either of these two cases. When applied to a certain equation-based reformulation of the nonlinear complementarity problem,
our results yield an error bound under an assumption more general than b-regularity. The latter appears to be the weakest assumption under which a local error bound for complementarity problems
was previously available. We also discuss an application of our results to the convergence rate analysis of the exterior penalty
method for solving irregular problems.
Received: February 2000 / Accepted: November 2000 / Published online January 17, 2001
5.
Sensitivity analysis in linear programming and semidefinite programming using interior-point methods
We analyze perturbations of the right-hand side and the cost parameters in linear programming (LP) and semidefinite programming
(SDP). We obtain tight bounds on the perturbations that allow interior-point methods to recover feasible and near-optimal
solutions in a single interior-point iteration. For the unique, nondegenerate solution case in LP, we show that the bounds
obtained using interior-point methods compare nicely with the bounds arising from using the optimal basis. We also present
explicit bounds for SDP using the Monteiro-Zhang family of search directions and specialize them to the AHO, HKM, and
NT directions.
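As a point of comparison, the optimal-basis bound that the interior-point bounds are measured against is easy to compute directly: for a nondegenerate basic optimal solution with basis matrix B, the basis remains optimal under a right-hand-side change b → b − t·d as long as B⁻¹(b − t·d) ≥ 0, since the dual solution and reduced costs are unaffected. A small numpy sketch on hypothetical data (not from the paper):

```python
import numpy as np

# Standard-form LP:  min c^T x  s.t.  A x = b, x >= 0  (hypothetical data).
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])

basis = [0, 1]                       # the optimal basis for this data
B = A[:, basis]
x_B = np.linalg.solve(B, b)          # basic optimal solution: [3, 1]

# Perturb b -> b - t*d.  Feasibility of the basis requires
# B^{-1} b - t * B^{-1} d >= 0, a ratio test on the positive components.
d = np.array([1.0, 0.0])
Binv_b = np.linalg.solve(B, b)
Binv_d = np.linalg.solve(B, d)
pos = Binv_d > 0
t_max = (Binv_b[pos] / Binv_d[pos]).min()   # b[0] may shrink by this much
```

Here the basis survives a decrease of b[0] by up to 2; the paper's interest is in how tightly interior-point iterates certify such ranges without access to the optimal basis.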
Received: December 1999 / Accepted: January 2001 / Published online March 22, 2001
6.
In this paper, we introduce a transformation that converts a class of linear and nonlinear semidefinite programming (SDP)
problems into nonlinear optimization problems. For those problems of interest, the transformation replaces matrix-valued constraints
by vector-valued ones, hence reducing the number of constraints by an order of magnitude. The class of transformable problems
includes instances of SDP relaxations of combinatorial optimization problems with binary variables as well as other important
SDP problems. We also derive gradient formulas for the objective function of the resulting nonlinear optimization problem
and show that both function and gradient evaluations have affordable complexities that effectively exploit the sparsity of
the problem data. This transformation, together with the efficient gradient formulas, enables the solution of very large-scale
SDP problems by gradient-based nonlinear optimization techniques. In particular, we propose a first-order log-barrier method
designed for solving a class of large-scale linear SDP problems. This algorithm operates entirely within the space of the
transformed problem while still maintaining close ties with both the primal and the dual of the original SDP problem. Global
convergence of the algorithm is established under mild and reasonable assumptions.
Received: January 5, 2000 / Accepted: October 2001 / Published online February 14, 2002
7.
Based on the authors’ previous work, which established theoretical foundations of two conceptual successive convex relaxation methods, namely the SSDP (Successive Semidefinite Programming) Relaxation Method and the SSILP (Successive Semi-Infinite Linear Programming) Relaxation Method, this paper proposes their implementable variants for general quadratic optimization problems. These problems have a linear objective function cᵀx to be maximized over a nonconvex compact feasible region F described by a finite number of quadratic inequalities. We introduce two new techniques, “discretization” and “localization,” into the SSDP and SSILP Relaxation Methods. The discretization technique makes it possible to approximate an infinite number of semi-infinite SDPs (or semi-infinite LPs), which appear at each iteration of the original methods, by a finite number of standard SDPs (or standard LPs) with a finite number of linear inequality constraints. We establish:
• Given any open convex set U containing F, there is an implementable discretization of the SSDP (or SSILP) Relaxation Method which generates a compact convex set C such that F ⊆ C ⊆ U in a finite number of iterations.
The localization technique is for the cases where we are only interested in upper bounds on the optimal objective value (for a fixed objective function vector c) but not in a global approximation of the convex hull of F. This technique allows us to generate a convex relaxation of F that is accurate only in certain directions in a neighborhood of the objective direction c, cutting off the redundant work of making the convex relaxation accurate in unnecessary directions. We establish:
• Given any positive number ε, there is an implementable localization-discretization of the SSDP (or SSILP) Relaxation Method which generates an upper bound of the objective value within ε of its maximum in a finite number of iterations.
Received: June 30, 1998 / Accepted: May 18, 2000 / Published online September 20, 2000
8.
In this paper we study the properties of the analytic central path of a semidefinite programming problem under perturbation
of the right hand side of the constraints, including the limiting behavior when the central optimal solution, namely the analytic
center of the optimal set, is approached. Our analysis assumes the primal-dual Slater condition and the strict complementarity
condition. Our findings are as follows. First, on the negative side, if we view the central optimal solution as a function
of the right hand side of the constraints, then this function is not continuous in general, whereas in the linear programming
case this function is known to be Lipschitz continuous. On the positive side, compared with the previous conclusion we obtain
a (seemingly) paradoxical result: on the central path any directional derivative with respect to the right hand side of the
constraints is bounded, and even converges as the central optimal solution is approached. This phenomenon is possible due
to the lack of a uniform bound on the derivatives with respect to the right hand side parameters. All these results are based
on the strict complementarity assumption. Concerning this last property we give an example. In that example the set of right
hand side parameters for which the strict complementarity condition holds is neither open nor closed. This is remarkable since
a similar set for which the primal-dual Slater condition holds is always open.
Received: April 2, 1998 / Accepted: January 16, 2001 / Published online March 22, 2001
9.
This paper introduces and analyses a new algorithm for minimizing a convex function subject to a finite number of convex inequality
constraints. It is assumed that the Lagrangian of the problem is strongly convex. The algorithm combines interior point methods
for dealing with the inequality constraints and quasi-Newton techniques for accelerating the convergence. Feasibility of the
iterates is progressively enforced thanks to shift variables and an exact penalty approach. Global and q-superlinear convergence is obtained for a fixed penalty parameter; global convergence to the analytic center of the optimal
set is ensured when the barrier parameter tends to zero, provided strict complementarity holds.
Received: December 21, 2000 / Accepted: July 13, 2001 / Published online February 14, 2002
10.
Optimality conditions for nonconvex semidefinite programming
Anders Forsgren 《Mathematical Programming》2000,88(1):105-128
This paper concerns nonlinear semidefinite programming problems for which no convexity assumptions can be made. We derive
first- and second-order optimality conditions analogous to those for nonlinear programming. Using techniques similar to those
used in nonlinear programming, we extend existing theory to cover situations where the constraint matrix is structurally sparse.
The discussion covers the case when strict complementarity does not hold. The regularity conditions used are consistent with
those of nonlinear programming in the sense that the conventional optimality conditions for nonlinear programming are obtained
when the constraint matrix is diagonal.
Received: May 15, 1998 / Accepted: April 12, 2000 / Published online May 12, 2000
11.
Krzysztof C. Kiwiel 《Mathematical Programming》2001,90(1):1-25
We study a general subgradient projection method for minimizing a quasiconvex objective subject to a convex set constraint
in a Hilbert space. Our setting is very general: the objective is only upper semicontinuous on its domain, which need not
be open, and various subdifferentials may be used. We extend previous results by proving convergence in objective values and to the generalized solution set for classical stepsizes t_k → 0, ∑t_k = ∞, and weak or strong convergence of the iterates to a solution for {t_k} ∈ ℓ²∖ℓ¹ under mild regularity conditions. For bounded constraint sets and suitable stepsizes, the method finds ε-solutions with an efficiency estimate of O(ε⁻²), thus being optimal in the sense of Nemirovskii.
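The projected subgradient iteration with the classical stepsize rule t_k → 0, ∑t_k = ∞ can be sketched in finite dimensions as follows. This is an illustrative toy, not the paper's Hilbert-space setting, and the box-constrained ℓ1 example is hypothetical:

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, iters=200):
    """Minimize a convex function over a convex set via x <- P(x - t_k g)."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        t = 1.0 / (k + 1)                    # t_k -> 0 and sum(t_k) = infinity
        x = project(x - t * subgrad(x))
    return x

# Minimize f(x) = |x[0]| + |x[1]| over the box [1, 2] x [-1, 1];
# the minimizer is (1, 0).
x_star = projected_subgradient(
    subgrad=lambda x: np.sign(x),            # a subgradient of the l1-norm
    project=lambda x: np.clip(x, [1.0, -1.0], [2.0, 1.0]),
    x0=np.array([2.0, 1.0]),
)
```

Note that objective values need not decrease monotonically under subgradient steps; the convergence claims in the abstract are about the stepsize sequence, not per-step descent.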
Received: October 4, 1998 / Accepted: July 24, 2000 / Published online January 17, 2001
12.
In this paper, necessary and sufficient optimality conditions are established, without assuming Lipschitz continuity, for convex composite continuous optimization problems subject to inequality constraints. Necessary conditions are also given for the special case of the optimization model involving max-min constraints, which frequently arise in many engineering applications. Optimality conditions in the presence of Lipschitz continuity are routinely obtained using chain rule formulas for the Clarke generalized Jacobian, which is a bounded set of matrices. However, in the absence of Lipschitz continuity, the missing derivative of a continuous map is often replaced by a locally unbounded generalized Jacobian map, for which the standard chain rule formulas fail to hold. In this paper we overcome this difficulty by constructing approximate Jacobians for the convex composite function involved in the model problem, using ε-perturbations of the subdifferential of the convex function and the flexible generalized calculus of unbounded approximate Jacobians. Examples are discussed to illustrate the nature of the optimality conditions.
Received: February 2001 / Accepted: September 2001 / Published online February 14, 2002
13.
In this paper, we consider a special class of nonconvex programming problems for which the objective function and constraints
are defined in terms of general nonconvex factorable functions. We propose a branch-and-bound approach based on linear programming
relaxations generated through various approximation schemes that utilize, for example, the Mean-Value Theorem and Chebyshev
interpolation polynomials coordinated with a Reformulation-Linearization Technique (RLT). A suitable partitioning process
is proposed that induces convergence to a global optimum. The algorithm has been implemented in C++ and some preliminary computational
results are reported on a set of fifteen engineering process control and design test problems from various sources in the
literature. The results indicate that the proposed procedure generates tight relaxations, even via the initial node linear
program itself. Furthermore, for nine of these fifteen problems, the application of a local search method that is initialized
at the LP relaxation solution produced the actual global optimum at the initial node of the enumeration tree. Moreover, for
two test cases, the global optimum found improves upon the solutions previously reported in the source literature.
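The overall branch-and-bound pattern described above (bound, partition, prune until convergence to a global optimum) can be sketched in one dimension. The sketch below substitutes crude interval bounds for the paper's RLT-based LP relaxations, on a hypothetical test function:

```python
import heapq

def branch_and_bound(f, lower_bound, lo, hi, tol=1e-6):
    """Globally minimize f on [lo, hi]; lower_bound(a, b) must underestimate
    the minimum of f on [a, b] and tighten as the interval shrinks."""
    best_x, best_val = lo, f(lo)
    heap = [(lower_bound(lo, hi), lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_val - tol:
            continue                          # prune: node cannot improve
        m = 0.5 * (a + b)
        if f(m) < best_val:
            best_x, best_val = m, f(m)        # incumbent update
        for a2, b2 in ((a, m), (m, b)):       # partition the interval
            lb2 = lower_bound(a2, b2)
            if lb2 < best_val - tol:
                heapq.heappush(heap, (lb2, a2, b2))
    return best_x, best_val

# f(x) = x^4 - x^2 on [-2, 2]; global minima at x = ±1/sqrt(2), value -1/4.
f = lambda x: x**4 - x**2
def interval_lb(a, b):                        # crude interval underestimate
    min_x4 = 0.0 if a <= 0.0 <= b else min(a**4, b**4)
    return min_x4 - max(a * a, b * b)

x_star, val = branch_and_bound(f, interval_lb, -2.0, 2.0)
```

The quality of `lower_bound` drives everything: the tighter the relaxation at each node, the earlier pruning kicks in, which is why the abstract emphasizes tightness of the initial LP relaxation.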
Received: January 14, 1998 / Accepted: June 7, 1999 / Published online December 15, 2000
14.
Many derivative-free methods for constrained problems are not efficient for minimizing functions on “thin” domains. Other algorithms, like those based on Augmented Lagrangians, deal with thin constraints using penalty-like strategies. When the constraints are computationally inexpensive but highly nonlinear, these methods spend many potentially expensive objective function evaluations motivated by the difficulties in improving feasibility. An algorithm that handles this case efficiently is proposed in this paper. The main iteration is split into two steps: restoration and minimization. In the restoration step, the aim is to decrease infeasibility without evaluating the objective function. In the minimization step, the objective function f is minimized on a relaxed feasible set. A global minimization result is proved, and computational experiments showing the advantages of this approach are presented.
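The two-phase iteration described above can be caricatured on a toy “thin” domain, the unit circle. This is a hypothetical sketch, not the paper's algorithm: restoration projects back onto the constraint without touching f, and minimization then decreases f along the constraint tangent, tolerating a little new infeasibility.

```python
import numpy as np

def restore_minimize(x, steps=60):
    """Minimize f(x) = x[0] on the circle ||x|| = 1 by alternating phases."""
    for _ in range(steps):
        # Restoration: reduce infeasibility only (no f evaluations needed).
        x = x / np.linalg.norm(x)
        # Minimization: step along the constraint tangent to decrease f.
        grad_f = np.array([1.0, 0.0])
        tangent = np.array([-x[1], x[0]])
        x = x - 0.5 * np.dot(grad_f, tangent) * tangent  # leaves circle slightly
    return x / np.linalg.norm(x)

# the minimizer of x[0] on the unit circle is (-1, 0)
x_star = restore_minimize(np.array([0.0, 1.0]))
```

Because the restoration phase never evaluates f, expensive objective evaluations are reserved for steps that actually make optimization progress, which is the efficiency argument in the abstract.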
15.
We describe an O(n⁴·h·min{log U, n² log n}) capacity scaling algorithm for the minimum cost submodular flow problem, where h is the time for computing an exchange capacity. Our algorithm modifies and extends the Edmonds–Karp
capacity scaling algorithm for minimum cost flow to solve the minimum cost submodular flow problem. The modification entails
scaling a relaxation parameter δ. Capacities are relaxed by attaching a complete directed graph with uniform arc capacity
δ in each scaling phase. We then modify a feasible submodular flow by relaxing the submodular constraints, so that complementary
slackness is satisfied. This creates discrepancies between the boundary of the flow and the base polyhedron of a relaxed submodular
function. To reduce these discrepancies, we use a variant of the successive shortest path algorithm that augments flow along
minimum cost paths of residual capacity at least δ. The shortest augmenting path subroutine we use is a variant of Dijkstra’s
algorithm modified to handle exchange capacity arcs efficiently. The result is a weakly polynomial time algorithm whose running
time is better than any existing submodular flow algorithm when U is small and C is big. We also show how to use maximum mean cuts to make the algorithm strongly polynomial. The resulting algorithm is the
first capacity scaling algorithm to match the current best strongly polynomial bound for submodular flow.
Received: August 6, 1999 / Accepted: July 2001 / Published online October 2, 2001
16.
The Cardinality Constrained Circuit Problem (CCCP) is the problem of finding a minimum cost circuit in a graph where the circuit
is constrained to have at most k edges. The CCCP is NP-hard. We present classes of facet-inducing inequalities for the convex hull of feasible circuits, and
a branch-and-cut solution approach using these inequalities.
Received: April 1998 / Accepted: October 2000 / Published online October 26, 2001
17.
Feasible descent algorithms for mixed complementarity problems
In this paper we consider a general algorithmic framework for solving nonlinear mixed complementarity problems. The main features
of this framework are: (a) it is well-defined for an arbitrary mixed complementarity problem, (b) it generates only feasible
iterates, (c) it has a strong global convergence theory, and (d) it is locally fast convergent under standard regularity assumptions.
This framework is applied to the PATH solver in order to show viability of the approach. Numerical results for an appropriate
modification of the PATH solver indicate that this framework leads to substantial computational improvements.
Received April 9, 1998 / Revised version received November 23, 1998 / Published online March 16, 1999
18.
Stephen J. Wright 《Mathematical Programming》2001,90(3):459-473
Techniques for transforming convex quadratic programs (QPs) into monotone linear complementarity problems (LCPs) and vice
versa are well known. We describe a class of LCPs for which a reduced QP formulation – one that has fewer constraints than
the “standard” QP formulation – is available. We mention several instances of this class, including the known case in which
the coefficient matrix in the LCP is symmetric.
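One standard direction of the transformation mentioned above: the KKT conditions of the convex QP min ½xᵀQx + cᵀx subject to Ax ≥ b, x ≥ 0 form the monotone LCP w = Mz + q, w ≥ 0, z ≥ 0, wᵀz = 0, with z = (x, λ). A small sketch on hypothetical data:

```python
import numpy as np

def qp_to_lcp(Q, c, A, b):
    """Monotone LCP (M, q) from the convex QP
    min 0.5 x^T Q x + c^T x  s.t.  A x >= b, x >= 0."""
    m = A.shape[0]
    M = np.block([[Q, -A.T],
                  [A, np.zeros((m, m))]])
    q = np.concatenate([c, -b])
    return M, q

# QP: min x^2 - 6x  s.t.  x >= 2, x >= 0.  Minimizer x* = 3 (constraint slack),
# so the LCP solution is z = (x*, lambda*) = (3, 0).
M, q = qp_to_lcp(Q=np.array([[2.0]]), c=np.array([-6.0]),
                 A=np.array([[1.0]]), b=np.array([2.0]))
z = np.array([3.0, 0.0])
w = M @ z + q          # w = (Qx + c - A^T lam, Ax - b) = (0, 1)
# complementarity holds: w >= 0, z >= 0, w^T z = 0
```

The paper's contribution runs the other way: identifying LCPs for which the recovered QP can be posed with fewer constraints than this standard construction would suggest.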
Received: May 2000 / Accepted: February 22, 2001 / Published online April 12, 2001
19.
M.J.D. Powell 《Mathematical Programming》2000,87(2):281-301
Let the DFP algorithm for unconstrained optimization be applied to an objective function that has continuous second derivatives
and bounded level sets, where each line search finds the first local minimum. It is proved that the calculated gradients are
not bounded away from zero if there are only two variables. The new feature of this work is that there is no need for the
objective function to be convex.
Received: June 16, 1999 / Accepted: December 24, 1999 / Published online March 15, 2000
20.
Self-regular functions and new search directions for linear and semidefinite optimization
In this paper, we introduce the notion of a self-regular function. Such a function is strongly convex and smooth coercive on its domain, the positive real axis. We show that any
such function induces a so-called self-regular proximity function and a corresponding search direction for primal-dual path-following
interior-point methods (IPMs) for solving linear optimization (LO) problems. It is proved that the new large-update IPMs enjoy
a polynomial O(n^((q+1)/(2q))·log(n/ε)) iteration bound, where q ≥ 1 is the so-called barrier degree of the kernel function underlying the algorithm. The constant hidden in the O-symbol depends on q and the growth degree p ≥ 1 of the kernel function. When choosing the kernel function appropriately, the new large-update IPMs have a polynomial O(√n·log n·log(n/ε)) iteration bound, thus improving the currently best known bound for large-update methods by almost a factor of √n. Our unified analysis also provides the O(√n·log(n/ε)) iteration bound, the best known for small-update IPMs. At each iteration, we need to solve only one linear system. An extension
of the above results to semidefinite optimization (SDO) is also presented.
Received: March 2000 / Accepted: December 2001 / Published online April 12, 2002