1.
We propose feasible descent methods for constrained minimization that do not make explicit use of the derivative of the objective
function. The methods iteratively sample the objective function value along a finite set of feasible search arcs and decrease
the sampling stepsize if an improved objective function value is not sampled. The search arcs are obtained by projecting search
direction rays onto the feasible set and the search directions are chosen such that a subset approximately generates the cone
of first-order feasible variations at the current iterate. We show that these methods have desirable convergence properties
under certain regularity assumptions on the constraints. In the case of linear constraints, the projections are redundant
and the regularity assumptions hold automatically. Numerical experience with the methods in the linearly constrained case
is reported.
Received: November 12, 1999 / Accepted: April 6, 2001 / Published online October 26, 2001
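The following is a minimal sketch of the sampling-and-stepsize-reduction idea in the linearly constrained case, specialized here to box constraints so that the projection is trivial; the coordinate direction set, the halving rule, and all parameter values are illustrative choices, not the paper's exact method.

```python
import numpy as np

def direct_search_box(f, x0, lower, upper, step=1.0, tol=1e-8, max_iter=1000):
    """Sample f along coordinate direction rays projected onto the box
    [lower, upper]; halve the stepsize when no sampled point improves."""
    x, n = np.asarray(x0, float), len(x0)
    fx = f(x)
    # Positive and negative coordinate directions generate the feasible variations of a box.
    directions = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for d in directions:
            trial = np.clip(x + step * d, lower, upper)  # projection onto the box
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5  # no improved value sampled: reduce the sampling stepsize
    return x, fx

# Usage: minimize a quadratic over the box [0, 2]^2
x_star, f_star = direct_search_box(lambda x: (x[0] - 1.5)**2 + (x[1] + 1.0)**2,
                                   x0=[0.0, 2.0], lower=np.zeros(2), upper=2*np.ones(2))
```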
2.
Stefan Henn 《BIT Numerical Mathematics》2006,46(2):325-344
A Gauss–Newton like method is considered to obtain a d-dimensional displacement vector field that minimizes a suitable distance measure D between two images. The key to finding a minimizer is to substitute the Hessian of D with the Sobolev H²(Ω)^d norm of the displacement. Since the kernel of the associated semi-norm consists only of the affine linear functions, we can show in this way that the solution of each Newton step is a linear combination of an affine linear transformation and an affine-free nonlinear deformation. Our approach is based on the solution of a sequence of quadratic subproblems with linear constraints. We show that the resulting Karush–Kuhn–Tucker system, with a 3×3 block structure, can be solved uniquely and that the Gauss–Newton like scheme can be separated into two separate iterations. Finally, we report on synthetic as well as on real-life data test runs.
AMS subject classification (2000) 65F20, 68U10
3.
In this work we consider the problem of minimizing a continuously differentiable function over a feasible set defined by box
constraints. We present a decomposition method based on the solution of a sequence of subproblems. In particular, we state
conditions on the rule for selecting the subproblem variables sufficient to ensure the global convergence of the generated
sequence without convexity assumptions. The conditions require selecting suitable variables (related to the violation of the optimality conditions) to guarantee the theoretical convergence properties, while leaving the freedom to select any other group of variables to accelerate convergence.
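A minimal sketch of one plausible selection rule of this kind (not the authors' exact scheme): at each iteration the variable whose projected gradient most violates the box-constrained optimality conditions is selected and updated by a projected-gradient step; the fixed stepsize and the single-variable blocks are illustrative simplifications.

```python
import numpy as np

def block_descent_box(f, grad, x0, lower, upper, alpha=1e-2, tol=1e-6, max_iter=5000):
    """Decomposition method for min f(x) s.t. lower <= x <= upper.
    At each iteration, update the single variable with the largest violation
    of the box-constrained optimality conditions (a Gauss-Southwell-type rule)."""
    x = np.clip(np.asarray(x0, float), lower, upper)
    for _ in range(max_iter):
        g = grad(x)
        # Violation of optimality: the projected gradient for box constraints.
        violation = x - np.clip(x - g, lower, upper)
        i = int(np.argmax(np.abs(violation)))      # most violating variable
        if abs(violation[i]) < tol:
            break
        # Projected-gradient step on the selected variable only.
        x[i] = np.clip(x[i] - alpha * g[i], lower[i], upper[i])
    return x
```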
4.
Based on the authors’ previous work, which established theoretical foundations of two conceptual successive convex relaxation
methods, i.e., the SSDP (Successive Semidefinite Programming) Relaxation Method and the SSILP (Successive Semi-Infinite Linear Programming)
Relaxation Method, this paper proposes their implementable variants for general quadratic optimization problems. These problems
have a linear objective function cᵀx to be maximized over a nonconvex compact feasible region F described by a finite number of quadratic inequalities. We introduce two new techniques, “discretization” and “localization,”
into the SSDP and SSILP Relaxation Methods. The discretization technique makes it possible to approximate an infinite number
of semi-infinite SDPs (or semi-infinite LPs) which appeared at each iteration of the original methods by a finite number of
standard SDPs (or standard LPs) with a finite number of linear inequality constraints. We establish:
• Given any open convex set U containing F, there is an implementable discretization of the SSDP (or SSILP) Relaxation Method which generates a compact convex set C such that F ⊆ C ⊆ U in a finite number of iterations.
The localization technique is for the cases where we are only interested in upper bounds on the optimal objective value (for a fixed objective function vector c) but not in a global approximation of the convex hull of F. This technique allows us to generate a convex relaxation of F that is accurate only in certain directions in a neighborhood of the objective direction c. This cuts off redundant work to make the convex relaxation accurate in unnecessary directions. We establish:
• Given any positive number ε, there is an implementable localization-discretization of the SSDP (or SSILP) Relaxation Method which generates an upper bound of the objective value within ε of its maximum in a finite number of iterations.
Received: June 30, 1998 / Accepted: May 18, 2000 / Published online September 20, 2000
5.
Jinhai Chen 《Computational Optimization and Applications》2008,40(1):97-118
In this paper, inexact Gauss–Newton methods for nonlinear least squares problems are studied. Under the hypothesis that the derivative satisfies certain weak Lipschitz conditions, the local convergence properties of inexact Gauss–Newton and inexact Gauss–Newton like methods for nonlinear least squares problems are established with a modified relative residual control. The obtained results provide an estimate of the convergence ball for inexact Gauss–Newton methods.
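A minimal sketch of a generic inexact Gauss–Newton iteration of this type: the Gauss–Newton normal equations are solved only approximately by an inner conjugate-gradient loop that stops once a relative residual tolerance eta is met; the forcing constant, inner solver, and test problem are illustrative, not taken from the paper.

```python
import numpy as np

def inexact_gauss_newton(residual, jacobian, x0, eta=0.1, tol=1e-10, max_iter=50):
    """Inexact Gauss-Newton for min 0.5*||r(x)||^2.
    Each step solves J^T J d = -J^T r only approximately (here by conjugate
    gradients), stopping when the relative residual drops below eta."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        A, b = J.T @ J, -g
        # Inner conjugate-gradient loop with relative residual control.
        d = np.zeros_like(x)
        res = b - A @ d
        p, rs = res.copy(), res @ res
        while np.sqrt(rs) > eta * np.linalg.norm(b):
            Ap = A @ p
            step = rs / (p @ Ap)
            d += step * p
            res -= step * Ap
            rs_new = res @ res
            p = res + (rs_new / rs) * p
            rs = rs_new
        x = x + d
    return x

# Usage: fit y = exp(a*t) to samples in the least-squares sense
t_data = np.linspace(0, 1, 20)
y_data = np.exp(0.7 * t_data)
r = lambda a: np.exp(a[0] * t_data) - y_data
Jf = lambda a: (t_data * np.exp(a[0] * t_data)).reshape(-1, 1)
a_hat = inexact_gauss_newton(r, Jf, x0=[0.0])
```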
6.
In this paper, an adaptive finite element method is developed for the estimation of a distributed parameter in an elliptic equation. Both upper and lower error bounds are derived and used to improve the accuracy by appropriate mesh refinement. An efficient preconditioned projected gradient algorithm is employed to solve the nonlinear least-squares problem arising in the context of the parameter identification problem. The efficiency of our error estimators is demonstrated by numerical experiments.
7.
Self-regular functions and new search directions for linear and semidefinite optimization
In this paper, we introduce the notion of a self-regular function. Such a function is strongly convex and smooth coercive on its domain, the positive real axis. We show that any
such function induces a so-called self-regular proximity function and a corresponding search direction for primal-dual path-following
interior-point methods (IPMs) for solving linear optimization (LO) problems. It is proved that the new large-update IPMs enjoy
a polynomial O(n^{(q+1)/(2q)}·log(n/ε)) iteration bound, where q≥1 is the so-called barrier degree of the kernel function underlying the algorithm. The constant hidden in the O-symbol depends on q and the growth degree p≥1 of the kernel function. When choosing the kernel function appropriately, the new large-update IPMs have a polynomial O(√n·(log n)·log(n/ε)) iteration bound, thus improving the currently best known bound for large-update methods by almost a factor √n. Our unified analysis also provides the O(√n·log(n/ε)) best known iteration bound of small-update IPMs. At each iteration, we need to solve only one linear system. An extension
of the above results to semidefinite optimization (SDO) is also presented.
Received: March 2000 / Accepted: December 2001 / Published online April 12, 2002
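For orientation, the growth condition that typically defines a self-regular kernel in this line of work is sketched below; it is recalled from memory and stated only approximately, so the constants should be treated as indicative. The abstract's p and q are the growth and barrier degrees appearing in it.

```latex
% Self-regular kernel \psi on (0,\infty): smooth, strictly convex, minimal at t = 1
% (\psi(1) = \psi'(1) = 0), with second derivative sandwiched as
\[
  \nu_1 \left( t^{\,p-1} + t^{-1-q} \right) \;\le\; \psi''(t) \;\le\; \nu_2 \left( t^{\,p-1} + t^{-1-q} \right),
  \qquad t > 0,
\]
% for constants \nu_2 \ge \nu_1 > 0, growth degree p \ge 1, and barrier degree q \ge 1.
```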
8.
This paper investigates quasi-Newton updates for equality-constrained optimization. Using a least-change argument we derive
a class of rank-3 updates to approximations of the one-sided projection of the Hessian of the Lagrangian which keeps the appropriate
part symmetric (and possibly positive definite). By imposing the usual assumptions we are able to prove 1-step superlinear
convergence for one of these updates. Encouraging numerical results and comparisons with other previously analyzed updates
are presented.
Received: May 3, 1999 / Accepted: January 28, 2000 / Published online March 15, 2000
9.
Interior-point methods for nonconvex nonlinear programming: orderings and higher-order methods
The paper extends prior work by the authors on loqo, an interior point algorithm for nonconvex nonlinear programming. The
specific topics covered include primal versus dual orderings and higher order methods, which attempt to use each factorization
of the Hessian matrix more than once to improve computational efficiency. Results show that unlike linear and convex quadratic
programming, higher order corrections to the central trajectory are not useful for nonconvex nonlinear programming, but that
a variant of Mehrotra’s predictor-corrector algorithm can definitely improve performance.
Received: May 3, 1999 / Accepted: January 24, 2000 / Published online March 15, 2000
10.
Failure of global convergence for a class of interior point methods for nonlinear programming
Using a simple analytical example, we demonstrate that a class of interior point methods for general nonlinear programming,
including some current methods, is not globally convergent. It is shown that those algorithms produce limit points that are
neither feasible nor stationary points of some measure of the constraint violation, when applied to a well-posed problem.
Received: December 1999 / Accepted: May 2000 / Published online August 18, 2000
11.
Dirk P. Kroese Sergey Porotsky Reuven Y. Rubinstein 《Methodology and Computing in Applied Probability》2006,8(3):383-407
In recent years, the cross-entropy method has been successfully applied to a wide range of discrete optimization tasks. In
this paper we consider the cross-entropy method in the context of continuous optimization. We demonstrate the effectiveness
of the cross-entropy method for solving difficult continuous multi-extremal optimization problems, including those with non-linear
constraints.
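A minimal sketch of the cross-entropy method for continuous minimization with independent Gaussian sampling; the smoothing constant, elite fraction, stopping rule, and test function are illustrative choices rather than the paper's settings, and the constraint-handling discussed in the paper is omitted here.

```python
import numpy as np

def cross_entropy_minimize(f, mu0, sigma0, n_samples=100, elite_frac=0.1,
                           alpha=0.7, max_iter=200, sigma_tol=1e-6):
    """Cross-entropy method: sample from N(mu, diag(sigma^2)), keep the elite
    samples with the lowest objective values, and refit mu and sigma to them."""
    rng = np.random.default_rng(0)
    mu, sigma = np.asarray(mu0, float), np.asarray(sigma0, float)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(max_iter):
        X = rng.normal(mu, sigma, size=(n_samples, len(mu)))
        scores = np.apply_along_axis(f, 1, X)
        elite = X[np.argsort(scores)[:n_elite]]
        # Smoothed parameter update toward the elite-sample statistics.
        mu = alpha * elite.mean(axis=0) + (1 - alpha) * mu
        sigma = alpha * elite.std(axis=0) + (1 - alpha) * sigma
        if sigma.max() < sigma_tol:
            break
    return mu, f(mu)

# Usage: a multi-extremal test function (Rastrigin) in two dimensions
best_x, best_f = cross_entropy_minimize(
    lambda x: np.sum(x**2 - 10*np.cos(2*np.pi*x) + 10),
    mu0=[3.0, -3.0], sigma0=[2.0, 2.0])
```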
12.
Alexander G. Ramm Alexandra B. Smirnova Angelo Favini 《Annali di Matematica Pura ed Applicata》2003,182(1):37-52
A nonlinear operator equation F(x)=0, F:H→H, in a Hilbert space is considered. Continuous Newton-type procedures, based on the construction of a dynamical system whose trajectory starts at some initial point x₀ and becomes asymptotically close to a solution of F(x)=0 as t→+∞, are discussed. Well-posed and ill-posed problems are investigated.
Received: June 29, 2001; in final form: February 26, 2002 / Published online February 20, 2003
This paper was finished when AGR was visiting the Institute for Theoretical Physics, University of Giessen. The author thanks DAAD for support.
13.
Krzysztof C. Kiwiel 《Mathematical Programming》2001,90(1):1-25
We study a general subgradient projection method for minimizing a quasiconvex objective subject to a convex set constraint
in a Hilbert space. Our setting is very general: the objective is only upper semicontinuous on its domain, which need not
be open, and various subdifferentials may be used. We extend previous results by proving convergence in objective values and
to the generalized solution set for classical stepsizes tₖ→0, ∑tₖ=∞, and weak or strong convergence of the iterates to a solution for {tₖ}∈ℓ²∖ℓ¹ under mild regularity conditions. For bounded constraint sets and suitable stepsizes, the method finds ε-solutions with an efficiency estimate of O(ε⁻²), thus being optimal in the sense of Nemirovskii.
Received: October 4, 1998 / Accepted: July 24, 2000 / Published online January 17, 2001
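A minimal sketch of a subgradient projection method with the classical diminishing stepsizes tₖ→0, ∑tₖ=∞ (here tₖ = c/(k+1)); the projection oracle, subgradient oracle, normalization, and stepsize constant are placeholder choices for illustration only.

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, c=1.0, max_iter=10000):
    """Subgradient projection method: x_{k+1} = P_C(x_k - t_k * g_k),
    with diminishing, non-summable stepsizes t_k = c / (k + 1)."""
    x = project(np.asarray(x0, float))
    for k in range(max_iter):
        g = subgrad(x)
        gnorm = np.linalg.norm(g)
        if gnorm == 0:
            break
        t_k = c / (k + 1)
        x = project(x - t_k * g / gnorm)  # normalized subgradient step, then project
    return x

# Usage: minimize |x1| + |x2| over the unit ball
x_best = projected_subgradient(
    subgrad=lambda x: np.sign(x),
    project=lambda x: x / max(1.0, np.linalg.norm(x)),
    x0=[0.9, -0.4])
```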
14.
In this paper, we study a gradient-based continuous method for large-scale optimization problems. By converting the optimization problem into an ODE, we are able to show that the solution trajectory of this ODE tends to the set of stationary points of the original optimization problem. We test our continuous method on large-scale problems available in the literature. The simulation results are very attractive. This research was supported in part by Grants FRG/99-00/II-23 and FRG/00-0l/II-63 of Hong Kong Baptist University and the Research Grant Council of Hong Kong.
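A minimal sketch of the gradient-flow idea behind such continuous methods, assuming an unconstrained problem: min f(x) is recast as the ODE dx/dt = -∇f(x) and integrated numerically, so the trajectory approaches stationary points as t grows; the integrator, tolerances, and test function are illustrative, not the authors' formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def gradient_flow(grad, x0, t_final=50.0):
    """Integrate the gradient flow dx/dt = -grad f(x); as t grows, the
    trajectory approaches the set of stationary points of f."""
    sol = solve_ivp(lambda t, x: -grad(x), (0.0, t_final), np.asarray(x0, float),
                    rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]  # state at the final time

# Usage: stationary point of the Rosenbrock function
grad_rosen = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                                 200*(x[1] - x[0]**2)])
x_stat = gradient_flow(grad_rosen, x0=[-1.2, 1.0])
```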
15.
Xiaotie Deng Toshihide Ibaraki Hiroshi Nagamochi Wenan Zang 《Mathematical Programming》2000,87(3):441-452
Combinatorial optimization games deal with cooperative games for which the value of every subset of players is obtained by
solving a combinatorial optimization problem on the resources collectively owned by this subset. A solution of the game is
in the core if no subset of players is able to gain advantage by breaking away from this collective decision of all players.
The game is totally balanced if and only if the core is non-empty for every induced subgame of it. We study the total balancedness
of several combinatorial optimization games in this paper. For a class of the partition game [5], we have a complete characterization
for the total balancedness. For the packing and covering games [3], we completely clarify the relationship between the related
primal/dual linear programs for the corresponding games to be totally balanced. Our work opens up the question of fully characterizing
the combinatorial structures of totally balanced packing and covering games, for which we present some interesting examples:
the totally balanced matching, vertex cover, and minimum coloring games.
Received: November 5, 1998 / Accepted: September 8, 1999 / Published online February 23, 2000
16.
Piecewise affine functions arise from Lagrangian duals of integer programming problems, and optimizing them provides good
bounds for use in a branch and bound method. Methods such as the subgradient method and bundle methods assume only one subgradient
is available at each point, but in many situations there is more information available. We present a new method for optimizing
such functions, which is related to steepest descent, but uses an outer approximation to the subdifferential to avoid some
of the numerical problems with the steepest descent approach. We provide convergence results for a class of outer approximations,
and then develop a practical algorithm using such an approximation for the compact dual to the linear programming relaxation
of the uncapacitated facility location problem. We make a numerical comparison of our outer approximation method with the
projection method of Conn and Cornuéjols, and the bundle method of Schramm and Zowe.
Received: September 10, 1998 / Revised version received: August 1999 / Published online December 15, 1999
17.
The local quadratic convergence of the Gauss-Newton method for convex composite optimization f=h∘F is established for any convex function h whose set of minima is C, extending Burke and Ferris’ results for the case when C is a set of weak sharp minima for h.
Received: July 24, 1998 / Accepted: November 29, 2000 / Published online September 3, 2001
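For orientation, a standard way to write the Gauss-Newton iteration for the convex composite problem min h(F(x)), as commonly stated in this literature; stepsize and trust-region safeguards used in practice are omitted.

```latex
% Gauss--Newton step for convex composite optimization  min_x f(x) = h(F(x))
\[
  d_k \in \operatorname*{argmin}_{d}\; h\bigl(F(x_k) + F'(x_k)\,d\bigr),
  \qquad x_{k+1} = x_k + d_k,
\]
% each subproblem is convex because h is convex and the inner map is affine in d.
```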
18.
This paper introduces and analyses a new algorithm for minimizing a convex function subject to a finite number of convex inequality
constraints. It is assumed that the Lagrangian of the problem is strongly convex. The algorithm combines interior point methods
for dealing with the inequality constraints and quasi-Newton techniques for accelerating the convergence. Feasibility of the
iterates is progressively enforced thanks to shift variables and an exact penalty approach. Global and q-superlinear convergence is obtained for a fixed penalty parameter; global convergence to the analytic center of the optimal
set is ensured when the barrier parameter tends to zero, provided strict complementarity holds.
Received: December 21, 2000 / Accepted: July 13, 2001 / Published online February 14, 2002
19.
Portfolio optimization problem under concave transaction costs and minimal transaction unit constraints
We propose a branch and bound algorithm for calculating a globally optimal solution of a portfolio construction/rebalancing problem under concave transaction costs and minimal transaction unit constraints. We employ the absolute deviation of the rate of return of the portfolio as the measure of risk and solve linear programming subproblems by introducing (piecewise) linear underestimating functions for the concave transaction cost functions. A series of numerical experiments shows that the algorithm can solve problems of practical size in an efficient manner.
Received: July 15, 1999 / Accepted: October 1, 2000 / Published online December 15, 2000
20.
Robust Optimization (RO) is a modeling methodology, combined with computational tools, for processing optimization problems in which the data are uncertain and only known to belong to some uncertainty set. The paper surveys the main results of RO
as applied to uncertain linear, conic quadratic and semidefinite programming. For these cases, computationally tractable robust
counterparts of uncertain problems are explicitly obtained, or good approximations of these counterparts are proposed, making
RO a useful tool for real-world applications. We discuss some of these applications, specifically: antenna design, truss topology
design and stability analysis/synthesis in uncertain dynamic systems. We also describe a case study of 90 LPs from the NETLIB
collection. The study reveals that the feasibility properties of the usual solutions of real world LPs can be severely affected
by small perturbations of the data and that the RO methodology can be successfully used to overcome this phenomenon.
Received: May 24, 2000 / Accepted: September 12, 2001 / Published online February 14, 2002
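As a concrete instance of a computationally tractable robust counterpart, the standard ellipsoidal-uncertainty example from this line of work is sketched below; the symbols are generic and not tied to the paper's notation. An uncertain linear constraint whose coefficient vector varies in an ellipsoid is equivalent to a single conic quadratic constraint.

```latex
% Uncertain linear constraint with ellipsoidal uncertainty:
%   a^T x \le b  for all  a \in \{\bar a + P u : \|u\|_2 \le 1\}
% Robust counterpart (a conic quadratic constraint):
\[
  \bar a^{\mathsf T} x \;+\; \bigl\| P^{\mathsf T} x \bigr\|_2 \;\le\; b .
\]
```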