1.
Kazuhiro Kobayashi Sunyoung Kim Masakazu Kojima 《Applied Mathematics and Optimization》2008,58(1):69-88
Exploiting sparsity has been a key issue in solving large-scale optimization problems. The most time-consuming part of primal-dual
interior-point methods for linear programs, second-order cone programs, and semidefinite programs is solving the Schur complement
equation at each iteration, usually by the Cholesky factorization. The computational efficiency is greatly affected by the
sparsity of the coefficient matrix of the equation, which is determined by the sparsity of the optimization problem (linear
program, semidefinite program, or second-order cone program). We show that if an optimization problem is correlatively sparse, then the coefficient matrix of the Schur complement equation inherits the sparsity, and a sparse Cholesky factorization
applied to the matrix results in no fill-in.
S. Kim’s research was supported by KOSEF R01-2005-000-10271-0 and KRF-2006-312-C00062.
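As a toy illustration of the fill-in notion this abstract turns on, the following NumPy sketch counts Cholesky fill-in on two small patterns; the helper and the matrices are invented for illustration and are unrelated to the paper's correlative-sparsity machinery.

```python
import numpy as np

def cholesky_fill_in(A, tol=1e-12):
    """Count entries that are (numerically) nonzero in the Cholesky
    factor L but zero in the lower triangle of A ("fill-in")."""
    L = np.linalg.cholesky(A)
    orig = np.abs(np.tril(A)) > tol
    fact = np.abs(L) > tol
    return int(np.sum(fact & ~orig))

# Tridiagonal (banded) SPD matrix: its factorization creates no fill-in.
n = 6
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
print(cholesky_fill_in(T))   # 0

# "Arrow" matrix with the dense row/column ordered first: heavy fill-in.
A = np.eye(n)
A[:, 0] = 0.3
A[0, :] = 0.3
A[0, 0] = n  # keep it positive definite
print(cholesky_fill_in(A))
```

Reordering the arrow matrix so the dense row/column comes last would remove the fill-in entirely, which is the kind of structural statement the abstract makes for correlatively sparse problems.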
2.
3.
Large scale optimization problems often require an approximation to the Hessian matrix. If the Hessian matrix is sparse then
estimation by differences of gradients is attractive because the number of required differences is usually small compared
to the dimension of the problem. The problem of estimating Hessian matrices by differences can be phrased as follows: Given
the sparsity structure of a symmetric matrix A, obtain vectors d_1, d_2, …, d_p such that Ad_1, Ad_2, …, Ad_p determine A uniquely with p as small as possible. We approach this problem from a graph-theoretic point of view and show that both direct and indirect
approaches to this problem have a natural graph coloring interpretation. The complexity of the problem is analyzed and efficient
practical heuristic procedures are developed. Numerical results illustrate the differences between the various approaches.
Work supported in part by the Applied Mathematical Sciences Research Program (KC-04-02) of the Office of Energy Research of
the U.S. Department of Energy under Contract W-31-109-Eng-38.
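The direct approach sketched in this abstract can be caricatured by greedily grouping structurally orthogonal columns (the classic Curtis-Powell-Reid idea), so that one gradient difference per group recovers all nonzeros in that group. The sketch below ignores the extra savings from symmetry that the paper's coloring methods exploit; all names are illustrative.

```python
import numpy as np

def greedy_column_groups(pattern):
    """Greedily partition the columns of a 0/1 sparsity pattern into groups
    of structurally orthogonal columns (no two columns in a group share a
    nonzero row).  Each group then costs one difference of gradients."""
    n = pattern.shape[1]
    groups = []          # list of (column list, union-of-rows mask)
    for j in range(n):
        rows = pattern[:, j].astype(bool)
        for cols, used in groups:
            if not (used & rows).any():   # no shared rows: same group
                cols.append(j)
                used |= rows
                break
        else:
            groups.append(([j], rows.copy()))
    return [cols for cols, _ in groups]

# Tridiagonal pattern: 3 differences suffice instead of n = 6.
n = 6
P = (np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 1)
groups = greedy_column_groups(P.astype(int))
print(groups)   # three groups of mutually orthogonal columns
```

For banded patterns the number of groups stays bounded by the bandwidth, which is why the number of required differences is usually small compared to the dimension.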
4.
A Modified Relaxation Scheme for Mathematical Programs with Complementarity Constraints
In this paper, we consider a mathematical program with complementarity constraints. We present a modified relaxed program
for this problem, which involves fewer constraints than the relaxation scheme studied by Scholtes (2000). We show that the
linear independence constraint qualification holds for the new relaxed problem under some mild conditions. We also consider
the limiting behavior of the relaxed problem. We prove that any accumulation point of stationary points of the relaxed problems
is C-stationary to the original problem under the MPEC linear independence constraint qualification and, if the Hessian matrices
of the Lagrangian functions of the relaxed problems are uniformly bounded below on the corresponding tangent space, it is
M-stationary. We also obtain some sufficient conditions of B-stationarity for a feasible point of the original problem. In
particular, some conditions described by the eigenvalues of the Hessian matrices mentioned above are new and can be verified
easily.
This work was supported in part by the Scientific Research Grant-in-Aid from the Ministry of Education, Science, Sports, and
Culture of Japan. The authors are grateful to an anonymous referee for critical comments.
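A hypothetical miniature of the relaxation idea (the plain Scholtes-style scheme, not the paper's modified one): the complementarity condition x·y = 0, x, y ≥ 0 is replaced by x·y ≤ t, and the relaxed problems are solved for decreasing t, warm-starting each from the previous solution. The objective below is made up for illustration and SciPy's SLSQP is assumed available.

```python
import numpy as np
from scipy.optimize import minimize

def solve_relaxed(t, x0):
    """Relax  min (x-1)^2 + (y-1)^2  s.t.  x, y >= 0,  x*y = 0
    by replacing the complementarity condition with x*y <= t."""
    f = lambda z: (z[0] - 1.0) ** 2 + (z[1] - 1.0) ** 2
    cons = [{"type": "ineq", "fun": lambda z: t - z[0] * z[1]}]
    bnds = [(0.0, None), (0.0, None)]
    return minimize(f, x0, bounds=bnds, constraints=cons,
                    method="SLSQP").x

z = np.array([1.0, 0.3])           # asymmetric start selects one branch
for t in (0.1, 0.01, 1e-4):
    z = solve_relaxed(t, z)        # warm-start along the relaxation path
print(np.round(z, 3))
```

As t shrinks, the iterates trace x + y = 1, x·y = t and accumulate at (1, 0), a stationary point of the original MPCC, illustrating the limiting behavior the abstract analyzes.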
5.
6.
Lagrangian relaxation is a popular technique to solve difficult optimization problems. However, the applicability of this
technique depends on having a relatively low number of hard constraints to dualize. When there are many hard constraints,
it may be preferable to relax them dynamically, according to some rule depending on which multipliers are active. From the
dual point of view, this approach yields multipliers with varying dimensions and a dual objective function that changes along
iterations. We discuss how to apply a bundle methodology to solve this kind of dual problem. Our framework covers many separation
procedures to generate inequalities that can be found in the literature, including (but not limited to) the most violated
inequality. We analyze the resulting dynamic bundle method, establishing its primal-dual convergence properties,
and, under suitable conditions, show finite termination for polyhedral problems.
Claudia Sagastizábal is on leave from INRIA Rocquencourt, France.
Research supported by CNPq Grant No. 303540-03/6.
7.
Mukund N. Thapa 《Mathematical Programming》1984,29(2):156-186
Newton-type methods for unconstrained optimization problems have been very successful when coupled with a modified Cholesky
factorization to take into account the possible lack of positive-definiteness in the Hessian matrix. In this paper we discuss
the application of these methods to large problems that have a sparse Hessian matrix whose sparsity pattern is known a priori.
Quite often it is difficult, if not impossible, to obtain an analytic representation of the Hessian matrix. Determining the
Hessian matrix by the standard method of finite-differences is costly in terms of gradient evaluations for large problems.
Automatic procedures that reduce the number of gradient evaluations by exploiting sparsity are examined and a new procedure
is suggested.
Once a sparse approximation to the Hessian matrix has been obtained, there still remains the problem of solving a sparse linear
system of equations at each iteration. A modified Cholesky factorization can be used. However, many additional nonzeros (fill-in)
may be created in the factors, and storage problems may arise. One way of approaching this problem is to ignore fill-in in
a systematic manner. Such techniques are called partial factorization schemes. Various existing partial factorization schemes are analyzed and three new ones are developed.
The above algorithms were tested on a set of problems. The overall conclusion was that these methods perform well in practice.
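As a hedged sketch of the modified-Cholesky ingredient (a cruder diagonal-shift variant in the flavor of Nocedal and Wright's "Cholesky with added multiple of the identity", not the row-wise modification or the partial factorization schemes the paper studies):

```python
import numpy as np

def modified_cholesky(H, beta=1e-3, max_tries=60):
    """If H is not positive definite, factorize H + tau*I instead,
    increasing tau geometrically until the factorization succeeds."""
    tau = 0.0 if np.all(np.diag(H) > 0) else beta - float(np.min(np.diag(H)))
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(H + tau * np.eye(H.shape[0]))
            return L, tau
        except np.linalg.LinAlgError:   # raised when not positive definite
            tau = max(2.0 * tau, beta)
    raise RuntimeError("could not make H positive definite")

H = np.array([[1.0, 2.0],
              [2.0, 1.0]])             # indefinite: eigenvalues 3 and -1
L, tau = modified_cholesky(H)
print(tau, np.allclose(L @ L.T, H + tau * np.eye(2)))
```

The returned factor defines a positive definite model Hessian H + τI, which keeps Newton-type directions descent directions even when the true Hessian is indefinite.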
8.
The paper describes new conjugate gradient algorithms for large scale nonconvex problems with box constraints. In order to
speed up convergence the algorithms employ scaling matrices which transform the space of original variables into the space
in which Hessian matrices of the problem’s functionals have more clustered eigenvalues. This is done by applying limited memory
BFGS updating matrices. Once the scaling matrix is calculated, the next few conjugate gradient iterations are performed in
the transformed space. The box constraints are treated efficiently by the projection. We also present a limited memory quasi-Newton
method which is a special version of our general algorithm. The presented algorithms have strong global convergence properties,
in particular, they identify the constraints active at a solution in a finite number of iterations. We believe that they are competitive
with the L-BFGS-B method and present some numerical results which support our claim.
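The limited-memory BFGS ingredient mentioned above is conventionally applied through the standard two-loop recursion; a minimal sketch follows. The paper's algorithm additionally runs conjugate-gradient iterations in the rescaled space and handles the box constraints by projection, none of which is shown here; the quadratic test data is invented.

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """Apply the L-BFGS inverse-Hessian approximation to gradient g via
    the two-loop recursion and return the search direction -H_k g."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):   # newest first
        rho = 1.0 / float(y @ s)
        a = rho * float(s @ q)
        q -= a * y
        alphas.append((a, rho, s, y))
    if s_hist:                         # initial scaling gamma = s'y / y'y
        s, y = s_hist[-1], y_hist[-1]
        q *= float(s @ y) / float(y @ y)
    for a, rho, s, y in reversed(alphas):                  # oldest first
        b = rho * float(y @ q)
        q += (a - b) * s
    return -q

# Curvature pairs from a toy quadratic f(x) = 0.5 x'Ax, so y = A s.
A = np.diag([1.0, 10.0])
xs = [np.array([1.0, 1.0]), np.array([0.5, 0.2]), np.array([0.1, 0.05])]
s_hist = [xs[i + 1] - xs[i] for i in range(2)]
y_hist = [A @ s for s in s_hist]
d = lbfgs_direction(A @ xs[-1], s_hist, y_hist)
print(np.round(d, 6))   # a descent direction, since each pair has s'y > 0
```

Because every stored pair satisfies s'y > 0, the implicit matrix H_k is positive definite, so d'g < 0 whenever g ≠ 0.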
9.
Hong Xia YIN Dong Lei DU 《Acta Mathematica Sinica (English Series)》2007,23(7):1233-1240
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with a nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under conditions weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
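One of the two nonmonotone line-search forms mentioned (the max-type, Grippo-Lampariello-Lucidi flavor) can be sketched as follows; the averaging form and the self-scaling BFGS update itself are omitted, and the test function is invented.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, hist, M=5, sigma=1e-4, step=1.0):
    """Nonmonotone Armijo backtracking: accept a step length a once
    f(x + a d) <= max(last M f-values) + sigma * a * g'd.
    The caller appends each accepted f-value to hist."""
    fmax = max(hist[-M:])
    slope = float(g @ d)          # must be a descent direction: slope < 0
    a = step
    while f(x + a * d) > fmax + sigma * a * slope:
        a *= 0.5
    return a

# Toy use on f(x) = ||x||^2 with a steepest-descent direction.
f = lambda x: float(x @ x)
x = np.array([3.0, -4.0])
g = 2.0 * x
d = -g
hist = [f(x)]
a = nonmonotone_armijo(f, x, d, g, hist)
print(a, f(x + a * d) < f(x))
```

Allowing the comparison against the maximum of the last M values, rather than the current value alone, is what lets the iterates climb occasionally, which is often helpful on nonconvex problems.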
10.
A Nash-based collusive game among a finite set of players is one in which the players coordinate in order for each to gain
higher payoffs than those prescribed by the Nash equilibrium solution. In this paper, we study the optimization problem of
such a collusive game in which the players collectively maximize the Nash bargaining objective subject to a set of incentive
compatibility constraints. We present a smooth reformulation of this optimization problem in terms of a nonlinear complementarity
problem. We establish the convexity of the optimization problem in the case where each player's strategy set is unidimensional.
In the multivariate case, we propose upper and lower bounding procedures for the collusive optimization problem and establish
convergence properties of these procedures. Computational results with these procedures for solving some test problems are
reported.
It is with great honor that we dedicate this paper to Professor Terry Rockafellar on the occasion of his 70th birthday. Our
work provides another example showing how Terry's fundamental contributions to convex and variational analysis have impacted
the computational solution of applied game problems.
This author's research was partially supported by the National Science Foundation under grant ECS-0080577.
This author's research was partially supported by the National Science Foundation under grant CCR-0098013.
11.
In this paper, an unconstrained optimization method using a nonmonotone second-order Goldstein line search is proposed.
By using negative curvature information from the Hessian, the generated sequence is shown to converge to a stationary
point satisfying the second-order optimality conditions. Numerical tests on a set of standard test problems confirm the efficiency
of our new method.
This work was supported by the National Natural Science Foundation of China (Grant No. 10231060) and the Specialized Research
Fund of Doctoral Program of Higher Education of China (Grant No. 20040319003).
12.
In this paper, we consider a class of two-stage stochastic risk management problems, which may be stated as follows. A decision-maker
determines a set of binary first-stage decisions, after which a random event from a finite set of possible outcomes is realized.
Depending on the realization of this outcome, a set of continuous second-stage decisions must then be made that attempt to
minimize some risk function. We consider a hierarchy of multiple risk levels along with associated penalties for each possible
scenario. The overall objective function thus depends on the cost of the first-stage decisions, plus the expected second-stage
risk penalties. We develop a mixed-integer 0–1 programming model and adopt an automatic convexification procedure using the
reformulation–linearization technique to recast the problem into a form that is amenable to applying Benders’ partitioning
approach. As a principal computational expedient, we show how the reformulated higher-dimensional Benders’ subproblems can
be efficiently solved via certain reduced-sized linear programs in the original variable space. In addition, we explore several
key ingredients in our proposed procedure to enhance the tightness of the prescribed Benders’ cuts and the efficiency with
which they are generated. Finally, we demonstrate the computational efficacy of our approaches on a set of realistic test
problems.
Dr. H. D. Sherali acknowledges the support of the National Science Foundation under Grant No. DMI-0552676. Dr. J. C. Smith acknowledges the support of the Air Force Office of Scientific Research under Grant No. AFOSR/MURI F49620-03-1-0477.
13.
B. K. Pagnoncelli S. Ahmed A. Shapiro 《Journal of Optimization Theory and Applications》2009,142(2):399-416
We study sample approximations of chance constrained problems. In particular, we consider the sample average approximation (SAA) approach and discuss the convergence properties of the resulting problem. We discuss how one can use the SAA method
to obtain good candidate solutions for chance constrained problems. Numerical experiments are performed to correctly tune
the parameters involved in the SAA. In addition, we present a method for constructing statistical lower bounds for the optimal
value of the considered problem and discuss how one should tune the underlying parameters. We apply the SAA to two chance
constrained problems. The first is a linear portfolio selection problem with returns following a multivariate lognormal distribution.
The second is a joint chance constrained version of a simple blending problem.
B.K. Pagnoncelli’s research was supported by CAPES and FUNENSEG.
S. Ahmed’s research was partly supported by the NSF Award DMI-0133943.
A. Shapiro’s research was partly supported by the NSF Award DMI-0619977.
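A deliberately tiny instance of the SAA idea, where the chance-constrained problem reduces to an order statistic; the paper's portfolio and blending applications require genuine optimization runs, and everything below (distribution, sample size, helper) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def saa_chance_min(samples, alpha):
    """Sample average approximation of  min t  s.t.  P(xi <= t) >= 1 - alpha:
    the true chance constraint is replaced by its empirical version over the
    N samples, which for this toy problem is just an order statistic."""
    xi = np.sort(samples)
    k = int(np.ceil((1.0 - alpha) * len(xi))) - 1
    return xi[k]

# Lognormal scenarios, echoing the lognormal-returns flavor of the paper's
# first application (parameters chosen arbitrarily here).
xi = rng.lognormal(mean=0.0, sigma=0.5, size=2000)
t_hat = saa_chance_min(xi, alpha=0.05)
coverage = float(np.mean(xi <= t_hat))   # in-sample coverage of the solution
print(round(float(t_hat), 3), coverage)
```

In-sample coverage is optimistic by construction; tuning the sample size and the in-sample risk level, and validating on fresh samples, is exactly the parameter-tuning issue the abstract discusses.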
14.
Stephen J. Wright 《Mathematical Programming》2001,90(1):71-100
In the Newton/log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter
until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated.
A naive analysis indicates that Newton’s method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood, namely within O(μ^2) of the minimizer, where μ is the barrier parameter. By analyzing the structure of the barrier Hessian and gradient in terms
of the log-barrier function until it reaches a very small neighborhood, namely within O(μ2) of the minimizer, where μ is the barrier parameter. By analyzing the structure of the barrier Hessian and gradient in terms
of the subspace of active constraint gradients and the associated null space, we show that this neighborhood is in fact much
larger –O(μσ) for any σ∈(1,2] – thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that
the overall convergence rate of the Newton/log-barrier algorithm is superlinear in the number of function/derivative evaluations,
provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier
parameter is related in a certain way to the step length and convergence criteria for each Newton process.
Received: October 10, 1997 / Accepted: September 10, 2000 / Published online February 22, 2001
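A one-dimensional caricature of the Newton/log-barrier inner iteration analyzed above, on min x subject to x ≥ 1, whose barrier minimizer is exactly x = 1 + μ; the starting point is placed inside the neighborhood where Newton converges quickly. This is a toy example, not the paper's setting.

```python
import numpy as np

def newton_log_barrier(mu, x0, tol=1e-10, max_iter=50):
    """Newton's method on the 1-D log-barrier function
    phi(x) = x - mu * log(x - 1), whose minimizer is x = 1 + mu."""
    x = x0
    for k in range(max_iter):
        grad = 1.0 - mu / (x - 1.0)
        hess = mu / (x - 1.0) ** 2
        x = x - grad / hess
        if abs(grad) < tol:
            return x, k + 1
    return x, max_iter

for mu in (1.0, 0.1, 0.01):
    # start at x = 1 + 1.5*mu, i.e. within O(mu) of the minimizer
    x, iters = newton_log_barrier(mu, x0=1.0 + 1.5 * mu)
    print(mu, round(x - (1.0 + mu), 12), iters)
```

For this toy barrier the Newton error contracts quadratically from any start in (1, 1 + 2μ), so the iteration count stays flat as μ shrinks, which is the qualitative behavior the paper explains for the general case.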
15.
A generic framework is postulated for utilizing the computational resources provided by a metacomputer to concurrently solve
a large number of optimization problems generated by a modeling language. An example of the framework using the Condor resource
manager and the AMPL and GAMS modeling languages is provided. A mixed integer programming formulation of a feature selection
problem from machine learning is used to test the mechanism developed. Due to this application’s computational requirements,
the ability to perform optimizations in parallel is necessary in order to obtain results within a reasonable amount of time.
Details about the simple, easy-to-use tool and its implementation are presented so that other modelers with applications generating
many independent mathematical programs can take advantage of it to significantly reduce solution times.
Received: October 28, 1998 / Accepted: December 01, 1999 / Published online June 8, 2000
16.
Many algorithms for linearly constrained optimization problems proceed by solving a sequence of subproblems. In these subproblems,
the number of variables is implicitly reduced by using the linear constraints to express certain ‘basic’ variables in terms
of other variables. Difficulties may arise, however, if degeneracy is present; that is, if one or more basic variables are
at lower or upper bounds. In this situation, arbitrarily small movements along a feasible search direction in the reduced
problem may result in infeasibilities for basic variables in the original problem. For such cases, the search direction is
typically discarded, a new reduced problem is formed and a new search direction is computed. Such a process may be extremely
costly, particularly in large-scale optimization where degeneracy is likely and good search directions can be expensive to
compute. This paper is concerned with a practical method for ensuring that directions that are computed in the reduced space
are actually feasible in the original problem. It is based on a generalization of the ‘maximal basis’ result first introduced
by Dembo and Klincewicz for large nonlinear network optimization problems.
Research supported in part by NSF Grant ECS-8119513 and DOT Research Grant CT-06-0011.
17.
Determining whether a solution is of high quality (optimal or near optimal) is fundamental in optimization theory and algorithms.
In this paper, we develop Monte Carlo sampling-based procedures for assessing solution quality in stochastic programs. Quality
is defined via the optimality gap and our procedures' output is a confidence interval on this gap. We review a multiple-replications
procedure that requires solution of, say, 30 optimization problems and then, we present a result that justifies a computationally
simplified single-replication procedure that requires solving only one optimization problem. Even though the single-replication
procedure is computationally significantly less demanding, the resulting confidence interval might have low coverage probability
for small sample sizes for some problems. We provide variants of this procedure that require two replications instead of one
and that perform better empirically. We present computational results for a newsvendor problem and for two-stage stochastic
linear programs from the literature. We also discuss when the procedures perform well and when they fail, and we propose using
ɛ-optimal solutions to strengthen the performance of our procedures.
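The single-replication idea can be mimicked on a toy stochastic program whose SAA is solvable in closed form; the problem, the constant, and the helper below are illustrative, and the paper's actual procedures include more careful variants of this bound.

```python
import numpy as np

rng = np.random.default_rng(1)

def single_rep_gap_bound(xhat, xi, alpha=0.05):
    """Single-replication optimality-gap bound for the toy problem
    min_x E[(x - xi)^2]: solve one SAA (its minimizer is the sample mean),
    then bound the gap of candidate xhat by the mean of the per-scenario
    gap observations plus a normal-quantile sampling-error term."""
    xN = xi.mean()                          # SAA optimal solution
    D = (xhat - xi) ** 2 - (xN - xi) ** 2   # per-scenario gap observations
    z = 1.6448536269514722                  # standard normal 0.95 quantile
    return D.mean() + z * D.std(ddof=1) / np.sqrt(len(xi))

xi = rng.normal(size=5000)
ub = single_rep_gap_bound(xhat=0.2, xi=xi)
true_gap = 0.2 ** 2        # E[(0.2 - xi)^2] - min_x E[(x - xi)^2] = 0.04
print(round(float(ub), 4))
```

Only one optimization problem (here, computing a sample mean) is solved, in contrast to the roughly 30 required by the multiple-replications procedure; the price is the coverage fragility for small samples that the abstract mentions.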
18.
Mukund N. Thapa 《Mathematical Programming》1983,25(2):158-182
Newton-type methods and quasi-Newton methods have proven to be very successful in solving dense unconstrained optimization
problems. Recently there has been considerable interest in extending these methods to solving large problems when the Hessian
matrix has a known a priori sparsity pattern. This paper treats sparse quasi-Newton methods in a uniform fashion and shows
the effect of loss of positive-definiteness in generating updates. These sparse quasi-Newton methods coupled with a modified
Cholesky factorization to take into account the loss of positive-definiteness when solving the linear systems associated with
these methods were tested on a large set of problems. The overall conclusion is that these methods perform poorly in general: the
Hessian matrix becomes indefinite even close to the solution, and superlinear convergence is not observed in practice.
Research for this paper was performed at the Department of Operations Research, Stanford, CA 94305. The research was partially
supported by Department of Energy Contract AM03-76SF00326, PA# DE-AT03-76ER72018; Office of Naval Research Contract N00014-75-C-0267;
and National Science Foundation Grants MCS-7681259, MCS-7926009, and ECS-8012974.
19.
In this paper we develop a method for classifying an unknown data vector as belonging to one of several classes. This method
is based on the statistical methods of maximum likelihood and borrowed strength estimation. We develop an MPEC procedure (for
Mathematical Program with Equilibrium Constraints) for the classification of a multi-dimensional observation, using a finite
set of observed training data as the inputs to a bilevel optimization problem. We present a penalty interior point method
for solving the resulting MPEC and report numerical results for a multispectral minefield classification application. Related
approaches based on conventional maximum likelihood estimation and a bivariate normal mixture model, as well as alternative
surrogate classification objective functions, are described.
Received: October 26, 1998 / Accepted: June 11, 2001 / Published online March 24, 2003
The authors of this work were all partially supported by the Wright Patterson Air Force Base via Veda Contract F33615-94-D-1400. The first and third authors were also supported by the National Science Foundation under grant DMS-9705220.
The work of this author was based on research supported by the U.S. National Science Foundation under grant CCR-9624018.
The work of this author was supported by the Office of Naval Research under grant N00014-95-1-0777.
20.
Extending Scope of Robust Optimization: Comprehensive Robust Counterparts of Uncertain Problems
In this paper, we propose a new methodology for handling optimization problems with uncertain data. With the usual Robust
Optimization paradigm, one looks for the decisions ensuring a required performance for all realizations of the data from a
given bounded uncertainty set, whereas with the proposed approach we also require a controlled deterioration in performance
when the data is outside the uncertainty set.
The extension of Robust Optimization methodology developed in this paper opens up new possibilities to solve efficiently multi-stage
finite-horizon uncertain optimization problems, in particular, to analyze and to synthesize linear controllers for discrete
time dynamical systems.
Research was supported by the Binational Science Foundation grant #2002038.