20 similar documents found; search time: 31 ms
1.
A superlinearly and quadratically convergent SQP type feasible method for constrained optimization (cited by 6; 0 self-citations, 6 by others)
Jian Jinbao, Zhang Kecun, Xue Shengjia, Applied Mathematics-A Journal of Chinese Universities (English Series), 2000, 15(3): 319-331
A new SQP-type feasible method for inequality constrained optimization is presented. It combines a master algorithm with an auxiliary algorithm that is invoked in only finitely many iterations. The master algorithm generates its directions by solving a single quadratic program, and its step size is always one; the directions of the auxiliary algorithm are new “second-order” feasible descent directions. Under suitable assumptions, the algorithm is proved to possess global and strong convergence as well as superlinear and quadratic convergence.
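The computational core of SQP-type methods like this one is a quadratic programming subproblem per iteration. As a generic illustration (not the paper's algorithm), the sketch below computes an SQP search direction in the simplified case where only linearized equality/active constraints appear, so the QP reduces to a single KKT linear system; all matrices and names are illustrative.

```python
import numpy as np

def qp_direction(H, g, A, c):
    """Search direction for an SQP step: minimize 0.5 d'Hd + g'd subject to
    the linearized constraints A d + c = 0, solved via the KKT system
    [H A'; A 0] [d; lam] = [-g; -c]."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    return sol[:n], sol[n:]          # direction d, multiplier estimate lam

# Toy data: H positive definite, one linearized constraint.
H = np.array([[2.0, 0.0], [0.0, 2.0]])
g = np.array([1.0, 1.0])
A = np.array([[1.0, -1.0]])
c = np.array([0.5])
d, lam = qp_direction(H, g, A, c)
```

The returned pair satisfies the QP optimality conditions exactly, which is what a line search or unit step would then exploit.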
2.
Reverse engineering of program code is the process of constructing a higher level abstraction of an implementation in order
to facilitate the understanding of a system that may be in a “legacy” or “geriatric” state. Changing architectures and improvements
in programming methods, including formal methods in software development and object-oriented programming, have prompted a
need to reverse engineer and re-engineer program code. This paper describes the application of the strongest postcondition
predicate transformer (sp) as the formal basis for the reverse engineering of imperative program code.
This work is supported in part by the National Science Foundation grants CCR-9407318, CCR-9209873, and CDA-9312389.
This author is supported in part by a NASA Graduate Student Researchers Program Fellowship.
3.
We start with a study of the primal-dual affine-scaling algorithms for linear programs. Using ideas from Kojima et al., Mizuno and Nagasawa, and new potential functions we establish a framework for primal-dual algorithms that keep a potential function value fixed. We show that if the potential function used in the algorithm is compatible with a corresponding neighborhood of the central path then the convergence proofs simplify greatly. Our algorithms have the property that all the iterates can be kept in a neighborhood of the central path without using any centering in the search directions.
Research performed while the author was a Ph.D. student at Cornell University and was supported in part by the United States Army Research Office through the Army Center of Excellence for Symbolic Methods in Algorithmic Mathematics (ACSyAM), Mathematical Sciences Institute of Cornell University, Contract DAAL03-91-C-0027, and also by NSF, AFOSR and ONR through NSF Grant DMS-8920550.
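For reference, a minimal sketch of the pure primal-dual affine-scaling iteration for LP (no centering term in the search direction, fraction-to-boundary step sizes). The problem data and parameter choices are illustrative, not those of the paper:

```python
import numpy as np

def max_step(v, dv, frac=0.9):
    """Largest step in (0, 1] keeping v + alpha*dv strictly positive."""
    neg = dv < 0
    if not neg.any():
        return 1.0
    return min(1.0, frac * float(np.min(-v[neg] / dv[neg])))

def pd_affine_step(A, b, c, x, y, s):
    """One pure primal-dual affine-scaling step for min c'x s.t. Ax = b, x >= 0:
    the Newton direction for the optimality conditions with no centering term,
    assuming (x, y, s) is primal-dual feasible with x, s > 0."""
    d = x / s                                # diagonal of D = X S^{-1}
    dy = np.linalg.solve((A * d) @ A.T, b)   # normal equations
    ds = -A.T @ dy
    dx = -x - d * ds
    ap, ad = max_step(x, dx), max_step(s, ds)
    return x + ap * dx, y + ad * dy, s + ad * ds

# Tiny LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0; optimum at x = (1, 0).
A, b, c = np.array([[1.0, 1.0]]), np.array([1.0]), np.array([1.0, 2.0])
x, y, s = np.array([0.5, 0.5]), np.zeros(1), c.copy()   # y = 0 gives s = c > 0
for _ in range(60):
    x, y, s = pd_affine_step(A, b, c, x, y, s)
```

On this nondegenerate example the duality gap x·s shrinks to essentially zero; the potential-function machinery of the paper is about guaranteeing such behavior in general.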
4.
Based on the recent theoretical results of Zhao and Li [Math. Oper. Res., 26 (2001), pp. 119-146], we present in this paper a new path-following method for nonlinear P* complementarity problems. Unlike most existing interior-point algorithms, which are based on the central path, this algorithm tracks the “regularized central path”, which exists for any continuous P* problem. It turns out that the algorithm is globally convergent for any P* problem provided that its solution set is nonempty. By different choices of the parameters in the algorithm, the iterative sequence can approach different types of points of the solution set. Moreover, local superlinear convergence of this algorithm can also be achieved under certain conditions.
The research of the first author was supported by The National Natural Science Foundation of China under Grants No. 10201032 and No. 70221001. The research of the second author was supported by Grant CUHK4214/01E, Research Grants Council, Hong Kong.
An erratum to this article is available at .
5.
Recently, interior-point algorithms have been applied to nonlinear and nonconvex optimization. Most of these algorithms are
either primal-dual path-following or affine-scaling in nature, and some of them are conjectured to converge to a local minimum.
We give several examples to show that this may be untrue and we suggest some strategies for overcoming this difficulty.
Received: June 26, 2000 / Accepted: April 2002 Published online: September 5, 2002
Key words. Nonconvex quadratic optimization – local minimum – interior-point algorithms – trust region – branch-and-cut
This research is supported by the National Science Foundation Grants CCR-9731273 and DMS-9703490.
6.
Luke B. Winternitz, Stacey O. Nicholls, André L. Tits, Dianne P. O’Leary, Computational Optimization and Applications, 2012, 51(3): 1001-1036
Consider linear programs in dual standard form with n constraints and m variables. When typical interior-point algorithms are used for the solution of such problems, updating the iterates, using direct methods for solving the linear systems and assuming a dense constraint matrix A, requires O(nm^2) operations per iteration. When n ≫ m it is often the case that at each iteration most of the constraints are not very relevant for the construction of a good update and could be ignored to achieve computational savings. This idea was considered in the 1990s by Dantzig and Ye, Tone, Kaliski and Ye, den Hertog et al. and others. More recently, Tits et al. proposed a simple “constraint-reduction” scheme and proved global and local quadratic convergence for a dual-feasible primal-dual affine-scaling method modified according to that scheme. In the present work, similar convergence results are proved for a dual-feasible constraint-reduced variant of Mehrotra’s predictor-corrector algorithm, under less restrictive nondegeneracy assumptions. These stronger results extend to primal-dual affine scaling as a limiting case. Promising numerical results are reported.
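The constraint-reduction idea can be illustrated with a toy selection rule (not necessarily the authors' exact one): build the normal-equations matrix from only the q constraints with the smallest dual slacks, i.e., the ones most nearly active:

```python
import numpy as np

def reduced_normal_matrix(A, y, c, d, q):
    """For the dual LP  max b'y s.t. A'y <= c  (A is m x n, n >> m), form
    the normal matrix A_Q D_Q A_Q' using only the q constraints with the
    smallest slacks s_i = c_i - (A'y)_i, cutting the cost from O(n m^2)
    to O(q m^2) per iteration."""
    s = c - A.T @ y
    Q = np.argsort(s)[:q]          # indices of the most nearly active constraints
    AQ, dQ = A[:, Q], d[Q]
    return (AQ * dQ) @ AQ.T        # m x m reduced normal matrix

rng = np.random.default_rng(0)
m, n, q = 3, 200, 20
A = rng.standard_normal((m, n))
y = np.zeros(m)
c = rng.random(n) + 0.1            # strictly dual-feasible start: slacks = c > 0
d = rng.random(n) + 0.1            # positive scaling weights (illustrative)
M_red = reduced_normal_matrix(A, y, c, d, q)
M_full = reduced_normal_matrix(A, y, c, d, n)   # q = n recovers the full matrix
```

Taking q = n reproduces the unreduced matrix exactly, which is the sanity check the sketch relies on.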
7.
Reuven Rubinstein, Methodology and Computing in Applied Probability, 2009, 11(4): 491-549
We present a randomized algorithm, called the cloning algorithm, for approximating the solutions of quite general NP-hard combinatorial optimization problems, counting, rare-event estimation, and uniform sampling on complex regions. Similar to the algorithms of Diaconis–Holmes–Ross and Botev–Kroese, the cloning algorithm is based on the MCMC (Gibbs) sampler equipped with an importance sampling pdf and, as usual for randomized algorithms, it uses a sequential sampling plan to decompose a “difficult” problem into a sequence of “easy” ones. The cloning algorithm combines the best features of the Diaconis–Holmes–Ross and Botev–Kroese algorithms. In addition to some other enhancements, it has a special mechanism, called the “cloning” device, which makes the cloning algorithm, also called the Gibbs cloner, fast and accurate. We believe that it is the fastest and most accurate randomized algorithm for counting known so far. In addition, it is well suited for solving problems associated with the Boltzmann distribution, like estimating the partition functions in an Ising model. We also present a combined version of the cloning and cross-entropy (CE) algorithms. We prove the polynomial complexity of a particular version of the Gibbs cloner for counting. We finally present efficient numerical results with the Gibbs cloner and the combined version, while solving quite general integer and combinatorial optimization problems as well as counting ones, like SAT.
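The sequential sampling plan can be illustrated, in much simplified form, by a product-of-conditional-probabilities estimator for counting SAT models; the single-bit-flip chain below is a crude stand-in for the Gibbs sampler of the cloning algorithm, and all parameters and the formula are illustrative:

```python
import itertools, random

def sat(clause, x):
    """clause = signed 1-based literals, e.g. [1, -4] means (x1 or not x4)."""
    return any((x[abs(l) - 1] == 1) == (l > 0) for l in clause)

def estimate_count(clauses, n, samples=3000, seed=1):
    """Sequential decomposition of a 'difficult' counting problem:
    |models(C1..Ck)| = 2^n * prod_t P(C_{t+1} holds | C_1..C_t hold).
    Each conditional probability is an 'easy' subproblem, estimated with a
    simple bit-flip MCMC over the models of the first t clauses."""
    rng = random.Random(seed)
    est = float(2 ** n)
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(len(clauses)):
        # re-enter the current level if the chain's state violates it
        while not all(sat(cl, x) for cl in clauses[:t]):
            x = [rng.randint(0, 1) for _ in range(n)]
        hits = 0
        for _ in range(samples):
            i = rng.randrange(n)
            x2 = x.copy()
            x2[i] ^= 1
            if all(sat(cl, x2) for cl in clauses[:t]):   # stay inside the level
                x = x2
            hits += sat(clauses[t], x)
        est *= hits / samples
    return est

n = 10
clauses = [[1, 2, 3], [-1, 4, 5], [2, -4, 6]]
exact = sum(all(sat(cl, x) for cl in clauses)
            for x in itertools.product([0, 1], repeat=n))
approx = estimate_count(clauses, n)
```

For this small instance the estimate can be checked against brute-force enumeration; the cloning device of the paper is precisely what keeps such level-by-level estimates accurate on instances far too large to enumerate.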
8.
In this paper we provide a solution of the functional equation left unsolved in the paper by the second author, “On functional equations arising from map enumerations”, Discrete Math. 123: 93-109 (1993). It also gives the number of combinatorially distinct rooted general Eulerian planar maps with the valency of the root-vertex, the number of non-root vertices, and the number of non-root faces of the maps as three parameters. In particular, a result in the paper by the same author, “On the number of eulerian planar map...
9.
Levent Tunçel, Foundations of Computational Mathematics, 2001, 1(3): 229-254
We generalize primal-dual interior-point methods for linear programming (LP) problems to convex optimization problems in conic form. Previously, the most comprehensive theory of symmetric primal-dual interior-point algorithms was given by Nesterov and Todd for feasible regions expressed as the intersection of a symmetric cone with an affine subspace. In our setting, we allow an arbitrary convex cone in place of the symmetric cone. Even though some of the impressive properties attained by Nesterov-Todd algorithms are impossible in this general setting of convex optimization problems, we show that essentially all primal-dual interior-point algorithms for LP can be extended easily to the general setting. We provide three frameworks for primal-dual algorithms, each framework corresponding to a different level of sophistication in the algorithms. As the level of sophistication increases, we demand better formulations of the feasible solution sets. Our algorithms, in return, attain provably better theoretical properties. We also make a very strong connection to quasi-Newton methods by expressing the square of the symmetric primal-dual linear transformation (the so-called scaling) as a quasi-Newton update in the case of the least sophisticated framework.
August 25, 1999. Final version received: March 7, 2001.
10.
An algorithm of sequential systems of linear equations for nonlinear optimization problems with arbitrary initial point (cited by 6; 0 self-citations, 6 by others)
Current sequential quadratic programming (SQP) type algorithms suffer from two problems: (i) to obtain a search direction, one or more quadratic programming subproblems must be solved per iteration, and the computational cost is very large, so they are not suitable for large-scale problems; (ii) SQP algorithms require the related quadratic programming subproblems to be solvable at every iteration, a requirement that is difficult to satisfy. By using an ε-active set procedure with a special penalty function as the merit function, a new algorithm of sequential systems of linear equations for general nonlinear optimization problems with an arbitrary initial point is presented. This new algorithm only needs to solve three systems of linear equations having the same coefficient matrix per iteration, and has global convergence and local superlinear convergence. To some extent, the new algorithm can overcome the shortcomings of the SQP algorithms mentioned above.
Project partly supported by the National Natural Science Foundation of China and the Tianyuan Foundation of China.
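The key saving, three linear systems sharing one coefficient matrix per iteration, means a single factorization can serve all three solves. A minimal numerical sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
K = rng.standard_normal((n, n)) + n * np.eye(n)   # shared coefficient matrix
rhs = rng.standard_normal((n, 3))                 # three right-hand sides

# Stacking the right-hand sides as columns solves all three systems with a
# single factorization of K, instead of three independent factorizations.
D = np.linalg.solve(K, rhs)
```

In a real implementation one would keep an explicit LU or Cholesky factor and reuse it, but the cost structure is the same: one O(n^3) factorization plus three cheap O(n^2) triangular solves.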
11.
Global Rank Axioms for Poset Matroids (cited by 2; 0 self-citations, 2 by others)
Shu-Chao Li, Yan-Qin Feng, Acta Mathematica Sinica (English Series), 2004, 20(3): 507-514
An excellent introduction to the topic of poset matroids is due to Barnabei, Nicoletti and Pezzoli. In this paper, we investigate the rank axioms for poset matroids; thereby we can characterize poset matroids in a “global” version and a “pseudo-global” version. Some corresponding properties of combinatorial schemes are also obtained.
12.
Paulo J. S. Silva, Jonathan Eckstein, Computational Optimization and Applications, 2006, 33(2-3): 115-156
We consider the variational inequality problem formed by a general set-valued maximal monotone operator and a possibly unbounded “box” in ℝ^n, and study its solution by proximal methods whose distance regularizations are coercive over the box. We prove convergence for a class of double regularizations generalizing a previously proposed class of Auslender et al. Using these results, we derive a broadened class of augmented Lagrangian methods. We point out some connections between these methods and earlier work on “pure penalty” smoothing methods for complementarity; this connection leads to a new form of augmented Lagrangian based on the “neural” smoothing function. Finally, we computationally compare this new kind of augmented Lagrangian with three previously known varieties on the MCPLIB problem library, and show that the neural approach offers some advantages. In these tests, we also consider primal-dual approaches that include a primal proximal term. Such a stabilizing term tends to slow down the algorithms, but makes them more robust.
This author was partially supported by CNPq, Grant PQ 304133/2004-3, and PRONEX-Optimization.
13.
On affine scaling algorithms for nonconvex quadratic programming (cited by 8; 0 self-citations, 8 by others)
Yinyu Ye, Mathematical Programming, 1992, 56(1-3): 285-300
We investigate the use of interior algorithms, especially the affine-scaling algorithm, to solve nonconvex (indefinite or negative definite) quadratic programming (QP) problems. Although nonconvex QP with a polytope constraint is a hard problem, we show that the problem with an ellipsoidal constraint is easy. When the hard QP is solved by successively solving the easy QP, the sequence of points converges monotonically to a feasible point satisfying both the first- and second-order optimality conditions.
Research supported in part by NSF Grant DDM-8922636 and the College Summer Grant, College of Business Administration, The University of Iowa.
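The “easy” ellipsoid-constrained QP is the classical trust-region subproblem. A sketch of one standard way to solve it globally even for indefinite Q, by bisection on the Lagrange multiplier (the degenerate “hard case”, where the linear term is orthogonal to the critical eigenvector, is ignored here; data are illustrative):

```python
import numpy as np

def qp_over_ball(Q, c, r, tol=1e-10):
    """Global minimizer of 0.5 x'Qx + c'x over ||x|| <= r, Q possibly
    indefinite: candidates are x(lam) = -(Q + lam*I)^{-1} c with
    lam >= max(0, -lambda_min(Q)), and lam is chosen by bisection so that
    ||x(lam)|| = r when the solution lies on the boundary."""
    w = np.linalg.eigvalsh(Q)                       # ascending eigenvalues
    lo = max(0.0, -w[0]) + 1e-12
    x = np.linalg.solve(Q + lo * np.eye(len(c)), -c)
    if np.linalg.norm(x) <= r and lo <= 1e-12:      # interior (convex) case
        return x
    hi = lo + 1.0                                   # grow hi until ||x(hi)|| <= r
    while np.linalg.norm(np.linalg.solve(Q + hi * np.eye(len(c)), -c)) > r:
        hi *= 2.0
    while hi - lo > tol:                            # bisection on the multiplier
        mid = 0.5 * (lo + hi)
        x = np.linalg.solve(Q + mid * np.eye(len(c)), -c)
        lo, hi = (mid, hi) if np.linalg.norm(x) > r else (lo, mid)
    return x

Q = np.diag([-1.0, 2.0])        # indefinite: the polytope version would be hard
c = np.array([1.0, 1.0])
x = qp_over_ball(Q, c, 1.0)
```

For indefinite Q the minimizer lands on the sphere ||x|| = r, exactly the structure that makes the ellipsoidal problem tractable.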
14.
15.
We present two strategies for choosing a “hot” starting-point in the context of an infeasible potential reduction (PR) method
for convex quadratic programming. The basic idea of both strategies is to select a preliminary point and to suitably scale
it in order to obtain a starting point such that its nonnegative entries are sufficiently bounded away from zero, and the
ratio between the duality gap and a suitable measure of the infeasibility is small. One of the two strategies is naturally
suggested by the convergence theory of the PR method; the other has been devised to reduce the initial values of the duality
gap and the infeasibility measure, with the objective of decreasing the number of PR iterations. Numerical experiments show
that the second strategy generally performs better than the first, and both outperform a starting-point strategy based on
the affine-scaling step.
16.
New cubature formulae and hyperinterpolation in three variables (cited by 1; 0 self-citations, 1 by others)
A new algebraic cubature formula of degree 2n+1 for the product Chebyshev measure in the d-cube with ≈ n^d/2^(d−1) nodes is established. The new formula is then applied to polynomial hyperinterpolation of degree n in three variables, in which coefficients of the product Chebyshev orthonormal basis are computed by a fast algorithm based on the 3-dimensional FFT. Moreover, integration of the hyperinterpolant provides a new Clenshaw-Curtis type cubature formula in the 3-cube.
Work supported by the National Science Foundation under Grant DMS-0604056, by the “ex-60%” funds of the Universities of Padova
and Verona, and by the INdAM-GNCS.
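A one-dimensional analogue of the construction: Chebyshev coefficients of an interpolant computed with one FFT, then integrated term by term, which is the Clenshaw-Curtis mechanism the paper extends to the 3-cube. This sketch is illustrative, not the paper's trivariate algorithm:

```python
import numpy as np

def chebyshev_coeffs(f, N):
    """Coefficients c_j of the degree-N Chebyshev interpolant of f at the
    Chebyshev-Lobatto points x_k = cos(pi*k/N), computed with one FFT of
    the even extension of the sampled values."""
    v = f(np.cos(np.pi * np.arange(N + 1) / N))
    V = np.concatenate([v, v[-2:0:-1]])          # even extension, length 2N
    a = np.real(np.fft.fft(V))[: N + 1] / N
    a[0] /= 2.0
    a[N] /= 2.0
    return a

def clenshaw_curtis_integral(f, N):
    """Integrate f over [-1, 1] by integrating the Chebyshev series term by
    term: int T_j = 2/(1 - j^2) for even j, and 0 for odd j."""
    c = chebyshev_coeffs(f, N)
    j = np.arange(0, N + 1, 2)
    return float(c[j] @ (2.0 / (1.0 - j ** 2)))

I = clenshaw_curtis_integral(np.exp, 16)   # compare with e - 1/e
```

For smooth integrands the rule converges extremely fast; the paper's contribution is the node-efficient multivariate version of this FFT-based pipeline.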
17.
We consider optimality systems of Karush-Kuhn-Tucker (KKT) type, which arise, for example, as primal-dual conditions characterizing
solutions of optimization problems or variational inequalities. In particular, we discuss error bounds and Newton-type methods
for such systems. An exhaustive comparison of various regularity conditions which arise in this context is given. We obtain
a new error bound under an assumption which we show to be strictly weaker than assumptions previously used for KKT systems,
such as quasi-regularity or semistability (equivalently, the R_0-property). Error bounds are useful, among other things, for identifying active constraints and developing efficient local
algorithms. We propose a family of local Newton-type algorithms. This family contains some known active-set Newton methods,
as well as some new methods. Regularity conditions required for local superlinear convergence compare favorably with convergence
conditions of nonsmooth Newton methods and sequential quadratic programming methods.
Received: December 10, 2001 / Accepted: July 28, 2002 Published online: February 14, 2003
Key words. KKT system – regularity – error bound – active constraints – Newton method
Mathematics Subject Classification (1991): 90C30, 65K05
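Error bounds of the kind discussed are typically stated in terms of a computable residual of the KKT system. As a toy illustration (not the paper's bound), the natural residual for a one-constraint problem, which vanishes exactly at a KKT pair:

```python
import numpy as np

def kkt_residual(x, lam):
    """Natural residual of the KKT system for
    min (x1 - 1)^2 + (x2 - 2)^2  s.t.  g(x) = x1 + x2 - 1 <= 0.
    It stacks the stationarity residual with min(lam, -g(x)), which is
    zero iff lam >= 0, g(x) <= 0 and lam * g(x) = 0."""
    grad_f = np.array([2 * (x[0] - 1), 2 * (x[1] - 2)])
    grad_g = np.array([1.0, 1.0])
    g = x[0] + x[1] - 1
    stationarity = grad_f + lam * grad_g
    complementarity = min(lam, -g)
    return np.linalg.norm(np.append(stationarity, complementarity))

# (0, 1) with multiplier 2 is the exact KKT pair of this problem.
res_star = kkt_residual(np.array([0.0, 1.0]), 2.0)
```

Near a solution, a norm of this residual bounds the distance to the KKT set under the regularity conditions the paper compares; that is what makes it useful for active-set identification.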
18.
Quadratically and superlinearly convergent algorithms for the solution of inequality constrained minimization problems (cited by 8; 0 self-citations, 8 by others)
In this paper, some Newton and quasi-Newton algorithms for the solution of inequality constrained minimization problems are considered. All the algorithms described produce sequences {x_k} converging q-superlinearly to the solution. Furthermore, under mild assumptions, a q-quadratic convergence rate in x is also attained. Other features of these algorithms are that only the solution of linear systems of equations is required at each iteration and that the strict complementarity assumption is never invoked. First, the superlinear or quadratic convergence rate of a Newton-like algorithm is proved. Then, a simpler version of this algorithm is studied, and it is shown that it is superlinearly convergent. Finally, quasi-Newton versions of the previous algorithms are considered and, provided the sequence defined by the algorithms converges, a characterization of superlinear convergence extending the result of Boggs, Tolle, and Wang is given.
This research was supported by the National Research Program “Metodi di Ottimizzazione per la Decisioni”, MURST, Roma, Italy.
19.
In previous work (Krasnogor, in: Studies on the Theory and Design Space of Memetic Algorithms, Ph.D. thesis, University of the West of England, Bristol, UK, 2002; Krasnogor and Smith, IEEE Trans Evol Algorithms 9(6):474–488, 2005) we develop a syntax-only classification of evolutionary algorithms, in particular so-called memetic algorithms (MAs). When
“syntactic sugar” is added to our model, we are able to investigate the polynomial local search (PLS) complexity of memetic
algorithms. In this paper we show the PLS-completeness of whole classes of problems that occur when memetic algorithms are
applied to the travelling salesman problem using a range of mutation, crossover and local search operators. Our PLS-completeness
results shed light on the worst case behaviour that can be expected of a memetic algorithm under these circumstances. Moreover,
we point out in this paper that memetic algorithms for graph partitioning and maximum network flow (both with important practical
applications) also give rise to PLS-complete problems.
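A minimal memetic algorithm of the kind analyzed, mutation plus 2-opt local search on a small TSP instance, can be sketched as follows (operators and parameters are illustrative, not tied to the paper's PLS constructions):

```python
import math, random

def tour_len(pts, tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, tour):
    """Local search: keep reversing segments while any reversal shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_len(pts, cand) < tour_len(pts, tour) - 1e-12:
                    tour, improved = cand, True
    return tour

def memetic_tsp(pts, pop_size=6, gens=20, seed=0):
    """Minimal memetic algorithm: every offspring produced by mutation
    (a random segment reversal) is immediately refined by 2-opt local
    search, and the best individuals survive."""
    rng = random.Random(seed)
    base = list(range(len(pts)))
    pop = []
    for _ in range(pop_size):
        t = base.copy()
        rng.shuffle(t)
        pop.append(two_opt(pts, t))
    for _ in range(gens):
        parent = min(pop, key=lambda t: tour_len(pts, t))
        i, j = sorted(rng.sample(range(len(pts)), 2))
        child = parent[:i] + parent[i:j][::-1] + parent[j:]   # mutation
        pop.append(two_opt(pts, child))                        # + local search
        pop = sorted(pop, key=lambda t: tour_len(pts, t))[:pop_size]
    return pop[0]

# Points on a circle: the optimal tour visits them in angular order.
n = 12
pts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
       for k in range(n)]
best = memetic_tsp(pts)
```

The PLS-completeness results concern exactly this mutation/crossover/local-search structure: even deciding whether such a search has reached a local optimum can be as hard as any polynomial local search problem.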
Electronic Supplementary Material The online version of this article (doi:) contains supplementary material, which is available to authorized users.
20.
Advanced Genetic Programming Based Machine Learning (cited by 1; 0 self-citations, 1 by others)
Stephan Winkler, Michael Affenzeller, Stefan Wagner, Journal of Mathematical Modelling and Algorithms, 2007, 6(3): 455-480
A Genetic Programming based approach for solving classification problems is presented in this paper. Classification is understood
as the act of placing an object into a set of categories, based on the object’s properties; classification algorithms are
designed to learn a function which maps a vector of object features into one of several classes. This is done by analyzing
a set of input-output examples (“training samples”) of the function. Here we present a method based on the theory of Genetic
Algorithms and Genetic Programming that interprets classification problems as optimization problems: Each presented instance
of the classification problem is interpreted as an instance of an optimization problem, and a solution is found by a heuristic
optimization algorithm. The major new aspects presented in this paper are advanced algorithmic concepts as well as suitable
genetic operators for this problem class (mainly the creation of new hypotheses by merging already existing ones and their
detailed evaluation). The experimental part of the paper documents the results produced using new hybrid variants of Genetic
Algorithms as well as investigated parameter settings. Graphical analysis is done using a novel multiclass classifier analysis
concept based on the theory of Receiver Operating Characteristic curves.
The work described in this paper was done within the Translational Research Project L282 “GP-Based Techniques for the Design
of Virtual Sensors” sponsored by the Austrian Science Fund (FWF).
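As a much-simplified illustration of the “classification as optimization” viewpoint (a plain genetic algorithm over a linear decision rule, not the authors' genetic programming system; all names and parameters are illustrative):

```python
import random

def accuracy(w, data):
    """Classify (x, y) as positive iff w0*x + w1*y + w2 > 0."""
    return sum((w[0] * x + w[1] * y + w[2] > 0) == label
               for (x, y), label in data) / len(data)

def evolve_classifier(data, pop_size=20, gens=40, seed=3):
    """Evolve the parameters of a linear decision rule with a genetic
    algorithm whose fitness is training accuracy: each classification
    instance becomes an optimization instance."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: -accuracy(w, data))
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.1)   # crossover
                             for ai, bi in zip(a, b)])           # + mutation
        pop = parents + children
    return max(pop, key=lambda w: accuracy(w, data))

# Linearly separable toy training set: label = 1 iff x + y > 2.
rng = random.Random(0)
data = [((x, y), x + y > 2) for x, y in
        ((rng.random() * 2, rng.random() * 2) for _ in range(80))]
w = evolve_classifier(data)
```

The paper's system replaces the fixed linear hypothesis with evolved program trees and adds hypothesis merging, but the optimization loop (evaluate, select, recombine, mutate) has this same shape.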