20 matching records found; search time: 500 ms
1.
Constrained Global Optimization of Expensive Black Box Functions Using Radial Basis Functions   Total citations: 1 (self-citations: 0, citations by others: 1)
We present a new strategy for the constrained global optimization of expensive black box functions using response surface models. A response surface model is simply a multivariate approximation of a continuous black box function, used as a surrogate model for optimization in situations where function evaluations are computationally expensive. Prior global optimization methods that utilize response surface models were limited to box-constrained problems, but the new method can easily incorporate general nonlinear constraints. In the proposed method, which we refer to as the Constrained Optimization using Response Surfaces (CORS) method, the next point for costly function evaluation is chosen to be the one that minimizes the current response surface model subject to the given constraints and to additional constraints that the point be at some distance from previously evaluated points. The distance requirement is allowed to cycle, starting from a high value (global search) and ending with a low value (local search). The purpose of this constraint is to drive the method towards unexplored regions of the domain and to prevent premature convergence to a point which may not even be a local minimizer of the black box function. The new method can be shown to converge to the global minimizer of any continuous function on a compact set regardless of the response surface model used. Finally, we consider two particular implementations of the CORS method which utilize a radial basis function model (CORS-RBF) and apply them to the box-constrained Dixon–Szegö test functions and to a simple nonlinearly constrained test function. The results indicate that the CORS-RBF algorithms are competitive with existing global optimization algorithms for costly functions on the box-constrained test problems. The results also show that the CORS-RBF algorithms outperform other constrained global optimization algorithms on the nonlinearly constrained test problem.
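The selection rule described in this abstract (minimize the current surrogate subject to a minimum distance from previously evaluated points) can be sketched in a few lines. This is an illustrative simplification, not the authors' implementation: it fits a cubic RBF surrogate and enforces the distance constraint by filtering random candidates, and the names `rbf_surrogate` and `cors_next_point` are invented for the sketch.

```python
import numpy as np

def rbf_surrogate(X, y):
    """Fit a cubic RBF interpolant s(x) = sum_i w_i * ||x - x_i||^3 to data (X, y)."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(r ** 3 + 1e-12 * np.eye(len(X)), y)  # tiny ridge for safety
    return lambda x: float((np.linalg.norm(x - X, axis=1) ** 3) @ w)

def cors_next_point(X, y, bounds, delta, n_cand=2000, seed=0):
    """Pick the candidate minimizing the surrogate among points at distance
    >= delta from all previously evaluated points (candidate-sampling version)."""
    rng = np.random.default_rng(seed)
    s = rbf_surrogate(X, y)
    lo, hi = bounds
    cand = rng.uniform(lo, hi, size=(n_cand, X.shape[1]))
    far = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=-1), axis=1) >= delta
    cand = cand[far] if far.any() else cand  # fall back if the filter empties the pool
    vals = np.array([s(c) for c in cand])
    return cand[np.argmin(vals)]

# toy "expensive" function: shifted sphere on [-2, 2]^2
f = lambda x: float(np.sum((x - 0.5) ** 2))
X = np.array([[-1.5, -1.0], [0.0, 1.5], [1.8, -0.2], [-0.3, 0.4]])
y = np.array([f(x) for x in X])
x_next = cors_next_point(X, y, bounds=(-2.0, 2.0), delta=0.3)
```

Cycling `delta` from large to small reproduces the global-to-local search pattern the abstract describes.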
2.
Miroslav Kuchta Kent‐Andre Mardal Mikael Mortensen 《Numerical Methods for Partial Differential Equations》2019,35(1):375-393
Multiscale or multiphysics problems often involve coupling of partial differential equations posed on domains of different dimensionality. In this work, we consider a simplified model problem of a 3d‐1d coupling, and the main objective is to construct algorithms that may utilize standard multilevel algorithms for the 3d domain, which has the dominating computational complexity. Preconditioning for a system of two elliptic problems posed, respectively, in a three‐dimensional domain and an embedded one‐dimensional curve and coupled by the trace constraint is discussed. Investigating numerically the properties of the well‐defined discrete trace operator, it is found that negative fractional Sobolev norms are suitable preconditioners for the Schur complement of the system. The norms are employed to construct a robust block diagonal preconditioner for the coupled problem.
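A minimal finite-dimensional sketch of the fractional-norm idea, assuming a uniform 1D grid and the spectral definition of fractional powers of the Dirichlet Laplacian; the paper's actual FEM construction and 3d-1d coupling are not reproduced, and the function name is invented here.

```python
import numpy as np

def fractional_laplacian_matrix(n, s, h=None):
    """Spectral fractional power A^s of the 1D Dirichlet Laplacian on n interior
    nodes; for s < 0 this acts like a negative-order Sobolev norm operator."""
    h = h or 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    lam, V = np.linalg.eigh(A)            # eigenpairs of the SPD Laplacian
    return V @ np.diag(lam ** s) @ V.T    # A^s via the spectral mapping

n = 50
B = fractional_laplacian_matrix(n, -0.5)  # "H^{-1/2}-norm" operator on the grid
u = np.sin(np.pi * np.linspace(0, 1, n + 2)[1:-1])
norm_sq = float(u @ (B @ u))              # discrete negative fractional norm of u
```

Because the construction is spectral, `A^{-s}` is the exact inverse of `A^{s}`, which is the property a Schur-complement preconditioner of this type relies on.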
3.
Angelo Alessandri Giorgio Gnecco Marcello Sanguineti 《Journal of Optimization Theory and Applications》2010,147(2):243-262
Rates of convergence are derived for approximate solutions to optimization problems associated with the design of state estimators for nonlinear dynamic systems. Such problems consist in minimizing the functional given by the worst-case ratio between the ℒp-norm of the estimation error and the sum of the ℒp-norms of the disturbances acting on the dynamic system. The state estimator depends on an innovation function, which is searched for as a minimizer of the functional over a subset of a suitably defined functional space. In general, no closed-form solutions are available for these optimization problems. Following the approach proposed in (J. Optim. Theory Appl. 134:445–466, 2007), suboptimal solutions are searched for over linear combinations of basis functions containing some parameters to be optimized. The accuracies of such suboptimal solutions are estimated in terms of the number of basis functions. The estimates hold for families of approximators used in applications, such as splines of suitable orders.
4.
Napsu Karmitsa Mario Tanaka Filho José Herskovits 《Journal of Optimization Theory and Applications》2011,148(3):528-549
Nowadays, solving nonsmooth (not necessarily differentiable) optimization problems plays a very important role in many areas
of industrial applications. Most of the algorithms developed so far deal only with nonsmooth convex functions. In this paper,
we propose a new algorithm for solving nonsmooth optimization problems that are not assumed to be convex. The algorithm combines
the traditional cutting plane method with some features of bundle methods and with the search direction calculation of the feasible direction interior point algorithm (Herskovits, J. Optim. Theory Appl. 99(1):121–146, 1998). The algorithm generates a sequence of interior points to the epigraph of the objective function. The accumulation points of this sequence are solutions to the original problem. We prove the global convergence of the method for locally Lipschitz continuous functions and give some preliminary results from numerical experiments.
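For orientation, the classical cutting-plane idea that this method builds on can be sketched in the convex, one-dimensional case. This is not the authors' nonconvex bundle/interior-point algorithm: a naive grid search stands in for the LP subproblem, and the test function and subgradient are chosen for illustration.

```python
import numpy as np

def kelley_cutting_plane(f, g, lo, hi, n_iter=30):
    """Kelley's cutting-plane method for a convex f with subgradient g on [lo, hi].
    The piecewise-linear model is minimized over a dense grid (a naive stand-in
    for the usual LP subproblem)."""
    xs, grid = [lo], np.linspace(lo, hi, 10001)
    for _ in range(n_iter):
        # model value at every grid point: max over all cuts collected so far
        model = np.max([f(x) + g(x) * (grid - x) for x in xs], axis=0)
        xs.append(float(grid[np.argmin(model)]))
    return xs[-1]

f = lambda x: abs(x - 1.0) + 0.1 * x * x             # convex, nonsmooth at x = 1
g = lambda x: (1.0 if x > 1.0 else -1.0) + 0.2 * x   # a valid subgradient of f
x_star = kelley_cutting_plane(f, g, -3.0, 3.0)       # minimizer is x = 1
```

Each new cut is a lower bound on `f`, so the model tightens monotonically; bundle methods refine this scheme by stabilizing the next iterate.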
5.
V. Chepoi F. F. Dragan I. Newman Y. Rabinovich Y. Vaxès 《Discrete and Computational Geometry》2012,47(1):187-214
In this paper, we present a simple factor 6 algorithm for approximating the optimal multiplicative distortion of embedding
a graph metric into a tree metric (thus improving and simplifying the factor 100 and 27 algorithms of Bǎdoiu et al. (Proceedings
of the eighteenth annual ACM–SIAM symposium on discrete algorithms (SODA 2007), pp. 512–521, 2007) and Bǎdoiu et al. (Proceedings of the 11th international workshop on approximation algorithms for combinatorial optimization
problems (APPROX 2008), Springer, Berlin, pp. 21–34, 2008)). We also present a constant-factor algorithm for approximating the optimal distortion of embedding a graph metric into an outerplanar metric. For this, we introduce a general notion of a metric relaxed minor and show that if G contains an α-metric relaxed H-minor, then the distortion of any embedding of G into any metric induced by an H-minor-free graph is at least α. Then, for H = K_{2,3}, we present an algorithm which either finds an α-relaxed minor or produces an O(α)-embedding into an outerplanar metric.
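The quantity being approximated, the multiplicative distortion between a graph metric and a target metric, can be computed directly on small examples. A minimal sketch (the function name is ours): embedding the 4-cycle into a star tree with the cycle vertices as leaves gives distortion 2 for this particular embedding.

```python
import itertools
import numpy as np

def distortion(D_G, D_T):
    """Multiplicative distortion of an embedding between two metrics given as
    distance matrices: (max expansion) * (max contraction)."""
    n = len(D_G)
    pairs = itertools.combinations(range(n), 2)
    ratios = [D_T[i][j] / D_G[i][j] for i, j in pairs]
    return max(ratios) / min(ratios)

# shortest-path metric of the 4-cycle C4 ...
D_C4 = np.array([[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]], float)
# ... embedded into a star tree: four leaves at distance 1 from a common center
D_star = np.array([[0, 2, 2, 2], [2, 0, 2, 2], [2, 2, 0, 2], [2, 2, 2, 0]], float)
dist = distortion(D_C4, D_star)
```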
6.
We consider domain subdivision algorithms for computing isotopic approximations of a nonsingular algebraic curve. The curve
is given by a polynomial equation f(X,Y)=0. Two algorithms in this area are from Snyder (1992) SIGGRAPH Comput. Graphics, 26(2), 121 and Plantinga and Vegter (2004) In Proc. Eurographics Symposium on Geometry Processing, pp. 245–254. We introduce a new algorithm that combines the advantages
of these two algorithms: like Snyder, we use the parameterizability criterion for subdivision, and like Plantinga and Vegter,
we exploit nonlocal isotopy. We further extend our algorithm in two important and practical directions: first, we allow subdivision
cells to be rectangles with arbitrary but bounded aspect ratios. Second, we extend the input domains to be regions R_0 with arbitrary geometry which might not be simply connected. Our algorithm halts as long as the curve has no singularities in the region and intersects the boundary of R_0 transversally. Our algorithm is practical and easy to implement exactly. We report some very encouraging experimental results, showing that our algorithms can be much more efficient than the algorithms of Plantinga–Vegter and Snyder.
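The basic subdivision loop (recursively discard boxes where an interval enclosure of f has constant sign) can be sketched for f(x, y) = x² + y² − 1. This toy version uses naive interval arithmetic for this one polynomial and none of the paper's parameterizability or isotopy machinery.

```python
def isq(a, b):
    """Interval extension of t -> t^2 on [a, b]."""
    lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return lo, max(a * a, b * b)

def f_interval(box):
    """Interval enclosure of f(x, y) = x^2 + y^2 - 1 on box = (x0, x1, y0, y1)."""
    x0, x1, y0, y1 = box
    xl, xu = isq(x0, x1)
    yl, yu = isq(y0, y1)
    return xl + yl - 1.0, xu + yu - 1.0

def subdivide(box, depth):
    """Keep subdividing boxes whose interval enclosure of f contains 0."""
    lo, hi = f_interval(box)
    if lo > 0.0 or hi < 0.0:       # f has constant sign: box cannot meet the curve
        return []
    if depth == 0:
        return [box]
    x0, x1, y0, y1 = box
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    kids = [(x0, xm, y0, ym), (xm, x1, y0, ym), (x0, xm, ym, y1), (xm, x1, ym, y1)]
    return [b for k in kids for b in subdivide(k, depth - 1)]

cells = subdivide((-2.0, 2.0, -2.0, 2.0), depth=5)  # boxes that may meet the circle
```

Interval arithmetic overestimates, so some retained boxes may miss the curve, but no box containing the curve is ever discarded; the real algorithms add criteria to certify the topology of the output.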
7.
We present algorithms for solving general sup-norm minimization problems over spaces of analytic functions, such as those arising in H∞ control. We also give an analysis and some theory of these algorithms. Part of this is specific to analytic optimization, while part holds for general sup-norm optimization. In particular, we propose a Newton-type algorithm which actually uses very high-order terms. The novel feature is that the higher-order terms can be chosen in many ways while still maintaining a second-order convergence rate; a clever choice of higher-order terms then greatly reduces computation time. Conceivably this technique can be modified to accelerate Newton algorithms in some other circumstances. Estimates of the order of convergence as well as results of numerical tests are also presented. This work was partially supported by the Air Force Office of Scientific Research and the National Science Foundation.
8.
Random-key genetic algorithms were introduced by Bean (ORSA J. Comput. 6:154–160, 1994) for solving sequencing problems in combinatorial optimization. Since then, they have been extended to handle a wide class
of combinatorial optimization problems. This paper presents a tutorial on the implementation and use of biased random-key
genetic algorithms for solving combinatorial optimization problems. Biased random-key genetic algorithms are a variant of
random-key genetic algorithms, where one of the parents used for mating is biased to be of higher fitness than the other parent.
After introducing the basics of biased random-key genetic algorithms, the paper discusses in some detail implementation issues,
illustrating the ease in which sequential and parallel heuristics based on biased random-key genetic algorithms can be developed.
A survey of applications that have recently appeared in the literature is also given.
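The random-key machinery this tutorial describes is easy to sketch: keys in [0, 1] decode to permutations by sorting, and biased mating copies each gene from the elite parent with a fixed probability. Below is a toy sketch under arbitrary parameter choices, not a tuned BRKGA implementation.

```python
import random

def decode(keys):
    """Random keys -> permutation: sort indices by key value."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def biased_crossover(elite, other, rho=0.7, rng=random):
    """Each gene is inherited from the elite parent with probability rho."""
    return [e if rng.random() < rho else o for e, o in zip(elite, other)]

def brkga_step(pop, fitness, n_elite=2, n_mutant=2, rng=random):
    """One generation: keep elites, add random mutants, fill with biased mating."""
    pop = sorted(pop, key=fitness)                  # lower fitness is better here
    elites, n = pop[:n_elite], len(pop[0])
    mutants = [[rng.random() for _ in range(n)] for _ in range(n_mutant)]
    children = [biased_crossover(rng.choice(elites), rng.choice(pop[n_elite:]), rng=rng)
                for _ in range(len(pop) - n_elite - n_mutant)]
    return elites + mutants + children

# toy sequencing objective: distance of the decoded permutation from a target
random.seed(1)
target = [3, 0, 2, 1, 4]
fitness = lambda keys: sum(abs(p - t) for p, t in zip(decode(keys), target))
pop = [[random.random() for _ in range(5)] for _ in range(20)]
f0 = min(fitness(ind) for ind in pop)
for _ in range(50):
    pop = brkga_step(pop, fitness)
best = min(pop, key=fitness)
```

The decoder is the only problem-specific component, which is exactly the point the tutorial makes about the ease of building new heuristics on this framework.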
9.
A. Y. Azimov 《Journal of Optimization Theory and Applications》2008,137(1):75-88
The present paper is a continuation of a paper by Azimov (J. Optim. Theory Appl. 2007, accepted), where we derived duality relations for some general multiobjective optimization problems which include convex
programming and optimal control problems. As a consequence, we established duality results for multiobjective convex programming
problems. In the present paper (Part 2), based on Theorem 3.2 of Azimov (J. Optim. Theory Appl. 2007, accepted), we establish duality results for several classes of multiobjective optimal control problems.
10.
Radu Zaharopol 《Acta Appl Math》2008,104(1):47-81
Our main goal in this paper is to prove that any transition probability P on a locally compact separable metric space (X,d) defines a Kryloff-Bogoliouboff-Beboutoff-Yosida (KBBY) ergodic decomposition of the state space (X,d). Our results extend and strengthen the results of Chap. 5 of Hernández-Lerma and Lasserre (Markov Chains and Invariant Probabilities,
[2003]) and extend our KBBY-decomposition for Markov-Feller operators that we have obtained in Chap. 2 of our monograph (Zaharopol
in Invariant Probabilities of Markov-Feller Operators and Their Supports, [2005]). In order to deal with the decomposition that we present in this paper, we had to overcome the fact that the Lasota-Yorke
lemma (Theorem 1.2.4 in our book (op. cit.)) and two results of Lasota and Myjak (Proposition 1.1.7 and Corollary 1.1.8 of
our work (op. cit.)) are no longer true in general in the non-Feller case.
In the paper, we also obtain a “formula” for the supports of elementary measures of a fairly general type. The result is new
even for Markov-Feller operators.
We conclude the paper with an outline of the KBBY decomposition for a fairly large class of transition functions. The results
for transition functions and transition probabilities seem to us surprisingly similar. However, as expected, the arguments
needed to prove the results for transition functions are significantly more involved and are not presented here. We plan to
discuss the KBBY decomposition for transition functions with full details in a small monograph that we are currently trying
to write.
I am indebted to Sean Meyn for a discussion that we had in November 2004, which helped me to significantly improve the exposition
in this paper, and to two anonymous referees for useful recommendations.
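A finite-state analogue of such an ergodic decomposition is easy to compute: a reducible transition matrix splits into closed communicating classes, each carrying its own invariant probability. This sketch is only a finite-dimensional illustration, not the paper's measure-theoretic setting.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible stochastic matrix P: the left
    eigenvector for eigenvalue 1, found by solving (P^T - I) pi = 0, sum pi = 1."""
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# a reducible chain: states {0,1} and {2,3} form two closed communicating classes
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.9, 0.1],
              [0.0, 0.0, 0.3, 0.7]])
pi1 = stationary(P[:2, :2])   # invariant measure supported on the first class
pi2 = stationary(P[2:, 2:])   # invariant measure supported on the second class
```

Every invariant probability of `P` is a convex combination of `pi1` and `pi2` (padded with zeros), which is the finite-state shadow of the KBBY decomposition.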
11.
Although evolutionary algorithms (EAs) have some operators which let them explore the whole search domain, they still get trapped in local minima when the multimodality of the objective function is increased. To improve the performance of EAs, many optimization techniques or operators have been introduced in recent years. However, it seems that these modified versions exploit some special properties of the classical multimodal benchmark functions, some of which have been noted in previous research, and solutions to eliminate them have been proposed. In this article, we show that the quite symmetric behavior of the available multimodal test functions is another example of these special properties, one which can be exploited by some EAs such as the covariance matrix adaptation evolution strategy (CMA-ES). This method, based on its invariance properties and good optimization results for available unimodal and multimodal benchmark functions, is considered a robust and efficient method. However, as far as black box optimization problems are considered, no special trend in the behavior of the objective function can be assumed; consequently, this symmetry limits the generalization of optimization results from available multimodal benchmark functions to real-world problems. To improve the performance of CMA-ES, the Elite search sub-algorithm is introduced and implemented in the basic algorithm. The importance and effect of this modification are illustrated experimentally by solving several test problems at the end.
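The symmetry the article points to is easy to check numerically: the classical Rastrigin benchmark satisfies f(x) = f(−x), while shifting its optimum destroys this. A small sketch (the shift values are arbitrary):

```python
import math
import random

def rastrigin(x):
    """Classical multimodal benchmark; symmetric under x -> -x."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def shifted_rastrigin(x, shift=(0.7, -1.3)):
    """Shifting the optimum removes the symmetry about the origin."""
    return rastrigin([xi - si for xi, si in zip(x, shift)])

random.seed(0)
pts = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(100)]
sym_gap = max(abs(rastrigin(p) - rastrigin([-v for v in p])) for p in pts)
shift_gap = max(abs(shifted_rastrigin(p) - shifted_rastrigin([-v for v in p])) for p in pts)
```

Benchmarking on shifted or rotated variants is the standard way to avoid rewarding algorithms that exploit such symmetries.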
12.
The matrix rank minimization problem has applications in many fields, such as system identification, optimal control, low-dimensional
embedding, etc. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often
solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for solving the nuclear norm
minimization problem (Math. Program., 2009). By incorporating an approximate singular value decomposition technique in this algorithm, the solution to the matrix rank
minimization problem is usually obtained. In this paper, we study the convergence/recoverability properties of the fixed-point
continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when
its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed
sensing. Numerical results for these algorithms for solving affinely constrained matrix rank minimization problems are reported.
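The shrinkage step at the heart of fixed-point continuation is singular value thresholding, the proximal operator of the nuclear norm. Below is a toy matrix-completion sketch built from a plain gradient-plus-shrinkage iteration; `fpc` is our simplified stand-in, not the algorithm of Ma, Goldfarb and Chen, and all parameters are arbitrary.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau * (nuclear norm) at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def fpc(observed, mask, tau=0.05, mu=1.0, n_iter=500):
    """Toy fixed-point iteration for nuclear-norm matrix completion:
    a gradient step on the data-fit term, then singular value shrinkage."""
    X = np.zeros_like(observed)
    for _ in range(n_iter):
        X = svt(X - mu * mask * (X - observed), tau * mu)
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank-2 ground truth
mask = rng.random((8, 8)) < 0.7                                # ~70% of entries observed
X_hat = fpc(mask * A, mask.astype(float))
```

Each iteration strictly decreases the penalized objective tau·‖X‖* + ½‖mask∘(X − A)‖², which is the convex relaxation the abstract refers to.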
13.
Mikhail Yu. Khachay 《Journal of Mathematical Modelling and Algorithms》2007,6(4):547-561
Two special cases of the Minimum Committee Problem are studied: the Minimum Committee Problem of Finite Sets (MCFS) and the Minimum Committee Problem of a System of Linear Inequalities (MCLE). It is known that the first of these problems is NP-hard (see Mazurov et al., Proc. Steklov Inst. Math., 1:67–101, 2002). In this paper we show the NP-hardness of two integer optimization problems connected with it. In addition, we analyze the hardness of approximation of the MCFS problem. In particular, we show that, unless NP ⊂ TIME(n^{O(log log n)}), for every ε > 0 there is no approximation algorithm for this problem with approximation ratio (1 − ε)ln(m − 1), where m is the number of inclusions in the MCFS problem. To prove this bound we use the SET COVER problem, for which a similar result is known (Feige, J. ACM, 45:634–652, 1998). We also show that the Minimum Committee of a Linear Inequalities System (MCLE) problem is NP-hard as well, and we consider an approximation algorithm for this problem.
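The SET COVER problem used in the reduction has a matching greedy upper bound of about ln m, which is why the (1 − ε)ln(m − 1) lower bound above is essentially tight. A minimal sketch of the greedy algorithm:

```python
def greedy_set_cover(universe, sets):
    """Greedy set cover: repeatedly pick the set covering the most uncovered
    elements. Achieves an H(n) ~ ln n approximation ratio, essentially optimal
    by Feige's hardness result."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(uncovered & sets[i]))
        if not uncovered & sets[best]:
            raise ValueError("universe not coverable by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

U = range(1, 13)
S = [set(range(1, 7)), set(range(7, 13)), {1, 2, 3, 7, 8}, {4, 5, 9, 10}, {6, 11, 12}]
cover = greedy_set_cover(U, S)   # greedy picks the two big sets here
```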
14.
Carlos Ansótegui Ramón Béjar Cèsar Fernández Carla Gomes Carles Mateu 《Journal of Heuristics》2011,17(5):589-614
Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity; over the last few years, however, these problems have gone from an entertainment to an interesting research area, and a doubly interesting one at that.
On the one side Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are being actively used for experimental
design, as in Bailey et al. (Am. Math. Mon. 115:383–404, 2008; J. Agron. Crop Sci. 165:121–130, 1990), Morgan (Latin squares and related experimental designs. Wiley, New York, 2008) and Vaughan (Electron. J. Comb. 16, 2009). On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and
thanks to their characteristics and behavior, they can be used as benchmark problems for refining and testing solving algorithms
and approaches. Also, thanks to their high inner structure, their study can contribute more than studies of random problems
to our goal of solving real-world problems and applications and understanding problem characteristics that make them hard
to solve. In this work we use two techniques for solving and modeling Sudoku problems, namely, Constraint Satisfaction Problem
(CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions
can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to the worst-case
complexity, we prove that GSP with block regions of m rows and n columns with m≠n is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators, that differ in the
balancing level they guarantee between the constraints of the problem, by finely controlling how the holes are distributed
in the cells of the GSP. Experimentally, we show that the more balanced are the constraints, the higher the complexity of
solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP.
Finally, we provide a study of the correlation between backbone variables—variables with the same value in all the solutions
of an instance—and hardness of GSP.
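A brute-force backtracking solver for the Generalized Sudoku Problem with m-row by n-column block regions can be sketched as below; this is a baseline only, and the paper's CSP and SAT encodings are far more effective on hard instances.

```python
def solve_gsp(grid, m, n):
    """Backtracking solver for a Generalized Sudoku of order N = m*n with
    m x n block regions; 0 marks a hole. Fills `grid` in place and returns
    True if a solution exists."""
    N = m * n
    for r in range(N):
        for c in range(N):
            if grid[r][c] == 0:
                for v in range(1, N + 1):
                    if all(grid[r][j] != v for j in range(N)) \
                       and all(grid[i][c] != v for i in range(N)) \
                       and all(grid[i][j] != v
                               for i in range(r - r % m, r - r % m + m)
                               for j in range(c - c % n, c - c % n + n)):
                        grid[r][c] = v
                        if solve_gsp(grid, m, n):
                            return True
                        grid[r][c] = 0
                return False   # no value fits this hole: backtrack
    return True                # no holes left: solved

# order-6 GSP with 2x3 blocks and two holes in the first row
g = [[0, 0, 3, 4, 5, 6],
     [4, 5, 6, 1, 2, 3],
     [2, 3, 1, 5, 6, 4],
     [5, 6, 4, 2, 3, 1],
     [3, 1, 2, 6, 4, 5],
     [6, 4, 5, 3, 1, 2]]
ok = solve_gsp(g, 2, 3)
```

Note that with m ≠ n (here 2 × 3) the block constraint is no longer symmetric in rows and columns, which is the setting of the NP-completeness result above.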
15.
We present an interior-point method for monotone linear complementarity problems over symmetric cones (SCLCP) that is based
on barrier functions which are defined by a large class of univariate functions, called eligible kernel functions. This class
is fairly general and includes the classical logarithmic function, the self-regular functions, as well as many non-self-regular
functions as special cases. We provide a unified analysis of the method and give a general scheme on how to calculate the
iteration bounds for the entire class. We also calculate the iteration bounds of both large-step and short-step versions of
the method for ten frequently used eligible kernel functions. For some of them we match the best known iteration bound for
large-step methods, while for short-step methods the best iteration bound is matched for all cases. The paper generalizes the results of Lesaja and Roos (SIAM J. Optim. 20(6):3014–3039, 2010) from P_*(κ)-LCPs over the non-negative orthant to monotone LCPs over symmetric cones.
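Two members of the eligible-kernel family are easy to write down: the classical logarithmic kernel and a kernel of self-regular type. Both vanish with zero derivative at t = 1, so the induced barrier Ψ(v) = Σᵢ ψ(vᵢ) measures distance from the central path (v = e). A sketch with invented function names; the specific self-regular form is one commonly used instance, not necessarily among the ten analyzed in the paper.

```python
import math

def psi_log(t):
    """Classical logarithmic kernel: psi(t) = (t^2 - 1)/2 - ln t."""
    return (t * t - 1.0) / 2.0 - math.log(t)

def psi_self_reg(t, q=2):
    """A self-regular-type kernel: psi(t) = (t^2 - 1)/2 + (t^(1-q) - 1)/(q - 1),
    with q > 1 controlling the barrier growth as t -> 0."""
    return (t * t - 1.0) / 2.0 + (t ** (1 - q) - 1.0) / (q - 1.0)

def barrier(v, psi):
    """Kernel-based barrier for a scaled iterate v: Psi(v) = sum_i psi(v_i)."""
    return sum(psi(t) for t in v)

vals = [psi_log(1.0), psi_self_reg(1.0), barrier([0.5, 1.0, 2.0], psi_log)]
```

The iteration-bound analysis in such papers proceeds by bounding how fast Ψ can decrease per step as a function of the kernel's growth conditions.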
16.
Veraverbeke’s (Stoch Proc Appl 5:27–37, 1977) theorem relates the tail of the distribution of the supremum of a random walk with negative drift to the tail of the distribution
of its increments, or equivalently, the probability that a centered random walk with heavy-tail increments hits a moving linear
boundary. We study similar problems for more general processes. In particular, we derive an analogue of Veraverbeke’s theorem
for fractional integrated ARMA models without prehistoric influence, when the innovations have regularly varying tails. Furthermore,
we prove some limit theorems for the trajectory of the process, conditionally on a large maximum. Those results are obtained
by using a general scheme of proof which we present in some detail and which should be of value in other related problems.
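The boundary-crossing probability in Veraverbeke's theorem can be explored by simulation: a random walk whose increments are heavy-tailed (here Pareto, hence regularly varying) jumps minus a constant drift. This is a rough Monte Carlo sketch with arbitrary parameters; it only illustrates that the supremum tail decays slowly, not the theorem's precise asymptotic.

```python
import random

def sup_tail_prob(x, alpha=2.5, drift=2.5, n_steps=500, n_paths=2000, seed=42):
    """Monte Carlo estimate of P(sup_n S_n > x) for a random walk with
    increments paretovariate(alpha) - drift (negative mean drift)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        s = best = 0.0
        for _ in range(n_steps):
            s += rng.paretovariate(alpha) - drift
            best = max(best, s)
            if best > x:           # the level is crossed: stop this path early
                break
        hits += best > x
    return hits / n_paths

p10 = sup_tail_prob(10.0)
p40 = sup_tail_prob(40.0)
```

Under regular variation, the theorem predicts the supremum tail is governed by the integrated increment tail, i.e. it decays polynomially rather than exponentially in x.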
17.
Ram U. Verma 《Journal of Optimization Theory and Applications》2012,155(1):196-214
Based on generalized graph convergence, first a general framework for an implicit algorithm involving a sequence of generalized resolvents (or generalized resolvent operators) of set-valued A-maximal monotone (also referred to as A-maximal (m)-relaxed monotone, or A-monotone) mappings and H-maximal monotone mappings is developed, and then the convergence analysis is examined in the context of solving a general class of nonlinear implicit variational inclusion problems in a Hilbert space setting. The obtained results generalize the work of Huang, Fang and Cho (J. Nonlinear Convex Anal. 4:301–308, 2003) involving the classical resolvents to the case of generalized resolvents based on A-maximal monotone (and H-maximal monotone) mappings, while that work in turn added a new dimension to the classical resolvent technique based on the graph convergence introduced by Attouch (Variational Convergence for Functions and Operators, Applied Mathematics Series, Pitman, London, 1984). In general, the notion of graph convergence has potential applications to several other fields, including models of phenomena with rapidly oscillating states, as well as to probability theory, especially to the convergence of distribution functions on ℝ. The obtained results not only generalize the existing results in the literature, but also provide a new approach to the proofs, in the sense that our approach starts in a standard manner and then differs significantly to achieve linear convergence in a smooth manner.
18.
In this paper, epsilon and Ritz methods are applied for solving a general class of fractional constrained optimization problems. The goal is to minimize a functional subject to a number of constraints. The functional and constraints can have multiple dependent variables, multiorder fractional derivatives, and a group of initial and boundary conditions. The fractional derivative in the problem is in the Caputo sense. The constrained optimization problems include isoperimetric fractional variational problems (IFVPs) and fractional optimal control problems (FOCPs). In the presented approach, first, by implementing the epsilon method, we transform the given constrained optimization problem into an unconstrained problem; then, by applying the Ritz method and polynomial basis functions, we reduce the optimization problem to the problem of optimizing a real-valued function. The choice of polynomial basis functions provides the method with such flexibility that initial and boundary conditions can be easily imposed. The convergence of the method is analytically studied, and some illustrative examples including IFVPs and FOCPs are presented to demonstrate the validity and applicability of the new technique.
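The Ritz reduction this paper applies can be illustrated on a classical integer-order problem: minimize J(y) = ∫₀¹ (y′² + y²) dt with y(0) = 0, y(1) = 1, whose exact minimum is coth(1). Polynomial trial functions that satisfy the boundary conditions by construction reduce the problem to a small linear system. This sketch omits the fractional derivatives and the epsilon penalty of the paper's method.

```python
import numpy as np

def ritz_min(n_basis=4, n_quad=40):
    """Ritz method for min J(y) = int_0^1 (y'^2 + y^2) dt, y(0)=0, y(1)=1,
    with trial functions y = t + sum_k c_k * (t^(k+1) - t^(k+2)); each basis
    function vanishes at both endpoints, so the BCs hold by construction."""
    x, w = np.polynomial.legendre.leggauss(n_quad)
    t, w = 0.5 * (x + 1.0), 0.5 * w                    # Gauss nodes mapped to [0, 1]
    u, du = t, np.ones_like(t)                         # particular part y0(t) = t
    W = np.array([t ** (k + 1) - t ** (k + 2) for k in range(n_basis)])
    dW = np.array([(k + 1) * t ** k - (k + 2) * t ** (k + 1) for k in range(n_basis)])
    A = (dW * w) @ dW.T + (W * w) @ W.T                # stiffness + mass
    b = (dW * w) @ du + (W * w) @ u
    c = np.linalg.solve(A, -b)                         # J is quadratic in c
    return float(np.sum(w * ((du + c @ dW) ** 2 + (u + c @ W) ** 2)))

J_min = ritz_min()   # should be close to, and never below, coth(1)
```

Because Ritz minimizes over a subspace, the computed value is always an upper bound on the true minimum, converging from above as the basis grows.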
19.
Reuven Rubinstein 《Methodology and Computing in Applied Probability》2009,11(4):491-549
We present a randomized algorithm, called the cloning algorithm, for approximating the solutions of quite general NP-hard
combinatorial optimization problems, counting, rare-event estimation and uniform sampling on complex regions. Similar to the
algorithms of Diaconis–Holmes–Ross and Botev–Kroese the cloning algorithm is based on the MCMC (Gibbs) sampler equipped with
an importance sampling pdf and, as usual for randomized algorithms, it uses a sequential sampling plan to decompose a “difficult”
problem into a sequence of “easy” ones. The cloning algorithm combines the best features of the Diaconis–Holmes–Ross and Botev–Kroese algorithms. In addition to some other enhancements, it has a special mechanism, called the “cloning” device, which makes the cloning algorithm, also called the Gibbs cloner, fast and accurate. We believe that it is the fastest and most accurate randomized algorithm for counting known so far.
In addition it is well suited for solving problems associated with the Boltzmann distribution, like estimating the partition
functions in an Ising model. We also present a combined version of the cloning and cross-entropy (CE) algorithms. We prove
the polynomial complexity of a particular version of the Gibbs cloner for counting. We finally present efficient numerical
results with the Gibbs cloner and the combined version, while solving quite general integer and combinatorial optimization
problems as well as counting ones, like SAT.
20.
In [1], we previously observed that, in a chordal graph G, the maximum number of independent r-cliques (i.e., of vertex-disjoint subgraphs of G, each isomorphic to K_r, with no edges joining any two of the subgraphs) equals the minimum number of cliques of G that meet all the r-cliques of G. When r = 1, this says that chordal graphs have independence number equal to clique covering number. When r = 2, this is equivalent to a result of Cameron (1989), and it implies a well-known forbidden subgraph characterization of split graphs. In this paper we consider a weighted version of the above min-max optimization problem. Given a chordal graph G, with a nonnegative weight for each r-clique in G, we seek a set of independent r-cliques with maximum total weight. We present two algorithms to solve this problem, based on the principle of complementary slackness. The first one operates on a graph derived from G and is an adaptation of an algorithm of Farber (1982). The second one improves the performance by reducing the number of constraints of the linear programs. Both algorithms yield a min-max relation. When the algorithms are specialized to the situation in which all the r-cliques have the same weight, we simplify the algorithms reported in [1], although these simpler algorithms are not as efficient. As a byproduct, we also obtain new elementary proofs of the above min-max result.
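For the special case r = 1 on an interval graph (interval graphs are chordal), the maximum-weight independent set is solvable by a standard dynamic program over intervals sorted by right endpoint. This is a sketch of that classical special case, not the authors' complementary-slackness algorithms.

```python
import bisect

def max_weight_independent_intervals(intervals):
    """Maximum-weight set of pairwise disjoint intervals (the r = 1 case of the
    independent r-clique problem on an interval graph), by dynamic programming."""
    ivs = sorted(intervals, key=lambda iv: iv[1])      # sort by right endpoint
    finishes = [f for _, f, _ in ivs]
    best = [0.0] * (len(ivs) + 1)                      # best[i] = optimum over first i intervals
    for i, (s, f, wt) in enumerate(ivs):
        j = bisect.bisect_right(finishes, s, 0, i)     # intervals ending at or before s
        best[i + 1] = max(best[i], best[j] + wt)       # skip interval i, or take it
    return best[-1]

# intervals given as (start, finish, weight)
ivs = [(0, 3, 2.0), (2, 5, 4.0), (4, 7, 4.0), (6, 9, 7.0), (1, 8, 10.0)]
w_opt = max_weight_independent_intervals(ivs)
```

For general chordal graphs and r > 1 the problem needs the LP-based machinery of the paper, but this special case shows the min-max structure in its simplest form.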