相似文献
1.
Packing problems are NP-hard problems with several practical applications. A variant of a 2D Packing Problem (2DPP) was proposed in the GECCO 2008 competition session. In this paper, memetic algorithms (MAs) and hyperheuristics are applied to a multiobjectivised version of the 2DPP. Multiobjectivisation is the reformulation of a mono-objective problem as a multi-objective one; the main aim of multiobjectivising the 2DPP is to avoid stagnation in local optima. First-generation MAs are hybrid algorithms that combine a population-based global search with an individual learning process. A novel first-generation MA is proposed, and an original multiobjectivisation method is applied to the 2DPP. In addition, with the aim of simplifying the parameter setting of such first-generation MAs and of enabling their use in parallel environments, a parallel hyperheuristic is also applied. In particular, the method applied here is a hybrid approach that combines a parallel island-based model with a hyperheuristic. The main objective of this work is twofold: first, to analyse the advantages and drawbacks of a set of first-generation MAs; second, to attempt to avoid those drawbacks by applying a parallel hyperheuristic. Moreover, robustness and scalability analyses of the parallel scheme are included. Finally, our methods improve on the current best-known solutions for the tested instances of the 2DPP.
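The paper's concrete multiobjectivisation method is not reproduced in this abstract. As an illustration only, a common scheme pairs the original objective with an artificial diversity objective, so that individuals stuck in local optima can survive on the Pareto front. A minimal sketch for scalar individuals (all names are ours, not the authors'):

```python
def multiobjectivise(population, f):
    """Illustrative multiobjectivisation: pair the original objective f
    with an artificial diversity objective (distance to the nearest other
    individual), turning the mono-objective problem into a bi-objective one."""
    def diversity(i):
        return min(abs(population[i] - population[j])
                   for j in range(len(population)) if j != i)
    return [(f(x), diversity(i)) for i, x in enumerate(population)]

def pareto_front(scored):
    """Non-dominated individuals when minimising f and maximising diversity.
    A locally optimal individual can survive here through high diversity."""
    return [(fv, dv) for fv, dv in scored
            if not any(f2 <= fv and d2 >= dv and (f2, d2) != (fv, dv)
                       for f2, d2 in scored)]
```

Here the worst individual by fitness survives on the front because it is the most diverse, which is exactly the stagnation-avoidance effect the abstract describes.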

2.
DIRECT-type global optimization algorithms often spend an excessive number of function evaluations exploring suboptimal local minima on problems with many local optima, thereby delaying discovery of the global minimum. In this paper, a globally-biased simplicial partition algorithm, DISIMPL, for the global optimization of expensive Lipschitz-continuous functions with an unknown Lipschitz constant is proposed. A scheme for adaptively balancing local and global information during the search is introduced, implemented, experimentally investigated, and compared with the well-known DIRECT and DIRECTl methods. Extensive numerical experiments on 800 multidimensional multiextremal test functions show promising performance of the new acceleration technique with respect to its competitors.

3.
We introduce the WeightedGrammar constraint and propose propagation algorithms based on the CYK and Earley parsers. We show that the traces of these algorithms can be encoded in weighted negation normal form (WNNF), a generalization of NNF that allows nodes to carry weights. Based on this connection, we prove the correctness and complexity of these algorithms: they enforce domain consistency on the WeightedGrammar constraint in O(n^3) time. Further, we propose a decomposition of the WNNF constraint into a set of primitive arithmetic constraints that does not hinder propagation.
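The propagator itself is not included in the abstract. The flavour of a CYK-style computation over a weighted grammar can be conveyed by a minimum-weight CYK recognizer for a grammar in Chomsky normal form (an illustration of the underlying parsing machinery, not the paper's propagation algorithm):

```python
def weighted_cyk(word, unary, binary, start='S'):
    """Minimum-weight CYK parse for a grammar in Chomsky normal form.
    unary:  {(A, terminal): weight}   for rules A -> terminal
    binary: {(A, B, C): weight}       for rules A -> B C
    Returns the minimum total weight of a parse of `word` from `start`,
    or None if no parse exists."""
    n = len(word)
    # chart[i][j] maps nonterminal -> best weight for the span word[i..j]
    chart = [[{} for _ in range(n)] for _ in range(n)]
    for i, t in enumerate(word):
        for (A, term), w in unary.items():
            if term == t and w < chart[i][i].get(A, float('inf')):
                chart[i][i][A] = w
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):          # split point
                for (A, B, C), w in binary.items():
                    if B in chart[i][k] and C in chart[k + 1][j]:
                        cand = w + chart[i][k][B] + chart[k + 1][j][C]
                        if cand < chart[i][j].get(A, float('inf')):
                            chart[i][j][A] = cand
    return chart[0][n - 1].get(start)
```

The chart built here is exactly the trace structure that the paper encodes as a WNNF circuit.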

4.
Many optimization algorithms require gradients of the model functions, but computing accurate gradients can be computationally expensive. We study the implications of using inexact gradients in the context of the multilevel optimization algorithm MG/Opt. MG/Opt recursively uses (typically cheaper) coarse models to obtain search directions for finer-level models. However, MG/Opt requires the gradient on the fine level to define the recursion. Our primary focus here is the impact of gradient errors on the multilevel recursion. We analyze, partly through model problems, how MG/Opt is affected under various assumptions about the source of the error in the gradients, and demonstrate that in many cases the effect of the errors is benign. Computational experiments are included.
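The MG/Opt recursion is not shown in the abstract. The qualitative point that bounded gradient errors can be benign is visible even for plain gradient descent on a strongly convex function, which settles into a small neighbourhood of the optimum whose radius is proportional to the error bound (a toy illustration, not the MG/Opt algorithm):

```python
import random

def gd_inexact(grad, x0, step, iters, noise, rng):
    """Gradient descent where each gradient evaluation is corrupted by a
    bounded error drawn uniformly from [-noise, noise], mimicking the use
    of inexact gradients.  For a strongly convex objective the iterates
    contract toward the optimum until the error level dominates."""
    x = x0
    for _ in range(iters):
        g = grad(x) + rng.uniform(-noise, noise)
        x -= step * g
    return x
```

With step 0.1 on f(x) = (x - 3)^2 the contraction factor is 0.8, so the residual error stays below noise * step / (1 - 0.8), i.e. 0.005 here.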

5.
In this paper a new tool for the simultaneous optimisation of decisions on multiple time scales is presented. The tool combines the dynamic properties of Markov decision processes with the flexible and compact state-space representation of LImited Memory Influence Diagrams (LIMIDs). A temporal version of LIMIDs, TemLIMIDs, is defined by adding time-related functions to utility nodes. As a result, expected discounted utility, as well as expected relative utility, can be used as an optimisation criterion in TemLIMIDs. Optimisation proceeds as in ordinary LIMIDs. A sequence of such TemLIMIDs can be used to model a Markov LIMID Process, where each TemLIMID represents a macro action. Algorithms are presented to find optimal plans for a sequence of such macro actions. The use of these algorithms is illustrated with an extended version of an example from pig production originally used to introduce the LIMID concept.
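The temporal influence diagrams described above are not reproduced here, but the expected-discounted-utility criterion they optimise is the same one computed by classical value iteration on a Markov decision process (a generic MDP sketch, not the paper's algorithm; the data layout is our own):

```python
def value_iteration(P, R, gamma, iters):
    """Value iteration for expected discounted utility.
    P[a][s][t] is the probability of moving from state s to state t under
    action a; R[a][s] is the immediate utility; gamma is the discount factor."""
    n = len(next(iter(P.values())))     # number of states
    V = [0.0] * n
    for _ in range(iters):
        V = [max(R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(n))
                 for a in P)
             for s in range(n)]
    return V
```

For a single absorbing state with unit reward and gamma = 0.5, the value converges to 1 / (1 - 0.5) = 2.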

6.
We present an exact rational solver for mixed-integer linear programming that avoids the numerical inaccuracies inherent in the floating-point computations used by existing software. This allows the solver to be used for establishing theoretical results and in applications where correct solutions are critical due to legal and financial consequences. Our solver is a hybrid symbolic/numeric implementation of LP-based branch-and-bound, using numerically safe methods for all binding computations in the search tree. By computing provably accurate solutions and dynamically choosing the fastest of several safe dual bounding methods depending on the structure of the instance, our exact solver is only moderately slower than an inexact floating-point branch-and-bound solver. The software is incorporated into the SCIP optimization framework, using the exact LP solver QSopt_ex and the GMP arithmetic library. Computational results are presented for a suite of test instances taken from the MIPLIB and Mittelmann libraries and for a new collection of numerically difficult instances.
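The solver itself relies on QSopt_ex and GMP, but the idea of keeping all bounding computations exact can be illustrated on a much smaller scale with Python's `fractions.Fraction` arithmetic inside a toy branch-and-bound for 0/1 knapsack (our own miniature example, not the SCIP implementation):

```python
from fractions import Fraction

def knapsack_bb(values, weights, cap):
    """Branch-and-bound for 0/1 knapsack with an exact rational dual bound:
    the greedy fractional-knapsack relaxation, evaluated in Fraction
    arithmetic so pruning decisions are never corrupted by rounding."""
    items = sorted(range(len(values)),
                   key=lambda i: Fraction(values[i], weights[i]), reverse=True)
    best = 0

    def bound(idx, cap_left, val):
        b = Fraction(val)               # exact upper bound on this subtree
        for i in items[idx:]:
            if weights[i] <= cap_left:
                cap_left -= weights[i]
                b += values[i]
            else:
                b += Fraction(values[i] * cap_left, weights[i])
                break
        return b

    def rec(idx, cap_left, val):
        nonlocal best
        best = max(best, val)
        if idx == len(items) or bound(idx, cap_left, val) <= best:
            return                      # safe prune: the bound is exact
        i = items[idx]
        if weights[i] <= cap_left:
            rec(idx + 1, cap_left - weights[i], val + values[i])
        rec(idx + 1, cap_left, val)

    rec(0, cap, 0)
    return best
```

Because the bound is exact, a prune can never cut off the true optimum, which is the property the paper's safe dual bounding methods guarantee at LP scale.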

7.
Garbage collection (GC) algorithms play a key role in reducing the write amplification in flash-based solid state drives, where write amplification affects both the lifespan and the speed of the drive. This paper introduces a mean field model to assess the write amplification and the distribution of the number of valid pages per block for a class $\mathcal{C}$ of GC algorithms. Apart from the Random GC algorithm, class $\mathcal{C}$ includes two novel GC algorithms: the $d$-Choices GC algorithm, which selects $d$ blocks uniformly at random and erases the block containing the fewest valid pages among the $d$ selected blocks, and the Random++ GC algorithm, which repeatedly selects another block uniformly at random until it finds a block with a below-average number of valid pages. Using simulation experiments, we show that the proposed mean field model is highly accurate in predicting the write amplification (for drives with $N = 50{,}000$ blocks). We further show that the $d$-Choices GC algorithm has a write amplification close to that of the Greedy GC algorithm even for small $d$ values, e.g., $d = 10$, and offers a more attractive trade-off between simplicity and performance than the Windowed GC algorithm introduced and analyzed in earlier studies. The Random++ algorithm is shown to be less effective: it is inferior even to the FIFO algorithm when the number of pages $b$ per block is large (e.g., $b \ge 64$).
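The d-Choices selection rule is simple to state in code. Below is a sketch of the victim-selection step only; the surrounding drive simulation and the mean field model are omitted:

```python
import random

def d_choices_victim(valid_counts, d, rng):
    """d-Choices victim selection: sample d distinct blocks uniformly at
    random and return the index of the one holding the fewest valid pages.
    d = 1 reduces to Random GC; large d approaches Greedy GC.  Erasing a
    victim with j valid pages out of b relocates j pages, which is the
    source of write amplification."""
    candidates = rng.sample(range(len(valid_counts)), d)
    return min(candidates, key=lambda i: valid_counts[i])
```

The attraction reported in the abstract is that even d = 10 out of tens of thousands of blocks already finds victims nearly as empty as a full Greedy scan would.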

8.
9.
In this paper we propose a new simplicial partition-based deterministic algorithm for the global optimization of Lipschitz-continuous functions that requires no knowledge of the Lipschitz constant. Our algorithm is motivated by the well-known DIRECT algorithm, which evaluates the objective function on a set of points chosen to cover the most promising subregions of the feasible region. Almost all previous modifications of the DIRECT algorithm use hyper-rectangular partitions. However, other types of partitions may be more suitable for some optimization problems; in particular, simplicial partitions may be preferable when the initial feasible region is either already a simplex or may be covered by one or a manageable number of simplices. We therefore propose and investigate simplicial versions of the partition-based algorithm. In the simplicial case, the definition of a potentially optimal subregion cannot be the same as in the rectangular version, so we propose and investigate two definitions of potentially optimal simplices: one involves the function values at the vertices of the simplex, and the other uses the function value at its centroid. We compare the performance of the algorithms under the two definitions experimentally. The experimental investigation shows that the proposed simplicial algorithm is very competitive with the DIRECT algorithm on standard test problems and performs particularly well when the search space and the number of local and global optimizers can be reduced by taking into account symmetries of the objective function.
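The refinement primitive common to simplicial partition methods is longest-edge bisection, and the centroid mentioned above is one of the two candidate evaluation points. A minimal sketch for simplices given as lists of vertex tuples (an illustration of the partitioning primitives, not the paper's selection rule):

```python
from itertools import combinations

def centroid(simplex):
    """Centroid of a simplex given as a list of vertex tuples; one of the
    two candidate evaluation points discussed above."""
    k = len(simplex)
    dim = len(simplex[0])
    return tuple(sum(v[i] for v in simplex) / k for i in range(dim))

def bisect_longest_edge(simplex):
    """Split a simplex into two children by bisecting its longest edge,
    the usual refinement step in simplicial partition methods."""
    a, b = max(combinations(simplex, 2),
               key=lambda e: sum((p - q) ** 2 for p, q in zip(*e)))
    mid = tuple((p + q) / 2 for p, q in zip(a, b))
    return ([mid if v == a else v for v in simplex],
            [mid if v == b else v for v in simplex])
```

Both children share the new midpoint vertex, so function values can be reused across the partition tree.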

10.
RENS
This article introduces RENS, the relaxation enforced neighborhood search, a large neighborhood search algorithm for mixed integer nonlinear programs (MINLPs). It uses a sub-MINLP to explore the set of feasible roundings of an optimal solution $\bar{x}$ of a linear or nonlinear relaxation. The sub-MINLP is constructed by fixing the integer variables $x_j$ with $\bar{x}_j \in \mathbb{Z}$ and bounding the remaining integer variables to $x_j \in \{\lfloor \bar{x}_j \rfloor, \lceil \bar{x}_j \rceil\}$. We describe two different applications of RENS: as a standalone algorithm to compute an optimal rounding of a given starting solution, and as a primal heuristic inside a complete MINLP solver. We use the former to compare different kinds of relaxations and the impact of cutting planes on the so-called roundability of the corresponding optimal solutions. We further utilize RENS to analyze the performance of three rounding heuristics implemented in the branch-cut-and-price framework SCIP. Finally, we study the impact of RENS when it is applied as a primal heuristic inside SCIP. All experiments were performed on three publicly available test sets of mixed integer linear programs (MIPs), mixed integer quadratically constrained programs (MIQCPs), and MINLPs, using solely software that is available in source code. It turns out that for these problem classes 60 to 70% of the instances have roundable relaxation optima and that the success rate of RENS does not depend on the percentage of fractional variables. Last but not least, RENS applied as a primal heuristic complements the existing primal heuristics in SCIP nicely.
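The fixing-and-bounding step that defines the sub-problem can be sketched directly from the description above (function and variable names are ours):

```python
import math

def rens_bounds(relax_sol, integer_vars, eps=1e-6):
    """RENS-style restriction: fix each integer variable whose relaxation
    value is (near-)integral, and bound every other integer variable to the
    floor/ceiling of its relaxation value.  Returns (fixed, bounded)."""
    fixed, bounded = {}, {}
    for j in integer_vars:
        x = relax_sol[j]
        if abs(x - round(x)) <= eps:
            fixed[j] = round(x)                      # x̄_j is integral: fix it
        else:
            bounded[j] = (math.floor(x), math.ceil(x))  # explore both roundings
    return fixed, bounded
```

The resulting sub-problem's integer search space has size 2^k, where k is the number of fractional variables, which is why roundability rather than fractionality drives the success rate.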

11.
The Mesh Adaptive Direct Search (MADS) algorithm is designed for nonsmooth blackbox optimization problems in which the functions defining the problem are expensive to evaluate. MADS is not designed for problems with a large number of variables. The present paper uses a statistical tool based on variance decomposition to rank the relative importance of the variables. This statistical method is then coupled with the MADS algorithm so that the optimization is performed either in the entire space of variables or in subspaces associated with statistically important variables. The resulting algorithm, called STATS-MADS, is tested on bound-constrained test problems with up to 500 variables. The numerical results show a significant improvement in the objective function value after a fixed budget of function evaluations.
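The exact variance-decomposition statistic is not given in the abstract. A crude one-variable-at-a-time stand-in already shows how per-coordinate variance estimates yield a ranking of variables (an illustration only, not the statistic used by the paper):

```python
import random

def rank_variables(f, n, samples, rng):
    """Rank coordinates of an n-dimensional blackbox f by the sample
    variance of f when each coordinate is perturbed alone around a base
    point (here the origin), most influential first."""
    base = [0.0] * n
    scores = []
    for j in range(n):
        vals = []
        for _ in range(samples):
            x = list(base)
            x[j] = rng.uniform(-1.0, 1.0)   # perturb only coordinate j
            vals.append(f(x))
        m = sum(vals) / len(vals)
        scores.append(sum((v - m) ** 2 for v in vals) / len(vals))
    return sorted(range(n), key=lambda j: -scores[j])
```

Optimizing first in the subspace of top-ranked coordinates is the idea behind alternating full-space and subspace searches.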

12.
Even though OpenMath has been around for more than 10 years, there is still confusion about the "semantics of OpenMath". As the recent MathML3 recommendation semantically bases Content MathML on OpenMath Objects, this question becomes more pressing. One source of confusion about OpenMath semantics is that it is given on two levels: a very weak algebraic semantics for expression trees, which is extended by considering mathematical properties in content dictionaries that interpret the meaning of (constant) symbols. While this two-leveled way of interpreting objects is well understood in logic, it has not been spelt out rigorously for OpenMath. We present two denotational semantics for OpenMath: a construction-oriented semantics that achieves full coverage of all legal OpenMath expressions at the cost of great conceptual complexity, and a symbol-oriented one for a subset of OpenMath expressions. This subset is given by a variant of the OpenMath 2 role system which, we claim, does not exclude any representations of meaningful mathematical objects.

13.
In this article, we propose an algorithm, NESTA-LASSO, for the lasso problem, i.e., an underdetermined linear least-squares problem with a 1-norm constraint on the solution. We prove, under the assumption of the restricted isometry property (RIP) and a sparsity condition on the solution, that NESTA-LASSO is guaranteed to be almost always locally linearly convergent. As in the case of the algorithm NESTA, proposed by Becker, Bobin, and Candès, we rely on Nesterov's accelerated proximal gradient method, which takes $O(\sqrt{1/\varepsilon})$ iterations to come within $\varepsilon > 0$ of the optimal value. We introduce a modification to Nesterov's method that regularly updates the prox-center in a provably optimal manner; the aforementioned linear convergence is in part due to this modification. In the second part of this article, we attempt to solve the basis pursuit denoising (BPDN) problem (i.e., approximating the minimum 1-norm solution to an underdetermined least-squares problem) by using NESTA-LASSO in conjunction with the Pareto root-finding method employed by van den Berg and Friedlander in their SPGL1 solver. The resulting algorithm is called PARNES. We provide numerical evidence to show that it is comparable to currently available solvers.
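The paper's algorithm uses Nesterov's accelerated method with prox-center updates; its unaccelerated cousin, plain proximal gradient (ISTA) with the soft-thresholding prox of the 1-norm, already shows the structure, here on the penalised form of the problem (a sketch, not the paper's algorithm):

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of t * |.|."""
    return max(abs(v) - t, 0.0) * (1 if v > 0 else -1 if v < 0 else 0)

def ista_lasso(A, b, lam, step, iters):
    """Proximal gradient (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1,
    with A, b as plain nested lists.  Each iteration takes a gradient step
    on the smooth part and then soft-thresholds coordinate-wise."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

On an identity design the threshold lam kills any coefficient whose data value is smaller than lam, illustrating the sparsity the method exploits.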

14.
LIM is not slim
In this paper LIM, a recently proposed impartial combinatorial ruleset, is analyzed. A formula describing the $\mathcal{G}$-values of LIM positions is given, obtained by analyzing an equivalent combinatorial ruleset LIM', closely related to classical Nim. An enumeration of the $\mathcal{P}$-positions of LIM with $n$ stones, and its relation to the Ulam-Warburton cellular automaton, is also presented.
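For the closely related classical Nim, G-values can be computed from first principles via the minimum excludant (mex) over the options of a position; for Nim the result coincides with the XOR of the heap sizes. A standard Sprague-Grundy sketch (not the paper's LIM formula):

```python
from functools import lru_cache

def mex(s):
    """Minimum excludant: the least non-negative integer not in s."""
    g = 0
    while g in s:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy_nim(heaps):
    """Sprague-Grundy value of a Nim position given as a sorted tuple of
    heap sizes.  A move removes any positive number of stones from one heap.
    P-positions are exactly those with Grundy value 0."""
    options = set()
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            nxt = list(heaps)
            nxt[i] = h - take
            options.add(grundy_nim(tuple(sorted(nxt))))
    return mex(options)
```

The same mex-over-options recursion applies to any impartial ruleset such as LIM; only the move generator changes.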

15.
Can stochastic search algorithms outperform existing deterministic heuristics for the NP-hard problem Number Partitioning if given a sufficient, but practically realizable, amount of time? In a thorough empirical investigation using a straightforward implementation of one such algorithm, simulated annealing, Johnson et al. (Ref. 1) concluded tentatively that the answer is negative. In this paper, we show that the answer can be positive if attention is devoted to the issue of problem representation (encoding). We present results from empirical tests of several encodings of Number Partitioning with problem instances consisting of multiple-precision integers drawn from a uniform probability distribution. With these instances and with an appropriate choice of representation, stochastic and deterministic searches can, routinely and in a practical amount of time, find solutions several orders of magnitude better than those constructed by the best heuristic known (Ref. 2), which does not employ searching.

16.
In deterministic continuous constrained global optimization, upper bounding the objective function generally resorts to local minimization at several nodes/iterations of the branch and bound. In this paper we propose an alternative approach for the case where the constraints are inequalities and the feasible space has non-null volume. First, we extract an inner region, i.e., an entirely feasible convex polyhedron or box in which all points satisfy the constraints. Second, we select a point inside the extracted inner region and update the upper bound with its cost. We describe two original inner region extraction algorithms implemented in our interval branch-and-bound solver IbexOpt (AAAI, pp. 99-104, 2011). They apply to nonconvex constraints involving mathematical operators such as +, ·, /, power, sqrt, exp, log, and sin. This upper bounding procedure shows very good performance on medium-sized systems proposed in the COCONUT suite.
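The inner-region idea can be illustrated with a toy interval check: a box is entirely feasible when the interval evaluation of every constraint g(x) <= 0 has a non-positive upper bound, and then any point of the box, e.g. its midpoint, yields a valid upper bound on the global minimum. The interval evaluation below is hand-coded for one monotone constraint; IbexOpt's extraction algorithms are far more general:

```python
def box_feasible(g_intervals, box):
    """A box is an inner region if, for every constraint g(x) <= 0, the
    interval evaluation of g over the box has a non-positive upper bound.
    Each g takes the box and returns an enclosure (lo, hi) of its range."""
    return all(g(box)[1] <= 0 for g in g_intervals)

def upper_bound_from_box(f, box):
    """Evaluate the objective at the midpoint of a feasible box; since the
    point is guaranteed feasible, its cost is a valid upper bound."""
    mid = [(lo + hi) / 2 for lo, hi in box]
    return f(mid)

# Example constraint x^2 + y^2 - 1 <= 0.  This hand-coded interval
# evaluation is valid only for boxes in the non-negative quadrant, where
# both squares are increasing.
def g_disc(box):
    (xl, xu), (yl, yu) = box
    return (xl ** 2 + yl ** 2 - 1, xu ** 2 + yu ** 2 - 1)
```

No local minimization is needed: the upper bound costs one objective evaluation once the inner box is found.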

17.
We introduce and study a notion of 'Sasaki with torsion' (ST) structure as an odd-dimensional analogue of Kähler with torsion (KT) geometry. These are normal almost contact metric manifolds that admit a unique compatible connection with 3-form torsion. Any odd-dimensional compact Lie group is shown to admit such a structure; in this case the structure is left-invariant and has closed torsion form. We illustrate the relation between ST structures and other generalisations of Sasaki geometry, and we explain how some standard constructions in Sasaki geometry can be adapted to this setting. In particular, we relate the ST structure to a KT structure on the space of leaves and show that both the cylinder and the cone over an ST manifold are KT, although only the cylinder behaves well with respect to closedness of the torsion form. Finally, we introduce a notion of 'G-moment map'. We provide criteria based on equivariant cohomology ensuring the existence of these maps and then apply them as a tool for reducing ST structures.

18.
Let G be a Vilenkin group (i.e., an infinite, compact, metrizable, zero-dimensional Abelian group). Our main result is a factorization theorem for functions in the Lipschitz spaces $\mathfrak{Lip}(\alpha, p; G)$. As corollaries of this theorem, we obtain (i) an extension of a factorization theorem of Y. Uno; (ii) a convolution formula which says that $\mathfrak{Lip}(\alpha, r; G) = \mathfrak{Lip}(\beta, 1; G) * \mathfrak{Lip}(\alpha - \beta, r; G)$ for 0 &lt; β &lt; α &lt; ∞ and 1 ≤ r ≤ ∞; and (iii) an analogue, valid for all G, of a classical theorem of Hardy and Littlewood. We also present several results on the absolute convergence of Fourier series defined on G, extending a theorem of C. W. Onneweer and four results of N. Ja. Vilenkin and A. I. Rubinshtein. The four Vilenkin-Rubinshtein results are analogues of classical theorems due, respectively, to O. Szász, S. B. Bernshtein, A. Zygmund, and G. G. Lorentz.

19.
Of key importance in convex analysis and optimization is the notion of duality, and in particular that of Fenchel duality. This work explores improvements to existing algorithms for the symbolic calculation of subdifferentials and Fenchel conjugates of convex functions defined on the real line. More importantly, these algorithms are extended to enable the symbolic calculation of Fenchel conjugates on a class of real-valued functions defined on $\mathbb{R}^n$. These algorithms are realized in the form of the Maple package SCAT.
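SCAT computes conjugates symbolically; numerically, the definition f*(s) = sup_x (s·x - f(x)) can be approximated directly on a grid of sample points (a discretised illustration of the object being computed, not the symbolic algorithm):

```python
def fenchel_conjugate(f, xs):
    """Discrete approximation of the Fenchel conjugate
    f*(s) = sup_x (s * x - f(x)), with the supremum taken over the finite
    sample points xs instead of all of R."""
    return lambda s: max(s * x - f(x) for x in xs)
```

For f(x) = x^2 / 2 the conjugate is f*(s) = s^2 / 2, which the grid approximation recovers whenever the true maximizer x = s lies on the grid.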

20.
Using Heijenoort's unpublished generalized rules of quantification, we discuss the proof of Herbrand's Fundamental Theorem in the form of Heijenoort's correction of Herbrand's "False Lemma", and present a didactic example. Although we are mainly concerned with the inner structure of Herbrand's Fundamental Theorem and the questions of its quality and its depth, we also discuss the outer questions of its historical context and why Bernays called it "the central theorem of predicate logic" and considered the form of its expression to be "concise and felicitous".
