Similar Documents
20 similar documents found.
1.
In previous work (Krasnogor, Studies on the Theory and Design Space of Memetic Algorithms, Ph.D. thesis, University of the West of England, Bristol, UK, 2002; Krasnogor and Smith, IEEE Trans Evol Algorithms 9(6):474–488, 2005) we developed a syntax-only classification of evolutionary algorithms, in particular the so-called memetic algorithms (MAs). When “syntactic sugar” is added to our model, we are able to investigate the polynomial local search (PLS) complexity of memetic algorithms. In this paper we show the PLS-completeness of whole classes of problems that arise when memetic algorithms are applied to the travelling salesman problem using a range of mutation, crossover and local search operators. Our PLS-completeness results shed light on the worst-case behaviour that can be expected of a memetic algorithm under these circumstances. Moreover, we point out that memetic algorithms for graph partitioning and maximum network flow (both with important practical applications) also give rise to PLS-complete problems.
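As a concrete illustration of the kind of local search operator whose PLS complexity this line of work studies, here is a minimal 2-opt local search for the travelling salesman problem. This is a generic sketch, not the paper's specific operator set; all names are illustrative.

```python
def tour_length(tour, dist):
    """Total length of a closed tour under the distance matrix `dist`."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour, dist):
    """Greedy 2-opt local search: reverse a segment while an improving
    move exists; returns a 2-opt local optimum."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # Skip j = n-1 when i = 0: those two edges share a city.
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Gain of replacing edges (a,b),(c,d) by (a,c),(b,d).
                delta = (dist[a][c] + dist[b][d]) - (dist[a][b] + dist[c][d])
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

The PLS-completeness results concern exactly this improvement loop: they bound what can be guaranteed, in the worst case, about the number of iterations before such a loop reaches a local optimum.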


2.
A recent trend in local search concerns the exploitation of several different neighborhoods so as to increase the ability of the algorithm to navigate the search space. In this work we investigate a hybridization technique, which we call the Neighborhood Portfolio Approach, that consists in interleaving local search techniques based on various combinations of neighborhoods. In particular, we are able to select the most effective search technique through a systematic analysis of all meaningful combinations built upon a set of basic neighborhoods. The proposed approach is applied to two practical problems belonging to the timetabling family, and systematically tested and compared on real-world instances. The experimental analysis shows that our approach leads to the automatic design of new algorithms that provide better results than basic local search techniques.
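A minimal sketch of one way to interleave neighborhoods, in the spirit of the portfolio idea described above (essentially a variable-neighborhood-descent composition; the function names and the restart policy are illustrative, not the paper's exact scheme):

```python
def portfolio_search(solution, cost, neighborhoods):
    """Interleave several neighborhoods: try each in turn and, on an
    improvement, restart from the first. Stops at a solution that is
    locally optimal for every neighborhood in the portfolio.
    Each element of `neighborhoods` maps a solution to a candidate
    neighbor (or None if the neighborhood is exhausted)."""
    current, current_cost = solution, cost(solution)
    i = 0
    while i < len(neighborhoods):
        neighbor = neighborhoods[i](current)
        if neighbor is not None and cost(neighbor) < current_cost:
            current, current_cost = neighbor, cost(neighbor)
            i = 0  # an improvement re-activates the whole portfolio
        else:
            i += 1  # no gain here: move on to the next neighborhood
    return current, current_cost
```

The paper's contribution is the systematic part: enumerating the meaningful combinations of basic neighborhoods and selecting, experimentally, the composition that performs best.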

3.
This paper examines the complexity of global verification for MAX-SAT, MAX-k-SAT (for k ≥ 3), Vertex Cover, and the Traveling Salesman Problem. These results are obtained by adaptations of the transformations that prove such problems to be NP-complete. The class PGS is defined as those discrete optimization problems for which there exists a polynomial time algorithm such that, given any candidate solution s, either a solution with a better objective function value can be found or it can be concluded that no such solution exists and s is a global optimum. This paper demonstrates that if any one of MAX-SAT, MAX-k-SAT (for k ≥ 3), Vertex Cover, or the Traveling Salesman Problem is in PGS, then P = NP.

4.
A parameterized computational problem is a set of pairs (x, k), where k is a distinguished item called the “parameter”. FPT is the class of fixed-parameter tractable problems: for any fixed value of k, they are solvable in time bounded by a polynomial of degree α, where α is a constant not dependent on the parameter. In order to deal with parameterized intractability, Downey and Fellows have introduced a hierarchy of classes W[1] ⊆ W[2] ⊆ ⋯ containing likely intractable parameterized problems, and they have shown that these classes have many natural, complete languages. In this paper we analyze several variations of the halting problem for nondeterministic Turing machines with parameterized time, and we show that its parameterized complexity strongly depends on resources like the number of tapes, heads, and internal states, and on the size of the alphabet. Notice that classical polynomial-time complexity fails to distinguish such features. As byproducts, we show that parameterized complexity is a useful tool for the study of the intrinsic power of some computational models, and we underline the different “computational powers” of some levels of the parameterized hierarchy.

5.
We study the complexity of a noninterior path-following method for the linear complementarity problem. The method is based on the Chen–Harker–Kanzow–Smale smoothing function. It is assumed that the matrix M is either a P-matrix or symmetric and positive definite. When M is a P-matrix, the algorithm is shown to find a solution satisfying Mx − y + q = 0 to within a prescribed accuracy ε > 0 in a bounded number of Newton iterations; the bound involves constants (among them µ0) that depend on the initial point and a quantity l(M) that depends on M. When M is symmetric and positive definite, the complexity bound is expressed in terms of the smallest and largest eigenvalues of M.
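For concreteness, the Chen–Harker–Kanzow–Smale smoothing function has the standard componentwise form φ_µ(a, b) = a + b − √((a − b)² + 4µ²), whose zero set approaches the complementarity set as µ → 0. The sketch below shows this function and the smoothed residual that a noninterior path-following method drives to zero; the Newton machinery and the paper's specific constants are omitted.

```python
import numpy as np

def chks(a, b, mu):
    """Chen-Harker-Kanzow-Smale smoothing function, componentwise.
    chks(a, b, mu) = 0  iff  a > 0, b > 0 and a*b = mu**2, so as
    mu -> 0 its zeros approach {a >= 0, b >= 0, a*b = 0}."""
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2)

def lcp_residual(x, y, M, q, mu):
    """Smoothed residual for the LCP: y = M x + q with x, y complementary.
    A noninterior method applies Newton steps to this system while
    driving the smoothing parameter mu to zero."""
    return np.concatenate([M @ x + q - y, chks(x, y, mu)])
```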

6.
7.
We consider the problem of minimizing an SC1 function subject to inequality constraints. We propose a local algorithm whose distinguishing features are: (a) a fast convergence rate is achieved under reasonable assumptions that do not include strict complementarity at the solution; (b) only linear systems need to be solved at each iteration; (c) all the points generated are feasible. After analyzing a basic Newton algorithm, we propose some variants aimed at reducing the computational cost and, in particular, we consider a quasi-Newton version of the algorithm.

8.
A general algorithmic scheme for solving inclusions in a Banach space is investigated with respect to its local convergence behavior. Particular emphasis is placed on applications to certain proximal-point-type algorithms in Hilbert spaces. The assumptions do not necessarily require that a solution be isolated. In this way, results existing for the case of a locally unique solution can be extended to cases with nonisolated solutions. Besides convergence rates for the distance of the iterates to the solution set, strong convergence to a sole solution is shown as well. As one particular application of the framework, an improved convergence rate for an important case of the inexact proximal-point algorithm is derived.
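For readers unfamiliar with the underlying iteration: the proximal-point algorithm solves, at each step, a regularized subproblem x_{k+1} = argmin_x f(x) + ‖x − x_k‖²/(2λ). A minimal sketch for a quadratic f, where the subproblem reduces to a linear system, is below; the quadratic specialization and all names are illustrative, not the paper's general Banach-space framework.

```python
import numpy as np

def prox_point_quadratic(A, b, x0, lam=1.0, tol=1e-8, max_iter=500):
    """Proximal-point iteration for f(x) = 0.5 x^T A x - b^T x with A
    symmetric positive semidefinite. Each step solves
        (A + I/lam) x_{k+1} = b + x_k / lam,
    which is the optimality condition of the regularized subproblem.
    The iteration is well defined even when A is singular, i.e. when
    the solution set is nonisolated -- the setting of the paper."""
    x = x0.astype(float)
    K = A + np.eye(len(x0)) / lam
    for _ in range(max_iter):
        x_new = np.linalg.solve(K, b + x / lam)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```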

9.
The K-Constraint Multiple Knapsack Problem (K-MKP) is a generalization of the multiple knapsack problem, one of the representative combinatorial optimization problems known to be NP-hard. In K-MKP, each item has K types of weights and each knapsack has K types of capacity. In this paper, we propose several very large-scale neighborhood search (VLSN) algorithms to solve K-MKP. One of the VLSN algorithms incorporates a novel approach that consists in randomly perturbing the current solution in order to efficiently produce a set of simultaneous non-profitable moves. These moves transfer several items from their current knapsacks to new knapsacks, which makes room for new items to be inserted through multi-exchange movements and allows for improved solutions. Computational results show that the method is effective and provides better solutions than exact algorithms run for the same amount of time. This paper was written during Dr. Cunha's sabbatical at the Industrial and Systems Engineering Department at the University of Florida, Gainesville, as a visiting faculty member.
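A toy sketch of the perturbation idea, reduced to the plain multiple knapsack problem (K = 1) for brevity: eject a few packed items (deliberately non-profitable moves that free capacity), then greedily repack. This follows the spirit of the abstract, not the paper's actual multi-exchange neighborhood; the data layout and names are illustrative, and positive weights are assumed.

```python
import random

def perturb_and_reinsert(assign, profits, weights, caps, n_eject=3):
    """One perturbation move: remove `n_eject` random packed items,
    then greedily repack all unpacked items by profit density.
    `assign[i]` is the knapsack index of item i, or None if unpacked."""
    load = [0.0] * len(caps)
    for i, k in enumerate(assign):
        if k is not None:
            load[k] += weights[i]
    packed = [i for i, k in enumerate(assign) if k is not None]
    for i in random.sample(packed, min(n_eject, len(packed))):
        load[assign[i]] -= weights[i]
        assign[i] = None                      # non-profitable ejection
    unpacked = [i for i, k in enumerate(assign) if k is None]
    unpacked.sort(key=lambda i: profits[i] / weights[i], reverse=True)
    for i in unpacked:                        # greedy repair
        for k in range(len(caps)):
            if load[k] + weights[i] <= caps[k]:
                assign[i], load[k] = k, load[k] + weights[i]
                break
    return assign
```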

10.
Two special cases of the Minimum Committee Problem are studied: the Minimum Committee Problem of Finite Sets (MCFS) and the Minimum Committee Problem of a System of Linear Inequalities (MCLE). It is known that the first of these problems is NP-hard (see Mazurov et al., Proc. Steklov Inst. Math. 1:67–101, 2002). In this paper we show the NP-hardness of two integer optimization problems connected with it. In addition, we analyze the hardness of approximating the MCFS problem. In particular, we show that, unless NP ⊆ TIME(n^{O(log log n)}), for every ε > 0 there is no approximation algorithm for this problem with approximation ratio (1 − ε)ln(m − 1), where m is the number of inclusions in the MCFS problem. To prove this bound we use the SET COVER problem, for which a similar result is known (Feige, J. ACM 45:634–652, 1998). We also show that the MCLE problem is NP-hard and consider an approximation algorithm for it.

11.
The problem of estimating the global optimal values of intractable combinatorial optimization problems is of interest to researchers developing and evaluating heuristics for these problems. In this paper we present a method for combining statistical optimum prediction techniques with local search methods such as simulated annealing and tabu search and illustrate the approach on a single machine scheduling problem. Computational experiments show that the approach yields useful estimates of optimal values with very reasonable computational effort.
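As an illustration of the local-search half of this approach, here is a minimal simulated-annealing routine for one common single machine objective, total weighted tardiness; the objective, move type, and cooling schedule are illustrative, and the statistical optimum-prediction step that the paper combines with such runs is not shown.

```python
import math
import random

def weighted_tardiness(seq, p, d, w):
    """Total weighted tardiness of a job sequence, given processing
    times p, due dates d and weights w."""
    t = total = 0.0
    for j in seq:
        t += p[j]
        total += w[j] * max(0.0, t - d[j])
    return total

def anneal(p, d, w, temp=100.0, cooling=0.995, steps=20000):
    """Simulated annealing over job sequences using adjacent-swap moves."""
    seq = list(range(len(p)))
    cost = weighted_tardiness(seq, p, d, w)
    best, best_cost = seq[:], cost
    for _ in range(steps):
        i = random.randrange(len(seq) - 1)
        seq[i], seq[i + 1] = seq[i + 1], seq[i]       # propose adjacent swap
        new_cost = weighted_tardiness(seq, p, d, w)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                           # accept the move
            if cost < best_cost:
                best, best_cost = seq[:], cost
        else:
            seq[i], seq[i + 1] = seq[i + 1], seq[i]   # reject: undo the swap
        temp *= cooling
    return best, best_cost
```

The terminal values of many such runs are what the statistical prediction techniques extrapolate into an estimate of the (unknown) global optimum.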

12.
Most writers on frequency assignment algorithms have described the details of a single algorithm and evaluated it on selected data sets. There has been relatively little emphasis on describing the common features that are important if an algorithm is to have good performance. This paper describes those key features, with particular emphasis on algorithms for weighted fixed-spectrum problems. The use of algorithms handling weighted constraints has become increasingly common in recent years. The advantages and disadvantages of weighting constraints are demonstrated.

13.
Let c_n(V) be the sequence of codimension growth for a variety V of associative algebras. We study the complexity function C(V, z), the exponential generating function for the sequence of codimensions. Earlier, complexity functions were used to study varieties of Lie algebras. The objective of this note is to start a systematic investigation of complexity functions in the associative case. These functions turn out to be a useful tool for studying the growth of varieties over a field of arbitrary characteristic. In the present note, the Schreier formula for the complexity functions of one-sided ideals of a free associative algebra is found. This formula is applied to the study of products of T-ideals. An exact formula is obtained for the complexity function of the variety U_c of associative algebras generated by the algebra of upper triangular matrices, and it is proved that this function is a quasi-polynomial. The complexity functions for proper identities are investigated. The results on complexity functions are applied to study the asymptotics of codimension growth. Analogies between the complexity functions of varieties and the Hilbert–Poincaré series of finitely generated algebras are traced.
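In standard notation, the complexity function referred to is the exponential generating function of the codimension sequence; a reconstruction of the defining formula, consistent with the description above:

```latex
% Complexity function of a variety V with codimension sequence c_n(V):
\mathcal{C}(V, z) \;=\; \sum_{n=0}^{\infty} c_n(V)\,\frac{z^n}{n!}
```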

14.
Journal of Complexity 10(2):199–215, 1994
We consider two hybrid algorithms for finding an ϵ-approximation of a root of a convex real function that is twice differentiable and satisfies a certain growth condition on the interval [0, R]. The first algorithm combines a binary search procedure with Newton's method. The binary search produces an interval contained in the region of quadratic convergence of Newton's method. The computational cost of the binary search, as well as that of Newton's method, is of order O(log log(R/ϵ)). The second algorithm combines a binary search with the secant method in a similar fashion. This results in a lower overall computational cost when the cost of evaluating the derivative is more than 0.44042 times the cost of evaluating the function. Our results generalize some recent results of Ye.
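A minimal sketch of the first hybrid scheme as described: a binary search narrows the bracket until it plausibly lies in Newton's region of quadratic convergence, then Newton finishes. The switching width is a tunable assumption here, not the paper's precise criterion.

```python
def hybrid_root(f, df, lo, hi, eps, switch_width=1e-2):
    """Find an eps-approximation of a root of f on [lo, hi], assuming
    f(lo) <= 0 <= f(hi). Phase 1 is binary search down to `switch_width`;
    phase 2 is Newton's method started from the bracket midpoint."""
    while hi - lo > switch_width:             # phase 1: binary search
        mid = 0.5 * (lo + hi)
        if f(mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)                       # phase 2: Newton's method
    while abs(f(x)) > eps:
        x -= f(x) / df(x)
    return x

# Example: the root of x**2 - 2 on [0, 2] to accuracy 1e-12.
root = hybrid_root(lambda x: x * x - 2.0, lambda x: 2.0 * x, 0.0, 2.0, 1e-12)
```

The secant variant replaces the Newton update with a secant step, trading derivative evaluations for extra function evaluations, which is what drives the 0.44042 cost threshold quoted above.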

15.
For solving linear programming problems, we describe and evaluate descent directions for projective methods in the setting of the method of de Ghellinck and Vial (1986). It is shown that choosing search directions from the projected simplex centered at 1 in the transformed space guarantees a polynomial time bound. In particular, the projection of 1 coincides with the direction chosen by de Ghellinck and Vial (1986), which is well known to be equivalent to Karmarkar's search direction when the initial solution is feasible. Close to that search direction the complexity of the method is unchanged [O(nL) iterations], while in general the bound is O(n²·L), where L is the size of the linear programming problem. The investigations were not made in order to improve worst-case bounds, but rather to enable a free choice of search directions from a quite large class of polynomial directions. We show how different directions can be computed in one iteration from the projection of 1 with relatively small extra computational effort. In projective methods, linear programming is reformulated as a one-parametric feasibility problem, where the parameter is the currently known upper bound on the optimal value. It is therefore very important to improve that bound whenever possible. Generalizing bounds from de Ghellinck and Vial (1986), we prove some infeasibility criteria. In particular, any computed search direction yields an upper bound that may improve the one currently used. All bounds can be computed by solving sets of simple linear and quadratic equations.

16.
The cost of solving an initial value problem for ordinary differential equations to accuracy ε is polynomial in ln(1/ε). Adaptive step-size control is never theoretically more costly than fixed step-size control, and can be an unbounded factor less costly. These results contradict the standard theory, but are based on more realistic assumptions.
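A toy illustration of the adaptive step-size control being compared here: step-doubling error estimation with Euler's method, halving the step when the local error estimate exceeds the tolerance and growing it when comfortably below. The method, tolerances, and growth factors are illustrative, not those analyzed in the paper.

```python
def euler_adaptive(f, t0, y0, t_end, tol=1e-6, h=0.1):
    """Integrate y' = f(t, y) with Euler steps; estimate the local error
    by comparing one step of size h with two steps of size h/2."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_big = y + h * f(t, y)                        # one full step
        y_half = y + 0.5 * h * f(t, y)                 # two half steps
        y_small = y_half + 0.5 * h * f(t + 0.5 * h, y_half)
        err = abs(y_small - y_big)                     # local error estimate
        if err <= tol:
            t, y = t + h, y_small                      # accept finer result
            if err < 0.25 * tol:
                h *= 2.0                               # roomy: grow the step
        else:
            h *= 0.5                                   # too large: retry smaller
    return y

# Example: y' = -y, y(0) = 1, integrated to t = 1 (exact value exp(-1)).
approx = euler_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0)
```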

17.
The graph coloring problem is to color a given graph with the minimum number of colors. This problem is known to be NP-hard even if we are only aiming at approximate solutions. On the other hand, the best known approximation algorithms require n^δ (δ > 0) colors even for bounded-chromatic (k-colorable for fixed k) n-vertex graphs. The situation changes dramatically if we look at the average performance of an algorithm rather than its worst-case performance. A k-colorable graph drawn from certain classes of distributions can be k-colored almost surely in polynomial time. It is also possible to k-color such random graphs in polynomial average time. In this paper, we present polynomial time algorithms for k-coloring graphs drawn from the semirandom model. In this model, the graph is supplied by an adversary, each of whose decisions regarding inclusion of edges is reversed with some probability p. In terms of randomness, this model lies between the worst-case model and the usual random model, where each edge is chosen with equal probability. We present polynomial time algorithms of two different types. The first type always runs in polynomial time and succeeds almost surely. Blum and Spencer [J. Algorithms 19:204–234 (1995)] have independently obtained such algorithms, but our results are based on different proof techniques which are interesting in their own right. The second type always succeeds and has polynomial running time on the average. Such algorithms are more useful and more difficult to obtain than the first type. Our algorithms work for semirandom graphs drawn from a wide range of distributions, as long as p ≥ n^{−α(k)+ϵ}, where α(k) = 2k/((k − 1)(k + 2)) and ϵ is a positive constant. © 1998 John Wiley & Sons, Inc. Random Struct. Alg., 13, 125–158 (1998)

18.
A method of robust estimation of multivariate location and shape that has attracted a lot of attention recently is Rousseeuw's minimum volume ellipsoid estimator (MVE). This estimator has a high breakdown point but is difficult to compute successfully. In this article, we apply methods of heuristic search to this problem, including simulated annealing, genetic algorithms, and tabu search, and compare the results to the undirected random search algorithm that is often cited. Heuristic search provides several effective algorithms that are far more computationally efficient than random search. Furthermore, random search, as currently implemented, is shown to be ineffective for larger problems.
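A compact sketch of one heuristic-search formulation of the MVE problem: score an h-point subset by the log-volume of the ellipsoid shaped by the subset's covariance and inflated to cover h of the n points, and improve the subset by random single-point swaps. The scoring follows a common formulation of the MVE objective; this plain hill-climbing baseline is illustrative — the article's heuristics (annealing, genetic algorithms, tabu search) explore the same subset space with more sophisticated acceptance rules.

```python
import math
import random
import numpy as np

def log_mve_volume(X, idx, h):
    """Log-volume (up to additive constants) of the ellipsoid with the
    shape of cov(X[idx]), inflated until it covers h points of X.
    Assumes len(idx) > X.shape[1] so the covariance is nonsingular."""
    S = np.cov(X[idx].T)
    mu = X[idx].mean(axis=0)
    d2 = np.einsum('ij,ij->i', (X - mu) @ np.linalg.inv(S), X - mu)
    m2 = np.sort(d2)[h - 1]                 # squared inflation factor
    p = X.shape[1]
    return 0.5 * np.linalg.slogdet(S)[1] + 0.5 * p * math.log(m2)

def improve_mve(X, h, steps=5000):
    """Local descent over h-subsets: accept a random single-point swap
    only when it shrinks the covering ellipsoid's volume."""
    n = X.shape[0]
    idx = random.sample(range(n), h)
    cost = log_mve_volume(X, idx, h)
    for _ in range(steps):
        cand = idx[:]
        cand[random.randrange(h)] = random.choice(
            [j for j in range(n) if j not in idx])   # swap one point
        c = log_mve_volume(X, cand, h)
        if c < cost:
            idx, cost = cand, c
    return idx, cost
```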

19.
In this paper, we consider the problem of developing efficient and optimal parallel algorithms for Cholesky decomposition. We design our algorithms based on different data layouts and methods. We theoretically analyze the run-time of each algorithm. In order to determine the optimality of the algorithms designed, we derive theoretical lower bounds on running time based on the initial data layout and compare them against the algorithmic run-times. To address portability, we design our algorithms and perform complexity analysis on the LogP model. Lastly, we implement our algorithms and analyze performance data.
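As a baseline for what is being parallelized, a minimal sequential, column-oriented Cholesky factorization is sketched below; the parallel algorithms in the paper distribute exactly these column updates across processors under different data layouts, with costs accounted for in the LogP model.

```python
import numpy as np

def cholesky(A):
    """Column-oriented Cholesky factorization: returns lower-triangular L
    with A = L @ L.T, for a symmetric positive definite matrix A."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Diagonal entry: remove contributions of previously computed columns.
        L[j, j] = np.sqrt(A[j, j] - np.dot(L[j, :j], L[j, :j]))
        # Subdiagonal entries of column j.
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L
```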

20.
Evolutionary algorithms working on continuous search spaces can be regarded as general homogeneous Markov chains. The finite-space problem of describing the transition matrix turns into the more complicated problem of defining and estimating a transition function. We analyze in this respect the (1+1) evolutionary algorithm on the inclined plane and corridor models. In the first case, the probability of maximal success in n iterations is derived in closed form under uniform mutation. For the second case, the algorithm is proved to escape the corner of the corridor exponentially fast, for both uniform and normal mutation.
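A minimal sketch of the algorithm analyzed: the (1+1) evolutionary algorithm keeps a single parent and accepts a mutated offspring only if it is at least as fit. Uniform mutation in a box and the linear "inclined plane" fitness below are illustrative stand-ins for the models in the paper.

```python
import random

def one_plus_one_ea(fitness, x0, radius=0.5, iters=1000):
    """(1+1) EA on a continuous search space: uniform mutation in a box
    of the given radius, elitist (non-worsening) acceptance."""
    x = list(x0)
    for _ in range(iters):
        y = [xi + random.uniform(-radius, radius) for xi in x]
        if fitness(y) >= fitness(x):   # keep the offspring only if no worse
            x = y
    return x

# Inclined-plane model: fitness grows linearly along the first coordinate.
best = one_plus_one_ea(lambda v: v[0], [0.0, 0.0])
```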
