Similar Articles
20 similar articles found.
1.
We present a new generic minimum cross-entropy method, called the semi-iterative MinxEnt, or simply SME, for rare-event probability estimation, counting, and approximation of the optimal solutions of a broad class of NP-hard linear integer and combinatorial optimization problems (COPs). The main idea of our approach is to associate with each original problem an auxiliary single-constrained convex MinxEnt program of a special type, which has a closed-form solution. We prove that the optimal pdf obtained from the solution of such a specially designed MinxEnt program is a zero-variance pdf, provided the "temperature" parameter is set to minus infinity. In addition, we prove that the parametric pdf based on the product of marginals obtained from the optimal zero-variance pdf coincides with the parametric pdf of the standard cross-entropy (CE) method. Thus, although originally designed at the end of the 1990s as a heuristic for estimating rare events and solving COPs, CE has a strong connection with MinxEnt and hence a strong mathematical foundation. The crucial difference between the proposed SME method and its standard CE counterparts lies in their simulation-based versions: the latter always requires generating (via Monte Carlo) a sequence of tuples comprising the temperature parameter and the parameter vector of the optimal marginal pdfs, whereas the former fixes the temperature parameter in advance (set to a large negative number) and then generates (via Monte Carlo) a sequence of parameter vectors of the optimal marginal pdfs alone. In addition, in contrast to CE, neither the elite sample nor the rarity parameter is needed in SME. As a result, the proposed SME algorithm is simpler, faster, and at least as accurate as standard CE. Motivated by the SME method, we introduce a new updating rule for the parameter vector in the parametric pdf of the CE program. We show that the CE algorithm based on the new updating rule, called the combined CE (CCE), is at least as fast and accurate as its standard CE and SME counterparts. We also found numerically that the variance minimization (VM)-based algorithms are the most robust ones. Finally, we demonstrate numerically that the proposed algorithms, and in particular CCE, allow accurate estimation of counting quantities in problems with up to hundreds of decision variables and hundreds of constraints. This research was supported by the Israel Science Foundation (grant No. 191-565).
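For orientation, the following is the standard form of a single-constrained MinxEnt program and its well-known closed-form solution; the notation (prior pdf h, performance function S, level γ, multiplier λ) is ours and not taken from the paper:

$$
\min_{g}\; \int g(x)\ln\frac{g(x)}{h(x)}\,dx
\quad\text{s.t.}\quad \int S(x)\,g(x)\,dx=\gamma,\qquad \int g(x)\,dx=1,
$$

with closed-form solution

$$
g^{*}(x)=\frac{h(x)\,e^{\lambda S(x)}}{\mathbb{E}_{h}\!\left[e^{\lambda S(X)}\right]},
$$

where the Lagrange multiplier λ plays the role of the temperature. Driving λ to −∞ concentrates g* on the minimizers of S, which is the zero-variance regime the abstract refers to.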

2.
We present a new method, called the minimum cross-entropy (MCE) method, for approximating the optimal solution of NP-hard combinatorial optimization problems and for rare-event probability estimation, which can be viewed as an alternative to the standard cross-entropy (CE) method. The MCE method presents a generic adaptive stochastic version of Kullback's classic MinxEnt method. We discuss its similarities to and differences from the standard CE method and prove its convergence. We show numerically that MCE is a little more accurate than CE, but at the same time a little slower. We also present a new method for trajectory generation for the TSP and some related problems. We finally give some numerical results using MCE for rare-event probability estimation on simple static models, the maximal cut problem, and the TSP, and point out some new areas of possible applications. AMS 2000 Subject Classification: 65C05, 60C05, 68W20, 90C59. This research was supported by the Israel Science Foundation (grant no. 191-565).
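As a point of reference, here is a minimal sketch of the standard CE method for rare-event probability estimation that MCE is compared against; the Bernoulli parametrization, the rarity parameter `rho`, the smoothing constant, and the toy performance function are our illustrative assumptions, not the paper's setup.

```python
import numpy as np

def ce_rare_event(S, n, gamma, N=1000, rho=0.1, max_iter=100, seed=0):
    """Standard cross-entropy (CE) method for estimating P(S(X) >= gamma)
    under X ~ Bernoulli(0.5)^n: raise an adaptive sequence of levels via the
    (1 - rho) sample quantile, update the parametric importance-sampling pdf
    from the elite sample, then estimate with likelihood ratios."""
    rng = np.random.default_rng(seed)
    p_nom = np.full(n, 0.5)            # nominal (true) parameter vector
    p = p_nom.copy()                   # current importance-sampling parameters
    for _ in range(max_iter):
        X = (rng.random((N, n)) < p).astype(float)       # sample from Ber(p)
        scores = np.array([S(x) for x in X])
        level = min(gamma, np.quantile(scores, 1 - rho)) # adaptive level
        elite = X[scores >= level]                       # elite sample
        p = 0.9 * elite.mean(axis=0) + 0.1 * p           # smoothed CE update
        p = np.clip(p, 1e-6, 1 - 1e-6)
        if level >= gamma:
            break
    # final importance-sampling estimate with likelihood ratios W = f/g
    X = (rng.random((N, n)) < p).astype(float)
    scores = np.array([S(x) for x in X])
    logW = (X * np.log(p_nom / p)
            + (1 - X) * np.log((1 - p_nom) / (1 - p))).sum(axis=1)
    return np.mean((scores >= gamma) * np.exp(logW))

# toy rare event: a random 0/1 vector of length 50 has at least 45 ones
print(ce_rare_event(lambda x: x.sum(), n=50, gamma=45))
```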

3.
We show that the original classic randomized algorithms for approximate counting in NP-hard problems, such as counting the number of satisfying assignments in a SAT problem, counting the number of feasible colorings in a graph, and calculating the permanent, typically fail: they either do not converge at all or are heavily biased (converge to a local extremum). Exceptions are convex counting problems, like estimating the volume of a convex polytope. We also show how their performance can be dramatically improved by combining them with the classic splitting method, which is based on simulating multiple Markov chains simultaneously. We present several algorithms of the combined version, which we simply call the splitting algorithms. We show that the most advanced splitting version coincides with the cloning algorithm suggested earlier by the author. Compared to the randomized algorithms, the proposed splitting algorithms require very little warm-up time when running the MCMC from iteration to iteration, since the underlying Markov chains are already in steady state from the beginning. All that is required is fine tuning, i.e., keeping the Markov chains in steady state while moving from iteration to iteration. We present extensive simulation studies with both the splitting and randomized algorithms for different NP-hard counting problems.
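The splitting idea can be prototyped in a few lines: write the count as a product of conditional fractions over nested levels, keeping a population of Markov chains at each level. The sketch below is a generic minimal version under our own assumptions (binary state space, a fixed level schedule, single-bit Metropolis moves), not the authors' tuned algorithms.

```python
import numpy as np

def splitting_count(S, n, levels, N=2000, burn=3, seed=1):
    """Generic splitting estimator of |{x in {0,1}^n : S(x) >= levels[-1]}|,
    written as 2^n * prod_t P(S >= levels[t] | S >= levels[t-1]).  Each factor
    is estimated from a population of chains that starts each iteration
    (roughly) in the steady state of the previous level."""
    rng = np.random.default_rng(seed)
    pop = (rng.random((N, n)) < 0.5).astype(int)   # exact uniform start
    estimate = 2.0 ** n
    for level in levels:
        ok = np.array([S(x) >= level for x in pop])
        estimate *= ok.mean()                      # conditional factor
        if not ok.any():
            return 0.0
        # clone the survivors back up to population size N ...
        pop = pop[ok][rng.integers(0, ok.sum(), N)]
        # ... then a few single-bit Metropolis moves that stay inside the
        # level set (symmetric proposal, so uniform on it is stationary)
        for _ in range(burn):
            for i in range(N):
                j = rng.integers(n)
                cand = pop[i].copy()
                cand[j] ^= 1
                if S(cand) >= level:
                    pop[i] = cand
    return estimate

# toy: count 0/1 vectors of length 20 with at least 15 ones (exact: 21700)
print(splitting_count(lambda x: x.sum(), n=20, levels=[5, 10, 13, 15]))
```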

4.
We present a randomized algorithm, called the cloning algorithm, for approximating the solutions of quite general NP-hard combinatorial optimization problems, counting, rare-event estimation, and uniform sampling on complex regions. Similar to the algorithms of Diaconis–Holmes–Ross and Botev–Kroese, the cloning algorithm is based on the MCMC (Gibbs) sampler equipped with an importance sampling pdf and, as usual for randomized algorithms, it uses a sequential sampling plan to decompose a "difficult" problem into a sequence of "easy" ones. The cloning algorithm combines the best features of the Diaconis–Holmes–Ross and Botev–Kroese algorithms. In addition to some other enhancements, it has a special mechanism, called the "cloning" device, which makes the cloning algorithm, also called the Gibbs cloner, fast and accurate. We believe that it is the fastest and most accurate randomized algorithm for counting known so far. In addition, it is well suited for solving problems associated with the Boltzmann distribution, like estimating the partition function in an Ising model. We also present a combined version of the cloning and cross-entropy (CE) algorithms. We prove the polynomial complexity of a particular version of the Gibbs cloner for counting. Finally, we present efficient numerical results with the Gibbs cloner and the combined version while solving quite general integer and combinatorial optimization problems as well as counting problems, like SAT.
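The sequential decomposition that such randomized counting algorithms share can be stated in one line (notation ours): given a nested sequence of sets $\mathcal{X}_{0}\supseteq\mathcal{X}_{1}\supseteq\cdots\supseteq\mathcal{X}_{T}=\mathcal{X}^{*}$ with $|\mathcal{X}_{0}|$ known,

$$
|\mathcal{X}^{*}| \;=\; |\mathcal{X}_{0}|\,\prod_{t=1}^{T}\frac{|\mathcal{X}_{t}|}{|\mathcal{X}_{t-1}|},
$$

so the "difficult" count reduces to estimating a sequence of "easy" conditional ratios $|\mathcal{X}_{t}|/|\mathcal{X}_{t-1}|$, each not too small, via MCMC sampling on $\mathcal{X}_{t-1}$.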

5.
In this paper, we consider a task allocation model that consists of assigning a set of m unmanned aerial vehicles (UAVs) to a set of n tasks in an optimal way. Optimality is quantified by target scores: the mission is to maximize the target score while satisfying the capacity constraints of both the UAVs and the tasks. This problem is known to be NP-hard, and existing algorithms are not suitable for the large scale setting; scalability and robustness are recognized as the two main issues. We deal with these issues by two optimization approaches. The first approach is the Cross-Entropy (CE) method, a generic and practical tool of stochastic optimization for solving NP-hard problems. The second one is the Branch and Bound algorithm, an efficient classical tool of global deterministic optimization. The numerical results show the efficiency of our approaches, in particular the CE method in the very large scale setting.

6.
We present an analytical method of analyzing a class of "splitting algorithms" that includes probabilistic counting, selecting the leader, estimating the number of questions necessary to identify distinct objects, searching algorithms based on digital tries, approximate counting, and so forth. In our discussion we concentrate on the analysis of a generalized probabilistic counting algorithm. Our technique belongs to the toolkit of the analytical analysis of algorithms, and it involves solutions of functional equations, analytical poissonization and depoissonization, as well as the Mellin transform. In particular, we deal with an instance of the functional equation g(z) = βa(z)g(z/2) + b(z), where a(z) and b(z) are given functions and β < 1 is a constant. For our generalized probabilistic counting algorithm, we obtain asymptotic expansions of the first two moments of an estimate of the cardinality of a set that is computed by the algorithm. We also derive the asymptotic distribution of this estimate and observe that it actually fluctuates, leading to the conclusion that its limiting distribution does not exist. © 1996 John Wiley & Sons, Inc. Random Struct. Alg., 9, 379–401 (1996)
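For context, classic probabilistic counting in the Flajolet–Martin style (the kind of algorithm the authors' technique analyzes) fits in a few lines; the number of bitmaps m, the salted hash, and the bias-correction constant φ ≈ 0.77351 follow the standard textbook presentation, and the interface is our own illustrative choice.

```python
import random

def probabilistic_count(items, m=64, seed=0):
    """Flajolet–Martin-style probabilistic counting: each item sets, in each
    of m independent bitmaps, the bit indexed by the number of trailing zeros
    of a salted hash; the lowest unset bit R satisfies E[R] ~ log2(phi * n)."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(m)]
    bitmaps = [0] * m
    for x in items:
        for k, salt in enumerate(salts):
            h = hash((x, salt))
            rank = (h & -h).bit_length() - 1 if h else 63  # trailing zeros
            bitmaps[k] |= 1 << rank
    def lowest_zero(b):                  # index of the lowest 0 bit
        r = 0
        while (b >> r) & 1:
            r += 1
        return r
    R = sum(lowest_zero(b) for b in bitmaps) / m
    return 2 ** R / 0.77351              # phi ~ 0.77351 corrects the bias

# toy: estimate the number of distinct items in a stream of 10000 integers
print(probabilistic_count(range(10000)))
```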

7.
The absence of (complex) zeros property is at the heart of the interpolation method developed by Barvinok for designing deterministic approximation algorithms for various graph counting and related problems. An earlier method used for the same problems is based on the correlation decay property. Remarkably, the classes of graphs to which the two methods apply often coincide or nearly coincide. In this article we show that this is not a coincidence: we establish that if the interpolation method is valid for a family of graphs, then this family exhibits a form of the correlation decay property, namely asymptotic strong spatial mixing at superlogarithmic distances. Our proof is based on a certain graph polynomial representation of the associated partition function. This representation is at the heart of the design of the polynomial time algorithms underlying the interpolation method itself. We conjecture that our result holds for all, and not just amenable, graphs. Indeed, this conjecture was recently confirmed by Regts. See the body of the article for details.

8.
Optimal placement of UV-based communications relay nodes
We consider a constrained optimization problem with mixed integer and real variables. It models the optimal placement of communications relay nodes in the presence of obstacles. This problem is widely encountered, for instance, in robotics, where it is required to survey some target located at one point and convey the gathered information back to a base station located at another point. One or more unmanned aerial or ground vehicles (UAVs or UGVs) can be used for this purpose as communications relays. The decision variables are the number of unmanned vehicles (UVs) and the UV positions. The objective function is assumed to assess the placement quality; we suggest one instance of such a function which is particularly suitable for assessing UAV placement. The constraints are determined by, firstly, a free line-of-sight requirement for every consecutive pair in the chain and, secondly, a limited communication range. Because of these requirements, our constrained optimization problem is a difficult multi-extremal problem for any fixed number of UVs; moreover, the feasible set of real variables is typically disjoint. We present an approach that allows us to efficiently find a practically acceptable approximation to a global minimum in the problem of optimal placement of communications relay nodes. It is based on a spatial discretization with a subsequent reduction to a shortest path problem. The case of a restricted number of available UVs is also considered. We introduce two label correcting algorithms which are able to take advantage of some peculiarities of the resulting restricted shortest path problem. The algorithms produce a Pareto solution to the two-objective problem of minimizing the path cost and the number of hops, and we justify their correctness. The presented results of numerical 3D experiments show that our algorithms are superior to the conventional Bellman-Ford algorithm tailored to solving this problem.
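The reduction described here (spatial discretization, then a restricted shortest path problem) can be prototyped with a plain Bellman-Ford-style dynamic program over (node, hop-count) states; the sketch below is that generic DP, not the authors' specialized label correcting algorithms, and the graph encoding is our assumption.

```python
import math

def pareto_paths(n_nodes, edges, src, dst, max_hops):
    """Restricted shortest path via a DP over (node, hops) states: for every
    hop budget k <= max_hops (roughly, the number of relay UVs plus one),
    compute the cheapest src -> dst path using at most k edges, then return
    the Pareto frontier of (hops, cost) pairs."""
    INF = math.inf
    # dist[k][v] = min cost of a path src -> v with at most k edges
    dist = [[INF] * n_nodes for _ in range(max_hops + 1)]
    dist[0][src] = 0.0
    for k in range(1, max_hops + 1):
        dist[k] = dist[k - 1][:]              # using fewer hops is allowed
        for u, v, w in edges:                 # relax every edge once
            if dist[k - 1][u] + w < dist[k][v]:
                dist[k][v] = dist[k - 1][u] + w
    frontier, best = [], INF
    for k in range(max_hops + 1):
        if dist[k][dst] < best:               # strictly better cost
            best = dist[k][dst]
            frontier.append((k, best))
    return frontier

# toy graph: 0 -> 1 -> 3 is cheap but uses 2 hops; 0 -> 3 direct is pricier
print(pareto_paths(4, [(0, 1, 1.0), (1, 3, 1.0), (0, 3, 3.0)], 0, 3, 3))
```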

9.
We review the results of studying integer linear programming algorithms which exploit properties of problem relaxation sets. The main attention is paid to estimating the number of iterations of these algorithms by means of the regular partitions method and other approaches. We present such estimates for some cutting plane, branch and bound (Land and Doig scheme), and L-class enumeration algorithms, and consider questions of their stability. We establish upper bounds for the average number of iterations of the mentioned algorithms as applied to the knapsack problem and the set packing problem.

10.
We consider two schemes of global optimization algorithms based on the use of grids. Our main goal is to compare the so-called independent sampling (IS), stratified sampling (SS), and random covering (RC) grids applied to the problem of estimating the global maximum of a function. The results give insight into how a decrease of randomness in the selection rules for the trial points improves the efficiency of global random search algorithms.
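A minimal numerical experiment in the spirit of this comparison, with a one-dimensional test function of our own choosing: IS draws N i.i.d. uniform points, while SS places one uniform point in each of N equal cells.

```python
import random

def is_grid(f, N, rng):
    """Independent sampling: N i.i.d. uniform trial points on [0, 1]."""
    return max(f(rng.random()) for _ in range(N))

def ss_grid(f, N, rng):
    """Stratified sampling: one uniform trial point per cell [i/N, (i+1)/N)."""
    return max(f((i + rng.random()) / N) for i in range(N))

def compare(f, true_max, N=100, reps=2000, seed=0):
    """Average shortfall (true max minus best found value) for each grid."""
    rng = random.Random(seed)
    err_is = sum(true_max - is_grid(f, N, rng) for _ in range(reps)) / reps
    err_ss = sum(true_max - ss_grid(f, N, rng) for _ in range(reps)) / reps
    return err_is, err_ss

# smooth test function with maximum 1 at x = 0.3 (our illustrative choice)
f = lambda x: 1 - (x - 0.3) ** 2
print(compare(f, 1.0))   # SS typically achieves a smaller error than IS
```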

11.
The interior proximal extragradient method for solving equilibrium problems
In this article we present a new and efficient method for solving equilibrium problems on polyhedra. The method is based on an interior-quadratic proximal term which replaces the usual quadratic proximal term, leading to an interior proximal type algorithm. Each iteration consists of a prediction step followed by a correction step, as in the extragradient method. In a first algorithm each of these steps is obtained by solving an unconstrained minimization problem, while in a second algorithm the correction step is replaced by an Armijo-backtracking linesearch followed by a hyperplane projection step. We prove that our algorithms are convergent under mild assumptions: pseudomonotonicity for the two algorithms and a Lipschitz property for the first one. Finally we present some numerical experiments to illustrate the behavior of the proposed algorithms.
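For reference, the classical Euclidean extragradient template for the equilibrium problem "find $x^{*}\in C$ with $f(x^{*},y)\ge 0$ for all $y\in C$" is shown below (notation ours); the article's method replaces the quadratic term $\tfrac{1}{2}\|y-x^{k}\|^{2}$ with an interior-quadratic proximal term, which we do not reproduce here:

$$
y^{k}=\operatorname*{arg\,min}_{y\in C}\Big\{\lambda f(x^{k},y)+\tfrac{1}{2}\|y-x^{k}\|^{2}\Big\}
\quad\text{(prediction)},
$$
$$
x^{k+1}=\operatorname*{arg\,min}_{y\in C}\Big\{\lambda f(y^{k},y)+\tfrac{1}{2}\|y-x^{k}\|^{2}\Big\}
\quad\text{(correction)}.
$$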

12.
We analyze two global algorithms for solving the linear bilevel program (LBP) problem. The first one is a recent algorithm built on a new concept of equilibrium point and a modified version of the outer approximation method. The second one is an efficient branch-and-bound algorithm known in the literature. Based on computational results we propose some modifications in both algorithms to improve their computational performance. A significant number of experiments is carried out and a comparative study of the algorithms is presented. The modified procedures have better performance than the original versions.

13.
Minesweeper is a popular single-player game. It has been shown that the Minesweeper consistency problem is NP-complete and the Minesweeper counting problem is #P-complete. In this paper, we present efficient algorithms for solving these problems for Minesweeper graphs with bounded treewidth. Our algorithms turn out to be much better than those based directly on dynamic programming. The algorithms mostly make use of algebraic operations on multivariate polynomials, so that one may use existing software to implement them easily.

14.
There are significant research opportunities in the integration of Machine Learning (ML) methods and Combinatorial Optimization Problems (COPs). In this work, we focus on metaheuristics for solving COPs that have an important learning component. These algorithms must explore a solution space and learn from the information they obtain in order to find high-quality solutions. Among the metaheuristics, we study Hyper-Heuristics (HHs), algorithms that, given a number of low-level heuristics, iteratively select and apply heuristics to a solution. The HH we consider uses a Markov model to produce sequences of low-level heuristics, which we combine with a Multi-Armed Bandit Problem (MAB)-based method to learn its parameters. This work proposes several improvements to the HH metaheuristic that yield better learning when solving problem instances. Specifically, this is the first work in HHs to use Exponential Weights for Exploration and Exploitation (EXP3) as a learning method, an algorithm that is able to deal with adversarial settings. We also present a case study for the Vehicle Routing Problem with Time Windows (VRPTW), for which we include a list of low-level heuristics that have been proposed in the literature. We show that our algorithms can handle a large and diverse list of heuristics, illustrating that they can be easily configured to solve COPs of a different nature. The computational results indicate that our algorithms are competitive methods for the VRPTW (2.16% gap on average with respect to the best known solutions), demonstrating the potential of these algorithms to solve COPs. Finally, we show how the algorithms can even detect low-level heuristics that do not contribute to finding better solutions to the problem.
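Since EXP3 is the key learning component named in the abstract, a minimal self-contained sketch may help orient the reader; the reward interface (rewards in [0, 1]), the fixed exploration rate `gamma`, and the toy bandit are our illustrative assumptions, not details from the paper.

```python
import math
import random

def exp3(n_arms, reward, T, gamma=0.1, seed=0):
    """EXP3 (Exponential weights for Exploration and Exploitation): keeps one
    log-weight per arm (here: per low-level heuristic), mixes the exponential-
    weights distribution with uniform exploration, and feeds back unbiased
    importance-weighted reward estimates.  Rewards must lie in [0, 1]."""
    rng = random.Random(seed)
    logw = [0.0] * n_arms
    def distribution():
        m = max(logw)
        ew = [math.exp(lw - m) for lw in logw]   # overflow-safe softmax
        total = sum(ew)
        return [(1 - gamma) * e / total + gamma / n_arms for e in ew]
    for _ in range(T):
        probs = distribution()
        arm = rng.choices(range(n_arms), weights=probs)[0]
        r = reward(arm)                          # observed reward in [0, 1]
        logw[arm] += gamma * (r / probs[arm]) / n_arms  # unbiased update
    return distribution()                        # final selection distribution

# toy bandit: arm 2 has the highest mean reward (0.7); EXP3 should favor it
means = [0.3, 0.5, 0.7]
print(exp3(3, lambda a: float(random.random() < means[a]), T=5000))
```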

15.
We present algorithms for the propositional model counting problem #SAT. The algorithms utilize tree decompositions of certain graphs associated with the given CNF formula; in particular, we consider primal, dual, and incidence graphs. We describe the algorithms coherently for a direct comparison and with sufficient detail to make an actual implementation reasonably easy. We discuss several aspects of the algorithms, including worst-case time and space requirements.

16.
In this article, our primary concern is the classical problem of minimizing a concave function globally over a compact polyhedron (Problem (P)). We present a new simplicial branch and bound approach, which combines triangulations of intersections of simplices with halfspaces and ideas from outer approximation in such a way that a class of finite algorithms for solving (P) results. For arbitrary compact convex feasible sets one obtains a convergent, though not necessarily finite, algorithm. Theoretical investigations include the determination of the number of simplices in each applied triangulation step and bounds on the number of iterations of the resulting algorithms. Preliminary numerical results are given, and additional applications are sketched.

17.
We study classic machine sequencing problems in an online setting. Specifically, we look at deterministic and randomized algorithms for the problem of scheduling jobs with release dates on identical parallel machines, to minimize the sum of weighted completion times; both preemptive and non-preemptive versions of the problem are analyzed. Using linear programming techniques borrowed from the single machine case, we design a 2.62-competitive deterministic algorithm for the non-preemptive version of the problem, improving upon the 3.28-competitive algorithm of Megow and Schulz. Additionally, we show how to combine randomization techniques with the linear programming approach to obtain randomized algorithms for both versions of the problem with competitive ratio strictly smaller than 2 for any number of machines (but approaching 2 as the number of machines grows). Our algorithms naturally extend several approaches for single and parallel machine scheduling. We also present a brief computational study, for randomly generated problem instances, which suggests that our algorithms perform very well in practice. A preliminary version of this work appears in the Proceedings of the 11th Conference on Integer Programming and Combinatorial Optimization (IPCO), Berlin, 8–10 June 2005.

18.
In this paper, we present an exact solution procedure for the design of two-layer wavelength division multiplexing (WDM) optical networks with wavelength changers and bifurcated flows. This design problem closely resembles the traditional multicommodity flow problem, except that in the case of WDM optical networks we are concerned with the routing of multiple commodities in two network layers. Consequently, the corresponding optimization models have to deal with two types of multicommodity variables, defined for each of the network layers. The proposed procedure represents one of the first branch-and-price algorithms for a general WDM optical network setting with no assumptions on the number of logical links that can be established between nodes in the network. We apply our procedure in a computational study with four different network configurations. Our results show that for three of the tested network configurations our branch-and-price algorithm provides solutions that are on average less than 5% from optimality. We also provide a comparison of our branch-and-price algorithm with two simple variants of the upper bounding heuristic procedure HLDA that is commonly used for WDM optical network design.

19.
A convex polytope P can be specified in two ways: as the convex hull of the vertex set V of P, or as the intersection of the set H of its facet-inducing halfspaces. The vertex enumeration problem is to compute V from H. The facet enumeration problem is to compute H from V. These two problems are essentially equivalent under point/hyperplane duality. They are among the central computational problems in the theory of polytopes. It is open whether they can be solved in time polynomial in |H| + |V| and the dimension. In this paper we consider the main known classes of algorithms for solving these problems. We argue that they all have at least one of two weaknesses: the inability to deal well with "degeneracies", or the inability to control the sizes of intermediate results. We then introduce families of polytopes that exercise those weaknesses. Roughly speaking, fat-lattice or intricate polytopes cause algorithms with bad degeneracy handling to perform badly, while dwarfed polytopes cause algorithms with bad intermediate size control to perform badly. We also present computational experience with trying to solve these problems on these hard polytopes, using various implementations of the main algorithms.
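In symbols (notation ours), the two representations and the polar correspondence behind the point/hyperplane duality are:

$$
P=\operatorname{conv}(V)=\bigcap_{(a,b)\in H}\{x : a^{\top}x\le b\},
\qquad
P^{\circ}=\{y : y^{\top}x\le 1\ \ \forall x\in P\}.
$$

For $0\in\operatorname{int}P$, polarity exchanges vertices and facet-inducing halfspaces, which is why vertex enumeration and facet enumeration are essentially the same computational problem.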

20.
The simplified partial digest problem (SPDP) is motivated by the reconstruction of the linear structure of a DNA chain with respect to a given nucleotide pattern, based on the multiset of distances between adjacent patterns (interpoint distances) and the multiset of distances between each pattern and the two unlabeled endpoints of the DNA chain (end distances). We consider optimization versions of the problem, called SPDP-Min and SPDP-Max. The aim of SPDP-Min (SPDP-Max) is to find a DNA linear structure with the same multiset of end distances and the minimum (maximum) number of incorrect (correct) interpoint distances. Results are presented on the worst-case efficiency of approximation algorithms for these problems. We suggest a graph-theoretic model for SPDP-Min and SPDP-Max, which can be used to reduce the search space for an optimal solution in either of these problems. We also present heuristic polynomial time algorithms based on this model. In computational experiments with randomly generated and real-life input data, our best algorithm delivered an optimal solution in 100% of the instances with at most 50 restriction sites.
