Found 20 similar documents; search took 31 ms.
1.
Alathea Jensen 《Methodology and Computing in Applied Probability》2018,20(4):1259-1284
Many hard problems in the computational sciences are equivalent to counting the leaves of a decision tree, or, more generally, to summing a cost function over its nodes. These problems include calculating the permanent of a matrix, finding the volume of a convex polyhedron, and counting the number of linear extensions of a partially ordered set. Many approximation algorithms exist to estimate such sums. One of the most recent is Stochastic Enumeration (SE), introduced in 2013 by Rubinstein. In 2015, Vaisman and Kroese provided a rigorous analysis of the variance of SE, and showed that SE can be extended to a fully polynomial randomized approximation scheme for certain cost functions on random trees. We present an algorithm that incorporates an importance function into SE, and provide theoretical analysis of its efficacy. We also present the results of numerical experiments that measure the variance of an application of the algorithm to the problem of counting linear extensions of a poset, and show that introducing importance sampling results in a significant reduction of variance as compared to the original version of SE.
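The leaf-counting setting described above can be illustrated with Knuth's classic random-descent estimator, which Stochastic Enumeration generalizes. The sketch below is illustrative only: the tree and its node names are made up, and real SE runs several descents in parallel under a budget rather than one descent at a time.

```python
import random

# Hypothetical decision tree: each node is a dict mapping child names to
# subtrees; a leaf is an empty dict. Structure and names are illustrative.
tree = {
    "a": {"c": {}, "d": {"e": {}, "f": {}}},
    "b": {"g": {}},
}

def knuth_estimate(node):
    """One random root-to-leaf descent; returns an unbiased estimate of
    the number of leaves (Knuth's 1975 estimator, the single-descent
    special case that Stochastic Enumeration builds on)."""
    estimate = 1
    while node:                       # non-empty dict => internal node
        children = list(node.values())
        estimate *= len(children)     # multiply by the branching factor
        node = random.choice(children)
    return estimate

random.seed(0)
n = 20000
mean = sum(knuth_estimate(tree) for _ in range(n)) / n
# The tree above has 4 leaves (c, e, f, g), so the average estimate
# should be close to 4; single estimates vary (2, 4, or 8 here),
# which is exactly the variance that importance sampling targets.
```

Averaging many descents recovers the true leaf count, but the per-descent variance grows with tree irregularity, motivating the importance function discussed in the abstract.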
2.
Converging Marriage in Honey-Bees Optimization and Application to Stochastic Dynamic Programming (total citations: 1; self-citations: 0; citations by others: 1)
Hyeong Soo Chang 《Journal of Global Optimization》2006,35(3):423-441
In this paper, we first refine a recently proposed metaheuristic called “Marriage in Honey-Bees Optimization” (MBO) for solving
combinatorial optimization problems with some modifications to formally show that MBO converges to the global optimum value.
We then adapt MBO into an algorithm called “Honey-Bees Policy Iteration” (HBPI) for solving infinite-horizon discounted-cost stochastic dynamic programming problems and show that HBPI also converges to the optimal value.
3.
Rade T. Živaljević 《Discrete and Computational Geometry》2009,41(1):135-161
This paper lays the foundation for a theory of combinatorial groupoids that allows us to use concepts like “holonomy”, “parallel transport”, “bundles”, “combinatorial curvature”, etc. in the context
of simplicial (polyhedral) complexes, posets, graphs, polytopes and other combinatorial objects. We introduce a new, holonomy-type
invariant for cubical complexes, leading to a combinatorial “Theorema Egregium” for cubical complexes that are non-embeddable
into cubical lattices. Parallel transport of Hom-complexes and maps is used as a tool to extend Babson–Kozlov–Lovász graph coloring results to more general statements about
nondegenerate maps (colorings) of simplicial complexes and graphs.
The author was supported by grants 144014 and 144026 of the Serbian Ministry of Science and Technology.
4.
In this paper, we introduce a new method for perfect simulation of multivariate densities. We use One-Shot CFTP (G. Roberts
and J. Rosenthal, “One-shot coupling for certain stochastic recursive sequences,” Stochastic Processes and their Applications vol. 99 pp. 195–208, 2002) together with a monotone coupler for the Gibbs sampler, and implement the algorithm within the
Read-Once CFTP protocol (D. B. Wilson, “How to couple from the past using a read-once source of randomness,” Random Structures and Algorithms vol. 16(1) pp. 85–113, 2000b). We illustrate our method by simulating efficiently from high-dimensional truncated normal
distributions using the Gibbs sampler.
AMS 2000 Subject Classification Primary 65C40; Secondary 65C60
5.
Bisubmodular functions are a natural “directed”, or “signed”, extension of submodular functions with several applications.
Recently Fujishige and Iwata showed how to extend the Iwata, Fleischer, and Fujishige (IFF) algorithm for submodular function
minimization (SFM) to bisubmodular function minimization (BSFM). However, they were able to extend only the weakly polynomial
version of IFF to BSFM. Here we investigate the difficulty that prevented them from also extending the strongly polynomial
version of IFF to BSFM, and we show a way around the difficulty. This new method gives a somewhat simpler strongly polynomial
SFM algorithm, as well as the first combinatorial strongly polynomial algorithm for BSFM. This further leads to extending
Iwata’s fully combinatorial version of IFF to BSFM.
The research of S. T. McCormick was supported by an NSERC Operating Grant.
The research of S. Fujishige was supported by a Grant-in-Aid of the Ministry of Education, Culture, Science and Technology
of Japan.
6.
Oktay Günlük 《Mathematical Programming》1999,86(1):17-39
We present a branch-and-cut algorithm to solve capacitated network design problems. Given a capacitated network and point-to-point
traffic demands, the objective is to install more capacity on the edges of the network and route traffic simultaneously, so
that the overall cost is minimized. We study a mixed-integer programming formulation of the problem and identify some new
facet defining inequalities. These inequalities, together with other known combinatorial and mixed-integer rounding inequalities,
are used as cutting planes. To choose the branching variable, we use a new rule called “knapsack branching”. We also report
on our computational experience using real-life data.
Received: April 29, 1997 / Revised version received: January 9, 1999 / Published online: June 28, 1999
7.
Darrin Doud 《manuscripta mathematica》1998,95(4):463-469
We present an algorithm which uses the analytic parameterization of elliptic curves to rapidly calculate torsion subgroups,
and we analyze its running time. This algorithm is much faster than the “traditional” Lutz–Nagell algorithm used by most computer
algebra systems to calculate torsion subgroups.
Received: 7 August 1997 / Revised version: 28 November 1997
8.
In Machine Learning algorithms, one of the crucial issues is the representation of the data. As data sources become heterogeneous and large-scale, multiple kernel methods help to classify “nonlinear data”. Nevertheless, finite combinations of kernels are limited to a finite choice. In order to overcome this limitation, a novel method of “infinite”
kernel combinations is proposed with the help of infinite and semi-infinite programming regarding all elements in kernel space.
Looking at all infinitesimally fine convex combinations of the kernels from the infinite kernel set, the margin is maximized
subject to an infinite number of constraints with a compact index set and an additional (Riemann–Stieltjes) integral constraint
due to the combinations. After a parametrization in the space of probability measures, it becomes semi-infinite. We adapt
well-known numerical methods to our infinite kernel learning model and analyze the existence of solutions and convergence
for the given algorithms. We implement our new algorithm, called “infinite” kernel learning (IKL), on heterogeneous data sets by using the exchange method and the conceptual reduction method, which are well-known numerical techniques for solving semi-infinite programming problems. The results show that our IKL approach efficiently improves the classification accuracy on heterogeneous data compared to classical one-kernel approaches.
9.
We study classic machine sequencing problems in an online setting. Specifically, we look at deterministic and randomized algorithms
for the problem of scheduling jobs with release dates on identical parallel machines, to minimize the sum of weighted completion
times: Both preemptive and non-preemptive versions of the problem are analyzed. Using linear programming techniques, borrowed
from the single machine case, we are able to design a 2.62-competitive deterministic algorithm for the non-preemptive version
of the problem, improving upon the 3.28-competitive algorithm of Megow and Schulz. Additionally, we show how to combine randomization
techniques with the linear programming approach to obtain randomized algorithms for both versions of the problem with competitive
ratio strictly smaller than 2 for any number of machines (but approaching two as the number of machines grows). Our algorithms
naturally extend several approaches for single and parallel machine scheduling. We also present a brief computational study,
for randomly generated problem instances, which suggests that our algorithms perform very well in practice.
A preliminary version of this work appears in the Proceedings of the 11th conference on integer programming and combinatorial
optimization (IPCO), Berlin, 8–10 June 2005.
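The objective this entry analyzes, the sum of weighted completion times on identical parallel machines with release dates, can be made concrete with a small sketch. The greedy list-scheduling rule and the job data below are illustrative assumptions, not the paper's 2.62-competitive algorithm.

```python
import heapq

# Illustrative jobs as (release date, processing time, weight); made-up data.
jobs = [(0, 3, 2), (1, 2, 5), (2, 4, 1), (3, 1, 4)]

def weighted_completion(jobs, m):
    """Non-preemptively list-schedule jobs in release-date order on m
    identical machines, always picking the machine that frees up first,
    and return the total weighted completion time sum of w_j * C_j.
    This greedy rule only illustrates the objective function; it is not
    the competitive online algorithm analyzed in the paper."""
    free = [0.0] * m                  # time each machine becomes available
    heapq.heapify(free)
    total = 0.0
    for r, p, w in sorted(jobs):      # process jobs by release date
        t = heapq.heappop(free)
        start = max(t, r)             # a job cannot start before its release
        finish = start + p
        total += w * finish           # weighted completion time of this job
        heapq.heappush(free, finish)
    return total
```

On this data, two machines roughly halve the objective relative to one machine, which is the kind of gap the competitive-ratio analysis in the abstract quantifies in the worst case.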
10.
In this paper, we first discuss how the nearly exact (NE) method proposed by Moré and Sorensen [14] for solving trust region
(TR) subproblems can be modified to solve large-scale “low-rank” TR subproblems efficiently. Our modified algorithm completely
avoids computation of Cholesky factorizations by instead relying primarily on the Sherman–Morrison–Woodbury formula for computing
inverses of “diagonal plus low-rank” type matrices. We also implement a specific version of the modified log-barrier (MLB)
algorithm proposed by Polyak [17] where the generated log-barrier subproblems are solved by a trust region method. The corresponding
direction finding TR subproblems are of the low-rank type and are then solved by our modified NE method. We finally discuss
the computational results of our implementation of the MLB method and its comparison with a version of LANCELOT [5] based
on a collection extracted from CUTEr [12] of nonlinear programming problems with simple bound constraints.
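The “diagonal plus low-rank” inverses mentioned in this entry rest on the Sherman–Morrison–Woodbury identity. A minimal rank-one sketch (plain Python, made-up numbers) is shown below; the paper's setting uses the full Woodbury formula for higher ranks.

```python
def solve_diag_plus_rank1(d, u, v, b):
    """Solve (D + u v^T) x = b with D = diag(d) in O(n) time via the
    Sherman-Morrison formula:
      (D + u v^T)^-1 = D^-1 - (D^-1 u v^T D^-1) / (1 + v^T D^-1 u),
    avoiding any factorization of the full matrix. This rank-one case
    is a sketch; the abstract's method handles general low rank."""
    dinv_b = [bi / di for bi, di in zip(b, d)]      # D^-1 b
    dinv_u = [ui / di for ui, di in zip(u, d)]      # D^-1 u
    denom = 1.0 + sum(vi * x for vi, x in zip(v, dinv_u))
    coef = sum(vi * x for vi, x in zip(v, dinv_b)) / denom
    return [x - coef * y for x, y in zip(dinv_b, dinv_u)]

# Verify by multiplying (D + u v^T) back against the solution.
d, u, v, b = [2.0, 3.0, 4.0], [1.0, 0.0, 2.0], [1.0, 1.0, 1.0], [5.0, 6.0, 7.0]
x = solve_diag_plus_rank1(d, u, v, b)
vt_x = sum(vi * xi for vi, xi in zip(v, x))
residual = [di * xi + ui * vt_x - bi for di, xi, ui, bi in zip(d, x, u, b)]
```

The O(n) cost per solve is what lets the modified NE method skip Cholesky factorizations entirely for low-rank trust-region subproblems.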
11.
We introduce a new type of randomized incremental algorithms. Contrary to standard randomized incremental algorithms, these lazy randomized incremental algorithms are suited for computing structures that have a “nonlocal” definition. In order to analyze
these algorithms we generalize some results on random sampling to such situations. We apply our techniques to obtain efficient
algorithms for the computation of single cells in arrangements of segments in the plane, single cells in arrangements of triangles
in space, and zones in arrangements of hyperplanes.
This research was supported by the Netherlands' Organization for Scientific Research (NWO) and partially by the ESPRIT III
Basic Research Action 6546 (PROMotion). Part of this research was done while K.D. was visiting Utrecht University.
12.
Two interior-point algorithms are proposed and analyzed, for the (local) solution of (possibly) indefinite quadratic programming
problems. They are of the Newton-KKT variety in that (much like in the case of primal-dual algorithms for linear programming)
search directions for the “primal” variables and the Karush-Kuhn-Tucker (KKT) multiplier estimates are components of the Newton
(or quasi-Newton) direction for the solution of the equalities in the first-order KKT conditions of optimality or a perturbed
version of these conditions. Our algorithms are adapted from previously proposed algorithms for convex quadratic programming
and general nonlinear programming. First, inspired by recent work by P. Tseng based on a “primal” affine-scaling algorithm
(à la Dikin) [J. of Global Optimization, 30 (2004), no. 2, 285–300], we consider a simple Newton-KKT affine-scaling algorithm. Then, a “barrier” version of the same algorithm is considered, which reduces to the affine-scaling version when the barrier parameter is set
to zero at every iteration, rather than to the prescribed value. Global and local quadratic convergence are proved under nondegeneracy
assumptions for both algorithms. Numerical results on randomly generated problems suggest that the proposed algorithms may
be of great practical interest.
The work of the first author was supported in part by the School of Computational Science of Florida State University through
a postdoctoral fellowship. Part of this work was done while this author was a Research Fellow with the Belgian National Fund
for Scientific Research (Aspirant du F.N.R.S.) at the University of Liège. The work of the second author was supported in
part by the National Science Foundation under Grants DMI9813057 and DMI-0422931 and by the US Department of Energy under Grant
DEFG0204ER25655. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors
and do not necessarily reflect the views of the National Science Foundation or those of the US Department of Energy.
13.
Flavia Ventriglia 《Ricerche di matematica》2007,56(2):209-216
We introduce a common generalization of Boolean rings and lattice ordered groups called Vitali spaces and we give a version
of the Cafiero and Brooks–Jewett convergence theorems for additive functions defined on a Vitali space with values in a Hausdorff
commutative topological group.
This article was supported by MURST, project “Analisi Reale”, and by GNAMPA of Istituto Nazionale di Alta Matematica.
14.
In this paper, we study a special case of the Metropolis algorithm, the Independence Metropolis Sampler (IMS), in the finite state space case. The IMS is often used in designing components of more complex Markov Chain Monte Carlo algorithms. We present new results related to the first hitting time of individual states for the IMS. These results are expressed mostly in terms of the eigenvalues of the transition kernel. We derive a simple closed-form formula for the mean first hitting time, and we show tight lower and upper bounds on the mean first hitting time, with the upper bound being the product of two factors: a “local” factor corresponding to the target state and a “global” factor, common to all the states, which is expressed in terms of the total variation distance between the target and the proposal probabilities. We also briefly discuss properties of the distribution of the first hitting time for the IMS and analyze its variance. We conclude by showing how some non-independence Metropolis–Hastings algorithms can perform better than the IMS and deriving general lower and upper bounds for the mean first hitting times of a Metropolis–Hastings algorithm.
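A minimal sketch of the Independence Metropolis Sampler on a finite state space may help fix ideas; the target and proposal distributions below are made-up illustrative numbers, and the sketch does not reproduce the paper's hitting-time analysis.

```python
import random

# Independence Metropolis Sampler on a finite state space: the proposal q
# is drawn independently of the current state x, and a move x -> y is
# accepted with probability min(1, pi(y) q(x) / (pi(x) q(y))).
pi = [0.1, 0.2, 0.3, 0.4]       # target distribution (illustrative)
q = [0.25, 0.25, 0.25, 0.25]    # independence proposal (uniform here)

def ims_chain(steps, seed=0):
    """Run the IMS and return empirical state frequencies."""
    rng = random.Random(seed)
    x = 0
    counts = [0] * len(pi)
    for _ in range(steps):
        y = rng.choices(range(len(q)), weights=q)[0]  # proposal ignores x
        accept = min(1.0, (pi[y] * q[x]) / (pi[x] * q[y]))
        if rng.random() < accept:
            x = y
        counts[x] += 1
    return [c / steps for c in counts]

freqs = ims_chain(200000)
# The empirical frequencies should approach the target pi.
```

The ratio pi(y)/q(y) that drives acceptance here is also what controls the “global” factor in the abstract's hitting-time bound: the closer q is to pi in total variation, the faster every state is reached.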
15.
A dual variant of Benson’s “outer approximation algorithm” for multiple objective linear programming
Outcome space methods construct the set of nondominated points in the objective (outcome) space of a multiple objective linear
programme. In this paper, we employ results from geometric duality theory for multiple objective linear programmes to derive
a dual variant of Benson’s “outer approximation algorithm” to solve multiobjective linear programmes in objective space. We
also suggest some improvements of the original version of the algorithm and prove that solving the dual provides a weight
set decomposition. We compare both algorithms on small illustrative and on practically relevant examples.
16.
A new dual problem for convex generalized fractional programs with no duality gap is presented and it is shown how this dual
problem can be efficiently solved using a parametric approach. The resulting algorithm can be seen as “dual” to the Dinkelbach-type
algorithm for generalized fractional programs since it approximates the optimal objective value of the dual (primal) problem
from below. Convergence results for this algorithm are derived and an easy condition to achieve superlinear convergence is
also established. Moreover, under some additional assumptions the algorithm also recovers at the same time an optimal solution
of the primal problem. We also consider a variant of this new algorithm, based on scaling the “dual” parametric function.
The numerical results, in case of quadratic-linear ratios and linear constraints, show that the performance of the new algorithm
and its scaled version is superior to that of the Dinkelbach-type algorithms. From the computational results it also appears
that, contrary to the primal approach, the “dual” approach is less influenced by scaling.
This research was carried out at the Econometric Institute, Erasmus University, Rotterdam, the Netherlands and was supported
by J.N.I.C.T. (Portugal) under contract BD/707/90-RM.
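The Dinkelbach-type iteration this entry contrasts with can be sketched on a finite candidate set, a simplifying assumption: the paper treats convex generalized fractional programs, where each parametric subproblem is itself an optimization problem rather than a table lookup.

```python
# Dinkelbach's algorithm maximizes a ratio f(x)/g(x) (with g > 0) by
# repeatedly solving the parametric problem max_x f(x) - lam * g(x)
# and updating lam = f(x*)/g(x*); it stops when the parametric optimum
# reaches zero. The candidate (f, g) pairs below are hypothetical.
candidates = [(3.0, 2.0), (5.0, 4.0), (7.0, 5.0), (4.0, 3.0)]

def dinkelbach(cands, tol=1e-12):
    lam = 0.0
    while True:
        # Parametric subproblem: maximize f - lam * g over the candidates.
        f, g = max(cands, key=lambda fg: fg[0] - lam * fg[1])
        if f - lam * g < tol:     # optimum ~ 0  =>  lam is the best ratio
            return lam
        lam = f / g               # Dinkelbach update: lam approaches the
                                  # optimal ratio from below
    # (never reached)

best_ratio = dinkelbach(candidates)
```

The “dual” algorithm in the abstract mirrors this scheme but approximates the optimal value of the dual problem from below, with a scaled parametric function as a variant.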
17.
Reuven Rubinstein 《Methodology and Computing in Applied Probability》2008,10(2):121-178
We present a new generic minimum cross-entropy method, called the semi-iterative MinxEnt, or simply SME, for rare-event probability estimation, counting, and approximation of the optimal solutions of a broad class
of NP-hard linear integer and combinatorial optimization problems (COP’s). The main idea of our approach is to associate with
each original problem an auxiliary single-constrained convex MinxEnt program of a special type, which has a closed-form solution. We prove that the optimal pdf obtained from the
solution of such a specially designed MinxEnt program is a zero variance pdf, provided the “temperature” parameter is set
to minus infinity. In addition we prove that the parametric pdf based on the product of marginals obtained from the optimal
zero variance pdf coincides with the parametric pdf of the standard cross-entropy (CE) method. Thus, originally designed at the end of the 1990s as a heuristic for estimating rare events and COP’s, CE has a strong connection with MinxEnt and thus a strong mathematical foundation. The crucial difference between the proposed SME method and the standard CE counterparts lies in their simulation-based versions: in the latter we must always generate
(via Monte Carlo) a sequence of tuples including the temperature parameter and the parameter vector in the optimal marginal
pdf’s, while in the former we can fix in advance the temperature parameter (to be set to a large negative number) and then
generate (via Monte Carlo) a sequence of parameter vectors of the optimal marginal pdf’s alone. In addition, in contrast to
CE, neither the elite sample nor the rarity parameter is needed in SME. As a result, the proposed SME algorithm becomes simpler, faster, and at least as accurate as the standard CE. Motivated by the SME method, we introduce a new updating rule for the parameter
vector in the parametric pdf of the CE program. We show that the CE algorithm based on the new updating rule, called the combined CE (CCE), is at least as fast and accurate as its standard CE and SME counterparts. We also found numerically that the variance
minimization (VM)-based algorithms are the most robust ones. Finally, we demonstrate numerically that the proposed algorithms, and in particular the CCE one, allow accurate estimation of counting quantities with up to hundreds of decision variables and hundreds of constraints.
This research was supported by the Israel Science Foundation (grant No 191-565).
18.
We define a discrete Laplace–Beltrami operator for simplicial surfaces (Definition 16). It depends only on the intrinsic geometry
of the surface and its edge weights are positive. Our Laplace operator is similar to the well-known finite-element Laplacian
(the so called “cotan formula”) except that it is based on the intrinsic Delaunay triangulation of the simplicial surface.
This leads to new definitions of discrete harmonic functions, discrete mean curvature, and discrete minimal surfaces. The
definition of the discrete Laplace–Beltrami operator depends on the existence and uniqueness of Delaunay tessellations in
piecewise flat surfaces. While the existence is known, we prove the uniqueness. Using Rippa’s Theorem we show that, as claimed,
Musin’s harmonic index provides an optimality criterion for Delaunay triangulations, and this can be used to prove that the
edge flipping algorithm terminates also in the setting of piecewise flat surfaces.
Research for this article was supported by the DFG Research Unit 565 “Polyhedral Surfaces” and the DFG Research Center Matheon “Mathematics for key technologies” in Berlin.
19.
Silvano Martello 《Central European Journal of Operations Research》2010,18(1):47-58
We discuss some relevant results obtained by Egerváry in the early Thirties, whose importance has been recognized several
years later. We start with a quite well-known historical fact: the first modern polynomial-time algorithm for the assignment
problem, invented by Harold W. Kuhn half a century ago, was christened the “Hungarian method” to highlight that it derives
from two older results, by Kőnig (Math Ann 77:453–465, 1916) and Egerváry (Mat Fiz Lapok 38:16–28, 1931). (A recently discovered posthumous paper by Jacobi (1804–1851) contains, however, a solution method that appears to be equivalent to the Hungarian algorithm.)
Our second topic concerns a combinatorial optimization problem, independently defined in satellite communication and in scheduling
theory, for which the same polynomial-time algorithm was independently published 30 years ago by various authors. It can be
shown that this algorithm directly implements another result contained in the same 1931 paper by Egerváry. We finally observe that the latter result also implies the famous Birkhoff–von Neumann theorem on doubly stochastic matrices.
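The assignment problem behind the Hungarian method can be stated concretely. The brute-force enumeration below is for illustration only (it inspects all n! permutations, whereas the Hungarian method runs in O(n^3) time); the cost matrix is made-up example data.

```python
from itertools import permutations

# Assignment problem: pick one entry per row and per column of a cost
# matrix so the total cost is minimal (e.g. assigning workers to tasks).
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

def brute_force_assignment(cost):
    """Try every permutation of columns (task orderings) and return the
    cheapest one with its total cost. Exponential; fine only for tiny n.
    The Hungarian method reaches the same optimum in polynomial time."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda cols: sum(cost[i][cols[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))

assignment, total = brute_force_assignment(cost)
# assignment[i] is the column (task) given to row (worker) i.
```

For the matrix above the optimum assigns row 0 to column 1, row 1 to column 0, and row 2 to column 2; the polynomial-time guarantee for finding such optima is exactly what Kuhn's Hungarian method, built on Egerváry's 1931 result, provides.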
20.
We show that the original classic randomized algorithms for approximate counting in NP-hard problems, such as counting the number of satisfiability assignments in a SAT problem, counting the number of feasible colorings in a graph, and calculating the permanent, typically fail: they either do not converge at all or are heavily biased (converge to a local extremum). Exceptions are convex counting problems, such as estimating the volume of a convex polytope. We also show how their performance can be dramatically improved by combining them with the classic splitting method, which is based on simulating multiple Markov chains simultaneously. We present several algorithms of the combined version, which we simply call the splitting algorithms. We show that the most advanced splitting version coincides with the cloning algorithm suggested earlier by the author. As compared to the randomized algorithms, the proposed splitting algorithms require very little warm-up time while running the MCMC from iteration to iteration, since the underlying Markov chains are already in steady state from the beginning. All that is required is fine-tuning, i.e., keeping the Markov chains in steady state while moving from iteration to iteration. We present extensive simulation studies with both the splitting and randomized algorithms for different NP-hard counting problems.