Similar Documents
20 similar documents found
1.
In this paper we use Monte Carlo techniques to deal with a real-world delivery problem of a food company in Valencia (Spain). The problem is modeled as a set of 11 instances of the well-known Vehicle Routing Problem (VRP) with additional time constraints. Given that the VRP is NP-hard, a heuristic algorithm based on Monte Carlo techniques is implemented. The solution proposed by this heuristic algorithm achieves distance and money savings of about 20% and 5%, respectively. This work has been partially supported by the Plan de Incentivo a la Investigación/98 of the Universidad Politécnica de Valencia, under the project "Técnicas Monte Carlo aplicadas a Problemas de Rutas de Vehículos".
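The abstract does not spell out the heuristic; the sketch below shows one generic Monte Carlo construction heuristic for capacitated routing (repeated greedy-randomized route building, keeping the best solution), where the candidate-list rule, the parameters trials and k, and all function names are illustrative assumptions, not the authors' method.

```python
import math, random

def route_length(route, dist):
    """Total length of one depot-to-depot route (the depot is node 0)."""
    tour = [0] + route + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def monte_carlo_vrp(dist, demand, capacity, trials=10_000, k=3, seed=0):
    """Monte Carlo heuristic: build many greedy-randomized feasible
    solutions and keep the best.  Customers are nodes 1..n-1."""
    rng = random.Random(seed)
    n = len(dist)
    best_routes, best_cost = None, math.inf
    for _ in range(trials):
        unserved = set(range(1, n))
        routes = []
        while unserved:
            route, load, here = [], 0, 0
            while True:
                feas = [c for c in unserved
                        if load + demand[c] <= capacity]
                if not feas:
                    break
                feas.sort(key=lambda c: dist[here][c])
                nxt = rng.choice(feas[:k])   # randomize among k nearest
                route.append(nxt)
                load += demand[nxt]
                unserved.discard(nxt)
                here = nxt
            if not route:                    # a demand exceeds capacity
                raise ValueError("infeasible customer demand")
            routes.append(route)
        cost = sum(route_length(r, dist) for r in routes)
        if cost < best_cost:
            best_routes, best_cost = routes, cost
    return best_routes, best_cost
```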

2.
Hybrids of equidistribution and Monte Carlo methods of integration can achieve the superior accuracy of the former while allowing the simple error estimation methods of the latter. In particular, randomized (0, m, s)-nets in base b produce unbiased estimates of the integral, have a variance that tends to zero faster than 1/n for any square-integrable integrand, and have a variance that for finite n is never more than e ≈ 2.718 times as large as the Monte Carlo variance. Smaller bounds than e are known for special cases. Some very important (t, m, s)-nets have t > 0. The widely used Sobol' sequences are of this form, as are some recent and very promising nets due to Niederreiter and Xing. Much less is known about randomized versions of these nets, especially in s > 1 dimensions. This paper shows that scrambled (t, m, s)-nets enjoy the same properties as scrambled (0, m, s)-nets, except that the sampling variance is guaranteed only to be below b^t [(b+1)/(b−1)]^s times the Monte Carlo variance for a least-favorable integrand and finite n.
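As a runnable illustration of the randomized-net idea (a sketch using SciPy's scrambled Sobol' generator as a stand-in for the scrambled (t, m, s)-nets analyzed here), independent scramblings give unbiased estimates whose spread yields a Monte-Carlo-style error estimate:

```python
import numpy as np
from scipy.stats import qmc

s, m, reps = 4, 10, 16          # dimension, 2^m points per net, replicates

def f(x):
    # test integrand with known integral 2^{-s} over [0,1]^s
    return np.prod(x, axis=1)

estimates = []
for r in range(reps):
    sobol = qmc.Sobol(d=s, scramble=True, seed=r)   # independent scrambling
    x = sobol.random_base2(m)                       # n = 2^m points
    estimates.append(f(x).mean())                   # unbiased for the integral

est = np.mean(estimates)
stderr = np.std(estimates, ddof=1) / np.sqrt(reps)  # MC-style error estimate
print(f"estimate {est:.6f} +/- {stderr:.2e}, truth {2.0**-s:.6f}")
```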

3.
Earlier literature introduced a network algorithm for computing an exact test of independence in a two-way contingency table. This article adapts that algorithm to tests of quasi-symmetry in square tables. The algorithm is generally faster than competing Monte Carlo methods and essentially eliminates the need for asymptotic approximation of P values when assessing the goodness of fit of the quasi-symmetry model. A macro written for the R computing package is available for implementing the method.
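The network algorithm itself does not reduce to a short sketch, but the competing Monte Carlo approach is easy to outline; the following is an assumed permutation test of independence for a two-way table (the baseline setting, not this article's quasi-symmetry test):

```python
import numpy as np

def mc_independence_test(table, n_sim=10_000, seed=0):
    """Monte Carlo permutation test of independence for an r x c table:
    shuffle the column labels of the raw observations, rebuild the table,
    and compare chi-square statistics."""
    rng = np.random.default_rng(seed)
    table = np.asarray(table)
    rows = np.repeat(np.arange(table.shape[0]), table.sum(axis=1))
    cols = np.concatenate([np.repeat(np.arange(table.shape[1]), row)
                           for row in table])

    def chisq(t):
        expected = np.outer(t.sum(1), t.sum(0)) / t.sum()
        return ((t - expected) ** 2 / expected).sum()

    observed = chisq(table)
    hits = 0
    for _ in range(n_sim):
        perm = rng.permutation(cols)
        t = np.zeros_like(table)
        np.add.at(t, (rows, perm), 1)       # rebuild the permuted table
        hits += chisq(t) >= observed
    return (hits + 1) / (n_sim + 1)         # Monte Carlo P value

print(mc_independence_test([[12, 5, 3], [4, 9, 11], [6, 7, 10]]))
```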

4.
5.
Quasi-Monte Carlo integration rules, which are equal-weight sample averages of function values, have been popular for approximating multivariate integrals due to their superior convergence rate of order close to 1/N or better, compared with the order 1/√N of simple Monte Carlo algorithms. For practical applications it is desirable to be able to increase the total number of sampling points N, one or several at a time, until a desired accuracy is met, while keeping all existing evaluations. We show that although a convergence rate of order close to 1/N can be achieved for all values of N (e.g., by using a good lattice sequence), it is impossible to get better than order 1/N convergence for all values of N by adding equally weighted sampling points in this manner. We then prove that convergence of order N^(−α) for α > 1 can be achieved by weighting the sampling points, that is, by using a weighted compound integration rule. We apply our theory to lattice sequences and present some numerical results. The same theory also applies to digital sequences.
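As a minimal sketch of the lattice-rule setting (with an illustrative Korobov generating vector rather than the paper's good lattice sequences, and random shifting rather than the weighted compound rule):

```python
import numpy as np

def shifted_lattice_rule(f, n, s, a=76, shifts=8, seed=0):
    """Randomly shifted rank-1 lattice rule: points x_i = frac(i*z/n + shift).
    The Korobov generating vector z = (1, a, a^2, ...) mod n is an
    illustrative choice, not an optimized one."""
    rng = np.random.default_rng(seed)
    z = np.array([pow(a, j, n) for j in range(s)])
    i = np.arange(n)[:, None]
    estimates = []
    for _ in range(shifts):
        shift = rng.random(s)
        x = ((i * z) / n + shift) % 1.0      # n points in [0,1)^s
        estimates.append(f(x).mean())
    est = np.mean(estimates)
    err = np.std(estimates, ddof=1) / np.sqrt(shifts)
    return est, err

# Example: integrate prod_j x_j over [0,1]^5 (true value 2^-5 = 0.03125)
est, err = shifted_lattice_rule(lambda x: np.prod(x, axis=1), n=1021, s=5)
print(est, err)
```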

6.
The exchange of radiant energy (e.g., visible light, infrared radiation) in simple macroscopic physical models is sometimes approximated by the solution of a system of linear equations (energy transport equations). A variable in such a system represents the total energy emitted by a discrete surface element. The coefficients of these equations depend on the form factors between pairs of surface elements. A form factor is the fraction of energy leaving a surface element which directly reaches another surface element. Form factors depend only on the geometry of the physical model. Determining good approximations of form factors is the most time-consuming step in these methods when the geometry of the model is complex due to occlusions. In this paper we introduce a new characterization of form factors based on concepts from integral geometry. Using this characterization, we develop a new and asymptotically efficient Monte Carlo method for the simultaneous approximation of all form factors in an occluded polyhedral environment. The approximation error is bounded without recourse to special hypotheses. This algorithm is, for typical scenes, one order of magnitude faster than methods based on the hemisphere paradigm or on Monte Carlo ray-shooting. Let A be any set of convex nonintersecting polygons in R^3 with a total of n edges and vertices. Let ε be the error parameter and let δ be the confidence parameter. We compute an approximation of each nonzero form factor such that, with probability at least 1 − δ, the absolute approximation error is less than ε. The expected running time of the algorithm depends on n, ε, δ, and K, where K is the expected number of regular intersections for a random projection of A. The number of regular intersections can range from 0 to quadratic in n, but for typical applications it is much smaller than quadratic. The expectation is with respect to the random choices of the algorithm, and the result holds for any input. Received March 17, 1995, and in revised form April 10, 1996.
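For contrast, here is a minimal sketch of the Monte Carlo baseline mentioned above: estimating a single unoccluded form factor by sampling point pairs on two parallel unit squares. This is the kind of method the paper improves on, not its integral-geometry algorithm.

```python
import numpy as np

def form_factor_mc(d=1.0, n=200_000, seed=0):
    """Monte Carlo estimate of the form factor between two parallel unit
    squares separated by distance d.  The estimator averages the kernel
    cos(t1)*cos(t2)/(pi r^2) over uniform point pairs; for parallel
    patches cos(t1) = cos(t2) = d/r, so the kernel is d^2/(pi r^4)."""
    rng = np.random.default_rng(seed)
    p1 = rng.random((n, 2))                   # points on patch 1 (z = 0)
    p2 = rng.random((n, 2))                   # points on patch 2 (z = d)
    r2 = ((p2 - p1) ** 2).sum(axis=1) + d * d # squared distance
    kernel = (d * d) / (np.pi * r2 * r2)
    return kernel.mean()                      # both patch areas equal 1

print(form_factor_mc())   # roughly 0.2 for d = 1
```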

7.
The rank-one modification algorithm for the LDM^t factorization was given by Bennett [1]. His method, however, can break down even when the matrix is nonsingular and well-conditioned. We introduce a pivoting strategy for avoiding possible breakdown as well as for suppressing error growth in the modification process. The method is based on a symbolic formula for the rank-one modification of the factorization of a possibly singular nonsymmetric matrix. A new symbolic formula is also obtained for the inverses of the factor matrices. Repeated application of our method produces an LDM^t-like product-form factorization of a matrix. A numerical example is given to illustrate our pivoting method. An incomplete factorization algorithm is also introduced for updating positive definite matrices, useful in quasi-Newton methods, where the Fletcher and Powell algorithm [2] and the Gill, Murray and Saunders algorithm [4] are usually used. This paper was presented at the Japan SIAM Annual Meeting held at the University of Tokyo, Japan, October 7–9, 1991.
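A sketch of the object being updated: a plain, unpivoted LDM^t factorization, with the rank-one modification done naively by O(n^3) refactorization (Bennett's O(n^2) update and the paper's pivoting strategy are not reproduced here):

```python
import numpy as np

def ldmt(A):
    """Unpivoted LDM^t factorization: A = L @ D @ M.T with L, M unit lower
    triangular and D diagonal (a plain Doolittle-style sketch)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L, M, d = np.eye(n), np.eye(n), np.zeros(n)
    for k in range(n):
        d[k] = A[k, k]
        L[k+1:, k] = A[k+1:, k] / d[k]
        M[k+1:, k] = A[k, k+1:] / d[k]
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], M[k+1:, k]) * d[k]
    return L, np.diag(d), M

# Rank-one modification A + u v^T, done naively by refactoring in O(n^3);
# Bennett's algorithm updates the existing factors in O(n^2) instead.
rng = np.random.default_rng(1)
A = rng.random((4, 4)) + 4 * np.eye(4)      # keep pivots away from zero
u, v = rng.random(4), rng.random(4)
L, D, M = ldmt(A + np.outer(u, v))
print(np.allclose(L @ D @ M.T, A + np.outer(u, v)))   # True
```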

8.
In the boolean decision tree model there is at least a linear gap between the Monte Carlo and the Las Vegas complexity of a function, depending on the error probability. We prove for a large class of read-once formulae that this trivial speed-up is the best that a Monte Carlo algorithm can achieve. For every formula F belonging to that class we show that the Monte Carlo complexity of F with two-sided error p is (1 − 2p)R(F), and with one-sided error p is (1 − p)R(F), where R(F) denotes the Las Vegas complexity of F. The result follows from a general lower bound that we derive on the Monte Carlo complexity of these formulae. This bound is analogous to the lower bound due to Saks and Wigderson on their Las Vegas complexity.

9.
Computer simulation with Monte Carlo methods is an important tool for investigating the function and equilibrium properties of many biological and soft-matter materials soluble in solvents. The appropriate treatment of long-range electrostatic interactions is essential for these charged systems, but remains a challenging problem for large-scale simulations. We develop an efficient Barnes–Hut treecode algorithm for electrostatic evaluation in Monte Carlo simulations of Coulomb many-body systems. The algorithm is based on a divide-and-conquer strategy and a fast update of the octree data structure in each trial move through a local adjustment procedure. We test the accuracy of the tree algorithm and use it to perform computer simulations of the electric double layer near a spherical interface. It is shown that the computational cost of the Monte Carlo method with treecode acceleration scales as log N per move. For a typical system with ten thousand particles, the new algorithm improves the speed by two orders of magnitude over direct summation.
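A sketch of the direct-summation baseline: the O(N) energy difference for one Metropolis trial move, which the paper's treecode with local octree updates reduces to O(log N) per move (the unit charges, units, and box geometry below are assumptions of this toy, not the paper's setup):

```python
import numpy as np

def delta_energy_direct(pos, q, i, new_pos):
    """Energy change for moving particle i to new_pos, by direct summation
    over all other charges: the O(N)-per-move baseline (Gaussian units,
    unit dielectric, no boundary conditions)."""
    mask = np.arange(len(q)) != i
    r_old = np.linalg.norm(pos[mask] - pos[i], axis=1)
    r_new = np.linalg.norm(pos[mask] - new_pos, axis=1)
    return q[i] * np.sum(q[mask] / r_new - q[mask] / r_old)

# One Metropolis trial move using the direct-sum energy difference.
rng = np.random.default_rng(0)
N, beta = 1000, 1.0
pos = rng.random((N, 3)) * 10.0
q = rng.choice([-1.0, 1.0], size=N)
i = int(rng.integers(N))
trial = pos[i] + rng.normal(scale=0.1, size=3)
dE = delta_energy_direct(pos, q, i, trial)
if dE <= 0 or rng.random() < np.exp(-beta * dE):   # Metropolis acceptance
    pos[i] = trial
```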

10.
Factoring wavelet transforms into lifting steps
This article is essentially tutorial in nature. We show how any discrete wavelet transform, or two-band subband filtering with finite filters, can be decomposed into a finite sequence of simple filtering steps, which we call lifting steps but which are also known as ladder structures. This decomposition corresponds to a factorization of the polyphase matrix of the wavelet or subband filters into elementary matrices. That such a factorization is possible is well known to algebraists (and expressed by the formula SL(n; R[z, z⁻¹]) = E(n; R[z, z⁻¹])); it is also used in linear systems theory in the electrical engineering community. We present here a self-contained derivation, building the decomposition from basic principles such as the Euclidean algorithm, with a focus on applying it to wavelet filtering. This factorization provides an alternative to the lattice factorization, with the advantage that it can also be used in the biorthogonal, i.e., non-unitary, case. Like the lattice factorization, the decomposition presented here asymptotically reduces the computational complexity of the transform by a factor of two. It has other applications, such as the possibility of defining a wavelet-like transform that maps integers to integers.
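A minimal sketch of lifting on the simplest example: an integer-to-integer Haar-style transform (the S-transform) built from one predict step and one update step. This illustrates the integers-to-integers application, not the article's general polyphase factorization.

```python
import numpy as np

def haar_lift_int(x):
    """One level of an integer-to-integer Haar-style wavelet transform via
    lifting (the S-transform).  x must have even length."""
    s, d = x[0::2].copy(), x[1::2].copy()
    d = d - s                     # predict step: detail = odd - even
    s = s + d // 2                # update step: keeps (rounded) averages
    return s, d

def haar_unlift_int(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    s = s - d // 2
    d = d + s
    x = np.empty(2 * len(s), dtype=s.dtype)
    x[0::2], x[1::2] = s, d
    return x

x = np.array([5, 7, 2, 4, 9, 1, 3, 3])
s, d = haar_lift_int(x)
assert np.array_equal(haar_unlift_int(s, d), x)   # perfect reconstruction
```

Because each lifting step is inverted simply by running it backwards with the sign flipped, rounding inside the steps costs nothing: the transform stays exactly invertible on integers.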

11.
It has been recognized through theory and practice that uniformly distributed deterministic sequences provide more accurate results than purely random sequences. A quasi-Monte Carlo (QMC) variant of the multi-level single-linkage (MLSL) algorithm for global optimization is compared with the original stochastic MLSL algorithm on a number of test problems of various complexities, with an emphasis on high-dimensional problems. Two different low-discrepancy sequences (LDS) are used and their efficiency is analysed. It is shown that the application of LDS can significantly increase the efficiency of MLSL. The dependence of the sample size required for locating global minima on the number of variables is examined. It is found that higher confidence in the obtained solution, and possibly a reduction in computational time, can be achieved by increasing the total sample size N; N should also be increased as the dimensionality of the problem grows. For high-dimensional problems clustering methods become inefficient; for such problems a multistart method can be more computationally expedient.
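A simplified sketch of the QMC-multistart idea, using scrambled Sobol' start points and a local optimizer. MLSL's clustering step is deliberately omitted, so this is the plain multistart variant mentioned at the end of the abstract, not MLSL itself:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def qmc_multistart(f, bounds, n_starts=64, seed=0):
    """Multistart local optimization from low-discrepancy (Sobol') start
    points; every sample point is used as a start point."""
    lo, hi = np.array(bounds, dtype=float).T
    sobol = qmc.Sobol(d=len(lo), scramble=True, seed=seed)
    starts = qmc.scale(sobol.random(n_starts), lo, hi)
    best = None
    for x0 in starts:
        res = minimize(f, x0, bounds=list(zip(lo, hi)))
        if best is None or res.fun < best.fun:
            best = res
    return best

# Example: six-hump camel function (global minimum about -1.0316)
camel = lambda x: ((4 - 2.1 * x[0]**2 + x[0]**4 / 3) * x[0]**2
                   + x[0] * x[1] + (-4 + 4 * x[1]**2) * x[1]**2)
print(qmc_multistart(camel, [(-3, 3), (-2, 2)]).fun)
```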

12.
Combinatorial Sublinear-Time Fourier Algorithms
We study the problem of estimating the best k-term Fourier representation for a given frequency-sparse signal (i.e., vector) A of length N ≫ k. More explicitly, we investigate how to deterministically identify k of the largest-magnitude frequencies of Â, and estimate their coefficients, in poly(k, log N) time. Randomized sublinear-time algorithms with a small (controllable) probability of failure for each processed signal exist for solving this problem (Gilbert et al. in ACM STOC, pp. 152–161, 2002; Proceedings of SPIE Wavelets XI, 2005). In this paper we develop the first known deterministic sublinear-time sparse Fourier transform algorithm which is guaranteed to produce accurate results. As an added bonus, a simple relaxation of our deterministic Fourier result leads to a new Monte Carlo Fourier algorithm with runtime/sampling bounds similar to the current best randomized Fourier method (Gilbert et al. in Proceedings of SPIE Wavelets XI, 2005). Finally, the Fourier algorithm we develop here implies a simpler optimized version of the deterministic compressed sensing method previously developed in (Iwen in Proc. of ACM–SIAM Symposium on Discrete Algorithms (SODA'08), 2008).
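For orientation, the superlinear-time baseline is easy to state: take a full FFT and keep the k largest-magnitude frequencies. The sublinear algorithms of the paper avoid ever computing the full transform; this sketch is only the baseline:

```python
import numpy as np

def best_k_term(a, k):
    """Best k-term Fourier approximation via a full O(N log N) FFT."""
    ahat = np.fft.fft(a) / len(a)
    top = np.argsort(np.abs(ahat))[-k:]          # k largest-magnitude bins
    return {int(f): ahat[f] for f in top}

# A 3-sparse signal: the recovered frequencies and coefficients match.
N = 1024
t = np.arange(N)
a = (2.0 * np.exp(2j * np.pi * 50 * t / N)
     + 1.0 * np.exp(2j * np.pi * 200 * t / N)
     - 0.5 * np.exp(2j * np.pi * 700 * t / N))
print(best_k_term(a, 3))   # coefficients near 2, 1, -0.5 at bins 50, 200, 700
```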

13.
The problem of clustering a group of observations according to some objective function (e.g., K-means clustering, variable selection) or a density (e.g., the posterior from a Dirichlet process mixture model prior) can be cast in the framework of Monte Carlo sampling for cluster indicators. We propose a new method called the evolutionary Monte Carlo clustering (EMCC) algorithm, in which three new "crossover moves," based on swapping and reshuffling subcluster intersections, are proposed. We apply the EMCC algorithm to several clustering problems, including Bernoulli clustering, biological sequence motif clustering, BIC-based variable selection, and mixture-of-normals clustering. We compare EMCC's performance, both as a sampler and as a stochastic optimizer, with Gibbs sampling, "split–merge" Metropolis–Hastings algorithms, K-means clustering, and the MCLUST algorithm.
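The precise crossover moves are defined in the paper; the following is only an illustrative guess at the flavor of a subcluster-intersection swap between two cluster-indicator "chromosomes", with all names and the move's exact form being assumptions:

```python
import numpy as np

def swap_intersection_crossover(z1, z2, rng):
    """One population-based crossover-style move for cluster-indicator
    vectors: pick a cluster from each chromosome and swap the labels of
    the points in their intersection.  Illustrative only; not the EMCC
    paper's exact move."""
    z1, z2 = z1.copy(), z2.copy()
    c1 = rng.choice(np.unique(z1))
    c2 = rng.choice(np.unique(z2))
    both = (z1 == c1) & (z2 == c2)      # the subcluster intersection
    if both.any():
        z1[both], z2[both] = c2, c1     # exchange labels across chromosomes
    return z1, z2

rng = np.random.default_rng(0)
z1 = rng.integers(0, 3, size=12)        # two cluster-indicator vectors
z2 = rng.integers(0, 3, size=12)
print(*swap_intersection_crossover(z1, z2, rng), sep="\n")
```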

14.
Let a given collection of sets have size N, measured by the sum of the cardinalities. Yellin and Jutla presented an algorithm which constructed the partial order induced by the subset relation (a "subset graph") in a claimed O(N²/log N) operations over a dictionary ADT, and exhibited a collection whose subset graph has Θ(N²/log² N) edges. This paper first establishes a matching upper bound on the number of edges in a subset graph. It also presents a finer analysis of the algorithm, which confirms the claimed upper bound and shows it to be tight. A simple implementation requiring O(1) bit-parallel operations per ADT operation is presented, along with a variant of the algorithm with an implementation requiring O(N²/log N) RAM operations.
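The bit-parallel flavor of such implementations can be sketched with Python's arbitrary-precision integers as bitsets, where each containment test is one AND plus a comparison. This is a naive sketch, quadratic in the number of sets, not the Yellin–Jutla dictionary-based algorithm:

```python
def subset_graph(sets):
    """Edges (i, j) with sets[i] a subset of sets[j], using big integers
    as bitsets so each containment test is a bit-parallel AND."""
    universe = {x for s in sets for x in s}
    index = {x: b for b, x in enumerate(universe)}
    masks = [sum(1 << index[x] for x in s) for s in sets]
    return [(i, j)
            for i, mi in enumerate(masks)
            for j, mj in enumerate(masks)
            if i != j and mi & mj == mi]      # mi is a subset of mj

sets = [{1, 2}, {1, 2, 3}, {3}, {1, 2, 3, 4}]
print(subset_graph(sets))   # [(0, 1), (0, 3), (1, 3), (2, 1), (2, 3)]
```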

15.
Mean-shift is an iterative procedure often used as a nonparametric clustering algorithm that defines clusters via the modal regions of a density function. The algorithm is conceptually appealing and makes assumptions neither about the shape of the clusters nor about their number. However, with a complexity of O(n²) per iteration, it does not scale well to large datasets. We propose a novel algorithm which performs density-based clustering much more quickly than mean-shift, yet delivers virtually identical results. This algorithm combines subsampling and a stochastic approximation procedure to achieve a potential complexity of O(n) at each step. Its convergence is established. Its performance is evaluated using simulations and applications to image segmentation, where the algorithm was tens to hundreds of times faster than mean-shift while causing negligible amounts of clustering error. The algorithm can be combined with existing approaches to further accelerate clustering.
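A sketch of the O(n²)-per-iteration procedure being accelerated, plain Gaussian-kernel mean-shift (the paper's subsampling/stochastic-approximation scheme is not reproduced here, and the bandwidth below is an arbitrary choice):

```python
import numpy as np

def mean_shift(X, bandwidth=0.5, iters=30):
    """Plain Gaussian-kernel mean-shift: every point moves to the kernel-
    weighted mean of all data points until it settles near a density mode.
    Each sweep costs O(n^2)."""
    Y = X.copy()
    for _ in range(iters):
        # pairwise squared distances between current and original points
        d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / bandwidth**2)
        Y = (w @ X) / w.sum(axis=1, keepdims=True)
    return Y   # points collapsed near modes; cluster by merging nearby rows

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
modes = mean_shift(X)
print(np.unique(modes.round(1), axis=0))   # roughly two distinct modes
```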

16.
《Journal of Complexity》2002,18(3):683-701
We prove in a constructive way that multivariate integration in appropriate weighted Sobolev classes is strongly tractable and that the ε-exponent of strong tractability is 1 (which is the best-possible value) under a stronger assumption than Sloan and Woźniakowski's. We show that quasi-Monte Carlo algorithms based on the Sobol sequence and the Halton sequence achieve the convergence order O(n^(−1+δ)) for any δ > 0, independent of the dimension, with a worst-case deterministic guarantee (where n is the number of function evaluations). This implies that quasi-Monte Carlo algorithms based on the Sobol and Halton sequences converge faster, and are therefore superior to Monte Carlo methods, independent of the dimension, for integrands in suitable weighted Sobolev classes.
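A quick empirical illustration of the two rates (a sketch with an assumed weighted-product test integrand; the coordinate weights 1/(j+1)² are an arbitrary illustrative choice, not the paper's weights):

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    """Product integrand on [0,1]^8; each factor has mean 1, so the
    true integral is 1."""
    c = (1 + np.arange(x.shape[1])) ** 2
    return np.prod(1.0 + (x - 0.5) / c, axis=1)

rng = np.random.default_rng(0)
for n in (2**10, 2**14, 2**18):
    xq = qmc.Halton(d=8, scramble=False, seed=0).random(n)  # first n points
    xm = rng.random((n, 8))                                 # plain MC points
    print(n, abs(f(xq).mean() - 1), abs(f(xm).mean() - 1))
# The Halton error should shrink roughly like 1/n; MC like 1/sqrt(n).
```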

17.
Given an n × n symmetric, possibly indefinite matrix A, a modified Cholesky algorithm computes a factorization of the positive definite matrix A + E, where E is a correction matrix. Since the factorization is often used to compute a Newton-like downhill search direction for an optimization problem, the goals are to compute the modification without much additional cost and to keep A + E well-conditioned and close to A. Gill, Murray and Wright introduced a stable algorithm with a bound of ||E||_2 = O(n²). An algorithm of Schnabel and Eskow further guarantees ||E||_2 = O(n). We present variants that also ensure ||E||_2 = O(n). Moré and Sorensen and Cheng and Higham used the block LBL^T factorization with blocks of order 1 or 2. Algorithms in this class have a worst-case cost O(n³) higher than the standard Cholesky factorization. We present a new approach using a sandwiched LTL^T-LBL^T factorization, with T tridiagonal, that guarantees a modification cost of at most O(n²). H.-r. Fang's work was supported by National Science Foundation Grant CCF 0514213. D. P. O'Leary's work was supported by National Science Foundation Grant CCF 0514213 and Department of Energy Grant DEFG0204ER25655.
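For concreteness, the simplest member of the modified-Cholesky family, adding τI until the factorization succeeds (so E = τI), is sketched below. This is the textbook diagonal-shift approach, not the paper's sandwiched LTL^T-LBL^T algorithm:

```python
import numpy as np

def modified_cholesky(A, beta=1e-3, max_tries=60):
    """Diagonal-shift modified Cholesky: add tau*I until np.linalg.cholesky
    succeeds, so the correction is E = tau*I."""
    tau = 0.0 if np.min(np.diag(A)) > 0 else beta - np.min(np.diag(A))
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(A + tau * np.eye(len(A)))
            return L, tau          # A + E = L L^T with E = tau*I
        except np.linalg.LinAlgError:
            tau = max(2 * tau, beta)
    raise np.linalg.LinAlgError("no positive definite shift found")

A = np.array([[1.0, 2.0], [2.0, 1.0]])     # indefinite (eigenvalues 3, -1)
L, tau = modified_cholesky(A)
print(tau, np.allclose(L @ L.T, A + tau * np.eye(2)))
```

The tradeoff named in the abstract is visible even here: the shift is cheap to compute, but nothing keeps E small or A + E close to A, which is exactly what the more careful algorithms guarantee.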

18.
We adapt the multilevel Monte Carlo method introduced by M. Giles (Multilevel Monte Carlo path simulation, Oper. Res. 56(3):607–617, 2008) to SDEs with additive fractional noise of Hurst parameter H > 1/2. For the approximation of a Lipschitz functional of the terminal state of the SDE we construct a multilevel estimator based on the Euler scheme. This estimator achieves a prescribed root mean square error of order ε with a computational effort of order ε^(−2).
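A sketch of the multilevel estimator with Euler coupling, written for standard Brownian noise for simplicity (simulating the additive fractional noise with H > 1/2 treated in the paper requires more machinery); level l uses 2^l Euler steps, and the fine and coarse paths share the same increments:

```python
import numpy as np

def mlmc_euler(payoff, drift, sigma, x0, T, levels, n_samples, seed=0):
    """Multilevel Monte Carlo with the Euler scheme in the standard
    Brownian setting.  Telescoping sum: E[P_L] is estimated as E[P_l0]
    plus level corrections E[P_l - P_{l-1}]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for l, n in zip(levels, n_samples):
        m = 2 ** l
        dt = T / m
        dw = rng.normal(scale=np.sqrt(dt), size=(n, m))
        xf = np.full(n, x0, dtype=float)         # fine path, 2^l steps
        for k in range(m):
            xf += drift(xf) * dt + sigma * dw[:, k]
        if l == levels[0]:
            total += payoff(xf).mean()           # coarsest level: plain MC
        else:
            xc = np.full(n, x0, dtype=float)     # coarse path, 2^(l-1) steps
            dwc = dw[:, 0::2] + dw[:, 1::2]      # coupled increments
            for k in range(m // 2):
                xc += drift(xc) * (2 * dt) + sigma * dwc[:, k]
            total += (payoff(xf) - payoff(xc)).mean()   # level correction
    return total

# Example: E[max(X_T - 1, 0)] for dX = -0.5 X dt + 0.4 dW, X_0 = 1.2.
est = mlmc_euler(lambda x: np.maximum(x - 1.0, 0.0),
                 lambda x: -0.5 * x, 0.4, 1.2, 1.0,
                 levels=range(2, 8),
                 n_samples=[40_000 // 2**l for l in range(6)])
print(est)
```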

19.
Let N ∈ ℕ and let χ be a Dirichlet character modulo N. Let f be a modular form with respect to the group Γ0(N), with multiplier χ and weight k. Let F be the L-function associated with f, normalized in such a way that F(s) satisfies a functional equation where s reflects in 1 − s. The modular forms f for which F belongs to the extended Selberg class S# are characterized. For these forms, the factorization of F into primitive elements of S# is investigated. In particular, it is proved that if f is a cusp form and F ∈ S#, then F is almost primitive (i.e., if F = PG is a factorization with P, G ∈ S# and the degree of P is < 2, then P is a Dirichlet polynomial). It is also proved that the conductor of the polynomial factor P is bounded by N. If f belongs to the space generated by newforms and N ≤ 4, then F is actually primitive (i.e., P is a constant).

20.
Inferential procedures for the difference between two multivariate normal mean vectors based on incomplete data matrices with different monotone patterns are developed. Assuming that the population covariance matrices are equal, a pivotal quantity similar to the Hotelling T² statistic is proposed, and its approximate distribution is derived. Hypothesis testing and confidence estimation of the difference between the mean vectors based on the approximate distribution are outlined. The validity of the approximation is investigated using Monte Carlo simulation. Monte Carlo studies indicate that the approximate method is very satisfactory even for small samples. A multiple comparison procedure is outlined, and the proposed methods are illustrated using an example.
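A sketch of the complete-data analogue that the proposed pivotal quantity generalizes: the two-sample Hotelling T² statistic with its exact F reference distribution, plus the kind of Monte Carlo null-rejection check the paper uses for validation (the sample sizes and dimension below are arbitrary):

```python
import numpy as np
from scipy import stats

def hotelling_two_sample(x, y):
    """Two-sample Hotelling T^2 with pooled covariance (complete-data case).
    Returns T^2 and its exact-F p-value."""
    n1, n2, p = len(x), len(y), x.shape[1]
    S = ((n1 - 1) * np.cov(x.T) + (n2 - 1) * np.cov(y.T)) / (n1 + n2 - 2)
    d = x.mean(0) - y.mean(0)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
    f = t2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))
    return t2, stats.f.sf(f, p, n1 + n2 - p - 1)

# Monte Carlo check of the null rejection rate, as in the paper's validation:
rng = np.random.default_rng(0)
rejections = sum(
    hotelling_two_sample(rng.normal(size=(15, 3)),
                         rng.normal(size=(12, 3)))[1] < 0.05
    for _ in range(2000)
)
print(rejections / 2000)   # should be close to the nominal 0.05
```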
