Parallel Speed-Up of Monte Carlo Methods for Global Optimization
Institution: School of Mathematics, Georgia Institute of Technology, Atlanta, Georgia 30332; Department of Mathematics, Simon Fraser University, Burnaby, British Columbia, Canada V5A 1S6
Abstract: We introduce the notion of the expected hitting time to a goal as a measure of the convergence rate of a Monte Carlo optimization method. The techniques developed apply to simulated annealing, genetic algorithms, and other stochastic search schemes. The expected hitting time can itself be calculated from the more fundamental complementary hitting time distribution (CHTD), which completely characterizes a Monte Carlo method. The CHTD is asymptotically a geometric series, (1/s)/(1 − λ), characterized by two parameters, s and λ, related to the search process in a simple way. The main utility of the CHTD is in comparing Monte Carlo algorithms. In particular, we show that independent, identical Monte Carlo algorithms run in parallel (IIP parallelism) can exhibit superlinear speedup. We give conditions under which this occurs and note that equally likely search is linearly sped up. Further, we observe that a serial Monte Carlo search can have an infinite expected hitting time, while the same algorithm, when parallelized, can have a finite expected hitting time. One consequence of the observed superlinear speedup is an improved uniprocessor algorithm obtained by the technique of in-code parallelism.
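The following worked sketch is not part of the original abstract; it only indicates how a superlinear speedup can arise from the geometric form quoted above. The copy count m and the speedup SU(m) are notation introduced here for illustration, and the asymptotic tail is assumed, for simplicity, to hold for all n.

If a single search has complementary hitting time distribution P(T > n) ≈ (1/s)λ^n, then
\[
E[T] \;=\; \sum_{n \ge 0} P(T > n) \;\approx\; \frac{1/s}{1 - \lambda}.
\]
For m independent, identical copies run in parallel, the goal is reached as soon as the first copy reaches it, so
\[
P\Bigl(\min_{1 \le i \le m} T_i > n\Bigr) \;=\; P(T > n)^m \;\approx\; \frac{\lambda^{mn}}{s^m},
\qquad
E\Bigl[\min_{1 \le i \le m} T_i\Bigr] \;\approx\; \frac{1/s^m}{1 - \lambda^m},
\]
giving a speedup
\[
SU(m) \;=\; \frac{E[T]}{E\bigl[\min_i T_i\bigr]} \;\approx\; s^{m-1}\,\frac{1 - \lambda^m}{1 - \lambda} \;\approx\; m\, s^{m-1} \quad (\lambda \text{ near } 1),
\]
which exceeds m whenever s > 1, while s = 1 (as in equally likely search) yields at most the linear factor m.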
Keywords:
This article is indexed in ScienceDirect and other databases.