Similar Documents
20 similar documents found.
1.
A computational technique based on the method of path integrals is studied with a view to finding approximate solutions of a class of two-point boundary-value problems. These solutions are rough approximations obtained by Monte Carlo sampling. From a computational point of view, however, once these rough solutions are obtained for any nonlinear case, they serve as good starting approximations for refining the solutions to higher accuracy. Numerical results for a few examples are also shown.
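The path-integral construction itself is not given in the abstract, but the flavour of a Monte Carlo "rough solution" to a two-point boundary-value problem can be sketched with a random-walk estimator for the simplest linear case, u'' = 0 on [0, 1] with fixed boundary values. The grid size, walk budget, and test problem below are illustrative assumptions, not the paper's method.

```python
import random

def mc_bvp_rough_solution(x, a=0.0, b=1.0, n_grid=50, n_walks=1000):
    """Rough Monte Carlo estimate of u(x) for u'' = 0 on [0, 1] with
    u(0) = a and u(1) = b: a symmetric random walk started at the grid node
    nearest x is absorbed at either boundary, and the boundary value at
    absorption is averaged over the walks."""
    start = round(x * n_grid)
    total = 0.0
    for _ in range(n_walks):
        node = start
        while 0 < node < n_grid:
            node += random.choice((-1, 1))
        total += a if node == 0 else b
    return total / n_walks

# Exact solution is u(x) = a + (b - a) * x, so this should be roughly 0.3;
# such a rough estimate is only meant as a starting approximation.
print(mc_bvp_rough_solution(0.3))
```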

2.
Monte Carlo optimization
Monte Carlo optimization techniques for solving mathematical programming problems have been the focus of some debate. This note reviews the debate and puts these stochastic methods in their proper perspective.
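For readers who want a concrete reference point for that debate, the simplest Monte Carlo optimization scheme is pure random search over a bounded region. The objective, bounds, and sample budget below are illustrative assumptions rather than anything taken from the note.

```python
import random

def random_search(objective, bounds, n_samples=10000, seed=0):
    """Pure random search: sample points uniformly in the box `bounds`
    and keep the best objective value seen."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Toy usage: minimise a shifted quadratic over the box [-5, 5] x [-5, 5].
x_star, f_star = random_search(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                               [(-5, 5), (-5, 5)])
```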

3.
4.
Importance sampling methods can be iterated like MCMC algorithms, while being more robust against dependence and starting values. The population Monte Carlo principle consists of iterated generations of importance samples, with importance functions depending on the previously generated importance samples. The advantage over MCMC algorithms is that the scheme is unbiased at any iteration and can thus be stopped at any time, while iterations improve the performance of the importance function, thus leading to adaptive importance sampling. We illustrate this method on a mixture example with multiscale importance functions. A second example reanalyzes the ion channel model using an importance sampling scheme based on a hidden Markov representation, and compares population Monte Carlo with a corresponding MCMC algorithm.
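A minimal sketch of the population Monte Carlo loop is given below: particles are perturbed by a Gaussian kernel, importance-weighted against the target using the mixture of kernels as the proposal density, and resampled. The fixed kernel scale, the bimodal toy target, and the particle budget are assumptions for illustration; the multiscale importance functions of the paper are not reproduced.

```python
import numpy as np

def population_monte_carlo(log_target, n_particles=1000, n_iter=10, scale=1.0, seed=0):
    """Minimal population Monte Carlo loop: propose from Gaussian kernels
    centred at the current particles, weight against the target using the
    kernel mixture as proposal density, then resample."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 5.0, n_particles)            # diffuse initial sample
    for _ in range(n_iter):
        proposals = particles + scale * rng.standard_normal(n_particles)
        # Mixture proposal density evaluated at each proposal.
        kernels = np.exp(-0.5 * ((proposals[:, None] - particles[None, :]) / scale) ** 2)
        log_prop = np.log(kernels.mean(axis=1) / (scale * np.sqrt(2.0 * np.pi)))
        log_w = log_target(proposals) - log_prop
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        particles = rng.choice(proposals, size=n_particles, p=w)   # resample
    return particles

# Toy target: a two-component Gaussian mixture (log density up to a constant).
log_target = lambda x: np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)
samples = population_monte_carlo(log_target)
```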

5.
Automating the neighbourhood selection process in an iterative approach that uses multiple heuristics is not a trivial task. Hyper-heuristics are search methodologies that aim not only to provide a general framework for solving problem instances at different difficulty levels in a given domain, but also to extend the level of generality so that different problems from different domains can be solved. Indeed, a major challenge is to explore how the heuristic design process might be automated. Almost all existing iterative selection hyper-heuristics performing single-point search contain two successive stages: heuristic selection and move acceptance. Different operators can be used in either stage. Recent studies explore ways of introducing learning mechanisms into the search process to improve the performance of hyper-heuristics. In this study, a broad empirical analysis is performed comparing Monte Carlo based hyper-heuristics for solving capacitated examination timetabling problems. One of these hyper-heuristics overlaps the two stages and presents them in a single algorithmic body: a learning heuristic selection method (L) operates in harmony with a simulated annealing move acceptance method using reheating (SA) through shared variables. Yet the heuristic selection and move acceptance methods can be separated, as the proposed approach respects the common selection hyper-heuristic framework. The experimental results show that simulated annealing with reheating as a hyper-heuristic move acceptance method has significant potential. On the other hand, the learning hyper-heuristic using simulated annealing with reheating move acceptance (L-SA) performs poorly due to certain weaknesses, such as the choice of rewarding mechanism and the evaluation of utility values for heuristic selection, compared with some other hyper-heuristics in examination timetabling. Trials with other heuristic selection methods confirm that the best choice of heuristic selection method to pair with simulated annealing with reheating move acceptance in examination timetabling is a previously proposed strategy known as the choice function.
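A stripped-down selection hyper-heuristic of this general kind can be sketched as follows: a low-level heuristic is picked by roulette wheel over learned scores and the resulting move is accepted with a simulated annealing rule. The additive reward, the geometric cooling without reheating, and the toy bit-flip domain are all simplifying assumptions; this is not the examination-timetabling framework of the study.

```python
import math
import random

def selection_hyper_heuristic(initial, heuristics, cost, n_steps=10000,
                              t0=1.0, cooling=0.999, seed=0):
    """Single-point selection hyper-heuristic sketch: choose a low-level
    heuristic by roulette wheel over learned scores, accept the move with a
    simulated annealing rule, and reward the heuristic when it improves the
    solution.  (No reheating is implemented in this sketch.)"""
    rng = random.Random(seed)
    current, temp = initial, t0
    scores = [1.0] * len(heuristics)
    for _ in range(n_steps):
        i = rng.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[i](current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = candidate
        if delta < 0:
            scores[i] += 1.0                     # simple additive reward
        temp = max(temp * cooling, 1e-6)
    return current

# Toy domain: minimise the number of 1-bits in a binary string.
def flip_random_bit(s):
    s = list(s)
    s[random.randrange(len(s))] ^= 1
    return s

def flip_two_bits(s):
    return flip_random_bit(flip_random_bit(s))

best = selection_hyper_heuristic([1] * 20, [flip_random_bit, flip_two_bits], cost=sum)
```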

6.
The problem of clustering a group of observations according to some objective function (e.g., K-means clustering, variable selection) or a density (e.g., the posterior from a Dirichlet process mixture model prior) can be cast in the framework of Monte Carlo sampling for cluster indicators. We propose a new method called the evolutionary Monte Carlo clustering (EMCC) algorithm, in which three new "crossover moves," based on swapping and reshuffling subcluster intersections, are proposed. We apply the EMCC algorithm to several clustering problems including Bernoulli clustering, biological sequence motif clustering, BIC-based variable selection, and mixture-of-normals clustering. We compare EMCC's performance both as a sampler and as a stochastic optimizer with Gibbs sampling, "split-merge" Metropolis–Hastings algorithms, K-means clustering, and the MCLUST algorithm.
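As a point of comparison for what "Monte Carlo sampling for cluster indicators" means in the simplest case, the sketch below runs a Metropolis sampler over cluster labels with single-point label changes, targeting a Boltzmann distribution on the within-cluster sum of squares. The EMCC crossover moves themselves are not reproduced; the objective, inverse temperature, and toy data are assumptions.

```python
import numpy as np

def mc_cluster_sampler(X, k=3, beta=2.0, n_steps=5000, seed=0):
    """Metropolis sampler over cluster indicators with single-point label
    changes, targeting exp(-beta * within-cluster sum of squares)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    labels = rng.integers(0, k, n)

    def wss(lab):
        total = 0.0
        for c in range(k):
            pts = X[lab == c]
            if len(pts):
                total += ((pts - pts.mean(axis=0)) ** 2).sum()
        return total

    energy = wss(labels)
    for _ in range(n_steps):
        i, new = rng.integers(n), rng.integers(k)
        proposal = labels.copy()
        proposal[i] = new
        e_new = wss(proposal)
        if e_new <= energy or rng.random() < np.exp(-beta * (e_new - energy)):
            labels, energy = proposal, e_new
    return labels

# Toy data: three well-separated clusters in the plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (30, 2)) for m in (-2.0, 0.0, 2.0)])
labels = mc_cluster_sampler(X)
```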

7.
This contribution to the debate on Monte Carlo optimization methods shows that there exist techniques that may be useful in many technical applications.

8.
We propose a modification, based on the RESTART (repetitive simulation trials after reaching thresholds) and DPR (dynamics probability redistribution) rare event simulation algorithms, of the standard diffusion Monte Carlo (DMC) algorithm. The new algorithm has a lower variance per workload, regardless of the regime considered. In particular, it makes it feasible to use DMC in situations where the "naïve" generalization of the standard algorithm would be impractical due to an exponential explosion of its variance. We numerically demonstrate the effectiveness of the new algorithm on a standard rare event simulation problem (probability of an unlikely transition in a Lennard-Jones cluster), as well as a high-frequency data assimilation problem.
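RESTART-style splitting can be illustrated on a much simpler rare event than the Lennard-Jones transition: the probability that a Brownian path crosses a high level before a fixed time, estimated as a product of conditional stage probabilities with trajectories replicated at intermediate thresholds. The levels, time step, and per-stage budget below are assumptions for illustration, not the DMC modification of the paper.

```python
import numpy as np

def splitting_rare_event(levels, n_per_stage=500, dt=2e-3, t_max=1.0, seed=0):
    """Splitting sketch: estimate the probability that a Brownian path started
    at 0 crosses the last level before t_max as a product of conditional stage
    probabilities, replicating the trajectories that reach each intermediate
    level."""
    rng = np.random.default_rng(seed)
    starts = np.zeros(n_per_stage)           # positions of current stage entrants
    times = np.zeros(n_per_stage)            # times at which they entered the stage
    estimate = 1.0
    for level in levels:
        hit_pos, hit_time = [], []
        for x0, t0 in zip(starts, times):
            x, t = x0, t0
            while t < t_max and x < level:
                x += np.sqrt(dt) * rng.standard_normal()
                t += dt
            if x >= level:
                hit_pos.append(x)
                hit_time.append(t)
        frac = len(hit_pos) / n_per_stage
        estimate *= frac
        if frac == 0.0:
            return 0.0
        # Replicate (resample) the hitting states up to the fixed stage budget.
        idx = rng.integers(0, len(hit_pos), n_per_stage)
        starts, times = np.array(hit_pos)[idx], np.array(hit_time)[idx]
    return estimate

# Example: P(Brownian motion crosses 3.0 before t = 1) via levels 1, 2, 3.
p_hat = splitting_rare_event(levels=[1.0, 2.0, 3.0])
```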

9.
We introduce a new class of Monte Carlo-based approximations of expectations of random variables such that their laws are only available via certain discretizations. Sampling from the discretized versions of these laws can typically introduce a bias. In this paper, we show how to remove that bias, by introducing a new version of multi-index Monte Carlo (MIMC) that has the added advantage of reducing the computational effort, relative to i.i.d. sampling from the most precise discretization, for a given level of error. We cover extensions of results regarding variance and optimality criteria for the new approach. We apply the methodology to the problem of computing an unbiased mollified version of the solution of a partial differential equation with random coefficients. A second application concerns the Bayesian inference (the smoothing problem) of an infinite-dimensional signal modeled by the solution of a stochastic partial differential equation that is observed on a discrete space grid and at discrete times. Both applications are complemented by numerical simulations.
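The multi-index construction generalizes the multilevel Monte Carlo telescoping idea, which is easier to show compactly: the quantity of interest at the finest discretization is written as a sum of level differences, with fine and coarse paths driven by the same Brownian increments so the differences have small variance. The geometric Brownian motion test case and Euler scheme below are assumptions for illustration.

```python
import numpy as np

def mlmc_estimate(payoff, max_level=4, n_samples=20000, t_max=1.0,
                  s0=1.0, mu=0.05, sigma=0.2, seed=0):
    """Multilevel Monte Carlo sketch: E[payoff(S_T)] under geometric Brownian
    motion is written as a telescoping sum of level differences, with fine and
    coarse Euler paths driven by the same Brownian increments."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for level in range(max_level + 1):
        n_fine = 2 ** level
        dt = t_max / n_fine
        dw = np.sqrt(dt) * rng.standard_normal((n_samples, n_fine))
        fine = np.full(n_samples, s0)
        for k in range(n_fine):
            fine = fine + mu * fine * dt + sigma * fine * dw[:, k]
        if level == 0:
            total += payoff(fine).mean()
        else:
            coarse = np.full(n_samples, s0)
            dw_coarse = dw[:, 0::2] + dw[:, 1::2]    # coarse increments from fine ones
            for k in range(n_fine // 2):
                coarse = coarse + mu * coarse * (2 * dt) + sigma * coarse * dw_coarse[:, k]
            total += (payoff(fine) - payoff(coarse)).mean()
    return total

# Example: E[S_T] for geometric Brownian motion, exact value s0 * exp(mu * t_max) ≈ 1.051.
print(mlmc_estimate(lambda s: s))
```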

10.
Although various efficient and sophisticated Markov chain Monte Carlo sampling methods have been developed during the last decade, the sample mean is still the dominant tool for computing Bayesian posterior quantities. The sample mean is simple, but may not be efficient. The weighted sample mean is a natural generalization of the sample mean. In this paper, a new weighted sample mean is proposed by partitioning the support of the posterior distribution, so that the same weight is assigned to observations that belong to the same subset of the partition. A novel application of this new weighted sample mean to computing ratios of normalizing constants is given, together with the necessary theory. Illustrative examples are given to demonstrate the methodology.
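The abstract does not spell out how the partition weights are chosen, so the sketch below is only a loose illustration of the idea: samples are binned into equal-probability cells along one coordinate, every observation in a cell gets the same weight, and each cell contributes equally to the estimate. The equal cell weights and quantile-based partition are assumptions, not the paper's construction.

```python
import numpy as np

def partition_weighted_mean(samples, h, n_cells=10):
    """Illustrative partition-weighted estimate of E[h(X)]: bin the draws into
    equal-probability cells (quantile bins), give every draw in a cell the same
    weight, and let each cell contribute equally to the estimate."""
    edges = np.quantile(samples, np.linspace(0.0, 1.0, n_cells + 1))
    cells = np.clip(np.digitize(samples, edges[1:-1]), 0, n_cells - 1)
    values = h(samples)
    cell_means = np.array([values[cells == c].mean() for c in range(n_cells)])
    return cell_means.mean()

rng = np.random.default_rng(0)
draws = rng.normal(0.0, 1.0, 10000)          # stand-in for posterior draws
print(partition_weighted_mean(draws, h=lambda x: x ** 2))   # roughly E[X^2] = 1
```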

11.
12.
In recent years, efficient methods have been developed for calculating derivative price sensitivities using Monte Carlo simulation. Malliavin calculus has been used to transform the simulation problem in the case where the underlying follows a Markov diffusion process. In this work, recent developments in the area of Malliavin calculus for Lévy processes are applied and slightly extended. This allows stochastic weights similar to those in the continuous case to be derived for a certain class of jump-diffusion processes.
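In the purely continuous (diffusion) case the resulting stochastic weight is classical and easy to demonstrate: the Black-Scholes delta of a European call can be written as the discounted expectation of the payoff times the weight W_T / (S_0 sigma T). The sketch below uses that classical weight only; the jump-diffusion extension of the paper is not reproduced, and the parameter values are placeholders.

```python
import numpy as np

def call_delta_malliavin_weight(s0=100.0, strike=100.0, r=0.05, sigma=0.2, t=1.0,
                                n_paths=200000, seed=0):
    """Monte Carlo delta of a European call under Black-Scholes using the
    classical Malliavin / likelihood-ratio weight:
    Delta = e^{-rT} E[ payoff(S_T) * W_T / (S_0 * sigma * T) ]."""
    rng = np.random.default_rng(seed)
    w_t = np.sqrt(t) * rng.standard_normal(n_paths)
    s_t = s0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * w_t)
    payoff = np.maximum(s_t - strike, 0.0)
    weight = w_t / (s0 * sigma * t)
    return np.exp(-r * t) * np.mean(payoff * weight)

# Agrees with the analytical Black-Scholes delta (about 0.64 for these inputs).
print(call_delta_malliavin_weight())
```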

13.
14.
In this article, we provide a review and development of sequential Monte Carlo (SMC) methods for option pricing. SMC methods are a class of Monte Carlo-based algorithms designed to approximate expectations with respect to a sequence of related probability measures. These approaches have been used successfully for a wide class of applications in engineering, statistics, physics, and operations research. SMC methods are highly suited to many option pricing problems and sensitivity/Greek calculations due to the sequential nature of the simulation. However, it is seldom the case that such ideas are explicitly used in the option pricing literature. This article provides an up-to-date review of SMC methods that are appropriate for option pricing. In addition, it is illustrated how a number of existing approaches for option pricing can be enhanced via SMC. Specifically, when pricing the arithmetic Asian option under a complex stochastic volatility model, it is shown that SMC methods provide additional strategies to improve estimation.
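As a baseline for the arithmetic Asian example, a plain Monte Carlo pricer under Black-Scholes dynamics is sketched below; the stochastic volatility model and the SMC enhancements discussed in the article are not reproduced, and the parameter values are placeholders.

```python
import numpy as np

def asian_call_plain_mc(s0=100.0, strike=100.0, r=0.05, sigma=0.2, t=1.0,
                        n_steps=50, n_paths=100000, seed=0):
    """Plain Monte Carlo price of an arithmetic-average Asian call under
    Black-Scholes dynamics, averaging the price over the monitoring dates."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    increments = ((r - 0.5 * sigma ** 2) * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)))
    paths = s0 * np.exp(np.cumsum(increments, axis=1))
    payoff = np.maximum(paths.mean(axis=1) - strike, 0.0)
    return np.exp(-r * t) * payoff.mean()

print(asian_call_plain_mc())   # prints the estimated option price
```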

15.
The problem of finding densely connected subgraphs in a network has attracted a lot of recent interest. Such subgraphs are sometimes referred to as communities in social networks or molecular modules in protein networks. In this article, we propose two Monte Carlo optimization algorithms for identifying the densest subgraphs with a fixed size or with size in a given range. The new algorithms combine the idea of simulated annealing and efficient moves for the Markov chain, and both algorithms are shown to converge to the set of optimal states (densest subgraphs) with probability 1. When applied to a yeast protein interaction network and a stock market graph, the algorithms identify interesting new densely connected subgraphs. Supplementary materials for the article are available online.
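A minimal version of the fixed-size variant can be sketched with simulated annealing over vertex subsets, where a move swaps one vertex inside the subset with one outside and is accepted by the Metropolis rule on the number of internal edges. The cooling schedule, toy random graph, and use of networkx are assumptions for illustration; the article's efficient moves and convergence guarantees are not reproduced.

```python
import math
import random
import networkx as nx   # assumed available; any adjacency representation would do

def densest_subgraph_sa(graph, k, n_steps=20000, t0=1.0, cooling=0.9995, seed=0):
    """Simulated annealing over k-vertex subsets: a move swaps one vertex
    inside the subset with one outside, accepted by the Metropolis rule on the
    number of internal edges."""
    rng = random.Random(seed)
    nodes = list(graph.nodes)

    def internal_edges(s):
        return sum(1 for u, v in graph.edges if u in s and v in s)

    inside = set(rng.sample(nodes, k))
    energy, temp = internal_edges(inside), t0
    best, best_energy = set(inside), energy
    for _ in range(n_steps):
        u = rng.choice(list(inside))
        v = rng.choice([x for x in nodes if x not in inside])
        candidate = (inside - {u}) | {v}
        e_new = internal_edges(candidate)
        if e_new >= energy or rng.random() < math.exp((e_new - energy) / temp):
            inside, energy = candidate, e_new
            if energy > best_energy:
                best, best_energy = set(inside), energy
        temp *= cooling
    return best, best_energy

# Toy usage on an Erdos-Renyi random graph.
g = nx.gnp_random_graph(60, 0.1, seed=1)
subset, n_edges = densest_subgraph_sa(g, k=10)
```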

16.
This article discusses design ideas useful in the development of Markov chain Monte Carlo (MCMC) software. Goals of the design are to facilitate analysis of as many statistical models as possible, and to enable users to experiment with different MCMC algorithms as a research tool. These ideas have been used in YADAS, a system written in the Java language, but are also applicable in other object-oriented languages.
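The design idea of composing a sampler from interchangeable update objects, one per parameter, can be illustrated with a toy Metropolis step class. This is a sketch of the style only, written in Python rather than Java, and it does not follow YADAS's actual class structure or API.

```python
import math
import random

class MetropolisStep:
    """One update object per parameter: the sampler is composed from
    interchangeable step objects, each owning a proposal width."""

    def __init__(self, name, init, width, log_post):
        self.name, self.value, self.width, self.log_post = name, init, width, log_post

    def update(self, state):
        proposal = self.value + random.gauss(0.0, self.width)
        trial = dict(state, **{self.name: proposal})
        delta = self.log_post(trial) - self.log_post(state)
        if delta >= 0 or random.random() < math.exp(delta):
            self.value = proposal
        state[self.name] = self.value

def run_sampler(steps, n_iter=5000):
    state = {s.name: s.value for s in steps}
    chain = []
    for _ in range(n_iter):
        for s in steps:
            s.update(state)
        chain.append(dict(state))
    return chain

# Example: sample (mu, log_sigma) for a normal model with a flat prior.
data = [1.2, 0.7, 1.9, 1.4]
def log_post(p):
    sigma = math.exp(p["log_sigma"])
    return -len(data) * p["log_sigma"] - sum((x - p["mu"]) ** 2 for x in data) / (2 * sigma ** 2)

chain = run_sampler([MetropolisStep("mu", 0.0, 0.5, log_post),
                     MetropolisStep("log_sigma", 0.0, 0.5, log_post)])
```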

17.
Monte Carlo simulation is a common method for studying the volatility of market-traded instruments. It is less employed in retail lending because of the inherent nonlinearities in consumer behaviour. In this paper, we use the approach of Dual-time Dynamics to separate loan performance dynamics into three components: a maturation function of months-on-books, an exogenous function of calendar date, and a quality function of vintage origination date. The exogenous function captures the impacts of the macroeconomic environment. Therefore, we want to generate scenarios for the possible futures of these environmental impacts. To generate such scenarios, we must go beyond the random walk methods most commonly applied in the analysis of market-traded instruments: retail portfolios exhibit autocorrelation structure and variance growth with time that require more complex modelling. This paper is aimed at practical application and describes work using ARMA and ARIMA models for scenario generation, rules for selecting the correct model form given the input data, and validation methods for the generated scenarios. We find that when the goal is to capture future volatility via Monte Carlo scenario generation, model selection does not follow the same rules as for forecasting. Consequently, tests more appropriate to reproducing volatility are proposed, which ensure that distributions of scenarios have the proper statistical characteristics. These results are supported by studies of the variance growth properties of macroeconomic variables and theoretical calculations of the variance growth properties of various models. We also provide studies on historical data showing the impact of training length on model accuracy and the existence of differences between macroeconomic epochs.
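A minimal ARMA(1,1) scenario generator of the kind described is sketched below; the coefficients and horizon are placeholder assumptions (in practice they would be fitted to the historical macroeconomic series), and the model-selection and validation rules of the paper are not reproduced.

```python
import numpy as np

def arma_scenarios(phi=0.7, theta=0.3, sigma=1.0, n_periods=36, n_scenarios=500, seed=0):
    """ARMA(1,1) Monte Carlo scenario generator for a macroeconomic driver:
    y_t = phi * y_{t-1} + eps_t + theta * eps_{t-1}."""
    rng = np.random.default_rng(seed)
    eps = sigma * rng.standard_normal((n_scenarios, n_periods + 1))
    y = np.zeros((n_scenarios, n_periods + 1))
    for t in range(1, n_periods + 1):
        y[:, t] = phi * y[:, t - 1] + eps[:, t] + theta * eps[:, t - 1]
    return y[:, 1:]

scenarios = arma_scenarios()
# The cross-scenario variance grows with the horizon, unlike i.i.d. noise.
print(scenarios.var(axis=0)[[0, 11, 35]])
```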

18.
We investigate in this work a recently proposed diagrammatic quantum Monte Carlo method, the inchworm Monte Carlo method, for open quantum systems. We establish its validity rigorously based on resummation of the Dyson series. Moreover, we introduce an integro-differential equation formulation for open quantum systems, which illuminates the mathematical structure of the inchworm algorithm. This new formulation leads to an improvement of the inchworm algorithm through the introduction of classical deterministic time-integration schemes. The numerical method is validated by applications to the spin-boson model.

19.
While studying various features of the posterior distribution of a vector-valued parameter using an MCMC sample, a subsample is often all that is available for analysis. The goal of benchmark estimation is to use the best available information, that is, the full MCMC sample, to improve future estimates made on the basis of the subsample. We discuss a simple approach to do this and provide a theoretical basis for the method. The methodology and benefits of benchmark estimation are illustrated using a well-known example from the literature. We obtain nearly a 90% reduction in MSE with the technique based on a 1-in-10 subsample and show that greater benefits accrue with the thinner subsamples that are often used in practice.

20.
Process monitoring and control requires the detection of structural changes in a data stream in real time. This article introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables rebalancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and nonconjugate Bayesian models for the intensity. Appendices to the article are available online, illustrating the method on other models and applications.
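The canonical data-generating model mentioned, an inhomogeneous Poisson process with a changepoint in its intensity, can be simulated by thinning as sketched below. Only the data generation is shown; the sequential Monte Carlo changepoint algorithm itself is not reproduced, and the intensity function is a placeholder.

```python
import numpy as np

def inhomogeneous_poisson(intensity, t_max, lam_max, seed=0):
    """Simulate event times of an inhomogeneous Poisson process on [0, t_max]
    by thinning: candidates come from a homogeneous process with rate lam_max
    and are kept with probability intensity(t) / lam_max."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            break
        if rng.random() < intensity(t) / lam_max:
            events.append(t)
    return np.array(events)

# Piecewise-constant intensity with a single changepoint at t = 5.
intensity = lambda t: 2.0 if t < 5.0 else 8.0
event_times = inhomogeneous_poisson(intensity, t_max=10.0, lam_max=8.0)
```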

