Similar literature
20 similar documents found.
1.
The population Monte Carlo algorithm is an iterative importance sampling scheme for solving static problems. We examine the population Monte Carlo algorithm in a simplified setting, a single step of the general algorithm, and study a fundamental problem that occurs when applying importance sampling to high-dimensional problems. The precision of the computed estimate from the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. We demonstrate the exponential growth of the asymptotic variance with the dimension and show that the optimal covariance matrix for the importance function can be estimated in special cases.
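As a rough illustration of the dimension effect described above, the following Python sketch (our own toy example, not taken from the paper: a standard normal target with an N(0, σ²I) importance function) tracks the empirical relative variance of the normalized importance weights as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def weight_relative_variance(d, sigma=1.5, n=100_000):
    """Empirical relative variance of importance weights when the target is
    N(0, I_d) and the importance function is N(0, sigma^2 I_d)."""
    x = rng.normal(scale=sigma, size=(n, d))            # draws from the importance function
    # log target density minus log importance density (constants common to all draws cancel)
    log_w = -0.5 * np.sum(x**2, axis=1) + 0.5 * np.sum((x / sigma)**2, axis=1) + d * np.log(sigma)
    w = np.exp(log_w - log_w.max())
    w /= w.mean()                                        # scale so the weights average to 1
    return w.var()

for d in (1, 5, 10, 20, 30):
    print(f"d = {d:2d}   relative variance of the weights ≈ {weight_relative_variance(d):.3g}")
```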

2.
Importance analysis aims at finding the contributions of the inputs to the output uncertainty. For structural models involving correlated input variables, the variance contribution of an individual input variable is decomposed into a correlated contribution and an uncorrelated contribution in this study. Based on point estimation, this work proposes a new algorithm to conduct variance-based importance analysis for correlated input variables. Transformation of the input variables from correlation space to independence space and the computation of conditional distributions in the process ensure that the correlation information is inherited correctly. Different point estimate methods can be employed in the proposed algorithm, so the algorithm is adaptable and evolvable. Meanwhile, the proposed algorithm is also applicable to uncertainty systems with multiple modes. The proposed algorithm avoids the sampling procedure, which usually incurs a heavy computational cost. The results of several examples in this work show that the proposed algorithm can be used as an effective tool for uncertainty analysis involving correlated inputs.
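The step of mapping correlated inputs to an independence space can be illustrated for Gaussian inputs with a Cholesky factor; the sketch below shows only that step (the mean, covariance, and function names are our own illustrative choices, and the paper's point-estimate machinery and variance decomposition are not reproduced).

```python
import numpy as np

# Hypothetical correlated Gaussian inputs: mean mu, covariance Sigma (illustrative values).
mu = np.array([1.0, 2.0, 0.5])
Sigma = np.array([[1.0, 0.6, 0.2],
                  [0.6, 2.0, 0.3],
                  [0.2, 0.3, 0.5]])
L = np.linalg.cholesky(Sigma)                      # Sigma = L @ L.T

def to_independence_space(x):
    """Map correlated inputs x to independent standard normal variables z."""
    return np.linalg.solve(L, x - mu)

def to_correlation_space(z):
    """Inverse map: rebuild correlated inputs from independent standard normals."""
    return mu + L @ z

# Round-trip check on one correlated draw.
rng = np.random.default_rng(1)
x = rng.multivariate_normal(mu, Sigma)
assert np.allclose(to_correlation_space(to_independence_space(x)), x)
```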

3.
This note introduces a method for sampling Ising models with mixed boundary conditions. As an application of annealed importance sampling and the Swendsen-Wang algorithm, the method adopts a sequence of intermediate distributions that keeps the temperature fixed but turns on the boundary condition gradually. The numerical results show that the variance of the sample weights is relatively small.
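A generic annealed importance sampling sketch is given below on a toy one-dimensional Gaussian pair with simple Metropolis moves, rather than the Ising/Swendsen-Wang setting of the note; the geometric bridge plays the role of gradually "turning on" the target, and since both endpoint densities are normalized the weights should average to one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Endpoint densities (both normalized, so the AIS weights should average to one).
def log_p0(x):                       # easy initial distribution N(0, 2^2)
    return -0.5 * (x / 2.0) ** 2 - np.log(2.0 * np.sqrt(2.0 * np.pi))

def log_p1(x):                       # target distribution N(3, 1)
    return -0.5 * (x - 3.0) ** 2 - 0.5 * np.log(2.0 * np.pi)

def log_gamma(x, beta):              # geometric bridge: the target is "turned on" gradually
    return (1.0 - beta) * log_p0(x) + beta * log_p1(x)

betas = np.linspace(0.0, 1.0, 51)

def ais_weight(n_mh=5, step=1.0):
    x = rng.normal(scale=2.0)        # exact draw from p0
    log_w = 0.0
    for b_prev, b in zip(betas[:-1], betas[1:]):
        log_w += log_gamma(x, b) - log_gamma(x, b_prev)
        for _ in range(n_mh):        # Metropolis moves leaving the bridge density invariant
            prop = x + step * rng.normal()
            if np.log(rng.uniform()) < log_gamma(prop, b) - log_gamma(x, b):
                x = prop
    return np.exp(log_w)

w = np.array([ais_weight() for _ in range(2000)])
print(f"mean weight ≈ {w.mean():.3f},  relative variance ≈ {w.var() / w.mean()**2:.3f}")
```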

4.
We propose new sequential importance sampling methods for sampling contingency tables with given margins. The proposal for each method is based on asymptotic approximations to the number of tables with fixed margins. These methods generate tables that are very close to the uniform distribution. The tables, along with their importance weights, can be used to approximate the null distribution of test statistics and calculate the total number of tables. We apply the methods to a number of examples and demonstrate an improvement over other methods in a variety of real problems. Supplementary materials are available online.

5.
The probabilistic traveling salesman problem is a paradigmatic example of a stochastic combinatorial optimization problem. For this problem, an estimation-based local search algorithm using delta evaluation has recently been proposed. In this paper, we adopt two well-known variance reduction procedures in the estimation-based local search algorithm: the first is an adaptive sampling procedure that selects the appropriate size of the sample to be used in the Monte Carlo evaluation; the second is a procedure that adopts importance sampling to reduce the variance involved in the cost estimation. We investigate several possible strategies for applying these procedures to the given problem and identify the most effective one. Experimental results show that a particular heuristic customization of the two procedures significantly increases the effectiveness of the estimation-based local search.

6.
There are various importance sampling schemes to estimate rare-event probabilities in Markovian systems such as Markovian reliability models and Jackson networks. In this work, we present a general state-dependent importance sampling method which partitions the state space and applies the cross-entropy method to each partition. We investigate two versions of our algorithm and apply them to several examples of reliability and queueing models. In all these examples we compare our method with other importance sampling schemes. The performance of the importance sampling schemes is measured by the relative error of the estimator and by the efficiency of the algorithm. The results from experiments show considerable improvements both in the running time of the algorithm and in the variance of the estimator.
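The cross-entropy update used within each partition can be illustrated on the textbook single-parameter case below (estimating P(X > γ) for an exponential random variable by exponential tilting); the state-space partitioning and the reliability/queueing models of the paper are not reproduced, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
gamma_level, n, rho = 20.0, 10_000, 0.1      # rare event {X > gamma_level} for X ~ Exp(mean 1)

def weight(x, v):
    """Likelihood ratio of the Exp(mean 1) density to the Exp(mean v) importance density."""
    return v * np.exp(-x * (1.0 - 1.0 / v))

# Cross-entropy iterations: adapt the mean v of the exponential importance density.
v = 1.0
for _ in range(10):
    x = rng.exponential(scale=v, size=n)
    level = min(gamma_level, np.quantile(x, 1.0 - rho))
    elite = x[x >= level]
    w = weight(elite, v)
    v = np.sum(w * elite) / np.sum(w)        # closed-form CE update for the exponential family
    if level >= gamma_level:
        break

# Final importance-sampling estimate with the tilted density.
x = rng.exponential(scale=v, size=n)
estimate = np.mean((x > gamma_level) * weight(x, v))
print(f"tilted mean v ≈ {v:.1f},  estimate ≈ {estimate:.3e},  exact = {np.exp(-gamma_level):.3e}")
```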

7.
Approximate Bayesian inference by importance sampling derives probabilistic statements from a Bayesian network, an essential part of evidential reasoning with the network and an important aspect of many Bayesian methods. A critical problem in importance sampling on Bayesian networks is the selection of a good importance function to sample a network's prior and posterior probability distribution. Importance functions that are initially optimal eventually deviate from the optimal function when sampling a network's posterior distribution given evidence, even when adaptive methods are used that adjust an importance function to the evidence by learning. In this article we propose a new family of Refractor Importance Sampling (RIS) algorithms for adaptive importance sampling under evidential reasoning. RIS applies "arc refractors" to a Bayesian network by adding new arcs and refining the conditional probability tables. The goal of RIS is to optimize the importance function for the posterior distribution and reduce the error variance of sampling. Our experimental results show a significant improvement of RIS over state-of-the-art adaptive importance sampling algorithms.
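For orientation, the sketch below shows plain likelihood weighting (the prior as importance function) on the textbook "sprinkler" network; it is not the RIS arc-refractor algorithm of the article, and the conditional probabilities are the standard textbook numbers rather than values from the article.

```python
import numpy as np

rng = np.random.default_rng(4)

# Textbook "sprinkler" network; the numbers are the standard textbook values, not from the article.
P_C = 0.5
P_S = {True: 0.1, False: 0.5}                        # P(Sprinkler | Cloudy)
P_R = {True: 0.8, False: 0.2}                        # P(Rain | Cloudy)
P_W = {(True, True): 0.99, (True, False): 0.90,      # P(WetGrass | Sprinkler, Rain)
       (False, True): 0.90, (False, False): 0.0}

def likelihood_weighting(n=100_000):
    """Estimate P(Rain | WetGrass = True) with the prior as importance function."""
    num = den = 0.0
    for _ in range(n):
        c = rng.random() < P_C                       # sample non-evidence nodes from the prior
        s = rng.random() < P_S[c]
        r = rng.random() < P_R[c]
        w = P_W[(s, r)]                              # weight = P(evidence | sampled parents)
        num += w * r
        den += w
    return num / den

print("P(Rain | WetGrass = True) ≈", likelihood_weighting())
```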

8.
In this paper, we propose a new method, namely the level-value estimation method, for finding the global minimizer of a continuous optimization problem. For this purpose, we define the variance function and the mean deviation function, both of which depend on a level value of the objective function to be minimized. These functions have some good properties when Newton's method is used to solve the variance equation obtained by setting the variance function to zero. We prove that the largest root of the variance equation equals the global minimal value of the corresponding optimization problem. We also propose an implementable algorithm of the level-value estimation method in which importance sampling is used to calculate the integrals in the variance function and the mean deviation function. The main idea of the cross-entropy method is used to update the parameters of the sampling distribution at each iteration. The implementable level-value estimation method is verified to satisfy the convergence conditions of the inexact Newton method for solving a single-variable nonlinear equation; thus, convergence is guaranteed. The numerical results indicate that the proposed method is applicable and efficient in solving global optimization problems.

9.
This paper develops a generalized dynamic network model for portfolio investment diversification. The model considers the situation of the fixed solution subset corresponding to a fixed single-resource economic investment such as that found in many oil-producing nations. Quadratic side constraints on the variance of the resultant flow distribution are added to the model to accommodate uncertainty. The model has been tested using a prototype example. The results indicate that the risk associated with a single-resource investment can be reduced by determining optimal investment weights.

10.
Gaussian time-series models are often specified through their spectral density. Such models present several computational challenges, in particular because of the nonsparse nature of the covariance matrix. We derive a fast approximation of the likelihood for such models. We propose to sample from the approximate posterior (i.e., the prior times the approximate likelihood), and then to recover the exact posterior through importance sampling. We show that the variance of the importance sampling weights vanishes as the sample size goes to infinity. We explain why the approximate posterior may typically be multimodal, and we derive a Sequential Monte Carlo sampler based on an annealing sequence to sample from that target distribution. Performance of the overall approach is evaluated on simulated and real datasets. In addition, for one real-world dataset, we provide some numerical evidence that a Bayesian approach to semiparametric estimation of spectral density may provide more reasonable results than its frequentist counterparts. The article comes with supplementary materials, available online, that contain an Appendix with a proof of our main Theorem, a Python package that implements the proposed procedure, and the Ethernet dataset.
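The reweighting step, recovering the exact posterior from draws of an approximate one, can be sketched on a toy pair of univariate densities as below; the spectral-density approximation and the annealed SMC sampler of the article are not reproduced, and both densities here are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Toy stand-in densities: the "exact" posterior is N(1.0, 0.5^2); the tractable
# approximation we can sample from is N(0.8, 0.6^2).  Both are purely illustrative.
exact  = stats.norm(1.0, 0.5)
approx = stats.norm(0.8, 0.6)

theta = approx.rvs(size=50_000, random_state=rng)        # draws from the approximate posterior
log_w = exact.logpdf(theta) - approx.logpdf(theta)       # importance weights (up to a constant)
w = np.exp(log_w - log_w.max())
w /= w.sum()

post_mean = np.sum(w * theta)                            # self-normalized importance estimate
ess = 1.0 / np.sum(w**2)                                 # effective sample size
print(f"reweighted posterior mean ≈ {post_mean:.3f} (exact 1.0), ESS ≈ {ess:.0f}")
```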

11.
This paper proposes a general algorithm for generating generalized Gaussian distribution (GGD) random numbers. Exploiting the decaying tails of the GGD density function, the algorithm uses a variable step size and combines the inverse-function method, an approximate composite sampling method, and a transformation sampling method. By adjusting the values of the distribution parameters, it can produce GGD random numbers with any shape parameter and any variance, and it is simple to implement. Finally, the simulation results are compared with those of existing algorithms, and the effectiveness of the method is verified with the χ² test and the Kolmogorov-Smirnov (K-S) test.
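For comparison, a standard gamma-transform sampler for the GGD (not the variable-step inverse-function algorithm of the paper) can be written in a few lines; the Kolmogorov-Smirnov check below mirrors the kind of validation the paper reports, and the shape and variance values are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.special import gamma as Gamma

rng = np.random.default_rng(6)

def ggd_sample(beta, var, size):
    """Generalized Gaussian draws with shape beta and the requested variance,
    using the gamma transform: (|X| / alpha)^beta ~ Gamma(1/beta, 1)."""
    alpha = np.sqrt(var * Gamma(1.0 / beta) / Gamma(3.0 / beta))   # scale giving the target variance
    g = rng.gamma(shape=1.0 / beta, scale=1.0, size=size)
    sign = rng.choice([-1.0, 1.0], size=size)
    return sign * alpha * g ** (1.0 / beta)

beta, var = 0.7, 2.0
x = ggd_sample(beta, var, 100_000)
alpha = np.sqrt(var * Gamma(1.0 / beta) / Gamma(3.0 / beta))
ks = stats.kstest(x, stats.gennorm(beta, scale=alpha).cdf)         # Kolmogorov-Smirnov check
print(f"sample variance ≈ {x.var():.3f} (target {var}),  K-S p-value ≈ {ks.pvalue:.2f}")
```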

12.
苏兵, 高理峰. 《数学杂志》, 2012, 32(2): 206-210.
This paper studies the stochastic simulation of nonlinear Bayesian dynamic models. Under broader assumptions on the prior distribution, the sampling-importance-resampling method is used, replacing "distributions" with "samples", to carry out posterior inference, prediction, and model selection for the model, thereby extending the range of application of Bayesian dynamic models.
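A minimal sampling-importance-resampling sketch on a one-step toy model (our own illustrative prior and likelihood, not the dynamic models of the paper) shows the "samples in place of distributions" idea.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Toy one-step model: prior theta ~ N(0, 2^2), observation y | theta ~ N(theta, 1).
y_obs, n = 1.5, 50_000

theta = rng.normal(0.0, 2.0, size=n)                  # "samples" drawn from the prior
w = stats.norm.pdf(y_obs, loc=theta, scale=1.0)       # importance weights = likelihood
w /= w.sum()

# Resampling step: the resampled particles approximate the posterior directly.
posterior = rng.choice(theta, size=n, replace=True, p=w)
print("posterior mean ≈", posterior.mean(), "  (exact value 1.2)")
```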

13.
We propose an approach to a twofold optimal parameter search for a combined variance reduction technique of control variates and importance sampling in a suitable pure-jump Lévy process framework. The parameter search procedure is based on a two-time-scale stochastic approximation algorithm with an equilibrated control variates component and a quasi-static importance sampling component. We prove the almost sure convergence of the algorithm to a unique optimum. The parameter search algorithm is further embedded in adaptive Monte Carlo simulations in the case of the gamma distribution and process. Numerical examples of CDO tranche pricing with the Gamma copula model and the intensity Gamma model are provided to illustrate the effectiveness of our method.

14.
In this paper we study lattice rules, which are cubature formulae for approximating integrals over the unit cube [0,1]^s of functions from a weighted reproducing kernel Hilbert space. We assume that the weights are independent random variables with a given mean and variance, for two reasons stemming from practical applications: (i) It is usually not known in practice how to choose the weights. Thus, by assuming that the weights are random variables, we obtain constructions of lattice rules that are robust with respect to the weights. This, to some extent, removes the necessity to carefully choose the weights. (ii) In practice it is convenient to use the same lattice rule for many different integrands. The best choice of weights for each integrand may vary to some degree, hence considering the weights random variables does justice to how lattice rules are used in applications. In this paper the worst-case error is therefore a random variable depending on random weights. We show how one can construct lattice rules which perform well for weights taken from a set with large measure. Such lattice rules are therefore robust with respect to certain changes in the weights. The construction algorithm uses the component-by-component (cbc) idea based on two criteria, one using the mean of the worst-case error and the second using a bound on the variance of the worst-case error. We call the new algorithm the cbc2c (component-by-component with 2 constraints) algorithm. We also study a generalized version which uses r constraints, which we call the cbcrc (component-by-component with r constraints) algorithm. We show that lattice rules generated by the cbcrc algorithm simultaneously work well for all weights in a subspace spanned by the chosen weights γ(1), . . . , γ(r). Thus, in applications, instead of finding one set of weights, it is enough to find a convex polytope in which the optimal weights lie. The price for this method is a factor r in the upper bound on the error and in the construction cost of the lattice rule. Thus the burden of determining one set of weights very precisely can be shifted to the construction of good lattice rules. Numerical results indicate the benefit of using the cbc2c algorithm for certain choices of weights.
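To show what the underlying cubature formula looks like, the sketch below evaluates a rank-1 lattice rule with a fixed Fibonacci generating vector in two dimensions; the cbc2c/cbcrc constructions and the weighted-space error criteria of the paper are not reproduced, and the integrand is an arbitrary smooth test function.

```python
import numpy as np

def lattice_points(n, z):
    """Nodes {i * z / n mod 1 : i = 0, ..., n-1} of a rank-1 lattice rule in [0,1)^s."""
    i = np.arange(n).reshape(-1, 1)
    return (i * np.asarray(z) / n) % 1.0

def lattice_rule(f, n, z):
    """Equal-weight cubature: the average of f over the lattice nodes."""
    return np.mean(f(lattice_points(n, z)))

# Fibonacci lattice in s = 2 dimensions (a classical choice of generating vector).
n, z = 610, (1, 377)
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)      # smooth test integrand with exact integral 1
print("lattice rule estimate ≈", lattice_rule(f, n, z))
```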

15.
This paper deals with simulation-based estimation of the probability distribution of the completion time in stochastic activity networks. These distribution functions may be valuable in many applications. A simulation method using importance-sampling techniques is presented for estimating the probability distribution function. Separating the state space into two sets, one which must be sampled and another which need not be, is suggested. The sampling plan of the simulation can then be decided after the probabilities of the two sets are adjusted. A formula for the adjustment of the probabilities is presented. It is demonstrated that the estimator is unbiased and that the upper bound on its variance is minimized. Adaptive sampling, utilizing the importance-sampling techniques, is discussed to handle problems where there is no information or where there is more than one way to separate the state space. Examples are used to illustrate the sampling plan.
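A small illustration of importance sampling for a completion-time tail probability is sketched below on a three-activity toy network with exponential durations, using a simple stretching of the activity means as the importance density; this is not the state-space-splitting scheme of the paper, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy activity network: completion time T = max(X1 + X2, X3) with independent
# exponential activity durations of means (1, 1, 1.5).  All values are illustrative.
means = np.array([1.0, 1.0, 1.5])
t_crit, n = 12.0, 200_000

def tail_estimate(scale):
    """Estimate P(T > t_crit); `scale` stretches the means of the sampling density
    (scale = 1 is crude Monte Carlo, and the weights are then identically 1)."""
    x = rng.exponential(scale=means * scale, size=(n, 3))
    T = np.maximum(x[:, 0] + x[:, 1], x[:, 2])
    # likelihood ratio of the original exponential densities to the stretched ones
    log_w = np.sum(np.log(scale) - x * (1.0 / means - 1.0 / (means * scale)), axis=1)
    return np.mean((T > t_crit) * np.exp(log_w))

print("crude Monte Carlo   ≈", tail_estimate(1.0))
print("importance sampling ≈", tail_estimate(3.0))
```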

16.
Forecasts may be combined using a minimum variance criterion to yield a composite forecast of smaller error variance than any of the components. This paper considers the sampling distributions of the weights to be attached to the components and of the error variance of the combined forecast. Confidence limits are derived for the estimates of the weights and of the variance of a composite forecast with two components. The theoretical analysis reveals that, in practice, it is doubtful whether combined forecasts offer much improvement because of the unreliability of the weight estimates.
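The two-component minimum-variance combination itself is short to write down; the sketch below estimates the weight from past forecast errors (illustrative synthetic errors, with none of the paper's sampling-distribution or confidence-limit analysis). The modest gain over the better component is consistent with the paper's caution.

```python
import numpy as np

def min_variance_weight(e1, e2):
    """Weight on forecast 1 that minimizes the variance of the combined error,
    estimated from past forecast errors e1 and e2 (two-component case)."""
    C = np.cov(e1, e2)
    return (C[1, 1] - C[0, 1]) / (C[0, 0] + C[1, 1] - 2.0 * C[0, 1])

# Illustrative synthetic errors: forecast 2 is noisier, both share a common component.
rng = np.random.default_rng(9)
common = rng.normal(size=200)
e1 = common + 0.5 * rng.normal(size=200)
e2 = common + 1.5 * rng.normal(size=200)

w1 = min_variance_weight(e1, e2)
combined = w1 * e1 + (1.0 - w1) * e2
print(f"w1 ≈ {w1:.2f}; error variances: {e1.var():.2f}, {e2.var():.2f}, combined {combined.var():.2f}")
```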

17.
In order to serve their customers, natural gas local distribution companies (LDCs) can select from a variety of financial and non-financial contracts. The present paper is concerned with the choice of an appropriate portfolio of natural gas purchases that would allow an LDC to satisfy its demand with a minimum tradeoff between cost and risk, while taking into account the risk associated with modeling error. We propose two types of strategies for natural gas procurement. Dynamic strategies model the procurement problem as a mean-risk stochastic program with various risk measures. Naive strategies hedge a fixed fraction of winter demand; the hedge is allocated equally between storage, futures and options. We propose a simulation framework to evaluate the proposed strategies and show that: (i) when the appropriate model for spot prices and its derivatives is used, dynamic strategies provide cheaper gas with low risk compared to naive strategies; (ii) in the presence of a modeling error, dynamic strategies are unable to control the variance of the procurement cost, although they provide a cheaper cost on average. Based on these results, we define robust strategies as convex combinations of dynamic and naive strategies. The weight of each strategy represents the fraction of demand to be satisfied following this strategy. A mean–variance problem is then solved to obtain optimal weights and construct an efficient frontier of robust strategies that take advantage of the diversification effect.

18.
A new and very fast bootstrap method for sampling without replacement from a finite population is proposed. This method can be used to estimate the variance in sampling with unequal inclusion probabilities and does not require artificial populations or the use of bootstrap weights. The bootstrap samples are selected directly from the original sample. The bootstrap procedure contains two steps: in the first step, units are selected once with Poisson sampling, using the same inclusion probabilities as the original design. In the second step, among the non-selected units, half of the units are randomly selected twice. This procedure enables us to estimate the variance efficiently. A set of simulations shows the advantages of this new resampling method.
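Read literally, the two steps of the procedure can be sketched as below; the rounding of "half of the units" and the use of a Horvitz-Thompson total as the illustrated statistic are our own assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(10)

def bootstrap_counts(y, pi):
    """Multiplicities of the original units in one bootstrap replicate, following the
    two steps described above: (1) Poisson sampling with the original inclusion
    probabilities pi, (2) half of the non-selected units taken again with multiplicity 2."""
    selected_once = rng.random(len(y)) < pi                       # step 1
    not_selected = np.flatnonzero(~selected_once)
    doubled = rng.choice(not_selected, size=len(not_selected) // 2, replace=False)
    counts = selected_once.astype(int)
    counts[doubled] = 2                                           # step 2
    return counts

# Illustration: bootstrap variance of a Horvitz-Thompson total under unequal probabilities.
y  = rng.gamma(2.0, 5.0, size=30)                                 # values in the original sample
pi = rng.uniform(0.2, 0.8, size=30)                               # unequal inclusion probabilities
totals = [np.sum(bootstrap_counts(y, pi) * y / pi) for _ in range(2000)]
print("bootstrap variance of the total ≈", np.var(totals))
```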

19.
In this article we propose a modification to the output from Metropolis-within-Gibbs samplers that can lead to substantial reductions in the variance over standard estimates. The idea is simple: at each time step of the algorithm, introduce an extra sample into the estimate that is negatively correlated with the current sample, the rationale being that this provides a two-sample numerical approximation to a Rao–Blackwellized estimate. As the conditional sampling distribution at each step has already been constructed, the generation of the antithetic sample often requires negligible computational effort. Our method is implementable whenever one subvector of the state can be sampled from its full conditional and the corresponding distribution function may be inverted, or the full conditional has a symmetric density. We demonstrate our approach in the context of logistic regression and hierarchical Poisson models. The data and computer code used in this article are available online.
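The antithetic idea can be sketched on a toy bivariate normal Gibbs sampler, where each full conditional is symmetric and the antithetic counterpart is a reflection about the conditional mean; the logistic and hierarchical Poisson examples of the article are not reproduced, and the printed per-draw variances ignore autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy target: standard bivariate normal with correlation rho; estimate P(x1 > 1).
rho, n_iter = 0.9, 50_000
s = np.sqrt(1.0 - rho**2)                  # standard deviation of each full conditional

h = lambda x: float(x > 1.0)               # functional of interest
x1 = x2 = 0.0
plain, anti = [], []
for _ in range(n_iter):
    # Gibbs update of x1 | x2 ~ N(rho * x2, 1 - rho^2): the density is symmetric,
    # so the antithetic counterpart of x1 is its reflection about the conditional mean.
    mean1 = rho * x2
    x1 = mean1 + s * rng.normal()
    plain.append(h(x1))
    anti.append(0.5 * (h(x1) + h(2.0 * mean1 - x1)))   # two-point Rao-Blackwell approximation

    x2 = rho * x1 + s * rng.normal()       # Gibbs update of x2 | x1, left unmodified

print(f"P(x1 > 1): plain ≈ {np.mean(plain):.4f}, with antithetic ≈ {np.mean(anti):.4f}")
print(f"per-draw variances: plain ≈ {np.var(plain):.4f}, antithetic ≈ {np.var(anti):.4f}")
```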

20.
In this paper, we incorporate an importance sampling strategy into an accelerated framework of the stochastic alternating direction method of multipliers for solving a class of stochastic composite problems with a linear equality constraint. The rates of convergence for the primal residual and the feasibility violation are established. Moreover, the estimate of the variance of the stochastic gradient is improved due to the use of importance sampling. The proposed algorithm is capable of dealing with the situation where the feasible set is unbounded. The experimental results indicate the effectiveness of the proposed method.
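The variance effect of importance sampling on a stochastic gradient can be illustrated in isolation, as below, on a least-squares toy problem with sampling probabilities proportional to per-example gradient norms and the usual 1/(n p_i) correction; the accelerated ADMM framework of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(12)

# Least-squares toy problem: f(w) = (1/n) * sum_i 0.5 * (a_i . w - b_i)^2.
n, d = 500, 5
A = rng.normal(size=(n, d)) * rng.uniform(0.1, 3.0, size=(n, 1))   # rows with very different scales
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
w = np.zeros(d)
full_grad = A.T @ (A @ w - b) / n                                   # reference full gradient at w

def stochastic_grad(probs, m=50):
    """Unbiased minibatch gradient with indices drawn from `probs`,
    corrected by the usual 1 / (n * p_i) importance weights."""
    idx = rng.choice(n, size=m, p=probs)
    per_example = A[idx] * (A[idx] @ w - b[idx])[:, None]           # gradient of each sampled term
    return np.mean(per_example / (n * probs[idx, None]), axis=0)

grad_norms = np.linalg.norm(A, axis=1) * np.abs(A @ w - b)          # per-example gradient norms at w
uniform = np.full(n, 1.0 / n)
importance = grad_norms / grad_norms.sum()

def mse(probs, reps=500):
    return np.mean([np.linalg.norm(stochastic_grad(probs) - full_grad)**2 for _ in range(reps)])

print("gradient MSE, uniform sampling    ≈", mse(uniform))
print("gradient MSE, importance sampling ≈", mse(importance))
```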
