Similar Articles
20 similar articles found (search time: 31 ms)
1.
This paper applies importance sampling simulation to estimate rare-event probabilities of the first passage time in the infinite-server queue with renewal arrivals and general service-time distributions. We consider importance sampling algorithms based on large-deviations results for the infinite-server queue, as well as an algorithm based on the cross-entropy method, allowing both light-tailed and heavy-tailed distributions for the interarrival and service times. The efficiency of the algorithms is assessed through simulation experiments.
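As a toy illustration of the change-of-measure idea these algorithms rely on (a hedged sketch, not the paper's queueing algorithm: the target P(X > a) for X ~ Exp(1) and the proposal rate 1/a are assumptions chosen only for illustration), importance sampling draws from a proposal concentrated near the rare region and reweights by the likelihood ratio:

```python
import math
import random

def is_tail_prob(a, n=100_000, seed=0):
    """Estimate P(X > a) for X ~ Exp(1) by importance sampling.

    Proposal: Exp(rate 1/a), which has mean a and so puts most of its
    mass near the rare region {x > a}.
    """
    rng = random.Random(seed)
    rate = 1.0 / a
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(rate)          # draw from the proposal
        if x > a:
            # likelihood ratio f(x)/g(x) = e^{-x} / (rate * e^{-rate*x})
            total += math.exp(-x) / (rate * math.exp(-rate * x))
    return total / n

est = is_tail_prob(20.0)
exact = math.exp(-20.0)   # P(X > 20) = e^{-20}, far too rare for crude Monte Carlo
```

Crude Monte Carlo with the same budget would almost surely see no hits at all; the reweighted proposal gives a low-variance estimate from the same number of draws.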

2.
3.
The accurate estimation of rare event probabilities is a crucial problem in engineering to characterize the reliability of complex systems. Several methods, such as Importance Sampling or Importance Splitting, have been proposed to estimate such events more accurately (i.e., with a lower variance) than the crude Monte Carlo method. However, these methods assume that the probability distributions of the input variables are exactly defined (e.g., mean and covariance matrix perfectly known if the input variables follow Gaussian laws) and are not able to determine the impact of a change in the input distribution parameters on the probability of interest. The problem considered in this paper is the propagation of input distribution parameter uncertainty, defined by intervals, to the rare event probability. This problem induces intricate optimization and numerous probability estimations in order to determine the upper and lower bounds of the probability estimate. The calculation of these bounds is often numerically intractable for rare event probabilities (say 10^-5), due to the high computational cost required. A new methodology is proposed to solve this problem with a reduced simulation budget, using adaptive Importance Sampling. To this end, a method for estimating the Importance Sampling optimal auxiliary distribution is proposed, based on preceding Importance Sampling estimations. Furthermore, a Kriging-based adaptive Importance Sampling is used in order to minimize the number of evaluations of the computationally expensive simulation code. To determine the bounds of the probability estimate, an evolutionary algorithm is employed; it has been selected to deal with noisy problems, since the Importance Sampling probability estimate is a random variable. The efficiency of the proposed approach, in terms of accuracy of the results and computational cost, is assessed on academic and engineering test cases.

4.
Although importance sampling is an established and effective sampling and estimation technique, it becomes unstable and unreliable for high-dimensional problems. The main reason is that the likelihood ratio in the importance sampling estimator degenerates when the dimension of the problem becomes large. Various remedies to this problem have been suggested, including heuristics such as resampling. Even so, the consensus is that for large-dimensional problems, likelihood ratios (and hence importance sampling) should be avoided. In this paper we introduce a new adaptive simulation approach that does away with likelihood ratios, while retaining the multi-level approach of the cross-entropy method. Like the latter, the method can be used for rare-event probability estimation, optimization, and counting. Moreover, the method allows one to sample exactly from the target distribution rather than asymptotically as in Markov chain Monte Carlo. Numerical examples demonstrate the effectiveness of the method for a variety of applications.
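For contrast, the standard multi-level cross-entropy scheme that this method builds on, likelihood ratios included, can be sketched on a toy Gaussian tail problem (the target P(X > 4) for X ~ N(0,1) and the normal location family as proposals are illustrative assumptions, not the paper's likelihood-ratio-free method):

```python
import math
import random

def norm_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def ce_tail_prob(gamma=4.0, n=20_000, rho=0.1, seed=1):
    """Multi-level cross-entropy estimate of P(X > gamma), X ~ N(0,1)."""
    rng = random.Random(seed)
    v = 0.0                                            # proposal mean, adapted per level
    for _ in range(50):
        xs = sorted(rng.gauss(v, 1.0) for _ in range(n))
        gamma_t = min(xs[int((1 - rho) * n)], gamma)   # current intermediate level
        elite = [x for x in xs if x >= gamma_t]
        # CE update for the normal family: likelihood-ratio-weighted elite mean
        w = [norm_pdf(x, 0.0) / norm_pdf(x, v) for x in elite]
        v = sum(wi * xi for wi, xi in zip(w, elite)) / sum(w)
        if gamma_t >= gamma:                           # final level reached
            break
    # final importance-sampling step with the adapted proposal N(v, 1)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(v, 1.0)
        if x > gamma:
            total += norm_pdf(x, 0.0) / norm_pdf(x, v)
    return total / n
```

The degeneracy the abstract describes shows up exactly in those `norm_pdf` ratios: in high dimensions they concentrate on a few samples, which is what motivates dispensing with them.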

5.
This article proposes a method for approximating integrated likelihoods in finite mixture models. We formulate the model in terms of the unobserved group memberships, z, and make them the variables of integration. The integral is then evaluated using importance sampling over the z. We propose an adaptive importance sampling function which is itself a mixture, with two types of component distributions, one concentrated and one diffuse. The more concentrated type of component serves the usual purpose of an importance sampling function, sampling mostly group assignments of high posterior probability. The less concentrated type of component allows for the importance sampling function to explore the space in a controlled way to find other, unvisited assignments with high posterior probability. Components are added adaptively, one at a time, to cover areas of high posterior probability not well covered by the current importance sampling function. The method is called incremental mixture importance sampling (IMIS).

IMIS is easy to implement and to monitor for convergence. It scales easily for higher dimensional mixture distributions when a conjugate prior is specified for the mixture parameters. The simulated values on which the estimate is based are independent, which allows for straightforward estimation of standard errors. The self-monitoring aspects of the method make it easier to adjust tuning parameters in the course of estimation than standard Markov chain Monte Carlo algorithms. With only small modifications to the code, one can use the method for a wide variety of mixture distributions of different dimensions. The method performed well in simulations and in mixture problems in astronomy and medical research.

6.
The contribution of this paper is to introduce change-of-measure based techniques for the rare-event analysis of heavy-tailed random walks. Our changes of measure are parameterized by a family of distributions admitting a mixture form. We exploit our methodology to achieve two types of results. First, we construct Monte Carlo estimators that are strongly efficient (i.e., have bounded relative mean squared error as the event of interest becomes rare). These estimators are used to estimate both rare-event probabilities of interest and associated conditional expectations. We emphasize that our techniques allow us to control the expected termination time of the Monte Carlo algorithm even if the conditional expected stopping time (under the original distribution) given the event of interest is infinite, a situation that sometimes occurs in heavy-tailed settings. Second, the mixture family serves as a good Markovian approximation (in total variation) of the conditional distribution of the whole process given the rare event of interest. The convenient form of the mixture family allows us to obtain functional conditional central limit theorems that extend classical results in the literature.

7.
We discuss the estimation of the tail index of a heavy-tailed distribution when covariate information is available. The approach followed here is based on the technique of local polynomial maximum likelihood estimation. The generalized Pareto distribution is fitted locally to exceedances over a high specified threshold. The method provides nonparametric estimates of the parameter functions and their derivatives up to the degree of the chosen polynomial. Consistency and asymptotic normality of the proposed estimators will be proven under suitable regularity conditions. This approach is motivated by the fact that in some applications the threshold should be allowed to change with the covariates due to significant effects on scale and location of the conditional distributions. Using the asymptotic results we are able to derive an expression for the asymptotic mean squared error, which can be used to guide the selection of the bandwidth and the threshold. The applicability of the method will be demonstrated with a few practical examples.

8.
This paper reports simulation experiments applying the cross-entropy method as an importance sampling algorithm for efficient estimation of rare-event probabilities in Markovian reliability systems. The method is compared to various failure-biasing schemes that have been proved to give estimators with bounded relative error. The results from the experiments indicate a considerable improvement in the performance of the importance sampling estimators, where performance is measured by the relative error of the estimate, by the relative error of the estimator, and by the gain of importance sampling over standard simulation.

9.
For heavy-tailed distributions, the so-called tail index is an important parameter that controls the behavior of the tail distribution and is thus of primary interest to estimate extreme quantiles. In this paper, the estimation of the tail index is considered in the presence of a finite-dimensional random covariate. Uniform weak consistency and asymptotic normality of the proposed estimator are established and some illustrations on simulations are provided.

10.
11.
汪浩 (Wang Hao), 《应用概率统计》 (Applied Probability and Statistics), 2003, 19(3): 267-276
Because sample data on daily or short-period log returns in financial markets mostly exhibit fat-tailed distributions, existing normal and lognormal models fail to varying degrees. To model such fat tails accurately and to improve investment risk estimation and financial management, this paper introduces a Monte Carlo simulation method that can be calibrated to actual financial market data. The method effectively reproduces the fat-tailed distribution of daily log returns on financial asset prices. Combined with nonparametric estimation, the simulation method also yields accurate estimates of the Value-at-Risk of an investment and of confidence intervals for that high-risk value.
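A minimal sketch of the data-driven idea, assuming a plain nonparametric bootstrap of observed daily log returns (the paper's calibrated Monte Carlo method is more elaborate; the function name and defaults here are our own):

```python
import random

def bootstrap_var(returns, alpha=0.99, n_paths=10_000, horizon=1, seed=0):
    """Estimate the level-alpha Value-at-Risk of the horizon-day log return
    by resampling observed daily log returns with replacement.

    Resampling the empirical distribution preserves fat tails that a fitted
    normal or lognormal model would miss.
    """
    rng = random.Random(seed)
    sims = [sum(rng.choice(returns) for _ in range(horizon))
            for _ in range(n_paths)]
    sims.sort()
    # VaR is the loss at the (1 - alpha) lower quantile of simulated returns
    return -sims[int((1 - alpha) * n_paths)]
```

Because no parametric family is imposed, the simulated quantiles inherit whatever tail heaviness the observed returns exhibit, which is the property the abstract emphasizes.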

12.
We study the estimation of a nonparametric function containing a change point in an infinite-variance heavy-tailed process. A wavelet method yields an estimate of the change-point location together with its convergence rate. Building on this change-point estimate, we combine truncation with wavelet shrinkage to estimate the nonparametric function. Simulation results show that the wavelet method is effective for function estimation in infinite-variance heavy-tailed processes.

13.

We present a computational approach to the method of moments using Monte Carlo simulation. Simple algebraic identities are used so that all computations can be performed directly using simulation draws and computation of the derivative of the log-likelihood. We present a simple implementation using the Newton-Raphson algorithm, with the understanding that other optimization methods may be used in more complicated problems. The method can be applied to families of distributions with unknown normalizing constants, and can be extended to least squares fitting when the number of moments observed exceeds the number of parameters in the model. The method can be further generalized to allow "moments" that are any function of data and parameters, including as a special case maximum likelihood for models with unknown normalizing constants or missing data. In addition to being used for estimation, our method may be useful for setting the parameters of a Bayes prior distribution by specifying moments of a distribution using prior information. We present two examples: specification of a multivariate prior distribution in a constrained-parameter family, and estimation of parameters in an image model. The former example, used for an application in pharmacokinetics, motivated this work. This work is similar to Ruppert's method in stochastic approximation; it combines Monte Carlo simulation and the Newton-Raphson algorithm as in Penttinen, uses computational ideas and importance sampling identities developed for Monte Carlo maximum likelihood by Gelfand and Carlin, Geyer, and Geyer and Thompson, and has some similarities to the maximum likelihood methods of Wei and Tanner.
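The core trick, estimating both the moment and its parameter derivative from the same simulation draws via the score function, can be sketched for a one-parameter model (the Exp(theta) family and the target moment below are illustrative assumptions, not the paper's examples):

```python
import random

def mc_method_of_moments(target_mean, theta0=1.0, n=50_000, iters=12, seed=0):
    """Simulation-based method of moments for the rate of an Exp(theta):
    find theta with E_theta[X] = target_mean by Newton-Raphson, where both
    the moment and its derivative are estimated from simulation draws via
    the score identity  d/dtheta E[X] = E[X * d/dtheta log f(X; theta)].
    """
    rng = random.Random(seed)
    theta = theta0
    for _ in range(iters):
        xs = [rng.expovariate(theta) for _ in range(n)]
        m = sum(xs) / n                                   # simulated E_theta[X]
        # score of Exp(theta): d/dtheta log f(x; theta) = 1/theta - x
        dm = sum(x * (1.0 / theta - x) for x in xs) / n   # ~ d/dtheta E[X] = -1/theta^2
        theta -= (m - target_mean) / dm                   # Newton-Raphson step
    return theta
```

The same two-line pattern (simulate the "moment", differentiate it through the score) is what lets the method handle models whose normalizing constants cannot be written down.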

14.
15.
Input data modeling is a critical component of a successful simulation application. A perspective of the area is given, with an emphasis on available probability distributions as models, estimation methods, model selection and discrimination, and goodness of fit. Three specific distribution classes (lambda, S_B, and TES processes) are discussed in some detail to illustrate characteristics that favor input models. Regarding estimation, we argue for maximum likelihood estimation over the method of moments and other matching schemes, due to its intrinsically superior properties (presuming a specific model) and its capability of accommodating messy data types. We conclude with a list of specific research problems and areas warranting additional attention.

16.
In applied statistics, the coefficient of variation (CV) is widely used, yet inference concerning the CV of non-normal distributions is rarely reported. In this article, a simulation-based Bayesian approach is adopted to estimate the CV under progressive first-failure censored data from the Gompertz distribution. Sampling schemes such as first-failure censoring, progressive type II censoring, type II censoring, and complete sampling can be obtained as special cases of the progressive first-failure censoring scheme. The simulation-based approach gives a point estimate as well as the empirical sampling distribution of the CV. A joint prior density, formed as the product of a conditional gamma density and an inverted gamma density for the unknown Gompertz parameters, is considered. In addition, results from maximum likelihood and parametric bootstrap techniques are also presented. An analysis of a real-life data set is presented for illustrative purposes. Results from simulation studies assessing the performance of the proposed method are included.

17.
Estimation of extreme conditional quantiles with a functional covariate is an important problem in quantile regression. The existing methods, however, are only applicable to heavy-tailed distributions with a positive conditional tail index. In this paper, we propose a new framework for estimating extreme conditional quantiles with a functional covariate that systematically combines nonparametric modeling techniques and extreme value theory. Our proposed method is widely applicable, no matter whether the conditional distribution of a response variable Y given a vector of functional covariates X is short-, light-, or heavy-tailed. It thus enriches the existing literature.

18.
Likelihood Based Confidence Intervals for the Tail Index
Jye-Chyi Lu  Liang Peng 《Extremes》2002,5(4):337-352
For the estimation of the tail index of a heavy-tailed distribution, one of the best-known estimators is the Hill estimator (Hill, 1975). An obvious way to construct a confidence interval for the tail index is via the normal approximation of the Hill estimator. In this paper we apply both the empirical likelihood method and the parametric likelihood method to obtain confidence intervals for the tail index. Our limited simulation study indicates that the normal approximation method is worse than the other two methods in terms of coverage probability, while the empirical likelihood method and the parametric likelihood method are comparable.
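The Hill estimator itself is short enough to state in code; a minimal sketch (the Pareto sanity check with tail index 2 at the end is our own illustrative assumption):

```python
import math
import random

def hill_estimator(data, k):
    """Hill (1975) estimator of the tail index: the extreme-value index
    gamma is the mean log-spacing of the k largest order statistics above
    the (k+1)-th largest value; the tail index is alpha = 1/gamma."""
    xs = sorted(data, reverse=True)
    logs = [math.log(x) for x in xs[: k + 1]]
    gamma = sum(logs[i] - logs[k] for i in range(k)) / k
    return 1.0 / gamma

# sanity check on exact Pareto data with tail index 2: X = U^(-1/2), U ~ Uniform(0,1)
rng = random.Random(0)
sample = [rng.random() ** -0.5 for _ in range(100_000)]
alpha_hat = hill_estimator(sample, k=2000)
```

The choice of k drives the bias-variance trade-off that makes the confidence-interval comparison in the abstract nontrivial: small k gives high variance, large k pulls in observations from outside the tail.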

19.
This paper deals with the estimation of rare-event probabilities in stochastic networks. The well-known variance reduction technique of Importance Sampling (IS) is an effective tool for this task. The main idea of IS is to simulate the random system under a modified set of parameters, so as to make the occurrence of the rare event more likely. The major difficulty with IS is that the optimal modified parameters, called reference parameters, are usually very hard to obtain. Rubinstein (Eur J Oper Res 99:89-112, 1997) developed the Cross-Entropy (CE) method to solve this problem, and he and his collaborators applied it to the estimation of rare-event probabilities in stochastic networks with exponential distributions [see De Boer et al. (Ann Oper Res 134:19-67, 2005)]. In this paper, we test this simulation technique on medium-sized stochastic networks and compare its effectiveness with simple crude Monte Carlo (CMC) simulation. The effectiveness of a variance reduction algorithm is measured as follows: we calculate the product of the necessary CPU time and the estimated variance of the estimator, and compare this product with the same quantity for CMC simulation. This criterion was originally used for comparing variance reduction techniques by Hammersley and Handscomb (Monte Carlo Methods, Methuen & Co Ltd, London, 1967). The main result of the paper is the extension of the CE method to the estimation of rare-event probabilities in stochastic networks with beta distributions. In this case the calculation of the reference parameters of the importance sampling distribution requires the numerical solution of a nonlinear system of equations, which is done with a Newton-Raphson iteration scheme; the CPU time spent computing the reference parameter values can then no longer be neglected. Numerical results are also presented.
This work was supported by the Hungarian National Scientific Research Fund, grant OTKA T047340.

20.
《随机分析与应用》 (Stochastic Analysis and Applications), 2012, 30(1): 76-96

We introduce a novel method for estimating the parameter that governs the tail behavior of the cumulative distribution function of an observed random variable. We call it the Inverse Probabilities for p-Outside values (IPO) estimation method. We show that this approach is applicable to a wider class of distributions than the class with regularly varying tails, and we demonstrate that the IPO method is a valuable competitor to estimation methods based on regularly varying tails. Some properties of the estimators are derived. The results are illustrated by a convenient simulation study.

