20 similar documents found; search time: 31 ms
1.
REpetitive Simulation Trials After Reaching Thresholds (RESTART) is a widely applied accelerated simulation technique that allows the evaluation of extremely low probabilities. In this method, a number of simulation retrials are performed when the process enters regions of the state space where the chance of occurrence of the rare event is higher. Formulas for evaluating the optimal number of regions and retrials, as well as guidelines for obtaining suitable importance functions, were provided in previous papers. Nevertheless, further investigation is required to apply these guidelines to practical cases.
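A minimal fixed-splitting sketch in the spirit of RESTART, using the current level of a drifting random walk as the importance function. The thresholds, retrial counts, and the toy walk are illustrative assumptions, not a model from the abstract above:

```python
import random

def restart_run(levels, retrials, step, start, rng):
    """One splitting run: unbiased estimate of the probability that the
    walk reaches levels[-1] before being absorbed at 0."""
    def run(x, idx):
        while 0 < x < levels[idx]:            # simulate until threshold or ruin
            x += step(rng)
        if x <= 0:
            return 0.0                        # absorbed before the rare event
        if idx == len(levels) - 1:
            return 1.0                        # top threshold reached
        # entered a more promising region: continue with `retrials` copies
        return sum(run(x, idx + 1) for _ in range(retrials)) / retrials
    return run(start, 0)

rng = random.Random(42)
step = lambda r: 1 if r.random() < 0.4 else -1    # downward-drifting walk
n_runs = 3000
est = sum(restart_run([4, 7, 10], 5, step, 1, rng) for _ in range(n_runs)) / n_runs
# exact value for this walk: (1 - 1.5) / (1 - 1.5**10), about 8.8e-3
```

Each crossing of a threshold spawns retrials, so trajectories that approach the rare set are sampled more densely while the estimator remains unbiased.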
2.
N-H Shih, The Journal of the Operational Research Society, 2005, 56(6):744-749
This paper deals with simulation-based estimation of the probability distribution of completion time in stochastic activity networks. These distribution functions may be valuable in many applications. A simulation method using importance-sampling techniques is presented for estimating the probability distribution function. Separating the state space into two sets, one that must be sampled and another that need not be, is suggested. The sampling plan of the simulation can then be decided after the probabilities of the two sets are adjusted; a formula for this adjustment is presented. It is demonstrated that the estimator is unbiased and that the upper bound of its variance is minimized. Adaptive sampling, utilizing importance sampling, is discussed for problems where there is no information, or more than one way, to separate the state space. Examples illustrate the sampling plan.
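The idea of sampling only the part of the state space where the event can occur, and reweighting by that part's probability, can be shown on a toy problem. The uniform example and the 0.01 weight are our own assumptions, not the activity-network setting of the paper:

```python
import random

def corner_only_estimate(n, rng):
    """Estimate p = P(U1 + U2 > 1.9) for independent U(0,1) variables.
    The event can only occur when both variables lie in (0.9, 1), a set of
    probability 0.1 * 0.1 = 0.01, so we sample that set alone and reweight."""
    hits = sum(rng.uniform(0.9, 1.0) + rng.uniform(0.9, 1.0) > 1.9
               for _ in range(n))
    return 0.01 * hits / n        # weight = probability of the sampled set

rng = random.Random(7)
est = corner_only_estimate(100_000, rng)
# exact value: area of the corner triangle = 0.1**2 / 2 = 0.005
```

No sampling effort is spent on the set where the event is impossible, which is the essence of the adjusted sampling plan described above.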
3.
This paper deals with the estimation of rare event probabilities in stochastic networks. The well-known variance reduction technique called Importance Sampling (IS) is an effective tool for this purpose. The main idea of IS is to simulate the random system under a modified set of parameters, so as to make the occurrence of the rare event more likely. The major problem of the IS technique is that the optimal modified parameters, called reference parameters, are usually very difficult to obtain. Rubinstein (Eur J Oper Res 99:89–112, 1997) developed the Cross-Entropy (CE) method to solve this problem, and he and his collaborators then applied it to the estimation of rare event probabilities in stochastic networks with exponential distributions [see De Boer et al. (Ann Oper Res 134:19–67, 2005)]. In this paper, we test this simulation technique on medium-sized stochastic networks and compare its effectiveness to simple crude Monte Carlo (CMC) simulation. The effectiveness of a variance reduction algorithm is measured as follows: we calculate the product of the necessary CPU time and the estimated variance of the estimator, and compare this product to the corresponding product for CMC simulation. This criterion was originally used for comparing variance reduction techniques by Hammersley and Handscomb (Monte Carlo Methods. Methuen & Co Ltd, London, 1967). The main result of the paper is the extension of the CE method to the estimation of rare event probabilities in stochastic networks with beta distributions. In this case the calculation of the reference parameters of the importance sampling distribution requires the numerical solution of a nonlinear system of equations, which is done with a Newton–Raphson iteration scheme, so the CPU time spent on computing the reference parameter values cannot be neglected. Numerical results are also presented.
This work was supported by a grant from the Hungarian National Scientific Research Fund (OTKA), T047340.
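The time-variance product used as the efficiency criterion can be sketched as follows. The tail-probability target, the fixed exponential tilt, and the sample sizes are illustrative assumptions; the paper's networks and its beta-distribution case are more involved:

```python
import math, random, time

def time_variance_product(estimator, n):
    """Hammersley-Handscomb efficiency criterion: CPU time multiplied by the
    sample variance of a single-draw estimator (smaller is better)."""
    t0 = time.perf_counter()
    samples = [estimator() for _ in range(n)]
    cpu = time.perf_counter() - t0
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, cpu * var

rng = random.Random(1)
N, T = 5, 20.0                # target: P(X1+...+X5 > 20), Xi iid Exp(1), ~1.69e-5

def cmc():                    # crude Monte Carlo: almost never hits the event
    return float(sum(rng.expovariate(1.0) for _ in range(N)) > T)

def tilted():                 # IS with exponentially tilted rate 0.25
    xs = [rng.expovariate(0.25) for _ in range(N)]
    s = sum(xs)
    # likelihood ratio f(x; rate 1) / g(x; rate 0.25) = exp(-0.75*s) / 0.25**5
    return math.exp(-0.75 * s) / 0.25 ** N if s > T else 0.0

mean_cmc, eff_cmc = time_variance_product(cmc, 50_000)
mean_is, eff_is = time_variance_product(tilted, 50_000)
```

Comparing `eff_is` with `eff_cmc` gives the relative efficiency in exactly the CPU-time-times-variance sense described above.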
4.
We propose a modification, based on the RESTART (repetitive simulation trials after reaching thresholds) and DPR (dynamics probability redistribution) rare event simulation algorithms, of the standard diffusion Monte Carlo (DMC) algorithm. The new algorithm has a lower variance per workload, regardless of the regime considered. In particular, it makes it feasible to use DMC in situations where the “naïve” generalization of the standard algorithm would be impractical due to an exponential explosion of its variance. We numerically demonstrate the effectiveness of the new algorithm on a standard rare event simulation problem (probability of an unlikely transition in a Lennard‐Jones cluster), as well as a high‐frequency data assimilation problem. © 2014 Wiley Periodicals, Inc.
5.
The surplus process of an insurance portfolio is defined as the wealth obtained by the premium payments minus the reimbursements made at the time of claims. When this process becomes negative (if ever), we say that ruin has occurred. The general setting is the Gambler's Ruin Problem. In this paper we address the problem of estimating derivatives (sensitivities) of ruin probabilities with respect to the rate of accidents. Estimating probabilities of rare events is a challenging problem, since naïve estimation is not applicable. Solution approaches are very recent, mostly through the use of importance sampling techniques. Sensitivity estimation is an even harder problem in these situations. We study three methods for estimating ruin probabilities: one via importance sampling (IS), and two others via indirect simulation: the storage process (SP), which restates the problem in terms of a queuing system, and the convolution formula (CF). To estimate the sensitivities, we apply the Rare Perturbation Analysis (RPA) method to IS, the Infinitesimal Perturbation Analysis (IPA) method to SP, and the score function method to CF. The simulation methods are compared in terms of their efficiency, a criterion that appropriately weighs precision and CPU time. We also indicate how other criteria, such as set-up time and prior formula development, may be problem-dependent.
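The Gambler's Ruin setting mentioned above has a closed-form ruin probability against which crude simulation can be checked. The unit-step walk below is a simplification of the surplus process (no claim-size distribution), chosen only to illustrate the baseline that the paper's IS and sensitivity methods improve upon:

```python
import random

def ruin_prob(p, start, goal):
    """Classical gambler's ruin: probability of hitting 0 before `goal`,
    winning one unit per round with probability p."""
    if p == 0.5:
        return 1 - start / goal
    r = (1 - p) / p
    return (r ** start - r ** goal) / (1 - r ** goal)

def simulate_ruin(p, start, goal, runs, rng):
    """Crude Monte Carlo estimate of the same quantity."""
    ruined = 0
    for _ in range(runs):
        x = start
        while 0 < x < goal:
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)
    return ruined / runs

exact = ruin_prob(0.45, 5, 10)        # about 0.7317 for this unfavourable game
approx = simulate_ruin(0.45, 5, 10, 5000, random.Random(3))
```

When ruin is a rare event (large initial surplus, favourable odds), this crude estimator breaks down, which is precisely why the IS, SP, and CF approaches are needed.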
6.
There are various importance sampling schemes to estimate rare event probabilities in Markovian systems such as Markovian reliability models and Jackson networks. In this work, we present a general state-dependent importance sampling method which partitions the state space and applies the cross-entropy method to each partition. We investigate two versions of our algorithm and apply them to several examples of reliability and queueing models, comparing our method with other importance sampling schemes in each case. The performance of the importance sampling schemes is measured by the relative error of the estimator and by the efficiency of the algorithm. The experimental results show considerable improvements both in the running time of the algorithm and in the variance of the estimator.
7.
We consider systems of GI/M/1 type with bulk arrivals, bulk service, and exponential server vacations. The generating functions of the steady-state probabilities of the embedded Markov chain are found in terms of Riemann boundary value problems, and a necessary and sufficient condition for ergodicity is proved. Explicit formulas are obtained for the case where the generating function of the arrival group size is rational. Resonance between the vacation rate and the system is studied. Complete formulas are given for the cases of single and geometric arrivals.
This revised version was published online in June 2006 with corrections to the Cover Date.
8.
9.
This paper applies importance sampling simulation to estimate rare event probabilities of the first passage time in the infinite server queue with renewal arrivals and general service time distributions. We consider importance sampling algorithms based on large deviations results for the infinite server queue, as well as an algorithm based on the cross-entropy method, allowing both light-tailed and heavy-tailed distributions for the interarrival and service times. The efficiency of the algorithms is assessed through simulation experiments.
10.
Steven N. MacEachern, Mario Peruggia, Journal of Computational and Graphical Statistics, 2013, 22(1):99-121
This article focuses on improving estimation for Markov chain Monte Carlo simulation. The proposed methodology is based upon the use of importance link functions. With the help of appropriate importance sampling weights, effective estimates of functionals are developed. The method is most easily applied to irreducible Markov chains, where application is typically immediate. An important conceptual point is the applicability of the method to reducible Markov chains through the use of many-to-many importance link functions. Applications discussed include estimation of marginal genotypic probabilities for pedigree data, estimation for models with and without influential observations, and importance sampling for a target distribution with thick tails.
11.
Rare event simulation in the context of queueing networks has been an active area of research for more than two decades. A commonly used technique to increase the efficiency of Monte Carlo simulation is importance sampling. However, there are few rigorous results on the design of efficient or asymptotically optimal importance sampling schemes for queueing networks. Using a recently developed game/subsolution approach, we construct simple and efficient state-dependent importance sampling schemes for simulating buffer overflows in stable open Jackson networks. The sampling distributions do not depend on the particular event of interest, and hence overflow probabilities for different events can be estimated simultaneously. A by-product of the analysis is the identification of the minimizing trajectory for the calculus of variation problem that is associated with the sample-path large deviation rate function.
12.
While many single station queues possess explicit forms for their equilibrium probabilities, queueing networks are more problematic. Outside of the class of product form networks (e.g., Jackson, Kelly, and BCMP networks), one must resort to bounds, simulation, asymptotic studies or approximations. By focusing on a class of two-station closed reentrant queueing networks under the last buffer first served (LBFS) policy, we show that non-product form equilibrium probabilities can be obtained. When the number of customer classes in the network is five or fewer, explicit solutions can be obtained. Otherwise, we require the roots of a characteristic polynomial and a matrix inversion that depend only on the network topology. The approach relies on two key points. First, under LBFS, the state space can be reduced to four dimensions independent of the number of buffers in the system. Second, there is a sense of spatial causality in the global balance equations that can then be exploited. To our knowledge, these two-station closed reentrant queueing networks under LBFS represent the first class of queueing networks for which explicit non-product form equilibrium probabilities can be constructed (for five customer classes or fewer), the generic form of the equilibrium probabilities can be deduced, and matrix analytic approaches can be applied. As discussed via example, there may be other networks for which related observations can be exploited.
13.
Importance Sampling Simulations of Markovian Reliability Systems Using Cross-Entropy
This paper reports simulation experiments applying the cross-entropy method as an importance sampling algorithm for the efficient estimation of rare event probabilities in Markovian reliability systems. The method is compared to various failure biasing schemes that have been proved to give estimators with bounded relative errors. The results of the experiments indicate a considerable improvement in the performance of the importance sampling estimators, where performance is measured by the relative error of the estimate, the relative error of the estimator, and the gain of importance sampling over standard simulation.
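A one-dimensional sketch of the multi-level cross-entropy idea: learn the reference parameter from elite samples, then run a final importance-sampling pass. The sum-of-exponentials target and all constants are our assumptions; the Markovian reliability models of the paper are far richer:

```python
import math, random

def lr(x, u, v):
    """Likelihood ratio f(x; mean u) / f(x; mean v) for iid exponentials."""
    s, n = sum(x), len(x)
    return (v / u) ** n * math.exp(-s * (1 / u - 1 / v))

def ce_rare_event(gamma, n, u, rng, rho=0.1, batch=1000, final=50_000):
    """Estimate l = P(X1+...+Xn >= gamma), Xi iid Exp(mean u), by learning
    the importance-sampling mean v with the multi-level CE method."""
    v = u
    for _ in range(30):                           # safety cap on CE iterations
        xs = [[rng.expovariate(1 / v) for _ in range(n)] for _ in range(batch)]
        level = min(gamma, sorted(sum(x) for x in xs)[int((1 - rho) * batch)])
        elite = [x for x in xs if sum(x) >= level]
        w = [lr(x, u, v) for x in elite]
        # CE update for the exponential family: weighted mean of elite samples
        v = sum(wi * sum(x) / n for wi, x in zip(w, elite)) / sum(w)
        if level >= gamma:
            break
    est = sum(lr(x, u, v) for x in
              ([rng.expovariate(1 / v) for _ in range(n)] for _ in range(final))
              if sum(x) >= gamma) / final
    return est, v

rng = random.Random(5)
est, v = ce_rare_event(20.0, 5, 1.0, rng)
# exact tail: sum_{k<5} e**-20 * 20**k / k!, about 1.69e-5
```

The intermediate levels let the algorithm reach the rare set gradually; in the reliability setting this role is played by failure biasing of transition probabilities rather than a single tilted mean.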
14.
European Journal of Operational Research, 1997, 101(2):393-405
This paper is concerned with the determination of blocking probabilities in loss networks. Our study consists of two parts. First, we scale both arrival rates and link capacities in order to derive rough asymptotic expressions; these expressions arise as the solutions of mathematical programming problems. Second, we develop a fast simulation technique to estimate the blocking probabilities. This technique is based on importance sampling, where the choice of the alternative probability model is closely related to the optimizing arguments of the above-mentioned mathematical programming problem. Examples show that a substantial reduction of simulation effort can be achieved.
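For a single link, the blocking probability has the classical Erlang-B form, computable by a stable recursion. This is only the single-link building block; the scalings and importance-sampling scheme of the paper address whole networks:

```python
def erlang_b(servers, erlangs):
    """Erlang-B blocking probability B(c, a) for offered load `erlangs`,
    via the stable recursion
    B(0, a) = 1,  B(c, a) = a*B(c-1, a) / (c + a*B(c-1, a))."""
    b = 1.0
    for c in range(1, servers + 1):
        b = erlangs * b / (c + erlangs * b)
    return b

# small sanity values: B(1, 1) = 1/2 and B(2, 1) = 1/5
```

The recursion avoids the overflow-prone factorials of the closed-form expression, which matters for large capacities of the kind the asymptotic scaling considers.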
15.
Nadaniela Egidi, Journal of Mathematical Analysis and Applications, 2011, 377(2):670-682
We consider the integral operator defined on a circular disk whose kernel is the Green function of the Helmholtz operator. We present an analytic framework for the explicit computation of the singular system of this kernel. In particular, the main formulas of this framework are a characteristic equation for the singular values and explicit expressions for the corresponding singular functions. We also provide a property of the singular values that gives important information for the numerical evaluation of the singular system. Finally, we present a simple numerical experiment in which the singular system computed by a simple implementation of these analytic formulas is compared with the singular system obtained by a discretization of the Green function of the Helmholtz operator.
16.
In our study of electrical networks we develop two themes: first, finding explicit formulas for special classes of functions defined on the vertices of a transient network, namely monopoles, dipoles, and harmonic functions; second, the properties of electrical networks supported on Bratteli diagrams. We show that the structure of Bratteli diagrams allows one to describe algorithmically harmonic functions as well as monopoles and dipoles. We also discuss some special classes of Bratteli diagrams (stationary, Pascal, trees) and give conditions under which the harmonic functions defined on these diagrams have finite energy.
17.
In the selection of investment projects, it is important to account for exogenous uncertainties (such as macroeconomic developments) which may impact the performance of projects. These uncertainties can be addressed by examining how the projects perform across several scenarios; but it may be difficult to assign well-founded probabilities to such scenarios, or to characterize the decision makers’ risk preferences through a uniquely defined utility function. Motivated by these considerations, we develop a portfolio selection framework which (i) uses set inclusion to capture incomplete information about scenario probabilities and utility functions, (ii) identifies all the non-dominated project portfolios in view of this information, and (iii) offers decision support for rejection and selection of projects. The proposed framework enables interactive decision support processes where the implications of additional probability and utility information or further risk constraints are shown in terms of corresponding decision recommendations.
18.
The approximation of optimal policy functions is investigated for dynamic optimization problems with an objective that is additive over a finite number of stages. The distance between optimal and suboptimal values of the objective functional is estimated in terms of the errors in approximating the optimal policy functions at the various stages. Smoothness properties are derived for such functions and exploited to choose the approximating families. The approximation error is measured in the supremum norm, so as to control the error propagation from stage to stage. Nonlinear approximators corresponding to Gaussian radial-basis-function networks with adjustable centers and widths are considered. Conditions are given guaranteeing that the number of Gaussians (hence, the number of parameters to be adjusted) does not grow “too fast” with the dimension of the state vector. The results help to mitigate the curse of dimensionality in dynamic optimization. An example of application is given, and the use of the estimates is illustrated via a numerical simulation.
19.
A. J. Hayter, Computational Statistics, 2014, 29(5):1207-1219
Recursive formulas are provided for computing probabilities of a multinomial distribution. First, a recursive formula is provided for computing rectangular probabilities, which include the cumulative distribution function as a special case. These rectangular probabilities can be used to provide goodness-of-fit tests for the cell probabilities. The probability that a certain cell count is the maximum of all cell counts is also considered, which can be used to assess the probability that the maximum cell count corresponds to the cell with the maximum probability. Finally, a recursive formula is provided for computing the probability that the cell counts satisfy a certain ordering, which can be used to assess the probability that the ordering of the cell counts corresponds to the ordering of the cell probabilities. The computational cost of these recursive formulas is linear in the number of cells, and they make it possible to calculate probabilities that would otherwise be computationally challenging. Some examples of the applications of these formulas are provided.
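A linear-in-cells recursion for rectangular probabilities can be sketched as a cell-by-cell convolution; this is our reconstruction of the general idea, not necessarily the paper's exact formula:

```python
from math import factorial

def multinomial_rect(n, probs, lower, upper):
    """P(lower[i] <= N_i <= upper[i] for all i), N ~ Multinomial(n, probs).
    g[m] accumulates the sum of prod p_i**x_i / x_i! over in-bounds counts of
    the cells processed so far that total m; one pass per cell, so the cost
    is linear in the number of cells."""
    g = [0.0] * (n + 1)
    g[0] = 1.0
    for p, lo, hi in zip(probs, lower, upper):
        g = [sum(p ** x / factorial(x) * g[m - x]
                 for x in range(lo, min(hi, m) + 1))
             for m in range(n + 1)]
    return factorial(n) * g[n]

# CDF-type special case: P(N1 <= 1) for N1 ~ Binomial(3, 0.5) is 0.5
cdf = multinomial_rect(3, [0.5, 0.5], [0, 0], [1, 3])
```

Direct enumeration of all compositions is exponential in the number of cells, whereas this convolution keeps only the running total of trials used.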
20.
Following López and Volle (J Convex Anal 17, 2010), we provide new formulas for the Fenchel subdifferential of the conjugate of functions defined on locally convex spaces. In particular, this allows us to derive expressions for the set of minimizers of the lower semicontinuous convex hull of such functions. These formulas are written by means of primal objects related to the subdifferential of the initial function, namely a new enlargement of the Fenchel subdifferential operator.