Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Manufacturing or multivariate yield, the fraction of unscreened products that conform to all product specification limits, is an important and commonly used metric for assessing and improving the quality of a production process. Current procedures for multivariate yield evaluation, such as Monte Carlo simulation, require substantial computing effort, making the iterative adjustment of design parameters often impractical. This paper introduces a new approach to multivariate yield evaluation based on a numerical integration procedure called Gaussian quadrature reduction (GQR). The advantage of this approach is a large reduction in the computational burden associated with multivariate yield evaluation, with virtually no loss in the accuracy of the estimates. The proposed procedure can be generalized to evaluate many other multivariate criteria, such as expected costs and the desirability index. The method is demonstrated on three yield evaluation test problems, and comparisons with Monte Carlo-based evaluations are presented.
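The GQR procedure itself is not reproduced here, but its core idea, replacing Monte Carlo averaging with a small deterministic node set, can be sketched with an ordinary tensor-product Gauss-Hermite rule (my simplification, not the paper's reduced rule; the smooth test integrand is also mine, since actual yield integrands involve indicators of the specification region and are less smooth):

```python
import numpy as np

def gauss_hermite_expectation_2d(f, n_nodes=10):
    """Approximate E[f(Z1, Z2)] for independent standard normals with a
    tensor-product Gauss-Hermite rule: n_nodes**2 deterministic evaluations
    replace thousands of Monte Carlo draws for smooth integrands."""
    # Probabilists' Hermite rule: weight function exp(-x^2 / 2).
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)
    X1, X2 = np.meshgrid(x, x)
    W = np.outer(w, w) / (2.0 * np.pi)   # normalize to the N(0, I) density
    return float((W * f(X1, X2)).sum())
```

For polynomial criteria such as E[Z1^2 + Z2^2] = 2 the ten-node rule is exact to machine precision, which illustrates why quadrature can match Monte Carlo accuracy at a fraction of the cost.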

2.
We propose adaptive integration algorithms for calculating the tail probability in multi-factor credit portfolio loss models. We first modify the classical Genz-Malik rule, a deterministic multiple-integration rule suitable for portfolio credit models with fewer than eight factors. We then arrive at adaptive Monte Carlo integration, which essentially replaces the deterministic integration rule with antithetic random numbers. The latter can not only handle higher-dimensional models but is also able to provide reliable probabilistic error bounds. Both algorithms are asymptotically convergent and consistently outperform the plain Monte Carlo method.
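The antithetic substitution can be sketched in a few lines (a generic version, not the authors' portfolio-loss implementation): each normal draw is paired with its negation, and the empirical standard error supplies the probabilistic error bound the abstract mentions.

```python
import numpy as np

def antithetic_mc(f, dim, n_pairs, rng):
    """Antithetic Monte Carlo estimate of E[f(Z)], Z ~ N(0, I_dim).
    Returns the estimate and a probabilistic error bound (standard error)."""
    z = rng.standard_normal((n_pairs, dim))
    vals = 0.5 * (f(z) + f(-z))          # pair each draw with its mirror image
    est = float(vals.mean())
    se = float(vals.std(ddof=1) / np.sqrt(n_pairs))
    return est, se
```

For monotone integrands the pairing reduces variance; for a linear integrand it removes it entirely.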

3.
New regulations, stronger competition and more volatile capital markets have increased the demand for stochastic asset-liability management (ALM) models for insurance companies in recent years. The numerical simulation of such models is usually performed by Monte Carlo methods, which, however, suffer from slow and erratic convergence. As alternatives to Monte Carlo simulation, we propose and investigate in this article the use of deterministic integration schemes, such as quasi-Monte Carlo and sparse grid quadrature methods. Numerical experiments with different ALM models for portfolios of participating life insurance products demonstrate that these deterministic methods often converge faster, are less erratic and produce more accurate results than Monte Carlo simulation, even for small sample sizes and complex models, if the methods are combined with adaptivity and dimension reduction techniques. In addition, we show by an analysis of variance (ANOVA) that ALM problems are often of very low effective dimension, which provides a theoretical explanation for the success of the deterministic quadrature methods.
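As a minimal illustration of the deterministic alternative (a generic Halton rule, far simpler than the adaptive sparse-grid machinery the article studies, and with a toy integrand of my choosing):

```python
import numpy as np

def halton(n, dim):
    """First n points of the Halton low-discrepancy sequence in [0,1)^dim."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19]   # supports dim <= 8
    pts = np.empty((n, dim))
    for d in range(dim):
        base = primes[d]
        for i in range(n):
            frac, r, k = 1.0, 0.0, i + 1
            while k > 0:                     # radical-inverse of i+1 in this base
                frac /= base
                r += frac * (k % base)
                k //= base
            pts[i, d] = r
    return pts

def qmc_integrate(f, n, dim):
    """Quasi-Monte Carlo estimate of the integral of f over the unit cube."""
    return float(f(halton(n, dim)).mean())
```

Unlike pseudo-random sampling, the same point set is reproduced on every run, so convergence is deterministic rather than erratic.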

4.
We study numerical integration of Lipschitz functionals on a Banach space by means of deterministic and randomized (Monte Carlo) algorithms. This quadrature problem is shown to be closely related to the problem of quantization and to the average Kolmogorov widths of the underlying probability measure. In addition to the general setting, we analyze, in particular, integration with respect to Gaussian measures and distributions of diffusion processes. We derive lower bounds for the worst case error of every algorithm in terms of its cost, and we present matching upper bounds, up to logarithms, and corresponding almost optimal algorithms. As auxiliary results, we determine the asymptotic behavior of quantization numbers and Kolmogorov widths for diffusion processes.

5.
In this paper we examine the relationship between a newly developed local dependence measure, the local Gaussian correlation, and standard copula theory. We are able to describe characteristics of the dependence structure in different copula models in terms of the local Gaussian correlation. Further, we construct a goodness-of-fit test for bivariate copula models. An essential ingredient of this test is the use of a canonical local Gaussian correlation and Gaussian pseudo-observations which make the test independent of the margins, so that it is a genuine test of the copula structure. A Monte Carlo study reveals that the test performs very well compared to a commonly used alternative test. We also propose two types of diagnostic plots which can be used to investigate the cause of a rejected null. Finally, our methods are applied to a “classical” insurance data set.

6.
In this paper, we propose a combined reliability-mechanical study for treating the metal forming process. The combination is based on the augmented Lagrangian method for solving the deterministic case and on the response surface method. Our goal is the computation of the failure probability of the frictionless contact problem. Contact problems in mechanics are particularly complex and normally have to be solved numerically, and several numerical techniques are available for computing the solution. However, some design parameters are uncertain, and the deterministic solutions could be unacceptable. A mechanical contact study is therefore an important subject for reliability analysis: the augmented Lagrangian method is coupled with the first-order reliability method, and the Monte Carlo method is used to obtain reference results. The metal forming process is treated numerically to validate the new approach.
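For orientation, the failure-probability target can be sketched on a toy limit state where the answer is known in closed form; this is plain Monte Carlo plus the exact reliability-index benchmark, and the paper's augmented-Lagrangian contact solver and response-surface coupling are not reproduced:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def mc_failure_probability(n, rng):
    """Crude Monte Carlo estimate of P(g < 0) for the linear limit state
    g = R - S with R ~ N(6, 1) (resistance) and S ~ N(3, 1) (load)."""
    g = rng.normal(6.0, 1.0, n) - rng.normal(3.0, 1.0, n)
    return float((g < 0).mean())

# For a linear Gaussian limit state, FORM is exact: Pf = Phi(-beta) with
# reliability index beta = (6 - 3) / sqrt(1 + 1).
beta = 3.0 / sqrt(2.0)
pf_exact = norm_cdf(-beta)
```

The Monte Carlo estimate serves here, as in the paper, as the reference against which a cheaper approximation can be validated.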

7.
The Gibbs sampler, a popular routine among Markov chain Monte Carlo sampling methodologies, has revolutionized the application of Monte Carlo methods in statistical computing practice. The performance of the Gibbs sampler relies heavily on the choice of sweep strategy, that is, the means by which the components or blocks of the random vector X of interest are visited and updated. We develop an automated, adaptive algorithm for implementing the optimal sweep strategy as the Gibbs sampler traverses the sample space. The decision rules through which this strategy is chosen are based on the convergence properties of the induced chain and the precision of statistical inferences drawn from the generated Monte Carlo samples. As part of the development, we analytically derive closed-form expressions for the decision criteria of interest and present computationally feasible implementations of the adaptive random scan Gibbs sampler via a Gaussian approximation to the target distribution. We illustrate the results and algorithms presented by using the adaptive random scan Gibbs sampler to sample multivariate Gaussian target distributions, and screening-test and image data. Research by RL and ZY was supported in part by US National Science Foundation FRG grant 0139948 and a grant from Lawrence Livermore National Laboratory, Livermore, California, USA.
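A minimal random-scan Gibbs sampler for a bivariate Gaussian target makes the sweep-strategy ingredient concrete (the selection probabilities are fixed here; the paper's contribution is to adapt them on the fly):

```python
import numpy as np

def random_scan_gibbs(rho, select_probs, n_iter, rng):
    """Random-scan Gibbs for (X1, X2) ~ N(0, [[1, rho], [rho, 1]]).
    Each step picks one coordinate according to select_probs (the sweep
    strategy) and redraws it from its full conditional N(rho*other, 1-rho^2)."""
    x = np.zeros(2)
    draws = np.empty((n_iter, 2))
    cond_sd = np.sqrt(1.0 - rho ** 2)
    for t in range(n_iter):
        i = rng.choice(2, p=select_probs)     # which block to update
        x[i] = rng.normal(rho * x[1 - i], cond_sd)
        draws[t] = x
    return draws
```

Skewing `select_probs` toward the slower-mixing coordinate is exactly the kind of choice the adaptive decision rules are designed to make automatically.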

8.
In this work, we propose to couple importance sampling with multilevel Monte Carlo (MLMC). We advocate a per-level approach with as many importance sampling parameters as there are levels, which enables us to handle the different levels independently. The search for parameters is carried out using sample average approximation, which basically consists in applying deterministic optimisation techniques to a Monte Carlo approximation rather than resorting to stochastic approximation. Our estimator leads to a robust and efficient procedure reducing both the discretization error (the bias) and the variance for a given computational effort. In the setting of discretized diffusions, we prove that our estimator satisfies a strong law of large numbers and a central limit theorem with optimal limiting variance, in the sense that this is the variance achieved by the best importance sampling measure (among the class of changes we consider), which is, however, intractable. Finally, we illustrate the efficiency of our method on several numerical challenges coming from quantitative finance and show that it outperforms the standard MLMC estimator.
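The multilevel backbone (without the importance-sampling layer the paper adds) can be sketched for an Euler-discretized geometric Brownian motion; the coupling drives fine and coarse paths with the same Brownian increments, which keeps the level corrections small:

```python
import numpy as np

def euler_gbm_pair(n_paths, level, T, mu, sig, rng):
    """Euler paths of dX = mu*X dt + sig*X dW on a fine grid of 2**level
    steps and on the coarse grid below it, driven by the SAME increments."""
    n_fine = 2 ** level
    dt = T / n_fine
    dw = rng.standard_normal((n_paths, n_fine)) * np.sqrt(dt)
    x_fine = np.full(n_paths, 1.0)
    for k in range(n_fine):
        x_fine = x_fine * (1.0 + mu * dt + sig * dw[:, k])
    if level == 0:
        return x_fine, np.zeros(n_paths)
    x_coarse = np.full(n_paths, 1.0)
    dw_coarse = dw[:, 0::2] + dw[:, 1::2]    # sum pairs of fine increments
    for k in range(n_fine // 2):
        x_coarse = x_coarse * (1.0 + mu * 2.0 * dt + sig * dw_coarse[:, k])
    return x_fine, x_coarse

def mlmc_mean(L, n_per_level, T, mu, sig, rng):
    """Telescoping multilevel estimator of E[X_T]:
    E[P_L] = E[P_0] + sum over levels of E[P_l - P_{l-1}]."""
    est = 0.0
    for level in range(L + 1):
        x_fine, x_coarse = euler_gbm_pair(n_per_level, level, T, mu, sig, rng)
        est += float((x_fine - x_coarse).mean())
    return est
```

The paper's per-level importance sampling would reweight each level's correction term separately; here every level uses the nominal measure.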

9.
The possibility of applying probability-theoretic methods to a deterministic procedure for estimating the error of evaluating multiple integrals by the quasi-Monte Carlo method is considered. The existing methods for estimating this error are nonconstructive. The well-known Koksma-Hlawka inequality contains the variation of the integrand as a constant, and the problem of calculating this variation is more difficult than the original integration problem. Since the quasi-Monte Carlo method uses the arithmetic mean of the integrand values as an estimate of the integral, it is natural to expect that the distribution (in the number-theoretic sense) of the remainder in the approximate integration procedure obeys the normal law. An additional difficulty, however, is that, from the probability-theoretic point of view, quasi-random points are dependent, which impedes the numerical estimation of the second moment of the remainder. An approach to estimating the second moment of the error is proposed, based on the results of the theory of random cubature formulas obtained by the authors. Numerical examples are given, which show that the proposed error estimation method has great potential.
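For contrast, the simplest randomized device for a computable QMC error estimate is a Cranley-Patterson shift: independent random shifts of one low-discrepancy point set yield unbiased replicate estimates whose spread estimates the second moment of the error (a generic sketch, not the random-cubature theory of the paper; the lattice and integrand are my choices):

```python
import numpy as np

def shifted_lattice_estimates(f, n, gen_vector, n_shifts, rng):
    """Randomly shifted rank-1 lattice rule. Each shift gives an unbiased
    estimate of the integral of f over [0,1)^d; the replicate standard
    deviation is a computable stand-in for the non-constructive
    Koksma-Hlawka bound."""
    i = np.arange(n)[:, None]
    base = (i * np.asarray(gen_vector)[None, :] / n) % 1.0
    ests = np.array([f((base + rng.random(len(gen_vector))) % 1.0).mean()
                     for _ in range(n_shifts)])
    return float(ests.mean()), float(ests.std(ddof=1))
```

The shifting restores independence across replicates, sidestepping the dependence of quasi-random points that the abstract identifies as the obstacle.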

10.
We investigate the structure of a large precision matrix in Gaussian graphical models by decomposing it into a low-rank component and a remainder part with a sparse precision matrix. Based on the decomposition, we propose to estimate the large precision matrix by inverting a principal orthogonal decomposition (IPOD). The IPOD approach has appealing practical interpretations in conditional graphical models given the low-rank component, and it connects to Gaussian graphical models with latent variables. Specifically, we show that the low-rank component in the decomposition of the large precision matrix can be viewed as the contribution from the latent variables in a Gaussian graphical model. Compared with existing approaches for latent variable graphical models, IPOD is conveniently feasible in practice, where only inverting a low-dimensional matrix is required. To identify the number of latent variables, which is an objective of interest in its own right, we investigate and justify an approach that examines the ratios of adjacent eigenvalues of the sample covariance matrix. Theoretical properties, numerical examples, and a real data application demonstrate the merits of the IPOD approach in its convenience, performance, and interpretability.
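The eigenvalue-ratio device for choosing the number of latent variables is easy to state concretely (a plain version; any thresholds or refinements in the paper are omitted):

```python
import numpy as np

def n_latent_by_eigen_ratio(X, k_max):
    """Estimate the number of latent factors as the index maximizing the
    ratio of adjacent eigenvalues of the sample covariance matrix."""
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
    ratios = eigvals[:k_max] / eigvals[1:k_max + 1]
    return int(np.argmax(ratios)) + 1
```

A sharp drop after the k-th eigenvalue produces a spike in the k-th ratio, so the argmax recovers the factor count when the low-rank signal dominates the noise.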

11.
We investigate in this work a recently proposed diagrammatic quantum Monte Carlo method—the inchworm Monte Carlo method—for open quantum systems. We establish its validity rigorously based on resummation of Dyson series. Moreover, we introduce an integro-differential equation formulation for open quantum systems, which illuminates the mathematical structure of the inchworm algorithm. This new formulation leads to an improvement of the inchworm algorithm by introducing classical deterministic time-integration schemes. The numerical method is validated by applications to the spin-boson model. © 2020 Wiley Periodicals, Inc.

12.
Optimization, 2012, 61(5): 681-694
As global or combinatorial optimization problems are not effectively tractable by means of deterministic techniques, Monte Carlo methods are used in practice for obtaining "good" approximations to the optimum. In order to assess the accuracy achieved after a sample of finite size, the Bayesian nonparametric approach is proposed as a suitable framework, and the theoretical as well as computational implications of prior distributions in the class of neutral-to-the-right distributions are examined. The feasibility of the approach relative to particular Monte Carlo procedures is finally illustrated both for the global optimization problem and the 0-1 programming problem.

13.
We introduce a class of spatiotemporal models for Gaussian areal data. These models assume a latent random field process that evolves through time with random field convolutions; the convolving fields follow proper Gaussian Markov random field (PGMRF) processes. At each time, the latent random field process is linearly related to observations through an observational equation with errors that also follow a PGMRF. The use of PGMRF errors brings modeling and computational advantages. With respect to modeling, it allows more flexible model structures such as different but interacting temporal trends for each region, as well as distinct temporal gradients for each region. Computationally, building upon the fact that PGMRF errors have proper density functions, we have developed an efficient Bayesian estimation procedure based on Markov chain Monte Carlo with an embedded forward information filter backward sampler (FIFBS) algorithm. We show that, when compared with the traditional one-at-a-time Gibbs sampler, our novel FIFBS-based algorithm explores the posterior distribution much more efficiently. Finally, we have developed a simulation-based conditional Bayes factor suitable for the comparison of nonnested spatiotemporal models. An analysis of the number of homicides in Rio de Janeiro State illustrates the power of the proposed spatiotemporal framework.

Supplemental materials for this article are available on the journal's webpage.
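The FIFBS ingredient builds on the standard forward-filtering backward-sampling recursion; for a scalar linear-Gaussian state-space model it looks like this (a textbook sketch, not the authors' PGMRF-specific filter):

```python
import numpy as np

def ffbs(y, phi, q, r, rng):
    """Forward-filtering backward-sampling for the scalar model
        x_t = phi * x_{t-1} + N(0, q),   y_t = x_t + N(0, r),
    with stationary prior x_0 ~ N(0, q / (1 - phi^2)).
    Returns one joint posterior draw of the state path given y."""
    T = len(y)
    m = np.empty(T)                          # filtered means
    P = np.empty(T)                          # filtered variances
    mp, Pp = 0.0, q / (1.0 - phi ** 2)       # one-step-ahead predictive
    for t in range(T):                       # forward Kalman filter
        K = Pp / (Pp + r)
        m[t] = mp + K * (y[t] - mp)
        P[t] = (1.0 - K) * Pp
        mp, Pp = phi * m[t], phi ** 2 * P[t] + q
    x = np.empty(T)                          # backward sampling pass
    x[-1] = rng.normal(m[-1], np.sqrt(P[-1]))
    for t in range(T - 2, -1, -1):
        J = phi * P[t] / (phi ** 2 * P[t] + q)
        mean = m[t] + J * (x[t + 1] - phi * m[t])
        var = P[t] - J * phi * P[t]
        x[t] = rng.normal(mean, np.sqrt(var))
    return x
```

Sampling the whole state path in one block, rather than one state at a time, is what makes this kind of move mix faster than the one-at-a-time Gibbs sampler the article compares against.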

14.
The Hooke and Jeeves (HJ) algorithm is a pattern search procedure widely used to optimize nonlinear functions that are not necessarily continuous or differentiable. The algorithm repeatedly performs two types of search routines: an exploratory search and a pattern search. The HJ algorithm requires deterministic evaluation of the function being optimized. In this paper we consider situations where the objective function is stochastic and can be evaluated only through Monte Carlo simulation. To overcome the expense of function evaluations by Monte Carlo simulation, a likelihood ratio performance extrapolation (LRPE) technique is used: we extrapolate the performance measure for different values of the decision parameters while simulating a single sample path from the underlying system. Our modified Hooke and Jeeves algorithm thus uses likelihood ratio performance extrapolation for simulation optimization. Computational results are provided to demonstrate the performance of the proposed modified HJ algorithm.
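For reference, the deterministic HJ skeleton the authors start from can be sketched compactly (a basic variant; the paper replaces the exact function evaluations below with LRPE-extrapolated simulation estimates):

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
    """Basic deterministic Hooke-Jeeves pattern search (minimization)."""
    def explore(base, fx, h):
        # Exploratory search: try +/- h along each coordinate, keep improvements.
        x = base.copy()
        for i in range(len(x)):
            for d in (h, -h):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    x = np.asarray(x0, dtype=float)
    fx = f(x)
    h = step
    for _ in range(max_iter):
        if h < tol:
            break
        xn, fn = explore(x, fx, h)
        if fn < fx:
            # Pattern move: jump further along the improving direction.
            xp = xn + (xn - x)
            fp = f(xp)
            x, fx = (xp, fp) if fp < fn else (xn, fn)
        else:
            h *= shrink      # no improvement: refine the mesh
    return x, fx
```

Every decision above hinges on comparing function values, which is exactly why noisy Monte Carlo evaluations break the plain algorithm and motivate the extrapolation technique.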

15.
Recently proposed computationally efficient Markov chain Monte Carlo (MCMC) and Monte Carlo expectation–maximization (EM) methods for estimating covariance parameters from lattice data rely on successive imputations of values on an embedding lattice that is at least two times larger in each dimension. These methods can be considered exact in some sense, but we demonstrate that using such a large number of imputed values leads to slowly converging Markov chains and EM algorithms. We propose instead the use of a discrete spectral approximation to allow for the implementation of these methods on smaller embedding lattices. While our methods are approximate, our examples indicate that the error introduced by this approximation is small compared to the Monte Carlo errors present in long Markov chains or many iterations of Monte Carlo EM algorithms. Our results are demonstrated in simulation studies, as well as in numerical studies that explore both increasing domain and fixed domain asymptotics. We compare the exact methods to our approximate methods on a large satellite dataset, and show that the approximate methods are also faster to compute, especially when the aliased spectral density is modeled directly. Supplementary materials for this article are available online.

16.
Feature screening plays an important role in ultrahigh dimensional data analysis. This paper is concerned with conditional feature screening when one is interested in detecting the association between the response and ultrahigh dimensional predictors (e.g., genetic markers) given a low-dimensional exposure variable (such as clinical variables or environmental variables). To this end, we first propose a new index to measure conditional independence, and further develop a conditional screening procedure based on the newly proposed index. We systematically study the theoretical properties of the proposed procedure and establish the sure screening and ranking consistency properties under some very mild conditions. The newly proposed screening procedure enjoys some appealing properties. (a) It is model-free in that its implementation does not require a specification of the model structure; (b) it is robust to heavy-tailed distributions or outliers in both directions of response and predictors; and (c) it can deal with both feature screening and conditional screening in a unified way. We study the finite sample performance of the proposed procedure by Monte Carlo simulations and further illustrate the proposed method through two real data examples.
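The paper's conditional-independence index is not reproduced here; as a simple stand-in, the conditioning idea can be illustrated by residualizing the response and every predictor on the exposure and ranking by absolute residual correlation (a linear sketch that lacks the model-free robustness of the proposed index):

```python
import numpy as np

def conditional_screen(X, y, z, top_k):
    """Rank predictors by |corr| between residuals after regressing out the
    low-dimensional exposure z from both y and each column of X."""
    Z = np.column_stack([np.ones_like(z), z])
    def residualize(v):
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta
    ry = residualize(y)
    scores = np.array([abs(np.corrcoef(residualize(X[:, j]), ry)[0, 1])
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:top_k]
```

Without the residualization step, a predictor correlated only with the exposure would be spuriously ranked high; screening conditionally on z removes that artifact.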

17.
Many optimal experimental designs depend on one or more unknown model parameters. In such cases, it is common to use Bayesian optimal design procedures to seek designs that perform well over an entire prior distribution of the unknown model parameter(s). Generally, Bayesian optimal design procedures are viewed as computationally intensive. This is because they require numerical integration techniques to approximate the Bayesian optimality criterion at hand. The most common numerical integration technique involves pseudo Monte Carlo draws from the prior distribution(s). For a good approximation of the Bayesian optimality criterion, a large number of pseudo Monte Carlo draws is required. This results in long computation times. As an alternative to the pseudo Monte Carlo approach, we propose using computationally efficient Gaussian quadrature techniques. Since, for normal prior distributions, suitable quadrature techniques have already been used in the context of optimal experimental design, we focus on quadrature techniques for nonnormal prior distributions. Such prior distributions are appropriate for variance components, correlation coefficients, and any other parameters that are strictly positive or have upper and lower bounds. In this article, we demonstrate the added value of the quadrature techniques we advocate by means of the Bayesian D-optimality criterion in the context of split-plot experiments, but we want to stress that the techniques can be applied to other optimality criteria and other types of experimental designs as well. Supplementary materials for this article are available online.
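For a strictly positive parameter, quadrature nodes matched to the prior's weight function replace the pseudo Monte Carlo prior draws; here is a generic sketch for an exponential prior using Gauss-Laguerre nodes (my toy setup, not the split-plot design code of the article):

```python
import numpy as np

def prior_expectation_laguerre(g, n_nodes=10):
    """E[g(theta)] for theta ~ Exponential(1), i.e. the integral of
    g(x) * exp(-x) over [0, inf), via Gauss-Laguerre quadrature:
    n_nodes evaluations instead of many pseudo Monte Carlo prior draws."""
    nodes, weights = np.polynomial.laguerre.laggauss(n_nodes)
    return float((weights * g(nodes)).sum())
```

Ten nodes integrate polynomial criteria of degree up to 19 exactly, whereas a Monte Carlo average of the same accuracy would need an enormous number of draws.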

18.
19.
When modelling the behaviour of horticultural products, which exhibit large sources of biological variation, we often run into the issue of non-Gaussian distributed model parameters. This work presents an algorithm to reproduce such correlated non-Gaussian model parameters for use with Monte Carlo simulations. The algorithm works around the problem of non-Gaussian distributions by transforming the observed non-Gaussian probability distributions with a proposed SKN-distribution function before applying the covariance decomposition algorithm to generate Gaussian random co-varying parameter sets. The proposed SKN-distribution function is based on the standard Gaussian distribution function and can exhibit different degrees of both skewness and kurtosis. The technique is demonstrated in a case study on modelling the ripening of tomato fruit, evaluating the propagation of biological variation over time.
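The covariance-decomposition step can be sketched generically: draw correlated standard normals via a Cholesky factor, then push each margin through a monotone skewing transform (here exp, giving lognormal margins; in the paper the SKN transform plays this role):

```python
import numpy as np

def correlated_skewed_samples(n, corr, rng):
    """Generate correlated non-Gaussian (here lognormal, right-skewed)
    samples: Cholesky-correlated N(0,1) variates pushed through exp()."""
    L = np.linalg.cholesky(np.asarray(corr))
    z = rng.standard_normal((n, len(corr))) @ L.T   # correlated Gaussians
    return np.exp(z)                                 # monotone skewing transform
```

Because the transform is monotone and applied marginally, the dependence structure imposed on the Gaussian layer carries over to the skewed parameter sets.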

20.
The curse of high dimensionality arises more and more frequently in statistics, and many techniques have been developed to address this challenge for classification problems. We propose a novel feature screening procedure for dichotomous response data. The new method can be implemented as easily as the t-test marginal screening approach, yet the proposed procedure is free of subexponential tail-probability conditions and moment requirements and is not restricted to a specific model structure. We prove that our method possesses the sure screening property, illustrate the effect of screening by Monte Carlo simulation, and apply it to a real data example.
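The t-test marginal screener that serves as the ease-of-implementation benchmark is a one-liner per feature (a generic sketch; the proposed condition-free procedure itself is not reproduced):

```python
import numpy as np

def t_test_screen(X, y, top_k):
    """Rank features for a binary response by the absolute Welch two-sample
    t-statistic between the two classes."""
    g0, g1 = X[y == 0], X[y == 1]
    se = np.sqrt(g0.var(axis=0, ddof=1) / len(g0) +
                 g1.var(axis=0, ddof=1) / len(g1))
    t = (g1.mean(axis=0) - g0.mean(axis=0)) / se
    return np.argsort(np.abs(t))[::-1][:top_k]
```

The t-statistic relies on moments and light tails, which is exactly the dependence the paper's condition-free procedure is designed to avoid.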


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号