Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
In dynamic linear models (DLMs) with unknown fixed parameters, a standard Markov chain Monte Carlo (MCMC) sampling strategy is to alternate between sampling the latent states conditional on the fixed parameters and sampling the fixed parameters conditional on the latent states. In some regions of the parameter space, this standard data augmentation (DA) algorithm can be inefficient. To improve efficiency, we apply the interweaving strategies of Yu and Meng to DLMs. To this end, we introduce three novel alternative DAs for DLMs: the scaled errors, wrongly scaled errors, and wrongly scaled disturbances. Together with the latent states and the less well known scaled disturbances, this yields five unique DAs to employ in MCMC algorithms. Each DA implies a unique MCMC sampling strategy, and these can be combined into interweaving and alternating strategies that improve MCMC efficiency. We assess these strategies using the local level model and demonstrate that several of them improve efficiency relative to the standard approach, with the most efficient strategy interweaving the scaled errors and scaled disturbances. Supplementary materials are available online for this article.
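For context on the baseline these strategies improve upon, the following is a minimal NumPy sketch of the standard DA sampler for the local level model: forward-filter backward-sample (FFBS) for the states, then conjugate inverse-gamma draws for the variances. The priors and hyperparameters are illustrative assumptions; the interweaving strategies wrap additional parameter re-draws under the alternative DAs around these two steps.

```python
import numpy as np

def ffbs(y, V, W, m0=0.0, C0=1e7, rng=np.random.default_rng()):
    """Forward-filter, backward-sample the states of the local level model
    y_t = theta_t + v_t, theta_t = theta_{t-1} + w_t."""
    n = len(y)
    m, C, a, R = (np.zeros(n) for _ in range(4))
    for t in range(n):                              # Kalman filter forward pass
        a[t] = m[t - 1] if t > 0 else m0
        R[t] = (C[t - 1] if t > 0 else C0) + W
        K = R[t] / (R[t] + V)                       # Kalman gain
        m[t] = a[t] + K * (y[t] - a[t])
        C[t] = R[t] * (1 - K)
    theta = np.zeros(n)
    theta[-1] = rng.normal(m[-1], np.sqrt(C[-1]))
    for t in range(n - 2, -1, -1):                  # backward sampling pass
        B = C[t] / (C[t] + W)
        theta[t] = rng.normal(m[t] + B * (theta[t + 1] - m[t]),
                              np.sqrt(C[t] * (1 - B)))
    return theta

def standard_da(y, n_iter=2000, rng=np.random.default_rng(0)):
    """Alternate states | (V, W) and (V, W) | states, with IG(1, 1) priors
    (an illustrative choice)."""
    V, W, draws = 1.0, 1.0, []
    for _ in range(n_iter):
        theta = ffbs(y, V, W, rng=rng)
        V = 1 / rng.gamma(1 + len(y) / 2, 1 / (1 + 0.5 * np.sum((y - theta) ** 2)))
        W = 1 / rng.gamma(1 + (len(y) - 1) / 2,
                          1 / (1 + 0.5 * np.sum(np.diff(theta) ** 2)))
        draws.append((V, W))
    return np.array(draws)
```

An interweaving step would, after the state draw, transform to an alternative DA (e.g., the scaled disturbances) and re-draw the parameters conditional on that augmentation before transforming back.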

2.
To predict or control the response of a complicated numerical model that involves a large number of input variables but is mainly affected by only a subset of them, it is necessary to screen for those active variables. This paper proposes a new space-filling sampling strategy for screening parameters based on Morris' elementary effect method. The starting points of the sampling trajectories are selected using the maximin principle of Latin hypercube sampling. The remaining points of each trajectory are determined by a one-factor-at-a-time design. Unlike other sampling strategies, which choose the factor sequence randomly in the one-factor-at-a-time design, the proposed method determines the sequence by a deterministic algorithm that sequentially maximizes the Euclidean distance among sampling trajectories. A new efficient algorithm transforms the distance-maximization problem into a coordinate-sorting problem, which greatly reduces computational cost. After the elementary effects are computed from the sampling points, a detailed criterion is presented for selecting the active factors. Two mathematical examples and an engineering problem validate the proposed sampling method, demonstrating its superiority in computational efficiency, space-filling performance, and screening efficiency.
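To make the elementary-effect machinery concrete, here is a minimal Python sketch of Morris screening with randomly ordered one-factor-at-a-time trajectories; the paper's contribution replaces the random start points and random factor order with a maximin-LHS start and a deterministic, distance-maximizing order. The test function is a hypothetical stand-in.

```python
import numpy as np

def morris_trajectories(k, r, p=4, rng=np.random.default_rng(0)):
    """r random OAT trajectories on a p-level grid in [0, 1]^k."""
    delta = p / (2.0 * (p - 1))
    starts = np.arange(0, 1 - delta + 1e-9, 1 / (p - 1))  # levels allowing +delta
    trajs = []
    for _ in range(r):
        x = rng.choice(starts, size=k)
        order = rng.permutation(k)         # random here; deterministic in the paper
        pts = [x.copy()]
        for j in order:                    # change one factor at a time
            x = x.copy()
            x[j] += delta if x[j] + delta <= 1 else -delta
            pts.append(x)
        trajs.append((np.array(pts), order))
    return trajs

def morris_screen(f, trajs):
    """mu* (mean |elementary effect|) and sigma per factor."""
    k = len(trajs[0][1])
    ee = [[] for _ in range(k)]
    for pts, order in trajs:
        fy = np.array([f(p) for p in pts])
        for i, j in enumerate(order):      # step i changed factor j
            ee[j].append((fy[i + 1] - fy[i]) / (pts[i + 1, j] - pts[i, j]))
    ee = np.array(ee)
    return np.abs(ee).mean(axis=1), ee.std(axis=1)

# Example: x1 is active and nonlinear, x2 is nearly inert
f = lambda x: x[0] + 4 * x[1] ** 2 + 0.01 * x[2]
mu_star, sigma = morris_screen(f, morris_trajectories(k=3, r=20))
```

Large mu* flags an active factor; large sigma additionally signals nonlinearity or interactions.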

3.
Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel goal-oriented deep neural network (DNN) surrogate approach to substantially reduce the computational burden of RTO. In particular, we propose to draw the training points for the DNN surrogate from a local approximate posterior distribution, yielding a flexible and efficient sampling algorithm that converges to the direct RTO approach. We present a Bayesian inverse problem governed by elliptic PDEs to demonstrate the accuracy and efficiency of our DNN-RTO approach, which shows that DNN-RTO can significantly outperform traditional RTO.

4.
Summary In this paper, we investigate sampling on three successive occasions with a view to examining the efficiency robustness of the best linear unbiased estimator (BLUE) vis-à-vis certain other potentially conceivable estimators when the usual correlation model breaks down. We infer that the BLUE is, by and large, an efficiency-robust estimator in the face of unforeseen deviations from the usual correlation model.

5.

Many methods have been developed for analyzing survival data, which are commonly right-censored. These methods, however, are challenged by complex features of the data collection as well as of the data themselves. In particular, biased samples caused by left truncation (or length-biased sampling) and measurement error often accompany survival analysis. While such data frequently arise in practice, little work has been available to address these features simultaneously. In this paper, we explore valid inference methods for handling left-truncated and right-censored survival data with measurement error under the widely used Cox model. We first exploit a flexible estimator of the survival model parameters that does not require specification of the baseline hazard function. To improve efficiency, we further develop an augmented nonparametric maximum likelihood estimator. We establish asymptotic results and examine efficiency and robustness issues for the proposed estimators. The proposed methods enjoy the appealing feature that the distributions of the covariates and of the truncation times are left unspecified. Numerical studies are reported to assess the finite-sample performance of the proposed methods.

6.
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed nonlinear diffusion processes. The sampling algorithms developed herein use a deterministic approximation of the posterior distribution over paths as the proposal distribution for a mixture of an independence sampler and a random-walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a sine drift. The new algorithms' accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithms are accurate except in the presence of large observation errors and low observation densities, which lead to a multimodal structure in the posterior distribution over paths. More importantly, the variational-approximation-assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
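The proposal structure — a deterministic approximation used inside a mixture of independence and random-walk moves — can be sketched generically. Below, a hypothetical double-well target and a Gaussian stand-in for the variational path approximation give a minimal one-dimensional illustration, not the paper's diffusion-bridge construction.

```python
import numpy as np

def norm_logpdf(x, m, s):
    return -0.5 * ((x - m) / s) ** 2 - np.log(s) - 0.5 * np.log(2 * np.pi)

def mixture_mh(log_target, q_mean, q_std, n, x0=0.0, p_ind=0.5, rw_scale=0.5,
               rng=np.random.default_rng(0)):
    """MH mixing an independence proposal from N(q_mean, q_std^2) with a
    Gaussian random walk; q plays the role of the deterministic approximation."""
    x, lp, out = x0, log_target(x0), np.empty(n)
    for i in range(n):
        if rng.random() < p_ind:              # independence move
            y = rng.normal(q_mean, q_std)
            la = (log_target(y) - lp
                  + norm_logpdf(x, q_mean, q_std) - norm_logpdf(y, q_mean, q_std))
        else:                                 # random-walk move
            y = x + rw_scale * rng.normal()
            la = log_target(y) - lp
        if np.log(rng.random()) < la:         # Metropolis-Hastings accept/reject
            x, lp = y, log_target(y)
        out[i] = x
    return out

# Hypothetical double-well target (cf. the double-well drift example)
samples = mixture_mh(lambda x: -(x ** 2 - 1.0) ** 2 / 0.25,
                     q_mean=1.0, q_std=0.4, n=5000)
```

The independence moves exploit the approximation where it is good, while the random-walk moves guard against regions the approximation misses.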

7.
We consider Jackson queueing networks with finite buffer constraints (JQNs) and analyze the efficiency of sampling from their stationary distribution. In the context of exact sampling, the monotonicity structure of JQNs ensures that this efficiency is of the order of the coupling time (or meeting time) of two extremal sample paths. In the context of approximate sampling, it is given by the mixing time. Under a condition on the drift of the stochastic process underlying a JQN, which we call hyper-stability, we show in our main result that the coupling time is polynomial in both the number of queues and the buffer sizes. We then use this result to show that the mixing time of JQNs behaves similarly up to a given precision threshold. Our proof relies on a recursive formula relating the coupling times of trajectories that start from network states at distance one, and it can be used to analyze the coupling and mixing times of other Markovian networks, provided that they are monotone. An illustrative example is given in the context of JQNs with blocking mechanisms.
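A minimal sketch of the coupling-time idea for a single monotone queue: two extremal copies driven by the same randomness must sandwich every other trajectory, so their meeting time bounds the coupling time. The paper couples all queues of the network jointly; the uniformized M/M/1/K chain here is an illustrative assumption.

```python
import numpy as np

def coupling_time(K, lam, mu, rng):
    """Time for two copies of a (uniformized) M/M/1/K queue, started from the
    empty and the full state and driven by common uniforms, to meet."""
    lo, hi, t = 0, K, 0
    p = lam / (lam + mu)                 # prob. the next event is an arrival
    while lo != hi:
        u = rng.random()                 # common randomness -> monotone coupling
        lo = min(lo + 1, K) if u < p else max(lo - 1, 0)
        hi = min(hi + 1, K) if u < p else max(hi - 1, 0)
        t += 1
    return t

# Average coupling time grows with the buffer size K
rng = np.random.default_rng(0)
print([np.mean([coupling_time(K, 0.8, 1.0, rng) for _ in range(200)])
       for K in (5, 10, 20)])
```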

8.
In this paper we extend the best choice of subsample size m in two-stage sampling, suggested by Mohammad (1986), to three-stage sampling for the cases of known and of unknown cost and variance ratios. We find the subsample sizes m and k that ensure a relative efficiency of more than 90%. We also observe that the choice of the three-stage subsample sizes depends on the design parameters used in two-stage sampling.

9.
There are various importance sampling schemes for estimating rare-event probabilities in Markovian systems such as Markovian reliability models and Jackson networks. In this work, we present a general state-dependent importance sampling method that partitions the state space and applies the cross-entropy method to each partition. We investigate two versions of our algorithm and apply them to several examples of reliability and queueing models. In all these examples we compare our method with other importance sampling schemes. The performance of the importance sampling schemes is measured by the relative error of the estimator and by the efficiency of the algorithm. The experimental results show considerable improvements in both the running time of the algorithm and the variance of the estimator.
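A single-partition sketch of cross-entropy importance sampling on a toy rare event, P(X > gamma) for X ~ Exp(1) (the paper applies the update within each partition of the state space of a Markovian model); the proposal family and target level here are illustrative.

```python
import numpy as np

def ce_is(gamma=20.0, n=10_000, rho=0.1, rng=np.random.default_rng(0)):
    """Cross-entropy importance sampling for P(X > gamma), X ~ Exp(1),
    with Exp(mean v) proposals; the CE update is the LR-weighted elite mean."""
    v = 1.0
    for _ in range(50):                        # CE iterations
        x = rng.exponential(v, n)
        level = min(gamma, np.quantile(x, 1 - rho))
        elite = x[x >= level]                  # best rho-fraction of samples
        w = v * np.exp(-elite + elite / v)     # likelihood ratio f / g_v
        v = np.sum(w * elite) / np.sum(w)      # tilt proposal toward the event
        if level >= gamma:
            break
    x = rng.exponential(v, n)                  # final estimation run
    w = v * np.exp(-x + x / v)
    est = np.mean(w * (x > gamma))
    rel_err = np.std(w * (x > gamma)) / (est * np.sqrt(n))
    return est, rel_err                        # compare est with exp(-gamma)
```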

10.
Case-cohort sampling is a commonly used and efficient method for studying large cohorts. In many situations, some covariates are easily measured on all cohort subjects, and surrogate measurements of the expensive covariates may also be observed. In this paper, to make full use of the covariate data collected outside the case-cohort sample, we propose a class of weighted estimators with general time-varying weights for the additive hazards model, and the estimators are shown to be consistent and asymptotically normal. We also identify the estimator within this class that maximizes efficiency, and simulation studies show that the efficiency gains of the proposed estimator over existing ones can be substantial in practical situations. A real example is provided.

11.
Epidemiologic studies use outcome-dependent sampling (ODS) schemes in which, in addition to a simple random sample, a number of supplementary samples are collected based on the outcome variable. The ODS scheme is a cost-effective way to improve study efficiency. We develop a maximum semiparametric empirical likelihood estimator (MSELE) for data from a two-stage ODS scheme under the assumption that, given the covariates, the outcome follows a general linear model. The information in both the validation and nonvalidation samples is used. Moreover, we prove the asymptotic properties of the proposed MSELE.

12.
In this paper, we consider a probabilistic model that represents some general dependent production processes and present a unified approach to designing attribute sampling plans for monitoring the ongoing production process. This model includes the classical i.i.d. model, the independent model, the Markov-dependent model, and the previous-sum-dependent model, to mention a few. Some important properties of this model are established. We derive recurrence relations for the probability distribution of the sum of n consecutive characteristics observed from the process. Using these recurrence relations, we present efficient algorithms for designing optimal single and double sampling plans for attributes for monitoring the ongoing production process. Our algorithmic approach, which exploits the recurrence relations, yields a direct and exact method, unlike many approximate methods adopted in the literature. Several interesting examples concerning specific models are discussed, and a few tables for some special cases are also presented. It is demonstrated that the optimal double sampling plans lead to about a 42% reduction in average sample number over the single sampling plans for process monitoring. AMS 2000 Subject Classifications: Primary 62P30; Secondary 62E15, 65C60.
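As a sketch of the recurrence idea for one concrete member of the model class (a two-state Markov-dependent process; the transition matrix and plan parameters below are illustrative), the distribution of the sum of n consecutive characteristics, and hence the acceptance probability of a single sampling plan (n, c), follows by iterating f_t(s, j) = sum_i f_{t-1}(s - j, i) P[i, j]:

```python
import numpy as np

def sum_distribution(n, p0, P):
    """P(S_n = s) for S_n = X_1 + ... + X_n, where (X_t) is a {0,1}-valued
    Markov chain with initial law p0 and transition matrix P."""
    f = np.zeros((n + 1, 2))           # f[s, j] = P(S_t = s, X_t = j)
    f[0, 0], f[1, 1] = p0[0], p0[1]
    for _ in range(n - 1):
        g = np.zeros_like(f)
        g[:, 0] = f @ P[:, 0]          # next item conforming: S unchanged
        g[1:, 1] = f[:-1] @ P[:, 1]    # next item nonconforming: S + 1
        f = g
    return f.sum(axis=1)

# Acceptance probability of the single sampling plan (n = 50, c = 2)
P = np.array([[0.97, 0.03],
              [0.60, 0.40]])           # strong positive dependence
probs = sum_distribution(50, p0=[0.95, 0.05], P=P)
print("P(accept) =", probs[:3].sum())  # P(S_50 <= c)
```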

13.
Summary  Sampling from probability density functions (pdfs) has become increasingly important in many areas of applied science and has therefore received considerable attention. Many of the sampling procedures proposed allow only approximate or asymptotic sampling, while very few methods allow exact sampling. Direct sampling of standard pdfs is feasible, but sampling of much more complicated pdfs is often required. Rejection sampling allows one to sample exactly from univariate pdfs, but has the major drawback of needing a case-by-case derivation of a comparison function, which often proves a tedious task whose outcome dramatically affects the efficiency of the sampling procedure. In this paper, we restrict ourselves to a pdf that is proportional to a product of standard distributions. From there, we show that an automated selection of both the comparison function and the upper bound is possible. Moreover, this choice is made so as to optimize the sampling efficiency among a range of potential solutions. Finally, the method is illustrated on a few examples.
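A minimal sketch of exact rejection sampling from a product of two standard densities: one factor serves as the proposal, the other (bounded by its supremum) as the comparison function, and the proposal factor is chosen automatically by comparing empirical acceptance rates. The target, factor choice, and the hard-coded supremum location (x = 0, where both factors peak) are illustrative assumptions; the paper automates the bound selection in general.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical target: p(x) ∝ norm.pdf(x; 0, 1) * expon.pdf(x; scale=2)
pdfs     = [stats.norm(0, 1).pdf, stats.expon(scale=2).pdf]
samplers = [lambda: rng.normal(0, 1), lambda: rng.exponential(2)]
sups     = [pdf(0.0) for pdf in pdfs]      # both factors peak at x = 0 here

def acceptance_rate(i, n=20_000):
    """Empirical acceptance rate with factor i as proposal and the other
    factor, bounded by its sup, as the comparison function."""
    j = 1 - i
    x = np.array([samplers[i]() for _ in range(n)])
    return np.mean(rng.random(n) * sups[j] <= pdfs[j](x))

best = max((0, 1), key=acceptance_rate)    # automated choice of proposal factor

def sample(n):
    """Exact rejection sampling: accepted draws follow p(x) ∝ g_i(x) g_j(x)."""
    j, out = 1 - best, []
    while len(out) < n:
        x = samplers[best]()
        if rng.random() * sups[j] <= pdfs[j](x):
            out.append(x)
    return np.array(out)
```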

14.
Mei, Yu; Chen, Zhiping; Liu, Jia; Ji, Bingbing. Journal of Global Optimization (2022) 83(3): 585-613

We study the multi-stage portfolio selection problem in which the utility function of the investor is ambiguous. The ambiguity is characterized by dynamic stochastic dominance constraints, which are able to capture the dynamics of the random return sequence during the investment process. We propose a multi-stage dynamic stochastic dominance constrained portfolio selection model, and use a mixed normal distribution with time-varying weights together with the K-means clustering technique to generate a scenario tree for the transformation of the proposed model. Based on the scenario tree representation, we derive two linear programming approximation problems, using a sampling approach or duality theory, which provide an upper bound approximation and a lower bound approximation for the original nonconvex problem. The upper bound is asymptotically tight as the number of samples tends to infinity. Numerical results illustrate the practicality and efficiency of the proposed model and solution techniques.

15.
For structural systems with both epistemic and aleatory uncertainties, the effect of epistemic uncertainty on the failure probability is measured by variance-based sensitivity analysis, which generally requires a time-consuming "triple-loop" crude sampling procedure. Thus, the Kriging method is employed to avoid the complex sampling procedure and improve computational efficiency. Using the Kriging predictor model, the conditional expectation of the failure probability given the epistemic uncertainty can be calculated efficiently. Compared with Sobol's method, the proposed one ensures reasonable accuracy of the results but at lower computational cost. Three examples demonstrate the reasonableness and efficiency of the proposed method.
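A compact sketch of the "surrogate replaces the inner loops" idea using scikit-learn's Kriging (Gaussian process) regressor; the limit-state function, distributions, and sample sizes are hypothetical placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x1, x2, theta):                  # hypothetical limit state; failure: g < 0
    return 2.5 + theta - x1 ** 2 - x2

# Fit the Kriging surrogate once, on a small design over (x, theta)
X = np.column_stack([rng.normal(0, 1, 150), rng.normal(0, 1, 150),
                     rng.uniform(0, 1, 150)])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              normalize_y=True).fit(X, g(*X.T))

def pf_given_theta(theta, n=4000):
    """Conditional failure probability over the aleatory x at fixed epistemic
    theta, evaluated on the cheap predictor instead of the true model."""
    x = rng.normal(0, 1, (n, 2))
    return np.mean(gp.predict(np.column_stack([x, np.full(n, theta)])) < 0)

# Outer loop over the epistemic variable: variance of Pf across theta
thetas = rng.uniform(0, 1, 40)
pf = np.array([pf_given_theta(t) for t in thetas])
print("E[Pf] =", pf.mean(), " Var over theta =", pf.var())
```

Every evaluation after the initial design hits the predictor, so the triple-loop cost collapses to a single batch of true-model runs.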

16.
In this work, we propose a smart way to couple importance sampling with multilevel Monte Carlo (MLMC). We advocate a per-level approach with as many importance sampling parameters as there are levels, which enables us to handle the levels independently. The search for parameters is carried out using sample average approximation, which basically consists in applying deterministic optimization techniques to a Monte Carlo approximation rather than resorting to stochastic approximation. Our innovative estimator leads to a robust and efficient procedure that reduces both the discretization error (the bias) and the variance for a given computational effort. In the setting of discretized diffusions, we prove that our estimator satisfies a strong law of large numbers and a central limit theorem with optimal limiting variance, in the sense that this is the variance achieved by the best importance sampling measure (among the class of changes we consider), which is, however, intractable. Finally, we illustrate the efficiency of our method on several numerical challenges from quantitative finance and show that it outperforms the standard MLMC estimator.
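For reference, here is the plain MLMC skeleton (geometric Brownian motion, Euler scheme, coupled fine/coarse paths) that the per-level importance sampling is layered on. The model, parameters, and per-level sample sizes are illustrative assumptions, and the importance-sampling layer itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def level_sample(l, n, T=1.0, x0=1.0, mu=0.05, sig=0.2):
    """Level-l MLMC correction Y_l = P_l - P_{l-1}: fine and coarse Euler paths
    of a GBM driven by the same Brownian increments (Y_0 = P_0)."""
    Mf = 2 ** l
    dW = rng.normal(0.0, np.sqrt(T / Mf), size=(n, Mf))
    xf = np.full(n, x0)
    for i in range(Mf):                            # fine path, Mf steps
        xf += mu * xf * (T / Mf) + sig * xf * dW[:, i]
    if l == 0:
        return xf
    xc = np.full(n, x0)
    dWc = dW[:, ::2] + dW[:, 1::2]                 # coarse increments from fine
    for i in range(Mf // 2):                       # coarse path, Mf/2 steps
        xc += mu * xc * (2 * T / Mf) + sig * xc * dWc[:, i]
    return xf - xc

# Telescoping sum E[P_L] = sum_l E[Y_l]; fewer samples on costlier levels
sizes = [200_000, 50_000, 12_000, 3_000, 800]
print(sum(level_sample(l, n).mean() for l, n in enumerate(sizes)))
# True value for comparison: E[X_T] = x0 * exp(mu * T) ≈ 1.0513
```

The paper's method would additionally apply a level-specific change of measure to the Gaussian increments of each Y_l, with each tilt parameter fitted by sample average approximation.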

17.
Symmetric global sensitivity analysis based on the symmetric decomposition of a model plays an important role in inference for high-dimensional complex models. Wang and Chen (2017) proposed a symmetric design to estimate the symmetric sensitivity indices; this design has high sampling efficiency and does not require analytical expressions for the symmetric decomposition terms. However, for a given number of runs, the generation of a symmetric design is highly random, so some designs have poor space-filling properties and collapse in low-dimensional projections. This paper proposes a symmetric Latin hypercube, which endows the symmetric design with a Latin hypercube structure, thereby maximizing the uniformity of one-dimensional projections while preserving the symmetry of the design. A construction method for symmetric Latin hypercubes is obtained by analyzing the structure of the design. Furthermore, an optimization algorithm is proposed to obtain symmetric Latin hypercube designs with optimal centered L2-discrepancy. A constructed example verifies the good properties of the resulting designs.
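A minimal construction sketch: a symmetric Latin hypercube pairs each run with its mirror image, and candidate designs can be ranked by their centered L2-discrepancy (Hickernell's CD2 formula below). The simple random-restart search stands in for the paper's optimization algorithm.

```python
import numpy as np

def symmetric_lhd(n, k, rng):
    """n x k symmetric Latin hypercube (n even): rows come in mirror pairs
    (level l in a column pairs with level n + 1 - l)."""
    half = np.empty((n // 2, k), dtype=int)
    for j in range(k):
        pairs = rng.permutation(n // 2) + 1      # one level from each pair
        flip = rng.random(n // 2) < 0.5          # random pair orientation
        half[:, j] = np.where(flip, n + 1 - pairs, pairs)
    levels = np.vstack([half, n + 1 - half])     # append mirrored rows
    return (levels - 0.5) / n                    # center levels in (0, 1)

def cd2(X):
    """Centered L2-discrepancy of a design X in [0, 1]^k."""
    n, k = X.shape
    d = np.abs(X - 0.5)
    t1 = np.prod(1 + 0.5 * d - 0.5 * d ** 2, axis=1).sum()
    prod = np.ones((n, n))
    for j in range(k):
        a = d[:, j]
        prod *= (1 + 0.5 * a[:, None] + 0.5 * a[None, :]
                 - 0.5 * np.abs(X[:, j][:, None] - X[:, j][None, :]))
    return np.sqrt((13 / 12) ** k - 2 * t1 / n + prod.sum() / n ** 2)

# Random-restart stand-in for the paper's CD2 optimization
rng = np.random.default_rng(0)
best = min((symmetric_lhd(12, 3, rng) for _ in range(500)), key=cd2)
```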

18.
We propose a new model for cluster analysis in a Bayesian nonparametric framework. Our model combines two ingredients: species sampling mixture models of Gaussian distributions on the one hand, and a deterministic clustering procedure (DBSCAN) on the other. Here, two observations from the underlying species sampling mixture model share the same cluster if the distance between the densities corresponding to their latent parameters is smaller than a threshold; this yields a random partition that is coarser than the one induced by the species sampling mixture. Since the procedure depends on the value of the threshold, we suggest a strategy for fixing it. In addition, we discuss the implementation and applications of the model; comparisons with more standard clustering algorithms are given as well. Supplementary materials for the article are available online.

19.
Bayesian approaches to prediction and the assessment of predictive uncertainty in generalized linear models are often based on averaging predictions over different models, and this requires methods for accounting for model uncertainty. When there are linear dependencies among potential predictor variables in a generalized linear model, existing Markov chain Monte Carlo algorithms for sampling from the posterior distribution on the model and parameter space in Bayesian variable selection problems may not work well. This article describes a sampling algorithm, based on the Swendsen-Wang algorithm for the Ising model, that works well when the predictors are far from orthogonality. In variable selection for generalized linear models we can index different models by a binary parameter vector, where each binary variable indicates whether or not a given predictor is included in the model. The posterior distribution on the model is then a distribution on this collection of binary strings, and by viewing it as a binary spatial field we apply a sampling scheme inspired by the Swendsen-Wang algorithm in order to sample from it. The algorithm we describe extends a similar algorithm for variable selection in linear models. The benefits of the algorithm are demonstrated for both real and simulated data.

20.
Data from most complex surveys are subject to selection bias and clustering due to the sampling design. Results developed for a random sample from a superpopulation model may not apply. Ignoring the survey sampling weights may yield biased estimators and erroneous confidence intervals. In this paper, we use the design-based approach to fit the proportional hazards (PH) model and formally prove the asymptotic normality of the sample maximum partial likelihood (SMPL) estimators under the PH model for both stochastically independent and clustered failure times. In the first case, we use the central limit theorem for martingales in the joint design-model space, which enables us to obtain results for a general multistage sampling design under mild and easily verifiable conditions. In the case of clustered failure times, we require asymptotic normality in the sampling design space directly, which holds for fewer sampling designs than in the first case. We also propose a variance estimator for the SMPL estimator. A key property of this variance estimator is that we do not have to specify the second-stage correlation model.
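A minimal sketch of design-weighted Cox fitting with the lifelines package; the data are simulated placeholders, and lifelines' robust sandwich variance is used as a stand-in for the paper's design-based variance estimator (which additionally avoids specifying the second-stage correlation model).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 800
z = rng.normal(size=n)
t_true = rng.exponential(1.0 / np.exp(0.5 * z))  # PH model, true beta = 0.5
cens = rng.exponential(2.0, n)                   # independent censoring
df = pd.DataFrame({
    "time": np.minimum(t_true, cens),
    "event": (t_true <= cens).astype(int),
    "z": z,
    "w": rng.uniform(0.5, 3.0, n),               # survey design weights
})

# Sample maximum (pseudo-)partial likelihood: each subject's contribution is
# weighted by its design weight; robust=True requests a sandwich variance
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        weights_col="w", robust=True)
cph.print_summary()
```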
