Similar documents
 20 similar documents found.
1.
The efficiency of discrete stochastic consistent estimators of the Monte Carlo method (the weighted uniform sampling estimator and the estimator with a correcting multiplier) is investigated. Confidence intervals and upper bounds on the variances are obtained, and the computational cost of the corresponding discrete stochastic numerical scheme is estimated.

2.
Abstract

This article proposes alternative methods for constructing estimators from accept-reject samples by incorporating the variables rejected by the algorithm. The resulting estimators are quick to compute, and turn out to be variations of importance sampling estimators, although their derivations are quite different. We show that these estimators are superior asymptotically to the classical accept-reject estimator, which ignores the rejected variables. In addition, we consider the issue of rescaling of estimators, a topic that has implications beyond accept-reject and importance sampling. We show how rescaling can improve an estimator and illustrate the domination of the standard importance sampling techniques in different setups.
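
For intuition, here is a minimal sketch contrasting the classical accept-reject estimator with a self-normalized importance sampling estimator that reuses every proposal, accepted or rejected. The target, proposal, envelope constant, and functional below are illustrative choices, not the specific estimators derived in the article.

```python
# Minimal sketch (not the article's exact estimator): compare the classical
# accept-reject estimate of E_f[h(X)] with a self-normalized importance
# sampling estimate that reuses every proposal, accepted or rejected.
import numpy as np

rng = np.random.default_rng(0)

def f(x):                      # target density: Beta(2, 2) on (0, 1)
    return 6.0 * x * (1.0 - x)

g = lambda x: np.ones_like(x)  # proposal density: Uniform(0, 1)
M = 1.5                        # envelope constant with f <= M * g
h = lambda x: x ** 2           # functional whose expectation we estimate

n = 100_000
x = rng.uniform(size=n)                    # proposals
u = rng.uniform(size=n)
accepted = u <= f(x) / (M * g(x))          # accept-reject step

# Classical estimator: ignore rejected proposals.
classical = h(x[accepted]).mean()

# Recycling estimator: weight *all* proposals by f/g (self-normalized IS).
w = f(x) / g(x)
recycled = np.sum(w * h(x)) / np.sum(w)

print(f"accepted fraction : {accepted.mean():.3f}")
print(f"classical AR      : {classical:.5f}")
print(f"recycled (IS)     : {recycled:.5f}   (exact value 0.3)")
```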

3.
We develop importance sampling estimators for Monte Carlo pricing of European and path-dependent options in models driven by Lévy processes. Using results from the theory of large deviations for processes with independent increments, we compute an explicit asymptotic approximation for the variance of the pay-off under a time-dependent Esscher-style change of measure. Minimizing this asymptotic variance using convex duality, we then obtain an importance sampling estimator of the option price. We show that our estimator is logarithmically optimal among all importance sampling estimators. Numerical tests in the variance gamma model show consistent variance reduction with a small computational overhead.  相似文献   
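
The sketch below is only the Gaussian (Black-Scholes) special case, where an Esscher-style exponential tilt reduces to a drift shift of the terminal normal increment; the tilt value is a simple heuristic, not the variance-minimizing, Lévy-model construction of the paper, and all market parameters are arbitrary.

```python
# Sketch: importance sampling for a deep out-of-the-money European call under
# Black-Scholes. The Esscher tilt of a Gaussian increment is a drift shift;
# the shift here is chosen heuristically so that E[log S_T] = log K, which is
# not the paper's Levy-model construction or its optimal tilt.
import numpy as np

rng = np.random.default_rng(2)
S0, K, r, sigma, T, n = 100.0, 180.0, 0.02, 0.2, 1.0, 200_000

mu = (r - 0.5 * sigma ** 2) * T            # risk-neutral drift of log(S_T / S0)
payoff = lambda z: np.exp(-r * T) * np.maximum(
    S0 * np.exp(mu + sigma * np.sqrt(T) * z) - K, 0.0)

# Plain Monte Carlo.
z = rng.standard_normal(n)
plain = payoff(z)

# Importance sampling: shift the standard normal by theta; the likelihood
# ratio of N(0,1) against N(theta,1) is exp(-theta*z + theta^2/2).
theta = (np.log(K / S0) - mu) / (sigma * np.sqrt(T))   # heuristic: hit the strike on average
z_is = rng.standard_normal(n) + theta
tilted = payoff(z_is) * np.exp(-theta * z_is + 0.5 * theta ** 2)

for name, sample in [("plain MC", plain), ("tilted IS", tilted)]:
    print(f"{name}: price = {sample.mean():.4f}, "
          f"std error = {sample.std(ddof=1) / np.sqrt(n):.4f}")
```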

4.
This paper introduces the “piggyback bootstrap.” Like the weighted bootstrap, this bootstrap procedure can be used to generate random draws that approximate the joint sampling distribution of the parametric and nonparametric maximum likelihood estimators in various semiparametric models, but the dimension of the maximization problem for each bootstrapped likelihood is smaller. This reduction results in significant computational savings in comparison to the weighted bootstrap. The procedure can be stated quite simply. First obtain a valid random draw for the parametric component of the model. Then take the draw for the nonparametric component to be the maximizer of the weighted bootstrap likelihood with the parametric component fixed at the parametric draw. We prove the procedure is valid for a class of semiparametric models that includes frailty regression models arising in survival analysis and biased sampling models that have application to vaccine efficacy trials. Bootstrap confidence sets from the piggyback and weighted bootstraps are compared for biased sampling data from simulated vaccine efficacy trials.

5.
This paper reports simulation experiments applying the cross-entropy method as an importance sampling algorithm for efficient estimation of rare event probabilities in Markovian reliability systems. The method is compared to various failure biasing schemes that have been proved to give estimators with bounded relative errors. The results from the experiments indicate a considerable improvement in the performance of the importance sampling estimators, where performance is measured by the relative error of the estimate, by the relative error of the estimator, and by the gain of the importance sampling simulation over standard simulation.
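
As a toy illustration of the cross-entropy idea only (it does not reproduce the Markovian reliability systems or the failure biasing schemes of the paper), the sketch below iteratively tilts an exponential sampling density toward a rare event and reports the relative error of the resulting importance sampling estimate; the threshold, sample size, and elite fraction are arbitrary.

```python
# Toy sketch of the cross-entropy (CE) method for a rare-event probability,
# here P(X > gamma) with X ~ Exp(1); the paper applies the same idea to
# Markovian reliability systems, which this example does not attempt.
import numpy as np

rng = np.random.default_rng(1)
gamma, n, rho = 12.0, 10_000, 0.1          # threshold, sample size, elite fraction

# CE iterations: sample from Exp(mean = v), raise the level to the elite
# quantile, and update v by the likelihood-ratio-weighted mean of the elites.
v = 1.0                                     # current IS mean (nominal mean is 1)
for _ in range(20):
    x = rng.exponential(v, size=n)
    level = min(gamma, np.quantile(x, 1.0 - rho))
    elite = x >= level
    w = v * np.exp(x[elite] / v - x[elite])        # likelihood ratios f / g_v
    v = np.sum(w * x[elite]) / np.sum(w)           # CE update of the mean
    if level >= gamma:
        break

# Final importance sampling run with the tilted density Exp(mean = v).
x = rng.exponential(v, size=n)
lr = v * np.exp(x / v - x)
z = lr * (x > gamma)
est, rel_err = z.mean(), z.std(ddof=1) / np.sqrt(n) / z.mean()
print(f"tilted mean v = {v:.2f}, estimate = {est:.3e} "
      f"(exact {np.exp(-gamma):.3e}), relative error = {rel_err:.3%}")
```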

6.
This paper establishes the uniform closeness of a weighted residual empirical process to its natural estimate in the linear regression setting when the errors are Gaussian, or a function of Gaussian random variables, that are strictly stationary and long range dependent. This result is used to yield the asymptotic uniform linearity of a class of rank statistics in linear regression models with long range dependent errors. The latter result, in turn, yields the asymptotic distribution of the Jaeckel (1972) rank estimators. The paper also studies the least absolute deviation and a class of certain minimum distance estimators of regression parameters and the kernel type density estimators of the marginal error density when the errors are long range dependent. Research of this author was partly supported by NSF grant DMS-9102041.

7.
Given observations of a Lévy process, we provide nonparametric estimators of its Lévy tail and study the asymptotic properties of the corresponding weighted empirical processes. Within a special class of weight functions, we give necessary and sufficient conditions that ensure strong consistency and asymptotic normality of the weighted empirical processes, provided that complete information on the jumps is available. To cope with infinite activity processes, we depart from this assumption and analyze the weighted empirical processes of a sampling scheme where small jumps are neglected. We establish a bootstrap principle and provide a simulation study for some prominent Lévy processes.

8.
This article deals with the progressively first-failure censored Lindley distribution. Maximum likelihood and Bayes estimators of the parameter and reliability characteristics of the Lindley distribution based on progressively first-failure censored samples are derived. Asymptotic confidence intervals based on the observed Fisher information and bootstrap confidence intervals of the parameter are constructed. Bayes estimators using non-informative and gamma informative priors are derived using an importance sampling procedure and the Metropolis–Hastings (MH) algorithm under the squared error loss function. Also, HPD credible intervals for the parameter based on the importance sampling procedure and the MH algorithm are constructed. To study the performance of the various estimators discussed in this article, a Monte Carlo simulation study is conducted. Finally, a real data set is studied for illustration purposes.
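
A minimal sketch of the MH component only: a random-walk Metropolis-Hastings sampler for the Lindley parameter with a gamma prior, run on a complete (uncensored) simulated sample. The progressive first-failure censoring, the importance sampling estimator, and the HPD construction of the article are not reproduced, and the prior hyperparameters and proposal step are arbitrary.

```python
# Sketch: random-walk Metropolis-Hastings for the Lindley parameter theta with
# a Gamma(a, b) prior, using a complete (uncensored) simulated sample. Not the
# article's censored-data procedure; hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def simulate_lindley(theta, n):
    # Lindley(theta) is a mixture: Exp(theta) w.p. theta/(1+theta), else Gamma(2, theta).
    comp = rng.uniform(size=n) < theta / (1.0 + theta)
    return np.where(comp, rng.exponential(1.0 / theta, n), rng.gamma(2.0, 1.0 / theta, n))

x = simulate_lindley(theta=1.5, n=50)
a, b = 2.0, 1.0                                   # gamma prior hyperparameters (illustrative)

def log_post(theta):
    if theta <= 0:
        return -np.inf
    loglik = len(x) * (2 * np.log(theta) - np.log1p(theta)) - theta * x.sum()
    return loglik + (a - 1) * np.log(theta) - b * theta   # + terms constant in theta

draws, theta, step = [], 1.0, 0.3
for _ in range(20_000):
    prop = theta + step * rng.standard_normal()    # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta)

post = np.array(draws[5_000:])                     # discard burn-in
print(f"posterior mean {post.mean():.3f}, 95% credible interval "
      f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```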

9.
The stationary density of an invertible linear process can be estimated at the parametric rate by a convolution of residual-based kernel estimators. We have shown elsewhere that the convergence is uniform and that a functional central limit theorem holds in the space of continuous functions vanishing at infinity. Here we show that analogous results hold in weighted L_1-spaces. We do not require smoothness of the innovation density.

10.
New minimum distance estimators are constructed with the help of a preliminary estimator. The asymptotic normality of the constructed estimator is proved using a uniform linear expansion of a randomly weighted residual empirical process in a non-standard neighborhood of the true parameter value. The question of the asymptotic efficiency of the constructed estimator is also discussed.

11.
Equally weighted mixture models are recommended for situations where it is required to draw precise finite sample inferences regarding population parameters, but where the population distribution is not constrained to belong to a simple parametric family. They lead to an alternative procedure to the Laird-DerSimonian maximum likelihood algorithm for unequally weighted mixture models. Their primary purpose lies in the facilitation of exact Bayesian computations via importance sampling. Under very general sampling and prior specifications, exact Bayesian computations can be based upon an application of importance sampling, referred to as Permutable Bayesian Marginalization (PBM). An importance function based upon a truncated multivariate t-distribution is proposed, which refers to a generalization of the maximum likelihood procedure. The estimation of discrete distributions, by binomial mixtures, and inference for survivor distributions, via mixtures of exponential or Weibull distributions, are considered. Equally weighted mixture models are also shown to lead to an alternative Gibbs sampling methodology to the Lavine-West approach.

12.
Sample average approximation (SAA) is one of the most popular methods for solving stochastic optimization and equilibrium problems. Research on SAA has been mostly focused on the case when sampling is independent and identically distributed (iid), with exceptions (Dai et al. (2000) [9], Homem-de-Mello (2008) [16]). In this paper we study SAA with general sampling (including iid and non-iid sampling) for solving nonsmooth stochastic optimization problems, stochastic Nash equilibrium problems and stochastic generalized equations. To this end, we first derive the uniform exponential convergence of the sample average of a class of lower semicontinuous random functions and then apply it to a nonsmooth stochastic minimization problem. Exponential convergence of estimators of both optimal solutions and M-stationary points (characterized by Mordukhovich limiting subgradients (Mordukhovich (2006) [23], Rockafellar and Wets (1998) [32])) is established under mild conditions. We also use the uniform convergence result to establish the exponential rate of convergence of statistical estimators of a stochastic Nash equilibrium problem and estimators of the solutions to a stochastic generalized equation problem.
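
To fix ideas, here is a toy SAA sketch for a one-dimensional stochastic program (a newsvendor-type cost), showing how the expectation is replaced by a sample average and the resulting deterministic problem is minimized; it does not touch the nonsmooth, equilibrium, or non-iid settings analyzed in the paper, and all parameter values are arbitrary.

```python
# Sketch: sample average approximation (SAA) for a one-dimensional newsvendor
# problem min_x E[c*x - p*min(x, D)], with D ~ Exp(mean 10). The true minimizer
# is the (1 - c/p)-quantile of D; SAA replaces the expectation by a sample mean
# and minimizes the resulting deterministic function.
import numpy as np

rng = np.random.default_rng(4)
c, p, mean_demand = 1.0, 4.0, 10.0
x_grid = np.linspace(0.0, 60.0, 1201)              # candidate order quantities

def saa_solution(n):
    d = rng.exponential(mean_demand, size=n)       # iid demand sample
    # Sample-average objective evaluated on the grid.
    cost = np.array([c * x - p * np.minimum(x, d).mean() for x in x_grid])
    return x_grid[np.argmin(cost)]

true_opt = -mean_demand * np.log(c / p)            # (1 - c/p)-quantile of Exp(mean 10)
for n in (100, 1_000, 10_000, 100_000):
    print(f"n = {n:>6}: SAA solution = {saa_solution(n):6.2f}  "
          f"(true optimum {true_opt:.2f})")
```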

13.
We study multiple sampling and interpolation problems with unbounded multiplicities in the weighted Bergman space, both in the Hilbertian case and the uniform case.

14.
It is shown that the weighted residual-based estimator of Schick, Zhu, and Du (2017) is efficient in some special cases and can be made to be efficient by adding a stochastic correction term. The efficiency is shown by deriving the efficient influence function and establishing a uniform stochastic expansion with this influence function. The correction term relies on estimators of the score function for the errors and other characteristics of the model.

15.
Based on the empirical likelihood method, we construct new weighted estimators of the conditional density and conditional survival functions when the variable of interest is subject to random left-truncation; further, we define a plug-in weighted estimator of the conditional hazard rate. Under strong mixing assumptions, we derive the asymptotic normality of the proposed estimators, which allows us to build a confidence interval for the conditional hazard rate. The finite sample behavior of the estimators is also investigated via simulations.

16.

We study parametric estimation of ergodic diffusions observed at high frequency. In contrast to previous studies, we suppose that the sampling stepsize is unknown, so that the conventional Gaussian quasi-likelihood is not directly applicable. In this situation, we construct estimators of both the model parameters and the sampling stepsize in a fully explicit way, and prove that they are jointly asymptotically normally distributed. High order uniform integrability of the obtained estimators is also derived. Further, we propose a Schwarz (BIC) type statistic for model selection and show its model-selection consistency. We conducted some numerical experiments and found that the observed finite-sample performance supports our theoretical findings well.

17.
For multivariate failure time data, the unknown parameters in marginal hazard models can be estimated under a working-independence assumption, but working-independence methods usually lose estimation efficiency. To fully exploit the potential correlation among different failure types and improve efficiency, weighted partial likelihood estimators of the parameters can be obtained by weighting. However, because multivariate failure data are high-dimensional, choosing the optimal weights is difficult. For this reason, Fan, Zhou, Cai and Chen proposed some suboptimal weighting methods based on the variance of each component of the parameter estimator, and then, taking the estimation of all components of the parameter vector into account, constructed composite weighted partial likelihood estimators of the unknown parameters, but they did not give the asymptotic properties of these composite weighted estimators. This paper studies the composite weighted partial likelihood estimator further, derives its asymptotic normality, and gives its covariance matrix together with a covariance estimator. The method is also applied to real data from an AIDS clinical trial, with a meaningful interpretation and illustration. Finally, some numerical simulations of the relevant estimators are carried out.

18.
We introduce a new importance sampling method for pricing basket default swaps employing exchangeable Archimedean copulas and nested Gumbel copulas. We establish more realistic dependence structures than existing copula models for credit risks in the underlying portfolio, and propose an appropriate density for importance sampling by analyzing multivariate Archimedean copulas. To justify the efficiency and accuracy of the proposed algorithms, we present numerical examples and compare them with the crude Monte Carlo simulation, and finally show that our proposed estimators produce considerably smaller variances.

19.
In this paper we address the problem of estimating θ1 when Y1 and Y2 are observed and |θ1 − θ2| ≤ c for a known constant c. Clearly Y2 contains information about θ1. We show how the so-called weighted likelihood function may be used to generate a class of estimators that exploit that information. We discuss how the weights in the weighted likelihood may be selected to successfully trade bias for precision and thus use the information effectively. In particular, we consider adaptively weighted likelihood estimators where the weights are selected using the data. One approach selects such weights in accord with Akaike's entropy maximization criterion. We describe several estimators obtained in this way. However, the maximum likelihood estimator is investigated as a competitor to these estimators, along with a Bayes estimator, a class of robust Bayes estimators and (when c is sufficiently small) a minimax estimator. Moreover, we will assess their properties both numerically and theoretically. Finally, we will see how all of these estimators may be viewed as adaptively weighted likelihood estimators. In fact, an overriding theme of the paper is that the adaptively weighted likelihood method provides a powerful extension of its classical counterpart.
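
A small simulation sketch of the basic bias-precision trade-off behind a fixed-weight likelihood-type estimator of θ1 that borrows strength from Y2. It assumes, purely for illustration, Gaussian observations with unit variance (the distributional assumptions of the abstract were not legible here), and the weight is a simple deterministic function of c, not the adaptive, entropy-based choice studied in the paper.

```python
# Sketch: a fixed-weight "weighted likelihood" style estimator of theta1 that
# borrows information from Y2 when |theta1 - theta2| <= c. Gaussian observations
# with unit variance are assumed purely for illustration, and lambda is a simple
# heuristic weight, not the paper's adaptive (entropy-based) choice.
import numpy as np

rng = np.random.default_rng(5)

def mse(lam, delta, reps=200_000):
    """Monte Carlo MSE of (Y1 + lam*Y2)/(1 + lam) when theta1 = 0, theta2 = delta."""
    y1 = rng.standard_normal(reps)            # Y1 ~ N(0, 1)
    y2 = delta + rng.standard_normal(reps)    # Y2 ~ N(delta, 1)
    est = (y1 + lam * y2) / (1.0 + lam)
    return np.mean(est ** 2)                  # true theta1 is 0

for c in (0.25, 1.0, 3.0):
    lam = 1.0 / (1.0 + c ** 2)                # heuristic: downweight Y2 as c grows
    print(f"c = {c}: MLE (lam=0) MSE = {mse(0.0, c):.3f}, "
          f"weighted (lam={lam:.2f}) MSE at worst case delta = c: {mse(lam, c):.3f}")
```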

20.
Case-cohort sampling is a commonly used and efficient method for studying large cohorts. In many situations, some covariates are easily measured on all cohort subjects, and surrogate measurements of the expensive covariates may also be observed. In this paper, to make full use of the covariate data collected outside the case-cohort sample, we propose a class of weighted estimators with general time-varying weights for the additive hazards model, and the estimators are shown to be consistent and asymptotically normal. We also identify the estimator within this class that maximizes efficiency, and simulation studies show that the efficiency gains of the proposed estimator over the existing ones can be substantial in practical situations. A real example is provided.
