Similar Articles (20 results)
1.
Balanced sampling is a widely studied method that uses auxiliary information to improve the structure of the sample. The inclusion probabilities proportional to multiple size variables that MPPS sampling uses in multi-purpose surveys can be satisfied exactly within balanced sampling. Building on the properties of MPPS sampling and balanced sampling, this paper proposes MPPS balanced sampling for multi-purpose surveys. Its main idea is to use the auxiliary information of several survey variables both when determining the inclusion probabilities and during the random selection of the sample, improving the precision of the Horvitz-Thompson (HT) estimator. ...

2.
In stochastic frontier models, ignoring the heterogeneity of the one-sided error term often leads to incorrect efficiency estimates. Considering the one-sided error term from two aspects, the influence of individual characteristics and the time-varying nature of its variance, a heteroscedastic dynamic stochastic frontier model is proposed. Bayesian analysis of the dynamic heteroscedastic stochastic frontier model is carried out via Gibbs sampling. The posterior conditional distributions of the model parameters are derived, and simulation experiments on small and medium samples show that the parameter estimates obtained under the minimum posterior mean squared error criterion are very close to the true values. An analysis of real data from electric power companies shows that the variance of the log-inefficiency term is somewhat time-varying.

3.
Simple random sampling, in both its with-replacement and without-replacement forms, is the most basic sampling design. Based on the idea of a "virtual census", this paper gives a new way of computing the sampling variance of the simple estimator and reveals the internal connection between with-replacement and without-replacement simple random sampling. The core of the "virtual census" idea is the virtual census matrix, which records the inclusion trajectories of all population units for a draw-by-draw without-replacement simple random sample hypothetically extended to a full census (hence the name "virtual census"). The "virtual census" framework built in the paper can be seen as a fusion of the existing symmetrization argument and the argument based on sample inclusion indicators, and appears to have potential value for deepening the understanding of classical sampling strategies. As examples, the paper gives a simple-random-sampling-based interpretation of two strategies commonly used in practice: unequal probability sampling with replacement and adaptive cluster sampling.
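The abstract does not reproduce any formulas; as a reminder of the standard results it connects, the following sketch checks the textbook variances of the sample mean under simple random sampling with and without replacement by Monte Carlo. The population, sizes, and seed are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
pop = rng.normal(50.0, 10.0, size=200)   # artificial population
N, n = len(pop), 20
S2 = pop.var(ddof=1)                     # population variance with N-1 divisor
sigma2 = pop.var(ddof=0)                 # population variance with N divisor

# textbook variances of the sample mean
var_wor = (1 - n / N) * S2 / n           # SRS without replacement (finite population correction)
var_wr = sigma2 / n                      # SRS with replacement

# Monte Carlo check
means_wor = [rng.choice(pop, n, replace=False).mean() for _ in range(20000)]
means_wr = [rng.choice(pop, n, replace=True).mean() for _ in range(20000)]
print(var_wor, np.var(means_wor))
print(var_wr, np.var(means_wr))
```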

4.
Properties of maximum likelihood estimators of parameters based on ordered samples
Ordered sampling is a new sampling method that, compared with simple random sampling, has many good properties. This paper discusses the properties of maximum likelihood estimators of parameters based on ordered samples.

5.
When the actual measurement of the study variable is irreparably destructive or very costly, efficient sampling design becomes an important research topic. For statistical inference, ranked set sampling is regarded as a more efficient way of collecting data, and extreme ranked set sampling (ERSS) is a modification of it. This paper studies ratio estimation of the population mean under ERSS. Taking the normal distribution as an example, the relative efficiencies of the ratio estimators under simple random sampling and under ERSS are compared. Numerical results show that the ratio estimator under ERSS outperforms the one under simple random sampling.
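As a rough illustration of ERSS-based ratio estimation (not the authors' exact setup), the sketch below ranks each set on an auxiliary variable x, keeps the smallest-ranked unit in half of the sets and the largest-ranked unit in the other half, and forms the usual ratio estimator of the mean of y. The bivariate population, set size, and cycle count are made up.

```python
import numpy as np

def erss_sample(pop_x, pop_y, m, cycles, rng):
    """Extreme ranked set sample of size m * cycles, ranking on the auxiliary
    variable x: in each cycle, half of the m sets contribute their minimum-ranked
    unit and the other half their maximum-ranked unit (m taken even here)."""
    xs, ys = [], []
    for _ in range(cycles):
        for i in range(m):
            idx = rng.choice(len(pop_x), size=m, replace=False)
            order = idx[np.argsort(pop_x[idx])]
            pick = order[0] if i < m // 2 else order[-1]
            xs.append(pop_x[pick])
            ys.append(pop_y[pick])
    return np.array(xs), np.array(ys)

rng = np.random.default_rng(1)
N = 2000
x = rng.normal(10.0, 2.0, N)                  # auxiliary variable with known mean
y = 3.0 * x + rng.normal(0.0, 2.0, N)         # study variable, correlated with x
xs, ys = erss_sample(x, y, m=4, cycles=5, rng=rng)
ratio_est = ys.mean() / xs.mean() * x.mean()  # ratio estimator of the mean of y
print(ratio_est, y.mean())
```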

6.
In the inspection of concrete members, percentage sampling is commonly used. Taking the percentage sampling used in rebound-hammer testing of concrete strength as an example, this paper analyzes, in terms of absolute and relative error limits, why drawing samples as a fixed percentage is unreasonable across different total numbers of members, and proposes determining the sample size by controlling a specified error limit for different concrete strength grades and different total numbers of members. Theoretical and case analyses show that reasonably stratifying the inspected members with stratified sampling reduces the population variance and hence the required sample size. The method is also applicable to sample size determination in other inspection problems.
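The abstract argues for setting the sample size from an error limit rather than a fixed percentage; one standard way to do this (not necessarily the exact formula used in the paper) is Cochran's formula with a finite population correction, sketched below with illustrative numbers.

```python
import math

def sample_size(N, S, e, z=1.96):
    """Sample size so that the absolute error of the estimated mean is at most e
    with roughly 95% confidence, for N units with standard deviation S
    (Cochran's formula with finite population correction)."""
    n0 = (z * S / e) ** 2              # infinite-population sample size
    return math.ceil(n0 / (1 + n0 / N))

# e.g. strength standard deviation 4 MPa and an absolute error limit of 1.5 MPa
print(sample_size(N=500, S=4.0, e=1.5))    # small batch of members
print(sample_size(N=5000, S=4.0, e=1.5))   # large batch: far fewer than a fixed percentage
```

Stratifying by strength grade and applying the same formula within each stratum then lowers S and reduces the required sample size further, as the abstract suggests.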

7.
This paper discusses several allocation schemes for the total sample size of second-stage units in two-stage sampling when the first stage uses unequal probability sampling and the second stage uses simple random sampling. The variance of the Horvitz-Thompson estimator of the population total is given under each allocation scheme, the schemes are compared, and the optimal allocation is derived.
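The allocation schemes themselves cannot be recovered from the abstract; as background, the sketch below shows a Horvitz-Thompson-type estimator of a population total for this kind of two-stage design (unequal probability first stage, SRS second stage). All numbers are illustrative.

```python
import numpy as np

def ht_total_two_stage(psu_pi, psu_sizes, psu_sample_means):
    """Horvitz-Thompson-type estimator of the population total for two-stage sampling:
    unequal probability sampling of PSUs at the first stage (inclusion probabilities
    psu_pi) and SRS within each sampled PSU at the second stage.
    psu_sizes[i] is the number of second-stage units N_i in sampled PSU i and
    psu_sample_means[i] is the sample mean of y within it."""
    psu_total_hat = psu_sizes * psu_sample_means   # N_i * ybar_i estimates the PSU total
    return np.sum(psu_total_hat / psu_pi)

pi = np.array([0.10, 0.25, 0.05])   # first-stage inclusion probabilities (illustrative)
Ni = np.array([40, 120, 25])        # PSU sizes (illustrative)
ybar = np.array([3.2, 2.7, 4.1])    # within-PSU sample means (illustrative)
print(ht_total_two_stage(pi, Ni, ybar))
```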

8.
Orthogonal-array-based uniform LH design and sampling
This paper proposes a new design and sampling method, orthogonal-array-based uniform LH (Latin hypercube) design and sampling, and proves that its sampling space is a good subset of the OALH sampling space. All samples in this design and sampling space share favorable properties such as low discrepancy of the same order as the initial design. Applied to numerical integration, the method is shown to give estimators of the relevant parameters whose variance is of lower order than under other sampling schemes. Related simulation results are also given.
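The orthogonal-array-based construction is not reconstructed here; for orientation, the sketch below generates an ordinary Latin hypercube sample, the baseline design that the OALH variant refines, and uses it for a toy numerical integration. Dimensions and integrand are made up.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Ordinary Latin hypercube sample of n points in [0, 1)^d: each axis is cut
    into n equal strata, each stratum is hit exactly once, and the strata are
    matched across axes by independent random permutations."""
    u = rng.random((n, d))
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + u) / n

rng = np.random.default_rng(2)
pts = latin_hypercube(8, 3, rng)
f = lambda x: np.prod(np.sin(np.pi * x), axis=1)   # toy integrand on [0, 1)^3
print(pts.shape, f(pts).mean())                    # crude LH estimate of the integral
```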

9.
Ranked set sampling (RSS) is a well-known sampling technique with many variants, one of which is median ranked set sampling (MRSS). Compared with simple random sampling (SRS), RSS has advantages in estimating the population mean. However, a limitation of RSS and its variants is that, for a given sample size n, the size m of each SRS drawn during the sampling process can only be n or a factor of n. This paper introduces an improved median ranked set sampling method, MRSS(m), which offers more choices in the sampling process than the original method. Experiments show that the new MRSS(m) method improves the estimation efficiency of the Horvitz-Thompson (HT) estimator while also reducing the sampling cost.
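The MRSS(m) modification is not specified in the abstract; the sketch below implements classical median ranked set sampling (odd set size) so the baseline procedure the paper improves on is concrete. Population and sizes are illustrative.

```python
import numpy as np

def mrss(pop, m, cycles, rng):
    """Classical median ranked set sampling (set size m odd): in each cycle, draw m
    independent simple random sets of size m, rank each set, and keep the median
    of every set.  The paper's MRSS(m) modification is not reproduced here."""
    out = []
    for _ in range(cycles):
        for _ in range(m):
            s = rng.choice(pop, size=m, replace=False)
            out.append(np.sort(s)[m // 2])
    return np.array(out)

rng = np.random.default_rng(3)
pop = rng.lognormal(0.0, 0.5, 5000)
sample = mrss(pop, m=5, cycles=4, rng=rng)
print(sample.mean(), pop.mean())
```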

10.
Sample surveys are the main means of obtaining socio-economic survey data, and their designs are generally stratified, multi-stage, unequal probability designs. In both design and field work, however, the sampling of the final-stage units is often neglected. Based on data from the China Family Panel Studies (中国家庭动态跟踪调查), this paper presents a comparative study of probability sampling methods for the final-stage sample.

11.
Compared with sampling without replacement, sampling with replacement is simpler to implement and easier to operate, but its drawback is that units may be drawn repeatedly, so the effective sample size is less than or equal to the nominal sample size and is not fixed. Applying the principle of inverse sampling, this paper designs a with-replacement sampling method whose effective sample size is fixed and whose estimator has good properties.
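The abstract does not spell out the design; one plausible reading (an assumption, not necessarily the authors' scheme) is to keep drawing with replacement until a fixed number of distinct units has been observed, as sketched below.

```python
import numpy as np

def draw_until_n_distinct(N, n_effective, rng):
    """Keep drawing with replacement from {0, ..., N-1} until n_effective distinct
    units have appeared.  This fixed-effective-sample-size reading of the design
    is an assumption, not a reconstruction of the paper's scheme."""
    draws, seen = [], set()
    while len(seen) < n_effective:
        u = int(rng.integers(N))
        draws.append(u)
        seen.add(u)
    return draws, seen

rng = np.random.default_rng(4)
draws, distinct = draw_until_n_distinct(N=100, n_effective=10, rng=rng)
print(len(draws), len(distinct))   # total draws is random; distinct count is fixed at 10
```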

12.
A comparison is made between the variances of the estimator of the total of a variable obtained from simple and from stratified random sampling in which the sample sizes of some strata are equal to the stratum population sizes. It is shown that in this case the advantage of the stratified sample can depend on the sample size. The paper presents inequalities that determine, as a function of the sample size, when the variance of the estimator obtained with simple sampling is lower than the variance obtained with stratified sampling. The results give insight that helps prevent overstratification.
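The paper's inequalities are not reproduced here; the sketch below computes the two textbook variances of the total estimator for an illustrative population in which one stratum is completely enumerated, so the comparison can be checked numerically. All data are synthetic.

```python
import numpy as np

def var_srs_total(pop, n):
    """Variance of the expansion estimator of the total under SRS without replacement."""
    N = len(pop)
    return N ** 2 * (1 - n / N) * np.var(pop, ddof=1) / n

def var_stratified_total(strata, n_h):
    """Variance of the stratified estimator of the total; strata with n_h = N_h
    (take-all strata) contribute zero variance."""
    v = 0.0
    for pop_h, nh in zip(strata, n_h):
        Nh = len(pop_h)
        if nh < Nh:
            v += Nh ** 2 * (1 - nh / Nh) * np.var(pop_h, ddof=1) / nh
    return v

rng = np.random.default_rng(5)
big = rng.normal(100.0, 30.0, 20)    # small stratum of large units, fully enumerated
rest = rng.normal(10.0, 5.0, 480)
pop = np.concatenate([big, rest])

n_total = 50
print(var_srs_total(pop, n_total))
print(var_stratified_total([big, rest], [20, n_total - 20]))
```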

13.
李涛  吴边 《数学学报》2017,60(6):897-910
This paper proposes non-overlapping κ-pair ranked set sampling, in which a κ-pair of units is measured in each ranked set and the κ-pairs from different ranked sets do not overlap. We first investigate how the efficiency of the sample mean obtained under this scheme changes with the correlation among the κ-pair units within each ranked set: the stronger the correlation, the larger the efficiency loss of the sample mean. The goal of the paper is to find the optimal allocation of the κ-pairs in non-overlapping κ-pair ranked set sampling so that the efficiency loss of the sample mean is minimized, and we prove that the optimal non-overlapping κ-pair ranked set sampling is more efficient than generalized ranked set sampling and simple random sampling. Although the statistical efficiency of non-overlapping κ-pair ranked set sampling is lower than that of classical ranked set sampling, under a cost model the optimal non-overlapping κ-pair ranked set sampling can be more efficient than classical ranked set sampling.

14.
How to make inferences from candidate-database web surveys is an urgent problem in the development of web surveys. To address it, an inference method for non-probability sampling based on a superpopulation pseudo design and a combined sample is proposed. A superpopulation model is first built to construct pseudo weights for the survey sample drawn from the web candidate database. The estimator of the population mean is then computed from the combined sample consisting of the web candidate database sample and a probability sample. The variance estimator of the population mean estimator is finally derived from the variance estimation theory of the superpopulation model; the Bootstrap and Jackknife methods are also used to compute the variance estimator, and all these variance estimation methods are compared. The results show that the population mean estimator based on the superpopulation pseudo design and the combined sample performs well and is more efficient than the estimator using only the probability sample and the weighted estimator using only the web candidate database sample. The variance estimators computed with the VM1, VM2 and VM3 methods are relatively better.
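The superpopulation pseudo-weight construction is not given in the abstract; the sketch below shows one generic propensity-based pseudo-weighting of a volunteer web sample against a weighted probability reference sample. It is an illustration under that assumption, not the authors' estimator, and the VM1, VM2 and VM3 variance methods are not reproduced; all data, model choices, and weights are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# synthetic data: x is observed in both samples, y is the study variable in the web sample
n_web, n_prob = 800, 300
x_web = rng.normal(1.0, 1.0, n_web)           # web volunteers skew high on x
y_web = 2.0 + 1.5 * x_web + rng.normal(0.0, 1.0, n_web)
x_prob = rng.normal(0.0, 1.0, n_prob)         # probability (reference) sample
w_prob = np.full(n_prob, 10000 / n_prob)      # its design weights (population of 10,000)

# propensity model for "unit came from the web sample", fitted on the combined
# sample with reference units carrying their design weights
X = np.concatenate([x_web, x_prob]).reshape(-1, 1)
z = np.concatenate([np.ones(n_web), np.zeros(n_prob)])
w = np.concatenate([np.ones(n_web), w_prob])
p = LogisticRegression().fit(X, z, sample_weight=w).predict_proba(x_web.reshape(-1, 1))[:, 1]

pseudo_w = (1 - p) / p                        # inverse-odds pseudo-weights for web units
print(np.sum(pseudo_w * y_web) / np.sum(pseudo_w))   # pseudo-weighted mean of y
```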

16.
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid, however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this article, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general setup, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effect models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
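For the iid case described at the start of the abstract, the estimator and its CLT-based standard error are straightforward; the sketch below illustrates them for a toy target and proposal. The regenerative multiple-chain MCMC machinery of the article is not reproduced, and the densities chosen are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# estimate E_pi[f(X)] with pi = N(0, 1) and f(x) = x^2, drawing from pi1 = t_5
n = 50000
x = rng.standard_t(5, size=n)
w = stats.norm.pdf(x) / stats.t.pdf(x, df=5)   # importance weights pi / pi1
g = (x ** 2) * w                               # f(x) * w(x)

est = g.mean()                                 # strongly consistent IS estimator
se = g.std(ddof=1) / np.sqrt(n)                # CLT-based standard error (iid case)
print(est, "+/-", 1.96 * se)                   # true value is 1
```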

17.
18.
When attributes are rare and few or none are observed in the selected sample from a finite universe, sampling statisticians are increasingly being challenged to use whatever methods are available to declare with high probability or confidence that the universe is near or completely attribute-free. This is especially true when the attribute is undesirable. Approximations such as those based on normal theory are frequently inadequate with rare attributes. For simple random sampling without replacement, an appropriate probability distribution for statistical inference is the hypergeometric distribution. But even with the hypergeometric distribution, the investigator is limited from making claims of attribute-free with high confidence unless the sample size is quite large using nonrandomized techniques. For students in statistical theory, this short article seeks to revive the question of the relevance of randomized methods. When comparing methods for construction of confidence bounds in discrete settings, randomization methods are useful in fixing like confidence levels and hence facilitating the comparisons. Under simple random sampling, this article defines and presents a simple algorithm for the construction of exact “randomized” upper confidence bounds which permit one to possibly report tighter bounds than those exact bounds obtained using “nonrandomized” methods. A general theory for exact randomized confidence bounds is presented in Lehmann (1959, p. 81), but Lehmann's development requires more mathematical development than is required in this application. Not only is the development of these “randomized” bounds in this paper elementary, but their desirable properties and their link with the usual nonrandomized bounds are easy to see with the presented approach which leads to the same results as would be obtained using the method of Lehmann.
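The randomized bounds of the article are not reconstructed here; as the nonrandomized baseline they are compared against, the sketch below computes a standard exact upper confidence bound for the number of attribute units from the hypergeometric distribution. The "> alpha" convention used is one common choice, and the numbers are illustrative.

```python
from scipy.stats import hypergeom

def exact_upper_bound(N, n, x, alpha=0.05):
    """Largest number of attribute units D in a universe of N units that is still
    compatible with seeing x attribute units in an SRSWOR sample of size n, i.e.
    the largest D with P(X <= x | N, D, n) > alpha.  This is a standard
    nonrandomized exact upper (1 - alpha) confidence bound for D."""
    D = x
    while D + 1 <= N and hypergeom.cdf(x, N, D + 1, n) > alpha:
        D += 1
    return D

# zero attribute units seen in a sample of 50 from a universe of 1000
print(exact_upper_bound(N=1000, n=50, x=0))
```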

19.
We consider the three progressively more general sampling schemes without replacement from a finite population: simple random sampling without replacement, Midzuno sampling and successive sampling. We (i) obtain a lower bound on the expected sample coverage of a successive sample, (ii) show that the vector of first order inclusion probabilities divided by the sample size is majorized by the vector of selection probabilities of a successive sample, and (iii) partially order the vectors of first order inclusion probabilities for the three sampling schemes by majorization. We also show that the probability of an ordered successive sample enjoys the arrangement increasing property and for sample size two the expected sample coverage of a successive sample is Schur convex in its selection probabilities. We also study the spacings of a simple random sample from a linearly ordered finite population and characterize in several ways a simple random sample.

20.
High-dimensional reliability analysis is still an open challenge in the structural reliability community. To address this problem, a new sampling approach, named the good lattice point method based partially stratified sampling, is proposed within the fractional moments-based maximum entropy method. In this approach, the original sample space is first partitioned into several orthogonal low-dimensional sample spaces, say of 2 and 1 dimensions. The samples in each low-dimensional sample space are then generated by the good lattice point method; these are deterministic points with a strong variance-reduction property. Finally, the samples in the original space are obtained by randomly pairing the low-dimensional samples, which may also significantly reduce the variance in high-dimensional cases. This sampling approach is then applied to evaluate the low-order fractional moments in the maximum entropy method, balancing efficiency and accuracy for high-dimensional reliability problems. The probability density function of the performance function involving a large number of random inputs can be derived accordingly, and the reliability can then be evaluated by a simple integral over the probability density function. Numerical examples are studied to validate the proposed method and indicate that it is accurate and efficient for high-dimensional reliability analysis.
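The sketch below shows one common good-lattice-point construction (a Fibonacci lattice) and the random pairing of low-dimensional blocks described in the abstract; the generating vector, dimensions, and block layout are illustrative, not taken from the paper.

```python
import numpy as np

def good_lattice_points(n, gen):
    """One common good-lattice-point construction: point k has coordinates
    frac(k * h_j / n), k = 1, ..., n, for a generating vector gen = (h_1, ..., h_s)."""
    k = np.arange(1, n + 1).reshape(-1, 1)
    return (k * np.asarray(gen) % n) / n

rng = np.random.default_rng(8)
n = 144
# a 6-dimensional input split into three 2-dimensional blocks; each block gets its
# own GLP set (here a Fibonacci lattice with generating vector (1, 89)) and the
# blocks are matched by independent random row permutations, in the spirit of
# partially stratified sampling
blocks = [good_lattice_points(n, (1, 89)) for _ in range(3)]
sample = np.hstack([b[rng.permutation(n)] for b in blocks])   # n points in [0, 1)^6
print(sample.shape)
```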
