Similar Articles
 20 similar articles found (search time: 625 ms)
1.
This paper studies the Fisher information about θ contained in left-truncated data (U, V) and derives two expressions for it, one based on the distribution function and one based on the failure rate. In particular, under the Koziol-Green model, the Fisher information about θ in the left-truncated observations (U, V) is compared with that in the complete data X; it is shown that under this model the Fisher information in (U, V) is not only not reduced but actually increased.
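For reference, the two representations mentioned can be stated generically (these are the standard definitions, not the paper's exact formulas): with density $f(x;\theta)$, distribution function $F(x;\theta)$ and failure rate $\lambda(x;\theta)=f(x;\theta)/\{1-F(x;\theta)\}$, the Fisher information about $\theta$ in one observation is

$$I(\theta)=E\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{2}\right],$$

and since $f=\lambda\,(1-F)$ with $1-F(x;\theta)=\exp\!\left\{-\int_{0}^{x}\lambda(t;\theta)\,dt\right\}$, the same quantity can be rewritten entirely in terms of the failure rate.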

2.
When actually measuring the study variable is destructive or prohibitively expensive, efficient sampling design becomes an important research topic. For statistical inference, ranked set sampling (RSS) is regarded as a more efficient way of collecting data than simple random sampling (SRS), and moving extremes RSS (MERSS) is a modified version of RSS. This paper studies maximum likelihood estimators (MLEs) of the parameters of the logistic distribution under SRS and MERSS. The existence and uniqueness of the MLEs of the location and scale parameters are proved under both sampling schemes, and the Fisher information numbers and Fisher information matrices of the parameters are computed. The asymptotic efficiencies of the corresponding estimators under the two schemes are compared; numerical results show that the MLEs under MERSS uniformly outperform those under SRS.
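As an illustration of the data-collection scheme, here is a sketch of one common MERSS variant, in which sets of increasing size contribute their maxima and then their minima; whether this matches the paper's exact design is an assumption, and the set size and seed are arbitrary:

```python
import math
import random

def merss_sample(m, draw):
    """One cycle of moving extremes ranked set sampling (one common variant):
    for i = 1..m draw a set of size i and measure its maximum, then
    for i = 1..m draw a fresh set of size i and measure its minimum."""
    maxima = [max(draw() for _ in range(i)) for i in range(1, m + 1)]
    minima = [min(draw() for _ in range(i)) for i in range(1, m + 1)]
    return maxima + minima

def logistic_draw(mu=0.0, s=1.0):
    # Logistic(mu, s) variate by inversion: X = mu + s * log(u / (1 - u))
    u = random.random()
    return mu + s * math.log(u / (1 - u))

random.seed(0)
sample = merss_sample(3, logistic_draw)
print(len(sample))  # 2m = 6 measured units per cycle
```

Note that each cycle ranks m(m + 1) drawn units but measures only the 2m extremes, which is where the cost saving comes from.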

3.
白龙, 程从华. 《应用数学》 2016, 29(1): 104-116
This paper studies statistical inference for the Gompertz-sinh distribution under progressive Type-I censoring. Using the EM algorithm for censored data, maximum likelihood estimation (MLE) of the unknown parameters is discussed. To obtain approximate confidence intervals for the unknown parameters, the observed Fisher information matrix is derived from the missing information principle. Numerical simulation results and a real-data example are given to illustrate the method.
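The EM treatment of censoring can be illustrated on a much simpler lifetime model (an exponential stand-in for the Gompertz-sinh case, chosen because its E-step is closed-form; the data below are made up). The E-step replaces each right-censored lifetime c by its conditional expectation c + 1/λ, and the M-step re-maximizes:

```python
# EM for exponential lifetimes with Type-I (right-)censored observations.
# Hypothetical data: three observed failures, two units censored at time 2.0.
failures = [0.5, 1.2, 0.3]
censored = [2.0, 2.0]
n = len(failures) + len(censored)

lam = 1.0  # initial rate
for _ in range(200):
    # E-step: expected lifetime of a unit censored at c is c + 1/lam
    expected_total = sum(failures) + sum(c + 1.0 / lam for c in censored)
    # M-step: MLE of the rate given the completed data
    lam = n / expected_total

# The fixed point coincides with the closed-form censored-data MLE d / T,
# where d = number of failures and T = total time on test.
d = len(failures)
T = sum(failures) + sum(censored)
print(lam, d / T)
```

For the exponential this fixed point has a closed form, which makes the example checkable; for Gompertz-sinh the E-step expectation would be computed numerically instead.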

5.
When actually measuring the study variable is destructive or prohibitively expensive, efficient sampling design becomes an important research topic. For statistical inference, ranked set sampling (RSS) is regarded as a more efficient way of collecting data than simple random sampling (SRS). This paper studies the maximum likelihood estimator (MLE), modified MLE, modified unbiased estimator, and modified best linear unbiased estimator (BLUE) of the parameter of the power-law distribution under SRS and RSS, respectively. Further, for this distribution, the RSS that maximizes the Fisher information contained in the order statistics is identified, and the above estimators are studied under it. Simulation results show that each estimator under RSS uniformly outperforms its SRS counterpart.
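The efficiency claim can be illustrated generically with standard RSS under perfect ranking for estimating a mean (using a uniform population rather than the paper's power-law distribution; set size, replication count and seed are arbitrary choices):

```python
import random
random.seed(1)

def srs_mean(n):
    return sum(random.random() for _ in range(n)) / n

def rss_mean(m):
    # One RSS cycle with set size m and perfect ranking: the i-th measured
    # unit is the i-th order statistic of an independent set of size m.
    vals = []
    for i in range(1, m + 1):
        s = sorted(random.random() for _ in range(m))
        vals.append(s[i - 1])
    return sum(vals) / m

def var(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

reps = 4000
v_srs = var([srs_mean(3) for _ in range(reps)])
v_rss = var([rss_mean(3) for _ in range(reps)])
print(v_srs, v_rss)  # the RSS estimator has the smaller variance
```

Both estimators use three measured units per sample; the stratification by rank is what shrinks the variance (here roughly from 1/36 to 1/72 for the uniform case).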

6.
Paired data arise frequently in medical research; in ophthalmology or otolaryngology studies, for example, measurements are taken on each member of a pair of organs. Such paired-organ measurements are usually highly correlated, whereas most statistical inference methods assume that observations are mutually independent, and studies have shown that ignoring the intra-pair correlation inflates the significance level. Many statistical methods exist to handle this correlation. Moreover, ignoring the correlation or the confounding effects in paired data can bias the results, so adjusting for and controlling confounders in statistical inference is essential. This paper reviews and discusses these statistical methods and makes recommendations for statistical inference.

7.
On the efficiency of tests for outliers in exponential samples   (cited 6 times: 1 self-citation, 5 by others)
This paper discusses the efficiency of tests for outliers in exponential samples and proves a certain optimality of the Fisher-type statistic for outlier testing. The powers of the Fisher, Epstein, and Dixon statistics in testing for outliers in exponential samples are compared by stochastic simulation.
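A Monte Carlo power study of this kind can be sketched as follows (using the simple max-to-sum ratio as a stand-in test statistic, not necessarily any of the three statistics compared in the paper; sample size, contamination level and replication count are arbitrary):

```python
import random
random.seed(2)

def stat(xs):
    # Stand-in test statistic for the largest observation: max / sum.
    return max(xs) / sum(xs)

n, reps = 10, 2000

# Estimate the upper 5% critical value under the null (i.i.d. Exp(1)).
null = sorted(stat([random.expovariate(1.0) for _ in range(n)]) for _ in range(reps))
crit = null[int(0.95 * reps)]

# Estimate power against one contaminated observation with a 5x larger mean.
hits = 0
for _ in range(reps):
    xs = [random.expovariate(1.0) for _ in range(n - 1)]
    xs.append(random.expovariate(1.0 / 5.0))  # outlier with mean 5
    hits += stat(xs) > crit
power = hits / reps
print(crit, power)
```

The same two-stage recipe (simulate the null to get a critical value, then simulate the alternative to count rejections) applies unchanged to the Fisher, Epstein, or Dixon statistics.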

8.
For linear mixed models with longitudinal data, this paper obtains M-estimators (robust estimators) of the parameters by the Fisher scoring method and establishes their asymptotic properties. The score test for heteroscedasticity under M-estimation is then studied, the power of the test statistic is assessed by simulation, and the effectiveness of the method is illustrated on a glucose data set.
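The Fisher scoring iteration itself is easy to show on a toy model (a one-parameter exponential rate, not the paper's linear mixed model; the data are made up). Each step is θ ← θ + I(θ)⁻¹U(θ), with U the score and I the Fisher information:

```python
# Fisher scoring for the rate of an exponential sample (toy illustration).
data = [0.8, 1.1, 2.4, 0.5, 1.7]  # hypothetical observations
n, total = len(data), sum(data)

theta = 0.3  # initial guess for the rate
for _ in range(50):
    score = n / theta - total       # U(theta) = d/dtheta of the log-likelihood
    info = n / theta ** 2           # Fisher information I(theta)
    theta = theta + score / info    # scoring update

print(theta, n / total)  # converges to the MLE  n / sum(x)
```

For M-estimation the score is replaced by a bounded influence function, but the shape of the iteration is the same.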

9.
岳珠 《数学研究》1996,29(3):98-102
For the linear model Y = Xβ + ρ with E(ρ) = 0, cov(ρ) = σ²G, G = diag(g₁, …, gₙ), this paper examines the influence of a single data point on the estimator σ̂² of σ², on the generalized variance |cov(β̂)| of the estimator β̂, and on the Fisher information matrix σ⁻²X′G⁻¹X, and gives a measure of this influence together with its simplified form and statistical interpretation.

10.
The Fisher information matrix of the LBVW distribution   (cited 2 times: 0 self-citations, 2 by others)
This paper examines statistical properties of the LBVW(θ₁, θ₂, δ, α) distribution proposed by Larry Lee, whose survival function is F̄(x₁, x₂) = exp{−[(x₁/θ₁)^(α/δ) + (x₂/θ₂)^(α/δ)]^δ} (xᵢ > 0, θᵢ > 0, i = 1, 2; 0 < δ ≤ 1, α > 0), and derives the Fisher information matrix of this distribution.
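A quick numerical sanity check of the Lee bivariate Weibull survival function F̄(x₁, x₂) = exp{−[(x₁/θ₁)^(α/δ) + (x₂/θ₂)^(α/δ)]^δ} (an editorial reconstruction of the garbled formula in the abstract, so the exponent α/δ is an assumption) is that δ = 1 reduces it to the product of two independent Weibull marginals:

```python
import math

def lbvw_survival(x1, x2, t1, t2, delta, alpha):
    """Survival function of LBVW(theta1, theta2, delta, alpha), as reconstructed."""
    u = (x1 / t1) ** (alpha / delta) + (x2 / t2) ** (alpha / delta)
    return math.exp(-u ** delta)

# With delta = 1 the model factorizes into independent Weibull marginals.
s = lbvw_survival(1.5, 0.7, 2.0, 1.0, 1.0, 2.0)
m1 = math.exp(-((1.5 / 2.0) ** 2.0))
m2 = math.exp(-((0.7 / 1.0) ** 2.0))
print(s, m1 * m2)  # equal when delta = 1
```

Smaller δ introduces positive dependence between the two components, which is what makes the Fisher information matrix of the four parameters nontrivial.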

11.
We present a method to perform tomographic imaging starting from very little data. The reconstruction of arbitrary images is performed using techniques borrowed from quantum state reconstruction. Almost maximal information gain from each datum is achieved, yielding a reduction in the radiation exposure of the imaged sample.

12.
A fundamental problem in financial trading is the correct and timely identification of turning points in stock value series. Detecting them enables profitable investment decisions, such as buying at low and selling at high. This paper evaluates the ability of sequential smoothing methods to detect turning points in financial time series. The novel idea is to select the smoothing and alarm coefficients based on the gain performance of the trading strategy. Application to real data shows that recursive smoothers outperform two-sided filters at the out-of-sample level. Copyright © 2012 John Wiley & Sons, Ltd.
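A minimal version of sequential smoothing with a turning-point alarm (an illustrative EWMA scheme; the smoothing constant, the sign-change alarm rule, and the toy series are placeholders, not the paper's tuned coefficients):

```python
def detect_upturn(series, lam=0.5):
    """Flag the first time the EWMA-smoothed series turns upward."""
    s = series[0]
    prev = s
    for t, x in enumerate(series[1:], start=1):
        s = lam * x + (1 - lam) * s   # recursive (one-sided) smoother
        if s > prev:                  # smoothed slope changed sign: alarm
            return t
        prev = s
    return None

prices = [10, 9, 8, 7, 6, 5, 6, 7, 8, 9, 10]  # V-shaped toy series, trough at t = 5
print(detect_upturn(prices))  # alarms at t = 6, one step after the true turn
```

Being recursive, the smoother uses only past values, so the alarm could be raised in real time; a two-sided filter would locate the trough more precisely but only after seeing future data.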

13.
This paper presents panoramic unmanned aerial vehicle (UAV) image stitching techniques based on an optimal Scale Invariant Feature Transform (SIFT) method. The image stitching representation associates a transformation matrix with each input image. In this study, we formulate stitching as a multi-image matching problem and use invariant local features to find matches between the images. An improved Geometric Algebra SIFT (GA-SIFT) algorithm is proposed to achieve fast feature extraction and matching for the scanned images. The proposed GA-SIFT method can locate more feature points, with greater accuracy, than the traditional SIFT method. The proposed adaptive threshold method addresses the heavy computational load and long stitching time that extracting and stitching a larger number of feature points would otherwise incur. A modified random sample consensus method is proposed to estimate the image transformation parameters and determine the solution with the best consensus for the data. The experimental results demonstrate that the proposed method greatly speeds up the image alignment process and produces satisfactory stitching results. The proposed stitching model for aerial images has good distinctiveness and robustness, and can save considerable time when stitching large UAV image sets.
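The consensus step can be illustrated in its simplest form (plain RANSAC estimating a pure translation between matched keypoints rather than the paper's modified variant with a full transformation matrix; the matches and tolerances are made up):

```python
import random
random.seed(3)

# Hypothetical matches: 15 inliers displaced by (5, -3), 5 gross outliers.
src = [(float(i), float(i % 4)) for i in range(20)]
dst = [(x + 5.0, y - 3.0) for x, y in src[:15]]
dst += [(random.uniform(-50, 50), random.uniform(-50, 50)) for _ in range(5)]

best, best_inliers = None, -1
for _ in range(100):
    i = random.randrange(len(src))          # minimal sample: one correspondence
    dx = dst[i][0] - src[i][0]
    dy = dst[i][1] - src[i][1]
    inliers = sum(                          # count matches consistent with it
        abs(b[0] - a[0] - dx) < 1e-6 and abs(b[1] - a[1] - dy) < 1e-6
        for a, b in zip(src, dst)
    )
    if inliers > best_inliers:
        best, best_inliers = (dx, dy), inliers

print(best, best_inliers)  # the consensus model is the true translation
```

For a homography the minimal sample would be four correspondences and the model a 3×3 matrix, but the hypothesize-and-verify loop is identical.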

14.
We present an adaptive method to extract shape-preserving information from a univariate data sample. The behavior of the signal is obtained by interpolating at a few adaptively selected data points with a linear combination of multiquadrics with variable scaling parameters. On the theoretical side, we give a sufficient condition for the existence of the scaled multiquadric interpolant. On the practical side, we give various examples to show the applicability of the method.
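A bare-bones multiquadric interpolant (with a single fixed shape parameter rather than the paper's variable scaling, and without the adaptive point selection; the data are made up):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

# Multiquadric interpolation of a hypothetical 1-D sample.
xs = [0.0, 1.0, 2.5, 4.0]
ys = [1.0, 3.0, 2.0, 5.0]
c = 0.8  # shape (scaling) parameter -- fixed here, not adaptively varied

phi = lambda r: math.sqrt(r * r + c * c)        # multiquadric basis
A = [[phi(xi - xj) for xj in xs] for xi in xs]  # interpolation matrix
w = solve(A, ys)

def interp(x):
    return sum(wj * phi(x - xj) for wj, xj in zip(w, xs))

print([round(interp(x), 6) for x in xs])  # reproduces the data at the nodes
```

The interpolation matrix is nonsingular for distinct nodes (Micchelli's theorem), which is what the paper's sufficient condition generalizes to variable scaling parameters.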

15.
Scattered data collected at sample points may be used to determine simple functions to best fit the data. An ideal choice for these simple functions is bivariate splines. Triangulation of the sample points creates partitions over which the bivariate splines may be defined, but the optimality of the approximation depends on the choice of triangulation. An algorithm, referred to as an Edge Swapping Algorithm, has been developed to transform an arbitrary triangulation of the sample points into an optimal triangulation for representation of the scattered data. A Matlab package implementing this algorithm for any triangulation on a given set of sample points has been completed.
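The local criterion behind edge swapping can be sketched with the standard incircle test (the Delaunay-style criterion; whether the paper's notion of optimality is exactly this one is an assumption here). For two triangles sharing an edge, a positive test suggests swapping the shared edge for the other diagonal:

```python
def incircle(a, b, c, d):
    """> 0 iff d lies strictly inside the circumcircle of CCW triangle (a, b, c)."""
    m = [[p[0] - d[0], p[1] - d[1],
          (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2] for p in (a, b, c)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(incircle(*tri, (0.9, 0.9)) > 0)  # True: inside  -> swap the diagonal
print(incircle(*tri, (2.0, 2.0)) > 0)  # False: outside -> keep the edge
```

Repeatedly swapping every edge that fails the test terminates in a locally optimal triangulation, which is the skeleton of any edge-swapping scheme.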

16.
One of the challenging tasks in today's image processing is image registration. Image registration is inevitable whenever images taken, for example, at different times or from different perspectives need to be compared or integrated. Typically, the locations of corresponding points in the different views of one object, or even of different objects, are distorted; motion or differing properties of the underlying imaging systems (MR, CT) may be responsible for the distortion. Thus, a basic problem is to find a meaningful spatial transformation of a given image such that the transformed image becomes similar to a given second one. Typically, the transformation is computed by minimizing a suitable similarity measure. For many applications it is also desirable to guide the registration by additional information, such as the locations of outstanding points. In this note, we present a general variational approach to image registration which allows a user-supplied similarity measure and a user-supplied regularizer, as well as the integration of external knowledge such as the locations of outstanding points.
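The "minimize a similarity measure over transformations" idea in its smallest form (1-D signals, a pure integer shift as the transformation, and the sum of squared differences as the measure; a sketch, not the paper's variational method):

```python
def ssd_register(f, g, max_shift):
    """Find the integer shift s minimizing the mean of (f[i] - g[i - s])^2
    over the overlap of the two signals."""
    best_s, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        cost, n = 0.0, 0
        for i in range(len(f)):
            j = i - s
            if 0 <= j < len(g):
                cost += (f[i] - g[j]) ** 2
                n += 1
        cost /= n
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s

g = [0, 0, 1, 2, 3, 0, 0, 0]
f = [0, 0, 0, 0, 1, 2, 3, 0]   # g shifted right by 2
print(ssd_register(f, g, 3))   # recovers the shift: 2
```

A full variational method replaces the brute-force shift search with a continuous, non-rigid deformation field, and adds a regularizer so the minimization problem stays well posed.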

17.
Sparse representation is a recently emerging data representation method that mimics the coding mechanism of the human cerebral cortex. With its robustness, resistance to interference, interpretability, and discriminative power, it is widely used in pattern recognition. Classifiers based on sparse representation have achieved impressive results in face recognition: the training samples are treated as a dictionary, and the sparsest representation of a test sample over this dictionary is sought, i.e., the test sample is reconstructed by a linear combination of as few training samples as possible. However, the classical sparse-representation classifier ignores the class labels of the training samples, so the selected training samples may come from many classes, which is unfavorable for classification; classifiers based on group sparsity were therefore proposed. Group-sparse methods take the class similarity of the training samples into account and aim to represent the test sample with training samples from as few classes as possible; their drawback is that training samples of the same class are either all selected or all discarded. In practice, faces are affected by illumination, expression, pose, and even occlusion, so the relationships among samples are complex; hence a locally weighted group-structured sparse representation method is finally introduced. This method represents the test sample mainly with training samples from classes similar to the test sample and with training samples from its neighborhood, reducing the interference of irrelevant classes and making the representation sparser and more discriminative.
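The dictionary view can be sketched in its most extreme form: a 1-sparse representation, which degenerates to matched filtering against normalized atoms (real sparse-representation classifiers solve an ℓ1-regularized reconstruction instead, and the vectors below are toy data):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Toy dictionary: two training samples ("atoms") per class.
train = {
    "class_a": [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
    "class_b": [[0.0, 1.0, 0.9], [0.1, 0.8, 1.0]],
}

def classify(x):
    """Assign x to the class of the single best-correlated atom (1-sparse code)."""
    x = normalize(x)
    best_cls, best_corr = None, -1.0
    for cls, atoms in train.items():
        for a in atoms:
            a = normalize(a)
            corr = abs(sum(xi * ai for xi, ai in zip(x, a)))
            if corr > best_corr:
                best_cls, best_corr = cls, corr
    return best_cls

print(classify([0.95, 0.15, 0.05]))  # class_a
print(classify([0.05, 0.9, 0.95]))   # class_b
```

Group-sparse and locally weighted variants change which atoms are allowed to enter the representation (whole classes at once, or only atoms near the test sample), not this basic reconstruct-and-compare logic.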

18.
An investigation of measuring risk by information entropy   (cited 4 times: 1 self-citation, 3 by others)
After analyzing the nature of risk, this paper argues that risk is a particular agent's perception of the uncertainty of both losses and returns in a financial investment. Among the many approaches to risk measurement, the entropy-function method has unique advantages, so the paper focuses on the rationality of the entropy function as a risk measure. A new risk measurement model is also proposed; its main mathematical properties are analyzed, and it is shown that the model can effectively measure financial risk for different agents, with a small computational burden and easy implementation.
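The entropy-as-risk idea in its simplest form (Shannon entropy of a discretized return distribution; this is the generic entropy measure, not the paper's new model, and the probabilities are made up):

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete return distribution."""
    assert abs(sum(p) - 1.0) < 1e-12
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

concentrated = [0.85, 0.05, 0.05, 0.05]  # outcome nearly certain: low uncertainty
dispersed    = [0.25, 0.25, 0.25, 0.25]  # maximal uncertainty over 4 outcomes
print(entropy(concentrated), entropy(dispersed))  # the dispersed one is riskier
```

Unlike variance, the entropy depends only on the probabilities, not on the outcome magnitudes, which is both the appeal and the main limitation of pure entropy-based risk measures.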

19.
Metamodels are used in many disciplines to replace simulation models of complex multivariate systems. To assess a metamodel's quality of fit for simulation, simple summaries returned by average-based statistics, such as the root-mean-square error (RMSE), are often used. The sample of points used in determining these averages is restricted in size, especially for simulation models of complex multivariate systems. Decisions based on average values can obviously be misleading when the sample size is inadequate, and the contribution made by each individual data point in such samples needs to be examined. This paper presents methods for assessing metamodel quality of fit graphically by means of two-dimensional plots. Three plot types are presented: the so-called circle plots, marksman plots, and ordinal plots. Such plots facilitate visual inspection of the effect on metamodel accuracy of each individual point in the data sample used for metamodel validation. The proposed methods can complement quantitative validation statistics, in particular for situations where there is not enough validation data or the validation data is too expensive to generate.

20.
Variable-fidelity modeling (VFM), sometimes also termed multi-fidelity modeling, refers to the utilization of two or more data layers of different accuracy in order to construct an inexpensive emulator of a given numerical high-fidelity model. In practical applications, this situation arises when simulators of different accuracy for the same physical process are given and it is assumed that many low-fidelity sample points are affordable, but the high-fidelity model is extremely costly to assess. More precisely, the VFM objective is to construct a predictor function which interpolates the primary sample data but is driven by the trend indicated by the secondary data. In view of the computational effort, the objective is to use as few high-fidelity sample points as possible to achieve a desired level of accuracy. A widely used VFM method is Cokriging. The technique yields the best linear unbiased estimator based on the given data, taking spatial correlation into account. One way to model spatial correlation is via positive definite correlation kernels that yield positive definite correlation matrices for distinct input sample points. In this contribution, we address the positive definiteness of Cokriging correlation matrices, which is necessary for the method but not guaranteed, due to the modeling of the cross-correlations. We discuss both the cases of distinct and coincident low- and high-fidelity sample points. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
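The positive-definiteness issue is easy to probe numerically with a Cholesky attempt on a toy 3×3 correlation matrix (the entries stand in for auto- and cross-correlations and are not from the paper; any symmetric matrix with a nonpositive Cholesky pivot fails to be positive definite):

```python
import math

def is_positive_definite(A, eps=1e-12):
    """Attempt a Cholesky factorization; a nonpositive pivot means A is not PD."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= eps:
                    return False
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return True

ok  = [[1.0, 0.5, 0.5], [0.5, 1.0, 0.8], [0.5, 0.8, 1.0]]
bad = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.9], [0.1, 0.9, 1.0]]  # cross-terms too strong
print(is_positive_definite(ok), is_positive_definite(bad))  # True False
```

In a Cokriging setting the diagonal blocks come from the low- and high-fidelity auto-correlation kernels, and it is the off-diagonal cross-correlation block that can destroy positive definiteness even when each block is individually valid, as the second matrix illustrates.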


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号