Similar Documents
20 similar documents found.
1.
The oscillation of a class of neutral hyperbolic functional differential equations with damping terms and continuous distributed deviating arguments is studied. Using the differential inequality method together with calculus techniques, several sufficient criteria are obtained for all solutions of such equations to be oscillatory under the Dirichlet boundary condition. The results generalize and improve those in the existing literature.

2.
For the problem of estimating the dispersion parameter in negative binomial regression models with covariates, the maximum likelihood and bootstrap maximum likelihood estimation methods are extended, and the performance of the estimators is examined, in the sense of absolute deviation, through simulation studies and real-data analysis. The results show that both the covariates and the sample size affect the estimation of the dispersion parameter.
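As a rough illustration of the two approaches compared in this abstract, the sketch below fits a negative binomial regression by numerical maximum likelihood and then bootstraps the dispersion estimate. The simulated data, the NB2 parameterization, and all variable names are illustrative assumptions, not the authors' actual setup.

```python
# Sketch: ML and bootstrap-ML estimation of the negative binomial dispersion
# parameter with covariates.  Simulated data, NB2 parameterization
# (Var = mu + alpha*mu^2) and all names are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
beta_true, alpha_true = np.array([1.0, 0.5]), 0.8
mu = np.exp(X @ beta_true)
y = rng.negative_binomial(n=1.0 / alpha_true, p=1.0 / (1.0 + alpha_true * mu))

def negloglik(params, y, X):
    beta, log_alpha = params[:-1], params[-1]
    alpha = np.exp(log_alpha)                  # keep the dispersion positive
    mu = np.exp(X @ beta)
    r = 1.0 / alpha
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
    return -ll.sum()

def fit_dispersion(y, X):
    start = np.zeros(X.shape[1] + 1)
    res = minimize(negloglik, start, args=(y, X), method="BFGS")
    return np.exp(res.x[-1])                   # alpha_hat

alpha_hat = fit_dispersion(y, X)

# Nonparametric bootstrap of the ML dispersion estimate (resample rows).
B = 200
boot = np.array([fit_dispersion(y[idx], X[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(B))])
print(f"ML alpha: {alpha_hat:.3f}, bootstrap mean: {boot.mean():.3f}, "
      f"mean absolute deviation from truth: {np.mean(np.abs(boot - alpha_true)):.3f}")
```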

3.
刘浩 (Liu Hao), 数学年刊A辑 (Chinese Annals of Mathematics, Series A), 2000, 21(2): 197-206
This paper studies linear-invariant families of holomorphic mappings on the unit ball, extending the invariant-family theory established by Pommerenke on the unit disk. Some characterizations of the order (rank) of invariant families on the unit ball are given; as an application, a simple proof of the distortion theorem for invariant families is presented. A more general distortion theorem is established, and the radius of univalence of invariant families is discussed. Finally, the third-order coefficients of the homogeneous expansions of extremal mappings of invariant families are studied, and several relations between the third-order and second-order coefficients are established.

4.
This paper studies linear-invariant families of holomorphic mappings on the unit ball, extending the invariant-family theory established by Pommerenke on the unit disk. Some characterizations of the order (rank) of invariant families on the unit ball are given; as an application, a simple proof of the distortion theorem for invariant families is presented. A more general distortion theorem is established, and the radius of univalence of invariant families is discussed. Finally, the third-order coefficients of the homogeneous expansions of extremal mappings of invariant families are studied, and several relations between the third-order and second-order coefficients are established.

5.
For the problem of estimating the dispersion parameter in negative binomial regression models with covariates, the maximum likelihood and bootstrap maximum likelihood estimation methods are extended, and the performance of the estimators is examined, in the sense of absolute deviation, through simulation studies and real-data analysis. The results show that both the covariates and the sample size affect the estimation of the dispersion parameter.

6.
The moderate deviation principle for the maximum likelihood estimator of the drift parameter of a Jacobi process in the super-stable case is considered, and its exact rate function is obtained. Moreover, it is shown that the maximum likelihood estimator of the drift parameter of the squared Bessel process and the maximum likelihood estimator constructed from the trajectory of the Jacobi process satisfy different moderate deviation principles.

7.
A discrete method for pricing double-barrier options is proposed. The reflection principle is used repeatedly to count the paths that touch the barriers, and the Boyle-Lau method is extended to choose the number of time steps, thereby reducing the bias of the traditional Cox-Ross-Rubinstein binomial algorithm. Numerical simulation results verify the accuracy and effectiveness of the proposed discrete method.
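The abstract's reflection-principle path counting is not reproduced here, but the baseline it improves on can be sketched: a plain Cox-Ross-Rubinstein tree that knocks out any node beyond either barrier. All parameter values below are made up for illustration; the well-known bias shows up as oscillation of the price in the number of steps N.

```python
# Sketch: double knock-out call priced on a plain Cox-Ross-Rubinstein tree.
# This is the baseline whose bias the reflection-principle / Boyle-Lau
# refinements reduce; parameters below are illustrative only.
import numpy as np

def crr_double_barrier_call(S0, K, L, U, r, sigma, T, N):
    """Knock-out call with lower barrier L and upper barrier U."""
    dt = T / N
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Terminal stock prices and payoff, knocked out outside (L, U).
    j = np.arange(N + 1)
    S = S0 * u ** j * d ** (N - j)
    V = np.where((S > L) & (S < U), np.maximum(S - K, 0.0), 0.0)

    # Backward induction, applying the knock-out condition at every node.
    for step in range(N - 1, -1, -1):
        j = np.arange(step + 1)
        S = S0 * u ** j * d ** (step - j)
        V = disc * (p * V[1:step + 2] + (1 - p) * V[0:step + 1])
        V = np.where((S > L) & (S < U), V, 0.0)
    return V[0]

# Illustrative run: the price oscillates in N, which is the bias that
# step-number choices in the spirit of Boyle-Lau are meant to control.
for N in (50, 100, 200, 400):
    print(N, round(crr_double_barrier_call(100, 100, 80, 130, 0.05, 0.25, 1.0, N), 4))
```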

8.
The generalized least deviation method (GLDM) is an important prioritization method in the analytic hierarchy process. This paper discusses the properties of the GLDM and its sensitivity analysis.

9.
Using deviation inequalities and moderate deviation results for multiple Wiener-Itô integrals, this paper obtains several asymptotic properties of the least squares estimator of the drift coefficient of the fractional Ornstein-Uhlenbeck (OU) process of the second kind, including deviation inequalities and Cramér-type moderate deviations. Self-normalized versions of these asymptotic results are also given and are used to construct confidence interval estimates for the drift coefficient and rejection regions for significance tests, in which the type II error probability tends to 0 at an exponential rate.
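The drift estimator studied here can be illustrated in its simplest discretized form. The sketch below uses an ordinary (non-fractional) OU path and the usual least-squares formula, so it only mimics the setting of the abstract, whose process is the fractional OU process of the second kind; step size, horizon and seed are illustrative.

```python
# Sketch: discretized least-squares estimator of the drift coefficient of an
# Ornstein-Uhlenbeck process dX_t = -theta*X_t dt + dW_t.  The abstract's
# process is the *fractional* OU process of the second kind; this ordinary
# OU example only illustrates the form of the estimator.
import numpy as np

rng = np.random.default_rng(1)
theta_true, T, n = 2.0, 50.0, 50_000
dt = T / n

# Euler-Maruyama simulation of the OU path.
X = np.zeros(n + 1)
for i in range(n):
    X[i + 1] = X[i] - theta_true * X[i] * dt + np.sqrt(dt) * rng.normal()

# Least-squares estimator: theta_hat = - int X dX / int X^2 dt (discretized).
dX = np.diff(X)
theta_hat = -np.sum(X[:-1] * dX) / (np.sum(X[:-1] ** 2) * dt)
print(f"theta_true = {theta_true}, theta_hat = {theta_hat:.3f}")
```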

10.
By writing the system of oscillation differential equations in vector form, the average vector field (AVF) method is used to obtain a vector difference iteration scheme for the vibration response. The discrete scheme is energy-preserving and second-order accurate, which yields an AVF method for nonlinear vibration problems. The basic steps of the AVF method are described, and when constructing the AVF scheme the corresponding mapped terms are given directly for several common terms in the differential equations. The AVF method is applied to the nonlinear simple pendulum and the Kepler problem; numerical results demonstrate its energy-preserving property and its ability to compute over long time intervals.
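The pendulum case mentioned in the abstract admits a compact AVF step, sketched below with illustrative step size and initial data (not the authors' code). For this Hamiltonian system the averaged force over the segment between successive states has a closed form, and the resulting implicit step preserves the energy up to solver tolerance.

```python
# Sketch: the average vector field (AVF) scheme for the nonlinear pendulum
#   q' = p,  p' = -sin(q),   H(q, p) = p^2/2 - cos(q).
# The AVF average of -sin over the segment from (q_n, p_n) to (q_{n+1}, p_{n+1})
# has a closed form; step size and initial data are illustrative assumptions.
import numpy as np
from scipy.optimize import fsolve

def avf_residual(z_new, z_old, h):
    q1, p1 = z_new
    q0, p0 = z_old
    dq = q1 - q0
    # average of -sin over the segment; take the limit when dq ~ 0
    avg_force = (np.cos(q1) - np.cos(q0)) / dq if abs(dq) > 1e-12 else -np.sin(0.5 * (q0 + q1))
    return [q1 - q0 - h * 0.5 * (p0 + p1),
            p1 - p0 - h * avg_force]

def avf_step(z_old, h):
    # explicit Euler predictor as the starting guess for the implicit solve
    guess = z_old + h * np.array([z_old[1], -np.sin(z_old[0])])
    return fsolve(avf_residual, guess, args=(z_old, h))

h, steps = 0.1, 10_000
z = np.array([2.0, 0.0])                      # (q0, p0)
energy = lambda z: 0.5 * z[1] ** 2 - np.cos(z[0])
E0 = energy(z)
for _ in range(steps):
    z = avf_step(z, h)
print(f"energy drift after {steps} steps: {energy(z) - E0:.2e}")
```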

11.
In a one-way random-effects model, we frequently estimate the variance components by the analysis-of-variance method and then, assuming the estimated values are true values of the variance components, we estimate the population mean. The conventional variance estimator for the estimate of the mean has a bias. This bias can become severe in contaminated data. We can reduce the bias by using the delta method. However, it still suffers from a large bias. We develop a jackknife variance estimator which is robust with respect to data contamination. This research was supported by the Korea Science and Engineering Foundation.
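The paper's specific jackknife construction is not reproduced here; the sketch below shows a generic delete-one-group jackknife variance for the GLS-weighted mean of a one-way random-effects model, with variance components from the usual ANOVA (method-of-moments) formulas. The data, group sizes, and weighting scheme are illustrative assumptions.

```python
# Sketch: delete-one-group jackknife variance for the estimated mean of a
# one-way random-effects model  y_ij = mu + a_i + e_ij.  Variance components
# are estimated by ANOVA (method-of-moments) formulas and the mean by GLS with
# the estimated weights; data and group sizes are illustrative.
import numpy as np

rng = np.random.default_rng(2)
sizes = np.array([4, 6, 8, 5, 7, 9, 4, 6])               # unbalanced groups
mu, sig_a, sig_e = 10.0, 2.0, 1.0
groups = [mu + rng.normal(0, sig_a) + rng.normal(0, sig_e, m) for m in sizes]

def estimate_mean(groups):
    n_i = np.array([len(g) for g in groups])
    a, N = len(groups), n_i.sum()
    ybar_i = np.array([g.mean() for g in groups])
    ybar = np.concatenate(groups).mean()
    ssb = np.sum(n_i * (ybar_i - ybar) ** 2)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    msw = ssw / (N - a)
    n0 = (N - np.sum(n_i ** 2) / N) / (a - 1)
    sa2 = max((ssb / (a - 1) - msw) / n0, 0.0)            # ANOVA estimate of sigma_a^2
    w = 1.0 / (sa2 + msw / n_i)                           # GLS weights
    return np.sum(w * ybar_i) / w.sum(), 1.0 / w.sum()    # (mu_hat, plug-in variance)

mu_hat, var_plugin = estimate_mean(groups)

# Delete-one-group jackknife variance of mu_hat.
a = len(groups)
leave_one_out = np.array([estimate_mean(groups[:i] + groups[i + 1:])[0] for i in range(a)])
var_jack = (a - 1) / a * np.sum((leave_one_out - leave_one_out.mean()) ** 2)
print(f"mu_hat={mu_hat:.3f}  plug-in var={var_plugin:.4f}  jackknife var={var_jack:.4f}")
```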

12.
This paper deals with the bias reduction of the Akaike information criterion (AIC) for selecting variables in multivariate normal linear regression models when the true distribution of the observations is an unknown nonnormal distribution. We propose a corrected version of AIC which is partially constructed by the jackknife method and is adjusted to be an exact unbiased estimator of the risk when the candidate model includes the true model. It is pointed out that the influence of nonnormality on the bias of our criterion is smaller than for AIC and TIC. We verify that our criterion is better than AIC, TIC and EIC by conducting numerical experiments.

13.
The bootstrap (Efron, 1979, 1982) is a very simple resampling plan and has been shown to be successful in estimating the bias and other measures of statistical error of a number of estimators. It gives freedom from the constraints of traditional parametric theory at the cost of performing the usual statistical calculations a hundred or a thousand times over. In this letter another example of bootstrap estimation of bias is given. It is interesting that Quenouille's (1949) jackknife (see Miller, 1964, for a review) fails completely in this case.
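The letter's own example, in which the jackknife fails, is not reproduced here; the sketch below only shows the generic bootstrap bias recipe on a case with a known answer (the plug-in variance estimator, whose true bias is -sigma^2/n), with the jackknife bias estimate alongside for comparison.

```python
# Sketch: bootstrap estimate of the bias of an estimator, shown on the plug-in
# (divide-by-n) variance estimator, whose true bias is -sigma^2/n.  This only
# illustrates the generic resampling recipe, not the letter's own example.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=0.0, scale=2.0, size=30)          # sigma^2 = 4, n = 30

def plug_in_var(sample):
    return np.mean((sample - sample.mean()) ** 2)

theta_hat = plug_in_var(x)

# Bootstrap bias estimate: average of the resampled statistic minus theta_hat.
B = 2000
boot = np.array([plug_in_var(rng.choice(x, size=x.size, replace=True)) for _ in range(B)])
bias_boot = boot.mean() - theta_hat

# Jackknife bias estimate for comparison: (n-1) * (mean of leave-one-out - theta_hat).
n = x.size
loo = np.array([plug_in_var(np.delete(x, i)) for i in range(n)])
bias_jack = (n - 1) * (loo.mean() - theta_hat)

print(f"true bias  = {-4.0 / n:.4f}")
print(f"bootstrap  = {bias_boot:.4f}")
print(f"jackknife  = {bias_jack:.4f}")
```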

14.
The use of the jackknife method is successful in many situations. However, when the observations are from an m-dependent stationary process, the ordinary jackknife may provide an inconsistent variance estimator. It is shown in this note that this deficiency of the jackknife can be rectified, and the jackknife variance estimator proposed is strongly consistent.

15.
To estimate the dispersion of an M-estimator computed using Newton's iterative method, the jackknife method usually requires repeating the iterative process n times, where n is the sample size. To simplify the computation, one-step jackknife estimators, which require no iteration, are proposed in this paper. Asymptotic properties of the one-step jackknife estimators are obtained under some regularity conditions in the i.i.d. case and in a linear or nonlinear model. All the one-step jackknife estimators are shown to be asymptotically equivalent, and they are also asymptotically equivalent to the original jackknife estimator. Hence one may use the dispersion estimator whose computation is simplest. Finite sample properties of several one-step jackknife estimators are examined in a simulation study. The research was supported by the Natural Sciences and Engineering Research Council of Canada.
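The paper studies several one-step variants; the sketch below shows one natural version for a Huber location M-estimator, where each leave-one-out value is a single Newton step from the full-sample estimate, so no further iteration is needed. Data and tuning constant are illustrative assumptions.

```python
# Sketch: a one-step jackknife variance estimate for a Huber location
# M-estimator.  The full-sample estimate is found by Newton iteration; each
# leave-one-out value is then a single Newton step from it.
import numpy as np

def psi(u, c=1.345):
    return np.clip(u, -c, c)

def psi_prime(u, c=1.345):
    return (np.abs(u) <= c).astype(float)

def huber_location(x, tol=1e-10, max_iter=100):
    theta = np.median(x)                         # robust starting value
    for _ in range(max_iter):
        r = x - theta
        step = psi(r).sum() / psi_prime(r).sum()
        theta += step
        if abs(step) < tol:
            break
    return theta

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 45), rng.normal(8, 1, 5)])   # contaminated sample
n = x.size
theta_hat = huber_location(x)

# One Newton step from theta_hat on each leave-one-out sample (vectorized).
r = x - theta_hat
sum_psi, sum_dpsi = psi(r).sum(), psi_prime(r).sum()
theta_loo = theta_hat + (sum_psi - psi(r)) / (sum_dpsi - psi_prime(r))

var_onestep_jack = (n - 1) / n * np.sum((theta_loo - theta_loo.mean()) ** 2)
print(f"theta_hat = {theta_hat:.3f}, one-step jackknife variance = {var_onestep_jack:.4f}")
```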

16.
Consider a partially linear regression model with an unknown vector parameter β, an unknown function g(·), and unknown heteroscedastic error variances. Chen and You proposed a semiparametric generalized least squares estimator (SGLSE) for β, which takes the heteroscedasticity into account to increase efficiency. For inference based on this SGLSE, it is necessary to construct a consistent estimator for its asymptotic covariance matrix. However, when there exists within-group correlation, the traditional delta method and the delete-1 jackknife estimation fail to offer such a consistent estimator. In this paper, a delete-group jackknife method that deletes grouped partial residuals is examined. It is shown that the delete-group jackknife method can indeed provide a consistent estimator for the asymptotic covariance matrix in the presence of within-group correlations. This result is an extension of that in [21].

17.
Data sharpening involves perturbing the data to improve the performance of a statistical method. The versions of it that have been proposed in the past have been for bias reduction in curve estimation, and the amount of perturbation of each datum has been determined by an explicit formula. This article suggests a distance-based form of data sharpening, in which the sum of the distances that data are moved is minimized subject to a constraint imposed on an estimator. The constraint could be one that leads to bias reduction, or to variance or variability reduction, or to a curve estimator being monotone or unimodal. In contrast to earlier versions of the method, in the form presented in this article the amount and extent of sharpening is determined implicitly by a formula that is typically given as the solution of a Lagrange-multiplier equation. Sometimes the solution can be found by Newton–Raphson iteration, although when qualitative constraints are imposed it usually requires quadratic programming or a related method.
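To make the idea concrete, the sketch below sharpens only the responses (a simplification of the article's general distance-based formulation) so that a Nadaraya-Watson fit becomes monotone on a grid, solving the constrained least-movement problem with a general-purpose SLSQP call rather than a dedicated quadratic-programming routine. The data, kernel, bandwidth and grid are all illustrative assumptions.

```python
# Sketch: distance-based data sharpening, simplified to moving only the
# responses so that a Nadaraya-Watson fit is nondecreasing on a grid.  The
# smoother is linear in y, so the monotonicity constraints are linear and the
# least-movement problem is a small quadratic program.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, h = 40, 0.15
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(1.5 * x) + 0.3 * x + rng.normal(0, 0.25, n)     # roughly increasing trend

def nw_weights(t, x, h):
    """Gaussian-kernel Nadaraya-Watson weights: m(t) = weights @ y."""
    k = np.exp(-0.5 * ((t - x) / h) ** 2)
    return k / k.sum()

grid = np.linspace(0.05, 0.95, 30)
W = np.array([nw_weights(t, x, h) for t in grid])          # (30, n) weight matrix
A = np.diff(W, axis=0)                                     # rows: w(t_{k+1}) - w(t_k)

# Minimize the total squared movement of the responses subject to A @ y_tilde >= 0,
# i.e. the sharpened fit is nondecreasing across the grid.
res = minimize(lambda yt: np.sum((yt - y) ** 2), y, method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda yt: A @ yt}])
y_sharp = res.x

print("movement of first 5 sharpened points:", np.round(np.abs(y_sharp - y)[:5], 3))
print("smallest fit increment, raw      :", np.round((A @ y).min(), 4))
print("smallest fit increment, sharpened:", np.round((A @ y_sharp).min(), 4))
```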

18.
Smoothed jackknife empirical likelihood method for ROC curve
In this paper we propose a smoothed jackknife empirical likelihood method to construct confidence intervals for the receiver operating characteristic (ROC) curve. By applying the standard empirical likelihood method for a mean to the jackknife sample, the empirical likelihood ratio statistic can be calculated by simply solving a single equation. Therefore, this procedure is easy to implement. Wilks' theorem for the empirical likelihood ratio statistic is proved, and a simulation study is conducted to compare the performance of the proposed method with other methods.

19.
We discuss the problem of constructing information criteria by applying the bootstrap methods. Various bias and variance reduction methods are presented for improving the bootstrap bias correction term in computing the bootstrap information criterion. The properties of these methods are investigated both in theoretical and numerical aspects, for which we use a statistical functional approach. It is shown that the bootstrap method automatically achieves the second-order bias correction if the bias of the first-order bias correction term is properly removed. We also show that the variance associated with bootstrapping can be considerably reduced for various model estimation procedures without any analytical argument. Monte Carlo experiments are conducted to investigate the performance of the bootstrap bias and variance reduction techniques.
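A bare-bones version of the bootstrap information criterion discussed here can be sketched for a normal linear regression fitted by maximum likelihood: the bootstrap estimates the optimism of the maximized log-likelihood and the criterion penalizes by twice that amount. The paper's bias- and variance-reduction refinements are not implemented; the model, the pairs bootstrap, and all names below are illustrative assumptions.

```python
# Sketch: a bootstrap information criterion for normal linear regression.  The
# bootstrap term estimates the optimism of the maximized log-likelihood,
#   b* = E*[ loglik(theta*_b; y*_b) - loglik(theta*_b; y) ],
# and the criterion is -2*loglik(theta_hat; y) + 2*b*.
import numpy as np

rng = np.random.default_rng(6)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(0, 1.0, n)   # third coefficient is 0

def fit_mle(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)                    # MLE of the error variance
    return beta, sigma2

def loglik(beta, sigma2, X, y):
    resid = y - X @ beta
    return -0.5 * (len(y) * np.log(2 * np.pi * sigma2) + np.sum(resid ** 2) / sigma2)

beta_hat, s2_hat = fit_mle(X, y)
B = 500
optimism = []
for _ in range(B):
    idx = rng.integers(0, n, n)                              # pairs (row) bootstrap
    beta_b, s2_b = fit_mle(X[idx], y[idx])
    optimism.append(loglik(beta_b, s2_b, X[idx], y[idx]) - loglik(beta_b, s2_b, X, y))
bstar = np.mean(optimism)

eic = -2 * loglik(beta_hat, s2_hat, X, y) + 2 * bstar
aic = -2 * loglik(beta_hat, s2_hat, X, y) + 2 * (X.shape[1] + 1)
print(f"bootstrap bias term b* = {bstar:.2f}   bootstrap IC = {eic:.2f}   AIC = {aic:.2f}")
```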

20.
In this article, we propose a new method of bias reduction in nonparametric regression estimation. The proposed new estimator has asymptotic bias of order h^4, where h is a smoothing parameter, in contrast to the usual bias order h^2 for local linear regression. In addition, the proposed estimator has the same order of asymptotic variance as the local linear regression. Our proposed method is closely related to the bias reduction method for kernel density estimation proposed by Chung and Lindsay (2011). However, our method is not a direct extension of their density estimate, but a totally new one based on the bias cancelation result of their proof.
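The proposed h^4-bias estimator itself is not reconstructed here; the sketch below only shows the local linear baseline it is compared against (bias of order h^2), with a Gaussian kernel and illustrative data and bandwidth.

```python
# Sketch: local linear regression, the h^2-bias benchmark referred to in this
# abstract.  Gaussian kernel, bandwidth and data are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n, h = 200, 0.08
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

def local_linear(t, x, y, h):
    """Weighted least squares of y on (1, x - t); the intercept estimates m(t)."""
    w = np.exp(-0.5 * ((x - t) / h) ** 2)
    Z = np.column_stack([np.ones_like(x), x - t])
    WZ = Z * w[:, None]
    coef = np.linalg.solve(Z.T @ WZ, WZ.T @ y)
    return coef[0]

grid = np.linspace(0.05, 0.95, 10)
fit = np.array([local_linear(t, x, y, h) for t in grid])
print(np.round(fit, 3))
print(np.round(np.sin(2 * np.pi * grid), 3))    # true regression function for comparison
```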
