Similar documents (20 found; search time 31 ms)
1.
In the estimation of parametric models for stationary spatial or spatio-temporal data on a d-dimensional lattice, for d ≥ 2, the achievement of asymptotic efficiency under Gaussianity, and asymptotic normality more generally, with standard convergence rate, faces two obstacles. One is the "edge effect", which worsens with increasing d. The other is the possible difficulty of computing a continuous-frequency form of Whittle estimate or a time-domain Gaussian maximum likelihood estimate, due mainly to the Jacobian term. This is especially a problem in "multilateral" models, which are naturally expressed in terms of lagged values in both directions for one or more of the d dimensions. An extension of the discrete-frequency Whittle estimate from the time-series literature deals conveniently with the computational problem, but when subjected to a standard device for avoiding the edge effect it has disastrous asymptotic performance, along with finite-sample numerical drawbacks: the objective function lacks a minimum-distance interpretation and loses any global convexity properties. We overcome these problems by first optimizing a standard, guaranteed non-negative, discrete-frequency Whittle function, without edge-effect correction, providing an estimate with a slow convergence rate, and then improving this by a sequence of computationally convenient approximate Newton iterations using a modified, almost-unbiased periodogram; the desired asymptotic properties are achieved after finitely many steps. The asymptotic regime allows increase in both directions of all d dimensions, with the central limit theorem established after re-ordering as a triangular array. Our work also offers something new for "unilateral" models. When the data are non-Gaussian, the asymptotic variances of all parameter estimates may be affected, and we propose consistent, non-negative definite estimates of the asymptotic variance matrix.
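As a rough illustration of the discrete-frequency Whittle objective in the simplest one-dimensional (time series) case, one might sketch the following; the AR(1) model and the grid search are illustrative choices only, and none of the paper's lattice, edge-effect, or Newton-step machinery appears here:

```python
import numpy as np

def whittle_ar1(x):
    """Discrete-frequency Whittle estimate of an AR(1) coefficient.

    One-dimensional sketch only: the innovation variance is profiled out,
    and the objective is minimized over a coarse grid of phi values.
    """
    n = len(x)
    # periodogram at the Fourier frequencies 2*pi*j/n, j = 1, ..., n//2 - 1
    per = np.abs(np.fft.fft(x - x.mean())) ** 2 / (2.0 * np.pi * n)
    w = 2.0 * np.pi * np.arange(1, n // 2) / n
    per = per[1:n // 2]
    best_phi, best_val = 0.0, np.inf
    for phi in np.linspace(-0.95, 0.95, 381):
        # AR(1) spectral shape |1 - phi e^{-iw}|^{-2}, variance profiled out
        g = 1.0 / np.abs(1.0 - phi * np.exp(-1j * w)) ** 2
        val = np.log(np.mean(per / g)) + np.mean(np.log(g))
        if val < best_val:
            best_phi, best_val = phi, val
    return best_phi
```

The profiled objective log(mean(I/g)) + mean(log g) is the standard Whittle likelihood after maximizing over the innovation variance.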

2.
We consider the estimation of the regression operator r in the functional model Y = r(x) + ε, where the explanatory variable x is of functional fixed-design type, the response Y is a real random variable, and the error process ε is a second-order stationary process. We construct a kernel-type estimate of r from functional data curves and correlated errors, and study its performance in terms of mean-square convergence and convergence in probability. In particular, we consider the cases of short- and long-range error processes. When the errors are negatively correlated or come from a short-memory process, the asymptotic normality of this estimate is derived. Finally, some simulation studies are conducted for a fractional autoregressive integrated moving average error process and for an Ornstein-Uhlenbeck error process.
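A minimal sketch of a kernel-type regression estimate on a fixed design, simplified to a scalar covariate and i.i.d. errors (the paper's setting has functional covariates and correlated errors); the Gaussian kernel and bandwidth are illustrative choices:

```python
import numpy as np

def kernel_regression(x_design, y, x0, h):
    """Kernel (Nadaraya-Watson-type) estimate of r(x0) from fixed-design
    pairs (x_i, y_i), using a Gaussian kernel with bandwidth h."""
    w = np.exp(-0.5 * ((x_design - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)
```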

3.
Let (X, Y) be an R^d × N_0-valued random vector where the conditional distribution of Y given X = x is a Poisson distribution with mean m(x). We estimate m by a local polynomial kernel estimate defined by maximizing a localized log-likelihood function. We use this estimate of m(x) to estimate the conditional distribution of Y given X = x by a corresponding Poisson distribution and to construct confidence intervals of level α for Y given X = x. Under mild regularity conditions on m(x) and on the distribution of X we show strong convergence of the integrated L1 distance between the Poisson distribution and its estimate. We also demonstrate that the corresponding confidence interval has asymptotically (i.e., for sample size tending to infinity) level α, and that the probability that the length of this confidence interval deviates from the optimal length by more than one converges to zero as the number of samples tends to infinity.
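In the local-constant special case (polynomial degree zero), maximizing the localized Poisson log-likelihood sum_i w_i (Y_i log m − m) in m has a closed form, a kernel-weighted mean; a sketch under that simplification (the paper's estimator is genuinely local polynomial):

```python
import numpy as np

def local_poisson_mean(X, Y, x0, h):
    """Local-constant maximizer of the kernel-localized Poisson
    log-likelihood: the zero of d/dm sum_i w_i*(Y_i*log m - m) is
    m = sum(w*Y)/sum(w), a weighted mean."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * Y) / np.sum(w)
```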

4.
Consider observations (representing lifelengths) taken on a random field indexed by lattice points. Our purpose is to estimate the hazard rate r(x), which is the rate of failure at time x for the survivors up to time x. We estimate r(x) by the nonparametric estimator constructed in terms of a kernel-type estimator for the density f(x) and the natural empirical estimator for the survival function. Under some general mixing assumptions, the limiting distribution of the estimator at multiple points is shown to be multivariate normal. The result is useful in establishing confidence bands for r(x) with x in an interval.
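Ignoring the random-field dependence structure, the hazard estimator of this form can be sketched as a kernel density estimate divided by the empirical survival function, here for i.i.d. lifelengths (all tuning choices illustrative):

```python
import numpy as np

def hazard_estimate(data, x0, h):
    """Kernel-type hazard rate estimate r(x0) = fhat(x0) / (1 - F_n(x0)):
    Gaussian-kernel density estimate over the empirical survival fraction."""
    fhat = np.mean(np.exp(-0.5 * ((data - x0) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
    survival = np.mean(data > x0)   # empirical P(lifetime > x0)
    return fhat / survival
```

For exponential lifelengths the true hazard is constant, which makes the sketch easy to sanity-check.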

5.
Let f be a multivariate density and f_n be a kernel estimate of f drawn from the n-sample X1,…,Xn of i.i.d. random variables with density f. We compute the asymptotic rate of convergence towards 0 of the volume of the symmetric difference between the t-level set {f ≥ t} and its plug-in estimator {f_n ≥ t}. As a corollary, we obtain the exact rate of convergence of a plug-in-type estimate of the density level set corresponding to a fixed probability for the law induced by f.
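The quantity being studied can be sketched numerically: form the plug-in set {f_n ≥ t} and approximate the volume of its symmetric difference with {f ≥ t} on a grid. A univariate Gaussian sketch, with all tuning choices (bandwidth, grid, level) illustrative:

```python
import numpy as np

def sym_diff_volume(sample, true_density, t, grid, h):
    """Grid approximation of the volume of {f >= t} symmetric-difference
    {f_n >= t}, where f_n is a Gaussian-kernel density estimate."""
    diffs = grid[:, None] - sample[None, :]
    fn = np.exp(-0.5 * (diffs / h) ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
    f = true_density(grid)
    mismatch = (f >= t) != (fn >= t)          # points where the sets disagree
    return mismatch.mean() * (grid[-1] - grid[0])
```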

6.
In this paper, we use the kernel method to estimate sliced average variance estimation (SAVE) and prove that this estimator is both asymptotically normal and root-n consistent. We use this kernel estimator to provide more insight into the differences between slicing estimation and other sophisticated local smoothing methods. Finally, we suggest a Bayes information criterion (BIC) to estimate the dimensionality of SAVE. Examples and real data are presented to illustrate our method.
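For orientation, the classical slicing form of SAVE (the baseline the paper's kernel version refines) can be sketched: standardize the predictors, slice on the response, and average (I − Cov(Z | slice))²; the leading eigenvectors estimate the dimension-reduction directions:

```python
import numpy as np

def save_direction(X, y, H=10):
    """First SAVE direction via slicing: M = sum_h p_h (I - Cov(Z|slice h))^2
    on standardized predictors Z; the leading eigenvector is mapped back to
    the original X scale."""
    n, p = X.shape
    L = np.linalg.cholesky(np.cov(X.T))
    A = np.linalg.inv(L).T                    # whitening matrix: Z = (X - mu) A
    Z = (X - X.mean(axis=0)) @ A
    order = np.argsort(y)                     # equal-size slices on sorted y
    M = np.zeros((p, p))
    for chunk in np.array_split(order, H):
        D = np.eye(p) - np.cov(Z[chunk].T)
        M += (len(chunk) / n) * (D @ D)
    vals, vecs = np.linalg.eigh(M)
    beta = A @ vecs[:, -1]                    # back to the X scale
    return beta / np.linalg.norm(beta)
```

The test model Y = X1² + noise is the canonical case where SAVE succeeds (and SIR fails) because the dependence on X1 is symmetric.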

7.
We propose a parametric model for a bivariate stable Lévy process based on a Lévy copula as a dependence model. We estimate the parameters of the full bivariate model by maximum likelihood estimation. As an observation scheme we assume that we observe all jumps larger than some ε>0 and base our statistical analysis on the resulting compound Poisson process. We derive the Fisher information matrix and prove asymptotic normality of all estimates when the truncation point ε→0. A simulation study investigates the loss of efficiency because of the truncation.

8.
A density f = f(x1,…,xd) on [0,∞)^d is block decreasing if for each j ∈ {1,…,d} it is a decreasing function of xj when all other components are held fixed. Consider the class of all block decreasing densities on [0,1]^d bounded by B. We study the minimax risk over this class using n i.i.d. observations, the loss being measured by the L1 distance between the estimate and the true density. We prove that if S = log(1+B), lower bounds for the risk are of the form C(S^d/n)^{1/(d+2)}, where C is a function of d only. We also prove that a suitable histogram with unequal bin widths as well as a variable kernel estimate achieve the optimal multivariate rate. We present a procedure for choosing all parameters in the kernel estimate automatically without losing minimax optimality, even if B and the support of f are unknown.

9.
In this paper we aim to estimate the direction in general single-index models and to select important variables simultaneously when a diverging number of predictors are involved in the regression. To this end, we propose the nonconcave penalized inverse regression method. Specifically, the resulting estimation with the SCAD penalty enjoys an oracle property in semi-parametric models even when the dimension pn of predictors goes to infinity. Under regularity conditions we also achieve asymptotic normality when the dimension of the predictor vector goes to infinity at the rate pn = o(n^{1/3}), where n is the sample size, which enables us to construct confidence intervals/regions for the estimated index. The asymptotic results are augmented by simulations, and illustrated by the analysis of an air pollution dataset.
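The SCAD penalty referred to here has a standard closed form (Fan and Li's definition, with the conventional a = 3.7): linear near zero, a quadratic taper, then constant. A sketch:

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty p_lambda(|t|): lam*|t| for |t| <= lam, a quadratic
    bridge on (lam, a*lam], and the constant (a+1)*lam^2/2 beyond, so large
    coefficients are not shrunk (the source of the oracle property)."""
    t = np.abs(t)
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    return np.where(small, lam * t,
           np.where(mid, -(t ** 2 - 2 * a * lam * t + lam ** 2) / (2 * (a - 1)),
                    (a + 1) * lam ** 2 / 2))
```

The three pieces join continuously at |t| = lam and |t| = a*lam, which the test checks.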

10.
We consider the problem of testing whether the common mean of a single n-vector of multivariate normal random variables with known variance and unknown common correlation ρ is zero. We derive the standardized likelihood ratio test for known ρ and explore different ways of proceeding when ρ is unknown. We evaluate the performance of the standardized statistic where ρ is replaced with an estimate of ρ, and determine the critical value cn that controls the type I error rate for the least favorable ρ in [0,1]. The constant cn increases with n, and this procedure has pathological behavior if ρ depends on n and ρn converges to zero at a certain rate. As an alternative approach, we replace ρ with the upper limit of a (1−βn) confidence interval chosen so that cn = c for all n. We determine βn so that the type I error rate is exactly controlled for all ρ in [0,1]. We also investigate a simpler approach in which we bound the type I error rate. The former method performs well for all n, while the less powerful bound method may be useful in some settings as a simple approach. The proposed tests can be used in different applications, including within-cluster resampling and combining exchangeable p-values.

11.
Euclidean distance-based classification rules are derived within a certain nonclassical linear model approach and applied to elliptically contoured samples having a density generating function g. A geometric measure-theoretic method is then developed to evaluate exact probabilities of correct classification for multivariate uncorrelated feature vectors. In doing so, one has to measure suitably defined sets with certain standardized measures. The geometric key point is that the intersection percentage functions of the areas under investigation coincide with those of certain parabolic cylinder type sets. The intersection percentage functions of the latter sets can be described as threefold integrals. It turns out that these intersection percentage functions simultaneously yield geometric representation formulae for the doubly noncentral g-generalized F-distributions. Hence, in addition to new formulae for evaluating probabilities of correct classification, we obtain new geometric representation formulae for the doubly noncentral g-generalized F-distributions. A numerical study concerning several aspects of evaluating both probabilities of correct classification and values of the doubly noncentral g-generalized F-distributions demonstrates the advantageous computational properties of the new approach; this impression is supported by comparison with the literature. It is shown that probabilities of correct classification depend on the parameters of the underlying sample distribution through a certain well-defined set of secondary parameters. If the underlying parameters are unknown, we propose to estimate the probabilities of correct classification.

12.
Let (X, Y) have regression function m(x) = E(Y | X = x), and let X have a marginal density f1(x). We consider two nonparametric estimates of m(x): the Watson estimate when f1 is known and the Yang estimate when f1 is known or unknown. For both estimates the asymptotic distribution of the maximal deviation from m(x) is proved, thus extending results of Bickel and Rosenblatt for the estimation of density functions.

13.
This paper deals with the bias correction of the cross-validation (CV) criterion for estimating the predictive Kullback-Leibler information. A bias-corrected CV criterion is proposed by replacing the ordinary maximum likelihood estimator with the maximizer of an adjusted log-likelihood function. The adjustment is slight and simple, but the improvement in bias is remarkable: the bias of the ordinary CV criterion is O(n^{-1}), whereas that of the bias-corrected CV criterion is O(n^{-2}). We verify by numerical experiments that our criterion has smaller bias than the AIC, TIC, EIC and the ordinary CV criterion.
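For concreteness, the ordinary leave-one-out CV criterion that estimates −2 times the predictive log-likelihood can be sketched for a simple normal model fit by ordinary maximum likelihood; the paper's bias-corrected version replaces the MLE with an adjusted-likelihood maximizer, which is not reproduced here:

```python
import numpy as np

def cv_criterion(y):
    """Ordinary leave-one-out CV estimate of -2 * predictive log-likelihood
    for the N(mu, sigma^2) model, with MLEs refit on each leave-one-out
    sample; a bias-corrected criterion would adjust the likelihood."""
    n = len(y)
    total = 0.0
    for i in range(n):
        rest = np.delete(y, i)
        mu, var = rest.mean(), rest.var()   # MLEs (ddof = 0) without point i
        total += np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var
    return total
```

For standard normal data the per-observation value should be near log(2π) + 1 ≈ 2.84.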

14.
The censored single-index model provides a flexible way of modelling the association between a response and a set of predictor variables when the response variable is randomly censored and the link function is unknown. It presents a technique for “dimension reduction” in semiparametric censored regression models and generalizes the existing accelerated failure time models for survival analysis. This paper proposes two methods for the estimation of single-index models with randomly censored samples. We first transform the censored data into synthetic data or pseudo-responses unbiasedly, then obtain estimates of the index coefficients by the rOPG or rMAVE procedures of Xia (2006) [1]. Finally, we estimate the unknown nonparametric link function using techniques for univariate censored nonparametric regression. The estimators for the index coefficients are shown to be root-n consistent and asymptotically normal. In addition, the estimator for the unknown regression function is a local linear kernel regression estimator and achieves the same efficiency as if the index parameters were known. Monte Carlo simulations are conducted to illustrate the proposed methodologies.

15.
In this paper, an information-based criterion is proposed for carrying out change point analysis and variable selection simultaneously in linear models with a possible change point. Under some weak conditions, this criterion is shown to be strongly consistent in the sense that, with probability one, it chooses the smallest true model for large n. Its byproducts include strongly consistent estimates of the regression coefficients regardless of whether there is a change point. If there is a change point, its byproducts also include a strongly consistent estimate of the change point parameter. In addition, an algorithm is given which significantly reduces the computation time needed by the proposed criterion for the same precision. Results from a simulation study are also presented.
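The flavor of such a criterion can be sketched in the simplest possible reduction, a single mean-shift model compared against the no-change model with a BIC-type penalty (the paper handles full linear models and simultaneous variable selection, which this toy omits):

```python
import numpy as np

def choose_change_point(y, min_seg=5):
    """Compare the no-change model with the best single mean-shift model
    via the BIC-type criterion n*log(RSS/n) + k*log(n); returns the chosen
    change index, or None if the no-change model wins."""
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)
    best = (n * np.log(rss0 / n) + 1 * np.log(n), None)   # one mean, k = 1
    for tau in range(min_seg, n - min_seg):
        rss = np.sum((y[:tau] - y[:tau].mean()) ** 2) \
            + np.sum((y[tau:] - y[tau:].mean()) ** 2)
        crit = n * np.log(rss / n) + 2 * np.log(n)        # two means, k = 2
        if crit < best[0]:
            best = (crit, tau)
    return best[1]
```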

16.
Let X be a p-variate (p ≥ 3) vector normally distributed with mean θ and known covariance matrix Σ. It is desired to estimate θ under the quadratic loss (δ − θ)^t Q(δ − θ), where Q is a known positive definite matrix. A broad class of minimax estimators for θ is developed.  相似文献   
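The best-known member of such a class is the James-Stein estimator, the special case Σ = Q = I: δ(X) = (1 − (p−2)/‖X‖²)X. A Monte Carlo sketch comparing its risk with that of the unbiased estimator δ(X) = X:

```python
import numpy as np

def james_stein(x):
    """James-Stein shrinkage estimator of the mean of N_p(theta, I), p >= 3;
    the Sigma = Q = I special case of the broader minimax class."""
    p = len(x)
    return (1.0 - (p - 2) / np.dot(x, x)) * x

def risks(theta, reps, rng):
    """Monte Carlo risks (mean squared error) of James-Stein vs. X itself."""
    se_js = se_mle = 0.0
    for _ in range(reps):
        x = theta + rng.standard_normal(len(theta))
        se_js += np.sum((james_stein(x) - theta) ** 2)
        se_mle += np.sum((x - theta) ** 2)
    return se_js / reps, se_mle / reps
```

At θ = 0 with p = 5 the risk of X is exactly p = 5, while the James-Stein risk drops to 2, which the test reflects.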

17.
Consider the model Y=m(X)+ε, where m(⋅)=med(Y|⋅) is unknown but smooth. It is often assumed that ε and X are independent; however, in practice this assumption is violated in many cases. In this paper we propose modeling the dependence between ε and X by means of a copula model, i.e. (ε,X)∼Cθ(Fε(⋅),FX(⋅)), where Cθ is a copula function depending on an unknown parameter θ, and Fε and FX are the marginals of ε and X. Since many parametric copula families contain the independence copula as a special case, the resulting regression model is more flexible than the ‘classical’ regression model. We estimate the parameter θ via a pseudo-likelihood method and prove the asymptotic normality of the estimator, based on delicate empirical process theory. We also study the estimation of the conditional distribution of Y given X. The procedure is illustrated by means of a simulation study, and the method is applied to data on food expenditures in households.

18.
This paper proposes a technique, termed censored average derivative estimation (CADE), for estimating the unknown regression function in nonparametric censored regression models with randomly censored samples. The CADE procedure involves three stages: first, transform the censored data into synthetic data or pseudo-responses using the inverse probability of censoring weighted (IPCW) technique; second, estimate the average derivatives of the regression function; and finally, approximate the unknown regression function by an estimator of univariate regression using techniques for one-dimensional nonparametric censored regression. The CADE provides an easily implemented methodology for modelling the association between the response and a set of predictor variables when data are randomly censored, and it also provides a technique for “dimension reduction” in nonparametric censored regression models. The average derivative estimator is shown to be root-n consistent and asymptotically normal. The estimator of the unknown regression function is a local linear kernel regression estimator and is shown to converge at the optimal one-dimensional nonparametric rate. Monte Carlo experiments show that the proposed estimators work quite well.
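The first CADE stage can be sketched as follows: form IPCW synthetic responses Y*_i = δ_i Y_i / Ŝ_C(Y_i−), where Ŝ_C is a Kaplan-Meier estimate of the censoring survival function. This is a hedged illustration assuming continuous data with no ties, not the paper's full procedure:

```python
import numpy as np

def ipcw_synthetic(y, delta):
    """IPCW synthetic responses from censored data: y = min(T, C),
    delta = 1 if uncensored. Uses a Kaplan-Meier estimate of the censoring
    survival S_C (events are delta == 0); assumes no ties."""
    n = len(y)
    order = np.argsort(y)
    ys, ds = y[order], delta[order]
    at_risk = n - np.arange(n)                  # risk set just before each time
    factors = 1.0 - (ds == 0) / at_risk         # KM factors for censoring events
    surv_after = np.cumprod(factors)            # S_C just after each ordered time
    surv_before = np.concatenate(([1.0], surv_after[:-1]))   # S_C(Y_i-)
    ystar = np.empty(n)
    ystar[order] = ds * ys / surv_before
    return ystar
```

Since E[δ T / S_C(T)] = E[T] for continuous censoring, the synthetic responses recover the mean lifetime in expectation, which the test checks on simulated exponential data.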

19.
This paper proposes two estimation methods based on a weighted least squares criterion for non-(strictly) stationary power ARCH models. The weights are the squared volatilities evaluated at a known value in the parameter space. The first method is adapted for fixed sample sizes while the second allows for online data available in real time. It is shown that these methods provide consistent and asymptotically Gaussian estimates with asymptotic variance equal to that of the quasi-maximum likelihood estimate (QMLE), regardless of the value of the weighting parameter. Finite-sample performance of the proposed WLS estimates is shown via a simulation study for various sub-classes of power ARCH models.
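A sketch of the fixed-sample-size WLS idea for a standard ARCH(1) (power 2), with the weights evaluated at an arbitrary fixed point theta0 in the parameter space; the grid search and all numerical choices are illustrative, not the paper's algorithm:

```python
import numpy as np

def wls_arch1(y, theta0=(1.0, 0.1)):
    """WLS estimate of ARCH(1) parameters (omega, alpha): minimize
    sum_t (y_t^2 - sigma_t^2(theta))^2 / sigma_t^4(theta0), where
    sigma_t^2(theta) = omega + alpha * y_{t-1}^2 and theta0 is a fixed,
    known weighting point."""
    y2lag = y[:-1] ** 2
    target = y[1:] ** 2
    w = 1.0 / (theta0[0] + theta0[1] * y2lag) ** 2   # fixed weights
    best, best_val = None, np.inf
    for om in np.linspace(0.05, 0.5, 46):
        for al in np.linspace(0.0, 0.6, 61):
            resid = target - (om + al * y2lag)
            val = np.sum(w * resid ** 2)
            if val < best_val:
                best, best_val = (om, al), val
    return best
```

Because E[y_t² | past] = omega + alpha·y_{t−1}², the criterion is minimized at the true parameters whatever fixed weighting point is used, mirroring the abstract's claim that the weighting parameter does not affect consistency.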

20.