Similar Documents
20 similar documents found (search time: 218 ms)
1.
Rates of convergence for minimum contrast estimators (cited 3 times: 0 self-citations, 3 by others)
Summary. We present a general study of minimum contrast estimators in a nonparametric setting (although our results are also valid in the classical parametric case) for independent observations. These estimators include many of the most popular estimators in various situations, such as maximum likelihood estimators, least squares and other estimators of the regression function, and estimators for mixture models or deconvolution. The main theorem relates the rate of convergence of these estimators to the entropy structure of the space of parameters. Optimal rates depending on entropy conditions are already known, at least for some of the models involved, and they agree with what we obtain for minimum contrast estimators as long as the entropy counts are not too large. But under some circumstances (large entropies or changes in the entropy structure due to local perturbations), the resulting rates are only suboptimal. Counterexamples are constructed which show that the phenomenon is real for nonparametric maximum likelihood or regression. This proves that, under purely metric assumptions, our theorem is optimal and that minimum contrast estimators can indeed be suboptimal.

2.
This paper investigates the generalized least squares estimation and the maximum likelihood estimation of the parameters in a multivariate polychoric correlations model, based on data from a multidimensional contingency table. Asymptotic properties of the estimators are discussed. An iterative procedure based on the Gauss-Newton algorithm is implemented to produce the generalized least squares estimates and the standard error estimates. It is shown that, via an iteratively reweighted method, the algorithm produces the maximum likelihood estimates as well. Numerical results on the finite-sample behavior of the methods are reported.
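The Gauss-Newton iteration that drives such a procedure can be sketched generically. The snippet below is not the polychoric correlations model of the abstract; it applies the same update rule, beta ← beta − (JᵀJ)⁻¹Jᵀr, to a hypothetical one-parameter exponential curve fit, purely to illustrate the iteration.

```python
import numpy as np

def gauss_newton(residual, jacobian, beta0, tol=1e-10, max_iter=50):
    """Generic Gauss-Newton iteration: beta <- beta - (J'J)^{-1} J'r,
    where r is the residual vector and J its Jacobian at beta."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        r = residual(beta)
        J = jacobian(beta)
        step = np.linalg.solve(J.T @ J, J.T @ r)
        beta = beta - step
        if np.linalg.norm(step) < tol:
            break
    return beta

# Toy least squares problem (hypothetical): fit y = exp(b*x).
x = np.linspace(0.0, 1.0, 20)
b_true = 0.7
y = np.exp(b_true * x)

residual = lambda b: np.exp(b[0] * x) - y
jacobian = lambda b: (x * np.exp(b[0] * x)).reshape(-1, 1)

b_hat = gauss_newton(residual, jacobian, beta0=[0.0])
```

An iteratively reweighted variant, as the abstract notes, only changes the inner linear solve to a weighted one.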

3.
First, the second-order bias of the estimator of the autoregressive parameter based on the ordinary least squares residuals in a linear model with serial correlation is given. Second, the second-order expansion of the risk matrix of a generalized least squares estimator with the above estimated parameter is obtained. This expansion is the same as that based on a suitable estimator of the autoregressive parameter independent of the sample. Third, it is shown that the risk matrix of the generalized least squares estimator is asymptotically equivalent to that of the maximum likelihood estimator up to the second order. Last, a sufficient condition is given for the term due to the estimation of the autoregressive parameter in this expansion to vanish under Grenander's condition for the explanatory variates.

4.
The restricted maximum likelihood (REML) procedure is useful for inferences about variance components in mixed linear models. However, its extension to hierarchical generalized linear models (HGLMs) is often hampered by analytically intractable integrals. Numerical integration such as Gauss-Hermite quadrature (GHQ) is generally not recommended when the dimensionality of the integral is high. With binary data various extensions of the REML method have been suggested, but they have had unsatisfactory biases in estimation. In this paper we propose a statistically and computationally efficient REML procedure for the analysis of binary data, which is applicable over a wide class of models and design structures. We propose a bias-correction method for models such as binary matched pairs and discuss how the REML estimating equations for mixed linear models can be modified to implement more general models.

5.
Linear regression models with vague concepts extend classical single-equation linear regression models by admitting observations in the form of fuzzy subsets instead of real numbers. They were recently introduced (cf. [V. Krätschmer, Induktive Statistik auf Basis unscharfer Meßkonzepte am Beispiel linearer Regressionsmodelle, unpublished postdoctoral thesis, Faculty of Law and Economics of the University of Saarland, Saarbrücken, 2001; V. Krätschmer, Least squares estimation in linear regression models with vague concepts, Fuzzy Sets and Systems, accepted for publication]) to improve the empirical meaningfulness of the relationships between the involved items by paying more sensitive attention to the problems of data measurement, in particular the fundamental problem of adequacy. The parameters of such models are still real numbers, and a method of estimation can be applied which directly extends the ordinary least squares method. In another recent contribution (cf. [V. Krätschmer, Strong consistency of least squares estimation in linear regression models with vague concepts, J. Multivar. Anal., accepted for publication]) strong consistency and √n-consistency of this generalized least squares estimation have been shown. The aim of this paper is to complete these results by an investigation of the limit distributions of the estimators. It turns out that the classical results can be transferred; in some cases asymptotic normality even holds.

6.
The paper considers general multiplicative models for complete and incomplete contingency tables that generalize log-linear and several other models and are entirely coordinate free. Sufficient conditions for the existence of maximum likelihood estimates under these models are given, and it is shown that the usual equivalence between multinomial and Poisson likelihoods holds if and only if an overall effect is present in the model. If such an effect is not assumed, the model becomes a curved exponential family and a related mixed parameterization is given that relies on non-homogeneous odds ratios. Several examples are presented to illustrate the properties and use of such models.

7.
Quantile regression for longitudinal data (cited 18 times: 0 self-citations, 18 by others)
The penalized least squares interpretation of the classical random effects estimator suggests a possible way forward for quantile regression models with a large number of “fixed effects”. The introduction of a large number of individual fixed effects can significantly inflate the variability of estimates of other covariate effects. Regularization, or shrinkage of these individual effects toward a common value, can help to moderate this inflation effect. A general approach to estimating quantile regression models for longitudinal data is proposed employing ℓ1 regularization methods. Sparse linear algebra and interior point methods for solving large linear programs are essential computational tools.
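The linear-programming connection mentioned above can be made concrete. The sketch below shows the standard (unpenalized) quantile regression LP that underlies this approach; it is not the paper's penalized longitudinal estimator, and the tiny dataset is hypothetical. The check loss is minimized by writing residuals as u − v with u, v ≥ 0; an ℓ1 penalty on fixed effects would simply add further such terms to the objective.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_reg(X, y, tau=0.5):
    """Quantile regression as a linear program:
    min_{b,u,v} tau*1'u + (1-tau)*1'v  s.t.  X b + u - v = y, u, v >= 0."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.full(n, tau), np.full(n, 1.0 - tau)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# Median regression (tau = 0.5) on a toy dataset with one gross outlier.
X = np.column_stack([np.ones(5), np.array([0.0, 1.0, 2.0, 3.0, 4.0])])
y = np.array([0.0, 1.0, 2.0, 3.0, 100.0])
beta = quantile_reg(X, y, tau=0.5)
```

Because four of the five points lie exactly on the line y = x, the median fit ignores the outlier and returns intercept 0 and slope 1, illustrating the robustness of the check loss.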

8.
Maximum likelihood methods are important for system modeling and parameter estimation. This paper derives a recursive maximum likelihood least squares identification algorithm for systems with autoregressive moving average noise, based on the maximum likelihood principle. In this derivation, we prove that maximizing the likelihood function is equivalent to minimizing the least squares cost function. The proposed algorithm differs from the corresponding generalized extended least squares algorithm. Simulation tests show that the proposed algorithm has higher estimation accuracy than the recursive generalized extended least squares algorithm.
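The recursive core that such identification algorithms build on is ordinary recursive least squares. The sketch below is the textbook RLS update, not the paper's maximum likelihood algorithm for ARMA noise (which additionally filters the regressors); the simulated noise-free system is hypothetical.

```python
import numpy as np

def rls(Phi, y, lam=1.0, p0=1e6):
    """Plain recursive least squares: each new observation updates the
    estimate theta and the matrix P without re-solving the normal equations.
    lam is a forgetting factor (1.0 = none); p0 sets the diffuse prior."""
    n, p = Phi.shape
    theta = np.zeros(p)
    P = p0 * np.eye(p)
    for t in range(n):
        phi = Phi[t]
        K = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + K * (y[t] - phi @ theta)  # innovation update
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

# Hypothetical noise-free system: y_t = phi_t' theta_true.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 2))
theta_true = np.array([1.5, -0.5])
y = Phi @ theta_true
theta_hat = rls(Phi, y)
```

With noise-free data the recursion converges to the true parameters up to the negligible bias introduced by the diffuse prior on P.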

9.
The empirical likelihood method is especially useful for constructing confidence intervals or regions for parameters of interest. Yet the technique cannot be directly applied to partially linear single-index models for longitudinal data because of the within-subject correlation. In this paper, a bias-corrected block empirical likelihood (BCBEL) method is suggested to study such models by accounting for the within-subject correlation. BCBEL has some desirable features: unlike normal-approximation-based methods for confidence regions, it avoids iterative estimation of the parameters and does not require a consistent estimator of the asymptotic covariance matrix. Because of the bias correction, the BCBEL ratio is asymptotically chi-squared, and hence it can be used directly to construct confidence regions for the parameters without the extra Monte Carlo approximation that is needed when bias correction is not applied. The proposed method applies naturally to pure single-index models and partially linear models for longitudinal data. Some simulation studies are carried out, and an example in epidemiology is given for illustration.

10.
We present a new approach to univariate partial least squares regression (PLSR) based on directional signal-to-noise ratios (SNRs). We show how PLSR, unlike principal components regression, takes into account the actual value and not only the variance of the ordinary least squares (OLS) estimator. We find an orthogonal sequence of directions associated with decreasing SNR. Then, we state partial least squares estimators as least squares estimators constrained to be null on the last directions. We also give another procedure that shows how PLSR rebuilds the OLS estimator iteratively by seeking at each step the direction with the largest difference of signals over the noise. The latter approach does not involve any arbitrary scale or orthogonality constraints.
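For readers who want a concrete reference point, the estimator the abstract characterizes is ordinary univariate PLS. The sketch below is the classical NIPALS formulation of PLS1 (deflation of X and y against successive score vectors), not the paper's SNR construction, which is an alternative characterization of the same estimator; the data are hypothetical.

```python
import numpy as np

def pls1(X, y, n_components):
    """Univariate partial least squares via the classical NIPALS scheme.
    Returns regression coefficients for the centered predictors."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xk, yk = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                 # covariance direction
        w = w / np.linalg.norm(w)
        t = Xk @ w                    # score vector
        tt = t @ t
        p = Xk.T @ t / tt             # X loading
        c = yk @ t / tt               # y loading
        Xk = Xk - np.outer(t, p)      # deflate
        yk = yk - c * t
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

# With the full number of components, PLS1 reproduces the OLS fit exactly.
rng = np.random.default_rng(2)
X = rng.normal(size=(12, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true
B = pls1(X, y, n_components=3)
```

Using fewer components than predictors gives the shrunken estimators whose constrained-least-squares interpretation the abstract develops.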

11.
Risk bounds for model selection via penalization (cited 11 times: 0 self-citations, 11 by others)
Performance bounds for criteria for model selection are developed using recent theory for sieves. The model selection criteria are based on an empirical loss or contrast function with an added penalty term motivated by empirical process theory and roughly proportional to the number of parameters needed to describe the model divided by the number of observations. Most of our examples involve density or regression estimation settings and we focus on the problem of estimating the unknown density or regression function. We show that the quadratic risk of the minimum penalized empirical contrast estimator is bounded by an index of the accuracy of the sieve. This accuracy index quantifies the trade-off among the candidate models between the approximation error and parameter dimension relative to sample size. If we choose a list of models which exhibit good approximation properties with respect to different classes of smoothness, the estimator can be simultaneously minimax rate optimal in each of those classes. This is what is usually called adaptation. The type of classes of smoothness in which one gets adaptation depends heavily on the list of models. If too many models are involved in order to get accurate approximation of many wide classes of functions simultaneously, it may happen that the estimator is only approximately adaptive (typically up to a slowly varying function of the sample size). We shall provide various illustrations of our method such as penalized maximum likelihood, projection or least squares estimation. The models will involve commonly used finite dimensional expansions such as piecewise polynomials with fixed or variable knots, trigonometric polynomials, wavelets, neural nets and related nonlinear expansions defined by superposition of ridge functions. Received: 7 July 1995 / Revised version: 1 November 1997

12.
Non-life insurance classification ratemaking models and their parameter estimation (cited 1 time: 1 self-citation, 0 by others)
In non-life insurance classification ratemaking, a variety of models is available, such as the additive model, the multiplicative model, mixed models, and generalized linear models; for estimating the parameters of these models there is likewise a choice of methods, such as least squares, maximum likelihood, minimum chi-square, the direct method, and the method of marginal totals. These models and estimation methods are scattered across the actuarial literature. This paper systematically compares and analyzes them and reveals some equivalence relations that exist among them.

13.
We investigate depth notions for general models which are derived via the likelihood principle. We show that the so-called likelihood depth for regression in generalized linear models coincides with the regression depth of Rousseeuw and Hubert (J. Amer. Statist. Assoc. 94 (1999) 388) if the dependent observations are appropriately transformed. For deriving tests, the likelihood depth is extended to simplicial likelihood depth. The simplicial likelihood depth is always a U-statistic, which in some cases is non-degenerate. Since the U-statistic is degenerate in most cases, we demonstrate that the asymptotic distribution of the simplicial likelihood depth, and thus asymptotic α-level tests for general types of hypotheses, can nevertheless be derived. The tests are distribution-free. We work out the method for linear and quadratic regression.

14.
We consider the linear regression model where prior information in the form of linear inequalities restricts the parameter space to a polyhedron. Since the linear minimax estimator has, in general, to be determined numerically, it was proposed to minimize an upper bound of the maximum risk instead. The resulting so-called quasiminimax estimator can be easily calculated in closed form. Unfortunately, both minimax estimators may violate the prior information. Therefore, we consider projection estimators which are obtained by projecting the estimate in an optional second step. The performance of these estimators is investigated in a Monte Carlo study together with several least squares estimators, including the inequality restricted least squares estimator. It turns out that both the projected and the unprojected quasiminimax estimators have the best average performance.

15.
In the estimation of parametric models for stationary spatial or spatio-temporal data on a d-dimensional lattice, for d ≥ 2, the achievement of asymptotic efficiency under Gaussianity, and asymptotic normality more generally, with standard convergence rate, faces two obstacles. One is the “edge effect”, which worsens with increasing d. The other is the possible difficulty of computing a continuous-frequency form of Whittle estimate or a time domain Gaussian maximum likelihood estimate, due mainly to the Jacobian term. This is especially a problem in “multilateral” models, which are naturally expressed in terms of lagged values in both directions for one or more of the d dimensions. An extension of the discrete-frequency Whittle estimate from the time series literature deals conveniently with the computational problem, but when subjected to a standard device for avoiding the edge effect has disastrous asymptotic performance, along with finite sample numerical drawbacks, the objective function lacking a minimum-distance interpretation and losing any global convexity properties. We overcome these problems by first optimizing a standard, guaranteed non-negative, discrete-frequency, Whittle function, without edge-effect correction, providing an estimate with a slow convergence rate, then improving this by a sequence of computationally convenient approximate Newton iterations using a modified, almost-unbiased periodogram, the desired asymptotic properties being achieved after finitely many steps. The asymptotic regime allows increase in both directions of all d dimensions, with the central limit theorem established after re-ordering as a triangular array. However our work offers something new for “unilateral” models also. When the data are non-Gaussian, asymptotic variances of all parameter estimates may be affected, and we propose consistent, non-negative definite estimates of the asymptotic variance matrix.

16.
A method for simultaneous modelling of the Cholesky decomposition of several covariance matrices is presented. We highlight the conceptual and computational advantages of the unconstrained parameterization of the Cholesky decomposition and compare the results with those obtained using the classical spectral (eigenvalue) and variance-correlation decompositions. All these methods amount to decomposing complicated covariance matrices into “dependence” and “variance” components, and then modelling them virtually separately using regression techniques. The entries of the “dependence” component of the Cholesky decomposition have the unique advantage of being unconstrained so that further reduction of the dimension of its parameter space is fairly simple. Normal theory maximum likelihood estimates for complete and incomplete data are presented using iterative methods such as the EM (Expectation-Maximization) algorithm and their improvements. These procedures are illustrated using a dataset from a growth hormone longitudinal clinical trial.
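The unconstrained parameterization referred to above is the modified Cholesky decomposition: T Σ Tᵀ = D with T unit lower triangular (its sub-diagonal entries are negated regression coefficients of each variable on its predecessors, and they are free to take any real value) and D diagonal with positive innovation variances. A minimal sketch, using a small hypothetical covariance matrix:

```python
import numpy as np

def modified_cholesky(Sigma):
    """Compute T (unit lower triangular) and D (diagonal) with
    T @ Sigma @ T.T = D. Row j of T holds the negated coefficients of the
    regression of variable j on variables 0..j-1; D holds the residual
    (innovation) variances. The sub-diagonal entries of T are unconstrained."""
    d = Sigma.shape[0]
    T = np.eye(d)
    D = np.zeros(d)
    D[0] = Sigma[0, 0]
    for j in range(1, d):
        phi = np.linalg.solve(Sigma[:j, :j], Sigma[:j, j])  # regression coefs
        T[j, :j] = -phi
        D[j] = Sigma[j, j] - Sigma[:j, j] @ phi             # residual variance
    return T, np.diag(D)

# Hypothetical 3x3 covariance matrix.
Sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.6],
                  [0.3, 0.6, 1.0]])
T, D = modified_cholesky(Sigma)
```

Modelling the entries of T and log-variances in D by regression, as the abstract describes, never risks producing a non-positive-definite covariance matrix, which is the practical appeal of this parameterization.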

17.
The purpose of this paper is two-fold. First, for estimation or inference about the parameters of interest in semiparametric models, the commonly used plug-in estimation of the infinite-dimensional nuisance parameter creates non-negligible bias, and the least favorable curve or under-smoothing is popularly employed for bias reduction in the literature. To avoid such strong structural assumptions on the models and the inconvenience of implementation, for a diverging number of parameters in a varying-coefficient partially linear model we adopt a bias-corrected empirical likelihood (BCEL). This method makes the distribution of the empirical likelihood ratio asymptotically tractable, so it can be applied directly to construct confidence regions for the parameters of interest. Second, unlike all existing methods, which impose strong conditions to ensure consistency of estimation when the number of parameters diverges as the sample size goes to infinity, we provide techniques to show that, beyond the usual regularity conditions, consistency holds under moment conditions alone on the covariates and error, with a diverging rate even faster than those in the literature. A simulation study is carried out to assess the performance of the proposed method and to compare it with the profile least squares method. A real dataset is analyzed for illustration.

18.
Parallel to Cox's [JRSS B 34 (1972) 187-230] proportional hazards model, generalized logistic models have been discussed by Anderson [Bull. Int. Statist. Inst. 48 (1979) 35-53] and others. The essential assumption is that the ratio of the two densities has a known parametric form. A nice property of this model is that it relates naturally to the logistic regression model for categorical data. In astronomical, demographic, epidemiological, and other studies the variable of interest is often truncated by an associated variable. This paper studies generalized logistic models for the two-sample truncated data problem, where the ratio of the two lifetime densities is assumed to have the form exp{α+φ(x;β)}. Here φ is a known function of x and β, and the baseline density is unspecified. We develop a semiparametric maximum likelihood method for the case where the two samples have a common truncation distribution. It is shown that inferences for β do not depend on the nonparametric components. We also derive an iterative algorithm to maximize the semiparametric likelihood for the general case where different truncation distributions are allowed. We further discuss how to check the goodness of fit of the generalized logistic model. The developed methods are illustrated and evaluated using both simulated and real data.

19.
Model selection by means of the predictive least squares (PLS) principle has been thoroughly studied in the context of regression model selection and autoregressive (AR) model order estimation. We introduce a new criterion based on sequentially minimized squared deviations, which are smaller than both the usual least squares and the squared prediction errors used in PLS. We also prove that our criterion has a probabilistic interpretation as a model which is asymptotically optimal within the given class of distributions by reaching the lower bound on the logarithmic prediction errors given by the so-called stochastic complexity, which is approximated by BIC. This holds when the regressor (design) matrix is non-random or determined by the observed data, as in AR models. The advantages of the criterion include the fact that it can be evaluated efficiently and exactly, without asymptotic approximations, and, importantly, that there are no adjustable hyper-parameters, which makes it applicable to both small and large amounts of data.
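The baseline PLS criterion that the abstract improves upon is easy to state: accumulate squared one-step prediction errors, where each prediction uses coefficients fit on past data only. The sketch below implements that baseline (not the paper's sequentially-minimized variant) on a hypothetical regression; the overfitted design with an irrelevant regressor will typically, though not deterministically, score worse.

```python
import numpy as np

def pls_criterion(X, y, t0):
    """Predictive least squares: sum of squared honest one-step prediction
    errors; the prediction at time t uses only data up to time t-1.
    t0 is the first index with enough past data for a stable fit."""
    n, _ = X.shape
    total = 0.0
    for t in range(t0, n):
        beta, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        e = y[t] - X[t] @ beta
        total += e * e
    return total

# Hypothetical data: y depends on x only, plus noise.
rng = np.random.default_rng(1)
n = 60
x = rng.normal(size=n)
X1 = np.column_stack([np.ones(n), x])             # correct model
X2 = np.column_stack([X1, rng.normal(size=n)])    # adds an irrelevant regressor
y = 2.0 + 3.0 * x + 0.5 * rng.normal(size=n)
score1 = pls_criterion(X1, y, t0=5)
score2 = pls_criterion(X2, y, t0=5)
```

The sequentially minimized deviations of the abstract are smaller than these honest prediction errors because each term there is evaluated at the coefficients fit on data up to and including time t.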

20.
Many statistical models, e.g. regression models, can be viewed as conditional moment restrictions when distributional assumptions on the error term are not imposed. For such models, several estimators that achieve the semiparametric efficiency bound have been proposed. However, in many studies auxiliary information is available in the form of unconditional moment restrictions. We also consider the presence of missing responses. We propose the combined empirical likelihood (CEL) estimator to incorporate such auxiliary information and improve the estimation efficiency of conditional moment restriction models. We show that, when responses are assumed to be strongly ignorably missing at random, the CEL estimator achieves better efficiency than previous estimators owing to its utilization of the auxiliary information. Based on the asymptotic properties of the CEL estimator, we also develop Wilks-type tests and corresponding confidence regions for the model parameter and the mean response. Since kernel smoothing is used, the CEL method may have difficulty in problems with high-dimensional covariates. In such situations, we propose an instrumental variable-based empirical likelihood (IVEL) method to handle the problem. The merits of the CEL and IVEL methods are further illustrated through simulation studies.

