Similar Literature
20 similar documents found.
1.
The empirical likelihood method is especially useful for constructing confidence intervals or regions for parameters of interest. Yet the technique cannot be directly applied to partially linear single-index models for longitudinal data because of the within-subject correlation. In this paper, a bias-corrected block empirical likelihood (BCBEL) method is suggested to study such models by accounting for the within-subject correlation. BCBEL has some desirable features: unlike normal-approximation-based methods for confidence regions, it avoids iterative estimation of the parameters and does not require a consistent estimator of the asymptotic covariance matrix. Because of the bias correction, the BCBEL ratio is asymptotically chi-squared, and hence it can be used directly to construct confidence regions for the parameters without the extra Monte Carlo approximation that is needed when no bias correction is applied. The proposed method applies naturally to pure single-index models and partially linear models for longitudinal data. Simulation studies are carried out, and an example in epidemiology is given for illustration.
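For readers unfamiliar with the machinery these abstracts build on, the following is a minimal sketch of an Owen-style empirical log-likelihood ratio for a vector of estimating functions, computed by a damped Newton search for the Lagrange multiplier. It is generic background rather than the BCBEL method itself; the function name el_log_ratio and all tuning constants are illustrative choices.

```python
import numpy as np

def el_log_ratio(z, max_iter=50, tol=1e-10):
    """Empirical log-likelihood ratio statistic for testing E[z_i] = 0.

    z : (n, p) array of estimating-function values evaluated at the
        hypothesized parameter.  The return value is asymptotically
        chi-squared with p degrees of freedom under the null.
    """
    n, p = z.shape
    lam = np.zeros(p)                           # Lagrange multiplier
    for _ in range(max_iter):
        w = 1.0 + z @ lam                       # denominators 1 + lambda' z_i
        grad = (z / w[:, None]).sum(axis=0)
        hess = -(z.T / w ** 2) @ z
        step = np.linalg.solve(hess, -grad)     # Newton step for the multiplier
        t, lam_new = 1.0, lam + step
        # damp the step so all implied weights stay positive
        while np.any(1.0 + z @ lam_new <= 1.0 / n) and t > 1e-8:
            t *= 0.5
            lam_new = lam + t * step
        if np.linalg.norm(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return 2.0 * np.sum(np.log(1.0 + z @ lam))

# Toy use: an empirical likelihood check of a hypothesized mean
rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, size=(200, 1))
stat = el_log_ratio(x - 0.3)                    # H0: mean = 0.3
print(stat)                                     # compare with the chi2(1) quantile 3.84
```

Inverting such a test over a grid of hypothesized parameter values is the usual way confidence regions are formed from the statistic.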

2.
In this paper, we investigate the empirical likelihood for constructing a confidence region for the parameter of interest in a multi-link semiparametric model in the presence of an infinite-dimensional nuisance parameter. The new model covers the commonly used varying coefficient, generalized linear, single-index, multi-index and hazard regression models, and their generalizations, as special cases. Because of the infinite-dimensional nuisance parameter, the classical empirical likelihood with plug-in estimation cannot be asymptotically distribution-free, and the existing bias corrections do not extend to such a general model. We therefore propose a link-based correction approach to solve this problem. The approach gives a general rule for bias correction via an inner link and consists of two parts. For models whose estimating equations contain score functions that are easy to estimate, we center the scores to correct the bias; for models whose score functions have a complex structure, a bias-correction procedure that uses simpler functions in place of the scores is given without loss of asymptotic efficiency. The resulting empirical likelihood has the desired features: it has a chi-square limit, and no undersmoothing, higher-order kernel or extra parameter estimation is needed. Simulation studies are carried out to examine the performance of the new method.

3.
A bias-corrected technique for constructing the empirical likelihood ratio is used to study a semiparametric regression model with missing response data. We are interested in inference for the regression coefficients, the baseline function and the response mean. A class of empirical likelihood ratio functions for the parameters of interest is defined so that undersmoothing in the estimation of the baseline function is avoided. The existing data-driven algorithm is also valid for selecting an optimal bandwidth. Our approach is to directly calibrate the empirical log-likelihood ratio so that the resulting ratio is asymptotically chi-squared. In addition, a class of estimators for the parameters of interest is constructed, their asymptotic distributions are obtained, and consistent estimators of the asymptotic bias and variance are provided. Our results can be used to construct confidence intervals and bands for the parameters of interest. A simulation study compares the empirical likelihood with the normal-approximation-based method in terms of coverage accuracy and average length of confidence intervals. An AIDS clinical trial data set is used to illustrate the methods.

4.
As an effective way of modeling dependence, the copula has become more or less a standard tool in risk management, and a wide range of applications of copula models appears in the economics, econometrics, insurance and finance literature. How to estimate and test a copula plays an important role in practice, and both parametric and nonparametric methods have been studied in the literature. In this paper, we focus on interval estimation and propose an empirical-likelihood-based confidence interval for a copula. A simulation study and a real data analysis are conducted to compare the finite-sample behavior of the proposed empirical likelihood method with the bootstrap method based on either the empirical copula estimator or the kernel smoothing copula estimator.
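As a point of reference for the bootstrap comparison mentioned above, here is a minimal sketch of the empirical copula estimator at a point, paired with a naive percentile bootstrap interval; it is not the paper's empirical-likelihood interval, and the function names and the pseudo-observation scaling by n + 1 are illustrative choices.

```python
import numpy as np

def empirical_copula(x, y, u, v):
    """Empirical copula C_n(u, v) based on the ranks of the paired sample."""
    n = len(x)
    # pseudo-observations: ranks rescaled to (0, 1)
    ru = (np.argsort(np.argsort(x)) + 1) / (n + 1)
    rv = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    return np.mean((ru <= u) & (rv <= v))

def bootstrap_ci(x, y, u, v, level=0.95, n_boot=2000, seed=0):
    """Percentile bootstrap interval for C(u, v), resampling pairs with replacement."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        stats[b] = empirical_copula(x[idx], y[idx], u, v)
    return tuple(np.quantile(stats, [(1 - level) / 2, (1 + level) / 2]))

# Toy use with positively dependent Gaussian data
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=300)
print(empirical_copula(z[:, 0], z[:, 1], 0.5, 0.5))
print(bootstrap_ci(z[:, 0], z[:, 1], 0.5, 0.5))
```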

5.
Double-sampling designs are commonly used in real applications when it is infeasible to collect exact measurements on all variables of interest. Two samples, a primary sample of proxy measures and a validation subsample of exact measures, are available in these designs. We assume that the validation sample is drawn from the primary sample by Bernoulli sampling with equal selection probability. An empirical-likelihood-based approach is proposed to estimate the parameters of interest. By allowing the number of constraints to grow as the sample size goes to infinity, the resulting maximum empirical likelihood estimator is asymptotically normal and its limiting variance-covariance matrix reaches the semiparametric efficiency bound. Moreover, a Wilks-type result, the convergence of the empirical likelihood ratio test statistic to a chi-squared distribution, is established. Simulation studies are carried out to assess the finite-sample performance of the new approach.

6.
A new empirical likelihood approach is developed to analyze data from two-stage sampling designs, in which a primary sample of rough or proxy measures of the variables of interest and a validation subsample of exact information are available. The validation sample is assumed to be a simple random subsample of the primary one. The proposed empirical likelihood approach flexibly utilizes all the information from both the specified models and the two available samples. It maintains the attractive features of the empirical likelihood method and improves the asymptotic efficiency of existing inferential procedures. The asymptotic properties of the new approach are derived. Numerical studies are carried out to assess its finite-sample performance.

7.
Empirical likelihood for general estimating equations is a method for testing hypotheses or constructing confidence regions for parameters of interest. If the number of parameters of interest is smaller than the number of estimating equations, a profile empirical likelihood has to be employed. In the case of dependent data, a profile blockwise empirical likelihood method can be used. However, if too many nuisance parameters are involved, optimizing the profile empirical likelihood becomes computationally difficult. Recently, Li et al. (2011) [9] proposed a jackknife empirical likelihood method to reduce the computational burden of profile empirical likelihood methods for independent data. In this paper, we propose a jackknife-blockwise empirical likelihood method to overcome the computational burden of the profile blockwise empirical likelihood method for weakly dependent data.
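A minimal sketch of the two ingredients such a jackknife-blockwise approach combines: overlapping block means of estimating-function values (the blockwise part, for weakly dependent data) and leave-one-out jackknife pseudo-values of a statistic (the jackknife part, which replaces numerical profiling over nuisance parameters). The toy statistic and the function names are illustrative, not the authors' implementation.

```python
import numpy as np

def block_means(z, block_len):
    """Overlapping block means of estimating-function values z_1, ..., z_n.

    Blockwise empirical likelihood applies the ordinary EL machinery to
    these block means instead of to the serially dependent raw terms.
    """
    n = len(z)
    return np.array([z[i:i + block_len].mean(axis=0)
                     for i in range(n - block_len + 1)])

def jackknife_pseudo_values(data, statistic):
    """Leave-one-out jackknife pseudo-values of a (possibly nonlinear) statistic.

    Jackknife empirical likelihood treats these approximately independent
    pseudo-values as inputs to a standard EL for their mean.
    """
    n = len(data)
    theta_full = statistic(data)
    pv = np.empty(n)
    for i in range(n):
        pv[i] = n * theta_full - (n - 1) * statistic(np.delete(data, i, axis=0))
    return pv

# Toy use: pseudo-values of the sample variance, plus block means of length 10
rng = np.random.default_rng(2)
x = rng.normal(size=100)
print(jackknife_pseudo_values(x, np.var).mean(), np.var(x, ddof=1))
print(block_means(x, 10).shape)
```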

8.
In this paper we aim to estimate the direction in general single-index models and to select important variables simultaneously when a diverging number of predictors are involved in the regression. To this end, we propose a nonconcave penalized inverse regression method. Specifically, the resulting estimator with the SCAD penalty enjoys an oracle property in semiparametric models even when the dimension p_n of the predictors goes to infinity. Under regularity conditions we also obtain asymptotic normality when the dimension of the predictor vector goes to infinity at the rate p_n = o(n^{1/3}), where n is the sample size, which enables us to construct confidence intervals/regions for the estimated index. The asymptotic results are supported by simulations and illustrated by the analysis of an air pollution data set.
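For reference, a minimal sketch of the SCAD penalty of Fan and Li (2001) and its derivative, the nonconcave penalty referred to above; the function names are illustrative, and the snippet does not reproduce the authors' penalized inverse regression algorithm.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty p_lambda(|beta|) of Fan and Li (2001), applied elementwise."""
    b = np.abs(beta)
    return np.where(
        b <= lam,
        lam * b,
        np.where(
            b <= a * lam,
            (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1)),
            lam ** 2 * (a + 1) / 2,
        ),
    )

def scad_derivative(beta, lam, a=3.7):
    """p'_lambda(|beta|): the flat tail leaves large coefficients nearly unpenalized."""
    b = np.abs(beta)
    return np.where(b <= lam, lam, np.maximum(a * lam - b, 0.0) / (a - 1))

# The penalty grows like lam*|b| near zero, bends, then is constant for |b| > a*lam
print(scad_penalty(np.array([0.05, 0.2, 5.0]), lam=0.1))
print(scad_derivative(np.array([0.05, 0.2, 5.0]), lam=0.1))
```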

9.
The varying coefficient partially linear model is considered in this paper. When plug-in estimators of the coefficient functions are used, the resulting smoothing score function becomes biased because of the slow convergence rate of the nonparametric estimators. To reduce this bias, a profile-type smoothed score function is proposed to draw inferences on the parameters of interest without using the quasi-likelihood framework, the least favorable curve, a higher-order kernel or undersmoothing. The resulting profile-type statistic is still asymptotically chi-squared under some regularity conditions. The results are then used to construct confidence regions for the parameters of interest. A simulation study is carried out to assess the performance of the proposed method and to compare it with the profile least-squares method. A real data set is analyzed for illustration.

10.
Risk bounds for model selection via penalization
Performance bounds for criteria for model selection are developed using recent theory for sieves. The model selection criteria are based on an empirical loss or contrast function with an added penalty term motivated by empirical process theory and roughly proportional to the number of parameters needed to describe the model divided by the number of observations. Most of our examples involve density or regression estimation settings and we focus on the problem of estimating the unknown density or regression function. We show that the quadratic risk of the minimum penalized empirical contrast estimator is bounded by an index of the accuracy of the sieve. This accuracy index quantifies the trade-off among the candidate models between the approximation error and parameter dimension relative to sample size. If we choose a list of models which exhibit good approximation properties with respect to different classes of smoothness, the estimator can be simultaneously minimax rate optimal in each of those classes. This is what is usually called adaptation. The type of classes of smoothness in which one gets adaptation depends heavily on the list of models. If too many models are involved in order to get accurate approximation of many wide classes of functions simultaneously, it may happen that the estimator is only approximately adaptive (typically up to a slowly varying function of the sample size). We shall provide various illustrations of our method such as penalized maximum likelihood, projection or least squares estimation. The models will involve commonly used finite dimensional expansions such as piecewise polynomials with fixed or variable knots, trigonometric polynomials, wavelets, neural nets and related nonlinear expansions defined by superposition of ridge functions.
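A schematic toy illustration of penalized empirical contrast selection in the spirit described above: a polynomial regression dimension is chosen by minimizing the mean squared residual plus a penalty roughly proportional to the number of parameters divided by the number of observations. The penalty constant kappa, the noise level sigma2 and the list of candidate models are illustrative assumptions, not the paper's penalties.

```python
import numpy as np

def penalized_model_choice(x, y, max_degree=10, kappa=2.0, sigma2=1.0):
    """Choose a polynomial degree by minimizing empirical contrast + penalty.

    The contrast is the mean squared residual; the penalty is
    kappa * sigma2 * D_m / n, where D_m is the number of fitted parameters.
    """
    n = len(y)
    best_deg, best_crit = 0, np.inf
    for deg in range(max_degree + 1):
        X = np.vander(x, deg + 1, increasing=True)      # columns 1, x, ..., x^deg
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        contrast = np.mean((y - X @ coef) ** 2)
        crit = contrast + kappa * sigma2 * (deg + 1) / n
        if crit < best_crit:
            best_deg, best_crit = deg, crit
    return best_deg

# Toy use: data from a cubic trend plus unit-variance noise
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-1, 1, size=200))
y = 1.0 - 2.0 * x + 4.0 * x ** 3 + rng.normal(size=200)
print(penalized_model_choice(x, y))
```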

11.
Linear regression models with vague concepts extend classical single-equation linear regression models by admitting observations in the form of fuzzy subsets instead of real numbers. They have recently been introduced (cf. [V. Krätschmer, Induktive Statistik auf Basis unscharfer Meßkonzepte am Beispiel linearer Regressionsmodelle, unpublished postdoctoral thesis, Faculty of Law and Economics of the University of Saarland, Saarbrücken, 2001; V. Krätschmer, Least squares estimation in linear regression models with vague concepts, Fuzzy Sets and Systems, accepted for publication]) to improve the empirical meaningfulness of the relationships between the items involved by paying more careful attention to the problems of data measurement, in particular the fundamental problem of adequacy. The parameters of such models are still real numbers, and a method of estimation can be applied that directly extends ordinary least squares. In another recent contribution (cf. [V. Krätschmer, Strong consistency of least squares estimation in linear regression models with vague concepts, J. Multivar. Anal., accepted for publication]) strong consistency and √n-consistency of this generalized least squares estimation have been shown. The aim of the present paper is to complete these results with an investigation of the limit distributions of the estimators. It turns out that the classical results can be transferred; in some cases asymptotic normality even holds.

12.
The censored linear regression model, also referred to as the accelerated failure time (AFT) model when the logarithm of the survival time is used as the response variable, is widely seen as an alternative to the popular Cox model when the assumption of proportional hazards is questionable. Buckley and James [Linear regression with censored data, Biometrika 66 (1979) 429-436] extended the least squares estimator to the semiparametric censored linear regression model, in which the error distribution is completely unspecified. The Buckley-James estimator performs well in many simulation studies and examples, and, as Cox himself has pointed out, the direct interpretation of the AFT model is more attractive than that of the Cox model in practical situations. However, the application of the Buckley-James estimator has been limited in practice, mainly because of its elusive variance. In this paper, we use the empirical likelihood method to derive a new test and confidence interval based on the Buckley-James estimator of the regression coefficient. A standard chi-square distribution is used to calculate the P-value and the confidence interval. The proposed empirical likelihood method does not involve variance estimation, and it shows much better small-sample performance than some existing methods in our simulation studies.
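A minimal sketch of the Buckley-James iteration itself (not the empirical-likelihood inference proposed in the paper): censored responses are replaced by their conditional expectations under the Kaplan-Meier estimate of the residual distribution, and least squares is refit until convergence. The implementation is deliberately simplified, with a single covariate, naive tie handling and an illustrative convergence rule.

```python
import numpy as np

def km_jumps(values, events):
    """Kaplan-Meier estimate of the residual distribution.

    Returns the sorted residuals and the probability mass placed at each
    of them (censored points receive zero mass; ties are handled naively).
    """
    order = np.argsort(values)
    v, d = values[order], events[order]
    n = len(v)
    at_risk = n - np.arange(n)
    surv = np.cumprod(1.0 - d / at_risk)
    surv_prev = np.concatenate([[1.0], surv[:-1]])
    return v, surv_prev - surv

def buckley_james(x, t_obs, delta, n_iter=30):
    """Simplified Buckley-James fit of E[Y] = b0 + b1*x under right censoring.

    x : covariate, t_obs : observed min(Y, C), delta : 1 if uncensored.
    """
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, t_obs, rcond=None)     # naive starting value
    for _ in range(n_iter):
        resid = t_obs - X @ beta
        r_sorted, jumps = km_jumps(resid, delta)
        y_star = t_obs.astype(float).copy()
        for i in np.where(delta == 0)[0]:
            tail = r_sorted > resid[i]                   # residual mass beyond the censoring point
            mass = jumps[tail].sum()
            if mass > 0:
                y_star[i] = X[i] @ beta + np.dot(jumps[tail], r_sorted[tail]) / mass
        beta_new, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        if np.max(np.abs(beta_new - beta)) < 1e-6:
            return beta_new
        beta = beta_new
    return beta

# Toy use; the true (intercept, slope) is (1, 2)
rng = np.random.default_rng(4)
n = 300
x = rng.uniform(0.0, 2.0, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
c = rng.uniform(2.0, 7.0, n)                             # censoring times
t_obs, delta = np.minimum(y, c), (y <= c).astype(float)
print(buckley_james(x, t_obs, delta))
```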

13.
Empirical likelihood for single-index models
The empirical likelihood method is especially useful for constructing confidence intervals or regions for the parameter of interest. This method has been extensively applied to linear regression and generalized linear regression models. In this paper, the empirical likelihood method for single-index regression models is studied. An estimated empirical log-likelihood approach for constructing the confidence region of the regression parameter is developed. An adjusted empirical log-likelihood ratio is proved to be asymptotically standard chi-squared. A simulation study indicates that, compared with a normal-approximation-based approach, the proposed method works better in terms of coverage probabilities and areas (lengths) of confidence regions (intervals).

14.
The paper presents a unified approach to local likelihood estimation for a broad class of nonparametric models, including, e.g., the regression, density, Poisson and binary response models. The method extends the adaptive weights smoothing (AWS) procedure introduced by Polzehl and Spokoiny (2000) in the context of image denoising. The main idea of the method is to describe the largest possible local neighborhood of every design point X_i in which the local parametric assumption is justified by the data. The method is especially powerful for model functions having large homogeneous regions and sharp discontinuities. The performance of the proposed procedure is illustrated by numerical examples for density estimation and classification. We also establish some remarkable nonasymptotic theoretical results on the properties of the new algorithm. These include a "propagation" property, which in particular yields root-n consistency of the resulting estimate in the homogeneous case. We also state an "oracle" result, which implies rate optimality of the estimate under the usual smoothness conditions, and a "separation" result, which explains the sensitivity of the method to structural changes.

15.
We consider a problem of nonparametric density estimation under shape restrictions, dealing with the case where the density belongs to a class of Lipschitz functions. Devroye [L. Devroye, A Course in Density Estimation, in: Progress in Probability and Statistics, vol. 14, Birkhäuser Boston Inc., Boston, MA, 1987] considered such classes of estimates as tailor-made estimates, in contrast, in some sense, to universally consistent estimates. In our framework we obtain the existence and uniqueness of the maximum likelihood estimate as well as its strong consistency. This NPMLE can be easily characterized, but it is not easy to compute. Some simpler approximations are also considered.

16.
Time series of counts have a wide variety of applications in real life. Analyzing time series of counts requires accommodating serial dependence, discreteness and overdispersion of the data. In this paper, we extend blockwise empirical likelihood (Kitamura, 1997 [15]) to the analysis of time series of counts under a regression setting. In particular, our contribution is the extension of Kitamura's (1997) [15] method to the analysis of nonstationary time series. Serial dependence among observations is treated nonparametrically using a blocking technique, and overdispersion in the count data is accommodated by specifying a variance-mean relationship. We establish consistency and asymptotic normality of the maximum blockwise empirical likelihood estimator. Simulation studies show that the method has good finite-sample performance. The method is also illustrated by analyzing two real data sets: monthly counts of poliomyelitis cases in the USA and daily counts of non-accidental deaths in Toronto, Canada.

17.
In this paper, we use an empirical likelihood method to construct confidence regions for stationary ARMA(p, q) models with infinite variance. An empirical log-likelihood ratio is derived from the estimating equation of the self-weighted LAD estimator. It is proved that the proposed statistic has an asymptotic standard chi-squared distribution. Simulation studies show that in small samples the empirical likelihood method performs better than the normal approximation of the LAD estimator in terms of coverage accuracy.
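A minimal sketch of a self-weighted LAD fit for an AR(1) model, the kind of estimator whose estimating equation underlies the empirical likelihood above; the particular weight function and the use of scipy's bounded scalar minimizer are illustrative assumptions, not the authors' estimator for general ARMA(p, q) models.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def self_weighted_lad_ar1(y):
    """Self-weighted LAD estimate of phi in the AR(1) model y_t = phi*y_{t-1} + e_t.

    The weight 1 / (1 + |y_{t-1}|)^2 down-weights extreme lagged values so the
    objective stays well behaved when the innovations have infinite variance.
    """
    y_lag, y_cur = y[:-1], y[1:]
    w = 1.0 / (1.0 + np.abs(y_lag)) ** 2
    objective = lambda phi: np.sum(w * np.abs(y_cur - phi * y_lag))
    res = minimize_scalar(objective, bounds=(-0.999, 0.999), method="bounded")
    return res.x

# Toy use with heavy-tailed (Cauchy) innovations; true phi = 0.5
rng = np.random.default_rng(5)
n, phi_true = 500, 0.5
y = np.zeros(n)
e = rng.standard_cauchy(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + e[t]
print(self_weighted_lad_ar1(y))
```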

18.
Partially linear errors-in-function models were proposed by Liang (2000), but inference for these models has not been systematically studied. This article proposes an empirical likelihood method to construct confidence regions for the parametric components. Under mild regularity conditions, the nonparametric version of Wilks' theorem is derived. Simulation studies show that the proposed empirical likelihood method provides narrower confidence regions and higher coverage probabilities than those based on the traditional normal approximation method.

19.
We present methods for handling errors-in-variables models. Kernel-based likelihood score estimating equation methods are developed for estimating the parameters of a conditional density. In particular, a semiparametric likelihood method is proposed to make sufficient use of the information in the data. The asymptotic distribution theory is derived. Small-sample simulations and a real data set are used to illustrate the proposed estimation methods.

20.
Principal component analysis (PCA) is one of the key techniques in functional data analysis. One important feature of functional PCA is that the estimated principal component curves need to be smoothed or regularized. Silverman's method for smoothed functional principal component analysis, owing to its theoretical and practical advantages, is an important approach when the sample curves are fully observed. However, the lack of knowledge about the theoretical properties of this method makes it difficult to generalize to the situation where the sample curves are observed only at discrete time points. In this paper, we first establish the existence of solutions of the successive optimization problems in this method. We then provide upper bounds for the bias parts of the estimation errors for both eigenvalues and eigenfunctions, and we prove functional central limit theorems for the variation parts of the estimation errors. As a corollary, we obtain the convergence rates of the estimators of the eigenvalues and eigenfunctions, where the rates depend on both the sample size and the smoothing parameters. Under some conditions on the convergence rates of the smoothing parameters, we also prove asymptotic normality of the estimators.
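A minimal sketch of Silverman-style smoothed functional PCA for curves observed on a common fine grid: the leading components solve a generalized eigenproblem in which a second-difference roughness penalty enters the metric. The discretization, the penalty construction and the smoothing parameter are illustrative assumptions, not the estimators analyzed in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def smoothed_fpca(curves, alpha, n_components=2):
    """Smoothed functional PCA for curves sampled on a common grid.

    curves : (n_curves, n_grid) array; alpha : smoothing parameter
    (alpha = 0 reduces to ordinary, unsmoothed PCA of the discretized curves).
    """
    n_curves, n_grid = curves.shape
    centered = curves - curves.mean(axis=0)
    S = centered.T @ centered / n_curves            # sample covariance operator
    D = np.diff(np.eye(n_grid), n=2, axis=0)        # second-difference operator
    M = np.eye(n_grid) + alpha * (D.T @ D)          # roughness-penalized metric
    vals, vecs = eigh(S, M)                         # generalized eigenproblem S v = lam M v
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order], vecs[:, order]

# Toy use: noisy curves generated from one sinusoidal component
rng = np.random.default_rng(6)
grid = np.linspace(0.0, 1.0, 100)
scores = rng.normal(size=(50, 1))
curves = scores * np.sin(2 * np.pi * grid) + rng.normal(scale=0.3, size=(50, 100))
vals, vecs = smoothed_fpca(curves, alpha=1e-2)
print(vals)
```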

