Similar Articles
20 similar articles found (search time: 46 ms)
1.
We analyze, in a regression setting, the link between a scalar response and a functional predictor by means of a Functional Generalized Linear Model. We first give a theoretical framework and then discuss identifiability of the model. The functional coefficient of the model is estimated via penalized likelihood with a spline approximation. The L2 rate of convergence of this estimator is given under a smoothness assumption on the functional coefficient. Heuristic arguments show how these rates may be improved in some particular frameworks.

2.
We consider the estimation of the support of a probability density function from i.i.d. observations. The estimator considered is a minimizer of a complexity-penalized excess mass criterion. We present a fast algorithm for the construction of the estimator, which is able to estimate supports consisting of disconnected regions. We prove that the estimator achieves minimax rates of convergence, up to a logarithmic factor, simultaneously over a scale of Hölder smoothness classes for the boundary of the support. The proof assumes a sharp boundary for the support.

3.
The censored linear regression model, also referred to as the accelerated failure time (AFT) model when the logarithm of the survival time is used as the response variable, is widely seen as an alternative to the popular Cox model when the assumption of proportional hazards is questionable. Buckley and James [Linear regression with censored data, Biometrika 66 (1979) 429-436] extended the least squares estimator to the semiparametric censored linear regression model in which the error distribution is completely unspecified. The Buckley-James estimator performs well in many simulation studies and examples, and, as Cox has pointed out, the direct interpretation of the AFT model is more attractive than that of the Cox model in practical situations. However, the application of the Buckley-James estimator has been limited in practice, mainly due to its elusive variance. In this paper, we use the empirical likelihood method to derive a new test and confidence interval based on the Buckley-James estimator of the regression coefficient. A standard chi-square distribution is used to calculate the P-value and the confidence interval. The proposed empirical likelihood method does not involve variance estimation, and it shows much better small-sample performance than some existing methods in our simulation studies.
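The chi-square calibration described above rests on Owen's empirical likelihood construction. As a minimal illustration of that machinery (for a scalar mean, not the paper's Buckley-James estimating equation; function names and data are hypothetical), the following sketch computes the -2 log empirical likelihood ratio:

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for a candidate mean mu.

    Solves sum_i z_i / (1 + lam * z_i) = 0 for the Lagrange multiplier
    lam, with z_i = x_i - mu, then returns 2 * sum_i log(1 + lam * z_i).
    """
    z = x - mu
    if z.max() <= 0 or z.min() >= 0:
        return np.inf  # mu lies outside the convex hull of the data
    eps = 1e-10
    lo = -1.0 / z.max() + eps  # keep every weight 1 + lam*z_i positive
    hi = -1.0 / z.min() - eps
    lam = brentq(lambda t: np.sum(z / (1.0 + t * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

# A 95% confidence interval collects all mu with statistic <= 3.84,
# the 0.95 quantile of chi-square with one degree of freedom.
x = np.array([1.1, 2.3, 0.8, 1.9, 2.5, 1.4, 2.0, 1.6])
stat_at_mean = el_log_ratio(x, x.mean())  # lam = 0 here, so statistic = 0
```

At the sample mean the multiplier is zero and the statistic vanishes; moving mu away from the mean drives the statistic up, which is what makes the chi-square cutoff usable without any variance estimate.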

4.
The empirical likelihood method is especially useful for constructing confidence intervals or regions for parameters of interest. Yet the technique cannot be directly applied to partially linear single-index models for longitudinal data because of the within-subject correlation. In this paper, a bias-corrected block empirical likelihood (BCBEL) method is suggested to study these models by accounting for the within-subject correlation. BCBEL has some desirable features: unlike normal-approximation-based methods for confidence regions, it avoids iterative estimation of the parameters and does not need a consistent estimator of the asymptotic covariance matrix. Because of the bias correction, the BCBEL ratio is asymptotically chi-squared, and hence it can be directly used to construct confidence regions for the parameters without the extra Monte Carlo approximation that is needed when bias correction is not applied. The proposed method can naturally be applied to pure single-index models and partially linear models for longitudinal data. Some simulation studies are carried out, and an example in epidemiology is given for illustration.

5.
This article is devoted to nonlinear approximation and estimation via piecewise polynomials built on partitions into dyadic rectangles. The approximation rate is studied over possibly inhomogeneous and anisotropic smoothness classes that contain Besov classes. Highlighting the statistical interest of such a result, adaptation in the minimax sense to both inhomogeneity and anisotropy is proved for a related multivariate density estimator. Moreover, the estimation procedure can be implemented with computational complexity merely linear in the sample size.

6.
In this paper we consider the estimation of the error distribution in a heteroscedastic nonparametric regression model with multivariate covariates. As estimator we consider the empirical distribution function of residuals, which are obtained from multivariate local polynomial fits of the regression and variance functions, respectively. Weak convergence of the empirical residual process to a Gaussian process is proved. We also consider various applications to testing model assumptions in nonparametric multiple regression. The resulting model tests are able to detect local alternatives that converge to zero at an n^{-1/2} rate, independent of the covariate dimension. We consider in detail a test for additivity of the regression function.
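The residual-based estimator above can be sketched in a simplified one-covariate form, using a local-constant (Nadaraya-Watson) fit in place of the paper's multivariate local polynomials; the bandwidth `h` and all names here are illustrative:

```python
import numpy as np

def nw_fit(x, y, grid, h):
    """Nadaraya-Watson (local constant) kernel regression estimate."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def residual_ecdf(x, y, h):
    """Empirical distribution function of standardized residuals.

    Simplified sketch: regression and variance functions are both
    estimated by local-constant fits with a common bandwidth h.
    """
    m_hat = nw_fit(x, y, x, h)                  # regression function
    s2_hat = nw_fit(x, (y - m_hat) ** 2, x, h)  # variance function
    eps_hat = (y - m_hat) / np.sqrt(s2_hat)     # standardized residuals
    return lambda t: np.mean(eps_hat[:, None] <= t, axis=0)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)
F_hat = residual_ecdf(x, y, h=0.1)
```

The returned step function `F_hat` is the object whose weak convergence (after centering and scaling) the abstract studies.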

7.
For estimating a rare event via the multivariate extreme value theory, the so-called tail dependence function has to be investigated (see [L. de Haan, J. de Ronde, Sea and wind: Multivariate extremes at work, Extremes 1 (1998) 7-45]). A simple, but effective estimator for the tail dependence function is the tail empirical distribution function, see [X. Huang, Statistics of Bivariate Extreme Values, Ph.D. Thesis, Tinbergen Institute Research Series, 1992] or [R. Schmidt, U. Stadtmüller, Nonparametric estimation of tail dependence, Scand. J. Stat. 33 (2006) 307-335]. In this paper, we first derive a bootstrap approximation for a tail dependence function with an approximation rate via the construction approach developed by [K. Chen, S.H. Lo, On a mapping approach to investigating the bootstrap accuracy, Probab. Theory Relat. Fields 107 (1997) 197-217], and then apply it to construct a confidence band for the tail dependence function. A simulation study is conducted to assess the accuracy of the bootstrap approach.

8.
We introduce nonparametric estimators of the autocovariance of a stationary random field. One of our estimators has the property that it is itself an autocovariance. This feature enables the estimator to be used as the basis of simulation studies, such as those necessary when constructing bootstrap confidence intervals for unknown parameters. Unlike estimators proposed recently by other authors, ours do not require assumptions such as isotropy or monotonicity. Indeed, like nonparametric function estimators considered more widely in the context of curve estimation, our approach demands only smoothness and tail conditions on the underlying curve or surface (here, the autocovariance), and moment and mixing conditions on the random field. We show that by imposing the condition that the estimator be a covariance function we actually reduce the numerical value of the integrated squared error.

9.
A nonparametric estimate f* of an unknown distribution density f ∈ W is called locally minimax if it is minimax simultaneously for all not-too-small neighborhoods W_g, g ∈ W', where W' is some dense subset of W. Radavičius and Rudzkis proved the existence of such an estimate under some general conditions. However, the construction of the estimate is rather complicated. In this paper, a new estimate is proposed. This estimate is locally minimax under some additional assumptions which usually hold for orthonormal bases of algebraic polynomials, and it is almost as simple as the linear projection estimate. Thus, it takes a form convenient for the construction of an adaptive estimator, which does not use a priori information about the smoothness of the density. The adaptive estimation problem is briefly discussed, and fitting an unknown density by Jacobi polynomials is investigated more explicitly.

10.
This paper proposes a technique, termed censored average derivative estimation (CADE), for estimating the unknown regression function in nonparametric censored regression models with randomly censored samples. The CADE procedure involves three stages: first, transform the censored data into synthetic data or pseudo-responses using the inverse probability of censoring weighted (IPCW) technique; second, estimate the average derivatives of the regression function; and finally, approximate the unknown regression function by an estimator of univariate regression, using techniques for one-dimensional nonparametric censored regression. CADE provides an easily implemented methodology for modelling the association between the response and a set of predictor variables when data are randomly censored, and it provides a technique for "dimension reduction" in nonparametric censored regression models. The average derivative estimator is shown to be root-n consistent and asymptotically normal. The estimator of the unknown regression function is a local linear kernel regression estimator and is shown to converge at the optimal one-dimensional nonparametric rate. Monte Carlo experiments show that the proposed estimators work quite well.
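The first stage, the IPCW transformation, can be sketched as follows, assuming no ties among observed times; `censoring_survival` is a hypothetical helper implementing the Kaplan-Meier estimate of the censoring distribution:

```python
import numpy as np

def censoring_survival(t, y, delta):
    """Kaplan-Meier estimate G(t) = P(C > t) of the censoring survival
    function, treating a censored observation (delta == 0) as the event."""
    order = np.argsort(y)
    y_s = y[order]
    cens = 1.0 - delta[order]                    # censoring indicators
    at_risk = len(y) - np.arange(len(y))         # n, n-1, ..., 1
    G_steps = np.cumprod(1.0 - cens / at_risk)   # G just after each y_s[j]
    idx = np.searchsorted(y_s, t, side="right")  # observed times <= t
    return np.concatenate(([1.0], G_steps))[idx]

def ipcw_pseudo_responses(y, delta):
    """Synthetic responses Y* = delta * Y / G(Y-), unbiased for the
    regression function under random censoring.  G(Y-) is approximated
    by evaluating G just below each observed time; ties are ignored."""
    G_minus = censoring_survival(y - 1e-10, y, delta)
    return delta * y / G_minus
```

Censored observations get pseudo-response zero, while uncensored ones are inflated by the inverse censoring survival, so ordinary nonparametric regression can then be run on the pseudo-responses.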

11.
Estimation of a quadratic functional of a function observed in the Gaussian white noise model is considered. A data-dependent method for choosing the amount of smoothing is given; the method is based on comparing certain quadratic estimators with each other. It is shown that the method is asymptotically sharp or nearly sharp adaptive simultaneously over the "regular" and "irregular" regions. We consider ℓ_p bodies and construct bounds for the risk of the estimator which show that for p = 4 the estimator is exactly optimal and, for example, when p ∈ [3, 100], the upper bound is at most 1.055 times the lower bound. We show the connection of the estimator to the theory of optimal recovery. The estimator is a calibration of an estimator which is nearly minimax optimal among quadratic estimators. The writing of this article was financed by the Deutsche Forschungsgemeinschaft under project MA1026/6-2, CIES, France, and the Jenny and Antti Wihuri Foundation.

12.
We propose in this article a unified approach to functional estimation problems based on possibly censored data. The general framework that we define allows us, for instance, to handle density and hazard rate estimation based on randomly right-censored data, or regression. Given a collection of histograms, our estimation procedure consists in selecting the best histogram in that collection from the data, by minimizing a penalized least-squares-type criterion. For a general collection of histograms, we obtain nonasymptotic oracle-type inequalities. We then consider the collection of histograms built on partitions into dyadic intervals, a choice inspired by an approximation result due to DeVore and Yu. In that case, our estimator is also adaptive in the minimax sense over a wide range of smoothness classes that contain functions of inhomogeneous smoothness. Moreover, its computational complexity is only linear in the size of the sample.

13.
In this paper, we use an empirical likelihood method to construct confidence regions for stationary ARMA(p,q) models with infinite variance. An empirical log-likelihood ratio is derived from the estimating equation of the self-weighted LAD estimator. It is proved that the proposed statistic has an asymptotic standard chi-squared distribution. Simulation studies show that in small samples the empirical likelihood method outperforms the normal approximation of the LAD estimator in terms of coverage accuracy.

14.
The censored single-index model provides a flexible way of modelling the association between a response and a set of predictor variables when the response variable is randomly censored and the link function is unknown. It offers a technique for "dimension reduction" in semiparametric censored regression models and generalizes the existing accelerated failure time models for survival analysis. This paper proposes two methods for the estimation of single-index models with randomly censored samples. We first transform the censored data into synthetic data or pseudo-responses unbiasedly, and then obtain estimates of the index coefficients by the rOPG or rMAVE procedures of Xia (2006) [1]. Finally, we estimate the unknown nonparametric link function using techniques for univariate censored nonparametric regression. The estimators of the index coefficients are shown to be root-n consistent and asymptotically normal. In addition, the estimator of the unknown regression function is a local linear kernel regression estimator and achieves the same efficiency as if the index parameters were known. Monte Carlo simulations are conducted to illustrate the proposed methodologies.

15.
The paper presents a unified approach to local likelihood estimation for a broad class of nonparametric models, including, e.g., the regression, density, Poisson, and binary response models. The method extends the adaptive weights smoothing (AWS) procedure introduced by Polzehl and Spokoiny (2000) in the context of image denoising. The main idea of the method is to describe the largest possible local neighborhood of every design point X_i in which the local parametric assumption is justified by the data. The method is especially powerful for model functions having large homogeneous regions and sharp discontinuities. The performance of the proposed procedure is illustrated by numerical examples for density estimation and classification. We also establish some remarkable nonasymptotic theoretical results on the properties of the new algorithm. These include a "propagation" property, which in particular yields root-n consistency of the resulting estimate in the homogeneous case; an "oracle" result, which implies rate optimality of the estimate under the usual smoothness conditions; and a "separation" result, which explains the sensitivity of the method to structural changes.

16.
The copula, as an effective way of modeling dependence, has become more or less a standard tool in risk management, and a wide range of applications of copula models appears in the literature of economics, econometrics, insurance, finance, etc. How to estimate and test a copula plays an important role in practice, and both parametric and nonparametric methods have been studied in the literature. In this paper, we focus on interval estimation and propose an empirical likelihood based confidence interval for a copula. A simulation study and a real data analysis are conducted to compare the finite-sample behavior of the proposed empirical likelihood method with that of the bootstrap method based on either the empirical copula estimator or the kernel smoothing copula estimator.
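The empirical copula estimator mentioned above is simple to compute from ranks; a minimal sketch, assuming no ties in the data:

```python
import numpy as np

def empirical_copula(x, y):
    """Empirical copula C_n(u, v) built from the rank transforms
    U_i = rank(X_i)/n and V_i = rank(Y_i)/n (no ties assumed)."""
    n = len(x)
    U = (np.argsort(np.argsort(x)) + 1) / n  # ranks of x, scaled to (0, 1]
    V = (np.argsort(np.argsort(y)) + 1) / n
    return lambda u, v: np.mean((U <= u) & (V <= v))
```

For perfectly comonotone data C_n(u, u) tracks u (the upper Fréchet bound), while for countermonotone data C_n(0.5, 0.5) is zero, so the estimator reproduces the two extreme dependence structures exactly.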

17.
We give expansions for the unbiased estimator of a parametric function of the mean vector in a multivariate natural exponential family with simple quadratic variance function. The expansions are given in terms of a system of multivariate orthogonal polynomials with respect to the density of the sample mean. We study some limit properties of the system of orthogonal polynomials and show that these properties are useful for establishing the limit distribution of unbiased estimators.

18.
We propose a unified strategy for estimator construction, selection, and performance assessment in the presence of censoring. This approach is entirely driven by the choice of a loss function for the full (uncensored) data structure and can be stated in terms of the following three main steps. (1) First, define the parameter of interest as the minimizer of the expected loss, or risk, for a full data loss function chosen to represent the desired measure of performance; map the full data loss function into an observed (censored) data loss function having the same expected value and leading to an efficient estimator of this risk. (2) Next, construct candidate estimators based on the loss function for the observed data. (3) Then, apply cross-validation to estimate risk based on the observed data loss function and to select an optimal estimator among the candidates. A number of common estimation procedures follow this approach in the full data situation, but depart from it when faced with the obstacle of evaluating the loss function for censored observations. Here, we argue that one can, and should, also adhere to this estimation road map in censored data situations.

Tree-based methods, where the candidate estimators in Step 2 are generated by recursive binary partitioning of a suitably defined covariate space, provide a striking example of the chasm between estimation procedures for full data and censored data (e.g., regression trees as in CART for uncensored data and adaptations to censored data). Common approaches for regression trees bypass the risk estimation problem for censored outcomes by altering the node splitting and tree pruning criteria in ways that are specific to right-censored data. This article describes an application of our unified methodology to tree-based estimation with censored data. The approach encompasses univariate outcome prediction, multivariate outcome prediction, and density estimation, simply by defining a suitable loss function for each of these problems. The proposed method for tree-based estimation with censoring is evaluated in a simulation study and in the analysis of CGH copy number and survival data from breast cancer patients.
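Step 3 of the road map can be sketched in the full-data case with squared-error loss (the censored version would replace this loss by its IPCW-weighted observed-data counterpart; all names and the polynomial candidates here are illustrative):

```python
import numpy as np

def cv_risk(x, y, fit, k=5, seed=0):
    """K-fold cross-validated risk of a fitting procedure under
    squared-error loss (the full-data case of Step 3)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    losses = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        pred = fit(x[train], y[train], x[fold])
        losses.append(np.mean((y[fold] - pred) ** 2))
    return float(np.mean(losses))

def poly_fit(degree):
    """Candidate estimator: polynomial regression of a given degree."""
    def fit(x_tr, y_tr, x_te):
        return np.polyval(np.polyfit(x_tr, y_tr, degree), x_te)
    return fit

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 1 + 2 * x ** 2 + 0.1 * rng.standard_normal(200)
risks = {d: cv_risk(x, y, poly_fit(d)) for d in (0, 1, 2, 6)}
best = min(risks, key=risks.get)  # candidate with smallest estimated risk
```

The point of the road map is that only the loss function changes between the full-data and censored settings; the construction and selection machinery above stays the same.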

19.
Quantile regression for longitudinal data
The penalized least squares interpretation of the classical random effects estimator suggests a possible way forward for quantile regression models with a large number of "fixed effects". The introduction of a large number of individual fixed effects can significantly inflate the variability of the estimates of other covariate effects; regularization, or shrinkage of these individual effects toward a common value, can help to mitigate this inflation. A general approach to estimating quantile regression models for longitudinal data is proposed, employing ℓ1 regularization methods. Sparse linear algebra and interior point methods for solving large linear programs are essential computational tools.
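The linear-programming formulation underlying such estimators can be sketched for a single quantile without the fixed-effects penalty (a minimal illustration, not the paper's penalized longitudinal estimator):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Quantile regression via its linear-programming form.

    min_b sum_i rho_tau(y_i - x_i'b) is rewritten with nonnegative
    residual parts u, v and b = b_plus - b_minus:
        min tau*1'u + (1-tau)*1'v  s.t.  X(b_plus - b_minus) + u - v = y.
    (The ell_1-penalized fixed-effects version would add a penalty
    column per individual effect; that is omitted in this sketch.)
    """
    n, p = X.shape
    c = np.concatenate([np.zeros(2 * p),
                        tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, method="highs")
    return res.x[:p] - res.x[p:2 * p]

# Median (tau = 0.5) of an intercept-only model is the sample median,
# which stays at 3 despite the outlier at 100:
X = np.ones((5, 1))
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
beta = quantile_regression(X, y, 0.5)
```

The check loss is piecewise linear, which is exactly why the sparse linear algebra and interior point LP methods mentioned in the abstract are the natural computational tools.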

20.
We consider the problem of estimating the support of a multivariate density based on contaminated data. We introduce an estimator which achieves consistency under weak conditions on the target density and its support, under the assumption of a known error density. In particular, no smoothness or sharpness assumptions are needed for the target density. Furthermore, we derive an iterative and easily computable modification of our estimator and study its rates of convergence in a special case; a numerical simulation is given.
