Retrieved 20 similar documents; search took 15 ms.
1.
We consider nonparametric estimation of a smooth function of one variable. Global selection procedures cannot sufficiently account for local sparseness of the covariate, nor can they adapt to local curvature of the regression function. We propose a new method for selecting local smoothing parameters that takes sparseness into account and adapts to local curvature. A Bayesian-type argument provides an initial smoothing parameter which adapts to the local sparseness of the covariate and provides the basis for local bandwidth selection procedures that further adjust the bandwidth according to the local curvature of the regression function. Simulation evidence indicates that the proposed method can reduce both pointwise mean squared error and integrated mean squared error.
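The abstract does not give the authors' Bayesian bandwidth formula, but the core idea of a sparseness-adaptive local bandwidth can be sketched with a simple nearest-neighbour rule: the window widens automatically where the design points are sparse. The k-nearest-neighbour rule, the Gaussian kernel, and all constants below are illustrative assumptions, not the paper's method.

```python
import math

def knn_bandwidth(xs, x0, k=5):
    """Local bandwidth = distance to the k-th nearest covariate, so the
    window widens automatically where the design is sparse."""
    d = sorted(abs(x - x0) for x in xs)
    return d[min(k, len(d) - 1)]

def local_nw(xs, ys, x0, k=5):
    """Nadaraya-Watson estimate with a sparseness-adaptive Gaussian kernel."""
    h = knn_bandwidth(xs, x0, k)
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

xs = [i / 20 for i in range(21)]              # equally spaced toy design
ys = [math.sin(2 * math.pi * x) for x in xs]
print(round(local_nw(xs, ys, 0.25, k=3), 3))
```

A curvature-adaptive step, as in the paper, would further shrink h where the estimated second derivative is large; that refinement is omitted here.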
2.
Peter Hall, Journal of Multivariate Analysis, 1990, 32(2)
We describe a bootstrap method for estimating mean squared error and smoothing parameter in nonparametric problems. The method involves using a resample of smaller size than the original sample. There are many applications, which are illustrated using the special cases of nonparametric density estimation, nonparametric regression, and tail parameter estimation.
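The core device of the abstract, resampling fewer than n observations to estimate an estimator's mean squared error, can be sketched as follows. The choice of estimator (the sample median), the resample size m, and the number of replications are illustrative, and the rescaling back to sample size n that the full method requires is omitted.

```python
import random, statistics

random.seed(0)

def moon_bootstrap_mse(data, estimator, m, b_reps=500):
    """m-out-of-n bootstrap: estimate the mean squared error of `estimator`
    by drawing resamples of size m < n (with replacement) and comparing
    each resample estimate to the full-sample estimate."""
    center = estimator(data)
    total = 0.0
    for _ in range(b_reps):
        resample = [random.choice(data) for _ in range(m)]
        total += (estimator(resample) - center) ** 2
    return total / b_reps

data = [random.gauss(0, 1) for _ in range(200)]
mse_small = moon_bootstrap_mse(data, statistics.median, m=50)
print(mse_small)
```

In bandwidth selection, one would minimize such an MSE estimate over candidate smoothing parameters, with m chosen smaller than n so the bootstrap bias approximation is valid.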
3.
This paper addresses the problem of modelling nonstationary time series from a finite number of observations. Problems encountered with time-varying parameters in regression-type models led to smoothing techniques. These smoothing methods rely fundamentally on the finiteness of the error variance; when this requirement fails, particularly when the error distribution is heavy-tailed, the existing smoothing methods due to [1] are no longer optimal. In this paper, we propose a penalized minimum dispersion method for time-varying parameter estimation when the regression model is generated by an infinite-variance stable process with characteristic exponent α ∈ (1, 2). Recursive estimates are derived, and it is shown that the estimates for a nonstationary process with normal errors arise as a special case.
4.
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
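The inverse probability weighting device behind the third estimator can be sketched on a toy mean-estimation problem rather than the full hazard setting: each censoring indicator is observed only with a probability depending on the observed time (missing at random), and observed cases are up-weighted by the inverse of that probability. The observation-probability model and all constants below are invented for illustration.

```python
import random

random.seed(1)

# Toy inverse probability weighting (IPW): the censoring indicator delta is
# observed only with probability p_obs(t); observed cases are up-weighted by
# 1 / p_obs(t), which removes the selection bias under MAR.
n = 2000
total = 0.0
for _ in range(n):
    t = random.random()                         # toy death/censoring time
    delta = 1 if random.random() < 0.7 else 0   # true indicator, P(delta=1)=0.7
    p_obs = 0.4 + 0.5 * t                       # observation probability (MAR)
    if random.random() < p_obs:                 # indicator actually observed
        total += delta / p_obs                  # IPW-corrected contribution
ipw_estimate = total / n                        # estimates E[delta] = 0.7
print(round(ipw_estimate, 2))
```

In the paper's setting the same weights multiply kernel terms in the hazard estimator, with p_obs itself estimated from the data.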
5.
Lina Liao, Cheolwoo Park, Jan Hannig, Kee-Hoon Kang, Journal of Computational and Graphical Statistics, 2016, 25(4):1041-1056
This work revisits autocovariance function estimation, a fundamental problem in statistical inference for time series. We convert the function estimation problem into constrained penalized regression with a generalized penalty that provides flexible and accurate estimation, and we study the asymptotic properties of the proposed estimator. In the case of a nonzero-mean time series, we apply the penalized regression technique to the differenced series, which does not require a separate detrending procedure. In penalized regression the selection of tuning parameters is critical, and we propose four different data-driven criteria to determine them. A simulation study shows the effectiveness of the tuning parameter selection and that the proposed approach is superior to three existing methods. We also briefly discuss the extension of the proposed approach to interval-valued time series. Supplementary materials for this article are available online.
6.
Suojin Wang, Annals of the Institute of Statistical Mathematics, 1995, 47(1):65-80
In this paper we develop the technique of a generalized rescaling in the smoothed bootstrap, extending Silverman and Young's idea of shrinking. Unlike most existing methods of smoothing, with a proper choice of the rescaling parameter the rescaled smoothed bootstrap method produces estimators that have the asymptotic minimum mean (integrated) squared error, asymptotically improving existing bootstrap methods, both smoothed and unsmoothed. In fact, the new method includes existing smoothed bootstrap methods as special cases. This unified approach is investigated in the problems of estimation of global and local functionals and kernel density estimation. The emphasis of this investigation is on theoretical improvements which in some cases offer practical potential.
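The Silverman-Young shrinking that the abstract generalizes can be sketched as follows: resample with replacement, add kernel noise of bandwidth h, then shrink towards the mean so the variance is not inflated by h². The bandwidth and sample sizes are illustrative, and this is only the special case the paper starts from, not its generalized rescaling.

```python
import random, statistics

random.seed(2)

def smoothed_bootstrap_sample(data, h):
    """Smoothed bootstrap draw with Silverman-Young style rescaling:
    resample with replacement, add Gaussian kernel noise of bandwidth h,
    then shrink towards the mean so the draw's variance matches the data
    variance instead of being inflated by h**2."""
    n = len(data)
    mean = sum(data) / n
    var = statistics.pvariance(data)
    scale = (var / (var + h * h)) ** 0.5      # shrinkage factor
    return [mean + scale * (random.choice(data) + random.gauss(0, h) - mean)
            for _ in range(n)]

data = [random.gauss(0, 1) for _ in range(500)]
boot = smoothed_bootstrap_sample(data, h=0.5)
```

The paper's generalized rescaling replaces this fixed variance-matching factor with a tunable rescaling parameter chosen to minimize asymptotic mean (integrated) squared error.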
7.
Based on the weekly closing price of the Shenzhen Integrated Index, this article studies the volatility of the Shenzhen stock market using three different models: Logistic, AR(1), and AR(2). The time-varying parameters of the Logistic regression model are estimated using both the index smoothing method and the time-varying parameter estimation method. The AR(1) and AR(2) models are built on the zero-mean series of the weekly closing price and the zero-mean series of its volatility rate, based on the analysis of the zero-mean series of the weekly closing price. Six common statistical measures of prediction error are used to assess the forecasts: mean error (ME), mean absolute error (MAE), root mean squared error (RMSE), mean absolute percentage error (MAPE), Akaike's information criterion (AIC), and the Bayesian information criterion (BIC). The investigation shows that the AR(1) model gives the best predictions, while the AR(2) model yields predictions intermediate between those of the AR(1) model and the Logistic regression model.
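The first four of the six comparison measures are simple functions of the forecast errors and can be computed directly; a minimal sketch follows (the toy actual/predicted values are invented, and AIC/BIC are omitted since they require the fitted model's likelihood and parameter count rather than the errors alone).

```python
import math

def forecast_errors(actual, pred):
    """ME, MAE, RMSE and MAPE for a set of forecasts."""
    e = [a - p for a, p in zip(actual, pred)]
    n = len(e)
    me = sum(e) / n                                   # mean error (bias)
    mae = sum(abs(x) for x in e) / n                  # mean absolute error
    rmse = math.sqrt(sum(x * x for x in e) / n)       # root mean squared error
    mape = 100 * sum(abs(x / a) for x, a in zip(e, actual)) / n  # in percent
    return me, mae, rmse, mape

actual = [100.0, 102.0, 101.0, 103.0]   # toy weekly closing prices
pred   = [ 99.0, 103.0, 101.0, 101.0]   # toy one-step forecasts
me, mae, rmse, mape = forecast_errors(actual, pred)
print(me, mae, rmse)
```

ME detects systematic bias, while RMSE penalizes large misses more heavily than MAE; comparing all of them, as the article does, guards against one metric's blind spots.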
8.
Tristan Senga Kiessé, Nabil Zougab, Célestin C. Kokonendji, Computational Statistics, 2016, 31(1):189-206
This work takes advantage of semiparametric modelling, which in many situations significantly improves the estimation accuracy of the purely nonparametric approach. Here, for semiparametric estimation of the probability mass function (pmf) of count data and of an unknown count regression function (crf), the kernel used is binomial and bandwidth selection is investigated through Bayesian approaches. Local and global Bayes bandwidth approaches are used to establish data-driven selection procedures in the semiparametric framework. With conjugate beta prior distributions on the smoothing parameter and under squared-error loss, the Bayes estimate for the pmf is obtained in closed form; this is not available for the crf, which is computed by the Markov chain Monte Carlo technique. Simulation studies demonstrate that both proposed methods perform better than the classical cross-validation procedures; in particular, smoothing quality and execution times are improved. All applications are made on real data sets.
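The binomial kernel mentioned in the abstract can be sketched for the pmf part. As I recall it from the discrete associated-kernel literature, the kernel at target count x with bandwidth h ∈ (0, 1] is a Binomial(x + 1, (x + h)/(x + 1)) pmf; treat that exact parametrization, and the toy sample below, as assumptions. The Bayesian bandwidth selection of the paper is not reproduced here.

```python
from math import comb

def binomial_kernel(x, h, y):
    """Binomial kernel at target count x with bandwidth h in (0, 1]:
    the Binomial(x + 1, (x + h) / (x + 1)) pmf evaluated at y.
    (Parametrization stated from memory of the discrete associated-kernel
    literature; treat it as an assumption.)"""
    m, p = x + 1, (x + h) / (x + 1)
    if not 0 <= y <= m:
        return 0.0
    return comb(m, y) * p ** y * (1 - p) ** (m - y)

def pmf_estimate(sample, x, h):
    """Nonparametric pmf estimate at count x: average kernel weight of
    the observed counts around the target."""
    return sum(binomial_kernel(x, h, xi) for xi in sample) / len(sample)

sample = [0, 1, 1, 2, 2, 2, 3, 4]   # toy count data
print(round(pmf_estimate(sample, 2, h=0.1), 3))
```

As h → 0 the kernel concentrates near x, so small bandwidths interpolate the empirical frequencies while larger ones smooth across neighbouring counts.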
9.
We focus on nonparametric multivariate regression function estimation by locally weighted least squares. The asymptotic behavior for a sequence of error processes indexed by bandwidth matrices is derived. We discuss feasible data-driven consistent estimators minimizing asymptotic mean squared error or efficient estimators reducing asymptotic bias at points where opposite sign curvatures of the regression function are present in different directions.
10.
In this paper, we consider composite quantile regression for the partial functional linear regression model with polynomial spline approximation. Under some mild conditions, the convergence rates of the estimators and of the mean squared prediction error, and the asymptotic normality of the parameter vector, are obtained. Simulation studies demonstrate that the proposed estimation method is robust and works much better than the least-squares based method when there are outliers in the dataset or the random errors follow heavy-tailed distributions. Finally, we apply the proposed methodology to a spectroscopic data set to illustrate its usefulness in practice.
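Composite quantile regression pools the check (pinball) loss over several quantile levels, with a common slope and a separate intercept per level; that is what makes it robust to heavy tails. A crude scalar sketch, fitted by grid search rather than the paper's spline-based functional estimator, and with invented data:

```python
def pinball(u, tau):
    """Check (pinball) loss for quantile level tau."""
    return u * (tau - (1 if u < 0 else 0))

def cqr_slope(xs, ys, taus, grid):
    """Composite quantile regression: one common slope, a separate
    intercept per quantile level, fitted here by a crude grid search."""
    best = None
    for b in grid:
        loss = 0.0
        for tau in taus:
            resid = sorted(y - b * x for x, y in zip(xs, ys))
            a = resid[int(tau * (len(resid) - 1))]   # tau-quantile intercept
            loss += sum(pinball(y - a - b * x, tau) for x, y in zip(xs, ys))
        if best is None or loss < best[0]:
            best = (loss, b)
    return best[1]

xs = list(range(20))
ys = [2.0 * x + ((-1) ** x) * 3.0 for x in xs]   # heavy, symmetric "noise"
grid = [i / 10 for i in range(0, 41)]            # candidate slopes 0.0 … 4.0
print(cqr_slope(xs, ys, taus=[0.25, 0.5, 0.75], grid=grid))
```

Averaging over several quantile levels stabilizes the slope estimate even when no single conditional quantile would, which is the mechanism behind the robustness claims in the abstract.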
11.
Ying YANG, Acta Mathematica Sinica (English Series), 2006, 22(5):1565-1582
An M-cross-validation criterion is proposed for selecting the smoothing parameter in a nonparametric median regression model. A uniform weak convergence rate for the M-cross-validated local median estimate is derived, and upper and lower bounds on the smoothing parameter selected by the proposed criterion are established. The main contribution is a drastic difference from the classical L2- and L1-cross-validation techniques, which lead only to consistency in an average sense. These results are novel and nontrivial from both mathematical and statistical points of view, and they offer practitioners the possibility of substituting maximum deviation for average deviation when evaluating the performance of the data-driven technique.
12.
Summary. It has been shown that local linear smoothing possesses a variety of very attractive properties, not least being its mean square performance. However, such results typically refer only to asymptotic mean squared error, meaning the mean squared error of the asymptotic distribution, and in fact, the actual mean squared error is often infinite. See Seifert and Gasser (1996). This difficulty may be overcome by shrinking the local linear estimator towards another estimator with bounded mean square. However, that approach requires information about the size of the shrinkage parameter. From at least a theoretical viewpoint, very little is known about the effects of shrinkage. In particular, it is not clear how small the shrinkage parameter may be chosen without affecting first-order properties, or whether infinitely supported kernels such as the Gaussian require shrinkage in order to achieve first-order optimal performance. In the present paper we provide concise and definitive answers to such questions, in the context of general ridged and shrunken local linear estimators. We produce necessary and sufficient conditions on the size of the shrinkage parameter that ensure the traditional mean squared error formula. We show that a wide variety of infinitely-supported kernels, with tails even lighter than those of the Gaussian kernel, do not require any shrinkage at all in order to achieve traditional first-order optimal mean square performance.
Received: 22 May 1995 / In revised form: 23 January 1997
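A minimal sketch of a ridged local linear estimator, the object the abstract studies: a small ridge added to the weighted second design moment keeps the estimator finite even in sparse or one-sided designs. The Gaussian weights and the default ridge constant are illustrative choices, not the paper's prescriptions.

```python
import math

def local_linear_ridged(xs, ys, x0, h, ridge=1e-8):
    """Local linear fit at x0 with Gaussian weights. The `ridge` term is
    added to the weighted second design moment so the denominator, and
    hence the estimator, stays finite; the default 1e-8 is illustrative."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    d = [x - x0 for x in xs]
    s0 = sum(w)
    s1 = sum(wi * di for wi, di in zip(w, d))
    s2 = sum(wi * di * di for wi, di in zip(w, d)) + ridge
    num = sum(wi * (s2 - s1 * di) * yi for wi, di, yi in zip(w, d, ys))
    return num / (s0 * s2 - s1 * s1)

xs = [i / 10 for i in range(11)]
ys = [3 * x + 1 for x in xs]                     # exactly linear data
print(local_linear_ridged(xs, ys, 0.5, h=0.2))   # ≈ 2.5: linear data reproduced
```

Because the ridge enters numerator and denominator consistently, exactly linear data are still reproduced exactly; the paper's question is how large the ridge may be before first-order mean squared error properties change.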
13.
Statistics & Probability Letters, 1988, 6(5):349-355
For the estimation of variance components in the one way random effects models, we propose some estimators which avoid negative and zero estimates of the variance component, a well-known problem with customary estimators such as the maximum likelihood or the restricted maximum likelihood estimators. The proposed estimators are shown to have lower mean squared error than customary estimators over a large range of the parameter space. This is also exhibited in a Monte Carlo study. Extensions of the proposed procedure to more complex situations are also discussed.
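For context, the customary ANOVA estimators whose zero/negative estimates motivate the paper can be sketched for the balanced one-way model: the between-group mean square minus the within-group mean square can be negative, and the usual truncation max(0, ·) then produces exactly the zero estimates the proposed estimators avoid. The toy data are invented.

```python
def anova_variance_components(groups):
    """Customary one-way random-effects ANOVA estimators (balanced case):
    sigma2_e = within mean square (MSW); sigma2_a = (MSB - MSW) / n, which
    can be negative, and the usual fix max(0, .) yields the zero estimates
    the paper's proposal is designed to avoid."""
    k = len(groups)                    # number of groups
    n = len(groups[0])                 # observations per group (balanced)
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    return msw, max(0.0, (msb - msw) / n)

groups = [[5.1, 4.9, 5.0], [6.2, 6.0, 6.1], [4.0, 4.1, 3.9]]
msw, sigma2_a = anova_variance_components(groups)
print(round(msw, 4), round(sigma2_a, 4))
```

When MSB < MSW the truncated estimate is exactly zero; estimators that instead shrink smoothly away from zero are the kind of modification the abstract proposes.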
14.
Xiaohui Yuan, Tianqing Liu, Nan Lin, Baoxue Zhang, Journal of Multivariate Analysis, 2010, 101(10):2420-2433
Many statistical models, e.g. regression models, can be viewed as conditional moment restrictions when no distributional assumptions are placed on the error term. For such models, several estimators that achieve the semiparametric efficiency bound have been proposed. In many studies, however, auxiliary information is available in the form of unconditional moment restrictions, and we also allow for missing responses. We propose the combined empirical likelihood (CEL) estimator to incorporate such auxiliary information and improve the estimation efficiency of conditional moment restriction models. We show that, when responses are strongly ignorably missing at random, the CEL estimator achieves better efficiency than previous estimators by exploiting the auxiliary information. Based on the asymptotic properties of the CEL estimator, we also develop Wilks-type tests and corresponding confidence regions for the model parameter and the mean response. Since kernel smoothing is used, the CEL method may have difficulty with high-dimensional covariates. In such situations, we propose an instrumental variable-based empirical likelihood (IVEL) method to handle the problem. The merits of the CEL and IVEL estimators are further illustrated through simulation studies.
15.
We consider the kernel estimation of a multivariate regression function at a point. Theoretical choices of the bandwidth are possible for attaining minimum mean squared error or for local scaling, in the sense of asymptotic distribution. However, these choices are not available in practice. We follow the approach of Krieger and Pickands (Ann. Statist. 9 (1981) 1066–1078) and Abramson (J. Multivariate Anal. 12 (1982) 562–567) in constructing adaptive estimates after demonstrating the weak convergence of some error process. As consequences, efficient data-driven consistent estimation is feasible, and data-driven local scaling is also feasible. In the latter instance, nearest-neighbor-type estimates and variance-stabilizing estimates are obtained as special cases.
16.
17.
European Journal of Operational Research, 2020, 280(1):351-364
Researchers rely on the distance function to model multiple product production using multiple inputs. A stochastic directional distance function (SDDF) allows for noise in potentially all input and output variables. Yet, when estimated, the direction selected will affect the functional estimates because deviations from the estimated function are minimized in the specified direction. Specifically, the parameters of the parametric SDDF are point identified when the direction is specified; we show that the parameters of the parametric SDDF are set identified when multiple directions are considered. Further, the set of identified parameters can be narrowed via data-driven approaches to restrict the directions considered. We demonstrate a similar narrowing of the identified parameter set for a shape-constrained nonparametric method, where the shape constraints impose standard features of a cost function such as monotonicity and convexity. Our Monte Carlo simulation studies reveal significant improvements, as measured by out-of-sample radial mean squared error, in functional estimates when we use a directional distance function with an appropriately selected direction and the errors are uncorrelated across variables. We show that these benefits increase as the correlation in error terms across variables increases. This correlation is a type of endogeneity that is common in production settings. From our Monte Carlo simulations we conclude that selecting a direction that is approximately orthogonal to the estimated function in the central region of the data gives significantly better estimates relative to the directions commonly used in the literature. For practitioners, our results imply that selecting a direction vector that has non-zero components for all variables that may have measurement error provides a significant improvement in the estimator's performance.
We illustrate these results using cost and production data from samples of approximately 500 US hospitals per year operating in 2007, 2008, and 2009, respectively, and find that the shape-constrained nonparametric methods provide a significant increase in flexibility over second-order local approximation parametric methods.
18.
Local polynomial methods hold considerable promise for boundary estimation, where they offer unmatched flexibility and adaptivity. Most rival techniques provide only a single order of approximation; local polynomial approaches allow any order desired. Their more conventional rivals, for example high-order kernel methods in the context of regression, do not have attractive versions in the case of boundary estimation. However, the adoption of local polynomial methods for boundary estimation is inhibited by lack of knowledge about their properties, in particular about the manner in which they are influenced by bandwidth; and by the absence of techniques for empirical bandwidth choice. In the present paper we detail the way in which bandwidth selection determines mean squared error of local polynomial boundary estimators, showing that it is substantially more complex than in regression settings. For example, asymptotic formulae for bias and variance contributions to mean squared error no longer decompose into monotone functions of bandwidth. Nevertheless, once these properties are understood, relatively simple empirical bandwidth selection methods can be developed. We suggest a new approach to both local and global bandwidth choice, and describe its properties.
19.
In this paper we propose a new method of local linear adaptive smoothing for nonparametric conditional quantile regression. Some theoretical properties of the procedure are investigated. We then demonstrate the performance of the method on a simulated example and compare it with other methods. The simulation results show reasonable performance of the proposed method, especially in situations where the underlying image is piecewise linear or can be approximated by such images. Generally speaking, our method outperforms most existing methods under the mean squared error (MSE) and mean absolute error (MAE) criteria. The procedure is very stable with respect to increasing noise level, and the algorithm is easily applied to higher-dimensional settings.
20.
In penalized variable selection, the choice of the tuning parameter is a key issue; unfortunately, in most of the literature the method for choosing it is vague and largely empirical, lacking a systematic theoretical basis. Based on a panel data model with random effects, this paper proposes a penalized cross-validation (PCV) criterion for selecting the adaptive LASSO tuning parameter in quantile regression, and compares this criterion with other tuning-parameter selection criteria. Simulations at different quantile levels show that when the residuals come from peaked or heavy-tailed distributions, the proposed criterion estimates the model parameters better, especially at high and low quantiles. At other quantiles, PCV performs slightly worse than the Schwarz information criterion but clearly better than the Akaike information criterion and the cross-validation criterion. In terms of variable selection accuracy, the criterion is also more effective than the Schwarz and Akaike information criteria. Finally, panel data on several macroeconomic indicators across Chinese regions are modelled to demonstrate the performance of the penalized cross-validation criterion, yielding the regression relationships among the macroeconomic indicators at different quantile levels.
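The PCV criterion itself is specific to the paper, but the generic mechanism, choosing the penalty level by cross-validated check loss for an L1-penalized quantile fit, can be sketched as follows. The single-slope model, the grid-search fitter, and all constants are illustrative assumptions.

```python
def pinball(u, tau):
    """Quantile check loss."""
    return u * tau if u >= 0 else u * (tau - 1)

def fit_slope(xs, ys, tau, lam, grid):
    """Penalized quantile fit through the origin: grid search for the slope
    minimizing pinball loss plus an L1 (adaptive-LASSO-style) penalty."""
    return min(grid, key=lambda b: sum(pinball(y - b * x, tau)
                                       for x, y in zip(xs, ys)) + lam * abs(b))

def cv_lambda(xs, ys, tau, lams, grid, folds=5):
    """Choose the tuning parameter by cross-validated pinball loss."""
    n = len(xs)
    def cv_loss(lam):
        loss = 0.0
        for f in range(folds):
            held = set(range(f, n, folds))
            tr = [(x, y) for i, (x, y) in enumerate(zip(xs, ys)) if i not in held]
            b = fit_slope([x for x, _ in tr], [y for _, y in tr], tau, lam, grid)
            loss += sum(pinball(ys[i] - b * xs[i], tau) for i in held)
        return loss
    return min(lams, key=cv_loss)

xs = [i / 10 for i in range(1, 21)]
ys = [1.5 * x for x in xs]                      # noise-free toy data
grid = [i / 10 for i in range(0, 31)]           # candidate slopes 0.0 … 3.0
print(fit_slope(xs, ys, 0.5, 0.0, grid))        # recovers the slope 1.5
```

The paper's PCV adds a penalty term to the cross-validation score itself; the sketch above is plain cross-validation over the held-out check loss.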