Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper we study the problem of adaptive estimation of a multivariate function satisfying some structural assumption. We propose a novel estimation procedure that adapts simultaneously to the unknown structure and smoothness of the underlying function. The problem of structural adaptation is stated as a problem of selection from a given collection of estimators. We develop a general selection rule and establish for it global oracle inequalities under arbitrary ${\mathbb{L}}_p$-losses. These results are applied to adaptive estimation in the additive multi-index model.

2.
In the context of adaptive nonparametric curve estimation, a common assumption is that the function (signal) to be estimated belongs to a nested family of functional classes, often parametrized by a quantity representing the smoothness of the signal. It has long been recognized by many that the problem of estimating the smoothness itself is not a sensible one. What, then, can be inferred about the smoothness? This paper attempts to answer that question. We consider implications of our results for hypothesis testing about the smoothness and for the smoothness classification problem. The test statistic is based on the empirical Bayes approach, i.e., it is the marginalized maximum likelihood estimator of the smoothness parameter for an appropriate prior distribution on the unknown signal.

3.
We consider the problem of estimating the local smoothness of a spatially inhomogeneous function from noisy data within the framework of smoothing splines. Most existing studies of this problem use estimation driven by a single smoothing parameter or by partially local smoothing parameters, which may not efficiently characterize the varying degrees of smoothness of a spatially varying underlying function. In this paper, we propose a new nonparametric method for estimating the local smoothness of the function, based on moving local risk minimization coupled with spatially adaptive smoothing splines. The proposed method provides full information about the local smoothness at every location of the data domain, making it possible to assess the degree of spatial inhomogeneity of the function. A successful estimate of the local smoothness is useful for identifying abrupt changes in the smoothness of the data, performing functional clustering, and improving the uniformity of coverage of smoothing-spline confidence intervals. We further consider a nontrivial extension to the local smoothness of inhomogeneous two-dimensional functions or spatial fields. The empirical performance of the proposed method is evaluated through numerical examples, which demonstrate promising results.

4.
We consider the problem of estimating the support of a multivariate density from contaminated data. We introduce an estimator that achieves consistency under weak conditions on the target density and its support, assuming a known error density. In particular, no smoothness or sharpness assumptions are needed for the target density. Furthermore, we derive an iterative and easily computable modification of our estimator and study its rate of convergence in a special case; a numerical simulation is given.

5.
We propose a method for estimating nonstationary spatial covariance functions by representing a spatial process as a linear combination of some local basis functions with uncorrelated random coefficients and some stationary processes, based on spatial data sampled in space with repeated measurements. By incorporating a large collection of local basis functions with various scales at various locations and stationary processes with various degrees of smoothness, the model is flexible enough to represent a wide variety of nonstationary spatial features. The covariance estimation and model selection are formulated as a regression problem with the sample covariances as the response and the covariances corresponding to the local basis functions and the stationary processes as the predictors. A constrained least squares approach is applied to select appropriate basis functions and stationary processes as well as estimate parameters simultaneously. In addition, a constrained generalized least squares approach is proposed to further account for the dependencies among the response variables. A simulation experiment shows that our method performs well in both covariance function estimation and spatial prediction. The methodology is applied to a U.S. precipitation dataset for illustration. Supplemental materials relating to the application are available online.

6.
We establish results on convergence and smoothness of subdivision rules operating on manifold-valued data which are based on a general dilation matrix. In particular we cover irregular combinatorics. For the regular grid case results are not restricted to isotropic dilation matrices. The nature of the results is that intrinsic subdivision rules which operate on geometric data inherit smoothness properties of their linear counterparts.

7.
In this article, we consider the problem of estimating the eigenvalues and eigenfunctions of the covariance kernel (i.e., the functional principal components) from sparse and irregularly observed longitudinal data. We exploit the smoothness of the eigenfunctions to reduce dimensionality by restricting them to a lower-dimensional space of smooth functions. We then approach this problem through a restricted maximum likelihood method. The estimation scheme is based on a Newton–Raphson procedure on the Stiefel manifold, using the fact that the basis coefficient matrix representing the eigenfunctions has orthonormal columns. We also address the selection of the number of basis functions, as well as the dimension of the covariance kernel, by a second-order approximation to the leave-one-curve-out cross-validation score that is computationally very efficient. The effectiveness of our procedure is demonstrated by simulation studies and an application to a CD4+ counts dataset. In the simulation studies, our method performs well in both estimation and model selection. It also outperforms two existing approaches: one based on local polynomial smoothing and another using an EM algorithm. Supplementary materials, including technical details, the R package fpca, and the data analyzed in this article, are available online.

8.
Statistical estimation with model selection
The purpose of this paper is to explain the interest and importance of (approximate) models and model selection in statistics. Starting from the elementary example of histograms, we present a general notion of a finite-dimensional model for statistical estimation and explain what type of risk bounds can be expected from the use of such a model. We then give the performance of suitable model selection procedures over a family of such models. We illustrate our point of view with two main examples: the choice of a partition for designing a histogram from an n-sample, and the problem of variable selection in the context of Gaussian regression.

9.
In recent years, several methods have been proposed to deal with functional data classification problems (e.g., one-dimensional curves or two- or three-dimensional images). One popular general approach is based on the kernel-based method, proposed by Ferraty and Vieu (Comput Stat Data Anal 44:161–173, 2003). The performance of this general method depends heavily on the choice of the semi-metric. Motivated by Fan and Lin (J Am Stat Assoc 93:1007–1021, 1998) and our image data, we propose a new semi-metric, based on wavelet thresholding for classifying functional data. This wavelet-thresholding semi-metric is able to adapt to the smoothness of the data and provides for particularly good classification when data features are localized and/or sparse. We conduct simulation studies to compare our proposed method with several functional classification methods and study the relative performance of the methods for classifying positron emission tomography images.
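The idea behind a wavelet-thresholding semi-metric can be illustrated with a minimal sketch. This is an illustration, not the authors' exact construction: it uses an orthonormal Haar transform and hard thresholding, whereas the paper's semi-metric may employ other wavelets or thresholding rules; the function names and the threshold value are hypothetical.

```python
import numpy as np

def haar_transform(v):
    """Orthonormal Haar wavelet transform of a length-2**J vector."""
    v = np.asarray(v, dtype=float)
    out = []
    while len(v) > 1:
        s = (v[0::2] + v[1::2]) / np.sqrt(2)  # coarser smooth part
        d = (v[0::2] - v[1::2]) / np.sqrt(2)  # detail coefficients
        out.append(d)
        v = s
    out.append(v)  # final scaling coefficient
    return np.concatenate(out[::-1])

def wavelet_semi_metric(f, g, thresh):
    """Distance between hard-thresholded Haar coefficients of two curves."""
    cf, cg = haar_transform(f), haar_transform(g)
    cf = np.where(np.abs(cf) > thresh, cf, 0.0)  # keep only large coefficients
    cg = np.where(np.abs(cg) > thresh, cg, 0.0)
    return np.linalg.norm(cf - cg)
```

Thresholding discards small coefficients, so the distance is driven by the localized or sparse features that survive, which is the adaptivity the abstract describes.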

10.
Smoothness and Hölder exponents of non-uniform fractal interpolation functions
Lu Jianzhu (卢建朱), 《计算数学》 (Mathematica Numerica Sinica), 2000, 22(2): 177–182
In this paper we consider the smoothness of fractal interpolation functions on a general set of nodes and obtain an estimate of their Hölder exponent.

11.
In this paper, a little-known computational approach to density estimation based on filtered polynomial approximation is investigated. It is accompanied by the first freely available online density estimation computer program based on a filtered polynomial approach. The approximation represents the unknown distribution and density as the product of a monotonically increasing polynomial and a filter. The filter may be regarded as a target distribution that is fixed prior to the estimation. The filtered polynomial approach then provides coefficient estimates for close algebraic approximations to (a) the unknown density function, (b) the unknown cumulative distribution function, and (c) a transformation (e.g., normalization) from the unknown data distribution to the filter. This approach yields a high degree of smoothness in its estimates in both univariate and multivariate settings. Properties such as this high degree of smoothness and the ability to choose among different target distributions make the approach especially well suited to MCMC simulations. Two applications, in Sects. 1 and 7, show the advantages of the filtered polynomial approach over the commonly used kernel estimation method.

12.
THE SMOOTHNESS AND DIMENSION OF FRACTAL INTERPOLATION FUNCTIONS
In this paper, we investigate the smoothness of non-equidistant fractal interpolation functions. We obtain the Hölder exponents of such fractal interpolation functions by using the technique of operator approximation. Finally, we discuss the series expression of these functions and give a box-counting dimension estimate for "critical" fractal interpolation functions using our smoothness results.
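The objects studied in entries 10 and 12 can be made concrete with a small chaos-game sketch of an affine fractal interpolation function. This is an illustrative construction only, not the papers' analysis: the node values and vertical scaling factors below are hypothetical, and it is those factors d_i (with |d_i| < 1) that govern the Hölder smoothness and box-counting dimension of the resulting graph.

```python
import numpy as np

def fif_chaos_game(xs, ys, d, n=20000, seed=0):
    """Sample the graph of an affine fractal interpolation function.

    Interpolation nodes (xs[i], ys[i]), i = 0..N; vertical scaling
    factors d[i] with |d[i]| < 1. Map i sends the whole graph affinely
    into the strip over [xs[i], xs[i+1]], matching the endpoint nodes.
    """
    xs, ys, d = map(np.asarray, (xs, ys, d))
    N = len(xs) - 1
    span_x, span_y = xs[-1] - xs[0], ys[-1] - ys[0]
    a = (xs[1:] - xs[:-1]) / span_x                     # horizontal contraction
    e = (xs[-1] * xs[:-1] - xs[0] * xs[1:]) / span_x    # horizontal shift
    c = (ys[1:] - ys[:-1] - d * span_y) / span_x        # shear term
    f = (xs[-1] * ys[:-1] - xs[0] * ys[1:]
         - d * (xs[-1] * ys[0] - xs[0] * ys[-1])) / span_x
    rng = np.random.default_rng(seed)
    pts = np.empty((n, 2))
    x, y = xs[0], ys[0]
    for k in range(n):  # chaos game: apply a random map at each step
        i = rng.integers(N)
        x, y = a[i] * x + e[i], c[i] * x + d[i] * y + f[i]
        pts[k] = x, y
    return pts

pts = fif_chaos_game([0.0, 0.5, 1.0], [0.0, 1.0, 0.0], [0.3, -0.3])
```

Increasing |d_i| toward 1 makes the sampled graph rougher (lower Hölder exponent, higher box-counting dimension), which is the quantitative relationship the two abstracts analyze.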

13.
In this paper we discuss the theory of one-step extrapolation methods applied both to ordinary differential equations and to index-1 semi-explicit differential-algebraic systems. The theoretical foundation of this numerical technique is the asymptotic global error expansion of numerical solutions obtained from general one-step methods, discovered independently by Henrici, Gragg and Stetter in 1962, 1964 and 1965, respectively. This expansion also underlies most global error estimation strategies. However, the asymptotic expansion of the global error of one-step methods is difficult to observe in practice. We therefore give another justification of the extrapolation technique, based on the usual local error expansion in a Taylor series. We show that Richardson extrapolation can be used successfully to explain how extrapolation methods perform. Additionally, we prove that the Aitken-Neville algorithm works for any one-step method of arbitrary order s, under suitable smoothness assumptions.
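The Richardson/Aitken-Neville idea can be illustrated with a toy sketch, a hedged illustration rather than the paper's DAE algorithm: apply explicit Euler (a first-order one-step method) with successively halved steps and build the Neville tableau, each column of which eliminates one more power of h from the global error expansion.

```python
import math

def euler(f, y0, t0, t1, n):
    """Explicit Euler with n steps: a first-order one-step method."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def extrapolate(f, y0, t0, t1, levels=4):
    """Aitken-Neville extrapolation of Euler approximations.

    Uses step numbers n_i = 2**i. Since Euler's global error expands
    in integer powers of h, the Neville recursion removes one power
    of h per tableau column.
    """
    n = [2 ** i for i in range(levels)]
    T = [[euler(f, y0, t0, t1, ni)] for ni in n]
    for k in range(1, levels):
        for i in range(k, levels):
            r = n[i] / n[i - k]  # step-size ratio h_{i-k} / h_i
            T[i].append(T[i][k - 1]
                        + (T[i][k - 1] - T[i - 1][k - 1]) / (r - 1))
    return T[-1][-1]

# y' = y, y(0) = 1 on [0, 1]; exact solution is e.
approx = extrapolate(lambda t, y: y, 1.0, 0.0, 1.0, levels=5)
print(abs(approx - math.e))  # far smaller than plain Euler's error
```

With 16 Euler steps alone the error is about 8e-2; the five-level tableau, using no finer grid than that, brings it below 1e-3, which is the practical payoff of the asymptotic expansion the abstract discusses.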

14.
The minimum spanning tree write policy for maintaining the consistency of a distributed database with replicated data was proposed in [1]. In this paper, we first present a data placement heuristic algorithm for general networks that minimizes the overall transmission cost of processing typical demands of queries (by a "simple" processing strategy) and updates (by the minimum spanning tree write policy). Several interesting optimality estimation results for this algorithm are shown, and the computational intractability of the complete optimization with respect to the simple strategy is established as well. Secondly, we apply a classical hill-climbing technique to obtain a dynamic database placement algorithm based on an employed optimizer (a collection of distributed query processing algorithms), which is guaranteed to output a "locally optimal" data allocation. The implementation results show that these two heuristics work well in practice.

15.
In our previous work, we introduced a convex-concave regularization approach to the reconstruction of binary objects from few projections within a limited range of angles. A convex reconstruction functional, comprising the projection equations and a smoothness prior, was complemented with a concave penalty term enforcing binary solutions. In the present work we investigate alternatives to the smoothness prior in the form of probabilistically learnt priors encoding local object structure. We show that the difference-of-convex-functions (DC) programming framework is flexible enough to cope with this more general model class. Numerical results show that reconstruction becomes feasible under conditions where our previous approach fails.

16.
Risk bounds for model selection via penalization
Performance bounds for criteria for model selection are developed using recent theory for sieves. The model selection criteria are based on an empirical loss or contrast function with an added penalty term motivated by empirical process theory and roughly proportional to the number of parameters needed to describe the model divided by the number of observations. Most of our examples involve density or regression estimation settings and we focus on the problem of estimating the unknown density or regression function. We show that the quadratic risk of the minimum penalized empirical contrast estimator is bounded by an index of the accuracy of the sieve. This accuracy index quantifies the trade-off among the candidate models between the approximation error and parameter dimension relative to sample size. If we choose a list of models which exhibit good approximation properties with respect to different classes of smoothness, the estimator can be simultaneously minimax rate optimal in each of those classes. This is what is usually called adaptation. The type of classes of smoothness in which one gets adaptation depends heavily on the list of models. If too many models are involved in order to get accurate approximation of many wide classes of functions simultaneously, it may happen that the estimator is only approximately adaptive (typically up to a slowly varying function of the sample size). We shall provide various illustrations of our method such as penalized maximum likelihood, projection or least squares estimation. The models will involve commonly used finite dimensional expansions such as piecewise polynomials with fixed or variable knots, trigonometric polynomials, wavelets, neural nets and related nonlinear expansions defined by superposition of ridge functions. Received: 7 July 1995 / Revised version: 1 November 1997
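A toy numerical sketch of penalized empirical-contrast model selection may help fix ideas. It is an assumption-laden illustration, not the paper's general framework: the candidate models are polynomial regression spaces, the noise variance is assumed known, and the penalty constant `c` and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0, 1, n)
f = np.sin(2 * np.pi * x)              # true regression function
y = f + 0.3 * rng.standard_normal(n)   # noisy observations
sigma2 = 0.09                          # noise variance, assumed known here

def penalized_select(x, y, max_dim, sigma2, c=2.0):
    """Pick the polynomial model minimizing empirical risk + penalty.

    The penalty c * sigma2 * D / n is roughly proportional to the model
    dimension D divided by the number of observations, as in penalized
    empirical-contrast model selection.
    """
    n = len(y)
    best = None
    for D in range(1, max_dim + 1):
        # least-squares fit onto the span of 1, x, ..., x**(D-1)
        X = np.vander(x, D, increasing=True)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        risk = np.mean((y - X @ coef) ** 2) + c * sigma2 * D / n
        if best is None or risk < best[0]:
            best = (risk, D, coef)
    return best[1], best[2]

D_hat, coef = penalized_select(x, y, max_dim=15, sigma2=sigma2)
print("selected dimension:", D_hat)
```

The selected dimension balances the approximation error of the polynomial space against its dimension relative to the sample size, the same trade-off that the accuracy index in the risk bound quantifies.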

17.
Under special conditions on the data set and the underlying distribution, the limit of the finite sample breakdown point of Tukey's halfspace median (1/3) has been obtained in the literature. In this paper, we establish the result under weaker assumptions on the underlying distribution (weak smoothness) and on the data set (not necessarily in general position). A refined representation of Tukey's sample depth regions for data sets not necessarily in general position is also obtained as a by-product of our derivation.

18.
In this paper, we introduce a class of directed acyclic graphs under the assumption that the collection of random variables indexed by the vertices has a Markov property. We present a flexible approach to the study of the exact distributions of runs and scans on a directed acyclic graph by extending the method of conditional probability generating functions. The results presented here provide a wide framework for developing the exact distribution theory of runs and scans on graphical models. We also show that our theoretical results can easily be carried out with computer algebra systems, and give numerical examples to demonstrate their feasibility. As applications, two special reliability systems closely related to our general results are considered. Finally, we address parameter estimation in the distributions of runs and scans.

19.
We present a general framework for studying harmonic analysis of functions in the settings of various emerging problems in the theory of diffusion geometry. The starting point of the now classical diffusion geometry approach is the construction of a kernel whose discretization leads to an undirected graph structure on an unstructured data set. We study the question of constructing such kernels for directed graph structures, and argue that our construction is essentially the only way to do so using discretizations of kernels. We then use our previous theory to develop harmonic analysis based on the singular value decomposition of the resulting non-self-adjoint operators associated with the directed graph. Next, we consider the question of how functions defined on one space evolve to another space in the paradigm of changing data sets recently introduced by Coifman and Hirn. While the approach of Coifman and Hirn requires that the points on one space should be in a known one-to-one correspondence with the points on the other, our approach allows the identification of only a subset of landmark points. We introduce a new definition of distance between points on two spaces, construct localized kernels based on the two spaces and certain interaction parameters, and study the evolution of smoothness of a function on one space to its lifting to the other space via the landmarks. We develop novel mathematical tools that enable us to study these seemingly different problems in a unified manner.

20.
Coorbit space theory is an abstract approach to function spaces and their atomic decompositions. The original theory, developed by Feichtinger and Gröchenig in the late 1980s, heavily uses integrable representations of locally compact groups. Their theory covers, in particular, homogeneous Besov-Lizorkin-Triebel spaces, modulation spaces, Bergman spaces and the recent shearlet spaces. However, inhomogeneous Besov-Lizorkin-Triebel spaces cannot be covered by their group-theoretical approach. Later it was recognized by Fornasier and Rauhut (2005) [24] that one may replace coherent states related to the group representation by more general abstract continuous frames. In the first part of the present paper we significantly extend this abstract generalized coorbit space theory to treat a wider variety of coorbit spaces. A unified approach towards atomic decompositions and Banach frames with new results for general coorbit spaces is presented. In the second part we apply the abstract setting to a specific framework and study coorbits of what we call Peetre spaces. They allow one to recover inhomogeneous Besov-Lizorkin-Triebel spaces of various types of interest as coorbits. We obtain several old and new wavelet characterizations based on explicit smoothness, decay, and vanishing-moment assumptions on the respective wavelet. As main examples we obtain results for weighted spaces (Muckenhoupt, doubling), general 2-microlocal spaces, Besov-Lizorkin-Triebel-Morrey spaces, spaces of dominating mixed smoothness and even mixtures of the mentioned ones. Due to the generality of our approach, there are many more examples of interest where the abstract coorbit space theory is applicable.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号