Similar Articles
20 similar articles found (search time: 15 ms)
1.
A generalization of G. M. Nielson's method for bivariate scattered data interpolation based upon a minimum norm network is presented. The essential part of the new method is the use of a variational principle for the definition of function values as well as cross-boundary derivatives over the edges of a triangulation of the data points. We mainly discuss the case of C^2 interpolants and present some examples, including quality control with systems of isophotes.

2.
A new local algorithm for bivariate interpolation of large sets of scattered and track data is presented. The method, which changes partially depending on the kind of data, is based on the partition of the interpolation domain into a suitable number of parallel strips and, starting from these, on the construction, for each data point, of a square neighbourhood containing a convenient number of data points. Then the well-known modified Shepard's formula for surface interpolation is applied with some effective improvements. The proposed algorithm is very fast, owing to the optimal nearest-neighbour searching, and achieves good accuracy. Computational cost and storage requirements are analyzed. Moreover, the efficiency and reliability of the algorithm are shown by several numerical tests, which also include a comparison with Renka's algorithm.
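The inverse-distance idea behind local Shepard-type interpolation can be sketched as follows. This is a minimal Python sketch under stated assumptions: the function name, the fixed-size k-nearest neighbourhood, and the plain inverse-distance weights are illustrative; the paper's actual algorithm adds strip-based nearest-neighbour searching and a modified, locally fitted Shepard formula.

```python
def shepard_interpolate(points, values, x, y, k=8, p=2):
    """Interpolate at (x, y) by inverse-distance weighting over the k
    nearest data points (simplified local Shepard scheme; hypothetical
    helper, not the paper's exact method)."""
    # Squared distance from every data point to the query, smallest first.
    d2 = sorted(((px - x) ** 2 + (py - y) ** 2, v)
                for (px, py), v in zip(points, values))
    neigh = d2[:k]
    # Exact hit on a data point: return its stored value directly.
    if neigh[0][0] == 0.0:
        return neigh[0][1]
    # Weights 1/dist^p (note d is the *squared* distance, hence p/2).
    w = [1.0 / d ** (p / 2) for d, _ in neigh]
    return sum(wi * v for wi, (_, v) in zip(w, neigh)) / sum(w)
```

The locality (only k neighbours enter each evaluation) is what keeps the cost low for large data sets once a fast neighbour search is available.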

3.
In this paper, we propose a completely local scheme based on continuously differentiable quadratic piecewise polynomials for interpolating scattered positional data in the plane, in such a way that quadratic polynomials are reproduced exactly. We present some numerical examples and applications to contour plotting.

4.
A minimum volume (MV) set, at level α, is a set having minimum volume among all those sets containing at least α probability mass. MV sets provide a natural notion of the ‘central mass’ of a distribution and, as such, have recently become popular as a tool for the detection of anomalies in multivariate data. Motivated by the fact that anomaly detection problems frequently arise in settings with temporally indexed measurements, we propose here a new method for the estimation of MV sets from dependent data. Our method is based on the concept of complexity-penalized estimation, extending recent work of Scott and Nowak for the case of independent and identically distributed measurements, and has both desirable theoretical properties and a practical implementation. Of particular note is the fact that, for a large class of stochastic processes, choice of an appropriate complexity penalty reduces to the selection of a single tuning parameter, which represents the data dependency of the underlying stochastic process. While in reality the dependence structure is unknown, we offer a data-dependent method for selecting this parameter, based on subsampling principles. Our work is motivated by and illustrated through an application to the detection of anomalous traffic levels in Internet traffic time series.
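The core idea — the highest-density regions give minimum volume for a given probability mass — can be illustrated with a greedy histogram estimator in one dimension. This is an i.i.d. sketch only; the function name and binning are illustrative assumptions, and the paper's actual contribution (the dependence-aware complexity penalty and its subsampling-based tuning) is not reproduced here.

```python
def mv_set_histogram(sample, alpha, bins=2):
    """Estimate a level-alpha minimum-volume set with a histogram:
    greedily keep the highest-count bins until they hold at least
    alpha of the sample mass (illustrative sketch, not the paper's
    penalized estimator)."""
    lo, hi = min(sample), max(sample)
    width = (hi - lo) / bins or 1.0  # guard against a degenerate sample
    counts = [0] * bins
    for x in sample:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    # Highest-count bins first: smallest volume for a given mass.
    order = sorted(range(bins), key=lambda i: -counts[i])
    kept, mass, n = [], 0, len(sample)
    for i in order:
        if mass >= alpha * n:
            break
        kept.append((lo + i * width, lo + (i + 1) * width))
        mass += counts[i]
    return kept
```

Anomalies are then flagged as observations falling outside the returned set.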

5.
Compared to conforming P1 finite elements, nonconforming P1 finite element discretizations are thought to be less sensitive to the appearance of distorted triangulations. For example, optimal-order discrete H^1 norm best-approximation error estimates for H^2 functions hold for arbitrary triangulations. However, the constants in similar estimates for the error of the Galerkin projection for second-order elliptic problems show a dependence on the maximum angle of all triangles in the triangulation. We demonstrate on an example of a special family of distorted triangulations that this dependence is essential, and due to the deterioration of the consistency error. We also provide examples of sequences of triangulations such that the nonconforming P1 Galerkin projections for a Poisson problem with polynomial solution do not converge or converge at arbitrarily low speed. The results complement analogous findings for conforming P1 finite elements.

6.
The penalized spline method has been widely used for estimating univariate smooth functions based on noisy data. This paper studies its extension to the two-dimensional case. To accommodate the need of handling data distributed on irregular regions, we consider bivariate splines defined on triangulations. Penalty functions based on the second-order derivatives are employed to regularize the spline fit, and generalized cross-validation is used to select the penalty parameters. A simulation study shows that the penalized bivariate spline method is competitive with some well-established two-dimensional smoothers. The method is also illustrated using a real dataset on Texas temperature.

7.
In this paper we look at some iterative interpolation schemes and investigate how they may be used in data compression. In particular, we use the pointwise polynomial interpolation method to decompose discrete data into a sequence of difference vectors. By compressing these differences, one can store an approximation to the data within a specified tolerance using a fraction of the original storage space (the larger the tolerance, the smaller the fraction). We review the iterative interpolation scheme, describe the decomposition algorithm and present some numerical examples. The numerical results indicate that the best compression rate (ratio of non-zero data in the approximation to the data in the original) is often attained by using cubic polynomials and, in some cases, polynomials of higher degree. This work was supported by The Royal Norwegian Council for Scientific and Industrial Research (NTNF).
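The decomposition idea — store coarse samples plus interpolation-prediction residuals, then drop residuals below the tolerance — can be sketched with a degree-1 (linear) prediction. The function names are illustrative, and the abstract reports that cubic and higher-degree predictions often compress best; linear is used here only for brevity.

```python
def decompose(data):
    """Split samples of odd length 2m+1 into even-indexed 'coarse'
    values plus odd-indexed 'detail' residuals from a linear
    interpolation prediction (degree-1 sketch of the scheme)."""
    coarse = data[::2]
    detail = [data[2 * i + 1] - 0.5 * (coarse[i] + coarse[i + 1])
              for i in range(len(data) // 2)]
    return coarse, detail

def reconstruct(coarse, detail, tol=0.0):
    """Invert decompose(), zeroing details smaller than tol; the
    pointwise reconstruction error is then at most tol."""
    out = []
    for i, d in enumerate(detail):
        out.append(coarse[i])
        out.append(0.5 * (coarse[i] + coarse[i + 1])
                   + (d if abs(d) >= tol else 0.0))
    out.append(coarse[-1])
    return out
```

Compression comes from the zeroed (and hence cheaply stored) detail entries, exactly the "non-zero data" ratio the abstract measures.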

8.
9.
Non-parametric density estimation is an important technique in probabilistic modeling and reasoning with uncertainty. We present a method for learning mixtures of polynomials (MoPs) approximations of one-dimensional and multidimensional probability densities from data. The method is based on basis spline interpolation, where a density is approximated as a linear combination of basis splines. We compute maximum likelihood estimators of the mixing coefficients of the linear combination. The Bayesian information criterion is used as the score function to select the order of the polynomials and the number of pieces of the MoP. The method is evaluated in two ways. First, we test the approximation fitting. We sample artificial datasets from known one-dimensional and multidimensional densities and learn MoP approximations from the datasets. The quality of the approximations is analyzed according to different criteria, and the new proposal is compared with MoPs learned with Lagrange interpolation and mixtures of truncated basis functions. Second, the proposed method is used as a non-parametric density estimation technique in Bayesian classifiers. Two of the most widely studied Bayesian classifiers, i.e., the naive Bayes and tree-augmented naive Bayes classifiers, are implemented and compared. Results on real datasets show that the non-parametric Bayesian classifiers using MoPs are comparable to the kernel density-based Bayesian classifiers. We provide a free R package implementing the proposed methods.

10.
In this paper we deal with the problem of finding the smallest and the largest elements of a totally ordered set of size n using pairwise comparisons, when one of the comparisons might be erroneous, and prove a conjecture of Aigner stating that the minimum number of comparisons needed is for some constant c. We also address some related problems.
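For context, the classical error-free baseline finds both extrema by comparing elements in pairs, then testing the loser against the current minimum and the winner against the current maximum, for about ⌈3n/2⌉ − 2 comparisons. The sketch below implements that baseline (the function name and comparison counter are illustrative); the paper's question is how many extra comparisons one possibly erroneous answer costs.

```python
def min_max(items):
    """Find the smallest and largest elements with roughly 3n/2
    pairwise comparisons (error-free pairing baseline, not the
    paper's lie-tolerant procedure)."""
    it = list(items)
    comparisons = 0
    if len(it) % 2:              # odd length: seed both extrema with it[0]
        lo = hi = it[0]
        rest = it[1:]
    else:                        # even length: one comparison seeds both
        comparisons += 1
        lo, hi = (it[0], it[1]) if it[0] < it[1] else (it[1], it[0])
        rest = it[2:]
    for a, b in zip(rest[::2], rest[1::2]):
        comparisons += 3         # pair a vs b; loser vs lo; winner vs hi
        if a > b:
            a, b = b, a
        if a < lo:
            lo = a
        if b > hi:
            hi = b
    return lo, hi, comparisons
```

For n = 10 this uses exactly 13 = ⌈3·10/2⌉ − 2 comparisons.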

11.
The problem of fitting a nice curve or surface to scattered, possibly noisy, data arises in many applications in science and engineering. In this paper we solve the problem using a standard regularized least square framework in an approximation space spanned by the shifts and dilates of a single compactly supported function. We first provide an error analysis to our approach which, roughly speaking, states that the error between the exact (probably unknown) data function and the obtained fitting function is small whenever the scattered samples have a high sampling density and a low noise level. We then give a computational formulation in the univariate case, when this function is a uniform B-spline, and in the bivariate case, when it is the tensor product of uniform B-splines. Though sparse, the arising system of linear equations is ill-conditioned; however, when written in terms of a short support wavelet basis with a well-chosen normalization, the resulting system, which is symmetric positive definite, appears to be well-conditioned, as evidenced by the fast convergence of the conjugate gradient iteration. Finally, our method is compared with the classical cubic/thin-plate smoothing spline methods via numerical experiments, where it is seen that the quality of the obtained fitting function is very much equivalent to that of the classical methods, but our method offers advantages in terms of numerical efficiency. We expect that our method remains numerically feasible even when the number of samples in the given data is very large.

12.
We deal with the solutions to nonlinear elliptic equations of the form $-{\rm div}\, a(x, Du) + g(x, u) = f$, with f being just a summable function, under standard growth conditions on g and a. We prove general local decay estimates for level sets of the gradient of solutions in turn implying very general estimates in rearrangement and non-rearrangement function spaces, up to Lorentz–Morrey spaces. The results obtained are in clear accordance with the classical Gagliardo–Nirenberg interpolation theory.

13.
Zero data of rectangular matrix polynomials are described in various forms. The basic interpolation problem of constructing rectangular matrix polynomials from their zero data is solved. Certain rectangular factorizations are analyzed in terms of spectral data.

14.
The error between appropriately smooth functions and their radial basis function interpolants, as the interpolation points fill out a bounded domain in R^d, has been well studied. In all of these cases, the analysis takes place in a natural function space dictated by the choice of radial basis function – the native space. The native space contains functions possessing a certain amount of smoothness. This paper establishes error estimates when the function being interpolated is conspicuously rough. AMS subject classification: 41A05, 41A25, 41A30, 41A63. R. A. Brownlee: Supported by a studentship from the Engineering and Physical Sciences Research Council.

15.
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
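The comparison can be carried out concretely for exp on [0, 1]: a Taylor polynomial concentrates its accuracy near the expansion point, while an interpolating polynomial spreads the error over the whole interval. A minimal sketch (the function names and the choice of equidistant nodes are illustrative assumptions, not necessarily the article's setup):

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of exp about 0."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

def interp_exp(x, n):
    """Degree-n Lagrange interpolant of exp at n+1 equidistant
    nodes in [0, 1]."""
    nodes = [i / n for i in range(n + 1)]
    total = 0.0
    for i, xi in enumerate(nodes):
        li = 1.0  # Lagrange basis polynomial l_i evaluated at x
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += math.exp(xi) * li
    return total
```

At x = 0.9, far from the Taylor expansion point 0, the degree-3 interpolant is noticeably more accurate than the degree-3 Taylor polynomial, illustrating the article's comparison.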

16.
A new notion of universally optimal experimental design is introduced, relevant from the perspective of adaptive nonparametric estimation. It is demonstrated that both discrete and continuous Chebyshev designs are universally optimal in the problem of fitting properly weighted algebraic polynomials to random data. The result is a direct consequence of the well-known relation between Chebyshev’s polynomials and the trigonometric functions. Optimal interpolating designs in rational regression proved particularly elusive in the past. The question can be effectively handled using its connection to elliptic interpolation, in which the ordinary circular sine, appearing in classical trigonometric interpolation, is replaced by the Abel-Jacobi elliptic sine sn(x, k) of modulus k. First, it is demonstrated that, in a natural setting of equidistant design, the elliptic interpolant is never optimal in the so-called normal case k ∈ (−1, 1), except for the trigonometric case k = 0. However, the equidistant elliptic interpolation is always optimal in the imaginary case k ∈ iR. Through a relation between elliptic and rational functions, the result leads to a long-sought optimal design for properly weighted rational interpolants. Both the poles and nodes of the interpolants can be conveniently expressed in terms of classical Jacobi theta functions.

17.
In this study, strain gradient theory is used to show the small scale effects on bending, vibration and stability of microscaled functionally graded (FG) beams. For this purpose, the Euler–Bernoulli beam model is used and numerical results are given for different boundary conditions. Analytical solutions are given for the static deflection and buckling loads of the microbeams, while the generalized differential quadrature (GDQ) method is used to calculate their natural frequencies. The results are compared with classical elasticity ones to show the significance of the material length scale parameter (MLSP) effects and the general trend of the scale dependencies. In addition, it is shown that the effect of surface energies related to strain gradient elasticity is negligible and can be ignored in vibration and buckling analyses. Combining well-known experimental setups with the results given in this paper makes it possible to find the effective MLSP for metal-ceramic FG microbeams. This helps to predict their scale-dependent mechanical behavior accurately within the introduced theoretical framework.

18.
Translated from Ukrainskii Matematicheskii Zhurnal, Vol. 41, No. 1, pp. 34–42, January, 1989.

19.
This paper proposes the use of the bootstrap in penalized model selection for possibly dependent heterogeneous data. The results show that we can establish (at least asymptotically) a direct relationship between estimation error and a data based complexity penalization. This requires redefinition of the target function as the sum of the individual expected predicted risks. In this framework, the wild bootstrap and related approaches can be used to estimate the penalty with no need to account for heterogeneous dependent data. The methodology is highlighted by a simulation study whose results are particularly encouraging.

20.
In this paper a new efficient algorithm for spherical interpolation of large scattered data sets is presented. The solution method is local and involves a modified spherical Shepard’s interpolant, which uses zonal basis functions as local approximants. The associated algorithm is implemented and optimized by applying a nearest neighbour searching procedure on the sphere. Specifically, this technique is mainly based on the partition of the sphere into a suitable number of spherical zones, the construction of spherical caps as local neighbourhoods for each node, and finally the employment of a spherical zone searching procedure. Computational cost and storage requirements of the spherical algorithm are analyzed. Moreover, several numerical results show the good accuracy of the method and the high efficiency of the proposed algorithm.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号