Similar Documents
20 similar documents found.
1.
A new local algorithm for bivariate interpolation of large sets of scattered and track data is presented. The method, which adapts in part to the kind of data, is based on partitioning the interpolation domain into a suitable number of parallel strips and, starting from these, on constructing for each data point a square neighbourhood containing a convenient number of data points. Then, the well-known modified Shepard's formula for surface interpolation is applied with some effective improvements. The proposed algorithm is very fast, owing to the optimal nearest-neighbour searching, and achieves good accuracy. Computational cost and storage requirements are analyzed. Moreover, the efficiency and reliability of the algorithm are demonstrated by several numerical tests, which also include a comparison with Renka's algorithm.
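
No code accompanies the abstract; the sketch below is only a rough illustration of the local Shepard-type interpolation it builds on, using the Franke-Little weights of the modified Shepard method with constant nodal values instead of the local quadratic nodal functions and the optimized strip-based neighbour search described in the paper. All names and parameters are illustrative.

```python
import numpy as np

def modified_shepard(sites, values, queries, radius):
    """Local Shepard-type interpolation with Franke-Little weights and constant
    nodal values; sites (n, 2), values (n,), queries (m, 2)."""
    out = np.empty(len(queries))
    for j, q in enumerate(queries):
        d = np.linalg.norm(sites - q, axis=1)
        near = d < radius                              # only nodes inside the local support
        if not near.any():
            out[j] = values[np.argmin(d)]              # fall back to the nearest node
            continue
        d_near = np.maximum(d[near], 1e-14)            # guard against division by zero
        w = ((radius - d_near) / (radius * d_near)) ** 2
        out[j] = np.dot(w, values[near]) / w.sum()
    return out

# toy usage on randomly scattered data in the unit square
rng = np.random.default_rng(0)
pts = rng.random((500, 2))
vals = np.sin(4 * pts[:, 0]) * np.cos(3 * pts[:, 1])
print(modified_shepard(pts, vals, rng.random((5, 2)), radius=0.15))
```

In the full method, the fixed support radius would be replaced by the per-point square neighbourhoods produced by the strip-based search, and the constant nodal values by local fits.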

2.
In this paper, a hybrid approximation method on the sphere is analysed. As interpolation scheme, we consider a partition of unity method, namely the modified spherical Shepard method, which uses zonal basis functions plus spherical harmonics as local approximants. The associated algorithm is efficiently implemented and works well even when the amount of data is very large, as it is based on an optimized searching procedure. Locality of the method guarantees stability in numerical computations, and numerical results show good accuracy. Moreover, we discuss whether these features are preserved when the method and the related algorithm are applied to experimental data, using the Magnetic Field Satellite data for this purpose. The goal is reached: efficiency and accuracy are maintained on several sets of real data. Copyright © 2013 John Wiley & Sons, Ltd.

3.
In this paper a new efficient algorithm for spherical interpolation of large scattered data sets is presented. The solution method is local and involves a modified spherical Shepard interpolant, which uses zonal basis functions as local approximants. The associated algorithm is implemented and optimized by applying a nearest-neighbour searching procedure on the sphere. Specifically, this technique is mainly based on the partition of the sphere into a suitable number of spherical zones, the construction of spherical caps as local neighbourhoods for each node, and finally the employment of a spherical-zone searching procedure. Computational cost and storage requirements of the spherical algorithm are analyzed. Moreover, several numerical results show the good accuracy of the method and the high efficiency of the proposed algorithm.
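
As a minimal illustration of one ingredient of such a method, the sketch below fits a zonal-kernel interpolant (a Gaussian of the dot product, i.e. of the chordal distance) on the nodes inside a single spherical cap; the spherical-zone search structure and the Shepard-type blending of local interpolants are omitted. Kernel choice and parameters are assumptions, not taken from the paper.

```python
import numpy as np

def zonal_gaussian(A, B, eps=5.0):
    """Zonal kernel psi(x . y) = exp(-2 eps^2 (1 - x . y)) for unit vectors (rows of A, B)."""
    return np.exp(-2.0 * eps ** 2 * (1.0 - A @ B.T))

def cap_interpolant(nodes, values, center, cap_angle, eps=5.0):
    """Fit a zonal-RBF interpolant on the nodes inside the spherical cap of angular
    radius cap_angle around `center`, and return a callable evaluator."""
    in_cap = np.arccos(np.clip(nodes @ center, -1.0, 1.0)) <= cap_angle
    Xc, fc = nodes[in_cap], values[in_cap]
    A = zonal_gaussian(Xc, Xc, eps) + 1e-12 * np.eye(len(Xc))   # tiny ridge for numerical safety
    coef = np.linalg.solve(A, fc)
    return lambda Q: zonal_gaussian(Q, Xc, eps) @ coef

# toy usage: scattered unit vectors and a smooth test function on the sphere
rng = np.random.default_rng(1)
X = rng.normal(size=(800, 3)); X /= np.linalg.norm(X, axis=1, keepdims=True)
f = np.exp(X[:, 0]) * X[:, 2]
pole = np.array([0.0, 0.0, 1.0])
local = cap_interpolant(X, f, pole, cap_angle=np.radians(30))
print(local(pole[None, :]), "(exact value at the pole is 1.0)")
```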

4.
An efficient and flexible algorithm for the spherical interpolation of large scattered data sets is proposed. It is based on a partition of unity method on the sphere and uses spherical radial basis functions as local approximants. This technique exploits a suitable partition of the sphere into a number of spherical zones, the construction of a number of cells whose union covers the sphere, with some mild overlap among them, and finally the employment of an optimized spherical-zone searching procedure. Numerical experiments show the good accuracy of the spherical partition of unity method and the high efficiency of the algorithm.
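
The following sketch shows the general shape of such a partition of unity construction: spherical caps around a set of centres play the role of the cells, compactly supported Wendland weights normalized Shepard-style give the partition of unity, and a zonal-kernel interpolant is fitted inside each cap. The specific kernels, cap layout and parameters are assumptions made for illustration; the optimized spherical-zone search of the paper is not reproduced.

```python
import numpy as np

def wendland(r):
    """Compactly supported Wendland C2 weight, zero for r >= 1."""
    return np.clip(1.0 - r, 0.0, None) ** 4 * (4.0 * r + 1.0)

def sphere_pu_interpolant(nodes, values, centers, cap_angle, eps=5.0):
    """Blend per-cap zonal-RBF interpolants with Shepard-normalized Wendland weights.
    The cells are spherical caps of angular radius cap_angle around `centers`."""
    kern = lambda A, B: np.exp(-2.0 * eps ** 2 * (1.0 - A @ B.T))   # zonal Gaussian kernel
    cells = []
    for c in centers:
        in_cap = np.arccos(np.clip(nodes @ c, -1.0, 1.0)) <= cap_angle
        Xc, fc = nodes[in_cap], values[in_cap]
        coef = np.linalg.solve(kern(Xc, Xc) + 1e-10 * np.eye(len(Xc)), fc)  # tiny ridge
        cells.append((c, Xc, coef))

    def evaluate(Q):
        num, den = np.zeros(len(Q)), np.zeros(len(Q))
        for c, Xc, coef in cells:
            ang = np.arccos(np.clip(Q @ c, -1.0, 1.0))
            w = wendland(ang / cap_angle)                 # vanishes outside the cap
            hit = w > 0
            if hit.any():
                num[hit] += w[hit] * (kern(Q[hit], Xc) @ coef)
                den[hit] += w[hit]
        return num / den                                  # NaN only if a point is uncovered
    return evaluate

# toy usage: scattered unit vectors, a smooth test function, 40 cap centres
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3)); X /= np.linalg.norm(X, axis=1, keepdims=True)
f = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
pu = sphere_pu_interpolant(X, f, centers=X[::25], cap_angle=np.radians(60))
print(pu(X[:5]))    # compare with the data values below
print(f[:5])
```

The cap centres and radius must be chosen so that the caps cover the sphere; the mild overlap between neighbouring caps is what makes the blended interpolant smooth.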

5.
A greedy algorithm combined with a radial basis function partition of unity collocation (GRBF-PUC) scheme is used as a locally meshless method for American option pricing. The radial basis function partition of unity method (RBF-PUM) is a localization technique. Because their interpolation matrices have large condition numbers, global approximants, and some local ones, suffer from instability. To overcome this, a greedy algorithm is added to RBF-PUM. The greedy algorithm furnishes a subset of best nodes among the points X, and these nodes are then used as trial points in a locally supported RBF approximant for each patch of the partition. Using the greedily selected points decreases the condition number of the interpolation matrices and reduces the computational burden of pricing American options.

6.
This paper investigates the approximation of multivariate functions from data via linear combinations of translates of a positive definite kernel from a reproducing kernel Hilbert space. If the standard interpolation conditions are relaxed to Chebyshev-type constraints, one can minimize the norm of the approximant in the Hilbert space under these constraints. By standard arguments of optimization theory, the solutions take a simple form, based on the data related to the active constraints, called support vectors in the context of machine learning. The corresponding quadratic programming problems are investigated to some extent. Using monotonicity results concerning the Hilbert space norm, iterative techniques based on small quadratic subproblems on active sets are shown to be finite, even if they drop part of their previous information and even if they are used for infinite data, e.g., in the context of online learning. Numerical experiments confirm the theoretical results. Dedicated to C.A. Micchelli on the occasion of his 60th birthday. Mathematics subject classifications (2000): 65D05, 65D10, 41A15, 41A17, 41A27, 41A30, 41A40, 41A63.
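
A small numerical illustration of the relaxed interpolation problem is sketched below: the native-space norm c^T K c of a kernel expansion is minimized subject to Chebyshev-type constraints |f(x_i) - y_i| <= eps. A general-purpose SLSQP solver is used here instead of the active-set quadratic programming strategy analysed in the paper; kernel, data and tolerance are made up.

```python
import numpy as np
from scipy.optimize import minimize

# toy 1-D data and a Gaussian (positive definite) kernel
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-1.0, 1.0, 25))
y = np.sinc(3 * x) + 0.02 * rng.normal(size=x.size)
K = np.exp(-((x[:, None] - x[None, :]) / 0.3) ** 2)
eps = 0.05                                   # Chebyshev-type tolerance |f(x_i) - y_i| <= eps

# minimize the native-space norm c^T K c subject to the relaxed interpolation constraints
res = minimize(lambda c: c @ K @ c, np.zeros(x.size), jac=lambda c: 2.0 * K @ c,
               constraints=[{"type": "ineq", "fun": lambda c: eps - (K @ c - y)},
                            {"type": "ineq", "fun": lambda c: eps + (K @ c - y)}],
               method="SLSQP", options={"maxiter": 500})
c = res.x

# coefficients that stay (numerically) nonzero belong to active constraints,
# i.e. to the "support vectors" mentioned in the abstract
support = np.abs(c) > 1e-6 * np.abs(c).max()
print("residuals within tolerance:", bool(np.all(np.abs(K @ c - y) <= eps + 1e-6)))
print("support vectors:", int(support.sum()), "of", x.size)
```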

7.
Powerful response surface methods based on kriging and radial basis function (RBF) interpolation have been developed for expensive, i.e. computationally costly, global nonconvex optimization. We have implemented some of these methods in the solvers rbfSolve and EGO in the TOMLAB Optimization Environment (http://www.tomopt.com/tomlab/). In this paper we study algorithms based on RBF interpolation. The practical performance of the RBF algorithm is sensitive to the initial experimental design, and to the static choice of target values. A new adaptive radial basis interpolation (ARBF) algorithm, suitable for parallel implementation, is presented. The algorithm is described in detail and its efficiency is analyzed on the standard test problem set of Dixon–Szegö. Results show that it outperforms the published results of rbfSolve and several other solvers.
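
The sketch below is not the ARBF algorithm of the paper; it is a generic RBF-surrogate global-optimization loop that only illustrates the basic ingredients: an initial experimental design, an RBF response surface, and a rule for choosing the next expensive evaluation. The Gaussian RBF, the random candidate search and the simple exploit/explore alternation are all assumptions made for the illustration.

```python
import numpy as np

def branin(z):
    """Standard 2-D global optimization test function (global minimum approx. 0.398)."""
    x, y = z
    return (y - 5.1 / (4 * np.pi ** 2) * x ** 2 + 5 / np.pi * x - 6) ** 2 \
        + 10 * (1 - 1 / (8 * np.pi)) * np.cos(x) + 10

def rbf_fit(X, f, shape=0.2):
    K = np.exp(-(shape * np.linalg.norm(X[:, None] - X[None, :], axis=2)) ** 2)
    return np.linalg.solve(K + 1e-10 * np.eye(len(X)), f)

def rbf_eval(X, coef, Q, shape=0.2):
    K = np.exp(-(shape * np.linalg.norm(Q[:, None] - X[None, :], axis=2)) ** 2)
    return K @ coef

rng = np.random.default_rng(4)
lo, hi = np.array([-5.0, 0.0]), np.array([10.0, 15.0])
X = lo + (hi - lo) * rng.random((8, 2))                 # initial experimental design
f = np.array([branin(p) for p in X])

for it in range(40):                                    # 40 further "expensive" evaluations
    coef = rbf_fit(X, f)
    cand = lo + (hi - lo) * rng.random((2000, 2))       # random candidate points
    if it % 4 == 3:                                     # every 4th step: pure exploration
        score = -np.min(np.linalg.norm(cand[:, None] - X[None, :], axis=2), axis=1)
    else:                                               # otherwise: minimize the surrogate
        score = rbf_eval(X, coef, cand)
    x_new = cand[np.argmin(score)]
    X = np.vstack([X, x_new])
    f = np.append(f, branin(x_new))

print("best value found after", len(X), "evaluations:", f.min())
```

The published algorithms replace the crude candidate rule used here with target-value strategies (rbfSolve, ARBF) or expected improvement (EGO).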

8.
Nonlinear manifold learning algorithms, such as diffusion maps, have been fruitfully applied in recent years to the analysis of large and complex data sets. However, such algorithms still encounter challenges when faced with real data. One such challenge is the existence of “repeated eigendirections,” which obscures the detection of the true dimensionality of the underlying manifold and arises when several embedding coordinates parametrize the same direction in the intrinsic geometry of the data set. We propose an algorithm, based on local linear regression, to automatically detect coordinates corresponding to repeated eigendirections. We construct a more parsimonious embedding using only the eigenvectors corresponding to unique eigendirections, and we show that this reduced diffusion maps embedding induces a metric which is equivalent to the standard diffusion distance. We first demonstrate the utility and flexibility of our approach on synthetic data sets. We then apply our algorithm to data collected from a stochastic model of cellular chemotaxis, where our approach for factoring out repeated eigendirections allows us to detect changes in dynamical behavior and the underlying intrinsic system dimensionality directly from data.
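
The following sketch, under simplifying assumptions, illustrates the core idea: after computing a plain diffusion-maps embedding, each eigenvector is regressed, with kernel-weighted local linear fits, on the eigenvectors preceding it; a small normalized residual flags it as a repeated eigendirection. The bandwidths, the toy data set and the plain (un-normalized) diffusion-map construction are choices made here, not taken from the paper.

```python
import numpy as np

def diffusion_map(X, eps, n_evecs=6):
    """Plain diffusion-maps embedding with a Gaussian kernel (no density normalization)."""
    D2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=2)
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)                  # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    return vecs.real[:, order[1:n_evecs + 1]]             # drop the trivial constant eigenvector

def local_linear_residual(prev, phi):
    """Normalized residual of kernel-weighted, leave-one-out local linear fits of phi on
    the previous eigenvectors; a small residual flags phi as a repeated eigendirection."""
    n = len(phi)
    D2 = np.sum((prev[:, None] - prev[None, :]) ** 2, axis=2)
    W = np.exp(-D2 / (np.median(D2) / 3.0))               # heuristic regression bandwidth
    A = np.hstack([np.ones((n, 1)), prev])
    fit = np.empty(n)
    for i in range(n):
        w = W[i].copy(); w[i] = 0.0                       # leave point i out of its own fit
        beta = np.linalg.lstsq(A.T @ (A * w[:, None]), A.T @ (w * phi), rcond=None)[0]
        fit[i] = A[i] @ beta
    return np.sqrt(np.sum((phi - fit) ** 2) / np.sum(phi ** 2))

# toy data: a long 2-D strip, whose leading eigenvectors are harmonics of the long coordinate
rng = np.random.default_rng(5)
X = np.column_stack([4 * rng.random(800), rng.random(800)])
V = diffusion_map(X, eps=0.1)
for k in range(1, V.shape[1]):
    r = local_linear_residual(V[:, :k], V[:, k])
    print(f"eigenvector {k + 1}: residual {r:.3f}")       # small residual -> repeated direction
```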

9.
Dimensionality-reduction techniques are very important tools in machine learning and data mining. The method of generalized low rank approximations of matrices (GLRAM) is a popular technique for dimensionality reduction and image compression. However, it suffers from heavy computational overhead in practice, especially for high-dimensional data. In order to reduce the cost of this algorithm, we propose a randomized GLRAM algorithm based on the randomized singular value decomposition (RSVD). The theoretical contribution of our work is threefold. First, we discuss the decaying property of the singular values of the matrices arising during the iterations of the GLRAM algorithm, and provide, from a theoretical point of view, a target rank required in the RSVD process. Second, we establish the relationship between the reconstruction errors generated by the standard GLRAM algorithm and by the randomized GLRAM algorithm; it is shown that the two are comparable, even if the solutions are computed inaccurately during the iterations. Third, the convergence of the randomized GLRAM algorithm is investigated. Numerical experiments on some real-world data sets illustrate the superiority of the proposed algorithm over its original counterpart and some state-of-the-art GLRAM-type algorithms.
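
The randomized SVD at the heart of the proposed method is a standard building block; a minimal range-finder sketch in the style of Halko, Martinsson and Tropp is given below for orientation. It is not the randomized GLRAM iteration itself, and the oversampling and power-iteration parameters are illustrative.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_iter=2, rng=None):
    """Basic randomized SVD: random range finder plus an exact SVD of a small matrix."""
    rng = np.random.default_rng(rng)
    Omega = rng.normal(size=(A.shape[1], rank + n_oversample))   # random test matrix
    Y = A @ Omega
    for _ in range(n_iter):                 # power iterations sharpen the captured range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                  # orthonormal basis for the approximate range of A
    B = Q.T @ A                             # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :rank], s[:rank], Vt[:rank]

# quick check on a low-rank-plus-noise matrix (true rank 15)
rng = np.random.default_rng(6)
A = rng.normal(size=(500, 15)) @ rng.normal(size=(15, 300)) + 0.01 * rng.normal(size=(500, 300))
U, s, Vt = randomized_svd(A, rank=20)
print("relative error of the rank-20 approximation:",
      np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```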

10.
Since large amounts of real-world data are skewed, a modal regression model for skew-normal data is constructed. Because missing data also occur frequently, imputation methods are used to handle incomplete data sets; to compare imputation performance, statistical inference is studied for the case where the response variable is missing at random. Maximum likelihood estimates of the modal regression parameters are obtained by the Gauss-Newton iterative method, and the imputation performance of the model is compared under three imputation schemes: mean imputation, regression imputation, and mode imputation. Simulation studies and a real-data analys…
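
The sketch below is far simpler than the skew-normal modal regression studied in the paper: it only contrasts mean imputation with regression imputation for a response missing at random in an ordinary linear model with skewed errors, to show why the choice of imputation scheme matters. The data-generating model and parameters are invented for the illustration; mode imputation and the Gauss-Newton likelihood machinery are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
x = rng.normal(size=n)
y_full = 1.0 + 2.0 * x + rng.gamma(2.0, 1.0, size=n)            # skewed errors, true slope = 2

# response missing at random: the missingness probability depends only on the observed x
p_miss = 1.0 / (1.0 + np.exp(-(0.5 - 0.8 * x)))
miss = rng.random(n) < p_miss
y_obs = np.where(miss, np.nan, y_full)

# mean imputation: replace every missing response by the observed mean
y_mean = np.where(miss, np.nanmean(y_obs), y_obs)

# regression imputation: fit y ~ x on the complete cases, predict the missing responses
slope_cc, intercept_cc = np.polyfit(x[~miss], y_obs[~miss], 1)
y_reg = np.where(miss, intercept_cc + slope_cc * x, y_obs)

for name, y_imp in [("mean imputation", y_mean), ("regression imputation", y_reg)]:
    refit_slope = np.polyfit(x, y_imp, 1)[0]
    print(f"{name:22s}: slope refitted on imputed data = {refit_slope:.3f}  (true slope = 2)")
```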

11.
The Feldkamp-Davis-Kress (FDK) algorithm, based on filtered backprojection, has the advantages of a simple mathematical form, easy implementation, and high computational speed, and it is widely used in fields such as medicine and industry. The parallel-rebinning FDK (PFDK) algorithm is a generalization of the FDK algorithm. To address the problem that images reconstructed by the PFDK algorithm are more strongly affected by the cone angle, a three-dimensional weighted PFDK image reconstruction algorithm is presented, and the influence of the radial interpolation spacing used during rebinning on the quality of the reconstructed image is studied: the data are rebinned with three different numbers of interpolation samples (interpolation spacings of 1, 0.5, and 0.25 units, respectively). Experimental results show that the proposed three-dimensional weighted PFDK algorithm can effectively reduce the influence of the cone angle, and that the reconstruction is better when twice the number of interpolation samples is used.

12.
The goal of this paper is to construct data-independent optimal point sets for interpolation by radial basis functions. The interpolation points are chosen to be uniformly good for all functions from the associated native Hilbert space. To this end we collect various results on the power function, which we use to show that good interpolation points are always uniformly distributed in a certain sense. We also prove convergence of two different greedy algorithms for the construction of near-optimal sets which lead to stable interpolation. Finally, we provide several examples. AMS subject classification: 41A05, 41A63, 41A65, 65D05, 65D15. This work has been done with the support of the Vigoni CRUI-DAAD programme, for the years 2001/2002, between the Universities of Verona and Göttingen.
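
As one concrete instance of such a greedy construction, the sketch below implements the "P-greedy" rule: repeatedly add the candidate point where the power function of the current point set is largest. The Gaussian kernel, its shape parameter and the candidate grid are assumptions for the illustration.

```python
import numpy as np

def gauss_kernel(A, B, eps=3.0):
    d2 = np.sum((A[:, None] - B[None, :]) ** 2, axis=2)
    return np.exp(-(eps ** 2) * d2)

def p_greedy(candidates, n_points, eps=3.0):
    """Greedily add the candidate where the power function of the current set is largest."""
    chosen = [0]                                          # start from an arbitrary candidate
    for _ in range(n_points - 1):
        X = candidates[chosen]
        Kxx = gauss_kernel(X, X, eps)
        Kcx = gauss_kernel(candidates, X, eps)
        # squared power function: K(x, x) - k_X(x)^T K_X^{-1} k_X(x), with K(x, x) = 1
        power2 = 1.0 - np.sum(Kcx * np.linalg.solve(Kxx, Kcx.T).T, axis=1)
        power2[chosen] = -np.inf                          # never re-select a chosen point
        chosen.append(int(np.argmax(power2)))
    return np.array(chosen)

# candidates on a grid over the unit square; the selected points spread out quasi-uniformly
g = np.linspace(0.0, 1.0, 30)
cand = np.array([[a, b] for a in g for b in g])
print(cand[p_greedy(cand, 25)])
```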

13.
In this paper, we propose an algorithm for globally solving optimization problems over efficient sets. The algorithm is established based on a branch and bound scheme in which the bounding procedure is performed by using the well known weak duality theorem in Lagrange duality. A suitable combination of this algorithm with a local search procedure in d.c. optimization (named DCA) leads to a promising global algorithm, whose efficiency is more or less confirmed by computational experiments on a large set of test problems.

14.
The local Hermitian interpolation (LHI) method is a strong‐form meshless numerical technique in which the solution domain is covered by a series of small and heavily overlapping radial basis function (RBF) interpolation systems. Aside from its meshless nature and the ability to work on very large scattered datasets, the main strength of the LHI method lies in the formation of local interpolations, which themselves satisfy both boundary and governing PDE operators, leading to an accurate and stable reconstruction of partial derivatives without the need for artificial upwinding or adaptive stencil selection. In this work, an extension is proposed to the LHI formulation which allows the accurate capture of solution profiles across discontinuities in governing equation parameters. Continuity of solution value and mass flux is enforced between otherwise disconnected interpolation systems, at the location of the discontinuity. In contrast to other local meshless methods, due to the robustness of the Hermite RBF formulation, it is possible to impose both matching conditions simultaneously at the interface nodes. The procedure is demonstrated for 1D and 3D convection–diffusion problems, both steady and unsteady, with discontinuities in various PDE properties. The analytical solution profiles for these problems, which experience discontinuities in their first derivatives, are replicated to a high degree of accuracy. The technique has been developed as a tool for solving flow and transport problems around geological layers, as experienced in groundwater flow problems. The accuracy of the captured solution profiles, in scenarios where the local convective velocities exceed those typically encountered in such Darcy flow problems, suggests that the technique is indeed suitable for modeling discontinuities in porous media properties. © 2010 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 27: 1201–1230, 2011

15.
In some approximation problems, sampling from the target function can be both expensive and time-consuming. It would be convenient to have a method for indicating where approximation quality is poor, so that generation of new data provides the user with greater accuracy where needed. In this paper, we propose a new adaptive algorithm for radial basis function (RBF) interpolation which aims to assess the local approximation quality, and add or remove points as required to improve the error in the specified region. For Gaussian and multiquadric approximation, we have the flexibility of a shape parameter which we can use to keep the condition number of the interpolation matrix at a moderate size. Numerical results for test functions which appear in the literature are given for dimensions 1 and 2, to show that our method performs well. We also give a three-dimensional example from the finance world, since we would like to advertise RBF techniques as useful tools for approximation in the high-dimensional settings one often meets in finance.
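
A minimal adaptive loop in that spirit is sketched below, under heavy simplifications: local approximation quality is judged by comparing the multiquadric interpolant on all current points with one built on a thinned subset, and the "expensive" function is sampled where this crude indicator is largest. The indicator, the RBF, the shape parameter and the stopping tolerance are all assumptions; the paper's point-removal step and shape-parameter control are not reproduced.

```python
import numpy as np

def mq_interp(X, f, shape=3.0):
    """Multiquadric RBF interpolant on centres X; returns a callable evaluator."""
    r = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    coef = np.linalg.solve(np.sqrt(1.0 + (shape * r) ** 2), f)
    def s(Q):
        rq = np.linalg.norm(Q[:, None] - X[None, :], axis=2)
        return np.sqrt(1.0 + (shape * rq) ** 2) @ coef
    return s

# the "expensive" target: sampled only where the adaptive loop asks for it
target = lambda P: np.exp(-8 * ((P[:, 0] - 0.5) ** 2 + (P[:, 1] - 0.5) ** 2))

rng = np.random.default_rng(8)
X = rng.random((15, 2)); f = target(X)                      # coarse initial sample
cand = rng.random((3000, 2))                                # candidate locations

for _ in range(60):
    s_all = mq_interp(X, f)
    s_half = mq_interp(X[::2], f[::2])                      # interpolant on a thinned subset
    indicator = np.abs(s_all(cand) - s_half(cand))          # crude local quality indicator
    worst = int(np.argmax(indicator))
    if indicator[worst] < 1e-3:                             # quality looks fine everywhere
        break
    X = np.vstack([X, cand[worst]])                         # sample where quality looks poor
    f = np.append(f, target(cand[worst:worst + 1]))
    cand = np.delete(cand, worst, axis=0)

test = rng.random((2000, 2))
print("points used:", len(X))
print("max |error| on random test points:", np.abs(mq_interp(X, f)(test) - target(test)).max())
```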

16.
The meshless local Petrov–Galerkin (MLPG) method with global radial basis functions (RBF) as trial approximation leads to a full final linear system and a large condition number. This makes MLPG less efficient when the number of data points is increased. We can overcome this drawback if we avoid using more points from the data site than absolutely necessary. In this article, we equip the MLPG method with the greedy sparse approximation technique of Schaback (Numerical Algorithms 67 (2014), 531–547) and use it for the numerical solution of partial differential equations. This scheme uses as few neighbouring nodal values as possible and allows the consistency error to be controlled by explicit calculation. Whatever the given RBF is, the final system is sparse and the algorithm is well-conditioned. © 2015 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 32: 847–861, 2016

17.
Motivated by the method for the reconstruction of 3D objects from a set of parallel cross sections, based on the binary operation between 2D sets termed “metric average”, we developed an algorithm for the computation of the metric average between two intersecting convex polygons in 2D. For two 1D sets there is an algorithm for the computation of the metric average, with linear time in the number of intervals in the two 1D sets. The proposed algorithm has linear computation time in the number of vertices of the two polygons. As an application of this algorithm, a new technique for morphing between two convex polygons is developed. The new algorithm performs morphing in a non-intuitive way.

18.
We address an important issue in knowledge discovery using neural networks that was left out of a recent article, "Knowledge discovery using a neural network simultaneous optimization algorithm on a real world classification problem" by Sexton et al. [R.S. Sexton, S. McMurtrey, D.J. Cleavenger, Knowledge discovery using a neural network simultaneous optimization algorithm on a real world classification problem, European Journal of Operational Research 168 (2006) 1009–1018]: the generation of comprehensible rule sets from trained neural networks. In this note, we present our neural network rule extraction algorithm, which is very effective in discovering knowledge embedded in a neural network. This algorithm is particularly appropriate in applications where comprehensibility as well as accuracy are required. For the same data sets used by Sexton et al., our algorithm produces accurate rule sets that are concise and comprehensible, and hence helps validate the claim that neural networks could be viable alternatives to other data mining tools for knowledge discovery.
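
The note's own extraction algorithm is not reproduced here; the sketch below uses a different, generic strategy purely to illustrate the idea of turning a trained network into comprehensible rules: a shallow decision tree is fitted to the network's own predictions and then read off as if-then rules, with fidelity to the network and accuracy on the true labels reported. The data set, the network size and the tree depth are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# synthetic classification data standing in for the "real world" problem
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4, random_state=0)

# the "black box": a small feed-forward network
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# surrogate rule extraction: fit a shallow tree to the *network's* predictions,
# then read the tree off as if-then rules
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, net.predict(X))
print(export_text(tree, feature_names=[f"x{i}" for i in range(X.shape[1])]))
print("fidelity of rules to the network:", (tree.predict(X) == net.predict(X)).mean())
print("accuracy of rules on the true labels:", (tree.predict(X) == y).mean())
```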

19.
Radial basis function (RBF) methods can provide excellent interpolants for a large number of poorly distributed data points. For any finite data set in any Euclidean space, one can construct an interpolation of the data by using RBFs. However, RBF interpolant trends between and beyond the data points depend on the RBF used and may exhibit undesirable trends using some RBFs while the trends may be desirable using other RBFs. The fact that a certain RBF is commonly used for the class of problems at hand, previous good behavior in that (or other) class of problems, and bibliography, are just some of the many valid reasons given to justify a priori selection of RBF. Even assuming that the justified choice of the RBF is most likely the correct choice, one should nonetheless confirm numerically that, in fact, the most adequate RBF for the problem at hand is the RBF chosen a priori. The main goal of this paper is to alert the analyst as to the danger of a priori selection of RBF and to present a strategy to numerically choose the most adequate RBF that better captures the trends of the given data set. The wing weight data fitting problem is used to illustrate the benefits of an adequate choice of RBF for each given data set.
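
One standard way to choose among candidate RBFs numerically, in the spirit of the strategy advocated here, is to compare leave-one-out cross-validation errors, which for RBF interpolation are available in closed form (Rippa's rule). The sketch below does this for three common RBFs on made-up data; the shape parameter is fixed arbitrarily, whereas in practice it would be selected as well.

```python
import numpy as np

def loocv_errors(K, f):
    """Rippa's closed form for the leave-one-out errors of an RBF interpolant."""
    Kinv = np.linalg.inv(K)
    return (Kinv @ f) / np.diag(Kinv)          # e_k = c_k / (K^{-1})_{kk}

rbfs = {
    "gaussian":             lambda r, e=6.0: np.exp(-(e * r) ** 2),
    "multiquadric":         lambda r, e=6.0: np.sqrt(1.0 + (e * r) ** 2),
    "inverse multiquadric": lambda r, e=6.0: 1.0 / np.sqrt(1.0 + (e * r) ** 2),
}

rng = np.random.default_rng(9)
X = rng.random((100, 2))
f = np.sin(2 * np.pi * X[:, 0]) * X[:, 1] + 0.3 * X[:, 0]
r = np.linalg.norm(X[:, None] - X[None, :], axis=2)

for name, phi in rbfs.items():
    rmse = np.sqrt(np.mean(loocv_errors(phi(r), f) ** 2))
    print(f"{name:22s} LOO-RMSE = {rmse:.2e}")   # smaller suggests a better-suited RBF here
```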

20.
In this paper, geometric interpolation by G1 cubic splines is studied. A wide class of sufficient conditions that admit a G1 cubic spline interpolant is determined. In particular, convex data as well as data with inflection points are included. The existence requirements are based entirely upon geometric properties of the data, and can easily be verified in advance. An algorithm that carries out the verification is provided. AMS subject classification (2000): 65D05, 65D07, 65D17
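
As a small illustration of what G1 (tangent-direction) data look like, the sketch below builds a single cubic Bezier segment that matches two endpoints and two unit tangent directions, with the handle lengths left free. This is only a toy construction and says nothing about the existence conditions analysed in the paper.

```python
import numpy as np

def g1_cubic_segment(p0, p1, t0, t1, a0=None, a1=None):
    """Cubic Bezier through p0, p1 whose end tangents point along the unit
    directions t0, t1 (G1 data); a0, a1 are free handle lengths."""
    chord = np.linalg.norm(p1 - p0)
    a0 = chord / 3.0 if a0 is None else a0
    a1 = chord / 3.0 if a1 is None else a1
    b = np.array([p0, p0 + a0 * t0, p1 - a1 * t1, p1])   # Bezier control points
    def curve(s):                                        # s in [0, 1]
        s = np.asarray(s)[:, None]
        return ((1 - s) ** 3 * b[0] + 3 * (1 - s) ** 2 * s * b[1]
                + 3 * (1 - s) * s ** 2 * b[2] + s ** 3 * b[3])
    return curve

# convex data: endpoints on the x-axis with tangent directions turning consistently
c = g1_cubic_segment(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                     np.array([np.cos(0.6), np.sin(0.6)]),
                     np.array([np.cos(-0.6), np.sin(-0.6)]))
print(c(np.linspace(0.0, 1.0, 5)))
```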
